Development of MCNPX-ESUT computer code for simulation of neutron/gamma pulse height distribution
NASA Astrophysics Data System (ADS)
Hosseini, Seyed Abolfazl; Vosoughi, Naser; Zangian, Mehdi
2015-05-01
In this paper, the development of the MCNPX-ESUT (MCNPX-Energy Engineering of Sharif University of Technology) computer code for the simulation of neutron/gamma pulse height distributions is reported. Since liquid organic scintillators like NE-213 are well suited and routinely used for spectrometry in mixed neutron/gamma fields, this type of detector was selected for simulation in the present study. The proposed simulation algorithm includes four main steps. The first step is the modeling of neutron/gamma transport and interactions with the materials in the environment and the detector volume. In the second step, the number of scintillation photons produced by charged particles such as electrons, alphas, protons and carbon nuclei in the scintillator material is calculated. In the third step, the transport of scintillation photons through the scintillator and light guide is simulated. Finally, the resolution corresponding to the experiment is applied in the last step of the simulation. Unlike similar computer codes such as SCINFUL, NRESP7 and PHRESP, the developed code is applicable to both neutron and gamma sources; hence, neutron/gamma discrimination in mixed fields may be performed using MCNPX-ESUT. The main feature of MCNPX-ESUT is that the neutron/gamma pulse height simulation may be performed without any post-processing. In the present study, the pulse height distributions due to monoenergetic neutron/gamma sources in an NE-213 detector are simulated using MCNPX-ESUT. The simulated neutron pulse height distributions are validated by comparison with experimental data (Gohil et al., Nuclear Instruments and Methods in Physics Research Section A, 664 (2012) 304-309) and with results obtained from similar computer codes such as SCINFUL, NRESP7 and Geant4.
The simulated gamma pulse height distribution for a 137Cs source is also compared with the experimental data.
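The fourth step above (applying the experimental resolution) is commonly implemented by Gaussian smearing of each event's light output before histogramming. A minimal sketch, assuming the standard three-parameter resolution model used for NE-213-type scintillators; the alpha/beta/gamma values below are illustrative placeholders, not values from the paper:

```python
import math
import random

def broaden(light_mevee, alpha=0.08, beta=0.10, gamma=0.02, rng=random):
    """Smear one event's light output (MeVee) with a Gaussian whose width follows
    the common resolution model: FWHM/L = sqrt(alpha^2 + beta^2/L + gamma^2/L^2).
    Parameter values are illustrative, not fitted to any detector."""
    fwhm = light_mevee * math.sqrt(
        alpha ** 2 + beta ** 2 / light_mevee + gamma ** 2 / light_mevee ** 2
    )
    return rng.gauss(light_mevee, fwhm / 2.355)  # FWHM -> sigma

def pulse_height_distribution(light_events, n_bins=64, l_max=2.0):
    """Histogram the broadened light outputs into a pulse height distribution."""
    hist = [0] * n_bins
    for light in light_events:
        smeared = broaden(light)
        if 0.0 <= smeared < l_max:
            hist[int(smeared / l_max * n_bins)] += 1
    return hist
```

Feeding monoenergetic events through this step turns a sharp edge into the smooth experimental-looking distribution the code is validated against.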
NASA Astrophysics Data System (ADS)
Lou, Tak Pui; Ludewigt, Bernhard
2015-09-01
The simulation of the emission of beta-delayed gamma rays following nuclear fission and the calculation of time-dependent energy spectra is a computational challenge. The widely used radiation transport code MCNPX includes a delayed gamma-ray routine that is inefficient and not suitable for simulating complex problems. This paper describes the code "MMAPDNG" (Memory-Mapped Delayed Neutron and Gamma), an optimized delayed gamma module written in C, discusses the usage and merits of the code, and presents results. The approach is based on storing the required Fission Product Yield (FPY) data, decay data, and delayed particle data in a memory-mapped file. When compared to the original delayed gamma-ray code in MCNPX, memory utilization is reduced by two orders of magnitude and delayed gamma-ray sampling is sped up by three orders of magnitude. Other delayed particles such as neutrons and electrons can be implemented in future versions of the MMAPDNG code using its existing framework.
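The memory-mapping idea can be illustrated with a small sketch: line data are packed into a flat binary file of fixed-size (cdf, energy) records and sampled by binary search directly on the mapping, so the operating system pages in only the records actually touched. The record layout here is invented for illustration and is not the actual MMAPDNG format:

```python
import mmap
import os
import struct
import tempfile
import random

RECORD = struct.Struct("dd")  # (cumulative probability, gamma energy in MeV)

def write_table(path, pdf):
    """Store a cumulative table of (cdf, energy) records in a flat binary file."""
    total, cum = sum(p for p, _ in pdf), 0.0
    with open(path, "wb") as f:
        for p, e in pdf:
            cum += p / total
            f.write(RECORD.pack(cum, e))

def sample_energy(mm, n_records, u):
    """Binary-search the memory-mapped cdf for random number u in [0, 1):
    return the energy of the first record whose cdf is >= u."""
    lo, hi = 0, n_records - 1
    while lo < hi:
        mid = (lo + hi) // 2
        cdf, _ = RECORD.unpack_from(mm, mid * RECORD.size)
        if cdf < u:
            lo = mid + 1
        else:
            hi = mid
    return RECORD.unpack_from(mm, lo * RECORD.size)[1]

# usage with a toy three-line spectrum
fd, path = tempfile.mkstemp()
os.close(fd)
write_table(path, [(0.2, 0.3), (0.5, 0.662), (0.3, 1.17)])
with open(path, "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    energies = [sample_energy(mm, 3, random.random()) for _ in range(5)]
```

Because sampling touches O(log N) records per draw, resident memory stays small no matter how large the FPY/decay tables grow.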
The use of the SRIM code for calculation of radiation damage induced by neutrons
NASA Astrophysics Data System (ADS)
Mohammadi, A.; Hamidi, S.; Asadabad, Mohsen Asadi
2017-12-01
Materials subjected to neutron irradiation undergo structural changes driven by the displacement cascades initiated by nuclear reactions. This study discusses a methodology to compute the primary knock-on atom (PKA) information that leads to radiation damage. A program, AMTRACK, has been developed for assessing this PKA information. The software determines the specifications of recoil atoms (using the PTRAC card of the MCNPX code) as well as the kinematics of the interactions. A deterministic method was used to verify the MCNPX+AMTRACK results. The SRIM (formerly TRIM) code is capable of computing neutron radiation damage; the PKA information extracted by AMTRACK can be used as input to SRIM for systematic analysis of primary radiation damage. Finally, the radiation damage to the reactor pressure vessel of the Bushehr Nuclear Power Plant (BNPP) is calculated.
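Once PKA damage energies are in hand, a common back-of-the-envelope cross-check on the SRIM results is the Norgett-Robinson-Torrens (NRT) displacement estimate. This sketch implements the standard NRT formula, not any part of the AMTRACK code itself; the 40 eV displacement threshold is a typical value for iron:

```python
def nrt_displacements(damage_energy_ev, e_d_ev=40.0):
    """Norgett-Robinson-Torrens (NRT) estimate of the number of atoms
    displaced by one PKA of a given damage energy T_dam:
      0                     for T_dam < E_d
      1                     for E_d <= T_dam < 2*E_d/0.8
      0.8 * T_dam / (2*E_d) otherwise
    """
    if damage_energy_ev < e_d_ev:
        return 0
    if damage_energy_ev < 2.0 * e_d_ev / 0.8:
        return 1
    return int(0.8 * damage_energy_ev / (2.0 * e_d_ev))
```

Summing this over the PKA spectrum and dividing by the atom density gives a quick dpa estimate to compare against the full SRIM cascade calculation.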
Nuclear Resonance Fluorescence for Materials Assay
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quiter, Brian; Ludewigt, Bernhard; Mozin, Vladimir
This paper discusses the use of nuclear resonance fluorescence (NRF) techniques for the isotopic and quantitative assay of radioactive material. Potential applications include age-dating of an unknown radioactive source, pre- and post-detonation nuclear forensics, and safeguards for nuclear fuel cycles. Examples of age-dating a strong radioactive source and assaying a spent fuel pin are discussed. The modeling work has been performed with the Monte Carlo radiation transport computer code MCNPX, and the capability to simulate NRF has been added to the code. Discussed are the limitations in MCNPX's photon transport physics for accurately describing photon scattering processes that are important contributions to the background and impact the applicability of the NRF assay technique.
Application of the MCNPX-McStas interface for shielding calculations and guide design at ESS
NASA Astrophysics Data System (ADS)
Klinkby, E. B.; Knudsen, E. B.; Willendrup, P. K.; Lauritzen, B.; Nonbøl, E.; Bentley, P.; Filges, U.
2014-07-01
Recently, an interface between the Monte Carlo code MCNPX and the neutron ray-tracing code McStas was developed [1, 2]. Based on the expected neutronic performance and guide geometries relevant for the ESS, the combined MCNPX-McStas code is used to calculate dose rates along neutron beam guides. The generation and moderation of neutrons are simulated using a full-scale MCNPX model of the ESS target monolith. Upon entering the neutron beam extraction region, the individual neutron states are handed to McStas via the MCNPX-McStas interface. McStas transports the neutrons through the beam guide and, using a newly developed event-logging capability, records the state parameters of un-reflected neutrons at each scattering. This information is handed back to MCNPX, where it serves as the neutron source input for a second MCNPX simulation, which enables the calculation of dose rates in the vicinity of the guide. In addition, the logging mechanism is employed to record the scatterings along the guides, which is exploited to estimate the supermirror quality (i.e. m-values) required at different positions along the beam guide to transport neutrons in the same guide/source setup.
NASA Astrophysics Data System (ADS)
Chiavassa, S.; Aubineau-Lanièce, I.; Bitar, A.; Lisbona, A.; Barbet, J.; Franck, D.; Jourdain, J. R.; Bardiès, M.
2006-02-01
Dosimetric studies are necessary for all patients treated with targeted radiotherapy. In order to attain the precision required, we have developed Oedipe, a dosimetric tool based on the MCNPX Monte Carlo code. The anatomy of each patient is considered in the form of a voxel-based geometry created using computed tomography (CT) images or magnetic resonance imaging (MRI). Oedipe enables dosimetry studies to be carried out at the voxel scale. Validation of the results obtained by comparison with existing methods is complex because there are multiple sources of variation: calculation methods (different Monte Carlo codes, point kernel), patient representations (model or specific) and geometry definitions (mathematical or voxel-based). In this paper, we validate Oedipe by taking each of these parameters into account independently. Monte Carlo methodology requires long calculation times, particularly in the case of voxel-based geometries, and this is one of the limits of personalized dosimetric methods. However, our results show that the use of voxel-based geometry as opposed to a mathematically defined geometry decreases the calculation time two-fold, due to an optimization of the MCNPX2.5e code. It is therefore possible to envisage the use of Oedipe for personalized dosimetry in the clinical context of targeted radiotherapy.
NASA Astrophysics Data System (ADS)
Hosseini, Seyed Abolfazl; Afrakoti, Iman Esmaili Paeen
2017-04-01
Accurate unfolding of the energy spectrum of a neutron source gives important information about unknown neutron sources. This information is useful in many areas such as nuclear safeguards, nuclear nonproliferation, and homeland security. In the present study, the energy spectrum of a poly-energetic fast neutron source is reconstructed using computational codes developed on the basis of the Group Method of Data Handling (GMDH) and Decision Tree (DT) algorithms. The neutron pulse height distribution (neutron response function) in the considered NE-213 liquid organic scintillator was simulated using the previously developed MCNPX-ESUT computational code (MCNPX-Energy engineering of Sharif University of Technology). The GMDH- and DT-based codes require data for the training, testing and validation steps. To prepare the required data, 4000 randomly generated energy spectra distributed over 52 bins are used; for each spectrum, the pulse height distribution simulated by MCNPX-ESUT serves as the input data and the spectrum itself as the output data. Since there is no need to solve an inverse problem with an ill-conditioned response matrix, the unfolded energy spectrum attains high accuracy. The 241Am-9Be and 252Cf neutron sources are used in the validation step of the calculation. The unfolded energy spectra for these fast neutron sources show excellent agreement with the reference ones, and the accuracy of the spectra unfolded using GMDH is slightly better than that obtained with DT. The results of the present study also compare favorably with previously published results based on logsig and tansig transfer functions.
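The unfolding strategy can be sketched as follows: build a library of random spectra, forward-fold each through the response matrix, and learn the inverse mapping from pulse height distribution back to spectrum instead of inverting the ill-conditioned matrix. The sketch below substitutes a nearest-neighbour lookup for the paper's GMDH/DT regressors and a toy 8-bin smoothing kernel for the simulated NE-213 response, purely to show the data flow:

```python
import math
import random

N_BINS = 8          # the paper uses 52 energy bins; 8 keeps the sketch small
N_TRAIN = 200
rng = random.Random(42)

# A smooth, poorly conditioned forward model standing in for the simulated
# detector response matrix (rows: pulse height bins, columns: energy bins).
R = [[math.exp(-abs(i - j) / 2.0) for j in range(N_BINS)] for i in range(N_BINS)]

def fold(spectrum):
    """Forward-fold an energy spectrum into a pulse height distribution."""
    return [sum(R[i][j] * spectrum[j] for j in range(N_BINS)) for i in range(N_BINS)]

# Training library: random spectra paired with their folded responses,
# mirroring the paper's (input = pulse height, output = spectrum) pairing.
train_x = [[rng.random() for _ in range(N_BINS)] for _ in range(N_TRAIN)]
train_y = [fold(x) for x in train_x]

def unfold(pulse_height):
    """Nearest-neighbour stand-in for the trained GMDH/DT models: return the
    library spectrum whose folded response is closest to the measurement."""
    best = min(range(N_TRAIN),
               key=lambda k: sum((a - b) ** 2
                                 for a, b in zip(train_y[k], pulse_height)))
    return train_x[best]
```

The point of the design is visible here: the learned map never touches R's inverse, so the conditioning of the response matrix does not amplify noise in the measurement.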
NASA Astrophysics Data System (ADS)
Hosseini, S. A.; Zangian, M.; Aghabozorgi, S.
2018-03-01
In the present paper, the light output distribution due to a poly-energetic neutron/gamma (neutron or gamma) source is calculated using the developed MCNPX-ESUT-PE (MCNPX-Energy engineering of Sharif University of Technology-Poly Energetic version) computational code. The simulation of the light output distribution includes modeling of the particle transport, calculation of the scintillation photons induced by charged particles, simulation of the scintillation photon transport, and application of the light resolution obtained from experiment. The developed computational code is able to simulate the light output distribution due to any neutron/gamma source. In the experimental step of the present study, neutron-gamma discrimination based on the light output distribution was performed using the zero-crossing method. As a case study, an 241Am-9Be source was considered and the simulated and measured neutron/gamma light output distributions were compared, showing acceptable agreement between simulation and experiment.
Full core analysis of IRIS reactor by using MCNPX.
Amin, E A; Bashter, I I; Hassan, Nabil M; Mustafa, S S
2016-07-01
This paper describes a neutronic analysis of the freshly fuelled IRIS (International Reactor Innovative and Secure) reactor with the MCNPX code. The analysis includes criticality calculations, radial and axial power distributions, the nuclear peaking factor and the axial offset percent at the beginning of the fuel cycle. The effective multiplication factor obtained by MCNPX is compared with previous calculations by the HELIOS/NESTLE, CASMO/SIMULATE, modified CORD-2 nodal and SAS2H/KENO-V code systems; the k-eff value obtained by MCNPX is found to be closest to the CORD-2 value. The radial and axial powers are compared with other published results obtained with SAS2H/KENO-V. Moreover, the WIMS-D5 code is used to study the effect of enriched boron in the form of ZrB2 on the effective multiplication factor (k-eff) of the fuel pin. In this part of the calculation, k-eff is evaluated at different Boron-10 loadings in mg/cm at different stages of unit-cell burnup, and the results are compared with published HELIOS calculations.
NASA Astrophysics Data System (ADS)
Fensin, Michael Lorne
Monte Carlo-linked depletion methods have gained recent interest due to their ability to model complex 3-dimensional geometries more accurately and to better track the evolution of the temporal nuclide inventory by simulating the actual physical process with continuous-energy coefficients. The integration of CINDER90 into the MCNPX Monte Carlo radiation transport code provides a high-fidelity, completely self-contained, Monte Carlo-linked depletion capability in a well-established, widely accepted code that is compatible with most nuclear criticality (KCODE) particle tracking features in MCNPX. MCNPX depletion tracks all necessary reaction rates and follows as many isotopes as cross-section data permit in order to achieve a highly accurate temporal nuclide inventory solution. This work chronicles relevant nuclear history, surveys current methodologies of depletion theory, details the methodology applied in MCNPX, and provides benchmark results for three independent OECD/NEA benchmarks. Relevant nuclear history, from the Oklo reactor two billion years ago to the current major United States nuclear fuel cycle development programs, is addressed in order to supply the motivation for the development of this technology. A survey of current reaction rate and temporal nuclide inventory techniques is then provided to justify the depletion strategy applied within MCNPX. The MCNPX depletion strategy is then dissected and each code feature is detailed, chronicling the methodology development from the original linking of MONTEBURNS and MCNP to the most recent public release of the integrated capability (MCNPX 2.6.F). Calculation results for the OECD/NEA Phase IB benchmark, the H. B. Robinson benchmark and the OECD/NEA Phase IVB benchmark are then provided; their acceptable agreement offers sufficient confidence in the predictive capability of the MCNPX depletion method.
This capability establishes a significant foundation, in a well-established and supported radiation transport code, for further development of a Monte Carlo-linked depletion methodology, which is essential to the future development of advanced reactor technologies that exceed the limitations of current deterministic-based methods.
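The linked transport-depletion iteration can be reduced to a sketch: a transport solve produces a flux, the inventory is advanced over the time step, and the loop repeats with the updated inventory. The single-nuclide analytic model and the stub flux function below are illustrative stand-ins for CINDER90 and an MCNPX flux tally, not the actual coupling:

```python
import math

def deplete_step(n_atoms, sigma_a_barn, flux, dt_s):
    """Analytic one-nuclide depletion over a step of constant flux:
    dN/dt = -sigma * phi * N  =>  N(t) = N0 * exp(-sigma * phi * t)."""
    lam = sigma_a_barn * 1.0e-24 * flux      # barn -> cm^2
    return n_atoms * math.exp(-lam * dt_s)

def transport_depletion_loop(n0, sigma_a_barn, flux_model, n_steps, dt_s):
    """Sketch of the coupled loop: a 'transport solve' (here the stub
    flux_model, standing in for an MCNPX tally) recomputes the flux from the
    current inventory, then the depletion step advances the inventory."""
    n = n0
    for _ in range(n_steps):
        phi = flux_model(n)
        n = deplete_step(n, sigma_a_barn, phi, dt_s)
    return n
```

In the real capability the "inventory" is thousands of nuclides coupled through a full transmutation matrix, but the alternation of transport and depletion solves is exactly this loop.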
Interfacing MCNPX and McStas for simulation of neutron transport
NASA Astrophysics Data System (ADS)
Klinkby, Esben; Lauritzen, Bent; Nonbøl, Erik; Kjær Willendrup, Peter; Filges, Uwe; Wohlmuther, Michael; Gallmeier, Franz X.
2013-02-01
Simulations of the target-moderator-reflector system at spallation sources are conventionally carried out using Monte Carlo codes such as MCNPX (Waters et al., 2007 [1]) or FLUKA (Battistoni et al., 2007; Ferrari et al., 2005 [2,3]), whereas simulations of neutron transport from the moderator and of the instrument response are performed by neutron ray-tracing codes such as McStas (Lefmann and Nielsen, 1999; Willendrup et al., 2004, 2011a,b [4-7]). The coupling between the two simulation suites typically consists of providing analytical fits of MCNPX neutron spectra to McStas. This method is generally successful but has limitations; for example, it does not allow for re-entry of neutrons into the MCNPX regime. Previous work to resolve such shortcomings includes the introduction of McStas-inspired supermirrors in MCNPX. In the present paper, different approaches to interfacing MCNPX and McStas are presented and applied to a simple test case. The direct coupling between MCNPX and McStas allows for more accurate simulations of, e.g., complex moderator geometries, backgrounds, interference between beam-lines, and shielding requirements along the neutron guides.
Benchmarking of Neutron Production of Heavy-Ion Transport Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Remec, Igor; Ronningen, Reginald M.; Heilbronn, Lawrence
Accurate prediction of radiation fields generated by heavy ion interactions is important in medical applications, space missions, and in the design and operation of rare isotope research facilities. In recent years, several well-established computer codes in widespread use for particle and radiation transport calculations have been equipped with the capability to simulate heavy ion transport and interactions. To assess and validate these capabilities, we performed simulations of a series of benchmark-quality heavy ion experiments with the computer codes FLUKA, MARS15, MCNPX, and PHITS. We focus on the comparisons of secondary neutron production. Results are encouraging; however, further improvements in models and codes and additional benchmarking are required.
Benchmarking of Heavy Ion Transport Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Remec, Igor; Ronningen, Reginald M.; Heilbronn, Lawrence
Accurate prediction of radiation fields generated by heavy ion interactions is important in medical applications, space missions, and in the design and operation of rare isotope research facilities. In recent years, several well-established computer codes in widespread use for particle and radiation transport calculations have been equipped with the capability to simulate heavy ion transport and interactions. To assess and validate these capabilities, we performed simulations of a series of benchmark-quality heavy ion experiments with the computer codes FLUKA, MARS15, MCNPX, and PHITS. We focus on the comparisons of secondary neutron production. Results are encouraging; however, further improvements in models and codes and additional benchmarking are required.
NASA Astrophysics Data System (ADS)
Villoing, Daphnée; Marcatili, Sara; Garcia, Marie-Paule; Bardiès, Manuel
2017-03-01
The purpose of this work was to validate GATE-based clinical scale absorbed dose calculations in nuclear medicine dosimetry. GATE (version 6.2) and MCNPX (version 2.7.a) were used to derive dosimetric parameters (absorbed fractions, specific absorbed fractions and S-values) for the reference female computational model proposed by the International Commission on Radiological Protection in ICRP report 110. Monoenergetic photons and electrons (from 50 keV to 2 MeV) and four isotopes currently used in nuclear medicine (fluorine-18, lutetium-177, iodine-131 and yttrium-90) were investigated. Absorbed fractions, specific absorbed fractions and S-values were generated with GATE and MCNPX for 12 regions of interest in the ICRP 110 female computational model, thereby leading to 144 source/target pair configurations. Relative differences between GATE and MCNPX obtained in specific configurations (self-irradiation or cross-irradiation) are presented. Relative differences in absorbed fractions, specific absorbed fractions or S-values are below 10%, and in most cases less than 5%. Dosimetric results generated with GATE for the 12 volumes of interest are available as supplemental data. GATE can be safely used for radiopharmaceutical dosimetry at the clinical scale. This makes GATE a viable option for Monte Carlo modelling of both imaging and absorbed dose in nuclear medicine.
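The S-values compared above follow the MIRD formalism, which is compact enough to state in code. A minimal sketch; in any real use the yields, energies and absorbed fractions would come from decay data and the GATE/MCNPX tallies for each source/target pair:

```python
MEV_TO_J = 1.602176634e-13  # exact conversion, MeV -> joule

def s_value(yields, energies_mev, absorbed_fractions, target_mass_kg):
    """MIRD-style S-value (Gy per decay) for one source -> target pair:
    S = sum_i y_i * E_i * phi_i / m_target,
    where y_i is the emission yield, E_i the mean emission energy and
    phi_i the absorbed fraction in the target for emission i."""
    energy_absorbed_j = sum(
        y * e * MEV_TO_J * phi
        for y, e, phi in zip(yields, energies_mev, absorbed_fractions)
    )
    return energy_absorbed_j / target_mass_kg

def specific_absorbed_fraction(absorbed_fraction, target_mass_kg):
    """Phi = phi / m, the quantity tabulated alongside S-values in the study."""
    return absorbed_fraction / target_mass_kg
```

Relative differences between two codes' S-values then reduce to relative differences in the underlying absorbed fractions, which is why the paper reports both.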
Benchmarking of neutron production of heavy-ion transport codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Remec, I.; Ronningen, R. M.; Heilbronn, L.
Document available in abstract form only; full text of document follows: Accurate prediction of radiation fields generated by heavy ion interactions is important in medical applications, space missions, and in the design and operation of rare isotope research facilities. In recent years, several well-established computer codes in widespread use for particle and radiation transport calculations have been equipped with the capability to simulate heavy ion transport and interactions. To assess and validate these capabilities, we performed simulations of a series of benchmark-quality heavy ion experiments with the computer codes FLUKA, MARS15, MCNPX, and PHITS. We focus on the comparisons of secondary neutron production. Results are encouraging; however, further improvements in models and codes and additional benchmarking are required.
Gifford, Kent A; Wareing, Todd A; Failla, Gregory; Horton, John L; Eifel, Patricia J; Mourtada, Firas
2009-12-03
A patient dose distribution was calculated by a 3D multi-group SN particle transport code for intracavitary brachytherapy of the cervix uteri and compared to previously published Monte Carlo results. A Cs-137 LDR intracavitary brachytherapy CT data set was chosen from our clinical database. MCNPX version 2.5.c was used to calculate the dose distribution. A 3D multi-group SN particle transport code, Attila version 6.1.1, was used to simulate the same patient. Each patient applicator was built in SolidWorks, a mechanical design package, and then assembled with a coordinate transformation and rotation for the patient. The SolidWorks-exported applicator geometry was imported into Attila for calculation. Dose matrices were overlaid on the patient CT data set, and dose volume histograms and point doses were compared. The MCNPX calculation required 14.8 hours, whereas the Attila calculation required 22.2 minutes on a 1.8 GHz AMD Opteron CPU. Agreement between Attila and MCNPX dose calculations at the ICRU 38 points was within +/- 3%. Calculated doses to the 2 cc and 5 cc volumes of highest dose differed by no more than +/- 1.1% between the two codes. Dose and DVH overlays agreed well qualitatively. Attila can calculate dose accurately and efficiently for this Cs-137 CT-based patient geometry. Our data showed that a three-group cross-section set is adequate for Cs-137 computations. Future work is aimed at implementing an optimized version of Attila for radiotherapy calculations.
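The DVH and hottest-volume metrics used in the comparison can be sketched directly. Assuming a flat list of voxel doses and voxel volumes (hypothetical inputs, not the study's data):

```python
def cumulative_dvh(doses_gy, voxel_cc, levels_gy):
    """Cumulative DVH: total volume receiving at least each dose level."""
    return [sum(v for d, v in zip(doses_gy, voxel_cc) if d >= lvl)
            for lvl in levels_gy]

def d_hot(doses_gy, voxel_cc, volume_cc):
    """Minimum dose to the hottest `volume_cc` cm^3, i.e. the D2cc/D5cc-style
    metrics compared between the two codes in the study."""
    accumulated = 0.0
    for d, v in sorted(zip(doses_gy, voxel_cc), reverse=True):
        accumulated += v
        if accumulated >= volume_cc:
            return d
    return min(doses_gy)
```

Computing these on the two codes' dose grids over the same CT voxels gives exactly the +/- 1.1% hottest-volume comparison quoted in the abstract.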
NASA Astrophysics Data System (ADS)
Hartini, Entin; Andiwijayakusuma, Dinan
2014-09-01
This research concerns the development of a code for uncertainty analysis based on a statistical approach to the uncertainty of input parameters. In the burn-up calculation of the fuel, uncertainty analysis is performed for the input parameters fuel density, coolant density and fuel temperature. The calculation is performed over the irradiation history using the Monte Carlo N-Particle Transport code. The uncertainty method is based on probability density functions. The developed code is a Python script that couples with MCNPX for the criticality and burn-up calculations. The simulation models the geometry of a PWR core with MCNPX at a power of 54 MW with UO2 pellet fuel, using the continuous-energy cross-section library ENDF/B-VI. MCNPX requires nuclear data in ACE format, so interfaces were developed to obtain ACE-format nuclear data from ENDF through a dedicated NJOY calculation for temperature changes over a certain range.
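A coupling script of this kind boils down to two pieces: filling an input template with sampled parameter values, and parsing k-eff back out of the output. The template, the 2% relative standard deviation, and the "keff =" output wording below are all assumptions for illustration, not the actual script or the exact MCNPX output format:

```python
import random

DECK_TEMPLATE = """c perturbed PWR cell (hypothetical template)
m1   92235.70c 1.0   $ fuel, density {fuel_density:.4f} g/cc, T = {fuel_temp_k:.0f} K
m2    1001.70c 2.0  8016.70c 1.0   $ coolant, density {coolant_density:.4f} g/cc
"""

def sample_deck(rng, fuel_rho=10.4, cool_rho=0.74, temp_k=900.0, rel_sd=0.02):
    """Draw the three uncertain inputs from normal distributions (2% relative
    standard deviation assumed here) and fill the input template."""
    return DECK_TEMPLATE.format(
        fuel_density=rng.gauss(fuel_rho, rel_sd * fuel_rho),
        coolant_density=rng.gauss(cool_rho, rel_sd * cool_rho),
        fuel_temp_k=rng.gauss(temp_k, rel_sd * temp_k),
    )

def parse_keff(output_text):
    """Pull k-eff from a 'keff = ...' output line (exact wording assumed)."""
    for line in output_text.splitlines():
        if "keff =" in line:
            return float(line.split("keff =")[1].split()[0])
    raise ValueError("no keff line found")
```

Running many sampled decks through the transport code and collecting the parsed k-eff values yields the output distribution from which the input-parameter uncertainty is assessed.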
Simulation of a beam rotation system for a spallation source
NASA Astrophysics Data System (ADS)
Reiss, Tibor; Reggiani, Davide; Seidel, Mike; Talanov, Vadim; Wohlmuther, Michael
2015-04-01
With a nominal beam power of nearly 1 MW on target, the Swiss Spallation Neutron Source (SINQ) ranks among the world's most powerful spallation neutron sources. The proton beam transport to the SINQ target is carried out exclusively by means of linear magnetic elements. In the transport line to SINQ the beam is scattered in two meson production targets; as a consequence, at the SINQ target entrance the beam shape can be described by Gaussian distributions in the transverse x and y directions with tails cut short by collimators. This leads to a highly nonuniform power distribution inside the SINQ target, giving rise to thermal and mechanical stresses. In view of a future proton beam intensity upgrade, the possibility of homogenizing the beam distribution by means of a fast beam rotation system is currently under investigation. Important aspects to be studied are the impact of a rotating proton beam on the resulting neutron spectra and spatial flux distributions, and the additional, previously absent, proton losses causing unwanted activation of accelerator components. Hence a new source description method was developed for the radiation transport code MCNPX. This new feature makes direct use of the results from the proton beam optics code TURTLE. Its advantage over existing MCNPX source options is that all phase-space information and correlations of each primary beam particle computed with TURTLE are preserved and transferred to MCNPX. Simulations of the different beam distributions, together with their consequences in terms of neutron production, are presented in this publication. Additionally, a detailed description of the coupling method between TURTLE and MCNPX is provided.
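The key design point, preserving all phase-space information and correlations per particle, amounts to handing over complete per-particle records rather than fitted marginal distributions. A sketch with an assumed (not the actual) record layout:

```python
from dataclasses import dataclass

@dataclass
class ProtonState:
    """One TURTLE-computed primary proton at the handover plane."""
    x: float
    y: float
    z: float            # position (cm)
    u: float
    v: float
    w: float            # direction cosines
    energy_mev: float
    weight: float

def to_source_records(states):
    """Serialize each particle as one text record (layout assumed for
    illustration). All correlations between position, direction and energy
    are kept per particle instead of being collapsed into marginals."""
    lines = []
    for s in states:
        norm = (s.u ** 2 + s.v ** 2 + s.w ** 2) ** 0.5
        assert abs(norm - 1.0) < 1e-6, "direction cosines must be normalized"
        lines.append(" ".join(f"{val:.8e}" for val in
                              (s.x, s.y, s.z, s.u, s.v, s.w,
                               s.energy_mev, s.weight)))
    return lines

def from_source_records(lines):
    """Read the records back on the transport-code side."""
    return [ProtonState(*map(float, ln.split())) for ln in lines]
```

Sampling source particles from such a record list reproduces the TURTLE beam exactly, which is what a fitted Gaussian-with-cut-tails description cannot do.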
Fuel burnup analysis for IRIS reactor using MCNPX and WIMS-D5 codes
NASA Astrophysics Data System (ADS)
Amin, E. A.; Bashter, I. I.; Hassan, Nabil M.; Mustafa, S. S.
2017-02-01
The International Reactor Innovative and Secure (IRIS) is a compact power reactor with special design features; it contains an Integral Fuel Burnable Absorber (IFBA), and its core is heterogeneous both axially and radially. This work provides a full-core burnup analysis of the IRIS reactor using the MCNPX and WIMS-D5 codes. Criticality calculations, radial and axial power distributions and the nuclear peaking factor at different stages of burnup were studied. Effective multiplication factor values for the core were estimated by coupling the MCNPX code with the WIMS-D5 code and compared with SAS2H/KENO-V values at different stages of burnup; the two calculation routes show good agreement and correlation. The radial and axial powers for the full core were also compared with published SAS2H/KENO-V results at the beginning and end of reactor operation, and the behavior of both distributions is quite similar. The peaking factor values estimated in the present work are close to those calculated by SAS2H/KENO-V.
Comparison of fluence-to-dose conversion coefficients for deuterons, tritons and helions.
Copeland, Kyle; Friedberg, Wallace; Sato, Tatsuhiko; Niita, Koji
2012-02-01
Secondary radiation in aircraft and spacecraft includes deuterons, tritons and helions. Two sets of fluence-to-effective dose conversion coefficients for isotropic exposure to these particles were compared: one used the particle and heavy ion transport code system (PHITS) radiation transport code coupled with the International Commission on Radiological Protection (ICRP) reference phantoms (PHITS-ICRP) and the other the Monte Carlo N-Particle eXtended (MCNPX) radiation transport code coupled with modified BodyBuilder™ phantoms (MCNPX-BB). Also, two sets of fluence-to-effective dose equivalent conversion coefficients calculated using the PHITS-ICRP combination were compared: one used quality factors based on linear energy transfer; the other used quality factors based on lineal energy (y). Finally, PHITS-ICRP effective dose coefficients were compared with PHITS-ICRP effective dose equivalent coefficients. The PHITS-ICRP and MCNPX-BB effective dose coefficients were similar, except at high energies, where MCNPX-BB coefficients were higher. For helions, at most energies effective dose coefficients were much greater than effective dose equivalent coefficients. For deuterons and tritons, coefficients were similar when their radiation weighting factor was set to 2.
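The effective dose quantities being compared combine organ doses with tissue and radiation weighting factors. A sketch of the ICRP-style sum for a single radiation type, relevant to the abstract's note that setting the radiation weighting factor to 2 brings the deuteron and triton coefficient sets into line (the tissue weights in the test are toy values; real w_T values come from ICRP recommendations):

```python
def effective_dose_sv(organ_doses_gy, tissue_weights, w_r):
    """ICRP-style effective dose for a single radiation type:
    E = w_R * sum_T w_T * D_T,
    where D_T is the mean absorbed dose in tissue T, w_T the tissue
    weighting factor, and w_R the radiation weighting factor."""
    assert abs(sum(tissue_weights.values()) - 1.0) < 1e-9, "w_T must sum to 1"
    return w_r * sum(tissue_weights[t] * organ_doses_gy[t]
                     for t in tissue_weights)
```

Dividing E by the incident fluence for each monoenergetic run gives the fluence-to-effective-dose conversion coefficients tabulated in such studies; the dose-equivalent variants replace w_R by a quality factor evaluated per energy deposition.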
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zehtabian, M; Zaker, N; Sina, S
2015-06-15
Purpose: Different versions of the MCNP code are widely used for dosimetry purposes. The purpose of this study is to compare different versions of the MCNP code in the dosimetric evaluation of different brachytherapy sources. Methods: The TG-43 parameters, i.e. the dose rate constant, radial dose function, and anisotropy function, of different brachytherapy sources (Pd-103, I-125, Ir-192, and Cs-137) were calculated in a water phantom. The results obtained by three versions of the Monte Carlo code (MCNP4C, MCNPX, MCNP5) were compared for low- and high-energy brachytherapy sources. Then the cross-section library of the MCNP4C code was changed to ENDF/B-VI release 8, which is used in the MCNP5 and MCNPX codes. Finally, the TG-43 parameters obtained using the MCNP4C-revised code were compared with the other codes. Results: The results of these investigations indicate that for high-energy sources, the differences in TG-43 parameters between the codes are less than 1% for Ir-192 and less than 0.5% for Cs-137. However, for low-energy sources like I-125 and Pd-103, large discrepancies are observed between the g(r) values obtained by MCNP4C and the two other codes. The differences between g(r) values calculated using MCNP4C and MCNP5 at a distance of 6 cm were found to be about 17% and 28% for I-125 and Pd-103, respectively. The results obtained with MCNP4C-revised and MCNPX were similar; however, the maximum difference between the results obtained with the MCNP5 and MCNP4C-revised codes was 2% at 6 cm. Conclusion: The results indicate that using the MCNP4C code for dosimetry of low-energy brachytherapy sources can cause large errors. It is therefore recommended not to use this code for low-energy sources unless its cross-section library is changed. Since the results obtained with MCNP4C-revised and MCNPX were similar, it is concluded that the difference between MCNP4C and MCNPX lies in their cross-section libraries.
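The radial dose function g(r) at the centre of this comparison is defined by the TG-43 formalism. A sketch using the point-source geometry function G(r) = 1/r^2 (a real seed calculation would use the line-source form); the dose-rate inputs would come from the Monte Carlo tallies along the transverse axis:

```python
def radial_dose_function(dose_rates, radii_cm, r0_cm=1.0):
    """TG-43 radial dose function with the point-source geometry function
    G(r) = 1/r^2:
      g(r) = [D(r)/D(r0)] * [G(r0)/G(r)] = [D(r)/D(r0)] * r^2 / r0^2,
    normalized so that g(r0) = 1 at the reference distance r0 = 1 cm."""
    d0 = dose_rates[radii_cm.index(r0_cm)]
    return [d / d0 * (r ** 2) / (r0_cm ** 2)
            for d, r in zip(dose_rates, radii_cm)]
```

With the inverse-square geometric falloff divided out, g(r) isolates attenuation and scatter in water, which is exactly where the cross-section library differences between code versions show up for low-energy sources.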
MCNPX Cosmic Ray Shielding Calculations with the NORMAN Phantom Model
NASA Technical Reports Server (NTRS)
James, Michael R.; Durkee, Joe W.; McKinney, Gregg; Singleterry, Robert
2008-01-01
The United States is planning manned lunar and interplanetary missions in the coming years. Shielding from cosmic rays is a critical aspect of manned spaceflight. These ventures will present exposure issues involving the interplanetary Galactic Cosmic Ray (GCR) environment. GCRs are comprised primarily of protons (approx. 84.5%) and alpha particles (approx. 14.7%), while the remainder is comprised of massive, highly energetic nuclei. The National Aeronautics and Space Administration (NASA) Langley Research Center (LaRC) has commissioned a joint study with Los Alamos National Laboratory (LANL) to investigate the interaction of the GCR environment with humans using high-fidelity, state-of-the-art computer simulations. The simulations involve shielding and dose calculations in order to assess radiation effects in various organs. The simulations are being conducted using high-resolution voxel-phantom models and the MCNPX [1] Monte Carlo radiation-transport code. Recent advances in MCNPX physics packages now enable simulated transport of over 2200 types of ions of widely varying energies in large, intricate geometries. We report here initial results obtained using a GCR spectrum and a NORMAN [3] phantom.
GEANT4 benchmark with MCNPX and PHITS for activation of concrete
NASA Astrophysics Data System (ADS)
Tesse, Robin; Stichelbaut, Frédéric; Pauly, Nicolas; Dubus, Alain; Derrien, Jonathan
2018-02-01
The activation of concrete is a genuine problem from the point of view of waste management. Because of the complexity of the issue, Monte Carlo (MC) codes have become an essential tool for its study, but several codes, and several nuclear models within each code, are available. MCNPX and PHITS have already been validated for shielding studies, and GEANT4 is also a suitable option. In these codes, different models can be considered for a concrete activation study: the Bertini model is not the best choice for spallation, while the BIC and INCL models agree well with previous results in the literature.
FLUKA simulation studies on in-phantom dosimetric parameters of a LINAC-based BNCT
NASA Astrophysics Data System (ADS)
Ghal-Eh, N.; Goudarzi, H.; Rahmani, F.
2017-12-01
The Monte Carlo simulation code FLUKA, version 2011.2c.5, has been used to estimate the in-phantom dosimetric parameters for use in BNCT studies. The in-phantom parameters of a typical Snyder head phantom, which are necessary information prior to any clinical treatment, have been calculated with both the FLUKA and MCNPX codes, with promising agreement. The results confirm that FLUKA can be regarded as a good alternative to MCNPX in BNCT dosimetry simulations.
Chiavassa, S; Lemosquet, A; Aubineau-Lanièce, I; de Carlan, L; Clairand, I; Ferrer, L; Bardiès, M; Franck, D; Zankl, M
2005-01-01
This paper aims at comparing dosimetric assessments performed with three Monte Carlo codes: EGS4, MCNP4c2 and MCNPX2.5e, using a realistic voxel phantom, namely the Zubal phantom, in two configurations of exposure. The first one deals with an external irradiation corresponding to the example of a radiological accident. The results are obtained using the EGS4 and the MCNP4c2 codes and expressed in terms of the mean absorbed dose (in Gy per source particle) for brain, lungs, liver and spleen. The second one deals with an internal exposure corresponding to the treatment of a medullary thyroid cancer by 131I-labelled radiopharmaceutical. The results are obtained by EGS4 and MCNPX2.5e and compared in terms of S-values (expressed in mGy per kBq and per hour) for liver, kidney, whole body and thyroid. The results of these two studies are presented and differences between the codes are analysed and discussed.
Burn, K W; Daffara, C; Gualdrini, G; Pierantoni, M; Ferrari, P
2007-01-01
The question of Monte Carlo simulation of radiation transport in voxel geometries is addressed. Patched versions of the MCNP and MCNPX codes are developed aimed at transporting radiation both in the standard geometry mode and in the voxel geometry treatment. The patched code reads an unformatted FORTRAN file derived from DICOM format data and uses special subroutines to handle voxel-to-voxel radiation transport. The various phases of the development of the methodology are discussed together with the new input options. Examples are given of employment of the code in internal and external dosimetry and comparisons with results from other groups are reported.
Shielding evaluation for solar particle events using MCNPX, PHITS and OLTARIS codes
NASA Astrophysics Data System (ADS)
Aghara, S. K.; Sriprisan, S. I.; Singleterry, R. C.; Sato, T.
2015-01-01
Detailed analyses of Solar Particle Events (SPE) were performed to calculate primary and secondary particle spectra behind aluminum shielding, at various thicknesses in water. The simulations were based on the Monte Carlo (MC) radiation transport codes MCNPX 2.7.0 and PHITS 2.64, and the space radiation analysis website OLTARIS (On-Line Tool for the Assessment of Radiation in Space) version 3.4 (which uses the deterministic code HZETRN for transport). The study investigates the transport of SPE spectra through a 10 or 20 g/cm2 Al shield followed by a 30 g/cm2 water slab. Four historical SPE events were selected and used as input source spectra; particle differential spectra for protons, neutrons, and photons are presented, along with the total particle fluence as a function of depth. In addition to particle flux, the dose and dose equivalent values are calculated and compared between the codes and with other published results. Overall, the particle fluence spectra from all three codes show good agreement, with the two MC codes agreeing more closely with each other than with the OLTARIS results. The neutron fluence from OLTARIS is lower than the results from the MC codes at lower energies (E < 100 MeV). Based on mean-square-difference analysis, the results from MCNPX and PHITS agree better with each other for fluence, dose and dose equivalent than with the OLTARIS results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Long, Daniel J.; Lee, Choonsik; Tien, Christopher
2013-01-15
Purpose: To validate the accuracy of a Monte Carlo source model of the Siemens SOMATOM Sensation 16 CT scanner using organ doses measured in physical anthropomorphic phantoms. Methods: The x-ray output of the Siemens SOMATOM Sensation 16 multidetector CT scanner was simulated within the Monte Carlo radiation transport code MCNPX version 2.6. The resulting source model was able to perform various simulated axial and helical computed tomographic (CT) scans of varying scan parameters, including beam energy, filtration, pitch, and beam collimation. Two custom-built anthropomorphic phantoms were used to take dose measurements on the CT scanner: an adult male and a 9-month-old. The adult male is a physical replica of the University of Florida reference adult male hybrid computational phantom, while the 9-month-old is a replica of the University of Florida Series B 9-month-old voxel computational phantom. Each phantom underwent a series of axial and helical CT scans, during which organ doses were measured using fiber-optic coupled plastic scintillator dosimeters developed at the University of Florida. The physical setup was reproduced and simulated in MCNPX using the CT source model and the computational phantoms upon which the anthropomorphic phantoms were constructed. Average organ doses were then calculated based upon these MCNPX results. Results: For all CT scans, good agreement was seen between measured and simulated organ doses. For the adult male, the percent differences were within 16% for axial scans and within 18% for helical scans. For the 9-month-old, the percent differences were all within 15% for both the axial and helical scans. These results are comparable to previously published validation studies using GE scanners and commercially available anthropomorphic phantoms.
Conclusions: Overall results of this study show that the Monte Carlo source model can be used to accurately and reliably calculate organ doses for patients undergoing a variety of axial or helical CT examinations on the Siemens SOMATOM Sensation 16 scanner.
MCNPX simulation of proton dose distribution in homogeneous and CT phantoms
NASA Astrophysics Data System (ADS)
Lee, C. C.; Lee, Y. J.; Tung, C. J.; Cheng, H. W.; Chao, T. C.
2014-02-01
A dose simulation system was constructed based on the MCNPX Monte Carlo package to simulate proton dose distributions in homogeneous and CT phantoms. Conversion from the Hounsfield units of a patient CT image set to the material information necessary for Monte Carlo simulation is based on Schneider's approach. To validate this simulation system, an inter-comparison of depth dose distributions obtained from the MCNPX, GEANT4 and FLUKA codes was performed for a 160 MeV monoenergetic proton beam incident normally on the surface of a homogeneous water phantom. For dose validation within the CT phantom, direct comparison with measurement is infeasible. Instead, this study took the approach of indirectly comparing the 50% ranges (R50%) along the central axis computed by our system to the NIST CSDA ranges for beams with 160 and 115 MeV energies. Comparison results within the homogeneous phantom show good agreement: differences in the simulated R50% among the three codes are less than 1 mm. For results within the CT phantom, the MCNPX-simulated water-equivalent Req,50% values are compatible with the CSDA water-equivalent ranges from the NIST database, with differences of 0.7 and 4.1 mm for the 160 and 115 MeV beams, respectively.
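Schneider's approach referenced above maps CT numbers to the tissue parameters a Monte Carlo code needs via a stoichiometric calibration. A minimal sketch of the density half of such a mapping; the calibration points below are hypothetical placeholders (the real calibration uses many more segments and also assigns an elemental composition to each HU interval):

```python
def hu_to_density(hu, calib=((-1000, 0.00121), (0, 1.000), (100, 1.100), (1600, 1.964))):
    """Piecewise-linear map from Hounsfield units to mass density (g/cm^3)."""
    pts = sorted(calib)
    if hu <= pts[0][0]:
        return pts[0][1]  # clamp below the first calibration point (air)
    for (h0, d0), (h1, d1) in zip(pts, pts[1:]):
        if hu <= h1:
            # linear interpolation within the calibration segment
            return d0 + (d1 - d0) * (hu - h0) / (h1 - h0)
    return pts[-1][1]  # clamp above the last calibration point
```

Each voxel's HU value would be converted this way (plus a material assignment) before being handed to the transport code.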
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bianchini, G.; Burgio, N.; Carta, M.
The GUINEVERE experiment (Generation of Uninterrupted Intense Neutrons at the lead Venus Reactor) is an experimental program in support of the ADS technology presently carried out at SCK-CEN in Mol (Belgium). In the experiment a modified lay-out of the original thermal VENUS critical facility is coupled to an accelerator, built by the French body CNRS in Grenoble, working in both continuous and pulsed mode and delivering 14 MeV neutrons by bombardment of deuterons on a tritium target. The modified lay-out of the facility consists of a fast subcritical core made of 30% U-235 enriched metallic uranium in a lead matrix. Several off-line and on-line reactivity measurement techniques will be investigated during the experimental campaign. This report is focused on the simulation by deterministic (ERANOS French code) and Monte Carlo (MCNPX US code) calculations of three reactivity measurement techniques, Slope (α-fitting), Area-ratio and Source-jerk, applied to a GUINEVERE subcritical configuration (namely SC1). The reactivity, in dollar units, inferred by the Area-ratio method shows an overall agreement between the deterministic and Monte Carlo computational approaches, whereas the MCNPX Source-jerk results are affected by large uncertainties and allow only partial conclusions about the comparison. Finally, no particular spatial dependence of the results is observed in the case of the GUINEVERE SC1 subcritical configuration. (authors)
DHS Summary Report -- Robert Weldon
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weldon, Robert A.
This summer I worked on benchmarking the Lawrence Livermore National Laboratory fission multiplicity capability used in the Monte Carlo particle transport code MCNPX. This work involved running simulations and then comparing the simulation results with experimental results. Outlined in this paper is a brief description of the work completed this summer, the skills and knowledge gained, and how the internship has impacted my planning for the future. Neutron multiplicity counting is a neutron detection technique that leverages the multiplicity emissions of neutrons from fission to identify various actinides in a lump of material. The identification of individual actinides in lumps of material crossing our borders, especially U-235 and Pu-239, is a key component for maintaining the safety of the country from nuclear threats. Several multiplicity emission options for spontaneous and induced fission already existed in MCNPX 2.4.0; these options can be accessed through the 6th entry on the PHYS:N card. Lawrence Livermore National Laboratory (LLNL) developed a physics model for the simulation of neutron and gamma-ray emission from fission and photofission that was included in MCNPX 2.7.B as an undocumented feature and then was documented in MCNPX 2.7.C. The LLNL multiplicity capability provided a different means for MCNPX to simulate neutron and gamma-ray distributions for neutron-induced, spontaneous and photonuclear fission reactions. The original testing of the model for implementation into MCNPX was conducted by Gregg McKinney and John Hendricks. The model is an encapsulation of measured neutron multiplicity distribution data from Gwin, Spencer, and Ingle, along with the data from Zucker and Holden. One of the founding principles of MCNPX was that it would have several redundant capabilities, providing the means of testing and including various physics packages.
Though several multiplicity sampling methodologies already existed within MCNPX, the LLNL fission multiplicity was included to provide a separate capability for computing multiplicity, as well as several new features not already present in MCNPX. These new features include: (1) prompt gamma emission/multiplicity from neutron-induced fission; (2) neutron multiplicity and gamma emission/multiplicity from photofission; and (3) an option to enforce energy correlation for gamma/neutron multiplicity emission. These new capabilities allow correlated signal detection for identifying the presence of special nuclear material (SNM). They therefore help meet the missions of the Domestic Nuclear Detection Office (DNDO), which is tasked with developing nuclear detection strategies for identifying potential radiological and nuclear threats, by providing new simulation capability for detection strategies that leverage the new physics available in the LLNL multiplicity capability. Two types of tests were performed this summer to exercise the default LLNL neutron multiplicity capability: neutron-induced fission tests and spontaneous fission tests. Both cases set the 6th entry on the PHYS:N card to 5 (i.e. use LLNL multiplicity). The neutron-induced fission tests utilized a simple 0.001 cm radius sphere in which 0.0253 eV neutrons were released at the sphere center. Neutrons were forced to immediately collide in the sphere and release all progeny from the sphere, without further collision, using the LCA card, LCA 7j -2 (the density and size of the sphere were therefore irrelevant). Enough particles were run to ensure that the average error of any specific multiplicity did not exceed 0.36%. Neutron-induced fission multiplicities were computed for U-233, U-235, Pu-239, and Pu-241.
The spontaneous fission tests also used the same spherical geometry, except: (1) the LCA card was removed; (2) the density of the sphere was set to 0.001 g/cm3; and (3) instead of emitting a thermal neutron, the PAR keyword was set to PAR=SF. The purpose of the small density was to ensure that the spontaneous fission neutrons would not further interact and induce fissions (i.e. the mean free path greatly exceeded the size of the sphere). Enough particles were run to ensure that the average error of any specific spontaneous multiplicity did not exceed 0.23%. Spontaneous fission multiplicities were computed for U-238, Pu-238, Pu-240, Pu-242, Cm-242, and Cm-244. All of the computed results were compared against experimental results compiled by Holden at Brookhaven National Laboratory.
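Comparisons of a sampled multiplicity distribution against a compilation such as Holden's typically reduce to its low-order moments, since these are what drive singles/doubles/triples rates in multiplicity counting. A small sketch of those moments, using an arbitrary illustrative distribution rather than any nuclide's actual data:

```python
import math

def nu_bar(p):
    """Mean multiplicity of a normalized distribution p[n] = P(n neutrons)."""
    return sum(n * pn for n, pn in enumerate(p))

def factorial_moment(p, k):
    """k-th factorial moment <n(n-1)...(n-k+1)> of the distribution."""
    return sum(math.prod(n - i for i in range(k)) * pn for n, pn in enumerate(p))

# Illustrative (not nuclide-specific) distribution over n = 0, 1, 2:
p_demo = [0.2, 0.3, 0.5]
```

A benchmark like the one described would tabulate these moments for the simulated and the compiled distributions and report the relative deviations.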
MCNP/X TRANSPORT IN THE TABULAR REGIME
DOE Office of Scientific and Technical Information (OSTI.GOV)
HUGHES, H. GRADY
2007-01-08
The authors review the transport capabilities of the MCNP and MCNPX Monte Carlo codes in the energy regimes in which tabular transport data are available. Giving special attention to neutron tables, they emphasize the measures taken to improve the treatment of a variety of difficult aspects of the transport problem, including unresolved resonances, thermal issues, and the availability of suitable cross-section sets. They also briefly touch on the current situation with regard to photon, electron, and proton transport tables.
Evaluation of alternative shielding materials for the F-127 transport package
NASA Astrophysics Data System (ADS)
Gual, Maritza R.; Mesquita, Amir Z.; Pereira, Cláubia
2018-03-01
Lead is used as the radiation shielding material in Nordion's F-127 source shipping container, which is used for transport and storage of the GammaBeam-127's cobalt-60 source at the Nuclear Technology Development Center (CDTN) in Belo Horizonte, Brazil. As alternatives, Th, Tl and WC have been evaluated as radiation shielding materials, the goal being to check their behavior regarding shielding and dose. The Monte Carlo MCNPX code is used for the simulations; in the MCNPX calculation, a cylinder was used as the exclusion surface instead of a sphere. Validation of the MCNPX gamma dose calculations was carried out through comparison with experimental measurements. The results show that tungsten carbide (WC) is a better shielding material for γ-rays than lead.
Gallmeier, F. X.; Iverson, E. B.; Lu, W.; ...
2016-01-08
Neutron transport simulation codes are an indispensable tool used for the design and construction of modern neutron scattering facilities and instrumentation. It has become increasingly clear that some neutron instrumentation has started to exploit physics that is not well modelled by the existing codes. In particular, the transport of neutrons through single crystals and across interfaces in MCNP(X), Geant4 and other codes ignores scattering from oriented crystals and refractive effects, and yet these are essential ingredients for the performance of monochromators and ultra-cold neutron transport, respectively (to mention but two examples). In light of these developments, we have extended the MCNPX code to include a single-crystal neutron scattering model and neutron reflection/refraction physics. Furthermore, we have also generated silicon scattering kernels for single crystals of definable orientation with respect to an incoming neutron beam. As a first test of these new tools, we have chosen to model the recently developed convoluted moderator concept, in which a moderating material is interleaved with layers of perfect crystals to provide an exit path for neutrons moderated to energies below the crystal's Bragg cutoff at locations deep within the moderator. Studies of simple cylindrical convoluted moderator systems of 100 mm diameter, composed of polyethylene and single-crystal silicon, were performed with the upgraded MCNPX code and reproduced the magnitude of effects seen in experiments compared to homogeneous moderator systems. Applying different material properties for refraction and reflection, and by replacing the silicon in the models with voids, we show that the emission enhancements seen in recent experiments are primarily caused by the transparency of the silicon/void layers. Finally, the convoluted moderator experiments described by Iverson et al. were simulated, and we find satisfactory agreement between the measurements and the results of simulations performed using the tools we have developed.
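The Bragg cutoff exploited by the convoluted moderator is the neutron energy below which no lattice plane can satisfy the Bragg condition (lambda > 2·d_max). A sketch of that threshold for silicon, assuming the largest d-spacing is the (111) plane of a lattice with constant 5.431 Å:

```python
import math

H = 6.62607015e-34       # Planck constant, J*s
M_N = 1.67492749804e-27  # neutron mass, kg
J_PER_EV = 1.602176634e-19

def bragg_cutoff_energy_ev(d_max_angstrom):
    """Neutrons with lambda > 2*d_max cannot Bragg scatter; the
    corresponding energy E = h^2 / (2 m lambda^2) is the Bragg cutoff."""
    lam = 2.0 * d_max_angstrom * 1e-10  # cutoff wavelength, m
    return H * H / (2.0 * M_N * lam * lam) / J_PER_EV

# Silicon: largest d-spacing is the (111) plane, d = a / sqrt(3), a = 5.431 A
e_cut = bragg_cutoff_energy_ev(5.431 / math.sqrt(3.0))
```

This comes out at roughly 2 meV, i.e. only well-moderated neutrons can pass the crystal layers without Bragg scattering, which is the escape path the convoluted moderator provides.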
Simulation of neutron production using MCNPX+MCUNED.
Erhard, M; Sauvan, P; Nolte, R
2014-10-01
In standard MCNPX, the production of neutrons by ions cannot be modelled efficiently. The MCUNED patch applied to MCNPX 2.7.0 makes it possible to model the production of neutrons by light ions down to energies of a few kiloelectronvolts. This is crucial for the simulation of neutron reference fields. The influence of target properties, such as the diffusion of reactive isotopes into the target backing or the effect of energy and angular straggling, can be studied efficiently. In this work, MCNPX/MCUNED calculations are compared with results obtained with the TARGET code for simulating neutron production. Furthermore, MCUNED incorporates more effective variance reduction techniques and a coincidence counting tally. This allows the simulation of a TCAP experiment being developed at PTB, in which 14.7-MeV neutrons will be produced by the reaction T(d,n)4He and the neutron fluence is determined by counting alpha particles, independently of the reaction cross section.
NASA Astrophysics Data System (ADS)
Lin, Yi-Chun; Liu, Yuan-Hao; Nievaart, Sander; Chen, Yen-Fu; Wu, Shu-Wei; Chou, Wen-Tsae; Jiang, Shiang-Huei
2011-10-01
High-energy photon (over 10 MeV) and neutron beams adopted in radiobiology and radiotherapy always produce mixed neutron/gamma-ray fields. Mg(Ar) ionization chambers are commonly applied to determine the gamma-ray dose because of their neutron-insensitive characteristics. Nowadays, many perturbation corrections for accurate dose estimation, and many treatment planning systems, are based on the Monte Carlo technique. The Monte Carlo codes EGSnrc, FLUKA, GEANT4, MCNP5, and MCNPX were used to evaluate energy-dependent response functions of the Exradin M2 Mg(Ar) ionization chamber to a parallel photon beam with mono-energies from 20 keV to 20 MeV. For validation, measurements were carefully performed in well-defined (a) primary M-100 X-ray calibration fields, (b) a primary 60Co calibration beam, (c) 6-MV, and (d) 10-MV therapeutic beams in hospital. In the energy region below 100 keV, MCNP5 and MCNPX both had lower responses than the other codes. For energies above 1 MeV, the MCNP ITS mode closely resembled the other three codes and the differences were within 5%. Compared with the measured currents, MCNP5 and MCNPX using the ITS mode showed excellent agreement for the 60Co and 10-MV beams, but in the X-ray energy region the deviations reached 17%. This work gives better insight into the performance of different Monte Carlo codes in photon-electron transport calculations. Regarding applications to mixed-field dosimetry such as BNCT, MCNP with ITS mode is recognized by this work as the most suitable tool.
MicroCT parameters for multimaterial elements assessment
NASA Astrophysics Data System (ADS)
de Araújo, Olga M. O.; Silva Bastos, Jaqueline; Machado, Alessandra S.; dos Santos, Thaís M. P.; Ferreira, Cintia G.; Rosifini Alves Claro, Ana Paula; Lopes, Ricardo T.
2018-03-01
Microtomography is a non-destructive testing technique for quantitative and qualitative analysis. The investigation of multimaterial elements with a great difference in density can result in artifacts that degrade image quality, depending on the combination of additional filters. The aim of this study is the selection of the most appropriate parameters for the analysis of bone tissue with a metallic implant. The results show MCNPX simulations of the energy distribution without an additional filter and with aluminum, copper and brass filters, together with the respective reconstructed images, demonstrating the importance of the choice of these parameters in the image acquisition process in computed microtomography.
Multi-resolution voxel phantom modeling: a high-resolution eye model for computational dosimetry
NASA Astrophysics Data System (ADS)
Caracappa, Peter F.; Rhodes, Ashley; Fiedler, Derek
2014-09-01
Voxel models of the human body are commonly used for simulating radiation dose with a Monte Carlo radiation transport code. Due to memory limitations, the voxel resolution of these computational phantoms is typically too large to accurately represent the dimensions of small features such as the eye. Recently reduced recommended dose limits to the lens of the eye, which is a radiosensitive tissue with a significant concern for cataract formation, have lent increased importance to understanding the dose to this tissue. A high-resolution eye model is constructed using physiological data for the dimensions of radiosensitive tissues, and combined with an existing set of whole-body models to form a multi-resolution voxel phantom, which is used with the MCNPX code to calculate radiation dose from various exposure types. This phantom provides an accurate representation of the radiation transport through the structures of the eye. Two alternative methods of including a high-resolution eye model within an existing whole-body model are developed. The accuracy and performance of each method are compared against existing computational phantoms.
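The memory limitation motivating the multi-resolution approach is easy to quantify: the voxel count grows with the cube of the resolution. A back-of-the-envelope sketch with hypothetical grid dimensions (not the phantom's actual ones):

```python
def voxel_memory_mb(nx, ny, nz, bytes_per_voxel=1):
    """Memory needed to hold a uniform voxel grid of organ/tissue IDs."""
    return nx * ny * nz * bytes_per_voxel / 1e6

# Hypothetical whole-body grid at ~2 mm voxels, vs. the same body
# re-gridded 20x finer (~0.1 mm, the scale needed to resolve eye structures):
coarse = voxel_memory_mb(300, 150, 900)              # tens of MB
fine = voxel_memory_mb(300 * 20, 150 * 20, 900 * 20)  # 20^3 = 8000x larger
```

Refining the whole body to eye-scale resolution multiplies memory by 20³ = 8000, which is why only the eye region is modeled at high resolution and embedded in the coarser whole-body grid.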
Hosseini, Seyed Abolfazl; Esmaili Paeen Afrakoti, Iman
2018-01-17
The purpose of the present study was to reconstruct the energy spectrum of a poly-energetic neutron source using an algorithm developed based on an Adaptive Neuro-Fuzzy Inference System (ANFIS). ANFIS is a kind of artificial neural network based on the Takagi-Sugeno fuzzy inference system. The ANFIS algorithm combines the advantages of fuzzy inference systems and artificial neural networks to improve the effectiveness of algorithms in various applications such as modeling, control and classification. The neutron pulse height distributions used as input data in the training procedure for the ANFIS algorithm were obtained from simulations performed with the MCNPX-ESUT computational code (MCNPX-Energy engineering of Sharif University of Technology). Taking into account the normalization condition of each energy spectrum, 4300 neutron energy spectra were generated randomly (the value in each bin was generated randomly, and each generated energy spectrum was then normalized). The randomly generated neutron energy spectra were used as output data of the developed ANFIS computational code in the training step. To calculate the neutron energy spectrum using conventional methods, an inverse problem with an approximately singular response matrix (with a determinant close to zero) must be solved, and the solution of this inverse problem by conventional methods unfolds the neutron energy spectrum with low accuracy. Application of iterative algorithms to such a problem, or use of intelligent algorithms (in which there is no need to solve the inverse problem), is therefore usually preferred for unfolding the energy spectrum. Avoiding the inverse problem is the main reason for developing intelligent algorithms like ANFIS for the unfolding of neutron energy spectra.
In the present study, the unfolded neutron energy spectra of 252Cf and 241Am-9Be neutron sources obtained using the developed computational code were found to be in excellent agreement with the reference data. Also, the unfolded energy spectra of the neutron sources obtained using ANFIS were more accurate than the results of calculations performed using artificial neural networks in previously published papers.
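The training-set construction described above (a random value per bin, followed by normalization of each spectrum) can be sketched as follows; the 52-bin size and the seed are illustrative, not taken from the paper:

```python
import random

def random_normalized_spectrum(n_bins, rng):
    """Draw a random value for each energy bin, then normalize to unit sum,
    matching the normalization condition imposed on each training spectrum."""
    raw = [rng.random() for _ in range(n_bins)]
    total = sum(raw)
    return [v / total for v in raw]

rng = random.Random(42)
spectra = [random_normalized_spectrum(52, rng) for _ in range(4300)]
```

Each of the 4300 spectra would then be paired with its simulated pulse height distribution to form one (input, output) training sample.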
Tekin, H O; Singh, V P; Manici, T
2017-03-01
In the present work, the effect of tungsten oxide (WO3) nanoparticles on the mass attenuation coefficients of concrete has been investigated using MCNPX (version 2.4.0). The generated MCNPX simulation geometry was validated by comparing the results with standard XCOM data for the mass attenuation coefficients of concrete; very good agreement between XCOM and MCNPX was obtained. The validated geometry was then used to define nano-WO3 and micro-WO3 in the concrete sample, and the mass attenuation coefficients of pure concrete and of WO3-added concrete with micro-sized and nano-sized particles were compared. It was observed that the shielding properties of concrete doped with WO3 increased. The mass attenuation coefficient results also showed that concrete doped with nano-WO3 improves shielding properties significantly more than micro-WO3. It can be concluded that the addition of nano-sized particles can be considered as another mechanism to reduce radiation dose.
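The quantity compared against XCOM above is the mass attenuation coefficient, recoverable from a narrow-beam transmission simulation via the Beer-Lambert law. A minimal sketch; the density, thickness and mu/rho values are hypothetical, not the study's concrete data:

```python
import math

def transmission(mu_rho, density, thickness):
    """Narrow-beam transmission I/I0 through a slab (Beer-Lambert law)."""
    return math.exp(-mu_rho * density * thickness)

def mass_attenuation(i0, i, density, thickness):
    """Recover mu/rho (cm^2/g) from a simulated or measured transmission."""
    return math.log(i0 / i) / (density * thickness)

# Hypothetical slab: 5 cm of 2.3 g/cm^3 concrete with mu/rho = 0.0857 cm^2/g
t = transmission(0.0857, 2.3, 5.0)
```

In an MCNPX attenuation geometry, i0 and i would be the tallied photon currents without and with the sample, and the recovered mu/rho compared bin-by-bin against XCOM.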
NASA Astrophysics Data System (ADS)
Sabaibang, S.; Lekchaum, S.; Tipayakul, C.
2015-05-01
This study is part of on-going work to develop a computational model of the Thai Research Reactor (TRR-1/M1) capable of accurately predicting the neutron flux level and spectrum. The computational model was created with the MCNPX program, and the CT (Central Thimble) in-core irradiation facility was selected as the location for validation. The comparison was performed with the flux measurement method routinely practiced at TRR-1/M1, namely the foil activation technique, in which a gold foil is irradiated for a certain period of time and the activity of the irradiated target is measured to derive the thermal neutron flux. Additionally, a flux measurement with an SPND (self-powered neutron detector) was performed for comparison. The thermal neutron flux from the MCNPX simulation was found to be 1.79×10^13 neutron/cm2·s, while that from the foil activation measurement was 4.68×10^13 neutron/cm2·s and that from the SPND measurement was 2.47×10^13 neutron/cm2·s. An assessment of the differences among the three methods was made: the difference of the MCNPX result from the foil activation technique was found to be 67.8%, and from the SPND measurement 27.8%.
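In the foil activation technique above, the thermal flux is inferred from the gold foil's measured activity through the activation equation A = N*sigma*phi*(1 - exp(-lambda*t)). A sketch with a hypothetical foil atom count, irradiation time and measured activity; the Au-197 thermal capture cross section (~98.65 b) and Au-198 half-life (~2.695 d) are standard values, and self-shielding and burn-up are neglected:

```python
import math

SIGMA_AU197 = 98.65e-24         # thermal capture cross section of Au-197, cm^2
T_HALF_AU198 = 2.695 * 86400.0  # Au-198 half-life, s

def thermal_flux(activity_bq, n_atoms, t_irr_s,
                 sigma_cm2=SIGMA_AU197, half_life_s=T_HALF_AU198):
    """Invert A = N*sigma*phi*(1 - exp(-lambda*t)) for the thermal flux phi
    in n/cm^2/s (no self-shielding or burn-up corrections)."""
    lam = math.log(2.0) / half_life_s
    return activity_bq / (n_atoms * sigma_cm2 * (1.0 - math.exp(-lam * t_irr_s)))

# Hypothetical foil: 3e19 gold atoms irradiated for 1 h, measured at 3.15e8 Bq
phi = thermal_flux(3.15e8, 3e19, 3600.0)
```

A real measurement would additionally correct for decay between end of irradiation and counting, detector efficiency, and the cadmium-difference separation of the thermal component.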
Performance upgrades to the MCNP6 burnup capability for large scale depletion calculations
Fensin, M. L.; Galloway, J. D.; James, M. R.
2015-04-11
The first MCNP-based inline Monte Carlo depletion capability was officially released from the Radiation Safety Information Computational Center as MCNPX 2.6.0. With the merger of MCNPX and MCNP5, MCNP6 combined the capabilities of both simulation tools, as well as providing new advanced technology, in a single radiation transport code. The new MCNP6 depletion capability was first showcased at the International Congress on Advances in Nuclear Power Plants (ICAPP) meeting in 2012. At that conference, the new capabilities addressed included the combined distributed and shared memory parallel architecture for the burnup capability, improved memory management, physics enhancements, and predictability as compared to the H.B. Robinson benchmark. At Los Alamos National Laboratory, a special-purpose cluster named "tebow" was constructed so as to maximize available RAM per CPU, as well as leveraging swap space on solid-state hard drives, to allow larger-scale depletion calculations (with significantly more burnable regions than previously examined). As the MCNP6 burnup capability was scaled to larger numbers of burnable regions, a noticeable slowdown was observed. This paper details two specific computational performance strategies for improving calculation speed: (1) retrieving cross sections during transport; and (2) tallying mechanisms specific to burnup in MCNP. To combat the slowdown, new performance upgrades were developed and integrated into MCNP6 1.2.
Estimation of weekly 99Mo production by AHR 200 kW
NASA Astrophysics Data System (ADS)
Siregar, I. H.; Suharyana; Khakim, A.; Siregar, D.; Frida, A. R.
2016-11-01
The estimation of weekly 99Mo production by a 200 kW AHR fueled with low-enriched uranium uranyl nitrate solution has been performed with the MCNPX computer code. We employed the AHR design of the Babcock & Wilcox Medical Isotope Production System, with a 9Be reflector and a stainless steel vessel. We found that when the concentration of uranium in the fresh fuel was 108 g U/L of UO2(NO3)2 fuel solution, the multiplication factor was 1.0517. The 99Mo concentration reached saturation on the tenth day of operation. The AHR can produce approximately 1.96×10³ 6-day-Ci weekly.
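The ten-day saturation behaviour follows from the usual activation buildup law A(t)/A_sat = 1 - exp(-λt). A quick check using the standard 66 h half-life of 99Mo (a textbook value, not quoted in the abstract):

```python
import math

MO99_HALF_LIFE_H = 66.0  # standard 99Mo half-life in hours

def buildup_fraction(t_hours, half_life_h=MO99_HALF_LIFE_H):
    """Fraction of the saturation activity reached after continuous
    irradiation for t_hours: A(t)/A_sat = 1 - exp(-lambda * t)."""
    lam = math.log(2.0) / half_life_h
    return 1.0 - math.exp(-lam * t_hours)

# After ten days of continuous operation the 99Mo activity is
# within about 8% of its saturation value.
print(round(buildup_fraction(10 * 24.0), 3))  # 0.92
```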
Benchmarking Heavy Ion Transport Codes FLUKA, HETC-HEDS, MARS15, MCNPX, and PHITS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ronningen, Reginald Martin; Remec, Igor; Heilbronn, Lawrence H.
Powerful accelerators such as spallation neutron sources, muon-collider/neutrino facilities, and rare isotope beam facilities must be designed with the consideration that they handle the beam power reliably and safely, and they must be optimized to yield maximum performance relative to their design requirements. The simulation codes used for design purposes must produce reliable results. If not, component and facility designs can become costly, have limited lifetime and usefulness, and could even be unsafe. The objective of this proposal is to assess the performance of the currently available codes PHITS, FLUKA, MARS15, MCNPX, and HETC-HEDS that could be used for design simulations involving heavy ion transport. We plan to assess their performance by performing simulations and comparing the results against experimental data of benchmark quality. Quantitative knowledge of the biases and uncertainties of the simulations is essential, as this potentially impacts the safe, reliable and cost-effective design of any future radioactive ion beam facility. Further benchmarking of heavy-ion transport codes was one of the actions recommended in the Report of the 2003 RIA R&D Workshop.
Simulation of a complete X-ray digital radiographic system for industrial applications.
Nazemi, E; Rokrok, B; Movafeghi, A; Choopan Dastjerdi, M H
2018-05-19
Simulating X-ray images is of great importance in industry and medicine. Such simulation permits optimizing the parameters that affect image quality without the limitations of an experimental procedure. This study presents a novel methodology to simulate a complete industrial X-ray digital radiographic system, composed of an X-ray tube and a computed radiography (CR) imaging plate, using the Monte Carlo N-Particle eXtended (MCNPX) code. An industrial X-ray tube with a maximum voltage of 300 kV and a current of 5 mA was simulated. A three-layer uniform plate comprising a polymer overcoat layer, a phosphor layer and a polycarbonate backing layer was defined and simulated as the CR imaging plate. To model image formation in the imaging plate, the absorbed dose was first calculated in each pixel inside the phosphor layer using the mesh tally in the MCNPX code and then converted to a gray value using a mathematical relationship determined in a separate procedure. To validate the simulation results, an experimental setup was designed, and images of two step wedges made of aluminum and steel were captured experimentally and compared with the simulations. The results show that the simulated images are in good agreement with the experimental ones, demonstrating the ability of the proposed methodology to simulate an industrial X-ray imaging system. Copyright © 2018 Elsevier Ltd. All rights reserved.
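The dose-to-gray step can be sketched as a per-pixel mapping over the mesh-tally output. The paper determines its own relationship experimentally; the logarithmic form and the coefficients below are illustrative placeholders only (CR plates commonly show a log-linear response over their useful dynamic range):

```python
import math

def dose_to_gray_value(dose_mgy, a=1000.0, b=400.0, d_min=1e-4):
    """Map the absorbed dose in one phosphor-layer pixel to an image gray
    value. The coefficients a, b and the floor d_min are hypothetical; the
    study fits its own dose-to-gray relationship in a separate procedure."""
    return a + b * math.log10(max(dose_mgy, d_min) / d_min)

def render_image(dose_map):
    """Apply the conversion pixel by pixel to a mesh-tally dose map."""
    return [[dose_to_gray_value(d) for d in row] for row in dose_map]
```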
A new cubic phantom for PET/CT dosimetry: Experimental and Monte Carlo characterization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Belinato, Walmir; Silva, Rogerio M.V.; Souza, Divanizia N.
In recent years, positron emission tomography (PET) associated with multidetector computed tomography (MDCT) has become a widely disseminated diagnostic technique for evaluating various malignant tumors and other diseases. However, during PET/CT examinations, the doses of ionizing radiation received by the internal organs of patients may be substantial. To study the doses involved in PET/CT procedures, a new cubic phantom of overlapping acrylic plates was developed and characterized. This phantom has a deposit for the placement of the fluorine-18 fluoro-2-deoxy-D-glucose (18F-FDG) solution. There are also small holes near the faces for the insertion of optically stimulated luminescence dosimeters (OSLD). The holes for OSLD are positioned at different distances from the 18F-FDG deposit. The experimental results were obtained on two PET/CT devices operating with different parameters. Differences in the absorbed doses were observed in OSLD measurements due to the non-orthogonal positioning of the detectors inside the phantom. The phantom was also evaluated using Monte Carlo simulations with the MCNPX code. The phantom and the geometrical characteristics of the equipment were carefully modeled in the MCNPX code, in order to develop a new methodology for comparison of experimental and simulated results, as well as to allow the characterization of PET/CT equipment in Monte Carlo simulations. All results showed good agreement, indicating that this new phantom may be applied in such experiments. (authors)
Evaluation of SNS Beamline Shielding Configurations using MCNPX Accelerated by ADVANTG
DOE Office of Scientific and Technical Information (OSTI.GOV)
Risner, Joel M; Johnson, Seth R.; Remec, Igor
2015-01-01
Shielding analyses for the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory pose significant computational challenges, including highly anisotropic high-energy sources, a combination of deep-penetration shielding and an unshielded beamline, and a desire to obtain well-converged, nearly global solutions for mapping of predicted radiation fields. The majority of these analyses have been performed using MCNPX with manually generated variance reduction parameters (source biasing and cell-based splitting and Russian roulette) that were largely based on the analyst's insight into the problem specifics. Development of the variance reduction parameters required extensive analyst time and was often tailored to specific portions of the model phase space. We previously applied a developmental version of the ADVANTG code to an SNS beamline study to perform a hybrid deterministic/Monte Carlo analysis and showed that we could obtain nearly global Monte Carlo solutions with essentially uniform relative errors for mesh tallies that cover extensive portions of the model with typical voxel spacing of a few centimeters. The use of weight-window maps and consistent biased sources produced with the FW-CADIS methodology in ADVANTG allowed us to obtain these solutions using substantially less computer time than the previous cell-based splitting approach. While those results were promising, the process of using the developmental version of ADVANTG was somewhat laborious, requiring user-developed Python scripts to drive much of the analysis sequence. In addition, limitations imposed by the size of weight-window files in MCNPX necessitated the use of relatively coarse spatial and energy discretization for the deterministic Denovo calculations that we used to generate the variance reduction parameters. We recently applied the production version of ADVANTG to this beamline analysis, which substantially streamlined the analysis process.
We also tested importance-function collapsing (in space and energy) capabilities in ADVANTG. These changes, along with the support for parallel Denovo calculations in the current version of ADVANTG, give us the capability to improve the fidelity of the deterministic portion of the hybrid analysis sequence, obtain improved weight-window maps, and reduce both the analyst and computational time required for the analysis process.
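The cell-based splitting and Russian roulette mentioned above follow the classic weight-window game; a compact sketch of that game is shown below (the window bounds and survival-weight convention are generic, not the SNS model's actual parameters):

```python
import random

def apply_weight_window(weight, w_low, w_high, rng=random.random):
    """Classic weight-window game: a particle above the window is split into
    equal fragments (conserving total weight); a particle below plays Russian
    roulette toward the survival weight; one inside the window is untouched.
    Returns a list of zero or more resulting particle weights."""
    if weight > w_high:
        n = int(weight / w_high) + 1       # enough fragments to fit the window
        return [weight / n] * n
    w_survive = 0.5 * (w_low + w_high)     # generic survival-weight choice
    if weight < w_low:
        if rng() < weight / w_survive:     # survive with probability w/w_survive
            return [w_survive]
        return []                          # killed by roulette
    return [weight]
```

Splitting and roulette together keep the population of tracked particles roughly uniform in importance, which is what makes nearly global mesh tallies converge with uniform relative errors.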
Implementation of a small-angle scattering model in MCNPX for very cold neutron reflector studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grammer, Kyle B.; Gallmeier, Franz X.
Current neutron moderator media do not sufficiently moderate neutrons below the cold neutron regime into the very cold neutron (VCN) regime that is desirable for some physics applications. Nesvizhevsky et al. [1] have demonstrated that nanodiamond powder efficiently reflects VCN via small-angle scattering and suggest that this effect could be exploited to boost the neutron output of a VCN moderator. Simulation studies of nanoparticle reflectors are being pursued as part of the development of a VCN source option for the SNS second target station. We are expanding the MCNPX code by implementing an analytical small-angle scattering function [2], adjustable in scattering particle size, size distribution, and packing fraction, to supplement the currently existing scattering kernels. The analytical model and preliminary studies using MCNPX will be discussed.
CT-based MCNPX dose calculations for gynecology brachytherapy employing a Henschke applicator
NASA Astrophysics Data System (ADS)
Yu, Pei-Chieh; Nien, Hsin-Hua; Tung, Chuan-Jong; Lee, Hsing-Yi; Lee, Chung-Chi; Wu, Ching-Jung; Chao, Tsi-Chian
2017-11-01
The purpose of this study is to investigate the dose perturbation caused by the metal ovoid structures of a Henschke applicator using Monte Carlo simulation in a realistic phantom. The Henschke applicator has been widely used for gynecologic patients treated with brachytherapy in Taiwan. However, the commercial brachytherapy planning system (BPS) does not properly evaluate the dose perturbation caused by its metal ovoid structures. In this study, the Monte Carlo N-Particle Transport Code eXtended (MCNPX) was used to evaluate the brachytherapy dose distribution of a Henschke applicator embedded in a Plastic Water phantom and in a heterogeneous patient computed tomography (CT) phantom. The dose comparison between the MC simulations and film measurements for the Plastic Water phantom with the Henschke applicator showed good agreement. However, the MC dose with the Henschke applicator deviated significantly (-80.6%±7.5%) from that without the applicator. Furthermore, the dose in the heterogeneous patient CT phantom showed a discrepancy of 0 to -26.7% (-8.9%±13.8%) relative to the Plastic Water phantom CT geometry with the Henschke applicator. This study demonstrates that the metal ovoid structures of the Henschke applicator cannot be disregarded in brachytherapy dose calculation.
Using computational modeling to compare X-ray tube Practical Peak Voltage for Dental Radiology
NASA Astrophysics Data System (ADS)
Holanda Cassiano, Deisemar; Arruda Correa, Samanda Cristine; de Souza, Edmilson Monteiro; da Silva, Ademir Xaxier; Pereira Peixoto, José Guilherme; Tadeu Lopes, Ricardo
2014-02-01
The Practical Peak Voltage (PPV) has been adopted to measure the voltage applied to an X-ray tube. The PPV was recommended by the relevant IEC document and accepted and published in the TRS No. 457 code of practice. The PPV is defined for and applicable to all waveforms and is related to the spectral distribution of the X-rays and to the properties of the image. The calibration of X-ray tubes was performed using the MCNPX Monte Carlo code. An X-ray tube for dental radiology (operated from a single-phase power supply) and an X-ray tube used as a reference (supplied from a constant-potential power supply) were used in simulations across the energy range of interest, 40 kV to 100 kV. The results indicated a linear relationship between the tubes involved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iwamoto, Yosuke (JAERI, Kyoto); Taniguchi, Shingo
Neutron energy spectra at 0° produced from stopping-length graphite, aluminum, iron, and lead targets bombarded with 140, 250, and 350 MeV protons were measured at the neutron TOF course at RCNP, Osaka University. The neutron energy spectra were obtained using the time-of-flight technique in the energy range from 10 MeV up to the incident proton energy. For comparison with the experimental results, Monte Carlo calculations with the PHITS and MCNPX codes were performed using the JENDL-HE and LA150 evaluated nuclear data files, the ISOBAR model implemented in PHITS, and the LAHET code in MCNPX. It was found that these calculated results at 0° generally agreed with the experimental results in the energy range above 20 MeV, except for graphite at 250 and 350 MeV.
Overview of Recent Radiation Transport Code Comparisons for Space Applications
NASA Astrophysics Data System (ADS)
Townsend, Lawrence
Recent advances in radiation transport code development for space applications have resulted in various comparisons of code predictions for a variety of scenarios and codes. Comparisons among both Monte Carlo and deterministic codes have been made and published by various groups and collaborations, including comparisons involving, but not limited to, HZETRN, HETC-HEDS, FLUKA, GEANT, PHITS, and MCNPX. In this work, an overview of recent code prediction inter-comparisons, including comparisons to available experimental data, is presented and discussed, with emphasis on the areas of agreement and disagreement among the various code predictions and published data.
Nedaie, Hassan Ali; Darestani, Hoda; Banaee, Nooshin; Shagholi, Negin; Mohammadi, Kheirollah; Shahvar, Arjang; Bayat, Esmaeel
2014-01-01
High-energy linacs produce secondary particles such as neutrons (photoneutron production). These neutrons play an important role during treatment with high-energy photons, both for radiation protection and for dose escalation. In this work, neutron dose equivalents of 18 MV Varian and Elekta accelerators are measured with thermoluminescent dosimeter TLD600 and TLD700 detectors and compared with Monte Carlo calculations. For neutron and photon dose discrimination, the TLDs were first calibrated separately with gamma and neutron doses. Gamma calibration was carried out in two procedures: with a standard 60Co source and with the 18 MV linac photon beam. For neutron calibration with a 241Am-Be source, irradiations were performed over several different time intervals. The Varian and Elekta linac heads and the phantom were simulated with the MCNPX code (v. 2.5). Neutron dose equivalent was calculated on the central axis, at the phantom surface and at depths of 1, 2, 3.3, 4, 5, and 6 cm. The maximum photoneutron dose equivalents calculated by the MCNPX code were 7.06 and 2.37 mSv·Gy⁻¹ for the Varian and Elekta accelerators, respectively, in comparison with 50 and 44 mSv·Gy⁻¹ obtained with the TLDs. All the results showed more photoneutron production in the Varian accelerator than in the Elekta. According to the results, it seems that TLD600 and TLD700 pairs are not suitable dosimeters for neutron dosimetry inside the linac field due to the high photon flux, while the MCNPX code is an appropriate alternative for studying photoneutron production. PMID:24600167
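The TLD600/TLD700 pair method rests on ⁷LiF being nearly insensitive to thermal neutrons: the TLD700 reading fixes the gamma dose, and the excess TLD600 signal yields the neutron dose. A minimal sketch with hypothetical calibration-factor names (the paper's actual calibration data are not reproduced here):

```python
def neutron_gamma_doses(m600, m700, k_gamma_600, k_gamma_700, k_n_600):
    """Separate neutron and gamma dose components from a TLD600/TLD700 pair.

    m600, m700          : raw TL readings of the two chips
    k_gamma_600/700     : gamma calibration factors (reading per unit dose)
    k_n_600             : neutron calibration factor of TLD600
    TLD700 (7LiF) sees essentially only gamma, so its reading gives the gamma
    dose; the neutron dose is the TLD600 (6LiF) excess after subtracting the
    gamma contribution. Variable names here are illustrative only."""
    d_gamma = m700 / k_gamma_700
    d_neutron = (m600 - k_gamma_600 * d_gamma) / k_n_600
    return d_neutron, d_gamma
```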
A New Approach in Coal Mine Exploration Using Cosmic Ray Muons
NASA Astrophysics Data System (ADS)
Darijani, Reza; Negarestani, Ali; Rezaie, Mohammad Reza; Fatemi, Syed Jalil; Akhond, Ahmad
2016-08-01
Muon radiography is a technique that uses cosmic ray muons to image the interior of large-scale geological structures, similar in principle to X-ray radiography; muon absorption in matter is the most important parameter. The main aim of this study is the simulation of muon radiography for the exploration of mines. Thus, the production source, tracking, and detection of cosmic ray muons were simulated with the MCNPX code. For this purpose, the input data of the source card in the MCNPX code were extracted from the muon energy spectrum at sea level. In addition, the other input data, such as the average density and thickness of the layers used in this code, are measured data from the Pabdana (Kerman, Iran) coal mines. The average thickness of the layers in these coal mines is 2 to 4 m and their average density is 1.3 g/cm³. To increase the spatial resolution, a detector was placed inside the mountain. The results indicated that, using this approach, layers with a minimum thickness of about 2.5 m can be identified.
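The role of muon absorption can be illustrated with the standard range-energy relation dE/dX = -(a + bE): each column of opacity X = ρ·L sets a minimum surface energy a muon needs to reach the detector. The a and b values below are generic standard-rock numbers used only for illustration, not parameters from the study:

```python
import math

A_LOSS = 2.0e-3   # GeV per g/cm^2, ionization term (generic standard-rock value)
B_LOSS = 4.0e-6   # per g/cm^2, radiative-loss term (generic standard-rock value)

def min_muon_energy(opacity_g_cm2):
    """Minimum surface energy (GeV) a muon needs to cross a column of opacity
    X = density * path length, from integrating dE/dX = -(A_LOSS + B_LOSS*E):
    E_min = (a/b) * (exp(b*X) - 1)."""
    return (A_LOSS / B_LOSS) * (math.exp(B_LOSS * opacity_g_cm2) - 1.0)
```

Thicker or denser overburden raises the energy cutoff, so fewer muons arrive; the deficit of counts behind a region is what encodes its density in the radiograph.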
The COsmic-ray Soil Moisture Interaction Code (COSMIC) for use in data assimilation
NASA Astrophysics Data System (ADS)
Shuttleworth, J.; Rosolem, R.; Zreda, M.; Franz, T.
2013-08-01
Soil moisture status in land surface models (LSMs) can be updated by assimilating cosmic-ray neutron intensity measured in air above the surface. This requires a fast and accurate model to calculate the neutron intensity from the profiles of soil moisture modeled by the LSM. The existing Monte Carlo N-Particle eXtended (MCNPX) model is sufficiently accurate but too slow to be practical in the context of data assimilation. Consequently an alternative and efficient model is needed which can be calibrated accurately to reproduce the calculations made by MCNPX and used to substitute for MCNPX during data assimilation. This paper describes the construction and calibration of such a model, COsmic-ray Soil Moisture Interaction Code (COSMIC), which is simple, physically based and analytic, and which, because it runs at least 50 000 times faster than MCNPX, is appropriate in data assimilation applications. The model includes simple descriptions of (a) degradation of the incoming high-energy neutron flux with soil depth, (b) creation of fast neutrons at each depth in the soil, and (c) scattering of the resulting fast neutrons before they reach the soil surface, all of which processes may have parameterized dependency on the chemistry and moisture content of the soil. The site-to-site variability in the parameters used in COSMIC is explored for 42 sample sites in the COsmic-ray Soil Moisture Observing System (COSMOS), and the comparative performance of COSMIC relative to MCNPX when applied to represent interactions between cosmic-ray neutrons and moist soil is explored. At an example site in Arizona, fast-neutron counts calculated by COSMIC from the average soil moisture profile given by an independent network of point measurements in the COSMOS probe footprint are similar to the fast-neutron intensity measured by the COSMOS probe. 
It was demonstrated that, when used within a data assimilation framework to assimilate COSMOS probe counts into the Noah land surface model at the Santa Rita Experimental Range field site, the calibrated COSMIC model provided an effective mechanism for translating model-calculated soil moisture profiles into aboveground fast-neutron count when applied with two radically different approaches used to remove the bias between data and model.
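The three processes (a)-(c) that COSMIC parameterizes can be caricatured in a few lines; the attenuation lengths and the moisture dependence below are illustrative placeholders, not COSMIC's calibrated site parameters:

```python
import math

def fast_neutron_count(theta, n0=1.0, dz=0.01, zmax=3.0,
                       l_hi_dry=1.6, l_fast_dry=1.0, k=2.5):
    """Toy version of the three-process structure described above:
    (a) the incoming high-energy flux decays with soil depth,
    (b) fast neutrons are created in proportion to that flux at each depth,
    (c) the fast neutrons attenuate on the way back to the surface.
    theta is volumetric soil moisture; l_hi_dry, l_fast_dry and the moisture
    factor k are made-up numbers chosen only to show the shape of the model."""
    # higher moisture shortens both effective attenuation lengths
    l_hi = l_hi_dry / (1.0 + k * theta)
    l_fast = l_fast_dry / (1.0 + k * theta)
    total = 0.0
    z = 0.5 * dz
    while z < zmax:
        production = math.exp(-z / l_hi)   # processes (a) + (b)
        escape = math.exp(-z / l_fast)     # process (c)
        total += production * escape * dz
        z += dz
    return n0 * total
```

Because every term is a closed-form exponential, a model of this shape evaluates in microseconds, which is the property that makes COSMIC usable inside a data assimilation loop where MCNPX is not.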
Mendes, Bruno Melo; Trindade, Bruno Machado; Fonseca, Telma Cristina Ferreira; de Campos, Tarcisio Passos Ribeiro
2017-12-01
The aim of this work was to simulate a 6 MV conventional breast 3D conformal radiation therapy (3D-CRT) with physical wedges (50 Gy/25#) in the left breast, calculate the mean absorbed dose in the body organs using robust models and computational tools, and estimate the secondary cancer-incidence risk for the Brazilian population. The VW female phantom was used in the simulations. The planning target volume (PTV) was defined in the left breast. The 6 MV parallel-opposed-fields breast radiotherapy (RT) protocol was simulated with the MCNPX code. The absorbed doses were evaluated in all organs. The secondary cancer-incidence risk induced by radiotherapy was calculated for different age groups according to the BEIR VII methodology. RT quality indexes indicated that the protocol was properly simulated. Significant absorbed doses were observed in the red bone marrow, RBM (0.8 Gy), and stomach (0.6 Gy). The contralateral breast presented the highest risk of secondary cancer incidence, followed by leukaemia, lung and stomach. The risk of secondary cancer incidence from breast RT, for the Brazilian population, ranged between 2.2-1.7% and 0.6-0.4%. RBM and stomach, usually not considered as OAR, presented high secondary cancer incidence risks of 0.5-0.3% and 0.4-0.1%, respectively. This study may be helpful for breast-RT risk/benefit assessment. Advances in knowledge: MCNPX dosimetry was able to provide the scattered radiation and dose for all body organs in conventional breast RT. A relevant risk of up to 2.2% of radiotherapy-induced cancer was found, considering the whole-thorax organs and Brazilian cancer incidence.
NASA Technical Reports Server (NTRS)
Mashnik, S. G.; Gudima, K. K.; Sierk, A. J.; Moskalenko, I. V.
2002-01-01
Space radiation shield applications and studies of cosmic ray propagation in the Galaxy require reliable cross sections to calculate spectra of secondary particles and yields of the isotopes produced in nuclear reactions induced both by particles and by nuclei at energies from threshold to hundreds of GeV per nucleon. Since the data often exist only in a very limited energy range, or sometimes not at all, the only way to estimate the production cross sections is to use theoretical models and codes. Recently, we have developed improved versions of the Cascade-Exciton Model (CEM) of nuclear reactions: the codes CEM97 and CEM2k for the description of particle-nucleus reactions at energies up to about 5 GeV. In addition, we have developed a LANL version of the Quark-Gluon String Model (LAQGSM) to describe reactions induced both by particles and by nuclei at energies up to hundreds of GeV per nucleon. We have tested and benchmarked the CEM and LAQGSM codes against a large variety of experimental data and have compared their results with predictions by other currently available models and codes. Our benchmarks show that the CEM and LAQGSM codes have predictive power no worse than other currently used codes and describe many reactions better; therefore both codes can be used as reliable event generators for space radiation shield and cosmic ray propagation applications. The CEM2k code is being incorporated into the transport code MCNPX (and several other transport codes), and we plan to incorporate LAQGSM into MCNPX in the near future. Here, we present the current status of the CEM2k and LAQGSM codes, and show results and applications for studies of cosmic ray propagation in the Galaxy.
Accelerator shield design of KIPT neutron source facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhong, Z.; Gohar, Y.
Argonne National Laboratory (ANL) of the United States and the Kharkov Institute of Physics and Technology (KIPT) of Ukraine have been collaborating on the design development of a neutron source facility at KIPT utilizing an electron-accelerator-driven subcritical assembly. The electron beam power is 100 kW, using 100 MeV electrons. The facility is designed to perform basic and applied nuclear research, produce medical isotopes, and train young nuclear specialists. The biological shield of the accelerator building is designed to reduce the biological dose to less than 0.5 mrem/hr during operation. The main sources of the biological dose are the photons and neutrons generated by interactions of electrons leaking from the electron gun and accelerator sections with the surrounding concrete and accelerator materials. The Monte Carlo code MCNPX serves as the calculation tool for the shield design, owing to its capability to handle coupled electron, photon, and neutron transport problems. The direct photon dose can be tallied in an MCNPX calculation starting with the leaked electrons. However, it is difficult to accurately tally the neutron dose directly from the leaked electrons: the neutron yield per electron from interactions with the surrounding components is less than 0.01 neutron per electron. This causes difficulties for Monte Carlo analyses and consumes tremendous computation time to tally, with acceptable statistics, the neutron dose outside the shield boundary. To avoid these difficulties, the SOURCE and TALLYX user subroutines of MCNPX were developed for the study. The generated neutrons are banked, together with all related parameters, for a subsequent MCNPX calculation to obtain the neutron and secondary photon doses. The weight-windows variance reduction technique is utilized for both the neutron and photon dose calculations. Two shielding materials, i.e., heavy concrete and ordinary concrete, were considered for the shield design.
The main goal is to maintain the total dose outside the shield boundary at less than 0.5 mrem/hr. The shield configuration and parameters of the accelerator building have been determined and are presented in this paper. (authors)
Anigstein, Robert; Erdman, Michael C.; Ansari, Armin
2017-01-01
The detonation of a radiological dispersion device or other radiological incidents could result in the dispersion of radioactive materials and intakes of radionuclides by affected individuals. Transportable radiation monitoring instruments could be used to measure photon radiation from radionuclides in the body for triaging individuals and assigning priorities to their bioassay samples for further assessments. Computer simulations and experimental measurements are required for these instruments to be used for assessing intakes of radionuclides. Count rates from calibrated sources of 60Co, 137Cs, and 241Am were measured on three instruments: a survey meter containing a 2.54 × 2.54-cm NaI(Tl) crystal, a thyroid probe using a 5.08 × 5.08-cm NaI(Tl) crystal, and a portal monitor incorporating two 3.81 × 7.62 × 182.9-cm polyvinyltoluene plastic scintillators. Computer models of the instruments and of the calibration sources were constructed, using engineering drawings and other data provided by the manufacturers. Count rates on the instruments were simulated using the Monte Carlo radiation transport code MCNPX. The computer simulations were within 16% of the measured count rates for all 20 measurements without using empirical radionuclide-dependent scaling factors, as reported by others. The weighted root-mean-square deviations (differences between measured and simulated count rates, added in quadrature and weighted by the variance of the difference) were 10.9% for the survey meter, 4.2% for the thyroid probe, and 0.9% for the portal monitor. These results validate earlier MCNPX models of these instruments that were used to develop calibration factors that enable these instruments to be used for assessing intakes and committed doses from several gamma-emitting radionuclides. PMID:27115229
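The quoted figure of merit, differences added in quadrature and weighted by the variance of each difference, is the usual inverse-variance-weighted RMS. A sketch follows; the relative-difference convention is an assumption, and the paper may normalize differently:

```python
import math

def weighted_rms_deviation(measured, simulated, sigmas):
    """Weighted root-mean-square relative deviation between measured and
    simulated count rates: each squared relative difference is weighted by
    the inverse variance of that difference."""
    num = 0.0
    den = 0.0
    for m, s, sig in zip(measured, simulated, sigmas):
        w = 1.0 / sig ** 2                 # inverse-variance weight
        num += w * ((m - s) / m) ** 2      # squared relative difference
        den += w
    return math.sqrt(num / den)
```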
An Eye Model for Computational Dosimetry Using A Multi-Scale Voxel Phantom
NASA Astrophysics Data System (ADS)
Caracappa, Peter F.; Rhodes, Ashley; Fiedler, Derek
2014-06-01
The lens of the eye is a radiosensitive tissue, with cataract formation being the major concern. Recently reduced recommended dose limits for the lens of the eye have made understanding the dose to this tissue increasingly important. Due to memory limitations, the voxel resolution of computational phantoms used for radiation dose calculations is too coarse to accurately represent the dimensions of the eye. A revised eye model is constructed using physiological data for the dimensions of the radiosensitive tissues and is then transformed into a high-resolution voxel model. This eye model is combined with an existing set of whole-body models to form a multi-scale voxel phantom, which is used with the MCNPX code to calculate radiation dose from various exposure types. This phantom provides an accurate representation of radiation transport through the structures of the eye. Two alternative methods of including a high-resolution eye model within an existing whole-body model are developed, and the accuracy and performance of each method is compared against existing computational phantoms.
SU-E-T-493: Accelerated Monte Carlo Methods for Photon Dosimetry Using a Dual-GPU System and CUDA.
Liu, T; Ding, A; Xu, X
2012-06-01
To develop a Graphics Processing Unit (GPU) based Monte Carlo (MC) code that accelerates dose calculations on a dual-GPU system. We simulated a clinical case of prostate cancer treatment. A voxelized abdomen phantom derived from 120 CT slices, containing 218×126×60 voxels, was used, and a GE LightSpeed 16-MDCT scanner was modeled. A CPU version of the MC code was first developed in C++ and tested on an Intel Xeon X5660 2.8 GHz CPU; it was then translated into a GPU version using CUDA C 4.1 and run on a dual Tesla M2090 GPU system. The code featured automatic assignment of simulation tasks to multiple GPUs, as well as accurate calculation of energy- and material-dependent cross sections. Double-precision floating-point format was used for accuracy. Doses to the rectum, prostate, bladder and femoral heads were calculated. When running on a single GPU, the MC GPU code was found to be 19 times faster than the CPU code and 42 times faster than MCNPX. These speedup factors were doubled on the dual-GPU system. The dose results were benchmarked against MCNPX, and a maximum difference of 1% was observed when the relative error was kept below 0.1%. A GPU-based MC code was developed for dose calculations using detailed patient and CT scanner models. Both efficiency and accuracy were ensured in this code, and its scalability was confirmed on the dual-GPU system. © 2012 American Association of Physicists in Medicine.
Topics in computational physics
NASA Astrophysics Data System (ADS)
Monville, Maura Edelweiss
Computational physics spans a broad range of applied fields extending beyond the borders of traditional physics tracks. Demonstrated flexibility, and the capability to switch to a new project and pick up the basics of the new field quickly, are among the essential requirements for a computational physicist. In line with the above-mentioned prerequisites, my thesis describes the development and results of two computational projects belonging to two different applied science areas. The first project is a materials science application. It is a prescription for an innovative nano-fabrication technique built out of two other known techniques. The preliminary results of the simulation of this novel nano-patterning fabrication method show an average improvement, roughly equal to 18%, with respect to the single techniques it draws on. The second project is a homeland security application aimed at preventing smuggling of nuclear material at ports of entry. It is concerned with the simulation of an active material interrogation system based on the analysis of induced photo-nuclear reactions. This project consists of a preliminary evaluation of the photo-fission implementation in the more robust radiation transport Monte Carlo codes, followed by the customization and extension of MCNPX, a Monte Carlo code developed at Los Alamos National Laboratory, and MCNP-PoliMi. The final stage of the project consists of testing the interrogation system against some real-world scenarios, for the purpose of determining the system's reliability, material discrimination power, and limitations.
Coupled Neutron Transport for HZETRN
NASA Technical Reports Server (NTRS)
Slaba, Tony C.; Blattnig, Steve R.
2009-01-01
Exposure estimates inside space vehicles, surface habitats, and high-altitude aircraft exposed to space radiation are highly influenced by secondary neutron production. The deterministic transport code HZETRN has been identified as a reliable and efficient tool for such studies, but improvements to the underlying transport models and numerical methods are still necessary. In this paper, the forward-backward (FB) and directionally coupled forward-backward (DC) neutron transport models are derived, numerical methods for the FB model are reviewed, and a computationally efficient numerical solution is presented for the DC model. Both models are compared to the Monte Carlo codes HETC-HEDS, FLUKA, and MCNPX, and the DC model is shown to agree closely with the Monte Carlo results. Finally, it is found in the development of either model that the decoupling of low energy neutrons from the light particle transport procedure adversely affects low energy light ion fluence spectra and exposure quantities. A first order correction is presented to resolve the problem, and it is shown to be both accurate and efficient.
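The forward-backward idea can be illustrated with a toy 1-D two-stream slab model (a sketch only; the FB/DC derivations in the paper are far more detailed, and the cross sections and the backward-transfer fraction below are invented for illustration):

```python
def forward_backward_slab(sigma_t, sigma_s, f_back, length, n=1000, tol=1e-10):
    """Toy 1-D two-stream ('forward-backward') slab model.

    sigma_t -- total macroscopic cross section (1/cm)
    sigma_s -- scattering cross section (1/cm)
    f_back  -- fraction of scatters transferred to the opposite stream
    A unit forward flux enters at x = 0; vacuum boundary at x = length.
    Forward-scattered particles stay in their stream, so each stream
    loses absorption plus the backward-transfer term and gains the
    transfer from the opposite stream.
    """
    dx = length / n
    absorb = sigma_t - sigma_s
    sb = sigma_s * f_back            # stream-to-stream transfer
    loss = absorb + sb
    F = [0.0] * (n + 1)
    B = [0.0] * (n + 1)
    F[0] = 1.0
    for _ in range(500):
        for i in range(n):           # march forward stream, implicit loss
            F[i + 1] = (F[i] + dx * sb * B[i + 1]) / (1.0 + dx * loss)
        B_old = B[:]
        for i in range(n - 1, -1, -1):   # march backward stream from x = L
            B[i] = (B[i + 1] + dx * sb * F[i]) / (1.0 + dx * loss)
        if max(abs(b - bo) for b, bo in zip(B, B_old)) < tol:
            break
    return F, B
```

With `f_back = 0` the backward stream vanishes and the forward stream decays as simple exponential attenuation; a nonzero backward fraction couples the two sweeps, which is the coupling the DC model treats rigorously.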
Zaker, Neda; Zehtabian, Mehdi; Sina, Sedigheh; Koontz, Craig; Meigooni, Ali S
2016-03-08
Monte Carlo simulations are widely used for calculation of the dosimetric parameters of brachytherapy sources. MCNP4C2, MCNP5, MCNPX, EGS4, EGSnrc, PTRAN, and GEANT4 are among the most commonly used codes in this field. Each of these codes utilizes a cross-sectional library for the purpose of simulating different elements and materials with complex chemical compositions. The accuracies of the final outcomes of these simulations are very sensitive to the accuracies of the cross-sectional libraries. Several investigators have shown that inaccuracies of some of the cross section files have led to errors in 125I and 103Pd parameters. The purpose of this study is to compare the dosimetric parameters of sample brachytherapy sources, calculated with three different versions of the MCNP code - MCNP4C, MCNP5, and MCNPX. In these simulations for each source type, the source and phantom geometries, as well as the number of the photons, were kept identical, thus eliminating the possible uncertainties. The results of these investigations indicate that for low-energy sources such as 125I and 103Pd there are discrepancies in gL(r) values. Discrepancies up to 21.7% and 28% are observed between MCNP4C and other codes at a distance of 6 cm for 103Pd and 10 cm for 125I from the source, respectively. However, for higher energy sources, the discrepancies in gL(r) values are less than 1.1% for 192Ir and less than 1.2% for 137Cs between the three codes.
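The radial dose function gL(r) whose discrepancies are reported above is extracted from transverse-axis dose rates via the TG-43 line-source geometry function; a minimal sketch, with an assumed seed active length and a synthetic dose-rate shape used only as a sanity check:

```python
import math

def G_L(r, L):
    """TG-43 line-source geometry function on the transverse axis
    (theta = 90 degrees): beta / (L * r), beta being the angle the
    active length L subtends at the calculation point."""
    beta = 2.0 * math.atan(L / (2.0 * r))
    return beta / (L * r)

def g_L(dose_rate, r, L, r0=1.0):
    """Radial dose function from transverse-axis dose rates.

    dose_rate -- callable returning the tallied dose rate at (r, 90 deg)
    r0        -- TG-43 reference distance, 1 cm
    """
    return (dose_rate(r) * G_L(r0, L)) / (dose_rate(r0) * G_L(r, L))

# sanity check: a 'dose rate' shaped exactly like the geometry function
# must give g_L = 1 at every distance once geometry is divided out
L_seed = 0.35                      # cm, an assumed seed active length
d = lambda r: G_L(r, L_seed)
```

Because gL(r) is a ratio of tallied dose rates, cross-section errors that scale both numerator and denominator equally cancel; the discrepancies quoted above arise where attenuation at large r is mispredicted.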
Electron Accelerator Shielding Design of KIPT Neutron Source Facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhong, Zhaopeng; Gohar, Yousry
The Argonne National Laboratory of the United States and the Kharkov Institute of Physics and Technology of the Ukraine have been collaborating on the design, development and construction of a neutron source facility at Kharkov Institute of Physics and Technology utilizing an electron-accelerator-driven subcritical assembly. The electron beam power is 100 kW using 100-MeV electrons. The facility was designed to perform basic and applied nuclear research, produce medical isotopes, and train nuclear specialists. The biological shield of the accelerator building was designed to reduce the biological dose to less than 5.0e-03 mSv/h during operation. The main source of the biological dose for the accelerator building is the photons and neutrons generated from different interactions of electrons leaked from the electron gun and the accelerator sections with the surrounding components and materials. The Monte Carlo N-Particle eXtended code (MCNPX) was used for the shielding calculations because of its capability to perform electron-, photon-, and neutron-coupled transport simulations. The photon dose was tallied using the MCNPX calculation, starting with the leaked electrons. However, it is difficult to accurately tally the neutron dose directly from the leaked electrons. The neutron yield per electron from the interactions with the surrounding components is very small, approximately 0.01 neutrons per 100-MeV electron and even smaller for lower-energy electrons. This causes difficulties for the Monte Carlo analyses and consumes tremendous computation resources for tallying the neutron dose outside the shield boundary with an acceptable accuracy. To avoid these difficulties, the SOURCE and TALLYX user subroutines of MCNPX were utilized for this study. The generated neutrons were banked, together with all related parameters, for a subsequent MCNPX calculation to obtain the neutron dose.
The weight-windows variance reduction technique was also utilized for both neutron and photon dose calculations. Two shielding materials, heavy concrete and ordinary concrete, were considered for the shield design. The main goal is to maintain the total dose outside the shield boundary below 5.0e-03 mSv/h during operation. The shield configuration and parameters of the accelerator building were determined and are presented in this paper. Copyright (C) 2016, Published by Elsevier Korea LLC on behalf of Korean Nuclear Society.
Neutron displacement cross-sections for tantalum and tungsten at energies up to 1 GeV
NASA Astrophysics Data System (ADS)
Broeders, C. H. M.; Konobeyev, A. Yu.; Villagrasa, C.
2005-06-01
The neutron displacement cross-section has been evaluated for tantalum and tungsten at energies from 10^-5 eV up to 1 GeV. The nuclear optical model and the intranuclear cascade model, combined with pre-equilibrium and evaporation models, were used for the calculations. The number of defects produced by recoil nuclei in the materials was calculated with the Norgett-Robinson-Torrens model and with an approach combining calculations using the binary collision approximation model and the results of molecular dynamics simulation. The numerical calculations were done using the NJOY code, the ECIS96 code, the MCNPX code and the IOTA code.
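The Norgett-Robinson-Torrens (NRT) defect count mentioned above has a simple closed form; a sketch, with a displacement threshold chosen for illustration (90 eV is a commonly quoted value for tungsten, an assumption here):

```python
def nrt_defects(t_dam_ev, e_d_ev=90.0):
    """Norgett-Robinson-Torrens estimate of stable Frenkel pairs.

    t_dam_ev -- damage energy of the recoil (eV), i.e. the recoil energy
                remaining after electronic losses (Lindhard partition)
    e_d_ev   -- displacement threshold energy (assumed value)
    """
    if t_dam_ev < e_d_ev:
        return 0.0                       # too little energy to displace
    if t_dam_ev < 2.0 * e_d_ev / 0.8:
        return 1.0                       # single-displacement regime
    return 0.8 * t_dam_ev / (2.0 * e_d_ev)
```

The displacement cross-section is then the defect count folded with the recoil spectrum produced by the transport calculation.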
Electronics Devices and Materials
2008-03-17
NASA Technical Reports Server (NTRS)
Mertens, Christopher J.; Moyers, Michael F.; Walker, Steven A.; Tweed, John
2010-01-01
Recent developments in NASA's deterministic High charge (Z) and Energy TRaNsport (HZETRN) code have included lateral broadening of primary ion beams due to small-angle multiple Coulomb scattering, and coupling of the ion-nuclear scattering interactions with energy loss and straggling. This new version of HZETRN is based on Green function methods, called GRNTRN, and is suitable for modeling transport with both space environment and laboratory boundary conditions. Multiple scattering processes are a necessary extension to GRNTRN in order to accurately model ion beam experiments, to simulate the physical and biological-effective radiation dose, and to develop new methods and strategies for light ion radiation therapy. In this paper we compare GRNTRN simulations of proton lateral broadening distributions with beam measurements taken at Loma Linda University Proton Therapy Facility. The simulated and measured lateral broadening distributions are compared for a 250 MeV proton beam on aluminum, polyethylene, polystyrene, bone substitute, iron, and lead target materials. The GRNTRN results are also compared to simulations from the Monte Carlo MCNPX code for the same projectile-target combinations described above.
Digital pile-up rejection for plutonium experiments with solution-grown stilbene
NASA Astrophysics Data System (ADS)
Bourne, M. M.; Clarke, S. D.; Paff, M.; DiFulvio, A.; Norsworthy, M.; Pozzi, S. A.
2017-01-01
A solution-grown stilbene detector was used in several experiments with plutonium samples, including plutonium oxide, mixed oxide, and plutonium metal. Neutrons from different reactions and plutonium isotopes are accompanied by numerous gamma rays, especially the 59-keV gamma ray of 241Am. Identifying neutrons correctly is important for nuclear nonproliferation applications and makes neutron/gamma discrimination and pile-up rejection necessary. Each experimental dataset is presented with and without pile-up filtering using a previously developed algorithm. The experiments were simulated using MCNPX-PoliMi, a Monte Carlo code designed to accurately model scintillation detector response. Collision output from MCNPX-PoliMi was processed using the specialized MPPost post-processing code to convert neutron energy depositions event-by-event into light pulses. The model was compared to experimental data after pulse-shape discrimination identified waveforms as gamma-ray or neutron interactions. We show that the use of the digital pile-up rejection algorithm allows for accurate neutron counting with stilbene to within 2%, even when no lead shielding is used.
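Charge-integration pulse-shape discrimination of the kind used here to separate neutron and gamma-ray waveforms can be sketched as follows (the decay constants, gate position and threshold below are synthetic; real gates are calibrated per detector and digitizer):

```python
import math

def tail_to_total(pulse, gate_split, baseline=0.0):
    """Charge-integration PSD figure: tail integral over total integral.

    pulse      -- one digitized waveform (list of samples)
    gate_split -- sample index separating the fast gate from the tail gate
    """
    samples = [s - baseline for s in pulse]
    total = sum(samples)
    tail = sum(samples[gate_split:])
    return tail / total if total > 0 else 0.0

def classify(pulse, gate_split, threshold=0.18):
    """Neutron pulses in organic scintillators carry more delayed light,
    so they sit at higher tail-to-total ratios than gamma rays.
    The threshold here is illustrative only."""
    return "neutron" if tail_to_total(pulse, gate_split) > threshold else "gamma"

# synthetic waveforms: a fast-only 'gamma' and a 'neutron' with a slow tail
gamma_pulse = [math.exp(-t / 5.0) for t in range(40)]
neutron_pulse = [0.7 * math.exp(-t / 5.0) + 0.3 * math.exp(-t / 30.0)
                 for t in range(40)]
```

Pile-up rejection operates upstream of this step, discarding waveforms whose shape indicates two overlapping events before the ratio is computed.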
Pölz, Stefan; Laubersheimer, Sven; Eberhardt, Jakob S; Harrendorf, Marco A; Keck, Thomas; Benzler, Andreas; Breustedt, Bastian
2013-08-21
The basic idea of Voxel2MCNP is to provide a framework supporting users in modeling radiation transport scenarios using voxel phantoms and other geometric models, generating corresponding input for the Monte Carlo code MCNPX, and evaluating simulation output. Applications at Karlsruhe Institute of Technology are primarily whole and partial body counter calibration and calculation of dose conversion coefficients. A new generic data model describing data related to radiation transport, including phantom and detector geometries and their properties, sources, tallies and materials, has been developed. It is modular and generally independent of the targeted Monte Carlo code. The data model has been implemented as an XML-based file format to facilitate data exchange, and integrated with Voxel2MCNP to provide a common interface for modeling, visualization, and evaluation of data. Also, extensions to allow compatibility with several file formats, such as ENSDF for nuclear structure properties and radioactive decay data, SimpleGeo for solid geometry modeling, ImageJ for voxel lattices, and MCNPX's MCTAL for simulation results, have been added. The framework is presented and discussed in this paper, and example workflows for body counter calibration and calculation of dose conversion coefficients are given to illustrate its application.
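A minimal, hypothetical example in the spirit of the XML-based data model described above (the element and attribute names are assumptions for illustration, not the actual Voxel2MCNP schema):

```python
import xml.etree.ElementTree as ET

# a hypothetical scenario file: phantom, source and tally captured as
# code-independent elements that a converter would turn into MCNPX input
scenario_xml = """
<scenario>
  <phantom name="voxel_torso" format="voxel" file="torso.raw">
    <lattice nx="10" ny="10" nz="20" pitch_cm="0.5"/>
  </phantom>
  <source nuclide="Cs-137" activity_bq="1e5"/>
  <tally type="pulse-height" detector="ge1"/>
</scenario>
"""

root = ET.fromstring(scenario_xml)
lattice = root.find("./phantom/lattice")
n_voxels = (int(lattice.get("nx")) * int(lattice.get("ny"))
            * int(lattice.get("nz")))
nuclide = root.find("source").get("nuclide")
# a converter would now walk this tree and emit the MCNPX input cards
```

Keeping the scenario description code-independent is what lets the same file drive input generation, visualization, and later evaluation of the MCTAL results.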
Interactive three-dimensional visualization and creation of geometries for Monte Carlo calculations
NASA Astrophysics Data System (ADS)
Theis, C.; Buchegger, K. H.; Brugger, M.; Forkel-Wirth, D.; Roesler, S.; Vincke, H.
2006-06-01
The implementation of three-dimensional geometries for the simulation of radiation transport problems is a very time-consuming task. Each particle transport code supplies its own scripting language and syntax for creating the geometries. All of them are based on the Constructive Solid Geometry scheme and require textual description. This makes geometry creation a tedious and error-prone task, which is especially hard to master for novice users. The Monte Carlo code FLUKA comes with built-in support for creating two-dimensional cross-sections through the geometry, and FLUKACAD, a custom-built converter to the commercial Computer Aided Design package AutoCAD, exists for 3D visualization. For other codes, like MCNPX, a couple of different tools are available, but they are often specifically tailored to the particle transport code and the approach it uses for implementing geometries. Complex constructive solid modeling usually requires very fast and expensive special-purpose hardware, which is not widely available. In this paper SimpleGeo is presented, an implementation of a generic, versatile interactive geometry modeler using off-the-shelf hardware. It runs on Windows, with a Linux version currently under preparation. This paper describes its functionality, which allows for rapid interactive visualization as well as generation of three-dimensional geometries, and also discusses critical issues regarding common CAD systems.
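The Constructive Solid Geometry scheme that all these codes share reduces to point-membership tests combined with Boolean operators; a toy sketch:

```python
def sphere(cx, cy, cz, r):
    """Membership test: is point p inside this sphere?"""
    return lambda p: ((p[0] - cx) ** 2 + (p[1] - cy) ** 2
                      + (p[2] - cz) ** 2) <= r * r

def union(a, b):
    return lambda p: a(p) or b(p)

def intersect(a, b):
    return lambda p: a(p) and b(p)

def subtract(a, b):
    return lambda p: a(p) and not b(p)

# a hollow spherical shell: outer sphere minus inner sphere
shell = subtract(sphere(0.0, 0.0, 0.0, 2.0), sphere(0.0, 0.0, 0.0, 1.0))
```

Transport codes express exactly these unions, intersections and complements textually, which is why an interactive modeler that renders the same Boolean tree is such a time-saver.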
TH-C-BRD-02: Analytical Modeling and Dose Calculation Method for Asymmetric Proton Pencil Beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gelover, E; Wang, D; Hill, P
2014-06-15
Purpose: A dynamic collimation system (DCS), which consists of two pairs of orthogonal trimmer blades driven by linear motors, has been proposed to decrease the lateral penumbra in pencil beam scanning proton therapy. The DCS reduces lateral penumbra by intercepting the proton pencil beam near the lateral boundary of the target in the beam's eye view. The resultant trimmed pencil beams are asymmetric and laterally shifted, and therefore existing pencil beam dose calculation algorithms are not capable of trimmed-beam dose calculations. This work develops a method to model and compute dose from trimmed pencil beams when using the DCS. Methods: MCNPX simulations were used to determine the dose distributions expected from various trimmer configurations of the DCS. Using these data, the lateral distribution of individual beamlets was modeled with a 2D asymmetric Gaussian function. The integral depth dose (IDD) of each configuration was also modeled by combining the IDD of an untrimmed pencil beam with a linear correction factor. The convolution of these two terms, along with the Highland approximation to account for lateral growth of the beam along the depth direction, allows a trimmed pencil beam dose distribution to be generated analytically. The algorithm was validated by computing dose for a single-energy-layer 5×5 cm² treatment field, defined by the trimmers, using both the proposed method and MCNPX beamlets. Results: The modeled asymmetric Gaussian lateral profiles along the principal axes match the MCNPX data very well (R² ≥ 0.95 at the depth of the Bragg peak). For the 5×5 cm² treatment plan created with both the modeled and MCNPX pencil beams, the passing rate of the 3D gamma test was 98% using a standard threshold of 3%/3 mm. Conclusion: An analytical method capable of accurately computing asymmetric pencil beam dose when using the DCS has been developed.
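The lateral model described above can be sketched as a separable product of an asymmetric 2D Gaussian and an IDD term (illustrative only: the per-side sigmas and IDD would be fit to the MC data, and the linear IDD correction and Highland depth broadening are omitted here):

```python
import math

def asym_gaussian_2d(x, y, sx_neg, sx_pos, sy_neg, sy_pos):
    """2D Gaussian whose sigma differs on each side of the beamlet axis,
    capturing the sharper falloff on the trimmed side of the beam."""
    sx = sx_pos if x >= 0 else sx_neg
    sy = sy_pos if y >= 0 else sy_neg
    return math.exp(-0.5 * (x / sx) ** 2) * math.exp(-0.5 * (y / sy) ** 2)

def beamlet_dose(x, y, depth, idd, sigmas):
    """Separable beamlet model: lateral profile times integral depth dose.

    idd    -- callable giving the (corrected) IDD at this depth
    sigmas -- (sx_neg, sx_pos, sy_neg, sy_pos), fit per trimmer setup
    """
    return idd(depth) * asym_gaussian_2d(x, y, *sigmas)
```

A treatment field is then the sum of such beamlets over all spot positions and trimmer configurations.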
A scintillator-based approach to monitor secondary neutron production during proton therapy.
Clarke, S D; Pryser, E; Wieger, B M; Pozzi, S A; Haelg, R A; Bashkirov, V A; Schulte, R W
2016-11-01
The primary objective of this work is to measure the secondary neutron field produced by an uncollimated proton pencil beam impinging on different tissue-equivalent phantom materials using organic scintillation detectors. Additionally, the Monte Carlo code MCNPX-PoliMi was used to simulate the detector response for comparison to the measured data. Comparison of the measured and simulated data will validate this approach for monitoring secondary neutron dose during proton therapy. Proton beams of 155 and 200 MeV were used to irradiate a variety of phantom materials, and secondary particles were detected using organic liquid scintillators. These detectors are sensitive to fast neutrons and gamma rays: pulse shape discrimination was used to classify each detected pulse as either a neutron or a gamma ray. The MCNPX-PoliMi code was used to simulate the secondary neutron field produced during proton irradiation of the same tissue-equivalent phantom materials. An experiment was performed at the Loma Linda University Medical Center proton therapy research beam line, and corresponding models were created using the MCNPX-PoliMi code. The authors' analysis showed agreement between the simulations and the measurements. The simulated detector response can be used to validate the simulations of neutron and gamma doses on a particular beam line with or without a phantom. The authors have demonstrated a method of monitoring the neutron component of the secondary radiation field produced by therapeutic protons. The method relies on direct detection of secondary neutrons and gamma rays using organic scintillation detectors. These detectors are sensitive over the full range of biologically relevant neutron energies above 0.5 MeV and allow effective discrimination between neutron and photon dose.
Because the detector system is portable, the described system could be used in the future to evaluate secondary neutron and gamma doses on various clinical beam lines for commissioning and prospective data collection in pediatric patients treated with proton therapy.
Huet, C; Lemosquet, A; Clairand, I; Rioual, J B; Franck, D; de Carlan, L; Aubineau-Lanièce, I; Bottollier-Depois, J F
2009-01-01
Estimating the dose distribution in a victim's body is a relevant indicator in assessing biological damage from exposure in the event of a radiological accident caused by an external source. This dose distribution can be assessed by physical dosimetric reconstruction methods, using experimental or numerical techniques. This article presents the laboratory-developed SESAME (Simulation of External Source Accident with MEdical images) tool, specific to dosimetric reconstruction of radiological accidents through numerical simulations that combine voxel geometry with the MCNP(X) Monte Carlo radiation-matter interaction code. The experimental validation of the tool using a photon field and its application to a radiological accident in Chile in December 2005 are also described.
Cellular dosimetry calculations for Strontium-90 using Monte Carlo code PENELOPE.
Hocine, Nora; Farlay, Delphine; Boivin, Georges; Franck, Didier; Agarande, Michelle
2014-11-01
To improve risk assessments associated with chronic exposure to Strontium-90 (Sr-90), for both the environment and human health, it is necessary to know the energy distribution in specific cells or tissues. Monte Carlo (MC) simulation codes are extremely useful tools for calculating deposited energy. The present work focused on the validation of the MC code PENetration and Energy LOss of Positrons and Electrons (PENELOPE) and the assessment of the dose distribution to bone marrow cells from a point Sr-90 source located within the cortical bone. S-values (absorbed dose per unit cumulated activity) were calculated with both PENELOPE and Monte Carlo N-Particle eXtended (MCNPX). The cytoplasm, nucleus, cell surface, mouse femur bone and Sr-90 radiation source were simulated. Cells are assumed to be spherical, with the radii of the cell and cell nucleus ranging from 2 to 10 μm. The Sr-90 source is assumed to be uniformly distributed in the cell nucleus, in the cytoplasm or on the cell surface. The S-values calculated with PENELOPE agreed very well with the MCNPX results and the Medical Internal Radiation Dose (MIRD) values: relative deviations were less than 4.5%. The dose distribution to mouse bone marrow cells showed that cells located near the cortical part received the maximum dose. The MC code PENELOPE may prove useful for cellular dosimetry involving radiation transport through materials other than water, or for complex distributions of radionuclides and geometries.
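A back-of-the-envelope MIRD-style S-value for a spherical cell compartment can be sketched as emitted energy per decay times absorbed fraction over compartment mass; full local absorption is assumed here, which is only a rough approximation for the Sr-90/Y-90 beta spectrum:

```python
import math

MEV_TO_J = 1.602176634e-13   # joules per MeV

def s_value_self(energy_mev, radius_um, phi=1.0, density_g_cm3=1.0):
    """MIRD-style self-dose S-value for a sphere, in Gy per Bq*s.

    energy_mev -- mean energy emitted per decay
    phi        -- absorbed fraction; 1.0 assumes full local absorption,
                  a rough assumption for energetic beta particles
    """
    r_cm = radius_um * 1e-4
    mass_kg = density_g_cm3 * (4.0 / 3.0) * math.pi * r_cm ** 3 * 1e-3
    return energy_mev * MEV_TO_J * phi / mass_kg
```

The MC codes above replace the crude `phi = 1.0` assumption with transported absorbed fractions, which is exactly where the sub-4.5% code-to-code deviations are probed.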
NASA Astrophysics Data System (ADS)
LaFleur, Adrienne M.; Charlton, William S.; Menlove, Howard O.; Swinhoe, Martyn T.
2012-07-01
A new non-destructive assay technique called Self-Interrogation Neutron Resonance Densitometry (SINRD) is currently being developed at Los Alamos National Laboratory (LANL) to improve existing nuclear safeguards measurements for Light Water Reactor (LWR) fuel assemblies. SINRD consists of four 235U fission chambers (FCs): bare FC, boron carbide shielded FC, Gd covered FC, and Cd covered FC. Ratios of different FCs are used to determine the amount of resonance absorption from 235U in the fuel assembly. The sensitivity of this technique is based on using the same fissile materials in the FCs as are present in the fuel because the effect of resonance absorption lines in the transmitted flux is amplified by the corresponding (n,f) reaction peaks in the fission chamber. In this work, experimental measurements were performed in air with SINRD using a reference Pressurized Water Reactor (PWR) 15×15 low enriched uranium (LEU) fresh fuel assembly at LANL. The purpose of this experiment was to assess the following capabilities of SINRD: (1) ability to measure the effective 235U enrichment of the PWR fresh LEU fuel assembly and (2) sensitivity and penetrability to the removal of fuel pins from an assembly. These measurements were compared to Monte Carlo N-Particle eXtended transport code (MCNPX) simulations to verify the accuracy of the MCNPX model of SINRD. The reproducibility of experimental measurements via MCNPX simulations is essential to validating the results and conclusions obtained from the simulations of SINRD for LWR spent fuel assemblies.
Shot-by-shot spectrum model for rod-pinch, pulsed radiography machines
NASA Astrophysics Data System (ADS)
Wood, Wm M.
2018-02-01
A simplified model of bremsstrahlung production is developed for determining the x-ray spectrum output of a rod-pinch radiography machine on a shot-by-shot basis, using the measured voltage, V(t), and current, I(t). The motivation for this model is the need for an agile means of providing shot-by-shot spectrum prediction, from a laptop or desktop computer, for quantitative radiographic analysis. Simplifying assumptions are discussed, and the model is applied to the Cygnus rod-pinch machine. Output is compared to wedge transmission data for a series of radiographs from shots with identical target objects. The resulting model enables variation of parameters in real time, thus allowing for rapid optimization of the model across many shots. Goodness of fit is compared with output from the LSP particle-in-cell code as well as the Monte Carlo N-Particle eXtended (MCNPX) code, and the model is shown to provide an excellent predictive representation of the spectral output of the Cygnus machine. Improvements to the model, specifically for application to other geometries, are discussed.
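A shot-by-shot spectrum in the spirit of the model above can be sketched with the thick-target Kramers approximation summed over the digitized V(t), I(t) trace (an assumed simplification; the paper's actual model includes further physics beyond this):

```python
def kramers_spectrum(voltages_mv, currents_ka, e_bins_mev):
    """Shot-integrated x-ray spectrum from digitized V(t) and I(t).

    For each time sample, applies the thick-target Kramers approximation
    N(E) ~ I * (V - E) / E for photon energies below the instantaneous
    endpoint V, and sums the contributions over the pulse. The result
    is in arbitrary units.
    """
    spectrum = [0.0] * len(e_bins_mev)
    for v, i in zip(voltages_mv, currents_ka):
        for k, e in enumerate(e_bins_mev):
            if 0.0 < e < v:
                spectrum[k] += i * (v - e) / e
    return spectrum

# a single hypothetical digitizer sample: 2.25 MV, 60 kA
spec = kramers_spectrum([2.25], [60.0], [0.5, 1.0, 2.0, 3.0])
```

Because only the measured V(t) and I(t) traces enter, such a model can be re-evaluated per shot in well under a second, which is the agility the abstract emphasizes.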
NASA Astrophysics Data System (ADS)
Barbosa, N. A.; da Rosa, L. A. R.; Facure, A.; Braz, D.
2014-02-01
Concave eye applicators with 90Sr/90Y and 106Ru/106Rh beta-ray sources are commonly used in brachytherapy for the treatment of superficial intraocular tumors, such as uveal melanoma, with thicknesses up to 5 mm. The aim of this work was to use the Monte Carlo code MCNPX to calculate the 3D dose distribution in a mathematical model of the human eye, considering 90Sr/90Y and 106Ru/106Rh beta-ray eye applicators, for the treatment of a posterior uveal melanoma with a thickness of 3.8 mm from the choroid surface. Mathematical models were developed for two concave ophthalmic applicators: the CGD produced by the BEBIG Company and the SIA.6 produced by the Amersham Company, with activities of 1 mCi and 4.23 mCi, respectively. These applicator models were attached to the eye model, and the dose distributions were calculated using the MCNPX *F8 tally. Average dose rates were determined in all regions of the eye model. The *F8 tally results showed that the energy deposited by the 106Ru/106Rh applicator is higher in all eye regions, including the tumor. However, the average dose rate in the tumor region is higher for the 90Sr/90Y applicator, due to its higher activity. Owing to the dosimetric characteristics of these applicators, the PDD value at 3 mm depth in water is 73% for the 106Ru/106Rh applicator and 60% for the 90Sr/90Y applicator. For a better choice of applicator type and radionuclide, it is important to know the thickness of the tumor and its location.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zieb, Kristofer James Ekhart; Hughes, Henry Grady III; Xu, X. George
The release of version 6.2 of the MCNP6 radiation transport code is imminent. To complement the newest release, a summary of the heavy charged particle physics models used in the 1 MeV to 1 GeV energy regime is presented. Several changes have been introduced into the charged particle physics models since the merger of the MCNP5 and MCNPX codes into MCNP6. This article discusses the default models used in MCNP6 for continuous energy loss, energy straggling, and angular scattering of heavy charged particles. Explanations of the physics models' theories are included as well.
Monte Carlo Simulation of a Segmented Detector for Low-Energy Electron Antineutrinos
NASA Astrophysics Data System (ADS)
Qomi, H. Akhtari; Safari, M. J.; Davani, F. Abbasi
2017-11-01
Detection of low-energy electron antineutrinos is important for several purposes, such as ex-vessel reactor monitoring and neutrino oscillation studies. The inverse beta decay (IBD) interaction is responsible for the detection mechanism in (organic) plastic scintillation detectors. Here, a detailed study is presented of the radiation and optical transport simulation of a typical segmented antineutrino detector with the Monte Carlo method, using the MCNPX and FLUKA codes. This study shows different aspects of the detector, benefiting from the inherent capabilities of the Monte Carlo simulation codes.
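The zeroth-order IBD kinematics underlying such detectors can be sketched directly (standard constants; recoil-order corrections are neglected in this sketch):

```python
DELTA = 1.293          # MeV, neutron-proton mass difference
M_E = 0.511            # MeV, electron rest mass
IBD_THRESHOLD = 1.806  # MeV, minimum antineutrino energy for IBD

def prompt_energy(e_nu_mev):
    """Zeroth-order IBD kinematics: energy of the prompt signal
    (positron kinetic energy plus two 511-keV annihilation quanta)
    for an antineutrino of energy e_nu_mev; None below threshold."""
    if e_nu_mev < IBD_THRESHOLD:
        return None
    e_positron_total = e_nu_mev - DELTA            # total positron energy
    return (e_positron_total - M_E) + 2.0 * M_E    # kinetic + annihilation
```

The prompt positron signal followed by the delayed neutron capture is the coincidence signature the segmented geometry is designed to localize.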
Shielding Analyses for VISION Beam Line at SNS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Popova, Irina; Gallmeier, Franz X
2014-01-01
Full-scale neutron and gamma transport analyses were performed to design shielding around the VISION beam line: the instrument shielding enclosure, beam stop, and secondary shutter, including a temporary beam stop for the still-closed neighboring beam line. The requirement is to achieve dose rates below 0.25 mrem/h at 30 cm from the shielding surface. The beam-stop and temporary-beam-stop analyses were performed with the discrete ordinates code DORT in addition to Monte Carlo analyses with the MCNPX code. A comparison of the results is presented.
Proton Dose Assessment to the Human Eye Using Monte Carlo N-Particle Transport Code (MCNPX)
2006-08-01
A high-fidelity Monte Carlo evaluation of CANDU-6 safety parameters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Y.; Hartanto, D.
2012-07-01
Important safety parameters such as the fuel temperature coefficient (FTC) and the power coefficient of reactivity (PCR) of the CANDU-6 (CANada Deuterium Uranium) reactor have been evaluated using a modified MCNPX code. For accurate analysis of the parameters, the DBRC (Doppler Broadening Rejection Correction) scheme was implemented in MCNPX in order to account for the thermal motion of the heavy uranium nucleus in neutron-U scattering reactions. In this work, a standard fuel lattice was modeled, the fuel was depleted using MCNPX, and the FTC value was evaluated at several burnup points, including the mid-burnup representing a near-equilibrium core. The Doppler effect was evaluated using several cross section libraries, such as ENDF/B-VI, ENDF/B-VII, JEFF and JENDL. The PCR value was also evaluated at mid-burnup conditions to characterize the safety features of the equilibrium CANDU-6 reactor. To improve the reliability of the Monte Carlo calculations, a very large number of neutron histories was used, and the standard deviation of the k-inf values is only 0.5-1 pcm. It was found that the FTC is significantly enhanced by accounting for the Doppler broadening of the scattering resonances, and the PCR is clearly improved. (authors)
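The DBRC scheme referenced above adds one extra rejection factor to free-gas target-motion sampling; a simplified sketch of the rejection loop (a Maxwellian-plus-rejection variant of the free-gas kernel, with an arbitrary cross-section callable standing in for the 0-K library data):

```python
import math
import random

def sample_target_speed_dbrc(sigma, v_n, kT_over_A, sigma_max):
    """Free-gas target-motion sampling with the DBRC rejection step.

    sigma      -- 0-K cross section as a function of relative speed
                  (an arbitrary callable here; in MCNPX it comes from
                  the pointwise library)
    v_n        -- neutron speed
    kT_over_A  -- kT/A for the heavy target, in squared-speed units
    sigma_max  -- maximum of sigma over the reachable relative speeds
    The standard free-gas kernel accepts a candidate with probability
    v_rel / (v_n + v_t); DBRC multiplies in sigma(v_rel) / sigma_max so
    that 0-K scattering resonances reshape the accepted distribution.
    """
    s = math.sqrt(kT_over_A)
    while True:
        # Maxwellian target speed from three Gaussian velocity components
        v_t = math.sqrt(sum(random.gauss(0.0, s) ** 2 for _ in range(3)))
        mu = 2.0 * random.random() - 1.0          # isotropic cosine
        v_rel = math.sqrt(v_n * v_n + v_t * v_t - 2.0 * v_n * v_t * mu)
        if (random.random() < v_rel / (v_n + v_t)
                and random.random() < sigma(v_rel) / sigma_max):
            return v_t, v_rel
```

With a constant cross section the DBRC factor is always unity and the loop reduces to conventional free-gas sampling; near a scattering resonance the extra factor is what produces the FTC enhancement reported above.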
MCNP6 Simulation of Light and Medium Nuclei Fragmentation at Intermediate Energies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mashnik, Stepan Georgievich; Kerby, Leslie Marie
2015-05-22
MCNP6, the latest and most advanced LANL Monte Carlo transport code, representing a merger of MCNP5 and MCNPX, is much more than the sum of those two computer codes; MCNP6 is available to the public via RSICC at Oak Ridge, TN, USA. In the present work, MCNP6 was validated and verified (V&V) against different experimental data on intermediate-energy fragmentation reactions, and against results from several other codes, using mainly the latest modifications of the Cascade-Exciton Model (CEM) and the Los Alamos version of the Quark-Gluon String Model (LAQGSM) event generators, CEM03.03 and LAQGSM03.03. It was found that MCNP6 using CEM03.03 and LAQGSM03.03 describes well fragmentation reactions induced on light and medium target nuclei by protons and light nuclei at energies around 1 GeV/nucleon and below, and can serve as a reliable simulation tool for different applications, such as cosmic-ray-induced single event upsets (SEUs), radiation protection, and cancer therapy with proton and ion beams, to name just a few. Future improvements of the predictive capabilities of MCNP6 for such reactions are possible, and are discussed in this work.
Influence of clouds on the cosmic radiation dose rate on aircraft.
Pazianotto, Maurício T; Federico, Claudio A; Cortés-Giraldo, Miguel A; Pinto, Marcos Luiz de A; Gonçalez, Odair L; Quesada, José Manuel M; Carlson, Brett V; Palomo, Francisco R
2014-10-01
Flight missions were carried out over Brazilian territory in 2009 and 2011 with the aim of measuring the cosmic radiation dose rate incident on aircraft in the South Atlantic Magnetic Anomaly and comparing it with Monte Carlo simulations. During one of these flights, small fluctuations were observed in the vicinity of Cumulonimbus cloud formations. Motivated by these observations, the authors investigated the relationship between the presence of clouds and the neutron flux and dose rate incident on aircraft using computational simulation. The Monte Carlo simulations were made using the MCNPX and Geant4 codes, considering the incident proton flux at the top of the atmosphere and its propagation and neutron production through several vertically arranged slabs, which were modelled according to the ISO specifications.
The radiation fields around a proton therapy facility: A comparison of Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Ottaviano, G.; Picardi, L.; Pillon, M.; Ronsivalle, C.; Sandri, S.
2014-02-01
A proton therapy test facility with a beam current lower than 10 nA on average and an energy up to 150 MeV is planned to be sited at the Frascati ENEA Research Center, in Italy. The accelerator is composed of a sequence of linear sections. The first is a commercial 7 MeV proton linac, from which the beam is injected into a SCDTL (Side Coupled Drift Tube Linac) structure, reaching an energy of 52 MeV. A conventional CCL (Coupled Cavity Linac) with side coupling cavities then completes the accelerator. The linear structure has the important advantage that the main radiation losses during the acceleration process occur for protons with energies below 20 MeV, with a consequently low production of neutrons and secondary radiation. From the radiation protection point of view, the source of radiation for this facility is thus almost completely located at the final target. Physical and geometrical models of the device have been developed and implemented in radiation transport computer codes based on the Monte Carlo method. The aim is the assessment of the radiation field around the main source in support of the safety analysis. For the assessment, independent researchers used two different Monte Carlo computer codes, FLUKA (FLUktuierende KAskade) and MCNPX (Monte Carlo N-Particle eXtended). Both are general-purpose tools for calculations of particle transport and interactions with matter, covering an extended range of applications including proton beam analysis. Nevertheless, each utilizes its own nuclear cross section libraries and uses specific physics models for particle types and energies. The models implemented in the codes are described and the results are presented. The differences between the two calculations are reported and discussed, pointing out the disadvantages and advantages of each code in this specific application.
Nuclear Resonance Fluorescence to Measure Plutonium Mass in Spent Nuclear Fuel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ludewigt, Bernhard A; Quiter, Brian J.; Ambers, Scott D.
2011-01-14
The Next Generation Safeguards Initiative (NGSI) of the U.S. Department of Energy is supporting a multi-lab/university collaboration to quantify the plutonium (Pu) mass in spent nuclear fuel (SNF) assemblies and to detect the diversion of pins with non-destructive assay (NDA) methods. The following 14 NDA techniques are being studied: Delayed Neutrons, Differential Die-Away, Differential Die-Away Self-Interrogation, Lead Slowing Down Spectrometer, Neutron Multiplicity, Passive Neutron Albedo Reactivity, Total Neutron (Gross Neutron), X-Ray Fluorescence, 252Cf Interrogation with Prompt Neutron Detection, Delayed Gamma, Nuclear Resonance Fluorescence, Passive Prompt Gamma, Self-Interrogation Neutron Resonance Densitometry, and Neutron Resonance Transmission Analysis. The understanding and maturity of the techniques vary greatly, ranging from decades-old, well-understood methods to new approaches. Nuclear Resonance Fluorescence (NRF) is a technique that had not previously been studied for SNF assay or similar applications. Since NRF generates isotope-specific signals, the promise and appeal of the technique lie in its potential to directly measure the amount of a specific isotope in an SNF assay target. The objectives of this study were to design and model suitable NRF measurement methods, to quantify capabilities and corresponding instrumentation requirements, and to evaluate the prospects and potential of NRF for SNF assay. The main challenge of the technique is to achieve the sensitivity and precision, i.e., to accumulate the counting statistics, required for quantifying the mass of Pu isotopes in SNF assemblies. Systematic errors, considered a lesser problem for a direct measurement and only briefly discussed in this report, need to be evaluated for specific instrument designs in the future. 
Also, since the technical capability of using NRF to measure Pu in SNF has not been established, this report does not directly address issues such as cost, size, or development time, nor concerns related to the use of Pu in measurement systems. This report discusses basic NRF measurement concepts, i.e., backscatter and transmission methods, and photon source and γ-ray detector options in Section 2. An analytical model for calculating NRF signal strengths is presented in Section 3, together with enhancements to the MCNPX code and descriptions of modeling techniques that are drawn upon in the following sections. Making extensive use of the model and MCNPX simulations, the capabilities of the backscatter and transmission methods based on bremsstrahlung or quasi-monoenergetic photon sources were analyzed, as described in Sections 4 and 5. A recent transmission experiment is reported in Appendix A. While this experiment was not directly part of this project, its results provide an important reference point for our analytical estimates and MCNPX simulations. Used-fuel radioactivity calculations, the enhancements to the MCNPX code, and details of the MCNPX simulations are documented in the other appendices.
Advances in the computation of the Sjöstrand, Rossi, and Feynman distributions
Talamo, A.; Gohar, Y.; Gabrielli, F.; ...
2017-02-01
This study illustrates recent computational advances in the application of the Sjöstrand (area), Rossi, and Feynman methods to estimate the effective multiplication factor of a subcritical system driven by an external neutron source. The methodologies introduced in this study have been validated against experimental results from the KUCA facility in Japan using Monte Carlo (MCNP6 and MCNPX) and deterministic (ERANOS, VARIANT, and PARTISN) codes. When the assembly is driven by a pulsed neutron source generated by a particle accelerator and delayed neutrons are at equilibrium, the Sjöstrand method becomes extremely fast if the integral of the reaction rate from a single pulse is split into two parts. These two integrals distinguish between the neutron counts during and after the pulse period. Finally, when the facility is driven by a spontaneous fission neutron source, the timestamps of the detector neutron counts can be obtained with nanosecond precision using MCNP6, which allows the Rossi and Feynman distributions to be obtained.
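The two estimators named in the abstract are compact enough to sketch directly: the Sjöstrand (area) method infers reactivity in dollars from the ratio of the prompt-decay area to the delayed-equilibrium area of the pulsed response, and the Feynman method computes the excess relative variance of gated counts. The β_eff value and count areas below are hypothetical placeholders, not KUCA data:

```python
def keff_from_areas(prompt_area, delayed_area, beta_eff):
    """Sjostrand (area) method: reactivity in dollars is minus the ratio of
    the prompt-decay area to the delayed (equilibrium) area of the pulsed
    response; k_eff follows from rho = rho_dollars * beta_eff."""
    rho = -(prompt_area / delayed_area) * beta_eff
    return 1.0 / (1.0 - rho)

def feynman_y(timestamps, gate_width, t_total):
    """Feynman-Y: excess relative variance (variance/mean - 1) of detector
    counts collected in consecutive gates of fixed width."""
    n_gates = int(t_total / gate_width)
    counts = [0] * n_gates
    for t in timestamps:
        i = int(t / gate_width)
        if i < n_gates:
            counts[i] += 1
    mean = sum(counts) / n_gates
    var = sum((c - mean) ** 2 for c in counts) / (n_gates - 1)
    return var / mean - 1.0

# hypothetical areas: 1e6 prompt counts, 2e5 delayed counts, beta_eff = 0.0075
k = keff_from_areas(1.0e6, 2.0e5, beta_eff=0.0075)
print(round(k, 4))  # → 0.9639
```

In practice the per-count timestamps produced by MCNP6, mentioned in the abstract, feed `feynman_y` directly after binning.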
NASA Astrophysics Data System (ADS)
Belinato, W.; Santos, W. S.; Paschoal, C. M. M.; Souza, D. N.
2015-06-01
The combination of positron emission tomography (PET) and computed tomography (CT) has been extensively used in oncology for the diagnosis and staging of tumors, radiotherapy planning and follow-up of patients with cancer, as well as in cardiology and neurology. This study determines, by the Monte Carlo method, the dose deposited in the internal organs of computational phantoms exposed to the multidetector CT (MDCT) beams of two PET/CT devices operating with different parameters. The different MDCT beam parameters were largely related to the total filtration, which changes the beam energy spectrum inside the gantry. This parameter was determined experimentally with the Accu-Gold Radcal measurement system. The experimental values of the total filtration were included in the simulations of two MCNPX code scenarios. The absorbed organ doses obtained in the MASH and FASH phantoms indicate that the bowtie filter geometry and the energy of the X-ray beam have a significant influence on the results, although this influence can be compensated by adjusting other variables such as the tube current-time product (mAs) and pitch during PET/CT procedures.
Dual-resolution dose assessments for proton beamlet using MCNPX 2.6.0
NASA Astrophysics Data System (ADS)
Chao, T. C.; Wei, S. C.; Wu, S. W.; Tung, C. J.; Tu, S. J.; Cheng, H. W.; Lee, C. C.
2015-11-01
The purpose of this study is to assess proton dose distributions in dual-resolution phantoms using MCNPX 2.6.0. A dual-resolution phantom uses higher resolution around the Bragg peak, in areas with large dose gradients, or at heterogeneous interfaces, and lower resolution elsewhere. MCNPX 2.6.0 was installed in Ubuntu 10.04 with MPI for parallel computing. FMesh1 tallies, which are designed for voxelized geometries and convert fluence to dose deposition, were used to record the energy deposition. Narrow 60 and 120 MeV proton beams were incident on coarse-, dual- and fine-resolution phantoms with pure water, water-bone-water and water-air-water setups. The doses in the coarse-resolution phantoms are underestimated owing to the partial volume effect. The dose distributions in the dual- and fine-resolution phantoms agreed well with each other, and the dual-resolution phantoms were at least 10 times more computationally efficient than the fine-resolution one. Because the secondary particle range is much longer in air than in water, the dose in a low-density region may be underestimated if the resolution or calculation grid is not fine enough.
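The partial volume effect invoked above is easy to reproduce with a toy depth-dose curve: a mesh tally scores the average dose over each voxel, so averaging a sharp peak over a coarse voxel dilutes it. The curve shape and numbers below are illustrative only, not the study's beam data:

```python
import math

def dose(z):
    """Toy depth-dose curve (arbitrary units) with a sharp, Bragg-peak-like
    maximum at z = 3 cm on top of a flat plateau."""
    return 0.3 + math.exp(-((z - 3.0) / 0.05) ** 2)

def max_voxel_dose(dz, z_max=4.0, n_sub=200):
    """Score what a mesh tally would see: the average of dose(z) over each
    voxel of width dz (cm); return the hottest voxel value."""
    best = 0.0
    for i in range(int(z_max / dz)):
        z0 = i * dz
        avg = sum(dose(z0 + (j + 0.5) * dz / n_sub) for j in range(n_sub)) / n_sub
        best = max(best, avg)
    return best

# fine (0.1 mm) voxels capture the peak; coarse (5 mm) voxels wash it out
print(round(max_voxel_dose(0.01), 2), round(max_voxel_dose(0.5), 2))
```

The coarse grid reports a peak dose several times lower than the fine grid, which is exactly the underestimation the dual-resolution phantom avoids by refining only the peak region.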
Review of heavy charged particle transport in MCNP6.2
NASA Astrophysics Data System (ADS)
Zieb, K.; Hughes, H. G.; James, M. R.; Xu, X. G.
2018-04-01
The release of version 6.2 of the MCNP6 radiation transport code is imminent. To complement the newest release, a summary of the heavy charged particle physics models used in the 1 MeV to 1 GeV energy regime is presented. Several changes have been introduced into the charged particle physics models since the merger of the MCNP5 and MCNPX codes into MCNP6. This paper discusses the default models used in MCNP6 for continuous energy loss, energy straggling, and angular scattering of heavy charged particles. Explanations of the physics models' theories are included as well.
Smans, Kristien; Zoetelief, Johannes; Verbrugge, Beatrijs; Haeck, Wim; Struelens, Lara; Vanhavere, Filip; Bosmans, Hilde
2010-05-01
The purpose of this study was to compare and validate three methods to simulate radiographic image detectors with the Monte Carlo software MCNP/MCNPX in a time-efficient way. The first detector model was the standard semideterministic radiography tally, which has been used in previous image simulation studies. In addition to the radiography tally, two alternative stochastic detector models were developed: a perfect energy-integrating detector and a detector based on the energy absorbed in the detector material. Validation of the three image detector models was performed by comparing calculated scatter-to-primary ratios (SPRs) with published and experimentally acquired SPR values. For mammographic applications, SPRs computed with the radiography tally were up to 44% larger than the published results, while the SPRs computed with the perfect energy-integrating detector and the blur-free absorbed-energy detector model were, on average, 0.3% (ranging from -3% to 3%) and 0.4% (ranging from -5% to 5%) lower, respectively. For general radiography applications, the radiography tally overestimated the measured SPR by as much as 46%. The SPRs calculated with the perfect energy-integrating detector were, on average, 4.7% (ranging from -5.3% to -4%) lower than the measured SPRs, whereas for the blur-free absorbed-energy detector model, the calculated SPRs were, on average, 1.3% (ranging from -0.1% to 2.4%) larger than the measured SPRs. For mammographic applications, both the perfect energy-integrating detector model and the blur-free energy-absorbing detector model can be used to simulate image detectors, whereas for conventional x-ray imaging at higher energies, the blur-free energy-absorbing detector model is the most appropriate. The radiography tally overestimates the scattered component and should therefore not be used to simulate radiographic image detectors.
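The scatter-to-primary ratio at the heart of this validation is simply the scattered signal divided by the primary signal in an energy-integrating detector model. A minimal sketch, with per-photon energy depositions and the scattered/primary tagging assumed to be available from the transport code (the event list here is hypothetical):

```python
def scatter_to_primary_ratio(events):
    """events: iterable of (scattered, energy) pairs, one per detected photon,
    where `scattered` marks photons that interacted before reaching the
    detector. For a perfect energy-integrating detector the signal is the
    summed energy, so SPR = scattered energy / primary energy."""
    primary = sum(e for scattered, e in events if not scattered)
    scatter = sum(e for scattered, e in events if scattered)
    return scatter / primary

# hypothetical detected events (keV deposited per photon)
events = [(False, 20.0), (True, 6.0), (False, 18.0), (True, 5.4)]
print(round(scatter_to_primary_ratio(events), 3))  # → 0.3
```

A counting (quantum) detector would instead tally photon numbers rather than energies; the difference between the two weightings is one reason the three detector models in the study disagree.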
NASA Astrophysics Data System (ADS)
Cunha, J. S.; Cavalcante, F. R.; Souza, S. O.; Souza, D. N.; Santos, W. S.; Carvalho Júnior, A. B.
2017-11-01
One of the main requirements in Total Body Irradiation (TBI) is the uniformity of the dose in the body. In TBI procedures, verification that the prescribed doses are absorbed in the organs is performed with dosimeters positioned on the patient's skin. In this work, we modelled TBI scenarios in the MCNPX code to estimate the entrance dose rate in the skin, for comparison and validation of the simulations against experimental measurements from the literature. Dose rates were estimated by simulating an ionization chamber laterally positioned on the thorax, abdomen, leg and thigh. Four exposure scenarios were simulated: ionization chamber (S1), TBI room (S2), and the patient represented by a hybrid phantom (S3) and by a water stylized phantom (S4) in a sitting posture. The posture of the patient in the experimental work was better represented by S4 than by the hybrid phantom, which led to minimum and maximum percentage differences from the experimental measurements of 1.31% and 6.25% for the thorax and thigh regions, respectively. Since for all simulations reported here the percentage differences in the estimated dose rates were less than 10%, we consider that the obtained results are consistent with the experimental measurements and that the modelled scenarios are suitable for estimating the absorbed dose in organs during TBI procedures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, C; Badal, A
Purpose: Computational voxel phantoms provide realistic anatomy, but the voxel structure may introduce dosimetric error relative to the real anatomy, which is bounded by smooth surfaces. We analyzed the dosimetric error caused by the voxel structure in hybrid computational phantoms by comparing voxel-based doses at different resolutions with triangle-mesh-based doses. Methods: We incorporated the existing adult male UF/NCI hybrid phantom, in mesh format, into penMesh, a Monte Carlo transport code that supports triangle meshes. We calculated the energy deposition in selected organs of interest for parallel photon beams at three energies (0.1, 1, and 10 MeV) in antero-posterior geometry. We also calculated organ energy deposition using three voxel phantoms with different voxel resolutions (1, 5, and 10 mm) using MCNPX 2.7. Results: Comparison of organ energy deposition between the two methods showed that agreement generally improved at higher voxel resolution, but for many organs the differences were small. The difference in energy deposition at 1 MeV, for example, decreased from 11.5% to 1.7% in muscle but only from 0.6% to 0.3% in liver as the voxel resolution increased from 10 mm to 1 mm. The differences were smaller at higher energies. The numbers of photon histories processed per second in voxels were 6.4×10^4, 3.3×10^4, and 1.3×10^4 for the 10, 5, and 1 mm resolutions at 10 MeV, respectively, while the meshes ran at 4.0×10^4 histories/sec. Conclusion: The combination of the hybrid mesh phantom and penMesh proved to be accurate and of similar speed compared to the voxel phantom and MCNPX. The lowest voxel resolution caused a maximum dosimetric error of 12.6% at 0.1 MeV and 6.8% at 10 MeV, but the error was insignificant in some organs. We will apply the tool to calculate dose to very thin tissue layers (e.g., the radiosensitive layer in the gastrointestinal tract), which cannot be modeled by voxel phantoms.
MCNP6 Fission Multiplicity with FMULT Card
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilcox, Trevor; Fensin, Michael Lorne; Hendricks, John S.
With the merger of MCNPX and MCNP5 into MCNP6, MCNP6 now provides all the capabilities of both codes, allowing the user to access all the fission multiplicity data sets. Detailed in this paper are: (1) the new FMULT card capabilities for accessing these different data sets; and (2) benchmark calculations, as compared to experiment, detailing the results of selecting these separate data sets for thermal-neutron-induced fission on U-235.
Performance revaluation of a N-type coaxial HPGe detector with front edges crystal using MCNPX.
Azli, Tarek; Chaoui, Zine-El-Abidine
2015-03-01
The MCNPX code was used to determine the efficiency of an N-type HPGe detector after two decades of operation. Accounting for the roundedness of the crystal's front edges and an inhomogeneous description of the detector's dead layers was shown to yield better agreement between measured and simulated efficiencies. The calculations were verified experimentally using point sources in the energy range from 50 keV to 1400 keV, and an overall uncertainty of less than 2% was achieved. In order to use the detector for different matrices and geometries in radioactivity measurements, the suggested model was validated by changing the counting geometry and by using multi-gamma disc sources. The introduced simulation approach permitted re-evaluation of the performance of the HPGe detector in comparison with its initial condition and is a useful tool for precise determination of the thickness of the inhomogeneous dead layer.
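The quantity being matched between measurement and simulation here is the full-energy-peak efficiency: net peak counts divided by the number of gammas the source emitted during the live time. A sketch with hypothetical Cs-137 numbers (certificate activity, emission probability, and a one-year decay correction; none are taken from the paper):

```python
import math

def fep_efficiency(net_peak_counts, live_time_s, activity_bq, gamma_yield,
                   elapsed_days=0.0, half_life_days=None):
    """Full-energy-peak efficiency for one gamma line: counts divided by the
    number of gammas emitted during the measurement. Optionally decay-corrects
    the certificate activity to the measurement date."""
    if half_life_days is not None:
        activity_bq *= math.exp(-math.log(2.0) * elapsed_days / half_life_days)
    emitted = activity_bq * gamma_yield * live_time_s
    return net_peak_counts / emitted

# hypothetical Cs-137 point source: 30 kBq certificate activity, 662 keV line
# (yield 0.851), measured for 1 h live time one year after certification
eff = fep_efficiency(4.0e5, 3600.0, 3.0e4, 0.851,
                     elapsed_days=365.25, half_life_days=30.08 * 365.25)
print(round(eff, 4))  # → 0.0045
```

Comparing such measured efficiencies with MCNPX-simulated ones, line by line, is what constrains the dead-layer thickness and crystal-edge model in the abstract.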
124Sb-Be photo-neutron source for BNCT: Is it possible?
NASA Astrophysics Data System (ADS)
Golshanian, Mohadeseh; Rajabi, Ali Akbar; Kasesaz, Yaser
2016-11-01
In this research, a computational feasibility study has been performed on the use of a 124Sb-Be photo-neutron source for Boron Neutron Capture Therapy (BNCT) using the MCNPX Monte Carlo code. For this purpose, a special beam shaping assembly has been designed to provide an epithermal neutron beam suitable for BNCT. The final result shows that using 150 kCi of 124Sb, the epithermal neutron flux at the designed beam exit is 0.23×10^9 n/cm²·s. In-phantom dose analysis indicates that the treatment time for a brain tumor is about 40 min, which is a reasonable time. This high activity of 124Sb could be achieved using three 50 kCi rods of 124Sb, which can be produced in a research reactor. It is clear that, as this activity is several hundred times that of a typical cobalt radiotherapy source, issues related to handling, safety and security must be addressed.
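The quoted ~40 min treatment time follows directly from the epithermal flux at the beam exit once a required fluence is fixed. The required-fluence value below is a hypothetical placeholder chosen only to match the order of magnitude in the abstract, not a figure from the paper:

```python
def treatment_time_minutes(required_fluence, epithermal_flux):
    """Treatment time = required epithermal fluence at the beam port (n/cm^2)
    divided by the epithermal flux (n/cm^2/s), converted to minutes."""
    return required_fluence / epithermal_flux / 60.0

# hypothetical required fluence of 5.5e11 n/cm^2 with the reported
# 0.23e9 n/cm^2/s epithermal flux
t = treatment_time_minutes(5.5e11, 0.23e9)
print(round(t, 1))  # → 39.9
```

In a real plan the required fluence would itself come from the in-phantom dose analysis (boron concentration, RBE-weighted dose prescription), which the abstract performs with MCNPX.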
NASA Astrophysics Data System (ADS)
Courageot, Estelle; Sayah, Rima; Huet, Christelle
2010-05-01
Estimating the dose distribution in a victim's body is a relevant indicator in assessing biological damage from exposure in the event of a radiological accident caused by an external source. When the dose distribution is evaluated with a numerical anthropomorphic model, the posture and morphology of the victim have to be reproduced as realistically as possible. Several years ago, IRSN developed a specific software application, called the simulation of external source accident with medical images (SESAME), for the dosimetric reconstruction of radiological accidents by numerical simulation. This tool combines voxel geometry and the MCNP(X) Monte Carlo computer code for radiation-material interaction. This note presents a new functionality in this software that enables the modelling of a victim's posture and morphology based on non-uniform rational B-spline (NURBS) surfaces. The procedure for constructing the modified voxel phantoms is described, along with a numerical validation of this new functionality using a voxel phantom of the RANDO tissue-equivalent physical model.
Effect of the multiple scattering of electrons in Monte Carlo simulation of LINACS.
Vilches, Manuel; García-Pareja, Salvador; Guerrero, Rafael; Anguiano, Marta; Lallena, Antonio M
2008-01-01
Results obtained from Monte Carlo simulations of the transport of electrons in thin slabs of dense material media and in air slabs of different widths are analyzed. Various general-purpose Monte Carlo codes have been used: PENELOPE, GEANT3, GEANT4, EGSNRC and MCNPX. Non-negligible differences between the angular and radial distributions after the slabs have been found. The effects of these differences on the depth doses measured in water are also discussed.
McStas event logger: Definition and applications
NASA Astrophysics Data System (ADS)
Bergbäck Knudsen, Erik; Bryndt Klinkby, Esben; Kjær Willendrup, Peter
2014-02-01
Functionality has been added to the McStas neutron ray-tracing code that allows individual neutron states before and after a scattering event to be temporarily stored and analysed. This logging mechanism has multiple uses, including studies of longitudinal intensity loss in neutron guides and guide-coating design optimisations. Furthermore, the logging method enables the cold/thermal-neutron-induced gamma background along the guide to be calculated from the un-reflected neutrons, using a recently developed MCNPX-McStas interface.
NASA Astrophysics Data System (ADS)
Engle, J. W.; Kelsey, C. T.; Bach, H.; Ballard, B. D.; Fassbender, M. E.; John, K. D.; Birnbaum, E. R.; Nortier, F. M.
2012-12-01
In order to ascertain the potential for radioisotope production and materials science studies using the Isotope Production Facility at Los Alamos National Laboratory, a two-pronged investigation has been initiated. The Monte Carlo N-Particle eXtended (MCNPX) code has been used in conjunction with the CINDER'90 burnup code to predict neutron flux energy distributions resulting from routine irradiations and to estimate yields of radioisotopes of interest for hypothetical irradiation conditions. A threshold foil activation experiment is planned to study the neutron flux using measured yields of radioisotopes, quantified by HPGe gamma spectroscopy, from representative nuclear reactions with known thresholds up to 50 MeV.
Path Toward a Unified Geometry for Radiation Transport
NASA Astrophysics Data System (ADS)
Lee, Kerry
The Direct Accelerated Geometry for Radiation Analysis and Design (DAGRAD) element of the RadWorks Project under Advanced Exploration Systems (AES) within the Space Technology Mission Directorate (STMD) of NASA will enable new designs and concepts of operation for radiation risk assessment, mitigation and protection. This element is designed to produce a solution that will allow NASA to calculate the transport of space radiation through complex CAD models using the state-of-the-art analytic and Monte Carlo radiation transport codes. Due to the inherent hazard of astronaut and spacecraft exposure to ionizing radiation in low-Earth orbit (LEO) or in deep space, risk analyses must be performed for all crew vehicles and habitats. Incorporating these analyses into the design process can minimize the mass needed solely for radiation protection. Transport of the radiation fields as they pass through shielding and body materials can be simulated using Monte Carlo techniques or described by the Boltzmann equation, which is obtained by balancing changes in particle fluxes as they traverse a small volume of material with the gains and losses caused by atomic and nuclear collisions. Deterministic codes that solve the Boltzmann transport equation, such as HZETRN (high charge and energy transport code developed by NASA LaRC), are generally computationally faster than Monte Carlo codes such as FLUKA, GEANT4, MCNP(X) or PHITS; however, they are currently limited to transport in one dimension, which poorly represents the secondary light ion and neutron radiation fields. NASA currently uses HZETRN space radiation transport software, both because it is computationally efficient and because proven methods have been developed for using this software to analyze complex geometries. 
Although Monte Carlo codes describe the relevant physics in a fully three-dimensional manner, their computational costs have thus far prevented their widespread use for the analysis of complex CAD models, leading to the creation and maintenance of toolkit-specific simplistic geometry models. The work presented here builds on the Direct Accelerated Geometry Monte Carlo (DAGMC) toolkit developed for use with the Monte Carlo N-Particle (MCNP) transport code. The workflow for performing radiation transport on CAD models using MCNP and FLUKA has been demonstrated, and the results of analyses on realistic spacecraft/habitats will be presented. Future work is planned that will further automate this process and enable the use of multiple radiation transport codes on identical geometry models imported from CAD. This effort will enhance the modeling tools used by NASA to accurately evaluate the astronaut space radiation risk and accurately determine the protection provided by as-designed exploration mission vehicles and habitats.
Path Toward a Unified Geometry for Radiation Transport
NASA Technical Reports Server (NTRS)
Lee, Kerry; Barzilla, Janet; Davis, Andrew; Zachmann
2014-01-01
The Direct Accelerated Geometry for Radiation Analysis and Design (DAGRAD) element of the RadWorks Project under Advanced Exploration Systems (AES) within the Space Technology Mission Directorate (STMD) of NASA will enable new designs and concepts of operation for radiation risk assessment, mitigation and protection. This element is designed to produce a solution that will allow NASA to calculate the transport of space radiation through complex computer-aided design (CAD) models using the state-of-the-art analytic and Monte Carlo radiation transport codes. Due to the inherent hazard of astronaut and spacecraft exposure to ionizing radiation in low-Earth orbit (LEO) or in deep space, risk analyses must be performed for all crew vehicles and habitats. Incorporating these analyses into the design process can minimize the mass needed solely for radiation protection. Transport of the radiation fields as they pass through shielding and body materials can be simulated using Monte Carlo techniques or described by the Boltzmann equation, which is obtained by balancing changes in particle fluxes as they traverse a small volume of material with the gains and losses caused by atomic and nuclear collisions. Deterministic codes that solve the Boltzmann transport equation, such as HZETRN [high charge and energy transport code developed by NASA Langley Research Center (LaRC)], are generally computationally faster than Monte Carlo codes such as FLUKA, GEANT4, MCNP(X) or PHITS; however, they are currently limited to transport in one dimension, which poorly represents the secondary light ion and neutron radiation fields. NASA currently uses HZETRN space radiation transport software, both because it is computationally efficient and because proven methods have been developed for using this software to analyze complex geometries. 
Although Monte Carlo codes describe the relevant physics in a fully three-dimensional manner, their computational costs have thus far prevented their widespread use for the analysis of complex CAD models, leading to the creation and maintenance of toolkit-specific simplistic geometry models. The work presented here builds on the Direct Accelerated Geometry Monte Carlo (DAGMC) toolkit developed for use with the Monte Carlo N-Particle (MCNP) transport code. The workflow for performing radiation transport on CAD models using MCNP and FLUKA has been demonstrated, and the results of analyses on realistic spacecraft/habitats will be presented. Future work is planned that will further automate this process and enable the use of multiple radiation transport codes on identical geometry models imported from CAD. This effort will enhance the modeling tools used by NASA to accurately evaluate the astronaut space radiation risk and accurately determine the protection provided by as-designed exploration mission vehicles and habitats.
Karimi, Zahra; Sadeghi, Mahdi; Mataji-Kojouri, Naimeddin
2018-07-01
64Cu is one of the most useful radionuclides that can be employed as a theranostic agent in Positron Emission Tomography (PET) imaging. In the current work, 64Cu was produced from zinc oxide nanoparticles (natZnO NPs) and zinc oxide powder (natZnO) via the 64Zn(n,p)64Cu reaction in the Tehran Research Reactor (TRR), and the activity values were compared with each other. The theoretical activity of 64Cu was also calculated with MCNPX 2.6, and the cross sections of this reaction were calculated using the TALYS-1.8, EMPIRE-3.2.2 and ALICE/ASH nuclear codes and compared with experimental values. Transmission Electron Microscopy (TEM), Scanning Electron Microscopy (SEM) and X-Ray Diffraction (XRD) analyses were used for sample characterization. From these results, it is concluded that the 64Cu activity achieved with the nanoscale target was higher than with the bulk target and agreed well with the MCNPX result.
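The theoretical activity computed here follows the standard activation equation A(t) = Nσφ(1 − e^(−λt)). A sketch with hypothetical target, cross section and flux numbers (since 64Zn(n,p) is a threshold reaction, σ would be a fast-spectrum average; none of these values are taken from the paper):

```python
import math

def activation_activity(n_targets, sigma_cm2, flux, t_irr_s, half_life_s):
    """Activity of an activation product at end of irradiation:
    A(t) = N * sigma * phi * (1 - exp(-lambda * t)), with lambda = ln2/T_half.
    Assumes constant flux and negligible target burnup."""
    lam = math.log(2.0) / half_life_s
    return n_targets * sigma_cm2 * flux * (1.0 - math.exp(-lam * t_irr_s))

# hypothetical numbers: 1e20 Zn-64 atoms, 40 mb spectrum-averaged (n,p)
# cross section, 1e13 n/cm^2/s fast flux, 24 h irradiation, T1/2 = 12.7 h
a = activation_activity(1.0e20, 40e-27, 1.0e13, 24 * 3600.0, 12.7 * 3600.0)
print(a)  # activity in Bq at end of irradiation
```

The (1 − e^(−λt)) saturation factor is why irradiating much longer than a few half-lives of 64Cu (T1/2 ≈ 12.7 h) yields diminishing returns.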
Utilizing Radioisotope Power Systems for Human Lunar Exploration
NASA Technical Reports Server (NTRS)
Schreiner, Timothy M.
2005-01-01
The Vision for Space Exploration has a goal of sending crewed missions to the lunar surface as early as 2015 and no later than 2020. The use of nuclear power sources could assist crews in exploring the surface and performing In-Situ Resource Utilization (ISRU) activities. Radioisotope Power Systems (RPSs) provide constant sources of electrical power and thermal energy for space applications. RPSs were carried on six of the crewed Apollo missions to power surface science packages, five of which still remain on the lunar surface. Future RPS designs may be able to play a more active role in supporting a long-term human presence. Due to its lower thermal and radiation output, the planned Stirling Radioisotope Generator (SRG) appears particularly attractive for crewed applications. The MCNPX particle transport code has been used to model the current SRG design to assess its use in proximity to astronauts operating on the surface. Concepts of mobility and ISRU infrastructure were modeled using MCNPX to analyze the impact of RPSs on crewed mobility systems. Strategies for lowering the radiation dose were studied to determine methods of shielding the crew from the RPSs.
NASA Astrophysics Data System (ADS)
LaFleur, Adrienne Marie
The development of non-destructive assay (NDA) capabilities to directly measure the fissile content in spent fuel is needed to improve the timely detection of the diversion of significant quantities of fissile material. Currently, the International Atomic Energy Agency (IAEA) does not have effective NDA methods to verify spent fuel and recover continuity of knowledge in the event of a containment and surveillance system failure. This issue has become increasingly critical with the worldwide expansion of nuclear power, the adoption of enhanced safeguards criteria for spent fuel verification, and recent efforts by the IAEA to incorporate an integrated safeguards regime. To address these issues, Self-Interrogation Neutron Resonance Densitometry (SINRD) has been developed to improve existing nuclear safeguards and material accountability measurements. The following characteristics of SINRD were analyzed: (1) its ability to measure the fissile content in Light Water Reactor (LWR) fuel assemblies and (2) its sensitivity and penetrability with respect to the removal of fuel pins from an assembly. The Monte Carlo N-Particle eXtended (MCNPX) transport code was used to simulate SINRD for different geometries. Experimental measurements were also performed with SINRD and compared to MCNPX simulations of the experiment to verify the accuracy of the MCNPX model of SINRD. Based on the results from these simulations and measurements, we have concluded that SINRD provides a number of improvements over current IAEA verification methods. These improvements include: (1) SINRD provides absolute measurements of burnup independent of the operator's declaration. (2) SINRD is sensitive to pin removal over the entire burnup range and can verify the diversion of 6% of fuel pins at the 3σ level from LWR spent LEU and MOX fuel. (3) SINRD is insensitive to the boron concentration and initial fuel enrichment and can therefore be used at multiple spent fuel storage facilities.
(4) The calibration of SINRD at one reactor facility carries over to reactor sites in different countries because it uses the ratio of fission chamber (FC) responses, which is not facility dependent. (5) SINRD can distinguish fresh and 1-cycle spent MOX fuel from 3- and 4-cycle spent LEU fuel without using reactor burnup codes.
Fonseca, T C Ferreira; Bogaerts, R; Lebacq, A L; Mihailescu, C L; Vanhavere, F
2014-04-01
A realistic computational 3D human body library, called MaMP and FeMP (Male and Female Mesh Phantoms), based on polygonal mesh surface geometry, has been created to be used for numerical calibration of the whole body counter (WBC) system of the nuclear power plant (NPP) in Doel, Belgium. The main objective was to create flexible computational models varying in gender, body height, and mass for studying the morphology-induced variation of the detector counting efficiency (CE) and reducing the measurement uncertainties. First, the counting room and an HPGe detector were modeled using MCNPX (Monte Carlo radiation transport code). The validation of the model was carried out for different sample-detector geometries with point sources and a physical phantom. Second, CE values were calculated for a total of 36 different mesh phantoms in a seated position using the validated Monte Carlo model. This paper reports on the validation process of the in vivo whole body system and the CE calculated for different body heights and weights. The results reveal that the CE is strongly dependent on the individual body shape, size, and gender and may vary by a factor of 1.5 to 3 depending on the morphology aspects of the individual to be measured.
A new response matrix for a 6LiI scintillator BSS system
NASA Astrophysics Data System (ADS)
Lacerda, M. A. S.; Méndez-Villafañe, R.; Lorente, A.; Ibañez, S.; Gallego, E.; Vega-Carrillo, H. R.
2017-10-01
A new response matrix was calculated for a Bonner Sphere Spectrometer (BSS) with a 6LiI(Eu) scintillator, using the Monte Carlo N-Particle radiation transport code MCNPX. Responses were calculated for 6 spheres and the bare detector, for energies varying from 1.059 × 10⁻⁹ MeV to 105.9 MeV, with 20 equal-log(E)-width bins per energy decade, totaling 221 energy groups. A comparison was made between the responses obtained in this work and others published elsewhere for the same detector model. The calculated response functions were inserted into the response input file of the MAXED code and used to unfold the total and direct neutron spectra generated by the 241Am-Be source of the Universidad Politécnica de Madrid (UPM). These spectra were compared with those obtained using the same unfolding code with the Mares and Schraube response matrix.
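The 221-group structure quoted above (20 equal-log-width bins per decade over the exactly 11 decades between 1.059 × 10⁻⁹ and 105.9 MeV) can be generated in a few lines; a sketch:

```python
import numpy as np

# 20 equal-lethargy-width bins per decade; the span 1.059e-9 -> 105.9 MeV
# covers exactly 11 decades, giving 220 bins and 221 group boundaries.
e_min, e_max = 1.059e-9, 105.9          # MeV
n_decades = int(round(np.log10(e_max / e_min)))
edges = np.logspace(np.log10(e_min), np.log10(e_max), 20 * n_decades + 1)
```

Consecutive boundaries differ by the constant factor 10^(1/20) ≈ 1.122.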
Analysis and evaluation for consumer goods containing NORM in Korea.
Jang, Mee; Chung, Kun Ho; Lim, Jong Myoung; Ji, Young Yong; Kim, Chang Jong; Kang, Mun Ja
2017-08-01
We analyzed consumer goods containing NORM by ICP-MS and evaluated the external dose. To evaluate the external dose, we assumed a small-room model as the irradiation scenario and calculated the specific effective dose rates using the MCNPX code. The external doses for twenty goods are less than 1 mSv, considering the specific effective dose rates and usage quantities. However, some of the goods give relatively high doses, and activity concentration limits are necessary as a screening tool. Copyright © 2017 Elsevier Ltd. All rights reserved.
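The screening logic described above reduces to multiplying an MCNPX-derived specific dose rate by an activity concentration and annual usage time; a sketch with hypothetical product values (none of the numbers below come from the study):

```python
def annual_external_dose_mSv(rate_nSv_h_per_Bq_g, conc_Bq_g, hours_per_year):
    """Annual external dose (mSv) = specific effective dose rate (nSv/h per Bq/g)
    x activity concentration (Bq/g) x usage time (h/y), converted from nSv to mSv."""
    return rate_nSv_h_per_Bq_g * conc_Bq_g * hours_per_year * 1e-6

# Hypothetical consumer good: 0.5 nSv/h per Bq/g in the small-room model,
# 2 Bq/g of NORM, used 2000 h per year.
dose = annual_external_dose_mSv(0.5, 2.0, 2000.0)
below_limit = dose < 1.0                 # compare against the 1 mSv criterion
```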
NASA Astrophysics Data System (ADS)
Günay, M.; Şarer, B.; Kasap, H.
2014-08-01
In the present investigation, a fusion-fission hybrid reactor system was designed using 9Cr2WVTa ferritic steel as the structural material and molten salt-heavy metal mixtures of 99-95% Li20Sn80 with 1-5% SFG-Pu, SFG-PuF4 or SFG-PuO2 as fluids. The fluids were used in the liquid first wall, blanket and shield zones of the system. A 3-cm-wide beryllium zone was placed between the liquid first wall and the blanket for neutron multiplication. The contributions of each isotope in the fluids to the nuclear parameters of the hybrid system, such as the tritium breeding ratio, energy multiplication factor and heat deposition rate, were computed in the liquid first wall, blanket and shield zones. Three-dimensional analyses were performed using the Monte Carlo code MCNPX-2.7.0 and the ENDF/B-VII.0 nuclear data library.
Ali, F; Waker, A J; Waller, E J
2014-10-01
Tissue-equivalent proportional counters (TEPC) can potentially be used as a portable and personal dosemeter in mixed neutron and gamma-ray fields, but what hinders this use is their typically large physical size. To formulate compact TEPC designs, the use of a Monte Carlo transport code is necessary to predict the performance of compact designs in these fields. To perform this modelling, three candidate codes were assessed: MCNPX 2.7.E, FLUKA 2011.2 and PHITS 2.24. In each code, benchmark simulations were performed involving the irradiation of a 5-in. TEPC with monoenergetic neutron fields and a 4-in. wall-less TEPC with monoenergetic gamma-ray fields. The frequency and dose mean lineal energies and dose distributions calculated from each code were compared with experimentally determined data. For the neutron benchmark simulations, PHITS produces data closest to the experimental values and for the gamma-ray benchmark simulations, FLUKA yields data closest to the experimentally determined quantities. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
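The frequency- and dose-mean lineal energies compared in these benchmarks are the first two moments of the measured distribution f(y); a sketch of that bookkeeping for binned data:

```python
import numpy as np

def lineal_energy_means(y, f):
    """Frequency mean y_F = sum(y*f)/sum(f) and dose mean y_D = sum(y^2*f)/sum(y*f)
    for a binned lineal-energy frequency distribution f(y)."""
    y = np.asarray(y, dtype=float)
    f = np.asarray(f, dtype=float)
    y_f = np.sum(y * f) / np.sum(f)
    y_d = np.sum(y ** 2 * f) / np.sum(y * f)
    return y_f, y_d
```

y_D weights each event by its energy deposit, so it always lies at or above y_F; comparing both means is a sharper test of a simulated microdosimetric spectrum than either alone.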
Coupled Monte Carlo neutronics and thermal hydraulics for power reactors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernnat, W.; Buck, M.; Mattes, M.
The availability of high-performance computing resources increasingly enables the use of detailed Monte Carlo models even for full-core power reactors. The detailed structure of the core can be described by lattices, modeled by so-called repeated structures in Monte Carlo codes such as MCNP5 or MCNPX. For cores with mainly uniform material compositions and fuel and moderator temperatures, there is no problem in constructing core models. However, when the material composition and the temperatures vary strongly, a huge number of different material cells must be described, which complicates the input and in many cases exceeds code or memory limits. A second problem arises with the preparation of the corresponding temperature-dependent cross sections and thermal scattering laws. Only if these problems can be solved is a realistic coupling of Monte Carlo neutronics with an appropriate thermal-hydraulics model possible. In this paper a method for the treatment of detailed material and temperature distributions in MCNP5 is described, based on user-specified internal functions which assign distinct elements of the core cells to material specifications (e.g. water density) and temperatures from a thermal-hydraulics code. The core grid itself can be described with a uniform material specification. The temperature dependency of cross sections and thermal neutron scattering laws is taken into account by interpolation, requiring only a limited number of data sets generated for different temperatures. Applications will be shown for the stationary part of the Purdue PWR benchmark using ATHLET for thermal-hydraulics and for a generic modular high-temperature reactor using THERMIX for thermal-hydraulics. (authors)
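The abstract does not spell out the interpolation scheme; a common convention for Doppler-broadened data is interpolation linear in √T between two pre-generated temperature sets, which a sketch might implement as follows (the function name, signature and the √T convention are assumptions here, not the paper's stated method):

```python
import numpy as np

def interp_xs_sqrt_t(xs_lo, xs_hi, t_lo, t_hi, t):
    """Pointwise interpolation of cross sections between two tabulated
    temperatures t_lo and t_hi (K), linear in sqrt(T) -- a common
    convention for Doppler-broadened data."""
    w = (np.sqrt(t) - np.sqrt(t_lo)) / (np.sqrt(t_hi) - np.sqrt(t_lo))
    return (1.0 - w) * np.asarray(xs_lo) + w * np.asarray(xs_hi)
```

Only a few tabulated temperature sets are then needed to cover the whole operating range, which is exactly the saving the paper describes.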
Calculations vs. measurements of remnant dose rates for SNS spent structures
NASA Astrophysics Data System (ADS)
Popova, I. I.; Gallmeier, F. X.; Trotter, S.; Dayton, M.
2018-06-01
Residual dose rate measurements were conducted on target vessel #13 and proton beam window #5 after extraction from their service locations. These measurements were used to verify the calculation methods for radionuclide inventory assessment that are typically performed for nuclear waste characterization and transportation of these structures. Neutronics analyses for predicting residual dose rates were carried out using the transport code MCNPX and the transmutation code CINDER90. For the transport analyses, complex and rigorous geometry models of the structures and their surroundings were applied. The neutronics analyses were carried out using the Bertini and CEM high-energy physics models for simulating particle interactions. The preliminary calculated results were analysed and compared to the measured dose rates, and overall show good agreement, within 40% on average.
NASA Astrophysics Data System (ADS)
Suharyana; Riyatun; Octaviana, E. F.
2016-11-01
We propose a simulation of a neutron beam-shaping assembly using the MCNPX code. This simulation study deals with designing a compact, optimized, and geometrically simple beam-shaping assembly for a neutron source based on a proton cyclotron for BNCT purposes. A shifting method was applied to lower the fast-neutron energy into the epithermal range by choosing appropriate materials. Based on a set of MCNPX simulations, the best materials for the beam-shaping assembly were found to be 3 cm of Ni layered with 7 cm of Pb as the reflector and 13 cm of AlF3 as the moderator. Our proposed beam-shaping assembly configuration satisfies 2 of the 5 IAEA criteria, namely an epithermal neutron flux of 1.25 × 10⁹ n·cm⁻²·s⁻¹ and a gamma dose per unit epithermal neutron flux of 0.18 × 10⁻¹³ Gy·cm²·n⁻¹. However, the ratio of the fast-neutron dose rate to the epithermal neutron flux is still too high. We recommend that the shifting method be accompanied by a filter method to reduce the fast-neutron flux.
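The criteria check described above is a simple threshold comparison against the IAEA recommendations; a sketch using the values as commonly quoted (the 2 × 10⁻¹³ Gy·cm² limits are assumptions taken from IAEA-TECDOC-1223, not stated in the text):

```python
def check_bnct_port(phi_epi, gamma_per_epi, fast_dose_per_epi):
    """Compare simulated beam-port figures of merit against three of the
    commonly quoted IAEA BNCT beam-port recommendations."""
    return {
        "epithermal flux >= 1e9 n/cm2/s": phi_epi >= 1e9,
        "gamma dose / epi flux <= 2e-13 Gy cm2": gamma_per_epi <= 2e-13,
        "fast dose / epi flux <= 2e-13 Gy cm2": fast_dose_per_epi <= 2e-13,
    }

# Reported flux and gamma ratio, plus an assumed too-high fast-neutron dose ratio:
result = check_bnct_port(1.25e9, 0.18e-13, 5e-13)
```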
Freitas, B M; Martins, M M; Pereira, W W; da Silva, A X; Mauricio, C L P
2016-09-01
The Brazilian Instituto de Radioproteção e Dosimetria (IRD) runs a neutron individual monitoring system with a home-made TLD albedo dosemeter. It has already been characterised and calibrated in some reference fields. However, the complete energy response of this dosemeter is not known, and the calibration factors for all monitored workplace neutron fields are difficult to obtain experimentally. Therefore, to overcome these difficulties, Monte Carlo simulations have been used. This paper describes the simulation of the HP(10) neutron response of the IRD TLD albedo dosemeter using the MCNPX transport code, for energies from thermal to 20 MeV. The validation of the MCNPX modelling is done by comparing the simulated results with experimental measurements for the ISO standard neutron fields of (241)Am-Be, (252)Cf, (241)Am-B and (252)Cf(D2O), and also for a (241)Am-Be source moderated with paraffin and silicone. A bare (252)Cf source is used for normalisation. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
NASA Technical Reports Server (NTRS)
Marshall, C. J.; Ladbury, R.; Marshall, P. W.; Reed, R. A.; Howe, C.; Weller, B.; Mendenhall, M.; Waczynski, A.; Jordan, T. M.; Fodness, B.
2006-01-01
This paper presents a combined Monte Carlo and analytic approach to the calculation of the pixel-to-pixel distribution of proton-induced damage in a HgCdTe sensor array and compares the results to measured dark current distributions after damage by 63 MeV protons. The moments of the Coulombic, nuclear elastic and nuclear inelastic damage distributions were extracted from Monte Carlo simulations and combined to form a damage distribution using the analytic techniques first described in [1]. The calculations show that the high energy recoils from the nuclear inelastic reactions (calculated using the Monte Carlo code MCNPX [2]) produce a pronounced skewing of the damage energy distribution. The nuclear elastic component (also calculated using MCNPX) has a negligible effect on the shape of the damage distribution. The Coulombic contribution was calculated using MRED [3,4], a Geant4 [4,5] application. The comparison with the dark current distribution strongly suggests that mechanisms which are not linearly correlated with nonionizing damage produced according to collision kinematics are responsible for the observed dark current increases. This has important implications for the process of predicting the on-orbit dark current response of the HgCdTe sensor array.
NASA Astrophysics Data System (ADS)
Krása, A.; Majerle, M.; Krízek, F.; Wagner, V.; Kugler, A.; Svoboda, O.; Henzl, V.; Henzlová, D.; Adam, J.; Caloun, P.; Kalinnikov, V. G.; Krivopustov, M. I.; Stegailov, V. I.; Tsoupko-Sitnikov, V. M.
2006-05-01
Relativistic protons with energies 0.7-1.5 GeV interacting with a thick, cylindrical, lead target, surrounded by a uranium blanket and a polyethylene moderator, produced spallation neutrons. The spatial and energetic distributions of the produced neutron field were measured by the Activation Analysis Method using Al, Au, Bi, and Co radio-chemical sensors. The experimental yields of isotopes induced in the sensors were compared with Monte-Carlo calculations performed with the MCNPX 2.4.0 code.
Simulation of Charge Collection in Diamond Detectors Irradiated with Deuteron-Triton Neutron Sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Milocco, Alberto; Trkov, Andrej; Pillon, Mario
2011-12-13
Diamond-based neutron spectrometers exhibit outstanding properties such as radiation hardness, low sensitivity to gamma rays, fast response and high energy resolution. They represent a very promising application of diamonds for plasma diagnostics in fusion devices. The measured pulse height spectrum is obtained from the collection of helium and beryllium ions produced by the reactions on 12C. An original code is developed to simulate the production and the transport of charged particles inside the diamond detector. The ion transport methodology is based on the well-known TRIM code. The reactions of interest are triggered using the ENDF/B-VII.0 nuclear data for the neutron interactions on carbon. The model is implemented in the TALLYX subroutine of the MCNP5 and MCNPX codes. Measurements with diamond detectors in a ~14 MeV neutron field have been performed at the FNG (Rome, Italy) and IRMM (Geel, Belgium) facilities. The comparison of the experimental data with the simulations validates the proposed model.
Design Analysis of SNS Target Station Biological Shielding Monolith with Proton Power Uprate
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bekar, Kursat B.; Ibrahim, Ahmad M.
2017-05-01
This report documents the analysis of the dose rate in the experiment area outside the Spallation Neutron Source (SNS) target station shielding monolith with a proton beam energy of 1.3 GeV. The analysis implemented a coupled three-dimensional (3D)/two-dimensional (2D) approach that used both the Monte Carlo N-Particle eXtended (MCNPX) 3D Monte Carlo code and the Discrete Ordinates Transport (DORT) 2D deterministic code. The analysis with a proton beam energy of 1.3 GeV showed that the dose rate in continuously occupied areas on the lateral surface outside the SNS target station shielding monolith is less than 0.25 mrem/h, which complies with the SNS facility design objective. However, the methods and codes used in this analysis are out of date and unsupported, and the 2D approximation of the target shielding monolith does not accurately represent the geometry. We recommend that this analysis be updated with modern codes and libraries such as ADVANTG or SHIFT. These codes have demonstrated very high efficiency in performing full 3D radiation shielding analyses of similar and even more difficult problems.
Monte Carlo calculations of positron emitter yields in proton radiotherapy.
Seravalli, E; Robert, C; Bauer, J; Stichelbaut, F; Kurz, C; Smeets, J; Van Ngoc Ty, C; Schaart, D R; Buvat, I; Parodi, K; Verhaegen, F
2012-03-21
Positron emission tomography (PET) is a promising tool for monitoring the three-dimensional dose distribution in charged particle radiotherapy. PET imaging during or shortly after proton treatment is based on the detection of annihilation photons following the β⁺ decay of radionuclides resulting from nuclear reactions in the irradiated tissue. Therapy monitoring is achieved by comparing the measured spatial distribution of irradiation-induced β⁺ activity with the predicted distribution based on the treatment plan. The accuracy of the calculated distribution depends on the correctness of the computational models, implemented in the employed Monte Carlo (MC) codes, that describe the interactions of the charged particle beam with matter and the production of radionuclides and secondary particles. However, no well-established theoretical models exist for predicting the nuclear interactions, and so phenomenological models are typically used, based on parameters derived from experimental data. Unfortunately, the experimental data presently available are insufficient to validate such phenomenological hadronic interaction models. Hence, a comparison among the models used by the different MC packages is desirable. In this work, starting from a common geometry, we compare the performance of the MCNPX, GATE and PHITS MC codes in predicting the amount and spatial distribution of proton-induced activity, at therapeutic energies, to the already experimentally validated PET modelling based on the FLUKA MC code. In particular, we show how the amount of β⁺ emitters produced in tissue-like media depends on the physics models and cross section data used to describe the proton nuclear interactions, thus calling for future experimental campaigns aimed at supporting improvements of MC modelling for clinical application of PET monitoring. © 2012 Institute of Physics and Engineering in Medicine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cho, S; Shin, E H; Kim, J
2015-06-15
Purpose: To evaluate the shielding wall design protecting patients, staff and members of the general public from secondary neutrons, using a simple analytic solution and the MCNPX, ANISN and FLUKA codes. Methods: Analytical and code-based calculations were performed for the proton therapy facility (Sumitomo Heavy Industries, Ltd.) at Samsung Medical Center in Korea. The NCRP-144 analytical evaluation methods, which produce conservative estimates of the dose equivalent values for shielding, were used for the analytical evaluations. The radiation transport was then simulated with the Monte Carlo codes. The neutron dose at each evaluation point was obtained as the product of the simulated fluence and the neutron dose conversion coefficients introduced in ICRP-74. Results: The evaluation points in the accelerator control room and at the control room entrance are mainly influenced by the proton beam loss point. The neutron dose equivalent at the accelerator control room evaluation point is 0.651, 1.530, 0.912 and 0.943 mSv/yr, and at the entrance of the cyclotron room 0.465, 0.790, 0.522 and 0.453 mSv/yr, as calculated by the NCRP-144 formalism, ANISN, FLUKA and MCNPX, respectively. Most of the MCNPX and FLUKA results, which used the complicated geometry, were smaller than the ANISN results. Conclusion: The neutron shielding of a proton therapy facility has been evaluated by the analytic model and Monte Carlo methods. We confirmed that the shielding is adequate for the areas accessible to people when the proton facility is operated.
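The dose evaluation described in the Methods reduces to folding a per-source-particle fluence spectrum with fluence-to-dose conversion coefficients and scaling by the source term; a sketch (in a real analysis the coefficients come from the ICRP-74 tables, the numbers below are placeholders):

```python
import numpy as np

def neutron_dose_rate_pSv_s(fluence_per_source, coeff_pSv_cm2, source_rate):
    """Dose rate (pSv/s) = sum over energy groups of
    [fluence per source particle (1/cm^2)] x [conversion coefficient (pSv cm^2)],
    scaled by the source intensity (particles/s)."""
    return float(np.dot(fluence_per_source, coeff_pSv_cm2)) * source_rate

# Placeholder two-group example: a tally result per source neutron and
# made-up conversion coefficients.
rate = neutron_dose_rate_pSv_s([1.0e-6, 2.0e-6], [10.0, 100.0], 1.0e9)
```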
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shin, H; Yoon, D; Jung, J
Purpose: To suggest a tumor monitoring technique using prompt gamma rays emitted during the reaction between an antiproton and a boron nucleus, and to verify the increase in therapeutic effectiveness of antiproton boron fusion therapy using a Monte Carlo simulation code. Methods: We acquired the percentage depth dose of the antiproton beam in a water phantom with and without three boron uptake regions (regions A, B, and C) using the F6 tally of MCNPX. A tomographic image was reconstructed from the prompt gamma-ray events produced by the antiproton-boron reaction during treatment, using 32 projections (reconstruction algorithm: MLEM). For the image reconstruction, we used an 80 × 80 pixel matrix with a pixel size of 5 mm and a 10% energy window. Results: The prompt gamma-ray peak used for imaging was observed at 719 keV in the energy spectrum obtained with the F8 (energy deposition) tally of the MCNPX code. The tomographic image shows that the boron uptake regions were successfully identified in the simulation results. In the receiver operating characteristic curve analysis, the area under the curve values were 0.647 (region A), 0.679 (region B), and 0.632 (region C). The SNR values increased as the tumor diameter increased. The CNR indicates the relative signal intensity between different regions; the CNR values also increased as the diameter of the boron uptake regions increased. Conclusion: We confirmed the feasibility of tumor monitoring during antiproton therapy as well as the superior therapeutic effect of antiproton boron fusion therapy. This result can be beneficial for the development of more accurate particle therapy.
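The MLEM reconstruction named above is a multiplicative expectation-maximization update; a generic sketch for a linear system model (the system matrix here is arbitrary, not the 32-projection geometry of the study):

```python
import numpy as np

def mlem(A, y, n_iter=200):
    """ML-EM image reconstruction: x <- x * [A^T (y / A x)] / [A^T 1].
    A is the (n_measurements, n_pixels) system matrix, y the measured counts."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])            # sensitivity of each pixel
    for _ in range(n_iter):
        proj = A @ x                            # forward projection
        ratio = np.divide(y, proj, out=np.zeros_like(proj), where=proj > 0)
        x = x * (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```

The multiplicative form keeps the image non-negative, which suits count data such as prompt gamma-ray events.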
High and low energy gamma beam dump designs for the gamma beam delivery system at ELI-NP
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yasin, Zafar, E-mail: zafar.yasin@eli-np.ro; Matei, Catalin; Ur, Calin A.
The Extreme Light Infrastructure - Nuclear Physics (ELI-NP) facility is under construction in Magurele, Bucharest, Romania. The facility will use two 10 PW lasers and a high intensity, narrow bandwidth gamma beam for stand-alone and combined laser-gamma experiments. The accurate estimation of particle doses and their restriction within the limits for both personnel and the general public is very important in the design phase of any nuclear facility. In the present work, Monte Carlo simulations are performed using FLUKA and MCNPX to design 19.4 and 4 MeV gamma beam dumps along with the shielding of the experimental areas. Dose rate contour plots from both FLUKA and MCNPX, along with numerical values of doses in experimental area E8 of the facility, are presented. The calculated doses are within the permissible limits. Furthermore, the reasonable agreement between the two codes enhances our confidence in using one or both of them for future calculations for beam dump designs, radiation shielding, radioactive inventory, and other calculations related to radiation protection. Residual dose rates and residual activity calculations were also performed for the high-energy beam dump, and their effect is negligible in comparison with the contributions from prompt radiation.
Detector photon response and absorbed dose and their applications to rapid triage techniques
NASA Astrophysics Data System (ADS)
Voss, Shannon Prentice
As radiation specialists, one of our primary objectives in the Navy is protecting people and the environment from the effects of ionizing and non-ionizing radiation. Focusing on radiological dispersal devices (RDD) will provide increased personnel protection as well as optimize emergency response assets for the general public. An attack involving an RDD has been of particular concern because it is intended to spread contamination over a wide area and cause massive panic within the general population. A rapid method of triage will be necessary to segregate the unexposed and slightly exposed from those needing immediate medical treatment. Because of the aerosol dispersal of the radioactive material, inhalation may be the primary exposure route. The radionuclides most likely to be used in an RDD attack are Co-60, Cs-137, Ir-192, Sr-90 and Am-241. Through the use of the MAX phantom along with a few MATLAB/Simulink programs, an anthropomorphic phantom was created for use in MCNPX simulations that provides organ doses from internally deposited radionuclides. Ludlum model 44-9 and 44-2 detectors were used to verify the simulated dose from the MCNPX code. Based on the results, acute dose rate limits were developed for emergency response personnel to assist in patient triage.
NASA Technical Reports Server (NTRS)
Marshall, C. J.; Marshall, P. W.; Howe, C. L.; Reed, R. A.; Weller, R. A.; Mendenhall, M.; Waczynski, A.; Ladbury, R.; Jordan, T. M.
2007-01-01
This paper presents a combined Monte Carlo and analytic approach to the calculation of the pixel-to-pixel distribution of proton-induced damage in a HgCdTe sensor array and compares the results to measured dark current distributions after damage by 63 MeV protons. The moments of the Coulombic, nuclear elastic and nuclear inelastic damage distributions were extracted from Monte Carlo simulations and combined to form a damage distribution using the analytic techniques first described in [1]. The calculations show that the high energy recoils from the nuclear inelastic reactions (calculated using the Monte Carlo code MCNPX [2]) produce a pronounced skewing of the damage energy distribution. While the nuclear elastic component (also calculated using MCNPX) contributes only a small fraction of the total nonionizing damage energy, its inclusion significantly affects the shape of the damage distribution across the array. The Coulombic contribution was calculated using MRED [3-5], a Geant4 [4,6] application. The comparison with the dark current distribution strongly suggests that mechanisms which are not linearly correlated with nonionizing damage produced according to collision kinematics are responsible for the observed dark current increases. This has important implications for the process of predicting the on-orbit dark current response of the HgCdTe sensor array.
Monte Carlo modeling of a conventional X-ray computed tomography scanner for gel dosimetry purposes.
Hayati, Homa; Mesbahi, Asghar; Nazarpoor, Mahmood
2016-01-01
Our purpose in the current study was to model an X-ray CT scanner with the Monte Carlo (MC) method for gel dosimetry. In this study, a conventional CT scanner with one array detector was modeled with use of the MCNPX MC code. The MC calculated photon fluence in detector arrays was used for image reconstruction of a simple water phantom as well as polyacrylamide polymer gel (PAG) used for radiation therapy. Image reconstruction was performed with the filtered back-projection method with a Hann filter and the Spline interpolation method. Using MC results, we obtained the dose-response curve for images of irradiated gel at different absorbed doses. A spatial resolution of about 2 mm was found for our simulated MC model. The MC-based CT images of the PAG gel showed a reliable increase in the CT number with increasing absorbed dose for the studied gel. Also, our results showed that the current MC model of a CT scanner can be used for further studies on the parameters that influence the usability and reliability of results, such as the photon energy spectra and exposure techniques in X-ray CT gel dosimetry.
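Filtered back-projection with a Hann-apodized ramp filter, as used above, can be sketched for parallel-beam data (the interpolation here is linear rather than the spline interpolation of the study, and the geometry is simplified to one detector sample per image pixel):

```python
import numpy as np

def hann_ramp(n_det):
    """Ramp filter |f| apodized by a Hann window that falls to zero at Nyquist."""
    f = np.fft.fftfreq(n_det)
    return np.abs(f) * 0.5 * (1.0 + np.cos(np.pi * f / 0.5))

def fbp(sinogram, angles_rad):
    """Filtered back-projection of parallel-beam data shaped (n_angles, n_det)."""
    n_angles, n_det = sinogram.shape
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * hann_ramp(n_det), axis=1))
    xs = np.arange(n_det) - n_det / 2.0
    X, Y = np.meshgrid(xs, xs)
    recon = np.zeros((n_det, n_det))
    for proj, th in zip(filtered, angles_rad):
        s = X * np.cos(th) + Y * np.sin(th) + n_det / 2.0   # detector coordinate
        recon += np.interp(s.ravel(), np.arange(n_det), proj,
                           left=0.0, right=0.0).reshape(n_det, n_det)
    return recon * np.pi / n_angles                          # d(theta) weight
```

Feeding it the analytic projections of a uniform disk (chord length 2·sqrt(r² − s²)) reconstructs a roughly unit-density disk, a standard check before trusting the filter scaling.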
Preliminary Monte Carlo calculations for the UNCOSS neutron-based explosive detector
NASA Astrophysics Data System (ADS)
Eleon, C.; Perot, B.; Carasco, C.
2010-07-01
The goal of the FP7 UNCOSS project (Underwater Coastal Sea Surveyor) is to develop a non-destructive explosive detection system based on the associated particle technique, with a view to improving the security of coastal areas and naval infrastructures where violent conflicts took place. The end product of the project will be a prototype of a complete coastal survey system, including a neutron-based sensor capable of confirming the presence of explosives on the sea bottom. A 3D analysis of prompt gamma rays induced by 14 MeV neutrons will be performed to identify the elements constituting common military explosives, such as C, N and O. This paper presents calculations performed with the MCNPX computer code to support the ongoing design studies of the UNCOSS collaboration. Detection efficiencies and the time and energy resolutions of the candidate gamma-ray detectors are compared, showing that NaI(Tl) or LaBr3(Ce) scintillators will be suitable for this application. The effect of neutron attenuation and scattering in the seawater, which influences the counting statistics and signal-to-noise ratio, is also studied with calculated neutron time-of-flight and gamma-ray spectra for an underwater TNT target.
Active-Interrogation Measurements of Induced-Fission Neutrons from Low-Enriched Uranium
DOE Office of Scientific and Technical Information (OSTI.GOV)
J. L. Dolan; M. J. Marcath; M. Flaska
2012-07-01
Protection and control of nuclear fuels is paramount for nuclear security and safeguards; therefore, it is important to develop fast and robust control mechanisms to ensure the safety of nuclear fuels. Through both passive- and active-interrogation methods we can use fast-neutron detection to perform real-time measurements of fission neutrons for process monitoring. Active interrogation allows us to use different ranges of incident neutron energy to probe for different isotopes of uranium. With fast-neutron detectors, such as organic liquid scintillation detectors, we can detect the induced-fission neutrons and photons and work towards quantifying a sample's mass and enrichment. Using MCNPX-PoliMi, a system was designed to measure induced-fission neutrons from U-235 and U-238. Measurements were then performed in the summer of 2010 at the Joint Research Centre in Ispra, Italy. Fissions were induced with an associated-particle D-T generator and an isotopic Am-Li source. The fission neutrons, as well as neutrons from (n,2n) and (n,3n) reactions, were measured with five 5" by 5" EJ-309 organic liquid scintillators. The D-T neutron generator was available as part of a measurement campaign in place by Padova University. The measurement and data-acquisition systems were developed at the University of Michigan, using a CAEN V1720 digitizer and pulse-shape discrimination algorithms to differentiate neutron and photon detections. Low-enriched uranium samples of varying mass and enrichment were interrogated. The acquired time-of-flight and cross-correlation curves are currently being analyzed to draw relationships between detected neutrons and sample mass and enrichment. In the full paper, the promise of active-interrogation measurements and fast-neutron detection will be assessed through the example of this proof-of-concept measurement campaign. Additionally, MCNPX-PoliMi simulation results will be compared to the measured data to validate the MCNPX-PoliMi code when used for active-interrogation simulations.
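The digitizer-based pulse-shape discrimination mentioned above is commonly implemented as a tail-to-total charge-integration ratio: neutron pulses in organic scintillators carry more light in the slow decay component, so their delayed-window charge fraction is larger. The sketch below uses hypothetical window lengths and a hypothetical threshold, not the actual parameters of the Michigan acquisition system:

```python
import numpy as np

def psd_ratio(pulse, baseline_samples=20, tail_start=10):
    """Tail-to-total charge-integration PSD ratio for a digitized pulse.

    pulse: 1-D array of ADC samples (positive-going). The integration
    windows relative to the pulse peak are illustrative choices.
    """
    baseline = pulse[:baseline_samples].mean()
    p = pulse - baseline
    peak = int(np.argmax(p))
    total = p[peak - 5 : peak + 60].sum()          # full integration window
    tail = p[peak + tail_start : peak + 60].sum()  # delayed (tail) window
    return tail / total if total > 0 else 0.0

def classify(pulse, threshold=0.2):
    """Neutron recoils excite more of the slow scintillation component,
    giving a larger tail fraction than photon (electron) events."""
    return "neutron" if psd_ratio(pulse) > threshold else "photon"
```

In practice the threshold is set from a measured PSD-versus-energy scatter plot rather than fixed a priori.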
Remanent Activation in the Mini-SHINE Experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Micklich, Bradley J.
2015-04-16
Argonne National Laboratory is assisting SHINE Medical Technologies in developing a domestic source of the medical isotope 99Mo through the fission of low-enrichment uranium in a uranyl sulfate solution. In Phase 2 of these experiments, electrons from a linear accelerator create neutrons by interacting in a depleted uranium target, and these neutrons are used to irradiate the solution. The resulting neutron and photon radiation activates the target, the solution vessels, and a shielded cell that surrounds the experimental apparatus. When the experimental campaign is complete, the target must be removed into a shielding cask, and the experimental components must be disassembled. The radiation transport code MCNPX and the transmutation code CINDER were used to calculate the radionuclide inventories of the solution, the target assembly, and the shielded cell, and to determine the dose rates and shielding requirements for selected removal scenarios for the target assembly and the solution vessels.
NASA Astrophysics Data System (ADS)
Gómez-Ros, J. M.; Bedogni, R.; Moraleda, M.; Delgado, A.; Romero, A.; Esposito, A.
2010-01-01
This communication describes an improved design for a neutron spectrometer consisting of 6Li thermoluminescent dosemeters located at selected positions within a single moderating polyethylene sphere. The spatial arrangement of the dosemeters has been designed using the MCNPX Monte Carlo code to calculate the response matrix for 56 log-equidistant energies from 10^-9 to 100 MeV, looking for a configuration that yields a nearly isotropic response for neutrons in the energy range from thermal to 20 MeV. The feasibility of the proposed spectrometer and the isotropy of its response have been evaluated by simulating exposures to different reference and workplace neutron fields. The FRUIT code has been used for unfolding purposes. The results of the simulations as well as the experimental tests confirm the suitability of the prototype for environmental and workplace monitoring applications.
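The 56-point log-equidistant energy grid and the role of a response matrix in unfolding can be sketched as follows; the number of dosemeter positions and the matrix entries below are placeholders, not the published response data:

```python
import numpy as np

# 56 energies equally spaced in log10(E), from 1e-9 MeV (thermal) to 100 MeV
energies = np.logspace(-9, 2, 56)  # MeV

# A response matrix maps a group-fluence spectrum (56 bins) onto the readings
# of the dosemeter positions: readings = R @ fluence. An unfolding code such
# as FRUIT inverts this relation given measured readings.
n_positions = 6                               # hypothetical number of TLD positions
rng = np.random.default_rng(0)
R = rng.random((n_positions, energies.size))  # placeholder response matrix
fluence = np.ones(energies.size)              # placeholder unit spectrum
readings = R @ fluence
```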
Correlated prompt fission data in transport simulations
Talou, P.; Vogt, R.; Randrup, J.; Rising, M. E.; Pozzi, S. A.; Verbeke, J.; Andrews, M. T.; Clarke, S. D.; Jaffke, P.; Jandel, M.; Kawano, T.; Marcath, M. J.; Meierbachtol, K.; Nakae, L.; Rusev, G.; Sood, A.; Stetcu, I.; Walker, C.
2018-01-24
Detailed information on the fission process can be inferred from the observation, modeling and theoretical understanding of prompt fission neutron and γ-ray observables. Beyond simple average quantities, the study of distributions and correlations in prompt data, e.g., multiplicity-dependent neutron and γ-ray spectra, angular distributions of the emitted particles, and n-n, n-γ, and γ-γ correlations, can place stringent constraints on fission models and parameters that would otherwise be free to be tuned separately to represent individual fission observables. The FREYA and CGMF codes have been developed to follow the sequential emissions of prompt neutrons and γ rays from the initial excited fission fragments produced right after scission. Both codes implement Monte Carlo techniques to sample initial fission fragment configurations in mass, charge and kinetic energy and sample probabilities of neutron and γ emission at each stage of the decay. This approach naturally leads to using simple but powerful statistical techniques to infer distributions and correlations among many observables and model parameters. The comparison of model calculations with experimental data provides a rich arena for testing various nuclear physics models such as those related to the nuclear structure and level densities of neutron-rich nuclei, the γ-ray strength functions of dipole and quadrupole transitions, the mechanism for dividing the excitation energy between the two nascent fragments near scission, and the mechanisms behind the production of angular momentum in the fragments. Beyond the obvious interest from a fundamental physics point of view, such studies are also important for addressing data needs in various nuclear applications. 
The inclusion of the FREYA and CGMF codes into the MCNP6.2 and MCNPX-PoliMi transport codes, for instance, provides a new and powerful tool to simulate correlated fission events in neutron transport calculations important in nonproliferation, safeguards, nuclear energy, and defense programs. This review provides an overview of the topic, starting from theoretical considerations of the fission process, with a focus on correlated signatures. It then explores the status of experimental correlated fission data and current efforts to address some of the known shortcomings. Numerical simulations employing the FREYA and CGMF codes are compared to experimental data for a wide range of correlated fission quantities. The inclusion of those codes into the MCNP6.2 and MCNPX-PoliMi transport codes is described and discussed in the context of relevant applications. The accuracy of the model predictions and their sensitivity to model assumptions and input parameters are discussed. Finally, a series of important experimental and theoretical questions that remain unanswered are presented, suggesting a renewed effort to address these shortcomings.
NASA Astrophysics Data System (ADS)
Vermeeren, Ludo; Leysen, Willem; Brichard, Benoit
2018-01-01
Mineral-insulated (MI) cables and Low-Temperature Co-fired Ceramic (LTCC) magnetic pick-up coils are intended to be installed in various positions in ITER. The severe ITER nuclear radiation field is expected to lead to induced currents that could perturb diagnostic measurements. In order to assess this problem and to find mitigation strategies, models were developed for the calculation of neutron- and gamma-induced currents in MI cables and in LTCC coils. The models are based on calculations with the MCNPX code, combined with a dedicated model for the drift of electrons stopped in the insulator. The gamma-induced currents can be easily calculated with a single coupled photon-electron MCNPX calculation. The prompt neutron-induced currents require only a single coupled neutron-photon-electron MCNPX run. The various delayed neutron contributions require a careful analysis of all possibly relevant neutron-induced reaction paths and a combination of different types of MCNPX calculations. The models were applied to a specific twin-core copper MI cable, to one quad-core copper cable and to silver-conductor LTCC coils (one with silver ground plates in order to reduce the currents and one without such silver ground plates). Calculations were performed for irradiation conditions (neutron and gamma spectra and fluxes) in relevant positions in ITER and in the Y3 irradiation channel of the BR1 reactor at SCK•CEN, in which an irradiation test of these four test devices was subsequently carried out. We will present the basic elements of the models and show the results of all relevant partial currents (gamma- and neutron-induced, prompt and various delayed currents) under BR1-Y3 conditions. Experimental data will be shown and analysed in terms of the respective contributions. The tests were performed at reactor powers of 350 kW and 1 MW, leading to thermal neutron fluxes of 1×10^11 n/cm^2/s and 3×10^11 n/cm^2/s, respectively. 
The corresponding total radiation-induced currents range from only 1 to 7 nA, which poses a challenge for the acquisition system and the data analysis. The detailed experimental results will be compared with the corresponding values predicted by the model. The overall agreement between the experimental data and the model predictions is fairly good, with very consistent data for the main delayed current components, while the lower-amplitude delayed currents and some of the prompt contributions show minor discrepancies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Plaschy, M.; Murphy, M.; Jatuff, F.
2006-07-01
The PROTEUS research reactor at the Paul Scherrer Institute (PSI) has been operating since the 1960s and, owing to its high flexibility, has already permitted the investigation of a large range of very different nuclear systems. The ongoing experimental programme, called LWR-PROTEUS, was started in 1997 and concerns large-scale investigations of advanced light water reactor (LWR) fuels. Until now, the different LWR-PROTEUS phases have permitted the study of more than fifteen different configurations, each of which had to be demonstrated to be operationally safe, in particular for the Swiss safety authorities. In this context, recent developments of the PSI computing capabilities have made possible the use of full-scale 3D-heterogeneous MCNPX models to accurately calculate different safety-related parameters (e.g. the critical driver loading and the shutdown rod worth). The current paper presents the MCNPX predictions of these operational characteristics for seven different LWR-PROTEUS configurations using a large number of nuclear data libraries. More specifically, this significant benchmarking exercise is based on the ENDF/B6v2, ENDF/B6v8, JEF2.2, JEFF3.0, JENDL3.2, and JENDL3.3 libraries. The results highlight certain library-specific trends in the prediction of the multiplication factor k_eff (e.g. the systematically larger reactivity calculated with JEF2.2 and the smaller reactivity associated with JEFF3.0). They also confirm the satisfactory determination of reactivity variations by all calculational schemes, for instance those due to the introduction of a safety rod pair, these calculations having been compared with experiments.
NASA Astrophysics Data System (ADS)
Lis, M.; Gómez-Ros, J. M.; Bedogni, R.; Delgado, A.
2008-01-01
The design of a neutron detector with spectrometric capability based on thermoluminescent (TL) 6LiF:Mg,Ti (TLD-600) dosimeters located along three perpendicular axes within a single polyethylene (PE) sphere has been analyzed. The neutron response functions have been calculated in the energy range from 10^-8 to 100 MeV with the Monte Carlo (MC) code MCNPX 2.5, and their shape and behaviour have been used to identify a suitable configuration for an actual instrument. The feasibility of such a device has been preliminarily evaluated by simulating exposure to 241Am-Be, bare 252Cf and Fe-PE-moderated 252Cf sources. The expected accuracy in the evaluation of energy quantities has been assessed using the unfolding code FRUIT. The obtained results, together with additional calculations performed using the MAXED and GRAVEL codes, show the spectrometric capability of the proposed design for radiation protection applications, especially in the range from 1 keV to 20 MeV.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wahanani, Nursinta Adi, E-mail: sintaadi@batan.go.id; Natsir, Khairina; Hartini, Entin
Data processing software packages such as VSOP and MCNPX are scientifically proven and complete. The outputs of VSOP and MCNPX are huge and complex text files, so in the analysis process users need additional tools, such as Microsoft Excel, to present informative results. This research develops a user-interface software for the output of VSOP and MCNPX. The VSOP program output is used to support neutronic analysis and the MCNPX program output is used to support burn-up analysis. The software was developed using iterative development methods, which allow for the revision and addition of features according to user needs. Processing time with this software is 500 times faster than with conventional methods using Microsoft Excel. Python is used as the programming language, because Python is available for all major operating systems: Windows, Linux/Unix, OS/2, Mac, Amiga, among others. Values that support neutronic analysis are k-eff, burn-up, and the masses of Pu-239 and Pu-241. The burn-up analysis uses the mass inventory values of the actinides (thorium, plutonium, neptunium and uranium). Values are visualized in graphical form to support the analysis.
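A minimal sketch of the kind of post-processing described, extracting a k-eff value and its standard deviation from a text listing. The line format and regular expression here are illustrative assumptions, not the actual VSOP or MCNPX output syntax, which should be checked against real listings:

```python
import re

# Illustrative pattern: "keff = <value> with ... deviation of <value>"
KEFF_RE = re.compile(r"keff\s*=\s*([0-9.]+)\s+with .* deviation of\s+([0-9.]+)")

def extract_keff(text):
    """Return (keff, std_dev) from an MCNPX-style output listing,
    or None if no matching line is found."""
    for line in text.splitlines():
        m = KEFF_RE.search(line)
        if m:
            return float(m.group(1)), float(m.group(2))
    return None

# Hypothetical output line for demonstration
sample = "final estimated keff = 1.00123 with an estimated standard deviation of 0.00045"
```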
Development and validation of MCNPX-based Monte Carlo treatment plan verification system
Jabbari, Iraj; Monadi, Shahram
2015-01-01
A Monte Carlo treatment plan verification (MCTPV) system was developed for clinical treatment plan verification (TPV), especially for conformal and intensity-modulated radiotherapy (IMRT) plans. In the MCTPV, the MCNPX code was used for particle transport through the accelerator head and the patient body. MCTPV has an interface with the TiGRT planning system and reads the information needed for the Monte Carlo calculation, transferred in digital imaging and communications in medicine-radiation therapy (DICOM-RT) format. In MCTPV several methods were applied in order to reduce the simulation time. The relative dose distribution of a clinical prostate conformal plan calculated by the MCTPV was compared with that of the TiGRT planning system. The results showed that the beam configurations and patient information were well implemented in this system. For quantitative evaluation of MCTPV, a two-dimensional (2D) diode array (MapCHECK2) and gamma index analysis were used. The gamma passing rate (3%/3 mm) of an IMRT plan was found to be 98.5% over all beams. Also, comparison of the measured and Monte Carlo calculated doses at several points inside an inhomogeneous phantom for 6- and 18-MV photon beams showed good agreement (within 1.5%). The accuracy and timing results showed that MCTPV could be used very efficiently for additional assessment of complicated plans such as IMRT plans. PMID:26170554
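The gamma index analysis mentioned above can be illustrated with a simplified one-dimensional, global-normalization version. Clinical gamma analysis is at least two-dimensional and uses interpolated dose grids; this is only a sketch of the underlying criterion:

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, positions, dose_tol=0.03, dist_tol=3.0):
    """Global 1-D gamma analysis (3%/3 mm by default); returns pass rate in %.

    dose_ref / dose_eval: dose values on the same 1-D grid; positions in mm.
    The dose criterion is relative to the reference maximum (global norm).
    """
    d_norm = dose_tol * dose_ref.max()
    gammas = []
    for x, d in zip(positions, dose_ref):
        dd = (dose_eval - d) / d_norm        # dose difference term
        dx = (positions - x) / dist_tol      # distance-to-agreement term
        gammas.append(np.sqrt(dd**2 + dx**2).min())
    gammas = np.asarray(gammas)
    return 100.0 * np.mean(gammas <= 1.0)    # points with gamma <= 1 pass
```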
Rogers, Jeremy; Marianno, Craig; Kallenbach, Gene; ...
2016-06-01
Calibration sources based on the primordial isotope potassium-40 ( 40K) have reduced controls on the source's activity due to its terrestrial ubiquity and very low specific activity. Potassium-40's beta emissions and 1,460.8 keV gamma ray can be used to induce K-shell fluorescence x rays in high-Z metals between 60 and 80 keV. A gamma ray calibration source that uses potassium chloride salt and a high-Z metal to create a two-point calibration for a sodium iodide field gamma spectroscopy instrument is thus proposed. The calibration source was designed in collaboration with the Sandia National Laboratory using the Monte Carlo N-Particle eXtended (MCNPX) transport code. Two methods of x-ray production were explored. First, a thin high-Z layer (HZL) was interposed between the detector and the potassium chloride-urethane source matrix. Second, bismuth metal powder was homogeneously mixed with a urethane binding agent to form a potassium chloride-bismuth matrix (KBM). The bismuth-based source was selected as the development model because it is inexpensive, nontoxic, and outperforms the high-Z layer method in simulation. As a result, based on the MCNPX studies, sealing a mixture of bismuth powder and potassium chloride into a thin plastic case could provide a light, inexpensive field calibration source.
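The two-point calibration described above (the bismuth K x-ray blend near 77 keV plus the 1460.8 keV 40K line) amounts to a linear channel-to-energy fit. A minimal sketch follows; the channel centroids and the 77 keV x-ray energy used here are hypothetical, illustrative values:

```python
def linear_calibration(ch1, e1, ch2, e2):
    """Two-point linear energy calibration: E(ch) = gain * ch + offset."""
    gain = (e2 - e1) / (ch2 - ch1)
    offset = e1 - gain * ch1
    return gain, offset

# Hypothetical channel centroids for the Bi K x-ray blend and the 40K line
gain, offset = linear_calibration(52.0, 77.0, 980.0, 1460.8)

def energy(ch):
    """Convert a channel number to energy in keV using the two-point fit."""
    return gain * ch + offset
```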
Narrow beam neutron dosimetry.
Ferenci, M Sutton
2004-01-01
Organ and effective doses have been estimated for male and female anthropomorphic mathematical models exposed to monoenergetic narrow beams of neutrons with energies from 10^-11 to 1000 MeV. Calculations were performed for anterior-posterior, posterior-anterior, left-lateral and right-lateral irradiation geometries. The beam diameter used in the calculations was 7.62 cm and the phantoms were irradiated at a height of 1 m above the ground. This geometry was chosen to simulate an accidental scenario (a worker walking through the beam) at Flight Path 30 Left (FP30L) of the Weapons Neutron Research (WNR) Facility at Los Alamos National Laboratory. The calculations were carried out using the Monte Carlo transport code MCNPX 2.5c.
Jovanovic, Z; Krstic, D; Nikezic, D; Ros, J M Gomez; Ferrari, P
2018-03-01
Monte Carlo simulations were performed to evaluate treatment doses for the widely used radionuclides 133Xe, 99mTc and 81mKr. These radionuclides are used in perfusion or ventilation examinations in nuclear medicine and as indicators for cardiovascular and pulmonary diseases. The objective of this work was to estimate the specific absorbed fractions in surrounding organs and tissues when these radionuclides are incorporated in the lungs. For this purpose a voxel thorax model has been developed and compared with the ORNL phantom. All calculations and simulations were performed by means of the MCNP5/X code.
Neutron H*(10) estimation and measurements around 18MV linac.
Cerón Ramírez, Pablo Víctor; Díaz Góngora, José Antonio Irán; Paredes Gutiérrez, Lydia Concepción; Rivera Montalvo, Teodoro; Vega Carrillo, Héctor René
2016-11-01
Thermoluminescent dosimetry, analytical techniques and Monte Carlo calculations were used to estimate the dose of neutron radiation in a treatment room with an 18 MV linear electron accelerator. Measurements were carried out with neutron ambient dose monitors comprising pairs of thermoluminescent dosimeters, TLD 600 (6LiF:Mg,Ti) and TLD 700 (7LiF:Mg,Ti), placed inside paraffin spheres. The measurements allowed the use of the NCRP Report No. 151 equations, which are useful for deriving the relevant dosimetric quantities. In addition, the photoneutrons produced by the linac head were calculated with the MCNPX code, taking into account the geometry and composition of the principal parts of the linac head.
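The TLD 600/TLD 700 pair method works because 6LiF responds to both thermal neutrons and photons, while 7LiF responds essentially to photons only, so the difference of the paired readings isolates the neutron component. A minimal sketch; the correction factor k for any photon-sensitivity mismatch is a hypothetical calibration parameter:

```python
def neutron_signal(tld600_reading, tld700_reading, k=1.0):
    """Paired-dosimeter neutron signal.

    tld600_reading: 6LiF response (thermal neutrons + photons)
    tld700_reading: 7LiF response (photons only, to good approximation)
    k: hypothetical factor matching the photon sensitivities of the pair
    """
    return tld600_reading - k * tld700_reading
```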
Measurement of thermal neutrons reflection coefficients for two-layer reflectors.
Azimkhani, S; Zolfagharpour, F; Ziaie, F
2018-05-01
In this research, thermal neutron albedo coefficients and the relative number of excess counts have been measured experimentally for different thicknesses of two-layer reflectors, using a 241Am-Be neutron source (5.2 Ci) and a BF3 detector. The reflectors are two-layer combinations of water, graphite, polyethylene, and lead. The experimental results reveal that the thermal neutron reflection coefficients increase slightly with the addition of the second layer. The maximum thermal neutron albedo is obtained for the lead-polyethylene combination (0.72 ± 0.01). There is also good agreement between the experimental values and simulation results obtained with the MCNPX code.
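The "relative number of excess counts" can be sketched as the fractional increase in detector counts when the reflector is added. The Poisson uncertainty propagation shown is a standard counting-statistics assumption, not a formula taken from the paper:

```python
import math

def relative_excess(counts_with, counts_without):
    """Relative number of excess counts produced by adding the reflector."""
    return (counts_with - counts_without) / counts_without

def excess_uncertainty(counts_with, counts_without):
    """1-sigma uncertainty of the relative excess, assuming Poisson statistics.

    The excess is C_with/C_without - 1, so its variance equals that of the
    ratio C_with/C_without.
    """
    ratio = counts_with / counts_without
    return ratio * math.sqrt(1.0 / counts_with + 1.0 / counts_without)
```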
New thermal neutron calibration channel at LNMRI/IRD
NASA Astrophysics Data System (ADS)
Astuto, A.; Patrão, K. C. S.; Fonseca, E. S.; Pereira, W. W.; Lopes, R. T.
2016-07-01
A new standard thermal neutron flux unit was designed at the National Ionizing Radiation Metrology Laboratory (LNMRI) for the calibration of neutron detectors. Fluence is achieved by moderation of four 241Am-Be sources, of 0.6 TBq each, in a facility built from graphite and paraffin blocks. The study was divided into two stages. First, simulations were performed using the MCNPX code for different geometric arrangements, seeking the best performance in terms of fluence and its uncertainty. Then, the system was assembled based on the simulation results. The simulation results indicate a quasi-homogeneous fluence in the central chamber and give H*(10) at 50 cm from the front face with the polyethylene filter.
Spallation yield of neutrons produced in thick lead target bombarded with 250 MeV protons
NASA Astrophysics Data System (ADS)
Chen, L.; Ma, F.; Zhang, X. Y.; Ju, Y. Q.; Zhang, H. B.; Ge, H. L.; Wang, J. G.; Zhou, B.; Li, Y. Y.; Xu, X. W.; Luo, P.; Yang, L.; Zhang, Y. B.; Li, J. Y.; Xu, J. K.; Liang, T. J.; Wang, S. L.; Yang, Y. W.; Gu, L.
2015-01-01
The neutron yield from a thick Pb target irradiated with 250 MeV protons has been studied experimentally. The neutron production was measured with the water-bath gold method. The thermal neutron distributions in the water were determined from the measured activities of Au foils. Corresponding results calculated with the Monte Carlo code MCNPX were compared with the experimental data. It was found that the Au foils with cadmium covers significantly changed the spatial distribution of the thermal neutron field. The corrected neutron yield was deduced to be 2.23 ± 0.19 n/proton by considering the influence of the Cd covers on the thermal neutron flux.
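The water-bath gold method infers the thermal flux from the saturation activity of activated Au foils; a cadmium-difference sketch is shown below. The decay-correction factors and the cross-section value are textbook approximations, and a real analysis includes self-shielding and Cd-transmission corrections that are omitted here:

```python
import math

SIGMA_AU = 98.65e-24   # 197Au(n,gamma) thermal cross section, cm^2 (approx.)
N_A = 6.02214076e23    # Avogadro constant, 1/mol
M_AU = 196.9666        # molar mass of gold, g/mol

def saturation_activity(net_counts, eff, t_irr, t_cool, t_count, lam):
    """Saturation activity (Bq) of a foil from its measured net counts.

    Corrects for activity build-up during irradiation, decay during
    cooling, and decay during the counting interval.
    """
    growth = 1.0 - math.exp(-lam * t_irr)
    decay = math.exp(-lam * t_cool)
    counting = (1.0 - math.exp(-lam * t_count)) / lam
    return net_counts / (eff * growth * decay * counting)

def thermal_flux(a_sat_bare, a_sat_cd, mass_g):
    """Thermal flux (n/cm^2/s) from a bare / Cd-covered foil pair.

    The cadmium difference removes the epithermal activation component.
    """
    n_atoms = mass_g / M_AU * N_A
    return (a_sat_bare - a_sat_cd) / (n_atoms * SIGMA_AU)
```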
Extraterrestrial Studies Using Nuclear Interactions
NASA Technical Reports Server (NTRS)
Reedy, Robert C.
2003-01-01
Cosmogenic nuclides were used to study the recent histories of the aubrite Norton County and the pallasite Brenham using calculated production rates. Calculations were done of the production rates of cosmogenic noble-gas isotopes in the Jovian satellite Europa from the interactions of galactic cosmic rays and especially trapped Jovian protons. Cross sections for the production of cosmogenic nuclides were reported and plans made to measure additional cross sections. A new code, MCNPX, was used to numerically simulate the interactions of cosmic rays with matter and the subsequent production of cosmogenic nuclides. A review was written about studies of extraterrestrial matter using cosmogenic radionuclides. Several other projects were done. Results are reviewed here with references to my recent publications for details.
A methodology to develop computational phantoms with adjustable posture for WBC calibration
NASA Astrophysics Data System (ADS)
Ferreira Fonseca, T. C.; Bogaerts, R.; Hunt, John; Vanhavere, F.
2014-11-01
A Whole Body Counter (WBC) is a facility to routinely assess the internal contamination of exposed workers, especially in the case of radiation release accidents. The calibration of the counting device is usually done using anthropomorphic physical phantoms representing the human body. Because constructing representative physical phantoms is challenging, virtual calibration has been introduced: the use of computational phantoms and the Monte Carlo method to simulate radiation transport has been demonstrated to be a worthy alternative. In this study we introduce a methodology for the creation of realistic computational voxel phantoms with adjustable posture for WBC calibration. The methodology makes use of different software packages to enable the creation and modification of computational voxel phantoms. This allows voxel phantoms to be developed on demand for the calibration of different WBC configurations, which in turn helps to study the major source of uncertainty associated with the in vivo measurement routine: the difference between the calibration phantoms and the real persons being counted. The use of realistic computational phantoms also helps the optimization of the counting measurement. Open source codes such as the MakeHuman and Blender software packages have been used for the creation and modelling of 3D humanoid characters based on polygonal mesh surfaces. Also, home-made software was developed to convert the binary 3D voxel grid into an MCNPX input file. This paper summarizes the development of a library of phantoms of the human body that uses two basic phantoms, called MaMP and FeMP (Male and Female Mesh Phantoms), to create sets of male and female phantoms that vary both in height and in weight. Two sets of MaMP and FeMP phantoms were developed and used for efficiency calibration of two different WBC set-ups: the Doel NPP WBC laboratory and the AGM laboratory of SCK-CEN in Mol, Belgium.
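The conversion of a voxel grid into an MCNPX input can be sketched by emitting the universe number of each voxel as a lattice FILL list, compressing runs of identical universes with the nR repeat syntax. This is an illustrative sketch under those assumptions, not the authors' home-made converter:

```python
import numpy as np

def fill_card(voxels):
    """Emit the universe numbers of a 3-D voxel grid as an MCNPX FILL list.

    voxels: integer array indexed [i, j, k]; MCNPX expects the first lattice
    index to vary fastest, so the grid is flattened in Fortran order.
    Runs of identical universes are compressed as 'u nR' (u repeated n more
    times), the MCNP input repeat shorthand.
    """
    flat = voxels.flatten(order="F")  # i fastest, then j, then k
    tokens, run_val, run_len = [], int(flat[0]), 1
    for v in flat[1:]:
        v = int(v)
        if v == run_val:
            run_len += 1
        else:
            tokens.append(f"{run_val} {run_len - 1}R" if run_len > 1 else str(run_val))
            run_val, run_len = v, 1
    tokens.append(f"{run_val} {run_len - 1}R" if run_len > 1 else str(run_val))
    return " ".join(tokens)
```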
Prompt gamma neutron activation analysis of toxic elements in radioactive waste packages.
Ma, J-L; Carasco, C; Perot, B; Mauerhofer, E; Kettler, J; Havenith, A
2012-07-01
The French Alternative Energies and Atomic Energy Commission (CEA) and the National Radioactive Waste Management Agency (ANDRA) are conducting an R&D program to improve the characterization of long-lived, medium-activity (LL-MA) radioactive waste packages. In particular, the amount of toxic elements present in radioactive waste packages must be assessed before they can be accepted in repository facilities, in order to avoid pollution of underground water reserves. To this aim, the Nuclear Measurement Laboratory of CEA-Cadarache has started to study the performance of Prompt Gamma Neutron Activation Analysis (PGNAA) for elements with large capture cross sections, such as mercury, cadmium, boron, and chromium. This paper reports a comparison between Monte Carlo calculations performed with the MCNPX computer code using the ENDF/B-VII.0 library and experimental gamma rays measured in the REGAIN PGNAA cell with small samples of nickel, lead, cadmium, arsenic, antimony, chromium, magnesium, zinc, boron, and lithium, to verify the validity of a numerical model and of the gamma-ray production data. The measurement of a ∼20 kg test sample of concrete containing toxic elements has also been performed, in collaboration with Forschungszentrum Jülich, to validate the model in view of future performance studies for dense and large LL-MA waste packages. Copyright © 2012 Elsevier Ltd. All rights reserved.
Benchmark Analysis of Pion Contribution from Galactic Cosmic Rays
NASA Technical Reports Server (NTRS)
Aghara, Sukesh K.; Blattnig, Steve R.; Norbury, John W.; Singleterry, Robert C., Jr.
2008-01-01
Shielding strategies for extended stays in space must include a comprehensive resolution of the secondary radiation environment inside the spacecraft induced by the primary, external radiation. The distribution of absorbed dose and dose equivalent is a function of the type, energy and population of these secondary products. A systematic verification and validation effort is underway for HZETRN, a space radiation transport code currently used by NASA. It performs neutron, proton and heavy-ion transport explicitly, but it does not take into account the production and transport of mesons, photons and leptons. The question naturally arises as to what the contribution of these particles to space radiation is. The pion has a production kinetic energy threshold of about 280 MeV. The Galactic cosmic ray (GCR) spectra, coincidentally, reach their flux maxima in the hundreds-of-MeV range, corresponding to the pion production threshold. We present results from the Monte Carlo code MCNPX showing the effect of lepton and meson physics when these particles are produced and transported explicitly in a GCR environment.
NASA Astrophysics Data System (ADS)
Yu, Q. Z.; Liang, T. J.
2018-06-01
China Spallation Neutron Source (CSNS) is scheduled to begin operation in 2018. CSNS is an accelerator-based multidisciplinary user facility. The pulsed neutrons are produced by a 1.6 GeV short-pulsed proton beam impinging on a W-Ta spallation target, at a beam power of 100 kW and a repetition rate of 25 Hz. Twenty neutron beam lines are extracted for neutron scattering and neutron irradiation research. During commissioning and maintenance scenarios, the gamma rays induced in the W-Ta target can pose a dose threat to personnel and the environment. In this paper, the gamma dose rate distributions for the W-Ta spallation target are calculated, based on the engineering model of the target-moderator-reflector system. The shipping cask is analyzed to satisfy the dose rate limit of less than 2 mSv/h at its surface. All calculations are performed with the Monte Carlo code MCNPX2.5 and the activation code CINDER'90.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maekawa, Fujio; Meigo, Shin-ichiro; Kasugai, Yoshimi
2005-05-15
A neutronic benchmark experiment on a simulated spallation neutron target assembly was conducted using the Alternating Gradient Synchrotron at Brookhaven National Laboratory and was analyzed to investigate the prediction capability of Monte Carlo simulation codes used in neutronic designs of spallation neutron sources. The target assembly, consisting of a mercury target, a light water moderator, and a lead reflector, was bombarded by 1.94-, 12-, and 24-GeV protons, and the fast neutron flux distributions around the target and the spectra of thermal neutrons leaking from the moderator were measured. In this study, the Monte Carlo particle transport simulation codes NMTC/JAM, MCNPX, and MCNP-4A, with associated cross-section data in JENDL and LA-150, were verified based on benchmark analysis of the experiment. All the calculations predicted the measured quantities adequately; calculated integral fluxes of fast and thermal neutrons agreed with the experiments within approximately ±40%, although the overall energy range encompassed more than 12 orders of magnitude. Accordingly, it was concluded that these simulation codes and cross-section data are adequate for neutronic designs of spallation neutron sources.
Modelling of aircrew radiation exposure during solar particle events
NASA Astrophysics Data System (ADS)
Al Anid, Hani Khaled
In 1990, the International Commission on Radiological Protection recognized the occupational exposure of aircrew to cosmic radiation. In Canada, a Commercial and Business Aviation Advisory Circular issued by Transport Canada suggested that action should be taken to manage such exposure. In anticipation of possible regulations on the exposure of Canadian-based aircrew in the near future, an extensive study was carried out at the Royal Military College of Canada to measure the radiation exposure during commercial flights. The radiation exposure to aircrew results from a complex mixed radiation field produced by Galactic Cosmic Rays (GCRs) and Solar Energetic Particles (SEPs). Supernova explosions and active galactic nuclei are responsible for GCRs, which consist of 90% protons, 9% alpha particles, and 1% heavy nuclei. While GCRs have a fairly constant fluence rate, their interaction with the magnetic field of the Earth varies throughout the solar cycle, which has a period of approximately 11 years. SEPs are highly sporadic events associated with solar flares and coronal mass ejections. This type of exposure may be of concern to certain aircrew members, such as pregnant flight crew, for whom the annual effective dose is limited to 1 mSv over the remainder of the pregnancy. The composition of SEPs is very similar to that of GCRs, in that they consist of mostly protons, some alpha particles and a few heavy nuclei, but with a softer energy spectrum. An additional factor when analysing SEPs is the effect of flare anisotropy: charged particles are transported through the Earth's magnetosphere in an anisotropic fashion. Solar flares that are fairly isotropic produce a uniform radiation exposure over areas with similar geomagnetic shielding, while highly anisotropic events produce variable exposures at different locations on the Earth.
Studies of neutron monitor count rates from detectors sharing similar geomagnetic shielding properties show very different responses during anisotropic events, leading to variations in aircrew radiation doses that may be significant for dose assessment. To estimate the additional exposure due to solar flares, a model was developed using the Monte Carlo radiation transport code MCNPX. The model transports an extrapolated particle spectrum, based on satellite measurements, through the atmosphere; MCNPX produces the estimated flux at a specific altitude, where radiation dose conversion coefficients are applied to convert the particle flux into effective and ambient dose-equivalent rates. A cut-off rigidity model accounts for the shielding effects of the Earth's magnetic field. Comparisons were made between the model predictions and actual flight measurements taken with various types of instruments used to measure the mixed radiation field during Ground Level Enhancements 60 and 65. An anisotropy analysis using neutron monitor responses and the pitch-angle distribution of energetic solar particles was used to identify particle anisotropy for a solar event in December 2006. In anticipation of future commercial use, a computer code has been developed to implement the radiation dose assessment model for routine analysis. Keywords: Radiation Dosimetry, Radiation Protection, Space Physics.
Radiography simulation on single-shot dual-spectrum X-ray for cargo inspection system.
Gil, Youngmi; Oh, Youngdo; Cho, Moohyun; Namkung, Won
2011-02-01
We propose a method to identify materials in a dual-energy X-ray (DeX) inspection system. The method combines two pieces of information: the transmitted fractions T of high-energy and low-energy X-rays through the material, and the ratio R of the material's attenuation coefficient when high-energy X-rays are used to that when low-energy X-rays are used. In Monte Carlo N-Particle Transport Code (MCNPX) simulations using the same geometry as the real container inspection system, this T vs. R method successfully identified tissue-equivalent plastic and several metals. Further simulations showed that operating the accelerator in single-shot mode distinguished materials better than the dual-shot system. Copyright © 2010 Elsevier Ltd. All rights reserved.
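The T-vs-R idea can be illustrated with a short sketch. The attenuation coefficients below are invented placeholders, not measured values; the point is that R = ln T_low / ln T_high reduces to the ratio of attenuation coefficients and is therefore independent of object thickness, which is what makes it usable as a material signature:

```python
import math

def attenuation_ratio(mu_low, mu_high, thickness_cm):
    """Return (T_high, R) for an exponentially attenuating slab:
    T = exp(-mu * x), so R = ln(T_low) / ln(T_high) = mu_low / mu_high,
    independent of the thickness x."""
    t_low = math.exp(-mu_low * thickness_cm)
    t_high = math.exp(-mu_high * thickness_cm)
    return t_high, math.log(t_low) / math.log(t_high)

# Hypothetical linear attenuation coefficients (1/cm) at the low- and
# high-energy beam settings; placeholder numbers for illustration only.
MU = {"steel": (0.60, 0.45), "plastic": (0.12, 0.11)}
for material, (mu_lo, mu_hi) in MU.items():
    _, r_thin = attenuation_ratio(mu_lo, mu_hi, 2.0)
    _, r_thick = attenuation_ratio(mu_lo, mu_hi, 10.0)
    # r_thin == r_thick for each material: R depends on composition, not size.
```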
Absolute Calibration of Image Plate for electrons at energy between 100 keV and 4 MeV
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, H; Back, N L; Eder, D C
2007-12-10
The authors measured the absolute response of image plate (Fuji BAS SR2040) for electrons at energies between 100 keV and 4 MeV using an electron spectrometer. The electron source was produced by a short-pulse laser irradiating solid-density targets. This paper presents the calibration results of the image plate in Photon Stimulated Luminescence (PSL) per electron over this energy range. Results from the Monte Carlo radiation transport code MCNPX are also presented for three representative incident angles onto the image plates and the corresponding electron energy deposition at these angles. Together these provide a complete set of tools that allows extension of the absolute calibration to other spectrometer settings in this electron energy range.
Evaluation of RayXpert® for shielding design of medical facilities
NASA Astrophysics Data System (ADS)
Derreumaux, Sylvie; Vecchiola, Sophie; Geoffray, Thomas; Etard, Cécile
2017-09-01
In a context of growing demand for expert evaluation of medical, industrial and research facilities, the French Institute for Radiation Protection and Nuclear Safety (IRSN) considered it necessary to acquire new software for efficient shielding dimensioning calculations. The selected software is RayXpert®. Before using this software routinely, exposure and transmission calculations for some basic configurations were validated. The validation was performed by calculating gamma dose constants and tenth-value layers (TVL) for usual shielding materials and for the radioisotopes most used in therapy (Ir-192, Co-60 and I-131). Calculated values were compared with results obtained using MCNPX as a reference code and with published values. The impact of different calculation parameters, such as the source emission rays considered and the use of biasing techniques, was evaluated.
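The tenth-value-layer quantity used in this validation follows directly from exponential attenuation: TVL = ln(10)/μ, the thickness that cuts the transmitted intensity by a factor of ten. A minimal sketch (the attenuation coefficient below is an assumed placeholder, not an IRSN or published value):

```python
import math

def tenth_value_layer(mu_cm):
    """TVL (cm) for a linear attenuation coefficient mu (1/cm): the
    thickness that reduces the transmitted intensity to 1/10."""
    return math.log(10.0) / mu_cm

def transmission(mu_cm, thickness_cm):
    """Narrow-beam transmitted fraction through the given thickness."""
    return math.exp(-mu_cm * thickness_cm)

# Assumed placeholder coefficient (1/cm) for a shielding material.
mu = 0.45
tvl = tenth_value_layer(mu)
# By construction: one TVL transmits 0.1, two TVLs transmit 0.01, etc.
```

Broad-beam TVLs from Monte Carlo or published tables include build-up and are somewhat larger than this narrow-beam idealization; that difference is exactly what a comparison against MCNPX probes.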
NASA Astrophysics Data System (ADS)
Baptista, M.; Di Maria, S.; Vieira, S.; Vaz, P.
2017-11-01
Cone-Beam Computed Tomography (CBCT) enables high-resolution volumetric scanning of the bone and soft-tissue anatomy under investigation at the treatment accelerator. This technique is extensively used in Image Guided Radiation Therapy (IGRT) for pre-treatment verification of patient position and target volume localization. When employed daily and several times per patient, CBCT imaging may lead to high cumulative imaging doses to the healthy tissues surrounding the exposed organs. This work aims at (1) evaluating the dose distribution during a CBCT scan and (2) calculating the organ doses involved in this image-guiding procedure for clinically available scanning protocols. Both Monte Carlo (MC) simulations and measurements were performed. To model and simulate the kV imaging system mounted on a linear accelerator (Edge™, Varian Medical Systems), the state-of-the-art MC radiation transport program MCNPX 2.7.0 was used. To validate the simulation results, measurements of the Computed Tomography Dose Index (CTDI) were performed using standard PMMA head and body phantoms of 150 mm length and a standard pencil ionization chamber (IC) 100 mm long. Measurements were acquired for head and pelvis scanning protocols usually adopted in the clinical environment, using two acquisition modes (full-fan and half-fan). To calculate the organ doses, the implemented MC model of the CBCT scanner was used together with a male voxel phantom ("Golem"). The agreement between the MCNPX simulations and the CTDIw measurements (differences up to 17%) indicates that the CBCT MC model was successfully validated, taking the several uncertainties into account. The adequacy of the computational model to map dose distributions during a CBCT scan is discussed in order to identify ways to reduce the total CBCT imaging dose. The organ dose assessment highlights the need to evaluate the therapeutic and CBCT imaging doses in a more balanced approach, and the importance of improving awareness of the increased risk arising from repeated exposures.
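The weighted CTDI against which the simulations were validated is the standard combination of centre and peripheral CTDI100 readings in the PMMA phantom. A minimal sketch with invented readings (not the paper's measured values):

```python
def ctdi_weighted(ctdi_center, ctdi_periphery):
    """Weighted CTDI (mGy) from the centre-hole CTDI100 and the average of
    the peripheral-hole CTDI100 readings: 1/3 centre + 2/3 periphery."""
    return ctdi_center / 3.0 + 2.0 * ctdi_periphery / 3.0

# Hypothetical CTDI100 readings (mGy) for a head protocol; placeholders only.
ctdi_w = ctdi_weighted(20.0, 22.0)
```

The 2/3 weight on the periphery reflects the larger volume it represents in the cylindrical phantom; a Monte Carlo model is validated by reproducing this same weighted quantity from simulated energy deposition in the chamber volumes.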
NASA Astrophysics Data System (ADS)
Nasrabadi, M. N.; Bakhshi, F.; Jalali, M.; Mohammadi, A.
2011-12-01
Nuclear-based explosive detection methods can detect explosives by identifying their elemental components, especially nitrogen. Thermal neutron capture reactions have been used to detect the 10.8 MeV prompt gamma ray emitted following radiative neutron capture by 14N nuclei. We aimed to study the feasibility of using field-portable prompt gamma neutron activation analysis (PGNAA) together with improved nuclear equipment to detect and identify explosives, illicit substances or landmines. A 252Cf radioisotopic source was embedded in a cylinder made of high-density polyethylene (HDPE), and the cylinder was then placed in a cylindrical container filled with water. Measurements were performed on high-nitrogen-content compounds such as melamine (C3H6N6). Melamine powder in an HDPE bottle was placed underneath the vessel containing the water and the neutron source. Gamma rays were detected using two NaI(Tl) crystals. The measurements were also simulated with the MCNP4c code. The theoretical calculations and experimental measurements were in good agreement, indicating that this method can be used for the detection of explosives and illicit drugs.
Neutron-induced reaction cross-sections of 93Nb with fast neutron based on 9Be(p,n) reaction
NASA Astrophysics Data System (ADS)
Naik, H.; Kim, G. N.; Kim, K.; Zaman, M.; Nadeem, M.; Sahid, M.
2018-02-01
The cross-sections of the 93Nb(n,2n)92mNb, 93Nb(n,3n)91mNb and 93Nb(n,4n)90Nb reactions at average neutron energies of 14.4 to 34.0 MeV have been determined using an activation and off-line γ-ray spectrometric technique. The fast neutrons were produced via the 9Be(p,n) reaction with proton energies of 25, 35 and 45 MeV from the MC-50 Cyclotron at the Korea Institute of Radiological and Medical Sciences (KIRAMS). The neutron flux-weighted average cross-sections of the 93Nb(n,xn; x = 2-4) reactions were also obtained from the mono-energetic neutron-induced reaction cross-sections of 93Nb calculated with the TALYS 1.8 code, together with the neutron flux spectrum based on the MCNPX 2.6.0 code. The present results for the 93Nb(n,xn; x = 2-4) reactions are compared with the calculated neutron flux-weighted average values and found to be in good agreement.
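The flux-weighted averaging used to compare TALYS cross-sections with the measurements can be sketched as follows; the group structure, cross-section values and flux weights below are illustrative assumptions, not the TALYS 1.8 or MCNPX 2.6.0 output:

```python
def flux_weighted_average(sigma, flux):
    """<sigma> = sum_i sigma_i * phi_i / sum_i phi_i over an
    energy-group structure (group-wise discretization of the integral)."""
    return sum(s * f for s, f in zip(sigma, flux)) / sum(flux)

# Hypothetical group-wise cross sections (mb) at group-centre energies of
# 12, 14, 16 and 18 MeV, with assumed relative flux weights per group.
sigma = [450.0, 480.0, 460.0, 400.0]   # mb, placeholder values
phi = [0.1, 0.4, 0.3, 0.2]             # relative flux, placeholder values
avg = flux_weighted_average(sigma, phi)   # -> 455.0 mb
```

Because the 9Be(p,n) neutron field is not mono-energetic, this folding is what makes a calculated excitation function directly comparable with an activation measurement in that field.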
Study of neutron spectra in a water bath from a Pb target irradiated by 250 MeV protons
NASA Astrophysics Data System (ADS)
Li, Yan-Yan; Zhang, Xue-Ying; Ju, Yong-Qin; Ma, Fei; Zhang, Hong-Bin; Chen, Liang; Ge, Hong-Lin; Wan, Bo; Luo, Peng; Zhou, Bin; Zhang, Yan-Bin; Li, Jian-Yang; Xu, Jun-Kui; Wang, Song-Lin; Yang, Yong-Wei; Yang, Lei
2015-04-01
Spallation neutrons were produced by irradiating a Pb target with 250 MeV protons. The target was surrounded by water, which was used to slow down the emitted neutrons. The moderated neutrons in the water bath were measured using resonance detectors of Au, Mn and In under a cadmium (Cd) cover. From the measured activities of the foils, the neutron fluxes at different resonance energies were deduced and epithermal neutron spectra were derived. Corresponding results calculated with the Monte Carlo code MCNPX were compared with the experimental data to check the validity of the code. The comparison showed that the simulation gives a good prediction of the neutron spectra above 50 eV, while the finite thickness of the foils strongly affected the experimental data at low energies. It was also found that the resonance detectors themselves had a large impact on the simulated energy spectra. Supported by the National Natural Science Foundation and the Strategic Priority Research Program of the Chinese Academy of Sciences (11305229, 11105186, 91226107, 91026009, XDA03030300)
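The foil-activation step, deducing a flux from a measured activity, can be sketched under simplifying assumptions (saturation activity, a single dominant resonance, no self-shielding correction); the foil mass and cross-section numbers below are placeholders, not the experiment's values:

```python
AVOGADRO = 6.02214076e23

def flux_at_resonance(saturation_activity_bq, foil_mass_g,
                      molar_mass_g, sigma_res_barn):
    """Infer the neutron flux (n/cm^2/s) seen by a thin foil at its
    resonance energy from its saturation activity: phi = A_sat / (N * sigma)."""
    n_atoms = foil_mass_g / molar_mass_g * AVOGADRO
    sigma_cm2 = sigma_res_barn * 1e-24   # 1 barn = 1e-24 cm^2
    return saturation_activity_bq / (n_atoms * sigma_cm2)

# Hypothetical gold foil: 0.1 g of 197Au with an assumed effective
# resonance cross section; illustrative numbers only.
phi = flux_at_resonance(5.0e3, 0.1, 197.0, 1550.0)
```

A real analysis corrects for irradiation and decay times, foil self-shielding (the finite-thickness effect noted above), and the Cd cut-off.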
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghorbani, M; Tabatabaei, Z; Noghreiyan, A Vejdani
Purpose: The aim of this study is to evaluate the effect of soft tissue composition on dose distribution for various soft tissues and various depths in radiotherapy with the 6 MV photon beam of a medical linac. Methods: A phantom and a Siemens Primus linear accelerator were simulated using the MCNPX Monte Carlo code. In a homogeneous cubic phantom, six types of soft tissue and three tissue-equivalent materials were defined separately. The soft tissues were muscle (skeletal), adipose tissue, blood (whole), breast tissue, soft tissue (9-component) and soft tissue (4-component); the tissue-equivalent materials were water, A-150 tissue-equivalent plastic and Perspex. Photon dose relative to the dose in 9-component soft tissue at various depths on the beam's central axis was determined for the 6 MV photon beam. The relative dose was also calculated and compared for various MCNPX tallies, including *F8, F6 and F4. Results: The relative photon dose in the various materials with respect to 9-component soft tissue, for the different tallies, is reported in tabulated form. Minor differences between dose distributions in the various soft tissues and tissue-equivalent materials were observed. The results from the F6 and F4 tallies were practically the same but differed from the *F8 tally. Conclusion: Based on the calculations performed, the differences in dose distributions among the various soft tissues and tissue-equivalent materials are minor, but they could be corrected for in radiotherapy calculations to improve the accuracy of dosimetric calculations.
NASA Astrophysics Data System (ADS)
Lerendegui-Marco, J.; Cortés-Giraldo, M. A.; Guerrero, C.; Quesada, J. M.; Meo, S. Lo; Massimi, C.; Barbagallo, M.; Colonna, N.; Mancussi, D.; Mingrone, F.; Sabaté-Gilarte, M.; Vannini, G.; Vlachoudis, V.; Aberle, O.; Andrzejewski, J.; Audouin, L.; Bacak, M.; Balibrea, J.; Bečvář, F.; Berthoumieux, E.; Billowes, J.; Bosnar, D.; Brown, A.; Caamaño, M.; Calviño, F.; Calviani, M.; Cano-Ott, D.; Cardella, R.; Casanovas, A.; Cerutti, F.; Chen, Y. H.; Chiaveri, E.; Cortés, G.; Cosentino, L.; Damone, L. A.; Diakaki, M.; Domingo-Pardo, C.; Dressler, R.; Dupont, E.; Durán, I.; Fernández-Domínguez, B.; Ferrari, A.; Ferreira, P.; Finocchiaro, P.; Göbel, K.; Gómez-Hornillos, M. B.; García, A. R.; Gawlik, A.; Gilardoni, S.; Glodariu, T.; Gonçalves, I. F.; González, E.; Griesmayer, E.; Gunsing, F.; Harada, H.; Heinitz, S.; Heyse, J.; Jenkins, D. G.; Jericha, E.; Käppeler, F.; Kadi, Y.; Kalamara, A.; Kavrigin, P.; Kimura, A.; Kivel, N.; Kokkoris, M.; Krtička, M.; Kurtulgil, D.; Leal-Cidoncha, E.; Lederer, C.; Leeb, H.; Lonsdale, S. J.; Macina, D.; Marganiec, J.; Martínez, T.; Masi, A.; Mastinu, P.; Mastromarco, M.; Maugeri, E. A.; Mazzone, A.; Mendoza, E.; Mengoni, A.; Milazzo, P. M.; Musumarra, A.; Negret, A.; Nolte, R.; Oprea, A.; Patronis, N.; Pavlik, A.; Perkowski, J.; Porras, I.; Praena, J.; Radeck, D.; Rauscher, T.; Reifarth, R.; Rout, P. C.; Rubbia, C.; Ryan, J. A.; Saxena, A.; Schillebeeckx, P.; Schumann, D.; Smith, A. G.; Sosnin, N. V.; Stamatopoulos, A.; Tagliente, G.; Tain, J. L.; Tarifeño-Saldivia, A.; Tassan-Got, L.; Valenta, S.; Variale, V.; Vaz, P.; Ventura, A.; Vlastou, R.; Wallner, A.; Warren, S.; Woods, P. J.; Wright, T.; Žugec, P.
2017-09-01
Monte Carlo (MC) simulations are an essential tool to determine fundamental features of a neutron beam, such as the neutron flux or the γ-ray background, that sometimes cannot be measured, or at least not at every position or in every energy range. Until recently, the most widely used MC codes in this field had been MCNPX and FLUKA. However, the Geant4 toolkit has also become a competitive code for the transport of neutrons since the development of the native Geant4 format for neutron data libraries, G4NDL. In this context, we present the Geant4 simulations of the neutron spallation target of the n_TOF facility at CERN, performed with version 10.1.1 of the toolkit. The first goal was the validation of the intra-nuclear cascade models implemented in the code using, as a benchmark, the characteristics of the neutron beam measured at the first experimental area (EAR1): especially the neutron flux and energy distribution, and the time distribution of neutrons of equal kinetic energy, the so-called Resolution Function. The second goal was the development of a Monte Carlo tool aimed at providing useful calculations for both the analysis and planning of the upcoming measurements at the new experimental area (EAR2) of the facility.
NASA Astrophysics Data System (ADS)
Otiougova, Polina; Bergmann, Ryan; Kiselev, Daniela; Talanov, Vadim; Wohlmuther, Michael
2017-09-01
The Paul Scherrer Institute (PSI) is the largest national research center in Switzerland. Its multidisciplinary research spans a wide field of natural science and technology as well as particle physics. The High Intensity Proton Accelerator Facility (HIPA) has been in operation at PSI since 1974. It includes an 870 keV Cockcroft-Walton pre-accelerator, a 72 MeV injector cyclotron and a 590 MeV ring cyclotron. The experimental facilities, the meson production graphite targets (Target E and Target M) and the spallation target stations (SINQ and UCN), are used for materials research and particle physics. In order to fulfill the request of the regulatory authorities, the expected radioactive waste and nuclide inventory after an anticipated final shutdown in the far future has to be estimated and reported to the regulators. In this contribution, calculations for the 20 m long beamline between Target E and the 590 MeV beam dump of HIPA are presented. The first step in the calculations was determining spectra and spatial particle distributions around the beamlines using the Monte Carlo particle transport code MCNPX2.7.0 [1]. To analyze the MCNPX output and to determine the radionuclide inventory as well as the specific activity of the nuclides, an activation script [2] using the FISPACT10 code with cross sections from the European Activation File (EAF2010) [3] was applied. The specific activity values were compared to the currently existing Swiss exemption limits (LE) [4] as well as to the Swiss liberation limits (LL) [5], which become effective in the near future. The obtained results were used to estimate the total volume of the radioactive waste produced at HIPA, which has to be reported to the Swiss regulatory authorities. A comparison of the calculations with measurements is also discussed.
In-Situ Assays Using a New Advanced Mathematical Algorithm - 12400
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oginni, B.M.; Bronson, F.L.; Field, M.B.
2012-07-01
Current mathematical efficiency modeling software for in-situ counting, such as the commercially available In-Situ Object Calibration Software (ISOCS), typically describes measurement geometries via a list of well-defined templates for regular objects such as boxes, cylinders, or spheres. While these regular objects are sufficient for many measurement conditions, there are occasions in which a more detailed model is desired. We have developed a new all-purpose geometry template that extends the flexibility of the current ISOCS templates. The new template still utilizes the same advanced mathematical algorithms, but allows a multitude of shapes and objects that can be placed at any location and even combined; geometries can be stacked on one another in the same measurement scene, and detectors can be placed anywhere and aimed at any location within the scene. Several applications of this algorithm to in-situ waste assay measurements, as well as validations of the template against Monte Carlo calculations and experimental measurements, are presented. Within the limit of convergence of the code and the joint uncertainties of the codes and the data, the new template agrees with ISOCS to within 1.5% at all energies, with MCNPX to within 10% at all energies (within 5% for most geometries), and with measured data to within 10%.
This mathematical algorithm can now be used to evaluate efficiencies quickly and accurately for a wider range of gamma-ray spectroscopy applications. (authors)
Radiation Environment Inside Spacecraft
NASA Technical Reports Server (NTRS)
O'Neill, Patrick
2015-01-01
Dr. Patrick O'Neill, NASA Johnson Space Center, will present a detailed description of the radiation environment inside spacecraft. The free-space (outside) solar and galactic cosmic ray and trapped Van Allen belt proton spectra are significantly modified as these ions propagate through various thicknesses of spacecraft structure and shielding material. In addition to losing energy, the ions create secondary particles as they interact with the structural materials. Nuclear interaction codes (FLUKA, GEANT4, HZETRN, MCNPX, CEM03, and PHITS) transport the free-space spectra through different thicknesses of various materials. These "inside" energy spectra are then converted to Linear Energy Transfer (LET) spectra and dose rate, which is what electronics systems designers need. Model predictions are compared to radiation measurements made by instruments such as the Intra-Vehicular Charged Particle Directional Spectrometer (IV-CPDS) used inside the Space Station, Orion, and Space Shuttle.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Purwaningsih, Anik
Dosimetric data for a brachytherapy source should be known before it is used for clinical treatment. The Iridium-192 source type H01, manufactured by PRR-BATAN for brachytherapy, does not yet have established dosimetric data. The radial dose function and the anisotropic dose distribution are among the primary characteristics of a brachytherapy source. The dose distribution for the Iridium-192 source type H01 was obtained from the dose calculation formalism recommended in the AAPM TG-43U1 report, using the MCNPX 2.6.0 Monte Carlo simulation code. To assess the effect of the cavity in the Iridium-192 type H01 source caused by the manufacturing process, the source was also modeled without the cavity. The calculated radial dose function and anisotropic dose distribution for the Iridium-192 source type H01 were compared with those of another model of Iridium-192 source.
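On the transverse axis and in the point-source approximation, the TG-43U1 formalism mentioned above reduces to D(r) = S_K Λ (r0/r)² g(r) with r0 = 1 cm. A minimal sketch with invented parameter values (these are placeholders, not the H01 source data):

```python
def tg43_transverse_dose_rate(sk, dose_rate_const, r_cm, g_r):
    """Transverse-axis dose rate (cGy/h) in the TG-43 point-source
    approximation: D(r) = S_K * Lambda * (r0/r)^2 * g(r), r0 = 1 cm.

    sk              -- air-kerma strength S_K (U)
    dose_rate_const -- dose-rate constant Lambda (cGy h^-1 U^-1)
    g_r             -- radial dose function g(r) evaluated at r
    """
    r0 = 1.0
    return sk * dose_rate_const * (r0 / r_cm) ** 2 * g_r

# Hypothetical inputs: S_K, Lambda and g(2 cm) are assumed values.
rate = tg43_transverse_dose_rate(sk=10.0, dose_rate_const=1.11,
                                 r_cm=2.0, g_r=1.005)   # -> about 2.79 cGy/h
```

In the full formalism the inverse-square term is replaced by the line-source geometry function G(r, θ)/G(r0, θ0) and the anisotropy function F(r, θ) multiplies the result off-axis.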
Zandi, Nadia; Afarideh, Hossein; Aboudzadeh, Mohammad Reza; Rajabifar, Saeed
2018-02-01
The aim of this work is to increase the magnitude of the fast neutron flux inside the flux trap where radionuclides are produced. For this purpose, three new designs of the flux trap are proposed, and the resulting fast and thermal neutron fluxes are compared with each other. The first and second proposed designs were sealed cubes containing air and D2O, respectively. The calculated production yields indicated the superiority of the latter by a factor of 55% in comparison with the first proposed design. The third proposed design changed the surroundings of the sealed cube by locating two fuel plates near it. In this case, the production yield increased by up to 70%. Copyright © 2017. Published by Elsevier Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen Jing
2008-08-07
This study used the Monte Carlo code MCNPX to determine mean absorbed doses to the embryo and foetus when the mother is exposed to external muon fields. Monoenergetic muons ranging from 20 MeV to 50 GeV were considered. The irradiation geometries included antero-posterior (AP), postero-anterior (PA), lateral (LAT), rotational (ROT), isotropic (ISO), and top-down (TOP). For each geometry, absorbed doses to the foetal body were calculated for an embryo of 8 weeks and a foetus of 3, 6 or 9 months. Muon fluence-to-absorbed-dose conversion coefficients were derived for the four prenatal ages. Since such conversion coefficients were previously unknown, the results presented here fill a data gap.
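Once conversion coefficients like these exist, applying them is a simple folding of a group-wise fluence with the coefficients. A sketch with invented numbers (the spectrum and coefficient values are placeholders, not the paper's results):

```python
def fold_fluence_with_coefficients(fluence, coefficients):
    """Absorbed dose (pGy) from a group-wise fluence (cm^-2) and
    fluence-to-absorbed-dose conversion coefficients (pGy cm^2):
    D = sum_i Phi_i * h_i."""
    return sum(f * c for f, c in zip(fluence, coefficients))

# Hypothetical three-group muon fluence and conversion coefficients.
phi = [1.0e2, 5.0e1, 1.0e1]    # muons/cm^2 per group, assumed values
h = [300.0, 450.0, 600.0]      # pGy cm^2, assumed values
dose_pGy = fold_fluence_with_coefficients(phi, h)
```

The same folding, with effective-dose coefficients in place of absorbed-dose ones, converts a measured or simulated field directly into a protection quantity.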
Feasibility study for a realistic training dedicated to radiological protection improvement
NASA Astrophysics Data System (ADS)
Courageot, Estelle; Reinald, Kutschera; Gaillard-Lecanu, Emmanuelle; Sylvie, Jahan; Riedel, Alexandre; Therache, Benjamin
2014-06-01
Any personnel involved in activities within the controlled area of a nuclear facility must be provided with appropriate radiological protection training. An evident purpose of this training is to know the regulations that apply to workplaces where ionizing radiation may be present, in order to properly carry out radiation monitoring, to use suitable protective equipment and to behave correctly if unexpected working conditions arise. A major difficulty of this training consists in providing the most realistic readings from the monitoring devices for a given exposure situation, but without using real radioactive sources. A new approach to radiological protection training is being developed at EDF R&D. This approach combines different technologies in an environment representative of the workplace but geographically separated from the nuclear power plant: a training area representative of a workplace, a man-machine interface used by the trainer to define the source configuration and the training scenario, a geolocalization system, fictive radiation monitoring devices, and a particle transport code able to calculate in real time the dose map due to the virtual sources. In a first approach, our real-time particle transport code, called Moderato, used only an attenuation law along straight lines. To improve the realism further, we would like to switch to a code based on the Monte Carlo particle transport method, such as Geant4 or MCNPX, instead of Moderato. The aim of our study is to evaluate such a code in our application, in particular the possibility of keeping the real-time response of our architecture.
Cross-correlation measurements with the EJ-299-33 plastic scintillator
NASA Astrophysics Data System (ADS)
Bourne, Mark M.; Whaley, Jeff; Dolan, Jennifer L.; Polack, John K.; Flaska, Marek; Clarke, Shaun D.; Tomanin, Alice; Peerani, Paolo; Pozzi, Sara A.
2015-06-01
New organic-plastic scintillation compositions have demonstrated pulse-shape discrimination (PSD) of neutrons and gamma rays. We present cross-correlation measurements of 252Cf and mixed uranium-plutonium oxide (MOX) with the EJ-299-33 plastic scintillator. For comparison, equivalent measurements were performed with an EJ-309 liquid scintillator. Offline, digital PSD was applied to each detector. These measurements show that EJ-299-33 sacrifices a factor of 5 in neutron-neutron efficiency relative to EJ-309, but could still utilize the difference in neutron-neutron efficiency and neutron single-to-double ratio to distinguish 252Cf from MOX. These measurements were modeled with MCNPX-PoliMi, and MPPost was used to convert the detailed collision history into simulated cross-correlation distributions. MCNPX-PoliMi predicted the measured 252Cf cross-correlation distribution for EJ-309 to within 10%. Greater photon uncertainty in the MOX sample led to larger discrepancy in the simulated MOX cross-correlation distribution. The modeled EJ-299-33 plastic also gives reasonable agreement with measured cross-correlation distributions, although the MCNPX-PoliMi model appears to under-predict the neutron detection efficiency.
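The offline digital PSD applied to each detector is typically a tail-to-total charge integration; the abstract does not give the gate settings, so the window lengths below are illustrative assumptions, sketched here in Python:

```python
import math

def psd_ratio(pulse, total_gate=200, tail_start=30):
    """Tail-to-total charge-integration PSD for one digitized pulse.

    Gate lengths (in samples) are illustrative assumptions; neutrons in
    organic scintillators yield a larger tail fraction than gamma rays.
    """
    # Locate the pulse peak and integrate from there.
    peak = max(range(len(pulse)), key=pulse.__getitem__)
    window = pulse[peak:peak + total_gate]
    total = sum(window)                 # full (total) charge
    tail = sum(window[tail_start:])     # delayed-light (tail) charge
    return tail / total if total > 0 else 0.0
```

Events with a ratio above a calibrated threshold are classified as neutrons; the threshold itself must be determined per detector and digitizer.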
Anigstein, Robert; Olsher, Richard H; Loomis, Donald A; Ansari, Armin
2016-12-01
The detonation of a radiological dispersion device or other radiological incidents could result in widespread releases of radioactive materials and intakes of radionuclides by affected individuals. Transportable radiation monitoring instruments could be used to measure radiation from gamma-emitting radionuclides in the body for triaging individuals and assigning priorities to their bioassay samples for in vitro assessments. The present study derived sets of calibration factors for four instruments: the Ludlum Model 44-2 gamma scintillator, a survey meter containing a 2.54 × 2.54-cm NaI(Tl) crystal; the Captus 3000 thyroid uptake probe, which contains a 5.08 × 5.08-cm NaI(Tl) crystal; the Transportable Portal Monitor Model TPM-903B, which contains two 3.81 × 7.62 × 182.9-cm polyvinyltoluene plastic scintillators; and a generic instrument, such as an ionization chamber, that measures exposure rates. The calibration factors enable these instruments to be used for assessing inhaled or ingested intakes of any of four radionuclides: 60Co, 131I, 137Cs, and 192Ir. The derivations used biokinetic models embodied in the DCAL computer software system developed by the Oak Ridge National Laboratory and Monte Carlo simulations using the MCNPX radiation transport code. The three physical instruments were represented by MCNP models that were developed previously. The affected individuals comprised children of five ages who were represented by the revised Oak Ridge National Laboratory pediatric phantoms, and adult men and adult women represented by the Adult Reference Computational Phantoms described in Publication 110 of the International Commission on Radiological Protection. These calibration factors can be used to calculate intakes; the intakes can be converted to committed doses by the use of tabulated dose coefficients.
These calibration factors also constitute input data to the ICAT computer program, an interactive Microsoft Windows-based software package that estimates intakes of radionuclides and cumulative and committed effective doses, based on measurements made with these instruments. This program constitutes a convenient tool for assessing intakes and doses without consulting tabulated calibration factors and dose coefficients.
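The two-step assessment described above (measured count rate to intake via a calibration factor, then intake to committed dose via a tabulated dose coefficient) can be sketched as follows; the numerical values are purely illustrative assumptions, not figures from the study:

```python
def intake_from_count_rate(net_cps, calibration_factor_cps_per_bq):
    """Intake (Bq) inferred from a net instrument count rate (counts/s)
    and a calibration factor (counts/s per Bq retained in the body)."""
    return net_cps / calibration_factor_cps_per_bq

def committed_dose_sv(intake_bq, dose_coefficient_sv_per_bq):
    """Committed effective dose (Sv) from a tabulated dose coefficient."""
    return intake_bq * dose_coefficient_sv_per_bq

# Hypothetical reading and coefficients, for illustration only:
intake = intake_from_count_rate(120.0, 4.0e-4)   # Bq
dose = committed_dose_sv(intake, 1.1e-8)         # Sv
```

In practice both the calibration factor and the dose coefficient depend on the radionuclide, the subject's age and size, and the time elapsed since intake.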
Han, Bin; Xu, X. George; Chen, George T. Y.
2011-01-01
Purpose: Monte Carlo methods are used to simulate and optimize a time-resolved proton range telescope (TRRT) in localization of intrafractional and interfractional motions of lung tumors and in quantification of proton range variations. Methods: The Monte Carlo N-Particle eXtended (MCNPX) code with a particle tracking feature was employed to evaluate the TRRT performance, especially in visualizing and quantifying proton range variations during respiration. Protons of 230 MeV were tracked one by one as they passed through position detectors, a patient 4DCT phantom, and finally the scintillator detectors that measured residual ranges. The energy response of the scintillator telescope was investigated. Mass density and elemental composition of tissues were defined for the 4DCT data. Results: Proton water equivalent length (WEL) was deduced by a reconstruction algorithm that incorporates the linear proton track and lateral spatial discrimination to improve the image quality. 4DCT data for three patients were used to visualize and measure tumor motion and WEL variations. The tumor trajectories extracted from the WEL map were found to be within ∼1 mm agreement with direct 4DCT measurement. Quantitative WEL variation studies showed that the proton radiograph is a good representation of WEL changes from the entrance to the distal edge of the target. Conclusions: MCNPX simulation results showed that TRRT can accurately track the motion of the tumor and detect the WEL variations. Image quality was optimized by choosing the proton energy, testing parameters of the image reconstruction algorithm, and comparing to the ground-truth 4DCT. A future study will demonstrate the feasibility of using time-resolved proton radiography as an imaging tool for proton treatments of lung tumors. PMID:21626923
Computational study of radiation doses at UNLV accelerator facility
NASA Astrophysics Data System (ADS)
Hodges, Matthew; Barzilov, Alexander; Chen, Yi-Tung; Lowe, Daniel
2017-09-01
A Varian K15 electron linear accelerator (linac) has been considered for installation at University of Nevada, Las Vegas (UNLV). Before experiments can be performed, it is necessary to evaluate the photon and neutron spectra as generated by the linac, as well as the resulting dose rates within the accelerator facility. A computational study using MCNPX was performed to characterize the source terms for the bremsstrahlung converter. The 15 MeV electron beam available in the linac is above the photoneutron threshold energy for several materials in the linac assembly, and as a result, neutrons must be accounted for. The angular and energy distributions for bremsstrahlung flux generated by the interaction of the 15 MeV electron beam with the linac target were determined. This source term was used in conjunction with the K15 collimators to determine the dose rates within the facility.
Sun, Wenjuan; JIA, Xianghong; XIE, Tianwu; XU, Feng; LIU, Qian
2013-01-01
With the rapid development of China's space industry, the importance of radiation protection is increasingly prominent. To provide relevant dose data, we first developed the Visible Chinese Human adult Female (VCH-F) phantom, and performed further modifications to generate the VCH-F Astronaut (VCH-FA) phantom, incorporating statistical body characteristics data from the first batch of Chinese female astronauts as well as reference organ mass data from the International Commission on Radiological Protection (ICRP; both within 1% relative error). Based on cryosection images, the original phantom was constructed via Non-Uniform Rational B-Spline (NURBS) boundary surfaces to strengthen its deformability for fitting the body parameters of Chinese female astronauts. The VCH-FA phantom was voxelized at a resolution of 2 × 2 × 4 mm3 for radiation transport simulations of isotropic protons with energies of 5000–10 000 MeV in the Monte Carlo N-Particle eXtended (MCNPX) code. To investigate discrepancies caused by anatomical variations and other factors, the obtained doses were compared with corresponding values from other phantoms and with sex-averaged doses. Dose differences were observed among the phantom calculation results, especially for effective dose with low-energy protons. Local skin thickness shifts the breast dose curve toward high energy, but has little impact on inner organs. Under a shielding layer, the organ dose reduction is greater for skin than for other organs. The calculated skin dose per day closely approximates measurement data obtained in low-Earth orbit (LEO). PMID:23135158
Alves, M C; Santos, W S; Lee, Choonsik; Bolch, Wesley E; Hunt, John G; Carvalho Júnior, A B
2014-12-21
The conversion coefficients (CCs) relate protection quantities, mean absorbed dose (DT) and effective dose (E), to physical radiation field quantities, such as fluence (Φ). The calculation of CCs through Monte Carlo simulations is useful for estimating the dose in individuals exposed to radiation. The aim of this work was the calculation of conversion coefficients for absorbed and effective doses per fluence (DT/Φ and E/Φ) using a sitting and a standing female hybrid phantom (UFH/NCI) exposed to monoenergetic protons with energies ranging from 2 MeV to 10 GeV. The radiation transport code MCNPX was used to develop exposure scenarios implementing the female UFH/NCI phantom in sitting and standing postures. Whole-body irradiations were performed using the irradiation geometries recommended by ICRP Publication 116 (AP, PA, RLAT, LLAT, ROT and ISO). In most organs, the conversion coefficients DT/Φ were similar for both postures. However, relative differences were significant for organs located in the abdominal region, such as the ovaries, uterus and urinary bladder, especially in the AP, RLAT and LLAT geometries. Anatomical differences caused by changing the posture of the female UFH/NCI phantom led to attenuation of incident protons with energies below 150 MeV by the thigh of the phantom in the sitting posture, for the front-to-back irradiation, and by the arms and hands of the phantom in the standing posture, for the lateral irradiation.
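As a reminder of how the organ-level coefficients combine, effective dose is the tissue-weighted sum of organ equivalent doses, E = Σ_T w_T H_T. A minimal sketch using a subset of the ICRP 103 tissue weighting factors (the same weights underlying ICRP 116; the full set sums to 1.0):

```python
# Subset of ICRP 103 tissue weighting factors w_T (illustrative subset;
# the complete set, including remainder tissues, sums to 1.0).
W_T = {
    "red_bone_marrow": 0.12, "colon": 0.12, "lung": 0.12,
    "stomach": 0.12, "breast": 0.12, "gonads": 0.08,
    "bladder": 0.04, "oesophagus": 0.04, "liver": 0.04, "thyroid": 0.04,
}

def effective_dose(organ_equivalent_doses):
    """E = sum over tissues T of w_T * H_T, for the organs provided
    (organ_equivalent_doses maps tissue name -> H_T in Sv)."""
    return sum(W_T[t] * h_t for t, h_t in organ_equivalent_doses.items())
```

Dividing E computed this way by the source fluence Φ yields the E/Φ conversion coefficient reported in studies of this kind.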
Exposure to 137Cs deposited in soil – A Monte Carlo study
NASA Astrophysics Data System (ADS)
da Silveira, Lucas M.; Pereira, Marco A. M.; Neves, Lucio P.; Perini, Ana P.; Belinato, Walmir; Caldas, Linda V. E.; Santos, William S.
2018-03-01
In the event of an environmental contamination with radioactive materials, one of the most dangerous materials is 137Cs. In order to evaluate the radiation doses involved in an environmental contamination of soil with 137Cs, we carried out a computational dosimetric study. We determined the radiation conversion coefficients (CC) for effective (E) and equivalent (HT) doses, using male and female anthropomorphic phantoms. These phantoms were coupled to the MCNPX (2.7.0) Monte Carlo simulation software, for three different types of soil. The highest CC[HT] values were for the gonads and skin (male) and bone marrow and skin (female). We found no difference among the different types of soil.
MCMEG: Simulations of both PDD and TPR for 6 MV LINAC photon beam using different MC codes
NASA Astrophysics Data System (ADS)
Fonseca, T. C. F.; Mendes, B. M.; Lacerda, M. A. S.; Silva, L. A. C.; Paixão, L.; Bastos, F. M.; Ramirez, J. V.; Junior, J. P. R.
2017-11-01
The Monte Carlo Modelling Expert Group (MCMEG) is an expert network specializing in Monte Carlo radiation transport and in modelling and simulation applied to the radiation protection and dosimetry research field. For its first inter-comparison task the group launched an exercise to model and simulate a 6 MV LINAC photon beam using the Monte Carlo codes available within their laboratories and to validate the simulated results by comparing them with experimental measurements carried out at the National Cancer Institute (INCA) in Rio de Janeiro, Brazil. The experimental measurements were performed using an ionization chamber with calibration traceable to a Secondary Standard Dosimetry Laboratory (SSDL). The detector was immersed in a water phantom at different depths and was irradiated with a radiation field size of 10×10 cm2. This exposure setup was used to determine the dosimetric parameters Percentage Depth Dose (PDD) and Tissue Phantom Ratio (TPR). The validation process compares the MC calculated results to the experimentally measured PDD20,10 and TPR20,10. Simulations were performed reproducing the experimental TPR20,10 quality index, which provides a satisfactory description of both the PDD curve and the transverse profiles at the two measured depths. This paper reports in detail the modelling process using the MCNPX, MCNP6, EGSnrc and Penelope Monte Carlo codes, the source and tally descriptions, the validation processes and the results.
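For context, both dosimetric parameters compared in such exercises are simple dose ratios, and the beam quality index TPR20,10 can be estimated from the PDD ratio via the empirical relation given in IAEA TRS-398 (stated there for a 10×10 cm2 field at SSD = 100 cm; the exact measurement conditions in the study may differ). A minimal sketch:

```python
def pdd(dose_at_depth, dose_at_dmax):
    """Percentage depth dose: 100 * D(d) / D(dmax)."""
    return 100.0 * dose_at_depth / dose_at_dmax

def tpr_20_10_from_pdd(pdd20_over_pdd10):
    """IAEA TRS-398 empirical relation between the beam quality index
    TPR20,10 and the ratio PDD(20 cm)/PDD(10 cm)."""
    return 1.2661 * pdd20_over_pdd10 - 0.0595
```

For a typical 6 MV beam the PDD ratio is around 0.57-0.58, which this relation maps to a quality index near 0.67.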
An Improved Neutron Transport Algorithm for Space Radiation
NASA Technical Reports Server (NTRS)
Heinbockel, John H.; Clowdsley, Martha S.; Wilson, John W.
2000-01-01
A low-energy neutron transport algorithm for use in space radiation protection is developed. The algorithm is based upon a multigroup analysis of the straight-ahead Boltzmann equation by using a mean value theorem for integrals. This analysis is accomplished by solving a realistic but simplified neutron transport test problem. The test problem is analyzed by using numerical and analytical procedures to obtain an accurate solution within specified error bounds. Results from the test problem are then used for determining mean values associated with rescattering terms that are associated with a multigroup solution of the straight-ahead Boltzmann equation. The algorithm is then coupled to the Langley HZETRN code through the evaporation source term. Evaluation of the neutron fluence generated by the solar particle event of February 23, 1956, for a water and an aluminum-water shield-target configuration is then compared with LAHET and MCNPX Monte Carlo code calculations for the same shield-target configuration. The algorithm developed showed a great improvement in results over the unmodified HZETRN solution. In addition, a two-directional solution of the evaporation source showed even further improvement of the fluence near the front of the water target where diffusion from the front surface is important.
Fine-resolution voxel S values for constructing absorbed dose distributions at variable voxel size.
Dieudonné, Arnaud; Hobbs, Robert F; Bolch, Wesley E; Sgouros, George; Gardin, Isabelle
2010-10-01
This article presents a revised voxel S values (VSVs) approach for dosimetry in targeted radiotherapy, allowing dose calculation for any voxel size and shape of a given SPECT or PET dataset. This approach represents an update to the methodology presented in MIRD pamphlet no. 17. VSVs were generated in soft tissue with a fine spatial sampling using the Monte Carlo (MC) code MCNPX for particle emissions of 9 radionuclides: (18)F, (90)Y, (99m)Tc, (111)In, (123)I, (131)I, (177)Lu, (186)Re, and (201)Tl. A specific resampling algorithm was developed to compute VSVs for desired voxel dimensions. The dose calculation was performed by convolution via a fast Hartley transform. The fine VSVs were calculated for cubic voxels of 0.5 mm for electrons and 1.0 mm for photons. Validation studies were done for (90)Y and (131)I VSV sets by comparing the revised VSV approach to direct MC simulations. The first comparison included 20 spheres with different voxel sizes (3.8-7.7 mm) and radii (4-64 voxels) and the second comparison a hepatic tumor with cubic voxels of 3.8 mm. MC simulations were done with MCNPX for both. The third comparison was performed on 2 clinical patients with the 3D-RD (3-Dimensional Radiobiologic Dosimetry) software using the EGSnrc (Electron Gamma Shower National Research Council Canada)-based MC implementation, assuming a homogeneous tissue-density distribution. For the sphere model study, the mean relative difference in the average absorbed dose was 0.20% ± 0.41% for (90)Y and -0.36% ± 0.51% for (131)I (n = 20). For the hepatic tumor, the difference in the average absorbed dose to tumor was 0.33% for (90)Y and -0.61% for (131)I and the difference in average absorbed dose to the liver was 0.25% for (90)Y and -1.35% for (131)I. The comparison with the 3D-RD software showed an average voxel-to-voxel dose ratio between 0.991 and 0.996. The calculation time was below 10 s with the VSV approach and 50 and 15 h with 3D-RD for the 2 clinical patients. 
This new VSV approach enables the calculation of absorbed dose based on a SPECT or PET cumulated activity map, with good agreement with direct MC methods, in a faster and more clinically compatible manner.
Neutron spectra due to (13)N production in a PET cyclotron.
Benavente, J A; Vega-Carrillo, H R; Lacerda, M A S; Fonseca, T C F; Faria, F P; da Silva, T A
2015-05-01
Monte Carlo and experimental methods have been used to characterize the neutron radiation field around PET (Positron Emission Tomography) cyclotrons. In this work, the Monte Carlo code MCNPX was used to estimate the neutron spectra, the neutron fluence rates and the ambient dose equivalent (H*(10)) at seven locations around a PET cyclotron during (13)N production. In order to validate these calculations, H*(10) was measured at three sites and compared with the calculated doses. All the spectra have two peaks, one above 0.1 MeV due to evaporation neutrons and another in the thermal region due to room-return effects. Despite the relatively large difference between the measured and calculated H*(10) for one point, the agreement was considered good compared with that obtained for (18)F production in a previous work. Copyright © 2015 Elsevier Ltd. All rights reserved.
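Once a neutron spectrum has been tallied, H*(10) follows by folding the binned fluence with fluence-to-ambient-dose-equivalent conversion coefficients. A minimal sketch with made-up two-bin numbers (real calculations use the ICRP coefficient tables over many energy bins):

```python
def ambient_dose_equivalent(fluence_per_bin, h10_per_bin):
    """H*(10) = sum_i phi_i * h_i: fold a binned neutron fluence
    with fluence-to-H*(10) conversion coefficients, bin by bin."""
    return sum(phi * h for phi, h in zip(fluence_per_bin, h10_per_bin))

# Two illustrative bins: thermal and evaporation-peak neutrons.
h_star = ambient_dose_equivalent([1.0e4, 5.0e3], [1.0e-5, 4.0e-4])
```

The same folding, with the fluence expressed as a rate, yields the H*(10) rate compared against the survey-meter measurements.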
Shielding and activation calculations around the reactor core for the MYRRHA ADS design
NASA Astrophysics Data System (ADS)
Ferrari, Anna; Mueller, Stefan; Konheiser, J.; Castelliti, D.; Sarotto, M.; Stankovskiy, A.
2017-09-01
In the frame of the FP7 European project MAXSIMA, an extensive simulation study has been carried out to assess the main shielding problems in view of the construction of the MYRRHA accelerator-driven system at SCK·CEN in Mol (Belgium). An innovative method based on the combined use of the two state-of-the-art Monte Carlo codes MCNPX and FLUKA has been used, with the goal of characterizing the complex, realistic neutron fields around the core barrel, to be used as source terms in detailed analyses of the radiation fields due to the system in operation, and of the coupled residual radiation. The main results of the shielding analysis are presented, as well as the construction of an activation database of all the key structural materials. The results demonstrated a powerful way to analyse shielding and activation problems, with direct and clear implications for the design solutions.
Radiation damage calculations for the SINQ Target 5
NASA Astrophysics Data System (ADS)
Wechsler, Monroe S.; Lu, Wei; Dai, Yong
2003-03-01
Calculations of radiation damage (production of displacements, helium, and hydrogen) are underway for Target 5 of the SINQ spallation neutron source at the Paul Scherrer Institute in Switzerland. The target is bombarded by 575-MeV protons, and the spallation-neutron-producing target material is liquid lead. The calculations employ the Monte Carlo code MCNPX (version 2.3.0). The peak proton and neutron fluxes at the aluminum-alloy entrance window are determined to be about 1.9E14 protons/cm2s per mA of incident proton current and 2.4E13 neutrons/cm2s per mA. For a beam exposure of 10 Ah, the peak damage sustained at the entrance window due to protons and neutrons combined is calculated to be 7.8 dpa, 2000 appm He, and 4000 appm H. The significance of the damage results for the entrance window and components within Target 5 is discussed.
Microtron MT 25 as a source of neutrons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kralik, M.; Solc, J.; Chvatil, D.
2012-08-15
The objective was to describe the Microtron MT25 as a source of neutrons generated by bremsstrahlung-induced photonuclear reactions in U and Pb targets. Bremsstrahlung photons were produced by electrons accelerated to an energy of 21.6 MeV. The spectral fluence of the generated neutrons was calculated with the MCNPX code and then experimentally determined at two positions by means of a Bonner sphere spectrometer in which the thermal neutron detector was replaced by activation Mn tablets or CR-39 track detectors with a {sup 10}B radiator. The measured neutron spectral fluence and the calculated anisotropy served for the estimation of the neutron yield from the targets and for the determination of the ambient dose equivalent rate at the place of measurement. The Microtron MT25 is intended as one of the sources for testing neutron-sensitive devices that will be sent into space.
Cancer risk coefficient for patient undergoing kyphoplasty surgery using Monte Carlo method
NASA Astrophysics Data System (ADS)
Santos, Felipe A.; Santos, William S.; Galeano, Diego C.; Cavalcante, Fernanda R.; Silva, Ademir X.; Souza, Susana O.; Júnior, Albérico B. Carvalho
2017-11-01
Kyphoplasty surgery is widely used for pain relief in patients with vertebral compression fracture (VCF). For this surgery, an X-ray emitter that provides real-time imaging is employed to guide the medical instruments and the surgical cement used to fill and strengthen the vertebra. Equivalent and effective doses related to high-temporal-resolution equipment have been studied to assess the damage and, more recently, the cancer risk. For this study, a virtual scenario was prepared using the MCNPX code and a pair of UF-family phantoms. Two projections, with seven tube voltages each, were simulated. The organs in the abdominal region were those with the highest cancer risk because they receive the primary beam. The risk of lethal cancer is on average 20% higher in the AP projection than in the LL projection. This study aims to estimate the risk of cancer in organs and the risk of lethal cancer for patients undergoing kyphoplasty surgery.
A Monte Carlo model for photoneutron generation by a medical LINAC
NASA Astrophysics Data System (ADS)
Sumini, M.; Isolan, L.; Cucchi, G.; Sghedoni, R.; Iori, M.
2017-11-01
For optimal tuning of radiation protection planning, a Monte Carlo model using the MCNPX code has been built, allowing an accurate estimate of the spectrometric and geometrical characteristics of photoneutrons generated by a Varian TrueBeam Stx© medical linear accelerator. We considered in our study a device working at the reference energy for clinical applications of 15 MV, derived from a Varian Clinac©2100 modeled starting from data collected from several papers available in the literature. The model results were compared with neutron and photon dose measurements inside and outside the bunker hosting the accelerator, obtaining a complete dose map. Normalized neutron fluences were tallied in different positions at the patient plane and at different depths. A sensitivity analysis with respect to the flattening filter material was performed to highlight aspects that could influence the photoneutron production.
NASA Astrophysics Data System (ADS)
Smirnov, A. N.; Pietropaolo, A.; Prokofiev, A. V.; Rodionova, E. E.; Frost, C. D.; Ansell, S.; Schooneveld, E. M.; Gorini, G.
2012-09-01
The high-energy neutron field of the VESUVIO instrument at the ISIS facility has been characterized using the technique of thin-film breakdown counters (TFBC). The technique utilizes neutron-induced fission reactions of natU and 209Bi with detection of fission fragments by TFBCs. Experimentally determined count rates of the fragments are ≈50% higher than those calculated using spectral neutron flux simulated with the MCNPX code. This work is a part of the project to develop ChipIr, a new dedicated facility for the accelerated testing of electronic components and systems for neutron-induced single event effects in the new Target Station 2 at ISIS. The TFBC technique has shown to be applicable for on-line monitoring of the neutron flux in the neutron energy range 1-800 MeV at the position of the device under test (DUT).
NASA Astrophysics Data System (ADS)
Borella, Alessandro
2016-09-01
The Belgian Nuclear Research Centre is engaged in R&D activity in the field of Non-Destructive Analysis of nuclear materials, with a focus on spent fuel characterization. A 500 mm3 Cadmium Zinc Telluride (CZT) detector with enhanced resolution was recently purchased. With a full width at half maximum of 1.3% at 662 keV, the detector is very promising in view of its use for applications such as determination of uranium enrichment and plutonium isotopic composition, as well as measurements on spent fuel. In this paper, I report on the work done with this detector in terms of its characterization. The detector energy calibration, peak shape and efficiency were determined from experimental data. The data included measurements with calibrated sources, both in a bare and in a shielded environment. In addition, Monte Carlo calculations with the MCNPX code were carried out and benchmarked against experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tobin, Stephen J.; Lundkvist, Niklas; Goodsell, Alison V.
In this study, Monte Carlo simulations were performed for the differential die-away (DDA) technique to analyse the time-dependent behaviour of the neutron population in fresh and spent nuclear fuel assemblies as part of the Next Generation Safeguards Initiative Spent Fuel (NGSI-SF) Project. Simulations were performed to investigate both a possibly portable as well as a permanent DDA instrument. Taking advantage of a custom-made modification to the MCNPX code, the variation in the neutron population, simultaneously in time and space, was examined. The motivation for this research was to improve the design of the DDA instrument, as it is being considered for possible deployment at the Central Storage of Spent Nuclear Fuel and Encapsulation Plant in Sweden (Clab), as well as to assist in the interpretation of both simulated and measured signals.
Feasibility study of using laser-generated neutron beam for BNCT.
Kasesaz, Y; Rahmani, F; Khalafi, H
2015-09-01
The feasibility of using a laser-accelerated proton beam to produce a neutron source, via the (p,n) reaction, for Boron Neutron Capture Therapy (BNCT) applications has been studied with the MCNPX Monte Carlo code. After optimization of the target material and its thickness, a Beam Shaping Assembly (BSA) has been designed and optimized to provide an appropriate neutron beam according to the criteria recommended by the International Atomic Energy Agency. It was found that the considered laser-accelerated proton beam can provide an epithermal neutron fluence of ∼2×10(6) n/cm(2) per shot. To achieve an appropriate epithermal neutron flux for BNCT treatment, the laser must operate at repetition rates of 1 kHz, which is rather ambitious at this moment. The beam can, however, be used in some BNCT research fields, such as biological research. Copyright © 2015 Elsevier Ltd. All rights reserved.
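The repetition-rate requirement quoted above is simple arithmetic: a per-shot epithermal fluence of ~2×10(6) n/cm(2) at a 1 kHz repetition rate gives a time-averaged flux of 2×10(9) n/cm(2)/s, the order of magnitude generally cited as needed for BNCT treatment beams:

```python
def epithermal_flux(per_shot_fluence, repetition_rate_hz):
    """Time-averaged flux (n/cm^2/s) from a per-shot fluence
    (n/cm^2 per shot) and the laser repetition rate (Hz)."""
    return per_shot_fluence * repetition_rate_hz

flux = epithermal_flux(2.0e6, 1.0e3)  # -> 2e9 n/cm^2/s at 1 kHz
```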
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schear, Melissa A; Tobin, Stephen J
2009-01-01
The {sup 252}Cf shuffler has been widely used in nuclear safeguards and radioactive waste management to assay fissile isotopes, such as {sup 235}U or {sup 239}Pu, present in a variety of samples, ranging from small cans of uranium waste to metal samples weighing several kilograms. Like other non-destructive assay instruments, the shuffler uses an interrogating neutron source to induce fissions in the sample. Although shufflers with {sup 252}Cf sources have been reliably used for several decades, replacing this isotopic source with a neutron generator presents some distinct advantages. Neutron generators can be run in a continuous or pulsed mode, andmore » may be turned off, eliminating the need for shielding and a shuffling mechanism in the shuffler. There is also essentially no dose to personnel during installation, and no reliance on the availability of {sup 252}Cf. Despite these advantages, the more energetic neutrons emitted from the neutron generator (141 MeV for D-T generators) present some challenges for certain material types. For example when the enrichment of a uranium sample is unknown, the fission of {sup 238}U is generally undesirable. Since measuring uranium is one of the main uses of a shuffler, reducing the delayed neutron contribution from {sup 238}U is desirable. Hence, the shuffler hardware must be modified to accommodate a moderator configuration near the source to tailor the interrogating spectrum in a manner which promotes sub-threshold fissions (below 1 MeV) but avoids the over-moderation of the interrogating neutrons so as to avoid self-shielding. In this study, where there are many material and geometry combinations, the Monte Carlo N-Particle eXtended (MCNPX) transport code was used to model, design, and optimize the moderator configuration within the shuffler geometry. The code is then used to evaluate and compare the assay performances of both the modified shuffler and the current {sup 252}Cf shuffler designs for different test samples. 
The matrix effect and the non-uniformity of the interrogating flux are investigated and quantified in each case. The modified geometry proposed by this study can serve as a guide in retrofitting shufflers that are already in use.
1975-09-01
This report assumes a familiarity with the GIFT and MAGIC computer codes. The EDIT-COMGEOM code is a FORTRAN computer code that converts the target description data used in the MAGIC computer code to the target description data that can be used in the GIFT computer code.
Radiation environment at LEO orbits: MC simulation and experimental data.
NASA Astrophysics Data System (ADS)
Zanini, Alba; Borla, Oscar; Damasso, Mario; Falzetta, Giuseppe
The evaluation of the different components of the radiation environment in spacecraft, both in LEO orbits and in deep space, is of great importance because the biological effect on humans and the risk to instrumentation strongly depend on the kind of radiation (high or low LET). This is especially important in view of long-term manned or unmanned space missions (missions to Mars, solar system exploration). The study of the space radiation field is extremely complex and not completely solved to date. Given this complexity, an accurate dose evaluation should be considered an indispensable part of any space mission. Two simulation codes (MCNPX and GEANT4) have been used to assess the secondary radiation inside the FOTON M3 satellite and the ISS. The energy spectra of primary radiation at LEO orbits have been modelled using various tools (SPENVIS, OMERE, CREME96), considering separately Van Allen protons, GCR protons and GCR alpha particles. These data are used as input for the two MC codes and transported inside the spacecraft. The results of the two calculation methods have been compared. Moreover, some experimental results previously obtained on the FOTON M3 satellite using TLD, bubble dosimeters and the LIULIN detector are considered to check the performance of the two codes. Finally, the same experimental devices are at present collecting data on the ISS (ASI experiment BIOKIS-nDOSE), and at the end of the mission the results will be compared with the calculations.
Spallation neutron production and the current intra-nuclear cascade and transport codes
NASA Astrophysics Data System (ADS)
Filges, D.; Goldenbaum, F.; Enke, M.; Galin, J.; Herbach, C.-M.; Hilscher, D.; Jahnke, U.; Letourneau, A.; Lott, B.; Neef, R.-D.; Nünighoff, K.; Paul, N.; Péghaire, A.; Pienkowski, L.; Schaal, H.; Schröder, U.; Sterzenbach, G.; Tietze, A.; Tishchenko, V.; Toke, J.; Wohlmuther, M.
A recent renascent interest in energetic proton-induced production of neutrons originates largely from the inception of projects for target stations of intense spallation neutron sources, like the planned European Spallation Source (ESS), accelerator-driven nuclear reactors, nuclear waste transmutation, and also from the application for radioactive beams. In the framework of such neutron production, of major importance is the search for the most efficient conversion of the primary beam energy into neutron production. Although the issue has been quite successfully addressed experimentally by varying the incident proton energy for various target materials and by covering a huge collection of different target geometries (providing an exhaustive matrix of benchmark data), the ultimate challenge is to increase the predictive power of the transport codes currently on the market. To scrutinize these codes, calculations of reaction cross-sections, hadronic interaction lengths, average neutron multiplicities, neutron multiplicity and energy distributions, and the development of hadronic showers are confronted with recent experimental data of the NESSI collaboration. Program packages like HERMES, LCS or MCNPX predict reaction cross-sections, hadronic interaction lengths, average neutron multiplicities and neutron multiplicity distributions in thick and thin targets, for a wide spectrum of incident proton energies and target shapes and materials, generally to within less than 10% deviation, while production cross-section measurements for light charged particles on thin targets show that appreciable differences exist among these models.
Dose conversion coefficients for neutron exposure to the lens of the human eye.
Manger, R P; Bellamy, M B; Eckerman, K F
2012-03-01
Dose conversion coefficients for the lens of the human eye have been calculated for neutron exposure at energies from 1 × 10^-9 to 20 MeV and several standard orientations: anterior-to-posterior, rotational and right lateral. MCNPX version 2.6.0, a Monte Carlo-based particle transport package, was used to determine the energy deposited in the lens of the eye. The human eyeball model was updated by partitioning the lens into sensitive and insensitive volumes, since the anterior portion (the sensitive volume) of the lens is more radiosensitive and prone to cataract formation. The updated eye model was used with the adult UF-ORNL mathematical phantom in the MCNPX transport calculations.
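Conversion coefficients of this kind are applied by folding them with the incident neutron fluence spectrum. A minimal sketch of that step follows; the function name, units and numbers are illustrative assumptions, not taken from the paper:

```python
# Folding fluence-to-dose conversion coefficients with a neutron spectrum.
# dcc_pgy_cm2[i] is the coefficient (pGy·cm^2) at energy bin i;
# fluence_per_cm2[i] is the neutron fluence (cm^-2) in that bin.
def lens_dose(fluence_per_cm2, dcc_pgy_cm2):
    """Total lens dose (pGy) as the bin-wise sum of fluence times coefficient."""
    return sum(f * c for f, c in zip(fluence_per_cm2, dcc_pgy_cm2))

# Two-bin toy spectrum (all values invented for illustration)
dose = lens_dose([1.0e6, 2.0e6], [1.0e-12, 2.0e-12])
```

The same fold applies unchanged to any of the three irradiation geometries; only the coefficient set changes.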
Nosratieh, Anita; Hernandez, Andrew; Shen, Sam Z; Yaffe, Martin J; Seibert, J Anthony; Boone, John M
2015-09-21
To develop tables of normalized glandular dose coefficients D(g)N for a range of anode-filter combinations and tube voltages used in contemporary breast imaging systems. Previously published mono-energetic D(g)N values were used with various spectra to mathematically compute D(g)N coefficients. The tungsten anode spectra from TASMICS were used; molybdenum and rhodium anode spectra were generated using the MCNPX Monte Carlo code. The spectra were filtered with various thicknesses of Al, Rh, Mo or Cu. An initial half value layer (HVL) calculation was made using the anode and filter material. A range of HVL values was produced with the addition of small thicknesses of polymethyl methacrylate (PMMA), as a surrogate for the breast compression paddle, to produce a range of HVL values at each tube voltage. Using a spectral weighting method, D(g)N coefficients for the generated spectra were calculated for breast glandular densities of 0%, 12.5%, 25%, 37.5%, 50% and 100% for a range of compressed breast thicknesses from 3 to 8 cm. Eleven tables of normalized glandular dose (D(g)N) coefficients were produced for the following anode/filter combinations: W + 50 μm Ag, W + 500 μm Al, W + 700 μm Al, W + 200 μm Cu, W + 300 μm Cu, W + 50 μm Rh, Mo + 400 μm Cu, Mo + 30 μm Mo, Mo + 25 μm Rh, Rh + 400 μm Cu and Rh + 25 μm Rh. Where possible, these results were compared to previously published D(g)N values and were found to be on average within 2% of previously reported values. Over 200 pages of D(g)N coefficients were computed for modeled x-ray system spectra that are used in a number of new breast imaging applications. The reported values were found to be in excellent agreement when compared to published values.
Mean Glandular dose coefficients (DgN) for x-ray spectra used in contemporary breast imaging systems
Nosratieh, Anita; Hernandez, Andrew; Shen, Sam Z.; Yaffe, Martin J.; Seibert, J. Anthony; Boone, John M.
2015-01-01
Purpose To develop tables of normalized glandular dose coefficients DgN for a range of anode-filter combinations and tube voltages used in contemporary breast imaging systems. Methods Previously published mono-energetic DgN values were used with various spectra to mathematically compute DgN coefficients. The tungsten anode spectra from TASMICS were used; molybdenum and rhodium anode spectra were generated using the MCNPX Monte Carlo code. The spectra were filtered with various thicknesses of Al, Rh, Mo or Cu. An initial HVL calculation was made using the anode and filter material. A range of HVL values was produced with the addition of small thicknesses of polymethyl methacrylate (PMMA), as a surrogate for the breast compression paddle, to produce a range of HVL values at each tube voltage. Using a spectral weighting method, DgN coefficients for the generated spectra were calculated for breast glandular densities of 0%, 12.5%, 25%, 37.5%, 50% and 100% for a range of compressed breast thicknesses from 3 to 8 cm. Results Eleven tables of normalized glandular dose (DgN) coefficients were produced for the following anode/filter combinations: W + 50 μm Ag, W + 500 μm Al, W + 700 μm Al, W + 200 μm Cu, W + 300 μm Cu, W + 50 μm Rh, Mo + 400 μm Cu, Mo + 30 μm Mo, Mo + 25 μm Rh, Rh + 400 μm Cu and Rh + 25 μm Rh. Where possible, these results were compared to previously published DgN values and were found to be on average within 2% of previously reported values. Conclusion Over 200 pages of DgN coefficients were computed for modeled x-ray system spectra that are used in a number of new breast imaging applications. The reported values were found to be in excellent agreement when compared to published values. PMID:26348995
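The spectral weighting step described above can be sketched as follows. The energy-fluence weighting convention shown is a common choice and an assumption here, not a quote of the authors' exact formula, and all numbers are invented:

```python
# Spectrum-weighted DgN from mono-energetic values. Assumed convention:
# each energy bin is weighted by its energy fluence, phi(E) * E.
def spectrum_weighted_dgn(energies_kev, photon_fluence, dgn_mono):
    """Weighted average of mono-energetic DgN values over a spectrum."""
    weights = [p * e for p, e in zip(photon_fluence, energies_kev)]
    num = sum(w * d for w, d in zip(weights, dgn_mono))
    return num / sum(weights)

# Toy two-bin spectrum: equal photon fluence at 20 and 30 keV
dgn = spectrum_weighted_dgn([20.0, 30.0], [1.0, 1.0], [0.4, 0.6])
```

In the paper, the spectra themselves come from TASMICS (tungsten) or MCNPX (Mo, Rh) after filtration; here they are simply inputs.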
Development of the two Korean adult tomographic computational phantoms for organ dosimetry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Choonsik; Lee, Choonik; Park, Sang-Hyun
2006-02-15
Following the previously developed Korean tomographic phantom, KORMAN, two additional whole-body tomographic phantoms of Korean adult males were developed from magnetic resonance (MR) and computed tomography (CT) images, respectively. Two healthy male volunteers, whose body dimensions were fairly representative of the average Korean adult male, were recruited and scanned for phantom development. Contiguous whole-body MR images were obtained from one subject, exclusive of the arms, while whole-body CT images were acquired from the second individual. A total of 29 organs and tissues and 19 skeletal sites were segmented via image manipulation techniques such as gray-level thresholding, region growing, and manual drawing, in which each segmented image slice was subsequently reviewed by an experienced radiologist for anatomical accuracy. The resulting phantoms, the MR-based KTMAN-1 (Korean Typical MAN-1) and the CT-based KTMAN-2 (Korean Typical MAN-2), consist of 300x150x344 voxels with a voxel resolution of 2x2x5 mm{sup 3} for both phantoms. Masses of segmented organs and tissues were calculated as the product of a nominal reference density, the per-voxel volume, and the cumulative number of voxels defining each organ or tissue. These organ masses were then compared with those of both the Asian and the ICRP reference adult male. Organ masses within both KTMAN-1 and KTMAN-2 showed differences within 40% of Asian and ICRP reference values, with the exception of the skin, gall bladder, and pancreas, which displayed larger differences. The resulting three-dimensional binary file was ported to the Monte Carlo code MCNPX2.4 to calculate organ doses following external irradiation for illustrative purposes.
Colon, lung, liver, and stomach absorbed doses, as well as the effective dose, for idealized photon irradiation geometries (anterior-posterior and right lateral) were determined, and then compared with data from two other tomographic phantoms (Asian and Caucasian) and the stylized ORNL phantom. The armless KTMAN-1 can be applied to dosimetry for computed tomography or lateral x-ray examinations, while the whole-body KTMAN-2 can be used for radiation protection dosimetry.
NASA Astrophysics Data System (ADS)
van den Akker, Mary Evelyn
Radon is considered the second-leading cause of lung cancer after smoking. Epidemiological studies have been conducted in miner cohorts as well as general populations to estimate the risks associated with high and low dose exposures. There are problems with extrapolating risk estimates to low dose exposures, mainly that the dose-response curve at low doses is not well understood. Calculated dosimetric quantities give average energy depositions in an organ or a whole body, but morphological features of an individual can affect these values. As opposed to human phantom models, Computed Tomography (CT) scans provide unique, patient-specific geometries that are valuable in modeling the radiological effects of the short-lived radon progeny sources. Monte Carlo particle transport code Geant4 was used with the CT scan data to model radon inhalation in the main bronchial bifurcation. The equivalent dose rates are near the lower bounds of estimates found in the literature, depending on source volume. To complement the macroscopic study, simulations were run in a small tissue volume in Geant4-DNA toolkit. As an expansion of Geant4 meant to simulate direct physical interactions at the cellular level, the particle track structure of the radon progeny alphas can be analyzed to estimate the damage that can occur in sensitive cellular structures like the DNA molecule. These estimates of DNA double strand breaks are lower than those found in Geant4-DNA studies. Further refinements of the microscopic model are at the cutting edge of nanodosimetry research.
Designing an extended energy range single-sphere multi-detector neutron spectrometer
NASA Astrophysics Data System (ADS)
Gómez-Ros, J. M.; Bedogni, R.; Moraleda, M.; Esposito, A.; Pola, A.; Introini, M. V.; Mazzitelli, G.; Quintieri, L.; Buonomo, B.
2012-06-01
This communication describes the design specifications for a neutron spectrometer consisting of 31 thermal neutron detectors, namely dysprosium activation foils, embedded in a 25 cm diameter polyethylene sphere which includes a 1 cm thick lead shell insert that degrades the energy of neutrons through (n,xn) reactions, thus extending the energy range of the response up to hundreds of MeV. The new spectrometer, called SP2 (SPherical SPectrometer), relies on the same detection mechanism as the Bonner Sphere Spectrometer, but with the advantage of determining the whole neutron spectrum in a single exposure. The Monte Carlo transport code MCNPX was used to design the spectrometer in terms of sphere diameter, number and position of the detectors, and position and thickness of the lead shell, as well as to obtain the response matrix for the final configuration. This work focuses on evaluating the spectrometric capabilities of the SP2 design by simulating the exposure of SP2 in neutron fields representing different irradiation conditions (test spectra). The simulated SP2 readings were then unfolded with the FRUIT unfolding code, in the absence of detailed pre-information, and the unfolded spectra were compared with the known test spectra. The results are satisfactory and supported approval of the production of a prototype spectrometer.
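The unfolding step relates the detector readings M to the spectrum φ through the response matrix R, M = R φ. FRUIT solves this iteratively with parametric spectra; the toy two-detector, two-group direct solve below only illustrates the relationship, and every number in it is invented:

```python
# Toy 2-detector, 2-energy-group version of M = R @ phi, solved exactly by
# Cramer's rule. Real multi-sphere unfolding (e.g. FRUIT) is iterative and
# regularized, since R is rectangular and ill-conditioned.
def unfold_2x2(R, M):
    (a, b), (c, d) = R
    det = a * d - b * c
    return [(d * M[0] - b * M[1]) / det, (a * M[1] - c * M[0]) / det]

R = [[2.0, 1.0], [1.0, 3.0]]  # response of detector i to energy group j
M = [4.0, 7.0]                # readings generated from phi = [1, 2]
phi = unfold_2x2(R, M)
```

Validating a design this way, by folding known test spectra through R, simulating readings and unfolding them back, is exactly the closed-loop check the abstract describes.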
A New Capability for Nuclear Thermal Propulsion Design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amiri, Benjamin W.; Nuclear and Radiological Engineering Department, University of Florida, Gainesville, FL 32611; Kapernick, Richard J.
2007-01-30
This paper describes a new capability for Nuclear Thermal Propulsion (NTP) design that has been developed, and presents the results of some analyses performed with this design tool. The purpose of the tool is to design to specified mission and material limits, while maximizing system thrust to weight. The head end of the design tool utilizes the ROCket Engine Transient Simulation (ROCETS) code to generate a system design and system design requirements as inputs to the core analysis. ROCETS is a modular system-level code which has been used extensively in the liquid rocket engine industry for many years. The core design tool performs high-fidelity reactor core nuclear and thermal-hydraulic design analysis. At the heart of this process are two codes, TMSS-NTP and NTPgen, which together greatly automate the analysis, providing the capability to rapidly produce designs that meet all specified requirements while minimizing mass. A PERL-based command script, called CORE DESIGNER, controls the execution of these two codes and checks for convergence throughout the process. TMSS-NTP is executed first, to produce a suite of core designs that meet the specified reactor core mechanical, thermal-hydraulic and structural requirements. The suite of designs consists of a set of core layouts and, for each core layout, specific designs that span a range of core fuel volumes. NTPgen generates MCNPX models for each of the core designs from TMSS-NTP. Iterative analyses are performed in NTPgen until a reactor design (fuel volume) is identified for each core layout that meets cold and hot operation reactivity requirements and that is zoned to meet a radial core power distribution requirement.
A method to calculate the gamma ray detection efficiency of a cylindrical NaI (Tl) crystal
NASA Astrophysics Data System (ADS)
Ahmadi, S.; Ashrafi, S.; Yazdansetad, F.
2018-05-01
Given the wide range of applications of NaI(Tl) detectors in industrial and medical sectors, computation of the detection efficiency at different distances from a radioactive source, especially for calibration purposes, is a common subject of radiation detection studies. In this work, a cylindrical NaI(Tl) scintillator, 2 in. in both radius and height, was used, and by changing the radial, axial, and diagonal positions of an isotropic 137Cs point source relative to the detector, the solid angles and the interaction probabilities of gamma photons with the detector's sensitive volume have been calculated. The calculations express the geometric and intrinsic efficiency as functions of the detector's dimensions and the position of the source. The calculation model is in good agreement with both experiment and MCNPX simulation.
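For the special case of an on-axis point source, the geometric efficiency reduces to a standard closed form; off-axis and diagonal positions, as treated in the paper, require numerical integration. A sketch under that on-axis assumption:

```python
import math

# Solid angle of a disc (detector front face, radius r) seen from an on-axis
# point source at distance d: Omega = 2*pi*(1 - d / sqrt(d^2 + r^2)).
def on_axis_geometric_efficiency(d_cm, r_cm):
    """Geometric efficiency = Omega / (4*pi), the fraction of emitted
    photons whose direction intersects the detector face."""
    return 0.5 * (1.0 - d_cm / math.hypot(d_cm, r_cm))

# Source one radius away from a 2 in. (5.08 cm) radius crystal
eff = on_axis_geometric_efficiency(5.08, 5.08)
```

The total efficiency is then the product of this geometric factor and the intrinsic efficiency (the interaction probability within the crystal), which is the decomposition the abstract describes.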
Dose conversion coefficients for neutron exposure to the lens of the human eye
DOE Office of Scientific and Technical Information (OSTI.GOV)
Manger, Ryan P; Bellamy, Michael B; Eckerman, Keith F
Dose conversion coefficients for the lens of the human eye have been calculated for neutron exposure at energies from 1 x 10{sup -9} to 20 MeV and several standard orientations: anterior-to-posterior, rotational and right lateral. MCNPX version 2.6.0, a Monte Carlo-based particle transport package, was used to determine the energy deposited in the lens of the eye. The human eyeball model was updated by partitioning the lens into sensitive and insensitive volumes, since the anterior portion (the sensitive volume) of the lens is more radiosensitive and prone to cataract formation. The updated eye model was used with the adult UF-ORNL mathematical phantom in the MCNPX transport calculations.
Preliminary calibration of the ACP safeguards neutron counter
NASA Astrophysics Data System (ADS)
Lee, T. H.; Kim, H. D.; Yoon, J. S.; Lee, S. Y.; Swinhoe, M.; Menlove, H. O.
2007-10-01
The Advanced Spent Fuel Conditioning Process (ACP), a kind of pyroprocess, has been developed at the Korea Atomic Energy Research Institute (KAERI). Since there are no IAEA safeguards criteria for this process, KAERI has developed a neutron coincidence counter to make it possible to perform material control and accounting (MC&A) for its ACP materials, for the purpose of transparency in the peaceful uses of nuclear materials at KAERI. The test results of the ACP Safeguards Neutron Counter (ASNC) show a satisfactory performance for the Doubles count measurement, with a low measurement error for its cylindrical sample cavity. The neutron detection efficiency is about 21%, with an error of ±1.32% along the axial direction of the cavity. Using two 252Cf neutron sources, we obtained various parameters for the Singles and Doubles rates of the ASNC. The Singles, Doubles, and Triples rates for a 252Cf point source were obtained using the MCNPX code, and the results for the FT8 CAP multiplicity tally option, with the values of ɛ, fd, and ft measured with a strong source, most closely match the measurement results to within a 1% error. A preliminary calibration curve for the ASNC was generated using the point model equation relationship between 244Cm and 252Cf; the calibration coefficient for a non-multiplying sample is 2.78×10^5 (Doubles counts/s/g 244Cm). The preliminary calibration curves for the ACP samples were also obtained using an MCNPX simulation. A neutron multiplication influence on an increase of the Doubles rate for a metal ingot and UO2 powder is clearly observed. These calibration curves will be modified and complemented when hot calibration samples become available. To verify the validity of this calibration curve, a measurement of spent fuel standards with a known 244Cm mass will be performed in the near future.
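With the calibration coefficient quoted above, the point-model mass estimate for a non-multiplying sample is a one-line inversion. This is only a sketch of how such a coefficient is used; the measured rate below is hypothetical, and the real analysis also corrects for multiplication and (α,n) contributions:

```python
# Doubles rate -> 244Cm mass using the non-multiplying calibration
# coefficient from the abstract: 2.78e5 Doubles counts/s per gram 244Cm.
CAL_DOUBLES_PER_S_PER_G = 2.78e5

def cm244_mass_g(doubles_rate_cps):
    """Point-model mass estimate for a non-multiplying sample."""
    return doubles_rate_cps / CAL_DOUBLES_PER_S_PER_G

mass = cm244_mass_g(1.39e5)  # hypothetical measured Doubles rate
```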
SU-E-T-523: On the Radiobiological Impact of Lateral Scatter in Proton Beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heuvel, F Van den; Deruysscher, D
2014-06-01
Introduction: In proton therapy, justified concern has been voiced about the increased cell-kill efficiency at the distal end of the Bragg peak. Coupled with range uncertainty, this is a contraindication to using the Bragg peak to define the border between a treated volume and a critical organ. An alternative is to use the lateral edge of the proton beam, yielding more robust plans. We investigate the spectral and biological effects of the lateral scatter. Methods: A general-purpose Monte Carlo simulation engine (MCNPX 2.7c), installed on a Scientific Linux cluster, calculated the dose deposition spectrum of protons, knock-on electrons and generated neutrons for a proton beam with a maximal kinetic energy of 200 MeV. Around the beam, at different positions along the beam direction, the spectrum is calculated in concentric rings of thickness 1 cm. The deposited dose is converted to a double strand break map using an analytical expression based on microdosimetric calculations with a phenomenological Monte Carlo code (MCDS). A strict version of RBE is defined as the ratio of the generation of double strand breaks in the different modalities. To generate the reference, a Varian linac was modelled in MCNPX and the generated electron dose deposition spectrum was used. Results: For a pristine 200 MeV point-source beam, the RBE before the Bragg peak was of the order of 1.1, increasing to 1.7 right behind the Bragg peak. When using a physically more realistic beam of 10 cm diameter the effect was smaller. Both the lateral dose and the RBE increased with increasing beam depth, generating a dose deposition with a mixed biological effect. Conclusions: The dose deposition in proton beams needs to be carefully examined because the biological effect will differ depending on the treatment geometry. Deeply penetrating proton beams generate more biologically effective lateral scatter.
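The strict RBE definition used in this abstract is simply a ratio of double-strand-break yields at equal absorbed dose; a sketch with invented yields:

```python
# Strict RBE: ratio of double-strand-break induction per unit dose between
# the proton field and the reference photon/electron field.
def rbe_from_dsb(dsb_per_gy_protons, dsb_per_gy_reference):
    return dsb_per_gy_protons / dsb_per_gy_reference

rbe = rbe_from_dsb(11.0, 10.0)  # made-up yields corresponding to RBE = 1.1
```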
NASA Astrophysics Data System (ADS)
Crites, S. T.; Lucey, P. G.; Lawrence, D. J.
2013-11-01
Galactic cosmic rays are a potential energy source to stimulate organic synthesis from simple ices. The recent detection of organic molecules at the polar regions of the Moon by LCROSS (Colaprete, A. et al. [2010]. Science 330, 463-468, http://dx.doi.org/10.1126/science.1186986), and possibly at the poles of Mercury (Paige, D.A. et al. [2013]. Science 339, 300-303, http://dx.doi.org/10.1126/science.1231106), introduces the question of whether the organics were delivered by impact or formed in situ. Laboratory experiments show that high energy particles can cause organic production from simple ices. We use a Monte Carlo particle scattering code (MCNPX) to model and report the flux of GCR protons at the surface of the Moon and report radiation dose rates and absorbed doses at the Moon’s surface and with depth as a result of GCR protons and secondary particles, and apply scaling factors to account for contributions to dose from heavier ions. We compare our results with dose rate measurements by the Cosmic Ray Telescope for the Effects of Radiation (CRaTER) experiment on Lunar Reconnaissance Orbiter (Schwadron, N.A. et al. [2012]. J. Geophys. Res. 117, E00H13, http://dx.doi.org/10.1029/2011JE003978) and find them in good agreement, indicating that MCNPX can be confidently applied to studies of radiation dose at and within the surface of the Moon. We use our dose rate calculations to conclude that organic synthesis is plausible well within the age of the lunar polar cold traps, and that organics detected at the poles of the Moon may have been produced in situ. Our dose rate calculations also indicate that galactic cosmic rays can induce organic synthesis within the estimated age of the dark deposits at the pole of Mercury that may contain organics.
Mesbahi, Asghar; Ghiasi, Hosein
2018-06-01
The shielding properties of ordinary concrete doped with some micro and nano scaled materials were studied in the current study. Narrow beam geometry was simulated using MCNPX Monte Carlo code and the mass attenuation coefficient of ordinary concrete doped with PbO 2 , Fe 2 O 3 , WO 3 and H 4 B (Boronium) in both nano and micro scales was calculated for photon and neutron beams. Mono-energetic beams of neutrons (100-3000 keV) and photons (142-1250 keV) were used for calculations. The concrete doped with nano-sized particles showed higher neutron removal cross section (7%) and photon attenuation coefficient (8%) relative to micro-particles. Application of nano-sized material in the composition of new concretes for dual protection against neutrons and photons are recommended. For further studies, the calculation of attenuation coefficients of these nano-concretes against higher energies of neutrons and photons and different particles are suggested. Copyright © 2018 Elsevier Ltd. All rights reserved.
Research on stellarator-mirror fission-fusion hybrid
NASA Astrophysics Data System (ADS)
Moiseenko, V. E.; Kotenko, V. G.; Chernitskiy, S. V.; Nemov, V. V.; Ågren, O.; Noack, K.; Kalyuzhnyi, V. N.; Hagnestål, A.; Källne, J.; Voitsenya, V. S.; Garkusha, I. E.
2014-09-01
The development of a stellarator-mirror fission-fusion hybrid concept is reviewed. The hybrid comprises a fusion neutron source and a powerful sub-critical fast fission reactor core. The aim is the transmutation of spent nuclear fuel and safe fission energy production. In its fusion part, neutrons are generated in deuterium-tritium (D-T) plasma, confined magnetically in a stellarator-type system with an embedded magnetic mirror. Based on kinetic calculations, the energy balance for such a system is analyzed. Neutron calculations have been performed with the MCNPX code, and the principal design of the reactor part is developed. The neutron outflux at different outer parts of the reactor is calculated. Numerical simulations have been performed on the structure of the magnetic field in a model of the stellarator-mirror device, obtained by switching off one or two toroidal field coils of the Uragan-2M torsatron. The calculations predict the existence of closed magnetic surfaces under certain conditions. The confinement of fast particles in such a magnetic trap is analyzed.
Lee, Hee-Seock; Ban, Syuichi; Sanami, Toshiya; Takahashi, Kazutoshi; Sato, Tatsuhiko; Shin, Kazuo; Chung, Chinwha
2005-01-01
A study of differential photo-neutron yields by irradiation with 2 GeV electrons has been carried out. In this extension of a previous study, in which measurements were made at an angle of 90 degrees relative to the incident electrons, the differential photo-neutron yield was obtained at two other angles, 48 degrees and 140 degrees, to study its angular characteristics. Photo-neutron spectra were measured using a pulsed-beam time-of-flight method and a BC418 plastic scintillator. The reliable range of neutron energy measurement was 8-250 MeV. The neutron spectra were measured for 10 X0-thick Cu, Sn, W and Pb targets. The angular distribution characteristics, together with the previous results for 90 degrees, are presented in the study. The experimental results are compared with Monte Carlo calculation results. The yields predicted by MCNPX 2.5 tend to underestimate the measured ones. The same trend holds for the comparison results using the EGS4 and PICA3 codes.
NASA Astrophysics Data System (ADS)
Remetti, Romolo; Gandolfo, Giada; Lepore, Luigi; Cherubini, Nadia
2017-10-01
In the frame of European Chemical, Biological, Radiological, and Nuclear defense activities, ENEA, the Italian National Agency for New Technologies, Energy and Sustainable Economic Development, is proposing the Neutron Active Interrogation system (NAI), a device designed to find transuranic-based Radioactive Dispersal Devices hidden inside suspect packages. It is based on Differential Die-Away time Analysis, an active neutron technique targeted at revealing the presence of fissile material through detection of induced fission neutrons. Several Monte Carlo simulations carried out with the MCNPX code, and the development of ad hoc design methods, have led to the realization of a first prototype based on a 14 MeV D-T neutron generator coupled with a tailored moderating structure and an array of helium-3 neutron detectors. The complete system is characterized by easy transportability, light weight, and real-time response. First results have shown the device's capability to detect gram quantities of fissile materials.
Thermal neutron calibration channel at LNMRI/IRD.
Astuto, A; Salgado, A P; Leite, S P; Patrão, K C S; Fonseca, E S; Pereira, W W; Lopes, R T
2014-10-01
The Brazilian Metrology Laboratory of Ionizing Radiations (LNMRI) standard thermal neutron flux facility was designed to provide uniform neutron fluence for calibration of small neutron detectors and individual dosemeters. This fluence is obtained by neutron moderation from four (241)Am-Be sources, each with 596 GBq, in a facility built with blocks of graphite/paraffin compound and high-purity carbon graphite. This study was carried out in two steps. In the first step, simulations using the MCNPX code on different geometric arrangements of moderator materials and neutron sources were performed. The quality of the resulting neutron fluence in terms of spectrum, cadmium ratio and gamma-neutron ratio was evaluated. In the second step, the system was assembled based on the results obtained on the simulations, and new measurements are being made. These measurements will validate the system, and other intercomparisons will ensure traceability to the International System of Units. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
IMPROVEMENTS IN THE THERMAL NEUTRON CALIBRATION UNIT, TNF2, AT LNMRI/IRD.
Astuto, A; Fernandes, S S; Patrão, K C S; Fonseca, E S; Pereira, W W; Lopes, R T
2018-02-21
The standard thermal neutron flux unit, TNF2, in the Brazilian National Ionizing Radiation Metrology Laboratory was rebuilt. Fluence is still achieved by moderation of four 241Am-Be sources with 0.6 TBq each. The facility was again simulated and redesigned with a graphite core and paraffin-added graphite blocks surrounding it. Simulations using the MCNPX code on different geometric arrangements of moderator materials and neutron sources were performed. The resulting neutron fluence quality in terms of intensity, spectrum and cadmium ratio was evaluated. After this step, the system was assembled based on the results obtained from the simulations, and measurements were performed both with equipment existing at LNMRI/IRD and with simulated equipment. This work focuses on the characterization of a central chamber point and external points around the TNF2 in terms of neutron spectrum, fluence and ambient dose equivalent, H*(10). The system was validated with measurements of spectra, fluence and H*(10) to ensure traceability.
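The cadmium ratio used above as a beam-quality figure comes from paired bare and cadmium-covered measurements; a sketch with illustrative counts:

```python
# Cadmium ratio: bare response over Cd-covered response. The cadmium cover
# absorbs neutrons below the Cd cutoff (~0.5 eV), so a large ratio
# indicates a well-thermalized field.
def cadmium_ratio(counts_bare, counts_cd):
    return counts_bare / counts_cd

def thermal_fraction(counts_bare, counts_cd):
    """Fraction of the bare response attributable to sub-cutoff neutrons."""
    return (counts_bare - counts_cd) / counts_bare

rcd = cadmium_ratio(1000.0, 100.0)  # illustrative counts, equal live time
```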
Design of thermal neutron beam based on an electron linear accelerator for BNCT.
Zolfaghari, Mona; Sedaghatizadeh, Mahmood
2016-12-01
An electron linear accelerator (linac) can be used for boron neutron capture therapy (BNCT) by producing a thermal neutron flux. In this study, we used a Varian 2300 C/D linac and the MCNPX 2.6.0 code to simulate an electron-photoneutron source for use in BNCT. In order to decelerate the fast neutrons produced by the photoneutron source, and so optimize the thermal neutron flux, a beam-shaping assembly (BSA) was simulated. After the simulations, a thermal neutron flux with a sharp peak at the beam exit was obtained, on the order of 3.09×10^8 n/cm^2·s and 6.19×10^8 n/cm^2·s for uranium and enriched uranium (10%) as electron-photoneutron sources, respectively. Also, in-phantom dose analysis indicates that the simulated thermal neutron beam can be used for treatment of shallow skin melanoma in about 85.4 and 43.6 min for uranium and enriched uranium (10%), respectively. Copyright © 2016. Published by Elsevier Ltd.
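The two treatment times quoted are consistent with delivering a fixed therapeutic fluence at the two fluxes, since irradiation time scales inversely with flux. A sketch of that relation; the required fluence value below is purely illustrative, not a BNCT prescription:

```python
# Irradiation time needed to deliver a fixed thermal neutron fluence
# at a given thermal flux.
def treatment_time_min(required_fluence_per_cm2, flux_per_cm2_s):
    return required_fluence_per_cm2 / flux_per_cm2_s / 60.0

FLUENCE = 1.0e12                                # illustrative target fluence
t_u  = treatment_time_min(FLUENCE, 3.09e8)      # uranium source case
t_eu = treatment_time_min(FLUENCE, 6.18e8)      # ~double flux -> ~half time
```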
NASA Astrophysics Data System (ADS)
Tribet, M.; Mougnaud, S.; Jégou, C.
2017-05-01
This work aims to better understand the nature and evolution of energy deposits at the UO2/water reaction interface subjected to alpha irradiation, through an original approach based on Monte Carlo simulations using the MCNPX code. Such an approach has the advantage of describing the energy deposit profiles on both sides of the interface (UO2 and water). The calculations were performed on simple geometries, with data from an irradiated UOX fuel (burnup of 47 GWd.tHM-1 and 15 years of alpha decay). The influence of geometric parameters such as the diameter and the calculation steps at the reaction interface is discussed, and the exponential laws to be used in practice are suggested. The case of cracks with various apertures (from 5 to 35 μm) has also been examined, and these calculations have also provided new information on the mean range of radiolytic species in cracks, and thus on the local chemistry.
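An exponential law of the kind suggested for practical use can be recovered from two points of a simulated energy-deposit profile; a minimal sketch (the symbols E0 and lam are our notation, not the paper's):

```python
import math

def fit_exponential(x1, e1, x2, e2):
    """Recover (E0, lam) of E(x) = E0 * exp(-x / lam) from two points
    (x1, e1) and (x2, e2) of an energy-deposit profile."""
    lam = (x2 - x1) / math.log(e1 / e2)
    e0 = e1 * math.exp(x1 / lam)
    return e0, lam
```

With more than two profile points, a least-squares fit of ln E versus x would be the more robust variant of the same idea.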
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grebennikov, A.N.; Zhitnik, A.K.; Zvenigorodskaya, O.A.
1995-12-31
In conformity with the protocol of the Workshop under Contract "Assessment of RBMK reactor safety using modern Western Codes", VNIIEF performed a series of neutronics computations to compare Western and VNIIEF codes and to assess whether VNIIEF codes are suitable for RBMK-type reactor safety assessment computations. The work was carried out in close collaboration with M.I. Rozhdestvensky and L.M. Podlazov, NIKIET employees. The effort involved: (1) cell computations with the WIMS and EKRAN codes (an improved modification of the LOMA code) and the S-90 code (VNIIEF Monte Carlo), including cell, polycell and burnup computations; (2) 3D computation of static states with the KORAT-3D and NEU codes and comparison with results of computations with the NESTLE code (USA), performed in the geometry and with the neutron constants provided by the American party; (3) 3D computation of neutron kinetics with the KORAT-3D and NEU codes. These computations were performed in two formulations, both developed in collaboration with NIKIET. The formulation of the first problem agrees as closely as possible with one of the NESTLE problems and imitates gas bubble travel through a core. The second problem is a model of the RBMK as a whole, with imitation of control and protection system (CPS) control movement in a core.
REACTOR PHYSICS MODELING OF SPENT RESEARCH REACTOR FUEL FOR TECHNICAL NUCLEAR FORENSICS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nichols, T.; Beals, D.; Sternat, M.
2011-07-18
Technical nuclear forensics (TNF) refers to the collection, analysis and evaluation of pre- and post-detonation radiological or nuclear materials, devices, and/or debris. TNF is an integral component, complementing traditional forensics and investigative work, to help enable the attribution of discovered radiological or nuclear material. Research is needed to improve the capabilities of TNF. One research area of interest is determining the isotopic signatures of research reactors. Research reactors are a potential source of both radiological and nuclear material. Research reactors are often the least safeguarded type of reactor; they vary greatly in size, fuel type, enrichment, power, and burn-up. Many research reactors are fueled with highly-enriched uranium (HEU), up to ~93% 235U, which could potentially be used as weapons material. All of them have significant amounts of radiological material with which a radioactive dispersal device (RDD) could be built. Therefore, the ability to attribute whether material originated from or was produced in a specific research reactor is an important tool in providing for the security of the United States. Currently there are approximately 237 operating research reactors worldwide, another 12 are in temporary shutdown, and 224 research reactors are reported as shut down. Little is currently known about the isotopic signatures of spent research reactor fuel. An effort is underway at Savannah River National Laboratory (SRNL) to analyze spent research reactor fuel to determine these signatures. Computer models, using reactor physics codes, are being compared to the measured analytes in the spent fuel. This allows for improving the reactor physics codes in modeling research reactors for the purpose of nuclear forensics. Currently the Oak Ridge Research Reactor (ORR) is being modeled and fuel samples are being analyzed for comparison.
Samples of an ORR spent fuel assembly were taken by SRNL for analytical and radiochemical analysis. The fuel assembly was modeled using MONTEBURNS (MCNP5/ORIGEN2.2) and MCNPX/CINDER90. The results from the models have been compared to each other and to the measured data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lafleur, Adrienne M.; Ulrich, Timothy J. II; Menlove, Howard O.
The objective is to investigate the use of Passive Neutron Albedo Reactivity (PNAR) and Self-Interrogation Neutron Resonance Densitometry (SINRD) to quantify fissile content in FUGEN spent fuel assemblies (FAs). The methodology used was: (1) a detector was designed using fission chambers (FCs); (2) the design was optimized via MCNPX simulations; and (3) the instrument is planned to be built and field tested in FY13. The significance is to improve safeguards verification of spent fuel assemblies in water and to increase sensitivity to partial defects. MCNPX simulations were performed to optimize the design of the SINRD+PNAR detector. The PNAR ratio was less sensitive to FA positioning than SINRD, and the SINRD ratio was more sensitive to Pu fissile mass than PNAR. The significance is that the integration of these techniques can be used to improve verification of spent fuel assemblies in water.
Development and application of the GIM code for the Cyber 203 computer
NASA Technical Reports Server (NTRS)
Stainaker, J. F.; Robinson, M. A.; Rawlinson, E. G.; Anderson, P. G.; Mayne, A. W.; Spradley, L. W.
1982-01-01
The GIM computer code for fluid dynamics research was developed. Enhancement of the computer code, implicit algorithm development, turbulence model implementation, chemistry model development, interactive input module coding and wing/body flowfield computation are described. The GIM quasi-parabolic code development was completed, and the code was used to compute a number of example cases. Turbulence models, both algebraic and differential, were added to the basic viscous code. An equilibrium reacting chemistry model and an implicit finite difference scheme were also added. Development was completed on the interactive module for generating the input data for GIM. Solutions for inviscid hypersonic flow over a wing/body configuration are also presented.
NASA Astrophysics Data System (ADS)
Belinato, Walmir; Santos, William S.; Perini, Ana P.; Neves, Lucio P.; Caldas, Linda V. E.; Souza, Divanizia N.
2017-11-01
Positron emission tomography (PET) has revolutionized the diagnosis of cancer since its conception. When combined with computed tomography (CT), PET/CT performed in children produces highly accurate diagnoses from images of regions affected by malignant tumors. Considering the high risk to children exposed to ionizing radiation, a dosimetric study of PET/CT procedures is necessary. Specific absorbed fractions (SAF) were determined for monoenergetic photons and positrons, as well as the S-values for six positron-emitting radionuclides (11C, 13N, 18F, 68Ga, 82Rb, 15O) and 22 source organs. The study was performed for six pediatric anthropomorphic hybrid models, including the newborn and 1-year-old hermaphrodite and the 5- and 10-year-old male and female, using the Monte Carlo N-Particle eXtended code (MCNPX, version 2.7.0). The SAF results in source organs and the S-values for all organs were found to be inversely related to the age of the phantoms, which includes the variation of body weight. The results also showed that radionuclides with higher peak emission energies produce larger self-absorbed S-values, due to local dose deposition by positron decay. The S-values for the source organs are considerably larger due to the interaction of tissue with non-penetrating particles (electrons and positrons), and present a linear relationship with the phantom body masses. The S-values determined for positron-emitting radionuclides can be used to assess the radiation dose delivered to pediatric patients subjected to PET examination in clinical settings. The novelty of this work is the determination of self-absorbed S-values, in six new pediatric virtual anthropomorphic phantoms, for six positron emitters commonly employed in PET exams.
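The S-value formalism underlying these calculations can be sketched in a few lines (MIRD-style; the emission list and organ mass below are illustrative assumptions, not the paper's data):

```python
MEV_TO_J = 1.602176634e-13

def s_value(emissions, target_mass_kg):
    """MIRD-style S value (Gy per decay) for one source->target pair:
    S = sum_i (mean energy_i * absorbed fraction_i) / target mass.
    emissions: list of (mean_energy_MeV, absorbed_fraction) per decay."""
    return sum(e_mev * MEV_TO_J * af for e_mev, af in emissions) / target_mass_kg

# Hypothetical 18F-like emission list: one positron (mean 0.25 MeV, fully
# absorbed in the source organ) and two 0.511 MeV annihilation photons
# with a small absorbed fraction (all numbers illustrative).
emissions = [(0.25, 1.0), (0.511, 0.03), (0.511, 0.03)]
s_self = s_value(emissions, 0.31)  # 0.31 kg source-organ mass (assumed)
```

Because the positron term dominates and is fully absorbed locally, the self-dose S value scales roughly as 1/mass, matching the inverse relation with phantom age reported above.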
Chen, Yizheng; Qiu, Rui; Li, Chunyan; Wu, Zhen; Li, Junli
2016-03-07
In vivo measurement is a main method of internal contamination evaluation, particularly for large numbers of people after a nuclear accident. Before practical application, it is necessary to obtain the counting efficiency of the detector by calibration. Virtual calibration based on Monte Carlo simulation usually uses a reference human computational phantom, and the morphological difference between the monitored individual and the calibrated phantom may lead to a deviation in the counting efficiency. Therefore, a phantom library covering a wide range of heights and total body masses is needed. In this study, a Chinese reference adult male polygon surface (CRAM_S) phantom was constructed based on the CRAM voxel phantom, with the organ models adjusted to match the Chinese reference data. The CRAM_S phantom was then transformed to a sitting posture for convenience in practical monitoring. Referring to the mass and height distribution of the Chinese adult male, a phantom library containing 84 phantoms was constructed by deforming the reference surface phantom. Phantoms in the library have 7 different heights ranging from 155 cm to 185 cm, with 12 phantoms of different total body masses at each height. As an example of application, organ-specific and total counting efficiencies of Ba-133 were calculated using the MCNPX code, with two series of phantoms selected from the library. The influence of morphological variation on the counting efficiency was analyzed. The results show that using only the reference phantom in virtual calibration may lead to an error of 68.9% in total counting efficiency. Thus the influence of morphological difference on virtual calibration can be greatly reduced by using a phantom library with a wide range of masses and heights instead of a single reference phantom.
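Selecting the library phantom closest to a monitored individual is what reduces the morphological mismatch; a minimal sketch, assuming a height/mass grid like the one described (the mass values and normalization constants are our assumptions):

```python
def nearest_phantom(height_cm, mass_kg, library):
    """Pick the library phantom (height, mass) minimizing a normalized
    squared distance; the 5.0 cm / 5.0 kg scales are our choice."""
    return min(library,
               key=lambda p: ((p[0] - height_cm) / 5.0) ** 2
                           + ((p[1] - mass_kg) / 5.0) ** 2)

# An 84-entry grid (7 heights x 12 masses), mirroring the library size
# described in the abstract; the mass values themselves are assumed.
library = [(h, m) for h in range(155, 186, 5) for m in range(50, 110, 5)]
match = nearest_phantom(172, 68, library)   # -> (170, 70)
```

The counting efficiency pre-computed for the matched phantom would then replace the single-reference-phantom value in the virtual calibration.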
NASA Astrophysics Data System (ADS)
Chang, Lienard A.
In the event of a radiological accident or attack, it is important to estimate the organ doses of those exposed. In general, it is difficult to measure organ dose directly in the field, and therefore dose conversion coefficients (DCC) are needed to convert measurable quantities such as air kerma to organ dose. Previous work on these coefficients has been conducted mainly for adults, with a focus on radiation protection workers. Hence, there is a large gap in the literature for pediatric values. This study coupled the Monte Carlo N-Particle eXtended (MCNPX) code with the International Commission on Radiological Protection (ICRP)-adopted University of Florida and National Cancer Institute pediatric reference phantoms to calculate a comprehensive list of dose conversion coefficients (mGy/mGy) for converting air kerma to organ dose. Parameters included ten phantoms (newborn, 1-year, 5-year, 10-year, and 15-year-old male and female) and 28 organs over 33 energies between 0.01 and 20 MeV in six irradiation geometries relevant to a child who might be exposed to a radiological release: anterior-posterior (AP), posterior-anterior (PA), right-lateral (RLAT), left-lateral (LLAT), rotational (ROT), and isotropic (ISO). Dose conversion coefficients to the red bone marrow over 36 skeletal sites were also calculated. It was hypothesized that the pediatric organ dose conversion coefficients would follow trends similar to the published adult values, as dictated by human anatomy, but be of a higher magnitude. It was found that while the pediatric coefficients did show patterns similar to those of the adult coefficients, depending on the organ and irradiation geometry the pediatric values could be lower or higher than the adult coefficients.
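Applying such coefficients in the field is a lookup and a multiplication; a sketch with hypothetical lung-dose coefficients (the values below are illustrative, not the study's results):

```python
# Hypothetical air-kerma-to-lung-dose conversion coefficients (mGy/mGy)
# per irradiation geometry; illustrative values, not the study's results.
DCC_LUNG = {"AP": 1.05, "PA": 0.95, "RLAT": 0.60,
            "LLAT": 0.62, "ROT": 0.80, "ISO": 0.70}

def organ_dose(air_kerma_mgy, geometry, dcc_table):
    """Organ dose (mGy) = measured free-in-air kerma * DCC for the geometry."""
    return air_kerma_mgy * dcc_table[geometry]
```

A full table would be indexed by phantom age, organ, photon energy and geometry, which is exactly the parameter grid the study computes.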
MONTE CARLO SIMULATIONS OF PERIODIC PULSED REACTOR WITH MOVING GEOMETRY PARTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cao, Yan; Gohar, Yousry
2015-11-01
In a periodic pulsed reactor, the reactor state varies periodically from slightly subcritical to slightly prompt supercritical to produce periodic power pulses. Such a periodic state change is accomplished by a periodic movement of specific reactor parts, such as control rods or reflector sections. The analysis of such a reactor is difficult to perform with current reactor physics computer programs. Based on past experience, the utilization of the point kinetics approximation gives considerable errors in predicting the magnitude and the shape of the power pulse if the reactor has significantly different neutron lifetimes in different zones. To accurately simulate the dynamics of this type of reactor, a Monte Carlo procedure using the TRCL/TR transformation features of the MCNP/MCNPX computer programs is utilized to model the movable reactor parts. In this paper, two algorithms simulating the geometry part movements during neutron history tracking have been developed. Several test cases have been developed to evaluate these procedures. The numerical test cases have shown that the developed algorithms can be utilized to simulate the reactor dynamics with movable geometry parts.
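The point kinetics approximation whose limitations are discussed above can be sketched with a one-delayed-group model (illustrative kinetics parameters and explicit Euler integration; this is the approximation being critiqued, not the paper's Monte Carlo method):

```python
def point_kinetics(rho, beta=0.0065, lam=0.08, big_lambda=1e-4,
                   t_end=0.2, dt=1e-5):
    """One-delayed-group point kinetics, explicit Euler:
        dn/dt = ((rho - beta) / Lambda) * n + lam * c
        dc/dt = (beta / Lambda) * n - lam * c
    Returns the relative power n(t_end) starting from equilibrium.
    Parameter values are illustrative, not from any specific reactor."""
    n = 1.0
    c = beta * n / (lam * big_lambda)      # equilibrium precursor level
    for _ in range(int(t_end / dt)):
        dn = ((rho - beta) / big_lambda) * n + lam * c
        dc = (beta / big_lambda) * n - lam * c
        n += dn * dt
        c += dc * dt
    return n
```

Because this model carries a single neutron generation time for the whole core, it cannot represent zones with very different neutron lifetimes, which is why the paper resorts to direct Monte Carlo tracking with moving geometry.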
Implementation of a 3D mixing layer code on parallel computers
NASA Technical Reports Server (NTRS)
Roe, K.; Thakur, R.; Dang, T.; Bogucz, E.
1995-01-01
This paper summarizes our progress and experience in the development of a computational fluid dynamics code on parallel computers to simulate three-dimensional, spatially-developing mixing layers. In this initial study, the three-dimensional time-dependent Euler equations are solved using a finite-volume explicit time-marching algorithm. The code was first programmed in Fortran 77 for sequential computers. The code was then converted for use on parallel computers using the conventional message-passing technique; however, we have not been able to compile the code with the present version of HPF compilers.
Differential die-away instrument: Report on comparison of fuel assembly experiments and simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goodsell, Alison Victoria; Henzl, Vladimir; Swinhoe, Martyn Thomas
2015-01-14
Experimental results of the assay of mock-up (fresh) fuel with the differential die-away (DDA) instrument were compared to Monte Carlo N-Particle eXtended (MCNPX) simulation results. Most of the principal experimental observables, the die-away time and the integral of the DDA signal in several time domains, were found to be in good agreement with the MCNPX simulation results. The remaining discrepancies between the simulation and experimental results are likely due to small differences between the actual experimental setup and the simulated geometry, including uncertainty in the DT neutron generator yield. Within this report we also present a sensitivity study of the DDA instrument, which is a complex and sensitive system, and demonstrate to what degree it can be impacted by geometry, material composition, and electronics performance.
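The die-away time itself can be extracted from a measured or simulated decay tail by a log-linear fit; a sketch on synthetic data (not the report's values):

```python
import math

def die_away_time(times, counts):
    """Die-away time from a least-squares line fit of ln(counts) vs time:
    ln C(t) = a - t/tau  =>  tau = -1/slope."""
    ys = [math.log(c) for c in counts]
    n = len(times)
    mx = sum(times) / n
    my = sum(ys) / n
    slope = (sum((t - mx) * (y - my) for t, y in zip(times, ys))
             / sum((t - mx) ** 2 for t in times))
    return -1.0 / slope

# Synthetic tail with a 50 us die-away time (hypothetical data):
ts = [float(t) for t in range(0, 100, 10)]         # microseconds
cs = [1000.0 * math.exp(-t / 50.0) for t in ts]
tau = die_away_time(ts, cs)                        # ~50 us
```

On real data the fit window matters: the early part of the signal is contaminated by the interrogating pulse, so the fit is restricted to the exponential tail.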
DOE Office of Scientific and Technical Information (OSTI.GOV)
Botta, F; Di Dia, A; Pedroli, G
The calculation of patient-specific dose distributions can be achieved by Monte Carlo simulations or by analytical methods. In this study, the FLUKA Monte Carlo code has been considered for use in nuclear medicine dosimetry. Up to now, FLUKA has mainly been dedicated to other fields, namely high-energy physics, radiation protection, and hadrontherapy. When first employing a Monte Carlo code for nuclear medicine dosimetry, its results concerning electron transport at energies typical of nuclear medicine applications need to be verified. This is commonly achieved by calculating a representative parameter and comparing it with reference data. The dose point kernel (DPK), quantifying the energy deposition all around a point isotropic source, is often the parameter of choice. Methods: FLUKA DPKs have been calculated in both water and compact bone for monoenergetic electrons (10 keV-3 MeV) and for beta-emitting isotopes commonly used for therapy (89Sr, 90Y, 131I, 153Sm, 177Lu, 186Re, and 188Re). Point isotropic sources have been simulated at the center of a water (bone) sphere, and the deposited energy has been tallied in concentric shells. The FLUKA outcomes have been compared to PENELOPE v.2008 results, calculated in this study as well. Moreover, in the case of monoenergetic electrons in water, a comparison with data from the literature (ETRAN, GEANT4, MCNPX) has been made.
Maximum percentage differences within 0.8·RCSDA and 0.9·RCSDA for monoenergetic electrons (RCSDA being the continuous-slowing-down-approximation range) and within 0.8·X90 and 0.9·X90 for isotopes (X90 being the radius of the sphere in which 90% of the emitted energy is absorbed) have been computed, together with the average percentage difference within 0.9·RCSDA and 0.9·X90 for electrons and isotopes, respectively. Results: Concerning monoenergetic electrons, within 0.8·RCSDA (where 90%-97% of the particle energy is deposited), FLUKA and PENELOPE agree mostly within 7%, except for 10 and 20 keV electrons (12% in water, 8.3% in bone). The discrepancies between FLUKA and the other codes are of the same order of magnitude as those observed when comparing the other codes among themselves, and can be attributed to the different simulation algorithms. When considering the beta spectra, the discrepancies are notably reduced: within 0.9·X90, FLUKA and PENELOPE differ by less than 1% in water and less than 2% in bone for all of the isotopes considered here. Complete data for the FLUKA DPKs are given as Supplementary Material as a tool to perform dosimetry by analytical point-kernel convolution. Conclusions: FLUKA provides reliable results when transporting electrons in the low-energy range, proving to be an adequate tool for nuclear medicine dosimetry.
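The comparison metric used above, an average percentage difference over the shells inside a cutoff radius, can be sketched directly (function name and toy numbers are ours):

```python
def avg_pct_diff(kernel_a, kernel_b, radii, r_cut):
    """Average |a - b| / a (percent) over the shells with radius <= r_cut,
    mirroring the 'average percentage difference within 0.9*R' metric."""
    diffs = [abs(a - b) / a * 100.0
             for r, a, b in zip(radii, kernel_a, kernel_b) if r <= r_cut]
    return sum(diffs) / len(diffs)
```

Restricting to radii below 0.8 or 0.9 of the range avoids the far tail, where the kernels fall to near zero and percentage differences become numerically meaningless.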
40 CFR 194.23 - Models and computer codes.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 26 2013-07-01 2013-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., computer...
40 CFR 194.23 - Models and computer codes.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 26 2012-07-01 2011-07-01 true Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., computer...
40 CFR 194.23 - Models and computer codes.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 25 2014-07-01 2014-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., computer...
40 CFR 194.23 - Models and computer codes.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., computer...
40 CFR 194.23 - Models and computer codes.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 25 2011-07-01 2011-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., computer...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Choonsik; Kim, Kwang Pyo; Long, Daniel
2011-03-15
Purpose: To develop a computed tomography (CT) organ dose estimation method designed to readily provide organ doses in a reference adult male and female for different scan ranges, and to investigate the degree to which existing commercial programs can reasonably match organ doses defined in these more anatomically realistic adult hybrid phantoms. Methods: The x-ray fan beam in the SOMATOM Sensation 16 multidetector CT scanner was simulated within the Monte Carlo radiation transport code MCNPX 2.6. The simulated CT scanner model was validated through comparison with experimentally measured lateral free-in-air dose profiles and computed tomography dose index (CTDI) values. The reference adult male and female hybrid phantoms were coupled with the established CT scanner model, following arm removal, to simulate clinical head and other body region scans. A set of organ dose matrices was calculated for a series of consecutive axial scans ranging from the top of the head to the bottom of the phantoms, with a beam thickness of 10 mm and tube potentials of 80, 100, and 120 kVp. The organ doses for head, chest, and abdomen/pelvis examinations were calculated based on the organ dose matrices and compared to those obtained from two commercial programs, CT-EXPO and CTDOSIMETRY. Organ dose calculations were repeated for an adult stylized phantom using the same simulation method used for the adult hybrid phantoms. Results: Comparisons of both lateral free-in-air dose profiles and CTDI values from experimental measurement with the Monte Carlo simulations showed good agreement, to within 9%. Organ doses for head, chest, and abdomen/pelvis scans reported by the commercial programs exceeded those from the Monte Carlo calculations in both the hybrid and stylized phantoms in this study, sometimes by orders of magnitude.
Conclusions: The organ dose estimation method and dose matrices established in this study readily provide organ doses for a reference adult male and female for different CT scan ranges and technical parameters. Organ doses from existing commercial programs do not reasonably match the organ doses calculated for the hybrid phantoms, due to differences in phantom anatomy as well as differences in organ dose scaling parameters. The organ dose matrices developed in this study will be extended to cover different technical parameters, CT scanner models, and various age groups.
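The scan-range bookkeeping behind such a dose matrix can be sketched as follows (hypothetical matrix; rows are consecutive axial scan positions, columns are organs):

```python
def organ_dose_from_matrix(dose_matrix, start, stop):
    """Total organ doses for a scan covering axial positions
    [start, stop): sum the per-position rows, one column per organ."""
    return [sum(col) for col in zip(*dose_matrix[start:stop])]

# Hypothetical 3-position x 2-organ matrix (mGy per rotation):
matrix = [[1.0, 2.0],
          [3.0, 4.0],
          [5.0, 6.0]]
head_scan = organ_dose_from_matrix(matrix, 0, 2)   # -> [4.0, 6.0]
```

Precomputing the matrix once per scanner model and tube potential is what makes arbitrary scan ranges "readily" available without rerunning the Monte Carlo transport.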
Computer Description of Black Hawk Helicopter
1979-06-01
Keywords: Combinatorial Geometry Models; Black Hawk Helicopter; GIFT Computer Code; Geometric Description of Targets. ABSTRACT: A geometric description of the Black Hawk helicopter was made using the technique of combinatorial geometry (COM-GEOM) and will be used as input to the GIFT computer code. The data used by the COVART computer code was generated by the Geometric Information for Targets (GIFT) computer code. This report documents the computer description of the Black Hawk helicopter.
NASA Astrophysics Data System (ADS)
Usta, Metin; Tufan, Mustafa Çağatay; Aydın, Güral; Bozkurt, Ahmet
2018-07-01
In this study, we have performed calculations of stopping power, depth dose, and range verification for proton beams using the dielectric and Bethe-Bloch theories and the FLUKA, Geant4 and MCNPX Monte Carlo codes. In the analytical framework, the Drude model was applied for the dielectric theory, and the effective charge approach with Roothaan-Hartree-Fock charge densities was used in the Bethe theory. In the simulations, different setup parameters were selected to evaluate the performance of the three distinct Monte Carlo codes. The lung and breast tissues investigated are associated with the most common types of cancer worldwide. The results were compared with each other and with the available data in the literature. In addition, the obtained results were verified against prompt gamma range data. For both stopping power values and depth-dose distributions, it was found that the Monte Carlo values give better results than the analytical ones, while the results that agree best with the ICRU data in terms of stopping power are those of the effective charge approach among the analytical methods and of the FLUKA code among the MC packages. In the depth-dose distributions of the examined tissues, although the Bragg curves for the Monte Carlo codes almost overlap, the analytical ones show significant deviations that become more pronounced with increasing energy. Verification against prompt gamma photon results was attempted for 100-200 MeV protons, which are regarded as important for proton therapy. The analytical results are within 2%-5% and the Monte Carlo values are within 0%-2% compared with those of the prompt gammas.
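The Bethe theory referred to above can be illustrated with a minimal sketch of the uncorrected Bethe stopping-power formula for protons in water (mean excitation energy I ≈ 75 eV assumed; shell and density-effect corrections, and the paper's effective-charge refinement, are omitted):

```python
import math

ME_C2 = 0.511    # electron rest energy, MeV
K = 0.307075     # 4*pi*N_A*r_e^2*m_e*c^2, MeV cm^2 / mol

def bethe_dedx(t_mev, z=1, z_over_a=0.555, i_ev=75.0, m_mev=938.272):
    """Uncorrected Bethe mass stopping power (MeV cm^2/g) for a proton of
    kinetic energy t_mev in water; shell/density corrections omitted."""
    gamma = 1.0 + t_mev / m_mev
    beta2 = 1.0 - 1.0 / gamma ** 2
    i_mev = i_ev * 1e-6
    arg = 2.0 * ME_C2 * beta2 * gamma ** 2 / i_mev
    return K * z ** 2 * z_over_a / beta2 * (math.log(arg) - beta2)
```

At 100 MeV this evaluates to roughly 7.3 MeV cm^2/g, close to tabulated ICRU/NIST values for water; at lower energies the omitted shell corrections become noticeable.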
NASA Astrophysics Data System (ADS)
Pietrzak, Robert; Konefał, Adam; Sokół, Maria; Orlef, Andrzej
2016-08-01
The success of proton therapy depends strongly on the precision of treatment planning. Dose distribution in biological tissue may be obtained from Monte Carlo simulations using various scientific codes, making it possible to perform very accurate calculations. However, there are many factors affecting the accuracy of modeling. One of them is the structure of the objects, called bins, that register a dose. In this work the influence of bin structure on the dose distributions was examined. MCNPX calculations of the Bragg curve for a 60 MeV proton beam were done in two ways: using simple logical detectors, i.e. volumes defined in water, and using a precise model of an ionization chamber used in clinical dosimetry. The results of the simulations were verified experimentally in a water phantom with a Markus ionization chamber. The average local difference between the relative doses measured in the water phantom and those calculated by means of the logical detectors was 1.4% over the first 25 mm, whereas over the full depth range this difference was 1.6%, for a maximum uncertainty in the calculations of less than 2.4% and a maximum measuring error of 1%. In the case of the relative doses calculated with the ionization chamber model, this average difference was somewhat greater, being 2.3% at depths up to 25 mm and 2.4% over the full range of depths, for a maximum uncertainty in the calculations of 3%. In the dose calculations the ionization chamber model does not offer any additional advantages over the logical detectors. The results provided by both models are similar and in good agreement with the measurements; however, the logical detector approach is a more time-effective method.
NASA Astrophysics Data System (ADS)
Tsinganis, A.; Kokkoris, M.; Vlastou, R.; Kalamara, A.; Stamatopoulos, A.; Kanellakopoulos, A.; Lagoyannis, A.; Axiotis, M.
2017-09-01
Accurate data on neutron-induced fission cross-sections of actinides are essential for the design of advanced nuclear reactors based either on fast neutron spectra or alternative fuel cycles, as well as for the reduction of safety margins of existing and future conventional facilities. The fission cross-section of 234U was measured at incident neutron energies of 560 and 660 keV and 7.5 MeV with a setup based on `microbulk' Micromegas detectors and the same samples previously used for the measurement performed at the CERN n_TOF facility (Karadimos et al., 2014). The 235U fission cross-section was used as reference. The (quasi-)monoenergetic neutron beams were produced via the 7Li(p,n) and the 2H(d,n) reactions at the neutron beam facility of the Institute of Nuclear and Particle Physics at the `Demokritos' National Centre for Scientific Research. A detailed study of the neutron spectra produced in the targets and intercepted by the samples was performed coupling the NeuSDesc and MCNPX codes, taking into account the energy spread, energy loss and angular straggling of the beam ions in the target assemblies, as well as contributions from competing reactions and neutron scattering in the experimental setup. Auxiliary Monte-Carlo simulations were performed with the FLUKA code to study the behaviour of the detectors, focusing particularly on the reproduction of the pulse height spectra of α-particles and fission fragments (using distributions produced with the GEF code) for the evaluation of the detector efficiency. An overview of the developed methodology and preliminary results are presented.
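Using the 235U fission cross-section as a reference amounts to a simple ratio measurement; a sketch with hypothetical numbers (the efficiency and flux corrections that the actual analysis applies are folded out here):

```python
def sigma_from_reference(counts_x, counts_ref, atoms_x, atoms_ref, sigma_ref):
    """Cross-section relative to a reference sample in the same flux:
    sigma_x = sigma_ref * (C_x / C_ref) * (N_ref / N_x), assuming equal
    detection efficiency for both samples (corrections folded out)."""
    return sigma_ref * (counts_x / counts_ref) * (atoms_ref / atoms_x)
```

Because the flux cancels in the ratio, the dominant remaining uncertainties are the areal densities of the samples and the detector efficiencies, which is why the abstract devotes so much effort to simulating the neutron spectra and the detector response.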
User manual for semi-circular compact range reflector code: Version 2
NASA Technical Reports Server (NTRS)
Gupta, Inder J.; Burnside, Walter D.
1987-01-01
A computer code has been developed at the Ohio State University ElectroScience Laboratory to analyze a semi-circular paraboloidal reflector with or without a rolled edge at the top and a skirt at the bottom. The code can be used to compute the total near field of the reflector, or its individual components, at a given distance from the center of the paraboloid. The code computes the fields along a radial, horizontal, vertical or axial cut at that distance. Thus, it is very effective in computing the size of the sweet spot for a semi-circular compact range reflector. This report describes the operation of the code. Various input and output statements are explained. Some results obtained using the computer code are presented to illustrate the code's capabilities and to serve as sample input/output sets.
Computational lymphatic node models in pediatric and adult hybrid phantoms for radiation dosimetry
NASA Astrophysics Data System (ADS)
Lee, Choonsik; Lamart, Stephanie; Moroz, Brian E.
2013-03-01
We developed models of lymphatic nodes for six pediatric and two adult hybrid computational phantoms to calculate lymphatic node dose estimates from external and internal radiation exposures. We derived the number of lymphatic nodes from the recommendations in International Commission on Radiological Protection (ICRP) Publications 23 and 89 at 16 cluster locations for the lymphatic nodes: extrathoracic, cervical, thoracic (upper and lower), breast (left and right), mesentery (left and right), axillary (left and right), cubital (left and right), inguinal (left and right) and popliteal (left and right), for different ages (newborn, 1-, 5-, 10-, 15-year-old and adult). We modeled each lymphatic node within the voxel format of the hybrid phantoms by assuming that all nodes have an identical size, derived from published data, except at narrow cluster sites. The lymph nodes were generated by the following algorithm: (1) selection of the lymph node site among the 16 cluster sites; (2) random sampling of the location of the lymph node within a spherical space centered at the chosen cluster site; (3) creation of the sphere or ovoid of tissue representing the node based on lymphatic node characteristics defined in ICRP Publications 23 and 89. We created lymph nodes until the pre-defined number of lymphatic nodes at the selected cluster site was reached. This algorithm was applied to pediatric (newborn, 1-, 5- and 10-year-old males, and a 15-year-old male) and adult male and female ICRP-compliant hybrid phantoms after voxelization. To assess the performance of our models for internal dosimetry, we calculated dose conversion coefficients, called S values, for selected organs and tissues with iodine-131 distributed in six lymphatic node cluster sites using MCNPX2.6, a well-validated Monte Carlo radiation transport code.
Our analysis of the calculations indicates that the S values were significantly affected by the location of the lymph node clusters and that the values increased for smaller phantoms due to the shorter inter-organ distances compared to the bigger phantoms. By testing sensitivity of S values to random sampling and voxel resolution, we confirmed that the lymph node model is reasonably stable and consistent for different random samplings and voxel resolutions.
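The three-step node-generation algorithm described in the abstract above can be sketched as follows. This is a hypothetical illustration only: the cluster center, radii and node count passed in are placeholders, not the ICRP Publication 23/89 reference values, and the real model also handles ovoid nodes and narrow cluster sites.

```python
# Sketch of steps (1)-(3): pick a cluster site, randomly sample node
# centers within a sphere around it, and create identically sized
# spherical nodes until the target count is reached.
import math
import random

def sample_in_sphere(center, radius, rng):
    """Rejection-sample a point uniformly inside a sphere."""
    while True:
        offset = [rng.uniform(-radius, radius) for _ in range(3)]
        if math.dist(offset, (0.0, 0.0, 0.0)) <= radius:
            return tuple(c + o for c, o in zip(center, offset))

def place_lymph_nodes(cluster_center, cluster_radius, n_nodes, node_radius, seed=0):
    """Steps (2)-(3): sample node centers inside the cluster sphere and
    build spherical nodes of identical (placeholder) radius."""
    rng = random.Random(seed)
    return [{"center": sample_in_sphere(cluster_center, cluster_radius, rng),
             "radius": node_radius}
            for _ in range(n_nodes)]
```

In the actual phantoms each node would then be voxelized; here the result is simply a list of center/radius records.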
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie Tianwu; Liu Qian; Zaidi, Habib
2012-03-15
Purpose: Rats have been widely used in radionuclide therapy research for the treatment of hepatocellular carcinoma (HCC). This has created the need to assess rat liver absorbed radiation dose. In most dose estimation studies, the rat liver is considered as a homogeneous integrated target organ with a tissue composition assumed to be similar to that of human liver tissue. However, the rat liver is composed of several lobes having different anatomical and chemical characteristics. To assess the overall impact on rat liver dose calculation, the authors use a new voxel-based rat model with identified suborgan regions of the liver. Methods: The liver in the original cryosectional color images was manually segmented into seven individual lobes and subsequently integrated into a voxel-based computational rat model. Photon and electron particle transport was simulated using the MCNPX Monte Carlo code to calculate absorbed fractions and S-values for 90Y, 131I, 166Ho, and 188Re for the seven liver lobes. The effect of chemical composition on organ-specific absorbed dose was investigated by changing the chemical composition of the voxels filling the liver. Radionuclide-specific absorbed doses at the voxel level were further assessed for a small spherical hepatic tumor. Results: The self-absorbed dose for different liver lobes varied depending on their respective masses. A maximum difference of 3.5% was observed for the liver self-absorbed fraction between rat and human tissues for photon energies below 100 keV. 166Ho and 188Re produce a uniformly distributed high dose in the tumor and a relatively low absorbed dose in surrounding tissues. Conclusions: The authors evaluated rat liver radiation doses from various radionuclides used in HCC treatments using a realistic computational rat model.
This work contributes to a better understanding of all aspects influencing radiation transport in organ-specific radiation dose evaluation for preclinical therapy studies, from tissue composition to organ morphology and activity distribution.
Hanford meteorological station computer codes: Volume 9, The quality assurance computer codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burk, K.W.; Andrews, G.L.
1989-02-01
The Hanford Meteorological Station (HMS) was established in 1944 on the Hanford Site to collect and archive meteorological data and to provide weather forecasts and related services for the Hanford Site. The HMS is located approximately 1/2 mile east of the 200 West Area and is operated by PNL for the US Department of Energy. Meteorological data are collected from various sensors and equipment located on and off the Hanford Site. These data are stored in data bases on the Digital Equipment Corporation (DEC) VAX 11/750 at the HMS (hereafter referred to as the HMS computer). Files from those data bases are routinely transferred to the Emergency Management System (EMS) computer at the Unified Dose Assessment Center (UDAC). To ensure the quality and integrity of the HMS data, a set of Quality Assurance (QA) computer codes has been written. The codes will be routinely used by the HMS system manager or the data base custodian. The QA codes provide detailed output files that will be used in correcting erroneous data. The following sections in this volume describe the implementation and operation of the QA computer codes. The appendices contain detailed descriptions, flow charts, and source code listings of each computer code. 2 refs.
Mowlavi, Ali Asghar; Fornasier, Maria Rossa; Mirzaei, Mohammd; Bregant, Paola; de Denaro, Mario
2014-10-01
The beta and gamma absorbed fractions in organs and tissues are key factors in radionuclide internal dosimetry based on the Medical Internal Radiation Dose (MIRD) approach. The aim of this study is to find suitable analytical functions for the beta and gamma absorbed fractions in spherical and ellipsoidal volumes with a uniform distribution of the iodine-131 radionuclide. The MCNPX code has been used to calculate the energy absorbed from beta and gamma rays of iodine-131 uniformly distributed inside different ellipsoids and spheres, and the absorbed fractions have then been evaluated. We have found the fit parameters of a suitable analytical function for the beta absorbed fraction, depending on a generalized radius for the ellipsoid based on the radius of a sphere, and a linear fit function for the gamma absorbed fraction. The analytical functions obtained by fitting the Monte Carlo data can be used to obtain the absorbed fractions of iodine-131 beta and gamma rays for any volume of the thyroid lobe. Moreover, our results for the spheres are in good agreement with the results of MIRD and other scientific literature.
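The linear form reported for the gamma absorbed fraction can be recovered from Monte Carlo points with an ordinary least-squares fit; a minimal sketch follows. The (generalized radius, absorbed fraction) pairs below are invented for illustration and are not the paper's MCNPX results.

```python
# Closed-form ordinary least squares for a linear model y = a*x + b,
# applied to hypothetical (generalized radius, absorbed fraction) data.
def linear_fit(xs, ys):
    """Return slope a and intercept b minimizing sum of squared residuals."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

radii = [1.0, 1.5, 2.0, 2.5]              # hypothetical generalized radii (cm)
fractions = [0.031, 0.041, 0.051, 0.061]  # hypothetical absorbed fractions
a, b = linear_fit(radii, fractions)
```

The fitted (a, b) pair would then serve as the analytical function evaluated for any thyroid-lobe volume.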
Characterization of a tin-loaded liquid scintillator for gamma spectroscopy and neutron detection
NASA Astrophysics Data System (ADS)
Wen, Xianfei; Harvey, Taylor; Weinmann-Smith, Robert; Walker, James; Noh, Young; Farley, Richard; Enqvist, Andreas
2018-07-01
A tin-loaded liquid scintillator has been developed for gamma spectroscopy and neutron detection. The scintillator was characterized in regard to energy resolution, pulse shape discrimination, neutron light output function, and timing resolution. The loading of tin into scintillators with low effective atomic number was demonstrated to provide photopeaks with acceptable energy resolution. The scintillator was shown to have reasonable neutron/gamma discrimination capability based on the charge comparison method. The effects on discrimination quality of the total charge integration time and the initial delay time for tail charge integration were studied. To obtain the neutron light output function, the time-of-flight technique was utilized with a 252Cf source. The light output function was validated with the MCNPX-PoliMi code by comparing the measured and simulated pulse height spectra. The timing resolution of the developed scintillator was also evaluated. The tin loading was found to have negligible impact on the scintillation decay times. However, a relatively large degradation of timing resolution was observed due to the reduced light yield.
Prado, A C M; Pazianotto, M T; Gonçalez, O L; Dos Santos, L R; Caldeira, A D; Pereira, H H C; Hubert, G; Federico, C A
2017-11-01
This article reports measurements on board a small aircraft at the same altitude and around the same geographic coordinates. Measurements of ambient dose equivalent rate (H*(10)) were performed at several positions inside the aircraft, close to and far from the pilot location, with discrimination between the neutron and non-neutron components. The results show that the neutron component is attenuated close to the fuel depots, while the non-neutron component appears to show the opposite behavior inside the aircraft. These experimental results are also compared with results from Monte Carlo simulation, obtained with the MCNPX code, using a simplified model of the Learjet-type aircraft and a model of the standard atmosphere that reproduces the real energy and angular distributions of the particles. The Monte Carlo simulation agreed with the experimental measurements and shows that the total H*(10) varies little (around 1%) between positions inside the aircraft, although the neutron spectra present significant variations.
Barati, B.; Zabihzadeh, M.; Tahmasebi Birgani, M.J.; Chegini, N.; Fatahiasl, J.; Mirr, I.
2018-01-01
Objective: The use of miniature X-ray sources in electronic brachytherapy is on the rise, so there is an urgent need for more knowledge of the X-ray spectra produced and the resulting dose distributions. The aim of this research was to investigate the influence of target thickness and geometry of a miniature X-ray tube source on the tube output. Method: Five sources, each with a specific geometric structure and set of conditions, were simulated using the MCNPX code. Tallies proportional to the output were used to calculate the influence of source geometry on output. Results: The results of this work include the optimal target thickness of the 5 miniature sources and the energy spectra of the sources for 50 kVp operation; the axial and transverse dose distributions of the simulated sources were calculated for these thicknesses. The geometry of the miniature source affected the X-ray tube output. Conclusion: This study indicates that hemispherical-conical, hemispherical and truncated-conical miniature sources are the most suitable designs. PMID:29732338
DOE Office of Scientific and Technical Information (OSTI.GOV)
Latysheva, L. N.; Bergman, A. A.; Sobolevsky, N. M., E-mail: sobolevs@inr.ru
Lead slowing-down (LSD) spectrometers have a low energy resolution (about 30%), but their luminosity is 10^3 to 10^4 times higher than that of time-of-flight (TOF) spectrometers. The high luminosity of LSD spectrometers makes it possible to use them to measure neutron cross sections for samples with masses of about several micrograms. These features define a niche for the application of LSD spectrometers in measuring neutron cross sections of elements hardly available in macroscopic amounts, in particular actinides. A mathematical simulation of the parameters of the SVZ-100 LSD spectrometer of the Institute for Nuclear Research (INR, Moscow) is performed in the present study on the basis of the MCNPX code. It is found that the moderation constant, which is the main parameter of LSD spectrometers, is highly sensitive in calculations to the size and shape of the detecting volumes and, hence, to the real size of the experimental channels of the LSD spectrometer.
Three-dimensional Monte Carlo calculation of some nuclear parameters
NASA Astrophysics Data System (ADS)
Günay, Mehtap; Şeker, Gökmen
2017-09-01
In this study, a fusion-fission hybrid reactor system was designed using 9Cr2WVTa ferritic steel as the structural material and the molten salt-heavy metal mixtures 99-95% Li20Sn80 + 1-5% RG-Pu, 99-95% Li20Sn80 + 1-5% RG-PuF4, and 99-95% Li20Sn80 + 1-5% RG-PuO2 as fluids. The fluids were used in the liquid first wall, blanket and shield zones of the fusion-fission hybrid reactor system. A beryllium (Be) zone 3 cm wide was used for neutron multiplication between the liquid first wall and the blanket. This study analyzes nuclear parameters such as the tritium breeding ratio (TBR), energy multiplication factor (M), heat deposition rate and fission reaction rate in the liquid first wall, blanket and shield zones, and investigates the effects of the reactor-grade Pu content on these nuclear parameters in the designed system. Three-dimensional analyses were performed using the Monte Carlo code MCNPX-2.7.0 and the nuclear data library ENDF/B-VII.0.
NASA Astrophysics Data System (ADS)
Santos, W. S.; Carvalho, A. B., Jr.; Hunt, J. G.; Maia, A. F.
2014-02-01
The objective of this study was to estimate doses to the physician and the nurse assistant at different positions during interventional radiology procedures. The effective doses obtained for the physician and at points occupied by other workers were normalised by the air kerma-area product (KAP) to obtain conversion coefficients (CCs). The simulations were performed for two X-ray spectra (70 kVp and 87 kVp) using the radiation transport code MCNPX (version 2.7.0) and a pair of anthropomorphic voxel phantoms (MASH/FASH) used to represent both the patient and the medical professional at positions from 7 cm to 47 cm from the patient. The X-ray tube was represented by a point source positioned in the anterior-posterior (AP) and posterior-anterior (PA) projections. The CCs can be used to calculate effective doses, which in turn are related to stochastic effects. With knowledge of the CC values and the KAP measured on an X-ray unit in a similar exposure, medical professionals will be able to estimate their own effective dose.
Two-dimensional dosimetry of radiotherapeutical proton beams using thermoluminescence foils.
Czopyk, L; Klosowski, M; Olko, P; Swakon, J; Waligorski, M P R; Kajdrowicz, T; Cuttone, G; Cirrone, G A P; Di Rosa, F
2007-01-01
In modern radiation therapy such as intensity modulated radiation therapy or proton therapy, one is able to cover the target volume with improved dose conformation and to spare surrounding tissue with the help of modern measurement techniques. Novel thermoluminescence dosimetry (TLD) foils, developed from a hot-pressed mixture of LiF:Mg,Cu,P (MCP TL) powder and ethylene-tetrafluoroethylene (ETFE) copolymer, have been applied to 2-D dosimetry of radiotherapeutical proton beams at INFN Catania and IFJ Krakow. A TLD reader with a 70 mm heating plate and a CCD camera was used to read the 2-D emission pattern of the irradiated foils. The absorbed dose profiles were evaluated, taking into account correction factors specific to TLD, such as dose and energy response. TLD foils were applied to measure dose distributions within an eye phantom, and the results were compared with predictions obtained from the MCNPX code and the Eclipse Ocular Proton Planning (Varian Medical Systems) clinical radiotherapy planning system. We demonstrate the possibility of measuring 2-D dose distributions with a point resolution of about 0.5 x 0.5 mm(2).
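The per-pixel correction step described above (relative sensitivity plus dose/energy response, then calibration) can be sketched as follows. The function name, factor values and calibration constant are hypothetical placeholders, not the actual MCP-foil response data.

```python
# Convert a 2-D TL emission pattern into an absorbed-dose map: divide
# each pixel signal by a relative pixel sensitivity and an energy/dose
# response factor, then scale by a calibration constant (Gy per count).
def to_dose_map(signal, sensitivity, response_factor, calib_gy_per_count):
    """signal and sensitivity are 2-D lists of equal shape."""
    return [[calib_gy_per_count * s / (k * response_factor)
             for s, k in zip(sig_row, sens_row)]
            for sig_row, sens_row in zip(signal, sensitivity)]
```

In practice the response factor would itself vary with dose and beam quality; here it is a single scalar for brevity.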
An Improved Elastic and Nonelastic Neutron Transport Algorithm for Space Radiation
NASA Technical Reports Server (NTRS)
Clowdsley, Martha S.; Wilson, John W.; Heinbockel, John H.; Tripathi, R. K.; Singleterry, Robert C., Jr.; Shinn, Judy L.
2000-01-01
A neutron transport algorithm including both elastic and nonelastic particle interaction processes for use in space radiation protection for arbitrary shield material is developed. The algorithm is based upon a multiple energy grouping and analysis of the straight-ahead Boltzmann equation by using a mean value theorem for integrals. The algorithm is then coupled to the Langley HZETRN code through a bidirectional neutron evaporation source term. Evaluation of the neutron fluence generated by the solar particle event of February 23, 1956, for an aluminum water shield-target configuration is then compared with MCNPX and LAHET Monte Carlo calculations for the same shield-target configuration. With the Monte Carlo calculation as a benchmark, the algorithm developed in this paper showed a great improvement in results over the unmodified HZETRN solution. In addition, a high-energy bidirectional neutron source based on a formula by Ranft showed even further improvement of the fluence results over previous results near the front of the water target, where diffusion out of the front surface is important. Effects of improved interaction cross sections are modest compared with the addition of the high-energy bidirectional source terms.
NASA Astrophysics Data System (ADS)
Kim, Chan Hyeong; Hyoun Choi, Sang; Jeong, Jong Hwi; Lee, Choonsik; Chung, Min Suk
2008-08-01
A Korean voxel model, named 'High-Definition Reference Korean-Man (HDRK-Man)', was constructed using high-resolution color photographic images that were obtained by serially sectioning the cadaver of a 33-year-old Korean adult male. The body height and weight, the skeletal mass and the dimensions of the individual organs and tissues were adjusted to the reference Korean data. The resulting model was then implemented into a Monte Carlo particle transport code, MCNPX, to calculate the dose conversion coefficients for the internal organs and tissues. The calculated values, overall, were reasonable in comparison with the values from other adult voxel models. HDRK-Man showed higher dose conversion coefficients than other models, due to the facts that HDRK-Man has a smaller torso and that the arms of HDRK-Man are shifted backward. The developed model is believed to adequately represent average Korean radiation workers and thus can be used for more accurate calculation of dose conversion coefficients for Korean radiation workers in the future.
Depth profile of production yields of natPb(p, xn) 206,205,204,203,202,201Bi nuclear reactions
NASA Astrophysics Data System (ADS)
Mokhtari Oranj, Leila; Jung, Nam-Suk; Kim, Dong-Hyun; Lee, Arim; Bae, Oryun; Lee, Hee-Seock
2016-11-01
Experimental and simulation studies of the depth profiles of the production yields of natPb(p, xn) 206,205,204,203,202,201Bi nuclear reactions were carried out. Irradiation experiments were performed at the high-intensity proton linac facility (KOMAC) in Korea. The targets, irradiated by 100-MeV protons, were arranged in a stack consisting of natural Pb, Al and Au foils and Pb plates. The proton beam intensity was determined by the activation analysis method, using the 27Al(p, 3p1n)24Na, 197Au(p, p1n)196Au, and 197Au(p, p3n)194Au monitor reactions, and also by Gafchromic film dosimetry. The yields of the radionuclides produced in the natPb activation foils and monitor foils were measured with an HPGe spectroscopy system. Monte Carlo simulations were performed with the FLUKA, PHITS/DCHAIN-SP, and MCNPX/FISPACT codes, and the calculated data were compared with the experimental results. A satisfactory agreement was observed between the present experimental data and the simulations.
U-238 fission and Pu-239 production in subcritical assembly
NASA Astrophysics Data System (ADS)
Grab, Magdalena; Wojciechowski, Andrzej
2018-04-01
The project addresses U-238 fission reactions and Pu-239 production reactions in a subcritical assembly. The experiment took place in November 2014 at the Dzhelepov Laboratory of Nuclear Problems (JINR, Dubna) using the PHASOTRON accelerator. Data from this experiment were analyzed in the Laboratory of Information Technologies (LIT). Four MCNPX model combinations were considered for the simulation: Bertini/Dresner, Bertini/ABLA, INCL4/Dresner and INCL4/ABLA. The main goal of the project was to compare the experimental data and simulation results. We obtain good agreement between experimental data and computed results, especially for detectors placed beside the assembly axis. In addition, U-238 fission reactions are more probable in the region of higher particle energies, closer to the assembly axis and the particle beam, whereas Pu-239 production reactions were dominant in the peripheral region of the geometry.
User's manual for semi-circular compact range reflector code
NASA Technical Reports Server (NTRS)
Gupta, Inder J.; Burnside, Walter D.
1986-01-01
A computer code was developed to analyze a semi-circular paraboloidal reflector antenna with a rolled edge at the top and a skirt at the bottom. The code can be used to compute the total near field of the antenna or its individual components at a given distance from the center of the paraboloid. Thus, it is very effective in computing the size of the sweet spot for RCS or antenna measurement. The operation of the code is described. Various input and output statements are explained. Some results obtained using the computer code are presented to illustrate the code's capability and to provide sample input/output sets.
Highly fault-tolerant parallel computation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spielman, D.A.
We re-introduce the coded model of fault-tolerant computation in which the input and output of a computational device are treated as words in an error-correcting code. A computational device correctly computes a function in the coded model if its input and output, once decoded, are a valid input and output of the function. In the coded model, it is reasonable to hope to simulate all computational devices by devices whose size is greater by a constant factor but which are exponentially reliable even if each of their components can fail with some constant probability. We consider fine-grained parallel computations in which each processor has a constant probability of producing the wrong output at each time step. We show that any parallel computation that runs for time t on w processors can be performed reliably on a faulty machine in the coded model using w log^O(1) w processors and time t log^O(1) w. The failure probability of the computation will be at most t · exp(-w^1/4). The codes used to communicate with our fault-tolerant machines are generalized Reed-Solomon codes and can thus be encoded and decoded in O(n log^O(1) n) sequential time and are independent of the machine they are used to communicate with. We also show how coded computation can be used to self-correct many linear functions in parallel with arbitrarily small overhead.
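The Reed-Solomon machinery underlying the coded model can be illustrated with a toy encoder and decoder. This is a sketch of the code family only, not the paper's fault-tolerant construction: the field modulus, message and evaluation points are arbitrary choices, and real implementations use fast (quasi-linear) encoding and decoding rather than the quadratic interpolation shown here.

```python
# Toy generalized Reed-Solomon code over GF(P): a length-k message is
# read as polynomial coefficients, the codeword is the polynomial's
# values at n distinct points, and any k intact symbols recover the
# message by Lagrange interpolation.
P = 97  # field modulus, an assumed small prime for illustration

def rs_encode(msg, n):
    """Evaluate the message polynomial at x = 0..n-1 over GF(P)."""
    return [sum(c * pow(x, i, P) for i, c in enumerate(msg)) % P
            for x in range(n)]

def poly_mul(a, b):
    """Multiply two coefficient lists (low degree first) over GF(P)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % P
    return out

def rs_recover(points, k):
    """Recover the k message coefficients from any k (x, y) pairs by
    Lagrange interpolation over GF(P)."""
    coeffs = [0] * k
    pts = points[:k]
    for j, (xj, yj) in enumerate(pts):
        basis, denom = [1], 1
        for m, (xm, _) in enumerate(pts):
            if m != j:
                basis = poly_mul(basis, [(-xm) % P, 1])  # multiply by (x - xm)
                denom = denom * (xj - xm) % P
        scale = yj * pow(denom, P - 2, P) % P  # Fermat modular inverse
        for i, c in enumerate(basis):
            coeffs[i] = (coeffs[i] + scale * c) % P
    return coeffs
```

With n evaluation points and k message symbols, the codeword survives the loss of up to n - k symbols, which is the redundancy the coded model exploits.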
An emulator for minimizing computer resources for finite element analysis
NASA Technical Reports Server (NTRS)
Melosh, R.; Utku, S.; Islam, M.; Salama, M.
1984-01-01
A computer code, SCOPE, has been developed for predicting the computer resources required for a given analysis code, computer hardware, and structural problem. The cost of running the code is a small fraction (about 3 percent) of the cost of performing the actual analysis. However, its accuracy in predicting the CPU and I/O resources depends intrinsically on the accuracy of calibration data that must be developed once for the computer hardware and the finite element analysis code of interest. Testing of the SCOPE code on the AMDAHL 470 V/8 computer and the ELAS finite element analysis program indicated small I/O errors (3.2 percent), larger CPU errors (17.8 percent), and negligible total errors (1.5 percent).
A generalized one-dimensional computer code for turbomachinery cooling passage flow calculations
NASA Technical Reports Server (NTRS)
Kumar, Ganesh N.; Roelke, Richard J.; Meitner, Peter L.
1989-01-01
A generalized one-dimensional computer code for analyzing the flow and heat transfer in turbomachinery cooling passages was developed. This code is capable of handling rotating cooling passages with turbulators, 180 degree turns, pin fins, finned passages, by-pass flows, tip cap impingement flows, and flow branching. The code is an extension of a one-dimensional code developed by P. Meitner. In the subject code, correlations for both heat transfer coefficient and pressure loss computations were developed to model each of the above mentioned types of coolant passages. The code has the capability of independently computing the friction factor and heat transfer coefficient on each side of a rectangular passage. Either the mass flow at the inlet to the channel or the exit plane pressure can be specified. For a specified inlet total temperature, inlet total pressure, and exit static pressure, the code computes the flow rates through the main branch and the subbranches, and the flow through the tip cap for impingement cooling, in addition to computing the coolant pressure, temperature, and heat transfer coefficient distribution in each coolant flow branch. Predictions from the subject code for both nonrotating and rotating passages agree well with experimental data. The code was used to analyze the cooling passage of a research cooled radial rotor.
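The kind of 1-D balance such a code solves, finding the mass flow consistent with specified inlet and exit pressures, can be sketched with a bisection on a simple Darcy-Weisbach friction drop. The geometry, density, friction factor and search bracket below are illustrative constants, not the code's actual correlations, which also handle rotation, turbulators and branching.

```python
# Sketch: solve for the mass flow whose friction pressure drop matches
# the specified inlet-to-exit pressure difference in a round passage.
import math

def pressure_drop(mdot, length, dia, rho, f):
    """Darcy-Weisbach pressure drop for mass flow mdot (kg/s)."""
    area = math.pi * dia ** 2 / 4.0
    vel = mdot / (rho * area)
    return f * (length / dia) * 0.5 * rho * vel ** 2

def solve_mass_flow(p_in, p_exit, length, dia, rho, f=0.02):
    """Bisect on mdot until pressure_drop equals p_in - p_exit."""
    lo, hi = 0.0, 10.0  # kg/s search bracket (assumed wide enough)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if pressure_drop(mid, length, dia, rho, f) < p_in - p_exit:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Bisection works here because the friction drop grows monotonically with mass flow; the real code additionally updates the friction factor and heat transfer coefficient along each branch.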
Efficient Proximity Computation Techniques Using ZIP Code Data for Smart Cities †
Murdani, Muhammad Harist; Hong, Bonghee
2018-01-01
In this paper, we are interested in computing ZIP code proximity from two perspectives, proximity between two ZIP codes (Ad-Hoc) and neighborhood proximity (Top-K). Such a computation can be used for ZIP code-based target marketing as one of the smart city applications. A naïve approach to this computation is the usage of the distance between ZIP codes. We redefine a distance metric combining the centroid distance with the intersecting road network between ZIP codes by using a weighted sum method. Furthermore, we prove that the results of our combined approach conform to the characteristics of distance measurement. We have proposed a general and heuristic approach for computing Ad-Hoc proximity, while for computing Top-K proximity, we have proposed a general approach only. Our experimental results indicate that our approaches are verifiable and effective in reducing the execution time and search space. PMID:29587366
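The weighted-sum metric described above, combining centroid distance with the intersecting road network, can be sketched as follows. The weight, normalizing constants and sample records are assumptions for illustration, not the paper's calibrated values.

```python
# Sketch of a ZIP-code proximity metric: a weighted sum of normalized
# centroid distance and (inverse) shared road-network length, plus a
# Top-K query over a list of ZIP records.
import math

def proximity(a, b, shared_road_km, w=0.7, d_max=50.0, r_max=20.0):
    """Lower is closer: w * normalized centroid distance plus
    (1 - w) * (1 - normalized shared road length)."""
    d = math.dist(a, b) / d_max
    r = 1.0 - min(shared_road_km, r_max) / r_max  # more shared road -> closer
    return w * d + (1.0 - w) * r

def top_k(origin, zip_records, k):
    """Top-K neighborhood proximity: the k ZIP codes closest to origin."""
    ranked = sorted(zip_records,
                    key=lambda z: proximity(origin, z["centroid"], z["road_km"]))
    return [z["zip"] for z in ranked[:k]]
```

Normalizing both terms to [0, 1] before the weighted sum keeps the metric consistent with the distance-measurement properties the paper proves.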
Mostafaei, Farshad; Blake, Scott P; Liu, Yingzi; Sowers, Daniel A; Nie, Linda H
2015-10-01
The subject of whether fluorine (F) is detrimental to human health has been controversial for many years. Much of the discussion focuses on the known benefits and detriments to dental care and the problems that F causes in bone structure at high doses. It is therefore advantageous to have the means to monitor F concentrations in the human body as a method to directly assess exposure. F accumulates in the skeleton, making bone a useful biomarker to assess long-term cumulative exposure to F. This study presents work on the development of a non-invasive method for the monitoring of F in human bone. The work was based on the technique of in vivo neutron activation analysis (IVNAA). A compact deuterium-deuterium (DD) generator was used to produce neutrons. A moderator/reflector/shielding assembly was designed and built for human hand irradiation. The gamma rays emitted through the 19F(n,γ)20F reaction were measured using a HPGe detector. This study was undertaken to (i) establish the feasibility of using the DD system to determine F in human bone, (ii) estimate the F minimum detection limit (MDL), and (iii) optimize the system using the Monte Carlo N-Particle eXtended (MCNPX) code in order to improve the MDL of the system. The F MDL was found to be 0.54 g experimentally with a neutron flux of 7 × 10^8 n s^-1 and an optimized irradiation, decay, and measurement time scheme. The numbers of F counts from the experiment were found to be close to the MCNPX simulation results with the same irradiation and detection parameters. The equivalent dose to the irradiated hand and the effective dose to the whole body were found to be 0.9 mSv and 0.33 μSv, respectively. Based on these results, it is feasible to develop a compact DD generator-based IVNAA system to measure bone F in a population with moderate to high F exposure.
Volume accumulator design analysis computer codes
NASA Technical Reports Server (NTRS)
Whitaker, W. D.; Shimazaki, T. T.
1973-01-01
The computer codes, VANEP and VANES, were written and used to aid in the design and performance calculation of the volume accumulator units (VAU) for the 5-kwe reactor thermoelectric system. VANEP computes the VAU design which meets the primary coolant loop VAU volume and pressure performance requirements. VANES computes the performance of the VAU design, determined from the VANEP code, at the conditions of the secondary coolant loop. The codes can also compute the performance characteristics of the VAU's under conditions of possible modes of failure which still permit continued system operation.
"Hour of Code": Can It Change Students' Attitudes toward Programming?
ERIC Educational Resources Information Center
Du, Jie; Wimmer, Hayden; Rada, Roy
2016-01-01
The Hour of Code is a one-hour introduction to computer science organized by Code.org, a non-profit dedicated to expanding participation in computer science. This study investigated the impact of the Hour of Code on students' attitudes towards computer programming and their knowledge of programming. A sample of undergraduate students from two…
Talking about Code: Integrating Pedagogical Code Reviews into Early Computing Courses
ERIC Educational Resources Information Center
Hundhausen, Christopher D.; Agrawal, Anukrati; Agarwal, Pawan
2013-01-01
Given the increasing importance of soft skills in the computing profession, there is good reason to provide students with more opportunities to learn and practice those skills in undergraduate computing courses. Toward that end, we have developed an active learning approach for computing education called the "Pedagogical Code Review"…
Guidelines for developing vectorizable computer programs
NASA Technical Reports Server (NTRS)
Miner, E. W.
1982-01-01
Some fundamental principles for developing computer programs which are compatible with array-oriented computers are presented. The emphasis is on basic techniques for structuring computer codes which are applicable in FORTRAN and do not require a special programming language or exact a significant penalty on a scalar computer. Researchers who are using numerical techniques to solve problems in engineering can apply these basic principles and thus develop transportable computer programs (in FORTRAN) which contain much vectorizable code. The vector architecture of the ASC is discussed so that the requirements of array processing can be better appreciated. The "vectorization" of a finite-difference viscous shock-layer code is used as an example to illustrate the benefits and some of the difficulties involved. Increases in computing speed with vectorization are illustrated with results from the viscous shock-layer code and from a finite-element shock tube code. The applicability of these principles was substantiated through running programs on other computers with array-associated computing characteristics, such as the Hewlett-Packard (H-P) 1000-F.
The Helicopter Antenna Radiation Prediction Code (HARP)
NASA Technical Reports Server (NTRS)
Klevenow, F. T.; Lynch, B. G.; Newman, E. H.; Rojas, R. G.; Scheick, J. T.; Shamansky, H. T.; Sze, K. Y.
1990-01-01
The first nine months effort in the development of a user oriented computer code, referred to as the HARP code, for analyzing the radiation from helicopter antennas is described. The HARP code uses modern computer graphics to aid in the description and display of the helicopter geometry. At low frequencies the helicopter is modeled by polygonal plates, and the method of moments is used to compute the desired patterns. At high frequencies the helicopter is modeled by a composite ellipsoid and flat plates, and computations are made using the geometrical theory of diffraction. The HARP code will provide a user friendly interface, employing modern computer graphics, to aid the user to describe the helicopter geometry, select the method of computation, construct the desired high or low frequency model, and display the results.
Enhanced fault-tolerant quantum computing in d-level systems.
Campbell, Earl T
2014-12-05
Error-correcting codes protect quantum information and form the basis of fault-tolerant quantum computing. Leading proposals for fault-tolerant quantum computation require codes with an exceedingly rare property, a transversal non-Clifford gate. Codes with the desired property are presented for d-level qudit systems with prime d. The codes use n=d-1 qudits and can detect up to ∼d/3 errors. We quantify the performance of these codes for one approach to quantum computation known as magic-state distillation. Unlike prior work, we find performance is always enhanced by increasing d.
Convergence acceleration of the Proteus computer code with multigrid methods
NASA Technical Reports Server (NTRS)
Demuren, A. O.; Ibraheem, S. O.
1992-01-01
Presented here is the first part of a study to implement convergence acceleration techniques based on the multigrid concept in the Proteus computer code. A review is given of previous studies on the implementation of multigrid methods in computer codes for compressible flow analysis. Also presented is a detailed stability analysis of upwind and central-difference based numerical schemes for solving the Euler and Navier-Stokes equations. Results are given of a convergence study of the Proteus code on computational grids of different sizes. The results presented here form the foundation for the implementation of multigrid methods in the Proteus code.
NASA Technical Reports Server (NTRS)
Capo, M. A.; Disney, R. K.
1971-01-01
The work performed in the following areas is summarized: (1) Analysis of Realistic nuclear-propelled vehicle was analyzed using the Marshall Space Flight Center computer code package. This code package includes one and two dimensional discrete ordinate transport, point kernel, and single scatter techniques, as well as cross section preparation and data processing codes, (2) Techniques were developed to improve the automated data transfer in the coupled computation method of the computer code package and improve the utilization of this code package on the Univac-1108 computer system. (3) The MSFC master data libraries were updated.
Nonuniform code concatenation for universal fault-tolerant quantum computing
NASA Astrophysics Data System (ADS)
Nikahd, Eesa; Sedighi, Mehdi; Saheb Zamani, Morteza
2017-09-01
Using transversal gates is a straightforward and efficient technique for fault-tolerant quantum computing. Since transversal gates alone cannot be computationally universal, they must be combined with other approaches such as magic state distillation, code switching, or code concatenation to achieve universality. In this paper we propose an alternative approach for universal fault-tolerant quantum computing, mainly based on the code concatenation approach proposed in [T. Jochym-O'Connor and R. Laflamme, Phys. Rev. Lett. 112, 010505 (2014), 10.1103/PhysRevLett.112.010505], but in a nonuniform fashion. The proposed approach is described based on nonuniform concatenation of the 7-qubit Steane code with the 15-qubit Reed-Muller code, as well as the 5-qubit code with the 15-qubit Reed-Muller code, which lead to two 49-qubit and 47-qubit codes, respectively. These codes can correct any arbitrary single physical error with the ability to perform a universal set of fault-tolerant gates, without using magic state distillation.
Monte Carlo Analysis of Pion Contribution to Absorbed Dose from Galactic Cosmic Rays
NASA Technical Reports Server (NTRS)
Aghara, S.K.; Battnig, S.R.; Norbury, J.W.; Singleterry, R.C.
2009-01-01
Accurate knowledge of the physics of interaction, particle production and transport is necessary to estimate the radiation damage to equipment used on spacecraft and the biological effects of space radiation. For long duration astronaut missions, both on the International Space Station and the planned manned missions to Moon and Mars, the shielding strategy must include a comprehensive knowledge of the secondary radiation environment. The distribution of absorbed dose and dose equivalent is a function of the type, energy and population of these secondary products. Galactic cosmic rays (GCR) comprised of protons and heavier nuclei have energies from a few MeV per nucleon to the ZeV region, with the spectra reaching flux maxima in the hundreds of MeV range. Therefore, the MeV - GeV region is most important for space radiation. Coincidentally, the pion production energy threshold is about 280 MeV. The question naturally arises as to how important these particles are with respect to space radiation problems. The space radiation transport code, HZETRN (High charge (Z) and Energy TRaNsport), currently used by NASA, performs neutron, proton and heavy ion transport explicitly, but it does not take into account the production and transport of mesons, photons and leptons. In this paper, we present results from the Monte Carlo code MCNPX (Monte Carlo N-Particle eXtended), showing the effect of leptons and mesons when they are produced and transported in a GCR environment.
Roshani, G H; Karami, A; Khazaei, A; Olfateh, A; Nazemi, E; Omidi, M
2018-05-17
Gamma ray source has very important role in precision of multi-phase flow metering. In this study, different combination of gamma ray sources (( 133 Ba- 137 Cs), ( 133 Ba- 60 Co), ( 241 Am- 137 Cs), ( 241 Am- 60 Co), ( 133 Ba- 241 Am) and ( 60 Co- 137 Cs)) were investigated in order to optimize the three-phase flow meter. Three phases were water, oil and gas and the regime was considered annular. The required data was numerically generated using MCNP-X code which is a Monte-Carlo code. Indeed, the present study devotes to forecast the volume fractions in the annular three-phase flow, based on a multi energy metering system including various radiation sources and also one NaI detector, using a hybrid model of artificial neural network and Jaya Optimization algorithm. Since the summation of volume fractions is constant, a constraint modeling problem exists, meaning that the hybrid model must forecast only two volume fractions. Six hybrid models associated with the number of used radiation sources are designed. The models are employed to forecast the gas and water volume fractions. The next step is to train the hybrid models based on numerically obtained data. The results show that, the best forecast results are obtained for the gas and water volume fractions of the system including the ( 241 Am- 137 Cs) as the radiation source. Copyright © 2018 Elsevier Ltd. All rights reserved.
Dosimetric factors for diagnostic nuclear medicine procedures in a non-reference pregnant phantom.
Rafat-Motavalli, Laleh; Miri Hakimabad, Hashem; Hoseinian Azghadi, Elie
2018-05-01
This study was evaluated the impact of using non-reference fetal models on the fetal radiation dose from diagnostic radionuclide administration. The 6 month pregnant phantoms including fetal models at 10th and 90th growth percentiles were constructed at either end of the normal range around the 50th percentile and implemented in the Monte Carlo N-Particle code version MCNPX 2.6. The code have been used then to evaluate the 99mTc S factors of interested target organs as the most common used radionuclide in nuclear medicine procedures. Substantial variations were observed in the S factors between the 10th/90th percentile phantoms from the 50th percentile phantom, with the greatest difference being 38.6 %. When the source organs were in close proximity to, or inside the fetal body, the 99mTc S factors presented strong statistical correlations with fetal body habitus. The trends observed in the S factors and the differences between various percentiles were justified by the source organs' masses, and chord length distributions (CLDs). The results of this study showed that fetal body habitus had a considerable effect on fetal dose (on average up to 8.4%) if constant fetal biokinetic data was considered for all fetal weight percentiles. However, an almost smaller variation on fetal dose (up to 5.3%) was obtained if the available biokinetic data for the reference fetus was scaled by fetal mass. © 2018 IOP Publishing Ltd.
NASA Astrophysics Data System (ADS)
Sobolev, V.; Lemehov, S.; Messaoudi, N.; Van Uffelen, P.; Aı̈t Abderrahim, H.
2003-06-01
The Belgian Nuclear Research Centre, SCK • CEN, is currently working on the pre-design of the multipurpose accelerator-driven system (ADS) MYRRHA. A demonstration of the possibility of transmutation of minor actinides and long-lived fission products with a realistic design of experimental fuel targets and prognosis of their behaviour under typical ADS conditions is an important task in the MYRRHA project. In the present article, the irradiation behaviour of three different oxide fuel mixtures, containing americium and plutonium - (Am,Pu,U)O 2- x with urania matrix, (Am,Pu,Th)O 2- x with thoria matrix and (Am,Y,Pu,Zr)O 2- x with inert zirconia matrix stabilised by yttria - were simulated with the new fuel performance code MACROS, which is under development and testing at the SCK • CEN. All the fuel rods were considered to be of the same design and sizes: annular fuel pellets, helium bounded with the stainless steel cladding, and a large gas plenum. The liquid lead-bismuth eutectic was used as coolant. Typical irradiation conditions of the hottest fuel assembly of the MYRRHA subcritical core were pre-calculated with the MCNPX code and used in the following calculations as the input data. The results of prediction of the thermo-mechanical behaviour of the designed rods with the considered fuels during three irradiation cycles of 90 EFPD are presented and discussed.
Green's function methods in heavy ion shielding
NASA Technical Reports Server (NTRS)
Wilson, John W.; Costen, Robert C.; Shinn, Judy L.; Badavi, Francis F.
1993-01-01
An analytic solution to the heavy ion transport in terms of Green's function is used to generate a highly efficient computer code for space applications. The efficiency of the computer code is accomplished by a nonperturbative technique extending Green's function over the solution domain. The computer code can also be applied to accelerator boundary conditions to allow code validation in laboratory experiments.
NASA Technical Reports Server (NTRS)
Anderson, O. L.; Chiappetta, L. M.; Edwards, D. E.; Mcvey, J. B.
1982-01-01
A user's manual describing the operation of three computer codes (ADD code, PTRAK code, and VAPDIF code) is presented. The general features of the computer codes, the input/output formats, run streams, and sample input cases are described.
Automated apparatus and method of generating native code for a stitching machine
NASA Technical Reports Server (NTRS)
Miller, Jeffrey L. (Inventor)
2000-01-01
A computer system automatically generates CNC code for a stitching machine. The computer determines the locations of a present stitching point and a next stitching point. If a constraint is not found between the present stitching point and the next stitching point, the computer generates code for making a stitch at the next stitching point. If a constraint is found, the computer generates code for changing a condition (e.g., direction) of the stitching machine's stitching head.
Computer codes developed and under development at Lewis
NASA Technical Reports Server (NTRS)
Chamis, Christos C.
1992-01-01
The objective of this summary is to provide a brief description of: (1) codes developed or under development at LeRC; and (2) the development status of IPACS with some typical early results. The computer codes that have been developed and/or are under development at LeRC are listed in the accompanying charts. This list includes: (1) the code acronym; (2) select physics descriptors; (3) current enhancements; and (4) present (9/91) code status with respect to its availability and documentation. The computer codes list is grouped by related functions such as: (1) composite mechanics; (2) composite structures; (3) integrated and 3-D analysis; (4) structural tailoring; and (5) probabilistic structural analysis. These codes provide a broad computational simulation infrastructure (technology base-readiness) for assessing the structural integrity/durability/reliability of propulsion systems. These codes serve two other very important functions: they provide an effective means of technology transfer; and they constitute a depository of corporate memory.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zizin, M. N.; Zimin, V. G.; Zizina, S. N., E-mail: zizin@adis.vver.kiae.ru
2010-12-15
The ShIPR intellectual code system for mathematical simulation of nuclear reactors includes a set of computing modules implementing the preparation of macro cross sections on the basis of the two-group library of neutron-physics cross sections obtained for the SKETCH-N nodal code. This library is created by using the UNK code for 3D diffusion computation of first VVER-1000 fuel loadings. Computation of neutron fields in the ShIPR system is performed using the DP3 code in the two-group diffusion approximation in 3D triangular geometry. The efficiency of all groups of control rods for the first fuel loading of the third unit ofmore » the Kalinin Nuclear Power Plant is computed. The temperature, barometric, and density effects of reactivity as well as the reactivity coefficient due to the concentration of boric acid in the reactor were computed additionally. Results of computations are compared with the experiment.« less
NASA Astrophysics Data System (ADS)
Zizin, M. N.; Zimin, V. G.; Zizina, S. N.; Kryakvin, L. V.; Pitilimov, V. A.; Tereshonok, V. A.
2010-12-01
The ShIPR intellectual code system for mathematical simulation of nuclear reactors includes a set of computing modules implementing the preparation of macro cross sections on the basis of the two-group library of neutron-physics cross sections obtained for the SKETCH-N nodal code. This library is created by using the UNK code for 3D diffusion computation of first VVER-1000 fuel loadings. Computation of neutron fields in the ShIPR system is performed using the DP3 code in the two-group diffusion approximation in 3D triangular geometry. The efficiency of all groups of control rods for the first fuel loading of the third unit of the Kalinin Nuclear Power Plant is computed. The temperature, barometric, and density effects of reactivity as well as the reactivity coefficient due to the concentration of boric acid in the reactor were computed additionally. Results of computations are compared with the experiment.
Users manual and modeling improvements for axial turbine design and performance computer code TD2-2
NASA Technical Reports Server (NTRS)
Glassman, Arthur J.
1992-01-01
Computer code TD2 computes design point velocity diagrams and performance for multistage, multishaft, cooled or uncooled, axial flow turbines. This streamline analysis code was recently modified to upgrade modeling related to turbine cooling and to the internal loss correlation. These modifications are presented in this report along with descriptions of the code's expanded input and output. This report serves as the users manual for the upgraded code, which is named TD2-2.
An Object-Oriented Approach to Writing Computational Electromagnetics Codes
NASA Technical Reports Server (NTRS)
Zimmerman, Martin; Mallasch, Paul G.
1996-01-01
Presently, most computer software development in the Computational Electromagnetics (CEM) community employs the structured programming paradigm, particularly using the Fortran language. Other segments of the software community began switching to an Object-Oriented Programming (OOP) paradigm in recent years to help ease design and development of highly complex codes. This paper examines design of a time-domain numerical analysis CEM code using the OOP paradigm, comparing OOP code and structured programming code in terms of software maintenance, portability, flexibility, and speed.
Computer Description of the Field Artillery Ammunition Supply Vehicle
1983-04-01
Combinatorial Geometry (COM-GEOM) GIFT Computer Code Computer Target Description 2& AfTNACT (Cmne M feerve shb N ,neemssalyan ify by block number) A...input to the GIFT computer code to generate target vulnerability data. F.a- 4 ono OF I NOV 5S OLETE UNCLASSIFIED SECUOITY CLASSIFICATION OF THIS PAGE...Combinatorial Geometry (COM-GEOM) desrription. The "Geometric Information for Tarqets" ( GIFT ) computer code accepts the CO!-GEOM description and
48 CFR 252.227-7013 - Rights in technical data-Noncommercial items.
Code of Federal Regulations, 2011 CFR
2011-10-01
... causing a computer to perform a specific operation or series of operations. (3) Computer software means computer programs, source code, source code listings, object code listings, design details, algorithms... or will be developed exclusively with Government funds; (ii) Studies, analyses, test data, or similar...
48 CFR 252.227-7013 - Rights in technical data-Noncommercial items.
Code of Federal Regulations, 2012 CFR
2012-10-01
... causing a computer to perform a specific operation or series of operations. (3) Computer software means computer programs, source code, source code listings, object code listings, design details, algorithms... or will be developed exclusively with Government funds; (ii) Studies, analyses, test data, or similar...
48 CFR 252.227-7013 - Rights in technical data-Noncommercial items.
Code of Federal Regulations, 2014 CFR
2014-10-01
... causing a computer to perform a specific operation or series of operations. (3) Computer software means computer programs, source code, source code listings, object code listings, design details, algorithms... or will be developed exclusively with Government funds; (ii) Studies, analyses, test data, or similar...
48 CFR 252.227-7013 - Rights in technical data-Noncommercial items.
Code of Federal Regulations, 2010 CFR
2010-10-01
... causing a computer to perform a specific operation or series of operations. (3) Computer software means computer programs, source code, source code listings, object code listings, design details, algorithms... developed exclusively with Government funds; (ii) Studies, analyses, test data, or similar data produced for...
NASA Technical Reports Server (NTRS)
Harper, Warren
1989-01-01
Two electromagnetic scattering codes, NEC-BSC and ESP3, were delivered and installed on a NASA VAX computer for use by Marshall Space Flight Center antenna design personnel. The existing codes and certain supplementary software were updated, the codes installed on a computer that will be delivered to the customer, to provide capability for graphic display of the data to be computed by the use of the codes and to assist the customer in the solution of specific problems that demonstrate the use of the codes. With the exception of one code revision, all of these tasks were performed.
48 CFR 252.227-7013 - Rights in technical data-Noncommercial items.
Code of Federal Regulations, 2013 CFR
2013-10-01
... causing a computer to perform a specific operation or series of operations. (3) Computer software means computer programs, source code, source code listings, object code listings, design details, algorithms... funds; (ii) Studies, analyses, test data, or similar data produced for this contract, when the study...
Parallel Computation of the Jacobian Matrix for Nonlinear Equation Solvers Using MATLAB
NASA Technical Reports Server (NTRS)
Rose, Geoffrey K.; Nguyen, Duc T.; Newman, Brett A.
2017-01-01
Demonstrating speedup for parallel code on a multicore shared memory PC can be challenging in MATLAB due to underlying parallel operations that are often opaque to the user. This can limit potential for improvement of serial code even for the so-called embarrassingly parallel applications. One such application is the computation of the Jacobian matrix inherent to most nonlinear equation solvers. Computation of this matrix represents the primary bottleneck in nonlinear solver speed such that commercial finite element (FE) and multi-body-dynamic (MBD) codes attempt to minimize computations. A timing study using MATLAB's Parallel Computing Toolbox was performed for numerical computation of the Jacobian. Several approaches for implementing parallel code were investigated while only the single program multiple data (spmd) method using composite objects provided positive results. Parallel code speedup is demonstrated but the goal of linear speedup through the addition of processors was not achieved due to PC architecture.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eslinger, Paul W.; Aaberg, Rosanne L.; Lopresti, Charles A.
2004-09-14
This document contains detailed user instructions for a suite of utility codes developed for Rev. 1 of the Systems Assessment Capability. The suite of computer codes for Rev. 1 of Systems Assessment Capability performs many functions.
NASA Astrophysics Data System (ADS)
Chao, Tsi-Chian; Tsai, Yi-Chun; Chen, Shih-Kuan; Wu, Shu-Wei; Tung, Chuan-Jong; Hong, Ji-Hong; Wang, Chun-Chieh; Lee, Chung-Chi
2017-08-01
The purpose of this study was to investigate the density heterogeneity pattern as a factor affecting Bragg peak degradation, including shifts in Bragg peak depth (ZBP), distal range (R80 and R20), and distal fall-off (R80-R20) using Monte Carlo N-Particles, eXtension (MCNPX). Density heterogeneities of different patterns with increasing complexity were placed downstream of commissioned proton beams at the Proton and Radiation Therapy Centre of Chang Gung Memorial Hospital, including one 150 MeV wobbling broad beam (10×10 cm2) and one 150 MeV proton pencil beam (FWHM of cross-plane=2.449 cm, FWHM of in-plane=2.256 cm). MCNPX 2.7.0 was used to model the transport and interactions of protons and secondary particles in density heterogeneity patterns and water using its repeated structure geometry. Different heterogeneity patterns were inserted into a 21×21×20 cm3 phantom. Mesh tally was used to track the dose distribution when the proton beam passed through the different density heterogeneity patterns. The results show that different heterogeneity patterns do cause different Bragg peak degradations owing to multiple Coulomb scattering (MCS) occurring in the density heterogeneities. A trend of increasing R20 and R80-R20 with increasing geometry complexity was observed. This means that Bragg peak degradation is mainly caused by the changes to the proton spectrum owing to MCS in the density heterogeneities. In contrast, R80 did not change considerably with different heterogeneity patterns, which indicated that the energy spectrum has only minimum effects on R80. Bragg peak degradation can occur both for a broad proton beam and a pencil beam, but is less significant for the broad beam.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Botta, F.; Mairani, A.; Battistoni, G.
Purpose: The calculation of patient-specific dose distribution can be achieved by Monte Carlo simulations or by analytical methods. In this study, fluka Monte Carlo code has been considered for use in nuclear medicine dosimetry. Up to now, fluka has mainly been dedicated to other fields, namely high energy physics, radiation protection, and hadrontherapy. When first employing a Monte Carlo code for nuclear medicine dosimetry, its results concerning electron transport at energies typical of nuclear medicine applications need to be verified. This is commonly achieved by means of calculation of a representative parameter and comparison with reference data. Dose point kernelmore » (DPK), quantifying the energy deposition all around a point isotropic source, is often the one. Methods: fluka DPKs have been calculated in both water and compact bone for monoenergetic electrons (10{sup -3} MeV) and for beta emitting isotopes commonly used for therapy ({sup 89}Sr, {sup 90}Y, {sup 131}I, {sup 153}Sm, {sup 177}Lu, {sup 186}Re, and {sup 188}Re). Point isotropic sources have been simulated at the center of a water (bone) sphere, and deposed energy has been tallied in concentric shells. fluka outcomes have been compared to penelope v.2008 results, calculated in this study as well. Moreover, in case of monoenergetic electrons in water, comparison with the data from the literature (etran, geant4, mcnpx) has been done. Maximum percentage differences within 0.8{center_dot}R{sub CSDA} and 0.9{center_dot}R{sub CSDA} for monoenergetic electrons (R{sub CSDA} being the continuous slowing down approximation range) and within 0.8{center_dot}X{sub 90} and 0.9{center_dot}X{sub 90} for isotopes (X{sub 90} being the radius of the sphere in which 90% of the emitted energy is absorbed) have been computed, together with the average percentage difference within 0.9{center_dot}R{sub CSDA} and 0.9{center_dot}X{sub 90} for electrons and isotopes, respectively. 
Results: Concerning monoenergetic electrons, within 0.8{center_dot}R{sub CSDA} (where 90%-97% of the particle energy is deposed), fluka and penelope agree mostly within 7%, except for 10 and 20 keV electrons (12% in water, 8.3% in bone). The discrepancies between fluka and the other codes are of the same order of magnitude than those observed when comparing the other codes among them, which can be referred to the different simulation algorithms. When considering the beta spectra, discrepancies notably reduce: within 0.9{center_dot}X{sub 90}, fluka and penelope differ for less than 1% in water and less than 2% in bone with any of the isotopes here considered. Complete data of fluka DPKs are given as Supplementary Material as a tool to perform dosimetry by analytical point kernel convolution. Conclusions: fluka provides reliable results when transporting electrons in the low energy range, proving to be an adequate tool for nuclear medicine dosimetry.« less
Development of a model and computer code to describe solar grade silicon production processes
NASA Technical Reports Server (NTRS)
Gould, R. K.; Srivastava, R.
1979-01-01
Two computer codes were developed for describing flow reactors in which high purity, solar grade silicon is produced via reduction of gaseous silicon halides. The first is the CHEMPART code, an axisymmetric, marching code which treats two phase flows with models describing detailed gas-phase chemical kinetics, particle formation, and particle growth. It can be used to described flow reactors in which reactants, mix, react, and form a particulate phase. Detailed radial gas-phase composition, temperature, velocity, and particle size distribution profiles are computed. Also, deposition of heat, momentum, and mass (either particulate or vapor) on reactor walls is described. The second code is a modified version of the GENMIX boundary layer code which is used to compute rates of heat, momentum, and mass transfer to the reactor walls. This code lacks the detailed chemical kinetics and particle handling features of the CHEMPART code but has the virtue of running much more rapidly than CHEMPART, while treating the phenomena occurring in the boundary layer in more detail.
Comparison of two computer codes for crack growth analysis: NASCRAC Versus NASA/FLAGRO
NASA Technical Reports Server (NTRS)
Stallworth, R.; Meyers, C. A.; Stinson, H. C.
1989-01-01
Results are presented from the comparison study of two computer codes for crack growth analysis - NASCRAC and NASA/FLAGRO. The two computer codes gave compatible conservative results when the part through crack analysis solutions were analyzed versus experimental test data. Results showed good correlation between the codes for the through crack at a lug solution. For the through crack at a lug solution, NASA/FLAGRO gave the most conservative results.
Computational Predictions of the Performance Wright 'Bent End' Propellers
NASA Technical Reports Server (NTRS)
Wang, Xiang-Yu; Ash, Robert L.; Bobbitt, Percy J.; Prior, Edwin (Technical Monitor)
2002-01-01
Computational analysis of two 1911 Wright brothers 'Bent End' wooden propeller reproductions have been performed and compared with experimental test results from the Langley Full Scale Wind Tunnel. The purpose of the analysis was to check the consistency of the experimental results and to validate the reliability of the tests. This report is one part of the project on the propeller performance research of the Wright 'Bent End' propellers, intend to document the Wright brothers' pioneering propeller design contributions. Two computer codes were used in the computational predictions. The FLO-MG Navier-Stokes code is a CFD (Computational Fluid Dynamics) code based on the Navier-Stokes Equations. It is mainly used to compute the lift coefficient and the drag coefficient at specified angles of attack at different radii. Those calculated data are the intermediate results of the computation and a part of the necessary input for the Propeller Design Analysis Code (based on Adkins and Libeck method), which is a propeller design code used to compute the propeller thrust coefficient, the propeller power coefficient and the propeller propulsive efficiency.
Proceduracy: Computer Code Writing in the Continuum of Literacy
ERIC Educational Resources Information Center
Vee, Annette
2010-01-01
This dissertation looks at computer programming through the lens of literacy studies, building from the concept of code as a written text with expressive and rhetorical power. I focus on the intersecting technological and social factors of computer code writing as a literacy--a practice I call "proceduracy". Like literacy, proceduracy is a human…
Computer Code Aids Design Of Wings
NASA Technical Reports Server (NTRS)
Carlson, Harry W.; Darden, Christine M.
1993-01-01
AERO2S computer code developed to aid design engineers in selection and evaluation of aerodynamically efficient wing/canard and wing/horizontal-tail configurations that includes simple hinged-flap systems. Code rapidly estimates longitudinal aerodynamic characteristics of conceptual airplane lifting-surface arrangements. Developed in FORTRAN V on CDC 6000 computer system, and ported to MS-DOS environment.
Proton induced activity in graphite - comparison between measurement and simulation
NASA Astrophysics Data System (ADS)
Kiselev, Daniela; Bergmann, Ryan; Schumann, Dorothea; Talanov, Vadim; Wohlmuther, Michael
2018-06-01
The Paul Scherrer Institut (PSI) operates the Meson production target stations E and M with 590 MeV protons at currents of up to 2.4 mA. Both targets consist of polycrystalline graphite and rotate with 1 Hz due to the high power deposition (40 kW at 2 mA) in Target E. The graphite wheel is regularly exchanged and disposed as radioactive waste after a maximum of 3 to 4 years in operation, which corresponds to about 30 to 40 Ah of proton fluence. For disposal, the nuclide inventory of the long-lived isotopes (T1/2 > 60 d) has to be calculated and reported to the authorities. Measurements of gamma emitters, as well as 3H, 10Be and 14C, were carried out using different techniques. The measured specific activities are compared to Monte Carlo particle transport simulations performed with MCNPX2.7.0 using the BERTINI-DRESNER-RAL (default model in MCNPX2.7.0) and INCL4.6/ABLA07 as nuclear reaction models.
Cloud Computing for Complex Performance Codes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Appel, Gordon John; Hadgu, Teklu; Klein, Brandon Thorin
This report describes the use of cloud computing services for running complex public-domain performance assessment problems. The work consisted of two phases: Phase 1 demonstrated that the complex codes could run and compute trivial small-scale problems on several differently configured servers in a commercial cloud infrastructure; Phase 2 focused on proving that non-trivial, large-scale problems could be computed in the commercial cloud environment. The cloud computing effort was successfully applied using codes of interest to the geohydrology and nuclear waste disposal modeling community.
APC: A New Code for Atmospheric Polarization Computations
NASA Technical Reports Server (NTRS)
Korkin, Sergey V.; Lyapustin, Alexei I.; Rozanov, Vladimir V.
2014-01-01
A new polarized radiative transfer code Atmospheric Polarization Computations (APC) is described. The code is based on separation of the diffuse light field into anisotropic and smooth (regular) parts. The anisotropic part is computed analytically. The smooth regular part is computed numerically using the discrete ordinates method. Vertical stratification of the atmosphere, common types of bidirectional surface reflection and scattering by spherical particles or spheroids are included. A particular consideration is given to computation of the bidirectional polarization distribution function (BPDF) of the waved ocean surface.
Hypercube matrix computation task
NASA Technical Reports Server (NTRS)
Calalo, Ruel H.; Imbriale, William A.; Jacobi, Nathan; Liewer, Paulett C.; Lockhart, Thomas G.; Lyzenga, Gregory A.; Lyons, James R.; Manshadi, Farzin; Patterson, Jean E.
1988-01-01
A major objective of the Hypercube Matrix Computation effort at the Jet Propulsion Laboratory (JPL) is to investigate the applicability of a parallel computing architecture to the solution of large-scale electromagnetic scattering problems. Three scattering analysis codes are being implemented and assessed on a JPL/California Institute of Technology (Caltech) Mark 3 Hypercube. The codes, which utilize different underlying algorithms, give a means of evaluating the general applicability of this parallel architecture. The three analysis codes being implemented are a frequency domain method of moments code, a time domain finite difference code, and a frequency domain finite elements code. These analysis capabilities are being integrated into an electromagnetics interactive analysis workstation which can serve as a design tool for the construction of antennas and other radiating or scattering structures. The first two years of work on the Hypercube Matrix Computation effort are summarized, covering both new developments and results as well as work previously reported in the Hypercube Matrix Computation Task: Final Report for 1986 to 1987 (JPL Publication 87-18).
NASA Technical Reports Server (NTRS)
Norment, H. G.
1980-01-01
Calculations can be performed for any atmospheric conditions and for all water drop sizes, from the smallest cloud droplet to large raindrops. Any subsonic, external, non-lifting flow can be accommodated; flow into, but not through, inlets also can be simulated. Experimental water drop drag relations are used in the water drop equations of motion, and effects of gravity settling are included. Seven codes are described: (1) a code used to debug and plot body surface description data; (2) a code that processes the body surface data to yield the potential flow field; (3) a code that computes flow velocities at arrays of points in space; (4) a code that computes water drop trajectories from an array of points in space; (5) a code that computes water drop trajectories and fluxes to arbitrary target points; (6) a code that computes water drop trajectories tangent to the body; and (7) a code that produces stereo pair plots which include both the body and trajectories. Code descriptions include operating instructions, card inputs and printouts for example problems, and listings of the FORTRAN codes. Accuracy of the calculations is discussed, and trajectory calculation results are compared with prior calculations and with experimental data.
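The trajectory codes (4)-(6) integrate water drop equations of motion that combine drag with gravity settling. A minimal sketch of one such integration step, using a constant drag coefficient as a stand-in for the experimental drag relations the actual codes employ (the function name and all parameter values are illustrative):

```python
import math

G = 9.81            # m/s^2
RHO_AIR = 1.225     # kg/m^3
RHO_WATER = 1000.0  # kg/m^3

def step(pos, vel, air_vel, d, dt, cd=0.5):
    """One explicit-Euler step for a spherical drop of diameter d (m).
    A constant drag coefficient cd stands in for the experimental
    drag relations used by the actual codes."""
    m = RHO_WATER * math.pi * d**3 / 6.0       # drop mass
    area = math.pi * d**2 / 4.0                # frontal area
    rvx, rvy = air_vel[0] - vel[0], air_vel[1] - vel[1]
    rel = math.hypot(rvx, rvy)                 # speed relative to air
    f = 0.5 * RHO_AIR * cd * area * rel        # drag force per unit relative velocity
    ax = f * rvx / m
    ay = f * rvy / m - G                       # gravity settling included
    return ((pos[0] + vel[0] * dt, pos[1] + vel[1] * dt),
            (vel[0] + ax * dt, vel[1] + ay * dt))

# A 1 mm drop released at rest into a 10 m/s horizontal airflow:
p, v = (0.0, 0.0), (0.0, 0.0)
for _ in range(1000):                          # 1 s of flight
    p, v = step(p, v, (10.0, 0.0), d=1e-3, dt=1e-3)
print(p, v)  # drop is swept downstream while settling under gravity
```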
Utilizing GPUs to Accelerate Turbomachinery CFD Codes
NASA Technical Reports Server (NTRS)
MacCalla, Weylin; Kulkarni, Sameer
2016-01-01
GPU computing has established itself as a way to accelerate parallel codes in the high performance computing world. This work focuses on speeding up APNASA, a legacy CFD code used at NASA Glenn Research Center, while also drawing conclusions about the nature of GPU computing and the requirements to make GPGPU worthwhile on legacy codes. Rewriting and restructuring of the source code was avoided to limit the introduction of new bugs. The code was profiled and investigated for parallelization potential, then OpenACC directives were used to indicate parallel parts of the code. The use of OpenACC directives was not able to reduce the runtime of APNASA on either the NVIDIA Tesla discrete graphics card, or the AMD accelerated processing unit. Additionally, it was found that in order to justify the use of GPGPU, the amount of parallel work being done within a kernel would have to greatly exceed the work being done by any one portion of the APNASA code. It was determined that in order for an application like APNASA to be accelerated on the GPU, it should not be modular in nature, and the parallel portions of the code must contain a large portion of the code's computation time.
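The conclusion that a kernel must account for a large share of total runtime before GPGPU pays off is essentially Amdahl's law. A small sketch of that reasoning (the fractions and acceleration factors are illustrative, not measurements from APNASA):

```python
# Amdahl's law: overall speedup when only a fraction of the runtime is
# accelerated. Numbers are illustrative, not APNASA measurements.

def amdahl_speedup(parallel_fraction, accel_factor):
    """Overall speedup when `parallel_fraction` of runtime is sped up
    by `accel_factor` and the rest runs at the original speed."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / accel_factor)

# A 50x-faster kernel barely helps if it covers only 10% of the runtime:
print(round(amdahl_speedup(0.10, 50.0), 2))  # ~1.11
print(round(amdahl_speedup(0.95, 50.0), 2))  # ~14.49
```

This is why a modular code whose hot spots are scattered across many small kernels sees little benefit from directive-based offloading.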
PASCO: Structural panel analysis and sizing code: Users manual - Revised
NASA Technical Reports Server (NTRS)
Anderson, M. S.; Stroud, W. J.; Durling, B. J.; Hennessy, K. W.
1981-01-01
A computer code denoted PASCO is described for analyzing and sizing uniaxially stiffened composite panels. Buckling and vibration analyses are carried out with a linked plate analysis computer code denoted VIPASA, which is included in PASCO. Sizing is based on nonlinear mathematical programming techniques and employs a computer code denoted CONMIN, also included in PASCO. Design requirements considered are initial buckling, material strength, stiffness and vibration frequency. A user's manual for PASCO is presented.
Computation of Reacting Flows in Combustion Processes
NASA Technical Reports Server (NTRS)
Keith, Theo G., Jr.; Chen, Kuo-Huey
1997-01-01
The main objective of this research was to develop an efficient three-dimensional computer code for chemically reacting flows. The main computer code developed is ALLSPD-3D. The ALLSPD-3D computer program is developed for the calculation of three-dimensional, chemically reacting flows with sprays. The ALLSPD code employs a coupled, strongly implicit solution procedure for turbulent spray combustion flows. A stochastic droplet model and an efficient method for treatment of the spray source terms in the gas-phase equations are used to calculate the evaporating liquid sprays. The chemistry treatment in the code is general enough that an arbitrary number of reactions and species can be defined by the user. Also, it is written in generalized curvilinear coordinates with both multi-block and flexible internal blockage capabilities to handle complex geometries. In addition, for general industrial combustion applications, the code provides both dilution and transpiration cooling capabilities. The ALLSPD algorithm, which employs preconditioning and eigenvalue rescaling techniques, is capable of providing efficient solutions for flows with a wide range of Mach numbers. Although written for three-dimensional flows in general, the code can be used for two-dimensional and axisymmetric flow computations as well. The code is written in such a way that it can be run on various computer platforms (supercomputers, workstations and parallel processors), and the GUI (Graphical User Interface) should provide a user-friendly tool for setting up and running the code.
NASA Rotor 37 CFD Code Validation: Glenn-HT Code
NASA Technical Reports Server (NTRS)
Ameri, Ali A.
2010-01-01
In order to advance the goals of NASA aeronautics programs, it is necessary to continuously evaluate and improve the computational tools used for research and design at NASA. One such code is the Glenn-HT code which is used at NASA Glenn Research Center (GRC) for turbomachinery computations. Although the code has been thoroughly validated for turbine heat transfer computations, it has not been utilized for compressors. In this work, Glenn-HT was used to compute the flow in a transonic compressor and comparisons were made to experimental data. The results presented here are in good agreement with this data. Most of the measures of performance are well within the measurement uncertainties and the exit profiles of interest agree with the experimental measurements.
Final report for the Tera Computer TTI CRADA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davidson, G.S.; Pavlakos, C.; Silva, C.
1997-01-01
Tera Computer and Sandia National Laboratories have completed a CRADA, which examined the Tera Multi-Threaded Architecture (MTA) for use with large codes of importance to industry and DOE. The MTA is an innovative architecture that uses parallelism to mask latency between memories and processors. The physical implementation is a parallel computer with high cross-section bandwidth and GaAs processors designed by Tera, which support many small computation threads and fast, lightweight context switches between them. When any thread blocks while waiting for memory accesses to complete, another thread immediately begins execution so that high CPU utilization is maintained. The Tera MTA parallel computer has a single, global address space, which is appealing when porting existing applications to a parallel computer. This ease of porting is further enabled by compiler technology that helps break computations into parallel threads. DOE and Sandia National Laboratories were interested in working with Tera to further develop this computing concept. While Tera Computer would continue the hardware development and compiler research, Sandia National Laboratories would work with Tera to ensure that their compilers worked well with important Sandia codes, most particularly CTH, a shock physics code used for weapon safety computations. In addition to that important code, Sandia National Laboratories would complete research on a robotic path planning code, SANDROS, which is important in manufacturing applications, and would evaluate the MTA performance on this code. Finally, Sandia would work directly with Tera to develop 3D visualization codes, which would be appropriate for use with the MTA. Each of these tasks has been completed to the extent possible, given that Tera has just completed the MTA hardware. All of the CRADA work had to be done on simulators.
Operations analysis (study 2.1). Program listing for the LOVES computer code
NASA Technical Reports Server (NTRS)
Wray, S. T., Jr.
1974-01-01
A listing of the LOVES computer program is presented. The program is coded partially in SIMSCRIPT and FORTRAN. This version of LOVES is compatible with both the CDC 7600 and the UNIVAC 1108 computers. The code has been compiled, loaded, and executed successfully on the EXEC 8 system for the UNIVAC 1108.
ERIC Educational Resources Information Center
Knowlton, Marie; Wetzel, Robin
2006-01-01
This study compared the length of text in English Braille American Edition, the Nemeth code, and the computer braille code with the Unified English Braille Code (UEBC)--also known as Unified English Braille (UEB). The findings indicate that differences in the length of text are dependent on the type of material that is transcribed and the grade…
A MATLAB based 3D modeling and inversion code for MT data
NASA Astrophysics Data System (ADS)
Singh, Arun; Dehiya, Rahul; Gupta, Pravin K.; Israil, M.
2017-07-01
The development of a MATLAB based computer code, AP3DMT, for modeling and inversion of 3D Magnetotelluric (MT) data is presented. The code comprises two independent components: grid generator code and modeling/inversion code. The grid generator code performs model discretization and acts as an interface by generating various I/O files. The inversion code performs core computations in modular form - forward modeling, data functionals, sensitivity computations and regularization. These modules can be readily extended to other similar inverse problems like Controlled-Source EM (CSEM). The modular structure of the code provides a framework useful for implementation of new applications and inversion algorithms. The use of MATLAB and its libraries makes it more compact and user friendly. The code has been validated on several published models. To demonstrate its versatility and capabilities the results of inversion for two complex models are presented.
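As a schematic of what a regularization module in such an inversion code does, here is a minimal Tikhonov-regularized linear least-squares solve; AP3DMT's actual forward operator, data functionals, and regularization are far more elaborate, so this is only a conceptual sketch:

```python
import numpy as np

def tikhonov_invert(A, d, lam):
    """Solve min ||A m - d||^2 + lam ||m||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ d)

# Recover a known 5-parameter model from 20 noisy synthetic data points.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))          # stand-in linearized forward operator
m_true = np.arange(1.0, 6.0)
d = A @ m_true + 0.01 * rng.standard_normal(20)
m_est = tikhonov_invert(A, d, lam=1e-3)
print(np.round(m_est, 2))                 # close to [1. 2. 3. 4. 5.]
```

The modular split in AP3DMT (forward modeling, data functionals, sensitivities, regularization) corresponds to the pieces assembled here in miniature, which is why the same framework extends to problems like CSEM.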
Applications of automatic differentiation in computational fluid dynamics
NASA Technical Reports Server (NTRS)
Green, Lawrence L.; Carle, A.; Bischof, C.; Haigler, Kara J.; Newman, Perry A.
1994-01-01
Automatic differentiation (AD) is a powerful computational method that provides for computing exact sensitivity derivatives (SD) from existing computer programs for multidisciplinary design optimization (MDO) or for sensitivity analysis. A pre-compiler AD tool for FORTRAN programs called ADIFOR has been developed. The ADIFOR tool has been easily and quickly applied by NASA Langley researchers to assess the feasibility and computational impact of AD in MDO with several different FORTRAN programs. These include a state-of-the-art three-dimensional multigrid Navier-Stokes flow solver for wings or aircraft configurations in transonic turbulent flow. With ADIFOR the user specifies sets of independent and dependent variables within an existing computer code. ADIFOR then traces the dependency path throughout the code, applies the chain rule to formulate derivative expressions, and generates new code to compute the required SD matrix. The resulting codes have been verified to compute exact non-geometric and geometric SD for a variety of cases, in less time than is required to compute the SD matrix using centered divided differences.
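The chain-rule propagation that ADIFOR performs by source transformation can be illustrated with forward-mode AD on dual numbers. This is a conceptual sketch only; ADIFOR itself generates new FORTRAN source rather than overloading operations at run time:

```python
class Dual:
    """Forward-mode AD value: carries f and df through each operation,
    applying the chain rule exactly, as ADIFOR does by code generation."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.der * o.val + self.val * o.der)  # product rule
    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1   # f'(x) = 6x + 2

x = Dual(4.0, 1.0)   # seed derivative of the independent variable
y = f(x)
print(y.val, y.der)  # 57.0 26.0 -- exact derivative, no divided differences
```

The derivative is exact to machine precision, which is the advantage the abstract cites over centered divided differences.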
NASA Astrophysics Data System (ADS)
Alipchenkov, V. M.; Anfimov, A. M.; Afremov, D. A.; Gorbunov, V. S.; Zeigarnik, Yu. A.; Kudryavtsev, A. V.; Osipov, S. L.; Mosunova, N. A.; Strizhov, V. F.; Usov, E. V.
2016-02-01
The conceptual fundamentals of the development of the new-generation system thermal-hydraulic computational HYDRA-IBRAE/LM code are presented. The code is intended to simulate the thermalhydraulic processes that take place in the loops and the heat-exchange equipment of liquid-metal cooled fast reactor systems under normal operation and anticipated operational occurrences and during accidents. The paper provides a brief overview of Russian and foreign system thermal-hydraulic codes for modeling liquid-metal coolants and gives grounds for the necessity of development of a new-generation HYDRA-IBRAE/LM code. Considering the specific engineering features of the nuclear power plants (NPPs) equipped with the BN-1200 and the BREST-OD-300 reactors, the processes and the phenomena are singled out that require a detailed analysis and development of the models to be correctly described by the system thermal-hydraulic code in question. Information on the functionality of the computational code is provided, viz., the thermalhydraulic two-phase model, the properties of the sodium and the lead coolants, the closing equations for simulation of the heat-mass exchange processes, the models to describe the processes that take place during the steam-generator tube rupture, etc. The article gives a brief overview of the usability of the computational code, including a description of the support documentation and the supply package, as well as possibilities of taking advantages of the modern computer technologies, such as parallel computations. The paper shows the current state of verification and validation of the computational code; it also presents information on the principles of constructing of and populating the verification matrices for the BREST-OD-300 and the BN-1200 reactor systems. The prospects are outlined for further development of the HYDRA-IBRAE/LM code, introduction of new models into it, and enhancement of its usability. 
It is shown that the program of development and practical application of the code will make it possible, in the near future, to carry out computations for analyzing the safety of potential NPP projects at a qualitatively higher level.
Performance assessment of KORAT-3D on the ANL IBM-SP computer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alexeyev, A.V.; Zvenigorodskaya, O.A.; Shagaliev, R.M.
1999-09-01
The TENAR code is currently being developed at the Russian Federal Nuclear Center (VNIIEF) as a coupled dynamics code for the simulation of transients in VVER and RBMK systems and other nuclear systems. The neutronic module in this code system is KORAT-3D. This module is also one of the most computationally intensive components of the code system. A parallel version of KORAT-3D has been implemented to achieve the goal of obtaining transient solutions in reasonable computational time, particularly for RBMK calculations that involve the application of >100,000 nodes. An evaluation of the KORAT-3D code performance was recently undertaken on the Argonne National Laboratory (ANL) IBM ScalablePower (SP) parallel computer located in the Mathematics and Computer Science Division of ANL. At the time of the study, the ANL IBM-SP computer had 80 processors. This study was conducted under the auspices of a technical staff exchange program sponsored by the International Nuclear Safety Center (INSC).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evans, Thomas; Hamilton, Steven; Slattery, Stuart
Profugus is an open-source mini-application (mini-app) for radiation transport and reactor applications. It contains the fundamental computational kernels used in the Exnihilo code suite from Oak Ridge National Laboratory. However, Exnihilo is a production code with a substantial user base. Furthermore, Exnihilo is export controlled. This makes collaboration with computer scientists and computer engineers difficult. Profugus is designed to bridge that gap. By encapsulating the core numerical algorithms in an abbreviated code base that is open-source, computer scientists can analyze the algorithms and easily make code-architectural changes to test performance without compromising the production code values of Exnihilo. Profugus is not meant to be production software with respect to problem analysis. The computational kernels in Profugus are designed to analyze performance, not correctness. Nonetheless, users of Profugus can set up and run problems with enough real-world features to be useful as proof-of-concept for actual production work.
Fast H.264/AVC FRExt intra coding using belief propagation.
Milani, Simone
2011-01-01
In the H.264/AVC FRExt coder, the coding performance of Intra coding significantly surpasses that of previous still-image coding standards, like JPEG2000, thanks to a massive use of spatial prediction. Unfortunately, the adoption of an extensive set of predictors induces a significant increase in the computational complexity required by the rate-distortion optimization routine. The paper presents a complexity reduction strategy that aims at reducing the computational load of Intra coding with a small loss in compression performance. The proposed algorithm relies on selecting a reduced set of prediction modes according to their probabilities, which are estimated using a belief-propagation procedure. Experimental results show that the proposed method saves up to 60% of the coding time required by an exhaustive rate-distortion optimization method with a negligible loss in performance. Moreover, it permits accurate control of the computational complexity, unlike other methods where the complexity depends upon the coded sequence.
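The mode-selection idea can be sketched as follows: rank the intra prediction modes by estimated probability and pass only the most likely subset to the full rate-distortion loop. The probabilities below are placeholders; in the paper they come from the belief-propagation estimate:

```python
def prune_modes(mode_probs, keep_mass=0.9):
    """Keep the most probable modes until their probabilities sum to keep_mass."""
    ranked = sorted(mode_probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, total = [], 0.0
    for mode, p in ranked:
        kept.append(mode)
        total += p
        if total >= keep_mass:
            break
    return kept

# Placeholder probabilities for the nine 4x4 luma intra prediction modes:
probs = {"DC": 0.30, "V": 0.25, "H": 0.20, "DDL": 0.10, "DDR": 0.06,
         "VR": 0.04, "HD": 0.02, "VL": 0.02, "HU": 0.01}
print(prune_modes(probs))  # only 5 of the 9 modes go through full RD optimization
```

Raising or lowering `keep_mass` trades compression loss against coding time, which is how such a scheme gives direct control over computational complexity.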
2,445 Hours of Code: What I Learned from Facilitating Hour of Code Events in High School Libraries
ERIC Educational Resources Information Center
Colby, Jennifer
2015-01-01
This article describes a school librarian's experience with initiating an Hour of Code event for her school's student body. Hadi Partovi of Code.org conceived the Hour of Code "to get ten million students to try one hour of computer science" (Partovi, 2013a), which is implemented during Computer Science Education Week with a goal of…
NASA Technical Reports Server (NTRS)
Chaderjian, Neal M.
1991-01-01
Computations from two Navier-Stokes codes, NSS and F3D, are presented for a tangent-ogive-cylinder body at high angle of attack. Features of this steady flow include a pair of primary vortices on the leeward side of the body as well as secondary vortices. The topological and physical plausibility of this vortical structure is discussed. The accuracy of these codes is assessed by comparison of the numerical solutions with experimental data. The effects of turbulence model, numerical dissipation, and grid refinement are presented. The overall efficiency of these codes is also assessed by examining their convergence rates, computational time per time step, and maximum allowable time step for time-accurate computations. Overall, the numerical results from both codes compared equally well with experimental data; however, the NSS code was found to be significantly more efficient than the F3D code.
User's Manual for FEMOM3DR. Version 1.0
NASA Technical Reports Server (NTRS)
Reddy, C. J.
1998-01-01
FEMOM3DR is a computer code written in FORTRAN 77 to compute the radiation characteristics of antennas on a 3D body using a combined Finite Element Method (FEM)/Method of Moments (MoM) technique. The code is written to handle different feeding structures such as coaxial line, rectangular waveguide, and circular waveguide. It uses tetrahedral elements with vector edge basis functions for the FEM and triangular elements with roof-top basis functions for the MoM. By virtue of the FEM, the code can handle arbitrarily shaped three-dimensional bodies with inhomogeneous lossy materials, and owing to the MoM the computational domain can be terminated in any arbitrary shape. The User's Manual is written to acquaint the user with the operation of the code. The user is assumed to be familiar with the FORTRAN 77 language and the operating environment of the computers on which the code is intended to run.
Selection of a computer code for Hanford low-level waste engineered-system performance assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGrail, B.P.; Mahoney, L.A.
Planned performance assessments for the proposed disposal of low-level waste (LLW) glass produced from remediation of wastes stored in underground tanks at Hanford, Washington, will require calculations of radionuclide release rates from the subsurface disposal facility. These calculations will be done with the aid of computer codes. Currently available computer codes were ranked in terms of the feature sets implemented in the code that match a set of physical, chemical, numerical, and functional capabilities needed to assess release rates from the engineered system. The needed capabilities were identified from an analysis of the important physical and chemical processes expected to affect LLW glass corrosion and the mobility of radionuclides. The highest-ranked computer code was found to be the ARES-CT code, developed at PNL for the US Department of Energy for evaluation of land disposal sites.
User's manual for a material transport code on the Octopus Computer Network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Naymik, T.G.; Mendez, G.D.
1978-09-15
A code to simulate material transport through porous media was developed at Oak Ridge National Laboratory. This code has been modified and adapted for use at Lawrence Livermore Laboratory. This manual, in conjunction with report ORNL-4928, explains the input, output, and execution of the code on the Octopus Computer Network.
NASA Technical Reports Server (NTRS)
Logan, Terry G.
1994-01-01
The purpose of this study is to investigate the performance of integral equation computations using a numerical source field-panel method in a massively parallel processing (MPP) environment. A comparative study of the computational performance of the MPP CM-5 computer and a conventional Cray Y-MP supercomputer for a three-dimensional flow problem is made. A serial FORTRAN code is converted into a parallel CM-FORTRAN code. Performance results are obtained on the CM-5 with 32, 64, and 128 nodes along with those on the Cray Y-MP with a single processor. The comparison indicates that the parallel CM-FORTRAN code matches or outperforms the equivalent serial FORTRAN code in some cases.
NASA Astrophysics Data System (ADS)
Verdipoor, Khatibeh; Alemi, Abdolali; Mesbahi, Asghar
2018-06-01
Novel shielding materials for photons based on silicon resin and WO3, PbO, and Bi2O3 micro- and nanoparticles were designed, and their mass attenuation coefficients were calculated using the Monte Carlo (MC) method. Using lattice cards in the MCNPX code, micro- and nanoparticles with sizes of 100 nm and 1 μm were modeled inside a silicon resin matrix. Narrow-beam geometry was simulated to calculate the attenuation coefficients of the samples against mono-energetic beams of Co60 (1.17 and 1.33 MeV), Cs137 (663.8 keV), and Ba133 (355.9 keV). The shielding samples made of nanoparticles had mass attenuation coefficients up to 17% higher than those made of microparticles. The superiority of nano-shields relative to micro-shields depended on the filler concentration and the energy of the photons. PbO and Bi2O3 nanoparticles showed higher attenuation than WO3 nanoparticles at the studied energies. Fabrication of novel shielding materials using PbO and Bi2O3 nanoparticles is recommended for application in radiation protection against photon beams.
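In narrow-beam geometry the mass attenuation coefficient follows directly from the Beer-Lambert law. A minimal sketch of that reduction (the transmission, density, and thickness values are illustrative, not the paper's data):

```python
import math

def mass_attenuation(i0, i, density_g_cm3, thickness_cm):
    """mu/rho in cm^2/g from a narrow-beam transmission measurement,
    via Beer-Lambert: I = I0 * exp(-(mu/rho) * rho * t)."""
    return math.log(i0 / i) / (density_g_cm3 * thickness_cm)

# Illustrative numbers: 60% transmission through 0.5 cm of a 2.3 g/cm^3 sample
print(round(mass_attenuation(1.0, 0.60, 2.3, 0.5), 4))  # 0.4442
```

In the simulation, I and I0 are the tallied photon intensities with and without the sample in the beam, so the same one-line reduction applies to the MCNPX output.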
A neutron diagnostic for high current deuterium beams.
Rebai, M; Cavenago, M; Croci, G; Dalla Palma, M; Gervasini, G; Ghezzi, F; Grosso, G; Murtas, F; Pasqualotto, R; Cippo, E Perelli; Tardocchi, M; Tollin, M; Gorini, G
2012-02-01
A neutron diagnostic for high current deuterium beams is proposed for installation on the spectral shear interferometry for direct electric field reconstruction (SPIDER, Source for Production of Ion of Deuterium Extracted from RF plasma) test beam facility. The proposed detection system is called Close-contact Neutron Emission Surface Mapping (CNESM). The diagnostic aims at providing the map of the neutron emission on the beam dump surface by placing a detector in close contact, right behind the dump. CNESM uses gas electron multiplier detectors equipped with a cathode that also serves as neutron-proton converter foil. The cathode is made of a thin polythene film and an aluminium film; it is designed for detection of neutrons of energy >2.2 MeV with an incidence angle < 45°. CNESM was designed on the basis of simulations of the different steps from the deuteron beam interaction with the beam dump to the neutron detection in the nGEM. Neutron scattering was simulated with the MCNPX code. CNESM on SPIDER is a first step towards the application of this diagnostic technique to the MITICA beam test facility, where it will be used to resolve the horizontal profile of the beam intensity.
Monte carlo simulations of Yttrium reaction rates in Quinta uranium target
NASA Astrophysics Data System (ADS)
Suchopár, M.; Wagner, V.; Svoboda, O.; Vrzalová, J.; Chudoba, P.; Tichý, P.; Kugler, A.; Adam, J.; Závorka, L.; Baldin, A.; Furman, W.; Kadykov, M.; Khushvaktov, J.; Solnyshkin, A.; Tsoupko-Sitnikov, V.; Tyutyunnikov, S.; Bielewicz, M.; Kilim, S.; Strugalska-Gola, E.; Szuta, M.
2017-03-01
The international collaboration Energy and Transmutation of Radioactive Waste (E&T RAW) performed intensive studies of several simple accelerator-driven system (ADS) setups consisting of lead, uranium and graphite which were irradiated by relativistic proton and deuteron beams in recent years at the Joint Institute for Nuclear Research (JINR) in Dubna, Russia. The most recent setup, called Quinta, consisting of a natural uranium target-blanket and lead shielding, was irradiated by deuteron beams in the energy range between 1 and 8 GeV in three accelerator runs at the JINR Nuclotron in 2011 and 2012, with yttrium samples, among others, inserted inside the setup to measure the neutron flux in various places. Suitable activation detectors serve as one of the possible tools for monitoring of proton and deuteron beams and for measurements of neutron field distribution in ADS studies. Yttrium is one such suitable material for monitoring of high energy neutrons. Various threshold reactions can be observed in yttrium samples. The yields of isotopes produced in the samples were determined using the activation method. Monte Carlo simulations of the reaction rates leading to production of different isotopes were performed in the MCNPX transport code and compared with the experimental results obtained from the yttrium samples.
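The activation method mentioned above relates a measured, decay-corrected end-of-irradiation activity to the reaction rate per target atom through the saturation factor. A minimal sketch of that standard relation, with illustrative numbers (not the Quinta measurements, whose irradiation histories and corrections are more involved):

```python
import math

def reaction_rate(activity_bq, n_atoms, half_life_s, t_irr_s):
    """Reaction rate per target atom from the end-of-irradiation activity,
    using A = R * N * (1 - exp(-lambda * t_irr)) for a constant-rate
    irradiation of length t_irr."""
    lam = math.log(2.0) / half_life_s
    saturation = 1.0 - math.exp(-lam * t_irr_s)
    return activity_bq / (n_atoms * saturation)

# Illustrative: an 88Y-like product (T1/2 ~ 106.6 d) after a 10 h irradiation
r = reaction_rate(activity_bq=50.0, n_atoms=6.0e21,
                  half_life_s=106.6 * 86400, t_irr_s=10 * 3600)
print(f"{r:.2e} reactions per atom per second")
```

Rates obtained this way from the gamma spectra of the yttrium samples are what the MCNPX-simulated reaction rates are compared against.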
Computer Description of the M561 Utility Truck
1984-10-01
Vulnerability analysis requires input from the Geometric Information for Targets (GIFT) computer code. This report documents the combinatorial geometry (Com-Geom) description of the M561 utility truck, which is used as input to the GIFT computer code to generate target vulnerability data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eyler, L L; Trent, D S; Budden, M J
During the course of the TEMPEST computer code development a concurrent effort was conducted to assess the code's performance and the validity of computed results. The results of this work are presented in this document. The principal objective of this effort was to assure the code's computational correctness for a wide range of hydrothermal phenomena typical of fast breeder reactor application. 47 refs., 94 figs., 6 tabs.
NASA Astrophysics Data System (ADS)
Baptista, M.; Teles, P.; Cardoso, G.; Vaz, P.
2014-11-01
Over the last decade, there was a substantial increase in the number of interventional cardiology procedures worldwide, and the corresponding ionizing radiation doses for both the medical staff and patients became a subject of concern. Interventional procedures in cardiology are normally very complex, resulting in long exposure times. Also, these interventions require the operator to work near the patient and, consequently, close to the primary X-ray beam. Moreover, due to the scattered radiation from the patient and the equipment, the medical staff is also exposed to a non-uniform radiation field that can lead to a significant exposure of sensitive body organs and tissues, such as the eye lens, the thyroid and the extremities. In order to better understand the spatial variation of the dose and dose rate distributions during an interventional cardiology procedure, the dose distribution around a C-arm fluoroscopic system, in operation in a cardiac cath lab at a Portuguese hospital, was estimated using both Monte Carlo (MC) simulations and dosimetric measurements. To model and simulate the cardiac cath lab, including the fluoroscopic equipment used to execute interventional procedures, the state-of-the-art MC radiation transport code MCNPX 2.7.0 was used. Subsequently, Thermo-Luminescent Detector (TLD) measurements were performed, in order to validate and support the simulation results obtained for the cath lab model. The preliminary results presented in this study reveal that the cardiac cath lab model was successfully validated, taking into account the good agreement between MC calculations and TLD measurements. The simulated results for the isodose curves related to the C-arm fluoroscopic system are also consistent with the dosimetric information provided by the equipment manufacturer (Siemens).
The adequacy of the implemented computational model used to simulate complex procedures and map dose distributions around the operator and the medical staff is discussed, in view of the optimization principle (and the associated ALARA objective), one of the pillars of the international system of radiological protection.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hardin, M; Elson, H; Lamba, M
2014-06-01
Purpose: To quantify the clinically observed dose enhancement adjacent to cranial titanium fixation plates during post-operative radiotherapy. Methods: Irradiation of a titanium burr hole cover was simulated using the Monte Carlo code MCNPX for a 6 MV photon spectrum to investigate backscatter dose enhancement due to increased production of secondary electrons within the titanium plate. The simulated plate was placed 3 mm deep in a water phantom, and dose deposition was tallied for 0.2 mm thick cells adjacent to the entrance and exit sides of the plate. These results were compared to a simulation excluding the presence of the titanium to calculate relative dose enhancement on the entrance and exit sides of the plate. To verify simulated results, two titanium burr hole covers (Synthes, Inc. and Biomet, Inc.) were irradiated with 6 MV photons in a solid water phantom containing GafChromic MD-55 film. The phantom was irradiated on a Varian 21EX linear accelerator at multiple gantry angles (0–180 degrees) to analyze the angular dependence of the backscattered radiation. Relative dose enhancement was quantified using computer software. Results: Monte Carlo simulations indicate a relative difference of 26.4% and 7.1% on the entrance and exit sides of the plate, respectively. Film dosimetry results using a similar geometry indicate a relative difference of 13% and -10% on the entrance and exit sides of the plate, respectively. Relative dose enhancement on the entrance side of the plate decreased with increasing gantry angle from 0 to 180 degrees. Conclusion: Film and simulation results demonstrate an increase in dose to structures immediately adjacent to cranial titanium fixation plates. Increased beam obliquity has been shown to alleviate dose enhancement to some extent. These results are consistent with clinically observed effects.
Passive Safety Features Evaluation of KIPT Neutron Source Facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhong, Zhaopeng; Gohar, Yousry
2016-06-01
Argonne National Laboratory (ANL) of the United States and Kharkov Institute of Physics and Technology (KIPT) of Ukraine have cooperated on the development, design, and construction of a neutron source facility. The facility was constructed at Kharkov, Ukraine and its commissioning process is underway. It will be used to conduct basic and applied nuclear research, produce medical isotopes, and train young nuclear specialists. The facility has an electron accelerator-driven subcritical assembly. The electron beam power is 100 kW using 100 MeV electrons. Tungsten or natural uranium is the target material for generating neutrons driving the subcritical assembly. The subcritical assembly is composed of WWR-M2 Russian fuel assemblies with U-235 enrichment of 19.7 wt%, surrounded by beryllium reflector assemblies and graphite blocks. The subcritical assembly is seated in a water tank, which is a part of the primary cooling loop. During normal operation, the water coolant operates at room temperature and the total facility power is ~300 kW. The passive safety features of the facility are discussed in this study. The Monte Carlo computer code MCNPX was utilized in the analyses with ENDF/B-VII.0 nuclear data libraries. Negative reactivity temperature feedback was consistently observed, which is important for the facility safety performance. Due to the design of the WWR-M2 fuel assemblies, a slight water temperature increase and the corresponding water density decrease produce a large reactivity drop, which offsets the reactivity gain from mistakenly loading an additional fuel assembly. An increase of fuel temperature also causes a sufficiently large reactivity decrease. This enhances the facility safety performance because fuel temperature increase provides prompt negative reactivity feedback. The reactivity variation due to an empty fuel position filled by water during the fuel loading process is examined.
Also, the loading mistakes of removing beryllium reflector assemblies and replacing them with dummy assemblies were analyzed. In all these circumstances, the reactivity change results do not cause any safety concerns.
Alarcon, Gene M; Gamble, Rose F; Ryan, Tyler J; Walter, Charles; Jessup, Sarah A; Wood, David W; Capiola, August
2018-07-01
Computer programs are a ubiquitous part of modern society, yet little is known about the psychological processes that underlie reviewing code. We applied the heuristic-systematic model (HSM) to investigate the influence of computer code comments on perceptions of code trustworthiness. The study explored the influence of validity, placement, and style of comments in code on trustworthiness perceptions and time spent on code. Results indicated valid comments led to higher trust assessments and more time spent on the code. Properly placed comments led to lower trust assessments and had a marginal effect on time spent on code; however, the effect was no longer significant after controlling for effects of the source code. Low style comments led to marginally higher trustworthiness assessments, but high style comments led to longer time spent on the code. Several interactions were also found. Our findings suggest the relationship between code comments and perceptions of code trustworthiness is not as straightforward as previously thought. Additionally, the current paper extends the HSM to the programming literature. Copyright © 2018 Elsevier Ltd. All rights reserved.
Adiabatic topological quantum computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cesare, Chris; Landahl, Andrew J.; Bacon, Dave
Topological quantum computing promises error-resistant quantum computation without active error correction. However, there is a worry that during the process of executing quantum gates by braiding anyons around each other, extra anyonic excitations will be created that will disorder the encoded quantum information. Here, we explore this question in detail by studying adiabatic code deformations on Hamiltonians based on topological codes, notably Kitaev's surface codes and the more recently discovered color codes. We develop protocols that enable universal quantum computing by adiabatic evolution in a way that keeps the energy gap of the system constant with respect to the computation size and introduces only simple local Hamiltonian interactions. This allows one to perform holonomic quantum computing with these topological quantum computing systems. The tools we develop allow one to go beyond numerical simulations and understand these processes analytically.
Adiabatic topological quantum computing
Cesare, Chris; Landahl, Andrew J.; Bacon, Dave; ...
2015-07-31
Accident Analysis for the NIST Research Reactor Before and After Fuel Conversion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baek J.; Diamond D.; Cuadra, A.
Postulated accidents have been analyzed for the 20 MW D2O-moderated research reactor (NBSR) at the National Institute of Standards and Technology (NIST). The analysis has been carried out for the present core, which contains high enriched uranium (HEU) fuel, and for a proposed equilibrium core with low enriched uranium (LEU) fuel. The analyses employ state-of-the-art calculational methods. Three-dimensional Monte Carlo neutron transport calculations were performed with the MCNPX code to determine homogenized fuel compositions in the lower and upper halves of each fuel element and to determine the resulting neutronic properties of the core. The accident analysis employed a model of the primary loop with the RELAP5 code. The model includes the primary pumps, shutdown pumps, outlet valves, heat exchanger, fuel elements, and flow channels for both the six inner and twenty-four outer fuel elements. Evaluations were performed for the following accidents: (1) control rod withdrawal startup accident, (2) maximum reactivity insertion accident, (3) loss-of-flow accident resulting from loss of electrical power with an assumption of failure of shutdown cooling pumps, (4) loss-of-flow accident resulting from a primary pump seizure, and (5) loss-of-flow accident resulting from inadvertent throttling of a flow control valve. In addition, natural circulation cooling at low power operation was analyzed. The analysis shows that the conversion will not lead to significant changes in the safety analysis, and the calculated minimum critical heat flux ratio and maximum clad temperature assure that there is adequate margin to fuel failure.
Internal photon and electron dosimetry of the newborn patient—a hybrid computational phantom study
NASA Astrophysics Data System (ADS)
Wayson, Michael; Lee, Choonsik; Sgouros, George; Treves, S. Ted; Frey, Eric; Bolch, Wesley E.
2012-03-01
Estimates of radiation absorbed dose to organs of the nuclear medicine patient are a requirement for administered activity optimization and for stochastic risk assessment. Pediatric patients, and in particular the newborn child, represent that portion of the patient population where such optimization studies are most crucial owing to the enhanced tissue radiosensitivities and longer life expectancies of this patient subpopulation. In cases where whole-body CT imaging is not available, phantom-based calculations of radionuclide S values, the absorbed dose to a target tissue per nuclear transformation in a source tissue, are required for dose and risk evaluation. In this study, a comprehensive model of electron and photon dosimetry of the reference newborn child is presented based on a high-resolution hybrid-voxel phantom from the University of Florida (UF) patient model series. Values of photon specific absorbed fraction (SAF) were assembled for both the reference male and female newborn using the radiation transport code MCNPX v2.6. Values of electron SAF were assembled in a unique and time-efficient manner whereby the collisional and radiative components of organ dose, for both self- and cross-dose terms, were computed separately. Doses to the newborn skeletal tissues were assessed via fluence-to-dose response functions reported for the first time in this study. Values of photon and electron SAFs were used to assemble a complete set of S values for some 16 radionuclides commonly associated with molecular imaging of the newborn. These values were then compared to those available in the OLINDA/EXM software. S value ratios for organ self-dose ranged from 0.46 to 1.42, while similar ratios for organ cross-dose varied from a low of 0.04 to a high of 3.49. These large discrepancies are due in large part to the simplistic organ modeling in the stylized newborn model used in the OLINDA/EXM software.
A comprehensive model of internal dosimetry is presented in this study for the newborn nuclear medicine patient based upon the UF hybrid computational phantom. Photon dose response functions, photon and electron SAFs, and tables of radionuclide S values for the newborn child, both male and female, are given in a series of four electronic annexes available at stacks.iop.org/pmb/57/1433/mmedia. These values can be applied to optimization studies of image quality and stochastic risk for this most vulnerable class of pediatric patients.
NASA Astrophysics Data System (ADS)
Zhang, Juying; Hum Na, Yong; Caracappa, Peter F.; Xu, X. George
2009-10-01
This paper describes the development of a pair of adult male and adult female computational phantoms that are compatible with anatomical parameters for the 50th percentile population as specified by the International Commission on Radiological Protection (ICRP). The phantoms were designed entirely using polygonal mesh surfaces—a Boundary REPresentation (BREP) geometry that affords the ability to efficiently deform the shape and size of individual organs, as well as the body posture. A set of surface mesh models, from Anatomium™ 3D P1 V2.0, including 140 organs (out of 500 available) was adopted to supply the basic anatomical representation at the organ level. The organ masses were carefully adjusted to agree within 0.5% relative error with the reference values provided in the ICRP Publication 89. The finalized phantoms have been designated the RPI adult male (RPI-AM) and adult female (RPI-AF) phantoms. For the purposes of organ dose calculations using the MCNPX Monte Carlo code, these phantoms were subsequently converted to voxel formats. Monoenergetic photons between 10 keV and 10 MeV in six standard external photon source geometries were considered in this study: four parallel beams (anterior-posterior, posterior-anterior, left lateral and right lateral), one rotational and one isotropic. The results are tabulated as fluence-to-organ-absorbed-dose conversion coefficients and fluence-to-effective-dose conversion coefficients and compared against those derived from the ICRP computational phantoms, REX and REGINA. A general agreement was found for the effective dose from these two sets of phantoms for photon energies greater than about 300 keV. However, for low-energy photons and certain individual organs, the absorbed doses exhibit profound differences due to specific anatomical features. 
For example, the position of the arms affects the dose to the lung by more than 20% below 300 keV in the lateral source directions, and the vertical position of the testes affects the dose by more than 80% below 150 keV in the PA source direction. The deformability and adjustability of organs and posture in the RPI adult phantoms may prove useful not only for average workers or patients for radiation protection purposes, but also in studies involving anatomical and posture variability that is important in future radiation protection dosimetry.
Fast Computation of the Two-Point Correlation Function in the Age of Big Data
NASA Astrophysics Data System (ADS)
Pellegrino, Andrew; Timlin, John
2018-01-01
We present a new code which quickly computes the two-point correlation function for large sets of astronomical data. This code combines the ease of use of Python with the speed of parallel shared libraries written in C. We include the capability to compute auto- and cross-correlation statistics, and allow the user to calculate both the three-dimensional and angular correlation functions. Additionally, the code automatically divides user-provided sky masks into contiguous subsamples of similar size, using the HEALPix pixelization scheme, for the purpose of resampling. Errors are computed using jackknife and bootstrap resampling in a way that adds negligible extra runtime, even with many subsamples. We demonstrate speed comparable to other clustering codes, and code accuracy against known analytic results.
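The statistic such clustering codes evaluate is typically the Landy-Szalay estimator, built from binned data-data, data-random, and random-random pair counts. The sketch below illustrates that estimator only; the function names and the use of SciPy's cKDTree are illustrative assumptions, not this code's actual implementation:

```python
import numpy as np
from scipy.spatial import cKDTree

def pair_counts(a, b, edges):
    """Pair counts between point sets a and b, binned by separation."""
    cum = cKDTree(a).count_neighbors(cKDTree(b), edges)  # cumulative counts
    return np.diff(cum)

def landy_szalay(data, rand, edges):
    """xi = (DD - 2DR + RR) / RR with simple pair-count normalization."""
    nd, nr = len(data), len(rand)
    dd = pair_counts(data, data, edges) / (nd * nd)
    dr = pair_counts(data, rand, edges) / (nd * nr)
    rr = pair_counts(rand, rand, edges) / (nr * nr)
    return (dd - 2.0 * dr + rr) / rr
```

For clustered data the estimator is positive on small scales, while for a purely random catalog it fluctuates around zero; the jackknife errors described in the abstract would repeat this computation leaving out one sky subsample at a time.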
Design of convolutional tornado code
NASA Astrophysics Data System (ADS)
Zhou, Hui; Yang, Yao; Gao, Hongmin; Tan, Lu
2017-09-01
As a linear block code, the traditional tornado (tTN) code is inefficient in burst-erasure environments, and its multi-level structure may lead to high encoding/decoding complexity. This paper presents a convolutional tornado (cTN) code which improves the burst-erasure protection capability by applying the convolution property to the tTN code, and reduces computational complexity by eliminating the multi-level structure. Simulation results show that the cTN code provides better packet-loss protection with lower computational complexity than the tTN code.
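Tornado-family codes recover erased packets through sparse XOR parity constraints. As background only, here is a toy single-parity example of that XOR principle; it is not the tTN or cTN construction itself, whose structure is defined in the paper:

```python
from functools import reduce

def xor_packets(packets):
    """Bytewise XOR of equal-length packets."""
    return bytes(reduce(lambda a, b: [x ^ y for x, y in zip(a, b)],
                        packets, [0] * len(packets[0])))

def encode(data_packets):
    """Append one XOR parity packet; any single erasure is then recoverable."""
    return data_packets + [xor_packets(data_packets)]

def recover(received):
    """received: encoded packet list with exactly one entry erased (None)."""
    missing = received.index(None)
    known = [p for p in received if p is not None]
    restored = list(received)
    restored[missing] = xor_packets(known)  # XOR of the rest reproduces it
    return restored[:-1]  # drop the parity, return the data packets
```

Real tornado codes layer many such sparse parity constraints so that decoding proceeds by repeatedly solving constraints with a single unknown.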
Three-dimensional turbopump flowfield analysis
NASA Technical Reports Server (NTRS)
Sharma, O. P.; Belford, K. A.; Ni, R. H.
1992-01-01
A program was conducted to develop a flow prediction method applicable to rocket turbopumps. The complex nature of a flowfield in turbopumps is described and examples of flowfields are discussed to illustrate that physics based models and analytical calculation procedures based on computational fluid dynamics (CFD) are needed to develop reliable design procedures for turbopumps. A CFD code developed at NASA ARC was used as the base code. The turbulence model and boundary conditions in the base code were modified, respectively, to: (1) compute transitional flows and account for extra rates of strain, e.g., rotation; and (2) compute surface heat transfer coefficients and allow computation through multistage turbomachines. Benchmark quality data from two and three-dimensional cascades were used to verify the code. The predictive capabilities of the present CFD code were demonstrated by computing the flow through a radial impeller and a multistage axial flow turbine. Results of the program indicate that the present code operated in a two-dimensional mode is a cost effective alternative to full three-dimensional calculations, and that it permits realistic predictions of unsteady loadings and losses for multistage machines.
NASA Technical Reports Server (NTRS)
Smith, S. D.
1984-01-01
A users manual for the RAMP2 computer code is provided. The RAMP2 code can be used to model the dominant phenomena which affect the prediction of liquid and solid rocket nozzle and orbital plume flow fields. The general structure and operation of RAMP2 are discussed. A user input/output guide for the modified TRAN72 computer code and the RAMP2F code is given. The application and use of the BLIMPJ module are considered. Sample problems involving the space shuttle main engine and motor are included.
NASA Technical Reports Server (NTRS)
Chan, William M.
1995-01-01
Algorithms and computer code developments were performed for the overset grid approach to solving computational fluid dynamics problems. The techniques developed are applicable to compressible Navier-Stokes flow for any general complex configurations. The computer codes developed were tested on different complex configurations with the Space Shuttle launch vehicle configuration as the primary test bed. General, efficient and user-friendly codes were produced for grid generation, flow solution and force and moment computation.
NASA Technical Reports Server (NTRS)
Wigton, Larry
1996-01-01
Improving the numerical linear algebra routines for use in new Navier-Stokes codes, specifically Tim Barth's unstructured grid code, with spin-offs to TRANAIR is reported. A fast distance calculation routine for Navier-Stokes codes using the new one-equation turbulence models is written. The primary focus of this work was devoted to improving matrix-iterative methods. New algorithms have been developed which activate the full potential of classical Cray-class computers as well as distributed-memory parallel computers.
ISSYS: An integrated synergistic Synthesis System
NASA Technical Reports Server (NTRS)
Dovi, A. R.
1980-01-01
Integrated Synergistic Synthesis System (ISSYS), an integrated system of computer codes in which the sequence of program execution and data flow is controlled by the user, is discussed. The commands available to exert such control, the ISSYS major function and rules, and the computer codes currently available in the system are described. Computational sequences frequently used in the aircraft structural analysis and synthesis are defined. External computer codes utilized by the ISSYS system are documented. A bibliography on the programs is included.
User's manual for a two-dimensional, ground-water flow code on the Octopus computer network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Naymik, T.G.
1978-08-30
A ground-water hydrology computer code, programmed by R.L. Taylor (in Proc. American Society of Civil Engineers, Journal of Hydraulics Division, 93(HY2), pp. 25-33 (1967)), has been adapted to the Octopus computer system at Lawrence Livermore Laboratory. Using an example problem, this manual details the input, output, and execution options of the code.
Interactive Synthesis of Code Level Security Rules
2017-04-01
A thesis presented by Leo St. Amour to the Department of Computer Science in partial fulfillment of the requirements for the degree of Master of Science in Computer Science, Northeastern University, Boston, Massachusetts, April 2017.
NASA Technical Reports Server (NTRS)
1986-01-01
AGDISP, a computer code written for Langley by Continuum Dynamics, Inc., aids crop dusting airplanes in targeting pesticides. The code is commercially available and can be run on a personal computer by an inexperienced operator. Called SWA+H, it is used by the Forest Service, FAA, DuPont, etc. DuPont uses the code to "test" equipment on the computer using a laser system to measure particle characteristics of various spray compounds.
The adaption and use of research codes for performance assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liebetrau, A.M.
1987-05-01
Models of real-world phenomena are developed for many reasons. The models are usually, if not always, implemented in the form of a computer code. The characteristics of a code are determined largely by its intended use. Realizations or implementations of detailed mathematical models of complex physical and/or chemical processes are often referred to as research or scientific (RS) codes. Research codes typically require large amounts of computing time. One example of an RS code is a finite-element code for solving complex systems of differential equations that describe mass transfer through some geologic medium. Considerable computing time is required because computations are done at many points in time and/or space. Codes used to evaluate the overall performance of real-world physical systems are called performance assessment (PA) codes. Performance assessment codes are used to conduct simulated experiments involving systems that cannot be directly observed. Thus, PA codes usually involve repeated simulations of system performance in situations that preclude the use of conventional experimental and statistical methods. 3 figs.
Topological color codes on Union Jack lattices: a stable implementation of the whole Clifford group
DOE Office of Scientific and Technical Information (OSTI.GOV)
Katzgraber, Helmut G.; Theoretische Physik, ETH Zurich, CH-8093 Zurich; Bombin, H.
We study the error threshold of topological color codes on Union Jack lattices that allow for the full implementation of the whole Clifford group of quantum gates. After mapping the error-correction process onto a statistical mechanical random three-body Ising model on a Union Jack lattice, we compute its phase diagram in the temperature-disorder plane using Monte Carlo simulations. Surprisingly, topological color codes on Union Jack lattices have a similar error stability to color codes on triangular lattices, as well as to the Kitaev toric code. The enhanced computational capabilities of the topological color codes on Union Jack lattices with respect to triangular lattices and the toric code, combined with the inherent robustness of this implementation, show good prospects for future stable quantum computer implementations.
Accurate Modeling of Ionospheric Electromagnetic Fields Generated by a Low-Altitude VLF Transmitter
2007-08-31
Recovered figure captions: fields (vs. latitude) for 3 different grid spacings; low- and high-altitude fields produced by 10-kHz and 20-kHz sources computed using the FD and TD codes, with excellent agreement between the two validating the new FD code.
Multidisciplinary High-Fidelity Analysis and Optimization of Aerospace Vehicles. Part 1; Formulation
NASA Technical Reports Server (NTRS)
Walsh, J. L.; Townsend, J. C.; Salas, A. O.; Samareh, J. A.; Mukhopadhyay, V.; Barthelemy, J.-F.
2000-01-01
An objective of the High Performance Computing and Communication Program at the NASA Langley Research Center is to demonstrate multidisciplinary shape and sizing optimization of a complete aerospace vehicle configuration by using high-fidelity, finite element structural analysis and computational fluid dynamics aerodynamic analysis in a distributed, heterogeneous computing environment that includes high performance parallel computing. A software system has been designed and implemented to integrate a set of existing discipline analysis codes, some of them computationally intensive, into a distributed computational environment for the design of a high-speed civil transport configuration. The paper describes the engineering aspects of formulating the optimization by integrating these analysis codes and associated interface codes into the system. The discipline codes are integrated by using the Java programming language and a Common Object Request Broker Architecture (CORBA) compliant software product. A companion paper presents currently available results.
NASA Technical Reports Server (NTRS)
Hartenstein, Richard G., Jr.
1985-01-01
Computer codes have been developed to analyze antennas on aircraft and in the presence of scatterers. The purpose of this study is to use these codes to develop accurate computer models of various aircraft and antenna systems. The antenna systems analyzed are a P-3B L-Band antenna, an A-7E UHF relay pod antenna, and traffic advisory antenna system installed on a Bell Long Ranger helicopter. Computer results are compared to measured ones with good agreement. These codes can be used in the design stage of an antenna system to determine the optimum antenna location and save valuable time and costly flight hours.
NASA Astrophysics Data System (ADS)
Wei, Xiaohui; Li, Weishan; Tian, Hailong; Li, Hongliang; Xu, Haixiao; Xu, Tianfu
2015-07-01
The numerical simulation of multiphase flow and reactive transport in porous media for complex subsurface problems is a computationally intensive application. To meet increasing computational requirements, this paper presents a parallel computing method and architecture. Derived from TOUGHREACT, a well-established code for simulating subsurface multiphase flow and reactive transport problems, we developed a high-performance computing code, THC-MP, based on massively parallel computers, which greatly extends the computational capability of the original code. The domain decomposition method was applied to the coupled numerical computing procedure in the THC-MP. We designed the distributed data structure and implemented the data initialization and exchange between the computing nodes and the core solving module using hybrid parallel iterative and direct solvers. Numerical accuracy of the THC-MP was verified on a CO2 injection-induced reactive transport problem by comparing the results obtained from parallel computing with those from sequential computing (the original code). Execution efficiency and code scalability were examined through field-scale carbon sequestration applications on a multicore cluster. The results successfully demonstrate the enhanced performance of the THC-MP on parallel computing facilities.
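The domain-decomposition idea can be illustrated with a toy 1-D Jacobi sweep: each subdomain carries one ghost cell per side, and the decomposed update reproduces the single-domain result exactly. This serial sketch only mimics the ghost-cell exchange that a distributed implementation such as THC-MP performs across compute nodes; it is not taken from that code:

```python
import numpy as np

def jacobi_sweep(u):
    """One global Jacobi relaxation sweep with fixed boundary values."""
    v = u.copy()
    v[1:-1] = 0.5 * (u[:-2] + u[2:])
    return v

def decomposed_sweep(u, nsub):
    """Same sweep, computed subdomain by subdomain with ghost cells."""
    n = len(u)
    v = u.copy()
    cuts = np.linspace(1, n - 1, nsub + 1).astype(int)  # partition the interior
    for s, e in zip(cuts[:-1], cuts[1:]):
        local = u[s - 1:e + 1]          # subdomain plus one ghost cell per side
        v[s:e] = 0.5 * (local[:-2] + local[2:])
    return v
```

Because Jacobi updates depend only on old values, the two routines agree to machine precision; in a distributed run, the ghost cells are what neighboring nodes exchange between iterations.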
Code of Ethical Conduct for Computer-Using Educators: An ICCE Policy Statement.
ERIC Educational Resources Information Center
Computing Teacher, 1987
1987-01-01
Prepared by the International Council for Computers in Education's Ethics and Equity Committee, this code of ethics for educators using computers covers nine main areas: curriculum issues, issues relating to computer access, privacy/confidentiality issues, teacher-related issues, student issues, the community, school organizational issues,…
ERIC Educational Resources Information Center
Whitney, Michael; Lipford, Heather Richter; Chu, Bill; Thomas, Tyler
2018-01-01
Many of the software security vulnerabilities that people face today can be remediated through secure coding practices. A critical step toward the practice of secure coding is ensuring that our computing students are educated on these practices. We argue that secure coding education needs to be included across a computing curriculum. We are…
NASA Technical Reports Server (NTRS)
Norment, H. G.
1985-01-01
Subsonic, external flow about nonlifting bodies, lifting bodies or combinations of lifting and nonlifting bodies is calculated by a modified version of the Hess lifting code. Trajectory calculations can be performed for any atmospheric conditions and for all water drop sizes, from the smallest cloud droplet to large raindrops. Experimental water drop drag relations are used in the water drop equations of motion and effects of gravity settling are included. Inlet flow can be accommodated, and high Mach number compressibility effects are corrected for approximately. Seven codes are described: (1) a code used to debug and plot body surface description data; (2) a code that processes the body surface data to yield the potential flow field; (3) a code that computes flow velocities at arrays of points in space; (4) a code that computes water drop trajectories from an array of points in space; (5) a code that computes water drop trajectories and fluxes to arbitrary target points; (6) a code that computes water drop trajectories tangent to the body; and (7) a code that produces stereo pair plots which include both the body and trajectories. Accuracy of the calculations is discussed, and trajectory calculation results are compared with prior calculations and with experimental data.
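As background to the trajectory codes in items (4) through (6), a drop's motion combines drag toward the local air velocity with gravity settling. A minimal sketch using a simple Stokes-type relaxation drag; the actual codes use experimental drag relations, and their flow field comes from the potential-flow solution rather than the user-supplied function assumed here:

```python
import numpy as np

def drop_trajectory(x0, v0, flow, dt, steps, tau=1e-3, g=9.81):
    """Explicit-Euler integration of a water drop in an air-flow field.
    tau is the drag response time (Stokes-like; illustrative only)."""
    x, v = np.asarray(x0, float).copy(), np.asarray(v0, float).copy()
    path = [x.copy()]
    for _ in range(steps):
        a = (flow(x) - v) / tau + np.array([0.0, -g])  # drag + gravity settling
        v = v + dt * a
        x = x + dt * v
        path.append(x.copy())
    return np.array(path)
```

With a uniform flow and no gravity, the drop relaxes to the air velocity within a few response times, which is the behavior the tangent-trajectory and flux calculations above build on.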
Debugging Techniques Used by Experienced Programmers to Debug Their Own Code.
1990-09-01
Davis, and Schultz (1987) also compared experts and novices, but focused on the way a computer program is represented cognitively and how that ... of theories in the emerging computer programming domain (Fisher, 1987). In protocol analysis, subjects are asked to talk/think aloud as they solve
NASA Astrophysics Data System (ADS)
Hanlon, Justin Mitchell
Age-related macular degeneration (AMD) is a leading cause of vision loss and a major health problem for people over the age of 50 in industrialized nations. The current standard of care, ranibizumab, is used to help slow and in some cases stabilize the process of AMD, but requires frequent invasive injections into the eye. Interest continues for stereotactic radiosurgery (SRS), an option that provides a non-invasive treatment for the wet form of AMD, through the development of the IRay(TM) (Oraya Therapeutics, Inc., Newark, CA). The goal of this modality is to destroy choroidal neovascularization beneath the pigment epithelium via delivery of three 100 kVp photon beams entering through the sclera and overlapping on the macula delivering up to 24 Gy of therapeutic dose over a span of approximately 5 minutes. The divergent x-ray beams targeting the fovea are robotically positioned and the eye is gently immobilized by a suction-enabled contact lens. Device development requires assessment of patient effective dose, reference patient mean absorbed doses to radiosensitive tissues, and patient specific doses to the lens and optic nerve. A series of head phantoms, including both reference and patient specific, was derived from CT data and employed in conjunction with the MCNPX 2.5.0 radiation transport code to simulate treatment and evaluate absorbed doses to potential tissues-at-risk. The reference phantoms were used to evaluate effective dose and mean absorbed doses to several radiosensitive tissues. The optic nerve was modeled with changeable positions based on individual patient variability seen in a review of head CT scans gathered. Patient specific phantoms were used to determine the effect of varying anatomy and gaze. The results showed that absorbed doses to the non-targeted tissues were below the threshold levels for serious complications; specifically the development of radiogenic cataracts and radiation induced optic neuropathy (RON). 
The effective dose determined (0.29 mSv) is comparable to diagnostic procedures involving the head, such as an x-ray or CT scan. Thus, the computational assessment performed indicates that a previously established therapeutic dose can be delivered effectively to the macula with IRay(TM) without the potential for secondary complications.
Bednarz, Bryan; Xu, X George
2012-01-01
There is a serious and growing concern about the increased risk of radiation-induced second cancers and late tissue injuries associated with radiation treatment. To better understand and to more accurately quantify non-target organ doses due to scatter and leakage radiation from medical accelerators, a detailed Monte Carlo model of the medical linear accelerator is needed. This paper describes the development and validation of a detailed accelerator model of the Varian Clinac operating at 6 and 18 MV beam energies. Over 100 accelerator components have been defined and integrated using the Monte Carlo code MCNPX. A series of in-field and out-of-field dose validation studies were performed. In-field dose distributions calculated using the accelerator models were tuned to match measurement data that are considered the de facto ‘gold standard’ for the Varian Clinac accelerator provided by the manufacturer. Field sizes of 4 cm × 4 cm, 10 cm × 10 cm, 20 cm × 20 cm and 40 cm × 40 cm were considered. The local difference between calculated and measured dose on the percent depth dose curve was less than 2% for all locations. The local difference between calculated and measured dose on the dose profile curve was less than 2% in the plateau region and less than 2 mm in the penumbra region for all locations. Out-of-field dose profiles were calculated and compared to measurement data for both beam energies for field sizes of 4 cm × 4 cm, 10 cm × 10 cm and 20 cm × 20 cm. For all field sizes considered in this study, the average local difference between calculated and measured dose for the 6 and 18 MV beams was 14 and 16%, respectively. In addition, a method for determining neutron contamination in the 18 MV operating model was validated by comparing calculated in-air neutron fluence with reported calculations and measurements. The average difference between calculated and measured neutron fluence was 20%. 
As one of the most detailed accelerator models for both in-field and out-of-field dose calculations, the model will be combined with anatomically realistic computational patient phantoms into a computational framework to calculate non-target organ doses to patients from various radiation treatment plans. PMID:19141879
A COTS-Based Replacement Strategy for Aging Avionics Computers
2001-12-01
Only index terms and fragments of this report's documentation page are recoverable: communication control unit, COTS microprocessor, real-time operating system, native code objects and threads, legacy functions, virtual component environment, context-switch thunk, add-in replacement.
PARAVT: Parallel Voronoi tessellation code
NASA Astrophysics Data System (ADS)
González, R. E.
2016-10-01
In this study, we present a new open-source code for massively parallel computation of Voronoi tessellations (VT hereafter) in large data sets. The code is aimed at astrophysical applications, where VT densities and neighbor lists are widely used. Several serial Voronoi tessellation codes exist; however, no open-source parallel implementation is available to handle the large numbers of particles/galaxies in current N-body simulations and sky surveys. Parallelization is implemented with MPI, and the VT is computed using the Qhull library. The domain decomposition takes into account consistent boundary computation between tasks and includes periodic conditions. In addition, the code computes the neighbor list, Voronoi density, Voronoi cell volume, and density gradient for each particle, as well as densities on a regular grid. The code implementation and user guide are publicly available at https://github.com/regonzar/paravt.
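The per-particle Voronoi density that codes like PARAVT provide can be illustrated with a deliberately simple sketch. The one-dimensional case below (each cell bounded by midpoints to neighboring particles, density = mass / cell length) is only a toy illustration of the concept, not PARAVT's MPI/Qhull implementation.

```python
# Toy 1D Voronoi tessellation: each interior particle's cell runs from the
# midpoint to its left neighbor to the midpoint to its right neighbor.
def voronoi_density_1d(xs, mass=1.0):
    """Return (cell_length, density) for each interior particle of a 1D set."""
    xs = sorted(xs)
    out = []
    for i in range(1, len(xs) - 1):
        left = 0.5 * (xs[i - 1] + xs[i])
        right = 0.5 * (xs[i] + xs[i + 1])
        length = right - left
        out.append((length, mass / length))  # density = mass / cell "volume"
    return out

cells = voronoi_density_1d([0.0, 1.0, 1.5, 2.0, 4.0])
# The tightly packed particle at 1.5 gets the smallest cell and the
# highest Voronoi density.
```

In 2D/3D the same idea applies with polygonal/polyhedral cells, which is where a library such as Qhull becomes necessary.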
NASA Technical Reports Server (NTRS)
Almroth, B. O.; Brogan, F. A.
1978-01-01
Basic information about the computer code STAGS (Structural Analysis of General Shells) is presented to describe to potential users the scope of the code and the solution procedures that are incorporated. Primarily, STAGS is intended for analysis of shell structures, although it has been extended to more complex shell configurations through the inclusion of springs and beam elements. The formulation is based on a variational approach in combination with local two dimensional power series representations of the displacement components. The computer code includes options for analysis of linear or nonlinear static stress, stability, vibrations, and transient response. Material as well as geometric nonlinearities are included. A few examples of applications of the code are presented for further illustration of its scope.
Holonomic surface codes for fault-tolerant quantum computation
NASA Astrophysics Data System (ADS)
Zhang, Jiang; Devitt, Simon J.; You, J. Q.; Nori, Franco
2018-02-01
Surface codes can protect quantum information stored in qubits from local errors as long as the per-operation error rate is below a certain threshold. Here we propose holonomic surface codes by harnessing the quantum holonomy of the system. In our scheme, the holonomic gates are built via auxiliary qubits rather than the auxiliary levels in multilevel systems used in conventional holonomic quantum computation. The key advantage of our approach is that the auxiliary qubits are in their ground state before and after each gate operation, so they are not involved in the operation cycles of surface codes. This provides an advantageous way to implement surface codes for fault-tolerant quantum computation.
NASA Technical Reports Server (NTRS)
Chima, R. V.; Strazisar, A. J.
1982-01-01
Two- and three-dimensional inviscid solutions for the flow in a transonic axial compressor rotor at design speed are compared with probe and laser anemometer measurements at near-stall and maximum-flow operating points. Experimental details of the laser anemometer system and computational details of the two-dimensional axisymmetric code and the three-dimensional Euler code are described. Comparisons are made between relative Mach number and flow angle contours, shock location, and shock strength. A procedure for using an efficient axisymmetric code to generate downstream pressure input for computationally expensive Euler codes is discussed. A film supplement shows the calculations at the two operating points with the time-marching Euler code.
EAC: A program for the error analysis of STAGS results for plates
NASA Technical Reports Server (NTRS)
Sistla, Rajaram; Thurston, Gaylen A.; Bains, Nancy Jane C.
1989-01-01
A computer code is now available for estimating the error in results from the STAGS finite element code for a shell unit consisting of a rectangular orthotropic plate. This memorandum contains basic information about the computer code EAC (Error Analysis and Correction) and describes the connection between the input data for the STAGS shell units and the input data necessary to run the error analysis code. The STAGS code returns a set of nodal displacements and a discrete set of stress resultants; the EAC code returns a continuous solution for displacements and stress resultants. The continuous solution is defined by a set of generalized coordinates computed in EAC. The theory and the assumptions that determine the continuous solution are also outlined in this memorandum. An example of application of the code is presented and instructions on its usage on the Cyber and the VAX machines have been provided.
CFD Modeling of Free-Piston Stirling Engines
NASA Technical Reports Server (NTRS)
Ibrahim, Mounir B.; Zhang, Zhi-Guo; Tew, Roy C., Jr.; Gedeon, David; Simon, Terrence W.
2001-01-01
NASA Glenn Research Center (GRC) is funding Cleveland State University (CSU) to develop a reliable Computational Fluid Dynamics (CFD) code that can predict engine performance with the goal of significant improvements in accuracy when compared to one-dimensional (1-D) design code predictions. The funding also includes conducting code validation experiments at both the University of Minnesota (UMN) and CSU. In this paper a brief description of the work-in-progress is provided in the two areas (CFD and Experiments). Also, previous test results are compared with computational data obtained using (1) a 2-D CFD code obtained from Dr. Georg Scheuerer and further developed at CSU and (2) a multidimensional commercial code CFD-ACE+. The test data and computational results are for (1) a gas spring and (2) a single piston/cylinder with attached annular heat exchanger. The comparisons among the codes are discussed. The paper also discusses plans for conducting code validation experiments at CSU and UMN.
On the error statistics of Viterbi decoding and the performance of concatenated codes
NASA Technical Reports Server (NTRS)
Miller, R. L.; Deutsch, L. J.; Butman, S. A.
1981-01-01
Computer simulation results are presented on the performance of convolutional codes of constraint lengths 7 and 10 concatenated with the (255, 223) Reed-Solomon code (a proposed NASA standard). These results indicate that as much as 0.8 dB can be gained by concatenating this Reed-Solomon code with a (10, 1/3) convolutional code, instead of the (7, 1/2) code currently used by the DSN. A mathematical model of Viterbi decoder burst-error statistics is developed and is validated through additional computer simulations.
New double-byte error-correcting codes for memory systems
NASA Technical Reports Server (NTRS)
Feng, Gui-Liang; Wu, Xinen; Rao, T. R. N.
1996-01-01
Error-correcting or error-detecting codes have been used in the computer industry to increase reliability, reduce service costs, and maintain data integrity. The single-byte error-correcting and double-byte error-detecting (SbEC-DbED) codes have been successfully used in computer memory subsystems. There are many methods to construct double-byte error-correcting (DBEC) codes. In the present paper we construct a class of double-byte error-correcting codes, which are more efficient than those known to be optimum, and a decoding procedure for our codes is also considered.
SOURCELESS STARTUP. A MACHINE CODE FOR COMPUTING LOW-SOURCE REACTOR STARTUPS
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacMillan, D.B.
1960-06-01
A revision to the sourceless start-up code is presented. The code solves a system of differential equations encountered in computing the probability distribution of activity at an observed power level during reactor start-up from a very low source level. (J.R.D.)
Computer-assisted coding and clinical documentation: first things first.
Tully, Melinda; Carmichael, Angela
2012-10-01
Computer-assisted coding tools have the potential to drive improvements in seven areas: Transparency of coding. Productivity (generally by 20 to 25 percent for inpatient claims). Accuracy (by improving specificity of documentation). Cost containment (by reducing overtime expenses, audit fees, and denials). Compliance. Efficiency. Consistency.
Organic Scintillator for Real-Time Neutron Dosimetry
Beyer, Kyle A.; Di Fulvio, Angela; Stolarczyk, Liliana; ...
2017-11-15
We have developed a radiation detector based on an organic scintillator for spectrometry and dosimetry of out-of-field secondary neutrons from clinical proton beams. The detector consists of an EJ-299-34 crystalline organic scintillator, coupled by fiber-optic cable to a silicon photomultiplier (SiPM). Proof-of-concept measurements were taken with 137Cs and 252Cf, and corresponding simulations were performed in MCNPX-PoliMi. Despite its small size, the detector is able to discriminate between neutrons and gamma rays via pulse shape discrimination. We simulated the response function of the detector to monoenergetic neutrons in the 100 keV–0 MeV range using MCNPX-PoliMi. The measured unfolded 252Cf neutron spectrum is in good agreement with the theoretical Watt fission spectrum. We determined the ambient dose equivalent by folding the spectrum with the fluence-to-ambient-dose conversion coefficients, with a 1.4% deviation from theory. Preliminary proton beam experiments were performed at the Bronowice Cyclotron Center patient treatment facility using a clinically relevant proton pencil beam for brain tumor and cranio-spinal treatments directed at a child phantom.
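The folding step in the abstract above (ambient dose equivalent obtained by folding the unfolded neutron spectrum with fluence-to-ambient-dose conversion coefficients) is, numerically, a bin-by-bin weighted sum. The sketch below uses made-up fluence values and coefficients, not the measured 252Cf spectrum or standard ICRP coefficient tables.

```python
# Sketch of spectrum folding: H*(10) = sum over energy bins of
# fluence(E_i) * h(E_i), where h is the fluence-to-dose coefficient.
def ambient_dose_equivalent(fluence_per_bin, h_coeff_per_bin):
    """Fold a binned neutron fluence (n/cm^2) with fluence-to-H*(10)
    conversion coefficients (pSv*cm^2); returns dose in pSv."""
    assert len(fluence_per_bin) == len(h_coeff_per_bin)
    return sum(f * h for f, h in zip(fluence_per_bin, h_coeff_per_bin))

fluence = [1.0e4, 5.0e3, 2.0e3]   # n/cm^2 in three energy bins (illustrative)
h_star = [100.0, 300.0, 400.0]    # pSv*cm^2 per bin (illustrative)
dose_psv = ambient_dose_equivalent(fluence, h_star)
```

In practice the coefficients are interpolated from tabulated values onto the unfolding energy grid before the sum is taken.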
Molybdenum-99 production calculation analysis of SAMOP reactor based on thorium nitrate fuel
NASA Astrophysics Data System (ADS)
Syarip; Togatorop, E.; Yassar
2018-03-01
SAMOP (Subcritical Assembly for Molybdenum-99 Production) has the potential to use thorium as fuel to produce 99Mo after modifying the design, but this production performance has not yet been characterized. A study is needed to obtain the correlation of 99Mo production with the composition of the mixed uranium-thorium fuel and with SAMOP power in the modified design. The study aims to determine the 99Mo production of thorium-nitrate-based fuel in SAMOP's modified designs. The Monte Carlo N-Particle eXtended (MCNPX) code is used to simulate the operation of the assembly, varying the composition of the uranium-thorium nitrate mixed fuel, the geometry, and the power fraction in the modified SAMOP designs. The burnup command in MCNPX is used to confirm the 99Mo production result. The assembly is simulated to operate for 6 days at a subcritical neutron multiplication factor (keff = 0.97-0.99). The neutron multiplication factor of the modified design is keff = 0.97, and the 99Mo activity obtained is 18.58 Ci at 1 kW power operation.
Hypercube matrix computation task
NASA Technical Reports Server (NTRS)
Calalo, R.; Imbriale, W.; Liewer, P.; Lyons, J.; Manshadi, F.; Patterson, J.
1987-01-01
The Hypercube Matrix Computation (Year 1986-1987) task investigated the applicability of a parallel computing architecture to the solution of large-scale electromagnetic scattering problems. Two existing electromagnetic scattering codes were selected for conversion to the Mark III Hypercube concurrent computing environment. They were selected so that the underlying numerical algorithms utilized would be different, thereby providing a more thorough evaluation of the appropriateness of the parallel environment for these types of problems. The first code was a frequency-domain method of moments solution, NEC-2, developed at Lawrence Livermore National Laboratory. The second code was a time-domain finite difference solution of Maxwell's equations to solve for the scattered fields. Once the codes were implemented on the hypercube and verified to obtain correct solutions by comparing the results with those from sequential runs, several measures were used to evaluate the performance of the two codes. First, the problem size possible on the hypercube with 128 megabytes of memory for a 32-node configuration was compared with that available in a typical sequential user environment of 4 to 8 megabytes. Then, the performance of the codes was analyzed for the computational speedup attained by the parallel architecture.
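The speedup evaluation mentioned at the end of the abstract can be sketched with a standard model. This Amdahl's-law snippet is a generic illustration (the 5% serial fraction is an assumption for the example), not the task's actual Mark III measurements.

```python
# Amdahl's law: with serial fraction s, speedup on n processors is
# S(n) = 1 / (s + (1 - s)/n), and parallel efficiency is S(n)/n.
def amdahl_speedup(serial_fraction, n_procs):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

def efficiency(serial_fraction, n_procs):
    return amdahl_speedup(serial_fraction, n_procs) / n_procs

# A 32-node configuration with an assumed 5% serial work: speedup is
# well below the ideal factor of 32, which is why measured speedup is
# a key evaluation metric for parallelized codes.
s32 = amdahl_speedup(0.05, 32)
```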
Bistatic radar cross section of a perfectly conducting rhombus-shaped flat plate
NASA Astrophysics Data System (ADS)
Fenn, Alan J.
1990-05-01
The bistatic radar cross section of a perfectly conducting flat plate that has a rhombus shape (equilateral parallelogram) is investigated. The Ohio State University electromagnetic surface patch code (ESP version 4) is used to compute the theoretical bistatic radar cross section of a 35- x 27-in rhombus plate at 1.3 GHz over the bistatic angles 15 deg to 142 deg. The ESP-4 computer code is a method of moments FORTRAN-77 program which can analyze general configurations of plates and wires. This code has been installed and modified at Lincoln Laboratory on a SUN 3 computer network. Details of the code modifications are described. Comparisons of the method of moments simulations and measurements of the rhombus plate are made. It is shown that the ESP-4 computer code provides a high degree of accuracy in the calculation of copolarized and cross-polarized bistatic radar cross section patterns.
ASR4: A computer code for fitting and processing 4-gage anelastic strain recovery data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warpinski, N.R.
A computer code for analyzing four-gage anelastic strain recovery (ASR) data has been modified for use on a personal computer. This code fits the viscoelastic model of Warpinski and Teufel to measured ASR data, calculates the stress orientation directly, and computes stress magnitudes if sufficient input data are available. The code also calculates the stress orientation using strain-rosette equations, and it calculates stress magnitudes using Blanton's approach, again assuming sufficient input data are available. The program is written in FORTRAN, compiled with Ryan-McFarland Version 2.4. Graphics use PLOT88 software by Plotworks, Inc., but the graphics software must be obtained by the user because of licensing restrictions. A version without graphics can also be run. This code is available through the National Energy Software Center (NESC), operated by Argonne National Laboratory. 5 refs., 3 figs.
Navier-Stokes Simulation of Homogeneous Turbulence on the CYBER 205
NASA Technical Reports Server (NTRS)
Wu, C. T.; Ferziger, J. H.; Chapman, D. R.; Rogallo, R. S.
1984-01-01
A computer code which solves the Navier-Stokes equations for three-dimensional, time-dependent, homogeneous turbulence has been written for the CYBER 205. The code has options for both 64-bit and 32-bit arithmetic. With 32-bit computation, mesh sizes up to 64³ are contained within core of a 2 million 64-bit word memory. Timing runs were made for various vector lengths up to 6144. With this code, speeds a little over 100 Mflops have been achieved on a 2-pipe CYBER 205. Several problems encountered in the coding are discussed.
The investigation of tethered satellite system dynamics
NASA Technical Reports Server (NTRS)
Lorenzini, E.
1985-01-01
The tether control law to retrieve the satellite was modified in order to have a smooth retrieval trajectory of the satellite that minimizes the thruster activation. The satellite thrusters were added to the rotational dynamics computer code and a preliminary control logic was implemented to simulate them during the retrieval maneuver. The high resolution computer code for modelling the three dimensional dynamics of untensioned tether, SLACK3, was made fully operative and a set of computer simulations of possible tether breakages was run. The distribution of the electric field around an electrodynamic tether in vacuo severed at some length from the shuttle was computed with a three dimensional electrodynamic computer code.
Experimental and computational surface and flow-field results for an all-body hypersonic aircraft
NASA Technical Reports Server (NTRS)
Lockman, William K.; Lawrence, Scott L.; Cleary, Joseph W.
1990-01-01
The objective of the present investigation is to establish a benchmark experimental data base for a generic hypersonic vehicle shape for validation and/or calibration of advanced computational fluid dynamics computer codes. This paper includes results from the comprehensive test program conducted in the NASA/Ames 3.5-foot Hypersonic Wind Tunnel for a generic all-body hypersonic aircraft model. Experimental and computational results on flow visualization, surface pressures, surface convective heat transfer, and pitot-pressure flow-field surveys are presented. Comparisons of the experimental results with computational results from an upwind parabolized Navier-Stokes code developed at Ames demonstrate the capabilities of this code.
Computer search for binary cyclic UEP codes of odd length up to 65
NASA Technical Reports Server (NTRS)
Lin, Mao-Chao; Lin, Chi-Chang; Lin, Shu
1990-01-01
Using an exhaustive computation, the unequal error protection capabilities of all binary cyclic codes of odd length up to 65 that have minimum distances at least 3 are found. For those codes that can only have upper bounds on their unequal error protection capabilities computed, an analytic method developed by Dynkin and Togonidze (1976) is used to show that the upper bounds meet the exact unequal error protection capabilities.
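The flavor of such an exhaustive computation can be shown on a toy case: enumerating every codeword of a small binary cyclic code to find its minimum distance. The (7,4) cyclic Hamming code with generator polynomial g(x) = x³ + x + 1 below is a standard textbook example, not one of the odd-length codes up to 65 searched in the paper.

```python
# Codewords of a cyclic code are the multiples m(x) * g(x) of the
# generator polynomial over GF(2); polynomials are stored as integers
# with bit i holding the coefficient of x^i.
def gf2_poly_mul(a, b):
    """Carry-less (GF(2)) polynomial multiplication."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        b >>= 1
    return result

def min_distance(gen_poly, k):
    """Exhaustively enumerate all 2^k - 1 nonzero codewords and return
    the minimum Hamming weight, which equals the minimum distance for
    a linear code."""
    return min(bin(gf2_poly_mul(m, gen_poly)).count("1")
               for m in range(1, 1 << k))

g = 0b1011                  # g(x) = x^3 + x + 1
d = min_distance(g, 4)      # (7,4) Hamming code: minimum distance 3
```

The exhaustive approach scales as 2^k, which is why an analytic method is needed in the paper for the codes whose unequal error protection capabilities cannot be computed directly.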
A Combinatorial Geometry Computer Description of the MEP-021A Generator Set
1979-02-01
Only fragments of this report's documentation page are recoverable. Index terms: generator computer description, gasoline generator, GIFT, MEP-021A. Abstract fragments: the GIFT code is also stored on magnetic tape for future vulnerability analysis; the Geometric Information for Targets (GIFT) computer code traces shotlines through a COM-GEOM description from any specified attack ...
Optimizing a liquid propellant rocket engine with an automated combustor design code (AUTOCOM)
NASA Technical Reports Server (NTRS)
Hague, D. S.; Reichel, R. H.; Jones, R. T.; Glatt, C. R.
1972-01-01
A procedure for automatically designing a liquid propellant rocket engine combustion chamber in an optimal fashion is outlined. The procedure is contained in a digital computer code, AUTOCOM. The code is applied to an existing engine, and design modifications are generated which provide a substantial potential payload improvement over the existing design. Computer time requirements for this payload improvement were small, approximately four minutes on the CDC 6600 computer.
Unaligned instruction relocation
Bertolli, Carlo; O'Brien, John K.; Sallenave, Olivier H.; Sura, Zehra N.
2018-01-23
In one embodiment, a computer-implemented method includes receiving source code to be compiled into an executable file for an unaligned instruction set architecture (ISA). Aligned assembled code is generated, by a computer processor. The aligned assembled code complies with an aligned ISA and includes aligned processor code for a processor and aligned accelerator code for an accelerator. A first linking pass is performed on the aligned assembled code, including relocating a first relocation target in the aligned accelerator code that refers to a first object outside the aligned accelerator code. Unaligned assembled code is generated in accordance with the unaligned ISA and includes unaligned accelerator code for the accelerator and unaligned processor code for the processor. A second linking pass is performed on the unaligned assembled code, including relocating a second relocation target outside the unaligned accelerator code that refers to an object in the unaligned accelerator code.
Computer algorithm for coding gain
NASA Technical Reports Server (NTRS)
Dodd, E. E.
1974-01-01
Development of a computer algorithm for coding gain for use in an automated communications link design system. Using an empirical formula which defines coding gain as used in space communications engineering, an algorithm is constructed on the basis of available performance data for nonsystematic convolutional encoding with soft-decision (eight-level) Viterbi decoding.
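The empirical definition being automated (coding gain as the reduction in required Eb/N0 at a fixed bit-error rate) can be sketched as follows. The performance points are illustrative placeholders, not the Viterbi-decoder data used in the study, and the interpolation is deliberately naive (linear in BER rather than in log-BER).

```python
# Coding gain at a target BER: the dB difference between the Eb/N0 the
# uncoded system needs and the Eb/N0 the coded system needs.
def required_ebno(curve, target_ber):
    """Linearly interpolate the required Eb/N0 (dB) at target_ber from
    (ebno_db, ber) points sorted by increasing Eb/N0 (decreasing BER)."""
    for (x0, y0), (x1, y1) in zip(curve, curve[1:]):
        if y1 <= target_ber <= y0:
            frac = (y0 - target_ber) / (y0 - y1)
            return x0 + frac * (x1 - x0)
    raise ValueError("target BER outside the measured curve")

# Illustrative (Eb/N0 dB, BER) performance points, NOT measured data.
uncoded = [(8.0, 1e-3), (10.0, 1e-5)]
coded = [(3.0, 1e-3), (5.0, 1e-5)]
gain_db = required_ebno(uncoded, 1e-5) - required_ebno(coded, 1e-5)
```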
On the Use of Statistics in Design and the Implications for Deterministic Computer Experiments
NASA Technical Reports Server (NTRS)
Simpson, Timothy W.; Peplinski, Jesse; Koch, Patrick N.; Allen, Janet K.
1997-01-01
Perhaps the most prevalent use of statistics in engineering design is through Taguchi's parameter and robust design -- using orthogonal arrays to compute signal-to-noise ratios in a process of design improvement. In our view, however, there is an equally exciting use of statistics in design that could become just as prevalent: it is the concept of metamodeling whereby statistical models are built to approximate detailed computer analysis codes. Although computers continue to get faster, analysis codes always seem to keep pace so that their computational time remains non-trivial. Through metamodeling, approximations of these codes are built that are orders of magnitude cheaper to run. These metamodels can then be linked to optimization routines for fast analysis, or they can serve as a bridge for integrating analysis codes across different domains. In this paper we first review metamodeling techniques that encompass design of experiments, response surface methodology, Taguchi methods, neural networks, inductive learning, and kriging. We discuss their existing applications in engineering design and then address the dangers of applying traditional statistical techniques to approximate deterministic computer analysis codes. We conclude with recommendations for the appropriate use of metamodeling techniques in given situations and how common pitfalls can be avoided.
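The metamodeling idea described above can be sketched in miniature: fit a cheap quadratic response surface y = a + b·x + c·x² to a few runs of an "expensive" analysis code, then query the surrogate instead. The expensive_analysis function here is a stand-in for illustration, not any particular engineering code.

```python
# Response-surface metamodel: least-squares quadratic fit via the
# 3x3 normal equations, solved with Gaussian elimination.
def expensive_analysis(x):
    return 2.0 + 3.0 * x + 0.5 * x * x   # pretend each call takes minutes

def fit_quadratic(xs, ys):
    """Return [a, b, c] minimizing sum (a + b*x + c*x^2 - y)^2."""
    ata = [[0.0] * 3 for _ in range(3)]   # A^T A
    aty = [0.0] * 3                       # A^T y
    for x, y in zip(xs, ys):
        row = [1.0, x, x * x]
        for i in range(3):
            aty[i] += row[i] * y
            for j in range(3):
                ata[i][j] += row[i] * row[j]
    # Gaussian elimination with partial pivoting.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        aty[col], aty[piv] = aty[piv], aty[col]
        for r in range(col + 1, 3):
            f = ata[r][col] / ata[col][col]
            for c in range(3):
                ata[r][c] -= f * ata[col][c]
            aty[r] -= f * aty[col]
    coeffs = [0.0] * 3                    # back substitution
    for i in (2, 1, 0):
        coeffs[i] = (aty[i] - sum(ata[i][j] * coeffs[j]
                                  for j in range(i + 1, 3))) / ata[i][i]
    return coeffs

samples = [0.0, 1.0, 2.0, 3.0, 4.0]
a, b, c = fit_quadratic(samples, [expensive_analysis(x) for x in samples])

def surrogate(x):
    return a + b * x + c * x * x          # orders of magnitude cheaper to query
```

This is the response-surface branch of the techniques the paper reviews; kriging and neural networks replace the quadratic with more flexible model forms but serve the same role.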
Design and optimization of a portable LQCD Monte Carlo code using OpenACC
NASA Astrophysics Data System (ADS)
Bonati, Claudio; Coscetti, Simone; D'Elia, Massimo; Mesiti, Michele; Negro, Francesco; Calore, Enrico; Schifano, Sebastiano Fabio; Silvi, Giorgio; Tripiccione, Raffaele
The present panorama of HPC architectures is extremely heterogeneous, ranging from traditional multi-core CPU processors, supporting a wide class of applications but delivering moderate computing performance, to many-core Graphics Processing Units (GPUs), exploiting aggressive data-parallelism and delivering higher performance for streaming computing applications. In this scenario, code portability (and performance portability) becomes necessary for easy maintainability of applications; this is very relevant in scientific computing, where code changes are very frequent, making it tedious and error-prone to keep different code versions aligned. In this work, we present the design and optimization of a state-of-the-art, production-level LQCD Monte Carlo application using the directive-based OpenACC programming model. OpenACC abstracts parallel programming to a descriptive level, relieving programmers from specifying how codes should be mapped onto the target architecture. We describe the implementation of a code fully written in OpenACC and show that we are able to target several different architectures, including state-of-the-art traditional CPUs and GPUs, with the same code. We also measure performance, evaluating the computing efficiency of our OpenACC code on several architectures, comparing with GPU-specific implementations and showing that a good level of performance portability can be reached.
Development of a thermal and structural analysis procedure for cooled radial turbines
NASA Technical Reports Server (NTRS)
Kumar, Ganesh N.; Deanna, Russell G.
1988-01-01
A procedure for computing the rotor temperature and stress distributions in a cooled radial turbine is considered. Existing codes for modeling the external mainstream flow and the internal cooling flow are used to compute boundary conditions for the heat transfer and stress analyses. An inviscid, quasi three-dimensional code computes the external free stream velocity. The external velocity is then used in a boundary layer analysis to compute the external heat transfer coefficients. Coolant temperatures are computed by a viscous one-dimensional internal flow code for the momentum and energy equation. These boundary conditions are input to a three-dimensional heat conduction code for calculation of rotor temperatures. The rotor stress distribution may be determined for the given thermal, pressure and centrifugal loading. The procedure is applied to a cooled radial turbine which will be tested at the NASA Lewis Research Center. Representative results from this case are included.
COMPUTATION OF GLOBAL PHOTOCHEMISTRY WITH SMVGEAR II (R823186)
A computer model was developed to simulate global gas-phase photochemistry. The model solves chemical equations with SMVGEAR II, a sparse-matrix, vectorized Gear-type code. To obtain SMVGEAR II, the original SMVGEAR code was modified to allow computation of different sets of chem...
NASA Technical Reports Server (NTRS)
Weed, Richard Allen; Sankar, L. N.
1994-01-01
An increasing amount of research activity in computational fluid dynamics has been devoted to the development of efficient algorithms for parallel computing systems. The increasing performance-to-price ratio of engineering workstations has led to research on procedures for implementing a parallel computing system composed of distributed workstations. This thesis proposal outlines an ongoing research program to develop efficient strategies for performing three-dimensional flow analysis on distributed computing systems. The PVM parallel programming interface was used to modify an existing three-dimensional flow solver, the TEAM code developed by Lockheed for the Air Force, to function as a parallel flow solver on clusters of workstations. Steady flow solutions were generated for three different wing and body geometries to validate the code and evaluate code performance. The proposed research will extend the parallel code development to determine the most efficient strategies for unsteady flow simulations.
NASA Technical Reports Server (NTRS)
Fishbach, L. H.
1979-01-01
The computational techniques utilized to determine the optimum propulsion systems for future aircraft applications and to identify system tradeoffs and technology requirements are described. The characteristics and use of the following computer codes are discussed: (1) NNEP - a very general cycle analysis code that can assemble an arbitrary matrix of fans, turbines, ducts, shafts, etc., into a complete gas turbine engine and compute on- and off-design thermodynamic performance; (2) WATE - a preliminary design procedure for calculating engine weight using the component characteristics determined by NNEP; (3) POD DRG - a table look-up program to calculate wave and friction drag of nacelles; (4) LIFCYC - a computer code developed to calculate life cycle costs of engines based on the output from WATE; and (5) INSTAL - a computer code developed to calculate installation effects, inlet performance, and inlet weight. Examples are given to illustrate how these computer techniques can be applied to analyze and optimize propulsion system fuel consumption, weight, and cost for representative types of aircraft and missions.
A Combinatorial Geometry Computer Description of the M9 ACE (Armored Combat Earthmover) Vehicle
1984-12-01
program requires as input the M9 target descriptions as processed by the Geometric Information for Targets (GIFT) computer code. The first step is...model of the target. This COM-GEOM target description is used as input to the Geometric Information For Targets (GIFT) computer code. Among other...things, the GIFT code traces shotlines through a COM-GEOM description from any specified aspect, listing pertinent information about each component hit
Characterizing the Properties of a Woven SiC/SiC Composite Using W-CEMCAN Computer Code
NASA Technical Reports Server (NTRS)
Murthy, Pappu L. N.; Mital, Subodh K.; DiCarlo, James A.
1999-01-01
A micromechanics based computer code to predict the thermal and mechanical properties of woven ceramic matrix composites (CMC) is developed. This computer code, W-CEMCAN (Woven CEramic Matrix Composites ANalyzer), predicts the properties of two-dimensional woven CMC at any temperature and takes into account various constituent geometries and volume fractions. This computer code is used to predict the thermal and mechanical properties of an advanced CMC composed of 0/90 five-harness (5 HS) Sylramic fiber which had been chemically vapor infiltrated (CVI) with boron nitride (BN) and SiC interphase coatings and melt-infiltrated (MI) with SiC. The predictions, based on the bulk constituent properties from the literature, are compared with measured experimental data. Based on the comparison, improved or calibrated properties for the constituent materials are then developed for use by material developers/designers. The computer code is then used to predict the properties of a composite with the same constituents but with different fiber volume fractions. The predictions are compared with measured data and good agreement is achieved.
Fault tolerant computing: A preamble for assuring viability of large computer systems
NASA Technical Reports Server (NTRS)
Lim, R. S.
1977-01-01
The need for fault-tolerant computing is addressed from the viewpoints of (1) why it is needed, (2) how to apply it in the current state of technology, and (3) what it means in the context of the Phoenix computer system and other related systems. To this end, the value of concurrent error detection and correction is described. User protection, program retry, and repair are among the factors considered. The technology of algebraic codes to protect memory systems and arithmetic codes to protect arithmetic operations is discussed.
The Advanced Software Development and Commercialization Project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gallopoulos, E.; Canfield, T.R.; Minkoff, M.
1990-09-01
This is the first of a series of reports pertaining to progress in the Advanced Software Development and Commercialization Project, a joint collaborative effort between the Center for Supercomputing Research and Development of the University of Illinois and the Computing and Telecommunications Division of Argonne National Laboratory. The purpose of this work is to apply techniques of parallel computing that were pioneered by University of Illinois researchers to mature computational fluid dynamics (CFD) and structural dynamics (SD) computer codes developed at Argonne. The collaboration in this project will bring this unique combination of expertise to bear, for the first time, on industrially important problems. By so doing, it will expose the strengths and weaknesses of existing techniques for parallelizing programs and will identify those problems that need to be solved in order to enable widespread production use of parallel computers. Secondly, the increased efficiency of the CFD and SD codes themselves will enable the simulation of larger, more accurate engineering models that involve fluid and structural dynamics. In order to realize the above two goals, we are considering two production codes that have been developed at ANL and are widely used by both industry and universities. These are COMMIX and WHAMS-3D. The first is a computational fluid dynamics code that is used for nuclear reactor design and safety and as a design tool for the casting industry. The second is a three-dimensional structural dynamics code used in nuclear reactor safety as well as crashworthiness studies. These codes are currently available for sequential and vector computers only. Our main goal is to port and optimize these two codes on shared memory multiprocessors. In so doing, we shall establish a process that can be followed in optimizing other sequential or vector engineering codes for parallel processors.
Source Code Plagiarism--A Student Perspective
ERIC Educational Resources Information Center
Joy, M.; Cosma, G.; Yau, J. Y.-K.; Sinclair, J.
2011-01-01
This paper considers the problem of source code plagiarism by students within the computing disciplines and reports the results of a survey of students in Computing departments in 18 institutions in the U.K. This survey was designed to investigate how well students understand the concept of source code plagiarism and to discover what, if any,…
NASA Technical Reports Server (NTRS)
Filman, Robert E.
2004-01-01
This viewgraph presentation provides samples of computer code which have characteristics of poetic verse, and addresses the theoretical underpinnings of artistic coding, as well as how computer language influences software style, and the possible style of future coding.
Solution of the lossy nonlinear Tricomi equation with application to sonic boom focusing
NASA Astrophysics Data System (ADS)
Salamone, Joseph A., III
Sonic boom focusing theory has been augmented with new terms that account for mean flow effects in the direction of propagation and for atmospheric absorption/dispersion due to molecular relaxation of oxygen and nitrogen. The newly derived model equation was numerically implemented in a computer code. The computer code was numerically validated using a spectral solution for nonlinear propagation of a sinusoid through a lossy homogeneous medium. An additional numerical check was performed to verify the linear diffraction component of the code calculations. The computer code was experimentally validated using measured sonic boom focusing data from the NASA-sponsored Superboom Caustic Analysis and Measurement Program (SCAMP) flight test. The computer code was in good agreement with both the numerical and experimental validation. The newly developed code was applied to examine the focusing of a NASA low-boom demonstration vehicle concept. The resulting pressure field was calculated for several supersonic climb profiles. The shaping efforts designed into the signatures were still somewhat evident despite the effects of sonic boom focusing.
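The kind of spectral validation described above can be illustrated on a much simpler model problem. The sketch below integrates the viscous Burgers equation pseudo-spectrally on a periodic domain as a stand-in for nonlinear propagation through a lossy medium; the grid size, viscosity, and time step are arbitrary choices, and the actual lossy nonlinear Tricomi equation adds diffraction and molecular-relaxation terms not modeled here:

```python
# Pseudo-spectral solution of the viscous Burgers equation
#   u_t + u u_x = nu u_xx   on a 2*pi-periodic domain,
# a simplified stand-in for nonlinear propagation with loss.
import numpy as np

def burgers_spectral(n=256, nu=0.05, dt=1e-3, steps=2000):
    x = np.linspace(0, 2 * np.pi, n, endpoint=False)
    u = np.sin(x)                              # initial sinusoid
    k = 1j * np.fft.fftfreq(n, d=1.0 / n)      # spectral derivative operator
    for _ in range(steps):
        ux = np.real(np.fft.ifft(k * np.fft.fft(u)))       # u_x
        uxx = np.real(np.fft.ifft(k**2 * np.fft.fft(u)))   # u_xx
        u = u + dt * (-u * ux + nu * uxx)      # forward Euler step
    return x, u
```

Nonlinear steepening generates harmonics while viscosity damps them, so the wave amplitude and energy both decay from their initial values.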
NASA Astrophysics Data System (ADS)
Gel, Aytekin; Hu, Jonathan; Ould-Ahmed-Vall, ElMoustapha; Kalinkin, Alexander A.
2017-02-01
Legacy codes remain a crucial element of today's simulation-based engineering ecosystem due to the extensive validation process and investment in such software. The rapid evolution of high-performance computing architectures necessitates the modernization of these codes. One approach to modernization is a complete overhaul of the code. However, this could require extensive investments, such as rewriting in modern languages, new data constructs, etc., which would necessitate systematic verification and validation to re-establish the credibility of the computational models. The current study advocates a more incremental approach and is a culmination of several modernization efforts of the legacy code MFIX, an open-source computational fluid dynamics code that has evolved over several decades, is widely used in multiphase flows, and is still being developed by the National Energy Technology Laboratory. Two different modernization approaches, 'bottom-up' and 'top-down', are illustrated. Preliminary results show up to 8.5x improvement at the selected kernel level with the first approach, and up to 50% improvement in total simulated time with the second, for the demonstration cases and target HPC systems employed.
Visual Computing Environment Workshop
NASA Technical Reports Server (NTRS)
Lawrence, Charles (Compiler)
1998-01-01
The Visual Computing Environment (VCE) is a framework for intercomponent and multidisciplinary computational simulations. Many current engineering analysis codes simulate various aspects of aircraft engine operation. For example, existing computational fluid dynamics (CFD) codes can model the airflow through individual engine components such as the inlet, compressor, combustor, turbine, or nozzle. Currently, these codes are run in isolation, making intercomponent and complete system simulations very difficult to perform. In addition, management and utilization of these engineering codes for coupled component simulations is a complex, laborious task, requiring substantial experience and effort. To facilitate multicomponent aircraft engine analysis, the CFD Research Corporation (CFDRC) is developing the VCE system. This system, which is part of NASA's Numerical Propulsion Simulation System (NPSS) program, can couple various engineering disciplines, such as CFD, structural analysis, and thermal analysis.
Force user's manual: A portable, parallel FORTRAN
NASA Technical Reports Server (NTRS)
Jordan, Harry F.; Benten, Muhammad S.; Arenstorf, Norbert S.; Ramanan, Aruna V.
1990-01-01
The use of Force, a parallel, portable FORTRAN on shared memory parallel computers is described. Force simplifies writing code for parallel computers and, once the parallel code is written, it is easily ported to computers on which Force is installed. Although Force is nearly the same for all computers, specific details are included for the Cray-2, Cray-YMP, Convex 220, Flex/32, Encore, Sequent, and Alliant computers on which it is installed.
Monte Carlo simulation of Ising models by multispin coding on a vector computer
NASA Astrophysics Data System (ADS)
Wansleben, Stephan; Zabolitzky, John G.; Kalle, Claus
1984-11-01
Rebbi's efficient multispin coding algorithm for Ising models is combined with the use of the vector computer CDC Cyber 205. A speed of 21.2 million updates per second is reached. This is comparable to that obtained by special-purpose computers.
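The essence of multispin coding is to pack many Ising systems (or sites) into the bits of a machine word and update them with bitwise logic instead of per-spin branches. The sketch below, written with NumPy uint64 words rather than Cyber 205 vector hardware, packs 64 independent 2D replicas per word and counts anti-aligned neighbours with bitwise full adders; sharing one random number per site across the 64 replicas is a common simplification here, not a detail taken from Rebbi's original scheme:

```python
# Multispin coding sketch: 64 independent 2D Ising replicas packed
# bitwise into one uint64 word per lattice site.
import numpy as np

FULL = np.uint64(0xFFFFFFFFFFFFFFFF)

def count_bits4(x1, x2, x3, x4):
    """Per-bit-slice count of set bits among four words, as planes (b0, b1, b2)."""
    s1, c1 = x1 ^ x2, x1 & x2                 # two half adders
    s2, c2 = x3 ^ x4, x3 & x4
    b0 = s1 ^ s2                              # ones place
    c3 = s1 & s2
    b1 = c1 ^ c2 ^ c3                         # twos place
    b2 = (c1 & c2) | ((c1 ^ c2) & c3)         # fours place
    return b0, b1, b2

def sweep(spins, beta, rng, parity):
    """One checkerboard Metropolis half-sweep over all 64 packed replicas."""
    L = spins.shape[0]
    ii, jj = np.indices((L, L))
    site = np.where((ii + jj) % 2 == parity, FULL, np.uint64(0))
    nbrs = [np.roll(spins, s, axis=a) for s, a in ((1, 0), (-1, 0), (1, 1), (-1, 1))]
    b0, b1, b2 = count_bits4(*(spins ^ nb for nb in nbrs))
    ge2 = b1 | b2                             # >= 2 anti-aligned: dE <= 0, accept
    eq1 = b0 & ~b1 & ~b2                      # exactly 1 anti-aligned: dE = +4J
    eq0 = ~(b0 | b1 | b2)                     # none anti-aligned: dE = +8J
    r = rng.random((L, L))                    # one shared number per site
    m4 = np.where(r < np.exp(-4.0 * beta), FULL, np.uint64(0))
    m8 = np.where(r < np.exp(-8.0 * beta), FULL, np.uint64(0))
    spins ^= (ge2 | (eq1 & m4) | (eq0 & m8)) & site   # flip accepted spins
    return spins
```

Alternating `parity` 0 and 1 gives a full lattice sweep; every bitwise operation updates all 64 replicas at once, which is exactly the source of the speedup on vector hardware.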
NASA Technical Reports Server (NTRS)
Chan, J. S.; Freeman, J. A.
1984-01-01
The viscous, axisymmetric flow in the thrust chamber of the space shuttle main engine (SSME) was computed on the CRAY 205 computer using the general interpolants method (GIM) code. Results show that the Navier-Stokes codes can be used for these flows to study trends and viscous effects as well as determine flow patterns; but further research and development is needed before they can be used as production tools for nozzle performance calculations. The GIM formulation, numerical scheme, and computer code are described. The actual SSME nozzle computation showing grid points, flow contours, and flow parameter plots is discussed. The computer system and run times/costs are detailed.
Finite difference time domain electromagnetic scattering from frequency-dependent lossy materials
NASA Technical Reports Server (NTRS)
Luebbers, Raymond J.; Beggs, John H.
1991-01-01
Four different FDTD computer codes and companion Radar Cross Section (RCS) conversion codes on magnetic media are submitted. A single three dimensional dispersive FDTD code for both dispersive dielectric and magnetic materials was developed, along with a user's manual. The extension of FDTD to more complicated materials was made. The code is efficient and is capable of modeling interesting radar targets using a modest computer workstation platform. RCS results for two different plate geometries are reported. The FDTD method was also extended to computing far zone time domain results in two dimensions. Also the capability to model nonlinear materials was incorporated into FDTD and validated.
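As a minimal illustration of the FDTD method underlying these codes, the sketch below advances a 1D vacuum Yee grid with a soft Gaussian source. The grid size, Courant number, and source parameters are arbitrary assumptions; the dispersive-material extension described in the report would add a recursive convolution term to the E-field update, which is omitted here:

```python
# 1D vacuum FDTD (Yee scheme) sketch: E and H are staggered in space
# and time and leapfrogged; normalized units, Courant number 0.5.
import numpy as np

def fdtd_1d(nz=400, steps=300, src=100):
    ez = np.zeros(nz)          # electric field at integer nodes
    hy = np.zeros(nz - 1)      # magnetic field at half nodes
    for n in range(steps):
        hy += np.diff(ez) * 0.5               # H update from curl of E
        ez[1:-1] += np.diff(hy) * 0.5         # E update from curl of H
        ez[src] += np.exp(-((n - 40) / 12.0) ** 2)   # soft Gaussian source
    return ez
```

The endpoints `ez[0]` and `ez[-1]` stay zero, i.e. perfect-conductor walls; a production code would use absorbing boundaries instead.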
Multitasking the code ARC3D. [for computational fluid dynamics
NASA Technical Reports Server (NTRS)
Barton, John T.; Hsiung, Christopher C.
1986-01-01
The CRAY multitasking system was developed in order to utilize all four processors and sharply reduce the wall clock run time. This paper describes the techniques used to modify the computational fluid dynamics code ARC3D for this run and analyzes the achieved speedup. The ARC3D code solves either the Euler or thin-layer N-S equations using an implicit approximate factorization scheme. Results indicate that multitask processing can be used to achieve wall clock speedup factors of over three times, depending on the nature of the program code being used. Multitasking appears to be particularly advantageous for large-memory problems running on multiple CPU computers.
Addressing the challenges of standalone multi-core simulations in molecular dynamics
NASA Astrophysics Data System (ADS)
Ocaya, R. O.; Terblans, J. J.
2017-07-01
Computational modelling in material science involves mathematical abstractions of force fields between particles with the aim to postulate, develop and understand materials by simulation. The aggregated pairwise interactions of the material's particles lead to a deduction of its macroscopic behaviours. For practically meaningful macroscopic scales, a large amount of data is generated, leading to vast execution times. Simulation times of hours, days or weeks for moderately sized problems are not uncommon. The reduction of simulation times, improved result accuracy and the associated software and hardware engineering challenges are the main motivations for much of the ongoing research in the computational sciences. This contribution is concerned mainly with simulations that can be done on a "standalone" computer using Message Passing Interface (MPI) parallel code running on hardware platforms with wide specifications, such as single- or multi-processor, multi-core machines with minimal reconfiguration for upward scaling of computational power. The widely available, documented and standardized MPI library provides this functionality through the MPI_Comm_size(), MPI_Comm_rank() and MPI_Reduce() functions. A survey of the literature shows that relatively little has been written on the efficient extraction of the inherent computational power in a cluster. In this work, we discuss the main avenues available to tap into this extra power without compromising computational accuracy. We also present methods to overcome the high inertia encountered in single-node-based computational molecular dynamics. We begin by surveying the current state of the art and discuss what it takes to achieve parallelism, efficiency and enhanced computational accuracy through program threads and message passing interfaces. Several code illustrations are given. The pros and cons of writing raw code as opposed to using heuristic, third-party code are also discussed.
The growing trend towards graphical processor units and virtual computing clouds for high-performance computing is also discussed. Finally, we present the comparative results of vacancy formation energy calculations using our own parallelized standalone code called Verlet-Stormer velocity (VSV) operating on 30,000 copper atoms. The code is based on the Sutton-Chen implementation of the Finnis-Sinclair pairwise embedded atom potential. A link to the code is also given.
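The decomposition-plus-reduction pattern that MPI_Comm_rank(), MPI_Comm_size() and MPI_Reduce() express can be sketched serially. The loop below plays each "rank" in turn and combines partial pairwise energies the way MPI_Reduce(SUM) would; the toy Lennard-Jones potential and block decomposition are illustrative assumptions, not the paper's VSV code, and real code would run the ranks concurrently via mpi4py or C MPI:

```python
# Serial sketch of MPI-style domain decomposition for pairwise energies.
import numpy as np

def pair_energy(pos, i_range):
    """Toy Lennard-Jones energy over pairs (i, j > i) with i in i_range."""
    e = 0.0
    for i in i_range:
        d = np.linalg.norm(pos[i + 1:] - pos[i], axis=1)
        e += np.sum(4.0 * (d**-12 - d**-6))
    return e

def decomposed_energy(pos, nranks=4):
    n = len(pos)
    partial = []
    for rank in range(nranks):          # each iteration plays one MPI rank
        lo = rank * n // nranks         # block bounds derived from rank/size,
        hi = (rank + 1) * n // nranks   # as one would from MPI_Comm_rank/size
        partial.append(pair_energy(pos, range(lo, hi)))
    return sum(partial)                 # the MPI_Reduce(SUM) step
```

Because pair (i, j) is only counted by the rank owning i, the decomposed sum equals the serial sum exactly, which is the invariant any real decomposition must preserve.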
Superimposed Code Theoretic Analysis of DNA Codes and DNA Computing
2008-01-01
complements of one another and the DNA duplex formed is a Watson-Crick (WC) duplex. However, there are many instances when the formation of non-WC...that the user's requirements for probe selection are met based on the Watson-Crick probe locality within a target. The second type, called...AFRL-RI-RS-TR-2007-288 Final Technical Report January 2008 SUPERIMPOSED CODE THEORETIC ANALYSIS OF DNA CODES AND DNA COMPUTING
Preliminary Results from the Application of Automated Adjoint Code Generation to CFL3D
NASA Technical Reports Server (NTRS)
Carle, Alan; Fagan, Mike; Green, Lawrence L.
1998-01-01
This report describes preliminary results obtained using an automated adjoint code generator for Fortran to augment a widely-used computational fluid dynamics flow solver to compute derivatives. These preliminary results with this augmented code suggest that, even in its infancy, the automated adjoint code generator can accurately and efficiently deliver derivatives for use in transonic Euler-based aerodynamic shape optimization problems with hundreds to thousands of independent design variables.
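The adjoint relationship such a generator exploits is that of reverse-mode automatic differentiation: record the forward computation, then propagate sensitivities backward, so the cost of a full gradient is roughly independent of the number of design variables. The sketch below is a toy operator-overloading tape, an illustration of the principle only; source-transformation tools for Fortran work quite differently:

```python
# Tiny reverse-mode (adjoint) AD sketch: a tape of (parent, local
# derivative) pairs is built forward, then adjoints flow backward.
import math

class Var:
    def __init__(self, value, parents=()):
        self.value, self.parents, self.adj = value, parents, 0.0
    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])
    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

def sin(v):
    return Var(math.sin(v.value), [(v, math.cos(v.value))])

def backward(out):
    # Correct when reused nodes are leaves, as below; production tools
    # propagate in reverse topological order instead.
    out.adj = 1.0
    stack = [out]
    while stack:
        node = stack.pop()
        for parent, local in node.parents:
            parent.adj += node.adj * local
            stack.append(parent)
```

For f(x, y) = x*y + sin(x), one backward pass yields both df/dx = y + cos(x) and df/dy = x simultaneously.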
NASA Technical Reports Server (NTRS)
Gliebe, P.; Mani, R.; Shin, H.; Mitchell, B.; Ashford, G.; Salamah, S.; Connell, S.; Huff, Dennis (Technical Monitor)
2000-01-01
This report describes work performed on Contract NAS3-27720AoI 13 as part of the NASA Advanced Subsonic Transport (AST) Noise Reduction Technology effort. Computer codes were developed to provide quantitative prediction, design, and analysis capability for several aircraft engine noise sources. The objective was to provide improved, physics-based tools for exploration of noise-reduction concepts and understanding of experimental results. Methods and codes focused on fan broadband and 'buzz saw' noise and on low-emissions combustor noise, and complement work done by other contractors under the NASA AST program to develop methods and codes for fan harmonic tone noise and jet noise. The methods and codes developed and reported herein employ a wide range of approaches, from the strictly empirical to the completely computational, with some being semiempirical, analytical, and/or analytical/computational. Emphasis was on capturing the essential physics while still considering method or code utility as a practical design and analysis tool for everyday engineering use. Codes and prediction models were developed for: (1) an improved empirical correlation model for fan rotor exit flow mean and turbulence properties, for use in predicting broadband noise generated by rotor exit flow turbulence interaction with downstream stator vanes; (2) fan broadband noise models for rotor and stator/turbulence interaction sources including 3D effects, noncompact-source effects, directivity modeling, and extensions to the rotor supersonic tip-speed regime; (3) fan multiple-pure-tone in-duct sound pressure prediction methodology based on computational fluid dynamics (CFD) analysis; and (4) low-emissions combustor prediction methodology and computer code based on CFD and actuator disk theory. In addition,
the relative importance of dipole and quadrupole source mechanisms was studied using direct CFD source computation for a simple cascade/gust interaction problem, and an empirical combustor-noise correlation model was developed from engine acoustic test results. This work provided several insights on potential approaches to reducing aircraft engine noise. Code development is described in this report, and those insights are discussed.
Turbine Internal and Film Cooling Modeling For 3D Navier-Stokes Codes
NASA Technical Reports Server (NTRS)
DeWitt, Kenneth; Garg, Vijay; Ameri, Ali
2005-01-01
The aim of this research project is to make use of NASA Glenn on-site computational facilities in order to develop, validate and apply aerodynamic, heat transfer, and turbine cooling models for use in advanced 3D Navier-Stokes Computational Fluid Dynamics (CFD) codes such as the Glenn-HT code. Specific areas of effort include: application of the Glenn-HT code to specific configurations made available under the Turbine Based Combined Cycle (TBCC) and Ultra Efficient Engine Technology (UEET) projects, and validation of the use of a multi-block code for the time-accurate computation of the detailed flow and heat transfer of cooled turbine airfoils. The goal of the current research is to improve the predictive ability of the Glenn-HT code. This will enable the design of more efficient turbine components for both aviation and power generation. The models will be tested against specific configurations provided by NASA Glenn.
Development of a 3-D upwind PNS code for chemically reacting hypersonic flowfields
NASA Technical Reports Server (NTRS)
Tannehill, J. C.; Wadawadigi, G.
1992-01-01
Two new parabolized Navier-Stokes (PNS) codes were developed to compute the three-dimensional, viscous, chemically reacting flow of air around hypersonic vehicles such as the National Aero-Space Plane (NASP). The first code (TONIC) solves the gas dynamic and species conservation equations in a fully coupled manner using an implicit, approximately-factored, central-difference algorithm. This code was upgraded to include shock fitting and the capability of computing the flow around complex body shapes. The revised TONIC code was validated by computing the chemically-reacting (M(sub infinity) = 25.3) flow around a 10 deg half-angle cone at various angles of attack and the Ames All-Body model at 0 deg angle of attack. The results of these calculations were in good agreement with the results from the UPS code. One of the major drawbacks of the TONIC code is that the central-differencing of fluxes across interior flowfield discontinuities tends to introduce errors into the solution in the form of local flow property oscillations. The second code (UPS), originally developed for a perfect gas, has been extended to permit either perfect gas, equilibrium air, or nonequilibrium air computations. The code solves the PNS equations using a finite-volume, upwind TVD method based on Roe's approximate Riemann solver that was modified to account for real gas effects. The dissipation term associated with this algorithm is sufficiently adaptive to flow conditions that, even when attempting to capture very strong shock waves, no additional smoothing is required. For nonequilibrium calculations, the code solves the fluid dynamic and species continuity equations in a loosely-coupled manner. This code was used to calculate the hypersonic, laminar flow of chemically reacting air over cones at various angles of attack. 
In addition, the flow around the McDonnell Douglas generic option blended-wing-body was computed and comparisons were made between the perfect gas, equilibrium air, and nonequilibrium air results.
Linear chirp phase perturbing approach for finding binary phased codes
NASA Astrophysics Data System (ADS)
Li, Bing C.
2017-05-01
Binary phased codes have many applications in communication and radar systems. These applications require binary phased codes to have low sidelobes in order to reduce interference and false detections. Barker codes satisfy these requirements and have the lowest maximum sidelobes. However, Barker codes have very limited code lengths (equal to or less than 13), while many applications, including low probability of intercept radar and spread spectrum communication, require much greater code lengths. The conventional techniques for finding binary phased codes in the literature include exhaustive search, neural networks, and evolutionary methods, and they all require very expensive computation for large code lengths. Therefore these techniques are limited to finding binary phased codes with small code lengths (less than 100). In this paper, by analyzing Barker code, linear chirp, and P3 phases, we propose a new approach to find binary codes. Experiments show that the proposed method is able to find long low-sidelobe binary phased codes (code length >500) with reasonable computational cost.
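The sidelobe property driving these searches is easy to check numerically: for the length-13 Barker code the aperiodic autocorrelation has a peak of 13 and every sidelobe has magnitude 1, the behaviour the search methods above try to approach at longer lengths:

```python
# Aperiodic autocorrelation peak and maximum sidelobe of a binary code.
import numpy as np

def autocorr_sidelobes(code):
    c = np.asarray(code, dtype=float)
    full = np.correlate(c, c, mode="full")       # all lags, zero lag centered
    peak = full[len(c) - 1]                      # zero-lag value = code length
    sidelobes = np.delete(full, len(c) - 1)      # everything except the peak
    return peak, np.max(np.abs(sidelobes))

barker13 = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]
```

The peak-to-sidelobe ratio of 13:1 is what makes Barker-13 attractive for pulse compression, and what longer searched codes trade away gradually.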
Development of Reduced-Order Models for Aeroelastic and Flutter Prediction Using the CFL3Dv6.0 Code
NASA Technical Reports Server (NTRS)
Silva, Walter A.; Bartels, Robert E.
2002-01-01
A reduced-order model (ROM) is developed for aeroelastic analysis using the CFL3D version 6.0 computational fluid dynamics (CFD) code, recently developed at the NASA Langley Research Center. This latest version of the flow solver includes a deforming mesh capability, a modal structural definition for nonlinear aeroelastic analyses, and a parallelization capability that provides a significant increase in computational efficiency. Flutter results for the AGARD 445.6 Wing computed using CFL3D v6.0 are presented, including discussion of associated computational costs. Modal impulse responses of the unsteady aerodynamic system are then computed using the CFL3Dv6 code and transformed into state-space form. Important numerical issues associated with the computation of the impulse responses are presented. The unsteady aerodynamic state-space ROM is then combined with a state-space model of the structure to create an aeroelastic simulation using the MATLAB/SIMULINK environment. The MATLAB/SIMULINK ROM is used to rapidly compute aeroelastic transients including flutter. The ROM shows excellent agreement with the aeroelastic analyses computed using the CFL3Dv6.0 code directly.
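The step of turning impulse responses into a state-space model can be sketched with the Eigensystem Realization Algorithm (ERA), a standard identification method for exactly this task. The system sizes and matrices below are illustrative assumptions, not the CFL3Dv6 aeroelastic workflow itself:

```python
# ERA sketch: discrete state-space (A, B, C) from SISO Markov
# parameters h[k] = C A^(k-1) B, k >= 1 (h[0] is the feedthrough).
import numpy as np

def era(h, order):
    m = (len(h) - 1) // 2
    H0 = np.array([[h[i + j + 1] for j in range(m)] for i in range(m)])
    H1 = np.array([[h[i + j + 2] for j in range(m)] for i in range(m)])  # shifted Hankel
    U, s, Vt = np.linalg.svd(H0)
    sq = np.sqrt(s[:order])
    A = (U[:, :order].T @ H1 @ Vt[:order].T) / np.outer(sq, sq)
    B = (sq[:, None] * Vt[:order])[:, :1]    # first column of controllability factor
    C = (U[:, :order] * sq)[:1, :]           # first row of observability factor
    return A, B, C
```

For noiseless data and a truncation order equal to the true system order, the realized model reproduces the impulse response exactly.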
NASA Technical Reports Server (NTRS)
Baumeister, Joseph F.
1994-01-01
A non-flowing, electrically heated test rig was developed to verify computer codes that calculate radiant energy propagation from nozzle geometries that represent aircraft propulsion nozzle systems. Since there are a variety of analysis tools used to evaluate thermal radiation propagation from partially enclosed nozzle surfaces, an experimental benchmark test case was developed for code comparison. This paper briefly describes the nozzle test rig and the developed analytical nozzle geometry used to compare the experimental and predicted thermal radiation results. A major objective of this effort was to make available the experimental results and the analytical model in a format to facilitate conversion to existing computer code formats. For code validation purposes this nozzle geometry represents one validation case for one set of analysis conditions. Since each computer code has advantages and disadvantages based on scope, requirements, and desired accuracy, the usefulness of this single nozzle baseline validation case can be limited for some code comparisons.
Manual for obscuration code with space station applications
NASA Technical Reports Server (NTRS)
Marhefka, R. J.; Takacs, L.
1986-01-01
The Obscuration Code, referred to as SHADOW, is a user-oriented computer code to determine the cast shadow of an antenna in a complex environment onto the far zone sphere. The surrounding structure can be composed of multiple composite cone frustums and multiply sided flat plates. These structural pieces are ideal for modeling space station configurations. The means of describing the geometry input is compatible with the NEC-BASIC Scattering Code. In addition, an interactive mode of operation has been provided for DEC VAX computers. The first part of this document is a user's manual designed to give a description of the method used to obtain the shadow map, to provide an overall view of the operation of the computer code, to instruct a user in how to model structures, and to give examples of inputs and outputs. The second part is a code manual that details how to set up the interactive and non-interactive modes of the code and provides a listing and brief description of each of the subroutines.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ritchie, L.T.; Johnson, J.D.; Blond, R.M.
The CRAC2 computer code is a revision of the Calculation of Reactor Accident Consequences computer code, CRAC, developed for the Reactor Safety Study. The CRAC2 computer code incorporates significant modeling improvements in the areas of weather sequence sampling and emergency response, and refinements to the plume rise, atmospheric dispersion, and wet deposition models. New output capabilities have also been added. This guide is to facilitate the informed and intelligent use of CRAC2. It includes descriptions of the input data, the output results, the file structures, control information, and five sample problems.
Progressive fracture of fiber composites
NASA Technical Reports Server (NTRS)
Irvin, T. B.; Ginty, C. A.
1983-01-01
Refined models and procedures are described for determining progressive composite fracture in graphite/epoxy angleplied laminates. Lewis Research Center capabilities are utilized including the Real Time Ultrasonic C Scan (RUSCAN) experimental facility and the Composite Durability Structural Analysis (CODSTRAN) computer code. The CODSTRAN computer code is used to predict the fracture progression based on composite mechanics, finite element stress analysis, and fracture criteria modules. The RUSCAN facility, CODSTRAN computer code, and scanning electron microscope are used to determine durability and identify failure mechanisms in graphite/epoxy composites.
Modeling Improvements and Users Manual for Axial-flow Turbine Off-design Computer Code AXOD
NASA Technical Reports Server (NTRS)
Glassman, Arthur J.
1994-01-01
An axial-flow turbine off-design performance computer code used for preliminary studies of gas turbine systems was modified and calibrated based on the experimental performance of large aircraft-type turbines. The flow- and loss-model modifications and calibrations are presented in this report. Comparisons are made between computed performances and experimental data for seven turbines over wide ranges of speed and pressure ratio. This report also serves as the users manual for the revised code, which is named AXOD.
Design geometry and design/off-design performance computer codes for compressors and turbines
NASA Technical Reports Server (NTRS)
Glassman, Arthur J.
1995-01-01
This report summarizes some NASA Lewis (i.e., government owned) computer codes capable of being used for airbreathing propulsion system studies to determine the design geometry and to predict the design/off-design performance of compressors and turbines. These are not CFD codes; velocity-diagram energy and continuity computations are performed fore and aft of the blade rows using meanline, spanline, or streamline analyses. Losses are provided by empirical methods. Both axial-flow and radial-flow configurations are included.
PerSEUS: Ultra-Low-Power High Performance Computing for Plasma Simulations
NASA Astrophysics Data System (ADS)
Doxas, I.; Andreou, A.; Lyon, J.; Angelopoulos, V.; Lu, S.; Pritchett, P. L.
2017-12-01
Peta-op SupErcomputing Unconventional System (PerSEUS) aims to explore the use for High Performance Scientific Computing (HPC) of ultra-low-power mixed signal unconventional computational elements developed by Johns Hopkins University (JHU), and demonstrate that capability on both fluid and particle Plasma codes. We will describe the JHU Mixed-signal Unconventional Supercomputing Elements (MUSE), and report initial results for the Lyon-Fedder-Mobarry (LFM) global magnetospheric MHD code, and a UCLA general purpose relativistic Particle-In-Cell (PIC) code.
Multiple grid problems on concurrent-processing computers
NASA Technical Reports Server (NTRS)
Eberhardt, D. S.; Baganoff, D.
1986-01-01
Three computer codes were studied which make use of concurrent processing computer architectures in computational fluid dynamics (CFD). The three parallel codes were tested on a two processor multiple-instruction/multiple-data (MIMD) facility at NASA Ames Research Center, and are suggested for efficient parallel computations. The first code is a well-known program which makes use of the Beam and Warming, implicit, approximate factored algorithm. This study demonstrates the parallelism found in a well-known scheme and it achieved speedups exceeding 1.9 on the two processor MIMD test facility. The second code studied made use of an embedded grid scheme which is used to solve problems having complex geometries. The particular application for this study considered an airfoil/flap geometry in an incompressible flow. The scheme eliminates some of the inherent difficulties found in adapting approximate factorization techniques onto MIMD machines and allows the use of chaotic relaxation and asynchronous iteration techniques. The third code studied is an application of overset grids to a supersonic blunt body problem. The code addresses the difficulties encountered when using embedded grids on a compressible, and therefore nonlinear, problem. The complex numerical boundary system associated with overset grids is discussed and several boundary schemes are suggested. A boundary scheme based on the method of characteristics achieved the best results.
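The reported 1.9x speedup on two processors maps directly onto Amdahl's law; inverting the law shows the implied non-parallelizable fraction is only about 5%. A minimal sketch:

```python
# Amdahl's law and its inversion for the serial fraction f:
#   S(f, n) = 1 / (f + (1 - f) / n)
def amdahl_speedup(serial_frac, nproc):
    return 1.0 / (serial_frac + (1.0 - serial_frac) / nproc)

def serial_fraction(speedup, nproc):
    # solve S = 1 / (f + (1 - f)/n) for f
    return (nproc / speedup - 1.0) / (nproc - 1.0)
```

With `serial_fraction(1.9, 2)` this gives roughly 0.053, consistent with the near-ideal speedup quoted for the factored-scheme code.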
Binary weight distributions of some Reed-Solomon codes
NASA Technical Reports Server (NTRS)
Pollara, F.; Arnold, S.
1992-01-01
The binary weight distributions of the (7,5) and (15,9) Reed-Solomon (RS) codes and their duals are computed using the MacWilliams identities. Several mappings of symbols to bits are considered and those offering the largest binary minimum distance are found. These results are then used to compute bounds on the soft-decoding performance of these codes in the presence of additive Gaussian noise. These bounds are useful for finding large binary block codes with good performance and for verifying the performance obtained by specific soft-decoding algorithms presently under development.
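The MacWilliams identity used above relates the weight distribution of a code to that of its dual. A minimal binary illustration (using a tiny length-3 repetition code rather than the RS codes of the paper) via the Krawtchouk-polynomial form of the identity:

```python
from math import comb

def macwilliams_dual(A, n):
    """Weight distribution of the dual of a binary code with distribution A
    (A[i] = number of codewords of weight i), via the MacWilliams identity:
    B_j = (1/|C|) * sum_i A[i] * K_j(i), with binary Krawtchouk K_j(i)."""
    size = sum(A)  # |C|
    def K(j, i):
        return sum((-1)**s * comb(i, s) * comb(n - i, j - s) for s in range(j + 1))
    return [sum(A[i] * K(j, i) for i in range(n + 1)) // size for j in range(n + 1)]

# [3,1] repetition code {000, 111} has weight distribution [1, 0, 0, 1];
# its dual is the even-weight code {000, 011, 101, 110}: [1, 0, 3, 0].
dual = macwilliams_dual([1, 0, 0, 1], 3)
```

Applying the transform twice recovers the original distribution, a quick sanity check on any implementation.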
NASA Technical Reports Server (NTRS)
Mcgaw, Michael A.; Saltsman, James F.
1993-01-01
A recently developed high-temperature fatigue life prediction computer code is presented and an example of its usage given. The code discussed is based on the Total Strain version of Strainrange Partitioning (TS-SRP). Included in this code are procedures for characterizing the creep-fatigue durability behavior of an alloy according to TS-SRP guidelines and predicting cyclic life for complex cycle types for both isothermal and thermomechanical conditions. A reasonably extensive materials properties database is included with the code.
Turbomachinery Heat Transfer and Loss Modeling for 3D Navier-Stokes Codes
NASA Technical Reports Server (NTRS)
DeWitt, Kenneth; Ameri, Ali
2005-01-01
This report focuses on making use of NASA Glenn's on-site computational facilities to develop, validate, and apply models for use in advanced 3D Navier-Stokes Computational Fluid Dynamics (CFD) codes, enhancing the capability to compute heat transfer and losses in turbomachinery.
Real-time computer treatment of THz passive device images with the high image quality
NASA Astrophysics Data System (ADS)
Trofimov, Vyacheslav A.; Trofimov, Vladislav V.
2012-06-01
We demonstrate a real-time computer code that significantly improves the quality of images captured by passive THz imaging systems. The code is not restricted to a single passive THz device: it can be applied to any device of this kind, as well as to active THz imaging systems. We applied the code to images captured by four passive THz imaging devices manufactured by different companies; it should be stressed that images produced by different devices usually require different spatial filters. The current version of the code processes more than one image per second for a THz image with more than 5000 pixels and 24-bit number representation. Processing a single THz image produces about 20 output images simultaneously, corresponding to the various spatial filters. The code allows the number of pixels in the processed images to be increased without noticeable loss of image quality, and its performance can be increased many times by using parallel image-processing algorithms. We have developed original spatial filters that resolve objects smaller than 2 cm in imagery produced by passive THz devices viewing objects hidden under opaque clothes. For images with high noise, we have developed an approach that suppresses the noise during computer processing and yields a good-quality image. To illustrate the efficiency of the developed approach, we demonstrate the detection of a liquid explosive, an ordinary explosive, a knife, a pistol, a metal plate, a CD, ceramics, chocolate and other objects hidden under opaque clothes. The results demonstrate the high efficiency of our approach for the detection of hidden objects and represent a very promising solution to the security problem.
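The paper's own spatial filters are proprietary and not specified in the abstract. As a generic stand-in, a 3x3 median filter of the kind commonly used to suppress impulsive noise in such imagery, in NumPy, assuming a 2-D grayscale array:

```python
import numpy as np

def median_filter3(img):
    """Apply a 3x3 median filter to a 2-D grayscale image (edges replicated).
    A generic spatial filter, not one of the paper's original filters."""
    padded = np.pad(img, 1, mode="edge")
    # Stack the 9 shifted views of the image and take the pixel-wise median.
    windows = np.stack([padded[i:i + img.shape[0], j:j + img.shape[1]]
                        for i in range(3) for j in range(3)])
    return np.median(windows, axis=0)

# A uniform image with one hot pixel: the impulsive outlier is removed.
noisy = np.full((8, 8), 10.0)
noisy[4, 4] = 255.0
clean = median_filter3(noisy)
```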
Fingerprinting Communication and Computation on HPC Machines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peisert, Sean
2010-06-02
How do we identify what is actually running on high-performance computing systems? Names of binaries, dynamic libraries loaded, or other elements in a submission to a batch queue can give clues, but binary names can be changed, and libraries provide limited insight and resolution on the code being run. In this paper, we present a method for "fingerprinting" code running on HPC machines using elements of communication and computation. We then discuss how that fingerprint can be used to determine if the code is consistent with certain other types of codes, what a user usually runs, or what the user requested an allocation to do. In some cases, our techniques enable us to fingerprint HPC codes using runtime MPI data with a high degree of accuracy.
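The abstract does not specify the fingerprint features used. One plausible sketch in the spirit of the idea: summarize a run's MPI traffic as a normalized rank-to-rank communication matrix and compare fingerprints by cosine similarity (all names and the feature choice here are illustrative assumptions, not the paper's method):

```python
import numpy as np

def comm_fingerprint(events, nranks):
    """Build a normalized rank-to-rank communication matrix from
    (src, dst, bytes) message events -- one possible 'fingerprint'."""
    m = np.zeros((nranks, nranks))
    for src, dst, nbytes in events:
        m[src, dst] += nbytes
    total = m.sum()
    return m / total if total else m

def similarity(f1, f2):
    """Cosine similarity of two flattened fingerprints (1.0 = same pattern)."""
    a, b = f1.ravel(), f2.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# A ring-exchange pattern matches itself even at a different message size,
# because the normalized communication *pattern* is identical:
ring  = [(r, (r + 1) % 4, 1024) for r in range(4)]
ring2 = [(r, (r + 1) % 4, 4096) for r in range(4)]
s = similarity(comm_fingerprint(ring, 4), comm_fingerprint(ring2, 4))
```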
Practices in source code sharing in astrophysics
NASA Astrophysics Data System (ADS)
Shamir, Lior; Wallin, John F.; Allen, Alice; Berriman, Bruce; Teuben, Peter; Nemiroff, Robert J.; Mink, Jessica; Hanisch, Robert J.; DuPrie, Kimberly
2013-02-01
While software and algorithms have become increasingly important in astronomy, the majority of authors who publish computational astronomy research do not share the source code they develop, making it difficult to replicate and reuse the work. In this paper we discuss the importance of sharing scientific source code with the entire astrophysics community, and propose that journals require authors to make their code publicly available when a paper is published. That is, we suggest that a paper that involves a computer program not be accepted for publication unless the source code becomes publicly available. The adoption of such a policy by editors, editorial boards, and reviewers will improve the ability to replicate scientific results, and will also make computational astronomy methods more available to other researchers who wish to apply them to their data.
Development of V/STOL methodology based on a higher order panel method
NASA Technical Reports Server (NTRS)
Bhateley, I. C.; Howell, G. A.; Mann, H. W.
1983-01-01
The development of a computational technique to predict the complex flowfields of V/STOL aircraft was initiated, in which a number of modules and a potential flow aerodynamic code were combined in a comprehensive computer program. The modules were developed in a building-block approach to assist the user in preparing the geometric input and to compute parameters needed to simulate certain flow phenomena that cannot be handled directly within a potential flow code. The PAN AIR aerodynamic code, which is a higher-order panel method, forms the nucleus of this program. PAN AIR's extensive capability for generalized boundary conditions allows the modules to interact with the aerodynamic code through the input and output files, requiring no changes to the basic code and allowing easy replacement of updated modules.
Lattice surgery on the Raussendorf lattice
NASA Astrophysics Data System (ADS)
Herr, Daniel; Paler, Alexandru; Devitt, Simon J.; Nori, Franco
2018-07-01
Lattice surgery is a method to perform quantum computation fault-tolerantly by using operations on boundary qubits between different patches of the planar code. This technique allows for universal planar code computation without eliminating the intrinsic two-dimensional nearest-neighbor properties of the surface code that ease physical hardware implementations. Lattice surgery approaches to algorithmic compilation and optimization have been demonstrated to be more resource efficient for resource-intensive components of a fault-tolerant algorithm, and consequently may be preferable over braid-based logic. Lattice surgery can be extended to the Raussendorf lattice, providing a measurement-based approach to the surface code. In this paper we describe how lattice surgery can be performed on the Raussendorf lattice, and therefore give a viable alternative to computation using braiding in measurement-based implementations of topological codes.
40 CFR 1033.110 - Emission diagnostics-general requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... engine operation. (d) Record and store in computer memory any diagnostic trouble codes showing a... and understand the diagnostic trouble codes stored in the onboard computer with generic tools and...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hursin, M.; Koeberl, O.; Perret, G.
2012-07-01
High Conversion Light Water Reactors (HCLWR) allow better usage of fuel resources thanks to a higher breeding ratio than standard LWRs. Their use together with the current fleet of LWRs constitutes a fuel cycle thoroughly studied in Japan and the US today. However, one of the issues with HCLWRs is their void reactivity coefficient (VRC), which can be positive. Accurate predictions of the void reactivity coefficient under HCLWR conditions, and their comparison with representative experiments, are therefore required. In this paper an intercomparison of modern codes and cross-section libraries is performed for a former Benchmark on Void Reactivity Effect in PWRs conducted by the OECD/NEA. It gives an overview of the k-inf values and the associated VRCs obtained from infinite-lattice calculations with UO2 and highly enriched MOX fuel cells. The codes MCNPX2.5, TRIPOLI4.4 and CASMO-5 are used in conjunction with the libraries ENDF/B-VI.8, ENDF/B-VII.0, JEF-2.2 and JEFF-3.1. A non-negligible spread of results in voided conditions is found for the high-content MOX fuel. The spreads of eigenvalues for the moderated and voided UO2 fuel are about 200 pcm and 700 pcm, respectively. The standard deviation of the VRCs is about 0.7% for the UO2 fuel and about 13% for the MOX fuel. This work shows that an appropriate treatment of the unresolved resonance energy range is an important issue for the accurate determination of the void reactivity effect in HCLWRs. A comparison with experimental results is needed to resolve the presented discrepancies. (authors)
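The void reactivity effect compared in the benchmark can be expressed, in the standard way, as the reactivity change between moderated and voided states. A sketch using the usual reactivity definition (the k-inf values below are illustrative, not the benchmark's results):

```python
def reactivity_pcm(k):
    """Reactivity rho = (k - 1)/k, expressed in pcm (1 pcm = 1e-5)."""
    return (k - 1.0) / k * 1e5

def void_reactivity_pcm(k_moderated, k_voided):
    """Void reactivity effect: reactivity change on voiding, in pcm.
    A positive value is the safety concern discussed in the paper."""
    return reactivity_pcm(k_voided) - reactivity_pcm(k_moderated)

# Illustrative values only (not the benchmark's numbers):
vrc = void_reactivity_pcm(k_moderated=1.1400, k_voided=1.1600)
```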
NASA Astrophysics Data System (ADS)
Xu, X. George; Taranenko, Valery; Zhang, Juying; Shi, Chengyu
2007-12-01
Fetuses are extremely radiosensitive and the protection of pregnant females against ionizing radiation is of particular interest in many health and medical physics applications. Existing models of pregnant females relied on simplified anatomical shapes or partial-body images of low resolutions. This paper reviews two general types of solid geometry modeling: constructive solid geometry (CSG) and boundary representation (BREP). It presents in detail a project to adopt the BREP modeling approach to systematically design whole-body radiation dosimetry models: a pregnant female and her fetus at the ends of three gestational periods of 3, 6 and 9 months. Based on previously published CT images of a 7-month pregnant female, the VIP-Man model and mesh organ models, this new set of pregnant female models was constructed using 3D surface modeling technologies instead of voxels. The organ masses were adjusted to agree with the reference data provided by the International Commission on Radiological Protection (ICRP) and previously published papers within 0.5%. The models were then voxelized for the purpose of performing dose calculations in identically implemented EGS4 and MCNPX Monte Carlo codes. The agreements of the fetal doses obtained from these two codes for this set of models were found to be within 2% for the majority of the external photon irradiation geometries of AP, PA, LAT, ROT and ISO at various energies. It is concluded that the so-called RPI-P3, RPI-P6 and RPI-P9 models have been reliably defined for Monte Carlo calculations. The paper also discusses the needs for future research and the possibility for the BREP method to become a major tool in the anatomical modeling for radiation dosimetry.
Airfoil Vibration Dampers program
NASA Technical Reports Server (NTRS)
Cook, Robert M.
1991-01-01
The Airfoil Vibration Damper program has consisted of an analysis phase and a testing phase. During the analysis phase, a state-of-the-art computer code was developed, which can be used to guide designers in the placement and sizing of friction dampers. The use of this computer code was demonstrated by performing representative analyses on turbine blades from the High Pressure Oxidizer Turbopump (HPOTP) and High Pressure Fuel Turbopump (HPFTP) of the Space Shuttle Main Engine (SSME). The testing phase of the program consisted of performing friction damping tests on two different cantilever beams. Data from these tests provided an empirical check on the accuracy of the computer code developed in the analysis phase. Results of the analysis and testing showed that the computer code can accurately predict the performance of friction dampers. In addition, a valuable set of friction damping data was generated, which can be used to aid in the design of friction dampers, as well as provide benchmark test cases for future code developers.
Computer optimization of reactor-thermoelectric space power systems
NASA Technical Reports Server (NTRS)
Maag, W. L.; Finnegan, P. M.; Fishbach, L. H.
1973-01-01
A computer simulation and optimization code that has been developed for nuclear space power systems is described. The results of using this code to analyze two reactor-thermoelectric systems are presented.
A 3D-CFD code for accurate prediction of fluid flows and fluid forces in seals
NASA Technical Reports Server (NTRS)
Athavale, M. M.; Przekwas, A. J.; Hendricks, R. C.
1994-01-01
Current and future turbomachinery requires advanced seal configurations to control leakage, inhibit mixing of incompatible fluids and to control the rotodynamic response. In recognition of a deficiency in the existing predictive methodology for seals, a seven year effort was established in 1990 by NASA's Office of Aeronautics Exploration and Technology, under the Earth-to-Orbit Propulsion program, to develop validated Computational Fluid Dynamics (CFD) concepts, codes and analyses for seals. The effort will provide NASA and the U.S. Aerospace Industry with advanced CFD scientific codes and industrial codes for analyzing and designing turbomachinery seals. An advanced 3D CFD cylindrical seal code has been developed, incorporating state-of-the-art computational methodology for flow analysis in straight, tapered and stepped seals. Relevant computational features of the code include: stationary/rotating coordinates, cylindrical and general Body Fitted Coordinates (BFC) systems, high order differencing schemes, colocated variable arrangement, advanced turbulence models, incompressible/compressible flows, and moving grids. This paper presents the current status of code development, code demonstration for predicting rotordynamic coefficients, numerical parametric study of entrance loss coefficients for generic annular seals, and plans for code extensions to labyrinth, damping, and other seal configurations.
Ascent Aerodynamic Pressure Distributions on WB001
NASA Technical Reports Server (NTRS)
Vu, B.; Ruf, J.; Canabal, F.; Brunty, J.
1996-01-01
To support the reusable launch vehicle concept study, the aerodynamic data and surface pressures for WB001 were predicted using three computational fluid dynamic (CFD) codes at several flow conditions, with comparisons made code to code, code to the aerodynamic database, and against available experimental data. A set of particular solutions has been selected and recommended for use in preliminary conceptual designs. These CFD results have also been provided to the structures group for wing loading analysis.
NASA Technical Reports Server (NTRS)
Kumar, A.; Graves, R. A., Jr.; Weilmuenster, K. J.
1980-01-01
A vectorized code, EQUIL, was developed for calculating the equilibrium chemistry of a reacting gas mixture on the Control Data STAR-100 computer. The code provides species mole fractions, mass fractions, and thermodynamic and transport properties of the mixture for given temperature, pressure, and elemental mass fractions. The code is set up for the electron, H, He, C, O, N system of elements. In all, 24 chemical species are included.
Computer code for charge-exchange plasma propagation
NASA Technical Reports Server (NTRS)
Robinson, R. S.; Kaufman, H. R.
1981-01-01
The propagation of the charge-exchange plasma from an electrostatic ion thruster is crucial in determining the interaction of that plasma with the associated spacecraft. A model that describes this plasma and its propagation is described, together with a computer code based on this model. The structure and calling sequence of the code, named PLASIM, are described. An explanation of the program's input and output is included, together with samples of both. The code is written in ANSI Standard FORTRAN.
Self-Scheduling Parallel Methods for Multiple Serial Codes with Application to WOPWOP
NASA Technical Reports Server (NTRS)
Long, Lyle N.; Brentner, Kenneth S.
2000-01-01
This paper presents a scheme for efficiently running a large number of serial jobs on parallel computers. Two examples are given of computer programs that run relatively quickly, but often they must be run numerous times to obtain all the results needed. It is very common in science and engineering to have codes that are not massive computing challenges in themselves, but due to the number of instances that must be run, they do become large-scale computing problems. The two examples given here represent common problems in aerospace engineering: aerodynamic panel methods and aeroacoustic integral methods. The first example simply solves many systems of linear equations. This is representative of an aerodynamic panel code where someone would like to solve for numerous angles of attack. The complete code for this first example is included in the appendix so that it can be readily used by others as a template. The second example is an aeroacoustics code (WOPWOP) that solves the Ffowcs Williams Hawkings equation to predict the far-field sound due to rotating blades. In this example, one quite often needs to compute the sound at numerous observer locations, hence parallelization is utilized to automate the noise computation for a large number of observers.
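The self-scheduling idea in the first example, many independent linear solves farmed out to whichever worker is free next, can be sketched with a modern thread pool (the panel-code framing follows the abstract; the pool-based implementation and all names here are illustrative, not the paper's Fortran/MPI code):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def solve_case(args):
    """One independent serial job: solve A x = b for a single angle of attack."""
    a_matrix, rhs = args
    return np.linalg.solve(a_matrix, rhs)

def run_self_scheduled(cases, workers=4):
    """Self-scheduling: each worker grabs the next unfinished case, so fast
    and slow jobs balance automatically (result order is preserved)."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(solve_case, cases))

# Many right-hand sides for the same influence matrix, one per angle of attack:
rng = np.random.default_rng(0)
a = rng.random((20, 20)) + 20 * np.eye(20)   # diagonally dominant, well-conditioned
cases = [(a, rng.random(20)) for _ in range(16)]
solutions = run_self_scheduled(cases)
```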
Computer Code for Transportation Network Design and Analysis
DOT National Transportation Integrated Search
1977-01-01
This document describes the results of research into the application of the mathematical programming technique of decomposition to practical transportation network problems. A computer code called Catnap (for Control Analysis Transportation Network A...
Current and anticipated uses of the thermal hydraulics codes at the NRC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Caruso, R.
1997-07-01
The focus of thermal-hydraulic computer code usage in nuclear regulatory organizations has undergone a considerable shift since the codes were originally conceived. Less work is being done in the area of "Design Basis Accidents," and much more emphasis is being placed on analysis of operational events, probabilistic risk/safety assessment, and maintenance practices. All of these areas need support from thermal-hydraulic computer codes to model the behavior of plant fluid systems, and they all need the ability to perform large numbers of analyses quickly. It is therefore important for the T/H codes of the future to support these needs by providing robust, easy-to-use tools that produce easy-to-understand results for a wider community of nuclear professionals. These tools need to take advantage of the great advances that have occurred recently in computer software by providing users with graphical user interfaces for both input and output. In addition, reduced costs of computer memory and other hardware have removed the need for excessively complex data structures and numerical schemes, which make the codes more difficult and expensive to modify, maintain, and debug, and which increase problem run-times. Future versions of the T/H codes should also be structured in a modular fashion, to allow for the easy incorporation of new correlations, models, or features, and to simplify maintenance and testing. Finally, it is important that future T/H code developers work closely with the code user community, to ensure that the codes meet the needs of those users.
Analyzing Pulse-Code Modulation On A Small Computer
NASA Technical Reports Server (NTRS)
Massey, David E.
1988-01-01
System for analysis pulse-code modulation (PCM) comprises personal computer, computer program, and peripheral interface adapter on circuit board that plugs into expansion bus of computer. Functions essentially as "snapshot" PCM decommutator, which accepts and stores thousands of frames of PCM data, sifts through them repeatedly to process according to routines specified by operator. Enables faster testing and involves less equipment than older testing systems.
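The core of a "snapshot" decommutator of this kind is locating a frame sync pattern in the raw bit stream and slicing out fixed-length frames for later sifting. A toy sketch (bit strings stand in for raw PCM; the 8-bit sync word and frame layout are invented for illustration):

```python
def find_frames(bits, sync, frame_len):
    """Locate every occurrence of the sync pattern that starts a complete
    frame_len-bit frame, and return the frames (sync word included).
    'bits' and 'sync' are strings of '0'/'1' -- a stand-in for raw PCM."""
    frames = []
    i = bits.find(sync)
    while i != -1 and i + frame_len <= len(bits):
        frames.append(bits[i:i + frame_len])
        i = bits.find(sync, i + frame_len)  # resume search after this frame
    return frames

# Three 16-bit frames led by an 8-bit sync word (0xEB), preceded by some
# junk bits that the search must skip over:
SYNC = "11101011"
payloads = ["00000001", "00000010", "00000011"]
stream = "0101" + "".join(SYNC + p for p in payloads)
frames = find_frames(stream, SYNC, 16)
```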
A fast technique for computing syndromes of BCH and RS codes. [deep space network
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.; Miller, R. L.
1979-01-01
A combination of the Chinese Remainder Theorem and Winograd's algorithm is used to compute transforms of odd length over GF(2 to the m power). Such transforms are used to compute the syndromes needed for decoding BCH and RS codes. The present scheme requires substantially fewer multiplications and additions than the conventional method of computing the syndromes directly.
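The "conventional method" that the paper improves on evaluates each syndrome S_j = r(alpha^j) directly. A small sketch over GF(2^4) (the field, generator polynomial and message are a toy example, not the deep-space codes in question):

```python
# Arithmetic in GF(2^4) with primitive polynomial x^4 + x + 1 (alpha = 2).
EXP = [1] * 30
for k in range(1, 30):
    v = EXP[k - 1] << 1
    EXP[k] = (v ^ 0b10011) if v & 0b10000 else v
LOG = {EXP[k]: k for k in range(15)}

def gmul(x, y):
    """Multiply two GF(16) elements via log/antilog tables."""
    return 0 if x == 0 or y == 0 else EXP[(LOG[x] + LOG[y]) % 15]

def poly_eval(coeffs, x):
    """Evaluate a polynomial (highest-degree coefficient first), Horner style."""
    acc = 0
    for c in coeffs:
        acc = gmul(acc, x) ^ c
    return acc

def syndromes(received, num):
    """Direct (conventional) syndrome computation: S_j = r(alpha^j)."""
    return [poly_eval(received, EXP[j]) for j in range(1, num + 1)]

def poly_mul(p, q):
    """Multiply two polynomials over GF(16)."""
    out = [0] * (len(p) + len(q) - 1)
    for i, pa in enumerate(p):
        for j, qb in enumerate(q):
            out[i + j] ^= gmul(pa, qb)
    return out

# Generator with roots alpha, alpha^2: g(x) = x^2 + alpha^5 x + alpha^3.
g = [1, 6, 8]
msg = [7, 0, 5, 1]            # arbitrary message polynomial over GF(16)
codeword = poly_mul(msg, g)   # valid codeword: both syndromes vanish
S = syndromes(codeword, 2)
```

Any corruption of the codeword makes at least one syndrome nonzero, which is what the decoder detects.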
Computational techniques for solar wind flows past terrestrial planets: Theory and computer programs
NASA Technical Reports Server (NTRS)
Stahara, S. S.; Chaussee, D. S.; Trudinger, B. C.; Spreiter, J. R.
1977-01-01
The interaction of the solar wind with terrestrial planets can be predicted using a computer program based on a single fluid, steady, dissipationless, magnetohydrodynamic model to calculate the axisymmetric, supersonic, super-Alfvenic solar wind flow past both magnetic and nonmagnetic planets. The actual calculations are implemented by an assemblage of computer codes organized into one program. These include finite difference codes which determine the gas-dynamic solution, together with a variety of special purpose output codes for determining and automatically plotting both flow field and magnetic field results. Comparisons are made with previous results, and results are presented for a number of solar wind flows. The computational programs developed are documented and are presented in a general user's manual which is included.
Numerical computation of space shuttle orbiter flow field
NASA Technical Reports Server (NTRS)
Tannehill, John C.
1988-01-01
A new parabolized Navier-Stokes (PNS) code has been developed to compute the hypersonic, viscous chemically reacting flow fields around 3-D bodies. The flow medium is assumed to be a multicomponent mixture of thermally perfect but calorically imperfect gases. The new PNS code solves the gas dynamic and species conservation equations in a coupled manner using a noniterative, implicit, approximately factored, finite difference algorithm. The space-marching method is made well-posed by special treatment of the streamwise pressure gradient term. The code has been used to compute hypersonic laminar flow of chemically reacting air over cones at angle of attack. The results of the computations are compared with the results of reacting boundary-layer computations and show excellent agreement.
NASA Technical Reports Server (NTRS)
Warren, Gary
1988-01-01
The SOS code is used to compute the resonance modes (frequency-domain information) of sample devices and, separately, to compute the transient behavior of the same devices. A code, DOT, is created to compute appropriate dot products of the time-domain and frequency-domain results. The transient behavior of individual modes in the device is then plotted. Modes excited by a beam in a coupled-cavity traveling-wave tube (CCTWT) section are analyzed in separate simulations. Mode energy vs. time and mode phase vs. time are computed, and it is determined whether the transient waves are forward or backward waves in each case. Finally, the hot-test mode frequencies of the CCTWT section are computed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hall, D.G.; Watkins, J.C.
This report documents an evaluation of the TRAC-PF1/MOD1 reactor safety analysis computer code during computer simulations of feedwater line break transients. The experimental data base for the evaluation included the results of three bottom feedwater line break tests performed in the Semiscale Mod-2C test facility. The tests modeled 14.3% (S-FS-7), 50% (S-FS-11), and 100% (S-FS-6B) breaks. The test facility and the TRAC-PF1/MOD1 model used in the calculations are described. Evaluations of the accuracy of the calculations are presented in the form of comparisons of measured and calculated histories of selected parameters associated with the primary and secondary systems. In addition to evaluating the accuracy of the code calculations, the computational performance of the code during the simulations was assessed. A conclusion was reached that the code is capable of making feedwater line break transient calculations efficiently, but there is room for significant improvements in the simulations that were performed. Recommendations are made for follow-on investigations to determine how to improve future feedwater line break calculations and for code improvements to make the code easier to use.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kostin, Mikhail; Mokhov, Nikolai; Niita, Koji
A parallel computing framework has been developed to use with general-purpose radiation transport codes. The framework was implemented as a C++ module that uses MPI for message passing. It is intended to be used with older radiation transport codes implemented in Fortran77, Fortran 90 or C. The module is significantly independent of the radiation transport codes it can be used with, and is connected to the codes by means of a number of interface functions. The framework was developed and tested in conjunction with the MARS15 code. It is possible to use it with other codes such as PHITS, FLUKA and MCNP after certain adjustments. Besides the parallel computing functionality, the framework offers a checkpoint facility that allows restarting calculations with a saved checkpoint file. The checkpoint facility can be used in single process calculations as well as in the parallel regime. The framework corrects some of the known problems with the scheduling and load balancing found in the original implementations of the parallel computing functionality in MARS15 and PHITS. The framework can be used efficiently on homogeneous systems and networks of workstations, where interference from other users is possible.
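The essential property of a checkpoint facility like the one described, saving enough state that a restarted run reproduces the uninterrupted result exactly, can be sketched with a toy tally loop (the RNG-plus-tally state here is an illustrative simplification of what a transport code would checkpoint):

```python
import pickle
import random

def run_histories(n_total, checkpoint=None):
    """Toy 'transport' tally loop with a checkpoint facility: the RNG state
    and partial tally are saved so a resumed run reproduces results exactly."""
    if checkpoint is None:
        rng, done, tally = random.Random(42), 0, 0.0
    else:
        state = pickle.loads(checkpoint)
        rng = random.Random()
        rng.setstate(state["rng"])
        done, tally = state["done"], state["tally"]
    for _ in range(done, n_total):
        tally += rng.random()        # stand-in for scoring one particle history
        done += 1
    return tally, pickle.dumps({"rng": rng.getstate(), "done": done, "tally": tally})

# Run 1000 histories straight through, then 400 + checkpoint + 600 more:
full, _ = run_histories(1000)
_, ckpt = run_histories(400)
resumed, _ = run_histories(1000, checkpoint=ckpt)
```

Because the RNG state is restored bit-for-bit, the interrupted and uninterrupted runs give identical tallies.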
NASA Astrophysics Data System (ADS)
Goddard, Braden
The ability of inspection agencies and facility operators to measure powders containing several actinides is increasingly necessary as new reprocessing techniques and fuel forms are being developed. These powders are difficult to measure with nondestructive assay (NDA) techniques because neutrons emitted from induced and spontaneous fission of different nuclides are very similar. A neutron multiplicity technique based on first principle methods was developed to measure these powders by exploiting isotope-specific nuclear properties, such as the energy-dependent fission cross sections and the neutron induced fission neutron multiplicity. This technique was tested through extensive simulations using the Monte Carlo N-Particle eXtended (MCNPX) code and by one measurement campaign using the Active Well Coincidence Counter (AWCC) and two measurement campaigns using the Epithermal Neutron Multiplicity Counter (ENMC) with various (alpha,n) sources and actinide materials. Four potential applications of this first principle technique have been identified: (1) quantitative measurement of uranium, neptunium, plutonium, and americium materials; (2) quantitative measurement of mixed oxide (MOX) materials; (3) quantitative measurement of uranium materials; and (4) weapons verification in arms control agreements. This technique still has several challenges which need to be overcome, the largest of these being the challenge of having high-precision active and passive measurements to produce results with acceptably small uncertainties.
Validation of Monte Carlo simulation of neutron production in a spallation experiment
Zavorka, L.; Adam, J.; Artiushenko, M.; ...
2015-02-25
A renewed interest in experimental research on Accelerator-Driven Systems (ADS) has been initiated by the global attempt to produce energy from thorium as a safe(r), clean(er) and (more) proliferation-resistant alternative to the uranium-fuelled thermal nuclear reactors. ADS research has been actively pursued at the Joint Institute for Nuclear Research (JINR), Dubna, for decades. Most recently, the emission of fast neutrons was experimentally investigated at the massive (m = 512 kg) natural uranium spallation target QUINTA. The target has been irradiated with relativistic deuteron beams of energy from 0.5 AGeV up to 4 AGeV at the JINR Nuclotron accelerator in numerous experiments since 2011. Neutron production inside the target was studied through gamma-ray spectrometry measurements of natural uranium activation detectors. Experimental reaction rates for (n,γ), (n,f) and (n,2n) reactions in uranium have provided valuable information about the neutron distribution over a wide range of energies up to some GeV. The experimental data were compared to the predictions of Monte Carlo simulations using the MCNPX 2.7.0 code. The results are presented and potential sources of partial disagreement are discussed.
Simulation of internal contamination screening with dose rate meters
NASA Astrophysics Data System (ADS)
Fonseca, T. C. F.; Mendes, B. M.; Hunt, J. G.
2017-11-01
Assessing the intake of radionuclides after an accident in a nuclear power plant or after the intentional release of radionuclides in public places allows dose calculations and triage actions to be carried out for members of the public and for emergency response teams. Gamma emitters in the lung, thyroid or the whole body may be detected and quantified by making dose rate measurements at the surface of the internally contaminated person. In an accident scenario, quick measurements made with readily available portable equipment are a key factor for success. In this paper, the Monte Carlo program Visual Monte Carlo (VMC) and the MCNPX code are used in conjunction with voxel phantoms to calculate the dose rate at the surface of a contaminated person due to internally deposited radionuclides. A whole-body contamination with 137Cs and a thyroid contamination with 131I were simulated, and the calibration factors in kBq per μSv/h were calculated. The calculated calibration factors were compared with real data obtained from the Goiania accident in the case of 137Cs and from the Chernobyl accident in the case of 131I. The close agreement between the calculated and real measurements indicates that the method may be applied to other radionuclides. Minimum detectable activities are discussed.
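A calibration factor in kBq per μSv/h converts a net surface dose-rate reading directly into an estimated internal activity. A minimal sketch (the factor and readings below are illustrative numbers, not the paper's calculated values):

```python
def activity_kbq(dose_rate_usv_h, cal_factor_kbq_per_usv_h, background_usv_h=0.0):
    """Estimate internally deposited activity from a surface dose-rate reading.
    The calibration factor (kBq per uSv/h) would come from Monte Carlo
    simulation of the phantom geometry; values here are illustrative only."""
    net = dose_rate_usv_h - background_usv_h
    return net * cal_factor_kbq_per_usv_h

# Illustrative: 0.8 uSv/h above a 0.1 uSv/h background with CF = 500 kBq/(uSv/h)
a = activity_kbq(0.8, 500.0, background_usv_h=0.1)
```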
Santos, William S; Belinato, Walmir; Perini, Ana P; Caldas, Linda V E; Galeano, Diego C; Santos, Carla J; Neves, Lucio P
2018-01-01
In this study we evaluated the occupational exposures during an abdominal fluoroscopically guided interventional radiology procedure. We investigated the relation between the Body Mass Index (BMI), of the patient, and the conversion coefficient values (CC) for a set of dosimetric quantities, used to assess the exposure risks of medical radiation workers. The study was performed using a set of male and female virtual anthropomorphic phantoms, of different body weights and sizes. In addition to these phantoms, a female and a male phantom, named FASH3 and MASH3 (reference virtual anthropomorphic phantoms), were also used to represent the medical radiation workers. The CC values, obtained as a function of the dose area product, were calculated for 87 exposure scenarios. In each exposure scenario, three phantoms, implemented in the MCNPX 2.7.0 code, were simultaneously used. These phantoms were utilized to represent a patient and medical radiation workers. The results showed that increasing the BMI of the patient, adjusted for each patient protocol, the CC values for medical radiation workers decrease. It is important to note that these results were obtained with fixed exposure parameters. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Atwell, William; Rojdev, Kristina; Aghara, Sukesh; Sriprisan, Sirikul
2013-01-01
In this paper we present a novel space radiation shielding approach using various material lay-ups, called "Graded-Z" shielding, which could optimize cost, weight, and safety while mitigating the radiation exposures from the trapped radiation and solar proton environments, as well as the galactic cosmic radiation (GCR) environment, to humans and electronics. In addition, a validation and verification (V&V) was performed using two different high energy particle transport/dose codes (MCNPX & HZETRN). Inherently, we know that materials having high-hydrogen content are very good space radiation shielding materials. Graded-Z material lay-ups are very good trapped electron mitigators for medium earth orbit (MEO) and geostationary earth orbit (GEO). In addition, secondary particles, namely neutrons, are produced as the primary particles penetrate a spacecraft, which can have deleterious effects on both humans and electronics. The use of "dopants," such as beryllium, boron, and lithium, impregnated in other shielding materials provides a means of absorbing the secondary neutrons. Several examples of optimized Graded-Z shielding lay-ups that include the use of composite materials are presented and discussed in detail. This parametric shielding study is an extension of some earlier pioneering work we (William Atwell and Kristina Rojdev) performed in 2004 [1] and 2009 [2].
Neutron yield and induced radioactivity: a study of 235-MeV proton and 3-GeV electron accelerators.
Hsu, Yung-Cheng; Lai, Bo-Lun; Sheu, Rong-Jiun
2016-01-01
This study evaluated the magnitude of potential neutron yield and induced radioactivity of two new accelerators in Taiwan: a 235-MeV proton cyclotron for radiation therapy and a 3-GeV electron synchrotron serving as the injector for the Taiwan Photon Source. From a nuclear interaction point of view, neutron production from targets bombarded with high-energy particles is intrinsically related to the resulting target activation. Two multi-particle interaction and transport codes, FLUKA and MCNPX, were used in this study. To ensure prediction quality, much effort was devoted to the associated benchmark calculations. Comparisons of the accelerators' results for three target materials (copper, stainless steel and tissue) are presented. Although the proton-induced neutron yields were higher than those induced by electrons, the maximal neutron production rates of both accelerators were comparable according to their respective beam outputs during typical operation. Activation products in the targets of the two accelerators were unexpectedly similar because the primary reaction channels for proton- and electron-induced activation are (p,pn) and (γ,n), respectively. The resulting residual activities and remnant dose rates as a function of time were examined and discussed. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Transmutation of uranium and thorium in the particle field of the Quinta sub-critical assembly
NASA Astrophysics Data System (ADS)
Hashemi-Nezhad, S. R.; Asquith, N. L.; Voronko, V. A.; Sotnikov, V. V.; Zhadan, Alina; Zhuk, I. V.; Potapenko, A.; Husak, Krystsina; Chilap, V.; Adam, J.; Baldin, A.; Berlev, A.; Furman, W.; Kadykov, M.; Khushvaktov, J.; Kudashkin, I.; Mar'in, I.; Paraipan, M.; Pronskih, V.; Solnyshkin, A.; Tyutyunnikov, S.
2018-03-01
The fission rates of natural uranium and thorium were measured in the particle field of Quinta, a 512 kg natural uranium target-blanket sub-critical assembly. The Quinta assembly was irradiated with deuterons of energy 4 GeV from the Nuclotron accelerator of the Joint Institute for Nuclear Research (JINR), Dubna, Russia. Fission rates of uranium and thorium were measured using gamma spectroscopy and fission track techniques. The production rate of 239Np was also measured. The obtained experimental results were compared with Monte Carlo predictions using the MCNPX 2.7 code employing the physics and fission-evaporation models of INCL4-ABLA, CEM03.03 and LAQGSM03.03. Some of the neutronic characteristics of the Quinta are compared with the "Energy plus Transmutation (EpT)" subcritical assembly, which is composed of a lead target and natU blanket. This comparison clearly demonstrates the importance of the target material and of the neutron moderator and reflector types to the performance of a spallation neutron driven subcritical system. As the dimensions of the Quinta are very close to those of an optimal multi-rod uranium target, the experimental and Monte Carlo calculation results presented in this paper provide insight into the particle field within a uranium target as well as into Accelerator Driven Systems in general.
Goddard, Braden; Croft, Stephen; Lousteau, Angela; ...
2016-05-25
Safeguarding nuclear material is an important and challenging task for the international community. One particular safeguards technique commonly used for uranium assay is active neutron correlation counting. This technique involves irradiating unused uranium with (α,n) neutrons from an Am-Li source and recording the resultant neutron pulse signal, which includes induced fission neutrons. Although this non-destructive technique is widely employed in safeguards applications, the neutron energy spectrum from an Am-Li source is not well known. Several measurements over the past few decades have been made to characterize this spectrum; however, little work has been done comparing the measured spectra of various Am-Li sources to each other. This paper examines fourteen different Am-Li spectra, focusing on how these spectra affect simulated neutron multiplicity results using the code Monte Carlo N-Particle eXtended (MCNPX). Two measurement and simulation campaigns were completed using Active Well Coincidence Counter (AWCC) detectors and uranium standards of varying enrichment. The results of this work indicate that for standard AWCC measurements, the fourteen Am-Li spectra produce similar doubles and triples count rates. The singles count rates varied by as much as 20% between the different spectra, although they are usually not used in quantitative analysis.
SU-E-T-656: Quantitative Analysis of Proton Boron Fusion Therapy (PBFT) in Various Conditions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoon, D; Jung, J; Shin, H
2015-06-15
Purpose: Three alpha particles accompany the proton-boron interaction, which can be used in radiotherapy applications. We performed simulation studies to determine the effectiveness of proton boron fusion therapy (PBFT) under various conditions. Methods: Boron uptake regions (BURs) of various widths and densities were implemented in the Monte Carlo n-particle extended (MCNPX) simulation code. The effect of proton beam energy was considered for different BURs. Four simulation scenarios were designed to verify the effectiveness of the integrated boost that was observed in the proton boron reaction. In these simulations, the effect of proton beam energy was determined for different physical conditions, such as size, location, and boron concentration. Results: Proton dose amplification was confirmed for all proton beam energies considered (< 96.62%). Based on the simulation results for different physical conditions, the threshold for the range in which proton dose amplification occurred was estimated as 0.3 cm. Effective proton boron reaction requires the boron concentration to be equal to or greater than 14.4 mg/g. Conclusion: We established the effectiveness of PBFT under various conditions by using Monte Carlo simulation. The results of our research can be used to provide a PBFT dose database.
Prediction of sound radiated from different practical jet engine inlets
NASA Technical Reports Server (NTRS)
Zinn, B. T.; Meyer, W. L.
1980-01-01
Existing computer codes for calculating the far field radiation patterns surrounding various practical jet engine inlet configurations under different excitation conditions were upgraded. The computer codes were refined and expanded so that they are now computationally more efficient by a factor of about three and are capable of producing accurate results up to nondimensional wave numbers of twenty. Computer programs were also developed to help generate accurate geometrical representations of the inlets to be investigated. These data are required as input for the computer programs which calculate the sound fields. The new geometry generating computer program considerably reduces the time required to generate the input data, which was one of the most time consuming steps in the process. The results of sample runs using the NASA-Lewis QCSEE inlet are presented, and comparisons of run times and accuracy are made between the old and upgraded computer codes. The overall accuracy of the computations is determined by comparison of the results of the computations with simple source solutions.
Error Suppression for Hamiltonian-Based Quantum Computation Using Subsystem Codes
NASA Astrophysics Data System (ADS)
Marvian, Milad; Lidar, Daniel A.
2017-01-01
We present general conditions for quantum error suppression for Hamiltonian-based quantum computation using subsystem codes. This involves encoding the Hamiltonian performing the computation using an error detecting subsystem code and the addition of a penalty term that commutes with the encoded Hamiltonian. The scheme is general and includes the stabilizer formalism of both subspace and subsystem codes as special cases. We derive performance bounds and show that complete error suppression results in the large penalty limit. To illustrate the power of subsystem-based error suppression, we introduce fully two-local constructions for protection against local errors of the swap gate of adiabatic gate teleportation and the Ising chain in a transverse field.
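Schematically, the construction described in the abstract evolves the system under an encoded Hamiltonian plus a commuting penalty term; the notation below is illustrative, not taken verbatim from the paper.

```latex
% Illustrative form of the error-suppression scheme: \bar{H} is the
% encoded computational Hamiltonian, H_P a penalty term built from the
% subsystem code, and E_P the penalty strength.
H_{\mathrm{total}} = \bar{H} + E_P\, H_P, \qquad [\bar{H}, H_P] = 0,
% with complete suppression of detectable errors recovered in the
% large-penalty limit E_P \to \infty.
```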
Error Suppression for Hamiltonian-Based Quantum Computation Using Subsystem Codes.
Marvian, Milad; Lidar, Daniel A
2017-01-20
We present general conditions for quantum error suppression for Hamiltonian-based quantum computation using subsystem codes. This involves encoding the Hamiltonian performing the computation using an error detecting subsystem code and the addition of a penalty term that commutes with the encoded Hamiltonian. The scheme is general and includes the stabilizer formalism of both subspace and subsystem codes as special cases. We derive performance bounds and show that complete error suppression results in the large penalty limit. To illustrate the power of subsystem-based error suppression, we introduce fully two-local constructions for protection against local errors of the swap gate of adiabatic gate teleportation and the Ising chain in a transverse field.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simunovic, Srdjan
2015-02-16
CASL's modeling and simulation technology, the Virtual Environment for Reactor Applications (VERA), incorporates coupled physics and science-based models, state-of-the-art numerical methods, modern computational science, integrated uncertainty quantification (UQ) and validation against data from operating pressurized water reactors (PWRs), single-effect experiments, and integral tests. The computational simulation component of VERA is the VERA Core Simulator (VERA-CS). The core simulator is the specific collection of multi-physics computer codes used to model and deplete an LWR core over multiple cycles. The core simulator has a single common input file that drives all of the different physics codes. The parser code, VERAIn, converts VERA input into an XML file that is used as input to the different VERA codes.
Navier-Stokes and Comprehensive Analysis Performance Predictions of the NREL Phase VI Experiment
NASA Technical Reports Server (NTRS)
Duque, Earl P. N.; Burklund, Michael D.; Johnson, Wayne
2003-01-01
A vortex lattice code, CAMRAD II, and a Reynolds-Averaged Navier-Stokes code, OVERFLOW-D2, were used to predict the aerodynamic performance of a two-bladed horizontal axis wind turbine. All computations were compared with experimental data that were collected at the NASA Ames Research Center 80- by 120-Foot Wind Tunnel. Computations were performed for both axial and yawed operating conditions. Various stall delay models and dynamic stall models were used by the CAMRAD II code. Comparisons between the experimental data and computed aerodynamic loads show that the OVERFLOW-D2 code can accurately predict the power and spanwise loading of a wind turbine rotor.
Fault-tolerance in Two-dimensional Topological Systems
NASA Astrophysics Data System (ADS)
Anderson, Jonas T.
This thesis is a collection of ideas with the general goal of building, at least in the abstract, a local fault-tolerant quantum computer. The connection between quantum information and topology has proven to be an active area of research in several fields. The introduction of the toric code by Alexei Kitaev demonstrated the usefulness of topology for quantum memory and quantum computation. Many quantum codes used for quantum memory are modeled by spin systems on a lattice, with operators that extract syndrome information placed on vertices or faces of the lattice. It is natural to wonder whether the useful codes in such systems can be classified. This thesis presents work that leverages ideas from topology and graph theory to explore the space of such codes. Homological stabilizer codes are introduced and it is shown that, under a set of reasonable assumptions, any qubit homological stabilizer code is equivalent to either a toric code or a color code. Additionally, the toric code and the color code correspond to distinct classes of graphs. Many systems have been proposed as candidate quantum computers. It is very desirable to design quantum computing architectures with two-dimensional layouts and low complexity in parity-checking circuitry. Kitaev's surface codes provided the first example of codes satisfying this property. They provided a new route to fault tolerance with more modest overheads and thresholds approaching 1%. The recently discovered color codes share many properties with the surface codes, such as the ability to perform syndrome extraction locally in two dimensions. Some families of color codes admit a transversal implementation of the entire Clifford group. This work investigates color codes on the 4.8.8 lattice known as triangular codes. I develop a fault-tolerant error-correction strategy for these codes in which repeated syndrome measurements on this lattice generate a three-dimensional space-time combinatorial structure. 
I then develop an integer program that analyzes this structure and determines the most likely set of errors consistent with the observed syndrome values. I implement this integer program to find the threshold for depolarizing noise on small versions of these triangular codes. Because the threshold for magic-state distillation is likely to be higher than this value and because logical
System, methods and apparatus for program optimization for multi-threaded processor architectures
Bastoul, Cedric; Lethin, Richard A; Leung, Allen K; Meister, Benoit J; Szilagyi, Peter; Vasilache, Nicolas T; Wohlford, David E
2015-01-06
Methods, apparatus and computer software product for source code optimization are provided. In an exemplary embodiment, a first custom computing apparatus is used to optimize the execution of source code on a second computing apparatus. In this embodiment, the first custom computing apparatus contains a memory, a storage medium and at least one processor with at least one multi-stage execution unit. The second computing apparatus contains at least two multi-stage execution units that allow for parallel execution of tasks. The first custom computing apparatus optimizes the code for parallelism, locality of operations and contiguity of memory accesses on the second computing apparatus. This Abstract is provided for the sole purpose of complying with the Abstract requirement rules. This Abstract is submitted with the explicit understanding that it will not be used to interpret or to limit the scope or the meaning of the claims.
NASA Technical Reports Server (NTRS)
Liu, D. D.; Kao, Y. F.; Fung, K. Y.
1989-01-01
A transonic equivalent strip (TES) method was further developed for unsteady flow computations of arbitrary wing planforms. The TES method consists of two consecutive correction steps to a given nonlinear code such as LTRAN2; namely, the chordwise mean flow correction and the spanwise phase correction. The computation procedure requires direct pressure input from other computed or measured data. Otherwise, it does not require airfoil shape or grid generation for given planforms. To validate the computed results, four swept wings of various aspect ratios, including those with control surfaces, are selected as computational examples. Overall trends in unsteady pressures are established with those obtained by XTRAN3S codes, Isogai's full potential code and measured data by NLR and RAE. In comparison with these methods, the TES has achieved considerable saving in computer time and reasonable accuracy which suggests immediate industrial applications.
Method and computer program product for maintenance and modernization backlogging
Mattimore, Bernard G; Reynolds, Paul E; Farrell, Jill M
2013-02-19
According to one embodiment, a computer program product for determining future facility conditions includes a computer readable medium having computer readable program code stored therein. The computer readable program code includes computer readable program code for calculating a time period specific maintenance cost, for calculating a time period specific modernization factor, and for calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. In another embodiment, a computer-implemented method for calculating future facility conditions includes calculating a time period specific maintenance cost, calculating a time period specific modernization factor, and calculating a time period specific backlog factor. Future facility conditions again equal the sum of these three quantities. Other embodiments are also presented.
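The additive relationship stated in the abstract can be sketched directly; a minimal illustration with hypothetical names and sample figures, not taken from the patent:

```python
# Minimal sketch of the abstract's stated relation:
# future facility conditions = time-period-specific maintenance cost
#   + modernization factor + backlog factor.
# Function name and sample figures are illustrative only.

def future_facility_conditions(maintenance_cost: float,
                               modernization_factor: float,
                               backlog_factor: float) -> float:
    return maintenance_cost + modernization_factor + backlog_factor

print(future_facility_conditions(120000.0, 35000.0, 15000.0))  # 170000.0
```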
Development Of A Navier-Stokes Computer Code
NASA Technical Reports Server (NTRS)
Yoon, Seokkwan; Kwak, Dochan
1993-01-01
Report discusses aspects of development of CENS3D computer code, solving three-dimensional Navier-Stokes equations of compressible, viscous, unsteady flow. Implements implicit finite-difference or finite-volume numerical-integration scheme, called "lower-upper symmetric-Gauss-Seidel" (LU-SGS), offering potential for very low computer time per iteration and for fast convergence.
A Flexible and Non-intrusive Approach for Computing Complex Structural Coverage Metrics
NASA Technical Reports Server (NTRS)
Whalen, Michael W.; Person, Suzette J.; Rungta, Neha; Staats, Matt; Grijincu, Daniela
2015-01-01
Software analysis tools and techniques often leverage structural code coverage information to reason about the dynamic behavior of software. Existing techniques instrument the code with the required structural obligations and then monitor the execution of the compiled code to report coverage. Instrumentation-based approaches often incur considerable runtime overhead for complex structural coverage metrics such as Modified Condition/Decision Coverage (MC/DC). Code instrumentation, in general, has to be approached with great care to ensure it does not modify the behavior of the original code. Furthermore, instrumented code cannot be used in conjunction with other analyses that reason about the structure and semantics of the code under test. In this work, we introduce a non-intrusive preprocessing approach for computing structural coverage information. It uses a static partial evaluation of the decisions in the source code and a source-to-bytecode mapping to generate the information necessary to efficiently track structural coverage metrics during execution. Our technique is flexible; the results of the preprocessing can be used by a variety of coverage-driven software analysis tasks, including automated analyses that are not possible for instrumented code. Experimental results in the context of symbolic execution show the efficiency and flexibility of our non-intrusive approach for computing code coverage information.
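For context on the MC/DC obligations mentioned above, a toy instrumented tracker (deliberately simplified, and unlike the paper's non-intrusive technique) records which condition vectors of a two-condition decision have been exercised:

```python
# Toy illustration of MC/DC-style condition tracking for the decision
# "c1 and c2". This is an instrumented sketch for exposition only; the
# paper's approach is explicitly non-intrusive and works differently.
observed = set()

def decision(c1: bool, c2: bool) -> bool:
    observed.add((c1, c2))   # record the condition vector exercised
    return c1 and c2

# MC/DC for "c1 and c2" needs vectors where each condition independently
# affects the outcome: (T,T), (T,F), and (F,T).
decision(True, True)
decision(True, False)
decision(False, True)
print(len(observed))  # 3
```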
"SMART": A Compact and Handy FORTRAN Code for the Physics of Stellar Atmospheres
NASA Astrophysics Data System (ADS)
Sapar, A.; Poolamäe, R.
2003-01-01
A new computer code SMART (Spectra from Model Atmospheres by Radiative Transfer) for computing stellar spectra forming in plane-parallel atmospheres has been compiled by us and A. Aret. To guarantee wide compatibility of the code with the shell environment, we chose FORTRAN-77 as the programming language and tried to confine ourselves to the common part of its numerous versions under both WINDOWS and LINUX. SMART can be used for studies of several processes in stellar atmospheres. The current version of the programme is undergoing rapid changes due to our goal to elaborate a simple, handy and compact code. Instead of linearisation (a mathematical method of recurrent approximations), we propose to use physical evolutionary changes; in other words, the relaxation of quantum state populations from LTE to NLTE has been studied using a small number of NLTE states. This computational scheme is essentially simpler and more compact than linearisation. The relaxation scheme makes it possible to use, in place of the Λ-iteration procedure, a physically changing emissivity (or source function) which incorporates the changing Menzel coefficients for the NLTE quantum state populations. However, light scattering on free electrons is, in terms of Feynman graphs, a real second-order quantum process and cannot be reduced to consecutive processes of absorption and emission as in the case of radiative transfer in spectral lines. With duly chosen input parameters, the code SMART enables computing the radiative acceleration of the matter in stellar atmosphere turbulence clumps. This also makes it possible to connect the model atmosphere in more detail with the problem of stellar wind triggering. Another problem incorporated into SMART is the diffusion of chemical elements and their isotopes in the atmospheres of chemically peculiar (CP) stars due to the usual radiative acceleration and the essential additional acceleration generated by the light-induced drift.
As a special case, using duly chosen pixels on the stellar disk, the spectrum of a rotating star can be computed. No instrumental broadening has been incorporated in the code of SMART. To facilitate the study of stellar spectra, a GUI (Graphical User Interface) with selection of labels by ions has been compiled to study the spectral lines of different elements and ions in the computed emergent flux. An amazing feature of SMART is that its code is very short: it occupies only 4 two-sided two-column A4 sheets in landscape format. In addition, if well commented, it is quite easily readable and understandable. We have used the tactic of writing the comments on the right-side margin (columns starting from 73). Such a short code has been composed by widely using unified input physics (for example, the ionisation cross-sections for bound-free transitions and the electron and ion collision rates). A current restriction on the application area of the present version of SMART is that molecules are so far ignored; thus, it can be used only for lukewarm and hot stellar atmospheres. In the computer code we have tried to avoid bulky, often over-optimised methods primarily meant to spare computation time. For instance, we compute the continuous absorption coefficient at every wavelength. Nevertheless, within an hour on the personal computer at our disposal (AMD Athlon XP 1700+, 512 MB DDRAM), a stellar spectrum with spectral resolution λ/dλ = 100,000 for the spectral interval 700 -- 30,000 Å is computed. The model input data and the line data used by us are both computed and compiled by R. Kurucz. In order to follow the presence and representability of quantum states, and to enumerate them for NLTE studies, a C++ code transforming the needed data to LaTeX has been compiled. Thus we have composed a quantum state list for all neutrals and ions in the Kurucz file 'gfhyperall.dat'.
The list enables the concept of super-states, including partly correlating super-states, to be composed more adequately. We are grateful to R. Kurucz for making available, on CD-ROM and via the Internet, his computer codes ATLAS and SYNTHE, used by us as a starting point in composing the new computer code. We are also grateful to the Estonian Science Foundation for grant ESF-4701.
Guide to AERO2S and WINGDES Computer Codes for Prediction and Minimization of Drag Due to Lift
NASA Technical Reports Server (NTRS)
Carlson, Harry W.; Chu, Julio; Ozoroski, Lori P.; McCullers, L. Arnold
1997-01-01
The computer codes, AERO2S and WINGDES, are now widely used for the analysis and design of airplane lifting surfaces under conditions that tend to induce flow separation. These codes have undergone continued development to provide additional capabilities since the introduction of the original versions over a decade ago. This code development has been reported in a variety of publications (NASA technical papers, NASA contractor reports, and society journals). Some modifications have not been publicized at all. Users of these codes have suggested the desirability of combining in a single document the descriptions of the code development, an outline of the features of each code, and suggestions for effective code usage. This report is intended to supply that need.
Transferring ecosystem simulation codes to supercomputers
NASA Technical Reports Server (NTRS)
Skiles, J. W.; Schulbach, C. H.
1995-01-01
Many ecosystem simulation computer codes have been developed in the last twenty-five years. This development took place initially on main-frame computers, then mini-computers, and more recently, on micro-computers and workstations. Supercomputing platforms (both parallel and distributed systems) have been largely unused, however, because of the perceived difficulty in accessing and using the machines. Also, significant differences in the system architectures of sequential, scalar computers and parallel and/or vector supercomputers must be considered. We have transferred a grassland simulation model (developed on a VAX) to a Cray Y-MP/C90. We describe porting the model to the Cray and the changes we made to exploit the parallelism in the application and improve code execution. The Cray executed the model 30 times faster than the VAX and 10 times faster than a Unix workstation. We achieved an additional speedup of 30 percent by using the compiler's vectorizing and 'in-lining' capabilities. The code runs at only about 5 percent of the Cray's peak speed because it ineffectively uses the vector and parallel processing capabilities of the Cray. We expect that by restructuring the code, it could execute an additional six to ten times faster.
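The speedup figures quoted in the abstract compose multiplicatively; a quick arithmetic check (values taken from the abstract, multiplicative combination assumed):

```python
# Combining the reported gains: the Cray ran the model 30x faster than
# the VAX, and compiler vectorizing/in-lining added a further 30%,
# for a combined factor of roughly 30 * 1.3 = 39 over the VAX.
base_speedup = 30.0          # Cray Y-MP/C90 vs. VAX
vectorization_gain = 0.30    # additional 30% from the compiler
combined = base_speedup * (1.0 + vectorization_gain)
print(round(combined, 1))  # 39.0
```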
Duct flow nonuniformities for Space Shuttle Main Engine (SSME)
NASA Technical Reports Server (NTRS)
1987-01-01
A three-duct Space Shuttle Main Engine (SSME) Hot Gas Manifold geometry code was developed for use. The methodology of the program is described, recommendations on its implementation made, and an input guide, input deck listing, and a source code listing provided. The code listing is strewn with an abundance of comments to assist the user in following its development and logic. A working source deck will be provided. A thorough analysis was made of the proper boundary conditions and chemistry kinetics necessary for an accurate computational analysis of the flow environment in the SSME fuel side preburner chamber during the initial startup transient. Pertinent results were presented to facilitate incorporation of these findings into an appropriate CFD code. The computation must be a turbulent computation, since the flow field turbulent mixing will have a profound effect on the chemistry. Because of the additional equations demanded by the chemistry model it is recommended that for expediency a simple algebraic mixing length model be adopted. Performing this computation for all or selected time intervals of the startup time will require an abundance of computer CPU time regardless of the specific CFD code selected.
War of Ontology Worlds: Mathematics, Computer Code, or Esperanto?
Rzhetsky, Andrey; Evans, James A.
2011-01-01
The use of structured knowledge representations—ontologies and terminologies—has become standard in biomedicine. Definitions of ontologies vary widely, as do the values and philosophies that underlie them. In seeking to make these views explicit, we conducted and summarized interviews with a dozen leading ontologists. Their views clustered into three broad perspectives that we summarize as mathematics, computer code, and Esperanto. Ontology as mathematics puts the ultimate premium on rigor and logic, symmetry and consistency of representation across scientific subfields, and the inclusion of only established, non-contradictory knowledge. Ontology as computer code focuses on utility and cultivates diversity, fitting ontologies to their purpose. Like computer languages C++, Prolog, and HTML, the code perspective holds that diverse applications warrant custom designed ontologies. Ontology as Esperanto focuses on facilitating cross-disciplinary communication, knowledge cross-referencing, and computation across datasets from diverse communities. We show how these views align with classical divides in science and suggest how a synthesis of their concerns could strengthen the next generation of biomedical ontologies. PMID:21980276
Verifying a computational method for predicting extreme ground motion
Harris, R.A.; Barall, M.; Andrews, D.J.; Duan, B.; Ma, S.; Dunham, E.M.; Gabriel, A.-A.; Kaneko, Y.; Kase, Y.; Aagaard, Brad T.; Oglesby, D.D.; Ampuero, J.-P.; Hanks, T.C.; Abrahamson, N.
2011-01-01
In situations where seismological data is rare or nonexistent, computer simulations may be used to predict ground motions caused by future earthquakes. This is particularly practical in the case of extreme ground motions, where engineers of special buildings may need to design for an event that has not been historically observed but which may occur in the far-distant future. Once the simulations have been performed, however, they still need to be tested. The SCEC-USGS dynamic rupture code verification exercise provides a testing mechanism for simulations that involve spontaneous earthquake rupture. We have performed this examination for the specific computer code that was used to predict maximum possible ground motion near Yucca Mountain. Our SCEC-USGS group exercises have demonstrated that the specific computer code that was used for the Yucca Mountain simulations produces similar results to those produced by other computer codes when tackling the same science problem. We also found that the 3D ground motion simulations produced smaller ground motions than the 2D simulations.
An evaluation of four single element airfoil analytic methods
NASA Technical Reports Server (NTRS)
Freuler, R. J.; Gregorek, G. M.
1979-01-01
A comparison of four computer codes for the analysis of two-dimensional single element airfoil sections is presented for three classes of section geometries. Two of the computer codes utilize vortex singularities methods to obtain the potential flow solution. The other two codes solve the full inviscid potential flow equation using finite differencing techniques, allowing results to be obtained for transonic flow about an airfoil including weak shocks. Each program incorporates boundary layer routines for computing the boundary layer displacement thickness and boundary layer effects on aerodynamic coefficients. Computational results are given for a symmetrical section represented by an NACA 0012 profile, a conventional section illustrated by an NACA 65A413 profile, and a supercritical type section for general aviation applications typified by a NASA LS(1)-0413 section. The four codes are compared and contrasted in the areas of method of approach, range of applicability, agreement among each other and with experiment, individual advantages and disadvantages, computer run times and memory requirements, and operational idiosyncrasies.
War of ontology worlds: mathematics, computer code, or Esperanto?
Rzhetsky, Andrey; Evans, James A
2011-09-01
The use of structured knowledge representations (ontologies and terminologies) has become standard in biomedicine. Definitions of ontologies vary widely, as do the values and philosophies that underlie them. In seeking to make these views explicit, we conducted and summarized interviews with a dozen leading ontologists. Their views clustered into three broad perspectives that we summarize as mathematics, computer code, and Esperanto. Ontology as mathematics puts the ultimate premium on rigor and logic, symmetry and consistency of representation across scientific subfields, and the inclusion of only established, non-contradictory knowledge. Ontology as computer code focuses on utility and cultivates diversity, fitting ontologies to their purpose. Like the computer languages C++, Prolog, and HTML, the code perspective holds that diverse applications warrant custom-designed ontologies. Ontology as Esperanto focuses on facilitating cross-disciplinary communication, knowledge cross-referencing, and computation across datasets from diverse communities. We show how these views align with classical divides in science and suggest how a synthesis of their concerns could strengthen the next generation of biomedical ontologies.
48 CFR 1819.1005 - Applicability.
Code of Federal Regulations, 2013 CFR
2013-10-01
... System (NAICS) codes are: NAICS code Industry category 334111 Electronic Computer Manufacturing. 334418... Manufacturing. 334119 Other Computer Peripheral Equipment Manufacturing. 33422 Radio and Television Broadcasting and Wireless Communication Equipment Manufacturing. 336415 Guided Missile and Space Vehicle Propulsion...
48 CFR 1819.1005 - Applicability.
Code of Federal Regulations, 2014 CFR
2014-10-01
... System (NAICS) codes are: NAICS code Industry category 334111 Electronic Computer Manufacturing. 334418... Manufacturing. 334119 Other Computer Peripheral Equipment Manufacturing. 33422 Radio and Television Broadcasting and Wireless Communication Equipment Manufacturing. 336415 Guided Missile and Space Vehicle Propulsion...
48 CFR 1819.1005 - Applicability.
Code of Federal Regulations, 2012 CFR
2012-10-01
... System (NAICS) codes are: NAICS code Industry category 334111 Electronic Computer Manufacturing. 334418... Manufacturing. 334119 Other Computer Peripheral Equipment Manufacturing. 33422 Radio and Television Broadcasting and Wireless Communication Equipment Manufacturing. 336415 Guided Missile and Space Vehicle Propulsion...
40 CFR 1048.110 - How must my engines diagnose malfunctions?
Code of Federal Regulations, 2010 CFR
2010-07-01
..., the MIL may stay off during later engine operation. (d) Store trouble codes in computer memory. Record and store in computer memory any diagnostic trouble codes showing a malfunction that should illuminate...
Recent applications of the transonic wing analysis computer code, TWING
NASA Technical Reports Server (NTRS)
Subramanian, N. R.; Holst, T. L.; Thomas, S. D.
1982-01-01
An evaluation of the transonic-wing-analysis computer code TWING is given. TWING utilizes a fully implicit approximate-factorization iteration scheme to solve the full potential equation in conservative form. A numerical elliptic-solver grid-generation scheme is used to generate the required finite-difference mesh. Several wing configurations were analyzed, and the limits of applicability of the code were evaluated. Comparisons of computed results were made with available experimental data. Results indicate that the code is robust, accurate (when significant viscous effects are not present), and efficient. TWING generally produces solutions an order of magnitude faster than other conservative full-potential codes using successive-line overrelaxation. The present method is applicable to a wide range of isolated wing configurations, including high-aspect-ratio transport wings and low-aspect-ratio, high-sweep fighter configurations.
Response surface method in geotechnical/structural analysis, phase 1
NASA Astrophysics Data System (ADS)
Wong, F. S.
1981-02-01
In the response surface approach, an approximating function is fit to a long-running computer code based on a limited number of code calculations. The approximating function, called the response surface, is then used to replace the code in the subsequent repetitive computations required by a statistical analysis. The procedure of response surface development and the feasibility of the method are shown using a sample problem in slope stability, which is based on data from centrifuge experiments of model soil slopes and involves five random soil parameters. It is shown that a response surface can be constructed from as few as four code calculations and that the response surface is computationally extremely efficient compared to the code calculation. Potential applications of this research include probabilistic analysis of dynamic, complex, nonlinear soil/structure systems such as slope stability, liquefaction, and nuclear reactor safety.
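The workflow described above (a few expensive runs, a cheap fitted surrogate, then repetitive statistical sampling) can be sketched as follows. This is a minimal illustration, not the report's method: `expensive_model` is a stand-in for the long-running code, one input variable replaces the five random soil parameters, and the quadratic fit is an assumed surface form.

```python
import numpy as np

def expensive_model(x):
    # Placeholder for a long-running computer code.
    return np.exp(-0.5 * x) + 0.1 * x ** 2

# A handful of design points (the report used as few as four code runs).
design = np.array([0.0, 1.0, 2.0, 3.0])
responses = np.array([expensive_model(x) for x in design])

# Fit a quadratic response surface by least squares.
coeffs = np.polyfit(design, responses, deg=2)
surface = np.poly1d(coeffs)

# Replace the code with the surface in a repetitive statistical computation:
# here, a Monte Carlo estimate of the mean response over a random input.
samples = np.random.default_rng(0).normal(1.5, 0.5, 10_000)
estimate = surface(samples).mean()
```

Evaluating the polynomial ten thousand times costs almost nothing, which is the point: the expense is concentrated in the four model runs.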
User's Manual for FEMOM3DS. Version 1.0
NASA Technical Reports Server (NTRS)
Reddy, C.J.; Deshpande, M. D.
1997-01-01
FEMOM3DS is a computer code written in FORTRAN 77 to compute the electromagnetic (EM) scattering characteristics of a three-dimensional object with complex materials using a combined Finite Element Method (FEM)/Method of Moments (MoM) technique. The code uses tetrahedral elements, with vector edge basis functions, for the FEM in the volume of the cavity, and triangular elements, with basis functions similar to those described for the MoM, at the outer boundary. By virtue of the FEM, the code can handle arbitrarily shaped three-dimensional cavities filled with inhomogeneous lossy materials. The User's Manual is written to acquaint the user with the operation of the code. The user is assumed to be familiar with the FORTRAN 77 language and the operating environment of the computers on which the code is intended to run.
Performance measures for transform data coding.
NASA Technical Reports Server (NTRS)
Pearl, J.; Andrews, H. C.; Pratt, W. K.
1972-01-01
This paper develops performance criteria for evaluating transform data coding schemes under computational constraints. Computational constraints that conform with the proposed basis-restricted model give rise to suboptimal coding efficiency characterized by a rate-distortion relation R(D) similar in form to the theoretical rate-distortion function. Numerical examples of this performance measure are presented for Fourier, Walsh, Haar, and Karhunen-Loeve transforms.
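The efficiency comparison among the Fourier, Walsh, Haar, and Karhunen-Loeve transforms can be made concrete with a standard energy-compaction calculation. The sketch below is illustrative, not the paper's measure: it computes the KLT of a first-order Markov source (the usual test model) and shows how much signal energy the few largest transform coefficients capture relative to keeping untransformed samples.

```python
import numpy as np

N, rho = 16, 0.95
# Covariance of an AR(1) source: R[i, j] = rho**|i - j|.
i = np.arange(N)
R = rho ** np.abs(i[:, None] - i[None, :])

# The KLT basis is the eigenvector set of R; the eigenvalues are the
# variances of the transform coefficients.
eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]

# Energy fraction captured by the 4 largest KLT coefficients, versus
# keeping 4 of the 16 samples untransformed (energy is uniform there).
klt_fraction = eigvals[:4].sum() / eigvals.sum()
identity_fraction = 4 / N
```

Suboptimal transforms such as Walsh or Haar would land between the two fractions, which is one way the basis-restricted rate-distortion comparison is visualized.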
ERIC Educational Resources Information Center
Holbrook, M. Cay; MacCuspie, P. Ann
2010-01-01
Braille-reading mathematicians, scientists, and computer scientists were asked to examine the usability of the Unified English Braille Code (UEB) for technical materials. They had little knowledge of the code prior to the study. The research included two reading tasks, a short tutorial about UEB, and a focus group. The results indicated that the…
ERIC Educational Resources Information Center
Moral, Cristian; de Antonio, Angelica; Ferre, Xavier; Lara, Graciela
2015-01-01
Introduction: In this article we propose a qualitative analysis tool--a coding system--that can support the formalisation of the information-seeking process in a specific field: research in computer science. Method: In order to elaborate the coding system, we have conducted a set of qualitative studies, more specifically a focus group and some…
NASA Technical Reports Server (NTRS)
Stoll, Frederick
1993-01-01
The NLPAN computer code uses a finite-strip approach to the analysis of thin-walled prismatic composite structures such as stiffened panels. The code can model in-plane axial loading, transverse pressure loading, and constant through-the-thickness thermal loading, and can account for shape imperfections. The NLPAN code represents an attempt to extend the buckling analysis of the VIPASA computer code into the geometrically nonlinear regime. Buckling mode shapes generated using VIPASA are used in NLPAN as global functions for representing displacements in the nonlinear regime. While the NLPAN analysis is approximate in nature, it is computationally economical in comparison with finite-element analysis, and is thus suitable for use in preliminary design and design optimization. A comprehensive description of the theoretical approach of NLPAN is provided. A discussion of some operational considerations for the NLPAN code is included. NLPAN is applied to several test problems in order to demonstrate new program capabilities, and to assess the accuracy of the code in modeling various types of loading and response. User instructions for the NLPAN computer program are provided, including a detailed description of the input requirements and example input files for two stiffened-panel configurations.
NASA Technical Reports Server (NTRS)
Rathjen, K. A.
1977-01-01
A digital computer code, CAVE (Conduction Analysis Via Eigenvalues), which finds application in the analysis of two-dimensional transient heating of hypersonic vehicles, is described. CAVE is written in FORTRAN IV and is operational on both IBM 360-67 and CDC 6600 computers. The method of solution is a hybrid analytical-numerical technique that is inherently stable, permitting large time steps even with the best of conductors having the finest of mesh sizes. The aerodynamic heating boundary conditions are calculated by the code based on the input flight trajectory, or can optionally be calculated external to the code and entered as input data. The code computes the network conduction and convection links, as well as capacitance values, given basic geometrical and mesh sizes, for four generic configurations (leading edges, cooled panels, X-24C structure, and slabs). Input and output formats are presented and explained. Sample problems are included. A brief summary of the hybrid analytical-numerical technique, which utilizes eigenvalues (thermal frequencies) and eigenvectors (thermal mode vectors), is given, along with the aerodynamic heating equations that have been incorporated in the code, and flow charts.
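The eigenvalue idea behind the abstract above can be sketched in a few lines. This is a hedged illustration, not CAVE itself: for a linear conduction network dT/dt = A T, expanding the state in thermal modes gives an exact analytic advance in time, so the step size carries no stability limit. The 1-D rod below stands in for the code's general network model.

```python
import numpy as np

n = 10
dx = 1.0 / (n + 1)
alpha = 1.0  # thermal diffusivity

# Second-difference conduction matrix for a rod with fixed-temperature ends.
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) * alpha / dx ** 2

# Thermal frequencies (eigenvalues) and thermal mode vectors (eigenvectors).
lam, V = np.linalg.eigh(A)

def advance(T0, t):
    """Exact modal solution: T(t) = V @ diag(exp(lam * t)) @ V.T @ T0."""
    return V @ (np.exp(lam * t) * (V.T @ T0))

T0 = np.sin(np.pi * dx * np.arange(1, n + 1))  # initial temperature profile
T_late = advance(T0, 0.05)  # one large step; no stability restriction
```

Because the time dependence is evaluated analytically through exp(lam * t), a fine mesh (large negative eigenvalues) costs nothing in step size, which is the property the abstract highlights.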
Gel, Aytekin; Hu, Jonathan; Ould-Ahmed-Vall, ElMoustapha; ...
2017-03-20
Legacy codes remain a crucial element of today's simulation-based engineering ecosystem due to the extensive validation process and investment in such software. The rapid evolution of high-performance computing architectures necessitates the modernization of these codes. One approach to modernization is a complete overhaul of the code. However, this could require extensive investments, such as rewriting in modern languages, new data constructs, etc., which would necessitate systematic verification and validation to re-establish the credibility of the computational models. The current study advocates a more incremental approach and is a culmination of several modernization efforts on the legacy code MFIX, an open-source computational fluid dynamics code that has evolved over several decades, is widely used in multiphase flows, and is still being developed by the National Energy Technology Laboratory. Two different modernization approaches, 'bottom-up' and 'top-down', are illustrated. Preliminary results show up to an 8.5x improvement at the selected kernel level with the first approach, and up to a 50% improvement in total simulated time with the latter, for the demonstration cases and target HPC systems employed.
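The incremental 'bottom-up' strategy described above can be shown in miniature: modernize one kernel at a time and verify it reproduces the legacy result before moving on. The kernel below is a made-up element-wise update, not actual MFIX code, and the speedup mechanism (array operations replacing a scalar loop) is only an analogue of the kernel-level work reported.

```python
import numpy as np

def legacy_kernel(a, b):
    # Straight port of a scalar loop: one element at a time.
    out = np.empty_like(a)
    for i in range(len(a)):
        out[i] = a[i] * b[i] + 0.5 * a[i] ** 2
    return out

def modern_kernel(a, b):
    # Same arithmetic expressed as whole-array operations, which the
    # runtime can vectorize across the hardware's SIMD lanes.
    return a * b + 0.5 * a ** 2

a = np.linspace(0.0, 1.0, 1000)
b = np.linspace(1.0, 2.0, 1000)
# Verification step: the modernized kernel must match the legacy one.
match = np.allclose(legacy_kernel(a, b), modern_kernel(a, b))
```

Keeping the legacy kernel alongside the new one turns each increment into its own verification test, which is how credibility is preserved without a full re-validation campaign.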