Core Physics and Kinetics Calculations for the Fissioning Plasma Core Reactor
NASA Technical Reports Server (NTRS)
Butler, C.; Albright, D.
2007-01-01
Highly efficient, compact nuclear reactors would provide high-specific-impulse spacecraft propulsion. This analysis and numerical simulation effort has focused on the technical feasibility issues related to the nuclear design characteristics of a novel reactor design. The Fissioning Plasma Core Reactor (FPCR) is a shockwave-driven gaseous-core nuclear reactor that uses magnetohydrodynamic effects to generate electric power for propulsion. The nuclear design of the system depends on two major calculations: core physics calculations and kinetics calculations. Presently, core physics calculations have concentrated on the use of the MCNP4C code; however, initial results from other codes such as COMBINE/VENTURE and SCALE4a are also shown. Several significant modifications were made to the ISR-developed QCALC1 kinetics analysis code, including testing the state of the core materials, an improved calculation of the material properties of the core, the addition of an adiabatic core temperature model, and an improved first-order reactivity correction model. The accuracy of these modifications has been verified, and the accuracy of the point-core kinetics model used by the QCALC1 code has also been validated. Previously calculated kinetics results for the FPCR were described in the ISR report "QCALC1: A Code for FPCR Kinetics Model Feasibility Analysis," dated June 1, 2002.
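The point-core kinetics model underlying a code like QCALC1 can be illustrated with a one-delayed-group sketch. This is not the QCALC1 algorithm; all parameter values (delayed fraction, generation time, decay constant, reactivity step) are generic round numbers chosen for illustration.

```python
# Minimal one-delayed-group point kinetics sketch (illustrative only;
# not the QCALC1 model, and all parameters are hypothetical).
beta = 0.0065        # delayed neutron fraction
lam = 0.08           # effective precursor decay constant, 1/s
Lambda = 1.0e-4      # neutron generation time, s
rho = 0.001          # step reactivity insertion, dk/k (below prompt critical)

def solve(n0=1.0, t_end=1.0, dt=1.0e-5):
    """Integrate dn/dt = ((rho-beta)/Lambda) n + lam c  and
    dc/dt = (beta/Lambda) n - lam c with forward Euler steps."""
    n = n0
    c = beta * n0 / (lam * Lambda)   # precursors start in equilibrium with n0
    for _ in range(int(t_end / dt)):
        dn = ((rho - beta) / Lambda * n + lam * c) * dt
        dc = (beta / Lambda * n - lam * c) * dt
        n, c = n + dn, c + dc
    return n

print(solve())   # relative power after 1 s; rises above 1.0 for positive rho
```

For a positive step below prompt critical, the power first jumps by roughly beta/(beta − rho) and then rises on the stable reactor period, the qualitative behavior any point-kinetics implementation must reproduce.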
Comparison of ENDF/B-VII.1 and JEFF-3.2 in VVER-1000 operational data calculation
NASA Astrophysics Data System (ADS)
Frybort, Jan
2017-09-01
Safe operation of a nuclear reactor requires extensive calculational support. Operational data are determined by full-core calculations during the design phase of a fuel loading: the loading pattern and the design of the fuel assemblies are adjusted to meet safety requirements and to optimize reactor operation. For Czech VVER-1000 reactors, this task is performed with the nodal diffusion code ANDREA. Nuclear data for this diffusion code are prepared regularly with the lattice code HELIOS; these calculations are conducted in 2D at the fuel-assembly level. The macroscopic data can also be calculated with the Monte Carlo code Serpent, which can make use of alternative evaluated libraries. All calculations are affected by inherent uncertainties in nuclear data. It is therefore useful to compare full-core calculations based on two sets of diffusion data obtained from Serpent calculations with the ENDF/B-VII.1 and JEFF-3.2 nuclear data, including the associated decay data and fission yield libraries. The comparison is made directly on the assembly-level macroscopic data and on the resulting operational data. This study illustrates the effect of the evaluated nuclear data library on full-core calculations of a large PWR core; the level of difference that results exclusively from the choice of nuclear data helps to quantify the inherent uncertainties of such full-core calculations.
NASA Astrophysics Data System (ADS)
Ivanov, V.; Samokhin, A.; Danicheva, I.; Khrennikov, N.; Bouscuet, J.; Velkov, K.; Pasichnyk, I.
2017-01-01
This paper describes the approaches used for developing a test model of the BN-800 reactor and for validating coupled neutron-physics and thermal-hydraulic calculations. The coupled codes ATHLET 3.0 (a code for thermal-hydraulic calculation of reactor transients) and DYN3D (a three-dimensional neutron kinetics code) are used for the calculations. The main calculation results for the reactor steady state are provided. The 3D model used for the neutron calculations was developed for the BN-800 start-up core load. A homogeneous approach is used to describe the reactor assemblies. Along with the main simplifications, the main BN-800 core zones (LEZ, MEZ, HEZ, MOX, blankets) are represented. The 3D neutron-physics calculations were performed with a 28-group library based on the ENDF/B-VII.0 evaluated nuclear data; the SCALE code was used to prepare the group constants. The nodalization hydraulic model has boundary conditions on coolant mass flow rate at the core inlet and on pressure and enthalpy at the core outlet, which can be chosen depending on the reactor state. Core inlet and outlet temperatures were chosen according to the nominal reactor state. The profiling of the coolant mass flow rate through the core is based on the reactor power distribution. Test thermal-hydraulic calculations with the developed model gave acceptable results for the coolant mass flow rate distribution through the core and for the axial temperature and pressure distributions. The model will be upgraded in the future for various transient analyses in liquid-metal-cooled fast reactors of the BN type, including reactivity transients (control rod withdrawal, stop of the main circulation pump, etc.).
TORT/MCNP coupling method for the calculation of neutron flux around a core of BWR.
Kurosawa, Masahiko
2005-01-01
For the analysis of BWR neutronics performance, accurate neutron flux distributions over the in-reactor pressure vessel equipment are required, taking into account the detailed geometrical arrangement. The TORT code can calculate the neutron flux around a BWR core in a three-dimensional geometry model, but it has difficulty with fine geometrical modelling and requires large computer resources. The MCNP code, on the other hand, enables calculation of the neutron flux with a detailed geometry model, but requires very long sampling times to track a sufficient number of particles. A TORT/MCNP coupling method has therefore been developed to eliminate both problems: the TORT code calculates the angular flux distribution on the core surface, and the MCNP code uses that distribution to calculate the neutron spectrum at the points of interest. The coupling method is used in the manner of the DOT-DOMINO-MORSE code system. This TORT/MCNP coupling method was applied to calculate the neutron flux at points where induced radioactivity had been measured for 54Mn and 60Co, and radioactivity calculations based on the resulting flux were compared with the measured data.
Fuel burnup analysis for IRIS reactor using MCNPX and WIMS-D5 codes
NASA Astrophysics Data System (ADS)
Amin, E. A.; Bashter, I. I.; Hassan, Nabil M.; Mustafa, S. S.
2017-02-01
The International Reactor Innovative and Secure (IRIS) reactor is a compact power reactor with special design features, including an Integral Fuel Burnable Absorber (IFBA). The core is heterogeneous both axially and radially. This work provides a full-core burnup analysis for the IRIS reactor using the MCNPX and WIMS-D5 codes. Criticality, radial and axial power distributions, and the nuclear peaking factor at different stages of burnup were studied. Effective multiplication factor values for the core were estimated by coupling the MCNPX code with the WIMS-D5 code and compared with SAS2H/KENO-V values at different stages of burnup; the two calculations show good agreement and correlation. The radial and axial powers for the full core were also compared with published SAS2H/KENO-V results (at the beginning and end of reactor operation), and the behavior of both power distributions is quite similar to the published data. The peaking factors estimated in the present work are close to the values calculated with SAS2H/KENO-V.
An approach for coupled-code multiphysics core simulations from a common input
Schmidt, Rodney; Belcourt, Kenneth; Hooper, Russell; ...
2014-12-10
This study describes an approach for coupled-code multiphysics reactor core simulations that is being developed by the Virtual Environment for Reactor Applications (VERA) project in the Consortium for Advanced Simulation of Light-Water Reactors (CASL). In this approach a user creates a single problem description, called the “VERAIn” common input file, to define and set up the desired coupled-code reactor core simulation. A preprocessing step accepts the VERAIn file and generates a set of fully consistent input files for the different physics codes being coupled. The problem is then solved using a single-executable coupled-code simulation tool applicable to the problem, which is built using VERA infrastructure software tools and the set of physics codes required for the problem of interest. The approach is demonstrated by performing an eigenvalue and power distribution calculation of a typical three-dimensional 17 × 17 assembly with thermal–hydraulic and fuel temperature feedback. All neutronics aspects of the problem (cross-section calculation, neutron transport, power release) are solved using the Insilico code suite and are fully coupled to a thermal–hydraulic analysis calculated by the Cobra-TF (CTF) code. The single-executable coupled-code (Insilico-CTF) simulation tool is created using several VERA tools, including LIME (Lightweight Integrating Multiphysics Environment for coupling codes), DTK (Data Transfer Kit), Trilinos, and TriBITS. Parallel calculations are performed on the Titan supercomputer at Oak Ridge National Laboratory using 1156 cores, and a synopsis of the solution results and code performance is presented. Finally, ongoing development of this approach is also briefly described.
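The coupled-code pattern described above — a neutronics solver and a thermal-hydraulics solver exchanging power and temperature fields until they agree — reduces, in the simplest case, to a fixed-point (Picard) iteration. The sketch below uses toy scalar stand-ins for the physics codes (they are not Insilico or CTF), with made-up feedback coefficients, to show only the iteration pattern:

```python
# Fixed-point (Picard) coupling between a "neutronics" solve and a
# "thermal-hydraulics" solve. Both solvers are toy scalar stand-ins
# with invented coefficients; only the coupling loop is the point.
def neutronics(fuel_temp):
    # toy Doppler-like feedback: power falls as fuel temperature rises
    return 100.0 / (1.0 + 0.001 * (fuel_temp - 300.0))

def thermal_hydraulics(power):
    # toy energy balance: fuel temperature rises linearly with power
    return 300.0 + 5.0 * power

power = 100.0                        # initial power guess
temp = thermal_hydraulics(power)     # consistent initial temperature field
for iteration in range(100):
    new_power = neutronics(temp)
    if abs(new_power - power) < 1e-8:   # fields are self-consistent: stop
        break
    power = new_power
    temp = thermal_hydraulics(power)

print(round(power, 2), round(temp, 1))
```

Real coupled-code tools add field transfer between meshes (the role of DTK) and often under-relaxation or Newton-like acceleration when the plain Picard iteration converges slowly.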
NASA Astrophysics Data System (ADS)
Homma, Yuto; Moriwaki, Hiroyuki; Ohki, Shigeo; Ikeda, Kazumi
2014-06-01
This paper deals with verification of the three-dimensional triangular-prismatic discrete-ordinates transport code ENSEMBLE-TRIZ by comparison with the multi-group Monte Carlo code GMVP in a large fast breeder reactor, a 750 MWe sodium-cooled reactor. Nuclear characteristics are calculated at the beginning of cycle of the initial core and at the beginning and end of cycle of the equilibrium core. According to the calculations, the differences between the two methodologies are smaller than 0.0002 Δk in the multiplication factor, about 1% (relative) in the control rod reactivity, and 1% in the sodium void reactivity.
Development of Safety Analysis Code System of Beam Transport and Core for Accelerator Driven System
NASA Astrophysics Data System (ADS)
Aizawa, Naoto; Iwasaki, Tomohiko
2014-06-01
A safety analysis code system for the beam transport and core of an accelerator-driven system (ADS) has been developed for analyzing beam transients such as changes in the shape and position of the incident beam. The code system consists of a beam transport analysis part and a core analysis part. TRACE 3-D is employed in the beam transport part to calculate the shape and incident position of the beam at the target. In the core analysis part, the neutronics, thermal-hydraulics, and cladding failure analyses are performed with the ADS dynamics code ADSE, on the basis of an external source database calculated by PHITS and a cross-section database calculated by SRAC, together with cladding failure analysis programs for thermoelastic deformation and creep. Using this code system, beam transient analyses were performed for the ADS proposed by the Japan Atomic Energy Agency. The results show that the cladding temperature rises rapidly and plastic deformation occurs within several seconds; in addition, the cladding is predicted to fail by creep within a hundred seconds. These results show that the analyzed beam transients can cause cladding failure.
Recent improvements of reactor physics codes in MHI
NASA Astrophysics Data System (ADS)
Kosaka, Shinya; Yamaji, Kazuya; Kirimura, Kazuki; Kamiyama, Yohei; Matsumoto, Hideki
2015-12-01
This paper introduces recent improvements to the reactor physics codes of Mitsubishi Heavy Industries, Ltd. (MHI). MHI has developed a new neutronics design code system, Galaxy/Cosmo-S (GCS), for PWR core analysis. After TEPCO's Fukushima Daiichi accident, it became necessary to consider design extension conditions that had not been covered explicitly by the former safety licensing analyses. Under these circumstances, MHI made several improvements to the GCS code system. A new resonance calculation model for the lattice physics code and a homogeneous cross-section representation model for the core simulator have been developed, extending the system to a much wider range of core conditions corresponding to severe accident states such as anticipated transient without scram (ATWS) and criticality evaluation of a dried-up spent fuel pit. As a result of these improvements, the GCS code system is applicable, with good accuracy, to any core condition as long as the fuel is not damaged. This paper briefly describes the GCS code system and presents the recent development activities.
NASA Astrophysics Data System (ADS)
Susilo, J.; Suparlina, L.; Deswandri; Sunaryo, G. R.
2018-02-01
Computer programs for the analysis of PWR-type core neutronic design parameters have been used in several previous studies, which included validation of the codes against measured data and benchmark calculations. In this study, the radial power peaking factor of the AP1000 first-cycle core was validated and analyzed using the CITATION module of the SRAC2006 code system. The code had also been validated, with good results, against the criticality values of the VERA benchmark core. The AP1000 core power distribution was calculated in two-dimensional X-Y geometry using a quarter-core model. The purpose of this research is to determine the accuracy of the SRAC2006 code and to assess the safety performance of the AP1000 core in its first operating cycle. The core calculations were carried out for several conditions: without any Rod Cluster Control Assembly (RCCA), with a single RCCA bank inserted (AO, M1, M2, MA, MB, MC, MD), and with multiple RCCA banks inserted (MA+MB, MA+MB+MC, MA+MB+MC+MD, and MA+MB+MC+MD+M1). The maximum fuel-rod power factor within a fuel assembly was approximately 1.406. The analysis showed that the 2-dimensional CITATION module of SRAC2006 is accurate for the AP1000 power distribution without RCCA and with MA+MB RCCA insertion. The power peaking factors for the first operating cycle of the AP1000 core, without RCCA as well as with single and multiple RCCA insertion, remain below the safety limit (less than about 1.798). In terms of the thermal power generated by the fuel assemblies, the AP1000 core in its first operating cycle can therefore be considered safe.
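The peaking-factor check at the heart of this safety argument is simple arithmetic: the maximum relative power divided by the mean, compared against the limit. The assembly powers below are invented for illustration; only the 1.798 limit comes from the abstract.

```python
# Hypothetical relative assembly powers (made-up numbers); 1.798 is the
# safety limit quoted for the AP1000 first cycle in the abstract above.
powers = [0.92, 1.05, 1.21, 1.40, 0.98, 0.87, 1.10, 1.03]

peaking_factor = max(powers) / (sum(powers) / len(powers))
print(round(peaking_factor, 3), peaking_factor < 1.798)
```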
WWER-1000 core and reflector parameters investigation in the LR-0 reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zaritsky, S. M.; Alekseev, N. I.; Bolshagin, S. N.
2006-07-01
Measurements and calculations carried out in the core and reflector of a WWER-1000 mock-up are discussed: - determination of the pin-to-pin power distribution in the core by gamma-scanning of fuel pins, and pin-to-pin calculations with the Monte Carlo code MCU-REA and the diffusion codes MOBY-DICK (with WIMS-D4 cell constant preparation) and RADAR; - fast neutron spectrum measurements by the proton recoil method inside the experimental channel in the core and inside the channel in the baffle, and corresponding calculations in the P3S8 approximation of the discrete ordinates method with the DORT code and the BUGLE-96 library; - neutron spectrum evaluations (adjustment) in the same channels in the energy region 0.5 eV-18 MeV, based on activation and solid-state track detector measurements.
THR-TH: a high-temperature gas-cooled nuclear reactor core thermal hydraulics code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vondy, D.R.
1984-07-01
The ORNL version of PEBBLE, the (RZ) pebble bed thermal hydraulics code, has been extended for application to a prismatic gas cooled reactor core. The supplemental treatment is of one-dimensional coolant flow in up to a three-dimensional core description. Power density data from a neutronics and exposure calculation are used as the basic information for the thermal hydraulics calculation of heat removal. Two-dimensional neutronics results may be expanded for a three-dimensional hydraulics calculation. The geometric description for the hydraulics problem is the same as used by the neutronics code. A two-dimensional thermal cell model is used to predict temperatures in the fuel channel. The capability is available in the local BOLD VENTURE computation system for reactor core analysis with capability to account for the effect of temperature feedback by nuclear cross section correlation. Some enhancements have also been added to the original code to add pebble bed modeling flexibility and to generate useful auxiliary results. For example, an estimate is made of the distribution of fuel temperatures based on average and extreme conditions regularly calculated at a number of locations.
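The one-dimensional coolant-flow treatment described above amounts, per channel, to marching an energy balance up the core: each axial node adds q′·dz/(ṁ·cp) to the coolant temperature. A sketch of that balance, with all numbers (power shape, flow rate, geometry) invented for illustration rather than taken from THR-TH:

```python
import math

# One-dimensional coolant heat-up along a single channel, the per-channel
# energy balance a thermal-hydraulics code performs. All numbers are
# illustrative, not from the THR-TH report.
n = 20                              # axial nodes
height = 4.0                        # active height, m
dz = height / n
mdot, cp = 0.3, 5190.0              # helium mass flow (kg/s), specific heat (J/kg/K)
T = 523.0                           # inlet coolant temperature, K

temps = []
for k in range(n):
    q = 20e3 * math.sin(math.pi * (k + 0.5) / n)   # linear heat rate, W/m
    T += q * dz / (mdot * cp)       # energy balance: dT = q' dz / (mdot cp)
    temps.append(T)

print(round(temps[-1] - 523.0, 1))  # total coolant temperature rise, K
```

With the axial power profile known from the neutronics step, the same loop yields the node-wise coolant temperatures that feed the thermal cell model for fuel temperatures.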
Interface requirements to couple thermal-hydraulic codes to severe accident codes: ATHLET-CD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trambauer, K.
1997-07-01
The system code ATHLET-CD is being developed by GRS in cooperation with IKE and IPSN. Its field of application comprises the whole spectrum of leaks and large breaks, as well as operational and abnormal transients, for LWRs and VVERs. At present the analyses cover the in-vessel thermal-hydraulics, the early phases of core degradation, and the release of fission products and aerosols from the core and their transport in the reactor coolant system. The aim of the code development is to extend the simulation of core degradation up to failure of the reactor pressure vessel and to cover all physically reasonable accident sequences for western and eastern LWRs, including RBMKs. The ATHLET-CD structure is highly modular in order to include a manifold spectrum of models and to offer an optimum basis for further development. The code consists of four general modules describing the reactor coolant system thermal-hydraulics, the core degradation, the fission product release from the core, and the fission product and aerosol transport. Each general module consists of basic modules corresponding to the process to be simulated or to a specific purpose. Besides the code structure based on the physical modelling, the code follows four strictly separated steps during the course of a calculation: (1) input of structure, geometrical data, and initial and boundary conditions; (2) initialization of derived quantities; (3) steady-state calculation or input of restart data; and (4) transient calculation. In this paper, the transient solution method is briefly presented and the coupling methods are discussed. Three aspects have to be considered for the coupling of different modules in one code system: first, the conservation of mass and energy in the different subsystems (fluid, structures, fission products and aerosols); second, the convergence of the numerical solution and the stability of the calculation; and third, code performance and running time.
Systematic void fraction studies with RELAP5, FRANCESCA and HECHAN
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stosic, Z.; Preusche, G.
1996-08-01
To extend the application of standard thermal-hydraulic codes beyond their native capabilities, i.e. to couple them with a one- and/or three-dimensional core kinetics model, the void fraction transferred from the thermal-hydraulics to the core model plays a determining role in the normal operating range at high core flow, since the generated heat and the axial power profiles are direct functions of the void distribution in the core. It is therefore essential to know whether the void-quality models in the programs to be coupled are compatible enough to allow the interactive exchange of data based on these constitutive void-quality relations. The present void fraction study was performed to provide a basis for deciding whether a transient core simulation using RELAP5 void fractions can calculate the axial power shapes adequately. To that end, the void fractions calculated with RELAP5 are compared with those calculated by FRANCESCA, the BWR safety code for licensing, and by HECHAN, a best-estimate model for pre- and post-dryout calculation in a BWR heated channel. In addition, a comparison with standard experimental void-quality benchmark tube data is performed for the HECHAN code.
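The constitutive void-quality relations at issue can be illustrated by the simplest member of the family, the homogeneous-equilibrium (slip ratio 1) model. The saturation densities below are rough values for BWR pressures, chosen for illustration and not taken from any of the three codes:

```python
# Homogeneous-equilibrium void-quality relation with an optional slip ratio.
# Densities are rough saturated-water values near 7 MPa; illustrative only.
rho_l, rho_g = 740.0, 36.0     # saturated liquid / vapor density, kg/m^3

def void_fraction(x, slip=1.0):
    """Void fraction from flow quality x; slip=1.0 is the homogeneous model."""
    if x <= 0.0:
        return 0.0
    return 1.0 / (1.0 + slip * ((1.0 - x) / x) * (rho_g / rho_l))

for x in (0.01, 0.05, 0.15):
    print(x, round(void_fraction(x), 3))
```

Two codes using different slip ratios or drift-flux correlations map the same quality to different void fractions, which is precisely the compatibility question the study examines.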
MPACT Standard Input User's Manual, Version 2.2.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collins, Benjamin S.; Downar, Thomas; Fitzgerald, Andrew
The MPACT (Michigan PArallel Characteristics-based Transport) code is designed to perform high-fidelity light water reactor (LWR) analysis using whole-core pin-resolved neutron transport calculations on modern parallel-computing hardware. The code consists of several libraries which provide the functionality necessary to solve steady-state eigenvalue problems. Several transport capabilities are available within MPACT, including both 2-D and 3-D Method of Characteristics (MOC). A three-dimensional whole-core solution based on the 2D/1D solution method provides the capability for full-core depletion calculations.
NASA Astrophysics Data System (ADS)
Karriem, Veronica V.
Nuclear reactor design incorporates the study and application of nuclear physics, nuclear thermal hydraulics, and nuclear safety. Theoretical models and numerical methods implemented in computer programs are used to analyze and design nuclear reactors. The focus of this PhD study is the development of an advanced high-fidelity multi-physics code system to perform reactor core analysis for design and safety evaluations of research TRIGA-type reactors. The fuel management and design code system TRIGSIMS was further developed to serve as a reactor design and analysis code system for the Pennsylvania State Breazeale Reactor (PSBR). TRIGSIMS, which is currently in use at the PSBR, is a fuel management tool that incorporates the depletion code ORIGEN-S (part of the SCALE system) and the Monte Carlo neutronics solver MCNP. The diffusion theory code ADMARC-H is used within TRIGSIMS to accelerate the MCNP calculations. TRIGSIMS manages the fuel isotopic data and stores it for future burnup calculations. The contribution of this work is the development of an improved version of TRIGSIMS, named TRIGSIMS-TH. TRIGSIMS-TH incorporates a thermal-hydraulic module based on the advanced sub-channel code COBRA-TF (CTF). CTF provides the temperature feedback needed in the multi-physics calculations as well as the thermal-hydraulic modeling capability for the reactor core. The temperature feedback model uses the CTF-provided local moderator and fuel temperatures in the cross-section modeling for the ADMARC-H and MCNP calculations. To perform efficient critical control rod calculations, a methodology for applying a control rod position was implemented in TRIGSIMS-TH, making this code system a modeling and design tool for future core loadings. The new TRIGSIMS-TH is a computer program that interlinks various other functional reactor analysis tools: MCNP5, ADMARC-H, ORIGEN-S, and CTF.
CTF was coupled with both MCNP and ADMARC-H to provide the heterogeneous temperature distribution throughout the core. Each of these codes is written in its own computer language, performs its own function, and outputs its own set of data; TRIGSIMS-TH provides effective data manipulation and transfer between the different codes. With the implementation of the feedback and control-rod-position modeling methodologies, the TRIGSIMS-TH calculations are more accurate and in better agreement with measured data. The PSBR is unique in many ways, and there are no "off-the-shelf" codes that can model this design in its entirety; in particular, the PSBR has an open core design cooled by natural convection. Combining several codes into a unique system brings many challenges and requires substantial knowledge of both the operation and the core design of the PSBR. The reactor has been in operation for decades, and there is a fair amount of prior work on both PSBR thermal hydraulics and neutronics. Measured data are also available for various core loadings and can be used for validation activities. The previous studies and developments in PSBR modeling also serve as a guide for assessing the findings of the work herein. In order to incorporate new methods and codes into the existing TRIGSIMS, various components of the code were re-evaluated to assure the accuracy and efficiency of the existing CTF/MCNP5/ADMARC-H multi-physics coupling. A new set of ADMARC-H diffusion coefficients and cross sections was generated using the SERPENT code; this was needed because the previous data had not been generated with thermal-hydraulic feedback and the all-rods-out (ARO) position had been used as the critical rod position. The B4C data were re-evaluated for this update, and the data exchange between ADMARC-H and MCNP5 was modified. The basic core model was given the flexibility to accommodate various changes, and this feature was implemented in TRIGSIMS-TH. The PSBR core in the new code model can be expanded and changed, which allows the new code to be used as a modeling tool for the design and analysis of future core loadings.
Neutron flux and power in RTP core-15
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rabir, Mohamad Hairie, E-mail: m-hairie@nuclearmalaysia.gov.my; Zin, Muhammad Rawi Md; Usang, Mark Dennis
The PUSPATI TRIGA Reactor achieved initial criticality on June 28, 1982. The reactor is designed to effectively support various fields of basic nuclear research, manpower training, and production of radioisotopes. This paper describes reactor parameter calculations for the PUSPATI TRIGA Reactor (RTP), focusing on the application of a developed 3D reactor model to criticality calculations and to the analysis of the power and neutron flux distributions of the TRIGA core. The 3D continuous-energy Monte Carlo code MCNP was used to develop a versatile and accurate full model of the TRIGA reactor. The model represents in detail all important components of the core, with essentially no physical approximation. The consistency and accuracy of the developed RTP MCNP model were established by comparing calculations with the available experimental results and with TRIGLAV code calculations.
SASS-1--SUBASSEMBLY STRESS SURVEY CODE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Friedrich, C.M.
1960-01-01
SASS-1, an IBM-704 FORTRAN code, calculates pressure, thermal, and combined stresses in a nuclear reactor core subassembly. In addition to cross- section stresses, the code calculates axial shear stresses needed to keep plane cross sections plane under axial variations of temperature. The input and output nomenclature, arrangement, and formats are described. (B.O.G.)
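The thermal-stress part of such a survey reduces, in the simplest bounding case, to the constrained-plate formula σ = EαΔT/(1 − ν). The sketch below uses generic stainless-steel constants, not values from the SASS-1 report:

```python
# Bounding thermal stress for a fully constrained plate:
#   sigma = E * alpha * dT / (1 - nu)
# Generic stainless-steel constants; illustrative only, not SASS-1 input.
E = 193e9       # Young's modulus, Pa
alpha = 17e-6   # coefficient of thermal expansion, 1/K
nu = 0.3        # Poisson's ratio
dT = 50.0       # temperature change of the constrained section, K

sigma = E * alpha * dT / (1.0 - nu)
print(round(sigma / 1e6, 1))   # stress in MPa
```

A stress-survey code evaluates this kind of expression section by section, then superposes the pressure stresses and the axial shear needed to keep plane cross sections plane.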
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bai, D.; Levine, S.L.; Luoma, J.
1992-01-01
The Three Mile Island Unit 1 core reloads have been designed using fast but accurate scoping codes, PSUI-LEOPARD and ADMARC. PSUI-LEOPARD has been normalized to EPRI-CPM2 results and used to calculate the two-group constants, whereas ADMARC is a modern two-dimensional, two-group diffusion theory nodal code. Accuracy problems were encountered for cycles 8 and higher as the core lifetime was extended beyond 500 effective full-power days. This is because the cores more heavily loaded in both 235U and 10B have harder neutron spectra, which changes the transport effect in the baffle reflector region, and the burnable poison (BP) simulations were not accurate enough for cores containing the increased amount of 10B required in the BP rods. In the authors' study, a technique has been developed to account for the change in the transport effect in the baffle region by modifying the fast-neutron diffusion coefficient as a function of cycle length and core exposure or burnup. A more accurate BP simulation method is also developed, using integral transport theory and CPM2 data, to calculate the BP contribution to the equivalent fuel assembly (supercell) two-group constants. The net result is that the accuracy of the scoping codes is as good as that produced by CASMO/SIMULATE or CPM2/SIMULATE when compared with measured data.
a Dosimetry Assessment for the Core Restraint of AN Advanced Gas Cooled Reactor
NASA Astrophysics Data System (ADS)
Thornton, D. A.; Allen, D. A.; Tyrrell, R. J.; Meese, T. C.; Huggon, A. P.; Whiley, G. S.; Mossop, J. R.
2009-08-01
This paper describes calculations of neutron damage rates within the core restraint structures of Advanced Gas Cooled Reactors (AGRs). Using advanced features of the Monte Carlo radiation transport code MCBEND, and neutron source data from core follow calculations performed with the reactor physics code PANTHER, a detailed model of the reactor cores of two of British Energy's AGR power plants has been developed for this purpose. Because there are no relevant neutron fluence measurements directly supporting this assessment, results of benchmark comparisons and successful validation of MCBEND for Magnox reactors have been used to estimate systematic and random uncertainties on the predictions. In particular, it has been necessary to address the known under-prediction of lower energy fast neutron responses associated with the penetration of large thicknesses of graphite.
NASA Astrophysics Data System (ADS)
Sboev, A. G.; Ilyashenko, A. S.; Vetrova, O. A.
1997-02-01
The method of buckling evaluation implemented in the Monte Carlo code MCS is described. The method was applied to a calculational analysis of the well-known light-water experiments TRX-1 and TRX-2. The analysis shows no agreement among Monte Carlo results obtained in different ways: MCS calculations with the given experimental bucklings; MCS calculations with bucklings evaluated from full-core MCS direct simulations; full-core MCNP and MCS direct simulations; and MCNP and MCS calculations in which the cell results are corrected by coefficients that take into account the leakage from the core. The buckling values evaluated by full-core MCS calculations also differ from the experimental ones, especially for TRX-1, where the difference corresponds to a 0.5 percent increase in the Keff value.
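For a bare finite cylinder, the geometric buckling being evaluated has the closed form B² = (2.405/R)² + (π/H)², with R and H the extrapolated radius and height. The dimensions below are arbitrary illustrative values, not TRX core dimensions:

```python
import math

# Geometric buckling of a bare finite cylinder; 2.405 is the first zero of
# the Bessel function J0. R and H are arbitrary illustrative extrapolated
# dimensions, not taken from the TRX experiments.
R, H = 0.4, 1.0    # extrapolated radius and height, m
Bg2 = (2.405 / R) ** 2 + (math.pi / H) ** 2
print(round(Bg2, 2))   # geometric buckling, m^-2
```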
PRIZMA predictions of in-core detection indications in the VVER-1000 reactor
NASA Astrophysics Data System (ADS)
Kandiev, Yadgar Z.; Kashayeva, Elena A.; Malyshin, Gennady N.; Modestov, Dmitry G.; Khatuntsev, Kirill E.
2014-06-01
The paper describes calculations performed with the PRIZMA code to predict the indications of in-core rhodium detectors in the VVER-1000 reactor for some core fragments, with allowance for fuel and rhodium burnup.
From core to coax: extending core RF modelling to include SOL, Antenna, and PFC
NASA Astrophysics Data System (ADS)
Shiraiwa, Syun'ichi
2017-10-01
A new technique for the calculation of RF waves in toroidal geometry enables the simultaneous incorporation of antenna geometry, plasma facing components (PFCs), the scrape-off layer (SOL), and core propagation. Traditionally, core RF wave propagation and antenna coupling have been calculated separately, both using rather simplified SOL plasmas. The new approach instead captures wave propagation in the SOL and its interactions with non-conforming PFCs, permitting self-consistent calculation of core absorption and edge power loss, as well as investigation of far- and near-field impurity generation from RF sheaths and of breakdown issues arising from antenna electric fields. Our approach combines the field solutions obtained from a core spectral code with a hot plasma dielectric and an edge FEM code using a cold plasma approximation, coupled via a surface admittance-like matrix. The approach was verified using the TORIC core ICRF spectral code and the commercial COMSOL FEM package, and was extended to a 3D torus using the open-source scalable MFEM library. The simulation results revealed that as the core wave damping gets weaker, the wave absorption in the edge can become non-negligible. The three-dimensional capability with a non-axisymmetric edge is being applied to study the difference in antenna characteristics between the field-aligned and toroidally aligned antennas on Alcator C-Mod, as well as surface wave excitation on NSTX-U. Work supported by the U.S. DoE, OFES, using User Facility Alcator C-Mod, DE-FC02-99ER54512 and Contract No. DE-FC02-01ER54648.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salko, Robert K; Sung, Yixing; Kucukboyaci, Vefa
The Virtual Environment for Reactor Applications core simulator (VERA-CS) being developed by the Consortium for Advanced Simulation of Light Water Reactors (CASL) includes coupled neutronics, thermal-hydraulics, and fuel temperature components with an isotopic depletion capability. The neutronics capability is based on MPACT, a three-dimensional (3-D) whole-core transport code. The thermal-hydraulics and fuel temperature models are provided by the COBRA-TF (CTF) subchannel code. As part of the CASL development program, the VERA-CS (MPACT/CTF) code system was applied to model and simulate the reactor core response with respect to departure from nucleate boiling ratio (DNBR) at the limiting time step of a postulated pressurized water reactor (PWR) main steamline break (MSLB) event initiated at hot zero power (HZP), either with offsite power available and the reactor coolant pumps in operation (high-flow case) or without offsite power, where the reactor core is cooled through natural circulation (low-flow case). The VERA-CS simulation was based on core boundary conditions from RETRAN-02 system transient calculations and STAR-CCM+ computational fluid dynamics (CFD) core inlet distribution calculations. The evaluation indicated that the VERA-CS code system is capable of modeling and simulating quasi-steady-state reactor core response under steamline break (SLB) accident conditions, that the results are insensitive to uncertainties in the inlet flow distributions from the CFD simulations, and that the high-flow case is more DNB limiting than the low-flow case.
Etude des performances de solveurs deterministes sur un coeur rapide a caloporteur sodium
NASA Astrophysics Data System (ADS)
Bay, Charlotte
Next-generation reactors, in particular the SFR design, pose a true challenge for current codes and solvers, which are used mainly for thermal cores. There is no guarantee that their capabilities carry over directly to a fast neutron spectrum or to major design differences. It is therefore necessary to assess the validity of the solvers and their potential shortfalls for fast neutron reactors. As part of an internship with CEA (France), and at the instigation of the EPM Nuclear Institute, this study considers the following codes: DRAGON/DONJON, ERANOS, PARIS and APOLLO3. Accuracy was assessed against the Monte Carlo code TRIPOLI4. Only the core calculation was of interest, namely the precision and speed of the numerical methods. The lattice calculation was outside the scope of the study, i.e. nuclear data, self-shielding and isotopic compositions, as were burnup and time-evolution effects. The study consists of two main steps: first, evaluating the sensitivity of each solver to its calculation parameters in order to obtain an optimal calculation set; then, comparing the solvers in terms of precision and speed by collecting the usual quantities (effective multiplication factor, reaction rate maps) as well as quantities crucial to SFR design, namely control rod worth and the sodium void effect. Calculation time is also a key factor. Any conclusions or recommendations drawn from this study should first of all be applied within similar frameworks, that is, small fast neutron cores with hexagonal geometry. Any extension to large cores would have to be demonstrated in follow-on work.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jung, Y. S.; Joo, H. G.; Yoon, J. I.
The nTRACER direct whole-core transport code, employing a planar-MOC-based 3-D calculation method, the subgroup method for resonance treatment, the Krylov matrix exponential method for depletion, and a subchannel thermal/hydraulic calculation solver, was developed for practical high-fidelity simulation of power reactors. Its accuracy and performance are verified by comparison with measurement data obtained for three pressurized water reactor cores. It is demonstrated that accurate and detailed multi-physics simulation of power reactors is practically realizable without any prior calculations or adjustments. (authors)
Implicit time-integration method for simultaneous solution of a coupled non-linear system
NASA Astrophysics Data System (ADS)
Watson, Justin Kyle
Historically, large physical problems have been divided into smaller problems based on the physics involved. Reactor safety analysis is no different. The analysis of a nuclear reactor for design basis accidents is performed by a handful of computer codes, each solving a portion of the problem. The reactor thermal-hydraulic response to an event is determined using a system code like the TRAC/RELAP Advanced Computational Engine (TRACE). The core power response to the same accident scenario is determined using a core physics code like the Purdue Advanced Core Simulator (PARCS). Containment response to the reactor depressurization in a Loss Of Coolant Accident (LOCA) type event is calculated by a separate code. Sub-channel analysis is performed with yet another computer code. This is just a sample of the computer codes used to solve the overall problem of nuclear reactor design basis accidents. Traditionally, each of these codes operates independently of the others, using only the global results from one calculation as boundary conditions to another. Industry's drive to uprate reactor power has motivated analysts to move from a conservative approach to design basis accidents towards best-estimate methods. To achieve a best-estimate calculation, efforts have been aimed at coupling the individual physics models to improve the accuracy of the analysis and reduce margins. The current coupling techniques are sequential in nature: during a calculation time step, data are passed between the two codes, and the individual codes solve their portions of the calculation and converge to a solution before the calculation is allowed to proceed to the next time step. This thesis presents a fully implicit method of simultaneously solving the neutron balance equations, heat conduction equations and the constitutive fluid dynamics equations. It discusses the problems involved in coupling different physics phenomena within multi-physics codes and presents a solution to these problems.
The thesis also outlines the basic concepts behind the nodal balance equations, heat transfer equations and thermal-hydraulic equations, which are coupled to form a fully implicit nonlinear system of equations. Coupling separate physics models to solve a larger problem and improve the accuracy and efficiency of a calculation is not a new idea; implementing the models implicitly and solving the system simultaneously is, and the application to reactor safety codes, with thermal-hydraulics and neutronics codes on realistic problems, has not been done before. The coupling technique described in this thesis is applicable to other similar coupled thermal-hydraulic and core physics reactor safety codes. The technique is demonstrated using coupled input decks to show that the system is solved correctly, and then verified using two derivative test problems based on international benchmarks: the OECD/NRC Three Mile Island (TMI) Main Steam Line Break (MSLB) problem (representative of pressurized water reactor analysis) and the OECD/NRC Peach Bottom (PB) Turbine Trip (TT) benchmark (representative of boiling water reactor analysis).
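The sequential-versus-simultaneous distinction above can be illustrated with a toy coupled system: a minimal point-power equation with temperature feedback and a lumped heat balance, advanced one implicit-Euler step and solved simultaneously by Newton's method. All parameter values and function names here are illustrative assumptions, not taken from TRACE or PARCS.

```python
import math

# Hypothetical toy model: point power with linear temperature feedback,
# plus a lumped heat balance. Values are illustrative only.
RHO0, ALPHA, LAMBDA = 0.0, -2.0e-5, 1.0e-4   # reactivity, feedback, gen. time
C, H, TC, T0 = 100.0, 5.0, 290.0, 300.0      # heat capacity, cooling, ref temps

def residual(p_new, t_new, p_old, t_old, dt):
    """Implicit-Euler residuals of the coupled system F(x) = 0."""
    rho = RHO0 + ALPHA * (t_new - T0)
    g1 = p_new - p_old - dt * (rho / LAMBDA) * p_new
    g2 = t_new - t_old - dt * (p_new - H * (t_new - TC)) / C
    return g1, g2

def newton_step(p_old, t_old, dt, tol=1e-10, itmax=50):
    """Advance one time step, solving both equations simultaneously."""
    p, t = p_old, t_old                       # initial guess: previous state
    for _ in range(itmax):
        g1, g2 = residual(p, t, p_old, t_old, dt)
        if math.hypot(g1, g2) < tol:
            break
        eps = 1e-7                            # finite-difference 2x2 Jacobian
        g1p, g2p = residual(p + eps, t, p_old, t_old, dt)
        g1t, g2t = residual(p, t + eps, p_old, t_old, dt)
        a, b = (g1p - g1) / eps, (g1t - g1) / eps
        c, d = (g2p - g2) / eps, (g2t - g2) / eps
        det = a * d - b * c                   # direct 2x2 solve (Cramer)
        p -= ( d * g1 - b * g2) / det
        t -= (-c * g1 + a * g2) / det
    return p, t

p1, t1 = newton_step(100.0, 300.0, dt=0.01)   # power rho=0, heat-up begins
```

A sequential (operator-split) scheme would instead freeze temperature while updating power and vice versa; the simultaneous Newton solve drives both residuals to zero within the same iteration.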
Development of 3D pseudo pin-by-pin calculation methodology in ANC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, B.; Mayhue, L.; Huria, H.
2012-07-01
Advanced core and fuel assembly designs have been developed to improve operational flexibility and economic performance and to further enhance the safety features of nuclear power plants. The simulation of these new designs, along with strongly heterogeneous fuel loading, has brought new challenges to the reactor physics methodologies currently employed in the industrial codes for core analyses. Control rod insertion during normal operation is one operational feature of the AP1000® plant, the Westinghouse next-generation Pressurized Water Reactor (PWR) design. This feature improves operational flexibility and efficiency but significantly challenges conventional reactor physics methods, especially in pin power calculations. The mixed loading of fuel assemblies with significantly different neutron spectra causes a strong interaction between fuel assembly types that is not fully captured by the current core design codes. To overcome the weaknesses of the conventional methods, Westinghouse has developed a state-of-the-art 3D Pin-by-Pin Calculation Methodology (P3C) and successfully implemented it in the Westinghouse core design code ANC. The new methodology has been qualified and licensed for pin power prediction. The 3D P3C methodology, along with its application and validation, is discussed in this paper. (authors)
Hybrid parallel code acceleration methods in full-core reactor physics calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Courau, T.; Plagne, L.; Ponicot, A.
2012-07-01
When dealing with nuclear reactor calculation schemes, three-dimensional (3D) transport-based reference solutions are essential for both validation and optimization purposes. Considering a benchmark problem, this work investigates the potential of discrete ordinates (Sn) transport methods applied to 3D pressurized water reactor (PWR) full-core calculations. First, the benchmark problem is described. It involves a pin-by-pin description of a 3D PWR first core and uses an 8-group cross-section library prepared with the DRAGON cell code. Then, a convergence analysis is performed using the PENTRAN parallel Sn Cartesian code. It discusses the spatial refinement and the associated angular quadrature required to properly describe the problem physics. It also shows that initializing the Sn solution with the EDF SPN solver COCAGNE reduces the number of iterations required to converge by nearly a factor of 6. Using a best-estimate model, PENTRAN results are then compared to multigroup Monte Carlo results obtained with the MCNP5 code. Good consistency is observed between the two methods (Sn and Monte Carlo), with discrepancies of less than 25 pcm for keff, and less than 2.1% and 1.6% for the flux at the pin-cell level and for the pin-power distribution, respectively. (authors)
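As a note on units, the 25 pcm figure quoted above is a reactivity difference between two keff estimates. A minimal sketch of that conversion; the keff values below are made up for illustration, not the benchmark's actual results:

```python
def dk_pcm(k_ref, k_test):
    """Reactivity difference rho_test - rho_ref in pcm, using rho = 1 - 1/k.

    Equals (1/k_ref - 1/k_test) * 1e5; 1 pcm = 1e-5 in reactivity."""
    return (1.0 / k_ref - 1.0 / k_test) * 1.0e5

# Illustrative keff values only (e.g. a Monte Carlo vs. an Sn estimate):
diff = dk_pcm(1.00250, 1.00225)   # about -25 pcm
```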
Measurement and simulation of thermal neutron flux distribution in the RTP core
NASA Astrophysics Data System (ADS)
Rabir, Mohamad Hairie B.; Jalal Bayar, Abi Muttaqin B.; Hamzah, Na'im Syauqi B.; Mustafa, Muhammad Khairul Ariff B.; Karim, Julia Bt. Abdul; Zin, Muhammad Rawi B. Mohamed; Ismail, Yahya B.; Hussain, Mohd Huzair B.; Mat Husin, Mat Zin B.; Dan, Roslan B. Md; Ismail, Ahmad Razali B.; Husain, Nurfazila Bt.; Jalil Khan, Zareen Khan B. Abdul; Yakin, Shaiful Rizaide B. Mohd; Saad, Mohamad Fauzi B.; Masood, Zarina Bt.
2018-01-01
The in-core thermal neutron flux distribution was determined using measurement and simulation methods for the Malaysian PUSPATI TRIGA Reactor (RTP). In this work, online thermal neutron flux measurement using a Self-Powered Neutron Detector (SPND) was performed to verify and validate the computational methods for neutron flux calculation in RTP. The experimental results were used to validate calculations performed with the Monte Carlo code MCNP. The detailed in-core neutron flux distributions were estimated using the MCNP mesh tally method. The neutron flux mapping obtained revealed the heterogeneous configuration of the core. Based on both measurement and simulation, the thermal flux profile peaked at the centre of the core and gradually decreased towards the outer side of the core. The results show relatively good agreement between calculation and measurement, with both showing the same radial thermal flux profile inside the core; the MCNP model overestimates, with a maximum discrepancy of around 20% relative to the SPND measurement. As the model also predicts the neutron flux distribution in the core well, it can be used for characterization of the full core, i.e. neutron flux and spectrum calculations, dose rate calculations, reaction rate calculations, etc.
Preparation of macroconstants to simulate the core of the VVER-1000 reactor
NASA Astrophysics Data System (ADS)
Seleznev, V. Y.
2017-01-01
A dynamic model is used in simulators of the VVER-1000 reactor for training operating staff and students. The neutron-physical characteristics are simulated with the DYNCO code, which performs calculations of stationary, transient and emergency processes in real time for different geometries of the reactor lattice [1]. To perform calculations with this code, macroconstants must be prepared for each FA. One way of obtaining macroconstants is to use the WIMS code, which is based on its own 69-group macroconstants library. This paper presents the results of FA calculations obtained with the WIMS code for the VVER-1000 reactor with different fuel and coolant parameters, as well as the method of selecting energy groups for the subsequent calculation of macroconstants.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farmer, M. T.
MELTSPREAD3 is a transient one-dimensional computer code that has been developed to predict the gravity-driven flow and freezing behavior of molten reactor core materials (corium) in containment geometries. Predictions can be made for corium flowing across surfaces under either dry or wet cavity conditions. The spreading surfaces that can be selected are steel, concrete, a user-specified material (e.g., a ceramic), or an arbitrary combination thereof. The corium can have a wide range of compositions of reactor core materials that includes distinct oxide phases (predominantly Zr and steel oxides) plus metallic phases (predominantly Zr and steel). The code requires input that describes the containment geometry, melt “pour” conditions, and cavity atmospheric conditions (i.e., pressure, temperature, and cavity flooding information). For cases in which the cavity contains a preexisting water layer at the time of RPV failure, melt jet breakup and particle bed formation can be calculated mechanistically given the time-dependent melt pour conditions (input data) as well as the heatup and boiloff of water in the melt impingement zone (calculated). For core debris impacting either the containment floor or previously spread material, the code calculates the transient hydrodynamics and heat transfer which determine the spreading and freezing behavior of the melt. The code predicts conditions at the end of the spreading stage, including melt relocation distance, depth and material composition profiles, substrate ablation profile, and wall heatup. Code output can be used as input to other models such as CORQUENCH that evaluate long-term core-concrete interaction behavior following the transient spreading stage. MELTSPREAD3 was originally developed to investigate BWR Mark I liner vulnerability, but has been substantially upgraded and applied to other reactor designs (e.g., the EPR), and more recently to the plant accidents at Fukushima Daiichi. The most recent round of improvements, documented in this report, have been specifically implemented to support industry in developing Severe Accident Water Management (SAWM) strategies for Boiling Water Reactors.
Improvement of Speckle Contrast Image Processing by an Efficient Algorithm.
Steimers, A; Farnung, W; Kohl-Bareis, M
2016-01-01
We demonstrate an efficient algorithm for the temporal- and spatial-based calculation of speckle contrast for the imaging of blood flow by laser speckle contrast analysis (LASCA). It reduces the numerical complexity of the necessary calculations, facilitates multi-core and many-core implementations of the speckle analysis, and makes temporal or spatial resolution independent of SNR. The new algorithm was evaluated for both spatially and temporally based analysis of speckle patterns with different image sizes and numbers of recruited pixels as sequential, multi-core and many-core code.
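For reference, spatial LASCA reduces to computing the contrast K = sigma/mu over a small sliding window of the raw speckle image. The sketch below is the plain textbook definition in pure Python, a naive reference implementation, not the paper's optimized algorithm.

```python
import math

def spatial_contrast(img, win=3):
    """Spatial speckle contrast K = sigma/mu over a win x win sliding window.

    `img` is a list of rows of intensity values. Border pixels, where the
    window does not fit, are left at 0.0. Naive O(N * win^2) reference."""
    h, w, r = len(img), len(img[0]), win // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(r, h - r):
        for x in range(r, w - r):
            vals = [img[y + dy][x + dx]
                    for dy in range(-r, r + 1) for dx in range(-r, r + 1)]
            mu = sum(vals) / len(vals)
            var = sum((v - mu) ** 2 for v in vals) / len(vals)
            out[y][x] = math.sqrt(var) / mu if mu > 0 else 0.0
    return out
```

A uniform region yields K = 0, while fully developed speckle approaches K = 1; the paper's contribution is reorganizing these sums so they can be reused across windows and across cores.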
Revisiting Molecular Dynamics on a CPU/GPU system: Water Kernel and SHAKE Parallelization.
Ruymgaart, A Peter; Elber, Ron
2012-11-13
We report Graphics Processing Unit (GPU) and OpenMP parallel implementations of water-specific force calculations and of bond constraints for use in Molecular Dynamics simulations. We focus on a typical laboratory computing environment in which a CPU with a few cores is attached to a GPU. We discuss the design of the code in detail and illustrate performance comparable to highly optimized codes such as GROMACS. Besides speed, our code shows excellent energy conservation. Utilization of water-specific lists allows the efficient calculation of non-bonded interactions that include water molecules and results in a speed-up factor of more than 40 on the GPU compared to code optimized on a single CPU core for systems larger than 20,000 atoms. This is up four-fold from the factor of 10 reported for our initial GPU implementation, which did not include water-specific code. Another optimization is the implementation of constrained dynamics entirely on the GPU. The routine, which enforces constraints on all bonds, runs in parallel on multiple OpenMP cores or entirely on the GPU. It is based on a Conjugate Gradient solution for the Lagrange multipliers (CG SHAKE). The GPU implementation is partially in double precision and requires no communication with the CPU during the execution of the SHAKE algorithm. The (parallel) implementation of SHAKE allows an increase of the time step to 2.0 fs while maintaining excellent energy conservation. Interestingly, CG SHAKE is faster than the usual bond relaxation algorithm even on a single core if high accuracy is required. The significant speedup of the optimized components transfers the computational bottleneck of the MD calculation to the reciprocal part of the Particle Mesh Ewald (PME) sum.
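The linear-algebra step at the heart of a CG-SHAKE-style constraint solver is a conjugate gradient solve of a symmetric positive-definite system for the Lagrange multipliers. Below is a generic, pure-Python CG sketch; the 2x2 example matrix is illustrative only, not a real constraint matrix from the paper.

```python
def conjugate_gradient(matvec, b, tol=1e-10, itmax=100):
    """Plain conjugate gradient for an SPD system A x = b.

    `matvec` applies A to a vector; in CG SHAKE, A would be the constraint
    matrix and x the Lagrange multipliers (schematic only)."""
    x = [0.0] * len(b)
    r = list(b)                              # residual r = b - A*0
    p = list(r)                              # initial search direction
    rs = sum(v * v for v in r)
    for _ in range(itmax):
        ap = matvec(p)
        alpha = rs / sum(pi * api for pi, api in zip(p, ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, ap)]
        rs_new = sum(v * v for v in r)
        if rs_new < tol * tol:               # converged
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# Small illustrative SPD system: A = [[4,1],[1,3]], b = [1,2]
A = [[4.0, 1.0], [1.0, 3.0]]
mv = lambda v: [sum(a * vi for a, vi in zip(row, v)) for row in A]
x = conjugate_gradient(mv, [1.0, 2.0])
```

Because each iteration needs only matrix-vector and vector-vector products, the kernel parallelizes naturally across OpenMP threads or GPU blocks, which is what makes the all-GPU SHAKE implementation practical.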
Posttest analysis of the FFTF inherent safety tests
DOE Office of Scientific and Technical Information (OSTI.GOV)
Padilla, A. Jr.; Claybrook, S.W.
Inherent safety tests were performed during 1986 in the 400-MW (thermal) Fast Flux Test Facility (FFTF) reactor to demonstrate the effectiveness of an inherent shutdown device called the gas expansion module (GEM). The GEM device provided a strong negative reactivity feedback during loss-of-flow conditions by increasing the neutron leakage as a result of an expanding gas bubble. The best-estimate pretest calculations for these tests were performed using the IANUS plant analysis code (Westinghouse Electric Corporation proprietary code) and the MELT/SIEX3 core analysis code. These two codes were also used to perform the required operational safety analyses for the FFTF reactor and plant. Although it was intended to also use the SASSYS systems (core and plant) analysis code, the calibration of the SASSYS code for FFTF core and plant analysis was not completed in time to perform pretest analyses. The purpose of this paper is to present the results of the posttest analysis of the 1986 FFTF inherent safety tests using the SASSYS code.
Improvements and applications of COBRA-TF for stand-alone and coupled LWR safety analyses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Avramova, M.; Cuervo, D.; Ivanov, K.
2006-07-01
The advanced thermal-hydraulic subchannel code COBRA-TF has recently been improved and applied to stand-alone and coupled LWR core calculations at the Pennsylvania State Univ. in cooperation with AREVA NP GmbH (Germany) and the Technical Univ. of Madrid. To enable COBRA-TF for academic and industrial applications, including safety margin evaluations and LWR core design analyses, the code programming, numerics, and basic models were revised and substantially improved. The code has undergone an extensive validation, verification, and qualification program. (authors)
Numerical optimization of three-dimensional coils for NSTX-U
Lazerson, S. A.; Park, J. -K.; Logan, N.; ...
2015-09-03
A tool for the calculation of optimal three-dimensional (3D) perturbative magnetic fields in tokamaks has been developed. The IPECOPT code builds upon the stellarator optimization code STELLOPT to allow optimization of linear ideal magnetohydrodynamic perturbed equilibria (IPEC). This tool has been applied to NSTX-U equilibria, addressing which fields are the most effective at driving NTV torques. The NTV torque calculation is performed by the PENT code. Optimization of the normal field spectrum shows that fields with n = 1 character can drive a large core torque. It is also shown that fields with n = 3 features are capable of driving edge torque and some core torque. Coil current optimization (using the planned in-vessel and existing RWM coils) on NSTX-U suggests the planned coil set is adequate for core and edge torque control. In conclusion, comparison between error field correction experiments on DIII-D and the optimizer shows good agreement.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harrison, Cyrus; Larsen, Matt; Brugger, Eric
Strawman is a system designed to explore the in situ visualization and analysis needs of simulation code teams running multi-physics calculations on many-core HPC architectures. It provides rendering pipelines that can leverage both many-core CPUs and GPUs to render images of simulation meshes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burns, T.D. Jr.
1996-05-01
The Monte Carlo Model System (MCMS) for the Washington State University (WSU) Radiation Center provides a means through which core criticality and power distributions can be calculated, as well as a method for the neutron and photon transport necessary for BNCT epithermal neutron beam design. The computational code used in this Model System is MCNP4A. The geometric capability of this Monte Carlo code allows the WSU system to be modeled very accurately. A working knowledge of the MCNP4A neutron transport code increases the flexibility of the Model System and is recommended; however, the eigenvalue/power density problems can be run with little direct knowledge of MCNP4A. Neutron and photon particle transport require more experience with the MCNP4A code. The Model System consists of two coupled subsystems: the Core Analysis and Source Plane Generator Model (CASP) and the BeamPort Shell Particle Transport Model (BSPT). The CASP Model incorporates the S(α,β) thermal treatment and is run as a criticality problem, yielding the system eigenvalue (keff), the core power distribution, and an implicit surface source for subsequent particle transport in the BSPT Model. The BSPT Model uses the source plane generated by a CASP run to transport particles through the thermal column beamport. The user can create filter arrangements in the beamport and then calculate the characteristics necessary for assessing the BNCT potential of the given filter arrangement. Examples of the characteristics to be calculated are: neutron fluxes, neutron currents, fast neutron KERMAs, and gamma KERMAs. The MCMS is a useful tool for the WSU system. Those unfamiliar with the MCNP4A code can use the MCMS transparently for core analysis, while more experienced users will find the particle transport capabilities very powerful for BNCT filter design.
Hot zero power reactor calculations using the Insilico code
Hamilton, Steven P.; Evans, Thomas M.; Davidson, Gregory G.; ...
2016-03-18
In this paper we describe the reactor physics simulation capabilities of the Insilico code. A description of the various capabilities of the code is provided, including detailed discussion of the geometry, meshing, cross-section processing, and neutron transport options. Numerical results demonstrate that the Insilico SPN solver with pin-homogenized cross-section generation is capable of delivering highly accurate full-core simulation of various PWR problems. Comparison to both Monte Carlo calculations and measured plant data is provided.
The Modeling of Advanced BWR Fuel Designs with the NRC Fuel Depletion Codes PARCS/PATHS
Ward, Andrew; Downar, Thomas J.; Xu, Y.; ...
2015-04-22
The PATHS (PARCS Advanced Thermal Hydraulic Solver) code was developed at the University of Michigan in support of U.S. Nuclear Regulatory Commission research to solve the steady-state, two-phase, thermal-hydraulic equations for a boiling water reactor (BWR) and to provide thermal-hydraulic feedback for BWR depletion calculations with the neutronics code PARCS (Purdue Advanced Reactor Core Simulator). The simplified solution methodology, including a three-equation drift flux formulation and an optimized iteration scheme, yields very fast run times in comparison to conventional thermal-hydraulic systems codes used in the industry, while still retaining sufficient accuracy for applications such as BWR depletion calculations. Lastly, the capability to model advanced BWR fuel designs with part-length fuel rods and heterogeneous axial channel flow geometry has been implemented in PATHS, and the code has been validated against previously benchmarked advanced core simulators as well as BWR plant and experimental data. We describe the modifications to the codes and the results of the validation in this paper.
NASA Astrophysics Data System (ADS)
Faure, Bastien
The neutronic calculation of a reactor core is usually performed in two steps. After solving the neutron transport equation over an elementary domain of the core, a set of parameters, namely macroscopic cross sections and possibly diffusion coefficients, is defined in order to perform a full-core calculation. In the first step, the cell or assembly is calculated using "fundamental mode theory", the pattern being embedded in an infinite lattice of periodic structures. This simple representation allows precise modeling of the geometry and the energy variable and can be treated within transport theory with minimal approximations. However, it supposes that the reactor core can be treated as a periodic lattice of elementary domains, which is already a strong hypothesis, and cannot, at first sight, account for neutron leakage between two different zones or out of the core. Leakage models correct the transport equation with an additional leakage term in order to represent this phenomenon. For historical reasons, numerical methods for solving the transport equation being limited by computer capabilities (processor speed and memory size), the leakage term is in most cases modeled by a homogeneous and isotropic probability within a "homogeneous leakage model". Driven by advances in computing, "heterogeneous leakage models" have been developed and implemented in several neutron transport codes. This work studies some of those models, including the TIBERE model of the DRAGON-3 code developed at Ecole Polytechnique de Montreal, as well as the heterogeneous model of the APOLLO-3 code developed at the Commissariat a l'Energie Atomique et aux energies alternatives. Research on sodium-cooled fast reactors and light water reactors has allowed us to demonstrate the interest of these models compared to a homogeneous leakage model.
In particular, it has been shown that a heterogeneous model has a significant impact on the calculation of the out-of-core leakage rate, which permits a better estimation of the transport equation eigenvalue Keff. The neutron streaming between two zones of different compositions was also shown to be better calculated.
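As background for the homogeneous leakage model discussed above: in one-group theory, the leakage term D*B^2 enters the neutron balance exactly like an extra absorption cross section, which is what the leakage correction adds to the lattice calculation. A minimal sketch; the cross-section values are illustrative, not from DRAGON-3 or APOLLO-3:

```python
# One-group, homogeneous-leakage estimate of the multiplication factor:
#   keff = nu*Sigma_f / (Sigma_a + D * B^2)
# With B^2 = 0 (infinite lattice, no leakage) this reduces to k-infinity.
def keff_one_group(nu_sigma_f, sigma_a, diff_coef, buckling2):
    """keff for production nu*Sigma_f against absorption plus leakage D*B^2.

    Units: cross sections in 1/cm, D in cm, buckling B^2 in 1/cm^2."""
    return nu_sigma_f / (sigma_a + diff_coef * buckling2)

k_inf = keff_one_group(0.0105, 0.0100, 1.2, 0.0)      # no leakage: k-infinity
k_eff = keff_one_group(0.0105, 0.0100, 1.2, 2.0e-4)   # with leakage term
```

The heterogeneous models described in the abstract refine exactly this term, replacing the single homogeneous, isotropic leakage probability with region- and direction-dependent ones.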
Reactivity Coefficient Calculation for AP1000 Reactor Using the NODAL3 Code
NASA Astrophysics Data System (ADS)
Pinem, Surian; Malem Sembiring, Tagor; Tukiran; Deswandri; Sunaryo, Geni Rina
2018-02-01
The reactivity coefficient is a very important parameter for the inherent safety and stability of nuclear reactor operation. For the safety analysis of the reactor, calculation of the reactivity changes caused by temperature is necessary because they are related to reactor operation. In this paper, the fuel and moderator temperature reactivity coefficients of the AP1000 core are calculated, as well as the moderator density and boron concentration coefficients. All of these coefficients are calculated at the hot full power (HFP) condition. All neutron diffusion constants as functions of temperature, water density and boron concentration were generated by the SRAC2006 code. The core calculations for determining the reactivity coefficient parameters were done using the NODAL3 code. The calculation results show that the fuel temperature, moderator temperature and boron reactivity coefficients are in the ranges of -2.613 pcm/°C to -4.657 pcm/°C, -1.00518 pcm/°C to 1.00649 pcm/°C, and -9.11361 pcm/ppm to -8.0751 pcm/ppm, respectively. For the water density reactivity coefficient, positive reactivity occurs at water temperatures below 190 °C. The calculated reactivity coefficients agree very well with the design values.
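As a reminder of how such coefficients are formed, a temperature reactivity coefficient is the reactivity difference between two steady-state keff calculations divided by the temperature change that separates them. A minimal sketch with made-up keff values, not the SRAC2006/NODAL3 results:

```python
def reactivity_pcm(keff):
    """Reactivity rho = (k - 1)/k, expressed in pcm (1 pcm = 1e-5)."""
    return (keff - 1.0) / keff * 1.0e5

def temp_coefficient(k_ref, k_pert, t_ref, t_pert):
    """Reactivity coefficient in pcm/degC from two steady-state keff values."""
    return (reactivity_pcm(k_pert) - reactivity_pcm(k_ref)) / (t_pert - t_ref)

# Illustrative numbers only: keff drops from 1.00000 to 0.99970 when the
# fuel temperature rises from 300 to 310 degC -> about -3 pcm/degC.
alpha_f = temp_coefficient(1.00000, 0.99970, 300.0, 310.0)
```

A negative value, as obtained here for the fuel coefficient, is the inherently safe direction: a temperature rise inserts negative reactivity.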
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, X. G.; Kim, Y. S.; Choi, K. Y.
2012-07-01
A station blackout (SBO) experiment named SBO-01 was performed at the full-pressure integral effect test (IET) facility ATLAS (Advanced Test Loop for Accident Simulation), which is scaled down from the APR1400 (Advanced Power Reactor 1400 MWe). In this study, the transient of SBO-01 is discussed and subdivided into three phases: the SG fluid loss phase, the RCS fluid loss phase, and the core coolant depletion and core heatup phase. In addition, the typical phenomena in the SBO-01 test (SG dryout, natural circulation, core coolant boiling, PRZ full, core heatup) are identified. Furthermore, the SBO-01 test is reproduced by a MARS code calculation with the ATLAS model, which represents the ATLAS test facility. The experimental and calculated transients are then compared and discussed. The comparison reveals malfunctions of equipment: SG leakage through an SG MSSV and a measurement error in the loop flow meter. As the ATLAS model is validated against the experimental results, it can be further employed to investigate other possible SBO scenarios and to study scaling distortions in the ATLAS. (authors)
NASA Astrophysics Data System (ADS)
Lasbleis, M.; Day, E. A.; Waszek, L.
2017-12-01
The complex nature of inner core structure has been well-established from seismic studies, with heterogeneities at various length scales, both radially and laterally. Despite this, no geodynamic model has successfully explained all of the observed seismic features. To facilitate comparisons between seismic observations and geodynamic models of inner core growth we have developed a new, open access Python tool - GrowYourIC - that allows users to compare models of inner core structure. The code allows users to simulate different evolution models of the inner core, with user-defined rates of inner core growth, translation and rotation. Once the user has "grown" an inner core with their preferred parameters they can then explore the effect of "their" inner core's evolution on the relative age and growth rate in different regions of the inner core. The code will convert these parameters into seismic properties using either built-in mineral physics models, or user-supplied ones that calculate these seismic properties with users' own preferred mineralogical models. The 3D model of isotropic inner core properties can then be used to calculate the predicted seismic travel time anomalies for a random, or user-specified, set of seismic ray paths through the inner core. A real dataset of inner core body-wave differential travel times is included for the purpose of comparing user-generated models of inner core growth to actual observed travel time anomalies in the top 100km of the inner core. Here, we explore some of the possibilities of our code. We investigate the effect of the limited illumination of the inner core by seismic waves on the robustness of kinematic model interpretation. We test the impact on seismic differential travel time observations of several kinematic models of inner core growth: fast lateral translation; slow differential growth; and inner core super-rotation. 
We find that a model of inner core evolution incorporating both differential growth and slow super-rotation is able to recreate some of the more intricate details of the seismic observations. Specifically we are able to "grow" an inner core that has an asymmetric shift in isotropic hemisphere boundaries with increasing depth in the inner core.
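A minimal sketch of the kinematic bookkeeping such a tool performs: the solidification age of material at a given radius under a simple power-law growth of the inner-core radius, ignoring translation and rotation. The growth law, radius, and age used here are illustrative assumptions, not GrowYourIC defaults.

```python
# Age of inner-core material at radius r under a power-law growth of the
# inner-core radius, r_ic(t) = r_now * (t/t_now)**exponent, with no
# translation or rotation. Radius, age, and exponent are illustrative
# assumptions, not GrowYourIC defaults.
def age_at_radius(r_km, r_now_km=1221.0, t_now_yr=1.0e9, exponent=0.5):
    """Years since the material now at radius r_km solidified."""
    if not 0.0 <= r_km <= r_now_km:
        raise ValueError("radius outside the inner core")
    t_solidified = t_now_yr * (r_km / r_now_km) ** (1.0 / exponent)
    return t_now_yr - t_solidified

# Material at the centre is the oldest; material at the ICB froze today.
print(age_at_radius(0.0), age_at_radius(600.0), age_at_radius(1221.0))
```

Translation or rotation would remap which surface point overlies each parcel, which is the step the full code automates before converting age to seismic properties.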
Neutron dose rate analysis on HTGR-10 reactor using Monte Carlo code
NASA Astrophysics Data System (ADS)
Suwoto; Adrial, H.; Hamzah, A.; Zuhair; Bakhri, S.; Sunaryo, G. R.
2018-02-01
The HTGR-10 reactor has a cylindrical core fuelled with TRISO coated fuel particles in spherical pebbles and a helium cooling system. The helium coolant outlet temperature from the reactor core is designed to be 700 °C. One advantage of the HTGR reactor type is its co-generation capability: in addition to generating electricity, the reactor is designed to produce high-temperature heat that can be used for other processes. Each spherical fuel pebble contains 8335 TRISO UO2 kernel coated particles, with enrichments of 10% and 17%, dispersed in a graphite matrix. The main purpose of this study was to analyse the distribution of neutron dose rates generated by the HTGR-10 reactor. The calculation and analysis of the neutron dose rate in the HTGR-10 reactor core were performed with the Monte Carlo code MCNP5v1.6. The double heterogeneity of the TRISO coated fuel particles and of the spherical fuel pebbles in the HTGR-10 core is modelled well with MCNP5v1.6. Neutron flux-to-dose conversion factors taken from the International Commission on Radiological Protection (ICRP-74) were used to determine the dose rate passing through the active core, reflectors, core barrel, reactor pressure vessel (RPV) and biological shield. The neutron dose rates calculated with MCNP5v1.6 using the ICRP-74 conversion factors for radiation workers in the radial direction outside the RPV (radial position = 220 cm from the centre of the HTGR-10 core) are 9.22E-4 μSv/h and 9.58E-4 μSv/h for 10% and 17% enrichment, respectively. The calculated neutron dose rates comply with BAPETEN Chairman’s Regulation Number 4 Year 2013 on Radiation Protection and Safety in Nuclear Energy Utilization, which sets the limit for the average effective dose for radiation workers at 20 mSv/year (10 μSv/h). Protection and safety of radiation workers with respect to this radiation source are therefore ensured. From the analysis, it can be concluded that the calculated neutron dose rates for the HTGR-10 core meet the required radiation safety standards.
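The flux-to-dose folding described above reduces to a group-wise sum of flux times conversion factor; a sketch with illustrative group fluxes and factor magnitudes (placeholders, not ICRP-74 table values or HTGR-10 results):

```python
# Group-wise flux-to-dose folding: dose rate = sum_g phi_g * f_g.
# Group fluxes and conversion-factor magnitudes are illustrative
# placeholders, not ICRP-74 table values or HTGR-10 results.
group_flux = [1.0e2, 5.0e1, 1.0e1]      # n/cm^2/s: thermal, epithermal, fast
dose_factor = [4.0e-6, 1.5e-5, 1.4e-4]  # (uSv/h) per unit flux in each group

dose_rate = sum(phi * f for phi, f in zip(group_flux, dose_factor))
print(f"neutron dose rate ~ {dose_rate:.3e} uSv/h")
```

In the actual analysis the group fluxes come from MCNP tallies at each location (outside the RPV, behind the shield, etc.) and the factors from the ICRP-74 tables.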
TREAT Transient Analysis Benchmarking for the HEU Core
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kontogeorgakos, D. C.; Connaway, H. M.; Wright, A. E.
2014-05-01
This work was performed to support the feasibility study on the potential conversion of the Transient Reactor Test Facility (TREAT) at Idaho National Laboratory from the use of high enriched uranium (HEU) fuel to the use of low enriched uranium (LEU) fuel. The analyses were performed by the GTRI Reactor Conversion staff at Argonne National Laboratory (ANL). The objective of this study was to benchmark the transient calculations against temperature-limited transients performed in the final operating HEU TREAT core configuration. The MCNP code was used to evaluate steady-state neutronics behavior, and the point kinetics code TREKIN was used to determine core power and energy during transients. The first part of the benchmarking process was to calculate with MCNP all the neutronic parameters required by TREKIN to simulate the transients: the transient rod-bank worth, the prompt neutron generation lifetime, the temperature reactivity feedback as a function of total core energy, and the core-average and peak temperatures as functions of total core energy. The results of these calculations were compared against measurements or against reported values as documented in the available TREAT reports. The heating of the fuel was simulated as an adiabatic process. The reported values were extracted from ANL reports, intra-laboratory memos and experiment logsheets, and in some cases it was not clear whether the values were based on measurements, on calculations or a combination of both; it was therefore decided to use the term “reported” values when referring to such data. The methods and results from the HEU core transient analyses will be used for the potential LEU core configurations to predict the converted (LEU) core’s performance.
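A sketch of the kind of point-kinetics transient model described above: one delayed-neutron group, adiabatic energy deposition, and a temperature (energy) reactivity feedback that self-limits the burst. All constants are illustrative assumptions, not TREAT or TREKIN parameters.

```python
# One-delayed-group point kinetics with adiabatic heating and a linear
# energy (temperature) reactivity feedback. Explicit Euler integration.
# All constants are illustrative assumptions, not TREAT/TREKIN values.
beta = 0.007       # delayed neutron fraction
lam = 0.08         # precursor decay constant, 1/s
Lam = 9.0e-4       # prompt neutron generation time, s
rho_0 = 0.014      # step-inserted reactivity (two dollars)
alpha_E = -1.0e-3  # feedback: delta-rho per unit deposited energy

dt, t_end = 1.0e-4, 5.0
n = 1.0                     # relative power
C = beta * n / (lam * Lam)  # precursors at initial equilibrium
E = 0.0                     # deposited energy (arbitrary units)
peak_power = 0.0

for _ in range(round(t_end / dt)):
    rho = rho_0 + alpha_E * E            # adiabatic feedback
    dn = ((rho - beta) / Lam * n + lam * C) * dt
    dC = (beta / Lam * n - lam * C) * dt
    E += n * dt                          # power integrates into energy
    n += dn
    C += dC
    peak_power = max(peak_power, n)

print(f"peak power {peak_power:.2f}, final power {n:.3f}, energy {E:.2f}")
```

The burst terminates on its own once accumulated energy drives the net reactivity below prompt critical, which is the temperature-limited behaviour the benchmark transients exercise.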
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roseberry, R.J.
The experimental measurements and nuclear analysis of a uniformly loaded, unpoisoned slab core with a partially inserted hafnium rod and/or a partially inserted water gap are described. Comparisons of experimental data with calculated results from the UFO code and flux synthesis techniques are given. It is concluded that one of the flux synthesis techniques and the UFO code are able to predict flux distributions to within approximately 5% of experiment for most cases, with a maximum error of approximately 10% for a channel at the core-reflector boundary. The second synthesis technique failed to give comparable agreement with experiment even when various refinements were used, e.g. increasing the number of mesh points, iterating the flux synthesis technique, and spectrum-weighting the appropriate calculated fluxes through the use of the SWAKRAUM code. These results are comparable to those reported in Part I of this study. (auth)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Surkov, A. V., E-mail: surkov.andrew@gmail.com; Kochkin, V. N.; Pesnya, Yu. E.
2015-12-15
A comparison of measured and calculated neutronic characteristics (fast neutron flux and fission rate of {sup 235}U) in the core and reflector of the IR-8 reactor is presented. Irradiation devices equipped with neutron activation detectors were prepared. The fast neutron flux was determined using the {sup 54}Fe (n, p) and {sup 58}Ni (n, p) reactions. The {sup 235}U fission rate was measured using uranium dioxide with 10% enrichment in {sup 235}U. The specific activities of the detectors were determined by measuring the intensity of characteristic gamma peaks with an ORTEC gamma spectrometer. Neutron fields in the core and reflector of the IR-8 reactor were calculated using the MCU-PTR code.
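The activation technique mentioned above infers flux from the measured detector activity; a minimal sketch for a single reaction, with illustrative detector and activity values (only the 58Co half-life below is a physical constant):

```python
import math

# Infer a spectrum-averaged fast flux from an activation detector:
#   A_end = R * (1 - exp(-lam * t_irr)),  R = phi * sigma * N_atoms,
# in the spirit of the 58Ni(n,p)58Co measurement above. The detector
# atom count, cross section, and measured activity are illustrative;
# only the 58Co half-life is a physical constant.
N_atoms = 1.0e20             # 58Ni atoms in the detector
sigma = 1.0e-25              # spectrum-averaged (n,p) cross section, cm^2
half_life = 70.86 * 86400.0  # 58Co half-life, s
lam = math.log(2.0) / half_life
t_irr = 7.0 * 86400.0        # irradiation time, s
A_end = 5.0e3                # activity at end of irradiation, Bq

R = A_end / (1.0 - math.exp(-lam * t_irr))   # reaction rate, 1/s
phi = R / (sigma * N_atoms)                  # fast flux, n/cm^2/s
print(f"inferred fast flux ~ {phi:.2e} n/cm^2/s")
```

The saturation correction (1 - exp(-λt)) matters because 58Co is far from saturated after a week-long irradiation.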
Full core analysis of IRIS reactor by using MCNPX.
Amin, E A; Bashter, I I; Hassan, Nabil M; Mustafa, S S
2016-07-01
This paper describes a neutronic analysis of the fresh-fuelled IRIS (International Reactor Innovative and Secure) reactor with the MCNPX code. The analysis includes criticality calculations, radial and axial power distributions, the nuclear peaking factor and the axial offset percentage at the beginning of the fuel cycle. The effective multiplication factor obtained by MCNPX is compared with previous calculations by the HELIOS/NESTLE, CASMO/SIMULATE, modified CORD-2 nodal and SAS2H/KENO-V code systems. It is found that the k-eff value obtained by MCNPX is closest to the CORD-2 value. The radial and axial powers are compared with other published results obtained with SAS2H/KENO-V. Moreover, the WIMS-D5 code is used to study the effect of enriched boron in the form of ZrB2 on the effective multiplication factor (k-eff) of the fuel pin. In this part of the calculation, k-eff is evaluated at different concentrations of boron-10 (in mg/cm) at different stages of unit-cell burnup. The results of this part are compared with published results obtained with the HELIOS code. Copyright © 2016 Elsevier Ltd. All rights reserved.
EBT reactor systems analysis and cost code: description and users guide (Version 1)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santoro, R.T.; Uckan, N.A.; Barnes, J.M.
1984-06-01
An ELMO Bumpy Torus (EBT) reactor systems analysis and cost code that incorporates the most recent advances in EBT physics has been written. The code determines a set of reactors that fall within an allowed operating window determined from the coupling of ring and core plasma properties and the self-consistent treatment of the coupled ring-core stability and power balance requirements. The essential elements of the systems analysis and cost code are described, along with the calculational sequences leading to the specification of the reactor options and their associated costs. The input parameters, the constraints imposed upon them, and the operating range over which the code provides valid results are discussed. A sample problem and the interpretation of the results are also presented.
Posttest calculations of bundle quench test CORA-13 with ATHLET-CD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bestele, J.; Trambauer, K.; Schubert, J.D.
Gesellschaft fuer Anlagen- und Reaktorsicherheit is developing, in cooperation with the Institut fuer Kernenergetik und Energiesysteme, Stuttgart, the system code Analysis of Thermalhydraulics of Leaks and Transients with Core Degradation (ATHLET-CD). The code consists of detailed models of the thermal hydraulics of the reactor coolant system. This thermo-fluid dynamics module is coupled with modules describing the early phase of core degradation, such as cladding deformation, oxidation and melt relocation, and the release and transport of fission products. The assessment of the code is being done by the analysis of separate effect tests, integral tests, and plant events. The code will be applied to the verification of severe accident management procedures. The out-of-pile test CORA-13 was conducted by Forschungszentrum Karlsruhe in their CORA test facility. The test consisted of two phases, a heatup phase and a quench phase. At the beginning of the quench phase, a sharp peak in the hydrogen generation rate was observed. Both phases of the test have been calculated with the system code ATHLET-CD. Special efforts have been made to simulate the heat losses and the flow distribution in the test facility and the thermal hydraulics during the quench phase. In addition to previous calculations, the material relocation and the quench phase have been modeled. The temperature increase during the heatup phase, the starting time of the temperature escalation, and the maximum temperatures have been calculated correctly. At the beginning of the quench phase, an increased hydrogen generation rate has been calculated, as measured in the experiment.
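A hedged sketch of why a quench-phase temperature spike produces a hydrogen peak: cladding oxidation (the hydrogen source) typically follows an Arrhenius-type parabolic rate law, so the reaction accelerates steeply with temperature. The constants here are illustrative, not the oxidation correlation actually implemented in ATHLET-CD.

```python
import math

# Parabolic oxidation kinetics: w * dw/dt = K(T), K(T) = A * exp(-B/T),
# giving w(t) = sqrt(w0^2 + K*t). A and B are illustrative constants,
# not the cladding oxidation correlation actually coded in ATHLET-CD.
A, B = 3.0e2, 2.0e4   # pre-exponential factor; activation temperature, K

def mass_gain(T_kelvin, t_seconds, w0=0.0):
    K = A * math.exp(-B / T_kelvin)
    return math.sqrt(w0 * w0 + K * t_seconds)

# The steep temperature dependence is why a quench-phase temperature
# spike shows up as a sharp peak in the hydrogen generation rate.
w_1500 = mass_gain(1500.0, 60.0)
w_2000 = mass_gain(2000.0, 60.0)
print(f"60 s mass gain at 1500 K: {w_1500:.3f}, at 2000 K: {w_2000:.3f}")
```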
Adaptive Core Simulation Employing Discrete Inverse Theory - Part II: Numerical Experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abdel-Khalik, Hany S.; Turinsky, Paul J.
2005-07-15
Use of adaptive simulation is intended to improve the fidelity and robustness of important core attribute predictions such as core power distribution, thermal margins, and core reactivity. Adaptive simulation utilizes a selected set of past and current reactor measurements of reactor observables, i.e., in-core instrumentation readings, to adapt the simulation in a meaningful way. The companion paper, ''Adaptive Core Simulation Employing Discrete Inverse Theory - Part I: Theory,'' describes in detail the theoretical background of the proposed adaptive techniques. This paper, Part II, demonstrates several computational experiments conducted to assess the fidelity and robustness of the proposed techniques. The intent is to check the ability of the adapted core simulator model to predict future core observables that are not included in the adaption or core observables that are recorded at core conditions that differ from those at which adaption is completed. Also, this paper demonstrates successful utilization of an efficient sensitivity analysis approach to calculate the sensitivity information required to perform the adaption for millions of input core parameters. Finally, this paper illustrates a useful application for adaptive simulation - reducing the inconsistencies between two different core simulator code systems, where the multitudes of input data to one code are adjusted to enhance the agreement between both codes for important core attributes, i.e., core reactivity and power distribution. Also demonstrated is the robustness of such an application.
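The adaption step can be sketched as a regularized least-squares adjustment of input parameters against a measured-minus-calculated mismatch, mediated by a sensitivity matrix; everything below is synthetic and illustrative of the discrete-inverse-theory flavor, not the paper's actual data or solver.

```python
import numpy as np

# Regularized least-squares adaption: adjust input parameters dp so that
# S @ dp reproduces the measured-minus-calculated mismatch, with Tikhonov
# damping to keep the inverse problem well-posed. S, the mismatch, and
# lam are all synthetic illustrations, not the paper's data.
rng = np.random.default_rng(1)
S = rng.standard_normal((20, 8))         # sensitivities: 20 observables x 8 parameters
true_dp = 0.01 * rng.standard_normal(8)  # "true" parameter adjustments
mismatch = S @ true_dp                   # synthetic observable mismatch

lam = 1.0e-3                             # Tikhonov regularization strength
dp = np.linalg.solve(S.T @ S + lam * np.eye(8), S.T @ mismatch)
residual = np.linalg.norm(mismatch - S @ dp)
print("recovered adjustments:", np.round(dp, 4), "residual:", residual)
```

The paper's contribution is doing this efficiently when the parameter space holds millions of entries, where forming S explicitly is the expensive step.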
The concerted calculation of the BN-600 reactor for the deterministic and stochastic codes
NASA Astrophysics Data System (ADS)
Bogdanova, E. V.; Kuznetsov, A. N.
2017-01-01
The solution of the problem of increasing the safety of nuclear power plants implies the existence of complete and reliable information about the processes occurring in the core of a working reactor. Nowadays the Monte Carlo method is the most general-purpose method used to calculate the neutron-physical characteristics of a reactor, but it is characterized by long calculation times. Therefore, it may be useful to carry out coupled calculations with stochastic and deterministic codes. This article presents the results of research into the possibility of combining stochastic and deterministic algorithms in calculations of the BN-600 reactor. This is one part of the work carried out in the framework of a graduation project at the NRC “Kurchatov Institute” in cooperation with S. S. Gorodkov and M. A. Kalugin. A 2-D layer of the BN-600 reactor core from the international benchmark test published in report IAEA-TECDOC-1623 is considered. Calculations of the reactor were performed with the MCU code and then with a standard operative diffusion algorithm with constants taken from the Monte Carlo computation. Macroscopic cross sections, diffusion coefficients, the effective multiplication factor and the distributions of neutron flux and power were obtained in 15 energy groups. Reasonable agreement between the stochastic and deterministic calculations of the BN-600 is observed.
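A minimal sketch of the deterministic half of such a coupled scheme: a one-group, one-dimensional slab diffusion k-eigenvalue solved by power iteration, with constants standing in for data condensed from a Monte Carlo (MCU-style) calculation. All values are illustrative.

```python
import numpy as np

# -D phi'' + Sig_a * phi = (1/k) * nuSig_f * phi on a slab, phi = 0 at
# both edges; finite differences plus power iteration. The one-group
# constants are illustrative stand-ins for Monte Carlo condensed data.
D, sig_a, nu_sig_f = 1.0, 0.07, 0.08   # cm, 1/cm, 1/cm
L, N = 100.0, 200                      # slab width (cm), interior nodes
h = L / (N + 1)

A = (np.diag(np.full(N, 2.0 * D / h**2 + sig_a))
     + np.diag(np.full(N - 1, -D / h**2), 1)
     + np.diag(np.full(N - 1, -D / h**2), -1))

phi, k = np.ones(N), 1.0
for _ in range(200):
    phi_new = np.linalg.solve(A, nu_sig_f * phi / k)
    k *= phi_new.sum() / phi.sum()
    phi = phi_new / np.linalg.norm(phi_new)

print(f"k-eff = {k:.5f}  (analytic: nuSig_f / (Sig_a + D*(pi/L)^2))")
```

In the coupled scheme, the few-group constants feeding such a solver are tallied by the Monte Carlo code, so the fast diffusion sweep inherits the Monte Carlo data fidelity.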
2D and 3D Models of Convective Turbulence and Oscillations in Intermediate-Mass Main-Sequence Stars
NASA Astrophysics Data System (ADS)
Guzik, Joyce Ann; Morgan, Taylor H.; Nelson, Nicholas J.; Lovekin, Catherine; Kitiashvili, Irina N.; Mansour, Nagi N.; Kosovichev, Alexander
2015-08-01
We present multidimensional modeling of convection and oscillations in main-sequence stars somewhat more massive than the sun, using three separate approaches: 1) Applying the spherical 3D MHD ASH (Anelastic Spherical Harmonics) code to simulate the core convection and radiative zone. Our goal is to determine whether core convection can excite low-frequency gravity modes, and thereby explain the presence of low frequencies for some hybrid gamma Dor/delta Sct variables for which the envelope convection zone is too shallow for the convective blocking mechanism to drive g modes; 2) Using the 3D planar ‘StellarBox’ radiation hydrodynamics code to model the envelope convection zone and part of the radiative zone. Our goals are to examine the interaction of stellar pulsations with turbulent convection in the envelope, excitation of acoustic modes, and the role of convective overshooting; 3) Applying the ROTORC 2D stellar evolution and dynamics code to calculate evolution with a variety of initial rotation rates and extents of core convective overshooting. The nonradial adiabatic pulsation frequencies of these nonspherical models will be calculated using the 2D pulsation code NRO of Clement. We will present new insights into gamma Dor and delta Sct pulsations gained by multidimensional modeling compared to 1D model expectations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Renzi, N.E.; Roseberry, R.J.
The experimental measurements and nuclear analysis of a uniformly loaded, unpoisoned slab core with a partially inserted hafnium rod are described. Comparisons of experimental data with calculated results from the UFO code and flux synthesis techniques are given. It was concluded that one of the flux synthesis techniques and the UFO code are able to predict flux distributions to within approximately 5% of experiment for most cases. An error of approximately 10% was found in the synthesis technique for a channel near the partially inserted rod. The various calculations were able to predict neutron pulsed shutdowns to only approximately 30%. (auth)
Exposure calculation code module for reactor core analysis: BURNER
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vondy, D.R.; Cunningham, G.W.
1979-02-01
The code module BURNER for nuclear reactor exposure calculations is presented. The computer requirements are shown, as are the reference data and interface data file requirements, and the programmed equations and procedure of calculation are described. The operating history of a reactor is followed over the period between solutions of the space, energy neutronics problem. The end-of-period nuclide concentrations are determined given the necessary information. A steady state, continuous fueling model is treated in addition to the usual fixed fuel model. The control options provide flexibility to select among an unusually wide variety of programmed procedures. The code also provides user options to make a number of auxiliary calculations and print such information as the local gamma source, cumulative exposure, and a fine scale power density distribution in a selected zone. The code is used locally in a system for computation which contains the VENTURE diffusion theory neutronics code and other modules.
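The end-of-period concentrations such an exposure module computes follow from integrating the depletion equations over the period; a closed-form sketch for a two-nuclide chain under constant one-group flux (illustrative data, not BURNER's chains or library):

```python
import math

# Two-member chain under constant one-group flux phi:
#   dN1/dt = -s1*phi*N1
#   dN2/dt =  s1*phi*N1 - (s2*phi + lam2)*N2
# solved in closed form (Bateman solution) over one exposure period.
# Cross sections, flux, and the period length are illustrative.
phi = 3.0e14                   # n/cm^2/s
s1, s2 = 1.0e-22, 5.0e-23      # one-group cross sections, cm^2
lam2 = 1.0e-9                  # decay constant of nuclide 2, 1/s
t = 30.0 * 86400.0             # 30-day exposure period, s
N1_0, N2_0 = 1.0e21, 0.0       # beginning-of-period concentrations

a = s1 * phi
b = s2 * phi + lam2
N1 = N1_0 * math.exp(-a * t)
N2 = (N2_0 * math.exp(-b * t)
      + N1_0 * a / (b - a) * (math.exp(-a * t) - math.exp(-b * t)))
print(f"end of period: N1 = {N1:.4e}, N2 = {N2:.4e}")
```

A production module steps such chains for every nuclide and zone between neutronics solutions, with the flux supplied by the companion diffusion code.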
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hannan, N. A.; Matos, J. E.; Stillman, J. A.
At the request of the Czech Technical University (CTU) in Prague, ANL has performed independent verification calculations using the MCNP Monte Carlo code for three core configurations of the VR-1 reactor: a current core configuration B1 with HEU (36%) IRT-3M fuel assemblies and planned core configurations C1 and C2 with LEU (19.7%) IRT-4M fuel assemblies. Details of these configurations were provided to ANL by CTU. For core configuration B1, criticality calculations were performed for two sets of control rod positions provided to ANL by CTU. For core configurations C1 and C2, criticality calculations were done for cases with all control rods at the top positions, all control rods at the bottom positions, and two critical states of the reactor for different control rod positions. In addition, sensitivity studies for variation of the {sup 235}U mass in each fuel assembly and variation of the fuel meat and cladding thicknesses in each of the fuel tubes were done for the C1 core configuration. The reactivity worth of the individual control rods was calculated for the B1, C1, and C2 core configurations. Finally, the reactivity feedback coefficients, the prompt neutron lifetime, and the total effective delayed neutron fraction were calculated for each of the three cores.
Creep relaxation of fuel pin bending and ovalling stresses. [BEND code, OVAL code, MARC-CDC code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chan, D.P.; Jackson, R.J.
1981-10-01
Analytical methods are presented for calculating fuel pin cladding bending and ovalling stresses due to pin bundle-duct mechanical interaction, taking nonlinear creep into account. Calculated results agree with finite element results from the MARC-CDC program. The methods are used to investigate the effect of creep on FTR fuel cladding bending and ovalling stresses. It is concluded that the 20 percent cold-worked 316 SS cladding of the reference design has creep rates in the FTR core region high enough to keep the bending and ovalling stresses at acceptable levels. 6 refs.
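A sketch of the creep-relaxation mechanism invoked above: under fixed total strain, a Norton creep law relaxes the elastic stress, and the relaxation integrates in closed form. The material constants are illustrative, not FTR 316 SS cladding data.

```python
# Stress relaxation under a Norton creep law with total strain held
# fixed: d(sigma)/dt = -E * A * sigma**n, which integrates to
#   sigma(t) = (sigma0**(1-n) + (n-1)*E*A*t)**(1/(1-n)),  n > 1.
# E, A, n and sigma0 are illustrative, not FTR 316 SS cladding data.
E = 1.6e5        # elastic modulus, MPa
A = 1.0e-19      # Norton coefficient, 1/(MPa**n * h)
n = 5.0          # Norton stress exponent
sigma0 = 200.0   # initial bending stress, MPa

def sigma_relaxed(t_hours):
    return (sigma0 ** (1.0 - n) + (n - 1.0) * E * A * t_hours) ** (1.0 / (1.0 - n))

s_end = sigma_relaxed(1.0e4)   # after 10,000 h at temperature
print(f"{sigma0:.0f} MPa relaxes to {s_end:.1f} MPa")
```

Higher creep rates (larger A) relax the bundle-interaction stresses faster, which is the effect credited in the conclusion above.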
Scalability improvements to NRLMOL for DFT calculations of large molecules
NASA Astrophysics Data System (ADS)
Diaz, Carlos Manuel
Advances in high performance computing (HPC) have provided a way to treat large, computationally demanding tasks using thousands of processors. With the development of more powerful HPC architectures, the need to create efficient and scalable code has grown more important. Electronic structure calculations are valuable in understanding experimental observations and are routinely used for new materials predictions. For electronic structure calculations, the memory and computation time grow with the number of atoms; memory requirements scale as N², where N is the number of atoms. While recent advances in HPC offer platforms with large numbers of cores, the limited amount of memory available on a given node and the poor scalability of the electronic structure code hinder efficient usage of these platforms. This thesis presents developments to overcome these bottlenecks in order to study large systems. These developments, implemented in the NRLMOL electronic structure code, involve the use of sparse matrix storage formats and linear algebra on sparse and distributed matrices. Together with other related developments, they now allow ground state density functional calculations using up to 25,000 basis functions and excited state calculations using up to 17,000 basis functions while utilizing all cores on a node. An example on a light-harvesting triad molecule is described. Finally, future plans to further improve the scalability are presented.
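A minimal pure-Python illustration of the sparse-storage idea: a compressed sparse row (CSR) matrix keeps only nonzeros plus index arrays, and its matrix-vector product touches only stored entries. This mirrors the storage-format concept, not NRLMOL's actual implementation.

```python
# Minimal CSR (compressed sparse row) storage: keep only nonzeros plus
# column indices and row pointers; the matrix-vector product then costs
# O(nnz) work and memory instead of O(N^2). Conceptual sketch only,
# not NRLMOL's implementation.
class CSRMatrix:
    def __init__(self, dense):
        self.n_rows = len(dense)
        self.data, self.indices, self.indptr = [], [], [0]
        for row in dense:
            for j, value in enumerate(row):
                if value != 0.0:
                    self.data.append(value)
                    self.indices.append(j)
            self.indptr.append(len(self.data))

    def matvec(self, x):
        y = [0.0] * self.n_rows
        for i in range(self.n_rows):
            for k in range(self.indptr[i], self.indptr[i + 1]):
                y[i] += self.data[k] * x[self.indices[k]]
        return y

m = CSRMatrix([[1.0, 0.0, 2.0], [0.0, 3.0, 0.0], [4.0, 0.0, 5.0]])
print(m.matvec([1.0, 1.0, 1.0]))   # only 5 of 9 entries are stored
```

For overlap or Hamiltonian matrices that are mostly zero, this is the difference between N² dense storage and memory proportional to the number of nonzeros, which is what makes the large basis-set counts quoted above fit on a node.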
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carbajo, J.J.
1995-12-31
This study compares results obtained with two U.S. Nuclear Regulatory Commission (NRC)-sponsored codes, MELCOR version 1.8.3 (1.8PQ) and SCDAP/RELAP5 Mod3.1 release C, for the same transient - a low-pressure, short-term station blackout accident at the Browns Ferry nuclear plant. This work is part of MELCOR assessment activities to compare core damage progression calculations of MELCOR against SCDAP/RELAP5 since the two codes model core damage progression very differently.
CLUMPY: A code for γ-ray signals from dark matter structures
NASA Astrophysics Data System (ADS)
Charbonnier, Aldée; Combet, Céline; Maurin, David
2012-03-01
We present the first public code for semi-analytical calculation of the γ-ray flux astrophysical J-factor from dark matter annihilation/decay in the Galaxy, including dark matter substructures. The core of the code is the calculation of the line of sight integral of the dark matter density squared (for annihilations) or density (for decaying dark matter). The code can be used in three modes: i) to draw skymaps from the Galactic smooth component and/or the substructure contributions, ii) to calculate the flux from a specific halo (that is not the Galactic halo, e.g. dwarf spheroidal galaxies) or iii) to perform simple statistical operations from a list of allowed DM profiles for a given object. Extragalactic contributions and other tracers of DM annihilation (e.g. positrons, anti-protons) will be included in a second release.
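The core calculation the abstract describes - the line-of-sight integral of density squared - can be sketched with a trapezoidal quadrature over an NFW profile; the halo distance, profile parameters, and integration range below are illustrative, not CLUMPY's defaults.

```python
import math

# Trapezoidal line-of-sight integral of rho^2 (annihilation case) toward
# a halo at distance d_halo, offset by angle theta from its centre. The
# NFW parameters, distance, and integration range are illustrative, not
# CLUMPY defaults; units are arbitrary.
def rho_nfw(r, rho_s=1.0, r_s=1.0):
    x = r / r_s
    return rho_s / (x * (1.0 + x) ** 2)

def j_factor(theta_deg, d_halo=80.0, s_max=160.0, n=4000):
    th = math.radians(theta_deg)
    ds = s_max / n
    total = 0.0
    for i in range(n + 1):
        s = i * ds
        r = math.sqrt(s * s + d_halo * d_halo - 2.0 * s * d_halo * math.cos(th))
        weight = 0.5 if i in (0, n) else 1.0
        total += weight * rho_nfw(r) ** 2 * ds
    return total

# Lines of sight closer to the halo centre pick up far more rho^2.
print(j_factor(0.5), j_factor(2.0))
```

For decaying dark matter the integrand is rho rather than rho squared, and the full code adds the substructure boost and skymap machinery on top of this kernel.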
Hybrid Reduced Order Modeling Algorithms for Reactor Physics Calculations
NASA Astrophysics Data System (ADS)
Bang, Youngsuk
Reduced order modeling (ROM) has been recognized as an indispensable approach when the engineering analysis requires many executions of high fidelity simulation codes. Examples of such engineering analyses in nuclear reactor core calculations, representing the focus of this dissertation, include the functionalization of the homogenized few-group cross-sections in terms of the various core conditions, e.g. burn-up, fuel enrichment, temperature, etc. This is done via assembly calculations which are executed many times to generate the required functionalization for use in the downstream core calculations. Other examples are sensitivity analysis, used to determine important core attribute variations due to input parameter variations, and uncertainty quantification, employed to estimate core attribute uncertainties originating from input parameter uncertainties. ROM constructs a surrogate model with quantifiable accuracy which can replace the original code for subsequent engineering analysis calculations. This is achieved by reducing the effective dimensionality of the input parameter, the state variable, or the output response spaces, by projection onto the so-called active subspaces. Confining the variations to the active subspace allows one to construct an ROM model of reduced complexity which can be solved more efficiently. This dissertation introduces a new algorithm to render reduction with the reduction errors bounded by a user-defined error tolerance; providing such bounds is the main challenge of existing ROM techniques and is the key to ensuring that the constructed ROM models are robust for all possible applications. It represents one of the algorithmic contributions of this dissertation to the ROM state-of-the-art. Recognizing that ROM techniques have been developed to render reduction at different levels, e.g. the input parameter space, the state space, and the response space, this dissertation offers a set of novel hybrid ROM algorithms which can be readily integrated into existing methods and offer higher computational efficiency and defensible accuracy of the reduced models. For example, the snapshots ROM algorithm is hybridized with the range finding algorithm to render reduction in the state space, e.g. the flux in reactor calculations. In another implementation, the perturbation theory used to calculate first order derivatives of responses with respect to parameters is hybridized with a forward sensitivity analysis approach to render reduction in the parameter space. Reduction at the state and parameter spaces can be combined to render further reduction at the interface between different physics codes in a multi-physics model, with the accuracy quantified in a similar manner to the single physics case. Although the proposed algorithms are generic in nature, we focus here on radiation transport models used in support of the design and analysis of nuclear reactor cores. In particular, we focus on replacing the traditional assembly calculations by ROM models to facilitate the generation of homogenized cross-sections for downstream core calculations. The implication is that assembly calculations could be done instantaneously, precluding the need for the expensive evaluation of the few-group cross-sections for all possible core conditions. Given the generic nature of the algorithms, we make an effort to introduce the material in a general form to allow non-nuclear engineers to benefit from this work.
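The range-finding hybridization mentioned above can be sketched as follows: grow a random probe subspace until the projection residual of the snapshot matrix falls below a user-set tolerance, which is the error-bounded flavor of reduction the dissertation targets. The snapshot matrix here is synthetic (exact rank 5); in practice its columns would be flux solutions.

```python
import numpy as np

# Randomized range finder on a snapshot matrix: enlarge a random probe
# subspace until the projection residual meets a user-defined tolerance.
# The snapshots are synthetic with exact rank 5; in a reactor setting
# the columns would be flux solutions from repeated assembly runs.
rng = np.random.default_rng(0)
snapshots = rng.standard_normal((400, 5)) @ rng.standard_normal((5, 60))

def range_finder(A, tol):
    """Return Q with ||A - Q Q^T A||_F <= tol * ||A||_F."""
    norm_A = np.linalg.norm(A)
    k = 2
    while True:
        Y = A @ rng.standard_normal((A.shape[1], k))
        Q, _ = np.linalg.qr(Y)
        if np.linalg.norm(A - Q @ (Q.T @ A)) <= tol * norm_A:
            return Q
        k *= 2   # double the subspace until the error bound is met

Q = range_finder(snapshots, tol=1.0e-8)
print("snapshot matrix", snapshots.shape, "reduced to rank", Q.shape[1])
```

The explicit residual check is what turns an ordinary snapshot POD into a reduction with a quantifiable, user-controlled error bound.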
NASA Technical Reports Server (NTRS)
Pao, J. L.; Mehrotra, S. C.; Lan, C. E.
1982-01-01
A computer code based on an improved vortex filament/vortex core method for predicting aerodynamic characteristics of slender wings with edge vortex separations is developed. The code is applicable to cambered wings, straked wings, or wings with leading-edge vortex flaps at subsonic speeds. The prediction of lifting pressure distributions and the computation time are improved by using a pair of concentrated vortex cores above the wing surface. The main features of this computer program are: (1) an arbitrary camber shape may be defined, and an option for exactly defining leading-edge flap geometry is also provided; (2) the side-edge vortex system is incorporated.
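The building block of such vortex filament methods is the Biot-Savart induced velocity of a straight filament segment; a sketch (pure geometry, with no vortex-core desingularization), checked against the infinite-vortex limit gamma/(2*pi*h):

```python
import math

# Biot-Savart induced velocity of a straight vortex filament segment
# a->b with circulation gamma at point p. Pure geometry; no vortex-core
# desingularization, so p must stay off the segment axis.
def induced_velocity(p, a, b, gamma):
    r1 = [p[i] - a[i] for i in range(3)]
    r2 = [p[i] - b[i] for i in range(3)]
    r0 = [b[i] - a[i] for i in range(3)]
    cx = [r1[1]*r2[2] - r1[2]*r2[1],
          r1[2]*r2[0] - r1[0]*r2[2],
          r1[0]*r2[1] - r1[1]*r2[0]]
    cross_sq = sum(c * c for c in cx)
    n1 = math.sqrt(sum(c * c for c in r1))
    n2 = math.sqrt(sum(c * c for c in r2))
    dot = sum(r0[i] * (r1[i] / n1 - r2[i] / n2) for i in range(3))
    f = gamma / (4.0 * math.pi) * dot / cross_sq
    return [f * c for c in cx]

# Sanity check: a long segment approaches the infinite-vortex result
# |v| = gamma / (2*pi*h) at perpendicular distance h.
v = induced_velocity((0.0, 1.0, 0.0), (-500.0, 0.0, 0.0), (500.0, 0.0, 0.0), 1.0)
print(v)   # z-component close to 1/(2*pi) = 0.159155
```

A filament/vortex-core method assembles the wing and free vortex system from many such segments, replacing the singular filament with a finite core near the axis.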
Development and Implementation of CFD-Informed Models for the Advanced Subchannel Code CTF
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blyth, Taylor S.; Avramova, Maria
The research described in this PhD thesis contributes to the development of efficient methods for utilization of high-fidelity models and codes to inform low-fidelity models and codes in the area of nuclear reactor core thermal-hydraulics. The objective is to increase the accuracy of predictions of quantities of interest using high-fidelity CFD models while preserving the efficiency of low-fidelity subchannel core calculations. An original methodology named Physics-based Approach for High-to-Low Model Information has been further developed and tested. The overall physical phenomena and corresponding localized effects introduced by the presence of spacer grids in light water reactor (LWR) cores are dissected into four corresponding basic processes, and the corresponding models are informed using high-fidelity CFD codes. These models are a spacer grid-directed cross-flow model, a grid-enhanced turbulent mixing model, a heat transfer enhancement model, and a spacer grid pressure loss model. The localized CFD models are developed and tested using the CFD code STAR-CCM+, and the corresponding global model development and testing in subchannel formulation is performed in the thermal-hydraulic subchannel code CTF. The improved CTF simulations utilize data files derived from CFD STAR-CCM+ simulation results covering the spacer grid design desired for inclusion in the CTF calculation. The current implementation of these models is examined and possibilities for improvement and further development are suggested. The validation experimental database is extended by including the OECD/NRC PSBT benchmark data. The outcome is an enhanced accuracy of CTF predictions while preserving the computational efficiency of a low-fidelity subchannel code.
Impact of thorium based molten salt reactor on the closure of the nuclear fuel cycle
NASA Astrophysics Data System (ADS)
Jaradat, Safwan Qasim Mohammad
The molten salt reactor (MSR) is one of the six reactor concepts selected by the Generation IV International Forum (GIF). The liquid fluoride thorium reactor (LFTR) is an MSR concept based on the thorium fuel cycle. The LFTR uses liquid fluoride salts as nuclear fuel, with 232Th and 233U as the fertile and fissile materials, respectively. Fluoride salts of these nuclides are dissolved in a mixed carrier salt of lithium and beryllium fluorides (FLiBe). The objective of this research was to complete feasibility studies of a small commercial thermal LFTR. The focus was on neutronic calculations in order to prescribe core design parameters such as core size, fuel block pitch (p), fuel channel radius, fuel path, reflector thickness, fuel salt composition, and power. To achieve this objective, the applicability of the Monte Carlo N-Particle Transport Code (MCNP) to MSR modeling was first verified. Then, a prescription for a conceptual small thermal LFTR was developed and relevant calculations were performed using MCNP to determine the main neutronic parameters of the reactor core. The MCNP code was used to study the reactor physics characteristics of the FUJI-U3 reactor, and the results were compared with those obtained for the original FUJI-U3 using the reactor physics code SRAC95 and the burnup analysis code ORIPHY2. The results were comparable with each other. On this basis, MCNP was found to be a reliable code for modeling a small thermal LFTR and studying all the related reactor physics characteristics. The results of this study were promising and successful in demonstrating a preliminary small commercial LFTR design. The outcome of using a small reactor core with a diameter/height of 280/260 cm that would operate for more than five years at a power level of 150 MWth was studied. The fuel system 7LiF - BeF2 - ThF4 - UF4 with 233U/232Th = 2.01% was the candidate fuel for this reactor core.
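A small back-of-envelope calculation shows how the reported 233U/232Th ratio of 2.01% splits the heavy-metal part of the fuel salt between ThF4 and UF4. The 12 mol% heavy-metal loading and the LiF/BeF2 split used below are illustrative assumptions, not values from the thesis.

```python
# Split a heavy-metal salt fraction into ThF4 and UF4 mole fractions
# from a given U/Th atom ratio (carrier-salt split is assumed).
def heavy_metal_split(hm_frac, u_to_th):
    """Return (ThF4, UF4) mole fractions given their sum and the U/Th ratio."""
    thf4 = hm_frac / (1.0 + u_to_th)
    uf4 = hm_frac - thf4
    return thf4, uf4

lif, bef2, hm = 0.72, 0.16, 0.12          # assumed carrier-salt composition
thf4, uf4 = heavy_metal_split(hm, 0.0201)
print(f"LiF {lif:.4f}  BeF2 {bef2:.4f}  ThF4 {thf4:.4f}  UF4 {uf4:.4f}")
```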
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hanan, N. A.; Matos, J. E.
At the request of the Czech Technical University in Prague (CTU), ANL has performed independent verification calculations using the MCNP Monte Carlo code for three core configurations of the VR-1 reactor: the current core configuration B1 with HEU (36%) IRT-3M fuel assemblies and the planned core configurations C1 and C2 with LEU (19.7%) IRT-4M fuel assemblies. Details of these configurations were provided to ANL by CTU. For core configuration B1, criticality calculations were performed for two sets of control rod positions provided to ANL by CTU. For core configurations C1 and C2, criticality calculations were done for cases with all control rods at the top positions, all control rods at the bottom positions, and two critical states of the reactor for different control rod positions. In addition, sensitivity studies for variation of the 235U mass in each fuel assembly and variation of the fuel meat and cladding thicknesses in each of the fuel tubes were done for the C1 core configuration. Finally, the reactivity worth of the individual control rods was calculated for the B1, C1, and C2 core configurations.
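A control rod's reactivity worth is conventionally obtained from a pair of criticality calculations, one with the rod withdrawn and one with it inserted. The sketch below shows that arithmetic; the keff values are made up for illustration and are not the VR-1 results.

```python
# Rod worth from two keff values (illustrative numbers, not VR-1 data).
def reactivity(k):
    """Static reactivity rho = (k - 1) / k (dimensionless)."""
    return (k - 1.0) / k

def rod_worth_pcm(k_out, k_in):
    """Rod worth as the reactivity difference, in pcm (1e-5 dk/k)."""
    return (reactivity(k_out) - reactivity(k_in)) * 1e5

print(round(rod_worth_pcm(1.00500, 0.99100)))  # ≈ 1406 pcm
```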
Validation of the WIMSD4M cross-section generation code with benchmark results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deen, J.R.; Woodruff, W.L.; Leal, L.E.
1995-01-01
The WIMSD4 code has been adopted for cross-section generation in support of the Reduced Enrichment Research and Test Reactor (RERTR) program at Argonne National Laboratory (ANL). Subsequently, the code has undergone several updates, and significant improvements have been achieved. The capability of generating group-collapsed micro- or macroscopic cross sections from the ENDF/B-V library and the more recent evaluation, ENDF/B-VI, in the ISOTXS format makes the modified version of the WIMSD4 code, WIMSD4M, very attractive, not only for the RERTR program, but also for the reactor physics community. The intent of the present paper is to validate the WIMSD4M cross-section libraries for reactor modeling of fresh water moderated cores. The results of calculations performed with multigroup cross-section data generated with the WIMSD4M code will be compared against experimental results. These results correspond to calculations carried out with thermal reactor benchmarks of the Oak Ridge National Laboratory (ORNL) unreflected HEU critical spheres, the TRX LEU critical experiments, and calculations of a modified Los Alamos HEU D2O moderated benchmark critical system. The benchmark calculations were performed with the discrete-ordinates transport code, TWODANT, using WIMSD4M cross-section data. Transport calculations using the XSDRNPM module of the SCALE code system are also included. In addition to transport calculations, diffusion calculations with the DIF3D code were also carried out, since the DIF3D code is used in the RERTR program for reactor analysis and design. For completeness, Monte Carlo results of calculations performed with the VIM and MCNP codes are also presented.
NASA Astrophysics Data System (ADS)
Al Zain, Jamal; El Hajjaji, O.; El Bardouni, T.; Boukhal, H.; Jaï, Otman
2018-06-01
The MNSR is a pool-type research reactor, which is difficult to model because of the importance of neutron leakage. The aim of this study is to evaluate a 2-D transport model of the reactor compatible with the latest release of the DRAGON code, together with a 3-D diffusion model for the DONJON code. The DRAGON code is used to generate the group macroscopic cross sections needed for full-core diffusion calculations; cross sections of all the reactor components were generated at different temperatures. These group constants were then used in the DONJON diffusion code to compute the effective multiplication factor (keff), the neutron flux, and the feedback reactivity coefficients, which account for variations in fuel and moderator temperatures as well as the void coefficient, using 69 energy groups. In each case only one parameter was changed while all other parameters were kept constant. Finally, good agreement between the calculated and measured values has been obtained for each of the feedback reactivity coefficients and for the neutron flux.
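A feedback reactivity coefficient of the kind computed in such DRAGON/DONJON parametric studies is simply the reactivity difference between two states divided by the parameter change. The keff values below are invented for illustration.

```python
# Isothermal temperature coefficient from two keff values at two
# moderator temperatures (invented numbers, not MNSR results).
def reactivity(k):
    return (k - 1.0) / k

def temp_coefficient_pcm_per_K(k1, T1, k2, T2):
    """Temperature coefficient in pcm/K from keff at two temperatures."""
    return (reactivity(k2) - reactivity(k1)) / (T2 - T1) * 1e5

alpha = temp_coefficient_pcm_per_K(1.00420, 300.0, 1.00310, 320.0)
print(f"{alpha:.2f} pcm/K")  # negative: reactivity falls as temperature rises
```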
Implementing Molecular Dynamics for Hybrid High Performance Computers - 1. Short Range Forces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, W Michael; Wang, Peng; Plimpton, Steven J
The use of accelerators such as general-purpose graphics processing units (GPGPUs) has become popular in scientific computing applications due to their low cost, impressive floating-point capabilities, high memory bandwidth, and low electrical power requirements. Hybrid high-performance computers, machines with more than one type of floating-point processor, are now becoming more prevalent due to these advantages. In this work, we discuss several important issues in porting a large molecular dynamics code for use on parallel hybrid machines: 1) choosing a hybrid parallel decomposition that works on central processing units (CPUs) with distributed memory and accelerator cores with shared memory, 2) minimizing the amount of code that must be ported for efficient acceleration, 3) utilizing the available processing power from both many-core CPUs and accelerators, and 4) choosing a programming model for acceleration. We present our solution to each of these issues for short-range force calculation in the molecular dynamics package LAMMPS. We describe algorithms for efficient short-range force calculation on hybrid high-performance machines. We describe a new approach for dynamic load balancing of work between CPU and accelerator cores. We describe the Geryon library that allows a single code to compile with both CUDA and OpenCL for use on a variety of accelerators. Finally, we present results on a parallel test cluster containing 32 Fermi GPGPUs and 180 CPU cores.
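One generic way to balance work dynamically between CPU and accelerator cores is to measure the time each side took for its share and reassign the split so both would finish together. The sketch below shows that feedback rule; it is a generic scheme with invented speed ratios, not the actual LAMMPS formula.

```python
# Rebalance a CPU/GPU work split from measured step times.
def rebalance(f_gpu, t_gpu, t_cpu):
    """Given the current GPU work fraction and the times both sides took,
    return the fraction that would make them finish simultaneously."""
    rate_gpu = f_gpu / t_gpu            # work per second on the GPU side
    rate_cpu = (1.0 - f_gpu) / t_cpu    # work per second on the CPU side
    return rate_gpu / (rate_gpu + rate_cpu)

f = 0.5
for _ in range(5):                      # iterate: times scale with load
    t_gpu = f / 8.0                     # pretend the GPU is 8x faster
    t_cpu = (1.0 - f) / 1.0
    f = rebalance(f, t_gpu, t_cpu)
print(round(f, 4))  # converges to 8/9 of the work on the GPU
```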
NASA Astrophysics Data System (ADS)
Wang, Rongjiang; Heimann, Sebastian; Zhang, Yong; Wang, Hansheng; Dahm, Torsten
2017-04-01
A hybrid method is proposed to calculate complete synthetic seismograms based on a spherically symmetric and self-gravitating Earth with a multi-layered structure of atmosphere, ocean, mantle, liquid core and solid core. For large wavelengths, a numerical scheme is used to solve the geodynamic boundary-value problem without any approximation on the deformation and gravity coupling. With the decreasing wavelength, the gravity effect on the deformation becomes negligible and the analytical propagator scheme can be used. Many useful approaches are used to overcome the numerical problems that may arise in both analytical and numerical schemes. Some of these approaches have been established in the seismological community and the others are developed for the first time. Based on the stable and efficient hybrid algorithm, an all-in-one code QSSP is implemented to cover the complete spectrum of seismological interests. The performance of the code is demonstrated by various tests including the curvature effect on teleseismic body and surface waves, the appearance of multiple reflected, teleseismic core phases, the gravity effect on long period surface waves and free oscillations, the simulation of near-field displacement seismograms with the static offset, the coupling of tsunami and infrasound waves, and free oscillations of the solid Earth, the atmosphere and the ocean. QSSP is open source software that can be used as a stand-alone FORTRAN code or may be applied in combination with a Python toolbox to calculate and handle Green's function databases for efficient coding of source inversion problems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curca-Tivig, Florin; Merk, Stephan; Pautz, Andreas
2007-07-01
Anticipating future needs of our customers and wishing to concentrate the synergies and competences existing in the company for the benefit of our customers, AREVA NP decided in 2002 to develop the next generation of coupled neutronics/core thermal-hydraulics (TH) code systems for fuel assembly and core design calculations for both PWR and BWR applications. The global CONVERGENCE project was born: after a feasibility study of one year (2002) and a conceptual phase of another year (2003), development was started at the beginning of 2004. The present paper introduces the CONVERGENCE project, presents the main features of the new code system ARCADIA® and concludes on customer benefits. ARCADIA® is designed to meet AREVA NP market and customers' requirements worldwide. Besides state-of-the-art physical modeling, numerical performance and industrial functionality, the ARCADIA® system features state-of-the-art software engineering. The new code system will bring a series of benefits for our customers: e.g. improved accuracy for heterogeneous cores (MOX/UOX, Gd...), better description of nuclide chains, and access to local neutronics/thermal-hydraulics and possibly thermal-mechanical information (3D pin-by-pin full core modeling). ARCADIA is a registered trademark of AREVA NP. (authors)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, J.; Alpan, F. A.; Fischer, G.A.
2011-07-01
The traditional two-dimensional (2D)/one-dimensional (1D) synthesis methodology has been widely used to calculate the fast neutron (>1.0 MeV) fluence exposure of the reactor pressure vessel in the belt-line region. However, this methodology cannot be expected to provide accurate fast neutron fluence calculations at elevations far above or below the active core region. A three-dimensional (3D) parallel discrete ordinates calculation for ex-vessel neutron dosimetry on a Westinghouse 4-Loop XL Pressurized Water Reactor has been performed, and it shows good agreement between calculated and measured results. Furthermore, the results show very different fast neutron flux values at some of the former plate locations and at elevations above and below the active core than those calculated by the 2D/1D synthesis method. This indicates that for certain irregular reactor internal structures, where the fast neutron flux has a very strong local effect, a 3D transport method is required to calculate accurate fast neutron exposure. (authors)
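The separability assumption behind 2D/1D synthesis can be sketched in a few lines: the 3D flux is approximated as the product of a 2D radial shape and a 1D axial shape over a normalizing mean. The shapes below are toy data, purely to show the construction that breaks down far above or below the active core.

```python
# 2D/1D flux synthesis: phi(r, z) ~ phi_2d(r) * phi_1d(z) / mean(phi_1d).
def synthesize(phi_2d, phi_1d):
    """Build a separable approximation of a 3D flux from 2D and 1D shapes."""
    mean_axial = sum(phi_1d) / len(phi_1d)
    return [[p2 * p1 / mean_axial for p1 in phi_1d] for p2 in phi_2d]

phi_2d = [1.0, 0.8, 0.5]            # radial shape at three positions (toy)
phi_1d = [0.6, 1.0, 1.4, 1.0, 0.6]  # axial shape, cosine-like (toy)
phi = synthesize(phi_2d, phi_1d)
# The separable form cannot represent any radial shape that changes
# with elevation, which is exactly what happens near irregular internals.
print(phi[0])
```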
NASA Astrophysics Data System (ADS)
Tiyapun, K.; Chimtin, M.; Munsorn, S.; Somchit, S.
2015-05-01
The objective of this work is to demonstrate a method for validating the prediction of neutron flux distribution in the irradiation tubes of the TRIGA research reactor (TRR-1/M1) using an MCNP computer code model. The reaction rates used in the experiment include the 27Al(n, α)24Na and 197Au(n, γ)198Au reactions. Aluminium (99.9 wt%) and gold (0.1 wt%) foils, as well as gold foils covered with cadmium, were irradiated in 9 locations in the core referred to as CT, C8, C12, F3, F12, F22, F29, G5, and G33. The experimental results were compared to calculations performed with MCNP using a detailed geometrical model of the reactor core. The experimental and calculated normalized reaction rates in the reactor core are in good agreement for both reactions, showing that the material and geometrical properties of the reactor core are modelled very well. The results indicated that the difference between the experimental measurements and the calculations using the MCNP geometrical model was below 10%. In conclusion, the MCNP computational model used to calculate the neutron flux and reaction rate distribution in the reactor core can be used for other reactor core parameters, including neutron spectrum calculations, dose rate calculations, power peaking factor calculations and optimization of research reactor utilization in the future, with confidence in the accuracy and reliability of the calculation.
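Validation studies of this kind usually report calculated-to-experimental (C/E) ratios per irradiation position. The sketch below computes such ratios and the worst deviation; the numbers are invented and only illustrate the below-10% criterion quoted in the abstract.

```python
# C/E comparison of normalized reaction rates (invented numbers).
def c_over_e(calc, meas):
    """Calculated-to-experimental ratio per irradiation position."""
    return {pos: calc[pos] / meas[pos] for pos in calc}

calc = {"CT": 1.000, "C8": 0.962, "F3": 0.870}   # MCNP, normalized (toy)
meas = {"CT": 1.000, "C8": 0.930, "F3": 0.905}   # foil activation (toy)
ratios = c_over_e(calc, meas)
worst = max(abs(r - 1.0) for r in ratios.values())
print(f"worst C/E deviation: {worst * 100:.1f}%")
```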
Kalantzis, Georgios; Tachibana, Hidenobu
2014-01-01
For microdosimetric calculations, event-by-event Monte Carlo (MC) methods are considered the most accurate. The main shortcoming of those methods is the extensive requirement for computational time. In this work we present an event-by-event MC code of low projectile energy electron and proton tracks for accelerated microdosimetric MC simulations on a graphics processing unit (GPU). Additionally, a hybrid implementation scheme was realized by employing OpenMP and CUDA in such a way that both the GPU and the multi-core CPU were utilized simultaneously. The two implementation schemes have been tested and compared with the sequential single-threaded MC code on the CPU. The performance comparison was based on the speed-up for a set of benchmarking cases of electron and proton tracks. A maximum speedup of 67.2 was achieved for the GPU-based MC code, while a further improvement of up to 20% was achieved for the hybrid approach. The results indicate the capability of our CPU-GPU implementation for accelerated MC microdosimetric calculations of both electron and proton tracks without loss of accuracy.
Study of an External Neutron Source for an Accelerator-Driven System using the PHITS Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sugawara, Takanori; Iwasaki, Tomohiko; Chiba, Takashi
A code system for the Accelerator Driven System (ADS) has been under development for analyzing dynamic behaviors of a subcritical core coupled with an accelerator. This code system, named DSE (Dynamics calculation code system for a Subcritical system with an External neutron source), consists of an accelerator part and a reactor part. The accelerator part employs a database, calculated using PHITS, for investigating effects related to the accelerator such as changes of beam energy, beam diameter, void generation, and target level. This analysis method may introduce some errors into the dynamics calculations, since the neutron source data derived from the database carry some errors from the fitting or interpolation procedures. In this study, the effects of various events are investigated to confirm that the method based on the database is appropriate.
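The interpolation step the abstract identifies as an error source can be illustrated with a minimal lookup table of external-source yield versus beam energy. The table values below are invented placeholders, not PHITS output.

```python
# Piecewise-linear interpolation in a (beam energy -> neutron yield)
# database (illustrative values only).
import bisect

energies = [600.0, 800.0, 1000.0, 1500.0]   # proton beam energy, MeV
yields = [14.0, 20.0, 25.0, 33.0]           # neutrons per proton (toy)

def source_yield(e):
    """Linearly interpolate the neutron yield at beam energy e (MeV)."""
    i = bisect.bisect_right(energies, e) - 1
    i = max(0, min(i, len(energies) - 2))   # clamp to the table range
    frac = (e - energies[i]) / (energies[i + 1] - energies[i])
    return yields[i] + frac * (yields[i + 1] - yields[i])

print(source_yield(900.0))  # halfway between the 800 and 1000 MeV entries
```

The interpolation error relative to the true (nonlinear) yield curve is precisely the kind of database error whose effect on the dynamics calculation the study set out to quantify.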
Wilkinson, Karl A; Hine, Nicholas D M; Skylaris, Chris-Kriton
2014-11-11
We present a hybrid MPI-OpenMP implementation of linear-scaling density functional theory within the ONETEP code. We illustrate its performance on a range of high performance computing (HPC) platforms comprising shared-memory nodes with fast interconnect. Our work has focused on applying OpenMP parallelism to the routines which dominate the computational load, attempting where possible to parallelize different loops from those already parallelized within MPI. This includes 3D FFT box operations, sparse matrix algebra operations, calculation of integrals, and Ewald summation. While the underlying numerical methods are unchanged, these developments represent significant changes to the algorithms used within ONETEP to distribute the workload across CPU cores. The new hybrid code exhibits much-improved strong scaling relative to the MPI-only code and permits calculations with a much higher ratio of cores to atoms. These developments result in a significantly shorter time to solution than was possible using MPI alone and facilitate the application of the ONETEP code to systems larger than previously feasible. We illustrate this with benchmark calculations on an amyloid fibril trimer containing 41,907 atoms. We use the code to study the mechanism of delamination of cellulose nanofibrils when undergoing sonication, a process which is controlled by a large number of interactions that collectively determine the structural properties of the fibrils. Many energy evaluations were needed for these simulations, and as these systems comprise up to 21,276 atoms this would not have been feasible without the developments described here.
Neutronics calculation of RTP core
NASA Astrophysics Data System (ADS)
Rabir, Mohamad Hairie B.; Zin, Muhammad Rawi B. Mohamed; Karim, Julia Bt. Abdul; Bayar, Abi Muttaqin B. Jalal; Usang, Mark Dennis Anak; Mustafa, Muhammad Khairul Ariff B.; Hamzah, Na'im Syauqi B.; Said, Norfarizan Bt. Mohd; Jalil, Muhammad Husamuddin B.
2017-01-01
Reactor calculation and simulation are significantly important to ensure the safety and better utilization of a research reactor. The Malaysian PUSPATI TRIGA Reactor (RTP) achieved initial criticality on June 28, 1982. The reactor is designed to effectively support various fields of basic nuclear research, manpower training, and production of radioisotopes. Since the early 1990s, neutronics modelling has been used as part of its routine in-core fuel management activities. Several computer codes have been used in RTP since then, based on 1D neutron diffusion, 2D neutron diffusion and 3D Monte Carlo neutron transport methods. This paper describes the current progress of and gives an overview on neutronics modelling development in RTP. Several important parameters were analysed, such as keff, reactivity, neutron flux, power distribution and fission product build-up, for the latest core configuration. The developed core neutronics model was validated by means of comparison with experimental and measurement data. Along with the RTP core model, the calculation procedure was also developed to establish better prediction capability of RTP's behaviour.
Methodes d'optimisation des parametres 2D du reflecteur dans un reacteur a eau pressurisee
NASA Astrophysics Data System (ADS)
Clerc, Thomas
With a third of the reactors in operation, the Pressurized Water Reactor (PWR) is today the most widely used reactor design in the world. This technology equips all 19 EDF power plants. PWRs fit into the category of thermal reactors, because it is mainly the thermal neutrons that contribute to the fission reaction. Pressurized light water is used both as the moderator of the reaction and as the coolant. The active part of the core is composed of uranium slightly enriched in uranium-235. The reflector is a region surrounding the active core, containing mostly water and stainless steel. The purpose of the reflector is to protect the vessel from radiation, and also to slow down the neutrons and reflect them back into the core. Given that the neutrons sustain the fission reaction, the study of their behavior within the core is essential to understanding how the reactor works. The neutron behavior is governed by the transport equation, which is very complex to solve numerically and requires very long calculation times. This is the reason why the core codes used in this study solve simplified equations to approximate the neutron behavior in the core within an acceptable calculation time. In particular, we focus our study on the diffusion equation and approximated transport equations, such as the SPN or SN equations. The physical properties of the reflector are radically different from those of the fissile core, and this structural change causes a significant tilt in the neutron flux at the core/reflector interface. This is why it is very important to accurately model the reflector, in order to precisely recover the neutron behavior over the whole core. Existing reflector calculation techniques are based on the Lefebvre-Lebigot method. This method is only valid if the energy continuum of the neutrons is discretized in two energy groups and if the diffusion equation is used, and it leads to the calculation of a homogeneous reflector.
The aim of this study is to create a computational scheme able to compute the parameters of heterogeneous, multi-group reflectors, with both diffusion and SPN/SN operators. For this purpose, two computational schemes are designed to perform such a reflector calculation. The strategy used in both schemes is to minimize the discrepancies between a power distribution computed with a core code and a reference distribution obtained with an APOLLO2 calculation based on the Method Of Characteristics (MOC). In both computational schemes, the optimization parameters, also called control variables, are the diffusion coefficients in each zone of the reflector for diffusion calculations, and the P1-corrected macroscopic total cross-sections in each zone of the reflector for SPN/SN calculations (or correction factors on these parameters). After a first validation of our computational schemes, the results are computed, always by optimizing the fast diffusion coefficient for each zone of the reflector. All the tools of data assimilation have been used to reflect the different behavior of the solvers in the different parts of the core. Moreover, the reflector is refined into six separate zones, corresponding to the physical structure of the reflector, so there are six control variables for the optimization algorithms. Our computational schemes are then able to compute heterogeneous, 2-group or multi-group reflectors, using diffusion or SPN/SN operators. The optimization performed reduces the discrepancy distribution between the power computed with the core codes and the reference power. However, there are two main limitations to this study: first, the homogeneous modeling of the reflector assemblies does not properly describe its physical structure near the core/reflector interface.
Moreover, the fissile assemblies are modeled in an infinite medium, and this model reaches its limit at the core/reflector interface. These two problems should be tackled in future studies. (Abstract shortened by UMI.)
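The optimization loop described in the thesis can be caricatured in a few lines: adjust per-zone reflector coefficients to minimize the squared discrepancy between a core-code power distribution and a reference one. The "core code" below is a toy linear response with two control variables, and the optimizer is a plain coordinate descent; neither is the actual solver or algorithm used in the thesis.

```python
# Toy analogue of fitting per-zone reflector diffusion coefficients to a
# reference power distribution (all response coefficients are invented).
def core_power(D):
    """Stand-in core code: assembly powers respond linearly to the two
    per-zone reflector diffusion coefficients."""
    return [1.0 + 0.30 * D[0] - 0.10 * D[1],
            0.9 + 0.05 * D[0] + 0.25 * D[1],
            0.8 - 0.15 * D[0] + 0.20 * D[1]]

def cost(D, ref):
    """Sum of squared power discrepancies against the reference."""
    return sum((p - r) ** 2 for p, r in zip(core_power(D), ref))

ref = core_power([1.2, 0.7])   # plays the role of the APOLLO2/MOC reference
D = [1.0, 1.0]                 # initial per-zone diffusion coefficients
step = 0.1
for _ in range(500):           # coordinate descent with step halving
    improved = False
    for i in range(len(D)):
        for delta in (step, -step):
            trial = list(D)
            trial[i] += delta
            if cost(trial, ref) < cost(D, ref):
                D, improved = trial, True
    if not improved:
        step *= 0.5
print([round(d, 3) for d in D])   # approaches the reference values [1.2, 0.7]
```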
NASA Astrophysics Data System (ADS)
Hartini, Entin; Andiwijayakusuma, Dinan
2014-09-01
This research was carried out to develop a code for uncertainty analysis based on a statistical approach for assessing the uncertainty of input parameters. In the burn-up calculation of fuel, the uncertainty analysis was performed for the input parameters fuel density, coolant density and fuel temperature. The calculation is performed during irradiation using the Monte Carlo N-Particle Transport code. The uncertainty method is based on the probability density function. The developed code is a Python script that couples with MCNPX for criticality and burn-up calculations. The simulation is done by modeling the geometry of the PWR terrace with MCNPX at a power of 54 MW with UO2 pellet fuel. The calculation uses the continuous-energy cross-section data library ENDF/B-VI. MCNPX requires nuclear data in ACE format, so interfaces were developed for obtaining nuclear data in ACE format from ENDF through a dedicated NJOY calculation process covering temperature changes over a certain range.
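The statistical approach described here amounts to sampling the uncertain inputs from assumed probability density functions, running the code for each sample, and collecting output statistics. The sketch below does this with a smooth toy response standing in for the MCNPX run; the distributions and coefficients are invented.

```python
# Uncertainty propagation by random sampling of input parameters
# (normal distributions and response coefficients are assumptions).
import random, statistics

random.seed(42)

def toy_keff(fuel_rho, cool_rho, fuel_T):
    """Stand-in for one code run: a smooth response, not real physics."""
    return 1.0 + 0.02 * (fuel_rho - 10.4) + 0.05 * (cool_rho - 0.72) \
               - 1.0e-5 * (fuel_T - 900.0)

samples = [toy_keff(random.gauss(10.4, 0.05),    # fuel density, g/cm3
                    random.gauss(0.72, 0.01),    # coolant density, g/cm3
                    random.gauss(900.0, 15.0))   # fuel temperature, K
           for _ in range(1000)]
mean = statistics.mean(samples)
sigma = statistics.stdev(samples)
print(f"keff = {mean:.5f} +/- {sigma:.5f}")
```

With a real transport code each sample would be a full MCNPX run, which is why such studies keep the sample count modest.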
Rotor Wake/Stator Interaction Noise Prediction Code Technical Documentation and User's Manual
NASA Technical Reports Server (NTRS)
Topol, David A.; Mathews, Douglas C.
2010-01-01
This report documents the improvements and enhancements made by Pratt & Whitney to two NASA programs which together calculate the noise from a rotor wake/stator interaction. The code is a combination of subroutines from two NASA programs with many new features added by Pratt & Whitney. To perform a calculation, V072 first uses a semi-empirical wake prediction to calculate the rotor wake characteristics at the stator leading edge. Results from the wake model are then automatically input into a rotor wake/stator interaction analytical noise prediction routine which calculates inlet and aft sound power levels for the blade-passage-frequency tones and their harmonics, along with the complex radial mode amplitudes. The code allows a noise calculation to be performed for a compressor rotor wake/stator interaction, a fan wake/FEGV interaction, or a fan wake/core stator interaction. This report is split into two parts: the first part discusses the technical documentation of the program as improved by Pratt & Whitney; the second part is a user's manual which describes how input files are created and how the code is run.
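The tone frequencies such a code predicts follow directly from the blade count and shaft speed, and the circumferential mode orders of the radial modes follow the classic Tyler-Sofrin rule m = hB - kV. The sketch below shows both; the blade/vane counts and speed are illustrative, not from the report.

```python
# Blade-passage-frequency tones and Tyler-Sofrin mode orders for a
# rotor/stator interaction (illustrative blade counts and RPM).
def bpf_tones(n_blades, rpm, n_harmonics=3):
    """Return the BPF tone frequencies (Hz) for harmonics 1..n."""
    bpf = n_blades * rpm / 60.0
    return [h * bpf for h in range(1, n_harmonics + 1)]

def tyler_sofrin_modes(n_blades, n_vanes, harmonic, k_range=range(-2, 3)):
    """Circumferential mode orders m = h*B - k*V for a few scattering k."""
    return [harmonic * n_blades - k * n_vanes for k in k_range]

print(bpf_tones(22, 5000))            # ≈ [1833.3, 3666.7, 5500.0] Hz
print(tyler_sofrin_modes(22, 54, 1))  # mode orders at the fundamental BPF
```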
Variation of SNOMED CT coding of clinical research concepts among coding experts.
Andrews, James E; Richesson, Rachel L; Krischer, Jeffrey
2007-01-01
To compare consistency of coding among professional SNOMED CT coders representing three commercial providers of coding services when coding clinical research concepts with SNOMED CT. A sample of clinical research questions from case report forms (CRFs) generated by the NIH-funded Rare Disease Clinical Research Network (RDCRN) was sent to three coding companies with instructions to code the core concepts using SNOMED CT. The sample consisted of 319 question/answer pairs from 15 separate studies. The companies were asked to select SNOMED CT concepts (in any form, including post-coordinated) that capture the core concept(s) reflected in each question. They were also asked to state their level of certainty, as well as how precise they felt their coding was. Basic frequencies were calculated to determine raw agreement among the companies, along with other descriptive information. Krippendorff's alpha was used to obtain a statistical measure of agreement among the coding companies for several measures (semantic, certainty, and precision). No significant level of agreement among the experts was found. There is little semantic agreement in the coding of clinical research data items across coders from three professional coding services, even using a very liberal definition of agreement.
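Krippendorff's alpha for nominal data, the agreement statistic used in this study, can be computed from a coincidence matrix of value pairs within each coded item. The sketch below implements the standard nominal-data formula on a toy three-coder example; the codes are invented, not RDCRN data.

```python
# Krippendorff's alpha for nominal data: alpha = 1 - D_o / D_e.
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """units: list of lists of codes, one inner list per coded item."""
    o = Counter()                 # coincidence matrix over value pairs
    n_c = Counter()               # marginal counts per value
    n = 0                         # total pairable values
    for values in units:
        m = len(values)
        if m < 2:                 # items with one coder are not pairable
            continue
        n += m
        for v in values:
            n_c[v] += 1
        for a, b in permutations(values, 2):
            o[(a, b)] += 1.0 / (m - 1)
    disagree_obs = sum(w for (a, b), w in o.items() if a != b)
    disagree_exp = sum(n_c[a] * n_c[b] for a in n_c for b in n_c if a != b)
    return 1.0 - (n - 1) * disagree_obs / disagree_exp

perfect = [["C1", "C1", "C1"], ["C2", "C2", "C2"]]
mixed = [["C1", "C1", "C2"], ["C2", "C2", "C2"], ["C1", "C3", "C2"]]
print(krippendorff_alpha_nominal(perfect))  # 1.0: complete agreement
print(krippendorff_alpha_nominal(mixed))    # low: mostly disagreement
```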
NASA Astrophysics Data System (ADS)
Dewi Syarifah, Ratna; Su'ud, Zaki; Basar, Khairul; Irwanto, Dwi
2017-07-01
Nuclear power has seen progressive improvement in the operating performance of existing reactors, ensuring the economic competitiveness of nuclear electricity around the world. The GFR uses a gas coolant and a fast neutron spectrum. This research uses a helium coolant, which has low neutron moderation, is chemically inert, and remains single phase. A comparative study of various geometrical core designs for a modular GFR with UN-PuN fuel, long-life without refuelling, has been performed. The calculations use the SRAC2006 code, with both the PIJ and CITATION calculation modules, and the JENDL-4.0 data libraries. The fuel fraction is varied from 40% to 65%. In this research, we varied the geometry of the reactor core to find the optimum geometry design. The first variation of the geometry design is a balanced cylinder, meaning that the active core diameter (D) equals the active core height (H); the second is a pancake cylinder (D > H); and the third is a tall cylinder (D < H).
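The three geometry variants can be compared at a fixed active-core volume by solving V = (π/4)D²H for each D/H ratio. The volume and ratios below are illustrative, not the values studied in the paper.

```python
# Cylinder dimensions for balanced (D = H), pancake (D > H) and tall
# (D < H) cores at a fixed active-core volume (1 m^3 for illustration).
import math

def cylinder_dims(volume, d_over_h):
    """Return (D, H) for a cylinder of given volume and D/H ratio."""
    # V = pi/4 * D^2 * H with D = r * H  =>  H = (4V / (pi r^2))^(1/3)
    h = (4.0 * volume / (math.pi * d_over_h ** 2)) ** (1.0 / 3.0)
    return d_over_h * h, h

for label, ratio in [("balanced", 1.0), ("pancake", 1.5), ("tall", 0.6)]:
    d, h = cylinder_dims(1.0, ratio)
    print(f"{label:8s} D = {d:.3f} m, H = {h:.3f} m")
```

Holding the volume fixed isolates the effect of shape (neutron leakage through the different surface-to-volume ratios) from the effect of core size.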
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benoit, J. C.; Bourdot, P.; Eschbach, R.
2012-07-01
A Decay Heat (DH) experiment on the whole core of the French Sodium-Cooled Fast Reactor PHENIX was conducted in May 2008. The measurements began an hour and a half after the shutdown of the reactor and lasted twelve days. It is one of the experiments used for the experimental validation of the depletion code DARWIN, thereby confirming the excellent performance of the aforementioned code. Discrepancies between measured and calculated decay heat do not exceed 8%. (authors)
PEBBLE: a two-dimensional steady-state pebble bed reactor thermal hydraulics code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vondy, D.R.
1981-09-01
This report documents the local implementation of the PEBBLE code to treat the two-dimensional steady-state pebble bed reactor thermal hydraulics problem. This code is implemented as a module of a computation system used for reactor core history calculations. Given power density data, the geometric description in (RZ), and basic heat removal conditions and thermal properties, the coolant properties, flow conditions, and temperature distributions in the pebble fuel elements are predicted. The calculation is oriented to the continuous fueling, steady state condition with consideration of the effect of the high energy neutron flux exposure and temperature history on the thermal conductivity. The coolant flow conditions are calculated for the same geometry as used in the neutronics calculation, power density and fluence data being used directly, and temperature results are made available for subsequent use.
Test case for VVER-1000 complex modeling using MCU and ATHLET
NASA Astrophysics Data System (ADS)
Bahdanovich, R. B.; Bogdanova, E. V.; Gamtsemlidze, I. D.; Nikonov, S. P.; Tikhomirov, G. V.
2017-01-01
The correct modeling of processes occurring in the core of a reactor is very important. In the design and operation of nuclear reactors it is necessary to cover the entire range of reactor physics. Very often calculations are carried out within the framework of only one domain, for example structural analysis, neutronics (NT), or thermal hydraulics (TH). However, this is not always adequate, as the impact of related physical processes occurring simultaneously can be significant. Therefore, coupled calculations are recommended. This paper provides a test case for the coupled neutronics-thermal hydraulics calculation of a VVER-1000 using the precision neutron transport code MCU and the system thermal-hydraulics code ATHLET. The model is based on a fuel assembly (type 2M). A test case for the calculation of power distribution, fuel and coolant temperatures, coolant density, etc. has been developed. It is assumed that the test case will be used for simulation of the VVER-1000 reactor and in calculations with other programs, for example for code cross-verification. A detailed description of the codes (MCU, ATHLET), the geometry and material composition of the model, and the iterative calculation scheme is given in the paper. A Perl script was written to couple the codes.
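Couplings of this kind are commonly organized as a fixed-point (Picard) iteration: the neutronics solution fixes the power, the thermal-hydraulics solution returns feedback, and the loop repeats until the power stops changing. A minimal sketch with scalar stand-ins for the exchanged fields (the solver callables and tolerance are illustrative, not the actual MCU/ATHLET interface):

```python
def coupled_iteration(power0, solve_th, solve_nt, tol=1e-6, max_iter=50):
    """Picard iteration between a thermal-hydraulics solver and a
    neutronics solver: power -> TH feedback -> updated power."""
    power = power0
    for _ in range(max_iter):
        feedback = solve_th(power)      # e.g., fuel/coolant temperatures
        new_power = solve_nt(feedback)  # power recomputed with feedback
        if abs(new_power - power) < tol:
            return new_power
        power = new_power
    raise RuntimeError("coupled iteration did not converge")

# toy solvers forming a contraction mapping, so the loop converges
p = coupled_iteration(1.0, lambda p: 0.5 * p, lambda f: 1.0 + 0.5 * f)
```

In practice the exchanged quantities are full spatial fields (power distributions, temperature and density maps), and under-relaxation is often needed for stability.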
NASA Astrophysics Data System (ADS)
Bejaoui, Najoua
Pressurized water reactors (PWRs) form the largest fleet of nuclear reactors in operation around the world. Although these reactors have been studied extensively by designers and operators using efficient numerical methods, some calculational weaknesses remain unresolved given the geometric complexity of the core, such as the analysis of the neutron flux's behavior at the core-reflector interface. The standard calculation scheme is a two-step process. In the first step, a detailed calculation at the assembly level with reflective boundary conditions provides homogenized cross sections for the assemblies, condensed to a reduced number of groups; this step is called the lattice calculation. The second step uses the homogenized properties of each assembly to calculate reactor properties at the core level; this step is called the full-core or whole-core calculation. This decoupling of the two calculation steps is a source of methodological bias, particularly at the core-reflector interface: the periodicity hypothesis used to generate the cross-section libraries becomes less pertinent for assemblies adjacent to the reflector, which is generally represented by one of two models, an equivalent reflector or albedo matrices. The reflector helps to slow down neutrons leaving the reactor and return them to the core. This effect leads to two fission peaks in fuel assemblies located at the core/reflector interface, the fission rate increasing due to the greater proportion of re-entrant neutrons. This change in the neutron spectrum penetrates deep inside the fuel located on the periphery of the core. To remedy this, we simulated a peripheral assembly reflected with a TMI-PWR reflector and developed an advanced calculation scheme that takes into account the environment of the peripheral assemblies and generates equivalent neutronic properties for the reflector.
This scheme was tested on a core without control mechanisms and loaded with fresh fuel. The results of this study show that explicit representation of the reflector and calculation of the peripheral assembly with our advanced scheme correct the energy spectrum at the core interface and increase the computed peripheral power by up to 12% compared with the reference scheme.
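The lattice step described above condenses and homogenizes: region-wise cross sections are collapsed to assembly-averaged constants by flux-volume weighting. A single-group sketch (names and numbers are illustrative):

```python
def homogenize(sigma, flux, volume):
    """Flux-volume-weighted homogenized cross section for one group:
    sum(sigma_i * phi_i * V_i) / sum(phi_i * V_i) over regions i."""
    num = sum(s * f * v for s, f, v in zip(sigma, flux, volume))
    den = sum(f * v for f, v in zip(flux, volume))
    return num / den
```

With a flat flux this reduces to a volume average; a distorted flux shifts the weights, which is exactly why constants generated with a periodic (reflective) spectrum are less pertinent for assemblies facing the reflector.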
Development and preliminary verification of the 3D core neutronic code: COCO
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, H.; Mo, K.; Li, W.
With its recent booming economic growth and the environmental concerns that follow, China is proactively pushing forward nuclear power development and encouraging the tapping of clean energy. Under this situation CGNPC, as one of the largest energy enterprises in China, is planning to develop its own nuclear technology in order to support the growing number of nuclear plants either under construction or in operation. This paper introduces the recent progress in software development at CGNPC. The focus is placed on the physical models and preliminary verification results from the recent development of the 3D core neutronic code COCO. In the COCO code, the non-linear Green's function method is employed to calculate the neutron flux. In order to use discontinuity factors, the Neumann (second kind) boundary condition is utilized in the Green's function nodal method. Additionally, the COCO code includes the other necessary physical models, e.g. a single-channel thermal-hydraulic module, a burnup module, a pin power reconstruction module, and a cross-section interpolation module. The preliminary verification results show that the COCO code is sufficient for reactor core design and analysis for pressurized water reactors (PWR). (authors)
Development of an object-oriented ORIGEN for advanced nuclear fuel modeling applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Skutnik, S.; Havloej, F.; Lago, D.
2013-07-01
The ORIGEN package serves as the core depletion and decay calculation module within the SCALE code system. A recent major refactoring of the ORIGEN code architecture, undertaken as part of an overall modernization of the SCALE code system, has both greatly enhanced its maintainability and afforded several new capabilities useful for incorporating depletion analysis into other code frameworks. This paper presents an overview of the improved ORIGEN code architecture (including the methods and data structures introduced) as well as current and potential future applications of the new ORIGEN framework. (authors)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daily, Charles R.
2015-10-01
An assessment of the impact on the High Flux Isotope Reactor (HFIR) reactor vessel (RV) displacements-per-atom (dpa) rates due to operations with the proposed low enriched uranium (LEU) core described by Ilas and Primm has been performed and is presented herein. The analyses documented herein support the conclusion that conversion of HFIR to low-enriched uranium (LEU) core operations using the LEU core design of Ilas and Primm will have no negative impact on HFIR RV dpa rates. Since its inception, HFIR has been operated with highly enriched uranium (HEU) cores. As part of an effort sponsored by the National Nuclear Security Administration (NNSA), conversion to LEU cores is being considered for future HFIR operations. The HFIR LEU configurations analyzed are consistent with the LEU core models used by Ilas and Primm and the HEU balance-of-plant models used by Risner and Blakeman in the latest analyses performed to support the HFIR materials surveillance program. The Risner and Blakeman analyses, as well as the studies documented herein, are the first to apply the hybrid transport methods available in the Automated Variance reduction Generator (ADVANTG) code to HFIR RV dpa rate calculations. These calculations have been performed on the Oak Ridge National Laboratory (ORNL) Institutional Cluster (OIC) with version 1.60 of the Monte Carlo N-Particle 5 (MCNP5) computer code.
Porting ONETEP to graphical processing unit-based coprocessors. 1. FFT box operations.
Wilkinson, Karl; Skylaris, Chris-Kriton
2013-10-30
We present the first graphical processing unit (GPU) coprocessor-enabled version of the Order-N Electronic Total Energy Package (ONETEP) code for linear-scaling first principles quantum mechanical calculations on materials. This work focuses on porting to the GPU the parts of the code that involve atom-localized fast Fourier transform (FFT) operations. These are among the most computationally intensive parts of the code and are used in core algorithms such as the calculation of the charge density, the local potential integrals, the kinetic energy integrals, and the nonorthogonal generalized Wannier function gradient. We have found that direct porting of the isolated FFT operations did not provide any benefit. Instead, it was necessary to tailor the port to each of the aforementioned algorithms to optimize data transfer to and from the GPU. A detailed discussion of the methods used and tests of the resulting performance are presented, which show that individual steps in the relevant algorithms are accelerated by a significant amount. However, the transfer of data between the GPU and host machine is a significant bottleneck in the reported version of the code. In addition, an initial investigation into a dynamic precision scheme for the ONETEP energy calculation has been performed to take advantage of the enhanced single precision capabilities of GPUs. The methods used here result in no disruption to the existing code base. Furthermore, as the developments reported here concern the core algorithms, they will benefit the full range of ONETEP functionality. Our use of a directive-based programming model ensures portability to other forms of coprocessors and will allow this work to form the basis of future developments to the code designed to support emerging high-performance computing platforms. Copyright © 2013 Wiley Periodicals, Inc.
Thermal hydraulic-severe accident code interfaces for SCDAP/RELAP5/MOD3.2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coryell, E.W.; Siefken, L.J.; Harvego, E.A.
1997-07-01
The SCDAP/RELAP5 computer code is designed to describe the overall reactor coolant system thermal-hydraulic response, core damage progression, and fission product release during severe accidents. The code is being developed at the Idaho National Engineering Laboratory under the primary sponsorship of the Office of Nuclear Regulatory Research of the U.S. Nuclear Regulatory Commission. The code is the result of merging the RELAP5, SCDAP, and COUPLE codes. The RELAP5 portion of the code calculates the overall reactor coolant system thermal-hydraulics and associated reactor system responses. The SCDAP portion of the code describes the response of the core and associated vessel structures. The COUPLE portion of the code describes the response of lower plenum structures and debris and the failure of the lower head. The code uses a modular approach with the overall structure, input/output processing, and data structures following the pattern established for RELAP5. The code uses a building block approach to allow the code user to easily represent a wide variety of systems and conditions through a powerful input processor. The user can represent a wide variety of experiments or reactor designs by selecting fuel rods and other assembly structures from a range of representative core component models, and arrange them in a variety of patterns within the thermal-hydraulic network. The COUPLE portion of the code uses two-dimensional representations of the lower plenum structures and debris beds. The flow of information between the different portions of the code occurs at each system-level time step advancement. The RELAP5 portion of the code describes the fluid transport around the system. These fluid conditions are used as thermal and mass transport boundary conditions for the SCDAP and COUPLE structures and debris beds.
Fenton, Susan H; Benigni, Mary Sue
2014-01-01
The transition from ICD-9-CM to ICD-10-CM/PCS is expected to result in longitudinal data discontinuities, as occurred with cause-of-death coding in 1999. The General Equivalence Maps (GEMs), while useful for suggesting potential maps, do not provide guidance regarding the frequency of any matches. Longitudinal data comparisons can only be reliable if they use comparability ratios or factors calculated from records coded in both classification systems. This study utilized 3,969 de-identified dually coded records to examine raw comparability ratios, as well as the comparability ratios for the Joint Commission Core Measures. The raw comparability factor results range from 16.216 for Nicotine dependence, unspecified, uncomplicated to 118.009 for Chronic obstructive pulmonary disease, unspecified. The Joint Commission Core Measure comparability factor results range from 27.15 for Acute Respiratory Failure to 130.16 for Acute Myocardial Infarction. These results indicate significant differences in comparability between ICD-9-CM and ICD-10-CM code assignment, including when the codes are used for external reporting such as the Joint Commission Core Measures. To prevent errors in decision-making and reporting, all stakeholders relying on longitudinal data for measure reporting and other purposes should investigate the impact of the conversion on their data.
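A comparability factor of the kind reported here can be read as the number of cases assigned a diagnosis under the new classification per 100 cases assigned the corresponding diagnosis under the old one, computed from the same dually coded records. A minimal sketch (the function name and numbers are illustrative, not the study's exact formula):

```python
def comparability_factor(n_new_code, n_old_code):
    """Comparability factor from dually coded records: cases receiving
    the ICD-10-CM code per 100 cases receiving the ICD-9-CM code."""
    return 100.0 * n_new_code / n_old_code

# a factor above 100 means the condition is captured more often under
# the new classification; below 100, less often
```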
Methods and codes for neutronic calculations of the MARIA research reactor.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrzejewski, K.; Kulikowska, T.; Bretscher, M. M.
2002-02-18
The core of the MARIA high-flux multipurpose research reactor is highly heterogeneous. It consists of beryllium blocks arranged in a 6 x 8 matrix, tubular fuel assemblies, control rods, and irradiation channels. The reflector is also heterogeneous and consists of graphite blocks clad with aluminum. Its structure is perturbed by the experimental beam tubes. This paper presents the methods and codes used to calculate the neutronic characteristics of the MARIA reactor and the experience gained thus far at IAE and ANL. At ANL, the methods for MARIA calculations were developed in connection with the RERTR program. At IAE, a package of programs was developed to help the reactor operator optimize fuel utilization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ezsoel, G.; Guba, A.; Perneczky, L.
Results of a small-break loss-of-coolant accident experiment, conducted on the PMK-2 integral-type test facility, are presented. The experiment simulated a 1% break in the cold leg of a VVER-440-type reactor. The main phenomena of the experiment are discussed, and for selected events a more detailed interpretation is given with the help of measured void fraction obtained by a special measurement device. Two thermohydraulic computer codes, RELAP5 and ATHLET, are used for posttest calculations. The aim of these calculations is to investigate the codes' capability for modeling natural circulation phenomena in VVER-440-type reactors. Therefore, the results of the experiment and both calculations are compared. Both codes predict most of the transient events well, with the exception that RELAP5 fails to predict the dryout period in the core. In the experiment, the hot- and cold-leg loop-seal clearing is accompanied by natural circulation instabilities, which can be explained by means of the ATHLET calculation.
Determination of the NPP Krško spent fuel decay heat
NASA Astrophysics Data System (ADS)
Kromar, Marjan; Kurinčič, Bojan
2017-07-01
Nuclear fuel is designed to support the fission process in a reactor core. Some of the isotopes formed during fission decay and produce decay heat and radiation. Accurate knowledge of the nuclide inventory producing decay heat is important after reactor shutdown, during fuel storage, and during subsequent reprocessing or disposal. In this paper, the possibility of calculating the fuel isotopic composition and determining the fuel decay heat with the Serpent code is investigated. Serpent is a well-known Monte Carlo code used primarily for the calculation of neutron transport in a reactor, and it has been validated for burn-up calculations. In the calculation of fuel decay heat, a different set of isotopes is important than in the neutron transport case. A comparison with the Origen code is performed to verify that Serpent takes into account all isotopes important for assessing the fuel decay heat. After the code validation, a sensitivity study is carried out analyzing the influence of several factors: enrichment, fuel temperature, moderator temperature (density), soluble boron concentration, average power, burnable absorbers, and burnup.
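Decay heat is the inventory-weighted sum of decay rate times recoverable energy per decay. A minimal sketch for independently decaying nuclides, ignoring decay chains (which Serpent and Origen of course resolve); all names and numbers are illustrative:

```python
import math

def decay_heat(inventory, t):
    """Total decay heat [W] at time t [s].

    inventory: list of (N0 atoms, half-life [s], Q per decay [J]);
    each nuclide decays independently, N(t) = N0 * exp(-lambda * t).
    """
    total = 0.0
    for n0, t_half, q in inventory:
        lam = math.log(2.0) / t_half  # decay constant from half-life
        total += lam * n0 * math.exp(-lam * t) * q
    return total
```

The sensitivity study then amounts to recomputing the inventory (N0 values) under varied enrichment, temperatures, boron, power history, and burnup, and propagating it through a sum of this form.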
Present Status and Extensions of the Monte Carlo Performance Benchmark
NASA Astrophysics Data System (ADS)
Hoogenboom, J. Eduard; Petrovic, Bojan; Martin, William R.
2014-06-01
The NEA Monte Carlo Performance benchmark started in 2011, aiming to monitor over the years the ability to perform a full-size Monte Carlo reactor core calculation with detailed power production for each fuel pin, including its axial distribution. This paper gives an overview of the results contributed thus far. It shows that reaching a statistical accuracy of 1% for most of the small fuel zones requires about 100 billion neutron histories. The efficiency of parallel execution of Monte Carlo codes on a large number of processor cores shows clear limitations for computer clusters with common types of compute nodes. On true supercomputers, however, the speedup of parallel calculations continues to increase up to very large numbers of processor cores. More experience is needed with calculations on true supercomputers using large numbers of processors in order to predict whether the requested calculations can be done in a short time. As the specifications of the reactor geometry for this benchmark are well suited to further investigations of full-core Monte Carlo calculations, and a need is felt for testing issues other than computational performance, proposals are presented for extending the benchmark to a suite of problems: evaluating fission source convergence for a system with a high dominance ratio, coupling with thermal-hydraulics calculations to evaluate the use of different temperatures and coolant densities, and studying the correctness and effectiveness of burnup calculations. Moreover, other contemporary proposals for a full-core calculation with realistic geometry and material composition are discussed.
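The 100-billion-history figure reflects the 1/sqrt(N) convergence of Monte Carlo tally statistics: halving the relative error of a pin-power tally costs four times the histories. A sketch of the scaling (the numbers are illustrative):

```python
def histories_needed(n0, rel_err0, rel_err_target):
    """1/sqrt(N) scaling: histories needed to shrink a tally's relative
    error from rel_err0 (observed with n0 histories) to rel_err_target."""
    return n0 * (rel_err0 / rel_err_target) ** 2

# a small fuel zone at 10% relative error after 1e9 histories needs
# on the order of 1e11 histories to reach 1%
```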
Deterministic Modeling of the High Temperature Test Reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ortensi, J.; Cogliati, J. J.; Pope, M. A.
2010-06-01
Idaho National Laboratory (INL) is tasked with the development of reactor physics analysis capability for the Next Generation Nuclear Plant (NGNP) project. In order to examine INL’s current prismatic reactor deterministic analysis tools, the project is conducting a benchmark exercise based on modeling the High Temperature Test Reactor (HTTR). This exercise entails the development of a model for the initial criticality, a 19-column thin annular core, and the fully loaded core critical condition with 30 columns. Special emphasis is devoted to the annular core modeling, which shares more characteristics with the NGNP base design. The DRAGON code is used in this study because it offers significant ease and versatility in modeling prismatic designs. Despite some geometric limitations, the code performs quite well compared to other lattice physics codes. DRAGON can generate transport solutions via collision probability (CP), method of characteristics (MOC), and discrete ordinates (Sn) methods. A fine-group cross-section library based on the SHEM 281-group energy structure is used in the DRAGON calculations. HEXPEDITE is the hexagonal-z full-core solver used in this study and is based on the Green’s function solution of the transverse-integrated equations. In addition, two Monte Carlo (MC) codes, MCNP5 and PSG2/SERPENT, provide benchmarking capability for the DRAGON and nodal diffusion solver codes. The results from this study show a consistent bias of 2–3% for the core multiplication factor. This systematic error has also been observed in other HTTR benchmark efforts and is well documented in the literature. The ENDF/B-VII graphite and U235 cross sections appear to be the main source of the error. The isothermal temperature coefficients calculated with the fully loaded core configuration agree well with other benchmark participants but are 40% higher than the experimental values.
This discrepancy with the measurement stems from the fact that during the experiments the control rods were adjusted to maintain criticality, whereas in the model, the rod positions were fixed. In addition, this work includes a brief study of a cross section generation approach that seeks to decouple the domain in order to account for neighbor effects. This spectral interpenetration is a dominant effect in annular HTR physics. This analysis methodology should be further explored in order to reduce the error that is systematically propagated in the traditional generation of cross sections.
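The isothermal temperature coefficient being compared is typically obtained from two eigenvalue calculations at uniform core temperatures a known interval apart, converting each keff to reactivity first. A minimal sketch (names and numbers are illustrative, not the HTTR benchmark procedure):

```python
def isothermal_temp_coeff(k_cold, k_hot, delta_t):
    """Isothermal temperature coefficient [pcm/K] from two keff
    eigenvalues at uniform core temperatures delta_t [K] apart."""
    rho_cold = (k_cold - 1.0) / k_cold  # reactivity at the low temperature
    rho_hot = (k_hot - 1.0) / k_hot     # reactivity at the high temperature
    return (rho_hot - rho_cold) / delta_t * 1e5
```

A negative value, as expected for these cores, means reactivity falls as the core heats up.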
DOE Office of Scientific and Technical Information (OSTI.GOV)
Downar, Thomas
This report summarizes the current status of VERA-CS verification and validation (V&V) for PWR core follow operation and proposes a multi-phase plan for continuing VERA-CS V&V in FY17 and FY18. The proposed plan recognizes the hierarchical nature of a multi-physics code system such as VERA-CS and the importance of first achieving an acceptable level of V&V on each of the single-physics codes before focusing on the V&V of the coupled-physics solution. The report summarizes the V&V of each of the single-physics code systems currently used for core follow analysis (i.e., MPACT, CTF, multigroup cross-section generation, and BISON / Fuel Temperature Tables) and proposes specific actions to achieve a uniformly acceptable level of V&V in FY17. The report also recognizes the ongoing development of other codes important for PWR core follow (e.g., TIAMAT, MAMBA3D) and proposes Phase II (FY18) VERA-CS V&V activities in which those codes will also reach an acceptable level of V&V. The report then summarizes the current status of VERA-CS multi-physics V&V for PWR core follow and the ongoing PWR core follow V&V activities for FY17. An automated procedure and output data format are proposed for standardizing the output of core follow calculations and automatically generating tables and figures for the VERA-CS LaTeX file. A set of acceptance metrics is also proposed for the evaluation and assessment of core follow results; these would be used within the script to automatically flag any results which require further analysis or more detailed explanation prior to being added to the VERA-CS validation base. After the automation scripts have been completed and tested using BEAVRS, the VERA-CS plan proposes that the Watts Bar cycle depletion cases be performed with the new cross-section library and included in the first draft of the new VERA-CS manual for release at the end of PoR15.
Also, within the constraints imposed by the proprietary nature of plant data, as many as possible of the FY17 AMA Plant Core Follow cases should also be included in the VERA-CS manual at the end of PoR15. After completion of the ongoing development of TIAMAT for fully coupled, full-core calculations with VERA-CS / BISON 1.5D, and after the completion of the refactoring of MAMBA3D for CIPS analysis in FY17, selected cases from the VERA-CS validation base should be performed, beginning with the legacy cases of Watts Bar and BEAVRS in PoR16. Finally, as potential Phase III future work, some additional considerations are identified for extending the VERA-CS V&V to other reactor types such as the BWR.
Numerical optimization of three-dimensional coils for NSTX-U
NASA Astrophysics Data System (ADS)
Lazerson, S. A.; Park, J.-K.; Logan, N.; Boozer, A.
2015-10-01
A tool for the calculation of optimal three-dimensional (3D) perturbative magnetic fields in tokamaks has been developed. The IPECOPT code builds upon the stellarator optimization code STELLOPT to allow for optimization of linear ideal magnetohydrodynamic perturbed equilibria computed by the IPEC code. This tool has been applied to NSTX-U equilibria, addressing which fields are the most effective at driving NTV torques. The NTV torque calculation is performed by the PENT code. Optimization of the normal field spectrum shows that fields with n = 1 character can drive a large core torque. It is also shown that fields with n = 3 features are capable of driving edge torque and some core torque. Coil current optimization (using the planned in-vessel and existing RWM coils) on NSTX-U suggests the planned coil set is adequate for core and edge torque control. Comparison between error field correction experiments on DIII-D and the optimizer shows good agreement. Notice: This manuscript has been authored by Princeton University under Contract Number DE-AC02-09CH11466 with the U.S. Department of Energy. The publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.
Fundamental approaches for analysis of thermal hydraulic parameters for the PUSPATI Research Reactor
NASA Astrophysics Data System (ADS)
Hashim, Zaredah; Lanyau, Tonny Anak; Farid, Mohamad Fairus Abdul; Kassim, Mohammad Suhaimi; Azhar, Noraishah Syahirah
2016-01-01
The 1-MW PUSPATI Research Reactor (RTP) is the one and only nuclear pool-type research reactor in Malaysia, developed by General Atomics (GA). It was installed at the Malaysian Nuclear Agency and reached first criticality on 8 June 1982. Based on the initial core, which comprised 80 standard TRIGA fuel elements, a fundamental thermal hydraulic model was investigated for steady-state operation using the PARET code. The main objective of this paper is to determine the temperature profiles and the Departure from Nucleate Boiling Ratio (DNBR) of the RTP at full power operation. The second objective is to confirm that the values obtained from the PARET code are in agreement with the Safety Analysis Report (SAR) for the RTP. The code was employed for the hot and average channels in the core in order to calculate fuel centerline and surface, cladding, and coolant temperatures, as well as DNBR values. In this study, it was found that the PARET results for the safety-related thermal hydraulic parameters of the initial core, which was cooled by natural convection, were in agreement with the design values and safety limits in the SAR.
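The DNBR at each axial node is the critical heat flux predicted by a correlation divided by the local operating heat flux, and the safety figure of merit is its minimum along the hot channel. A minimal sketch (not PARET's correlation set; names and numbers are illustrative):

```python
def min_dnbr(q_chf, q_local):
    """Minimum departure-from-nucleate-boiling ratio along a channel:
    min over axial nodes of (critical heat flux / local heat flux).

    q_chf and q_local are node-wise heat fluxes in the same units
    (e.g., MW/m^2), so the ratio is dimensionless.
    """
    return min(c / q for c, q in zip(q_chf, q_local))
```

A minimum DNBR comfortably above 1 (with margin set by the safety analysis) indicates the hot channel stays in nucleate boiling.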
Analysis of JSI TRIGA MARK II reactor physical parameters calculated with TRIPOLI and MCNP.
Henry, R; Tiselj, I; Snoj, L
2015-03-01
A new computational model of the JSI TRIGA Mark II research reactor was built for the TRIPOLI computer code and compared with the existing MCNP model. The same modelling assumptions were used in order to isolate differences between the mathematical models of the two Monte Carlo codes. Differences between the TRIPOLI and MCNP predictions of keff were up to 100 pcm. Further validation was performed with analyses of normalized reaction rates and computations of kinetic parameters for various core configurations. Copyright © 2014 Elsevier Ltd. All rights reserved.
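The 100 pcm figure uses the conventional per-cent-mille unit, 1 pcm = 1e-5, applied to the difference between the two keff predictions. A minimal sketch:

```python
def diff_pcm(k_a, k_b):
    """Difference between two multiplication factors expressed in
    pcm (per cent mille, 1 pcm = 1e-5)."""
    return (k_a - k_b) * 1e5
```

So two codes predicting keff of 1.00100 and 1.00000 differ by 100 pcm.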
HackaMol: An Object-Oriented Modern Perl Library for Molecular Hacking on Multiple Scales
Riccardi, Demian M.; Parks, Jerry M.; Johs, Alexander; ...
2015-03-20
HackaMol is an open source, object-oriented toolkit written in Modern Perl that organizes atoms within molecules and provides chemically intuitive attributes and methods. The library consists of two components: HackaMol, the core that contains classes for storing and manipulating molecular information, and HackaMol::X, the extensions that use the core. We tested the core; it is well-documented and easy to install across computational platforms. Our goal for the extensions is to provide a more flexible space for researchers to develop and share new methods. In this application note, we provide a description of the core classes and two extensions: HackaMol::X::Calculator, an abstract calculator that uses code references to generalize interfaces with external programs, and HackaMol::X::Vina, a structured class that provides an interface with the AutoDock Vina docking program.
HackaMol: An Object-Oriented Modern Perl Library for Molecular Hacking on Multiple Scales.
Riccardi, Demian; Parks, Jerry M; Johs, Alexander; Smith, Jeremy C
2015-04-27
HackaMol is an open source, object-oriented toolkit written in Modern Perl that organizes atoms within molecules and provides chemically intuitive attributes and methods. The library consists of two components: HackaMol, the core that contains classes for storing and manipulating molecular information, and HackaMol::X, the extensions that use the core. The core is well-tested, well-documented, and easy to install across computational platforms. The goal of the extensions is to provide a more flexible space for researchers to develop and share new methods. In this application note, we provide a description of the core classes and two extensions: HackaMol::X::Calculator, an abstract calculator that uses code references to generalize interfaces with external programs, and HackaMol::X::Vina, a structured class that provides an interface with the AutoDock Vina docking program.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhong, Z.; Klann, R. T.; Nuclear Engineering Division
2007-08-03
An initial series of calculations of the reactivity worth of the OSMOSE samples in the MINERVE reactor with the R2-UO2 and MORGANE/R core configurations was completed. The calculation model was generated using the lattice physics code DRAGON. In addition, an initial comparison of calculated values to experimental measurements was performed based on preliminary results for the R1-MOX configuration.
SOPHAEROS code development and its application to falcon tests
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lajtha, G.; Missirlian, M.; Kissane, M.
1996-12-31
One of the key issues in source-term evaluation in nuclear reactor severe accidents is determination of the transport behavior of fission products released from the degrading core. The SOPHAEROS computer code is being developed to predict fission product transport in a mechanistic way in light water reactor circuits. These applications of the SOPHAEROS code to the Falcon experiments, among others not presented here, indicate that the numerical scheme of the code is robust, and no convergence problems are encountered. The calculation is also very fast, taking only about three times real time on a Sun SPARC 5 workstation and running typically ~10 times faster than an identical calculation with the VICTORIA code. The study demonstrates that the SOPHAEROS 1.3 code is a suitable tool for prediction of the vapor chemistry and fission product transport with a reasonable level of accuracy. Furthermore, the flexibility of the code material data bank allows improvement of understanding of fission product transport and deposition in the circuit. Performing sensitivity studies with different chemical species or with different properties (saturation pressure, chemical equilibrium constants) is very straightforward.
NASA Astrophysics Data System (ADS)
Martin, G. B.; Kirtman, B.; Spera, F. J.
2010-12-01
Computational studies implementing Density Functional Theory (DFT) methods have become very popular in the Materials Sciences in recent years. DFT codes are now used routinely to simulate properties of geomaterials—mainly silicates and geochemically important metals such as Fe. These materials are ubiquitous in the Earth’s mantle and core and in terrestrial exoplanets. Because of computational limitations, most First Principles Molecular Dynamics (FPMD) calculations are done on systems of only 100 atoms for a few picoseconds. While this approach can be useful for calculating physical quantities related to crystal structure, vibrational frequency, and other lattice-scale properties (especially in crystals), it would be useful to be able to compute larger systems, especially for extracting transport properties and coordination statistics. Previous studies have used codes such as VASP, where CPU time increases as N², making calculations on systems of more than 100 atoms computationally very taxing. SIESTA (Soler, et al. 2002) is an order-N (linear-scaling) DFT code that enables electronic structure and MD computations on larger systems (N ≈ 1000) by making approximations such as localized numerical orbitals. Here we test the applicability of SIESTA to simulate geosilicates in the liquid and glass state. We have used SIESTA for MD simulations of liquid Mg2SiO4 at various state points pertinent to the Earth’s mantle and congruous with those calculated in a previous DFT study using the VASP code (DeKoker, et al. 2008). The core electronic wave functions of Mg, Si, and O were approximated using pseudopotentials with core cutoff radii of 1.38, 1.0, and 0.61 Angstroms, respectively. The Ceperly-Alder parameterization of the Local Density Approximation (LDA) was used as the exchange-correlation functional.
Known systematic overbinding of LDA was corrected with the addition of a pressure term, P ≈ 1.6 GPa, which is the pressure calculated by SIESTA at the experimental zero-pressure volume of forsterite under static conditions (Stixrude and Lithgow-Bertelloni 2005). Results are reported here that show SIESTA calculations of T and P on densities in the range of 2.7 - 5.0 g/cc of liquid Mg2SiO4 are similar to the VASP calculations of DeKoker et al. (2008), which used the same functional. This opens the possibility of conducting fast ab initio MD simulations of geomaterials with hundreds of atoms.
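The practical gap between N²-scaling and order-N codes can be made concrete with a toy cost estimate. The reference size and prefactors below are illustrative assumptions, not measured timings; only the scaling exponents matter:

```python
# Back-of-the-envelope comparison of DFT cost growth with system size:
# a conventional plane-wave code whose CPU time grows roughly as N^2,
# versus an order-N (linear-scaling) code such as SIESTA.

def relative_cost(n_atoms, exponent, ref_atoms=100):
    """Cost relative to a hypothetical 100-atom reference calculation."""
    return (n_atoms / ref_atoms) ** exponent

for n in (100, 500, 1000):
    quadratic = relative_cost(n, 2)   # N^2 regime (VASP-like, per the text)
    linear = relative_cost(n, 1)      # order-N regime (SIESTA-like)
    print(f"N={n:4d}: N^2 cost x{quadratic:6.0f}, order-N cost x{linear:4.0f}")
```

At N = 1000 the quadratic estimate is 100 times the reference cost while the linear one is only 10 times, which is the motivation for linear-scaling approximations such as localized numerical orbitals.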
NASA Astrophysics Data System (ADS)
Mielikainen, Jarno; Huang, Bormin; Huang, Allen H.-L.
2015-05-01
Intel Many Integrated Core (MIC) ushers in a new era of supercomputing speed, performance, and compatibility. It allows developers to run code at trillions of calculations per second using a familiar programming model. In this paper, we present our results of optimizing the updated Goddard shortwave radiation Weather Research and Forecasting (WRF) scheme on Intel Many Integrated Core (MIC) hardware. The Intel Xeon Phi coprocessor is the first product based on the Intel MIC architecture, and it consists of up to 61 cores connected by a high-performance on-die bidirectional interconnect. The coprocessor supports all important Intel development tools, so the development environment is a familiar one to a vast number of CPU developers. However, getting maximum performance out of the Xeon Phi requires some novel optimization techniques, which are discussed in this paper. The results show that the optimizations improved performance of the original code on the Xeon Phi 7120P by a factor of 1.3x.
Jones, S.; Hirschi, R.; Pignatari, M.; ...
2015-01-15
We present a comparison of 15 M⊙, 20 M⊙ and 25 M⊙ stellar models from three different codes (GENEC, KEPLER and MESA) and their nucleosynthetic yields. The models are calculated from the main sequence up to the pre-supernova (pre-SN) stage and do not include rotation. The GENEC and KEPLER models hold physics assumptions that are characteristic of the two codes. The MESA code is generally more flexible; overshooting of the convective core during the hydrogen and helium burning phases in MESA is chosen such that the CO core masses are consistent with those in the GENEC models. Full nucleosynthesis calculations are performed for all models using the NuGrid post-processing tool MPPNP, and the key energy-generating nuclear reaction rates are the same for all codes. We are thus able to highlight the key differences between the models that are caused by the contrasting physics assumptions and numerical implementations of the three codes. A reasonable agreement is found between the surface abundances predicted by the models computed using the different codes, with GENEC exhibiting the strongest enrichment of H-burning products and KEPLER exhibiting the weakest. There are large variations in both the structure and composition of the models—the 15 M⊙ and 20 M⊙ in particular—at the pre-SN stage from code to code, caused primarily by convective shell merging during the advanced stages. For example, the C-shell abundances of O, Ne and Mg predicted by the three codes span one order of magnitude in the 15 M⊙ models. For the alpha elements between Si and Fe the differences are even larger. The s-process abundances in the C shell are modified by the merging of convective shells; the modification is strongest in the 15 M⊙ model, in which the C-shell material is exposed to O-burning temperatures and the γ-process is activated.
The variation in the s-process abundances across the codes is smallest in the 25 M⊙ models, where it is comparable to the impact of nuclear reaction rate uncertainties. In general the differences in the results from the three codes are due to their contrasting physics assumptions (e.g. prescriptions for mass loss and convection). The broadly similar evolution of the 25 M⊙ models gives us reassurance that different stellar evolution codes do produce similar results. For the 15 M⊙ and 20 M⊙ models, however, the different input physics and the interplay between the various convective zones lead to important differences in both the pre-supernova structure and nucleosynthesis predicted by the three codes. For the KEPLER models the core masses are different and therefore an exact match could not be expected.
Nonlinear dynamic simulation of single- and multi-spool core engines
NASA Technical Reports Server (NTRS)
Schobeiri, T.; Lippke, C.; Abouelkheir, M.
1993-01-01
In this paper a new computational method for accurate simulation of the nonlinear dynamic behavior of single- and multi-spool core engines, turbofan engines, and power generation gas turbine engines is presented. In order to perform the simulation, a modularly structured computer code has been developed which includes individual mathematical modules representing various engine components. The generic structure of the code enables the dynamic simulation of arbitrary engine configurations ranging from single-spool thrust generation to multi-spool thrust/power generation engines under adverse dynamic operating conditions. For precise simulation of turbine and compressor components, row-by-row calculation procedures were implemented that account for the specific turbine and compressor cascade and blade geometry and characteristics. The dynamic behavior of the subject engine is calculated by solving a number of systems of partial differential equations, which describe the unsteady behavior of the individual components. In order to ensure the capability, accuracy, robustness, and reliability of the code, comprehensive critical performance assessment and validation tests were performed. As representatives, three different transient cases with single- and multi-spool thrust and power generation engines were simulated. The transient cases range from operating with a prescribed fuel schedule, to extreme load changes, to generator and turbine shut down.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steiner, J.L.; Lime, J.F.; Elson, J.S.
One-dimensional TRAC transient calculations of the process inherent ultimate safety (PIUS) advanced reactor design were performed for a pump-trip SCRAM. The TRAC calculations showed that the reactor power response and shutdown were in qualitative agreement with the one-dimensional analyses presented in the PIUS Preliminary Safety Information Document (PSID) submitted by Asea Brown Boveri (ABB) to the US Nuclear Regulatory Commission for preapplication safety review. The PSID analyses were performed with the ABB-developed RIGEL code. The TRAC-calculated phenomena and trends were also similar to those calculated with another one-dimensional PIUS model, the Brookhaven National Laboratory developed PIPA code. A TRAC pump-trip SCRAM transient has also been calculated with a TRAC model containing a multi-dimensional representation of the PIUS internal flow structures and core region. The results obtained using the TRAC fully one-dimensional PIUS model are compared to the RIGEL, PIPA, and TRAC multi-dimensional results.
Anisn-Dort Neutron-Gamma Flux Intercomparison Exercise for a Simple Testing Model
NASA Astrophysics Data System (ADS)
Boehmer, B.; Konheiser, J.; Borodkin, G.; Brodkin, E.; Egorov, A.; Kozhevnikov, A.; Zaritsky, S.; Manturov, G.; Voloschenko, A.
2003-06-01
The ability of transport codes ANISN, DORT, ROZ-6, MCNP and TRAMO, as well as nuclear data libraries BUGLE-96, ABBN-93, VITAMIN-B6 and ENDF/B-6 to deliver consistent gamma and neutron flux results was tested in the calculation of a one-dimensional cylindrical model consisting of a homogeneous core and an outer zone with a single material. Model variants with H2O, Fe, Cr and Ni in the outer zones were investigated. The results are compared with MCNP-ENDF/B-6 results. Discrepancies are discussed. The specified test model is proposed as a computational benchmark for testing calculation codes and data libraries.
Impact of Americium-241 (n,γ) Branching Ratio on SFR Core Reactivity and Spent Fuel Characteristics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hiruta, Hikaru; Youinou, Gilles J.; Dixon, Brent W.
An accurate prediction of core physics and fuel cycle parameters largely depends on the level of detail and accuracy of the nuclear data taken into account in the calculations. 241Am is a major gateway nuclide for most of the minor actinides and is thus an important nuclide for core physics and fuel-cycle calculations. The 241Am(n,γ) branching ratio (BR) is in fact energy dependent (see Fig. 1); therefore, it is necessary to take the spectrum effect into account when calculating the average BR for full-core depletion calculations. Moreover, the accuracy of the BR used in the depletion calculations could significantly influence the core physics performance and post-irradiated fuel compositions. The BR of 241Am(n,γ) in the ENDF/B-VII.0 library is relatively small and flat in the thermal energy range, gradually increases within the intermediate energy range, and becomes even larger in the fast energy range. This indicates that the properly collapsed BR for fast reactors could be significantly different from that of thermal reactors. The evaluated BRs also differ from one evaluation to another. As seen in Table I, average BRs for several evaluated libraries calculated by means of a fast spectrum are similar but show some differences. Most currently available depletion codes use a pre-determined single-value BR for each library; ideally, however, it should be determined on the fly, like one-group cross sections. These issues provide a strong incentive to investigate the effect of different 241Am(n,γ) BRs on core and spent fuel parameters. This paper investigates the impact of the 241Am(n,γ) BR on the results of SFR full-core fuel-cycle calculations. The analysis is performed by gradually increasing the value of the BR from 0.15 to 0.25 and studying its impact on the core reactivity and characteristics of SFR spent fuels over extended storage times (~10,000 years).
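The "properly collapsed" BR referred to above is a flux-weighted average over the neutron spectrum. A minimal sketch, assuming a coarse three-group structure with hypothetical group fluxes and group-wise BR values (illustrations only, not evaluated data):

```python
# Spectrum-collapsed (flux-weighted) branching ratio:
#   one-group BR = sum_g(phi_g * BR_g) / sum_g(phi_g)
# All group values below are hypothetical illustrations.

def collapse_branching_ratio(flux, br):
    """Flux-weighted one-group branching ratio over energy groups."""
    assert len(flux) == len(br)
    return sum(p * b for p, b in zip(flux, br)) / sum(flux)

# Three coarse groups: thermal, intermediate, fast.
thermal_spectrum = [0.7, 0.2, 0.1]       # thermal-reactor-like flux shares
fast_spectrum    = [0.0, 0.2, 0.8]       # fast-reactor-like flux shares
br_by_group      = [0.10, 0.15, 0.22]    # BR rising with energy, as in the text

print(collapse_branching_ratio(thermal_spectrum, br_by_group))  # lower value
print(collapse_branching_ratio(fast_spectrum, br_by_group))     # higher value
```

The same group-wise BR collapses to noticeably different one-group values for the two spectra, which is the motivation for an on-the-fly (spectrum-dependent) BR rather than a single pre-determined value per library.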
Monte Carlo Analysis of the Battery-Type High Temperature Gas Cooled Reactor
NASA Astrophysics Data System (ADS)
Grodzki, Marcin; Darnowski, Piotr; Niewiński, Grzegorz
2017-12-01
The paper presents a neutronic analysis of the battery-type 20 MWth high-temperature gas cooled reactor. The developed reactor model is based on publicly available data for an `early design' variant of the U-battery. The investigated core is a battery-type small modular reactor: a graphite-moderated, uranium-fueled, prismatic, helium-cooled high-temperature gas cooled reactor with a graphite reflector. Two alternative core designs were investigated: the first has a central reflector and 30×4 prismatic fuel blocks; the second has no central reflector and 37×4 blocks. The SERPENT Monte Carlo reactor physics computer code, with ENDF and JEFF nuclear data libraries, was applied. Several nuclear design static criticality calculations were performed and compared with available reference results. The analysis covered single assembly models and full core simulations for two geometry models: homogeneous and heterogeneous (explicit). A sensitivity analysis of the reflector graphite density was performed. An acceptable agreement between calculations and reference design was obtained. All calculations were performed for the fresh core state.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Galloway, Jack; Matthews, Topher
The development of MAMBA is targeted at capturing both core-wide CRUD induced power shifts (CIPS) and pin-level CRUD induced localized corrosion (CILC). Both CIPS and CILC require information from thermal-hydraulic, neutronics, and fuel performance codes, although the degree of coupling differs between the two effects. Since CIPS necessarily requires a core-wide power distribution solve, it requires tight coupling with a neutronics code. Conversely, CILC tends to be an individual-pin phenomenon, requiring tight coupling with a fuel performance code. As efforts are now focused on coupling MAMBA within the VERA suite, a natural separation has surfaced in which a FORTRAN rewrite of MAMBA is optimal for VERA integration to capture CIPS behavior, while a CILC-focused calculation would benefit from a tight coupling with BISON, motivating a MOOSE version of MAMBA.
Criticality Calculations with MCNP6 - Practical Lectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Forrest B.; Rising, Michael Evan; Alwin, Jennifer Louise
2016-11-29
These slides are used to teach MCNP (Monte Carlo N-Particle) usage to nuclear criticality safety analysts. The lecture topics are: course information, introduction, MCNP basics, criticality calculations, advanced geometry, tallies, adjoint-weighted tallies and sensitivities, physics and nuclear data, parameter studies, NCS validation I, NCS validation II, NCS validation III, case study 1 - solution tanks, case study 2 - fuel vault, case study 3 - B&W core, case study 4 - simple TRIGA, case study 5 - fissile mat. vault, criticality accident alarm systems. After completion of this course, you should be able to: develop an input model for MCNP; describe how cross section data impact Monte Carlo and deterministic codes; describe the importance of validation of computer codes and how it is accomplished; describe the methodology supporting Monte Carlo codes and deterministic codes; describe pitfalls of Monte Carlo calculations; and discuss the strengths and weaknesses of Monte Carlo and discrete ordinates codes. The diffusion theory model is not strictly valid for treating fissile systems in which neutron absorption, voids, and/or material boundaries are present; in the context of these limitations, you should be able to identify a fissile system for which a diffusion theory solution would be adequate.
Results of the Simulation of the HTR-Proteus Core 4.2 Using PEBBED-COMBINE: FY10 Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hans Gougar
2010-07-01
The Idaho National Laboratory’s deterministic neutronics analysis codes and methods were applied to the computation of the core multiplication factor of the HTR-Proteus pebble bed reactor critical facility. This report is a follow-on to INL/EXT-09-16620, in which the same calculation was performed using earlier versions of the codes and less developed methods. In that report, results indicated that the cross sections generated using COMBINE-7.0 did not yield satisfactory estimates of keff, and it was concluded that the modeling of control rods was not satisfactory. In the past year, improvements to the homogenization capability in COMBINE have enabled the explicit modeling of TRISO particles, pebbles, and heterogeneous core zones including control rod regions using a new multi-scale version of COMBINE into which the one-dimensional discrete ordinates transport code ANISN has been integrated. The new COMBINE is shown to yield benchmark-quality results for pebble unit cell models, the first step in preparing few-group diffusion parameters for core simulations. In this report, the full critical core is modeled once again, but with cross sections generated using the capabilities and physics of the improved COMBINE code. The new PEBBED-COMBINE model enables the explicit modeling of the pebbles and control rod region along with a better approximation to structures in the reflector. Initial results for the core multiplication factor indicate significant improvement in the INL’s tools for modeling the neutronic properties of a pebble bed reactor. Errors on the order of 1.6-2.5% in keff are obtained, a significant improvement over the 5-6% error observed in the earlier analysis. This is acceptable for a code system and model in the early stages of development but still too high for a production code. Analysis of a simpler core model indicates an over-prediction of the flux in the low end of the thermal spectrum; causes of this discrepancy are under investigation.
New homogenization techniques and assumptions were used in this analysis and as such require further confirmation and validation. Further refinement and review of the complex Proteus core model are likely to reduce the errors even further.
Theoretical Developments in Understanding Massive Star Formation
NASA Technical Reports Server (NTRS)
Yorke, Harold W.; Bodenheimer, Peter
2007-01-01
Except under special circumstances massive stars in galactic disks will form through accretion. The gravitational collapse of a molecular cloud core will initially produce one or more low mass quasi-hydrostatic objects of a few Jupiter masses. Through subsequent accretion the masses of these cores grow as they simultaneously evolve toward hydrogen burning central densities and temperatures. We review the evolution of accreting (proto-)stars, including new results calculated with a publicly available stellar evolution code written by the authors.
NASA Astrophysics Data System (ADS)
Osuský, F.; Bahdanovich, R.; Farkas, G.; Haščík, J.; Tikhomirov, G. V.
2017-01-01
The paper is focused on the development of a coupled neutronics-thermal hydraulics model for the Gas-cooled Fast Reactor. It is necessary to carefully investigate coupled calculations of new concepts to avoid recriticality scenarios, as it is not possible to ensure a sub-critical state for a fast reactor core under core disruptive accident conditions. The above-mentioned calculations are also very suitable for the development of new passive or inherent safety systems that can mitigate the occurrence of recriticality scenarios. In the paper, the most promising fuel material compositions together with a geometry model are described for the Gas-cooled Fast Reactor. A seven-fuel-pin and fuel assembly geometry is proposed as a test case for coupled calculation with three different enrichments of fissile material in the form of Pu-UC. A reflective boundary condition is used in the radial directions of the test case and a vacuum boundary condition in the axial directions. Under these conditions the nuclear system is in a super-critical state, and to achieve a stable state (which is a numerical representation of operational conditions) it is necessary to decrease the reactivity of the system. An iteration scheme is proposed in which the SCALE code system is used for collapsing macroscopic cross sections into a few-group representation as input for the coupled code NESTLE.
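The iteration scheme of such a coupled calculation is typically a fixed-point (Picard) loop: solve the power distribution, update temperatures and feedback, and repeat until converged. A toy sketch of that structure with hypothetical feedback coefficients (not the SCALE/NESTLE scheme of the paper):

```python
# Toy Picard iteration mimicking a coupled neutronics / thermal-hydraulics
# scheme: the power solve depends on fuel temperature (a crude negative
# Doppler feedback), and the temperature solve depends on power.
# All coefficients are hypothetical; the point is the alternating loop.

def neutronics_solve(fuel_temp):
    """Relative power with a simple negative temperature feedback."""
    return 1.0 / (1.0 + 2.0e-4 * (fuel_temp - 900.0))

def thermal_hydraulics_solve(power):
    """Fuel temperature rising linearly with relative power."""
    return 600.0 + 400.0 * power

def coupled_iteration(tol=1e-8, max_iter=100):
    temp = 900.0                         # initial temperature guess
    for i in range(max_iter):
        power = neutronics_solve(temp)           # "neutronics" step
        new_temp = thermal_hydraulics_solve(power)  # "TH" step
        if abs(new_temp - temp) < tol:           # converged fixed point
            return power, new_temp, i + 1
        temp = new_temp
    raise RuntimeError("Picard iteration did not converge")

power, temp, iters = coupled_iteration()
print(f"converged in {iters} iterations: power={power:.6f}, T={temp:.2f} K")
```

Because the feedback here is weak, the loop contracts quickly; production schemes add under-relaxation or Newton-type acceleration when the coupling is stiffer.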
A solid reactor core thermal model for nuclear thermal rockets
NASA Astrophysics Data System (ADS)
Rider, William J.; Cappiello, Michael W.; Liles, Dennis R.
1991-01-01
A Helium/Hydrogen Cooled Reactor Analysis (HERA) computer code has been developed. HERA has the ability to model arbitrary geometries in three dimensions, which allows the user to easily analyze reactor cores constructed of prismatic graphite elements. The code accounts for heat generation in the fuel, control rods, and other structures; conduction and radiation across gaps; convection to the coolant; and a variety of boundary conditions. The numerical solution scheme has been optimized for vector computers, making long transient analyses economical. Time integration is either explicit or implicit, which allows the use of the model to accurately calculate both short- or long-term transients with an efficient use of computer time. Both the basic spatial and temporal integration schemes have been benchmarked against analytical solutions.
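The explicit/implicit choice mentioned above trades per-step cost for stability. A minimal 1D heat-conduction sketch (not the HERA code; the grid, diffusion number, and boundary values are arbitrary illustrations) shows why implicit integration suits long transients:

```python
# Explicit vs. implicit Euler for the 1D heat equation
# dT/dt = alpha * d2T/dx2 with fixed-temperature ends, in terms of the
# diffusion number r = alpha*dt/dx^2. Explicit is cheap per step but
# stable only for r <= 0.5; backward Euler is unconditionally stable,
# so large steps can be taken in long transients.

def explicit_step(T, r):
    """Forward Euler update; conditionally stable (r <= 0.5)."""
    return [T[0]] + [
        T[i] + r * (T[i - 1] - 2 * T[i] + T[i + 1]) for i in range(1, len(T) - 1)
    ] + [T[-1]]

def implicit_step(T, r):
    """Backward Euler via the Thomas (tridiagonal) algorithm."""
    n = len(T)
    a, b, c = -r, 1.0 + 2.0 * r, -r     # constant tridiagonal coefficients
    cp, dp = [0.0] * n, [0.0] * n
    dp[0] = T[0]                         # Dirichlet boundary row
    for i in range(1, n - 1):            # forward elimination
        m = b - a * cp[i - 1]
        cp[i] = c / m
        dp[i] = (T[i] - a * dp[i - 1]) / m
    dp[n - 1] = T[n - 1]                 # Dirichlet boundary row
    x = [0.0] * n
    x[n - 1] = dp[n - 1]
    for i in range(n - 2, -1, -1):       # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

T = [0.0] + [100.0] * 9 + [0.0]          # hot slab with cold ends
for _ in range(50):
    T = implicit_step(T, 5.0)            # r = 5: far beyond the explicit limit
print(max(T))                            # stays bounded and decays physically
```

The same 50 steps with `explicit_step` at r = 5 diverge immediately, which is why a code offering both schemes can use explicit integration for short, fast transients and implicit integration for economical long-term runs.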
[Series: Medical Applications of the PHITS Code (2): Acceleration by Parallel Computing].
Furuta, Takuya; Sato, Tatsuhiko
2015-01-01
Time-consuming Monte Carlo dose calculations have become feasible owing to advances in computer technology, driven recently by the emergence of multi-core high-performance computers; parallel computing is therefore key to achieving good software performance. The Monte Carlo simulation code PHITS contains two parallel computing functions: distributed-memory parallelization using the message passing interface (MPI) protocol and shared-memory parallelization using open multi-processing (OpenMP) directives. Users can choose between the two functions according to their needs. This paper explains the two functions, with their advantages and disadvantages. Some test applications are also provided to show their performance on a typical multi-core high-performance workstation.
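The distributed-memory strategy described here is history-based: each process simulates an independent batch of histories with its own random stream, and the partial tallies are reduced at the end. A Python sketch of that idea (Python's `multiprocessing` stands in for the MPI used by PHITS, and a π estimate stands in for a dose tally):

```python
# History-based parallel Monte Carlo: split the histories across workers,
# give each worker an independent random stream, sum the partial tallies.
import random
from multiprocessing import Pool

def run_batch(args):
    """Score one batch of histories: here, hits inside the unit circle."""
    n_histories, seed = args
    rng = random.Random(seed)            # independent stream per worker
    hits = 0
    for _ in range(n_histories):
        x, y = rng.random(), rng.random()
        if x * x + y * y < 1.0:
            hits += 1
    return hits

def parallel_estimate_pi(total_histories=400_000, n_workers=4):
    batch = total_histories // n_workers
    jobs = [(batch, seed) for seed in range(n_workers)]
    with Pool(n_workers) as pool:
        hits = sum(pool.map(run_batch, jobs))   # reduce the partial tallies
    return 4.0 * hits / (batch * n_workers)

if __name__ == "__main__":
    print(parallel_estimate_pi())        # converges toward pi as histories grow
```

Because each history is independent, this decomposition scales almost linearly with the number of processes; the shared-memory (OpenMP-style) variant differs mainly in that workers are threads sharing one tally array rather than separate processes.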
Salko, Robert K.; Schmidt, Rodney C.; Avramova, Maria N.
2014-11-23
This study describes major improvements to the computational infrastructure of the CTF subchannel code so that full-core, pincell-resolved (i.e., one computational subchannel per real bundle flow channel) simulations can now be performed in much shorter run-times, either in stand-alone mode or as part of coupled-code multi-physics calculations. These improvements support the goals of the Department Of Energy Consortium for Advanced Simulation of Light Water Reactors (CASL) Energy Innovation Hub to develop high fidelity multi-physics simulation tools for nuclear energy design and analysis.
Post-Test Analysis of 11% Break at PSB-VVER Experimental Facility using Cathare 2 Code
NASA Astrophysics Data System (ADS)
Sabotinov, Luben; Chevrier, Patrick
The best-estimate French thermal-hydraulic computer code CATHARE 2 Version 2.5_1 was used for post-test analysis of the experiment “11% upper plenum break”, conducted at the large-scale test facility PSB-VVER in Russia. The PSB rig is a 1:300 scaled model of a VVER-1000 NPP. A computer model has been developed for CATHARE 2 V2.5_1, taking into account all important components of the PSB facility: reactor model (lower plenum, core, bypass, upper plenum, downcomer), 4 separate loops, pressurizer, horizontal multitube steam generators, and break section. The secondary side is represented by a recirculation model. A large number of sensitivity calculations have been performed regarding break modeling, reactor pressure vessel modeling, counter-current flow modeling, hydraulic losses, and heat losses. The comparison between calculated and experimental results shows good prediction of the basic thermal-hydraulic phenomena and parameters such as pressures, temperatures, void fractions, loop seal clearance, etc. The experimental and calculated results are very sensitive with respect to the fuel cladding temperature, which shows periodic behavior. With the applied CATHARE 1D modeling, the global thermal-hydraulic parameters and the core heat-up have been reasonably predicted.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rohatgi, U.S.; Cheng, H.S.; Khan, H.J.
This document is the User's Manual for the Boiling Water Reactor (BWR) and Simplified Boiling Water Reactor (SBWR) systems transient code RAMONA-4B. The code uses a three-dimensional neutron-kinetics model coupled with a multichannel, nonequilibrium, drift-flux, two-phase flow model of the thermal hydraulics of the reactor vessel. The code is designed to analyze a wide spectrum of BWR core and system transients. Chapter 1 gives an overview of the code's capabilities and limitations; Chapter 2 describes the code's structure, lists major subroutines, and discusses the computer requirements. Chapter 3 covers the code, auxiliary codes, and instructions for running RAMONA-4B on Sun SPARC and IBM workstations. Chapter 4 contains component descriptions and detailed card-by-card input instructions. Chapter 5 provides samples of the tabulated output for the steady-state and transient calculations and discusses the plotting procedures for the steady-state and transient calculations. Three appendices contain important user and programmer information: lists of plot variables (Appendix A), listings of the input deck for the sample problem (Appendix B), and a description of the plotting program PAD (Appendix C). 24 refs., 18 figs., 11 tabs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Page, R.; Jones, J.R.
1997-07-01
Ensuring that safety analysis needs are met in the future is likely to lead to the development of new codes and the further development of existing codes. It is therefore advantageous to define standards for data interfaces and to develop software interfacing techniques which can readily accommodate changes when they are made. Defining interface standards is beneficial but is necessarily restricted in application if future requirements are not known in detail. Code interfacing methods are of particular relevance with the move towards automatic grid frequency response operation, where the integration of plant dynamic, core follow and fault study calculation tools is considered advantageous. This paper describes the background and features of a new code TALINK (Transient Analysis code LINKage program) used to provide a flexible interface to link the RELAP5 thermal hydraulics code with the PANTHER neutron kinetics and SIBDYM whole plant dynamic modelling codes used by Nuclear Electric. The complete package enables the codes to be executed in parallel and provides an integrated whole-plant thermal-hydraulics and neutron kinetics model. In addition the paper discusses the capabilities and pedigree of the component codes used to form the integrated transient analysis package and the details of the calculation of a postulated Sizewell 'B' loss of offsite power fault transient.
Efficient implementation of core-excitation Bethe-Salpeter equation calculations
NASA Astrophysics Data System (ADS)
Gilmore, K.; Vinson, John; Shirley, E. L.; Prendergast, D.; Pemmaraju, C. D.; Kas, J. J.; Vila, F. D.; Rehr, J. J.
2015-12-01
We present an efficient implementation of the Bethe-Salpeter equation (BSE) method for obtaining core-level spectra including X-ray absorption (XAS), X-ray emission (XES), and both resonant and non-resonant inelastic X-ray scattering spectra (N/RIXS). Calculations are based on density functional theory (DFT) electronic structures generated either by ABINIT or QuantumESPRESSO, both plane-wave basis, pseudopotential codes. This electronic structure is improved through the inclusion of a GW self energy. The projector augmented wave technique is used to evaluate transition matrix elements between core-level and band states. Final two-particle scattering states are obtained with the NIST core-level BSE solver (NBSE). We have previously reported this implementation, which we refer to as OCEAN (Obtaining Core Excitations from Ab initio electronic structure and NBSE) (Vinson et al., 2011). Here, we present additional efficiencies that enable us to evaluate spectra for systems ten times larger than previously possible; containing up to a few thousand electrons. These improvements include the implementation of optimal basis functions that reduce the cost of the initial DFT calculations, more complete parallelization of the screening calculation and of the action of the BSE Hamiltonian, and various memory reductions. Scaling is demonstrated on supercells of SrTiO3 and example spectra for the organic light emitting molecule Tris-(8-hydroxyquinoline)aluminum (Alq3) are presented. The ability to perform large-scale spectral calculations is particularly advantageous for investigating dilute or non-periodic systems such as doped materials, amorphous systems, or complex nano-structures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dokhane, A.; Canepa, S.; Ferroukhi, H.
For stability analyses of the Swiss operating Boiling Water Reactors (BWRs), the methodology employed and validated so far at the Paul Scherrer Institute (PSI) was based on the RAMONA-3 code with a hybrid upstream static lattice/core analysis approach using CASMO-4 and PRESTO-2. More recently, steps were undertaken towards a new methodology based on the SIMULATE-3K (S3K) code for the dynamical analyses, combined with the CMSYS system relying on the CASMO/SIMULATE-3 suite of codes, which was established at PSI to serve as a framework for the development and validation of reference core models of all the Swiss reactors and operated cycles. This paper presents a first validation of the new methodology on the basis of a benchmark recently organised by a Swiss utility and including the participation of several international organisations with various codes/methods. In parallel, a transition from CASMO-4E (C4E) to CASMO-5M (C5M) as the basis for the CMSYS core models was also recently initiated at PSI. Consequently, it was considered adequate to address the impact of this transition both for the steady-state core analyses and for the stability calculations, and thereby to achieve an integral approach for the validation of the new S3K methodology. Therefore, a comparative assessment of C4E versus C5M is also presented in this paper, with particular emphasis on the void coefficients and their impact on the downstream stability analysis results. (authors)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Setiani, Tia Dwi, E-mail: tiadwisetiani@gmail.com; Suprijadi; Nuclear Physics and Biophysics Research Division, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung, Jalan Ganesha 10 Bandung, 40132
Monte Carlo (MC) is one of the most powerful techniques for simulation in x-ray imaging. The MC method can simulate radiation transport within matter with high accuracy and provides a natural way to simulate radiation transport in complex systems. One of the MC-based codes widely used for radiographic image simulation is MC-GPU, a code developed by Andreu Badal. This study aimed to investigate the computation time of x-ray imaging simulation on a GPU (Graphics Processing Unit) compared to a standard CPU (Central Processing Unit). Furthermore, the effect of physical parameters on the quality of radiographic images and the comparison of image quality resulting from simulation on the GPU and CPU are evaluated in this paper. The simulations were run on a CPU in serial mode and on two GPUs with 384 cores and 2304 cores. In the GPU simulations, each core calculates one photon, so a large number of photons are calculated simultaneously. Results show that simulations on the GPU were significantly accelerated compared to the CPU. Simulations on the 2304-core GPU were performed about 64-114 times faster than on the CPU, while simulations on the 384-core GPU were performed about 20-31 times faster than on a single CPU core. Another result shows that optimum image quality was obtained starting from 10^8 histories and energies from 60 keV to 90 keV. Analyzed by a statistical approach, the quality of the GPU and CPU images is essentially the same.
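The one-photon-per-core strategy described above can be illustrated with a minimal, serial Monte Carlo sketch. This is an illustrative toy (exponential free-path sampling through a uniform slab), not MC-GPU's actual physics; the attenuation coefficient and thickness below are invented example values:

```python
import math
import random

def photon_transmitted(mu, thickness, rng):
    """Sample one photon history through a uniform slab.

    mu: linear attenuation coefficient (1/cm), thickness in cm.
    The free path is sampled from an exponential distribution;
    the photon escapes if its first interaction lies beyond the slab.
    """
    path = -math.log(rng.random()) / mu  # sampled free path (cm)
    return path > thickness

def transmission_fraction(mu, thickness, n_histories, seed=1):
    """Estimate the uncollided transmission e^(-mu*t) by tallying
    many independent photon histories (one per 'core' in spirit)."""
    rng = random.Random(seed)
    hits = sum(photon_transmitted(mu, thickness, rng)
               for _ in range(n_histories))
    return hits / n_histories
```

With mu = 0.2/cm and a 5 cm slab, the estimate converges on exp(-1) ≈ 0.368 as the number of histories grows, mirroring the accuracy-vs-history-count behavior discussed in the abstract.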
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurosu, K; Department of Medical Physics & Engineering, Osaka University Graduate School of Medicine, Osaka; Takashina, M
Purpose: Monte Carlo codes are becoming important tools for proton beam dosimetry. However, the relationships between the customizing parameters and the percentage depth dose (PDD) of the GATE and PHITS codes have not been reported; here they are studied for PDD and proton range in comparison with the FLUKA code and experimental data. Methods: The beam delivery system of the Indiana University Health Proton Therapy Center was modeled for the uniform scanning beam in FLUKA and transferred identically into GATE and PHITS. This computational model was built from the blueprint and validated with the commissioning data. The three parameters evaluated are the maximum step size, the cut-off energy, and the physics and transport models. The dependence of the PDDs on the customizing parameters was compared with the published results of previous studies. Results: The optimal parameters for the simulation of the whole beam delivery system were defined by referring to the calculation results obtained with each parameter. Although the PDDs from FLUKA and the experimental data show good agreement, those of GATE and PHITS obtained with our optimal parameters show a minor discrepancy. The measured proton range R90 was 269.37 mm, compared to calculated ranges of 269.63 mm, 268.96 mm, and 270.85 mm with FLUKA, GATE and PHITS, respectively. Conclusion: We evaluated the dependence of the PDD results obtained with the GATE and PHITS general-purpose Monte Carlo codes on the customizing parameters by using the whole computational model of the treatment nozzle. The optimal parameters for the simulation were then defined by referring to the calculation results. The physics models, particle transport mechanics and the different geometry-based descriptions need accurate customization in the three simulation codes to agree with experimental data for artifact-free Monte Carlo simulation.
This study was supported by Grants-in-Aid for Cancer Research (H22-3rd Term Cancer Control-General-043) from the Ministry of Health, Labour and Welfare of Japan, Grants-in-Aid for Scientific Research (No. 23791419), and the JSPS Core-to-Core Program (No. 23003). The authors have no conflict of interest.
A Two-Step Approach to Uncertainty Quantification of Core Simulators
Yankov, Artem; Collins, Benjamin; Klein, Markus; ...
2012-01-01
For the multiple sources of error introduced into the standard computational regime for simulating reactor cores, rigorous uncertainty analysis methods are available primarily to quantify the effects of cross section uncertainties. Two methods for propagating cross section uncertainties through core simulators are the XSUSA statistical approach and the “two-step” method. The XSUSA approach, which is based on the SUSA code package, is fundamentally a stochastic sampling method. Alternatively, the two-step method utilizes generalized perturbation theory in the first step and stochastic sampling in the second step. The consistency of these two methods in quantifying uncertainties in the multiplication factor and in the core power distribution was examined in the framework of phase I-3 of the OECD Uncertainty Analysis in Modeling benchmark. With the Three Mile Island Unit 1 core as a base model for analysis, the XSUSA and two-step methods were applied with certain limitations, and the results were compared to those produced by other stochastic sampling-based codes. Based on the uncertainty analysis results, conclusions were drawn as to the method that is currently more viable for computing uncertainties in burnup and transient calculations.
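The stochastic sampling idea behind the XSUSA approach can be sketched on a toy problem: perturb input cross sections according to their assumed uncertainties, re-evaluate the model for each sample, and read the output uncertainty off the sample spread. This is an illustrative one-group analogue, not the SUSA implementation; all numbers are invented:

```python
import random
import statistics

def keff_model(sig_a, sig_f):
    """Toy one-group criticality model (illustrative only):
    k = nu * sig_f / sig_a with a fixed neutron yield nu."""
    nu = 2.4
    return nu * sig_f / sig_a

def sampled_keff_uncertainty(n=5000, seed=0):
    """Propagate assumed cross-section uncertainties by sampling:
    draw perturbed (sig_a, sig_f) pairs, evaluate k for each, and
    return the sample mean and standard deviation of k."""
    rng = random.Random(seed)
    ks = []
    for _ in range(n):
        sig_a = rng.gauss(0.10, 0.002)    # 2% absorption uncertainty (assumed)
        sig_f = rng.gauss(0.04, 0.0004)   # 1% fission uncertainty (assumed)
        ks.append(keff_model(sig_a, sig_f))
    return statistics.mean(ks), statistics.stdev(ks)
```

For independent inputs the sampled relative spread approaches sqrt(0.02^2 + 0.01^2) ≈ 2.2%, which is the kind of output uncertainty on the multiplication factor that the benchmark compares across codes.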
Earth and Planetary Science Letters
NASA Technical Reports Server (NTRS)
Nishiizumi, K.; Klein, J.; Middleton, R.; Masarik, J.; Reedy, R. C.; Arnold, J. R.; Fink, D.
1997-01-01
Systematic measurements of the concentrations of cosmogenic Ca-41 (half-life = 1.04 x 10^5 yr) in the Apollo 15 long core 15001-15006 were performed by accelerator mass spectrometry. Earlier measurements of cosmogenic Be-10, C-14, Al-26, Cl-36, and Mn-53 in the same core have provided confirmation and improvement of theoretical models for predicting production profiles of nuclides by cosmic-ray-induced spallation in the Moon and large meteorites. Unlike these nuclides, Ca-41 in the lunar surface is produced mainly by thermal neutron capture on Ca-40. The maximum production of Ca-41, about 1 dpm/g Ca, was observed at a depth in the Moon of about 150 g/sq cm. For depths below about 300 g/sq cm, Ca-41 production falls off exponentially with an e-folding length of 175 g/sq cm. Neutron production in the Moon was modeled with the Los Alamos High Energy Transport Code System, and yields of nuclei produced by low-energy thermal and epithermal neutrons were calculated with the Monte Carlo N-Particle code. The new theoretical calculations using these codes are in good agreement with our measured Ca-41 concentrations as well as with Co-60 and direct neutron fluence measurements in the Moon.
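The exponential fall-off quoted above can be written directly as a small helper. Only the 175 g/sq cm e-folding length and the 300 g/sq cm onset come from the text; the production rate passed in at 300 g/sq cm is a hypothetical input:

```python
import math

def ca41_production(depth, p300, efold=175.0):
    """Ca-41 production rate (dpm/g Ca) at depths below ~300 g/cm^2,
    where the profile falls off exponentially with an e-folding
    length of 175 g/cm^2 (valid for depth >= 300 g/cm^2).

    p300: assumed production rate at 300 g/cm^2 (hypothetical input).
    """
    return p300 * math.exp(-(depth - 300.0) / efold)
```

One e-folding length deeper (475 g/sq cm), the production rate has dropped by a factor of e, as the helper reproduces.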
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pang, Xiaoying; Rybarcyk, Larry
HPSim is a GPU-accelerated online multi-particle beam dynamics simulation tool for ion linacs. It was originally developed for use on the Los Alamos 800-MeV proton linac. It is a “z-code” that contains typical linac beam transport elements. The linac RF-gap transformation utilizes transit-time factors to calculate the beam acceleration therein. The space-charge effects are computed using the 2D SCHEFF (Space CHarge EFFect) algorithm, which calculates the radial and longitudinal space-charge forces for cylindrically symmetric beam distributions. Other space-charge routines to be incorporated include the 3D PICNIC and a 3D Poisson solver. HPSim can simulate beam dynamics in drift tube linacs (DTLs) and coupled cavity linacs (CCLs). Elliptical superconducting cavity (SC) structures will also be incorporated into the code. The computational core of the code is written in C++ and accelerated using the NVIDIA CUDA technology. Users access the core code, which is wrapped in Python/C APIs, via Python scripts that enable ease-of-use and automation of the simulations. The overall linac description, including the EPICS PV machine control parameters, is kept in an SQLite database that also contains calibration and conversion factors required to transform the machine set points into model values used in the simulation.
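The transit-time-factor treatment of an RF gap mentioned above reduces, in the standard thin-gap approximation, to the energy-gain formula dW = q E0 T L cos(phi). The sketch below is that textbook formula, not HPSim's actual gap transformation; units and example values are assumptions:

```python
import math

def rf_gap_energy_gain(q, E0, T, L, phi_deg):
    """Thin-gap energy gain dW = q * E0 * T * L * cos(phi).

    q: charge in units of the elementary charge,
    E0: average axial field (MV/m), T: transit-time factor,
    L: gap length (m), phi_deg: synchronous phase (degrees).
    Returns the energy gain in MeV under these (assumed) units.
    """
    return q * E0 * T * L * math.cos(math.radians(phi_deg))
```

For example, a proton crossing a 0.1 m gap with E0 = 2.5 MV/m, T = 0.8 and phi = -30 degrees gains q E0 T L cos(30°) ≈ 0.173 MeV; the transit-time factor T < 1 accounts for the field varying while the particle crosses the gap.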
Nuclear modules for space electric propulsion
NASA Technical Reports Server (NTRS)
Difilippo, F. C.
1998-01-01
Analysis of interplanetary cargo and piloted missions requires calculations of the performances and masses of subsystems to be integrated in a final design. In a preliminary and scoping stage the designer needs to evaluate options iteratively by using fast computer simulations. The Oak Ridge National Laboratory (ORNL) has been involved in the development of models and calculational procedures for the analysis (neutronic and thermal hydraulic) of power sources for nuclear electric propulsion. The nuclear modules will be integrated into the whole simulation of the nuclear electric propulsion system. The vehicles use either a Brayton direct-conversion cycle, using the heated helium from a NERVA-type reactor, or a potassium Rankine cycle, with the working fluid heated on the secondary side of a heat exchanger and lithium on the primary side coming from a fast reactor. Given a set of input conditions, the codes calculate composition, dimensions, volumes, and masses of the core, reflector, control system, pressure vessel, neutron and gamma shields, as well as the thermal hydraulic conditions of the coolant, clad and fuel. Input conditions are power, core life, pressure and temperature of the coolant at the inlet of the core, either the temperature of the coolant at the outlet of the core or the coolant mass flow, and the fluences and integrated doses at the cargo area. Using state-of-the-art neutron cross sections and transport codes, a database was created for the neutronic performance of both reactor designs. The free parameters of the models are the moderator/fuel mass ratio for the NERVA reactor and the enrichment and the pitch of the lattice for the fast reactor. Reactivity and energy balance equations are simultaneously solved to find the reactor design. Thermal-hydraulic conditions are calculated by solving the one-dimensional versions of the equations of conservation of mass, energy, and momentum with compressible flow.
Khattab, K; Sulieman, I
2009-04-01
The MCNP-4C code, based on the probabilistic approach, was used to model the 3D configuration of the core of the Syrian miniature neutron source reactor (MNSR). The continuous energy neutron cross sections from the ENDF/B-VI library were used to calculate the thermal and fast neutron fluxes in the inner and outer irradiation sites of MNSR. The thermal fluxes in the MNSR inner irradiation sites were also measured experimentally by the multiple foil activation method ((197)Au (n, gamma) (198)Au and (59)Co (n, gamma) (60)Co). The foils were irradiated simultaneously in each of the five MNSR inner irradiation sites to measure the thermal neutron flux and the epithermal index in each site. The calculated and measured results agree well.
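The multiple-foil activation measurement described above rests on inverting the standard activation equation A = N sigma phi (1 - e^(-lambda t)). The sketch below is that textbook simplification, with self-shielding and epithermal corrections ignored and all numerical inputs hypothetical, not values from the MNSR study:

```python
import math

def thermal_flux_from_foil(activity, n_atoms, sigma_cm2, lam, t_irr):
    """Infer a thermal neutron flux (n/cm^2/s) from the measured
    end-of-irradiation activity of a foil.

    activity: A (Bq), n_atoms: N target atoms in the foil,
    sigma_cm2: capture cross section (cm^2), lam: decay constant (1/s),
    t_irr: irradiation time (s). Inverts A = N*sigma*phi*(1 - e^(-lam*t)).
    Illustrative sketch; self-shielding and epithermal terms omitted.
    """
    saturation = 1.0 - math.exp(-lam * t_irr)
    return activity / (n_atoms * sigma_cm2 * saturation)
```

Irradiating for exactly one half-life gives a saturation factor of 0.5, so the inferred flux is twice what a fully saturated foil of the same activity would imply.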
Preliminary Analysis of the BASALA-H Experimental Programme
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blaise, Patrick; Fougeras, Philippe; Philibert, Herve
2002-07-01
This paper focuses on the preliminary analysis of results obtained on the first cores of the first phase of the BASALA (Boiling water reactor Advanced core physics Study Aimed at mox fuel Lattice) programme, aimed at studying the neutronic parameters of an ABWR core in hot conditions, currently under investigation in the French EOLE critical facility, within the framework of a cooperation between NUPEC, CEA and Cogema. The first 'on-line' analysis of the results has been made using a new preliminary design and safety scheme based on the French APOLLO-2 code in its 2.4 qualified version and the associated CEA-93 V4 (JEF-2.2) library, which will enable the Experimental Physics Division (SPEx) to perform future core designs. It describes the scheme adopted and the results obtained in various cases, going from the critical size determination to the reactivity worth of the perturbed configurations (voided, over-moderated, and poisoned with Gd2O3-UO2 pins). A preliminary study of the experimental results on MISTRAL-4 is summarized, and the comparison of APOLLO-2 versus MCNP-4C calculations on these cores is made. The results show very good agreement between the two codes and with the experiment. This work opens the way to the future full analysis of the experimental results by the qualification teams with completely validated schemes, based on the new 2.5 version of the APOLLO-2 code. (authors)
Core Composition and the Magnetic Field of Mercury
NASA Astrophysics Data System (ADS)
Spohn, T.; Breuer, D.
2005-05-01
The density of Mercury suggests a core of approximately 1800 km radius and a mantle of approximately 600 km thickness. Convection in the mantle is often claimed to be capable of freezing the core over the lifetime of the solar system if the core is nearly pure iron. The thermal history calculations of Stevenson et al. (1983) and Schubert et al. (1988) suggest that about 5 weight-% sulphur is required to lower the core liquidus sufficiently to prevent complete freezing of the core and maintain a significant fluid outer core shell. Other candidates for a light alloying element require similarly large concentrations. The requirement of a significant concentration of volatile elements in the core is likely to be at variance with cosmochemical arguments for a mostly refractory, volatile-poor composition of the planet. We have re-addressed the question of the freezing of Mercury's core using parameterized convection models based on the stagnant lid theory of planetary mantle convection. We have compared these results to earlier calculations (Conzelmann and Spohn, 1999) of Hermian mantle convection using a finite-amplitude convection code. We find consistently that the stagnant lid tends to thermally insulate the deep interior and we find mantle and core temperatures significantly larger than those calculated by Stevenson et al. (1983) and Schubert et al. (1988). As a consequence we find fluid outer core shells for reasonable mantle rheology parameters even for compositions with as little as 0.1 weight-% sulphur. Stevenson, D.J., T. Spohn, and G. Schubert. Icarus, 54, 466, 1983. Schubert, G. M.N. Ross, D.J. Stevenson, and T. Spohn, in Mercury, F. Vilas, C.R. Chapman and M.S. Matthews, eds., p.429, 1988. Conzelmann, V. and T. Spohn, Bull. Am. Astr. Soc., 31, 1102, 1999.
Waveguide to Core: A New Approach to RF Modelling
NASA Astrophysics Data System (ADS)
Wright, John; Shiraiwa, Syunichi; Rf-Scidac Team
2017-10-01
A new technique for the calculation of RF waves in toroidal geometry enables the simultaneous incorporation of antenna geometry, plasma facing components (PFCs), the scrape off-layer (SOL) and core propagation [Shiraiwa, NF 2017]. Calculations with this technique naturally capture wave propagation in the SOL and its interactions with non-conforming PFCs permitting self-consistent calculation of core absorption and edge power loss. The main motivating insight is that the core plasma region having closed flux surfaces requires a hot plasma dielectric while the open field line region in the scrape-off layer needs only a cold plasma dielectric. Spectral approaches work well for the former and finite elements work well for the latter. The validity of this process follows directly from the superposition principle of Maxwell's equations making this technique exact. The method is independent of the codes or representations used and works for any frequency regime. Applications to minority heating in Alcator C-Mod and ITER and high harmonic heating in NSTX-U will be presented in single pass and multi-pass regimes. Support from DoE Grant Number DE-FG02-91-ER54109 (theory and computer resources) and DE-FC02-01ER54648 (RF SciDAC).
Measurement and calculation of fast neutron and gamma spectra in well defined cores in LR-0 reactor.
Košťál, Michal; Matěj, Zdeněk; Cvachovec, František; Rypar, Vojtěch; Losa, Evžen; Rejchrt, Jiří; Mravec, Filip; Veškrna, Martin
2017-02-01
A well-defined neutron spectrum is essential for many types of experimental topics and is also important for both calibration and testing of spectrometric and dosimetric detectors. Provided it is well described, such a spectrum can also be employed as a reference neutron field that is suitable for validating selected cross sections. The present paper aims to compare calculations and measurements of such well-defined spectra in geometrically similar cores of the LR-0 reactor with fuel of slightly different enrichments (2%, 3.3% and 3.6%). The common feature of all cores is a centrally located dry channel which can be used for the insertion of studied materials. The calculation of neutron and gamma spectra was realized with the MCNP6 code using the ENDF/B-VII.0, JEFF-3.1, JENDL-3.3, ROSFOND-2010 and CENDL-3.1 nuclear data libraries. Only minor differences in neutron and gamma spectra were found in the comparison of the presented reactor cores with different fuel enrichments. One exception is the gamma spectrum in the higher energy region (above 8 MeV), where more pronounced variations could be observed.
NASA Astrophysics Data System (ADS)
Kuntoro, Iman; Sembiring, T. M.; Susilo, Jati; Deswandri; Sunaryo, G. R.
2018-02-01
Calculations of the criticality of the AP1000 core with new editions of the nuclear data library, namely ENDF/B-VII and ENDF/B-VII.1, have been done. This work is aimed at assessing the accuracy of ENDF/B-VII.1 compared to ENDF/B-VII and ENDF/B-VI.8 in determining the criticality parameter of the AP1000. The analysis was performed for the core at cold zero power (CZP) conditions. The calculations were carried out by means of the MCNP computer code in 3-dimensional geometry. The results show that the effective multiplication factor of the AP1000 core is higher than the one obtained with ENDF/B-VI.8, with relative differences of 0.39% for ENDF/B-VII and 0.34% for ENDF/B-VII.1.
NASA Astrophysics Data System (ADS)
Greiner, Nathan
Core simulation for Pressurized Water Reactors (PWRs) is performed by a set of computer codes which allow, under certain assumptions, approximation of the physical quantities of interest, such as the effective multiplication factor or the power or temperature distributions. The neutronics calculation scheme relies on three main steps: the production of an isotopic cross-section library; the production of a reactor database through the lattice calculation; and the full-core calculation. In the lattice calculation, in which Boltzmann's transport equation is solved over an assembly geometry, the temperature distribution is uniform and constant during irradiation. This represents a set of approximations since, on the one hand, the temperature distribution in the assembly is not uniform (strong temperature gradients in the fuel pins, discrepancies between the fuel pins) and, on the other hand, irradiation causes the thermal properties of the pins to change, which modifies the temperature distribution. Our work aims at implementing and introducing a neutronics-thermomechanics coupling into the lattice calculation to finely discretize the temperature distribution and to study its effects. To perform the study, the CEA (Commissariat a l'Energie Atomique et aux Energies Alternatives) lattice code APOLLO2 was used for neutronics and the EDF (Electricite De France) code C3THER was used for the thermal calculations. We show very small effects of the pin-scale coupling when comparing the use of a temperature profile with the use of a uniform temperature for UOX-type and MOX-type fuels. We next investigate the thermal feedback using an assembly-scale coupling taking into account the presence of large water gaps in a UOX-type assembly at burnup 0. We show the very small impact on the calculation of the hot spot factor.
Finally, the coupling is introduced into the isotopic depletion calculation and we show that reactivity and isotopic number density deviations remain small, albeit not negligible, for UOX-type and MOX-type assemblies. The specific behavior of gadolinium-bearing fuel pins in a UO2-Gd2O3-type assembly is highlighted.
Calculation of the Phenix end-of-life test 'Control Rod Withdrawal' with the ERANOS code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tiberi, V.
2012-07-01
The Institute for Radiological Protection and Nuclear Safety (IRSN) acts as technical support to the French public authorities. As such, IRSN is in charge of the safety assessment of operating and under-construction reactors, as well as future projects. In this framework, one current objective of IRSN is to evaluate the ability and accuracy of numerical tools to foresee the consequences of accidents. Neutronic studies enter the safety assessment from different points of view, among which the core design and its protection system. They are necessary to evaluate the core behavior in case of accident in order to assess the integrity of the first barrier and the absence of a prompt criticality risk. To reach this objective, one main physical quantity has to be evaluated accurately: the neutronic power distribution in the core during the whole reactor lifetime. The Phenix end-of-life tests, carried out in 2009, aim at increasing the experience feedback on sodium-cooled fast reactors. These experiments have been done in the framework of the development of the 4th generation of nuclear reactors. Ten tests have been carried out: 6 on neutronic and fuel aspects, 2 on thermal hydraulics and 2 on the emergency shutdown. Two of them have been chosen for an international exercise on thermal hydraulics and neutronics in the frame of an IAEA Coordinated Research Project. Concerning neutronics, the Control Rod Withdrawal test is relevant for safety because it allows evaluating the capability of calculation tools to compute the radial power distribution in fast reactor core configurations in which the flux field is strongly deformed. IRSN participated in this benchmark with the ERANOS code developed by CEA for fast reactor studies. This paper presents the results obtained in the framework of the benchmark activity. A relatively good agreement was found with the available measurements, considering the approximations made in the modeling.
The work underlines the importance of burn-up calculations in order to have a fine mesh of core concentrations for the calculation of the power distribution. (authors)
Processor-in-memory-and-storage architecture
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeBenedictis, Erik
A method and apparatus for performing reliable general-purpose computing. Each sub-core of a plurality of sub-cores of a processor core processes a same instruction at a same time. A code analyzer receives a plurality of residues that represents a code word corresponding to the same instruction and an indication of whether the code word is a memory address code or a data code from the plurality of sub-cores. The code analyzer determines whether the plurality of residues are consistent or inconsistent. The code analyzer and the plurality of sub-cores perform a set of operations based on whether the code word is a memory address code or a data code and a determination of whether the plurality of residues are consistent or inconsistent.
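A minimal sketch of the residue-consistency idea (an illustrative analogue, not the patented implementation): with one redundant modulus, any single-residue corruption maps outside the legal code-word range, which a brute-force Chinese-remainder search can detect. The moduli and range below are invented for illustration:

```python
def to_residues(value, moduli):
    """Encode an integer as its residues modulo pairwise-coprime moduli."""
    return [value % m for m in moduli]

def residues_consistent(residues, moduli, max_value):
    """Check residue consistency by brute-force CRT reconstruction.

    Legal code words are integers in [0, max_value); with a redundant
    modulus, moduli product exceeds max_value, so a corrupted residue
    vector reconstructs (if at all) only to an out-of-range value.
    Returns True iff some legal code word maps to these residues.
    """
    for v in range(max_value):
        if all(v % m == r for m, r in zip(moduli, residues)):
            return True
    return False
```

With moduli (3, 5, 7, 11) and legal values below 105 = 3*5*7, the fourth modulus is redundant: encoding 23 gives (2, 3, 2, 1), and flipping the last residue to 5 yields a vector no in-range value can produce, so the check flags it as inconsistent.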
Method for depleting BWRs using optimal control rod patterns
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taner, M.S.; Levine, S.H.; Hsiao, M.Y.
1991-01-01
Control rod (CR) programming is an essential core management activity for boiling water reactors (BWRs). After establishing a core reload design for a BWR, CR programming is performed to develop a sequence of exposure-dependent CR patterns that assure the safe and effective depletion of the core through a reactor cycle. A time-variant target power distribution approach has been assumed in this study. The authors have developed OCTOPUS to implement a new two-step method for designing semioptimal CR programs for BWRs. The optimization procedure of OCTOPUS is based on the method of approximation programming and uses the SIMULATE-E code for nucleonics calculations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Naqvi, S
2014-06-15
Purpose: Most medical physics programs emphasize proficiency in routine clinical calculations and QA. The formulaic aspect of these calculations and the prescriptive nature of measurement protocols obviate the need to frequently apply basic physical principles, which, therefore, gradually decay away from memory. For example, few students appreciate the role of electron transport in photon dose, making it difficult to understand key concepts such as dose buildup, electronic disequilibrium effects and Bragg-Gray theory. These conceptual deficiencies manifest when the physicist encounters a new system, requiring knowledge beyond routine activities. Methods: Two interactive computer simulation tools are developed to facilitate deeper learning of physical principles. One is a Monte Carlo code written with a strong educational aspect. The code can “label” regions and interactions to highlight specific aspects of the physics, e.g., certain regions can be designated as “starters” or “crossers,” and any interaction type can be turned on and off. Full 3D tracks with specific portions highlighted further enhance the visualization of radiation transport problems. The second code calculates and displays trajectories of a collection of electrons under arbitrary space- and time-dependent Lorentz forces using relativistic kinematics. Results: Using the Monte Carlo code, the student can interactively study photon and electron transport through visualization of dose components, particle tracks, and interaction types. The code can, for instance, be used to study the kerma-dose relationship, explore electronic disequilibrium near interfaces, or visualize kernels by using interaction forcing. The electromagnetic simulator enables the student to explore accelerating mechanisms and particle optics in devices such as cyclotrons and linacs. Conclusion: The proposed tools are designed to enhance understanding of abstract concepts by highlighting various aspects of the physics.
The simulations serve as virtual experiments that give a deeper and longer-lasting understanding of core principles. The student can then make sound judgements in novel situations encountered beyond routine clinical activities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerhard Strydom
2014-04-01
The INL PHISICS code system consists of three modules providing improved core simulation capability: INSTANT (performing 3D nodal transport core calculations), MRTAU (depletion and decay heat generation) and a perturbation/mixer module. Coupling of the PHISICS code suite to the thermal hydraulics system code RELAP5-3D has recently been finalized, and as part of the code verification and validation program the exercises defined for Phase I of the OECD/NEA MHTGR 350 MW Benchmark were completed. This paper provides an overview of the MHTGR Benchmark, and presents selected results of the three steady-state exercises 1-3 defined for Phase I. For Exercise 1, a stand-alone steady-state neutronics solution for an End of Equilibrium Cycle Modular High Temperature Reactor (MHTGR) was calculated with INSTANT, using the provided geometry, material descriptions, and detailed cross-section libraries. Exercise 2 required the modeling of a stand-alone thermal fluids solution. The RELAP5-3D results of four sub-cases are discussed, consisting of various combinations of coolant bypass flows and material thermophysical properties. Exercise 3 combined the first two exercises in a coupled neutronics and thermal fluids solution, and the coupled code suite PHISICS/RELAP5-3D was used to calculate the results of two sub-cases. The main focus of the paper is a comparison of the traditional RELAP5-3D “ring” model approach vs. a much more detailed model that includes kinetics feedback at the individual block level and thermal feedback on a triangular sub-mesh. The higher fidelity of the block model is illustrated with comparison results on the temperature, power density and flux distributions, and the typical under-predictions produced by the ring model approach are highlighted.
NASA Technical Reports Server (NTRS)
Bade, W. L.; Yos, J. M.
1975-01-01
The present, third volume of the final report is a programmer's manual for the code. It provides a listing of the FORTRAN 4 source program; a complete glossary of FORTRAN symbols; a discussion of the purpose and method of operation of each subroutine (including mathematical analyses of special algorithms); and a discussion of the operation of the code on IBM/360 and UNIVAC 1108 systems, including required control cards and the overlay structure used to accommodate the code to the limited core size of the 1108. In addition, similar information is provided to document the programming of the NOZFIT code, which is employed to set up nozzle profile curvefits for use in NATA.
Porting a Hall MHD Code to a Graphic Processing Unit
NASA Technical Reports Server (NTRS)
Dorelli, John C.
2011-01-01
We present our experience porting a Hall MHD code to a Graphics Processing Unit (GPU). The code is a 2nd order accurate MUSCL-Hancock scheme which makes use of an HLL Riemann solver to compute numerical fluxes and second-order finite differences to compute the Hall contribution to the electric field. The divergence of the magnetic field is controlled with Dedner's hyperbolic divergence cleaning method. Preliminary benchmark tests indicate a speedup (relative to a single Nehalem core) of 58x for a double precision calculation. We discuss scaling issues which arise when distributing work across multiple GPUs in a CPU-GPU cluster.
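For a scalar conservation law, the HLL Riemann solver mentioned above reduces to a short closed-form flux between left and right states. The sketch below is that generic scalar version for illustration, not the paper's MHD implementation; the wave-speed bounds are supplied by the caller:

```python
def hll_flux(uL, uR, f, sL, sR):
    """HLL approximate Riemann flux for a scalar conservation law
    u_t + f(u)_x = 0, given left/right states uL, uR, the flux
    function f, and wave-speed bounds sL <= sR."""
    if sL >= 0.0:       # all waves move right: upwind on the left state
        return f(uL)
    if sR <= 0.0:       # all waves move left: upwind on the right state
        return f(uR)
    # transonic case: single averaged intermediate state
    return (sR * f(uL) - sL * f(uR) + sL * sR * (uR - uL)) / (sR - sL)
```

For Burgers' equation f(u) = u^2/2 with a transonic rarefaction (uL = -1, uR = 1, sL = -1, sR = 1), the HLL flux evaluates to -0.5, illustrating the scheme's built-in numerical dissipation relative to the exact flux f(0) = 0.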
NIRP Core Software Suite v. 1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitener, Dustin Heath; Folz, Wesley; Vo, Duong
The NIRP Core Software Suite is a core set of code that supports multiple applications. It includes miscellaneous base code for data objects, mathematical equations, and user interface components; the framework also includes several fully-developed software applications that exist as stand-alone tools to complement other applications. The stand-alone tools are described below. Analyst Manager: An application to manage contact information for people (analysts) that use the software products. This information is often included in generated reports and may be used to identify the owners of calculations. Radionuclide Viewer: An application for viewing the DCFPAK radiological data. Complements the Mixture Manager tool. Mixture Manager: An application to create and manage radionuclide mixtures that are commonly used in other applications. High Explosive Manager: An application to manage explosives and their properties. Chart Viewer: An application to view charts of data (e.g. meteorology charts). Other applications may use this framework to create charts specific to their data needs.
Method for rapid high-frequency seismogram calculation
NASA Astrophysics Data System (ADS)
Stabile, Tony Alfredo; De Matteis, Raffaella; Zollo, Aldo
2009-02-01
We present a method for rapid, high-frequency seismogram calculation that makes use of an algorithm to automatically generate an exhaustive set of seismic phases with an appreciable amplitude on the seismogram. The method uses a hierarchical order of ray and seismic-phase generation, taking into account some existing constraints for ray paths and some physical constraints. To compute synthetic seismograms, the COMRAD code (from the Italian: "COdice Multifase per il RAy-tracing Dinamico") uses a dynamic ray-tracing code as its core. To validate the code, we have computed synthetic seismograms in a layered medium using both COMRAD and a code that computes the complete wave field by the discrete wavenumber method. The seismograms are compared according to a time-frequency misfit criterion based on the continuous wavelet transform of the signals. Although the number of phases is considerably reduced by the selection criteria, the results show that the loss in amplitude over the whole seismogram is negligible. Moreover, the time for computing the synthetics using the COMRAD code (truncating the ray series at the 10th generation) is 3-4-fold less than that needed for the AXITRA code (up to a frequency of 25 Hz).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kodavasal, Janardhan; Harms, Kevin; Srivastava, Priyesh
A closed-cycle gasoline compression ignition engine simulation near top dead center (TDC) was used to profile the performance of a parallel commercial engine computational fluid dynamics code, as it was scaled on up to 4096 cores of an IBM Blue Gene/Q supercomputer. The test case has 9 million cells near TDC, with a fixed mesh size of 0.15 mm, and was run on configurations ranging from 128 to 4096 cores. Profiling was done for a small duration of 0.11 crank angle degrees near TDC during ignition. Optimization of input/output performance resulted in a significant speedup in reading restart files, and in an over 100-times speedup in writing restart files and files for post-processing. Improvements to communication resulted in a 1400-times speedup in the mesh load balancing operation during initialization, on 4096 cores. An improved, "stiffness-based" algorithm for load balancing chemical kinetics calculations was developed, which results in an over 3-times faster run-time near ignition on 4096 cores relative to the original load balancing scheme. With this improvement to load balancing, the code achieves over 78% scaling efficiency on 2048 cores, and over 65% scaling efficiency on 4096 cores, relative to 256 cores.
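As a rough illustration of cost-based load balancing of chemistry cells across ranks, the sketch below uses a greedy longest-processing-time heuristic, with a per-cell cost standing in for the stiffness estimate. This is an assumption-laden sketch of the general idea, not the algorithm implemented in the engine code.

```python
import heapq

def balance_cells(costs, n_ranks):
    """Greedy LPT assignment: cells with the highest estimated
    chemistry cost (a proxy for ODE stiffness) are placed first,
    each on the currently least-loaded rank."""
    heap = [(0.0, r) for r in range(n_ranks)]   # (load, rank)
    heapq.heapify(heap)
    assignment = {r: [] for r in range(n_ranks)}
    loads = [0.0] * n_ranks
    # visit cells in order of decreasing cost
    for cell in sorted(range(len(costs)), key=lambda i: -costs[i]):
        load, r = heapq.heappop(heap)           # least-loaded rank
        assignment[r].append(cell)
        loads[r] = load + costs[cell]
        heapq.heappush(heap, (loads[r], r))
    return assignment, loads

assignment, loads = balance_cells([5.0, 3.0, 3.0, 2.0, 2.0, 1.0], 2)
```

With these toy costs the two ranks end up with equal load (8.0 each), whereas a naive contiguous split would leave one rank far busier near ignition.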
Equilibrium cycle pin by pin transport depletion calculations with DeCART
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kochunas, B.; Downar, T.; Taiwo, T.
As the Advanced Fuel Cycle Initiative (AFCI) program has matured, it has become more important to utilize more advanced simulation methods. The work reported here was performed as part of the AFCI fellowship program to develop and demonstrate the capability of performing high-fidelity equilibrium cycle calculations. As part of this work, a new multi-cycle analysis capability was implemented in the DeCART code, which included modifying the depletion modules to perform nuclide decay calculations, implementing an assembly shuffling pattern description, and modifying iteration schemes. During the work, stability issues were uncovered with respect to simultaneously converging the neutron flux, isotopics, and fluid density and temperature distributions in 3-D. Relaxation factors were implemented, which considerably improved the stability of the convergence. To demonstrate the capability, two core designs were utilized: a reference UOX core and a CORAIL core. Full-core equilibrium cycle calculations were performed on both cores and the discharge isotopics were compared. From this comparison it was noted that the improved modeling capability was not drastically different in its prediction of the discharge isotopics when compared to 2-D single-assembly or 2-D core models. For fissile isotopes such as U-235, Pu-239, and Pu-241 the relative differences were 1.91%, 1.88%, and 0.59%, respectively. While this difference may not seem large, it translates to mass differences on the order of tens of grams per assembly, which may be significant for the purposes of accounting of special nuclear material.
Elaborate SMART MCNP Modelling Using ANSYS and Its Applications
NASA Astrophysics Data System (ADS)
Song, Jaehoon; Surh, Han-bum; Kim, Seung-jin; Koo, Bonsueng
2017-09-01
An MCNP 3-dimensional model can be widely used to evaluate various design parameters such as a core design or shielding design. Conventionally, a simplified 3-dimensional MCNP model is applied to calculate these parameters because of the cumbersomeness of modelling by hand. ANSYS has a function for converting the CAD 'stp' format into the geometry part of an MCNP input. Using ANSYS and a 3-dimensional CAD file, a very detailed and sophisticated MCNP 3-dimensional model can be generated. The MCNP model is applied to evaluate the assembly weighting factor at the ex-core detector of SMART, and the result is compared with a simplified MCNP SMART model and the assembly weighting factor calculated by DORT, a deterministic Sn code.
Bumper 3 Update for IADC Protection Manual
NASA Technical Reports Server (NTRS)
Christiansen, Eric L.; Nagy, Kornel; Hyde, Jim
2016-01-01
The Bumper code has been the standard in use by NASA and contractors to perform meteoroid/debris risk assessments since 1990. It has undergone extensive revisions and updates [NASA JSC HITF website; Christiansen et al., 1992, 1997]. NASA Johnson Space Center (JSC) has applied BUMPER to risk assessments for Space Station, Shuttle, Mir, Extravehicular Mobility Units (EMU) space suits, and other spacecraft (e.g., LDEF, Iridium, TDRS, and Hubble Space Telescope). Bumper continues to be updated with changes in the ballistic limit equations describing failure thresholds of various spacecraft components, as well as changes in the meteoroid and debris environment models. Significant efforts are expended to validate Bumper and benchmark it against other meteoroid/debris risk assessment codes. Bumper 3 is a refactored version of Bumper II. The structure of the code was extensively modified to improve maintenance, performance and flexibility. The architecture was changed to separate the frequently updated ballistic limit equations from the relatively stable common core functions of the program. These updates allow NASA to produce specific editions of Bumper 3 that are tailored for specific customer requirements. The core consists of common code necessary to process the Micrometeoroid and Orbital Debris (MMOD) environment models, assess shadowing and calculate MMOD risk. The library of target response subroutines includes a broad range of different types of MMOD shield ballistic limit equations as well as equations describing damage to various spacecraft subsystems or hardware (thermal protection materials, windows, radiators, solar arrays, cables, etc.). The core and library of ballistic response subroutines are maintained under configuration control. A change in the core will affect all editions of the code, whereas a change in one or more of the response subroutines will affect all editions of the code that contain the particular response subroutines which are modified.
Note that the Bumper II program is no longer maintained or distributed by NASA.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gehin, Jess C; Godfrey, Andrew T; Evans, Thomas M
The Consortium for Advanced Simulation of Light Water Reactors (CASL) is developing a collection of methods and software products known as VERA, the Virtual Environment for Reactor Applications, including a core simulation capability called VERA-CS. A key milestone for this endeavor is to validate VERA against measurements from operating nuclear power reactors. The first step in validation against plant data is to determine the ability of VERA to accurately simulate the initial startup physics tests for Watts Bar Nuclear Power Station, Unit 1 (WBN1) cycle 1. VERA-CS calculations were performed with the Insilico code developed at ORNL, using cross-section processing from the SCALE system and the transport capabilities within the Denovo transport code using the SPN method. The calculations were performed with ENDF/B-VII.0 cross sections in 252 groups (collapsed to 23 groups for the 3D transport solution). The key results of the comparison of calculations with measurements include initial criticality, critical configurations, control rod worth, differential boron worth, and the isothermal temperature reactivity coefficient (ITC). The VERA results for these parameters show good agreement with measurements, with the exception of the ITC, which requires additional investigation. Results are also compared to those obtained with Monte Carlo methods and a current industry core simulator.
Neutron-gamma flux and dose calculations in a Pressurized Water Reactor (PWR)
NASA Astrophysics Data System (ADS)
Brovchenko, Mariya; Dechenaux, Benjamin; Burn, Kenneth W.; Console Camprini, Patrizio; Duhamel, Isabelle; Peron, Arthur
2017-09-01
The present work deals with Monte Carlo simulations aiming to determine the neutron and gamma responses outside the vessel and in the basemat of a Pressurized Water Reactor (PWR). The model is based on the Tihange-I Belgian nuclear reactor. With a large set of information and measurements available, this reactor has the advantage of being easily modelled and allows validation against the experimental measurements. Power distribution calculations were therefore performed with the MCNP code at IRSN and compared to the available in-core measurements. Results showed a good agreement between calculated and measured values over the whole core. In this paper, the methods and hypotheses used for the particle transport simulation from the fission distribution in the core to the detectors outside the vessel of the reactor are also summarized. The results of the simulations are presented, including the neutron and gamma doses and flux energy spectra. MCNP6 computational results comparing the JEFF3.1 and ENDF-B/VII.1 nuclear data evaluations, and the sensitivity of the results to some model parameters, are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shemon, Emily R.; Smith, Micheal A.; Lee, Changho
2016-02-16
PROTEUS-SN is a three-dimensional, highly scalable, high-fidelity neutron transport code developed at Argonne National Laboratory. The code is applicable to reactor transport calculations in all spectra, particularly those in which a high degree of fidelity is needed either to represent spatial detail or to resolve solution gradients. PROTEUS-SN solves the second-order formulation of the transport equation using the continuous Galerkin finite element method in space, the discrete ordinates approximation in angle, and the multigroup approximation in energy. PROTEUS-SN's parallel methodology permits the efficient decomposition of the problem by both space and angle, allowing large problems to run efficiently on hundreds of thousands of cores. PROTEUS-SN can also be used in serial or on smaller compute clusters (tens to hundreds of cores) for smaller homogenized problems, although it is generally more computationally expensive than traditional homogenized-methodology codes. PROTEUS-SN has been used to model partially homogenized systems, where regions of interest are represented explicitly and other regions are homogenized to reduce the problem size and required computational resources. PROTEUS-SN solves forward and adjoint eigenvalue problems and permits both neutron upscattering and downscattering. An adiabatic kinetics option has recently been included for performing simple time-dependent calculations in addition to standard steady-state calculations. PROTEUS-SN handles void and reflective boundary conditions. Multigroup cross sections can be generated externally using the MC2-3 fast reactor multigroup cross section generation code or internally using the cross section application programming interface (API), which can treat the subgroup or resonance table libraries. PROTEUS-SN is written in Fortran 90 and also includes C preprocessor definitions. The code links against the PETSc, METIS, HDF5, and MPICH libraries. It optionally links against the MOAB library and is part of the SHARP multi-physics suite for coupled multi-physics analysis of nuclear reactors. This user manual describes how to set up a neutron transport simulation with the PROTEUS-SN code. A companion methodology manual describes the theory and algorithms within PROTEUS-SN.
Comparisons for ESTA-Task3: ASTEC, CESAM and CLÉS
NASA Astrophysics Data System (ADS)
Christensen-Dalsgaard, J.
The ESTA activity under the CoRoT project aims at testing the tools for computing stellar models and oscillation frequencies that will be used in the analysis of asteroseismic data from CoRoT and other large-scale upcoming asteroseismic projects. Here I report results of comparisons between calculations using the Aarhus code (ASTEC) and two other codes, for models that include diffusion and settling. It is found that there are likely deficiencies, requiring further study, in the ASTEC computation of models including convective cores.
A computer program for estimation from incomplete multinomial data
NASA Technical Reports Server (NTRS)
Credeur, K. R.
1978-01-01
Coding is given for maximum likelihood and Bayesian estimation of the vector p of multinomial cell probabilities from incomplete data. Also included is coding to calculate and approximate elements of the posterior mean and covariance matrices. The program is written in FORTRAN IV for the Control Data CYBER 170 series digital computer system with Network Operating System (NOS) 1.1. The program requires approximately 44000 octal locations of core storage. A typical case requires from 72 to 92 seconds on the CYBER 175, depending on the value of the prior parameter.
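A modern sketch of the maximum-likelihood side of such a calculation: when some observations are only known to fall within a subset of cells, an EM iteration splits each partially classified count according to the current probability estimate and re-normalizes. The function name and toy data below are ours, not the original FORTRAN program's.

```python
import numpy as np

def em_multinomial(complete, partial, n_iter=200):
    """EM estimate of multinomial cell probabilities from incomplete
    data (illustrative sketch only).

    complete : counts fully classified into each of the k cells
    partial  : list of (count, cells), where each count is only known
               to fall somewhere within the listed cell indices
    """
    complete = np.asarray(complete, dtype=float)
    p = complete / complete.sum()                  # initial guess
    for _ in range(n_iter):
        alloc = complete.copy()
        for m, cells in partial:                   # E-step: split each
            w = p[cells] / p[cells].sum()          # partial count by p
            alloc[cells] += m * w
        p = alloc / alloc.sum()                    # M-step: renormalize
    return p

# 100 fully classified observations plus 10 known only to be in cell 0 or 1
p = em_multinomial([30, 30, 40], [(10, [0, 1])])
```

Here the 10 ambiguous counts are split evenly between the first two cells, since the completely classified data give them equal probability.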
NASA Technical Reports Server (NTRS)
Clement, J. D.; Kirby, K. D.
1973-01-01
Exploratory calculations were performed for several gas core breeder reactor configurations. The computational method involved the use of the MACH-1 one dimensional diffusion theory code and the THERMOS integral transport theory code for thermal cross sections. Computations were performed to analyze thermal breeder concepts and nonbreeder concepts. Analysis of breeders was restricted to the (U-233)-Th breeding cycle, and computations were performed to examine a range of parameters. These parameters include U-233 to hydrogen atom ratio in the gaseous cavity, carbon to thorium atom ratio in the breeding blanket, cavity size, and blanket size.
VICTORIA: A mechanistic model for radionuclide behavior in the reactor coolant system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schaperow, J.H.; Bixler, N.E.
1996-12-31
VICTORIA is the U.S. Nuclear Regulatory Commission's (NRC's) mechanistic, best-estimate code for analysis of fission product release from the core and subsequent transport in the reactor vessel and reactor coolant system. VICTORIA requires thermal-hydraulic data (i.e., temperatures, pressures, and velocities) as input. In the past, these data have been taken from the results of calculations from thermal-hydraulic codes such as SCDAP/RELAP5, MELCOR, and MAAP. Validation and assessment of VICTORIA 1.0 have been completed. An independent peer review of VICTORIA, directed by Brookhaven National Laboratory and supported by experts in the areas of fuel release, fission product chemistry, and aerosol physics, has been undertaken. This peer review, which will independently assess the code's capabilities, is nearing completion, with the peer review committee's final report expected in December 1996. A limited amount of additional development is expected as a result of the peer review. Following this additional development, the NRC plans to release VICTORIA 1.1 and an updated and improved code manual. Future plans mainly involve use of the code for plant calculations to investigate specific safety issues as they arise. Also, the code will continue to be used in support of the Phebus experiments.
Natural Circulation Level Optimization and the Effect during ULOF Accident in the SPINNOR Reactors
NASA Astrophysics Data System (ADS)
Abdullah, Ade Gafar; Su'ud, Zaki; Kurniadi, Rizal; Kurniasih, Neny; Yulianti, Yanti
2010-12-01
Natural circulation level optimization and its effect during a loss-of-flow accident in the 250 MWt MOX-fuelled small Pb-Bi-cooled non-refueling nuclear reactor (SPINNOR) have been studied. The simulation was performed using the FI-ITB safety code, which has been developed at ITB. The simulation begins with a steady-state calculation of the neutron flux, power distribution, and temperature distribution across the core, hot pool, cool pool, and steam generator. When the accident starts due to the loss of pumping power, the power and temperature distributions of the core, hot pool, cool pool, and steam generator change. The feedback reactivity calculation is then conducted, followed by the kinetics calculation. The process is repeated until the optimum power distribution is achieved. The results show that the SPINNOR reactor has inherent safety capability against this accident.
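The feedback-then-kinetics loop described above can be illustrated with a minimal point-kinetics model: one delayed-neutron group plus a simple adiabatic temperature feedback on reactivity. Every parameter value below is illustrative, not SPINNOR data, and the scheme is far simpler than the FI-ITB code.

```python
def point_kinetics(rho_ext, T_end=1.0, dt=1e-4,
                   beta=0.0065, Lam=1e-5, lam=0.08,
                   alpha=-1e-5, C_heat=1.0, T0=600.0):
    """One-group point kinetics with temperature feedback
    rho = rho_ext + alpha*(T - T0). Explicit Euler; all parameter
    values are illustrative placeholders."""
    n = 1.0                        # normalized power
    c = beta / (Lam * lam)         # equilibrium precursor concentration
    T = T0                         # core temperature (adiabatic heat-up)
    for _ in range(int(T_end / dt)):
        rho = rho_ext + alpha * (T - T0)          # feedback reactivity
        dn = ((rho - beta) / Lam * n + lam * c) * dt
        dc = (beta / Lam * n - lam * c) * dt
        T += (n / C_heat) * dt                    # arbitrary heat units
        n += dn
        c += dc
    return n, T

n_steady, _ = point_kinetics(0.0, alpha=0.0)   # no insertion: steady power
n_up, _ = point_kinetics(0.001)                # +100 pcm: power rises
```

With zero external reactivity and no feedback the precursor balance keeps the power exactly flat, which is a quick consistency check on the equations.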
An approach to model reactor core nodalization for deterministic safety analysis
NASA Astrophysics Data System (ADS)
Salim, Mohd Faiz; Samsudin, Mohd Rafie; Mamat @ Ibrahim, Mohd Rizal; Roslan, Ridha; Sadri, Abd Aziz; Farid, Mohd Fairus Abd
2016-01-01
Adopting a good nodalization strategy is essential to produce an accurate, high-quality input model for Deterministic Safety Analysis (DSA) using a System Thermal-Hydraulic (SYS-TH) computer code. The purpose of such analysis is to demonstrate compliance with regulatory requirements and to verify that the reactor behaves during normal and accident conditions as it was originally designed to. Numerous studies in the past have been devoted to the development of nodalization strategies for research reactors ranging from small (e.g., 250 kW) to larger (e.g., 30 MW) facilities. As such, this paper aims to discuss the state-of-the-art thermal-hydraulics channel to be employed in the nodalization of the reactor core of the RTP-TRIGA Research Reactor. At present, the required thermal-hydraulic parameters for the reactor core, such as core geometrical data (length, coolant flow area, hydraulic diameters, and axial power profile) and material properties (including the UZrH1.6 fuel, stainless steel clad, and graphite reflector), have been collected, analyzed, and consolidated in the Reference Database of RTP using a standardized methodology, derived mainly from the available technical documentation. Based on the available information in the database, the assumptions made in the nodalization approach and the calculations performed will be discussed and presented. The development and identification of the thermal-hydraulics channel for the reactor core will be implemented during the SYS-TH calculation using the RELAP5-3D® computer code. The activity presented in this paper is part of the development of the overall nodalization description for the RTP-TRIGA Research Reactor under the IAEA Norwegian Extra-Budgetary Programme (NOKEBP) mentoring project on Expertise Development through the Analysis of Reactor Thermal-Hydraulics for Malaysia, denoted EARTH-M.
RIXS of Ammonium Nitrate using OCEAN
NASA Astrophysics Data System (ADS)
Vinson, John; Jach, Terrence; Mueller, Matthias; Unterumsberger, Rainer; Beckhoff, Burkhard
Stability properties and fast ion confinement of hybrid tokamak plasma configurations
NASA Astrophysics Data System (ADS)
Graves, J. P.; Brunetti, D.; Pfefferle, D.; Faustin, J. M. P.; Cooper, W. A.; Kleiner, A.; Lanthaler, S.; Patten, H. W.; Raghunathan, M.
2015-11-01
In hybrid scenarios with flat q just above unity, extremely fast-growing tearing modes are born from toroidal sidebands of the near-resonant ideal internal kink mode. New scalings of the growth rate with the magnetic Reynolds number arise from two-fluid effects and sheared toroidal flow. Non-linearly saturated 1/1 dominant modes obtained from initial-value stability calculations agree in amplitude with the 1/1 component of a 3D VMEC equilibrium calculation. A viable and realistic equilibrium representation of such internal kink modes allows fast-ion studies to be accurately established. Calculations of MAST neutral-beam ion distributions using the VENUS-LEVIS code show very good agreement with the observed impaired core fast-ion confinement when long-lived modes occur. The 3D ICRH code SCENIC also enables the establishment of minority RF distributions in hybrid plasmas susceptible to saturated near-resonant internal kink modes.
Accident Analysis for the NIST Research Reactor Before and After Fuel Conversion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baek J.; Diamond D.; Cuadra, A.
Postulated accidents have been analyzed for the 20 MW D2O-moderated research reactor (NBSR) at the National Institute of Standards and Technology (NIST). The analysis has been carried out for the present core, which contains high enriched uranium (HEU) fuel, and for a proposed equilibrium core with low enriched uranium (LEU) fuel. The analyses employ state-of-the-art calculational methods. Three-dimensional Monte Carlo neutron transport calculations were performed with the MCNPX code to determine homogenized fuel compositions in the lower and upper halves of each fuel element and to determine the resulting neutronic properties of the core. The accident analysis employed a model of the primary loop with the RELAP5 code. The model includes the primary pumps, shutdown pumps outlet valves, heat exchanger, fuel elements, and flow channels for both the six inner and twenty-four outer fuel elements. Evaluations were performed for the following accidents: (1) control rod withdrawal startup accident, (2) maximum reactivity insertion accident, (3) loss-of-flow accident resulting from loss of electrical power with an assumption of failure of shutdown cooling pumps, (4) loss-of-flow accident resulting from a primary pump seizure, and (5) loss-of-flow accident resulting from inadvertent throttling of a flow control valve. In addition, natural circulation cooling at low power operation was analyzed. The analysis shows that the conversion will not lead to significant changes in the safety analysis and the calculated minimum critical heat flux ratio and maximum clad temperature assure that there is adequate margin to fuel failure.
GPU COMPUTING FOR PARTICLE TRACKING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nishimura, Hiroshi; Song, Kai; Muriki, Krishna
2011-03-25
This is a feasibility study of using a modern Graphics Processing Unit (GPU) to parallelize an accelerator particle tracking code. To demonstrate the massive parallelization features provided by GPU computing, a simplified TracyGPU program was developed for dynamic aperture calculation. Performance, issues, and challenges from introducing the GPU are also discussed. General Purpose computation on Graphics Processing Units (GPGPU) brings massive parallel computing capabilities to numerical calculation. However, the unique architecture of the GPU requires a comprehensive understanding of the hardware and programming model in order to optimize existing applications well. In the field of accelerator physics, the dynamic aperture calculation of a storage ring, which is often the most time-consuming part of accelerator modeling and simulation, can benefit from the GPU due to its embarrassingly parallel nature, which fits well with the GPU programming model. In this paper, we use the Tesla C2050 GPU, which consists of 14 multi-processors (MPs) with 32 cores each, for a total of 448 cores, to host thousands of threads dynamically. A thread is a logical execution unit of the program on the GPU. In the GPU programming model, threads are grouped into a collection of blocks. Within each block, multiple threads share the same code and up to 48 KB of shared memory. Multiple thread blocks form a grid, which is executed as a GPU kernel. A simplified code that is a subset of Tracy++ [2] was developed to demonstrate the possibility of using the GPU to speed up the dynamic aperture calculation by having each thread track a particle.
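The one-thread-per-particle idea can be mimicked on a CPU by vectorising over particles. The sketch below tracks particles through a Henon-like one-turn map (linear rotation plus a thin sextupole kick) and flags losses for a crude dynamic aperture scan; the map and all parameters are illustrative, not those of Tracy++ or TracyGPU.

```python
import numpy as np

def track(x0, xp0, n_turns, mu=0.254, k2=1.0, aperture=1.0):
    """Track many particles in parallel through a simple one-turn map,
    mimicking the one-thread-per-particle GPU layout. Returns per-
    particle survival flags. Illustrative parameters only."""
    c, s = np.cos(2 * np.pi * mu), np.sin(2 * np.pi * mu)
    x = np.array(x0, dtype=float)
    xp = np.array(xp0, dtype=float)
    alive = np.ones(x.shape, dtype=bool)
    for _ in range(n_turns):
        xp = xp + k2 * x * x                        # thin sextupole kick
        x, xp = c * x + s * xp, -s * x + c * xp     # linear one-turn rotation
        alive &= np.abs(x) < aperture               # flag lost particles
        x = np.where(alive, x, 0.0)                 # park lost particles
        xp = np.where(alive, xp, 0.0)
    return alive

# a small-amplitude particle survives; a large-amplitude one is lost
alive = track([1e-3, 1.5], [0.0, 0.0], n_turns=200)
```

Because every particle applies the same map independently, the loop body maps directly onto a GPU kernel with one thread per particle, which is exactly the "embarrassingly parallel" structure the abstract refers to.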
Core/corona modeling of diode-imploded annular loads
NASA Astrophysics Data System (ADS)
Terry, R. E.; Guillory, J. U.
1980-11-01
The effects of a tenuous exterior plasma corona with anomalous resistivity on the compression and heating of a hollow, collisional aluminum z-pinch plasma are predicted by a one-dimensional code. As the interior ("core") plasma is imploded by its axial current, the energy exchange between core and corona determines the current partition. Under the conditions of rapid core heating and compression, the increase in coronal current provides a trade-off between radial acceleration and compression, which reduces the implosion forces and softens the pitch. Combined with a heuristic account of energy and momentum transport in the strongly coupled core plasma and an approximate radiative loss calculation including Al line, recombination and Bremsstrahlung emission, the current model can provide a reasonably accurate description of imploding annular plasma loads that remain azimuthally symmetric. The implications for optimization of generator load coupling are examined.
An assessment of the CORCON-MOD3 code. Part 1: Thermal-hydraulic calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strizhov, V.; Kanukova, V.; Vinogradova, T.
1996-09-01
This report deals with the subject of CORCON-Mod3 code validation (thermal-hydraulic modeling capability only) based on MCCI (molten core concrete interaction) experiments conducted under different programs in the past decade. Thermal-hydraulic calculations (i.e., concrete ablation, melt temperature, melt energy, concrete temperature, and condensible and non-condensible gas generation) were performed with the code, and compared with the data from 15 experiments, conducted at different scales using both simulant (metallic and oxidic) and prototypic melt materials, using different concrete types, and with and without an overlying water pool. Sensitivity studies were performed in a few cases involving, for example, heat transfer from melt to concrete, condensed phase chemistry, etc. Further, special analysis was performed using the ACE L8 experimental data to illustrate the differences between the experimental and the reactor conditions, and to demonstrate that with proper corrections made to the code, the calculated results were in better agreement with the experimental data. Generally, in the case of dry cavity and metallic melts, CORCON-Mod3 thermal-hydraulic calculations were in good agreement with the test data. For oxidic melts in a dry cavity, uncertainties in heat transfer models played an important role for two melt configurations: a stratified geometry with segregated metal and oxide layers, and a heterogeneous mixture. Some discrepancies in the gas release data were noted in a few cases.
Verification of ARES transport code system with TAKEDA benchmarks
NASA Astrophysics Data System (ADS)
Zhang, Liang; Zhang, Bin; Zhang, Penghe; Chen, Mengteng; Zhao, Jingchang; Zhang, Shun; Chen, Yixue
2015-10-01
Neutron transport modeling and simulation are central to many areas of nuclear technology, including reactor core analysis, radiation shielding and radiation detection. In this paper the series of TAKEDA benchmarks is modeled to verify the criticality calculation capability of ARES, a discrete ordinates neutral-particle transport code system. The SALOME platform is coupled with ARES to provide geometry modeling and mesh generation functions. The Koch-Baker-Alcouffe parallel sweep algorithm is applied to accelerate the traditional transport calculation process. The results show that the eigenvalues calculated by ARES are in excellent agreement with the reference values presented in NEACRP-L-330, with differences of less than 30 pcm except for the first case of model 3. Additionally, ARES provides accurate flux distributions compared to reference values, with deviations of less than 2% for region-averaged fluxes in all cases. All of these results confirm the feasibility of the ARES-SALOME coupling and demonstrate that ARES performs well in criticality calculations.
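For reference, an eigenvalue discrepancy quoted in pcm can be computed from two multiplication factors as below. This uses one common convention (difference in reactivity, 1 pcm = 1e-5), which may differ slightly from the exact definition used in the benchmark report.

```python
def reactivity_diff_pcm(k, k_ref):
    """Reactivity difference between two multiplication factors in pcm
    (1 pcm = 1e-5 in reactivity). One common convention; a sketch,
    not necessarily the benchmark's exact definition."""
    return 1e5 * (k - k_ref) / (k * k_ref)

# an eigenvalue 30 pcm above the reference, roughly the quoted bound
d = reactivity_diff_pcm(1.00030, 1.00000)
```

For eigenvalues near unity this is nearly identical to the simpler 1e5*(k - k_ref), so either convention reproduces the "less than 30 pcm" scale of agreement.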
MC2-3: Multigroup Cross Section Generation Code for Fast Reactor Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Changho; Yang, Won Sik
This paper presents the methods and performance of the MC2-3 code, a multigroup cross-section generation code for fast reactor analysis, developed to improve the resonance self-shielding and spectrum calculation methods of MC2-2 and to simplify the current multistep schemes generating region-dependent broad-group cross sections. Using the basic neutron data from ENDF/B data files, MC2-3 solves the consistent P1 multigroup transport equation to determine the fundamental-mode spectra for use in generating multigroup neutron cross sections. A homogeneous medium or a heterogeneous slab or cylindrical unit-cell problem is solved at the ultrafine (2082) or hyperfine (~400 000) group level. In the resolved resonance range, pointwise cross sections are reconstructed with Doppler broadening at specified temperatures. The pointwise cross sections are directly used in the hyperfine group calculation, whereas for the ultrafine group calculation, self-shielded cross sections are prepared by numerical integration of the pointwise cross sections based upon the narrow resonance approximation. For both the hyperfine and ultrafine group calculations, unresolved resonances are self-shielded using the analytic resonance integral method. The ultrafine group calculation can also be performed for a two-dimensional whole-core problem to generate region-dependent broad-group cross sections. Verification tests have been performed using the benchmark problems for various fast critical experiments, including Los Alamos National Laboratory critical assemblies; Zero-Power Reactor, Zero-Power Physics Reactor, and Bundesamt für Strahlenschutz experiments; the Monju start-up core; and the Advanced Burner Test Reactor. Verification and validation results with ENDF/B-VII.0 data indicated that eigenvalues from MC2-3/DIF3D agreed with Monte Carlo N-Particle (MCNP5) or VIM Monte Carlo solutions within 200 pcm, and region-wise one-group fluxes were in good agreement with Monte Carlo solutions.
Nuclear data uncertainty propagation by the XSUSA method in the HELIOS2 lattice code
NASA Astrophysics Data System (ADS)
Wemple, Charles; Zwermann, Winfried
2017-09-01
Uncertainty quantification has been extensively applied to nuclear criticality analyses for many years and has recently begun to be applied to depletion calculations. However, regulatory bodies worldwide are trending toward requiring such analyses for reactor fuel cycle calculations, which also requires uncertainty propagation for isotopics and nuclear reaction rates. XSUSA is a proven methodology for cross section uncertainty propagation based on random sampling of the nuclear data according to covariance data in multi-group representation; HELIOS2 is a lattice code widely used for commercial and research reactor fuel cycle calculations. This work describes a technique to automatically propagate the nuclear data uncertainties via the XSUSA approach through fuel lattice calculations in HELIOS2. Application of the XSUSA methodology in HELIOS2 presented some unusual challenges because of the highly-processed multi-group cross section data used in commercial lattice codes. Currently, uncertainties based on the SCALE 6.1 covariance data file are being used, but the implementation can be adapted to other covariance data in multi-group structure. Pin-cell and assembly depletion calculations, based on models described in the UAM-LWR Phase I and II benchmarks, are performed and uncertainties in multiplication factor, reaction rates, isotope concentrations, and delayed-neutron data are calculated. With this extension, it will be possible for HELIOS2 users to propagate nuclear data uncertainties directly from the microscopic cross sections to subsequent core simulations.
NASA Astrophysics Data System (ADS)
Aufiero, M.; Cammi, A.; Fiorina, C.; Leppänen, J.; Luzzi, L.; Ricotti, M. E.
2013-10-01
In this work, the Monte Carlo burn-up code SERPENT-2 has been extended and employed to study the material isotopic evolution of the Molten Salt Fast Reactor (MSFR). This promising Gen-IV nuclear reactor concept features peculiar characteristics, such as on-line fuel reprocessing, that prevent the use of commonly available burn-up codes. In addition, the presence of circulating nuclear fuel and radioactive streams from the core to the reprocessing plant requires a precise knowledge of the fuel isotopic composition during plant operation. The developed extension of SERPENT-2 directly takes into account the effects of on-line fuel reprocessing on burn-up calculations and features a reactivity control algorithm. It is assessed here against a dedicated version of the deterministic ERANOS-based EQL3D procedure (PSI, Switzerland) and adopted to analyze the MSFR fuel salt isotopic evolution. Particular attention is devoted to studying the effects of reprocessing time constants and efficiencies on the conversion ratio and on the molar concentration of elements relevant for solubility issues (e.g., trivalent actinides and lanthanides). Quantities of interest for fuel handling and safety issues are investigated, including decay heat and activities of hazardous isotopes (neutron and high-energy gamma emitters) in the core and in the reprocessing stream. The radiotoxicity generation is also analyzed for the MSFR nominal conditions. The production of helium and the depletion of tungsten content due to nuclear reactions are calculated for the nickel-based alloy selected as the structural material of the MSFR. These preliminary evaluations can be helpful in studying the radiation damage of both the primary salt container and the axial reflectors.
Performance optimization of Qbox and WEST on Intel Knights Landing
NASA Astrophysics Data System (ADS)
Zheng, Huihuo; Knight, Christopher; Galli, Giulia; Govoni, Marco; Gygi, Francois
We present the optimization of the electronic structure codes Qbox and WEST targeting the Intel® Xeon Phi™ processor, codenamed Knights Landing (KNL). Qbox is an ab initio molecular dynamics code based on plane-wave density functional theory (DFT), and WEST is a post-DFT code for excited-state calculations within many-body perturbation theory. Both Qbox and WEST employ highly scalable algorithms which enable accurate large-scale electronic structure calculations on leadership-class supercomputer platforms beyond 100,000 cores, such as Mira and Theta at the Argonne Leadership Computing Facility. In this work, features of the KNL architecture (e.g. hierarchical memory) are explored to achieve higher performance in key algorithms of the Qbox and WEST codes and to develop a roadmap for further development targeting next-generation computing architectures. In particular, the optimizations of the Qbox and WEST codes on the KNL platform will target efficient large-scale electronic structure calculations of nanostructured materials exhibiting complex structures and prediction of their electronic and thermal properties for use in solar and thermal energy conversion devices. This work was supported by MICCoM, as part of the Comp. Mats. Sci. Program funded by the U.S. DOE, Office of Sci., BES, MSE Division. This research used resources of the ALCF, which is a DOE Office of Sci. User Facility under Contract DE-AC02-06CH11357.
Performance of the MTR core with MOX fuel using the MCNP4C2 code.
Shaaban, Ismail; Albarhoum, Mohamad
2016-08-01
The MCNP4C2 code was used to simulate the MTR-22 MW research reactor and perform the neutronic analysis for a new fuel, namely a MOX (U3O8 & PuO2) fuel dispersed in an Al matrix, for One Neutronic Trap (ONT) and Three Neutronic Traps (TNTs) in its core. The new characteristics were compared to the original characteristics based on the U3O8-Al fuel. Experimental data for the neutronic parameters, including criticality, relative to the MTR-22 MW reactor with the original U3O8-Al fuel at nominal power were used to validate the calculated values and were found acceptable. The achieved results confirm that the use of MOX fuel in the MTR-22 MW will not degrade the safe operational conditions of the reactor. In addition, the use of MOX fuel in the MTR-22 MW core reduces the uranium fuel enrichment in (235)U and the amount of (235)U loaded in the core by about 34.84% and 15.21% for the ONT and TNTs cases, respectively. Copyright © 2016 Elsevier Ltd. All rights reserved.
A NEW HYBRID N-BODY-COAGULATION CODE FOR THE FORMATION OF GAS GIANT PLANETS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bromley, Benjamin C.; Kenyon, Scott J., E-mail: bromley@physics.utah.edu, E-mail: skenyon@cfa.harvard.edu
2011-04-20
We describe an updated version of our hybrid N-body-coagulation code for planet formation. In addition to the features of our 2006-2008 code, our treatment now includes algorithms for the one-dimensional evolution of the viscous disk, the accretion of small particles in planetary atmospheres, gas accretion onto massive cores, and the response of N-bodies to the gravitational potential of the gaseous disk and the swarm of planetesimals. To validate the N-body portion of the algorithm, we use a battery of tests in planetary dynamics. As a first application of the complete code, we consider the evolution of Pluto-mass planetesimals in a swarm of 0.1-1 cm pebbles. In a typical evolution time of 1-3 Myr, our calculations transform 0.01-0.1 M☉ disks of gas and dust into planetary systems containing super-Earths, Saturns, and Jupiters. Low-mass planets form more often than massive planets; disks with smaller α form more massive planets than disks with larger α. For Jupiter-mass planets, masses of solid cores are 10-100 M⊕.
Overview and Current Status of Analyses of Potential LEU Design Concepts for TREAT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Connaway, H. M.; Kontogeorgakos, D. C.; Papadias, D. D.
2015-10-01
Neutronic and thermal-hydraulic analyses have been performed to evaluate the performance of different low-enriched uranium (LEU) fuel design concepts for the conversion of the Transient Reactor Test Facility (TREAT) from its current high-enriched uranium (HEU) fuel. TREAT is an experimental reactor developed to generate high neutron flux transients for the testing of nuclear fuels. The goal of this work was to identify an LEU design which can maintain the performance of the existing HEU core while continuing to operate safely. A wide variety of design options were considered, with a focus on minimizing peak fuel temperatures and optimizing the power coupling between the TREAT core and test samples. Designs were also evaluated to ensure that they provide sufficient reactivity and shutdown margin for each control rod bank. Analyses were performed using the core loading and experiment configuration of historic M8 Power Calibration experiments (M8CAL). The Monte Carlo code MCNP was utilized for steady-state analyses, and transient calculations were performed with the point kinetics code TREKIN. Thermal analyses were performed with the COMSOL multi-physics code. Using the results of this study, a new LEU Baseline design concept is being established, which will be evaluated in detail in a future report.
Conceptual Core Analysis of Long Life PWR Utilizing Thorium-Uranium Fuel Cycle
NASA Astrophysics Data System (ADS)
Rouf; Su'ud, Zaki
2016-08-01
Conceptual core analysis of a long-life PWR utilizing thorium-uranium based fuel has been conducted. The purpose of this study is to evaluate the neutronic behavior of a reactor core using combined thorium and enriched uranium fuel. With this fuel composition, the reactor core has a higher conversion ratio than with conventional fuel, which allows a longer operation length. The simulation was performed using the SRAC Code System with the SRACLIB-JDL32 library. The calculation was carried out for (Th-U)O2 and (Th-U)C fuel with a uranium composition of 30-40% and gadolinium (Gd2O3) as burnable poison at 0.0125%. The fuel composition was adjusted to obtain a burnup length of 10-15 years at a thermal power of 600-1000 MWt. Key properties such as uranium enrichment, fuel volume fraction, and percentage of uranium are evaluated. The core calculation in this study adopted an R-Z geometry divided into 3 regions, each with a different uranium enrichment. The results show the multiplication factor at every burnup step over the 15-year operation length, the power distribution behavior, the power peaking factor, and the conversion ratio. The optimum core design was achieved at a thermal power of 600 MWt, a uranium percentage of 35%, and a U-235 enrichment of 11-13%, with a 14-year operation length and axial and radial power peaking factors of about 1.5 and 1.2, respectively.
NASA Astrophysics Data System (ADS)
Takamatsu, Kuniyoshi; Nakagawa, Shigeaki; Takeda, Tetsuaki
Safety demonstration tests using the High Temperature Engineering Test Reactor (HTTR) are in progress to verify its inherent safety features and to improve the safety technology and design methodology for High-Temperature Gas-cooled Reactors (HTGRs). The reactivity insertion test is one of the safety demonstration tests for the HTTR. This test simulates a rapid increase in reactor power by withdrawing a control rod without operating the reactor power control system. In addition, loss-of-coolant-flow tests have been conducted to simulate a rapid decrease in reactor power by tripping one, two, or all three gas circulators. The experimental results have revealed the inherent safety features of HTGRs, such as the negative reactivity feedback effect. A numerical analysis code, named ACCORD, was developed to analyze the reactor dynamics, including the flow behavior in the HTTR core. We have modified this code to use a model with four parallel channels and twenty temperature coefficients. Furthermore, we added another analytical model of the core for calculating the heat conduction between the fuel channels and the core in the case of the loss-of-coolant-flow tests. This paper describes the validation results for the newly developed code using the experimental results. Moreover, the effect of the model is formulated quantitatively with our proposed equation. Finally, the pre-analysis result of the loss-of-coolant-flow test with all gas circulators tripped is also discussed.
OCTGRAV: Sparse Octree Gravitational N-body Code on Graphics Processing Units
NASA Astrophysics Data System (ADS)
Gaburov, Evghenii; Bédorf, Jeroen; Portegies Zwart, Simon
2010-10-01
Octgrav is a very fast tree-code which runs on massively parallel Graphics Processing Units (GPUs) with the NVIDIA CUDA architecture. The algorithms are based on parallel-scan and sort methods. The tree construction and calculation of multipole moments are carried out on the host CPU, while the force calculation, which consists of tree walks and evaluation of interaction lists, is carried out on the GPU. In this way, a sustained performance of about 100 GFLOP/s and data transfer rates of about 50 GB/s are achieved. It takes about a second to compute forces on a million particles with an opening angle of θ ≈ 0.5. To test the performance and feasibility, we implemented the algorithms in CUDA in the form of a gravitational tree-code which runs entirely on the GPU. The tree construction and traverse algorithms are portable to many-core devices which have support for the CUDA or OpenCL programming languages. The gravitational tree-code outperforms tuned CPU code during tree construction and shows an overall performance improvement of more than a factor of 20, resulting in a processing rate of more than 2.8 million particles per second. The code has a convenient user interface and is freely available for use.
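The opening-angle criterion that governs the tree walk above can be sketched compactly. The snippet below is a toy monopole-only approximation, not the Octgrav implementation (which uses full multipole moments and GPU tree traversal): a distant particle cluster is replaced by its center of mass whenever size/distance < θ, otherwise the member particles are summed directly.

```python
import numpy as np

def direct_force(p, pos, mass, G=1.0, eps=1e-12):
    """Direct-summation gravitational acceleration at point p."""
    d = pos - p
    r = np.sqrt((d ** 2).sum(axis=1)) + eps
    return G * (mass[:, None] * d / r[:, None] ** 3).sum(axis=0)

def tree_node_force(p, pos, mass, size, theta=0.5, G=1.0):
    """Barnes-Hut style acceptance test for one tree node of extent `size`:
    if size/distance < theta, the node is 'accepted' and approximated by
    its center of mass (monopole); otherwise it is 'opened' and summed
    particle by particle."""
    com = (mass[:, None] * pos).sum(axis=0) / mass.sum()
    d = com - p
    r = np.linalg.norm(d)
    if size / r < theta:                      # node accepted: one interaction
        return G * mass.sum() * d / r ** 3
    return direct_force(p, pos, mass, G)      # node opened: sum members
```

For a compact cluster at distance d, the relative error of the accepted branch scales roughly as (size/d)², which is why θ ≈ 0.5 keeps force errors small while drastically reducing the interaction count.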
Systematic studies of N IV transitions of astrophysical importance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fleming, J.; Brage, T.; Bell, K.L.
1995-12-01
E1, M2, M1, and E2 rates of transitions between n=2 levels of N IV have been calculated using the two independent codes CIV3 and MCHF. Convergence of each of the approaches has been studied and comparisons made as the complexity of the calculations increases to include valence, core-valence, and core-core correlation. The agreement between the two methods is sufficiently good to allow us to set quite narrow uncertainty bars. For the ¹S-¹P° resonance line, our recommended f-value is 0.609 with an estimated uncertainty of 0.002, while our recommended A-value for the ¹S₀-³P°₁ intercombination line is 580 s⁻¹ with an estimated uncertainty of 10 s⁻¹. © 1995 The American Astronomical Society.
High-Throughput Characterization of Porous Materials Using Graphics Processing Units
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Jihan; Martin, Richard L.; Rübel, Oliver
We have developed a high-throughput graphics processing unit (GPU) code that can characterize a large database of crystalline porous materials. In our algorithm, the GPU is utilized to accelerate energy grid calculations where the grid values represent interactions (i.e., Lennard-Jones + Coulomb potentials) between gas molecules (i.e., CH₄ and CO₂) and the material's framework atoms. Using a parallel flood fill CPU algorithm, inaccessible regions inside the framework structures are identified and blocked based on their energy profiles. Finally, we compute the Henry coefficients and heats of adsorption through statistical Widom insertion Monte Carlo moves in the domain restricted to the accessible space. The code offers significant speedup over a single-core CPU code and allows us to characterize a set of porous materials at least an order of magnitude larger than those considered in earlier studies. For structures selected by such a prescreening algorithm, full adsorption isotherms can be calculated by conducting multiple grand canonical Monte Carlo simulations concurrently within the GPU.
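The Widom insertion step described above admits a very small sketch. This toy model is not the GPU code from the paper: `energy_grid` stands in for the precomputed Lennard-Jones + Coulomb grid, and `blocked` for the flood-fill accessibility mask; insertions into blocked cells contribute zero weight, and the resulting Boltzmann average is proportional to the Henry coefficient.

```python
import numpy as np

KB = 8.314e-3  # gas constant in kJ/(mol*K), used as the per-mole Boltzmann factor

def widom_boltzmann_average(energy_grid, blocked, T, n_insert=100_000, seed=0):
    """Random test-particle insertions on a precomputed energy grid (kJ/mol).
    Insertions landing in blocked (inaccessible) cells contribute zero weight;
    the returned average <exp(-U/kT)> is proportional to the Henry coefficient."""
    rng = np.random.default_rng(seed)
    flat = energy_grid.ravel()
    accessible = ~blocked.ravel()
    idx = rng.integers(0, flat.size, n_insert)
    weights = np.where(accessible[idx], np.exp(-flat[idx] / (KB * T)), 0.0)
    return weights.mean()
```

Restricting the average to the accessible domain is essential: counting energetically favorable but physically unreachable pockets would overestimate uptake, which is exactly why the flood-fill blocking step precedes the Widom moves.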
Neutronics Analysis of SMART Small Modular Reactor using SRAC 2006 Code
NASA Astrophysics Data System (ADS)
Ramdhani, Rahmi N.; Prastyo, Puguh A.; Waris, Abdul; Widayani; Kurniadi, Rizal
2017-07-01
Small modular reactors (SMRs) are part of a new generation of nuclear reactors being developed worldwide. One of the advantages of SMRs is the flexibility to adopt advanced design concepts and technology. SMART (System integrated Modular Advanced ReacTor) is a small integral-type PWR with a thermal power of 330 MW that has been developed by KAERI (Korea Atomic Energy Research Institute). The SMART core consists of 57 fuel assemblies which are based on the well proven 17×17 array that has been used in Korean commercial PWRs. SMART is soluble boron free, and the high initial reactivity is mainly controlled by burnable absorbers. The goal of this study is to perform a neutronics evaluation of the SMART core with UO2 as the main fuel. The neutronics calculation was performed using the PIJ and CITATION modules of the SRAC 2006 code with JENDL 3.3 as the nuclear data library.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dunn, Floyd E.; Hu, Lin-wen; Wilson, Erik
The STAT code was written to automate many of the steady-state thermal hydraulic safety calculations for the MIT research reactor, both for conversion of the reactor from high enrichment uranium fuel to low enrichment uranium fuel and for future fuel re-loads after the conversion. A Monte-Carlo statistical propagation approach is used to treat uncertainties in important parameters in the analysis. These safety calculations are ultimately intended to protect against high fuel plate temperatures due to critical heat flux or departure from nucleate boiling or onset of flow instability; but additional margin is obtained by basing the limiting safety settings on avoiding onset of nucleate boiling. STAT7 can simultaneously analyze all of the axial nodes of all of the fuel plates and all of the coolant channels for one stripe of a fuel element. The stripes run the length of the fuel, from the bottom to the top. Power splits are calculated for each axial node of each plate to determine how much of the power goes out each face of the plate. By running STAT7 multiple times, full core analysis has been performed by analyzing the margin to ONB for each axial node of each stripe of each plate of each element in the core.
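The Monte-Carlo statistical propagation idea can be illustrated with a deliberately simplified model. Everything below is invented for illustration (STAT7 models plate-by-plate thermal hydraulics, not a one-line temperature correlation, and the uncertainty magnitudes are hypothetical): uncertain inputs are sampled from their distributions, pushed through the model, and a high percentile of the output is compared against the safety limit instead of stacking worst-case values.

```python
import numpy as np

def propagate_thermal_margin(n_samples=100_000, seed=0):
    """Toy Monte-Carlo propagation of input uncertainties to a thermal margin.
    Power and flow multipliers are sampled from normal distributions
    (hypothetical 2% and 3% standard deviations) and pushed through a toy
    temperature model T = T_inlet + rise * P / F; the mean and the 95th
    percentile of the peak temperature distribution are returned."""
    rng = np.random.default_rng(seed)
    power = rng.normal(1.00, 0.02, n_samples)   # relative power, hypothetical
    flow = rng.normal(1.00, 0.03, n_samples)    # relative flow, hypothetical
    t_peak = 60.0 + 40.0 * power / flow         # toy model, degrees C
    return t_peak.mean(), np.percentile(t_peak, 95.0)
```

The statistical approach recovers margin that a stacked-uncertainty analysis gives away: the 95th percentile sits well below the temperature obtained by simultaneously assuming worst-case power and worst-case flow.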
Kaliman, Ilya A; Krylov, Anna I
2017-04-30
A new hardware-agnostic contraction algorithm for tensors of arbitrary symmetry and sparsity is presented. The algorithm is implemented as a stand-alone open-source code, libxm. This code is also integrated with the general tensor library libtensor and with the Q-Chem quantum-chemistry package. An overview of the algorithm, its implementation, and benchmarks are presented. Similarly to other tensor software, the algorithm exploits efficient matrix multiplication libraries and assumes that tensors are stored in a block-tensor form. The distinguishing features of the algorithm are: (i) efficient repackaging of the individual blocks into large matrices and back, which affords efficient graphics processing unit (GPU)-enabled calculations without modifications of higher-level codes; and (ii) fully asynchronous data transfer between disk storage and fast memory. The algorithm enables canonical all-electron coupled-cluster and equation-of-motion coupled-cluster calculations with single and double substitutions (CCSD and EOM-CCSD) with over 1000 basis functions on a single quad-GPU machine. We show that the algorithm exhibits the predicted theoretical scaling for canonical CCSD calculations, O(N⁶), irrespective of the data size on disk. © 2017 Wiley Periodicals, Inc.
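The block-repackaging idea is easy to sketch. The snippet below is a minimal illustration, not the libxm layout or API: block-sparse operands are stored as a hypothetical dict mapping block coordinates to dense blocks, absent keys are implicit zeros and are skipped, and each surviving block pair is handed to a dense matrix-multiply kernel (here `np.matmul`, standing in for a tuned BLAS/GPU GEMM).

```python
import numpy as np

def block_contract(a_blocks, b_blocks, block=4):
    """Contract C[i,k] = sum_j A[i,j] @ B[j,k] over block-sparse operands.
    a_blocks/b_blocks map (block_row, block_col) -> dense block; only stored
    (nonzero) block pairs are repackaged and fed to the dense GEMM kernel."""
    c_blocks = {}
    for (i, j), a in a_blocks.items():
        for (j2, k), b in b_blocks.items():
            if j != j2:                 # inner block indices must match
                continue
            acc = c_blocks.setdefault((i, k), np.zeros((block, block)))
            acc += a @ b                # dense kernel on the repacked blocks
    return c_blocks
```

Because all floating-point work happens inside dense GEMM calls on contiguous blocks, the same higher-level code runs unchanged whether the kernel executes on a CPU or a GPU, which is the point of feature (i) above.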
DOE Office of Scientific and Technical Information (OSTI.GOV)
S. Bays; W. Skerjanc; M. Pope
A comparative analysis of results obtained from 2-D lattice calculations and 3-D full-core nodal calculations, in the frame of MOX fuel design, was conducted. This study revealed a set of advantages and disadvantages of each method, which can be used to guide the level of accuracy desired for future fuel and fuel cycle calculations. For the purpose of isotopic generation for fuel cycle analyses, the approach of using a 2-D lattice code (i.e., a fuel assembly in an infinite lattice) gave reasonable predictions of uranium and plutonium isotope concentrations at the predicted 3-D core simulation batch average discharge burnup. However, it was found that the 2-D lattice calculation can under-predict the power of pins located along a shared edge between MOX and UO2 by as much as 20%. In this analysis, this error did not occur in the peak pin. However, this was a coincidence and does not rule out the possibility that the peak pin could occur in a lattice position with high calculation uncertainty in future un-optimized studies. Another important consideration in realistic fuel design is the prediction of the peak axial burnup and neutron fluence. The use of 3-D core simulation gave peak burnup conditions, at the pellet level, approximately 1.4 times greater than what can be predicted using back-of-the-envelope assumptions of average specific power and irradiation time.
Pretest and posttest calculations of Semiscale Test S-07-10D with the TRAC computer program. [PWR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duerre, K.H.; Cort, G.E.; Knight, T.D.
The Transient Reactor Analysis Code (TRAC) developed at the Los Alamos National Laboratory was used to predict the behavior of the small-break experiment designated Semiscale S-07-10D. This test simulates a 10 per cent communicative cold-leg break with delayed Emergency Core Coolant injection and blowdown of the broken-loop steam generator secondary. Both pretest calculations that incorporated measured initial conditions and posttest calculations that incorporated measured initial conditions and measured transient boundary conditions were completed. The posttest calculated parameters were generally between those obtained from pretest calculations and those from the test data. The results are strongly dependent on depressurization rate and, hence, on break flow.
OEDGE modeling for the planned tungsten ring experiment on DIII-D
Elder, J. David; Stangeby, Peter C.; Abrams, Tyler W.; ...
2017-04-19
The OEDGE code is used to model tungsten erosion and transport for DIII-D experiments with toroidal rings of high-Z metal tiles. Such modeling is needed for both experimental and diagnostic design to have estimates of the expected core and edge tungsten density and to understand the various factors contributing to the uncertainties in these calculations. OEDGE simulations are performed using the planned experimental magnetic geometries and plasma conditions typical of both L-mode and inter-ELM H-mode discharges in DIII-D. OEDGE plasma reconstruction based on specific representative discharges for similar geometries is used to determine the plasma conditions applied to the tungsten plasma impurity simulations. We developed a new model for tungsten erosion in OEDGE which imports charge-state-resolved carbon impurity fluxes and impact energies from a separate OEDGE run which models the carbon production, transport, and deposition for the same plasma conditions as the tungsten simulations. These values are then used to calculate the gross tungsten physical sputtering due to carbon plasma impurities, which is added to any sputtering by deuterium ions; tungsten self-sputtering is also included. The code results are found to be dependent on the following factors: divertor geometry and closure, the choice of cross-field anomalous transport coefficients, divertor plasma conditions (affecting both tungsten source strength and transport), the choice of tungsten atomic physics data used in the model (in particular sviz(Te) for W-atoms), and the model of the carbon flux and energy used for calculating the tungsten source due to sputtering. The core tungsten density is found to be of order 10¹⁵ m⁻³ (excluding effects of any core transport barrier and with significant variability depending on the other factors mentioned), with the density decaying into the scrape-off layer.
FY2012 summary of tasks completed on PROTEUS-thermal work.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, C.H.; Smith, M.A.
2012-06-06
PROTEUS is a suite of neutronics codes, both old and new, that can be used within the SHARP codes being developed under the NEAMS program. Discussion here is focused on updates and verification and validation activities of the SHARP neutronics code, DeCART, for application to thermal reactor analysis. As part of the development of the SHARP tools, the different versions of the DeCART code created for PWR, BWR, and VHTR analysis were integrated. Verification and validation tests for the integrated version were started, and the generation of cross section libraries based on the subgroup method was revisited for the targeted reactor types. The DeCART code has been reorganized in preparation for an efficient integration of the different versions for PWR, BWR, and VHTR analysis. In DeCART, the old-fashioned common blocks and header files have been replaced by advanced memory structures. However, the changing of variable names was minimized in order to limit problems with the code integration. Since the remaining stability problems of DeCART were mostly caused by the CMFD methodology and modules, significant work was performed to determine whether they could be replaced by more stable methods and routines. The cross section library is a key element in obtaining accurate solutions. Thus, the procedure for generating cross section libraries was revisited to provide libraries tailored for the targeted reactor types. To improve accuracy in the cross section library, an attempt was made to replace the CENTRM code by the MCNP Monte Carlo code as a tool for obtaining reference resonance integrals. The use of the Monte Carlo code allows us to minimize problems or approximations that CENTRM introduces, since the accuracy of the subgroup data is limited by that of the reference solutions.
The use of MCNP requires an additional set of libraries without resonance cross sections so that reference calculations can be performed for a unit cell in which only one isotope of interest, among the isotopes in the composition, includes resonance cross sections. The OECD MHTGR-350 benchmark core was simulated using DeCART as the initial focus of the verification/validation efforts. Among the benchmark problems, Exercise 1 of Phase 1 is a steady-state benchmark case for the neutronics calculation for which block-wise cross sections were provided in 26 energy groups. This type of problem was designed for a homogenized geometry solver like DIF3D rather than the high-fidelity code DeCART. Instead of the homogenized block cross sections given in the benchmark, the VHTR-specific 238-group ENDF/B-VII.0 library of DeCART was used directly for preliminary calculations. Initial results showed that the multiplication factors of a fuel pin and a fuel block with or without a control rod hole were off by 6, -362, and -183 pcm Δk from comparable MCNP solutions, respectively. The 2-D and 3-D one-third core calculations were also conducted for the all-rods-out (ARO) and all-rods-in (ARI) configurations, producing reasonable results. Figure 1 illustrates the intermediate (1.5 eV - 17 keV) and thermal (below 1.5 eV) group flux distributions. As seen in VHTR cores with annular fuels, the intermediate group fluxes are relatively high in the fuel region, but the thermal group fluxes are higher in the inner and outer graphite reflector regions than in the fuel region. To support the current project, a new three-year I-NERI collaboration involving ANL and KAERI was started in November 2011, focused on performing in-depth verification and validation of high-fidelity multi-physics simulation codes for LWR and VHTR.
The work scope includes generating improved cross section libraries for the targeted reactor types, developing benchmark models for verification and validation of the neutronics code with or without thermo-fluid feedback, and performing detailed comparisons of predicted reactor parameters against both Monte Carlo solutions and experimental measurements. The following list summarizes the work conducted so far for the PROTEUS-Thermal tasks: (1) Unification of the different versions of DeCART was initiated, and at the same time code modernization was conducted to make code unification efficient; (2) Regeneration of cross section libraries was attempted for the targeted reactor types, and the procedure for generating cross section libraries was updated by replacing CENTRM with MCNP for reference resonance integrals; (3) The MHTGR-350 benchmark core was simulated using DeCART with the VHTR-specific 238-group ENDF/B-VII.0 library, and MCNP calculations were performed for comparison; and (4) Benchmark problems for PWR and BWR analysis were prepared for the DeCART verification/validation effort. In the coming months, the work listed above will be completed. Cross section libraries will be generated with optimized group structures for specific reactor types.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rouxelin, Pascal Nicolas; Strydom, Gerhard
Best-estimate plus uncertainty analysis of reactors is replacing the traditional conservative (stacked uncertainty) method for safety and licensing analysis. To facilitate uncertainty analysis applications, a comprehensive approach and methodology must be developed and applied. High temperature gas cooled reactors (HTGRs) have several features that require techniques not used in light-water reactor analysis (e.g., coated-particle design and large graphite quantities at high temperatures). The International Atomic Energy Agency has therefore launched the Coordinated Research Project on HTGR Uncertainty Analysis in Modeling to study uncertainty propagation in the HTGR analysis chain. The benchmark problem defined for the prismatic design is represented by the General Atomics Modular HTGR 350. The main focus of this report is the compilation and discussion of the results obtained for various permutations of Exercise I 2c and the use of the cross section data in Exercise II 1a of the prismatic benchmark, which are defined as the last and first steps of the lattice and core simulation phases, respectively. The report summarizes the Idaho National Laboratory (INL) best-estimate results obtained for Exercise I 2a (fresh single-fuel block), Exercise I 2b (depleted single-fuel block), and Exercise I 2c (super cell), in addition to the first results of an investigation into the cross section generation effects for the super-cell problem. The two-dimensional deterministic code known as the New ESC-based Weighting Transport (NEWT), included in the Standardized Computer Analyses for Licensing Evaluation (SCALE) 6.1.2 package, was used for the cross section evaluation, and the results obtained were compared to the three-dimensional stochastic SCALE module KENO VI.
The NEWT cross section libraries were generated for several permutations of the current benchmark super-cell geometry and were then provided as input to the Phase II core calculation of the stand-alone neutronics Exercise II 1a. The steady-state core calculations were simulated with the INL coupled-code system known as the Parallel and Highly Innovative Simulation for INL Code System (PHISICS) and the system thermal-hydraulics code known as the Reactor Excursion and Leak Analysis Program (RELAP) 5-3D, using the nuclear data libraries previously generated with NEWT. It was observed that significant differences in terms of multiplication factor and neutron flux exist between the various permutations of the Phase I super-cell lattice calculations. The use of these cross section libraries leads only to minor changes in the Phase II core simulation results for fresh fuel but shows significantly larger discrepancies for spent fuel cores. Furthermore, large incongruities were found between the SCALE NEWT and KENO VI results for the super cells, and while some trends could be identified, a final conclusion on this issue could not yet be reached. This report will be revised in mid 2016 with more detailed analyses of the super-cell problems and their effects on the core models, using the latest version of SCALE (6.2). The super-cell models seem to show substantial improvements in terms of neutron flux as compared to single-block models, particularly at thermal energies.
Impact of Stellar Convection Criteria on the Nucleosynthetic Yields of Population III Supernovae.
NASA Astrophysics Data System (ADS)
Teffs, Jacob; Young, Tim; Lawlor, Tim
2018-01-01
A grid of 15-80 solar mass Z=0 stellar models is evolved to pre-core collapse using the stellar evolution code BRAHAMA. Each zero-age main-sequence model is evolved with two different convection criteria, Ledoux and Schwarzschild. The choice of convection criterion produces significant changes in the evolutionary tracks on the HR diagram, the mass loss, and the interior core and envelope structures. At the onset of core collapse, a supernova explosion is initiated using a one-dimensional radiation-hydrodynamics code and followed for 400 days. The explosion energy is varied between 1 and 10 foes depending on the model, as there are no observationally determined energies for Population III supernovae. Due to structural differences, the Schwarzschild models resemble Type II-P SNe in their light curves, while the Ledoux models resemble SN 1987A, a Type IIpec. The nucleosynthesis is calculated with TORCH, a 3,208-isotope network, in a post-processing step using the hydrodynamic history. The Ledoux models have, on average, higher yields for elements above Fe than the Schwarzschild models. Using a Salpeter IMF and other recently published Population III IMFs, the net integrated yields per solar mass are calculated and compared to published theoretical results and to published observations of extremely metal-poor halo stars with [Fe/H] < -3. Preliminary results show that the lower-mass models of both criteria follow trends similar to those of the extremely metal-poor halo stars, but more work and analysis are required.
Palkowski, Marek; Bielecki, Wlodzimierz
2017-06-02
RNA secondary structure prediction is a compute-intensive task that lies at the core of several search algorithms in bioinformatics. Fortunately, RNA folding approaches such as Nussinov base-pair maximization involve mathematical operations over affine control loops whose iteration space can be represented by the polyhedral model. Polyhedral compilation techniques have proven to be a powerful tool for the optimization of dense array codes. However, the classical affine loop nest transformations used with these techniques do not effectively optimize dynamic programming codes for RNA structure prediction. The purpose of this paper is to present a novel approach allowing for the generation of a parallel tiled Nussinov RNA loop nest with significantly higher performance than that of known related codes. This effect is achieved by improving code locality and parallelizing the calculation. To improve code locality, we apply our previously published technique of automatic loop nest tiling to all three loops of the Nussinov loop nest. This approach first forms original rectangular 3D tiles and then corrects them to establish their validity by applying the transitive closure of a dependence graph. To produce parallel code, we apply the loop skewing technique to the tiled Nussinov loop nest. The technique is implemented as part of the publicly available polyhedral source-to-source TRACO compiler. The generated code was run on modern Intel multi-core processors and coprocessors. We present the speed-up factors of the generated parallel Nussinov RNA code and demonstrate that it is considerably faster than related codes in which only the two outer loops of the Nussinov loop nest are tiled.
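For reference, the triply nested Nussinov loop nest targeted by the tiling can be sketched as below. This is a generic illustration of the base-pair maximization recurrence, not code from the paper or the TRACO-generated kernels:

```python
# Generic Nussinov base-pair maximization (illustrative sketch only).
# N[i][j] = max number of non-crossing base pairs in subsequence seq[i..j].
def nussinov(seq):
    n = len(seq)
    pairs = {("A", "U"), ("U", "A"), ("G", "C"),
             ("C", "G"), ("G", "U"), ("U", "G")}
    N = [[0] * n for _ in range(n)]
    for i in range(n - 2, -1, -1):          # outer loop
        for j in range(i + 1, n):           # middle loop
            best = max(N[i + 1][j], N[i][j - 1])
            if (seq[i], seq[j]) in pairs:   # pair the ends i and j
                best = max(best, N[i + 1][j - 1] + 1)
            for k in range(i + 1, j):       # inner bifurcation loop
                best = max(best, N[i][k] + N[k + 1][j])
            N[i][j] = best
    return N[0][n - 1]
```

All three loops have affine bounds in i, j, and k, which is what makes the iteration space amenable to the polyhedral model; the inner bifurcation loop carries the non-uniform dependences that classical affine transformations handle poorly.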
Parallel Task Management Library for MARTe
NASA Astrophysics Data System (ADS)
Valcarcel, Daniel F.; Alves, Diogo; Neto, Andre; Reux, Cedric; Carvalho, Bernardo B.; Felton, Robert; Lomas, Peter J.; Sousa, Jorge; Zabeo, Luca
2014-06-01
The Multithreaded Application Real-Time executor (MARTe) is a real-time framework with increasing popularity and support in the thermonuclear fusion community. It allows modular code to run in a multi-threaded environment, leveraging current multi-core processor (CPU) technology. One application that relies on the MARTe framework is the Joint European Torus (JET) tokamak WAll Load Limiter System (WALLS). It calculates and monitors the temperature of metal tiles and plasma-facing components (PFCs) that can melt or flake if their temperature gets too high when exposed to power loads. One of the main time-consuming tasks in WALLS is the real-time calculation of thermal diffusion models. These models tend to be described by very large state-space models, making them ideal candidates for parallelisation. MARTe's traditional approach to task parallelisation is to split the problem into several Real-Time Threads, each responsible for a self-contained sequential execution of an input-to-output chain. This is usually possible, but it might not always be practical for algorithmic or technical reasons. Also, it might not scale easily with an increase in the number of available CPU cores. The WorkLibrary introduces a “GPU-like” approach of splitting work among the available cores of modern CPUs that is (i) straightforward to use in an application, (ii) scalable with the availability of cores, and (iii) achieves all of this without rewriting or recompiling the source code. The first part of this article explains the motivation behind the library, its architecture, and its implementation. The second part presents a real application for WALLS: a parallel version of a large state-space model describing the 2D thermal diffusion in a JET tile.
NASA Astrophysics Data System (ADS)
Winteler, Christian
2014-02-01
In this dissertation we present the main features of a new nuclear reaction network evolution code, which allows nucleosynthesis calculations for large numbers of nuclides. The main results in this dissertation are all obtained with this new code. The strength of standard big bang nucleosynthesis is that all primordial abundances are determined by only one free parameter, the baryon-to-photon ratio η. We perform self-consistent nucleosynthesis calculations for the latest WMAP value η = (6.16±0.15)×10^-10 and predict the primordial light-element abundances D/H = (2.84 ± 0.23)×10^-5, 3He/H = (1.07 ± 0.09)×10^-5, Yp = 0.2490±0.0005, and 7Li/H = (4.57 ± 0.55)×10^-10, in agreement with current observations and other predictions. We investigate the influence of the main production rate on the 6Li abundance, but find no significant increase of the predicted value, which is known to be orders of magnitude lower than the observed one. The r-process is responsible for the formation of about half of the elements heavier than iron in our solar system. This neutron capture process requires explosive environments with large neutron densities. The exact astrophysical site where the r-process occurs has not yet been identified. We explore jets from magnetorotational core-collapse supernovae (MHD jets) as a possible r-process site. In a parametric study, assuming adiabatic expansion, we find good agreement with solar system abundances for a superposition of components with different electron fractions (Ye), ranging from Ye = 0.1 to Ye = 0.3. Fission is found to be important only for Ye ≤ 0.17. The first postprocessing calculations with data from 3D MHD core-collapse supernova simulations are performed for two different simulations. The calculations are based on two different methods of extracting data from the simulations: tracer particles and a two-dimensional, mass-weighted histogram. Both methods yield almost identical results.
We find that both simulations can reproduce the global solar r-process abundance pattern. The ejected mass is found to be in agreement with galactic chemical evolution for a rare event rate of one MHD jet per hundred to thousand supernovae.
Design study of long-life PWR using thorium cycle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Subkhi, Moh. Nurul; Su'ud, Zaki; Waris, Abdul
2012-06-06
A design study of a long-life Pressurized Water Reactor (PWR) using the thorium cycle has been performed. The thorium cycle in general has a higher conversion ratio in the thermal spectrum domain than the uranium cycle. Cell, burnup, and multigroup diffusion calculations were performed with the PIJ-CITATION-SRAC code using libraries based on JENDL 3.2. The neutronic analysis of the infinite cell calculations shows that {sup 231}Pa performs better than {sup 237}Np as a burnable poison in a thorium fuel system. A thorium oxide system with 8% {sup 233}U enrichment and 7.6{approx}8% {sup 231}Pa is the most suitable fuel for a small long-life PWR core because it gives a reactivity swing of less than 1%{Delta}k/k and a longer burnup period (more than 20 years). Using this result, a small long-life PWR core can be designed for long-term operation with excess reactivity reduced to as low as 0.53%{Delta}k/k and reduced power peaking during its operation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bolstad, J.W.; Haarman, R.A.
Two transients involving the loss of a steam generator in a single-pass steam generator pressurized water reactor have been analyzed using a state-of-the-art thermal-hydraulic computer code. Computed results include the formation of a steam bubble in the core while the pressurizer is solid. Calculations show that continued injection of high-pressure water would have stopped the scenario. These events are similar to those that occurred at Three Mile Island.
Numerical Simulation of the Emergency Condenser of the SWR-1000
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krepper, Eckhard; Schaffrath, Andreas; Aszodi, Attila
The SWR-1000 is a new, innovative boiling water reactor (BWR) concept developed by Siemens AG. This concept is characterized in particular by passive safety systems (e.g., four emergency condensers, four building condensers, eight passive pressure pulse transmitters, and six gravity-driven core-flooding lines). In the framework of the BWR Physics and Thermohydraulic Complementary Action to the European Union BWR Research and Development Cluster, emergency condenser tests were performed by Forschungszentrum Juelich at the NOKO test facility. Posttest calculations with ATHLET are presented, which aim to determine the removable power of the emergency condenser and its operation mode. The one-dimensional thermal-hydraulic code ATHLET was extended by the module KONWAR for the calculation of the heat transfer coefficient during condensation in horizontal tubes. In addition, results of conventional finite-difference calculations using the code CFX-4 are presented, which investigate the natural convection during the heatup process on the secondary side of the NOKO test facility.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hans D. Gougar
The Idaho National Laboratory's deterministic neutronics analysis codes and methods were applied to the computation of the core multiplication factor of the HTR-Proteus pebble bed reactor critical facility. A combination of unit cell calculations (COMBINE-PEBDAN), 1-D discrete ordinates transport (SCAMP), and nodal diffusion calculations (PEBBED) was employed to yield keff and flux profiles. Preliminary results indicate that these tools, as currently configured and used, do not yield satisfactory estimates of keff. If control rods are not modeled, these methods deliver much better agreement with experimental core eigenvalues, which suggests that development efforts should focus on modeling control rod and other absorber regions. Under some assumptions and in 1-D subcore analyses, diffusion theory agrees well with transport. This suggests that developments in specific areas can produce a viable core simulation approach. Some corrections have been identified and can be further developed, specifically: treatment of the upper void region, treatment of inter-pebble streaming, and explicit (multiscale) transport modeling of TRISO fuel particles as a first step in cross-section generation. Until corrections are made that yield better agreement with experiment, conclusions from core design and burnup analyses should be regarded as qualitative and not of benchmark quality.
The effects of temperatures on the pebble flow in a pebble bed high temperature reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sen, R. S.; Cogliati, J. J.; Gougar, H. D.
2012-07-01
The core of a pebble bed high temperature reactor (PBHTR) moves during operation, a feature which leads to better fuel economy (online refueling with no burnable poisons) and lower fuel stress. The pebbles are loaded at the top and trickle to the bottom of the core, after which the burnup of each is measured. Pebbles that are not fully burned are recirculated through the core until the target burnup is achieved. The flow pattern of the pebbles through the core is important for core simulations because it couples the burnup distribution to the core temperature and power profiles, especially in cores with two or more radial burnup 'zones'. The pebble velocity profile is a strong function of the core geometry and of the friction between the pebbles and the surrounding structures (other pebbles or graphite reflector blocks). The friction coefficient for graphite in a helium environment is inversely related to the temperature. The Thorium High Temperature Reactor (THTR) operated in Germany between 1983 and 1989. It featured a two-zone core, an inner core (IC) and an outer core (OC), with different fuel mixtures loaded in each zone. The rate at which the IC was refueled relative to the OC in THTR was designed to be 0.56. During operation, however, this ratio was measured to be 0.76, suggesting that the pebbles in the inner core traveled faster than expected. It has been postulated that the positive feedback effect between inner core temperature, burnup, and pebble flow was underestimated in THTR. Because of the power shape, the center of the core in a typical cylindrical PBHTR operates at a higher temperature than the region next to the side reflector. The friction between pebbles in the IC is lower than that in the OC, perhaps causing a higher relative flow rate and lower average burnup, which in turn yield a higher local power density.
Furthermore, the pebbles in the center region have higher velocities than the pebbles next to the side reflector due to the interaction between the pebbles and the immobile graphite reflector, as well as the geometry of the discharge conus near the bottom of the core. In this paper, the coupling between the temperature profile and the pebble flow dynamics was analyzed using the PEBBED/THERMIX and PEBBLES codes to model the HTR-10 reactor in China. Two extreme and opposing velocity profiles are used as starting points for the iterations. The PEBBED/THERMIX code is used to calculate the burnup, power, and temperature profiles with one of the velocity profiles as input. The resulting temperature profile is then passed to the PEBBLES code to calculate the updated pebble velocity profile, taking the new temperature profile into account. If the aforementioned hypothesis is correct, the strong temperature effect on the friction coefficients would cause the two cases to converge to different final velocity and temperature profiles. The results of this analysis indicate that a single-zone pebble bed core is self-stabilizing in terms of the pebble velocity profile and that the effect of the temperature profile on the pebble flow is insignificant.
Dielectronic and Trielectronic Recombination Rate Coefficients of Be-like Ar14+
NASA Astrophysics Data System (ADS)
Huang, Z. K.; Wen, W. Q.; Xu, X.; Mahmood, S.; Wang, S. X.; Wang, H. B.; Dou, L. J.; Khan, N.; Badnell, N. R.; Preval, S. P.; Schippers, S.; Xu, T. H.; Yang, Y.; Yao, K.; Xu, W. Q.; Chuai, X. Y.; Zhu, X. L.; Zhao, D. M.; Mao, L. J.; Ma, X. M.; Li, J.; Mao, R. S.; Yuan, Y. J.; Wu, B.; Sheng, L. N.; Yang, J. C.; Xu, H. S.; Zhu, L. F.; Ma, X.
2018-03-01
Electron–ion recombination of Be-like 40Ar14+ has been measured by employing the electron–ion merged-beams method at the cooler storage ring CSRm. The measured absolute recombination rate coefficients for collision energies from 0 to 60 eV are presented, covering all dielectronic recombination (DR) resonances associated with 2s² → 2s2p core transitions. In addition, strong trielectronic recombination (TR) resonances associated with 2s² → 2p² core transitions were observed. Both DR and TR processes lead to series of peaks in the measured recombination spectrum, which have been identified by the Rydberg formula. Theoretical calculations of recombination rate coefficients were performed using the state-of-the-art multi-configuration Breit–Pauli atomic structure code AUTOSTRUCTURE for comparison with the experimental results. The plasma rate coefficients for DR+TR of Ar14+ were deduced from the measured electron–ion recombination rate coefficients in the temperature range from 10³ to 10⁷ K, and compared with calculated data from the literature. The experimentally derived plasma rate coefficients are 60% larger and 30% lower than the previously recommended atomic data for the temperature ranges of photoionized plasmas and collisionally ionized plasmas, respectively. However, good agreement was found between the experimental results and the calculations by Gu and by Colgan et al. The plasma rate coefficients deduced from experiment and those calculated by the current AUTOSTRUCTURE code agree to better than 30% from 10⁴ to 10⁷ K. The present results constitute a set of benchmark data for use in astrophysical modeling.
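The Rydberg-formula identification mentioned above can be illustrated with a short sketch. Resonance positions of a DR series converging to a core-excitation limit E_lim follow E_n ≈ E_lim − q²·Ry/n², with q the charge of the recombining ion; the series limit, charge, and n range below are placeholder values, not data from the measurement:

```python
# Illustrative sketch of DR Rydberg-series resonance positions,
# E_n = E_lim - q^2 * Ry / n^2 (all inputs here are placeholders).
RYDBERG_EV = 13.605693  # Rydberg energy in eV

def dr_resonance_energies(series_limit_ev, q, n_min, n_max):
    """Resonance energies (eV) for principal quantum numbers n_min..n_max."""
    return [series_limit_ev - q**2 * RYDBERG_EV / n**2
            for n in range(n_min, n_max + 1)]
```

Peaks identified this way march monotonically toward the series limit with increasing n, which is the signature used to assign a measured peak sequence to a particular core transition.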
NASA Astrophysics Data System (ADS)
Mielikainen, Jarno; Huang, Bormin; Huang, Allen
2015-10-01
The Thompson cloud microphysics scheme is a sophisticated cloud microphysics scheme in the Weather Research and Forecasting (WRF) model. The scheme is well suited to massively parallel computation, as there are no interactions among horizontal grid points. Compared to earlier microphysics schemes, the Thompson scheme incorporates a large number of improvements; we have therefore optimized the speed of this important part of WRF. Intel Many Integrated Core (MIC) architecture ushers in a new era of supercomputing speed, performance, and compatibility. It allows developers to run code at trillions of calculations per second using a familiar programming model. In this paper, we present our results of optimizing the Thompson microphysics scheme on Intel MIC hardware. The Intel Xeon Phi coprocessor is the first product based on the Intel MIC architecture, and it consists of up to 61 cores connected by a high-performance on-die bidirectional interconnect. The coprocessor supports all important Intel development tools, so the development environment is familiar to a vast number of CPU developers. However, getting maximum performance out of MICs requires some novel optimization techniques. New optimizations for an updated Thompson scheme are discussed in this paper. The optimizations improved the performance of the original Thompson code on the Xeon Phi 7120P by a factor of 1.8x. Furthermore, the same optimizations improved the performance of the Thompson scheme on a dual-socket configuration of eight-core Intel Xeon E5-2670 CPUs by a factor of 1.8x compared to the original Thompson code.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holzgrewe, F.; Hegedues, F.; Paratte, J.M.
1995-03-01
The light water reactor code BOXER was used to determine the fast azimuthal neutron fluence distribution at the inner surface of the reactor pressure vessel after the tenth cycle of a pressurized water reactor (PWR). Using a cross-section library in 45 groups, fixed-source calculations in transport theory and x-y geometry were carried out to determine the fast azimuthal neutron flux distribution at the inner surface of the pressure vessel for four different cycles. From these results, the fast azimuthal neutron fluence after the tenth cycle was estimated and compared with the results obtained from scraping-test experiments. In these experiments, small samples of material were taken from the inner surface of the pressure vessel, and the fast neutron fluence was then determined from the measured activity of the samples. The BOXER and scraping-test results show maximal differences of 15%, which is very good considering the factor of 10{sup 3} neutron attenuation between the reactor core and the pressure vessel. To compare the BOXER results with an independent code, the 21st cycle of the PWR was also calculated with the TWODANT two-dimensional transport code, using the same group structure and cross-section library. Deviations in the fast azimuthal flux distribution were found to be <3%, which verifies the accuracy of the BOXER results.
Decay-ratio calculation in the frequency domain with the LAPUR code using 1D-kinetics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Munoz-Cobo, J. L.; Escriva, A.; Garcia, C.
This paper deals with the problem of computing the decay ratio (DR) in frequency-domain codes such as the LAPUR code. First, we explain how to calculate the feedback reactivity in the frequency domain using slab geometry, i.e., 1D kinetics, and we show how to couple the 1D kinetics with the thermal-hydraulic part of the LAPUR code in order to obtain the reactivity feedback coefficients for the different channels. In addition, we show how to obtain the reactivity variation in the complex domain by solving the eigenvalue equation in the frequency domain, and we compare this result with the reactivity variation obtained in first-order perturbation theory using the 1D neutron fluxes of the base case. Because LAPUR works in the linear regime, it is assumed that the perturbations are in general small. A section is also devoted to the reactivity weighting factors used to couple the reactivity contributions from the different channels to the reactivity of the entire reactor core in point kinetics and 1D kinetics. Finally, we analyze the effects of the different approaches on the DR value.
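As a minimal reminder of the quantity being computed (not LAPUR code): in frequency-domain stability analysis, the decay ratio follows from the dominant complex pole pair s = σ ± iω of the closed-loop transfer function, as the ratio of successive oscillation peaks, DR = exp(2πσ/ω):

```python
# Decay ratio from the dominant oscillation pole pair s = sigma +/- i*omega.
# A stable oscillation (sigma < 0) gives DR < 1; DR -> 1 marks the
# stability boundary. Standard definition, not LAPUR-specific code.
import math

def decay_ratio(sigma, omega):
    """Ratio of consecutive oscillation maxima, exp(2*pi*sigma/omega)."""
    return math.exp(2.0 * math.pi * sigma / omega)
```

For example, a pole pair with σ = −0.5 s⁻¹ and ω = 2π rad/s (a 1 Hz oscillation) gives DR = e^(−0.5) ≈ 0.61, i.e., each peak is about 61% of the previous one.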
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Peiyuan; Brown, Timothy; Fullmer, William D.
Five benchmark problems are developed and simulated with the computational fluid dynamics and discrete element model code MFiX. The benchmark problems span dilute and dense regimes, consider statistically homogeneous and inhomogeneous (both clusters and bubbles) particle concentrations, and cover a range of particle and fluid dynamic computational loads. Several variations of the benchmark problems are also discussed to extend the computational phase space to cover granular (particles only), bidisperse, and heat transfer cases. A weak scaling analysis is performed for each benchmark problem and, in most cases, the scalability of the code appears reasonable up to approximately 10{sup 3} cores. Profiling of the benchmark problems indicates that the most substantial computational time is spent on particle-particle force calculations, drag force calculations, and interpolation between discrete particle and continuum fields. Hardware performance analysis was also carried out, showing significant Level 2 cache miss ratios and a rather low degree of vectorization. These results are intended to serve as a baseline for future developments to the code as well as a preliminary indicator of where to best focus performance optimizations.
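The weak-scaling analysis mentioned above reduces to a simple efficiency metric: with the work per core held fixed, ideal scaling keeps the wall-clock time constant, so efficiency at N cores is t(1)/t(N). A minimal sketch (the timing numbers in the usage note are illustrative, not MFiX results):

```python
# Weak-scaling efficiency: fixed work per core, so ideal runtime is flat.
# eff(N) = t(1 core) / t(N cores); eff = 1.0 is perfect scaling.
def weak_scaling_efficiency(times_by_cores):
    """Map {core_count: wall_time} -> {core_count: efficiency}."""
    t1 = times_by_cores[1]
    return {n: t1 / t for n, t in times_by_cores.items()}
```

For instance, runtimes of 10 s on 1 core, 12.5 s on 64 cores, and 20 s on 1024 cores would give efficiencies of 1.0, 0.8, and 0.5, the kind of falloff that motivates profiling the particle-particle force and interpolation kernels.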
Fukushima Daiichi Radionuclide Inventories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cardoni, Jeffrey N.; Jankovsky, Zachary Kyle
Radionuclide inventories are generated to permit detailed analyses of the Fukushima Daiichi meltdowns. This information is necessary for severe accident calculations, dose calculations, and source term and consequence analyses. Inventories are calculated using SCALE6 and compared to values predicted by international researchers supporting the OECD/NEA's Benchmark Study on the Accident at Fukushima Daiichi Nuclear Power Station (BSAF). Both sets of inventory information are acceptable for best-estimate analyses of the Fukushima reactors. Consistent nuclear information for severe accident codes, including radionuclide class masses and core decay powers, is also derived from the SCALE6 analyses. Key nuclide activity ratios are calculated as functions of burnup and nuclear data in order to explore their utility for nuclear forensics and to support future decommissioning efforts.
Validation of the new code package APOLLO2.8 for accurate PWR neutronics calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santamarina, A.; Bernard, D.; Blaise, P.
2013-07-01
This paper summarizes the qualification work performed to demonstrate the accuracy of the new APOLLO2.8/SHEM-MOC package, based on the JEFF3.1.1 nuclear data file, for the prediction of PWR neutronics parameters. This experimental validation is based on PWR mock-up critical experiments performed in the EOLE/MINERVE zero-power reactors and on post-irradiation examinations of spent fuel assemblies from the French PWRs. The calculation-experiment comparison for the main design parameters is presented: reactivity of UOX and MOX lattices, depletion calculation and fuel inventory, reactivity loss with burnup, pin-by-pin power maps, Doppler coefficient, moderator temperature coefficient, void coefficient, UO{sub 2}-Gd{sub 2}O{sub 3} poisoning worth, efficiency of Ag-In-Cd and B{sub 4}C control rods, and reflector saving for both the standard 2-cm baffle and the GEN3 advanced thick stainless steel reflector. From this qualification process, calculation biases and associated uncertainties are derived. The APOLLO2.8 code package is already implemented in the new AREVA calculation chain ARCADIA for core physics and is currently being implemented in the future neutronics package of the French utility Electricite de France.
Density Functional O(N) Calculations
NASA Astrophysics Data System (ADS)
Ordejón, Pablo
1998-03-01
We have developed a scheme for performing Density Functional Theory calculations with O(N) scaling (P. Ordejón, E. Artacho and J. M. Soler, Phys. Rev. B 53, 10441 (1996)). The method uses arbitrarily flexible and complete Atomic Orbital (AO) basis sets. This gives a wide range of choice, from extremely fast calculations with minimal basis sets to highly accurate calculations with complete sets. The size-efficiency of AO bases, together with the O(N) scaling of the algorithm, allows the application of the method to systems with many hundreds of atoms on single-processor workstations. I will present the SIESTA code (D. Sanchez-Portal, P. Ordejón, E. Artacho and J. M. Soler, Int. J. Quantum Chem. 65, 453 (1997)), in which the method is implemented, with several LDA, LSD, and GGA functionals available, and using norm-conserving, non-local pseudopotentials (in the Kleinman-Bylander form) to eliminate the core electrons. The calculation of static properties such as energies, forces, pressure, stress, and magnetic moments, as well as molecular dynamics (MD) simulation capabilities (including variable cell shape, constant-temperature, and constant-pressure MD), is fully implemented. I will also show examples of the accuracy of the method, and applications to large-scale materials and biomolecular systems.
Kinetic simulation of edge instability in fusion plasmas
NASA Astrophysics Data System (ADS)
Fulton, Daniel Patrick
In this work, gyrokinetic simulations of edge plasmas in both tokamaks and field-reversed configurations (FRCs) have been carried out using the Gyrokinetic Toroidal Code (GTC), and A New Code (ANC) has been formulated for cross-separatrix FRC simulation. In the tokamak edge, turbulent transport in the pedestal of an H-mode DIII-D plasma is studied via simulations of electrostatic driftwaves. Annulus geometry is used, and simulations focus on two radial locations corresponding to the pedestal top with a mild pressure gradient and the steep-gradient region. A reactive trapped electron instability with typical ballooning mode structure is excited at the pedestal top. At the steep gradient, the electrostatic instability exhibits unusual mode structure, peaking at poloidal angles θ = ±π/2. Simulations find that this unusual mode structure is due to the steep pressure gradients in the pedestal, not to the particular DIII-D magnetic geometry. Realistic DIII-D geometry has a stabilizing effect compared to a simple circular tokamak geometry. Driftwave instability in the FRC is studied for the first time using gyrokinetic simulation. GTC is upgraded to treat realistic equilibria calculated by an MHD equilibrium code. Electrostatic local simulations in the outer closed flux surfaces find that ion-scale modes are stable due to the large ion gyroradius and that electron drift-interchange modes are excited by the electron temperature gradient and bad magnetic curvature. In the scrape-off layer (SOL), ion-scale modes are excited by the density gradient and bad curvature. Collisions have weak effects on instabilities both in the core and in the SOL. Simulation results are consistent with density fluctuation measurements in the C-2 experiment using Doppler backscattering (DBS). The critical density gradients measured by DBS qualitatively agree with the linear instability thresholds calculated by GTC simulations. One outstanding critical issue in the FRC is the interplay between turbulence in the FRC core and SOL regions. While the magnetic flux coordinates used by GTC provide a number of computational advantages, they present unique challenges at the magnetic field separatrix. To address this limitation, a new code, capable of coupled core-SOL simulations, is formulated, implemented, and successfully verified.
Gravitational-Wave and Neutrino Signals from Core-Collapse Supernovae with QCD Phase Transition
NASA Astrophysics Data System (ADS)
Zha, Shuai; Leung, Shing Chi; Lin, Lap Ming; Chu, Ming-Chung
Core-collapse supernovae (CCSNe) mark the catastrophic death of massive stars. We simulate CCSNe with a hybrid equation of state (EOS) containing a QCD (quantum chromodynamics) phase transition. The hybrid EOS incorporates the pure hadronic HShen EOS and the MIT bag model with a Gibbs construction. Our two-dimensional hydrodynamics code includes a fifth-order WENO shock-capturing scheme and models neutrino transport with the isotropic diffusion source approximation (IDSA). As the proto-neutron star accretes matter and the core enters the mixed phase, a second collapse takes place due to the softening of the EOS. We calculate the gravitational-wave (GW) and neutrino signals for this kind of CCSN model. Future detection of these signals from CCSNe may help constrain this scenario and the hybrid EOS.
RAMONA-3B application to Browns Ferry ATWS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Slovik, G.C.; Neymotin, L.; Cazzoli, E.
1984-01-01
This paper discusses two preliminary MSIV closure ATWS calculations done using the RAMONA-3B code and the work being done to create the necessary cross-section sets for the Browns Ferry Unit 1 reactor. The RAMONA-3B code employs a three-dimensional neutron kinetics model coupled with one-dimensional, four-equation, nonhomogeneous, nonequilibrium thermal hydraulics. To be compatible with 3-D neutron kinetics, the code uses parallel coolant channels in the core. It also includes a boron transport model and all necessary BWR components, such as the jet pump, recirculation pump, steam separator, steamline with safety and relief valves, main steam isolation valve, turbine stop valve, and turbine bypass valve. A summary of the RAMONA-3B neutron kinetics and thermal hydraulics models is presented in the Appendix.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
The following are reported: theoretical calculations (configuration interaction, relativistic effective core potentials, polyatomics, CASSCF); proposed theoretical studies (clusters of Cu, Ag, Au, Ni, Pt, Pd, Rh, Ir, Os, Ru; transition metal cluster ions; transition metal carbide clusters; bimetallic mixed transition metal clusters); and reactivity studies on transition metal clusters (reactivity with H{sub 2}, C{sub 2}H{sub 4}, and hydrocarbons; NO and CO chemisorption on surfaces). The computer facilities and codes to be used are described. 192 refs, 13 figs.
Structure of neutron-rich nuclei around the N = 50 shell-gap closure
NASA Astrophysics Data System (ADS)
Faul, T.; Duchêne, G.; Thomas, J.-C.; Nowacki, F.; Huyse, M.; Van Duppen, P.
2010-04-01
The structure of neutron-rich nuclei in the vicinity of 78Ni has been investigated via the β-decay of 71,73,75Cu isotopes (ISOLDE, CERN). Experimental results have been compared with shell-model calculations performed with the ANTOINE code using a large (2p3/2 1f5/2 2p1/2 1g9/2) valence space and a 56Ni core.
Should One Use the Ray-by-Ray Approximation in Core-Collapse Supernova Simulations?
Skinner, M. Aaron; Burrows, Adam; Dolence, Joshua C.
2016-10-28
We perform the first self-consistent, time-dependent, multi-group calculations in two dimensions (2D) to address the consequences of using the ray-by-ray+ transport simplification in core-collapse supernova simulations. Such a dimensional reduction is employed by many researchers to facilitate their resource-intensive calculations. Our new code (Fornax) implements multi-D transport, and can, by zeroing out transverse flux terms, emulate the ray-by-ray+ scheme. Using the same microphysics, initial models, resolution, and code, we compare the results of simulating 12-, 15-, 20-, and 25-M⊙ progenitor models using these two transport methods. Our findings call into question the wisdom of the pervasive use of the ray-by-ray+ approach. Employing it leads to maximum post-bounce/pre-explosion shock radii that are almost universally larger by tens of kilometers than those derived using the more accurate scheme, typically leaving the post-bounce matter less bound and artificially more “explodable.” In fact, for our 25-M⊙ progenitor, the ray-by-ray+ model explodes, while the corresponding multi-D transport model does not. Therefore, in two dimensions, the combination of ray-by-ray+ with the axial sloshing hydrodynamics that is a feature of 2D supernova dynamics can result in quantitatively, and perhaps qualitatively, incorrect results.
Station blackout calculations for Browns Ferry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ott, L.J.; Weber, C.F.; Hyman, C.R.
1985-01-01
This paper presents the results of calculations performed with the ORNL SASA code suite for the Station Blackout Severe Accident Sequence at Browns Ferry. The accident is initiated by a loss of offsite power combined with failure of all onsite emergency diesel generators to start and load. The Station Blackout is assumed to persist beyond the point of battery exhaustion (at six hours); without DC power, cooling water can no longer be injected into the reactor vessel. Calculations are continued through the period of core degradation and melting, reactor vessel failure, and the subsequent containment failure. An estimate of the magnitude and timing of the concomitant fission product releases is also provided.
GPU accelerated implementation of NCI calculations using promolecular density.
Rubez, Gaëtan; Etancelin, Jean-Matthieu; Vigouroux, Xavier; Krajecki, Michael; Boisson, Jean-Charles; Hénon, Eric
2017-05-30
The NCI approach is a modern tool to reveal chemical noncovalent interactions. It is particularly attractive to describe ligand-protein binding. A custom implementation for NCI using promolecular density is presented. It is designed to leverage the computational power of NVIDIA graphics processing unit (GPU) accelerators through the CUDA programming model. The performance of three code versions is examined on a test set of 144 systems. NCI calculations are particularly well suited to the GPU architecture, which reduces drastically the computational time. On a single compute node, the dual-GPU version leads to a 39-fold improvement for the biggest instance compared to the optimal OpenMP parallel run (C code, icc compiler) with 16 CPU cores. Energy consumption measurements carried out on both CPU and GPU NCI tests show that the GPU approach provides substantial energy savings. © 2017 Wiley Periodicals, Inc.
GAMSOR: Gamma Source Preparation and DIF3D Flux Solution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, M. A.; Lee, C. H.; Hill, R. N.
2016-12-15
Nuclear reactors that rely upon the fission reaction have two modes of thermal energy deposition in the reactor system: neutron absorption and gamma absorption. The gamma rays are typically generated by neutron absorption reactions or during the fission process, which means the primary driver of energy production is of course the neutron interaction. In conventional reactor physics methods, the gamma heating component is ignored such that the gamma absorption is forced to occur at the gamma emission site. For experimental reactor systems like EBR-II and FFTF, the placement of structural pins and assemblies internal to the core leads to problems with power heating predictions because there is no fission power source internal to the assembly to dictate a spatial distribution of the power. As part of the EBR-II support work in the 1980s, the GAMSOR code was developed to assist analysts in calculating the gamma heating. The GAMSOR code is a modified version of DIF3D and actually functions within a sequence of DIF3D calculations. The gamma flux in a conventional fission reactor system does not perturb the neutron flux, and thus the gamma flux calculation can be cast as a fixed-source problem given a solution to the steady-state neutron flux equation. This leads to a sequence of DIF3D calculations, called the GAMSOR sequence, which involves solving the neutron flux, then the gamma flux, then combining the results to do a summary edit. In this manuscript, we go over the GAMSOR code and detail how it is put together and functions. We also discuss how to set up the GAMSOR sequence and the input for each DIF3D calculation in the sequence. With the GAMSOR capability, users can take any valid steady-state DIF3D calculation and compute the power distribution due to neutron and gamma heating. The MC2-3 code is the preferable companion code for generating neutron and gamma cross-section data, but GAMSOR can accept cross-section data from other sources. To further this aspect, an additional utility code was created which demonstrates how to merge the neutron and gamma cross-section data together to carry out a simultaneous solve of the two systems.
Wong, Alex W K; Lau, Stephen C L; Fong, Mandy W M; Cella, David; Lai, Jin-Shei; Heinemann, Allen W
2018-04-03
To determine the extent to which the content of the Quality of Life in Neurological Disorders (Neuro-QoL) covers the International Classification of Functioning, Disability and Health (ICF) Core Sets for multiple sclerosis (MS), stroke, spinal cord injury (SCI), and traumatic brain injury (TBI) using summary linkage indicators. Content analysis by linking content of the Neuro-QoL to corresponding ICF codes of each Core Set for MS, stroke, SCI, and TBI. Three academic centers. None. None. Four summary linkage indicators proposed by MacDermid et al were estimated to compare the content coverage between the Neuro-QoL and the ICF codes of Core Sets for MS, stroke, SCI, and TBI. Neuro-QoL represented 20% to 30% of Core Set codes for the different conditions, in which more codes in Core Sets for MS (29%), stroke (28%), and TBI (28%) were covered than those for SCI in the long-term (20%) and early postacute (19%) contexts. Neuro-QoL represented nearly half of the unique Activity and Participation codes (43%-49%) and less than one third of the unique Body Function codes (12%-32%). It represented fewer Environmental Factors codes (2%-6%) and no Body Structures codes. Absolute linkage indicators found that at least 60% of Neuro-QoL items were linked to Core Set codes (63%-95%), but many items covered the same codes as revealed by unique linkage indicators (7%-13%), suggesting high concept redundancy among items. The Neuro-QoL links more closely to ICF Core Sets for stroke, MS, and TBI than to those for SCI, and primarily covers activity and participation ICF domains. Other instruments are needed to address concepts not measured by the Neuro-QoL when a comprehensive health assessment is needed. Copyright © 2018 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
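The summary linkage indicators described above reduce to simple set arithmetic over ICF codes. A minimal Python sketch, with hypothetical item-to-code links (the code names and links below are illustrative, not taken from the study):

```python
# Hypothetical instrument items linked to ICF codes (illustrative only).
core_set = {"b126", "b130", "b152", "d450", "d550", "e310"}
item_links = {
    "item1": {"b152"},          # linked to one Core Set code
    "item2": {"d450", "b152"},  # linked to two Core Set codes
    "item3": set(),             # not linkable to any ICF code
}

# Unique coverage: fraction of Core Set codes represented by at least one item.
covered = set().union(*item_links.values()) & core_set
coverage = len(covered) / len(core_set)

# Absolute linkage: fraction of items linked to at least one Core Set code.
linked = sum(1 for codes in item_links.values() if codes & core_set) / len(item_links)

print(round(coverage, 3), round(linked, 3))
```

High absolute linkage combined with low unique coverage is exactly the "concept redundancy" pattern the abstract reports: many items mapping onto the same few codes.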
Parallel computation of multigroup reactivity coefficient using iterative method
NASA Astrophysics Data System (ADS)
Susmikanti, Mike; Dewayatna, Winter
2013-09-01
One of the research activities supporting the commercial radioisotope production program is a safety study of the irradiation of Fission Product Molybdenum (FPM) targets. FPM targets are stainless steel tubes onto which layers of high-enriched uranium are deposited; the tubes are irradiated to obtain fission products, which are widely used in kit form in nuclear medicine. Irradiating FPM tubes in the reactor core, however, perturbs core performance, in particular through changes in flux and reactivity. A method is therefore needed for assessing safety under the configuration changes that occur over the life of the reactor, and making the calculation fast becomes essential. An advantage of the perturbation method is that the neutron safety margin of the research reactor can be re-evaluated without repeating the full reactivity calculation. The criticality and flux in a multigroup diffusion model were calculated at various irradiation positions for several uranium loadings. This model is computationally demanding, and several parallel iterative algorithms have been developed for the resulting large sparse-matrix systems. The red-black Gauss-Seidel iteration and a parallel power iteration can be used to solve the multigroup diffusion equations and to compute the criticality and reactivity coefficients. In this work, a reactivity-calculation code for safety analysis with parallel processing was developed; the calculation can be performed more quickly and efficiently by exploiting the parallelism of a multicore computer. The code was applied to safety-limit calculations for irradiated FPM targets with increasing uranium content.
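The solution scheme described (red-black Gauss-Seidel inner sweeps inside an outer power iteration for the criticality eigenvalue) can be sketched for a one-group, one-dimensional diffusion slab. This is a minimal illustration, assuming illustrative cross sections and grid, not the FPM target data from the paper:

```python
import numpy as np

# One-group, 1-D slab diffusion eigenvalue problem: -D phi'' + Sa phi = (1/k) nuSf phi,
# zero-flux boundaries. Outer power iteration; red-black Gauss-Seidel inner sweeps.
D, Sa, nuSf = 1.0, 0.10, 0.11      # diffusion coeff., absorption, production (made up)
n, h = 101, 1.0
diag = 2.0 * D / h**2 + Sa
interior = np.arange(1, n - 1)

phi = np.zeros(n)
phi[interior] = 1.0                # flat initial guess, zero at boundaries
k = 1.0
for outer in range(300):
    src = nuSf * phi / k           # fission source frozen during inner sweeps
    psi = phi.copy()
    for sweep in range(50):
        for color in (0, 1):       # red/black points depend only on the other color,
            idx = interior[interior % 2 == color]  # so each half-sweep vectorizes/parallelizes
            psi[idx] = (D / h**2 * (psi[idx - 1] + psi[idx + 1]) + src[idx]) / diag
    k *= np.sum(nuSf * psi) / np.sum(nuSf * phi)   # eigenvalue (criticality) update
    if np.max(np.abs(psi - phi)) < 1e-10:
        break
    phi = psi
# k should approach the finite-difference fundamental eigenvalue (about 1.089 here)
```

The red-black coloring is what makes the Gauss-Seidel sweep parallelizable on a multicore machine: all points of one color update simultaneously using only the other color's values.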
Signature of chaos in the 4 f -core-excited states for highly-charged tungsten ions
NASA Astrophysics Data System (ADS)
Safronova, Ulyana; Safronova, Alla
2014-05-01
We evaluate radiative and autoionizing transition rates in highly charged W ions in search of the signature of chaos. In particular, previously published results for Ag-like W27+, Tm-like W5+, and Yb-like W4+ ions, as well as newly obtained results for I-like W21+, Xe-like W20+, Cs-like W19+, and La-like W17+ ions (with ground configuration [Kr] 4d10 4fk with k = 7, 8, 9, and 11, respectively), are considered; these were calculated using the multiconfiguration relativistic Hebrew University Lawrence Livermore Atomic Code (HULLAC) and the Hartree-Fock-Relativistic method (COWAN code). The main emphasis was on verification of Gaussian statistics of rates as a function of transition energy. There was no evidence of such statistics for the above-mentioned previously published results, nor for the transitions between the excited and autoionizing states in the newly calculated results. However, we did find the Gaussian profile for transitions between excited states, such as the [Kr] 4d10 4fk - [Kr] 4d10 4fk-1 5d transitions, for the newly calculated W ions. This work is supported in part by DOE under NNSA Cooperative Agreement DE-NA0001984.
Dosimetric parameters of three new solid core I‐125 brachytherapy sources
Solberg, Timothy D.; DeMarco, John J.; Hugo, Geoffrey; Wallace, Robert E.
2002-01-01
Monte Carlo calculations and TLD measurements have been performed for the purpose of characterizing dosimetric properties of new commercially available brachytherapy sources. All sources tested consisted of a solid core, upon which a thin layer of 125I has been adsorbed, encased within a titanium housing. The PharmaSeed BT‐125 source manufactured by Syncor is available in silver or palladium core configurations while the ADVANTAGE source from IsoAid has silver only. Dosimetric properties, including the dose rate constant, radial dose function, and anisotropy characteristics were determined according to the TG‐43 protocol. Additionally, the geometry function was calculated exactly using Monte Carlo and compared with both the point and line source approximations. The 1999 NIST standard was followed in determining air kerma strength. Dose rate constants were calculated to be 0.955±0.005, 0.967±0.005, and 0.962±0.005 cGy h−1 U−1 for the PharmaSeed BT‐125‐1, BT‐125‐2, and ADVANTAGE sources, respectively. TLD measurements were in excellent agreement with Monte Carlo calculations. Radial dose function, g(r), calculated to a distance of 10 cm, and anisotropy function F(r, θ), calculated for radii from 0.5 to 7.0 cm, were similar among all source configurations. Anisotropy constants, ϕ̄an, were calculated to be 0.941, 0.944, and 0.960 for the three sources, respectively. All dosimetric parameters were found to be in close agreement with previously published data for similar source configurations. The MCNP Monte Carlo code appears to be ideally suited to low energy dosimetry applications. PACS number(s): 87.53.–j PMID:11958652
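The point- and line-source geometry functions compared above have closed forms in the TG-43 formalism: G_P(r) = 1/r² and G_L(r, θ) = β/(L r sinθ), with β the angle subtended by the active length L at the calculation point. A sketch contrasting the two (the source length and distances here are illustrative, not the seeds' actual dimensions):

```python
import math

def g_point(r):
    # point-source geometry function, inverse square
    return 1.0 / (r * r)

def g_line(r, theta, L=0.3):
    # TG-43 line-source geometry function G_L(r, theta) = beta / (L r sin(theta));
    # beta is the angle subtended by the active length L at the point (r, theta).
    x = r * math.sin(theta)   # perpendicular distance to the source axis
    z = r * math.cos(theta)   # coordinate along the source axis
    if x < 1e-9:              # on the long axis: limiting form 1/(r^2 - L^2/4)
        return 1.0 / (r * r - L * L / 4.0)
    beta = math.atan2(z + L / 2.0, x) - math.atan2(z - L / 2.0, x)
    return beta / (L * x)

# the two forms agree far from the source and diverge close in,
# which is why the choice matters most at small r
print(g_line(0.5, math.pi / 2.0), g_point(0.5))
```

At r = 0.5 cm and θ = 90° the line-source value is a few percent below 1/r², consistent with the abstract's emphasis on computing the geometry function exactly at short distances.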
Investigation on the Core Bypass Flow in a Very High Temperature Reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hassan, Yassin
2013-10-22
Uncertainties associated with the core bypass flow are some of the key issues that directly influence the coolant mass flow distribution and magnitude, and thus the operational core temperature profiles, in the very high-temperature reactor (VHTR). Designers will attempt to configure the core geometry so the core cooling flow rate magnitude and distribution conform to the design values. The objective of this project is to study the bypass flow both experimentally and computationally. Researchers will develop experimental data using state-of-the-art particle image velocimetry in a small test facility. The team will attempt to obtain full field temperature distribution using racks of thermocouples. The experimental data are intended to benchmark computational fluid dynamics (CFD) codes by providing detailed information. These experimental data are urgently needed for validation of the CFD codes. The following are the project tasks: • Construct a small-scale bench-top experiment to resemble the bypass flow between the graphite blocks, varying parameters to address their impact on bypass flow. Wall roughness of the graphite block walls, spacing between the blocks, and temperature of the blocks are some of the parameters to be tested. • Perform CFD to evaluate pre- and post-test calculations and turbulence models, including sensitivity studies to achieve high accuracy. • Develop state-of-the-art large eddy simulation (LES) using appropriate subgrid modeling. • Develop models to be used in systems thermal hydraulics codes to account for and estimate the bypass flows. These computer programs include, among others, RELAP3D, MELCOR, GAMMA, and GAS-NET. Actual core bypass flow rate may vary considerably from the design value. Although the uncertainty of the bypass flow rate is not known, some sources have stated that the bypass flow rates in the Fort St. Vrain reactor were between 8 and 25 percent of the total reactor mass flow rate. If bypass flow rates are on the high side, the quantity of cooling flow through the core may be considerably less than the nominal design value, causing some regions of the core to operate at temperatures in excess of the design values. These effects are postulated to lead to localized hot regions in the core that must be considered when evaluating the VHTR operational and accident scenarios.
Monte Carlo-based validation of neutronic methodology for EBR-II analyses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liaw, J.R.; Finck, P.J.
1993-01-01
The continuous-energy Monte Carlo code VIM (Ref. 1) has been validated extensively over the years against fast critical experiments and other neutronic analysis codes. A high degree of confidence in VIM for predicting reactor physics parameters has been firmly established. This paper presents a numerical validation of two conventional multigroup neutronic analysis codes, DIF3D (Ref. 4) and VARIANT (Ref. 5), against VIM for two Experimental Breeder Reactor II (EBR-II) core loadings in detailed three-dimensional hexagonal-z geometry. The DIF3D code is based on nodal diffusion theory, and it is used in calculations for day-to-day reactor operations, whereas the VARIANT code is based on nodal transport theory and is used with increasing frequency for specific applications. Both DIF3D and VARIANT rely on multigroup cross sections generated from ENDF/B-V by the ETOE-2/MC2-II/SDX (Ref. 6) code package. Hence, this study also validates the multigroup cross-section processing methodology against the continuous-energy approach used in VIM.
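The multigroup methodology being validated rests on flux-weighted collapse of continuous-energy cross sections, σ_g = ∫_g σ(E)φ(E)dE / ∫_g φ(E)dE. A toy Python sketch of the collapse, assuming an artificial 1/v-like cross section and a flat weighting flux (not ENDF/B-V data):

```python
import numpy as np

def trap(y, x):
    # simple trapezoid rule (kept explicit to avoid NumPy version differences)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

E = np.logspace(-3, 7, 2001)       # eV, hypothetical energy grid
sigma = E ** -0.5                  # toy 1/v-like cross section, arbitrary units
phi = np.ones_like(E)              # toy flat weighting flux
edges = [1e-3, 1.0, 1e7]           # crude two-group structure

sigma_g = []
for g in range(len(edges) - 1):
    m = (E >= edges[g]) & (E <= edges[g + 1])
    sigma_g.append(trap(sigma[m] * phi[m], E[m]) / trap(phi[m], E[m]))
# analytic check: the flat-flux average of E**-0.5 over [a, b] is 2(sqrt(b)-sqrt(a))/(b-a)
```

The choice of weighting flux φ(E) is exactly where the multigroup approximation can depart from a continuous-energy Monte Carlo reference, which motivates the VIM comparison in the abstract.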
Update and evaluation of decay data for spent nuclear fuel analyses
NASA Astrophysics Data System (ADS)
Simeonov, Teodosi; Wemple, Charles
2017-09-01
Studsvik's approach to spent nuclear fuel analyses combines isotopic concentrations and multi-group cross-sections, calculated by the CASMO5 or HELIOS2 lattice transport codes, with core irradiation history data from the SIMULATE5 reactor core simulator and tabulated isotopic decay data. These data sources are used and processed by the code SNF to predict spent nuclear fuel characteristics. Recent advances in the generation procedure for the SNF decay data are presented. The SNF decay data include basic data, such as decay constants, atomic masses, and nuclide transmutation chains; radiation emission spectra for photons from radioactive decay, alpha-n reactions, bremsstrahlung, and spontaneous fission, for electrons and alpha particles from radioactive decay, and for neutrons from radioactive decay, spontaneous fission, and alpha-n reactions; decay heat production; and electro-atomic interaction data for bremsstrahlung production. These data are compiled from fundamental (ENDF, ENSDF, TENDL) and processed (ESTAR) sources for nearly 3700 nuclides. A rigorous evaluation procedure of internal consistency checks, comparisons to measurements and benchmarks, and code-to-code verifications is performed at the individual isotope level, as well as with integral characteristics on the fuel assembly level (e.g., decay heat, radioactivity, neutron and gamma sources). Significant challenges are presented by the scope and complexity of the data processing, a dearth of relevant detailed measurements, and reliance on theoretical models for some data.
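The decay constants and transmutation chains in such a library ultimately feed solutions of the decay chain equations; for a two-member chain the classic Bateman solution is closed-form. A sketch (the decay constants below are arbitrary, not values from the SNF library):

```python
import math

def bateman2(n1_0, lam1, lam2, t):
    """Two-member chain 1 -> 2 -> (removed): number densities at time t,
    starting from n1_0 atoms of nuclide 1 and none of nuclide 2."""
    n1 = n1_0 * math.exp(-lam1 * t)
    n2 = n1_0 * lam1 / (lam2 - lam1) * (math.exp(-lam1 * t) - math.exp(-lam2 * t))
    return n1, n2

# activity and decay heat then follow directly: A_i = lam_i * N_i, P = sum(A_i * Q_i)
n1, n2 = bateman2(1.0, 0.5, 0.1, 3.0)
```

Setting lam2 = 0 (stable daughter) makes the pair conserve atoms exactly, a convenient internal consistency check of the kind the evaluation procedure in the abstract performs at the isotope level.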
An Approach in Radiation Therapy Treatment Planning: A Fast, GPU-Based Monte Carlo Method.
Karbalaee, Mojtaba; Shahbazi-Gahrouei, Daryoush; Tavakoli, Mohammad B
2017-01-01
An accurate and fast radiation dose calculation is essential for successful radiotherapy. The aim of this study was to implement a new graphics processing unit (GPU) based radiation therapy treatment planning system for accurate and fast dose calculation in radiotherapy centers. A program was written for parallel execution on the GPU. The code was validated against EGSnrc/DOSXYZnrc. Moreover, a semi-automatic, rotary, asymmetric phantom was designed and produced using bone, lung, and soft-tissue equivalent materials. All measurements were performed using a Mapcheck dosimeter. The accuracy of the code was validated using the experimental data obtained from the anthropomorphic phantom as the gold standard. The findings showed that, compared with DOSXYZnrc in the virtual phantom, most voxels (>95%) showed <3% dose difference or 3 mm distance-to-agreement (DTA). Moreover, for the anthropomorphic phantom, compared to the Mapcheck dose measurements, <5% dose difference or 5 mm DTA was observed. The fast calculation speed and high accuracy of the GPU-based Monte Carlo method in dose calculation may be useful in routine radiation therapy centers as the core component of a treatment planning verification system.
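The 3%/3 mm and 5%/5 mm acceptance figures quoted above correspond to a per-point dose-difference-or-DTA test. A simplified 1-D Python sketch follows; a real gamma-index implementation interpolates in 3-D, and the window logic and toy profile here are illustrative only:

```python
import numpy as np

def dd_dta_pass_rate(ref, ev, dx_mm, dd_frac=0.03, dta_mm=3.0):
    """Fraction of points where the evaluated dose either matches the local
    reference within dd_frac of the maximum, or matches some reference point
    within a dta_mm search radius (a crude 1-D dose-difference/DTA test)."""
    tol = dd_frac * ref.max()
    win = int(round(dta_mm / dx_mm))       # grid points inside the DTA radius
    n_pass = 0
    for i in range(len(ref)):
        lo, hi = max(0, i - win), min(len(ref), i + win + 1)
        if np.any(np.abs(ref[lo:hi] - ev[i]) <= tol):  # point i itself is in the window
            n_pass += 1
    return n_pass / len(ref)

profile = np.exp(-np.linspace(-30, 30, 61) ** 2 / 200.0)  # toy dose profile, dx = 1 mm
shifted = np.roll(profile, 2)                              # 2 mm spatial shift
print(dd_dta_pass_rate(profile, shifted, dx_mm=1.0))       # passes via the DTA branch
```

A small spatial shift passes entirely through the DTA branch even where the dose difference alone would fail, which is exactly why the combined criterion is used in steep-gradient regions.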
Benchmark of Atucha-2 PHWR RELAP5-3D control rod model by Monte Carlo MCNP5 core calculation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pecchia, M.; D'Auria, F.; Mazzantini, O.
2012-07-01
Atucha-2 is a Siemens-designed PHWR reactor under construction in the Republic of Argentina. Its geometrical complexity and peculiarities require the adoption of advanced Monte Carlo codes for performing realistic neutronic simulations. Therefore core models of the Atucha-2 PHWR were developed using MCNP5. In this work a methodology was set up to collect the flux in the hexagonal mesh by which the Atucha-2 core is represented. The scope of this activity is to evaluate the effect of an obliquely inserted control rod on the neutron flux in order to validate the RELAP5-3D{sup C}/NESTLE three-dimensional neutron kinetic coupled thermal-hydraulic model, applied by GRNSPG/UNIPI for performing selected transients of Chapter 15 FSAR of Atucha-2. (authors)
Nuclear design analysis of square-lattice honeycomb space nuclear rocket engine
NASA Astrophysics Data System (ADS)
Widargo, Reza; Anghaie, Samim
1999-01-01
The square-lattice honeycomb reactor is designed based on a cylindrical core that is determined to have critical diameter and length of 0.50 m and 0.50 m, respectively. A 0.10-cm thick radial graphite reflector, in addition to a 0.20-m thick axial graphite reflector, is used to reduce neutron leakage from the reactor. The core is fueled with a solid solution of 93% enriched (U, Zr, Nb)C, which is one of several ternary uranium carbides that are considered for this concept. The fuel is to be fabricated as 2 mm grooved (U, Zr, Nb)C wafers. The fuel wafers are used to form square-lattice honeycomb fuel assemblies, 0.10 m in length with 30% cross-sectional flow area. Five fuel assemblies are stacked up axially to form the reactor core. Based on the 30% void fraction, the width of the square flow channel is about 1.3 mm. The hydrogen propellant is passed through these flow channels and removes the heat from the reactor core. To perform nuclear design analysis, a series of neutron transport and diffusion codes are used. The preliminary results are obtained using a simple four-group cross-section model. To optimize the nuclear design, the fuel densities are varied for each assembly. Tantalum, hafnium and tungsten are considered and used as a replacement for niobium in the fuel material to provide water submersion sub-criticality for the reactor. Axial and radial neutron flux and power density distributions are calculated for the core. Results of the neutronic analysis indicate that the core has a relatively fast spectrum. From the results of the thermal hydraulic analyses, eight axial temperature zones are chosen for the calculation of group average cross-sections. An iterative process is conducted to couple the neutronic calculations with the thermal hydraulics calculations. Results of the nuclear design analysis indicate that a compact core can be designed based on ternary uranium carbide square-lattice honeycomb fuel.
This design provides a relatively high thrust to weight ratio.
SPOC Benchmark Case: SNRE Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vishal Patel; Michael Eades; Claude Russel Joyner II
The Small Nuclear Rocket Engine (SNRE) was modeled in the Center for Space Nuclear Research's (CSNR) Space Propulsion Optimization Code (SPOC). SPOC aims to create nuclear thermal propulsion (NTP) geometries quickly to perform parametric studies on design spaces of historic and new NTP designs. The SNRE geometry was modeled in SPOC and a critical core with a reasonable amount of criticality margin was found. The fuel, tie-tube, reflector, and control drum masses were predicted rather well. These are all very important for neutronics calculations, so the active reactor geometries created with SPOC can continue to be trusted. Thermal calculations of the average and hot fuel channels agreed very well. The specific impulse calculations used historically and in SPOC disagree, so mass flow rates and impulses differed. Modeling peripheral and power-balance components that do not affect the nuclear characteristics of the core is not a feature of SPOC and, as such, these components should continue to be designed using other tools. A full paper detailing the available SNRE data and comparisons with SPOC outputs will be submitted as a follow-up to this abstract.
SU-E-T-37: A GPU-Based Pencil Beam Algorithm for Dose Calculations in Proton Radiation Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kalantzis, G; Leventouri, T; Tachibana, H
Purpose: Recent developments in radiation therapy have been focused on applications of charged particles, especially protons. Over the years several dose calculation methods have been proposed in proton therapy. A common characteristic of all these methods is their extensive computational burden. In the current study we present for the first time, to our best knowledge, a GPU-based PBA for proton dose calculations in Matlab. Methods: In the current study we employed an analytical expression for the proton depth-dose distribution. The central-axis term is taken from the broad-beam central-axis depth dose in water modified by an inverse-square correction, while the distribution of the off-axis term was considered Gaussian. The serial code was implemented in MATLAB and was launched on a desktop with a quad-core Intel Xeon X5550 at 2.67 GHz with 8 GB of RAM. For the parallelization on the GPU, the parallel computing toolbox was employed and the code was launched on a GTX 770 with Kepler architecture. The performance comparison was established on the speedup factors. Results: The performance of the GPU code was evaluated for three different energies: low (50 MeV), medium (100 MeV) and high (150 MeV). Four square fields were selected for each energy, and the dose calculations were performed with both the serial and parallel codes for a homogeneous water phantom with size 300×300×300 mm³. The resolution of the PBs was set to 1.0 mm. The maximum speedup of ∼127 was achieved for the highest energy and the largest field size. Conclusion: A GPU-based PB algorithm for proton dose calculations in Matlab was presented. A maximum speedup of ∼127 was achieved. Future directions of the current work include extension of our method for dose calculation in heterogeneous phantoms.
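The dose model described (central-axis depth-dose term, inverse-square correction, Gaussian off-axis term) can be sketched in a few lines. The depth-dose shape below is an arbitrary Bragg-like stand-in, not the clinical fit used in the study, and all parameters are hypothetical:

```python
import numpy as np

def pencil_beam_dose(x_mm, z_mm, sigma=5.0, R=150.0, ssd=1000.0):
    """Pencil-beam dose sketch: D(x, z) = central-axis term * inverse-square
    correction * Gaussian lateral term (all shapes and parameters hypothetical)."""
    base = np.clip(R - z_mm, 0.0, None) + 1.0       # residual range, clipped at 0
    caxis = np.where(z_mm < R, base ** -0.4, 0.0)   # crude Bragg-like rise, zero past range
    inv_sq = (ssd / (ssd + z_mm)) ** 2              # inverse-square correction
    lateral = np.exp(-x_mm ** 2 / (2.0 * sigma ** 2))  # Gaussian off-axis term
    return caxis * inv_sq * lateral
```

Because each pencil beam evaluates the same separable expression independently at every voxel, the computation maps naturally onto GPU threads, which is the source of the large speedups reported.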
Development and validation of a low-frequency modeling code for high-moment transmitter rod antennas
NASA Astrophysics Data System (ADS)
Jordan, Jared Williams; Sternberg, Ben K.; Dvorak, Steven L.
2009-12-01
The goal of this research is to develop and validate a low-frequency modeling code for high-moment transmitter rod antennas to aid in the design of future low-frequency TX antennas with high magnetic moments. To accomplish this goal, a quasi-static modeling algorithm was developed to simulate finite-length, permeable-core, rod antennas. This quasi-static analysis is applicable for low frequencies where eddy currents are negligible, and it can handle solid or hollow cores with winding insulation thickness between the antenna's windings and its core. The theory was programmed in Matlab, and the modeling code has the ability to predict the TX antenna's gain, maximum magnetic moment, saturation current, series inductance, and core series loss resistance, provided the user enters the corresponding complex permeability for the desired core magnetic flux density. In order to utilize the linear modeling code to model the effects of nonlinear core materials, it is necessary to use the correct complex permeability for a specific core magnetic flux density. In order to test the modeling code, we demonstrated that it can accurately predict changes in the electrical parameters associated with variations in the rod length and the core thickness for antennas made out of low carbon steel wire. These tests demonstrate that the modeling code was successful in predicting the changes in the rod antenna characteristics under high-current nonlinear conditions due to changes in the physical dimensions of the rod provided that the flux density in the core was held constant in order to keep the complex permeability from changing.
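A key quantity behind the rod-antenna gain and inductance predictions discussed above is the apparent permeability of a finite core, which is limited by the core's demagnetizing factor. The sketch below uses the standard prolate-ellipsoid closed form as a stand-in; it is a textbook approximation, not the quasi-static algorithm actually implemented in the modeling code:

```python
import math

def demag_factor(m):
    """Longitudinal demagnetizing factor of a prolate ellipsoid,
    m = length/diameter aspect ratio of the rod (m > 1)."""
    s = math.sqrt(m * m - 1.0)
    return (m / s * math.log(m + s) - 1.0) / (m * m - 1.0)

def apparent_permeability(mu_r, m):
    # however large the material permeability mu_r, the rod's apparent
    # permeability saturates near 1/N_d: the gain is geometry-limited
    return mu_r / (1.0 + demag_factor(m) * (mu_r - 1.0))
```

For a 10:1 rod the demagnetizing factor is about 0.02, so even a material permeability of 10 000 yields an apparent permeability near 50; this is why lengthening the rod matters more than raising the core permeability when chasing a high magnetic moment.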
Spectroscopic properties of 130Sb, 132Te and 134I nuclei in 100-132Sn magic cores
NASA Astrophysics Data System (ADS)
Benrachi, Fatima; Khiter, Meriem; Laouet, Nadjet
2017-09-01
We have performed shell-model calculations by means of the Oxbash nuclear structure code using recent experimental single-particle (spe) and single-hole (she) energies, with valence-space models above the 100Sn and 132Sn doubly magic cores. The two-body matrix elements (TBME) of the original CD-Bonn realistic interaction are introduced after having been modified to take into account three-body forces. We have focused our study on the evaluation of spectroscopic properties of 130Sb, 132Te and 134I nuclei; in particular, their energy spectra, transition probabilities and moments have been determined. The resulting spectra are in reasonable agreement with the experimental data.
Advanced Fuels for LWRs: Fully-Ceramic Microencapsulated and Related Concepts FY 2012 Interim Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
R. Sonat Sen; Brian Boer; John D. Bess
2012-03-01
This report summarizes the progress in the Deep Burn project at Idaho National Laboratory during the first half of fiscal year 2012 (FY2012). The current focus of this work is on Fully-Ceramic Microencapsulated (FCM) fuel containing low-enriched uranium (LEU) uranium nitride (UN) fuel kernels. UO2 fuel kernels have not been ruled out, and will be examined as later work in FY2012. Reactor physics calculations confirmed that the FCM fuel containing 500 mm diameter kernels of UN fuel has positive MTC with a conventional fuel pellet radius of 4.1 mm. The methodology was put into place and validated against MCNP tomore » perform whole-core calculations using DONJON, which can interpolate cross sections from a library generated using DRAGON. Comparisons to MCNP were performed on the whole core to confirm the accuracy of the DRAGON/DONJON schemes. A thermal fluid coupling scheme was also developed and implemented with DONJON. This is currently able to iterate between diffusion calculations and thermal fluid calculations in order to update fuel temperatures and cross sections in whole-core calculations. Now that the DRAGON/DONJON calculation capability is in place and has been validated against MCNP results, and a thermal-hydraulic capability has been implemented in the DONJON methodology, the work will proceed to more realistic reactor calculations. MTC calculations at the lattice level without the correct burnable poison are inadequate to guarantee zero or negative values in a realistic mode of operation. Using the DONJON calculation methodology described in this report, a startup core with enrichment zoning and burnable poisons will be designed. Larger fuel pins will be evaluated for their ability to (1) alleviate the problem of positive MTC and (2) increase reactivity-limited burnup. Once the critical boron concentration of the startup core is determined, MTC will be calculated to verify a non-positive value. 
If the value is positive, the design will be changed to require less soluble boron by, for example, increasing the reactivity hold-down by burnable poisons. Then the whole-core analysis will be repeated until an acceptable design is found. Calculations of the departure from nucleate boiling ratio (DNBR) will be included in the safety evaluation as well. Once a startup core is shown to be viable, subsequent reloads will be simulated by shuffling fuel and introducing fresh fuel. The PASTA code has been updated with material properties of UN fuel from the literature and a model for the diffusion and release of volatile fission products from the SiC matrix material. Preliminary simulations have been performed for both normal conditions and elevated temperatures. These results indicate that the fuel performs well and that the SiC matrix provides good retention of the fission products. The path forward for fuel performance work includes improved modeling of metallic fission-product release from the kernel. Results should be considered preliminary, and further validation is required.
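The neutronics/thermal-fluid coupling described above is a fixed-point (Picard) iteration: the flux solution sets fuel temperatures, which feed back into the cross sections. A minimal sketch of that loop follows; all models and numbers here are toy stand-ins invented for illustration, not the DONJON implementation or its cross-section tables.

```python
# Toy Picard iteration between a "neutronics solve" and a "thermal update".
# The Doppler model, thermal resistance, and cross sections are all
# hypothetical placeholders chosen only to make the loop converge.

def doppler_sigma_a(T_fuel, sigma_ref=0.012, T_ref=900.0, alpha=2.0e-5):
    """Toy Doppler feedback: absorption grows with sqrt(T_fuel)."""
    return sigma_ref * (1.0 + alpha * (T_fuel**0.5 - T_ref**0.5))

def solve_power(sigma_a, nu_sigma_f=0.013):
    """Stand-in for a diffusion solve: returns a power scale ~ k-infinity."""
    return nu_sigma_f / sigma_a

def fuel_temperature(power, T_coolant=580.0, r_thermal=350.0):
    """Stand-in thermal model: fuel temperature rises linearly with power."""
    return T_coolant + r_thermal * power

def picard_iteration(tol=1e-8, max_iter=100):
    T = 900.0  # initial fuel temperature guess (K)
    for i in range(max_iter):
        k = solve_power(doppler_sigma_a(T))
        T_new = fuel_temperature(k)
        if abs(T_new - T) < tol:
            return k, T_new, i + 1
        T = T_new
    return k, T, max_iter

k, T, iters = picard_iteration()
print(f"converged after {iters} iterations: k = {k:.5f}, T_fuel = {T:.1f} K")
```

Because the toy Doppler feedback is weak, the iteration converges in a handful of passes; in a real core simulator each "solve" is a full diffusion and subchannel calculation.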
Ex-Vessel Core Melt Modeling Comparison between MELTSPREAD-CORQUENCH and MELCOR 2.1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robb, Kevin R.; Farmer, Mitchell; Francis, Matthew W.
System-level code analyses by both United States and international researchers predict major core melting, bottom head failure, and corium-concrete interaction for Fukushima Daiichi Unit 1 (1F1). Although system codes such as MELCOR and MAAP are capable of capturing a wide range of accident phenomena, they currently do not contain detailed models for evaluating some ex-vessel core melt behavior. However, specialized codes containing more detailed modeling are available for melt spreading, such as MELTSPREAD, as well as for long-term molten corium-concrete interaction (MCCI) and debris coolability, such as CORQUENCH. In a preceding study, Enhanced Ex-Vessel Analysis for Fukushima Daiichi Unit 1: Melt Spreading and Core-Concrete Interaction Analyses with MELTSPREAD and CORQUENCH, the MELTSPREAD-CORQUENCH codes predicted that the 1F1 core melt readily cooled, in contrast to predictions by MELCOR. The user community has taken notice and is in the process of updating their system codes, specifically MAAP and MELCOR, to improve and reduce conservatism in their ex-vessel core melt models. This report investigates why the MELCOR v2.1 code, compared to the MELTSPREAD and CORQUENCH 3.03 codes, yields differing predictions of ex-vessel melt progression. To accomplish this, the differences in the treatment of the ex-vessel melt with respect to melt spreading and long-term coolability are examined. The differences in modeling approaches are summarized, and a comparison of example code predictions is provided.
Analysis of unmitigated large break loss of coolant accidents using MELCOR code
NASA Astrophysics Data System (ADS)
Pescarini, M.; Mascari, F.; Mostacci, D.; De Rosa, F.; Lombardo, C.; Giannetti, F.
2017-11-01
In the framework of the severe accident research activity developed by ENEA, a MELCOR nodalization of a generic 900 MWe Pressurized Water Reactor has been developed. The aim of this paper is to present the analysis of MELCOR code calculations concerning two independent unmitigated large break loss of coolant accident transients occurring in the cited type of reactor. In particular, the analysis and comparison of the transients initiated by an unmitigated double-ended cold leg rupture and an unmitigated double-ended hot leg rupture in loop 1 of the primary cooling system are presented herein. This activity has been performed focusing specifically on the in-vessel phenomenology that characterizes this kind of accident. The analysis of the thermal-hydraulic transient phenomena and the core degradation phenomena is therefore presented here. The analysis of the calculated data shows the capability of the code to reproduce the phenomena typical of these transients and permits their phenomenological study. A first sequence of main events is presented, showing that the cold leg break transient proceeds faster than the hot leg break transient because of the position of the break. Further analyses are in progress to quantitatively assess the results of the code nodalization for accident management strategy definition and fission product source term evaluation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Okada, K.; Okamoto, A.; Kitajima, S.
To investigate the deuteron and triton density ratio in core plasmas, a new methodology is proposed based on measurement of the deuterium-tritium (DT) to deuterium-deuterium (DD) neutron count-rate ratio using a double-crystal time-of-flight (TOF) spectrometer. Multi-discriminator electronic circuits for the first and second detectors are used in addition to the TOF technique. The optimum arrangement of the detectors and the discrimination window were examined considering the relations between the geometrical arrangement and the deposited energy using a Monte Carlo code, PHITS (Particle and Heavy Ion Transport Code System). An experiment to verify the calculations was performed using DD neutrons from an accelerator.
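TOF spectrometry separates DD from DT neutrons because flight time scales with energy through the nonrelativistic relation E = ½m(L/t)². A short sketch of that kinematics follows; the 1 m flight path is an arbitrary illustrative choice, not a parameter of the spectrometer described above.

```python
# Nonrelativistic time-of-flight estimate for fusion neutrons:
# E = (1/2) m (L/t)^2, so t = L / (c * sqrt(2 E / m c^2)).
import math

M_N_MEV = 939.565        # neutron rest mass (MeV/c^2)
C = 2.99792458e8         # speed of light (m/s)

def flight_time_ns(E_mev, L=1.0):
    """Time (ns) for a neutron of kinetic energy E_mev to travel L metres."""
    v = C * math.sqrt(2.0 * E_mev / M_N_MEV)   # nonrelativistic speed
    return L / v * 1e9

t_dd = flight_time_ns(2.45)   # DD fusion neutron (2.45 MeV)
t_dt = flight_time_ns(14.1)   # DT fusion neutron (14.1 MeV)
print(f"DD: {t_dd:.1f} ns per metre, DT: {t_dt:.1f} ns per metre")
```

The factor-of-two-plus separation in arrival time per metre of flight path is what makes the two neutron populations distinguishable by TOF coincidence timing.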
Coupled reactors analysis: New needs and advances using Monte Carlo methodology
Aufiero, M.; Palmiotti, G.; Salvatores, M.; ...
2016-08-20
Coupled reactors and the coupling features of large or heterogeneous core reactors can be investigated with the Avery theory, which allows a physics understanding of the main features of these systems. However, the complex geometries that are often encountered in association with coupled reactors require a detailed geometry description that can be easily provided by modern Monte Carlo (MC) codes. This implies an MC calculation of the coupling parameters defined by Avery and of the sensitivity coefficients that allow further detailed physics analysis. The results presented in this paper show that the MC code SERPENT has been successfully modified to meet the required capabilities.
Pretest prediction of Semiscale Test S-07-10B [PWR]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dobbe, C A
A best estimate prediction of Semiscale Test S-07-10B was performed at the INEL by EG&G Idaho as part of the RELAP4/MOD6 code assessment effort and as the Nuclear Regulatory Commission pretest calculation for the Small Break Experiment. The RELAP4/MOD6 Update 4 and the RELAP4/MOD7 computer codes were used to analyze Semiscale Test S-07-10B, a 10% communicative cold leg break experiment. The Semiscale Mod-3 system utilized an electrically heated simulated core operating at a power level of 1.94 MW. The initial system pressure and temperature in the upper plenum were 2276 psia and 604 °F, respectively.
Generalized Advanced Propeller Analysis System (GAPAS). Volume 2: Computer program user manual
NASA Technical Reports Server (NTRS)
Glatt, L.; Crawford, D. R.; Kosmatka, J. B.; Swigart, R. J.; Wong, E. W.
1986-01-01
The Generalized Advanced Propeller Analysis System (GAPAS) computer code is described. GAPAS was developed to analyze advanced technology multi-bladed propellers which operate on aircraft at speeds up to Mach 0.8 and altitudes up to 40,000 feet. GAPAS includes technology for analyzing the aerodynamic, structural, and acoustic performance of propellers. The computer code was developed for the CDC 7600 computer and is currently available for industrial use on the NASA Langley computer. A description of all the analytical models incorporated in GAPAS is included. Sample calculations are also described, as well as user requirements for modifying the analysis system. Computer system core requirements and running times are also discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, T; Lin, H; Xu, X
Purpose: (1) To perform phase space (PS) based source modeling for Tomotherapy and Varian TrueBeam 6 MV Linacs, (2) to examine the accuracy and performance of the ARCHER Monte Carlo code on a heterogeneous computing platform with Many Integrated Core coprocessors (MIC, aka Xeon Phi) and GPUs, and (3) to explore software micro-optimization methods. Methods: The patient-specific source of the Tomotherapy and Varian TrueBeam Linacs was modeled using the PS approach. For the helical Tomotherapy case, the PS data were calculated in our previous study (Su et al., Medical Physics 41(7), 2014). For the single-view Varian TrueBeam case, we analytically derived them from the raw patient-independent PS data in IAEA's database, partial geometry information of the jaw and MLC, as well as the fluence map. The phantom was generated from DICOM images. The Monte Carlo simulation was performed by the ARCHER-MIC and GPU codes, which were benchmarked against a modified parallel DPM code. Software micro-optimization was systematically conducted and focused on SIMD vectorization of tight for-loops and data prefetch, with the ultimate goal of increasing 512-bit register utilization and reducing memory access latency. Results: Dose calculation was performed for two clinical cases, a Tomotherapy-based prostate cancer treatment and a TrueBeam-based left breast treatment. ARCHER was verified against the DPM code. The statistical uncertainty of the dose to the PTV was less than 1%. Using double precision, the total wall time of the multithreaded CPU code on an X5650 CPU was 339 seconds for the Tomotherapy case and 131 seconds for the TrueBeam case, while on three 5110P MICs it was reduced to 79 and 59 seconds, respectively. The single-precision GPU code on a K40 GPU took 45 seconds for the Tomotherapy dose calculation. Conclusion: We have extended ARCHER, the MIC and GPU-based Monte Carlo dose engine, to Tomotherapy and TrueBeam dose calculations.
Methodology, status and plans for development and assessment of the code ATHLET
DOE Office of Scientific and Technical Information (OSTI.GOV)
Teschendorff, V.; Austregesilo, H.; Lerchl, G.
1997-07-01
The thermal-hydraulic computer code ATHLET (Analysis of THermal-hydraulics of LEaks and Transients) is being developed by the Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) for the analysis of anticipated and abnormal plant transients, small and intermediate leaks, as well as large breaks in light water reactors. The aim of the code development is to cover the whole spectrum of design basis and beyond design basis accidents (without core degradation) for PWRs and BWRs with only one code. The main code features are: advanced thermal-hydraulics; modular code architecture; separation between physical models and numerical methods; pre- and post-processing tools; and portability. The code has features that are of special interest for applications to small leaks and transients with accident management, e.g. initialization by a steady-state calculation, a full-range drift-flux model, and dynamic mixture level tracking. The General Control Simulation Module of ATHLET is a flexible tool for the simulation of the balance-of-plant and control systems, including the various operator actions in the course of accident sequences with AM measures. The code development is accompanied by a systematic and comprehensive validation program. A large number of integral experiments and separate effect tests, including the major International Standard Problems, have been calculated by GRS and by independent organizations. The ATHLET validation matrix is a well balanced set of integral and separate effects tests derived from the CSNI proposal, emphasizing, however, the German combined ECC injection system which was investigated in the UPTF, PKL and LOBI test facilities.
Inelastic losses in X-ray absorption theory
NASA Astrophysics Data System (ADS)
Campbell, Luke Whalin
There is a surprising lack of many-body effects observed in XAS (X-ray Absorption Spectroscopy) experiments. While collective excitations and other satellite effects account for between 20% and 40% of the spectral weight of the core hole and photoelectron excitation spectrum, the only commonly observed many-body effect is a relatively structureless amplitude reduction of the fine structure, typically no more than a 10% effect. As a result, many-particle effects are typically neglected in the XAS codes used to predict and interpret modern experiments. To compensate, the amplitude reduction factor is simply fitted to experimental data. In this work, a quasi-boson model is developed to treat the case of XAS, in which the system has both a photoelectron and a core hole. We find that there is a strong interference between the extrinsic and intrinsic losses. The interference reduces the excitation amplitudes at low energies, where the core hole and photoelectron induced excitations tend to cancel. At high energies, the interference vanishes, and the theory reduces to the sudden approximation. The X-ray absorption spectrum including many-body excitations is represented by a convolution of the one-electron absorption spectrum with an energy-dependent spectral function. The latter has an asymmetric quasiparticle peak and broad satellite structure. The net result is a phasor sum, which yields the many-body amplitude reduction and phase shift of the fine structure oscillations (EXAFS), and possibly additional satellite structure. Calculations for several cases of interest are found to be in reasonable agreement with experiment. Edge singularity effects and deviations from the final state rule arising from this theory are also discussed. The ab initio XAS code FEFF has been extended for calculations of the many-body amplitude reduction and phase shift in X-ray spectroscopies. A new broadened plasmon pole self-energy is added.
The dipole matrix elements are modified to include a projection operator to calculate deviations from the final state rule and edge singularities.
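The convolution structure described in this abstract can be written schematically as follows; the notation here is assumed for illustration and is not copied from the thesis itself.

```latex
% One-electron absorption spectrum \mu^{0} broadened by an
% energy-dependent spectral function A(\omega,\omega') that carries
% the asymmetric quasiparticle peak and the satellite structure:
\mu(\omega) = \int d\omega' \, A(\omega, \omega') \, \mu^{0}(\omega - \omega')
```

In the sudden-approximation (high-energy) limit, A collapses toward a quasiparticle peak plus weak satellites, recovering the familiar structureless amplitude reduction of the EXAFS oscillations.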
NASA Astrophysics Data System (ADS)
Zheng, Jingjing; Mielke, Steven L.; Clarkson, Kenneth L.; Truhlar, Donald G.
2012-08-01
We present a Fortran program package, MSTor, which calculates partition functions and thermodynamic functions of complex molecules involving multiple torsional motions by the recently proposed MS-T method. This method interpolates between the local harmonic approximation in the low-temperature limit, and the limit of free internal rotation of all torsions at high temperature. The program can also carry out calculations in the multiple-structure local harmonic approximation. The program package also includes six utility codes that can be used as stand-alone programs to calculate reduced moment of inertia matrices by the method of Kilpatrick and Pitzer, to generate conformational structures, to calculate, either analytically or by Monte Carlo sampling, volumes for torsional subdomains defined by Voronoi tessellation of the conformational subspace, to generate template input files, and to calculate one-dimensional torsional partition functions using the torsional eigenvalue summation method. Catalogue identifier: AEMF_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEMF_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 77 434 No. 
of bytes in distributed program, including test data, etc.: 3 264 737 Distribution format: tar.gz Programming language: Fortran 90, C, and Perl Computer: Itasca (HP Linux cluster, each node has two-socket, quad-core 2.8 GHz Intel Xeon X5560 “Nehalem EP” processors), Calhoun (SGI Altix XE 1300 cluster, each node containing two quad-core 2.66 GHz Intel Xeon “Clovertown”-class processors sharing 16 GB of main memory), Koronis (Altix UV 1000 server with 190 6-core Intel Xeon X7542 “Westmere” processors at 2.66 GHz), Elmo (Sun Fire X4600 Linux cluster with AMD Opteron cores), and Mac Pro (two 2.8 GHz Quad-core Intel Xeon processors) Operating system: Linux/Unix/Mac OS RAM: 2 Mbytes Classification: 16.3, 16.12, 23 Nature of problem: Calculation of the partition functions and thermodynamic functions (standard-state energy, enthalpy, entropy, and free energy as functions of temperatures) of complex molecules involving multiple torsional motions. Solution method: The multi-structural approximation with torsional anharmonicity (MS-T). The program also provides results for the multi-structural local harmonic approximation [1]. Restrictions: There is no limit on the number of torsions that can be included in either the Voronoi calculation or the full MS-T calculation. In practice, the range of problems that can be addressed with the present method consists of all multi-torsional problems for which one can afford to calculate all the conformations and their frequencies. Unusual features: The method can be applied to transition states as well as stable molecules. 
The program package also includes the hull program for the calculation of Voronoi volumes and six utility codes that can be used as stand-alone programs to calculate reduced moment-of-inertia matrices by the method of Kilpatrick and Pitzer, to generate conformational structures, to calculate, either analytically or by Monte Carlo sampling, volumes for torsional subdomains defined by Voronoi tessellation of the conformational subspace, to generate template input files, and to calculate one-dimensional torsional partition functions using the torsional eigenvalue summation method. Additional comments: The program package includes a manual, installation script, and input and output files for a test suite. Running time: There are 24 test runs. The running time of the test runs on a single processor of the Itasca computer is less than 2 seconds. J. Zheng, T. Yu, E. Papajak, I.M. Alecu, S.L. Mielke, D.G. Truhlar, Practical methods for including torsional anharmonicity in thermochemical calculations of complex molecules: The internal-coordinate multi-structural approximation, Phys. Chem. Chem. Phys. 13 (2011) 10885-10907.
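For context on the high-temperature limit that MS-T interpolates toward, the classical partition function of a free internal rotor is the standard result below (a textbook formula, not an excerpt of the MSTor manual):

```latex
% Classical free internal rotor with effective moment of inertia I
% and torsional symmetry number \sigma. MS-T interpolates between
% the local harmonic oscillator at low T and this limit at high T:
Q_{\mathrm{FR}} = \frac{\sqrt{2\pi I k_B T}}{\sigma \hbar}
```

The crossover between the harmonic and free-rotor regimes occurs roughly where k_B T becomes comparable to the torsional barrier height.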
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simunovic, Srdjan
2015-02-16
CASL's modeling and simulation technology, the Virtual Environment for Reactor Applications (VERA), incorporates coupled physics and science-based models, state-of-the-art numerical methods, modern computational science, integrated uncertainty quantification (UQ), and validation against data from operating pressurized water reactors (PWRs), single-effect experiments, and integral tests. The computational simulation component of VERA is the VERA Core Simulator (VERA-CS). The core simulator is the specific collection of multi-physics computer codes used to model and deplete an LWR core over multiple cycles. The core simulator has a single common input file that drives all of the different physics codes. The parser code, VERAIn, converts the VERA input into an XML file that is used as input to the different VERA codes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mollerach, R.; Leszczynski, F.; Fink, J.
2006-07-01
In 2005 the Argentine Government took the decision to complete the construction of the Atucha-II nuclear power plant, which had been progressing slowly during the last ten years. Atucha-II is a 745 MWe nuclear station moderated and cooled with heavy water, of German (Siemens) design, located in Argentina. It has a pressure-vessel design with 451 vertical coolant channels, and the fuel assemblies (FA) are clusters of 37 natural UO2 rods with an active length of 530 cm. In the reactor physics area, a revision and update of the calculation methods and models was recently carried out, covering cell, supercell (control rod) and core calculations. As a validation of the new models, some benchmark comparisons were made against Monte Carlo calculations with MCNP5. This paper presents comparisons of cell and supercell benchmark problems, based on a slightly idealized model of the Atucha-I core, obtained with the WIMS-D5 and DRAGON codes and with MCNP5. The Atucha-I core was selected because it is smaller, similar from a neutronic point of view, and more symmetric than Atucha-II. Cell parameters compared include cell k-infinity, relative power levels of the different rings of fuel rods, and some two-group macroscopic cross sections. Supercell comparisons include supercell k-infinity changes due to the control rods (tubes) of steel and hafnium.
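Code-to-code k-infinity comparisons like these are usually quoted as a reactivity difference in pcm rather than a raw Δk. A minimal sketch of that conversion follows; the k values used are made-up placeholders, not Atucha benchmark results.

```python
# Reactivity difference between two code results for the same lattice,
# expressed in pcm (1 pcm = 1e-5): rho = (k - 1)/k, so
# delta_rho = rho_test - rho_ref = (1/k_ref - 1/k_test) * 1e5.

def delta_rho_pcm(k_ref, k_test):
    """Reactivity difference (pcm) of k_test relative to k_ref."""
    return (1.0 / k_ref - 1.0 / k_test) * 1e5

# Hypothetical reference (e.g. Monte Carlo) vs deterministic lattice result:
dk = delta_rho_pcm(1.10500, 1.10230)
print(f"{dk:+.0f} pcm")
```

Expressing the difference in reactivity units makes discrepancies comparable across lattices with different absolute multiplication factors.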
Radiological characterization of the pressure vessel internals of the BNL High Flux Beam Reactor.
Holden, Norman E; Reciniello, Richard N; Hu, Jih-Perng
2004-08-01
In preparation for the eventual decommissioning of the High Flux Beam Reactor after the permanent removal of its fuel elements from the Brookhaven National Laboratory, measurements and calculations of the decay gamma-ray dose rate were performed in the reactor pressure vessel and on vessel internal structures such as the upper and lower thermal shields, the Transition Plate, and the Control Rod blades. Measurements of gamma-ray dose rates were made using Red Perspex polymethyl methacrylate high-dose film, a Radcal "peanut" ion chamber, and Eberline's RO-7 high-range ion chamber. As a comparison, the Monte Carlo MCNP code and the MicroShield code were used to model the gamma-ray transport and dose buildup. The gamma-ray dose rate at 8 cm above the center of the Transition Plate was measured to be 160 Gy/h (using the RO-7) and 88 Gy/h at 8 cm above and about 5 cm lateral to the Transition Plate (using Red Perspex film). This compares with a calculated dose rate of 172 Gy/h using MicroShield. The gamma-ray dose rate was 16.2 Gy/h measured at 76 cm from the reactor core (using the "peanut" ion chamber) and 16.3 Gy/h at 87 cm from the core (using Red Perspex film). The similarity of dose rates measured with different instruments indicates that using different methods and instruments is acceptable if the measurement (and calculation) parameters are well defined. Different measurement techniques may be necessary due to constraints such as size restrictions.
egs_brachy: a versatile and fast Monte Carlo code for brachytherapy
NASA Astrophysics Data System (ADS)
Chamberland, Marc J. P.; Taylor, Randle E. P.; Rogers, D. W. O.; Thomson, Rowan M.
2016-12-01
egs_brachy is a versatile and fast Monte Carlo (MC) code for brachytherapy applications. It is based on the EGSnrc code system, enabling simulation of photons and electrons. Complex geometries are modelled using the EGSnrc C++ class library and egs_brachy includes a library of geometry models for many brachytherapy sources, in addition to eye plaques and applicators. Several simulation efficiency enhancing features are implemented in the code. egs_brachy is benchmarked by comparing TG-43 source parameters of three source models to previously published values. 3D dose distributions calculated with egs_brachy are also compared to ones obtained with the BrachyDose code. Well-defined simulations are used to characterize the effectiveness of many efficiency improving techniques, both as an indication of the usefulness of each technique and to find optimal strategies. Efficiencies and calculation times are characterized through single source simulations and simulations of idealized and typical treatments using various efficiency improving techniques. In general, egs_brachy shows agreement within uncertainties with previously published TG-43 source parameter values. 3D dose distributions from egs_brachy and BrachyDose agree at the sub-percent level. Efficiencies vary with radionuclide and source type, number of sources, phantom media, and voxel size. The combined effects of efficiency-improving techniques in egs_brachy lead to short calculation times: simulations approximating prostate and breast permanent implant (both with (1 mm)³ scaled to (2 mm)³ voxels) and eye plaque (with (1 mm)³ voxels) treatments take between 13 and 39 s, on a single 2.5 GHz Intel Xeon E5-2680 v3 processor core, to achieve 2% average statistical uncertainty on doses within the PTV. egs_brachy will be released as free and open source software to the research community.
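The "efficiency improving techniques" evaluated above are conventionally scored with the standard Monte Carlo efficiency metric ε = 1/(s²T), where s is the fractional statistical uncertainty and T the calculation time. A short sketch follows; the numbers are illustrative, not egs_brachy benchmark data.

```python
# Standard Monte Carlo efficiency metric: eps = 1 / (s^2 * T).
# Halving the runtime at fixed uncertainty, or reducing the variance
# at fixed runtime, both increase eps.

def efficiency(rel_uncertainty, time_s):
    """MC efficiency for fractional uncertainty s reached in time T (s)."""
    return 1.0 / (rel_uncertainty**2 * time_s)

base = efficiency(0.02, 39.0)     # 2% uncertainty reached in 39 s
faster = efficiency(0.02, 13.0)   # same uncertainty reached in 13 s
gain = faster / base
print(f"efficiency gain: {gain:.1f}x")
```

Because s² scales as 1/N for N histories, ε is (ideally) independent of how long a given simulation is run, which is what makes it a fair figure of merit for comparing variance-reduction techniques.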
Lee, Tai-Sung; Hu, Yuan; Sherborne, Brad; Guo, Zhuyan; York, Darrin M
2017-07-11
We report the implementation of the thermodynamic integration method in the pmemd module of the AMBER 16 package on GPUs (pmemdGTI). The pmemdGTI code typically delivers over two orders of magnitude of speed-up relative to a single CPU core for the calculation of ligand-protein binding affinities, with no statistically significant numerical differences, and thus provides a powerful new tool for drug discovery applications.
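Thermodynamic integration estimates a free-energy difference by integrating the ensemble average ⟨∂H/∂λ⟩ over the coupling parameter λ. A minimal numerical sketch follows; the λ windows and ⟨∂H/∂λ⟩ values are made up for illustration and are not pmemdGTI output.

```python
# Thermodynamic integration sketch: Delta G = integral over lambda of
# <dH/dlambda>, estimated with the trapezoidal rule over discrete
# lambda windows (values below are invented placeholders, kcal/mol).

lam = [0.0, 0.25, 0.5, 0.75, 1.0]       # lambda windows
dhdl = [-3.1, -1.8, -0.4, 1.1, 2.6]     # ensemble averages <dH/dlambda>

# Trapezoidal quadrature over the windows:
delta_g = sum(0.5 * (dhdl[i] + dhdl[i + 1]) * (lam[i + 1] - lam[i])
              for i in range(len(lam) - 1))
print(f"Delta G = {delta_g:.4f} kcal/mol")
```

In production work each ⟨∂H/∂λ⟩ comes from an equilibrated MD simulation at that λ, which is exactly the per-window cost the GPU implementation accelerates.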
Evaluation Criteria for Nursing Student Application of Evidence-Based Practice: A Delphi Study.
Bostwick, Lina; Linden, Lois
2016-06-01
Core clinical evaluation criteria do not exist for measuring prelicensure baccalaureate nursing students' application of evidence-based practice (EBP) during direct care assignments. The study objective was to achieve consensus among EBP nursing experts to create clinical criteria for faculty to use in evaluating students' application of EBP principles. A three-round Delphi method was used. Experts were invited to participate in Web-based surveys. Data were analyzed using qualitative coding and categorizing. Quantitative analyses were descriptive calculations for rating and ranking. Expert consensus occurred in the Delphi rounds. The study provides a set of 10 core clinical evaluation criteria for faculty evaluating students' progression toward competency in their application of EBP. A baccalaureate program curriculum requiring the use of Bostwick's EBP Core Clinical Evaluation Criteria will provide a clear definition for understanding basic core EBP competence as expected for the assessment of student learning. [J Nurs Educ. 2016;55(5):336-341.]. Copyright 2016, SLACK Incorporated.
NASA Astrophysics Data System (ADS)
Ilham, Muhammad; Su'ud, Zaki
2017-01-01
Growing energy demand due to the increasing world population encourages development of nuclear power plant technology and science, with emphasis on safety and security. This research presents a design study of a small, long-life, modular gas-cooled fast reactor (GCFR) that can be operated for more than 20 years. A neutronic design of the GCFR with mixed-oxide (UO2-PuO2) fuel was carried out for power levels in the range of 100-200 MWth and fuel fraction variations of 50-60%, using a cylindrical pin cell and a cylindrical reactor core geometry. Calculations used the SRAC-CITATION code. The results obtained are the effective multiplication factor and the core power density (with geometry optimization), from which the optimum core design is determined; the optimum is a 200 MWth core with a 55% fuel fraction and fuel percentages of 9-13%.
Stability and accuracy of 3D neutron transport simulations using the 2D/1D method in MPACT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collins, Benjamin, E-mail: collinsbs@ornl.gov; Stimpson, Shane, E-mail: stimpsonsg@ornl.gov; Kelley, Blake W., E-mail: kelleybl@umich.edu
2016-12-01
A consistent “2D/1D” neutron transport method is derived from the 3D Boltzmann transport equation to calculate fuel-pin-resolved neutron fluxes for realistic full-core Pressurized Water Reactor (PWR) problems. The 2D/1D method employs the Method of Characteristics to discretize the radial variables and a lower order transport solution to discretize the axial variable. This paper describes the theory of the 2D/1D method and its implementation in the MPACT code, which has become the whole-core deterministic neutron transport solver for the Consortium for Advanced Simulation of Light Water Reactors (CASL) core simulator VERA-CS. Several applications have been performed on both leadership-class and industry-class computing clusters. Results are presented for whole-core solutions of the Watts Bar Nuclear Power Station Unit 1 and compared to both continuous-energy Monte Carlo results and plant data.
Stability and accuracy of 3D neutron transport simulations using the 2D/1D method in MPACT
Collins, Benjamin; Stimpson, Shane; Kelley, Blake W.; ...
2016-08-25
We derived a consistent “2D/1D” neutron transport method from the 3D Boltzmann transport equation to calculate fuel-pin-resolved neutron fluxes for realistic full-core Pressurized Water Reactor (PWR) problems. The 2D/1D method employs the Method of Characteristics to discretize the radial variables and a lower order transport solution to discretize the axial variable. Our paper describes the theory of the 2D/1D method and its implementation in the MPACT code, which has become the whole-core deterministic neutron transport solver for the Consortium for Advanced Simulation of Light Water Reactors (CASL) core simulator VERA-CS. We also performed several applications on both leadership-class and industry-class computing clusters. Results are presented for whole-core solutions of the Watts Bar Nuclear Power Station Unit 1 and compared to both continuous-energy Monte Carlo results and plant data.
MODA: a new algorithm to compute optical depths in multidimensional hydrodynamic simulations
NASA Astrophysics Data System (ADS)
Perego, Albino; Gafton, Emanuel; Cabezón, Rubén; Rosswog, Stephan; Liebendörfer, Matthias
2014-08-01
Aims: We introduce the multidimensional optical depth algorithm (MODA) for the calculation of optical depths in approximate multidimensional radiative transport schemes, equally applicable to neutrinos and photons. Motivated by (but not limited to) neutrino transport in three-dimensional simulations of core-collapse supernovae and neutron star mergers, our method makes no assumptions about the geometry of the matter distribution, apart from expecting optically transparent boundaries. Methods: Based on local information about opacities, the algorithm finds an escape route that tends to minimize the optical depth without assuming any predefined paths for radiation. Its adaptivity makes it suitable for a variety of astrophysical settings with complicated geometry (e.g., core-collapse supernovae, compact binary mergers, tidal disruptions, star formation, etc.). We implement the MODA algorithm in both a Eulerian hydrodynamics code with a fixed, uniform grid and an SPH code, where we use a tree structure that is otherwise used for searching neighbors and calculating gravity. Results: In a series of numerical experiments, we compare the MODA results with analytically known solutions. We also use snapshots from actual 3D simulations and compare the results of MODA with those obtained with other methods, such as the global and local ray-by-ray methods. It turns out that MODA achieves excellent accuracy at a moderate computational cost. In the appendix we also discuss implementation details and parallelization strategies.
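The core idea, following the locally decreasing opacity to build an escape route and accumulating τ = ∫κ ds along it, can be caricatured in one dimension. The sketch below is a toy analogue invented for illustration, not the MODA implementation, which operates on 3D grids and SPH trees.

```python
# Toy 1D analogue of a minimal-optical-depth escape route: from a
# starting cell, repeatedly step toward the neighbour with lower
# opacity and accumulate tau = sum(kappa * ds) (trapezoidal per step)
# until a transparent boundary cell is reached.

def greedy_optical_depth(kappa, start, ds=1.0):
    """Accumulate optical depth along a locally opacity-descending path."""
    i, tau = start, 0.0
    while 0 < i < len(kappa) - 1:
        step = -1 if kappa[i - 1] <= kappa[i + 1] else 1  # descend in opacity
        tau += 0.5 * (kappa[i] + kappa[i + step]) * ds
        i += step
    return tau

# Opacity peaked in the centre: escape is cheaper toward either edge.
kappa = [0.0, 0.1, 0.5, 2.0, 0.5, 0.1, 0.0]
tau = greedy_optical_depth(kappa, start=3)
print(f"tau along escape route: {tau:.2f}")
```

A purely greedy walk can get trapped by local opacity minima in more than one dimension, which is why the actual algorithm combines local information with a global strategy rather than a single descent.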
SHOULD ONE USE THE RAY-BY-RAY APPROXIMATION IN CORE-COLLAPSE SUPERNOVA SIMULATIONS?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Skinner, M. Aaron; Burrows, Adam; Dolence, Joshua C., E-mail: burrows@astro.princeton.edu, E-mail: askinner@astro.princeton.edu, E-mail: jdolence@lanl.gov
2016-11-01
We perform the first self-consistent, time-dependent, multi-group calculations in two dimensions (2D) to address the consequences of using the ray-by-ray+ transport simplification in core-collapse supernova simulations. Such a dimensional reduction is employed by many researchers to facilitate their resource-intensive calculations. Our new code (Fornax) implements multi-D transport, and can, by zeroing out transverse flux terms, emulate the ray-by-ray+ scheme. Using the same microphysics, initial models, resolution, and code, we compare the results of simulating 12, 15, 20, and 25 M⊙ progenitor models using these two transport methods. Our findings call into question the wisdom of the pervasive use of the ray-by-ray+ approach. Employing it leads to maximum post-bounce/pre-explosion shock radii that are almost universally larger by tens of kilometers than those derived using the more accurate scheme, typically leaving the post-bounce matter less bound and artificially more “explodable.” In fact, for our 25 M⊙ progenitor, the ray-by-ray+ model explodes, while the corresponding multi-D transport model does not. Therefore, in two dimensions, the combination of ray-by-ray+ with the axial sloshing hydrodynamics that is a feature of 2D supernova dynamics can result in quantitatively, and perhaps qualitatively, incorrect results.
Alfvén eigenmode evolution computed with the VENUS and KINX codes for the ITER baseline scenario
DOE Office of Scientific and Technical Information (OSTI.GOV)
Isaev, M. Yu., E-mail: isaev-my@nrcki.ru; Medvedev, S. Yu.; Cooper, W. A.
A new application of the VENUS code is described, which computes alpha particle orbits in the perturbed electromagnetic fields and their resonant interaction with the toroidal Alfvén eigenmodes (TAEs) for the ITER device. The ITER baseline scenario with Q = 10 and a plasma toroidal current of 15 MA is considered as the most important and relevant for the International Tokamak Physics Activity group on energetic particles (ITPA-EP). For this scenario, typical unstable TAE modes with toroidal index n = 20 have been predicted that are localized in the plasma core near the surface with safety factor q = 1. The spatial structure of ballooning and antiballooning modes has been computed with the ideal MHD code KINX. The linear growth rates and the saturation levels, taking into account the damping effects and the different mode frequencies, have been calculated with the VENUS code for both ballooning and antiballooning TAE modes.
AN OPEN-SOURCE NEUTRINO RADIATION HYDRODYNAMICS CODE FOR CORE-COLLAPSE SUPERNOVAE
DOE Office of Scientific and Technical Information (OSTI.GOV)
O’Connor, Evan, E-mail: evanoconnor@ncsu.edu; CITA, Canadian Institute for Theoretical Astrophysics, Toronto, M5S 3H8
2015-08-15
We present an open-source update to the spherically symmetric, general-relativistic hydrodynamics, core-collapse supernova (CCSN) code GR1D. The source code is available at http://www.GR1Dcode.org. We extend its capabilities to include a general-relativistic treatment of neutrino transport based on the moment formalisms of Shibata et al. and Cardall et al. We pay special attention to implementing and testing numerical methods and approximations that lessen the computational demand of the transport scheme by removing the need to invert large matrices. This is especially important for the implementation and development of moment-like transport methods in two and three dimensions. A critical component of neutrino transport calculations is the neutrino–matter interaction coefficients that describe the production, absorption, scattering, and annihilation of neutrinos. In this article we also describe our open-source neutrino interaction library NuLib (available at http://www.nulib.org). We believe that an open-source approach to describing these interactions is one of the major steps needed to progress toward robust models of CCSNe and robust predictions of the neutrino signal. We show, via comparisons to full Boltzmann neutrino-transport simulations of CCSNe, that our neutrino transport code performs remarkably well. Furthermore, we show that the methods and approximations we employ to increase efficiency do not decrease the fidelity of our results. We also test the ability of our general-relativistic transport code to model failed CCSNe by evolving a 40-solar-mass progenitor to the onset of collapse to a black hole.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uematsu, Hitoshi; Yamamoto, Toru; Izutsu, Sadayuki
1990-06-01
A reactivity-initiated event is a design-basis accident for the safety analysis of boiling water reactors. It is defined as a rapid transient of reactor power caused by a reactivity insertion of over $1.0 due to a postulated drop or abnormal withdrawal of the control rod from the core. Strong space-dependent feedback effects are associated with the local power increase due to control rod movement. A realistic treatment of the core status in a transient by a code with a detailed core model is recommended in evaluating this event. A three-dimensional transient code, ARIES, has been developed to meet this need. The code simulates the event with three-dimensional neutronics, coupled with multichannel thermal hydraulics, based on a nonequilibrium separated flow model. The experimental data obtained in reactivity accident tests performed with the SPERT III-E core are used to verify the entire code, including thermal-hydraulic models.
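A step reactivity insertion of this kind is often illustrated with the one-group point-kinetics equations, dn/dt = ((rho - beta)/Lambda) n + lambda C and dC/dt = (beta/Lambda) n - lambda C. The sketch below integrates them for a sub-prompt-critical insertion; the kinetics parameters are generic illustrative values, not ARIES or SPERT III-E data.

```python
# One-group point-kinetics sketch of a step reactivity insertion.
# beta (delayed fraction), lam (precursor decay constant, 1/s) and
# Lam (neutron generation time, s) are illustrative values only.
beta, lam, Lam = 0.0065, 0.08, 4.0e-5

def point_kinetics(rho, t_end, dt=1.0e-6):
    """Explicit-Euler integration of dn/dt = ((rho-beta)/Lam) n + lam C,
    dC/dt = (beta/Lam) n - lam C, starting from equilibrium at n = 1."""
    n = 1.0
    C = beta * n / (lam * Lam)      # equilibrium precursor concentration
    t = 0.0
    while t < t_end:
        dn = ((rho - beta) / Lam * n + lam * C) * dt
        dC = (beta / Lam * n - lam * C) * dt
        n, C, t = n + dn, C + dC, t + dt
    return n

n_after = point_kinetics(rho=0.5 * beta, t_end=0.01)  # +$0.5 step insertion
```

For rho = 0.5 beta the power undergoes the prompt jump toward roughly beta/(beta - rho) = 2 times the initial level, followed by a slow rise on the precursor timescale; insertions above $1.0, as in the event described, make the reactor prompt critical and the excursion far faster.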
Production Level CFD Code Acceleration for Hybrid Many-Core Architectures
NASA Technical Reports Server (NTRS)
Duffy, Austen C.; Hammond, Dana P.; Nielsen, Eric J.
2012-01-01
In this work, a novel graphics processing unit (GPU) distributed sharing model for hybrid many-core architectures is introduced and employed in the acceleration of a production-level computational fluid dynamics (CFD) code. The latest generation graphics hardware allows multiple processor cores to simultaneously share a single GPU through concurrent kernel execution. This feature has allowed the NASA FUN3D code to be accelerated in parallel with up to four processor cores sharing a single GPU. For codes to scale and fully use resources on these and the next generation machines, codes will need to employ some type of GPU sharing model, as presented in this work. Findings include the effects of GPU sharing on overall performance. A discussion of the inherent challenges that parallel unstructured CFD codes face in accelerator-based computing environments is included, with considerations for future generation architectures. This work was completed by the author in August 2010, and reflects the analysis and results of the time.
NASA Astrophysics Data System (ADS)
Medley, S. S.; Budny, R. V.; Mansfield, D. K.; Redi, M. H.; Roquemore, A. L.; Fisher, R. K.; Duong, H. H.; McChesney, J. M.; Parks, P. B.; Petrov, M. P.; Gorelenkov, N. N.
1996-10-01
The energy distributions and radial density profiles of the fast confined trapped alpha particles in DT experiments on TFTR are being measured in the energy range 0.5-3.5 MeV using the pellet charge exchange (PCX) diagnostic. A brief description is presented of the measurement technique, which involves active neutral particle analysis using the ablation cloud surrounding an injected impurity pellet as the neutralizer. This paper focuses on alpha and triton measurements in the core of MHD-quiescent TFTR discharges, where the expected classical slowing-down and pitch angle scattering effects are not complicated by stochastic ripple diffusion and sawtooth activity. In particular, the first measurement of the alpha slowing-down distribution up to the birth energy, obtained using boron pellet injection, is presented. The measurements are compared with predictions using either the TRANSP Monte Carlo code and/or a Fokker-Planck post-TRANSP processor code, which assumes that the alphas and tritons are well confined and slow down classically. Both the shapes of the measured alpha and triton energy distributions and their density ratios are in good agreement with the code calculations. We conclude that the PCX measurements are consistent with classical thermalization of the fusion-generated alphas and tritons.
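The classical steady-state slowing-down spectrum referred to above is commonly approximated as f(E) proportional to 1/(E^(3/2) + E_c^(3/2)) below the birth energy, where E_c is the critical energy at which electron and ion drag are equal. A minimal sketch follows, with an assumed illustrative E_c rather than a TRANSP/TFTR value:

```python
import numpy as np

# Classical slowing-down spectrum shape for fusion alphas,
# f(E) ~ 1 / (E^(3/2) + E_c^(3/2)) below the birth energy.
# E_c here is an assumed illustrative value, not a TRANSP/TFTR parameter.
E = np.linspace(0.05, 3.5, 500)   # alpha energy grid (MeV)
E_c = 0.6                         # assumed critical energy (MeV)
f = 1.0 / (E**1.5 + E_c**1.5)
norm = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(E))   # trapezoidal integral
f = f / norm                      # normalize to unit integral over the range
```

The measured PCX distribution is compared against this shape (folded with the instrument response) to test for classical thermalization.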
NASA Astrophysics Data System (ADS)
Stoekl, Alexander; Dorfi, Ernst
2014-05-01
In the early, embedded phase of evolution of terrestrial planets, the planetary core accumulates gas from the circumstellar disk into a planetary envelope. This atmosphere is very significant for the further thermal evolution of the planet, as it forms an insulating layer around the rocky core. The disk-captured envelope is also the starting point for the atmospheric evolution, in which the atmosphere is modified by outgassing from the planetary core and by atmospheric mass loss once the planet is exposed to the radiation field of the host star. The final amount of persistent atmosphere around the evolved planet strongly characterizes the planet and is a key criterion for habitability. The established way to study disk-accumulated atmospheres is with hydrostatic models, even though in many cases the assumption of stationarity is unlikely to be fulfilled. We present, for the first time, time-dependent radiation hydrodynamics simulations of the accumulation process and the interaction between the disk-nebula gas and the planetary core. The calculations were performed with the TAPIR-Code (short for The adaptive, implicit RHD-Code) in spherical symmetry, solving the equations of hydrodynamics, gray radiative transport, and convective energy transport. The models range from the surface of the solid core up to the Hill radius, where the planetary envelope merges into the surrounding protoplanetary disk. Our results show that the time-scale of gas capture and atmospheric growth depends strongly on the mass of the solid core. The amount of atmosphere accumulated during the lifetime of the protoplanetary disk (typically a few Myr) varies accordingly with the mass of the planet: a Mars-mass core will end up with about 10 bar of atmosphere, while for an Earth-mass core the surface pressure reaches several thousand bar.
Even larger planets of several Earth masses quickly capture massive envelopes, which in turn become gravitationally unstable, leading to runaway accretion and the eventual formation of a gas planet.
NASA Technical Reports Server (NTRS)
Kutepov, A. A.; Feofilov, A. G.; Manuilova, R. O.; Yankovsky, V. A.; Rezac, L.; Pesnell, W. D.; Goldberg, R. A.
2008-01-01
The Accelerated Lambda Iteration (ALI) technique was developed in stellar astrophysics at the beginning of the 1990s for solving the non-LTE radiative transfer problem in atomic lines and multiplets in stellar atmospheres. It was later successfully applied to modeling the non-LTE emissions and radiative cooling/heating in the vibrational-rotational bands of molecules in planetary atmospheres. Like standard lambda iteration, ALI operates with matrices of minimal dimension; however, it provides a higher convergence rate and better stability by removing from the iteration process the photons trapped in optically thick line cores. In the current version of the ALI-ARMS (ALI for Atmospheric Radiation and Molecular Spectra) code, additional acceleration is provided by the opacity distribution function (ODF) approach and by "decoupling". The former allows band branches to be replaced by single lines of special shape, whereas the latter treats the non-linearity caused by strong near-resonant vibration-vibration level coupling without additional linearization of the statistical equilibrium equations. The latest applications of the code to non-LTE diagnostics of molecular band emissions of the Earth's and Martian atmospheres, as well as to non-LTE IR cooling/heating calculations, are discussed.
Deploying electromagnetic particle-in-cell (EM-PIC) codes on Xeon Phi accelerator boards
NASA Astrophysics Data System (ADS)
Fonseca, Ricardo
2014-10-01
The complexity of the phenomena involved in several relevant plasma physics scenarios, where highly nonlinear and kinetic processes dominate, makes purely theoretical descriptions impossible. Further understanding of these scenarios requires detailed numerical modeling, but fully relativistic particle-in-cell codes such as OSIRIS are computationally intensive. The quest towards Exaflop computer systems has led to the development of HPC systems based on add-on accelerator cards, such as GPGPUs and, more recently, the Xeon Phi accelerators that power the current number 1 system in the world. These cards, also referred to as the Intel Many Integrated Core (MIC) architecture, offer peak theoretical performances of >1 TFlop/s for general-purpose calculations on a single board, and are receiving significant attention as an attractive alternative to CPUs for plasma modeling. In this work we report on our efforts towards the deployment of an EM-PIC code on a Xeon Phi architecture system. We focus on the parallelization and vectorization strategies followed, and present a detailed evaluation of code performance in comparison with the CPU code.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sen, Ramazan Sonat; Hummel, Andrew John; Hiruta, Hikaru
Deterministic full-core simulators require homogenized group constants covering the operating and transient conditions over the entire lifetime. Traditionally, the homogenized group constants are generated using a lattice physics code over an assembly or, in the case of prismatic high-temperature reactors (HTRs), a block. Strong absorbers that cause strong local depressions in the flux profile require special techniques during homogenization over a large volume; fuel blocks with burnable poisons and control rod blocks are examples of such cases. Over the past several decades, a tremendous number of studies have been performed on improving the accuracy of full-core calculations through the homogenization procedure. However, those studies were mostly performed for light water reactor (LWR) analyses and thus may not be directly applicable to advanced thermal reactors such as HTRs. This report presents the application of the SuPer-Homogenization correction method to a hypothetical HTR core.
Network Coding on Heterogeneous Multi-Core Processors for Wireless Sensor Networks
Kim, Deokho; Park, Karam; Ro, Won W.
2011-01-01
While network coding is well known for its efficiency and usefulness in wireless sensor networks, the excessive costs associated with decoding computation and complexity still hinder its adoption into practical use. On the other hand, high-performance microprocessors with heterogeneous multi-cores would be used as processing nodes of wireless sensor networks in the near future. To this end, this paper introduces an efficient network coding algorithm developed for heterogeneous multi-core processors. The proposed idea is fully tested on one of the currently available heterogeneous multi-core processors, the Cell Broadband Engine. PMID:22164053
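The decoding cost discussed above comes from Gaussian elimination over a finite field. A minimal random linear network coding sketch over GF(2) follows; the paper's actual algorithm, field size, and Cell-specific optimizations are not reproduced here, only the encode/decode principle.

```python
import numpy as np

# Minimal random linear network coding over GF(2): coded packets are random
# XOR combinations of k source packets; decoding is Gaussian elimination mod 2.
rng = np.random.default_rng(1)

def encode(packets, n_coded):
    """Return (coefficient matrix, coded packets) for n_coded random combinations."""
    k = len(packets)
    coeffs = rng.integers(0, 2, size=(n_coded, k))
    return coeffs, coeffs @ np.array(packets) % 2

def decode(coeffs, coded, k):
    """Gauss-Jordan elimination mod 2 on the augmented matrix [coeffs | coded]."""
    A = np.concatenate([coeffs, coded], axis=1)
    row = 0
    for col in range(k):
        piv = next((r for r in range(row, len(A)) if A[r, col]), None)
        if piv is None:
            raise ValueError("rank deficient: need more coded packets")
        A[[row, piv]] = A[[piv, row]]        # move pivot row into place
        for r in range(len(A)):
            if r != row and A[r, col]:
                A[r] = (A[r] + A[row]) % 2   # XOR-eliminate this column
        row += 1
    return A[:k, k:]                         # first k rows now hold the sources

packets = [[1, 0, 1, 1], [0, 1, 1, 0], [1, 1, 0, 0]]
while True:                                  # retry if the random draw is singular
    coeffs, coded = encode(packets, n_coded=4)
    try:
        recovered = decode(coeffs, coded, k=3)
        break
    except ValueError:
        pass
```

The per-column elimination loop is exactly the kind of data-parallel XOR work that the paper offloads to the heterogeneous cores.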
Outcomes of Grazing Impacts between Sub-Neptunes in Kepler Multis
NASA Astrophysics Data System (ADS)
Hwang, Jason; Chatterjee, Sourav; Lombardi, James, Jr.; Steffen, Jason H.; Rasio, Frederic
2018-01-01
Studies of high-multiplicity, tightly packed planetary systems suggest that dynamical instabilities are common and affect both the orbits and planet structures, where the compact orbits and typically low densities make physical collisions likely outcomes. Since the structure of many of these planets is such that the mass is dominated by a rocky core but the volume is dominated by a tenuous gas envelope, the sticky-sphere approximation used in dynamical integrators may be a poor model for these collisions. We perform five sets of collision calculations, including detailed hydrodynamics, sampling mass ratios and core mass fractions typical of Kepler Multis. In our primary set of calculations, we use Kepler-36 as a nominal remnant system, as the two planets have a small dynamical separation and an extreme density ratio. We use an N-body code, Mercury 6.2, to integrate initially unstable systems and study the resultant collisions in detail. We use these collisions, focusing on grazing collisions, in combination with realistic planet models created using gas profiles from Modules for Experiments in Stellar Astrophysics and core profiles using equations of state from Seager et al. to perform hydrodynamic calculations, finding scatterings, mergers, and even a potential planet–planet binary. We dynamically integrate the remnant systems, examine their stability, and estimate the final densities, finding that the remnant densities are sensitive to the core masses and that collisions result in generally more stable systems. We provide prescriptions for predicting the outcomes and modeling the changes in mass and orbits following collisions, for general use in dynamical integrators.
Input Files and Procedures for Analysis of SMA Hybrid Composite Beams in MSC.Nastran and ABAQUS
NASA Technical Reports Server (NTRS)
Turner, Travis L.; Patel, Hemant D.
2005-01-01
A thermoelastic constitutive model for shape memory alloys (SMAs) and SMA hybrid composites (SMAHCs) was recently implemented in the commercial codes MSC.Nastran and ABAQUS. The model is implemented and supported within the core of the commercial codes, so no user subroutines or external calculations are necessary. The model and resulting structural analysis has been previously demonstrated and experimentally verified for thermoelastic, vibration and acoustic, and structural shape control applications. The commercial implementations are described in related documents cited in the references, where various results are also shown that validate the commercial implementations relative to a research code. This paper is a companion to those documents in that it provides additional detail on the actual input files and solution procedures and serves as a repository for ASCII text versions of the input files necessary for duplication of the available results.
LATIS3D: The Gold Standard for Laser-Tissue-Interaction Modeling
NASA Astrophysics Data System (ADS)
London, R. A.; Makarewicz, A. M.; Kim, B. M.; Gentile, N. A.; Yang, T. Y. B.
2000-03-01
The goal of this LDRD project has been to create LATIS3D, the world's premier computer program for laser-tissue interaction modeling. The development was based on recent experience with the 2D LATIS code and the ASCI code KULL. With LATIS3D, important applications in laser medical therapy were researched, including dynamical calculations of tissue emulsification and ablation, photothermal therapy, and photon transport for photodynamic therapy. This project also enhanced LLNL's core competency in laser-matter interactions and high-energy-density physics by pushing simulation codes into new parameter regimes and by attracting external expertise. This will benefit both existing LLNL programs, such as ICF and SBSS, and emerging programs in medical technology and other laser applications. The purpose of this project was to develop and apply a computer program for laser-tissue interaction modeling to aid in the development of new instruments and procedures in laser medicine.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deslippe, Jack; da Jornada, Felipe H.; Vigil-Fowler, Derek
2016-10-06
We profile and optimize calculations performed with the BerkeleyGW code on the Xeon-Phi architecture. BerkeleyGW depends both on hand-tuned critical kernels and on BLAS and FFT libraries. We describe the optimization process and performance improvements achieved. We discuss a layered parallelization strategy to take advantage of vector, thread, and node-level parallelism. We discuss locality changes (including the consequence of the lack of an L3 cache) and effective use of the on-package high-bandwidth memory. We show preliminary results on Knights Landing, including a roofline study of code performance before and after a number of optimizations. We find that the GW method is particularly well suited for many-core architectures due to the ability to exploit a large amount of parallelism over plane-wave components, band pairs, and frequencies.
Consistent criticality and radiation studies of Swiss spent nuclear fuel: The CS2M approach.
Rochman, D; Vasiliev, A; Ferroukhi, H; Pecchia, M
2018-06-15
In this paper, a new method is proposed to systematically calculate canister loading curves and radiation sources at the same time, based on the inventory information from an in-core fuel management system. As a demonstration, the isotopic contents of the assemblies come from a Swiss PWR, considering more than 6000 cases from 34 reactor cycles. The CS2M approach consists of combining four codes: CASMO and SIMULATE to extract the assembly characteristics (based on validated models), the SNF code for source emission, and MCNP for criticality calculations for specific canister loadings. The considered cases cover enrichments from 1.9 to 5.0% for the UO2 assemblies and 4.8% for the MOX, with assembly burnup values from 7 to 74 MWd/kgU. Because such a study is based on the individual fuel assembly history, it opens the possibility of optimizing canister loadings from the point of view of criticality, decay heat, and emission sources.
Efficient Calculation of Exact Exchange Within the Quantum Espresso Software Package
NASA Astrophysics Data System (ADS)
Barnes, Taylor; Kurth, Thorsten; Carrier, Pierre; Wichmann, Nathan; Prendergast, David; Kent, Paul; Deslippe, Jack
Accurate simulation of condensed matter at the nanoscale requires careful treatment of the exchange interaction between electrons. In the context of plane-wave DFT, these interactions are typically represented through the use of approximate functionals. Greater accuracy can often be obtained through the use of functionals that incorporate some fraction of exact exchange; however, evaluation of the exact exchange potential is often prohibitively expensive. We present an improved algorithm for the parallel computation of exact exchange in Quantum Espresso, an open-source software package for plane-wave DFT simulation. Through the use of aggressive load balancing and on-the-fly transformation of internal data structures, our code exhibits speedups of approximately an order of magnitude for practical calculations. Additional optimizations are presented targeting the many-core Intel Xeon-Phi "Knights Landing" architecture, which largely powers NERSC's new Cori system. We demonstrate the successful application of the code to difficult problems, including simulation of water at a platinum interface and computation of the X-ray absorption spectra of transition metal oxides.
Calculated organ doses for Mayak production association central hall using ICRP and MCNP.
Choe, Dong-Ok; Shelkey, Brenda N; Wilde, Justin L; Walk, Heidi A; Slaughter, David M
2003-03-01
As part of an ongoing dose reconstruction project, equivalent organ dose rates from photons and neutrons were estimated using the energy spectra measured in the central hall above the graphite reactor core located in the Russian Mayak Production Association facility. Reconstruction of the work environment was necessary due to the lack of personal dosimeter data for neutrons in the period prior to 1987. A typical worker scenario for the central hall was developed for the Monte Carlo N-Particle (MCNP-4B) code. The resultant equivalent dose rates for neutrons and photons were compared with the equivalent dose rates derived from calculations using the conversion coefficients in International Commission on Radiological Protection Publications 51 and 74 in order to validate the model scenario for this Russian facility. The MCNP results were in good agreement with the results of the ICRP publications, indicating the modeling scenario was consistent with actual work conditions given the spectra provided. The MCNP code will allow for additional orientations to accurately reflect source locations.
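The conversion-coefficient approach amounts to folding the measured fluence spectrum with per-energy-bin fluence-to-dose coefficients. A minimal sketch with illustrative placeholder numbers (not actual ICRP coefficients or Mayak spectra):

```python
# Fold a binned fluence-rate spectrum with fluence-to-dose conversion
# coefficients, as in the conversion-coefficient approach. All numbers
# below are illustrative placeholders, not ICRP values or measured spectra.
def dose_rate(fluence_rate, conv_coeff):
    """Equivalent-dose rate (pSv/s) from per-bin fluence rates (cm^-2 s^-1)
    and per-bin conversion coefficients (pSv cm^2)."""
    assert len(fluence_rate) == len(conv_coeff)
    return sum(f * h for f, h in zip(fluence_rate, conv_coeff))

phi = [1.0e3, 5.0e2, 1.0e2]   # fluence rate in three energy bins
h = [10.0, 50.0, 400.0]       # conversion coefficient per bin (placeholders)
rate = dose_rate(phi, h)      # high-energy bins dominate despite lower fluence
```

Comparing such a folded estimate against a full MCNP transport calculation is how the worker-scenario model was validated.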
NASA Technical Reports Server (NTRS)
Kumar, A.; Rudy, D. H.; Drummond, J. P.; Harris, J. E.
1982-01-01
Several two- and three-dimensional external and internal flow problems solved on the STAR-100 and CYBER-203 vector processing computers are described. The flow field was described by the full Navier-Stokes equations, which were solved by explicit finite-difference algorithms. Problem results and computer system requirements are presented. Program organization and database structure for three-dimensional computer codes, which will eliminate or improve on page faulting, are discussed. Storage requirements for three-dimensional codes are reduced by calculating transformation metric data at each step. As a result, the number of in-core grid points was increased by 50% to 150,000, with a 10% increase in execution time. An assessment of current and future machine requirements shows that even on the CYBER-205 computer only a few problems can be solved realistically. Estimates reveal that the present situation is more storage limited than compute-rate limited, but advancements in both storage and speed are essential to realistically calculate three-dimensional flow.
Tree-based solvers for adaptive mesh refinement code FLASH - I: gravity and optical depths
NASA Astrophysics Data System (ADS)
Wünsch, R.; Walch, S.; Dinnbier, F.; Whitworth, A.
2018-04-01
We describe an OctTree algorithm for the MPI parallel, adaptive mesh refinement code FLASH, which can be used to calculate the gas self-gravity, and also the angle-averaged local optical depth, for treating ambient diffuse radiation. The algorithm communicates to the different processors only those parts of the tree that are needed to perform the tree-walk locally. The advantage of this approach is a relatively low memory requirement, important in particular for the optical depth calculation, which needs to process information from many different directions. This feature also enables a general tree-based radiation transport algorithm that will be described in a subsequent paper, and delivers excellent scaling up to at least 1500 cores. Boundary conditions for gravity can be either isolated or periodic, and they can be specified in each direction independently, using a newly developed generalization of the Ewald method. The gravity calculation can be accelerated with the adaptive block update technique by partially re-using the solution from the previous time-step. Comparison with the FLASH internal multigrid gravity solver shows that tree-based methods provide a competitive alternative, particularly for problems with isolated or mixed boundary conditions. We evaluate several multipole acceptance criteria (MACs) and identify a relatively simple approximate partial error MAC which provides high accuracy at low computational cost. The optical depth estimates are found to agree very well with those of the RADMC-3D radiation transport code, with the tree-solver being much faster. Our algorithm is available in the standard release of the FLASH code in version 4.0 and later.
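A purely geometric multipole acceptance criterion (MAC) of the kind evaluated above can be sketched with a toy 1D Barnes-Hut-style tree: a node is accepted as a single monopole when its size subtends a small enough opening angle from the target point. The structure and parameters below are illustrative assumptions, not the FLASH tree-solver implementation.

```python
# Toy 1D Barnes-Hut-style gravity tree with a geometric multipole
# acceptance criterion (MAC). Illustrative only.

class Node:
    """Binary tree node over a 1D interval, storing the monopole (mass, com)."""
    def __init__(self, xlo, xhi, masses, positions):
        self.size = xhi - xlo
        self.mass = float(sum(masses))
        self.com = sum(m * x for m, x in zip(masses, positions)) / self.mass
        self.children = []
        if len(masses) > 1:                  # split until one particle per leaf
            mid = 0.5 * (xlo + xhi)
            for lo, hi in ((xlo, mid), (mid, xhi)):
                sel = [(m, x) for m, x in zip(masses, positions) if lo <= x < hi]
                if sel:
                    ms, xs = zip(*sel)
                    self.children.append(Node(lo, hi, ms, xs))

def potential(node, x, theta=0.5, eps=1e-3):
    """Softened potential at x; accept a node as a monopole if size/d < theta."""
    d = abs(x - node.com) + eps
    if not node.children or node.size / d < theta:
        return -node.mass / d                # MAC satisfied: use monopole
    return sum(potential(c, x, theta, eps) for c in node.children)

masses = [1.0, 1.0, 1.0, 1.0]
positions = [0.1, 0.2, 0.7, 0.9]
root = Node(0.0, 1.0, masses, positions)
phi_tree = potential(root, 5.0)              # far away: root node accepted
phi_direct = -sum(m / (abs(5.0 - x) + 1e-3) for m, x in zip(masses, positions))
```

Loosening theta trades accuracy for fewer node openings, which is exactly the cost/accuracy dial the various MACs in the paper tune.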
Extensions of the MCNP5 and TRIPOLI4 Monte Carlo Codes for Transient Reactor Analysis
NASA Astrophysics Data System (ADS)
Hoogenboom, J. Eduard; Sjenitzer, Bart L.
2014-06-01
To simulate reactor transients for safety analysis with the Monte Carlo method, the generation and decay of delayed neutron precursors is implemented in the MCNP5 and TRIPOLI4 general-purpose Monte Carlo codes. Important new variance reduction techniques, such as forced decay of precursors in each time interval and the branchless collision method, are included to obtain reasonable statistics for the power production per time interval. For simulation of practical reactor transients, the feedback effect from the thermal hydraulics must also be included. This requires coupling the Monte Carlo code with a thermal-hydraulics (TH) code, which provides the temperature distribution in the reactor; the temperature distribution affects the neutron transport via the cross-section data. The TH code also provides the coolant density distribution in the reactor, directly influencing the neutron transport. Different techniques for this coupling are discussed. As a demonstration, a 3x3 mini fuel assembly with a moving control rod is considered for MCNP5, and a mini core consisting of 3x3 PWR fuel assemblies with control rods and burnable poisons for TRIPOLI4. Results are shown for reactor transients due to control rod movement or withdrawal. The TRIPOLI4 transient calculation is started at low power and includes thermal-hydraulic feedback. The power rises about 10 decades and the reactor finally stabilises at a much higher power level than the initial one. The examples demonstrate that the modified Monte Carlo codes are capable of performing correct transient calculations, taking into account all geometrical and cross-section detail.
ab initio MD simulations of geomaterials with ~1000 atoms
NASA Astrophysics Data System (ADS)
Martin, G. B.; Kirtman, B.; Spera, F. J.
2009-12-01
In the last two decades, ab initio studies of materials using Density Functional Theory (DFT) have increased exponentially in popularity. DFT codes are now used routinely to simulate properties of geomaterials, mainly silicates and geochemically important metals such as Fe. These materials are ubiquitous in the Earth's mantle and core and in terrestrial exoplanets. Because of computational limitations, most First Principles Molecular Dynamics (FPMD) calculations are done on systems of only ~100 atoms for a few picoseconds. While this approach can be useful for calculating physical quantities related to crystal structure, vibrational frequency, and other lattice-scale properties (especially in crystals), it is statistically marginal for duplicating physical properties of the liquid state such as transport and structure. In MD simulations in the NVE ensemble, temperature (T) and pressure (P) fluctuations scale as N^(-1/2); small particle number (N) systems are therefore characterized by greater statistical uncertainty in state-point location than large-N systems. Previous studies have used codes such as VASP, where CPU time increases with N^2, making calculations with N much greater than 100 impractical. SIESTA (Soler et al. 2002) is a DFT code that enables electronic structure and MD computations on larger systems (N ~ 10^3) by making some approximations, such as localized numerical orbitals, that would be useful in modeling some properties of geomaterials. Here we test the applicability of SIESTA for simulating geosilicates, both hydrous and anhydrous, in the solid and liquid state. We have used SIESTA for lattice calculations of brucite, Mg(OH)2, that compare very well to experiment and to calculations using CRYSTAL, another DFT code. Good agreement between more classical DFT calculations and SIESTA is needed to justify the study of geosilicates using SIESTA across the range of pressures and temperatures relevant to the Earth's interior.
Thus, it is useful to adjust parameters in SIESTA in accordance with calculations from CRYSTAL as a check on feasibility. Results reported here suggest SIESTA may indeed be useful for modeling silicate liquids at very high T and P.
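The N^(-1/2) scaling of ensemble fluctuations is easy to demonstrate numerically: the scatter of the mean particle energy across independent samples of an N-particle system shrinks as 1/sqrt(N). A toy sketch with an illustrative energy distribution (not an actual MD ensemble):

```python
import numpy as np

# Toy demonstration that the instantaneous "temperature" (mean particle
# energy) fluctuates with scatter ~ N^(-1/2). The exponential energy
# distribution is an illustrative stand-in, not an actual MD ensemble.
rng = np.random.default_rng(0)

def temperature_scatter(n_atoms, trials=2000):
    """Std-dev of the sample-mean energy over many independent N-atom samples."""
    energies = rng.exponential(1.0, size=(trials, n_atoms))
    return energies.mean(axis=1).std()

s_small, s_large = temperature_scatter(100), temperature_scatter(1000)
ratio = s_small / s_large   # expect ~ sqrt(1000 / 100) ~ 3.2
```

Going from ~100-atom to ~1000-atom cells thus tightens the state-point location by roughly a factor of three, which is the statistical motivation for the larger SIESTA systems.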
DESIGN CHARACTERISTICS OF THE IDAHO NATIONAL LABORATORY HIGH-TEMPERATURE GAS-COOLED TEST REACTOR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sterbentz, James; Bayless, Paul; Strydom, Gerhard
2016-11-01
Uncertainty and sensitivity analysis is an indispensable element of any substantial attempt at reactor simulation validation. The quantification of uncertainties in nuclear engineering has grown more important, and the IAEA Coordinated Research Program (CRP) on High-Temperature Gas-Cooled Reactors (HTGRs), initiated in 2012, aims to investigate the various uncertainty quantification methodologies for this type of reactor. The first phase of the CRP is dedicated to the estimation of cell and lattice model uncertainties due to the neutron cross-section covariances. Phase II is oriented towards the investigation of uncertainties propagated from the lattice to the coupled neutronics/thermal-hydraulics core calculations. Nominal results for the prismatic single block (Ex.I-2a) and super cell models (Ex.I-2c) have been obtained using the SCALE 6.1.3 two-dimensional lattice code NEWT coupled to the TRITON sequence for cross-section generation. In this work, the TRITON/NEWT flux-weighted cross sections obtained for Ex.I-2a and various models of Ex.I-2c are utilized to perform a sensitivity analysis of the MHTGR-350 core power densities and eigenvalues. The core solutions are obtained with the INL coupled code PHISICS/RELAP5-3D, utilizing a fixed-temperature feedback for Ex. II-1a. It is observed that the core power density does not vary significantly in shape, but the magnitude of the variations increases as the moderator-to-fuel ratio increases in the super cell lattice models.
I-NERI Quarterly Technical Report (April 1 to June 30, 2005)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang Oh; Prof. Hee Cheon NO; Prof. John Lee
2005-06-01
The objective of this Korean/United States laboratory/university collaboration is to develop new advanced computational methods for safety analysis codes for very-high-temperature gas-cooled reactors (VHTGRs) and to provide numerical and experimental validation of these computer codes. This study consists of five tasks for FY-03: (1) development of computational methods for the VHTGR, (2) theoretical modification of the aforementioned computer codes for molecular diffusion (RELAP5/ATHENA) and modeling of CO and CO2 equilibrium (MELCOR), (3) development of a state-of-the-art methodology for VHTGR neutronic analysis and calculation of accurate power distributions and decay heat deposition rates, (4) a reactor cavity cooling system experiment, and (5) a graphite oxidation experiment. Second quarter of Year 3: (A) Prof. NO and Kim continued Task 1. As a further plant application of the GAMMA code, we conducted two analyses: an IAEA GT-MHR benchmark calculation for LPCC and an air ingress analysis for a 600 MWt PMR. The GAMMA code shows a peak fuel temperature trend comparable to those of other countries' codes. The analysis results for air ingress show a much different trend from that of the previous PBR analysis: later onset of natural circulation and a less significant rise in graphite temperature. (B) Prof. Park continued Task 2. We have designed a new separate-effect test device having the same heat transfer area but a different diameter and total number of U-bends of the air cooling pipe. The new design has a smaller pressure drop in the air cooling pipe than the previous one, as it was designed with a larger diameter and fewer U-bends. With this device, additional experiments have been performed to obtain temperature distributions of the water tank and of the surface and center of the cooling pipe along its axis. The results will be used to optimize the design of the SNU-RCCS. (C) Prof. NO continued Task 3.
The experimental work on air ingress is proceeding without any concern: with nuclear graphite IG-110, various kinetic parameters and reaction rates for the C/CO2 reaction were measured. The rates of the C/CO2 reaction were then compared to those of the C/O2 reaction, and a rate equation for C/CO2 has been developed. (D) INL added models to RELAP5/ATHENA to calculate the chemical reactions in a VHTR during an air ingress accident. Limited testing of the models indicates that they calculate a correct spatial distribution of gas compositions. (E) INL benchmarked NACOK natural circulation data. (F) Professor Lee et al. at the University of Michigan (UM) continued Task 5. The funding was received from the DOE Richland Office at the end of May, and the subcontract paperwork was delivered to the UM on the sixth of June. The objective of this task is to develop a state-of-the-art neutronics model for determining power distributions and decay heat deposition rates in a VHTGR core. Our effort during the reporting period covered reactor physics analysis of coated particles and coupled nuclear-thermal-hydraulic (TH) calculations, together with initial calculations of decay heat deposition rates in the core.
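Graphite oxidation rate equations of the kind measured in item (C) are conventionally fitted in Arrhenius form. The sketch below shows the form only: the pre-exponential factors and activation energies are invented placeholders, not the measured IG-110 parameters, which this report excerpt does not give.

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def arrhenius_rate(A, Ea, T):
    """Arrhenius rate law: k = A * exp(-Ea / (R*T))."""
    return A * math.exp(-Ea / (R * T))

# Hypothetical parameters: the C/CO2 (Boudouard) reaction is slower and
# has a higher activation energy than the C/O2 reaction.
A_co2, Ea_co2 = 1.0e7, 360.0e3   # 1/s, J/mol (illustrative only)
A_o2, Ea_o2 = 1.0e5, 200.0e3     # 1/s, J/mol (illustrative only)

T = 1273.0  # K, a representative graphite temperature during air ingress
r_co2 = arrhenius_rate(A_co2, Ea_co2, T)
r_o2 = arrhenius_rate(A_o2, Ea_o2, T)
```

With these made-up constants the C/O2 rate dominates at 1273 K, matching the qualitative comparison the report describes between the two reactions.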
Low-power lead-cooled fast reactor loaded with MOX-fuel
NASA Astrophysics Data System (ADS)
Sitdikov, E. R.; Terekhova, A. M.
2017-01-01
A fast reactor intended for research, for the education of undergraduate and doctoral students in handling innovative fast reactors, and for training specialists for atomic research centers and nuclear power plants (BRUTs) was considered. A hard neutron spectrum is achieved in the fast reactor through a compact core and lead coolant. Prompt-neutron runaway of the reactor is excluded by the low reactivity margin, which is less than the effective fraction of delayed neutrons. The possibility of using MOX fuel in the BRUTs reactor was examined, and the growth in Keff from replacing the natural lead coolant with 208Pb coolant was evaluated. The calculations and the reactor core model were performed using the Serpent Monte Carlo code.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suzuki, Akihiro; Maeda, Keiichi; Shigeyama, Toshikazu
A two-dimensional special relativistic radiation-hydrodynamics code is developed and applied to numerical simulations of supernova shock breakout in bipolar explosions of a blue supergiant. Our calculations successfully simulate the dynamical evolution of a blast wave in the star and its emergence from the surface. Results of the model with spherical energy deposition show good agreement with previous simulations. Furthermore, we calculate several models with bipolar energy deposition and compare their results with the spherically symmetric model. The bolometric light curves of the shock breakout emission are calculated by a ray-tracing method. Our radiation-hydrodynamic models indicate that the early part of the shock breakout emission can be used to probe the geometry of the blast wave produced as a result of the gravitational collapse of the iron core.
Bohlin, Jon; Eldholm, Vegard; Pettersson, John H O; Brynildsrud, Ola; Snipen, Lars
2017-02-10
The core genome consists of genes shared by the vast majority of strains of a species and is therefore assumed to have been subjected to substantially stronger purifying selection than the more mobile elements of the genome, also known as the accessory genome. Here we examine intragenic base composition differences in core genomes and corresponding accessory genomes in 36 species, represented by the genomes of 731 bacterial strains, to assess the impact of selective forces on base composition in microbes. We also explore, in turn, how these results compare with findings for whole genome intragenic regions. We found that GC content in coding regions is significantly higher in core genomes than accessory genomes and whole genomes. Likewise, GC content variation within coding regions was significantly lower in core genomes than in accessory genomes and whole genomes. Relative entropy in coding regions, measured as the difference between observed and expected trinucleotide frequencies estimated from mononucleotide frequencies, was significantly higher in the core genomes than in accessory and whole genomes. Relative entropy was positively associated with coding region GC content within the accessory genomes, but not within the corresponding coding regions of core or whole genomes. The higher intragenic GC content and relative entropy, as well as the lower GC content variation, observed in the core genomes is most likely associated with selective constraints. It is unclear whether the positive association between GC content and relative entropy in the more mobile accessory genomes constitutes signatures of selection or selectively neutral processes.
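The relative-entropy measure described above can be sketched in a few lines: it is the Kullback-Leibler divergence between observed trinucleotide frequencies and the frequencies expected if bases occurred independently at their mononucleotide frequencies. The toy sequences below are illustrative, not genomic data.

```python
import math
import random
from collections import Counter

def relative_entropy(seq):
    """KL divergence (nats) between observed trinucleotide frequencies and
    those expected from the mononucleotide composition alone."""
    n = len(seq)
    mono = Counter(seq)
    tri = Counter(seq[i:i + 3] for i in range(n - 2))
    n_tri = sum(tri.values())
    d = 0.0
    for t, count in tri.items():
        obs = count / n_tri
        exp = (mono[t[0]] / n) * (mono[t[1]] / n) * (mono[t[2]] / n)
        d += obs * math.log(obs / exp)
    return d

# A repetitive sequence has far more trinucleotide structure than a
# scrambled sequence with the same base composition.
structured = "ATGATGATG" * 30
shuffled = list(structured)
random.Random(1).shuffle(shuffled)
shuffled = "".join(shuffled)
```

For the repetitive sequence nearly all of the trinucleotide distribution is unexplained by base composition (entropy near ln 9 ≈ 2.2 nats), while the shuffled version retains only sampling noise, mirroring the core-versus-accessory contrast the abstract reports.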
NMR Shielding in Metals Using the Augmented Plane Wave Method
2015-01-01
We present calculations of solid state NMR magnetic shielding in metals, which include both the orbital and the complete spin response of the system in a consistent way. The latter contains an induced spin-polarization of the core states and needs an all-electron self-consistent treatment. In particular, for transition metals, the spin hyperfine field originates not only from the polarization of the valence s-electrons, but the induced magnetic moment of the d-electrons polarizes the core s-states in the opposite direction. The method is based on DFT and the augmented plane wave approach as implemented in the WIEN2k code. A comparison between calculated and measured NMR shifts indicates that first-principles calculations can obtain converged results and are more reliable than initially concluded based on previous publications. Nevertheless, large k-meshes (up to 2,000,000 k-points in the full Brillouin zone) and some Fermi broadening are necessary. Our results show that, in general, both spin and orbital components of the NMR shielding must be evaluated in order to reproduce experimental shifts, because the orbital part cancels the shift of the usually highly ionic reference compound only for simple sp-elements but not for transition metals. This development paves the way for routine NMR calculations of metallic systems. PMID:26322148
GAMSOR: Gamma Source Preparation and DIF3D Flux Solution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, M. A.; Lee, C. H.; Hill, R. N.
2017-06-28
Nuclear reactors that rely upon the fission reaction have two modes of thermal energy deposition in the reactor system: neutron absorption and gamma absorption. The gamma rays are typically generated by neutron capture reactions or during the fission process, which means the primary driver of energy production is of course the neutron interaction. In conventional reactor physics methods, the gamma heating component is ignored, such that the gamma absorption is forced to occur at the gamma emission site. For experimental reactor systems like EBR-II and FFTF, the placement of structural pins and assemblies internal to the core leads to problems with power heating predictions because there is no fission power source internal to the assembly to dictate a spatial distribution of the power. As part of the EBR-II support work in the 1980s, the GAMSOR code was developed to assist analysts in calculating the gamma heating. The GAMSOR code is a modified version of DIF3D and actually functions within a sequence of DIF3D calculations. The gamma flux in a conventional fission reactor system does not perturb the neutron flux, and thus the gamma flux calculation can be cast as a fixed-source problem given a solution to the steady-state neutron flux equation. This leads to a sequence of DIF3D calculations, called the GAMSOR sequence, which involves solving the neutron flux, then the gamma flux, and then combining the results in a summary edit. In this manuscript, we describe the GAMSOR code and detail how it is put together and functions. We also discuss how to set up the GAMSOR sequence and the input for each DIF3D calculation in the sequence.
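The GAMSOR sequence described above, a neutron eigenvalue solve, a gamma fixed-source solve driven by it, and a combined heating edit, can be sketched on a toy one-group, one-dimensional slab. All cross sections below are invented numbers, and the solver is a naive finite-difference diffusion model, not DIF3D.

```python
import numpy as np

n = 50                                # mesh cells
h = 1.0                               # cell width (cm)
D, siga, nusigf = 1.2, 0.05, 0.06     # neutron diffusion parameters (made up)
Dg, sigag = 0.8, 0.02                 # gamma "diffusion" parameters (made up)
gamma_per_reaction = 0.3              # gamma source per neutron reaction

def diffusion_matrix(d, sig):
    """Finite-difference 1-D diffusion operator with zero-flux boundaries."""
    a = np.zeros((n, n))
    for i in range(n):
        a[i, i] = 2.0 * d / h**2 + sig
        if i > 0:
            a[i, i - 1] = -d / h**2
        if i < n - 1:
            a[i, i + 1] = -d / h**2
    return a

# Step 1: power iteration for the fundamental neutron flux and k_eff.
A = diffusion_matrix(D, siga)
phi = np.ones(n)
k = 1.0
for _ in range(200):
    phi_new = np.linalg.solve(A, nusigf * phi / k)
    k *= phi_new.sum() / phi.sum()
    phi = phi_new

# Step 2: gamma fixed-source solve, with a source proportional to the
# neutron reaction rate (the gammas do not perturb the neutron flux).
gamma_src = gamma_per_reaction * (siga + nusigf) * phi
phi_g = np.linalg.solve(diffusion_matrix(Dg, sigag), gamma_src)

# Step 3: summary edit combining neutron and gamma absorption heating.
heating = siga * phi + sigag * phi_g
```

The key structural point mirrored here is that step 2 is a linear fixed-source problem once step 1 is done, which is exactly why GAMSOR can run as a sequence of DIF3D calculations.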
Heuristic rules embedded genetic algorithm for in-core fuel management optimization
NASA Astrophysics Data System (ADS)
Alim, Fatih
The objective of this study was to develop a unique methodology and a practical tool for designing the loading pattern (LP) and burnable poison (BP) pattern for a given Pressurized Water Reactor (PWR) core. Because of the large number of possible combinations for the fuel assembly (FA) loading in the core, the design of the core configuration is a complex optimization problem. It requires finding an optimal FA arrangement and BP placement in order to achieve maximum cycle length while satisfying the safety constraints. Genetic Algorithms (GAs) have already been used to solve this problem for LP optimization for both PWRs and Boiling Water Reactors (BWRs). The GA, a stochastic method, works with a group of solutions and uses random variables to make decisions. Based on the theory of evolution, the GA involves natural selection and reproduction of the individuals in the population for the next generation. The GA works by creating an initial population, evaluating it, and then improving the population by using evolutionary operators. To solve this optimization problem, an LP optimization package, the GARCO (Genetic Algorithm Reactor Code Optimization) code, was developed in the framework of this thesis. This code is applicable to all types of PWR cores having different geometries and structures, with an unlimited number of FA types in the inventory. To reach this goal, an innovative GA was developed by modifying the classical representation of the genotype. To obtain the best result in a shorter time, not only the representation but also the algorithm was changed, to use in-core fuel management heuristic rules. The improved GA code was tested to demonstrate and verify the advantages of the new enhancements. The developed methodology is explained in this thesis, and preliminary results are shown for the hexagonal-geometry VVER-1000 reactor core and the TMI-1 PWR.
The core physics code used for VVER in this research is Moby-Dick, which was developed to analyze the VVER by SKODA Inc. The SIMULATE-3 code, which is an advanced two-group nodal code, is used to analyze the TMI-1.
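The GA machinery described above (population, selection, reproduction, and mutation on a permutation genotype) can be sketched minimally as follows. The "core" here is a 1-D list of assembly reactivities and the fitness is a crude power-flatness proxy, standing in for the real evaluations GARCO obtains from core physics codes such as Moby-Dick or SIMULATE-3; all numbers are invented.

```python
import random

rng = random.Random(42)
ASSEMBLIES = [1.00, 1.05, 1.10, 1.15, 1.20, 1.25, 1.30, 1.35]  # toy reactivities

def fitness(pattern):
    # Penalize large reactivity jumps between neighbors: a crude
    # stand-in for power-peaking constraints in a real LP evaluation.
    return -sum(abs(a - b) for a, b in zip(pattern, pattern[1:]))

def crossover(p1, p2):
    # Order crossover: keep a slice of p1, fill the rest in p2's order,
    # so the child is always a valid permutation.
    i, j = sorted(rng.sample(range(len(p1)), 2))
    child = p1[i:j]
    rest = [g for g in p2 if g not in child]
    return rest[:i] + child + rest[i:]

def mutate(p):
    i, j = rng.sample(range(len(p)), 2)
    p[i], p[j] = p[j], p[i]

pop = [rng.sample(ASSEMBLIES, len(ASSEMBLIES)) for _ in range(30)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:10]                      # elitism
    pop = survivors + [crossover(rng.choice(survivors), rng.choice(survivors))
                       for _ in range(20)]
    for child in pop[10:]:
        if rng.random() < 0.3:
            mutate(child)
best = max(pop, key=fitness)
```

Heuristic rules of the kind the thesis embeds (e.g., forbidding certain assemblies in certain positions) would prune this search space before or during reproduction, which is the enhancement that distinguishes GARCO from a classical GA.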
Development of the V4.2m5 and V5.0m0 Multigroup Cross Section Libraries for MPACT for PWR and BWR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Kang Seog; Clarno, Kevin T.; Gentry, Cole
2017-03-01
The MPACT neutronics module of the Consortium for Advanced Simulation of Light Water Reactors (CASL) core simulator is a 3-D whole-core transport code being developed for the CASL toolset, Virtual Environment for Reactor Analysis (VERA). Key characteristics of the MPACT code include (1) a subgroup method for resonance self-shielding and (2) a whole-core transport solver with a 2-D/1-D synthesis method. The MPACT code requires a cross-section library to support all of its core simulation capabilities; this library is the component with the greatest influence on simulation accuracy.
Abraham, Mark James; Murtola, Teemu; Schulz, Roland; ...
2015-07-15
GROMACS is one of the most widely used open-source and free software codes in chemistry, used primarily for dynamical simulations of biomolecules. It provides a rich set of calculation types, preparation and analysis tools. Several advanced techniques for free-energy calculations are supported. In version 5, it reaches new performance heights through several new and enhanced parallelization algorithms. These work on every level: SIMD registers inside cores, multithreading, heterogeneous CPU–GPU acceleration, state-of-the-art 3D domain decomposition, and ensemble-level parallelization through built-in replica exchange and the separate Copernicus framework. Finally, the latest best-in-class compressed trajectory storage format is supported.
Optimization of small long-life PWR based on thorium fuel
NASA Astrophysics Data System (ADS)
Subkhi, Moh Nurul; Suud, Zaki; Waris, Abdul; Permana, Sidik
2015-09-01
A conceptual design of a small long-life Pressurized Water Reactor (PWR) using thorium fuel has been investigated in its neutronic aspects. The cell burnup calculations were performed by the PIJ module of the SRAC code using a nuclear data library based on JENDL 3.2, while the multi-energy-group diffusion calculations were performed in three-dimensional X-Y-Z core geometry by COREBN. The excess reactivity of thorium nitride fuel with ZIRLO cladding is followed over 5 years of burnup without refueling. Optimization of the 350 MWe long-life PWR based on 5% 233U & 2.8% 231Pa, 6% 233U & 2.8% 231Pa, and 7% 233U & 6% 231Pa gives low excess reactivity.
batman: BAsic Transit Model cAlculatioN in Python
NASA Astrophysics Data System (ADS)
Kreidberg, Laura
2015-11-01
I introduce batman, a Python package for modeling exoplanet transit light curves. The batman package supports calculation of light curves for any radially symmetric stellar limb darkening law, using a new integration algorithm for models that cannot be quickly calculated analytically. The code uses C extension modules to speed up model calculation and is parallelized with OpenMP. For a typical light curve with 100 data points in transit, batman can calculate one million quadratic limb-darkened models in 30 seconds with a single 1.7 GHz Intel Core i5 processor. The same calculation takes seven minutes using the four-parameter nonlinear limb darkening model (computed to 1 ppm accuracy). Maximum truncation error for integrated models is an input parameter that can be set as low as 0.001 ppm, ensuring that the community is prepared for the precise transit light curves we anticipate measuring with upcoming facilities. The batman package is open source and publicly available at https://github.com/lkreidberg/batman .
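To illustrate the kind of model batman evaluates (without batman's C extensions or its integration algorithm), here is a pure-Python small-planet approximation of a quadratic limb-darkened transit. The function signature and parameter names are simplified stand-ins, not the package's actual API, and the approximation is only valid for small planet-to-star radius ratios.

```python
import math

def transit_lightcurve(t, t0, per, rp, a, inc_deg, u1, u2):
    """Relative flux vs. time for a small planet on a circular orbit,
    with quadratic limb darkening I(mu) = 1 - u1*(1-mu) - u2*(1-mu)**2.
    t in days; rp and a in units of the stellar radius."""
    inc = math.radians(inc_deg)
    # disk-integrated intensity normalization for the quadratic law
    norm = 1.0 - u1 / 3.0 - u2 / 6.0
    flux = []
    for ti in t:
        phase = 2.0 * math.pi * (ti - t0) / per
        # sky-projected star-planet separation in stellar radii
        x = a * math.sin(phase)
        y = a * math.cos(phase) * math.cos(inc)
        z = math.hypot(x, y)
        if z >= 1.0 + rp or math.cos(phase) < 0.0:
            flux.append(1.0)          # out of transit, or behind the star
            continue
        mu = math.sqrt(max(0.0, 1.0 - z * z))
        intensity = 1.0 - u1 * (1.0 - mu) - u2 * (1.0 - mu) ** 2
        # small-planet approximation: depth = rp^2 * I(mu) / <I>
        flux.append(1.0 - rp * rp * intensity / norm)
    return flux

ts = [i * 0.001 for i in range(-100, 101)]   # days around mid-transit
lc = transit_lightcurve(ts, t0=0.0, per=3.0, rp=0.1, a=10.0,
                        inc_deg=90.0, u1=0.3, u2=0.2)
```

The depth at mid-transit, rp^2 / (1 - u1/3 - u2/6), is the quantity batman computes to much higher fidelity (including ingress/egress overlap integrals) at the speeds quoted in the abstract.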
Embedded-cluster calculations in a numeric atomic orbital density-functional theory framework.
Berger, Daniel; Logsdail, Andrew J; Oberhofer, Harald; Farrow, Matthew R; Catlow, C Richard A; Sherwood, Paul; Sokol, Alexey A; Blum, Volker; Reuter, Karsten
2014-07-14
We integrate the all-electron electronic structure code FHI-aims into the general ChemShell package for solid-state embedding quantum and molecular mechanical (QM/MM) calculations. A major undertaking in this integration is the implementation of pseudopotential functionality into FHI-aims to describe cations at the QM/MM boundary through effective core potentials, thereby preventing spurious overpolarization of the electronic density. Based on numeric atomic orbital basis sets, FHI-aims offers particularly efficient access to exact exchange and second order perturbation theory, rendering the established QM/MM setup an ideal tool for hybrid and double-hybrid level density functional theory calculations of solid systems. We illustrate this capability by calculating the reduction potential of Fe in the Fe-substituted ZSM-5 zeolitic framework and the reaction energy profile for (photo-)catalytic water oxidation at TiO2(110).
Wu, Xin; Koslowski, Axel; Thiel, Walter
2012-07-10
In this work, we demonstrate that semiempirical quantum chemical calculations can be accelerated significantly by leveraging the graphics processing unit (GPU) as a coprocessor on a hybrid multicore CPU-GPU computing platform. Semiempirical calculations using the MNDO, AM1, PM3, OM1, OM2, and OM3 model Hamiltonians were systematically profiled for three types of test systems (fullerenes, water clusters, and solvated crambin) to identify the most time-consuming sections of the code. The corresponding routines were ported to the GPU and optimized employing both existing library functions and a GPU kernel that carries out a sequence of noniterative Jacobi transformations during pseudodiagonalization. The overall computation times for single-point energy calculations and geometry optimizations of large molecules were reduced by one order of magnitude for all methods, as compared to runs on a single CPU core.
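The GPU kernel mentioned above applies sequences of 2x2 Jacobi rotations. The sketch below shows the underlying operation on a small random symmetric matrix (a stand-in for a Fock matrix): each sweep of rotations shrinks the off-diagonal norm while preserving the eigenvalues. This is classical Jacobi diagonalization, not the occupied-virtual pseudodiagonalization kernel of the MNDO code itself.

```python
import numpy as np

rng = np.random.default_rng(0)
F = rng.standard_normal((6, 6))
F = (F + F.T) / 2.0                    # symmetric "Fock-like" matrix

def jacobi_sweep(A):
    """One cyclic sweep of 2x2 Jacobi rotations over all (p, q) pairs."""
    A = A.copy()
    n = A.shape[0]
    for p in range(n):
        for q in range(p + 1, n):
            if abs(A[p, q]) < 1e-12:
                continue
            # angle that zeroes A[p, q]: tan(2*theta) = 2*A_pq / (A_qq - A_pp)
            theta = 0.5 * np.arctan2(2.0 * A[p, q], A[q, q] - A[p, p])
            c, s = np.cos(theta), np.sin(theta)
            J = np.eye(n)
            J[p, p] = J[q, q] = c
            J[p, q], J[q, p] = s, -s
            A = J.T @ A @ J            # similarity transform: spectrum preserved
    return A

def offdiag_norm(A):
    return np.sqrt(np.sum(A**2) - np.sum(np.diag(A)**2))

before = offdiag_norm(F)
after = offdiag_norm(jacobi_sweep(jacobi_sweep(F)))
```

Because each rotation is a small independent dense update, batches of such transformations map naturally onto GPU threads, which is the property the paper's kernel exploits.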
A complex approach to the blue-loop problem
NASA Astrophysics Data System (ADS)
Ostrowski, Jakub; Daszynska-Daszkiewicz, Jadwiga
2015-08-01
The problem of the blue loops during core helium burning, outstanding for almost fifty years, is one of the most difficult and poorly understood problems in stellar astrophysics. Most of the work on the blue loops done so far has been performed with old stellar evolution codes and with limited computational resources; the conclusions obtained were therefore based on a small sample of models and could not take into account more advanced effects and the interactions between them. The emergence of the blue loops depends on many details of the evolution calculations, in particular on chemical composition, opacity, mixing processes, etc. The non-linear interactions between these factors mean that in most cases it is hard to predict, without precise stellar modeling, whether a loop will emerge or not. The high sensitivity of the blue loops to even small changes in the internal structure of a star yields one more issue: a sensitivity to numerical problems, which are common in calculations of stellar models at advanced stages of evolution. To tackle this problem we used the modern stellar evolution code MESA. We calculated a large grid of evolutionary tracks (about 8000 models) with masses in the range of 3.0-25.0 solar masses, from the zero age main sequence to the depletion of helium in the core. In order to make a comparative analysis, we varied metallicity, helium abundance and different mixing parameters resulting from convective overshooting, rotation, etc. A better understanding of the properties of the blue loops is crucial for our knowledge of the population of blue supergiants and of pulsating variables such as Cepheids, α-Cygni or Slowly Pulsating B-type supergiants. In the case of more massive models it is also of great importance for studies of the progenitors of supernovae.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chandler, David
2014-03-01
Under the sponsorship of the US Department of Energy National Nuclear Security Administration, staff members at the Oak Ridge National Laboratory have been conducting studies to determine whether the High Flux Isotope Reactor (HFIR) can be converted from high enriched uranium (HEU) fuel to low enriched uranium (LEU) fuel. As part of these ongoing studies, an assessment of the impact that the HEU to LEU fuel conversion has on the nuclear heat generation rates in regions of the HFIR cold source system and its moderator vessel was performed and is documented in this report. Silicon production rates in the cold source aluminum regions and few-group neutron fluxes in the cold source moderator were also estimated. Neutronics calculations were performed with the Monte Carlo N-Particle code to determine the nuclear heat generation rates in regions of the HFIR cold source and its vessel for the HEU core operating at a full reactor power (FP) of 85 MW(t) and the reference LEU core operating at an FP of 100 MW(t). Calculations were performed with beginning-of-cycle (BOC) and end-of-cycle (EOC) conditions to bound typical irradiation conditions. Average specific BOC heat generation rates of 12.76 and 12.92 W/g, respectively, were calculated for the hemispherical region of the cold source liquid hydrogen (LH2) for the HEU and LEU cores, and EOC heat generation rates of 13.25 and 12.86 W/g, respectively, were calculated for the HEU and LEU cores. Thus, the greatest heat generation rates were calculated for the EOC HEU core, and it is concluded that the conversion from HEU to LEU fuel and the resulting increase of FP from 85 MW to 100 MW will not impact the ability of the heat removal equipment to remove the heat deposited in the cold source system. Silicon production rates in the cold source aluminum regions are estimated to be about 12.0% greater at BOC and 2.7% greater at EOC for the LEU core in comparison to the HEU core.
Silicon is aluminum's major transmutation product and affects mechanical properties of aluminum including density, neutron irradiation hardening, swelling, and loss of ductility. Because slightly greater quantities of silicon will be produced in the cold source moderator vessel for the LEU core, these effects will be slightly greater for the LEU core than for the HEU core. Three-group (thermal, epithermal, and fast) neutron flux results tallied in the cold source LH2 hemisphere show greater values for the LEU core under both BOC and EOC conditions. The thermal neutron flux in the LH2 hemisphere for the LEU core is about 12.4% greater at BOC and 2.7% greater at EOC than for the HEU core. Therefore, cold neutron scattering will not be adversely affected, and the flux of cold neutrons conveyed to the cold neutron guide hall for research applications will be enhanced.
Axisymmetric Calculations of a Low-Boom Inlet in a Supersonic Wind Tunnel
NASA Technical Reports Server (NTRS)
Chima, Rodrick V.; Hirt, Stefanie M.; Reger, Robert
2011-01-01
This paper describes axisymmetric CFD predictions made of a supersonic low-boom inlet with a facility diffuser, cold pipe, and mass flow plug within wind tunnel walls, and compares the CFD calculations with the experimental data. The inlet was designed for use on a small supersonic aircraft that would cruise at Mach 1.6, with a Mach number over the wing of 1.7. The inlet was tested in the 8-ft by 6-ft Supersonic Wind Tunnel at NASA Glenn Research Center in the fall of 2010 to demonstrate the performance and stability of a practical flight design that included a novel bypass duct. The inlet design is discussed here briefly. Prior to the test, CFD calculations were made to predict the performance of the inlet and its associated wind tunnel hardware, and to estimate flow areas needed to throttle the inlet. The calculations were done with the Wind-US CFD code and are described in detail. After the test, comparisons were made between computed and measured shock patterns, total pressure recoveries, and centerline pressures. The results showed that the dual-stream inlet had excellent performance, with capture ratios near one, a peak core total pressure recovery of 96 percent, and a large stable operating range. Predicted core recovery agreed well with the experiment, but predicted bypass recovery and maximum capture ratio were high. Calculations of off-design performance of the inlet along a flight profile agreed well with measurements and previous calculations.
Using Intel Xeon Phi to accelerate the WRF TEMF planetary boundary layer scheme
NASA Astrophysics Data System (ADS)
Mielikainen, Jarno; Huang, Bormin; Huang, Allen
2014-05-01
The Weather Research and Forecasting (WRF) model is designed for numerical weather prediction and atmospheric research. The WRF software infrastructure consists of several components such as dynamic solvers and physics schemes. Numerical models are used to resolve the large-scale flow, while subgrid-scale parameterizations estimate small-scale properties (e.g., boundary layer turbulence and convection, clouds, radiation). These have a significant influence on the resolved scale due to the complex nonlinear nature of the atmosphere. For the cloudy planetary boundary layer (PBL), it is fundamental to parameterize vertical turbulent fluxes and subgrid-scale condensation in a realistic manner. A parameterization based on the Total Energy - Mass Flux (TEMF) approach, which unifies turbulence and moist convection components, produces better results than the other PBL schemes. For that reason, the TEMF scheme was chosen as the PBL scheme we optimized for the Intel Many Integrated Core (MIC) architecture, which ushers in a new era of supercomputing speed, performance, and compatibility. It allows developers to run code at trillions of calculations per second using a familiar programming model. In this paper, we present our optimization results for the TEMF planetary boundary layer scheme. The optimizations that were performed were quite generic in nature. They included vectorization of the code to utilize the vector units inside each CPU. Furthermore, memory access was improved by scalarizing some of the intermediate arrays. The results show that the optimizations improved MIC performance by 14.8x. Furthermore, the optimizations increased CPU performance by 2.6x compared to the original multi-threaded code on a quad-core Intel Xeon E5-2603 running at 1.8 GHz. Compared to the optimized code running on a single CPU socket, the optimized MIC code is 6.2x faster.
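The two generic optimizations named above, vectorization and scalarizing intermediate arrays, are illustrated below in NumPy rather than the WRF Fortran; the stencil update is invented and only stands in for a TEMF-style tendency computation.

```python
import numpy as np

def temf_like_loop(theta, flux, dt):
    """Naive per-cell update with an explicit intermediate array,
    analogous to the original scalar Fortran loop."""
    n = len(theta)
    tendency = np.empty(n)       # intermediate array (extra memory traffic)
    out = np.empty(n)
    for i in range(n):
        tendency[i] = -(flux[i + 1] - flux[i])   # flux divergence
        out[i] = theta[i] + dt * tendency[i]
    return out

def temf_like_vectorized(theta, flux, dt):
    """Same update, vectorized over the whole column, with the
    intermediate array fused away ("scalarized")."""
    return theta + dt * -(flux[1:] - flux[:-1])

theta = np.linspace(300.0, 310.0, 64)          # made-up temperature column
flux = np.sin(np.linspace(0.0, 3.0, 65))       # made-up interface fluxes
a = temf_like_loop(theta, flux, 0.5)
b = temf_like_vectorized(theta, flux, 0.5)
```

In compiled code the fused form both feeds the SIMD vector units and avoids a round trip through memory for the temporary, which is the combination that produced the 14.8x MIC speedup reported above.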
Potential of pin-by-pin SPN calculations as an industrial reference
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fliscounakis, M.; Girardi, E.; Courau, T.
2012-07-01
This paper aims at analysing the potential of pin-by-pin SPn calculations to compute the neutron flux in PWR cores as an alternative to the diffusion approximation. As far as pin-by-pin calculations are concerned, an SPH equivalence is used to preserve the reaction rates. The use of SPH equivalence is a common practice in core diffusion calculations. In this paper, a methodology to generalize the equivalence procedure to the context of the SPn equations is presented. In order to verify and validate the equivalence procedure, SPn calculations are compared to 2D transport reference results obtained with the APOLLO2 code. The validation cases consist of 3x3 analytical assembly color sets involving burn-up heterogeneities, UOX/MOX interfaces, and control rods. Considering various energy discretizations (up to 26 groups) and flux development orders (up to 7) for the SPn equations, results show that 26-group SP3 calculations are very close to the transport reference (with pin production rate discrepancies < 1%). This proves the high interest of pin-by-pin SPn calculations as an industrial reference when relying on 26 energy groups combined with an SP3 flux development order. Additionally, the SPn results are compared to pin-by-pin diffusion calculations, in order to evaluate the potential benefit of using an SPn solver as an alternative to diffusion. Discrepancies on pin production rates are less than 1.6% for 6-group SP3 calculations against 3.2% for 2-group diffusion calculations. This shows that SPn solvers may be considered as an alternative to multigroup diffusion.
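The SPH equivalence invoked above can be sketched in a few lines: scale each region's homogenized cross section by a factor chosen so that the low-order solver reproduces the reference transport reaction rates. The two-region numbers below are invented, and a real procedure iterates because the corrected cross sections change the low-order flux.

```python
# Reference transport solution per region (illustrative values, not APOLLO2):
phi_ref = [1.00, 0.80]                 # reference fluxes
sigma = [0.50, 0.60]                   # homogenized cross sections
rate_ref = [s * p for s, p in zip(sigma, phi_ref)]   # reference reaction rates

# Suppose the low-order (diffusion or SPn) solver yields a different flux:
phi_low = [0.95, 0.87]

# SPH factors force the low-order reaction rates to match the reference:
mu = [pr / pl for pr, pl in zip(phi_ref, phi_low)]
sigma_sph = [m * s for m, s in zip(mu, sigma)]
rate_low = [s * p for s, p in zip(sigma_sph, phi_low)]
```

With the corrected cross sections, sigma_sph * phi_low reproduces sigma * phi_ref region by region, which is the reaction-rate preservation property the paper relies on for both diffusion and SPn solvers.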
An Architecture for Coexistence with Multiple Users in Frequency Hopping Cognitive Radio Networks
2013-03-01
the base WARP system, a custom IP core written in VHDL, and the Virtex IV's embedded PowerPC core with C code to implement the radio and hopset...shown in Appendix C as Figure C.2. All VHDL code necessary to implement this IP core is included in Appendix G. Figure 3.19: FPGA bus structure...subsystem functionality. A total of 1,430 lines of VHDL code were implemented for this research. library ieee; use ieee.std_logic_1164.all; use
NASA Astrophysics Data System (ADS)
Zuhair; Suwoto; Setiadipura, T.; Bakhri, S.; Sunaryo, G. R.
2018-02-01
As part of the search for possibilities to control plutonium, a current effort is focused on mechanisms to maximize the consumption of plutonium. A plutonium core is a unique case in the high temperature reactor, intended to reduce the accumulation of plutonium. However, the safety performance of the plutonium core, which tends to produce a positive temperature coefficient of reactivity, should be examined. The pebble bed's inherent safety features, which are characterized by a negative temperature coefficient of reactivity, must be maintained under any circumstances. The purpose of this study is to investigate the characteristics of the temperature coefficient of reactivity for the plutonium core of a pebble bed reactor. A series of calculations with plutonium loading varied from 0.5 g to 1.5 g per fuel pebble was performed with the MCNPX code and the ENDF/B-VII library. The calculation results show that the keff curve for 0.5 g Pu/pebble declines sharply with increasing fuel burnup, while greater Pu loadings per pebble yield keff curves that decline more gently. Fuel with high Pu content per pebble may reach a long burnup cycle. From the temperature coefficient point of view, it is concluded that a reactor containing 0.5-1.25 g Pu/pebble at high burnup has less favorable safety features if it is operated at high temperature. The use of fuel with a Pu content of 1.5 g/pebble at high burnup should be considered carefully from the core safety aspect, because it could drive transient behavior into a severe accident situation.
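The temperature coefficient of reactivity examined above is, in sketch form, the change in reactivity rho = (keff - 1)/keff per unit change in core temperature. The keff values below are invented, not MCNPX results; a negative coefficient is the inherent-safety sign the study checks for.

```python
def reactivity(k_eff):
    """Reactivity in absolute units (delta-k/k)."""
    return (k_eff - 1.0) / k_eff

def temp_coefficient(k_cold, k_hot, t_cold, t_hot):
    """Finite-difference temperature coefficient of reactivity, per kelvin."""
    return (reactivity(k_hot) - reactivity(k_cold)) / (t_hot - t_cold)

# Hypothetical pebble-bed values: keff falls as the core heats up,
# giving the negative coefficient required for inherent safety.
alpha = temp_coefficient(k_cold=1.02000, k_hot=1.01400,
                         t_cold=600.0, t_hot=1200.0)
```

In practice each keff would come from a pair of Monte Carlo runs at perturbed temperatures, so statistical uncertainty in keff propagates directly into alpha; that is one reason plutonium loadings near the sign-change boundary need careful examination.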
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wagner, John C; Peplow, Douglas E.; Mosher, Scott W
2011-01-01
This paper provides a review of the hybrid (Monte Carlo/deterministic) radiation transport methods and codes used at the Oak Ridge National Laboratory and examples of their application for increasing the efficiency of real-world, fixed-source Monte Carlo analyses. The two principal hybrid methods are (1) Consistent Adjoint Driven Importance Sampling (CADIS) for optimization of a localized detector (tally) region (e.g., flux, dose, or reaction rate at a particular location) and (2) Forward Weighted CADIS (FW-CADIS) for optimizing distributions (e.g., mesh tallies over all or part of the problem space) or multiple localized detector regions (e.g., simultaneous optimization of two or more localized tally regions). The two methods have been implemented and automated in both the MAVRIC sequence of SCALE 6 and ADVANTG, a code that works with the MCNP code. As implemented, the methods utilize the results of approximate, fast-running 3-D discrete ordinates transport calculations (with the Denovo code) to generate consistent space- and energy-dependent source and transport (weight windows) biasing parameters. These methods and codes have been applied to many relevant and challenging problems, including calculations of PWR ex-core thermal detector response, dose rates throughout an entire PWR facility, site boundary dose from arrays of commercial spent fuel storage casks, radiation fields for criticality accident alarm system placement, and detector response for special nuclear material detection scenarios and nuclear well-logging tools. Substantial computational speed-ups, generally O(10^2-10^4), have been realized for all applications to date. This paper provides a brief review of the methods, their implementation, results of their application, and current development activities, as well as a considerable list of references for readers seeking more information about the methods and/or their applications.
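The CADIS construction described above can be sketched in a few lines: an approximate adjoint flux sets both the biased source distribution and the weight-window centers so that a particle's weight times its importance is roughly constant everywhere. The cell values below are made-up placeholders, not Denovo output:

```python
# Hedged sketch of the CADIS idea. An (approximate) adjoint flux is
# used to bias the source and to set weight-window target weights.
# All numbers are illustrative placeholders.
import numpy as np

q = np.array([0.5, 0.3, 0.2])      # true source distribution (normalized)
adj = np.array([0.1, 1.0, 10.0])   # approximate adjoint flux per cell

R = np.sum(q * adj)                # estimated detector response
q_biased = q * adj / R             # CADIS biased source (sums to 1)
ww_centers = R / adj               # weight-window target weights

# A particle born in cell i with weight ww_centers[i] satisfies
# weight * importance == R in every cell:
print(np.allclose(q_biased.sum(), 1.0))   # True
print(np.allclose(ww_centers * adj, R))   # True
```

Particles are thus born preferentially in important regions but with correspondingly lower weights, which is what keeps the resulting tally unbiased.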
A coupled/uncoupled deformation and fatigue damage algorithm utilizing the finite element method
NASA Technical Reports Server (NTRS)
Wilt, Thomas E.; Arnold, Steven M.
1994-01-01
A fatigue damage computational algorithm utilizing a multiaxial, isothermal, continuum based fatigue damage model for unidirectional metal matrix composites has been implemented into the commercial finite element code MARC using MARC user subroutines. Damage is introduced into the finite element solution through the concept of effective stress which fully couples the fatigue damage calculations with the finite element deformation solution. An axisymmetric stress analysis was performed on a circumferentially reinforced ring, wherein both the matrix cladding and the composite core were assumed to behave elastic-perfectly plastic. The composite core behavior was represented using Hill's anisotropic continuum based plasticity model, and similarly, the matrix cladding was represented by an isotropic plasticity model. Results are presented in the form of S-N curves and damage distribution plots.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakhai, B.
A new method for solving radiation transport problems is presented. The heart of the technique is a new cross section processing procedure for the calculation of group-to-point and point-to-group cross section sets. The method is ideally suited for problems which involve media with highly fluctuating cross sections, where the results of the traditional multigroup calculations are beclouded by the group averaging procedures employed. Extensive computational efforts, which would be required to evaluate double integrals in the multigroup treatment numerically, prohibit iteration to optimize the energy boundaries. On the other hand, use of point-to-point techniques (as in the stochastic technique) is often prohibitively expensive due to the large computer storage requirement. The pseudo-point code is a hybrid of the two aforementioned methods (group-to-group and point-to-point) - hence the name pseudo-point - that reduces the computational efforts of the former and the large core requirements of the latter. The pseudo-point code generates the group-to-point or the point-to-group transfer matrices, and can be coupled with the existing transport codes to calculate pointwise energy-dependent fluxes. This approach yields much more detail than is available from the conventional energy-group treatments. Due to the speed of this code, several iterations could be performed (in affordable computing efforts) to optimize the energy boundaries and the weighting functions. The pseudo-point technique is demonstrated by solving six problems, each depicting a certain aspect of the technique. The results are presented as flux vs energy at various spatial intervals. The sensitivity of the technique to the energy grid and the savings in computational effort are clearly demonstrated.
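The group-averaging step that the abstract says can becloud multigroup results is, at its core, a flux-weighted collapse of a pointwise cross section. A toy sketch with illustrative numbers (not from the pseudo-point code) showing how a narrow resonance is flattened by the weighting:

```python
# Hedged sketch of flux-weighted group collapse:
#   sigma_g = sum(sigma_i * phi_i) / sum(phi_i)
# Values are illustrative only; a sharp resonance (200 barns) sits in a
# flux dip, so the collapsed one-group value hides it almost entirely.
import numpy as np

sigma = np.array([10.0, 200.0, 15.0, 12.0])  # highly fluctuating cross section
flux = np.array([1.0, 0.1, 1.0, 1.0])        # weighting flux (self-shielded dip)

sigma_g = np.sum(sigma * flux) / np.sum(flux)
print(round(sigma_g, 3))  # prints 18.387
```

A pointwise (or group-to-point) treatment keeps the 200-barn peak explicit instead of burying it in the 18.4-barn average, which is the detail the pseudo-point approach is after.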
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bovy, Jo, E-mail: bovy@ias.edu
I describe the design, implementation, and usage of galpy, a python package for galactic-dynamics calculations. At its core, galpy consists of a general framework for representing galactic potentials both in python and in C (for accelerated computations); galpy functions, objects, and methods can generally take arbitrary combinations of these as arguments. Numerical orbit integration is supported with a variety of Runge-Kutta-type and symplectic integrators. For planar orbits, integration of the phase-space volume is also possible. galpy supports the calculation of action-angle coordinates and orbital frequencies for a given phase-space point for general spherical potentials, using state-of-the-art numerical approximations for axisymmetric potentials, and making use of a recent general approximation for any static potential. A number of different distribution functions (DFs) are also included in the current release; currently, these consist of two-dimensional axisymmetric and non-axisymmetric disk DFs, a three-dimensional disk DF, and a DF framework for tidal streams. I provide several examples to illustrate the use of the code. I present a simple model for the Milky Way's gravitational potential consistent with the latest observations. I also numerically calculate the Oort functions for different tracer populations of stars and compare them to a new analytical approximation. Additionally, I characterize the response of a kinematically warm disk to an elliptical m = 2 perturbation in detail. Overall, galpy consists of about 54,000 lines, including 23,000 lines of code in the module, 11,000 lines of test code, and about 20,000 lines of documentation. The test suite covers 99.6% of the code. galpy is available at http://github.com/jobovy/galpy with extensive documentation available at http://galpy.readthedocs.org/en/latest.
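Symplectic integration of the kind galpy offers can be illustrated with a standalone leapfrog (kick-drift-kick) toy in a logarithmic potential; this is a hedged sketch, not the galpy API:

```python
# Hedged sketch of symplectic orbit integration: leapfrog in a planar
# logarithmic potential Phi = 0.5*v0^2*ln(r^2), whose circular speed is
# v0 at every radius. Standalone toy code, not galpy.
import math

def accel(x, y, v0=1.0):
    """Acceleration components for the logarithmic potential."""
    r2 = x * x + y * y
    return -v0 ** 2 * x / r2, -v0 ** 2 * y / r2

def leapfrog(x, y, vx, vy, dt, steps):
    """Kick-drift-kick leapfrog; symplectic, so errors stay bounded."""
    ax, ay = accel(x, y)
    for _ in range(steps):
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay  # half kick
        x += dt * vx; y += dt * vy                # full drift
        ax, ay = accel(x, y)
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay  # half kick
    return x, y, vx, vy

# A circular orbit at r = 1 with speed v0 = 1: after many orbits the
# radius should still be very close to 1.
x, y, vx, vy = leapfrog(1.0, 0.0, 0.0, 1.0, dt=0.01, steps=10000)
print(math.hypot(x, y))
```

The bounded long-term error is the reason symplectic schemes are preferred for the long orbit integrations mentioned in the abstract.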
A New Capability for Nuclear Thermal Propulsion Design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amiri, Benjamin W.; Nuclear and Radiological Engineering Department, University of Florida, Gainesville, FL 32611; Kapernick, Richard J.
2007-01-30
This paper describes a new capability for Nuclear Thermal Propulsion (NTP) design that has been developed, and presents the results of some analyses performed with this design tool. The purpose of the tool is to design to specified mission and material limits, while maximizing system thrust to weight. The head end of the design tool utilizes the ROCket Engine Transient Simulation (ROCETS) code to generate a system design and system design requirements as inputs to the core analysis. ROCETS is a modular system level code which has been used extensively in the liquid rocket engine industry for many years. The core design tool performs high-fidelity reactor core nuclear and thermal-hydraulic design analysis. At the heart of this process are two codes, TMSS-NTP and NTPgen, which together greatly automate the analysis, providing the capability to rapidly produce designs that meet all specified requirements while minimizing mass. A PERL based command script, called CORE DESIGNER, controls the execution of these two codes, and checks for convergence throughout the process. TMSS-NTP is executed first, to produce a suite of core designs that meet the specified reactor core mechanical, thermal-hydraulic and structural requirements. The suite of designs consists of a set of core layouts and, for each core layout, specific designs that span a range of core fuel volumes. NTPgen generates MCNPX models for each of the core designs from TMSS-NTP. Iterative analyses are performed in NTPgen until a reactor design (fuel volume) is identified for each core layout that meets cold and hot operation reactivity requirements and that is zoned to meet a radial core power distribution requirement.
Preliminary topical report on comparison reactor disassembly calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
McLaughlin, T.P.
1975-11-01
Preliminary results of comparison disassembly calculations for a representative LMFBR model (2100-l voided core) and arbitrary accident conditions are described. The analytical methods employed were the computer programs: FX2-POOL, PAD, and VENUS-II. The calculated fission energy depositions are in good agreement, as are measures of the destructive potential of the excursions, kinetic energy, and work. However, in some cases the resulting fuel temperatures are substantially divergent. Differences in the fission energy deposition appear to be attributable to residual inconsistencies in specifying the comparison cases. In contrast, temperature discrepancies probably stem from basic differences in the energy partition models inherent in the codes. Although explanations of the discrepancies are being pursued, the preliminary results indicate that all three computational methods provide a consistent, global characterization of the contrived disassembly accident. (auth)
VERA and VERA-EDU 3.5 Release Notes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sieger, Matt; Salko, Robert K.; Kochunas, Brendan M.
The Virtual Environment for Reactor Applications components included in this distribution include selected computational tools and supporting infrastructure that solve neutronics, thermal-hydraulics, fuel performance, and coupled neutronics-thermal hydraulics problems. The infrastructure components provide a simplified common user input capability and provide for the physics integration with data transfer and coupled-physics iterative solution algorithms. Neutronics analysis can be performed for 2D lattices, 2D core and 3D core problems for pressurized water reactor geometries that can be used to calculate criticality and fission rate distributions by pin for input fuel compositions. MPACT uses the Method of Characteristics transport approach for 2D problems. For 3D problems, MPACT uses the 2D/1D method, which uses 2D MOC in a radial plane and diffusion or SPn in the axial direction. MPACT includes integrated cross section capabilities that provide problem-specific cross sections generated using the subgroup methodology. The code can execute both 2D and 3D problems in parallel to reduce overall run time. A thermal-hydraulics capability is provided with CTF (an updated version of COBRA-TF) that allows thermal-hydraulics analyses for single and multiple assemblies using the simplified VERA common input. This distribution also includes coupled neutronics/thermal-hydraulics capabilities to allow calculations using MPACT coupled with CTF. The VERA fuel rod performance component BISON calculates, on a 2D or 3D basis, fuel rod temperature, fuel rod internal pressure, free gas volume, clad integrity and fuel rod waterside diameter. These capabilities allow simulation of power cycling, fuel conditioning and deconditioning, high burnup performance, power uprate scoping studies, and accident performance.
Input/Output capabilities include the VERA Common Input (VERAIn) script which converts the ASCII common input file to the intermediate XML used to drive all of the physics codes in the VERA Core Simulator (VERA-CS). VERA component codes either input the VERA XML format directly, or provide a preprocessor which can convert the XML into native input. VERAView is an interactive graphical interface for the visualization and engineering analyses of output data from VERA. The python-based software is easy to install and intuitive to use, and provides instantaneous 2D and 3D images, 1D plots, and alpha-numeric data from VERA multi-physics simulations. Testing within CASL has focused primarily on Westinghouse four-loop reactor geometries and conditions with example problems included in the distribution.
Fuel Performance Calculations for FeCrAl Cladding in BWRs
DOE Office of Scientific and Technical Information (OSTI.GOV)
George, Nathan; Sweet, Ryan; Maldonado, G. Ivan
2015-01-01
This study expands upon previous neutronics analyses of the reactivity impact of alternate cladding concepts in boiling water reactor (BWR) cores and directs focus toward contrasting fuel performance characteristics of FeCrAl cladding against those of traditional Zircaloy. Using neutronics results from a modern version of the 3D nodal simulator NESTLE, linear power histories were generated and supplied to the BISON-CASL code for fuel performance evaluations. BISON-CASL (formerly Peregrine) expands on material libraries implemented in the BISON fuel performance code and the MOOSE framework by providing proprietary material data. By creating material libraries for Zircaloy and FeCrAl cladding, the thermomechanical behavior of the fuel rod (e.g., strains, centerline fuel temperature, and time to gap closure) was investigated and contrasted.
Advanced capabilities for materials modelling with Quantum ESPRESSO
NASA Astrophysics Data System (ADS)
Giannozzi, P.; Andreussi, O.; Brumme, T.; Bunau, O.; Buongiorno Nardelli, M.; Calandra, M.; Car, R.; Cavazzoni, C.; Ceresoli, D.; Cococcioni, M.; Colonna, N.; Carnimeo, I.; Dal Corso, A.; de Gironcoli, S.; Delugas, P.; DiStasio, R. A., Jr.; Ferretti, A.; Floris, A.; Fratesi, G.; Fugallo, G.; Gebauer, R.; Gerstmann, U.; Giustino, F.; Gorni, T.; Jia, J.; Kawamura, M.; Ko, H.-Y.; Kokalj, A.; Küçükbenli, E.; Lazzeri, M.; Marsili, M.; Marzari, N.; Mauri, F.; Nguyen, N. L.; Nguyen, H.-V.; Otero-de-la-Roza, A.; Paulatto, L.; Poncé, S.; Rocca, D.; Sabatini, R.; Santra, B.; Schlipf, M.; Seitsonen, A. P.; Smogunov, A.; Timrov, I.; Thonhauser, T.; Umari, P.; Vast, N.; Wu, X.; Baroni, S.
2017-11-01
Quantum ESPRESSO is an integrated suite of open-source computer codes for quantum simulations of materials using state-of-the-art electronic-structure techniques, based on density-functional theory, density-functional perturbation theory, and many-body perturbation theory, within the plane-wave pseudopotential and projector-augmented-wave approaches. Quantum ESPRESSO owes its popularity to the wide variety of properties and processes it allows one to simulate, to its performance on an increasingly broad array of hardware architectures, and to a community of researchers that rely on its capabilities as a core open-source development platform to implement their ideas. In this paper we describe recent extensions and improvements, covering new methodologies and property calculators, improved parallelization, code modularization, and extended interoperability both within the distribution and with external software.
Advanced capabilities for materials modelling with Quantum ESPRESSO.
Giannozzi, P; Andreussi, O; Brumme, T; Bunau, O; Buongiorno Nardelli, M; Calandra, M; Car, R; Cavazzoni, C; Ceresoli, D; Cococcioni, M; Colonna, N; Carnimeo, I; Dal Corso, A; de Gironcoli, S; Delugas, P; DiStasio, R A; Ferretti, A; Floris, A; Fratesi, G; Fugallo, G; Gebauer, R; Gerstmann, U; Giustino, F; Gorni, T; Jia, J; Kawamura, M; Ko, H-Y; Kokalj, A; Küçükbenli, E; Lazzeri, M; Marsili, M; Marzari, N; Mauri, F; Nguyen, N L; Nguyen, H-V; Otero-de-la-Roza, A; Paulatto, L; Poncé, S; Rocca, D; Sabatini, R; Santra, B; Schlipf, M; Seitsonen, A P; Smogunov, A; Timrov, I; Thonhauser, T; Umari, P; Vast, N; Wu, X; Baroni, S
2017-10-24
Quantum ESPRESSO is an integrated suite of open-source computer codes for quantum simulations of materials using state-of-the-art electronic-structure techniques, based on density-functional theory, density-functional perturbation theory, and many-body perturbation theory, within the plane-wave pseudopotential and projector-augmented-wave approaches. Quantum ESPRESSO owes its popularity to the wide variety of properties and processes it allows one to simulate, to its performance on an increasingly broad array of hardware architectures, and to a community of researchers that rely on its capabilities as a core open-source development platform to implement their ideas. In this paper we describe recent extensions and improvements, covering new methodologies and property calculators, improved parallelization, code modularization, and extended interoperability both within the distribution and with external software.
Advanced capabilities for materials modelling with Quantum ESPRESSO.
Andreussi, Oliviero; Brumme, Thomas; Bunau, Oana; Buongiorno Nardelli, Marco; Calandra, Matteo; Car, Roberto; Cavazzoni, Carlo; Ceresoli, Davide; Cococcioni, Matteo; Colonna, Nicola; Carnimeo, Ivan; Dal Corso, Andrea; de Gironcoli, Stefano; Delugas, Pietro; DiStasio, Robert; Ferretti, Andrea; Floris, Andrea; Fratesi, Guido; Fugallo, Giorgia; Gebauer, Ralph; Gerstmann, Uwe; Giustino, Feliciano; Gorni, Tommaso; Jia, Junteng; Kawamura, Mitsuaki; Ko, Hsin-Yu; Kokalj, Anton; Küçükbenli, Emine; Lazzeri, Michele; Marsili, Margherita; Marzari, Nicola; Mauri, Francesco; Nguyen, Ngoc Linh; Nguyen, Huy-Viet; Otero-de-la-Roza, Alberto; Paulatto, Lorenzo; Poncé, Samuel; Giannozzi, Paolo; Rocca, Dario; Sabatini, Riccardo; Santra, Biswajit; Schlipf, Martin; Seitsonen, Ari Paavo; Smogunov, Alexander; Timrov, Iurii; Thonhauser, Timo; Umari, Paolo; Vast, Nathalie; Wu, Xifan; Baroni, Stefano
2017-09-27
Quantum ESPRESSO is an integrated suite of open-source computer codes for quantum simulations of materials using state-of-the-art electronic-structure techniques, based on density-functional theory, density-functional perturbation theory, and many-body perturbation theory, within the plane-wave pseudo-potential and projector-augmented-wave approaches. Quantum ESPRESSO owes its popularity to the wide variety of properties and processes it allows one to simulate, to its performance on an increasingly broad array of hardware architectures, and to a community of researchers that rely on its capabilities as a core open-source development platform to implement their ideas. In this paper we describe recent extensions and improvements, covering new methodologies and property calculators, improved parallelization, code modularization, and extended interoperability both within the distribution and with external software. © 2017 IOP Publishing Ltd.
RELAP5-3D Results for Phase I (Exercise 2) of the OECD/NEA MHTGR-350 MW Benchmark
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerhard Strydom
2012-06-01
The coupling of the PHISICS code suite to the thermal hydraulics system code RELAP5-3D has recently been initiated at the Idaho National Laboratory (INL) to provide a fully coupled prismatic Very High Temperature Reactor (VHTR) system modeling capability as part of the NGNP methods development program. The PHISICS code consists of three modules: INSTANT (performing 3D nodal transport core calculations), MRTAU (depletion and decay heat generation) and a perturbation/mixer module. As part of the verification and validation activities, steady state results have been obtained for Exercise 2 of Phase I of the newly-defined OECD/NEA MHTGR-350 MW Benchmark. This exercise requires participants to calculate a steady-state solution for an End of Equilibrium Cycle 350 MW Modular High Temperature Reactor (MHTGR), using the provided geometry, material, and coolant bypass flow description. The paper provides an overview of the MHTGR Benchmark and presents typical steady state results (e.g. solid and gas temperatures, thermal conductivities) for Phase I Exercise 2. Preliminary results are also provided for the early test phase of Exercise 3 using a two-group cross-section library and the Relap5-3D model developed for Exercise 2.
RELAP5-3D results for phase I (Exercise 2) of the OECD/NEA MHTGR-350 MW benchmark
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strydom, G.; Epiney, A. S.
2012-07-01
The coupling of the PHISICS code suite to the thermal hydraulics system code RELAP5-3D has recently been initiated at the Idaho National Laboratory (INL) to provide a fully coupled prismatic Very High Temperature Reactor (VHTR) system modeling capability as part of the NGNP methods development program. The PHISICS code consists of three modules: INSTANT (performing 3D nodal transport core calculations), MRTAU (depletion and decay heat generation) and a perturbation/mixer module. As part of the verification and validation activities, steady state results have been obtained for Exercise 2 of Phase I of the newly-defined OECD/NEA MHTGR-350 MW Benchmark. This exercise requires participants to calculate a steady-state solution for an End of Equilibrium Cycle 350 MW Modular High Temperature Reactor (MHTGR), using the provided geometry, material, and coolant bypass flow description. The paper provides an overview of the MHTGR Benchmark and presents typical steady state results (e.g. solid and gas temperatures, thermal conductivities) for Phase I Exercise 2. Preliminary results are also provided for the early test phase of Exercise 3 using a two-group cross-section library and the Relap5-3D model developed for Exercise 2. (authors)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garnier, Ch.; Mailhe, P.; Sontheimer, F.
2007-07-01
Fuel performance is a key factor for minimizing operating costs in nuclear plants. One of the important aspects of fuel performance is fuel rod design, based upon reliable tools able to verify the safety of current fuel solutions, prevent potential issues in new core managements and guide the invention of tomorrow's fuels. AREVA is developing its future global fuel rod code COPERNIC3, which is able to calculate the thermal-mechanical behavior of advanced fuel rods in nuclear plants. Some of the best practices to achieve this goal are described, by reviewing the three pillars of a fuel rod code: the database, the modelling and the computer and numerical aspects. At first, the COPERNIC3 database content is described, accompanied by the tools developed to effectively exploit the data. Then is given an overview of the main modelling aspects, by emphasizing the thermal, fission gas release and mechanical sub-models. In the last part, numerical solutions are detailed in order to increase the computational performance of the code, with a presentation of software configuration management solutions. (authors)
Byrd, Gary D; Winkelstein, Peter
2014-10-01
Based on the authors' shared interest in the interprofessional challenges surrounding health information management, this study explores the degree to which librarians, informatics professionals, and core health professionals in medicine, nursing, and public health share common ethical behavior norms grounded in moral principles. Using the "Principlism" framework from a widely cited textbook of biomedical ethics, the authors analyze the statements in the ethical codes for associations of librarians (Medical Library Association [MLA], American Library Association, and Special Libraries Association), informatics professionals (American Medical Informatics Association [AMIA] and American Health Information Management Association), and core health professionals (American Medical Association, American Nurses Association, and American Public Health Association). This analysis focuses on whether and how the statements in these eight codes specify core moral norms (Autonomy, Beneficence, Non-Maleficence, and Justice), core behavioral norms (Veracity, Privacy, Confidentiality, and Fidelity), and other norms that are empirically derived from the code statements. These eight ethical codes share a large number of common behavioral norms based most frequently on the principle of Beneficence, then on Autonomy and Justice, but rarely on Non-Maleficence. The MLA and AMIA codes share the largest number of common behavioral norms, and these two associations also share many norms with the other six associations. The shared core of behavioral norms among these professions, all grounded in core moral principles, points to many opportunities for building effective interprofessional communication and collaboration regarding the development, management, and use of health information resources and technologies.
Byrd, Gary D.; Winkelstein, Peter
2014-01-01
Objective: Based on the authors' shared interest in the interprofessional challenges surrounding health information management, this study explores the degree to which librarians, informatics professionals, and core health professionals in medicine, nursing, and public health share common ethical behavior norms grounded in moral principles. Methods: Using the “Principlism” framework from a widely cited textbook of biomedical ethics, the authors analyze the statements in the ethical codes for associations of librarians (Medical Library Association [MLA], American Library Association, and Special Libraries Association), informatics professionals (American Medical Informatics Association [AMIA] and American Health Information Management Association), and core health professionals (American Medical Association, American Nurses Association, and American Public Health Association). This analysis focuses on whether and how the statements in these eight codes specify core moral norms (Autonomy, Beneficence, Non-Maleficence, and Justice), core behavioral norms (Veracity, Privacy, Confidentiality, and Fidelity), and other norms that are empirically derived from the code statements. Results: These eight ethical codes share a large number of common behavioral norms based most frequently on the principle of Beneficence, then on Autonomy and Justice, but rarely on Non-Maleficence. The MLA and AMIA codes share the largest number of common behavioral norms, and these two associations also share many norms with the other six associations. Implications: The shared core of behavioral norms among these professions, all grounded in core moral principles, points to many opportunities for building effective interprofessional communication and collaboration regarding the development, management, and use of health information resources and technologies. PMID:25349543
Pushing configuration-interaction to the limit: Towards massively parallel MCSCF calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vogiatzis, Konstantinos D.; Ma, Dongxia; Olsen, Jeppe
A new large-scale parallel multiconfigurational self-consistent field (MCSCF) implementation in the open-source NWChem computational chemistry code is presented. The generalized active space approach is used to partition large configuration interaction (CI) vectors and generate a sufficient number of batches that can be distributed to the available cores. Massively parallel CI calculations with large active spaces can be performed. The new parallel MCSCF implementation is tested for the chromium trimer and for an active space of 20 electrons in 20 orbitals, which can now routinely be performed. Unprecedented CI calculations with an active space of 22 electrons in 22 orbitals for the pentacene systems were performed and a single CI iteration calculation with an active space of 24 electrons in 24 orbitals for the chromium tetramer was possible. In conclusion, the chromium tetramer corresponds to a CI expansion of one trillion Slater determinants (914 058 513 424) and is the largest conventional CI calculation attempted to date.
Pushing configuration-interaction to the limit: Towards massively parallel MCSCF calculations
Vogiatzis, Konstantinos D.; Ma, Dongxia; Olsen, Jeppe; ...
2017-11-14
A new large-scale parallel multiconfigurational self-consistent field (MCSCF) implementation in the open-source NWChem computational chemistry code is presented. The generalized active space approach is used to partition large configuration interaction (CI) vectors and generate a sufficient number of batches that can be distributed to the available cores. Massively parallel CI calculations with large active spaces can be performed. The new parallel MCSCF implementation is tested for the chromium trimer and for an active space of 20 electrons in 20 orbitals, which can now routinely be performed. Unprecedented CI calculations with an active space of 22 electrons in 22 orbitals for the pentacene systems were performed and a single CI iteration calculation with an active space of 24 electrons in 24 orbitals for the chromium tetramer was possible. In conclusion, the chromium tetramer corresponds to a CI expansion of one trillion Slater determinants (914 058 513 424) and is the largest conventional CI calculation attempted to date.
Pushing configuration-interaction to the limit: Towards massively parallel MCSCF calculations
NASA Astrophysics Data System (ADS)
Vogiatzis, Konstantinos D.; Ma, Dongxia; Olsen, Jeppe; Gagliardi, Laura; de Jong, Wibe A.
2017-11-01
A new large-scale parallel multiconfigurational self-consistent field (MCSCF) implementation in the open-source NWChem computational chemistry code is presented. The generalized active space approach is used to partition large configuration interaction (CI) vectors and generate a sufficient number of batches that can be distributed to the available cores. Massively parallel CI calculations with large active spaces can be performed. The new parallel MCSCF implementation is tested for the chromium trimer and for an active space of 20 electrons in 20 orbitals, which can now routinely be performed. Unprecedented CI calculations with an active space of 22 electrons in 22 orbitals for the pentacene systems were performed and a single CI iteration calculation with an active space of 24 electrons in 24 orbitals for the chromium tetramer was possible. The chromium tetramer corresponds to a CI expansion of one trillion Slater determinants (914 058 513 424) and is the largest conventional CI calculation attempted to date.
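The determinant counts quoted in these abstracts grow as products of binomial coefficients. A hedged sketch of the naive full-CI count for N electrons in M spatial orbitals at fixed spin projection; real MCSCF/GAS counts also depend on symmetry and active-space restrictions, so this simple formula need not reproduce the exact figure cited above:

```python
# Hedged sketch: naive full-CI determinant count for an (N electrons,
# M orbitals) active space with n_alpha + n_beta = N. Symmetry and GAS
# restrictions in a real calculation reduce these numbers.
from math import comb

def fci_determinants(n_orb: int, n_alpha: int, n_beta: int) -> int:
    """Number of Slater determinants: C(M, Na) * C(M, Nb)."""
    return comb(n_orb, n_alpha) * comb(n_orb, n_beta)

print(fci_determinants(20, 10, 10))  # (20e, 20o): 34134779536
print(fci_determinants(22, 11, 11))  # (22e, 22o): 497634306624
```

The roughly 15-fold jump from the (20e, 20o) to the (22e, 22o) space illustrates why each two-orbital extension of the active space is such a large step in cost.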
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geslot, Benoit; Gruel, Adrien; Pepino, Alexandra
2015-07-01
MINERVE is a two-zone pool type zero power reactor operated by CEA (Cadarache, France). Kinetic parameters of the core (prompt neutron decay constant, delayed neutron fraction, generation time) have been recently measured using various pile noise experimental techniques, namely Feynman-α, Rossi-α and Cohn-α. Results are discussed and compared with one another. The measurement campaign has been conducted in the framework of a tri-partite collaboration between CEA, SCK.CEN and PSI. Results presented in this paper were obtained thanks to a time-stamping acquisition system developed by CEA. PSI performed simultaneous measurements which are presented in a companion paper. Signals come from two high-efficiency fission chambers located in the graphite reflector next to the core driver zone. Experiments were conducted at critical state with a reactor power of 0.2 W. The core integral fission rate is obtained from a calibrated miniature fission chamber located at the center of the core. Other results obtained in two sub-critical configurations will be presented elsewhere. Best estimate delayed neutron fraction comes from the Cohn-α method: 747 ± 15 pcm (1σ). In this case, the prompt decay constant is 79 ± 0.5 s^-1 and the generation time is 94.5 ± 0.7 μs. Other methods give consistent results within the confidence intervals. Experimental results are compared to calculated values obtained from a full 3D core modeling with the CEA-developed Monte Carlo code TRIPOLI4.9 associated with its continuous energy JEFF3.1.1-based library. A very good agreement is observed for the calculated delayed neutron fraction (748.7 ± 0.4 pcm at 1σ), that is a difference of -0.3% with the experiment. On the contrary, a 10% discrepancy is observed for the calculated generation time (104.4 ± 0.1 μs at 1σ). (authors)
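At delayed critical, the three quantities measured above are tied together by the point kinetics relation alpha = beta_eff / Lambda, so plugging in the paper's best-estimate numbers is a quick consistency check:

```python
# Consistency check of the measured MINERVE kinetic parameters using
# the point kinetics relation at delayed critical:
#   alpha (prompt decay constant) = beta_eff / Lambda (generation time)

beta_eff = 747e-5    # delayed neutron fraction: 747 pcm
gen_time = 94.5e-6   # generation time: 94.5 microseconds

alpha = beta_eff / gen_time
print(f"alpha = {alpha:.1f} 1/s")  # prints "alpha = 79.0 1/s"
```

The result matches the independently measured prompt decay constant of 79 ± 0.5 s^-1, which is exactly the internal consistency the noise techniques rely on.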
NASA Astrophysics Data System (ADS)
Olson, Richard F.
2013-05-01
Rendering of point scatterer based radar scenes for millimeter wave (mmW) seeker tests in real-time hardware-in-the-loop (HWIL) scene generation requires efficient algorithms and vector-friendly computer architectures for complex signal synthesis. New processor technology from Intel implements an extended 256-bit vector SIMD instruction set (AVX, AVX2) in a multi-core CPU design providing peak execution rates of hundreds of GigaFLOPS (GFLOPS) on one chip. Real world mmW scene generation code can approach peak SIMD execution rates only after careful algorithm and source code design. An effective software design will maintain high computing intensity, emphasizing register-to-register SIMD arithmetic operations over data movement between CPU caches or off-chip memories. Engineers at the U.S. Army Aviation and Missile Research, Development and Engineering Center (AMRDEC) applied two basic parallel coding methods to assess new 256-bit SIMD multi-core architectures for mmW scene generation in HWIL. These include use of POSIX threads built on vector library functions and more portable, high-level parallel code based on compiler technology (e.g. OpenMP pragmas and SIMD autovectorization). Since CPU technology is rapidly advancing toward high processor core counts and TeraFLOPS peak SIMD execution rates, it is imperative that coding methods be identified which produce efficient and maintainable parallel code. This paper describes the algorithms used in point scatterer target model rendering, the parallelization of those algorithms, and the execution performance achieved on an AVX multi-core machine using the two basic parallel coding methods. The paper concludes with estimates for scale-up performance on upcoming multi-core technology.
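At its core, the scene-synthesis kernel described above is a coherent sum of complex contributions from point scatterers, which is exactly the shape of computation that SIMD units and autovectorizers reward: long whole-array arithmetic with no data-dependent branching. A hedged numpy sketch of that formulation (illustrative only, not the AMRDEC code; the wavelength value is an assumption):

```python
import numpy as np

def scene_return(amplitudes, ranges, wavelength):
    """Coherent radar return from point scatterers: sum of A_i * exp(-j*4*pi*R_i/lambda).
    Written as whole-array numpy operations, the style of kernel that maps well onto
    SIMD hardware (AVX/AVX2) when compiled in C/C++ with autovectorization."""
    phase = -4.0 * np.pi * ranges / wavelength   # two-way propagation phase per scatterer
    return np.sum(amplitudes * np.exp(1j * phase))

# Illustrative scene: 100,000 random scatterers at mmW-like wavelength (~77 GHz assumed)
rng = np.random.default_rng(0)
n = 100_000
amps = rng.uniform(0.0, 1.0, n)
rngs = rng.uniform(1000.0, 2000.0, n)
signal = scene_return(amps, rngs, wavelength=0.0039)
```

The scalar-loop equivalent of this sum is what the paper's "careful algorithm and source code design" aims to avoid: the array form keeps the arithmetic register-to-register and exposes the independence between scatterers.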
Advanced Test Reactor Core Modeling Update Project Annual Report for Fiscal Year 2012
DOE Office of Scientific and Technical Information (OSTI.GOV)
David W. Nigg, Principal Investigator; Kevin A. Steuhm, Project Manager
Legacy computational reactor physics software tools and protocols currently used for support of Advanced Test Reactor (ATR) core fuel management and safety assurance, and to some extent, experiment management, are inconsistent with the state of modern nuclear engineering practice, and are difficult, if not impossible, to properly verify and validate (V&V) according to modern standards. Furthermore, the legacy staff knowledge required for application of these tools and protocols from the 1960s and 1970s is rapidly being lost due to staff turnover and retirements. In late 2009, the Idaho National Laboratory (INL) initiated a focused effort, the ATR Core Modeling Update Project, to address this situation through the introduction of modern high-fidelity computational software and protocols. This aggressive computational and experimental campaign will have a broad strategic impact on the operation of the ATR, both in terms of improved computational efficiency and accuracy for support of ongoing DOE programs as well as in terms of national and international recognition of the ATR National Scientific User Facility (NSUF). The ATR Core Modeling Update Project, targeted for full implementation in phase with the next anticipated ATR Core Internals Changeout (CIC) in the 2014-2015 time frame, began during the last quarter of Fiscal Year 2009, and has just completed its third full year. Key accomplishments so far have encompassed both computational as well as experimental work. A new suite of stochastic and deterministic transport theory based reactor physics codes and their supporting nuclear data libraries (HELIOS, KENO6/SCALE, NEWT/SCALE, ATTILA, and an extended implementation of MCNP5) has been installed at the INL under various licensing arrangements. Corresponding models of the ATR and ATRC are now operational with all five codes, demonstrating the basic feasibility of the new code packages for their intended purpose.
Of particular importance, a set of as-run core depletion HELIOS calculations for all ATR cycles since August 2009, Cycle 145A through Cycle 151B, was successfully completed during 2012. This major effort supported a decision late in the year to proceed with the phased incorporation of the HELIOS methodology into the ATR Core Safety Analysis Package (CSAP) preparation process, in parallel with the established PDQ-based methodology, beginning late in Fiscal Year 2012. Acquisition of the advanced SERPENT (VTT-Finland) and MC21 (DOE-NR) Monte Carlo stochastic neutronics simulation codes was also initiated during the year, and some initial applications of SERPENT to ATRC experiment analysis were demonstrated. These two new codes will offer significant additional capability, including the possibility of full-3D Monte Carlo fuel management support for the ATR at some point in the future. Finally, a capability for rigorous sensitivity analysis and uncertainty quantification based on the TSUNAMI system has been implemented and initial computational results have been obtained. This capability will have many applications as a tool for understanding the margins of uncertainty in the new models as well as for validation experiment design and interpretation.
Moats and Drawbridges: An Isolation Primitive for Reconfigurable Hardware Based Systems
2007-05-01
these systems, and after being run through an optimizing CAD tool the resulting circuit is a single entangled mess of gates and wires. To prevent the... translates MATLAB [48] algorithms into HDL, logic synthesis translates this HDL into a netlist, a synthesis tool uses a place-and-route algorithm to... [Figure: tool flow from MATLAB algorithms and C code through HDL, logic synthesis (netlist), and place-and-route to a bitstream, targeting hard and soft µP cores (EDK).]
1981-12-01
file.library-unit(.subunit).SYMAP Statement Map: library-file.library-unit(.subunit).SMAP Type Map: library-file.library-unit(.subunit).TMAP The library... generator, SYMAP Symbol Map code generator, SMAP Updated Statement Map code generator, TMAP Type Map code generator. A.3.5 The PUNIT Command The PUNIT... Core.Stmtmap) NAME Tmap (Core.Typemap) END Example A-3 Compiler Command Stream for the Code Generator. Texas Instruments A-5 Ada Optimizing Compiler
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiao, Heng; Endo, Satoshi; Wong, May
Yamaguchi and Feingold (2012) note that the cloud fields in their Weather Research and Forecasting (WRF) large-eddy simulations (LESs) of marine stratocumulus exhibit a strong sensitivity to time stepping choices. In this study, we reproduce and analyze this sensitivity issue using two stratocumulus cases, one marine and one continental. Results show that (1) the sensitivity is associated with spurious motions near the moisture jump between the boundary layer and the free atmosphere, and (2) these spurious motions appear to arise from neglecting small variations in water vapor mixing ratio (qv) in the pressure gradient calculation in the acoustic substepping portion of the integration procedure. We show that this issue is remedied in the WRF dynamical core by replacing the prognostic equation for the potential temperature θ with one for the moist potential temperature θ_m = θ(1 + 1.61 q_v), which allows consistent treatment of moisture in the calculation of pressure during the acoustic substeps. With this modification, the spurious motions and the sensitivity to the time stepping settings (i.e., the dynamic time step length and number of acoustic substeps) are eliminated in both of the example stratocumulus cases. This modification improves the applicability of WRF for LES applications, and possibly other models using similar dynamical core formulations, and also permits the use of longer time steps than in the original code.
Optimization of small long-life PWR based on thorium fuel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Subkhi, Moh Nurul, E-mail: nsubkhi@students.itb.ac.id; Physics Dept., Faculty of Science and Technology, State Islamic University of Sunan Gunung Djati Bandung Jalan A.H Nasution 105 Bandung; Suud, Zaki, E-mail: szaki@fi.itb.ac.id
2015-09-30
A conceptual design of a small long-life Pressurized Water Reactor (PWR) using thorium fuel has been investigated from the neutronics standpoint. The cell burn-up calculations were performed with the PIJ routine of the SRAC code using a nuclear data library based on JENDL 3.2, while the multi-energy-group diffusion calculations were optimized in three-dimensional X-Y-Z core geometry with COREBN. The excess reactivity of thorium nitride fuel with ZIRLO cladding is followed during 5 years of burnup without refueling. Optimization of the 350 MWe long-life PWR based on 5% ²³³U & 2.8% ²³¹Pa, 6% ²³³U & 2.8% ²³¹Pa and 7% ²³³U & 6% ²³¹Pa gives low excess reactivity.
Shafer, Morgan W.; Unterberg, Ezekial A.; Wingen, Andreas; ...
2014-12-29
Recent observations on DIII-D have advanced the understanding of plasma response to applied resonant magnetic perturbations (RMPs) in both H-mode and L-mode plasmas. Three distinct 3D features localized in minor radius are imaged via filtered soft x-ray emission: (i) the formation of lobes extending from the unperturbed separatrix in the X-point region at the plasma boundary, (ii) helical kink-like perturbations in the steep-gradient region inside the separatrix, and (iii) amplified islands in the core of a low-rotation L-mode plasma. In this study, these measurements are used to test and to validate plasma response models, which are crucial for providing predictive capability for edge-localized mode control. In particular, vacuum and two-fluid resistive magnetohydrodynamic (MHD) responses are tested in the regions of these measurements. At the plasma boundary in H-mode discharges with n = 3 RMPs applied, measurements compare well to vacuum-field calculations that predict lobe structures. Yet in the steep-gradient region, measurements agree better with calculations from the linear resistive two-fluid MHD code, M3D-C1. Relative to the vacuum fields, the resistive two-fluid MHD calculations show a reduction in the pitch-resonant components of the normal magnetic field (screening), and amplification of non-resonant components associated with ideal kink modes. However, the calculations still over-predict the amplitude of the measured perturbation by a factor of 4. In a slowly rotating L-mode plasma with n = 1 RMPs, core islands are observed amplified from vacuum predictions. Finally, these results indicate that while the vacuum approach describes measurements in the edge region well, it is important to include effects of extended MHD in the pedestal and deeper in the plasma core.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hep, J.; Konecna, A.; Krysl, V.
2011-07-01
This paper describes the application of the effective source in forward calculations and the adjoint method to the solution of fast neutron fluence and activation detector activities in the reactor pressure vessel (RPV) and RPV cavity of a VVER-440 reactor. Its objective is the demonstration of both methods on a practical task. The effective source method applies the Boltzmann transport operator to time integrated source data in order to obtain neutron fluence and detector activities. By weighting the source data by the time dependent decay of the detector activity, the result of the calculation is the detector activity. Alternatively, if the weighting is uniform with respect to time, the result is the fluence. The approach works because of the inherent linearity of radiation transport in non-multiplying time-invariant media. Integrated in this way, the source data are referred to as the effective source. The effective source in the forward calculations method thereby enables the analyst to replace numerous intensive transport calculations with a single transport calculation in which the time dependence and magnitude of the source are correctly represented. In this work, the effective source method has been expanded slightly in the following way: neutron source data were prepared with a few-group calculation using the active core calculation code MOBY-DICK, and the follow-up multigroup neutron transport calculation was performed using the transport code TORT. For comparison, an alternative method of calculation has been used based upon adjoint functions of the Boltzmann transport equation. Calculation of the three-dimensional (3-D) adjoint function for each required computational outcome has been obtained using the deterministic code TORT and the cross section library BGL440.
Adjoint functions appropriate to the required fast neutron flux density and neutron reaction rates have been calculated for several significant points within the RPV and RPV cavity of the VVER-440 reactor, located axially at the position of maximum power and at the position of the weld. Both of these methods (the effective source and the adjoint function) are briefly described in the present paper. The paper also describes their application to the solution of fast neutron fluence and detector activities for the VVER-440 reactor. (authors)
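The contrast between the two weightings of the source history can be made concrete with a small numeric sketch: decay-weighting the source collapses it into an activity-oriented effective source, while uniform weighting yields a fluence-oriented one. All values below are hypothetical; this is not the MOBY-DICK/TORT toolchain.

```python
import math

def effective_source(source_history, dt, decay_constant=None, t_end=None):
    """Collapse a source time history S(t_i) into a single effective source.
    With decay weighting exp(-lambda*(t_end - t_i)), the follow-up transport
    calculation yields a detector activity; with uniform weighting, a fluence."""
    total = 0.0
    for i, s in enumerate(source_history):
        t = i * dt
        if decay_constant is None:
            w = 1.0                                        # uniform -> fluence
        else:
            w = math.exp(-decay_constant * (t_end - t))    # decay-weighted -> activity
        total += s * w * dt
    return total

# Hypothetical 3-step power history (arbitrary units) with 30-day steps,
# weighted for a detector nuclide with an assumed ~70.8-day half-life.
history = [1.0, 0.8, 1.2]
dt = 30 * 86400.0
lam = math.log(2) / (70.8 * 86400.0)
fluence_src = effective_source(history, dt)
activity_src = effective_source(history, dt, decay_constant=lam, t_end=3 * dt)
```

Because the decay weights are all below one, the activity-oriented effective source is smaller than the fluence-oriented one, reflecting the partial decay of activation produced early in the irradiation.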
Atomic Processes for XUV Lasers: Alkali Atoms and Ions
NASA Astrophysics Data System (ADS)
Dimiduk, David Paul
The development of extreme ultraviolet (XUV) lasers is dependent upon knowledge of processes in highly excited atoms. Described here are spectroscopy experiments which have identified and characterized certain autoionizing energy levels in core-excited alkali atoms and ions. Such levels, termed quasi-metastable, have desirable characteristics as upper levels for efficient, powerful XUV lasers. Quasi-metastable levels are among the most intense emission lines in the XUV spectra of core-excited alkalis. Laser experiments utilizing these levels have proved to be useful in characterizing other core-excited levels. Three experiments to study quasi-metastable levels are reported. The first experiment is vacuum ultraviolet (VUV) absorption spectroscopy on the Cs 109 nm transitions using high-resolution laser techniques. This experiment confirms the identification of transitions to a quasi-metastable level, estimates transition oscillator strengths, and estimates the hyperfine splitting of the quasi-metastable level. The second experiment, XUV emission spectroscopy of Ca II and Sr II in a microwave-heated plasma, identifies transitions from quasi-metastable levels in these ions, and provides confirming evidence of their radiative, rather than autoionizing, character. In the third experiment, core-excited Ca II ions are produced by inner-shell photoionization of Ca with soft x-rays from a laser-produced plasma. This preliminary experiment demonstrated a method of creating large numbers of these highly-excited ions for future spectroscopic experiments. Experimental and theoretical evidence suggests the Ca II 3p⁵3d4s ⁴F°₃/₂ quasi-metastable level may be directly pumped via a dipole ionization process from the Ca I ground state. The direct process is permitted by J conservation, and occurs due to configuration mixing in the final state and possibly the initial state as well.
The experiments identifying and characterizing quasi-metastable levels are compared to calculations using the Hartree-Fock code RCN/RCG. Calculated parameters include energy levels, wavefunctions, and transition rates. Based on an extension of this code, earlier unexplained experiments showing strong two-electron radiative transitions from quasi-metastable levels are now understood.
The linearly scaling 3D fragment method for large scale electronic structure calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Zhengji; Meza, Juan; Lee, Byounghak
2009-07-28
The Linearly Scaling three-dimensional fragment (LS3DF) method is an O(N) ab initio electronic structure method for large-scale nano material simulations. It is a divide-and-conquer approach with a novel patching scheme that effectively cancels out the artificial boundary effects, which exist in all divide-and-conquer schemes. This method has made ab initio simulations of thousand-atom nanosystems feasible in a couple of hours, while retaining essentially the same accuracy as the direct calculation methods. The LS3DF method won the 2008 ACM Gordon Bell Prize for algorithm innovation. Our code has reached 442 Tflop/s running on 147,456 processors on the Cray XT5 (Jaguar) at OLCF, has been run on 163,840 processors on the Blue Gene/P (Intrepid) at ALCF, and has been applied to a system containing 36,000 atoms. In this paper, we will present the recent parallel performance results of this code, and will apply the method to asymmetric CdSe/CdS core/shell nanorods, which have potential applications in electronic devices and solar cells.
On neoclassical impurity transport in stellarator geometry
NASA Astrophysics Data System (ADS)
García-Regaña, J. M.; Kleiber, R.; Beidler, C. D.; Turkin, Y.; Maaßberg, H.; Helander, P.
2013-07-01
The impurity dynamics in stellarators has become an issue of moderate concern due to the inherent tendency of the impurities to accumulate in the core when the neoclassical ambipolar radial electric field points radially inwards (ion root regime). This accumulation can lead to collapse of the plasma due to radiative losses, and thus limit high performance plasma discharges in non-axisymmetric devices. A quantitative description of the neoclassical impurity transport is complicated by the breakdown of the assumption of small E × B drift and trapping due to the electrostatic potential variation on a flux surface Φ̃ compared with those due to the magnetic field gradient. This work examines the impact of this potential variation on neoclassical impurity transport in the Large Helical Device heliotron. It shows that the neoclassical impurity transport can be strongly affected by Φ̃. The central numerical tool used is the δf particle-in-cell Monte Carlo code EUTERPE. The Φ̃ used in the calculations is provided by the neoclassical code GSRAKE. The possibility of obtaining a more general Φ̃ self-consistently with EUTERPE is also addressed and a preliminary calculation is presented.
Rates for neutron-capture reactions on tungsten isotopes in iron meteorites. [Abstract only]
NASA Technical Reports Server (NTRS)
Masarik, J.; Reedy, R. C.
1994-01-01
High-precision W isotopic analyses by Harper and Jacobsen indicate the W-182/W-183 ratio in the Toluca iron meteorite is shifted by -(3.0 +/- 0.9) x 10(exp -4) relative to a terrestrial standard. Possible causes of this shift are neutron-capture reactions on W during Toluca's approximately 600-Ma exposure to cosmic ray particles or radiogenic growth of W-182 from 9-Ma Hf-182 in the silicate portion of the Earth after removal of W to the Earth's core. Calculations for the rates of neutron-capture reactions on W isotopes were done to study the first possibility. The LAHET Code System (LCS), which consists of the Los Alamos High Energy Transport (LAHET) code and the Monte Carlo N-Particle (MCNP) transport code, was used to numerically simulate the irradiation of the Toluca iron meteorite by galactic-cosmic-ray (GCR) particles and to calculate the rates of W(n, gamma) reactions. Toluca was modeled as a 3.9-m-radius sphere with the composition of a typical IA iron meteorite. The incident GCR protons and their interactions were modeled with LAHET, which also handled the interactions of neutrons with energies above 20 MeV. The rates for the capture of neutrons by W-182, W-183, and W-186 were calculated using the detailed library of (n, gamma) cross sections in MCNP. For this study of the possible effect of W(n, gamma) reactions on W isotope systematics, we consider the peak rates. The calculated maximum change in the normalized W-182/W-183 ratio due to neutron-capture reactions cannot account for more than 25% of the mass 182 deficit observed in Toluca W.
Status and future of the 3D MAFIA group of codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ebeling, F.; Klatt, R.; Krawzcyk, F.
1988-12-01
The group of fully three-dimensional computer codes for solving Maxwell's equations for a wide range of applications, MAFIA, is already well established. Extensive comparisons with measurements have demonstrated the accuracy of the computations. A large number of components have been designed for accelerators, such as kicker magnets, non-cylindrical cavities, ferrite loaded cavities, vacuum chambers with slots and transitions, etc. The latest additions to the system include a new static solver that can calculate 3D magneto- and electrostatic fields, and a self-consistent version of the 2D-BCI that solves the field equations and the equations of motion in parallel. Work on new eddy current modules has started, which will allow treatment of laminated and/or solid iron cores excited by low frequency currents. Based on our experience with the present releases 1 and 2, we have started a complete revision of the whole user interface and data structure, which will make the codes even more user-friendly and flexible.
NASA Astrophysics Data System (ADS)
Yang, Lin; Zhang, Feng; Wang, Cai-Zhuang; Ho, Kai-Ming; Travesset, Alex
2018-04-01
We present an implementation of EAM and FS interatomic potentials, which are widely used in simulating metallic systems, in HOOMD-blue, a software package designed to perform classical molecular dynamics simulations using GPU accelerations. We first discuss the details of our implementation and then report extensive benchmark tests. We demonstrate that single-precision floating point operations efficiently implemented on GPUs can produce sufficient accuracy when compared against double-precision codes, as demonstrated in test simulations of calculations of the glass-transition temperature of Cu64.5Zr35.5, and the pair correlation function g(r) of liquid Ni3Al. Our code scales well with the size of the simulated system on NVIDIA Tesla M40 and P100 GPUs. Compared with another popular software package, LAMMPS, running on 32 cores of AMD Opteron 6220 processors, the GPU/CPU performance ratio can reach as high as 4.6. The source code can be accessed through the HOOMD-blue web page for free by any interested user.
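As a reference point for the g(r) benchmark mentioned above, here is a minimal pure-numpy sketch of the pair correlation function for a cubic periodic box: histogram the minimum-image pair distances and normalize by the ideal-gas expectation in each spherical shell. This is illustrative only, not the HOOMD-blue or LAMMPS implementation.

```python
import numpy as np

def pair_correlation(positions, box_length, n_bins=50, r_max=None):
    """Radial distribution function g(r) for particles in a cubic periodic box.
    g(r) -> 1 at large r for an uncorrelated (ideal-gas-like) configuration."""
    n = len(positions)
    if r_max is None:
        r_max = box_length / 2.0
    # All minimum-image pair separations (O(N^2); fine for a small sketch)
    diff = positions[:, None, :] - positions[None, :, :]
    diff -= box_length * np.round(diff / box_length)
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    iu = np.triu_indices(n, k=1)  # unique pairs only
    hist, edges = np.histogram(dist[iu], bins=n_bins, range=(0.0, r_max))
    # Normalize each shell by the ideal-gas expected pair count
    rho = n / box_length ** 3
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    ideal = rho * shell_vol * n / 2.0
    r = 0.5 * (edges[1:] + edges[:-1])
    return r, hist / ideal

# Random (uncorrelated) positions should give g(r) ~ 1 away from r = 0
rng = np.random.default_rng(2)
pos = rng.uniform(0.0, 10.0, size=(500, 3))
r, g = pair_correlation(pos, 10.0)
```

For a real liquid such as Ni3Al the histogram instead shows the familiar first-neighbor peak and damped oscillations decaying toward 1.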
Strydom, G.; Epiney, A. S.; Alfonsi, Andrea; ...
2015-12-02
The PHISICS code system has been under development at INL since 2010. It consists of several modules providing improved coupled core simulation capability: INSTANT (3D nodal transport core calculations), MRTAU (depletion and decay heat generation) and modules performing criticality searches, fuel shuffling and generalized perturbation. Coupling of the PHISICS code suite to the thermal hydraulics system code RELAP5-3D was finalized in 2013, and as part of the verification and validation effort the first phase of the OECD/NEA MHTGR-350 Benchmark has now been completed. The theoretical basis and latest development status of the coupled PHISICS/RELAP5-3D tool are described in more detail in a concurrent paper. This paper provides an overview of the OECD/NEA MHTGR-350 Benchmark and presents the results of Exercises 2 and 3 defined for Phase I. Exercise 2 required the modelling of a stand-alone thermal fluids solution at End of Equilibrium Cycle for the Modular High Temperature Reactor (MHTGR). The RELAP5-3D results of four sub-cases are discussed, consisting of various combinations of coolant bypass flows and material thermophysical properties. Exercise 3 required a coupled neutronics and thermal fluids solution, and the PHISICS/RELAP5-3D code suite was used to calculate the results of two sub-cases. The main focus of the paper is a comparison of results obtained with the traditional RELAP5-3D “ring” model approach against a much more detailed model that includes kinetics feedback on individual block level and thermal feedbacks on a triangular sub-mesh. The higher fidelity that can be obtained by this “block” model is illustrated with comparison results on the temperature, power density and flux distributions.
Furthermore, it is shown that the ring model leads to significantly lower fuel temperatures (up to 10%) when compared with the higher fidelity block model, and that the additional model development and run-time efforts are worth the gains obtained in the improved spatial temperature and flux distributions.
Evaluation of RAPID for a UNF cask benchmark problem
NASA Astrophysics Data System (ADS)
Mascolino, Valerio; Haghighat, Alireza; Roskoff, Nathan J.
2017-09-01
This paper examines the accuracy and performance of the RAPID (Real-time Analysis for Particle transport and In-situ Detection) code system for the simulation of a used nuclear fuel (UNF) cask. RAPID is capable of determining eigenvalue, subcritical multiplication, and pin-wise, axially-dependent fission density throughout a UNF cask. We study the source convergence based on the analysis of the different parameters used in an eigenvalue calculation in the MCNP Monte Carlo code. For this study, we consider a single assembly surrounded by absorbing plates with reflective boundary conditions. Based on the best combination of eigenvalue parameters, a reference MCNP solution for the single assembly is obtained. RAPID results are in excellent agreement with the reference MCNP solutions, while requiring significantly less computation time (i.e., minutes vs. days). A similar set of eigenvalue parameters is used to obtain a reference MCNP solution for the whole UNF cask. Because of time limitations, the MCNP results near the cask boundaries have significant uncertainties. Except for these, the RAPID results are in excellent agreement with the MCNP predictions, and its computation time is significantly lower: 35 seconds on one core versus 9.5 days on 16 cores.
Subplane-based Control Rod Decusping Techniques for the 2D/1D Method in MPACT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Graham, Aaron M; Collins, Benjamin S; Downar, Thomas
2017-01-01
The MPACT transport code is being jointly developed by Oak Ridge National Laboratory and the University of Michigan to serve as the primary neutron transport code for the Virtual Environment for Reactor Applications Core Simulator. MPACT uses the 2D/1D method to solve the transport equation by decomposing the reactor model into a stack of 2D planes. A fine mesh flux distribution is calculated in each 2D plane using the Method of Characteristics (MOC), then the planes are coupled axially through a 1D NEM-P₃ calculation. This iterative calculation is then accelerated using the Coarse Mesh Finite Difference method. One problem that arises frequently when using the 2D/1D method is that of control rod cusping. This occurs when the tip of a control rod falls between the boundaries of an MOC plane, requiring that the rodded and unrodded regions be axially homogenized for the 2D MOC calculations. Performing a volume homogenization does not properly preserve the reaction rates, causing an error known as cusping. The most straightforward way of resolving this problem is by refining the axial mesh, but this can significantly increase the computational expense of the calculation. The other way of resolving the partially inserted rod is through the use of a decusping method. This paper presents new decusping methods implemented in MPACT that can dynamically correct the rod cusping behavior for a variety of problems.
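The cusping error described above can be seen in a toy homogenization: the flux is depressed inside the rodded fraction of the plane, so a plain volume average weights the absorber too heavily compared with a flux-weighted average that preserves the absorption rate. The numbers below are purely illustrative, not an MPACT decusping algorithm.

```python
def volume_homogenized(sigma_rod, sigma_unrod, f_rod):
    """Volume-weighted homogenized cross section over a partially rodded plane."""
    return f_rod * sigma_rod + (1.0 - f_rod) * sigma_unrod

def flux_weighted(sigma_rod, sigma_unrod, f_rod, phi_rod, phi_unrod):
    """Flux-and-volume weighted average, which preserves the absorption rate."""
    num = f_rod * phi_rod * sigma_rod + (1.0 - f_rod) * phi_unrod * sigma_unrod
    den = f_rod * phi_rod + (1.0 - f_rod) * phi_unrod
    return num / den

# Illustrative numbers only: a strong absorber over 30% of the plane height,
# with the flux depressed to 40% of nominal inside the rodded region.
sig_v = volume_homogenized(0.30, 0.01, 0.3)
sig_f = flux_weighted(0.30, 0.01, 0.3, phi_rod=0.4, phi_unrod=1.0)
```

Here the volume average overstates the effective absorption relative to the flux-weighted value, which is the sign of the cusping error: as the rod tip sweeps through the plane, the volume-averaged worth changes in a distorted, non-physical way.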
Neutron Radiation Damage Estimation in the Core Structure Base Metal of RSG GAS
NASA Astrophysics Data System (ADS)
Santa, S. A.; Suwoto
2018-02-01
Radiation damage in the core structure of the Indonesian RSG GAS multipurpose reactor, resulting from the reaction of fast and thermal neutrons with the core structure material, was investigated for the first time after almost 30 years of operation. The aim is to analyze the degradation level of the critical components of the RSG GAS reactor so that the remaining life of its components can be estimated. Evaluation results of the critical components' remaining life will be used as supporting data for the submission of a reactor operating permit extension. Material damage analysis due to neutron radiation is performed for the core structure components made of AlMg3 and the core structure reinforcement bolts made of SUS304. Material damage evaluation was done on Al and Fe as the base metals of AlMg3 and SUS304, respectively. Neutron fluences are evaluated based on neutron flux calculations for the U3Si8-Al equilibrium core operated at a rated power of 15 MW. Calculation results using the CITATION module of the SRAC2006 code show that the maximum total neutron flux and flux >0.1 MeV are 2.537E+14 n/cm2/s and 3.376E+13 n/cm2/s, respectively, located at the CIP core center close to the fuel element. After operating up to the end of the #89 core formation, the total neutron fluence and the fluence >0.1 MeV reached 9.063E+22 and 1.269E+22 n/cm2, respectively. These correspond to material damage in Al and Fe of 17.91 and 10.06 dpa, respectively. Referring to the lifetime of Al-1100 material irradiated in a neutron field with thermal flux/total flux = 1.7, which is capable of withstanding material damage up to 250 dpa, it was concluded that the RSG GAS reactor core structure has consumed 7.16% of its operating life span. This means that the core structure of the RSG GAS reactor can still receive a total neutron fluence of 9.637E+22 n/cm2 or a fluence >0.1 MeV of 5.672E+22 n/cm2.
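The quoted life fraction follows directly from the ratio of accumulated displacement damage to the cited damage limit. A one-line check of the abstract's own numbers:

```python
# Life-fraction arithmetic from the stated values: accumulated damage
# in the Al base metal versus the 250 dpa limit cited for Al-1100.
dpa_accumulated = 17.91
dpa_limit = 250.0
life_fraction = dpa_accumulated / dpa_limit  # fraction of operating life consumed
```

This reproduces the reported 7.16% (17.91/250 ≈ 0.0716), confirming the internal consistency of the dpa figures.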
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tommasi, J.; Ruggieri, J. M.; Lebrat, J. F.
The latest release (2.1) of the ERANOS code system, using JEF-2.2, JEFF-3.1 and ENDF/B-VI r8 multigroup cross-section libraries, is currently being validated on fast reactor critical experiments at CEA-Cadarache (France). This paper briefly presents the library effect studies and the detailed best-estimate validation studies performed up to now as part of the validation process. The library effect studies are performed over a wide range of experimental configurations, using simple model and method options. They yield global trends about the shift from JEF-2.2 to JEFF-3.1 cross-section libraries that can be related to individual sensitivities and cross-section changes. The more detailed, best-estimate calculations have been performed up to now over three experimental configurations carried out in the MASURCA critical facility at CEA-Cadarache: two cores with a softened spectrum due to large amounts of graphite (MAS1A' and MAS1B), and a core representative of sodium-cooled fast reactors (CIRANO ZONA2A). Calculated values have been compared to measurements, and discrepancies analyzed in detail using perturbation theory. Values calculated with JEFF-3.1 were found to be within 3 standard deviations of the measured values, and at least of the same quality as the JEF-2.2 based results. (authors)
Fast CPU-based Monte Carlo simulation for radiotherapy dose calculation.
Ziegenhein, Peter; Pirner, Sven; Ph Kamerling, Cornelis; Oelfke, Uwe
2015-08-07
Monte-Carlo (MC) simulations are considered the most accurate method for calculating dose distributions in radiotherapy. Their clinical application, however, is still limited by the long runtimes that conventional implementations of MC algorithms require to deliver sufficiently accurate results on high-resolution imaging data. To overcome this obstacle we developed the software package PhiMC, which is capable of computing precise dose distributions in a sub-minute time frame by leveraging the potential of modern many- and multi-core CPU-based computers. PhiMC is based on the well-verified dose planning method (DPM). We could demonstrate that PhiMC delivers dose distributions in excellent agreement with DPM. The multi-core implementation of PhiMC scales well across different computer architectures and achieves a speed-up of up to 37× compared to the original DPM code executed on a modern system. Furthermore, we could show that our CPU-based implementation on a modern workstation is between 1.25× and 1.95× faster than a well-known GPU implementation of the same simulation method on an NVIDIA Tesla C2050. Since CPUs can work with several hundred GB of RAM, the typical GPU memory limitation does not apply to our implementation, and high-resolution clinical plans can be calculated.
NASA Astrophysics Data System (ADS)
Dewi Syarifah, Ratna; Su'ud, Zaki; Basar, Khairul; Irwanto, Dwi
2017-01-01
Nuclear Power Plants (NPPs) are one of the candidates to support the world's electricity demand. Generation IV NPPs have four main objectives: sustainability, economic competitiveness, safety and reliability, and proliferation resistance and physical protection. One Gen-IV reactor type is the Gas-cooled Fast Reactor (GFR). In this study, an analysis of the fuel fraction in a small GFR with nitride fuel has been performed. The calculation was carried out with the SRAC code, using both Pij and CITATION calculations. The SRAC2002 system is a code system applicable to neutronics analysis of a variety of reactor types; the JENDL-3.2 data library was used. In the SRAC calculation sequence, the fuel pin is first calculated by the Pij (collision probability) calculation and homogenized, after which the reactor core is calculated. The fuel fraction is varied from 40% up to 65%. The optimum design of a 500 MWth GFR without refueling over a 10-year burnup time is reached with radii F1:F2:F3 = 50cm:30cm:30cm, heights F1:F2:F3 = 50cm:40cm:30cm, and plutonium percentages in F1:F2:F3 = 7%:10%:13%. The optimum fuel fraction is 41%, with an additional 2% weapons-grade plutonium mixed into the fuel. The excess reactivity in this case is 1.848% and the k-eff value is 1.01883. High burnup is reached when the fuel fraction is low; in this study, the 41% fuel fraction produces fissile fuel faster and therefore has the highest burnup level of the fuel fractions considered.
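The quoted excess reactivity follows directly from the reported k-eff; a minimal check (both values are taken from the abstract):

```python
# Relation between effective multiplication factor and excess reactivity,
# checked against the values quoted in the abstract (k_eff = 1.01883).

def excess_reactivity(k_eff: float) -> float:
    """Excess reactivity rho = (k_eff - 1) / k_eff."""
    return (k_eff - 1.0) / k_eff

k_eff = 1.01883
rho = excess_reactivity(k_eff)
print(f"excess reactivity = {rho:.3%}")  # ≈ 1.848%, matching the quoted value
```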
An efficient modeling method for thermal stratification simulation in a BWR suppression pool
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haihua Zhao; Ling Zou; Hongbin Zhang
2012-09-01
The suppression pool in a BWR plant is not only the major heat sink within the containment system but also the major source of emergency cooling water for the reactor core. In several accident scenarios, such as a LOCA or an extended station blackout, thermal stratification tends to form in the pool after the initial rapid venting stage. Accurately predicting pool stratification is important because it affects the peak containment pressure; the pool temperature distribution also affects the NPSHa (available Net Positive Suction Head) and therefore the performance of the pump that draws cooling water back to the core. Current safety analysis codes use 0-D lumped-parameter methods to calculate the energy and mass balance in the pool and therefore have large uncertainty when predicting scenarios in which stratification and mixing are important. While 3-D CFD methods can be used to analyze realistic 3-D configurations, they normally require very fine grid resolution to resolve thin substructures such as jets and wall boundaries, and therefore long simulation times. For mixing in stably stratified large enclosures, the BMIX++ code has been developed to implement a highly efficient analysis method for stratification, in which the ambient fluid volume is represented by 1-D transient partial differential equations and substructures such as free or wall jets are modeled with 1-D integral models. This allows very large reductions in computational effort compared to 3-D CFD modeling. The POOLEX experiments in Finland, which were designed to study phenomena relevant to Nordic-design BWR suppression pools, including thermal stratification and mixing, are used for validation. GOTHIC lumped-parameter models are used to obtain boundary conditions for the BMIX++ code and the CFD simulations. The comparison of the BMIX++, GOTHIC, and CFD calculations against the POOLEX experimental data is discussed in detail.
Analysis of selected critical experiments using ENDF/B-IV and ENDF/B-V data
NASA Astrophysics Data System (ADS)
Singh, U. N.; Rec, J. R.; Jonsson, A.
1981-10-01
Selected critical experiments - TRX 1 and 2 (U metal), BNL 1, 2, and 3 (ThO2/U-233), and B&W TUPE 15B (ThO2/U-235) - were analyzed using ENDF/B-V data, and the results were compared to the measured parameters and to values obtained using ENDF/B-IV. Calculations were performed with DIT, an integral transport assembly design code. A heterogeneous cell calculation in 85 energy groups was performed for each configuration. Leakage was accounted for through a B-1 calculation using the measured bucklings. Overall, ENDF/B-V data have been found to improve the agreement with experimental results, with the exception of the TUPE 15B core. However, the changes in the capture cross sections of U-238 (epithermal) and Th-232 do not fully resolve the long-standing differences with the measurements.
Giles, Tracey M; de Lacey, Sheryl; Muir-Cochrane, Eimear
2016-01-01
Grounded theory method has been described extensively in the literature. Yet, the varying processes portrayed can be confusing for novice grounded theorists. This article provides a worked example of the data analysis phase of a constructivist grounded theory study that examined family presence during resuscitation in acute health care settings. Core grounded theory methods are exemplified, including initial and focused coding, constant comparative analysis, memo writing, theoretical sampling, and theoretical saturation. The article traces the construction of the core category "Conditional Permission" from initial and focused codes, subcategories, and properties, through to its position in the final substantive grounded theory.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fischer, G.A.
2011-07-01
Document available in abstract form only, full text of document follows: The dosimetry from the H. B. Robinson Unit 2 Pressure Vessel Benchmark is analyzed with a suite of Westinghouse-developed codes and data libraries. The radiation transport from the reactor core to the surveillance capsule and ex-vessel locations is performed by RAPTOR-M3G, a parallel deterministic radiation transport code that calculates high-resolution neutron flux information in three dimensions. The cross-section library used in this analysis is the ALPAN library, an Evaluated Nuclear Data File (ENDF)/B-VII.0-based library designed for reactor dosimetry and fluence analysis applications. Dosimetry is evaluated with the industry-standard SNLRML reactor dosimetry cross-section data library. (authors)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moisseytsev, A.; Sienicki, J. J.
2011-04-12
The analysis of specific control strategies and dynamic behavior of the supercritical carbon dioxide (S-CO2) Brayton cycle has been extended to the two reactor types selected for continued development under the Generation IV Nuclear Energy Systems Initiative, namely the Very High Temperature Reactor (VHTR) and the Sodium-cooled Fast Reactor (SFR). Direct application of the standard S-CO2 recompression cycle to the VHTR was found to be challenging because of the mismatch between the temperature drop of the He gaseous reactor coolant through the He-to-CO2 reactor heat exchanger (RHX) and the temperature rise of the CO2 through the RHX. The reference VHTR features a large temperature drop of 450 °C between the assumed core outlet and inlet temperatures of 850 and 400 °C, respectively. This large temperature difference is an essential feature of the VHTR, enabling a lower He flow rate that reduces the required core velocities and pressure drop. In contrast, the standard recompression S-CO2 cycle wants to operate with a temperature rise through the RHX of about 150 °C, reflecting the temperature drop as the CO2 expands from 20 MPa to 7.4 MPa in the turbine and the fact that the cycle is highly recuperated, such that the CO2 entering the RHX is effectively preheated. Because of this mismatch, direct application of the standard recompression cycle results in a relatively poor cycle efficiency of 44.9%. However, two approaches have been identified by which the S-CO2 cycle can be successfully adapted to the VHTR and its benefits, especially a significant gain in cycle efficiency, can be realized. The first approach involves the use of three separate cascaded S-CO2 cycles. Each S-CO2 cycle is coupled to the VHTR through its own He-to-CO2 RHX, in which the He temperature is reduced by 150 °C.
The three cycles have efficiencies of 54, 50, and 44%, respectively, resulting in a net cycle efficiency of 49.3%. The other approach involves reducing the minimum cycle pressure significantly below the critical pressure, such that the temperature drop in the turbine is increased while the minimum cycle temperature is maintained above the critical temperature to prevent the formation of a liquid phase. The latter approach also involves the addition of a precooler and a third compressor before the main compressor to retain the benefits of compression near the critical point with the main compressor. For a minimum cycle pressure of 1 MPa, a cycle efficiency of 49.5% is achieved. Either approach opens the door to applying the S-CO2 cycle to the VHTR. In contrast, the SFR system typically has a core outlet-inlet temperature difference of about 150 °C, such that the standard recompression cycle is ideally suited for direct application to the SFR. The ANL Plant Dynamics Code has been modified for application to the VHTR and SFR when the reactor-side dynamic behavior is calculated with another system-level computer code, such as SAS4A/SASSYS-1 in the SFR case. The key modification involves modeling heat exchange in the RHX, accepting time-dependent tabular input from the reactor code, and generating time-dependent tabular input to the reactor code, such that both the reactor and S-CO2 cycle sides can be calculated in a convergent iterative scheme. This approach retains the modeling benefits provided by the detailed reactor system-level code and can be applied to any reactor system type incorporating an S-CO2 cycle. This approach was applied to the calculation of a scram scenario for an SFR in which the main and intermediate sodium pumps are not tripped and the generator is not disconnected from the electrical grid, in order to enhance heat removal from the reactor system and thereby increase the cooldown rate of the Na-to-CO2 RHX.
The reactor side is calculated with SAS4A/SASSYS-1 while the S-CO2 cycle is calculated with the Plant Dynamics Code, with a number of iterations over a timescale of 500 seconds. It is found that the RHX undergoes a maximum cooldown rate of approximately -0.3 °C/s. The Plant Dynamics Code was also modified to decrease its running time by replacing the compressible-flow form of the momentum equation with an incompressible-flow equation inside the cooler and recuperators, where the CO2 has a compressibility similar to that of a liquid. Appendices provide a quasi-static control strategy for an SFR as well as the self-adaptive linear-function-fitting algorithm developed to produce the tabular data for input to the reactor code and the Plant Dynamics Code from the detailed output of the other code.
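The quoted net efficiency of the cascaded-cycle approach can be reproduced with a simple energy balance, under the assumption (plausible from the equal 150 °C He temperature drop per stage, though not stated explicitly) that each He-to-CO2 RHX stage absorbs an equal share of the reactor heat:

```python
# Net efficiency of three cascaded S-CO2 cycles, assuming each RHX stage
# absorbs an equal heat input Q (each stage spans an equal 150 °C He drop).
stage_efficiencies = [0.54, 0.50, 0.44]  # per-stage efficiencies from the text

# Total work is Q * sum(eta_i) and total heat is 3Q, so the net efficiency
# reduces to the arithmetic mean of the stage efficiencies.
net = sum(stage_efficiencies) / len(stage_efficiencies)
print(f"net cycle efficiency ≈ {net:.1%}")  # ≈ 49.3%, consistent with the text
```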
Design calculations for NIF convergent ablator experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Callahan, Debra; Leeper, Ramon Joe; Spears, B. K.
2010-11-01
Design calculations for NIF convergent ablator experiments are described. The convergent ablator experiments measure the implosion trajectory, velocity, and ablation rate of an x-ray-driven capsule and are an important component of the U.S. National Ignition Campaign at NIF. The design calculations are post-processed to provide simulations of the key diagnostics: (1) Dante measurements of hohlraum x-ray flux and spectrum, (2) streaked radiographs of the imploding ablator shell, (3) wedge range filter measurements of D-He3 proton output spectra, and (4) GXD measurements of the imploded core. The simulated diagnostics will be compared to the experimental measurements to provide an assessment of the accuracy of the design code predictions of hohlraum radiation temperature, capsule ablation rate, implosion velocity, shock flash areal density, and x-ray bang time. Post-shot versions of the design calculations are used to enhance the understanding of the experimental measurements and will assist in choosing parameters for subsequent shots and the path towards optimal ignition capsule tuning.
Research on stellarator-mirror fission-fusion hybrid
NASA Astrophysics Data System (ADS)
Moiseenko, V. E.; Kotenko, V. G.; Chernitskiy, S. V.; Nemov, V. V.; Ågren, O.; Noack, K.; Kalyuzhnyi, V. N.; Hagnestål, A.; Källne, J.; Voitsenya, V. S.; Garkusha, I. E.
2014-09-01
The development of a stellarator-mirror fission-fusion hybrid concept is reviewed. The hybrid comprises a fusion neutron source and a powerful sub-critical fast fission reactor core. The aim is the transmutation of spent nuclear fuel and safe fission energy production. In its fusion part, neutrons are generated in a deuterium-tritium (D-T) plasma confined magnetically in a stellarator-type system with an embedded magnetic mirror. The energy balance for such a system is analyzed on the basis of kinetic calculations. Neutron calculations have been performed with the MCNPX code, and the principal design of the reactor part has been developed. The neutron outflux at different outer parts of the reactor is calculated. Numerical simulations of the magnetic field structure have been performed for a model of the stellarator-mirror device obtained by switching off one or two toroidal-field coils of the Uragan-2M torsatron. The calculations predict the existence of closed magnetic surfaces under certain conditions. The confinement of fast particles in such a magnetic trap is also analyzed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fawcett, L.R. Jr.; Roberts, R.R. II; Hunter, R.E.
1988-03-01
Tritium production and activation of radiochemical detector foils in a sphere of ⁶LiD with an oralloy core irradiated by a central source of 14-MeV neutrons have been calculated and compared with experimental measurements. The experimental assembly consisted of an oralloy sphere surrounded by three solid ⁶LiD concentric shells, with ampules of ⁶LiH and ⁷LiH and activation foils located in several positions throughout the assembly. The Los Alamos Monte Carlo Neutron Photon Transport Code (MCNP) was used to calculate neutron transport throughout the system, tritium production in the ampules, and foil activation. The overall experimentally observed-to-calculated ratios of tritium production were 0.996 ± 2.5% in the ⁶Li ampules and 0.903 ± 5.2% in the ⁷Li ampules. Observed-to-calculated ratios for foil activation are also presented. 11 refs., 4 figs., 7 tabs.
Cloudy - simulating the non-equilibrium microphysics of gas and dust, and its observed spectrum
NASA Astrophysics Data System (ADS)
Ferland, Gary J.
2014-01-01
Cloudy is an open-source plasma/spectral simulation code, last described in the open-access journal Revista Mexicana de Astronomía y Astrofísica (Ferland et al. 2013, 2013RMxAA..49..137F). The project goal is a complete simulation of the microphysics of gas and dust over the full range of density, temperature, and ionization encountered in astrophysics, together with a prediction of the observed spectrum. Cloudy is one of the more widely used theory codes in astrophysics, with roughly 200 papers citing its documentation each year. It is developed by graduate students, postdocs, and an international network of collaborators. Cloudy is freely available on the web at trac.nublado.org, the user community can post questions at http://groups.yahoo.com/neo/groups/cloudy_simulations/info, and summer schools are organized to learn more about Cloudy and its use (http://cloud9.pa.uky.edu/~gary/cloudy/CloudySummerSchool/). The code's widespread use is made possible by extensive automatic testing: the code is exercised over its full range of applicability whenever the source is changed, and changes in predicted quantities are automatically detected along with any newly introduced problems. The code is designed to be autonomous and self-aware. It generates a report at the end of a calculation that summarizes any problems encountered, along with suggestions of potentially incorrect boundary conditions. This self-monitoring is a core feature, since the code is now often used to generate large MPI grids of simulations, making it impossible for a user to verify each calculation by hand. I will describe some challenges in developing a large physics code, with its many interconnected physical processes, many at the frontier of research in atomic or molecular physics, all in an open environment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bess, John D.
2014-03-01
PROTEUS is a zero-power research reactor based on a cylindrical graphite annulus with a central cylindrical cavity. The graphite annulus remains basically the same for all experimental programs, but the contents of the central cavity are changed according to the type of reactor being investigated. Through most of its service history, PROTEUS has represented light-water reactors, but from 1992 to 1996 PROTEUS was configured as a pebble-bed reactor (PBR) critical facility and designated as HTR-PROTEUS. The nomenclature was used to indicate that this series consisted of High Temperature Reactor experiments performed in the PROTEUS assembly. During this period, seventeen critical configurations were assembled and various reactor physics experiments were conducted. These experiments included measurements of criticality, differential and integral control rod and safety rod worths, kinetics, reaction rates, water ingress effects, and small sample reactivity effects (Ref. 3). HTR-PROTEUS was constructed, and the experimental program was conducted, for the purpose of providing experimental benchmark data for assessment of reactor physics computer codes. Considerable effort was devoted to benchmark calculations as a part of the HTR-PROTEUS program. References 1 and 2 provide detailed data for use in constructing models for codes to be assessed. Reference 3 is a comprehensive summary of the HTR-PROTEUS experiments and the associated benchmark program. This document draws freely from these references. Only Cores 9 and 10 are evaluated in this benchmark report due to similarities in their construction. The other core configurations of the HTR-PROTEUS program are evaluated in their respective reports as outlined in Section 1.0. Cores 9 and 10 were evaluated and determined to be acceptable benchmark experiments.
Development of an extensible dual-core wireless sensing node for cyber-physical systems
NASA Astrophysics Data System (ADS)
Kane, Michael; Zhu, Dapeng; Hirose, Mitsuhito; Dong, Xinjun; Winter, Benjamin; Häckell, Moritz; Lynch, Jerome P.; Wang, Yang; Swartz, A.
2014-04-01
The introduction of wireless telemetry into the design of monitoring and control systems has been shown to reduce system costs while simplifying installation. To date, wireless nodes proposed for sensing and actuation in cyber-physical systems have been designed around microcontrollers with a single computational pipeline (i.e., single-core microcontrollers). While concurrent code execution can be implemented on single-core microcontrollers, concurrency is emulated by splitting the pipeline's resources to support multiple threads of execution. For many applications, this approach to multi-threading is acceptable in terms of speed and function. However, some applications, such as feedback control, demand deterministic timing of code execution and maximum computational throughput. For these applications, the adoption of multi-core processor architectures represents one effective solution. Multi-core microcontrollers have multiple computational pipelines that can execute embedded code in parallel and can be interrupted independently of one another. In this study, a new wireless platform named Martlet is introduced, with a dual-core microcontroller adopted in its design. The dual-core design allows Martlet to dedicate one core to standard wireless sensor operations while reserving the other for embedded data processing and real-time feedback control law execution. Another distinct feature of Martlet is a standardized hardware interface that allows specialized daughter boards (termed wing boards) to be attached to the Martlet baseboard. This extensibility opens the opportunity to encapsulate specialized sensing and actuation functions in a wing board without altering the design of Martlet. In addition to describing the design of Martlet, a few example wings are detailed, along with experiments demonstrating Martlet's ability to monitor and control physical systems such as wind turbines and buildings.
Methodology comparison for gamma-heating calculations in material-testing reactors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lemaire, M.; Vaglio-Gaudard, C.; Lyoussi, A.
2015-07-01
The Jules Horowitz Reactor (JHR) is a Material-Testing Reactor (MTR) under construction at CEA Cadarache (French Alternative Energies and Atomic Energy Commission) in the south of France. It will typically host about 20 simultaneous irradiation experiments in the core and in the beryllium reflector. These experiments will help us better understand the complex phenomena occurring during the accelerated ageing of materials and the irradiation of nuclear fuels. Gamma heating, i.e. photon energy deposition, is mainly responsible for the temperature rise in non-fuelled zones of nuclear reactors, including the JHR internal structures and irradiation devices. As temperature is a key parameter for the physical models describing material behavior, accurate control of temperature, and hence of gamma heating, is required in the irradiation devices and samples in order to perform a suitable advanced analysis of future experimental results. From a broader point of view, the global attractiveness of the JHR as an MTR depends on its ability to monitor experimental parameters, including gamma heating, with high accuracy. Strict control of temperature levels is also necessary in terms of safety. As the JHR structures are warmed up by gamma heating, they must be appropriately cooled to prevent creep deformation or melting. Cooling-power sizing is based on calculated levels of gamma heating in the JHR. Due to these safety concerns, accurate calculation of gamma heating, with well-controlled bias and an associated uncertainty as low as possible, is all the more important. There are two main kinds of calculation bias: bias coming from nuclear data on the one hand, and bias coming from the physical approximations assumed by computer codes and by the general calculation route on the other. The former must be determined by comparison between calculation and experimental data; the latter by comparisons between codes and between methodologies. In this presentation, we focus on the latter kind of bias.
Nuclear heating is represented by the physical quantity called absorbed dose (energy deposition induced by particle-matter interactions, divided by mass). Its calculation with Monte Carlo codes is possible but computationally expensive, as it requires transport simulation of charged particles along with neutrons and photons. For that reason, the calculation of another physical quantity, called KERMA, is often preferred, as KERMA calculation with Monte Carlo codes only requires the transport of neutral particles. However, KERMA is only an estimator of the absorbed dose, and many conditions must be fulfilled for KERMA to equal absorbed dose, including the so-called condition of electronic equilibrium. Monte Carlo computations of absorbed dose also still involve some physical approximations, even though their number is limited. Some of these approximations are linked to the way Monte Carlo codes handle the transport simulation of charged particles and the productive and destructive interactions between photons, electrons and positrons. A huge variety of electromagnetic shower models tackle this topic, and differences in their implementation can lead to discrepancies between the absorbed dose values calculated by different Monte Carlo codes. The order of magnitude of such potential discrepancies should be quantified for JHR gamma-heating calculations. We consequently present a two-pronged plan. In a first phase, we intend to perform compared absorbed dose / KERMA Monte Carlo calculations in the JHR. In this way, we will study the presence or absence of electronic equilibrium in the different JHR structures and experimental devices, and we will give recommendations on the choice of KERMA or absorbed dose when calculating gamma heating in the JHR. In a second phase, we intend to perform compared TRIPOLI4 / MCNP absorbed dose calculations in a simplified JHR-representative geometry.
For this comparison, we will use the same nuclear data libraries for both codes (the European library JEFF3.1.1 and the photon library EPDL97) so as to isolate the effects of the electromagnetic shower models on the absorbed dose calculation. In this way, we hope to obtain insightful feedback on these models and their implementation in Monte Carlo codes. (authors)
Fast GPU-based Monte Carlo simulations for LDR prostate brachytherapy.
Bonenfant, Éric; Magnoux, Vincent; Hissoiny, Sami; Ozell, Benoît; Beaulieu, Luc; Després, Philippe
2015-07-07
The aim of this study was to evaluate the potential of bGPUMCD, a Monte Carlo algorithm executed on Graphics Processing Units (GPUs), for fast dose calculations in permanent prostate implant dosimetry. It also aimed to validate a low-dose-rate brachytherapy source in terms of TG-43 metrics and to use this source to compute dose distributions for permanent prostate implants in very short times. The physics of bGPUMCD was reviewed and extended to include Rayleigh scattering and fluorescence from photoelectric interactions for all materials involved. The radial and anisotropy functions were obtained for the Nucletron SelectSeed in TG-43 conditions. These functions were compared to those found in the MD Anderson Imaging and Radiation Oncology Core brachytherapy source registry, which are considered the TG-43 reference values. After appropriate calibration of the source, permanent prostate implant dose distributions were calculated for four patients and compared to an already validated Geant4 algorithm. The radial function calculated from bGPUMCD showed excellent agreement (differences within 1.3%) with TG-43 accepted values. The anisotropy functions at r = 1 cm and r = 4 cm were within 2% of TG-43 values for angles over 17.5°. For permanent prostate implants, Monte Carlo-based dose distributions with a statistical uncertainty of 1% or less for the target volume were obtained in 30 s or less on 1 × 1 × 1 mm3 calculation grids. Dosimetric indices were very similar (within 2.7%) to those obtained with a validated, independent Monte Carlo code (Geant4) performing the calculations for the same cases in a much longer time (tens of minutes to more than an hour). bGPUMCD is a promising code that lets one envision the use of Monte Carlo techniques in a clinical environment, with sub-minute execution times on a standard workstation. Future work will explore the use of this code with an inverse planning method to provide a complete Monte Carlo-based planning solution.
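The clinical relevance of the reported runtime rests on the standard Monte Carlo scaling law: statistical uncertainty falls as 1/√N with the number of simulated histories, so run time grows quadratically as the uncertainty target tightens. A sketch (only the 1%-in-30-s operating point is from the abstract; the function and other numbers are illustrative):

```python
# Monte Carlo statistical uncertainty scales as 1/sqrt(N_histories), so the
# run time needed to reach a target relative uncertainty scales as
# (sigma0 / target)^2 relative to a reference run.

def time_for_uncertainty(t0: float, sigma0: float, target: float) -> float:
    """Estimated run time to reach `target` relative uncertainty, given a
    reference run of duration t0 that achieved uncertainty sigma0."""
    return t0 * (sigma0 / target) ** 2

t0, sigma0 = 30.0, 0.01  # ~1% uncertainty in ~30 s, as reported for bGPUMCD
print(time_for_uncertainty(t0, sigma0, 0.005))  # halving uncertainty -> 120.0 s
```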
Fast GPU-based Monte Carlo simulations for LDR prostate brachytherapy
NASA Astrophysics Data System (ADS)
Bonenfant, Éric; Magnoux, Vincent; Hissoiny, Sami; Ozell, Benoît; Beaulieu, Luc; Després, Philippe
2015-07-01
The aim of this study was to evaluate the potential of bGPUMCD, a Monte Carlo algorithm executed on Graphics Processing Units (GPUs), for fast dose calculations in permanent prostate implant dosimetry. It also aimed to validate a low dose rate brachytherapy source in terms of TG-43 metrics and to use this source to compute dose distributions for permanent prostate implant in very short times. The physics of bGPUMCD was reviewed and extended to include Rayleigh scattering and fluorescence from photoelectric interactions for all materials involved. The radial and anisotropy functions were obtained for the Nucletron SelectSeed in TG-43 conditions. These functions were compared to those found in the MD Anderson Imaging and Radiation Oncology Core brachytherapy source registry, which are considered the TG-43 reference values. After appropriate calibration of the source, permanent prostate implant dose distributions were calculated for four patients and compared to an already validated Geant4 algorithm. The radial function calculated from bGPUMCD showed excellent agreement (differences within 1.3%) with TG-43 accepted values. The anisotropy functions at r = 1 cm and r = 4 cm were within 2% of TG-43 values for angles over 17.5°. For permanent prostate implants, Monte Carlo-based dose distributions with a statistical uncertainty of 1% or less for the target volume were obtained in 30 s or less for 1 × 1 × 1 mm³ calculation grids. Dosimetric indices were very similar (within 2.7%) to those obtained with a validated, independent Monte Carlo code (Geant4) performing the calculations for the same cases in a much longer time (tens of minutes to more than an hour). bGPUMCD is a promising code that lets one envision the use of Monte Carlo techniques in a clinical environment, with sub-minute execution times on a standard workstation. Future work will explore the use of this code with an inverse planning method to provide a complete Monte Carlo-based planning solution.
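The TG-43 radial dose function validated above can be illustrated with a short sketch. This is not the bGPUMCD implementation; it is a minimal pure-Python illustration of the TG-43 line-source formalism, with a hypothetical active length and a synthetic dose-rate function supplied by the caller.

```python
import math

def radial_dose_function(dose_rate, r, L=0.34, r0=1.0):
    """TG-43 radial dose function g_L(r) on the transverse axis (theta = 90 deg).

    dose_rate -- callable giving the transverse-axis dose rate at radius r (cm)
    L         -- active source length in cm (0.34 cm is an assumed value here)
    r0        -- TG-43 reference radius, 1 cm
    """
    def G(rr):  # line-source geometry function at theta = 90 degrees
        return 2.0 * math.atan(L / (2.0 * rr)) / (L * rr)
    return (dose_rate(r) / G(r)) / (dose_rate(r0) / G(r0))

# Synthetic check: if the dose rate is exactly geometry times exp(-mu*r),
# the geometry factor cancels and g_L(r) reduces to exp(-mu*(r - r0)).
mu = 0.1
synthetic = lambda rr: (2.0 * math.atan(0.34 / (2.0 * rr)) / (0.34 * rr)) * math.exp(-mu * rr)
g2 = radial_dose_function(synthetic, 2.0)
```

By construction g_L(r0) = 1, which is the usual sanity check before comparing against registry values.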
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kostuk, M.; Uram, T. D.; Evans, T.
For the first time, an automatically triggered, between-pulse fusion science analysis code was run on-demand at a remotely located supercomputer at Argonne Leadership Computing Facility (ALCF, Lemont, IL) in support of in-process experiments being performed at DIII-D (San Diego, CA). This represents a new paradigm for combining geographically distant experimental and high performance computing (HPC) facilities to provide enhanced data analysis that is quickly available to researchers. Enhanced analysis improves the understanding of the current pulse, translating into more efficient use of experimental resources and higher-quality resultant science. The analysis code used here, called SURFMN, calculates the magnetic structure of the plasma using Fourier transforms. Increasing the number of Fourier components provides a more accurate determination of the stochastic boundary layer near the plasma edge by better resolving magnetic islands, but requires 26 minutes to complete using local DIII-D resources, putting it well outside the useful time range for between-pulse analysis. These islands relate to confinement and edge localized mode (ELM) suppression, and may be controlled by adjusting coil currents for the next pulse. Argonne has ensured on-demand execution of SURFMN by providing a reserved queue, a specialized service that launches the code after receiving an automatic trigger, and with network access from the worker nodes for data transfer. Runs are executed on 252 cores of ALCF's Cooley cluster and the data is available locally at DIII-D within three minutes of triggering. The original SURFMN design limits additional improvements with more cores; however, our work shows a path forward where codes that benefit from thousands of processors can run between pulses.
2018-02-01
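The Fourier decomposition that SURFMN performs on the magnetic structure can be sketched in miniature. This is not SURFMN itself; it is a hedged, pure-Python illustration of extracting the amplitude of one harmonic from a perturbation sampled uniformly on a closed flux-surface angle, the kind of per-mode analysis whose cost grows with the number of components retained.

```python
import cmath, math

def mode_amplitude(samples, m):
    """Amplitude of the m-th Fourier harmonic of a periodic signal
    sampled uniformly over one period (a normalized DFT coefficient)."""
    N = len(samples)
    c = sum(samples[k] * cmath.exp(-2j * math.pi * m * k / N)
            for k in range(N)) / N
    return 2.0 * abs(c) if m > 0 else abs(c)

# Synthetic perturbation with known content: 3 cos(3*theta) + 0.5 sin(7*theta)
N = 256
sig = [3.0 * math.cos(3 * 2 * math.pi * k / N)
       + 0.5 * math.sin(7 * 2 * math.pi * k / N) for k in range(N)]
```

In a real spectral analysis this extraction would run over many poloidal and toroidal mode numbers on every flux surface, which is what makes the full calculation expensive enough to benefit from HPC resources.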
Development and Assessment of CTF for Pin-resolved BWR Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salko, Robert K; Wysocki, Aaron J; Collins, Benjamin S
2017-01-01
CTF is the modernized and improved version of the subchannel code COBRA-TF. It has been adopted by the Consortium for Advanced Simulation of Light Water Reactors (CASL) for subchannel analysis applications and thermal hydraulic feedback calculations in the Virtual Environment for Reactor Applications Core Simulator (VERA-CS). CTF is now jointly developed by Oak Ridge National Laboratory and North Carolina State University. Until now, CTF has been used for pressurized water reactor modeling and simulation in CASL, but in the future it will be extended to boiling water reactor designs. This extension required development activities to integrate the code into the VERA-CS workflow and to make it more efficient for full-core, pin-resolved simulations. Additionally, there is a significant emphasis on producing high quality tools that follow a regimented software quality assurance plan in CASL. Part of this plan involves performing validation and verification assessments on the code that are easily repeatable and tied to specific code versions. This work has resulted in the CTF validation and verification matrix being expanded to include several two-phase flow experiments, including the General Electric 3×3 facility and the BWR Full-Size Fine Mesh Bundle Tests (BFBT). Agreement with both experimental databases is reasonable, but the BFBT analysis reveals a tendency of CTF to overpredict void, especially in the slug flow regime. The execution of these tests is fully automated, the analysis is documented in the CTF Validation and Verification manual, and the tests have become part of the CASL continuous regression testing system. This paper summarizes these recent developments and some of the two-phase assessments that have been performed on CTF.
Addressing the challenges of standalone multi-core simulations in molecular dynamics
NASA Astrophysics Data System (ADS)
Ocaya, R. O.; Terblans, J. J.
2017-07-01
Computational modelling in material science involves mathematical abstractions of force fields between particles with the aim to postulate, develop and understand materials by simulation. The aggregated pairwise interactions of the material's particles lead to a deduction of its macroscopic behaviours. For practically meaningful macroscopic scales, a large amount of data is generated, leading to vast execution times. Simulation times of hours, days or weeks for moderately sized problems are not uncommon. The reduction of simulation times, improved result accuracy and the associated software and hardware engineering challenges are the main motivations for much of the ongoing research in the computational sciences. This contribution is concerned mainly with simulations that can be done on a "standalone" computer using Message Passing Interface (MPI) parallel code running on hardware platforms with wide specifications, such as single/multi-processor, multi-core machines with minimal reconfiguration for upward scaling of computational power. The widely available, documented and standardized MPI library provides this functionality through the MPI_Comm_size(), MPI_Comm_rank() and MPI_Reduce() functions. A survey of the literature shows that relatively little is written with respect to the efficient extraction of the inherent computational power in a cluster. In this work, we discuss the main avenues available to tap into this extra power without compromising computational accuracy. We also present methods to overcome the high inertia encountered in single-node-based computational molecular dynamics. We begin by surveying the current state of the art and discuss what it takes to achieve parallelism, efficiency and enhanced computational accuracy through program threads and message passing interfaces. Several code illustrations are given. The pros and cons of writing raw code as opposed to using heuristic, third-party code are also discussed.
The growing trend towards graphics processing units and virtual computing clouds for high-performance computing is also discussed. Finally, we present the comparative results of vacancy formation energy calculations using our own parallelized standalone code called Verlet-Stormer velocity (VSV) operating on 30,000 copper atoms. The code is based on the Sutton-Chen implementation of the Finnis-Sinclair pairwise embedded atom potential. A link to the code is also given.
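The integration loop at the heart of a code like VSV can be sketched serially before any MPI decomposition is applied. The sketch below is illustrative only: it uses a Lennard-Jones pair potential as a stand-in (the actual VSV code uses the Sutton-Chen embedded-atom form) and integrates the separation coordinate of a two-body system with the velocity-Verlet (Verlet-Störmer) scheme; conservation of total energy is the customary sanity check.

```python
import math

def lj_energy(r, eps=1.0, sigma=1.0):
    """Lennard-Jones pair potential U(r) in reduced units."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

def lj_force(r, eps=1.0, sigma=1.0):
    """Radial pair force F(r) = -dU/dr = 24*eps*(2*(sigma/r)^12 - (sigma/r)^6)/r."""
    sr6 = (sigma / r) ** 6
    return 24.0 * eps * (2.0 * sr6 * sr6 - sr6) / r

def velocity_verlet(x, v, m, dt, steps):
    """Velocity-Verlet integration of the relative coordinate of a two-body system."""
    f = lj_force(x)
    for _ in range(steps):
        v += 0.5 * dt * f / m   # half kick
        x += dt * v             # drift
        f = lj_force(x)         # force at the new position
        v += 0.5 * dt * f / m   # half kick
    return x, v

# Start at rest near the potential minimum and integrate ~6 oscillation periods.
x0, v0, m = 1.2, 0.0, 1.0
e0 = lj_energy(x0) + 0.5 * m * v0 * v0
x1, v1 = velocity_verlet(x0, v0, m, 1.0e-3, 5000)
e1 = lj_energy(x1) + 0.5 * m * v1 * v1
```

In the parallel version, the force loop over particle pairs is the part distributed across MPI ranks, with MPI_Reduce-style sums collecting partial forces and energies.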
Influence of Non-spherical Initial Stellar Structure on the Core-Collapse Supernova Mechanism
NASA Astrophysics Data System (ADS)
Couch, Sean M.
I review the state of investigation into the impact that nonspherical stellar progenitor structure has on the core-collapse supernova mechanism. Although modeling stellar evolution relies on 1D spherically symmetric calculations, massive stars are not truly spherical. In the stellar evolution codes, this fact is accounted for by "fixes" such as mixing length theory and attendant modifications. Of particular relevance to the supernova mechanism, the Si- and O-burning shells surrounding the iron core at the point of collapse can be violently convective, with convective speeds of hundreds of km s-1. It has recently been shown by a number of groups that the presence of nonspherical perturbations in the layers surrounding the collapsing iron core can have a favorable impact on the likelihood for shock revival and explosion via the neutrino heating mechanism. This is due in large part to the strengthening of turbulence behind the stalled shock: finite-amplitude seed perturbations speed the growth of the convection that drives the post-shock turbulence. Efforts are now underway to simulate the final minutes of stellar evolution to core-collapse in 3D with the aim of generating realistic multidimensional initial conditions for use in simulations of the supernova mechanism.
3D Visualization of Monte-Carlo Simulations of HZE Track Structure and Initial Chemical Species
NASA Technical Reports Server (NTRS)
Plante, Ianik; Cucinotta, Francis A.
2009-01-01
Heavy-ion biophysics is important for space radiation risk assessment [1] and hadron-therapy [2]. The characteristics of heavy-ion tracks include a very high energy-deposition region close to the track (<20 nm), denoted the track core, and an outer penumbra region consisting of individual secondary electrons (δ-rays). A still-open question is the radiobiological effect of δ-rays relative to the track core. Of importance is the induction of double-strand breaks (DSB) [3] and oxidative damage to the biomolecules and the tissue matrix, considered the most important lesions for acute and long-term effects of radiation. In this work, we have simulated a 56Fe26+ ion track of 1 GeV/amu with our Monte-Carlo code RITRACKS [4]. The simulation results have been used to calculate the energy deposition and initial chemical species in a "voxelized" space, which is then visualized in 3D. Several voxels with dose >1000 Gy are found in the penumbra, some located 0.1 mm from the track core. In computational models, the DSB induction probability is calculated with radial dose [6], which may not take into account the higher RBE of electron track ends for DSB induction. Therefore, these simulations should help improve models of DSB induction and our understanding of heavy-ion biophysics.
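The "voxelized" dose accumulation described above can be sketched as a simple binning step. This is not the RITRACKS code; the 20 nm voxel size and unit (water) density below are illustrative, and the conversion assumes 1 eV = 1.602e-19 J.

```python
from collections import defaultdict

def voxelize(depositions, voxel_nm=20.0, density_g_cm3=1.0):
    """Bin point energy depositions (x, y, z in nm, energy in eV) into cubic
    voxels and convert the summed energy per voxel to dose in Gy (J/kg)."""
    energy = defaultdict(float)
    for x, y, z, e in depositions:
        key = (int(x // voxel_nm), int(y // voxel_nm), int(z // voxel_nm))
        energy[key] += e
    # voxel mass in kg: density (g/cm^3 -> kg/m^3) times voxel volume (m^3)
    mass_kg = density_g_cm3 * 1e3 * (voxel_nm * 1e-9) ** 3
    return {k: e * 1.602e-19 / mass_kg for k, e in energy.items()}

# A single 1 keV deposition in a 20 nm water voxel already yields ~2e4 Gy,
# consistent with the very high local doses quoted for track cores.
dose = voxelize([(5.0, 5.0, 5.0, 1000.0)])
```

The tiny voxel mass is what makes nanodosimetric doses so large compared to macroscopic dosimetry.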
Conceptual design study of small long-life PWR based on thorium cycle fuel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Subkhi, M. Nurul; Su'ud, Zaki; Waris, Abdul
2014-09-30
The neutronic performance of a small long-life Pressurized Water Reactor (PWR) using thorium-cycle-based fuel has been investigated. The thorium cycle, which has a higher conversion ratio in the thermal region than the uranium cycle, produces a significant amount of 233U during burnup. The cell burn-up calculations were performed by the PIJ SRAC code using a nuclear data library based on JENDL 3.3, while the multi-energy-group diffusion calculations were optimized in whole-core cylindrical two-dimensional R-Z geometry by SRAC-CITATION. This study introduces a thorium nitride fuel system with ZIRLO as the cladding material. The optimized 350 MWt small long-life PWR shows small excess reactivity and reduced power peaking during its operation.
NASA Technical Reports Server (NTRS)
Funderburgh, J. L.; Funderburgh, M. L.; Brown, S. J.; Vergnes, J. P.; Hassell, J. R.; Mann, M. M.; Conrad, G. W.; Spooner, B. S. (Principal Investigator)
1993-01-01
Amino acid sequence from tryptic peptides of three different bovine corneal keratan sulfate proteoglycan (KSPG) core proteins (designated 37A, 37B, and 25) showed similarities to the sequence of a chicken KSPG core protein, lumican. Bovine lumican cDNA was isolated from a bovine corneal expression library by screening with chicken lumican cDNA. The bovine cDNA codes for a 342-amino acid protein, M(r) 38,712, containing amino acid sequences identified in the 37B KSPG core protein. The bovine lumican is 68% identical to chicken lumican, with an 83% identity excluding the N-terminal 40 amino acids. The locations of the 6 cysteines and 4 consensus N-glycosylation sites in the bovine sequence were identical to those in chicken lumican. Bovine lumican had about 50% identity to bovine fibromodulin and 20% identity to bovine decorin and biglycan. About two-thirds of the lumican protein consists of a series of 10 leucine-rich repeats that occur in regions of calculated high beta-hydrophobic moment, suggesting that the leucine-rich repeats contribute to beta-sheet formation in these proteins. Sequences obtained from the 37A and 25 core proteins were absent in bovine lumican, thus predicting a unique primary structure and separate mRNA for each of the three bovine KSPG core proteins.
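The "calculated high beta-hydrophobic moment" mentioned above refers to the Eisenberg hydrophobic-moment construction, which can be sketched as follows. The hydrophobicity values below are approximate entries from the Eisenberg consensus scale, and the 160° periodicity angle is the conventional probe for beta-strands; treat both as illustrative assumptions rather than the values used in the paper.

```python
import math

# Approximate Eisenberg consensus hydrophobicity values (illustrative subset)
HYDRO = {'L': 1.06, 'I': 1.38, 'V': 1.08, 'F': 1.19, 'A': 0.62,
         'G': 0.48, 'S': -0.18, 'T': -0.05, 'N': -0.78, 'D': -0.90,
         'K': -1.50, 'E': -0.74, 'R': -2.53, 'Q': -0.85, 'P': 0.12}

def hydrophobic_moment(seq, delta_deg=160.0):
    """Mean Eisenberg hydrophobic moment of a sequence; delta ~ 160 degrees
    probes the alternating (two-residue) periodicity of a beta-strand."""
    d = math.radians(delta_deg)
    s = sum(HYDRO[a] * math.sin(n * d) for n, a in enumerate(seq))
    c = sum(HYDRO[a] * math.cos(n * d) for n, a in enumerate(seq))
    return math.hypot(s, c) / len(seq)
```

A strictly alternating hydrophobic/polar stretch (e.g. Leu-Ser repeats) has one hydrophobic face in a beta-strand, so its moment peaks near 180° periodicity and is much weaker at the helical 100° angle.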
HowTo - Easy use of global unique identifier
NASA Astrophysics Data System (ADS)
Czerniak, A.; Fleischer, D.; Schirnick, C.
2013-12-01
The GEOMAR sample and core repository holds several thousand samples and cores collected over the last decades. In the current project, we bring this collection to the next generation by tagging every sample and core with a unique identifier, in our case the International Geo Sample Number (IGSN). This work is done with our digital-ink and handwriting-recognition implementation. The smart-pen technology saves time and resources when recording the information on every sample or core. The recording procedure follows several systematic steps: 1. Gather all information about the core or sample, such as the cruise number and the responsible person. 2. Tag it with a unique identifier, in our case a QR code. 3. Write down the location of the sample or core. After the information is transmitted from the smart pen (currently via USB, though wireless is also an option) to our server infrastructure, linking to other information begins. Since each sample or core is registered in our Virtual Research Environment (VRE) under its unique identifier (IGSN), it can be located, and the QR code links back from the core or sample to the IGSN and the associated scientific information.
MATCHED-INDEX-OF-REFRACTION FLOW FACILITY FOR FUNDAMENTAL AND APPLIED RESEARCH
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piyush Sabharwall; Carl Stoots; Donald M. McEligot
2014-11-01
Significant challenges face reactor designers with regard to thermal-hydraulic design and associated modeling for advanced reactor concepts. Computational thermal-hydraulic codes solve only a piece of the core; there is a need for a whole-core dynamics system code with local resolution to investigate and understand flow behavior with all the relevant physics and thermo-mechanics. The matched-index-of-refraction (MIR) flow facility at Idaho National Laboratory (INL) has a unique capability to contribute to the development of validated computational fluid dynamics (CFD) codes through the use of state-of-the-art optical measurement techniques, such as Laser Doppler Velocimetry (LDV) and Particle Image Velocimetry (PIV). PIV is a non-intrusive velocity measurement technique that tracks flow by imaging the movement of small tracer particles within a fluid. At the heart of a PIV calculation is the cross-correlation algorithm, which is used to estimate the displacement of particles in some small part of the image over the time span between two images. Generally, the displacement is indicated by the location of the largest correlation peak. To quantify these measurements accurately, sophisticated processing algorithms correlate the locations of particles within the image to estimate the velocity (Ref. 1). Prior to use in reactor design, CFD codes have to be experimentally validated, which requires rigorous experimental measurements to produce high-quality, multi-dimensional flow field data with error quantification methodologies.
Computational techniques with supporting test data may be needed to address the heat transfer from the fuel to the coolant during the transition from turbulent to laminar flow, including the possibility of an early laminarization of the flow (Refs. 2 and 3) (laminarization occurs when the coolant velocity is theoretically in the turbulent regime, but the heat transfer properties are indicative of the coolant velocity being in the laminar regime). Such studies are complicated enough that computational fluid dynamics (CFD) models may not converge to the same conclusion. Thus, experimentally scaled thermal-hydraulic data with uncertainties should be developed to support modeling and simulation for verification and validation activities. The fluid/solid index-of-refraction matching technique allows optical access in and around geometries that would otherwise be impossible, while the large test section of the INL system provides better spatial and temporal resolution than comparable facilities. Benchmark data for assessing computational fluid dynamics can be acquired for external flows, internal flows, and coupled internal/external flows for a better understanding of the physical phenomena of interest. The core objective of this study is to describe the MIR facility and its capabilities, and to outline current development areas for uncertainty quantification, mainly the uncertainty surface method and the cross-correlation method. Using these methods, we anticipate establishing a suitable approach to quantify PIV uncertainty for experiments performed in the MIR facility.
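The PIV cross-correlation step described above can be sketched directly. This is a minimal, brute-force illustration, not the facility's processing code: the displacement estimate is simply the integer shift that maximizes the direct cross-correlation between two interrogation windows; production algorithms add FFT acceleration and sub-pixel peak fitting.

```python
def cross_correlate_shift(win_a, win_b, max_shift=3):
    """Estimate the integer-pixel displacement (dy, dx) of win_b relative to
    win_a by locating the peak of the direct cross-correlation."""
    rows, cols = len(win_a), len(win_a[0])
    best, best_shift = None, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            s = 0.0
            for y in range(rows):
                for x in range(cols):
                    y2, x2 = y + dy, x + dx
                    if 0 <= y2 < rows and 0 <= x2 < cols:
                        s += win_a[y][x] * win_b[y2][x2]
            if best is None or s > best:
                best, best_shift = s, (dy, dx)
    return best_shift

# Synthetic interrogation windows: one tracer particle moves from (3, 3) to (5, 4).
a = [[0.0] * 8 for _ in range(8)]; a[3][3] = 1.0
b = [[0.0] * 8 for _ in range(8)]; b[5][4] = 1.0
```

Dividing the recovered displacement by the inter-frame time gives the local velocity estimate for that window.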
Coupling procedure for TRANSURANUS and KTF codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jimenez, J.; Alglave, S.; Avramova, M.
2012-07-01
The nuclear industry aims to ensure safe and economic operation of each single fuel rod introduced in the reactor core. This goal is even more challenging nowadays due to the current strategy of going for higher burn-up (fuel cycles of 18 or 24 months) and longer residence times. To achieve that goal, fuel modeling is the key to predicting fuel rod behavior and lifetime under thermal and pressure loads, corrosion, and irradiation. In this context, fuel performance codes, such as TRANSURANUS, are used to improve the fuel rod design. The modeling capabilities of the above-mentioned tools can be significantly improved if they are coupled with a thermal-hydraulic code in order to have a better description of the flow conditions within the rod bundle. For LWR applications, a good representation of the two-phase flow within the fuel assembly is necessary in order to have a best-estimate calculation of the heat transfer inside the bundle. In this paper we present the coupling methodology of TRANSURANUS with KTF (Karlsruhe Two-phase Flow subchannel code), as well as selected results of the coupling proof of principle. (authors)
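Code coupling of the kind described, exchanging thermal-hydraulic and fuel-performance fields until they are mutually consistent, is commonly realized as a Picard (fixed-point) iteration. The sketch below is a generic illustration with toy linear surrogate models, not the actual TRANSURANUS/KTF interface; the exchanged quantity (here a single interface temperature and heat flux) and the surrogate coefficients are invented for the example.

```python
def couple(th_solve, fuel_solve, t_guess=550.0, tol=1e-6, max_iter=100):
    """Picard iteration between a thermal-hydraulic solver and a
    fuel-performance solver until the exchanged coolant temperature converges."""
    t = t_guess
    for i in range(max_iter):
        q = fuel_solve(t)       # fuel code: rod surface heat flux from coolant temp
        t_new = th_solve(q)     # TH code: coolant temperature from heat flux
        if abs(t_new - t) < tol:
            return t_new, i + 1
        t = t_new
    raise RuntimeError("coupling did not converge")

# Toy linear surrogates (purely illustrative):
def toy_fuel(t_coolant):        # heat flux falls as the coolant heats up (W/m^2)
    return 2.0e5 * (1.0 - t_coolant / 2000.0)

def toy_th(q):                  # coolant temperature rises with heat flux (K)
    return 290.0 + 1.0e-3 * q

t_star, iters = couple(toy_th, toy_fuel)
```

For these surrogates the combined map is t → 490 − 0.1·t, a contraction, so the iteration converges quickly to the consistent interface state; real couplings add under-relaxation when the combined map is stiffer.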
DOE Office of Scientific and Technical Information (OSTI.GOV)
Earl, Christopher; Might, Matthew; Bagusetty, Abhishek
This study presents Nebo, a declarative domain-specific language embedded in C++ for discretizing partial differential equations for transport phenomena on multiple architectures. Application programmers use Nebo to write code that appears sequential but can be run in parallel, without editing the code. Currently Nebo supports single-thread execution, multi-thread execution, and many-core (GPU-based) execution. With single-thread execution, Nebo performs on par with code written by domain experts. With multi-thread execution, Nebo can linearly scale (with roughly 90% efficiency) up to 12 cores, compared to its single-thread execution. Moreover, Nebo’s many-core execution can be over 140x faster than its single-thread execution.
2016-01-26
Mohr, Stephan; Dawson, William; Wagner, Michael; Caliste, Damien; Nakajima, Takahito; Genovese, Luigi
2017-10-10
We present CheSS, the "Chebyshev Sparse Solvers" library, which has been designed to solve typical problems arising in large-scale electronic structure calculations using localized basis sets. The library is based on a flexible and efficient expansion in terms of Chebyshev polynomials and presently features the calculation of the density matrix, the calculation of arbitrary matrix powers, and the extraction of eigenvalues in a selected interval. CheSS is able to exploit the sparsity of the matrices and scales linearly with respect to the number of nonzero entries, making it well-suited for large-scale calculations. The approach is particularly adapted for setups leading to small spectral widths of the involved matrices and outperforms alternative methods in this regime. By coupling CheSS to the DFT code BigDFT, we show that such a favorable setup is indeed possible in practice. In addition, the approach based on Chebyshev polynomials can be massively parallelized, and CheSS exhibits excellent scaling up to thousands of cores even for relatively small matrix sizes.
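The Chebyshev-expansion machinery at the heart of CheSS can be illustrated on a scalar function. The sketch below computes expansion coefficients at Chebyshev-Gauss nodes and evaluates the series with the three-term T_k recurrence; it is an illustration of the underlying mathematics, not CheSS itself (in the library the same recurrence is applied to sparse matrices, which is where the linear scaling with nonzero entries comes from).

```python
import math

def cheb_coeffs(f, order):
    """Chebyshev expansion coefficients of f on [-1, 1], sampled at
    Chebyshev-Gauss nodes x_j = cos(pi*(j + 1/2)/N)."""
    N = order + 1
    fv = [f(math.cos(math.pi * (j + 0.5) / N)) for j in range(N)]
    return [(2.0 / N) * sum(fv[j] * math.cos(k * math.pi * (j + 0.5) / N)
                            for j in range(N)) for k in range(N)]

def cheb_eval(coeffs, x):
    """Evaluate c_0/2 + sum_{k>=1} c_k T_k(x) via T_{k+1} = 2x T_k - T_{k-1}."""
    t_prev, t_cur = 1.0, x                       # T_0, T_1
    s = 0.5 * coeffs[0] + coeffs[1] * x
    for c in coeffs[2:]:
        t_prev, t_cur = t_cur, 2.0 * x * t_cur - t_prev
        s += c * t_cur
    return s

# For a smooth function the coefficients decay geometrically, so a low
# order already reproduces exp(x) to near machine precision on [-1, 1].
coeffs = cheb_coeffs(math.exp, 12)
max_err = max(abs(cheb_eval(coeffs, x) - math.exp(x)) for x in (-0.75, 0.0, 0.9))
```

The rapid coefficient decay for functions with small effective spectral width is exactly the regime in which the abstract reports CheSS outperforming alternatives.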
Effect on magnetic properties of germanium encapsulated C60 fullerene
NASA Astrophysics Data System (ADS)
Umran, Nibras Mossa; Kumar, Ranjan
2013-02-01
Structural and electronic properties of Gen (n = 1-4) doped C60 fullerene are investigated with ab initio density functional theory calculations using the SIESTA code. The pseudopotentials are constructed using the Troullier-Martins scheme to describe the interaction of valence electrons with the atomic cores. For endohedral doping, we find that the complexes remain stable as more germanium atoms are embedded, until the cage eventually breaks down. We also find that the binding energy and electron affinity increase, and that the magnetic moment shows oscillating behavior, as the number of semiconductor atoms in the C60 fullerene increases.
NASA Astrophysics Data System (ADS)
Semidotskiy, I. I.; Kurskiy, A. S.
2013-12-01
The paper describes conditions of an ATWS-type event with virtually complete cessation of the feed-water flow at the operating power level of a VK-50-type reactor. Under these conditions, the role of spatial kinetics in the feedback between thermohydraulic and nuclear processes with bulk boiling of the coolant in the reactor core is clearly seen. This feature determines the specific character of the experimental data obtained and their suitability for verification of the associated codes used for calculating water-cooled, water-moderated reactors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martin, F.; Zimmerman, B.; Heard, F.
A number of N Reactor core heatup studies have been performed using the TRUMP-BD computer code. These studies were performed to address questions concerning the dependency of results on potential variations in the material properties and/or modeling assumptions. This report describes and documents a series of 31 TRUMP-BD runs that were performed to determine the sensitivity of calculated inner-fuel temperatures to a variety of TRUMP input parameters and also to a change in the node density in a high-temperature-gradient region. The results of this study are based on the 32-in. model. 18 refs., 17 figs., 2 tabs.
Sn analysis of the TRX metal lattices with ENDF/B Version III data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wheeler, F.J.; Pearlstein, S.
1975-03-01
Two critical assemblies, designated as thermal-reactor benchmarks TRX-1 and TRX-2 for ENDF/B data testing, were analyzed using the one-dimensional Sn-theory code SCAMP. The two assemblies were simple lattices of aluminum-clad, uranium-metal fuel rods in triangular arrays with D2O as moderator and reflector. The fuel was low-enriched (1.3 percent 235U), 0.387 inch in diameter, and had an active height of 48 inches. The volume ratio of water to uranium was 2.35 for the TRX-1 lattice and 4.02 for TRX-2. Full-core Sn calculations based on Version III data were performed for these assemblies, and the results obtained were compared with the measured values of the multiplication factors, the ratio of epithermal-to-thermal neutron capture in 238U, the ratio of epithermal-to-thermal fission in 235U, the ratio of 238U fission to 235U fission, and the ratio of capture in 238U to fission in 235U. Reaction rates were obtained from a central region of the full-core problems. Multigroup cross sections for the reactor calculation were obtained from Sn cell calculations with resonance self-shielding calculated using the RABBLE treatment. The results of the analyses are generally consistent with results obtained by other investigators. (auth)
Understanding the haling power depletion (HPD) method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Levine, S.; Blyth, T.; Ivanov, K.
2012-07-01
The Pennsylvania State University (PSU) is using the university version of the Studsvik Scandpower Code System (CMS) for research and education purposes. Preparations have been made to incorporate the CMS into the PSU nuclear engineering graduate course 'Nuclear Fuel Management'. The information presented in this paper was developed during the preparation of the material for the course, in which the Haling Power Depletion (HPD) was presented for the first time. The HPD method has been criticized as invalid by many in the field, even though it has been successfully applied at PSU for the past 20 years. It was noticed that the radial power distribution (RPD) for low-leakage cores during depletion remained similar to that of the HPD during most of the cycle. Thus, the HPD may conveniently be used mainly for low-leakage cores. Studies were then made to better understand the HPD, and the results are presented in this paper. Many different core configurations can be computed quickly with the HPD, without using Burnable Poisons (BP), to produce several excellent low-leakage core configurations that are viable for power production. Once an HPD core configuration is chosen for further analysis, techniques are available for establishing the BP design to prevent violating any of the safety constraints in such HPD-calculated cores. In summary, this paper shows that the HPD method can be used to guide the design of low-leakage cores. (authors)
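The Haling principle, depleting with the power shape that remains constant over the cycle, lends itself to a simple fixed-point sketch. The toy one-group nodal model below (linear kinf-versus-burnup, node power taken proportional to node kinf) is purely illustrative and is not the CMS implementation; real nodal codes obtain the power shape from a diffusion solve.

```python
def haling_shape(k0, slope, cycle_burnup, tol=1e-10, max_iter=200):
    """Iterate to the self-consistent end-of-cycle power shape of a toy
    one-group nodal model: deplete each node with the assumed constant
    shape, recompute the shape from the depleted kinf values, and repeat
    until the shape stops changing (the Haling fixed point)."""
    n = len(k0)
    p = [1.0 / n] * n                                   # flat initial guess
    for it in range(max_iter):
        # node burnup proportional to node power; flat shape gives cycle_burnup
        burnup = [p_i * n * cycle_burnup for p_i in p]
        kinf = [k - slope * b for k, b in zip(k0, burnup)]
        total = sum(kinf)
        p_new = [k / total for k in kinf]               # normalized power shape
        if max(abs(a - b) for a, b in zip(p_new, p)) < tol:
            return p_new, it + 1
        p = p_new
    raise RuntimeError("Haling iteration did not converge")

# Three nodes with different beginning-of-cycle kinf; slope in 1/(GWd/t)
shape, iters = haling_shape([1.10, 1.05, 1.00], 0.01, 10.0)
```

Because the higher-reactivity node depletes faster under its own higher power, the converged shape is flatter than the beginning-of-cycle reactivity distribution, which is the qualitative attraction of the HPD for low-leakage cores.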
Rubus: A compiler for seamless and extensible parallelism.
Adnan, Muhammad; Aslam, Faisal; Nawaz, Zubair; Sarwar, Syed Mansoor
2017-01-01
Nowadays, a typical processor may have multiple processing cores on a single chip. Furthermore, a special-purpose processing unit called the Graphics Processing Unit (GPU), originally designed for 2D/3D games, is now available for general-purpose use in computers and mobile devices. However, traditional programming languages, which were designed to work with machines having single-core CPUs, cannot efficiently utilize the parallelism available on multi-core processors. Therefore, to exploit the extraordinary processing power of multi-core processors, researchers are working on new tools and techniques to facilitate parallel programming. To this end, languages like CUDA and OpenCL have been introduced, which can be used to write parallel code. The main shortcoming of these languages is that the programmer must specify all the complex details manually in order to parallelize the code across multiple cores. Therefore, code written in these languages is difficult to understand, debug and maintain. Furthermore, parallelizing legacy code can require rewriting a significant portion of it in CUDA or OpenCL, which can consume significant time and resources. Thus, the amount of parallelism achieved is proportional to the skills of the programmer and the time spent on code optimization. This paper proposes a new open-source compiler, Rubus, to achieve seamless parallelism. The Rubus compiler relieves the programmer from manually specifying the low-level details. It analyses and transforms a sequential program into a parallel program automatically, without any user intervention. This achieves massive speedup and better utilization of the underlying hardware without requiring a programmer's expertise in parallel programming. For five different benchmarks, an average speedup of 34.54 times has been achieved by Rubus compared to Java on a basic GPU having only 96 cores. For a matrix multiplication benchmark, an average execution speedup of 84 times has been achieved by Rubus on the same GPU. Moreover, Rubus achieves this performance without drastically increasing the memory footprint of a program.
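The kind of rewrite such a compiler automates can be pictured with a toy example (hypothetical Python, not actual Rubus output): a loop whose iterations are independent can be dispatched to parallel workers without changing its result.

```python
from multiprocessing.dummy import Pool  # thread pool as a stand-in for GPU threads


def saxpy_sequential(a, x, y):
    # Each iteration is independent of the others, so an auto-parallelizing
    # compiler may legally distribute the iterations across cores.
    return [a * xi + yi for xi, yi in zip(x, y)]


def saxpy_parallel(a, x, y, workers=4):
    # Equivalent data-parallel form: same result, computed by a pool of
    # workers. This is the kind of transformation the programmer would
    # otherwise have to write and debug by hand.
    with Pool(workers) as pool:
        return pool.starmap(lambda xi, yi: a * xi + yi, zip(x, y))
```

Either version returns the same list; what such a compiler buys the programmer is not having to write (or maintain) the second form manually.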
Automating the generation of finite element dynamical cores with Firedrake
NASA Astrophysics Data System (ADS)
Ham, David; Mitchell, Lawrence; Homolya, Miklós; Luporini, Fabio; Gibson, Thomas; Kelly, Paul; Cotter, Colin; Lange, Michael; Kramer, Stephan; Shipton, Jemma; Yamazaki, Hiroe; Paganini, Alberto; Kärnä, Tuomas
2017-04-01
The development of a dynamical core is an increasingly complex software engineering undertaking. As the equations become more complete, the discretisations more sophisticated and the hardware acquires ever more fine-grained parallelism and deeper memory hierarchies, the problem of building, testing and modifying dynamical cores becomes increasingly complex. Here we present Firedrake, a code generation system for the finite element method with specialist features designed to support the creation of geoscientific models. Using Firedrake, the dynamical core developer writes the partial differential equations in weak form in a high level mathematical notation. Appropriate function spaces are chosen and time stepping loops written at the same high level. When the program is run, Firedrake generates high performance C code for the resulting numerics, which is executed in parallel. Models in Firedrake typically take a tiny fraction of the lines of code required by traditional hand-coding techniques. They support more sophisticated numerics than are easily achieved by hand, and the resulting code is frequently higher performance. Critically, debugging, modifying and extending a model written in Firedrake is vastly easier than by traditional methods due to the small, highly mathematical code base. Firedrake supports a wide range of key features for dynamical core creation: a vast range of discretisations, including both continuous and discontinuous spaces and mimetic (C-grid-like) elements which optimally represent force balances in geophysical flows; high aspect ratio layered meshes suitable for ocean and atmosphere domains; curved elements for high accuracy representations of the sphere; support for non-finite element operators, such as parametrisations; access to PETSc, a world-leading library of programmable linear and nonlinear solvers; and high performance adjoint models generated automatically by symbolically reasoning about the forward model.
This poster will present the key features of the Firedrake system, as well as those of Gusto, an atmospheric dynamical core, and Thetis, a coastal ocean model, both of which are written in Firedrake.
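As a concrete picture of the "weak form in high level mathematical notation" workflow described above, consider a textbook Poisson problem (a generic example, not one taken from the poster): find u in a function space V such that

```latex
\int_\Omega \nabla u \cdot \nabla v \,\mathrm{d}x
  = \int_\Omega f\,v \,\mathrm{d}x
  \qquad \forall\, v \in V .
```

Firedrake code mirrors this notation nearly symbol for symbol, and the system then generates and executes the parallel C implementation of the assembled operators.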
Shielding and activation calculations around the reactor core for the MYRRHA ADS design
NASA Astrophysics Data System (ADS)
Ferrari, Anna; Mueller, Stefan; Konheiser, J.; Castelliti, D.; Sarotto, M.; Stankovskiy, A.
2017-09-01
In the frame of the FP7 European project MAXSIMA, an extensive simulation study has been carried out to assess the main shielding problems in view of the construction of the MYRRHA accelerator-driven system at SCK·CEN in Mol (Belgium). An innovative method based on the combined use of the two state-of-the-art Monte Carlo codes MCNPX and FLUKA has been used, with the goal of characterizing the complex, realistic neutron fields around the core barrel, to be used as source terms in detailed analyses of the radiation fields due to the system in operation and of the coupled residual radiation. The main results of the shielding analysis are presented, as well as the construction of an activation database for all the key structural materials. The results demonstrate a powerful way to analyse shielding and activation problems, with direct and clear implications for the design solutions.
Posttest RELAP4 analysis of LOFT experiment L1-4
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grush, W.H.; Holmstrom, H.L.O.
Results of posttest analysis of LOFT loss-of-coolant experiment L1-4 with the RELAP4 code are presented. The results are compared with the pretest prediction and the test data. Differences between the RELAP4 model used for this analysis and that used for the pretest prediction are in the areas of initial conditions, nodalization, emergency core cooling system, broken loop hot leg, and steam generator secondary. In general, these changes made only minor improvement in the comparison of the analytical results to the data. Also presented are the results of a limited study of LOFT downcomer modeling which compared the performance of the conventional single downcomer model with that of the new split downcomer model. A RELAP4 sensitivity calculation with artificially elevated emergency core coolant temperature was performed to highlight the need for an ECC mixing model in RELAP4.
Multigroup Monte Carlo on GPUs: Comparison of history- and event-based algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamilton, Steven P.; Slattery, Stuart R.; Evans, Thomas M.
This article presents an investigation of the performance of different multigroup Monte Carlo transport algorithms on GPUs with a discussion of both history-based and event-based approaches. Several algorithmic improvements are introduced for both approaches. By modifying the history-based algorithm that is traditionally favored in CPU-based MC codes to occasionally filter out dead particles to reduce thread divergence, performance exceeds that of either the pure history-based or event-based approaches. The impacts of several algorithmic choices are discussed, including performance studies on Kepler and Pascal generation NVIDIA GPUs for fixed source and eigenvalue calculations. Single-device performance equivalent to 20–40 CPU cores on the K40 GPU and 60–80 CPU cores on the P100 GPU is achieved. In addition, nearly perfect multi-device parallel weak scaling is demonstrated on more than 16,000 nodes of the Titan supercomputer.
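The distinction between the two traversal orders can be sketched with a schematic toy model (illustrative only, not the article's GPU implementation): a history-based loop follows one particle to completion, while an event-based loop processes the whole surviving population one step at a time, filtering out dead particles between passes as the text describes.

```python
import random


def history_based(n_particles, absorb_prob=0.3, max_steps=50):
    # Follow each particle (history) to completion before starting the next.
    absorbed = 0
    for _ in range(n_particles):
        for _ in range(max_steps):
            if random.random() < absorb_prob:
                absorbed += 1  # particle absorbed; this history ends
                break
    return absorbed


def event_based(n_particles, absorb_prob=0.3, max_steps=50):
    # Process one step for the entire surviving population per pass,
    # compacting the "alive" list so dead particles do no further work
    # (the divergence-reduction idea mentioned in the abstract).
    alive = list(range(n_particles))
    absorbed = 0
    for _ in range(max_steps):
        survivors = []
        for p in alive:
            if random.random() < absorb_prob:
                absorbed += 1
            else:
                survivors.append(p)
        alive = survivors
        if not alive:
            break
    return absorbed
```

Both loops estimate the same physics; they differ only in iteration order, which is what matters for SIMD-style GPU execution.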
Simulating nonlinear neutrino flavor evolution
NASA Astrophysics Data System (ADS)
Duan, H.; Fuller, G. M.; Carlson, J.
2008-10-01
We discuss a new kind of astrophysical transport problem: the coherent evolution of neutrino flavor in core collapse supernovae. Solution of this problem requires a numerical approach which can simulate accurately the quantum mechanical coupling of intersecting neutrino trajectories and the associated nonlinearity which characterizes neutrino flavor conversion. We describe here the two codes developed to attack this problem. We also describe the surprising phenomena revealed by these numerical calculations. Chief among these is that the nonlinearities in the problem can engineer neutrino flavor transformation which is dramatically different from that in standard Mikheyev-Smirnov-Wolfenstein treatments. This happens even though the neutrino mass-squared differences are measured to be small, and even when neutrino self-coupling is sub-dominant. Our numerical work has revealed potential signatures which, if detected in the neutrino burst from a Galactic core collapse event, could reveal heretofore unmeasurable properties of the neutrinos, such as the mass hierarchy and vacuum mixing angle θ13.
FLiT: a field line trace code for magnetic confinement devices
NASA Astrophysics Data System (ADS)
Innocente, P.; Lorenzini, R.; Terranova, D.; Zanca, P.
2017-04-01
This paper presents a field line tracing code (FLiT) developed to study particle and energy transport as well as other phenomena related to magnetic topology in reversed-field pinch (RFP) and tokamak experiments. The code computes magnetic field lines in toroidal geometry using curvilinear coordinates (r, ϑ, ϕ) and calculates the intersections of these field lines with specified planes. The code also computes the magnetic and thermal diffusivity due to stochastic magnetic field in the collisionless limit. Compared to Hamiltonian codes, there are no constraints on the magnetic field functional formulation, which allows the integration of whichever magnetic field is required. The code uses the magnetic field computed by solving the zeroth-order axisymmetric equilibrium and the Newcomb equation for the first-order helical perturbation matching the edge magnetic field measurements in toroidal geometry. Two algorithms are developed to integrate the field lines: one is a dedicated implementation of a first-order semi-implicit volume-preserving integration method, and the other is based on the Adams-Moulton predictor-corrector method. As expected, the volume-preserving algorithm is accurate in conserving divergence, but slow because the low integration order requires small amplitude steps. The second algorithm proves to be quite fast and it is able to integrate the field lines in many partially and fully stochastic configurations accurately. The code has already been used to study the core and edge magnetic topology of the RFX-mod device in both the reversed-field pinch and tokamak magnetic configurations.
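A minimal field-line integrator of the kind FLiT implements can be sketched as follows (a classical RK4 stand-in in Cartesian coordinates; the actual code works in curvilinear coordinates and uses a volume-preserving first-order scheme and an Adams-Moulton predictor-corrector):

```python
import math


def trace_field_line(b_field, x0, step, n_steps):
    """Integrate dx/ds = B(x)/|B(x)| with classical 4th-order Runge-Kutta,
    returning the traced points. b_field maps a 3-vector to a 3-vector."""
    def unit(v):
        m = math.sqrt(sum(c * c for c in v))
        return [c / m for c in v]

    def step_from(x, d, h):
        return [xi + h * di for xi, di in zip(x, d)]

    x = list(x0)
    path = [tuple(x)]
    for _ in range(n_steps):
        k1 = unit(b_field(x))
        k2 = unit(b_field(step_from(x, k1, step / 2)))
        k3 = unit(b_field(step_from(x, k2, step / 2)))
        k4 = unit(b_field(step_from(x, k3, step)))
        x = [xi + step / 6.0 * (a + 2 * b + 2 * c + d)
             for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]
        path.append(tuple(x))
    return path
```

For a purely azimuthal test field B = (-y, x, 0) the traced line should stay on a circle, giving a quick conservation check of the integrator, analogous to the divergence-conservation checks discussed above.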
Study on Utilization of Super Grade Plutonium in Molten Salt Reactor FUJI-U3 using CITATION Code
NASA Astrophysics Data System (ADS)
Wulandari, Cici; Waris, Abdul; Pramuditya, Syeilendra; Asril, Pramutadi AM; Novitrian
2017-07-01
The FUJI-U3 type of Molten Salt Reactor (MSR) has a unique design, since it consists of three core regions in order to avoid the replacement of the graphite moderator. The MSR uses a fluoride nuclear fuel salt, the most popular chemical composition being LiF-BeF2-ThF4-233UF4. ThF4 and 233UF4 are the fertile and fissile materials, respectively, while LiF and BeF2 work as both fuel and heat transfer medium. In this study, super-grade plutonium is utilized as a substitute for 233U, since plutonium is easier to obtain than 233U as the main fuel. The neutronics calculation was performed using the PIJ and CITATION modules of the SRAC 2002 code with JENDL 3.2 as the nuclear data library.
Decay heat uncertainty quantification of MYRRHA
NASA Astrophysics Data System (ADS)
Fiorito, Luca; Buss, Oliver; Hoefer, Axel; Stankovskiy, Alexey; Eynde, Gert Van den
2017-09-01
MYRRHA is a lead-bismuth cooled MOX-fueled accelerator driven system (ADS) currently in the design phase at SCK·CEN in Belgium. The correct evaluation of the decay heat and of its uncertainty level is very important for the safety demonstration of the reactor. In the first part of this work we assessed the decay heat released by the MYRRHA core using the ALEPH-2 burnup code. The second part of the study focused on the nuclear data uncertainty and covariance propagation to the MYRRHA decay heat. Radioactive decay data, independent fission yield and cross section uncertainties/covariances were propagated using two nuclear data sampling codes, namely NUDUNA and SANDY. According to the results, 238U cross sections and fission yield data are the largest contributors to the MYRRHA decay heat uncertainty. The calculated uncertainty values are deemed acceptable from the safety point of view as they are well within the available regulatory limits.
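The sampling-based propagation step can be illustrated with a minimal sketch (hypothetical: NUDUNA and SANDY sample correlated nuclear data from full covariance matrices, whereas this toy uses independent Gaussian perturbations):

```python
import math
import random


def propagate(model, means, stdevs, n_samples=5000, seed=7):
    """Draw perturbed input sets, run the model for each, and report the
    mean and standard deviation of the output. `model` stands in for a
    depletion/decay-heat calculation; the Gaussian perturbations stand in
    for sampled nuclear data."""
    rng = random.Random(seed)
    outputs = [model([rng.gauss(m, s) for m, s in zip(means, stdevs)])
               for _ in range(n_samples)]
    mean = sum(outputs) / n_samples
    var = sum((o - mean) ** 2 for o in outputs) / (n_samples - 1)
    return mean, math.sqrt(var)
```

For a linear model q = 2·y1 + y2 the sampled standard deviation converges to sqrt(4·s1² + s2²), a useful analytic sanity check for such a toolchain.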
Magnetospheric space plasma investigations
NASA Technical Reports Server (NTRS)
Comfort, Richard H.; Horwitz, James L.
1994-01-01
A time dependent semi-kinetic model that includes self collisions and ion-neutral collisions and chemistry was developed. Light ion outflow in the polar cap transition region was modeled and compared with data results. A model study of wave heating of O+ ions in the topside transition region was carried out using a code which does local calculations that include ion-neutral and Coulomb self collisions as well as production and loss of O+. Another project is a statistical study of hydrogen spin curve characteristics in the polar cap. A statistical study of the latitudinal distribution of core plasmas along the L=4.6 field line using DE-1/RIMS data was completed. A short paper on dual spacecraft estimates of ion temperature profiles and heat flows in the plasmasphere ionosphere system was prepared. An automated processing code was used to process RIMS data from 1981 to 1984.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wagner, John C; Peplow, Douglas E.; Mosher, Scott W
2010-01-01
This paper provides a review of the hybrid (Monte Carlo/deterministic) radiation transport methods and codes used at the Oak Ridge National Laboratory and examples of their application for increasing the efficiency of real-world, fixed-source Monte Carlo analyses. The two principal hybrid methods are (1) Consistent Adjoint Driven Importance Sampling (CADIS) for optimization of a localized detector (tally) region (e.g., flux, dose, or reaction rate at a particular location) and (2) Forward Weighted CADIS (FW-CADIS) for optimizing distributions (e.g., mesh tallies over all or part of the problem space) or multiple localized detector regions (e.g., simultaneous optimization of two or more localized tally regions). The two methods have been implemented and automated in both the MAVRIC sequence of SCALE 6 and ADVANTG, a code that works with the MCNP code. As implemented, the methods utilize the results of approximate, fast-running 3-D discrete ordinates transport calculations (with the Denovo code) to generate consistent space- and energy-dependent source and transport (weight windows) biasing parameters. These methods and codes have been applied to many relevant and challenging problems, including calculations of PWR ex-core thermal detector response, dose rates throughout an entire PWR facility, site boundary dose from arrays of commercial spent fuel storage casks, radiation fields for criticality accident alarm system placement, and detector response for special nuclear material detection scenarios and nuclear well-logging tools. Substantial computational speed-ups, generally O(10^2)-O(10^4), have been realized for all applications to date. This paper provides a brief review of the methods, their implementation, results of their application, and current development activities, as well as a considerable list of references for readers seeking more information about the methods and/or their applications.
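The variance-reduction principle behind CADIS-style biasing can be demonstrated with a toy rare-tally problem (a schematic sketch only: the actual CADIS method derives its space- and energy-dependent biasing parameters from an adjoint deterministic solve, which is not modelled here):

```python
import math
import random


def mean_and_rel_err(scores):
    # Sample mean and relative standard error of the mean.
    n = len(scores)
    m = sum(scores) / n
    var = sum((s - m) ** 2 for s in scores) / (n - 1)
    return m, (math.sqrt(var / n) / m) if m else float("inf")


def analog_mc(n, p_reach=1e-3, seed=1):
    # Analog Monte Carlo: score 1 when a particle reaches the tally
    # region, which happens with small probability p_reach.
    rng = random.Random(seed)
    return mean_and_rel_err(
        [1.0 if rng.random() < p_reach else 0.0 for _ in range(n)])


def biased_mc(n, p_reach=1e-3, bias=100.0, seed=1):
    # Importance-sampled version: make the rare event `bias` times more
    # likely and weight each score by 1/bias so the estimate stays unbiased.
    rng = random.Random(seed)
    return mean_and_rel_err(
        [(1.0 / bias) if rng.random() < p_reach * bias else 0.0
         for _ in range(n)])
```

With the same number of histories, the biased estimator gives the same unbiased mean with a much smaller relative error, which is the kind of speed-up the paper quantifies.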
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harris, J. Austin; Hix, W. Raphael; Chertkow, Merek A.
In this paper, we investigate core-collapse supernova (CCSN) nucleosynthesis with self-consistent, axisymmetric (2D) simulations performed using the neutrino hydrodynamics code Chimera. Computational costs have traditionally constrained the evolution of the nuclear composition within multidimensional CCSN models to, at best, a 14-species α-network capable of tracking only (α, γ) reactions from 4He to 60Zn. Such a simplified network limits the ability to accurately evolve detailed composition and neutronization or calculate the nuclear energy generation rate. Lagrangian tracer particles are commonly used to extend the nuclear network evolution by incorporating more realistic networks into post-processing nucleosynthesis calculations. However, limitations such as poor spatial resolution of the tracer particles; inconsistent thermodynamic evolution, including misestimation of expansion timescales; and uncertain determination of the multidimensional mass cut at the end of the simulation impose uncertainties inherent to this approach. We present a detailed analysis of the impact of such uncertainties for four self-consistent axisymmetric CCSN models initiated from solar-metallicity, nonrotating progenitors of 12, 15, 20, and 25 M⊙ and evolved with the smaller α-network to more than 1 s after the launch of an explosion.
Dielectronic recombination of lowly charged tungsten ions Wq+(q = 5 - 10)
NASA Astrophysics Data System (ADS)
Kwon, Duck-Hee
2018-03-01
Dielectronic recombination (DR) rate coefficients for the ground levels of low ionization state Wq+ (q = 5 - 10) ions have been obtained by an ab-initio level-by-level calculation using the flexible atomic code (FAC), based on a relativistic jj coupling scheme and the independent-process, isolated-resonance, distorted-wave approximation. The radiative transition calculation in the original FAC has been parallelized to deal efficiently with the many resonance levels of ions with a complex open 4f, 5p, or 5d shell structure. Core excitations Δnc = 0, 1 of 4f, 5p, and 5d (W5+), Δnc = 2 of 4f, and Δnc = 0 of 4d (W7+) and 5s (W8+) are included in the total DR rate coefficient. The core excitations Δnc = 0, 5p → 5l and Δnc = 1, 4f → 5l contribute most to the total DR rate coefficients. The strong resonances involved in the DR are analyzed, and the total DR rate coefficients are compared with available previous ab-initio predictions and with ADAS data based on a simple semiempirical formula.
Uncertainty quantification and sensitivity analysis with CASL Core Simulator VERA-CS
Brown, C. S.; Zhang, Hongbin
2016-05-24
Uncertainty quantification and sensitivity analysis are important for nuclear reactor safety design and analysis. A 2x2 fuel assembly core design was developed and simulated by the Virtual Environment for Reactor Applications, Core Simulator (VERA-CS), a coupled neutronics and thermal-hydraulics code under development by the Consortium for Advanced Simulation of Light Water Reactors (CASL). An approach to uncertainty quantification and sensitivity analysis with VERA-CS was developed, and a new toolkit was created to perform uncertainty quantification and sensitivity analysis with fourteen uncertain input parameters. The minimum departure from nucleate boiling ratio (MDNBR), maximum fuel center-line temperature, and maximum outer clad surface temperature were chosen as the figures of merit. Pearson, Spearman, and partial correlation coefficients were considered for all of the figures of merit in the sensitivity analysis, and coolant inlet temperature was consistently the most influential parameter. Parameters used as inputs to the critical heat flux calculation with the W-3 correlation were shown to be the most influential on the MDNBR, maximum fuel center-line temperature, and maximum outer clad surface temperature.
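The correlation measures used for this input ranking can be written in a few lines (a generic sketch, not the CASL toolkit; ties in the Spearman ranking are ignored for brevity):

```python
def pearson(x, y):
    # Linear correlation between paired samples x and y.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sxy / (sx * sy)


def spearman(x, y):
    # Rank correlation: Pearson correlation of the ranks, so any
    # monotonic (not just linear) relationship scores 1. Ties ignored.
    def ranks(v):
        order = sorted(range(len(v)), key=v.__getitem__)
        r = [0.0] * len(v)
        for rank, idx in enumerate(order):
            r[idx] = float(rank)
        return r
    return pearson(ranks(x), ranks(y))
```

Applied to sampled inputs versus a figure of merit such as MDNBR, Spearman catches monotonic nonlinear dependence that Pearson understates.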
Recent advances and future prospects for Monte Carlo
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Forrest B
2010-01-01
The history of Monte Carlo methods is closely linked to that of computers: the first known Monte Carlo program was written in 1947 for the ENIAC; a pre-release of the first Fortran compiler was used for Monte Carlo in 1957; Monte Carlo codes were adapted to vector computers in the 1980s, clusters and parallel computers in the 1990s, and teraflop systems in the 2000s. Recent advances include hierarchical parallelism, combining threaded calculations on multicore processors with message-passing among different nodes. With the advances in computing, Monte Carlo codes have evolved with new capabilities and new ways of use. Production codes such as MCNP, MVP, MONK, TRIPOLI and SCALE are now 20-30 years old (or more) and are very rich in advanced features. The former 'method of last resort' has now become the first choice for many applications. Calculations are now routinely performed on office computers, not just on supercomputers. Current research and development efforts are investigating the use of Monte Carlo methods on FPGAs, GPUs, and many-core processors. Other far-reaching research is exploring ways to adapt Monte Carlo methods to future exaflop systems that may have 1M or more concurrent computational processes.
Statistical dielectronic recombination rates for multielectron ions in plasma
NASA Astrophysics Data System (ADS)
Demura, A. V.; Leont'iev, D. S.; Lisitsa, V. S.; Shurygin, V. A.
2017-10-01
We describe a general analytic derivation of the dielectronic recombination (DR) rate coefficient for multielectron ions in a plasma, based on the statistical theory of the atom in terms of the spatial distribution of the atomic electron density. The dielectronic recombination rates for complex multielectron tungsten ions are calculated numerically over a wide range of plasma temperatures, which is important for modern nuclear fusion studies. The results of the statistical theory are compared with data obtained using the level-by-level codes ADPAK, FAC, and HULLAC, and with experimental results. We consider different statistical DR models based on the Thomas-Fermi distribution, viz., integral and differential with respect to the orbital angular momenta of the ion core and the trapped electron, as well as the Rost model, which is an analog of the Frank-Condon model as applied to atomic structures. In view of its universality and relative simplicity, the statistical approach can be used to obtain rapid estimates of the dielectronic recombination rate coefficients in complex calculations of thermonuclear plasma parameters. The statistical methods also provide dielectronic recombination rates with much smaller computer time expenditures than the available level-by-level codes.
Mironov, Vladimir; Moskovsky, Alexander; D’Mello, Michael; ...
2017-10-04
The Hartree-Fock (HF) method in the quantum chemistry package GAMESS represents one of the most irregular algorithms in computation today. Major steps in the calculation are the irregular computation of electron repulsion integrals (ERIs) and the building of the Fock matrix. These are the central components of the main Self Consistent Field (SCF) loop, the key hotspot in Electronic Structure (ES) codes. By threading the MPI ranks in the official release of the GAMESS code, we not only speed up the main SCF loop (4x to 6x for large systems), but also achieve a significant (>2x) reduction in the overall memory footprint. These improvements are a direct consequence of memory access optimizations within the MPI ranks. We benchmark our implementation against the official release of the GAMESS code on the Intel® Xeon Phi™ supercomputer. Scaling numbers are reported on up to 7,680 cores on Intel Xeon Phi coprocessors.
Yang, Lin; Zhang, Feng; Wang, Cai-Zhuang; ...
2018-01-12
We present an implementation of the EAM and FS interatomic potentials, which are widely used in simulating metallic systems, in HOOMD-blue, a software package designed to perform classical molecular dynamics simulations using GPU acceleration. We first discuss the details of our implementation and then report extensive benchmark tests. We demonstrate that single-precision floating point operations efficiently implemented on GPUs can produce sufficient accuracy when compared against double-precision codes, as demonstrated in test calculations of the glass-transition temperature of Cu64.5Zr35.5 and the pair correlation function of liquid Ni3Al. Our code scales well with the size of the simulated system on NVIDIA Tesla M40 and P100 GPUs. Compared with the popular code LAMMPS running on 32 cores of AMD Opteron 6220 processors, the GPU/CPU performance ratio can reach as high as 4.6. The source code can be accessed through the HOOMD-blue web page for free by any interested user.
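A pair correlation function of the kind used above for validation can be computed for a small system with a brute-force sketch (illustrative only; production codes such as HOOMD-blue use cell lists and GPU kernels rather than this O(N^2) loop):

```python
import math


def pair_correlation(positions, box, dr, r_max):
    """Radial distribution function g(r) for point particles in a cubic
    periodic box of side `box`, histogrammed in bins of width `dr`."""
    n = len(positions)
    nbins = int(r_max / dr)
    hist = [0] * nbins
    for i in range(n):
        for j in range(i + 1, n):
            d2 = 0.0
            for a, b in zip(positions[i], positions[j]):
                delta = b - a
                delta -= box * round(delta / box)  # minimum-image convention
                d2 += delta * delta
            r = math.sqrt(d2)
            if r < r_max:
                hist[int(r / dr)] += 2  # count the pair once per particle
    rho = n / box ** 3  # number density
    g = []
    for k, h in enumerate(hist):
        # normalize by the ideal-gas count in each spherical shell
        shell = 4.0 / 3.0 * math.pi * (((k + 1) * dr) ** 3 - (k * dr) ** 3)
        g.append(h / (n * rho * shell))
    return g
```

For an ideal gas g(r) fluctuates around 1; for liquid Ni3Al it shows the coordination-shell peaks the benchmark compares between single and double precision.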
Confinement properties of tokamak plasmas with extended regions of low magnetic shear
NASA Astrophysics Data System (ADS)
Graves, J. P.; Cooper, W. A.; Kleiner, A.; Raghunathan, M.; Neto, E.; Nicolas, T.; Lanthaler, S.; Patten, H.; Pfefferle, D.; Brunetti, D.; Lutjens, H.
2017-10-01
Extended regions of low magnetic shear can be advantageous to tokamak plasmas, but the core and edge can be susceptible to non-resonant ideal fluctuations due to the weakened restoring force associated with magnetic field line bending. This contribution shows how saturated non-linear phenomenology, such as 1/1 Long Lived Modes and Edge Harmonic Oscillations associated with QH-modes, can be modelled accurately using the non-linear stability code XTOR, the free boundary 3D equilibrium code VMEC, and non-linear analytic theory. That the equilibrium approach is valid is particularly valuable because it enables advanced particle confinement studies to be undertaken in the ordinarily difficult environment of strongly 3D magnetic fields. The VENUS-LEVIS code exploits the Fourier description of the VMEC equilibrium fields, such that full Lorentzian and guiding-centre approximated differential operators in curvilinear angular coordinates can be evaluated analytically. Consequently, the confinement properties of minority ions such as energetic particles and high-Z impurities can be calculated accurately over slowing-down timescales in experimentally relevant 3D plasmas.
Optimizing legacy molecular dynamics software with directive-based offload
NASA Astrophysics Data System (ADS)
Michael Brown, W.; Carrillo, Jan-Michael Y.; Gavhane, Nitin; Thakkar, Foram M.; Plimpton, Steven J.
2015-10-01
Directive-based programming models are one solution for exploiting many-core coprocessors to increase simulation rates in molecular dynamics. They offer the potential to reduce code complexity with offload models that can selectively target computations to run on the CPU, the coprocessor, or both. In this paper, we describe modifications to the LAMMPS molecular dynamics code to enable concurrent calculations on a CPU and coprocessor. We demonstrate that standard molecular dynamics algorithms can run efficiently on both the CPU and an x86-based coprocessor using the same subroutines. As a consequence, we demonstrate that code optimizations for the coprocessor also result in speedups on the CPU; in extreme cases up to 4.7X. We provide results for LAMMPS benchmarks and for production molecular dynamics simulations using the Stampede hybrid supercomputer with both Intel® Xeon Phi™ coprocessors and NVIDIA GPUs. The optimizations presented have increased simulation rates by over 2X for organic molecules and over 7X for liquid crystals on Stampede. The optimizations are available as part of the "Intel package" supplied with LAMMPS.
NASA Astrophysics Data System (ADS)
Featherstone, N. A.; Aurnou, J. M.; Yadav, R. K.; Heimpel, M. H.; Soderlund, K. M.; Matsui, H.; Stanley, S.; Brown, B. P.; Glatzmaier, G.; Olson, P.; Buffett, B. A.; Hwang, L.; Kellogg, L. H.
2017-12-01
In the past three years, CIG's Dynamo Working Group has successfully ported the Rayleigh code to the Argonne Leadership Computing Facility's Mira BG/Q system. In this poster, we present some of our first results, showing simulations of (1) convection in the solar convection zone; (2) dynamo action in Earth's core; and (3) convection in the Jovian deep atmosphere. These simulations have made efficient use of 131 thousand, 131 thousand, and 232 thousand cores, respectively, on Mira. In addition to our novel results, the joys and logistical challenges of carrying out such large runs will also be discussed.
Porting plasma physics simulation codes to modern computing architectures using the
NASA Astrophysics Data System (ADS)
Germaschewski, Kai; Abbott, Stephen
2015-11-01
Available computing power has continued to grow exponentially even after single-core performance saturated in the last decade. The increase has since been driven by more parallelism, both by using more cores and by having more parallelism within each core, e.g. in GPUs and Intel Xeon Phi. Adapting existing plasma physics codes is challenging, in particular as there is no single programming model that covers current and future architectures. We will introduce the open-source
Portable LQCD Monte Carlo code using OpenACC
NASA Astrophysics Data System (ADS)
Bonati, Claudio; Calore, Enrico; Coscetti, Simone; D'Elia, Massimo; Mesiti, Michele; Negro, Francesco; Fabio Schifano, Sebastiano; Silvi, Giorgio; Tripiccione, Raffaele
2018-03-01
Varying from multi-core CPU processors to many-core GPUs, the present scenario of HPC architectures is extremely heterogeneous. In this context, code portability is increasingly important for easy maintainability of applications; this is relevant in scientific computing where code changes are numerous and frequent. In this talk we present the design and optimization of a state-of-the-art production level LQCD Monte Carlo application, using the OpenACC directives model. OpenACC aims to abstract parallel programming to a descriptive level, where programmers do not need to specify the mapping of the code on the target machine. We describe the OpenACC implementation and show that the same code is able to target different architectures, including state-of-the-art CPUs and GPUs.
Using Coding Apps to Support Literacy Instruction and Develop Coding Literacy
ERIC Educational Resources Information Center
Hutchison, Amy; Nadolny, Larysa; Estapa, Anne
2016-01-01
In this article, the authors present the concept of Coding Literacy and describe the ways in which coding apps can support the development of Coding Literacy along with disciplinary and digital literacy skills. Through detailed examples, they describe how coding apps can be integrated into literacy instruction to support learning of the Common Core English…
Latent uncertainties of the precalculated track Monte Carlo method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Renaud, Marc-André; Seuntjens, Jan; Roberge, David
Purpose: While significant progress has been made in speeding up Monte Carlo (MC) dose calculation methods, they remain too time-consuming for the purpose of inverse planning. To achieve clinically usable calculation speeds, a precalculated Monte Carlo (PMC) algorithm for proton and electron transport was developed to run on graphics processing units (GPUs). The algorithm utilizes pregenerated particle track data from conventional MC codes for different materials such as water, bone, and lung to produce dose distributions in voxelized phantoms. While PMC methods have been described in the past, an explicit quantification of the latent uncertainty arising from the limited number of unique tracks in the pregenerated track bank has been missing from the literature. With a proper uncertainty analysis, an optimal number of tracks in the pregenerated track bank can be selected for a desired dose calculation uncertainty. Methods: Particle tracks were pregenerated for electrons and protons using EGSnrc and GEANT4 and saved in a database. The PMC algorithm for track selection, rotation, and transport was implemented on the Compute Unified Device Architecture (CUDA) 4.0 programming framework. PMC dose distributions were calculated in a variety of media and compared to benchmark dose distributions simulated with the corresponding general-purpose MC codes under the same conditions. A latent uncertainty metric was defined, and analysis was performed by varying the pregenerated track bank size and the number of simulated primary particle histories and comparing dose values to a "ground truth" benchmark dose distribution calculated to 0.04% average uncertainty in voxels with dose greater than 20% of Dmax. Efficiency metrics were calculated against benchmark MC codes on a single CPU core with no variance reduction.
Results: Dose distributions generated using PMC and benchmark MC codes were compared and found to be within 2% of each other in voxels with dose values greater than 20% of the maximum dose. In proton calculations, a small (≤1 mm) distance-to-agreement error was observed at the Bragg peak. Latent uncertainty was characterized for electrons and found to follow a Poisson distribution with the number of unique tracks per energy. A track bank of 12 energies and 60000 unique tracks per pregenerated energy in water had a size of 2.4 GB and achieved a latent uncertainty of approximately 1% at an optimal efficiency gain over DOSXYZnrc. Larger track banks produced a lower latent uncertainty at the cost of increased memory consumption. Using an NVIDIA GTX 590, efficiency analysis showed an 807× efficiency increase over DOSXYZnrc for 16 MeV electrons in water and 508× for 16 MeV electrons in bone. Conclusions: The PMC method can calculate dose distributions for electrons and protons to a statistical uncertainty of 1% with a large efficiency gain over conventional MC codes. Before performing clinical dose calculations, models to calculate dose contributions from uncharged particles must be implemented. Following the successful implementation of these models, the PMC method will be evaluated as a candidate for inverse planning of modulated electron radiation therapy and scanned proton beams.
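The Poisson behaviour of the latent uncertainty implies a simple 1/sqrt(N) scaling with the number of unique tracks, which is enough to size a track bank for a target uncertainty. A sketch under that assumption (the function names and the calibration-point interface are mine, not from the paper):

```python
import math

def latent_uncertainty(n_tracks, sigma_ref, n_ref):
    """Latent uncertainty (in %) at bank size n_tracks, scaled as 1/sqrt(N)
    from a measured reference point (sigma_ref at n_ref unique tracks)."""
    return sigma_ref * math.sqrt(n_ref / n_tracks)

def tracks_for_target(sigma_target, sigma_ref, n_ref):
    """Smallest bank size that reaches sigma_target under the same scaling."""
    return math.ceil(n_ref * (sigma_ref / sigma_target) ** 2)
```

With the paper's calibration point (about 1% at 60000 tracks per energy), halving the latent uncertainty costs a 4x larger bank, consistent with the reported memory trade-off.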
Solar g-modes? Comparison of detected asymptotic g-mode frequencies with solar model predictions
NASA Astrophysics Data System (ADS)
Wood, Suzannah Rebecca; Guzik, Joyce Ann; Mussack, Katie; Bradley, Paul A.
2018-06-01
After many years of searching for solar gravity modes, Fossat et al. (2017) reported detection of nearly equally spaced periods of high-order g-modes using a 15-year time series of GOLF data from the SOHO spacecraft. Here we report progress towards, and challenges associated with, calculating and comparing g-mode period predictions for several previously published standard solar models using various abundance mixtures and opacities, as well as predictions for some non-standard models incorporating early mass loss, and compare with the periods reported by Fossat et al. (2017). Additionally, we present a side-by-side comparison of results from different stellar pulsation codes used to calculate g-mode predictions. These comparisons will allow for testing of non-standard input physics that affects the core, including an early more massive Sun and dynamic electron screening.
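For reference, the near-equal period spacing searched for in the GOLF data follows from the asymptotic theory of high-order g-modes, P_n ≈ P0 (n + ε) / sqrt(l(l+1)). A sketch of that relation (P0 and ε here are free inputs, not fitted solar values):

```python
import math

def gmode_periods(p0, ell, n_min, n_max, eps=0.0):
    """Asymptotic high-order g-mode periods P_n = dP * (n + eps), where the
    equal spacing is dP = p0 / sqrt(l*(l+1)) and p0 is the buoyancy period."""
    dp = p0 / math.sqrt(ell * (ell + 1))
    return [dp * (n + eps) for n in range(n_min, n_max + 1)]
```

Consecutive radial orders at fixed degree l are then separated by the constant spacing dP, which is the signature Fossat et al. searched for.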
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bess, John D.; Sterbentz, James W.; Snoj, Luka
PROTEUS is a zero-power research reactor based on a cylindrical graphite annulus with a central cylindrical cavity. The graphite annulus remains basically the same for all experimental programs, but the contents of the central cavity are changed according to the type of reactor being investigated. Through most of its service history, PROTEUS has represented light-water reactors, but from 1992 to 1996 PROTEUS was configured as a pebble-bed reactor (PBR) critical facility and designated as HTR-PROTEUS. The nomenclature was used to indicate that this series consisted of High Temperature Reactor experiments performed in the PROTEUS assembly. During this period, seventeen critical configurations were assembled and various reactor physics experiments were conducted. These experiments included measurements of criticality, differential and integral control rod and safety rod worths, kinetics, reaction rates, water ingress effects, and small sample reactivity effects (Ref. 3). HTR-PROTEUS was constructed, and the experimental program was conducted, for the purpose of providing experimental benchmark data for assessment of reactor physics computer codes. Considerable effort was devoted to benchmark calculations as a part of the HTR-PROTEUS program. References 1 and 2 provide detailed data for use in constructing models for codes to be assessed. Reference 3 is a comprehensive summary of the HTR-PROTEUS experiments and the associated benchmark program. This document draws freely from these references. Only Cores 9 and 10 are evaluated in this benchmark report due to similarities in their construction. The other core configurations of the HTR-PROTEUS program are evaluated in their respective reports as outlined in Section 1.0. Cores 9 and 10 were evaluated and determined to be acceptable benchmark experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liang, Thomas K.S.; Ko, F.-K
Although only a few percent of residual power remains during plant outages, the associated risk of core uncovery and corresponding fuel overheating has been identified to be relatively high, particularly under midloop operation (MLO) in pressurized water reactors. However, to analyze the system behavior during outages, the tools currently available, such as RELAP5, RETRAN, etc., cannot easily perform the task. Therefore, a medium-sized program aiming at reactor outage simulation and evaluation, such as MLO with the loss of residual heat removal (RHR), was developed. All important thermal-hydraulic processes involved during MLO with the loss of RHR will be properly simulated by the newly developed reactor outage simulation and evaluation (ROSE) code. Important processes during MLO with loss of RHR involve a pressurizer insurge caused by the hot-leg flooding, reflux condensation, liquid holdup inside the steam generator, loop-seal clearance, core-level depression, etc. Since the accuracy of the pressure distribution from the classical nodal momentum approach will be degraded when the system is stratified and under atmospheric pressure, the two-region approach with a modified two-fluid model will be the theoretical basis of the new program to analyze the nuclear steam supply system during plant outages. To verify the analytical model in the first step, posttest calculations against the closed integral midloop experiments with loss of RHR were performed. The excellent simulation capacity of the ROSE code against the Institute of Nuclear Energy Research Integral System Test Facility (IIST) test data is demonstrated.
NASA Astrophysics Data System (ADS)
D'Angelo, G.
2016-12-01
D'Angelo & Bodenheimer (2013, ApJ, 778, 77) performed global 3D radiation-hydrodynamics disk-planet simulations aimed at studying envelope formation around planetary cores during the phase of sustained planetesimal accretion. The calculations modeled cores of 5, 10, and 15 Earth masses orbiting a sun-like star in a protoplanetary disk extending from ap/2 to 2ap in radius, where ap = 5 or 10 AU is the core's orbital radius. The gas equation of state, for a solar mixture of H2, H, and He, accounted for translational, rotational, and vibrational states, for molecular dissociation and atomic ionization, and for radiation energy. Dust opacity calculations applied the Mie theory to multiple grain species whose size distributions ranged from 5e-6 mm to 1 mm. Mesh refinement via grid nesting allowed the planets' envelopes to be resolved at the core-radius length scale. Passive tracers were used to determine the volume of gas bound to a core, defining the envelope and resulting in planet radii comparable to the Bondi radius. The energy budget included contributions from the accretion of solids on the cores, whose rates were self-consistently computed with a 1D planet formation code. At this stage of the planet's growth, gravitational energy released in the envelope by the accretion of solids far exceeds that released by gas accretion. These models are used to determine the gravitational torques exerted by the disk's gas on the planet and the resulting orbital migration rates. Since the envelope radius is a direct product of the models, they allow for an unambiguous assessment of the torques exerted by gas not bound to the planet. Additionally, since the planets' envelopes are fully resolved, thermal and dynamical effects on the surrounding disk's gas are accurately taken into account. The computed migration rates are compared to those obtained from existing semi-analytical formulations for planets orbiting in isothermal and adiabatic disks.
Because these formulations do not account for thermodynamical interactions between the planet's envelope and the disk's gas, the numerical models are also used to quantify the impact of short-scale tidal interactions on the total torque acting on the planet. Computing resources were provided by the NASA High-End Computing Program through the NASA Advanced Supercomputing Division at Ames Research Center.
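Since the abstract defines the envelope by gas bound to the core, with radii comparable to the Bondi radius, a quick order-of-magnitude check is easy to sketch (the sound speed below is an assumed input, not a value taken from the models):

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # Earth mass, kg

def bondi_radius(core_mass_mearth, sound_speed_ms):
    """Bondi radius R_B = G*M / c_s^2: the distance out to which the core's
    gravity dominates the thermal motion of the surrounding disk gas."""
    return G * core_mass_mearth * M_EARTH / sound_speed_ms ** 2
```

For a 10 Earth-mass core in gas with c_s of order 1 km/s this gives roughly 4e9 m, a few hundred times the physical radius of a rocky core of that mass.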
Theoretical investigation of gas-surface interactions
NASA Technical Reports Server (NTRS)
Dyall, Kenneth G.
1992-01-01
The investigation into the appearance of intruder states from the negative continuum when some of the two-electron integrals were omitted was completed. The work shows that, provided all integrals involving core contracted functions in an atomic general contraction are included, or that the core functions are radially localized, meaningful results are obtained and intruder states do not appear. In the area of program development, the Dirac-Hartree-Fock (DHF) program for closed-shell polyatomic molecules was extended to permit Kramers-restricted open-shell DHF calculations with one electron in an open shell or one hole in a closed shell, or state-averaged DHF calculations over several particle or hole doublet states. One application of the open-shell code was to the KO molecule. Another major area of program development is the transformation of integrals from the scalar basis in which they are generated to the 2-spinor basis employed in parts of the DHF program, and hence to supermatrix form. In particular, with the omission of small-component integrals and the increased availability of disk space, it is now possible to consider transforming the integrals. The use of ordered integrals, either in the scalar basis or in the 2-spinor basis, would considerably speed up the construction of the Fock matrix, and even more so if supermatrices were constructed. A considerable amount of effort was spent on analyzing the integral ordering and transformation for the DHF program. The work of assessing the reliability of the relativistic effective core potentials (RECPs) was continued with calculation of the group IV monoxides. The perturbation of the metal atom provided by oxygen is expected to be larger than that provided by hydrogen, and thus to provide a better test of the quality of the RECPs. Calculations on some platinum hydrides were carried out at the nonrelativistic (NR), perturbation theory (PT), and DHF levels.
Reprints of four papers describing this work are included.
Three-dimensional pin-to-pin analyses of VVER-440 cores by the MOBY-DICK code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lehmann, M.; Mikolas, P.
1994-12-31
Nuclear design for the VVER-440 units at the Dukovany (EDU) nuclear power plant is routinely performed with the MOBY-DICK system. After its implementation on Hewlett-Packard series 700 workstations, it is able to perform three-dimensional pin-to-pin core analyses routinely. For purposes of code validation, a benchmark prepared from EDU operational data was solved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gordienko, P. V., E-mail: gorpavel@vver.kiae.ru; Kotsarev, A. V.; Lizorkin, M. P.
2014-12-15
The procedure for recovering pin-by-pin energy-release fields for the BIPR-8 code is briefly described, together with the BIPR-8 nodal core-computation algorithm on which this recovery is based. The description and results of verification using the pin-by-pin energy-release recovery module and the TVS-M program are given.
NASA Astrophysics Data System (ADS)
Liu, Tianyu; Wolfe, Noah; Lin, Hui; Zieb, Kris; Ji, Wei; Caracappa, Peter; Carothers, Christopher; Xu, X. George
2017-09-01
This paper contains two parts revolving around Monte Carlo transport simulation on Intel Many Integrated Core coprocessors (MIC, also known as Xeon Phi). (1) MCNP 6.1 was recompiled into multithreading (OpenMP) and multiprocessing (MPI) forms, respectively, without modification to the source code. The new codes were tested on a 60-core 5110P MIC. The test case was FS7ONNi, a radiation shielding problem used in MCNP's verification and validation suite. It was observed that both codes became slower on the MIC than on a 6-core X5650 CPU, by a factor of 4 for the MPI code and, abnormally, 20 for the OpenMP code, and both exhibited limited strong-scaling capability. (2) We have recently added a Constructive Solid Geometry (CSG) module to our ARCHER code to provide better support for geometry modelling in radiation shielding simulation. The functions of this module are frequently called in the particle random walk process. To identify the performance bottleneck we developed a CSG proxy application and profiled the code using the geometry data from FS7ONNi. The profiling data showed that the code was primarily memory-latency bound on the MIC. This study suggests that, despite the low initial porting effort, Monte Carlo codes do not naturally lend themselves to the MIC platform, just as with GPUs, and that the memory latency problem needs to be addressed in order to achieve a decent performance gain.
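The slowdown factors and the limited strong scaling reported above are simple wall-time ratios. A sketch of the two bookkeeping formulas involved, plus the Amdahl bound that governs how far strong scaling can go (illustrative only, not part of the ARCHER tooling):

```python
def strong_scaling(t_serial, t_parallel, n_cores):
    """Observed speedup and strong-scaling efficiency from wall times."""
    speedup = t_serial / t_parallel
    return speedup, speedup / n_cores

def amdahl_speedup(serial_fraction, n_cores):
    """Amdahl's-law upper bound on speedup for a fixed serial fraction."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cores)
```

Even a 5% serial fraction caps a 60-core MIC at roughly a 15x speedup, which is one way strong scaling flattens out well before the available cores are exhausted.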
Energy levels, oscillator strengths, and transition probabilities for sulfur-like scandium, Sc VI
NASA Astrophysics Data System (ADS)
El-Maaref, A. A.; Abou Halaka, M. M.; Saddeek, Yasser B.
2017-09-01
Energy levels, oscillator strengths, and transition probabilities for sulfur-like scandium, Sc VI, are calculated using the CIV3 code. The calculations were executed in an intermediate coupling scheme using the Breit-Pauli Hamiltonian and are compared with experimental data and other theoretical calculations. The LANL code has been used to confirm the accuracy of the present calculations; the CIV3 results agree well with the corresponding LANL values. The calculated energy levels and oscillator strengths are in reasonable agreement with the published experimental data and theoretical values. We have also calculated the lifetimes of some excited levels.
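Oscillator strengths and transition probabilities are related by a standard conversion: for an emission line of wavelength λ in Å, A_ki = 6.6702e15 · g_i f_ik / (g_k λ²) s⁻¹. A sketch of that textbook relation (not the CIV3 internals):

```python
def einstein_A(gf, g_upper, wavelength_angstrom):
    """Transition probability A_ki [s^-1] from the weighted oscillator
    strength gf = g_i * f_ik, upper-level weight g_k, and wavelength in A."""
    return 6.6702e15 * gf / (g_upper * wavelength_angstrom ** 2)
```

Summing A_ki over all downward channels from a level and inverting gives the radiative lifetime, which is how the lifetimes quoted above follow from the same data.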
Modifications to WRF's dynamical core to improve the treatment of moisture for large-eddy simulations
Xiao, Heng; Endo, Satoshi; Wong, May; ...
2015-10-29
Yamaguchi and Feingold (2012) note that the cloud fields in their large-eddy simulations (LESs) of marine stratocumulus using the Weather Research and Forecasting (WRF) model exhibit a strong sensitivity to time stepping choices. In this study, we reproduce and analyze this sensitivity issue using two stratocumulus cases, one marine and one continental. Results show that (1) the sensitivity is associated with spurious motions near the moisture jump between the boundary layer and the free atmosphere, and (2) these spurious motions appear to arise from neglecting small variations in water vapor mixing ratio (qv) in the pressure gradient calculation in the acoustic sub-stepping portion of the integration procedure. We show that this issue is remedied in the WRF dynamical core by replacing the prognostic equation for the potential temperature θ with one for the moist potential temperature θm = θ(1 + 1.61 qv), which allows consistent treatment of moisture in the calculation of pressure during the acoustic sub-steps. With this modification, the spurious motions and the sensitivity to the time stepping settings (i.e., the dynamic time step length and number of acoustic sub-steps) are eliminated in both of the example stratocumulus cases. In conclusion, this modification improves the applicability of WRF for LES applications, and possibly other models using similar dynamical core formulations, and also permits the use of longer time steps than in the original code.
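The substitution at the heart of the fix is a one-line change of prognostic variable. A sketch of the conversion, using the constant given in the text (1.61 ≈ Rv/Rd for water vapor):

```python
def moist_potential_temperature(theta, qv):
    """theta_m = theta * (1 + 1.61*qv): the moist potential temperature
    prognosed in the modified dynamical core (qv in kg/kg)."""
    return theta * (1.0 + 1.61 * qv)
```

Across a stratocumulus inversion qv can change by several g/kg, so carrying theta_m keeps the associated order-1% density contribution consistent in the pressure calculation during the acoustic sub-steps.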
NASA Astrophysics Data System (ADS)
Hogan, J.; Demichelis, C.; Monier-Garbet, P.; Guirlet, R.; Hess, W.; Schunke, B.
2000-10-01
A model combining the MIST (core, symmetric) and BBQ (SOL, asymmetric) codes is used to study the relation between impurity density and radiated power for representative cases from Tore Supra experiments on strong radiation regimes using the ergodic divertor. Transport predictions of external radiation are compared with observation to estimate the absolute impurity density. BBQ provides the incoming distribution of recycling impurity charge states for the radial transport calculation. The shots studied use the ergodic divertor and high ICRH power. Power is first applied, and then the extrinsic impurity (Ne, N, or Ar) is injected. Separate time-dependent intrinsic (C and O) impurity transport calculations match radiation levels before and during the high-power and impurity-injection phases. Empirical diffusivities are sought to reproduce the UV (C V resonance and intercombination lines), C VI Lyα, O VIII Lyα, Zeff, and horizontal bolometer data. The model has been used to calculate the relative radiative efficiency (radiated power per extrinsically contributed electron) for the sample database.
Validation of the WIMSD4M cross-section generation code with benchmark results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leal, L.C.; Deen, J.R.; Woodruff, W.L.
1995-02-01
The WIMSD4 code has been adopted for cross-section generation in support of the Reduced Enrichment for Research and Test (RERTR) program at Argonne National Laboratory (ANL). Subsequently, the code has undergone several updates, and significant improvements have been achieved. The capability of generating group-collapsed micro- or macroscopic cross sections from the ENDF/B-V library and the more recent evaluation, ENDF/B-VI, in the ISOTXS format makes the modified version of the WIMSD4 code, WIMSD4M, very attractive, not only for the RERTR program, but also for the reactor physics community. The intent of the present paper is to validate the procedure to generate cross-section libraries for reactor analyses and calculations utilizing the WIMSD4M code. To do so, the results of calculations performed with group cross-section data generated with the WIMSD4M code will be compared against experimental results. These results correspond to calculations carried out with thermal reactor benchmarks of the Oak Ridge National Laboratory (ORNL) unreflected critical spheres, the TRX critical experiments, and calculations of a modified Los Alamos highly-enriched heavy-water moderated benchmark critical system. The benchmark calculations were performed with the discrete-ordinates transport code, TWODANT, using WIMSD4M cross-section data. Transport calculations using the XSDRNPM module of the SCALE code system are also included. In addition to transport calculations, diffusion calculations with the DIF3D code were also carried out, since the DIF3D code is used in the RERTR program for reactor analysis and design. For completeness, Monte Carlo results of calculations performed with the VIM and MCNP codes are also presented.
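Group collapsing of the kind WIMSD4M performs is, at its core, flux weighting: the collapsed constant preserves the reaction rate computed over the fine-group flux. A minimal sketch of that step (illustrative, not the WIMSD4M algorithm itself):

```python
def collapse(sigma_fine, flux_fine):
    """Flux-weighted collapse of fine-group cross sections into one group:
    sigma_G = sum_g(sigma_g * phi_g) / sum_g(phi_g), preserving reaction rate."""
    rate = sum(s * f for s, f in zip(sigma_fine, flux_fine))
    return rate / sum(flux_fine)
```

Collapsing cross sections of [2.0, 4.0] barns with group fluxes of [3.0, 1.0] gives 2.5 barns: the first group dominates because most of the flux sits there.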
MILC Code Performance on High End CPU and GPU Supercomputer Clusters
NASA Astrophysics Data System (ADS)
DeTar, Carleton; Gottlieb, Steven; Li, Ruizi; Toussaint, Doug
2018-03-01
With recent developments in parallel supercomputing architecture, many-core, multi-core, and GPU processors are now commonplace, resulting in more levels of parallelism, memory hierarchy, and programming complexity. It has been necessary to adapt the MILC code to these new processors starting with NVIDIA GPUs, and more recently, the Intel Xeon Phi processors. We report on our efforts to port and optimize our code for the Intel Knights Landing architecture. We consider performance of the MILC code with MPI and OpenMP, and optimizations with QOPQDP and QPhiX. For the latter approach, we concentrate on the staggered conjugate gradient and gauge force. We also consider performance on recent NVIDIA GPUs using the QUDA library.
Cooper, Christopher D; Bardhan, Jaydeep P; Barba, L A
2014-03-01
The continuum theory applied to biomolecular electrostatics leads to an implicit-solvent model governed by the Poisson-Boltzmann equation. Solvers relying on a boundary integral representation typically do not consider features like solvent-filled cavities or ion-exclusion (Stern) layers, due to the added difficulty of treating multiple boundary surfaces. This has hindered meaningful comparisons with volume-based methods, and the effects on accuracy of including these features has remained unknown. This work presents a solver called PyGBe that uses a boundary-element formulation and can handle multiple interacting surfaces. It was used to study the effects of solvent-filled cavities and Stern layers on the accuracy of calculating solvation energy and binding energy of proteins, using the well-known APBS finite-difference code for comparison. The results suggest that if required accuracy for an application allows errors larger than about 2% in solvation energy, then the simpler, single-surface model can be used. When calculating binding energies, the need for a multi-surface model is problem-dependent, becoming more critical when ligand and receptor are of comparable size. Comparing with the APBS solver, the boundary-element solver is faster when the accuracy requirements are higher. The cross-over point for the PyGBe code is on the order of 1-2% error, when running on one GPU card (NVIDIA Tesla C2075), compared with APBS running on six Intel Xeon CPU cores. PyGBe achieves algorithmic acceleration of the boundary element method using a treecode, and hardware acceleration using GPUs via PyCUDA from a user-visible code that is all Python. The code is open-source under the MIT license.
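A standard analytic sanity check for implicit-solvent Poisson-Boltzmann solvers of this kind is the Born ion: a single charge in a spherical cavity, for which the solvation energy has a closed form. This is a common test case in the field generally, not a result taken from the PyGBe paper:

```python
import math

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
E = 1.602176634e-19       # elementary charge, C
NA = 6.02214076e23        # Avogadro constant, 1/mol

def born_energy_kjmol(charge_e, radius_nm, eps_solvent=80.0):
    """Born solvation energy dG = -q^2/(8*pi*eps0*a) * (1 - 1/eps_s),
    returned in kJ/mol; ionic strength (Stern layer) effects are neglected."""
    q = charge_e * E
    a = radius_nm * 1e-9
    dg = -q * q / (8.0 * math.pi * EPS0 * a) * (1.0 - 1.0 / eps_solvent)
    return dg * NA / 1000.0
```

A monovalent ion of radius 0.2 nm in water (eps_s ≈ 80) gives roughly -343 kJ/mol, a convenient reference magnitude when judging the 1-2% error levels discussed above.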
Zhang, Xiaohua; Wong, Sergio E; Lightstone, Felice C
2013-04-30
A mixed parallel scheme that combines message passing interface (MPI) and multithreading was implemented in the AutoDock Vina molecular docking program. The resulting program, named VinaLC, was tested on the petascale high performance computing (HPC) machines at Lawrence Livermore National Laboratory. To exploit the typical cluster-type supercomputers, thousands of docking calculations were dispatched by the master process to run simultaneously on thousands of slave processes, where each docking calculation takes one slave process on one node, and within the node each docking calculation runs via multithreading on multiple CPU cores and shared memory. Input and output of the program and the data handling within the program were carefully designed to deal with large databases and ultimately achieve HPC on a large number of CPU cores. Parallel performance analysis of the VinaLC program shows that the code scales up to more than 15K CPUs with a very low overhead cost of 3.94%. One million flexible compound docking calculations took only 1.4 h to finish on about 15K CPUs. The docking accuracy of VinaLC has been validated against the DUD data set by the re-docking of X-ray ligands and an enrichment study; 64.4% of the top-scoring poses have RMSD values under 2.0 Å. The program has been demonstrated to have good enrichment performance on 70% of the targets in the DUD data set. An analysis of the enrichment factors calculated at various percentages of the screening database indicates VinaLC has very good early recovery of actives. Copyright © 2013 Wiley Periodicals, Inc.
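The master/worker dispatch described above can be sketched in miniature. This is a hypothetical Python stand-in (VinaLC itself is C++ using MPI plus multithreading), with a fake scoring function in place of a real docking calculation:

```python
# Hypothetical sketch of master/worker task dispatch: a pool of worker
# processes plays the role of the slave processes, and dock() is a fake
# per-ligand "docking" that just returns an invented score.
from multiprocessing import Pool

def dock(ligand_id):
    # placeholder "docking": returns (ligand id, fake binding score)
    return ligand_id, -6.0 - (ligand_id % 5) * 0.5

if __name__ == "__main__":
    ligands = range(100)                # stand-in for a compound database
    with Pool(processes=4) as pool:     # workers stand in for slave processes
        results = pool.map(dock, ligands)
    best = min(results, key=lambda r: r[1])
    print("best ligand:", best[0], "score:", best[1])
```

The key property this pattern relies on is that each docking calculation is independent, so dispatch overhead, not communication, bounds scalability.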
Advanced numerical methods for three dimensional two-phase flow calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Toumi, I.; Caruge, D.
1997-07-01
This paper is devoted to new numerical methods developed for both one- and three-dimensional two-phase flow calculations. These methods are finite volume numerical methods, based on the use of approximate Riemann solver concepts to define convective fluxes in terms of mean cell quantities. The first part of the paper presents the numerical method for a one-dimensional hyperbolic two-fluid model including differential terms such as added mass and interface pressure. This numerical solution scheme makes use of the Riemann problem solution to define backward and forward differencing to approximate spatial derivatives. The construction of this approximate Riemann solver uses an extension of Roe's method that has been successfully used to solve gas dynamic equations. Since the two-fluid model is hyperbolic, this numerical method seems very efficient for the numerical solution of two-phase flow problems. The scheme was applied both to shock tube problems and to standard tests for two-fluid computer codes. The second part describes the numerical method in the three-dimensional case. The authors also discuss some improvements performed to obtain a fully implicit solution method that provides fast-running steady-state calculations. Such a scheme is now implemented in a thermal-hydraulic computer code devoted to 3-D steady-state and transient computations. Some results obtained for Pressurised Water Reactors concerning upper plenum calculations and a steady-state flow in the core with rod bow effect evaluation are presented. In practice these new numerical methods have proved to be stable on non-staggered grids and capable of generating accurate non-oscillating solutions for two-phase flow calculations.
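The flux construction underlying such Roe-type schemes can be illustrated on the scalar Burgers equation. The toy problem below is our own illustration, not taken from the paper, which treats the full two-fluid system:

```python
# Roe-type approximate Riemann solver for the scalar Burgers equation
# u_t + (u^2/2)_x = 0: the numerical flux is the average of the physical
# fluxes plus an upwinding term scaled by the Roe-averaged wave speed.

def roe_flux(uL, uR):
    """Roe numerical flux for f(u) = u^2/2; f'(u) = u, Roe average = (uL+uR)/2."""
    f = lambda u: 0.5 * u * u
    a = 0.5 * (uL + uR)  # Roe-averaged wave speed
    return 0.5 * (f(uL) + f(uR)) - 0.5 * abs(a) * (uR - uL)

def step(u, dt, dx):
    """One explicit finite-volume update on a periodic 1-D grid."""
    n = len(u)
    flux = [roe_flux(u[i], u[(i + 1) % n]) for i in range(n)]
    return [u[i] - dt / dx * (flux[i] - flux[i - 1]) for i in range(n)]

u = [1.0 if i < 5 else 0.0 for i in range(10)]  # right-moving shock profile
u = step(u, dt=0.05, dx=0.1)
print(u)
```

The consistency property flux(u, u) = f(u) and the dissipative upwind term are what the paper's extension carries over to the coupled two-fluid equations.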
Multigroup cross section library for GFR2400
NASA Astrophysics Data System (ADS)
Čerba, Štefan; Vrban, Branislav; Lüley, Jakub; Haščík, Ján; Nečas, Vladimír
2017-09-01
In this paper the development and optimization of the SBJ_E71 multigroup cross section library for GFR2400 applications is discussed. A cross section processing scheme, merging Monte Carlo and deterministic codes, was developed. Several fine and coarse group structures and two weighting flux options were analysed through 18 benchmark experiments selected from the handbook of ICSBEP and based on performed similarity assessments. The performance of the collapsed version of the SBJ_E71 library was compared with MCNP5 CE ENDF/B VII.1 and the Korean KAFAX-E70 library. The comparison was made based on integral parameters of calculations performed on full-core homogeneous models.
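The flux-weighted group collapse at the heart of preparing such a library can be sketched as follows; the group structure and numbers are illustrative, not values from SBJ_E71:

```python
# Sketch of flux-weighted cross-section collapsing: coarse-group cross
# sections preserve reaction rates under the chosen weighting flux.
# Groups and values below are invented for illustration.

def collapse(sigma_fine, flux_fine, coarse_map):
    """Collapse fine-group cross sections to coarse groups.

    sigma_fine : fine-group cross sections
    flux_fine  : fine-group weighting fluxes
    coarse_map : coarse-group index assigned to each fine group
    """
    n_coarse = max(coarse_map) + 1
    rate = [0.0] * n_coarse  # sum of sigma_g * phi_g per coarse group
    flux = [0.0] * n_coarse  # sum of phi_g per coarse group
    for s, phi, g in zip(sigma_fine, flux_fine, coarse_map):
        rate[g] += s * phi
        flux[g] += phi
    return [r / f for r, f in zip(rate, flux)]

# 4 fine groups collapsed into 2 coarse groups
sigma = [2.0, 1.5, 1.0, 0.5]
phi = [1.0, 3.0, 2.0, 2.0]
print(collapse(sigma, phi, [0, 0, 1, 1]))  # flux-weighted averages
```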
Wallace, Sarah J; Worrall, Linda; Rose, Tanya; Le Dorze, Guylaine
2017-11-12
This study synthesised the findings of three separate consensus processes exploring the perspectives of key stakeholder groups about important aphasia treatment outcomes. This process was conducted to generate recommendations for outcome domains to be included in a core outcome set for aphasia treatment trials. International Classification of Functioning, Disability, and Health codes were examined to identify where the groups of: (1) people with aphasia, (2) family members, (3) aphasia researchers, and (4) aphasia clinicians/managers, demonstrated congruence in their perspectives regarding important treatment outcomes. Codes were contextualized using qualitative data. Congruence across three or more stakeholder groups was evident for ICF chapters: Mental functions; Communication; and Services, systems, and policies. Quality of life was explicitly identified by clinicians/managers and researchers, while people with aphasia and their families identified outcomes known to be determinants of quality of life. Core aphasia outcomes include: language, emotional wellbeing, communication, patient-reported satisfaction with treatment and impact of treatment, and quality of life. International Classification of Functioning, Disability, and Health coding can be used to compare stakeholder perspectives and identify domains for core outcome sets. Pairing coding with qualitative data may ensure important nuances of meaning are retained. Implications for rehabilitation: The outcomes measured in treatment research should be relevant to stakeholders and support health care decision making. Core outcome sets (agreed, minimum set of outcomes, and outcome measures) are increasingly being used to ensure the relevancy and consistency of the outcomes measured in treatment studies. Important aphasia treatment outcomes span all components of the International Classification of Functioning, Disability, and Health.
Stakeholders demonstrated congruence in the identification of important outcomes which related to Mental functions; Communication; Services, systems, and policies; and Quality of life. A core outcome set for aphasia treatment research should include measures relating to: language, emotional wellbeing, communication, patient-reported satisfaction with treatment and impact of treatment, and quality of life. Coding using the International Classification of Functioning, Disability, and Health presents a novel methodology for the comparison of stakeholder perspectives to inform recommendations for outcome constructs to be included in a core outcome set. Coding can be paired with qualitative data to ensure nuances of meaning are retained.
Parametric Analysis of a Turbine Trip Event in a BWR Using a 3D Nodal Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorzel, A.
2006-07-01
Two essential thermal hydraulics safety criteria concerning the reactor core are that even during operational transients there is no fuel melting and impermissible cladding temperatures are avoided. A common concept for boiling water reactors is to establish a minimum critical power ratio (MCPR) for steady state operation. For this MCPR it is shown that only a very small number of fuel rods suffer a short-term dryout during the transient. It is known from experience that the limiting transient for the determination of the MCPR is the turbine trip with blocked bypass system. This fast transient was simulated for a German BWR by use of the three-dimensional reactor analysis transient code SIMULATE-3K. The transient behaviour of the hot channels was used as input for the dryout calculation with the transient thermal hydraulics code FRANCESCA. In this way the maximum reduction of the CPR during the transient could be calculated. The fast increase in reactor power due to the pressure increase and to an increased core inlet flow is limited mainly by the Doppler effect, but automatically triggered operational measures can also contribute to the mitigation of the turbine trip. One very important method is the short-term fast reduction of the recirculation pump speed, which is initiated e.g. by a pressure increase in front of the turbine. The large impacts of the starting time and of the rate of the pump speed reduction on the power progression, and hence on the deterioration of the CPR, are presented. Another important procedure to limit the effects of the transient is the fast shutdown of the reactor, which is triggered when the reactor power reaches the limit value. It is shown that the SCRAM is not fast enough to reduce the first power maximum, but is able to prevent the appearance of a second, much smaller, maximum that would occur around one second after the first one in the absence of a SCRAM. (author)
Thermal neutron filter design for the neutron radiography facility at the LVR-15 reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soltes, Jaroslav; Faculty of Nuclear Sciences and Physical Engineering, CTU in Prague; Viererbl, Ladislav
2015-07-01
In 2011 a decision was made to build a neutron radiography facility at one of the unused horizontal channels of the LVR-15 research reactor in Rez, Czech Republic. One of the key conditions for operating an effective radiography facility is the delivery of a high-intensity, homogeneous and collimated thermal neutron beam at the sample location. Additionally, the intensity of fast neutrons has to be kept as low as possible, as fast neutrons may damage the detectors used for neutron imaging. As the spectrum in the empty horizontal channel roughly copies the spectrum in the reactor core, which has a high ratio of fast neutrons, neutron filter components have to be installed inside the channel in order to achieve the desired beam parameters. As the channel design does not allow the installation of complex filters and collimators, neutron filters made of large single-crystal ingots of suitable material composition represent an optimal solution. Single-crystal silicon was chosen as a favorable filter material for its wide availability in sufficient dimensions. Besides its ability to reasonably lower the ratio of fast neutrons while still keeping high intensities of thermal neutrons, thanks to its large dimensions it also serves as shielding against gamma radiation from the reactor core. The Monte-Carlo MCNP transport code was used to design the necessary filter dimensions. As the code does not provide neutron cross-section libraries for thermal neutron transport through single-crystalline silicon, these had to be created by approximating the theory of thermal neutron scattering and modifying the original cross-section data provided with the code. A series of calculations showed that a filter thickness of 1 m was sufficient to obtain a beam with the desired parameters and a low gamma background. After mounting the filter inside the channel, several measurements of the neutron field were made at the beam exit.
The measurements confirmed the calculated values. After successful filter installation and the series of measurements, first neutron radiography tests with samples could be carried out. (authors)
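The sizing of such a filter can be estimated with simple narrow-beam attenuation; the removal cross sections below are assumed round numbers for illustration, not the MCNP data used in the study:

```python
# Back-of-the-envelope filter attenuation: narrow-beam transmission through
# thickness t is I/I0 = exp(-Sigma * t). The macroscopic cross sections here
# are invented placeholders, not the study's single-crystal silicon data.
import math

def transmission(sigma_per_cm, thickness_cm):
    """Narrow-beam transmission I/I0 = exp(-Sigma * t)."""
    return math.exp(-sigma_per_cm * thickness_cm)

SIGMA_FAST = 0.01     # assumed effective removal cross section, fast (1/cm)
SIGMA_THERMAL = 0.004 # assumed effective removal cross section, thermal (1/cm)

t = 100.0  # the 1 m filter, in cm
print("fast transmission:    %.3f" % transmission(SIGMA_FAST, t))
print("thermal transmission: %.3f" % transmission(SIGMA_THERMAL, t))
```

Whenever the thermal cross section is smaller than the fast one, the filter improves the thermal-to-fast ratio at the cost of overall intensity, which is the trade-off the study balances.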
DOE Office of Scientific and Technical Information (OSTI.GOV)
McKemmish, Laura K., E-mail: laura.mckemmish@gmail.com; Research School of Chemistry, Australian National University, Canberra
Algorithms for the efficient calculation of two-electron integrals in the newly developed mixed ramp-Gaussian basis sets are presented, alongside a Fortran90 implementation of these algorithms, RAMPITUP. These new basis sets have significant potential to (1) give some speed-up (estimated at up to 20% for large molecules in fully optimised code) to general-purpose Hartree-Fock (HF) and density functional theory quantum chemistry calculations, replacing all-Gaussian basis sets, and (2) give very large speed-ups for calculations of core-dependent properties, such as electron density at the nucleus, NMR parameters, relativistic corrections, and total energies, replacing the current use of Slater basis functions or very large specialised all-Gaussian basis sets for these purposes. This initial implementation already demonstrates roughly 10% speed-ups in HF/R-31G calculations compared to HF/6-31G calculations for large linear molecules, demonstrating the promise of this methodology, particularly for the second application. As well as the reduction in the total primitive number in R-31G compared to 6-31G, this timing advantage can be attributed to the significant reduction in the number of mathematically complex intermediate integrals after modelling each ramp-Gaussian basis-function pair as a sum of ramps on a single atomic centre.
Upgrade of Irradiation Test Capability of the Experimental Fast Reactor Joyo
NASA Astrophysics Data System (ADS)
Sekine, Takashi; Aoyama, Takafumi; Suzuki, Soju; Yamashita, Yoshioki
2003-06-01
The JOYO MK-II core was operated from 1983 to 2000 as a fast neutron irradiation bed. In order to meet various requirements for irradiation tests for the development of FBRs, the JOYO upgrading project named the MK-III program was initiated. The irradiation capability of the MK-III core will be about four times larger than that of the MK-II core. Advanced irradiation test subassemblies such as a capsule-type subassembly and an on-line instrumentation rig are planned. As an innovative reactor safety system, the irradiation test of the Self-Actuated Shutdown System (SASS) will be conducted. In order to improve the accuracy of neutron fluence, the core management code system was upgraded, and the Monte Carlo code and Helium Accumulation Fluence Monitor (HAFM) were applied. The MK-III core is planned to achieve initial criticality in July 2003.
Design of an Object-Oriented Turbomachinery Analysis Code: Initial Results
NASA Technical Reports Server (NTRS)
Jones, Scott M.
2015-01-01
Performance prediction of turbomachines is a significant part of aircraft propulsion design. In the conceptual design stage, there is an important need to quantify compressor and turbine aerodynamic performance and develop initial geometry parameters at the 2-D level prior to more extensive Computational Fluid Dynamics (CFD) analyses. The Object-oriented Turbomachinery Analysis Code (OTAC) is being developed to perform 2-D meridional flowthrough analysis of turbomachines using an implicit formulation of the governing equations to solve for the conditions at the exit of each blade row. OTAC is designed to perform meanline or streamline calculations; for streamline analyses simple radial equilibrium is used as a governing equation to solve for spanwise property variations. While the goal for OTAC is to allow simulation of physical effects and architectural features unavailable in other existing codes, it must first prove capable of performing calculations for conventional turbomachines. OTAC is being developed using the interpreted language features available in the Numerical Propulsion System Simulation (NPSS) code described by Claus et al (1991). Using the NPSS framework came with several distinct advantages, including access to the pre-existing NPSS thermodynamic property packages and the NPSS Newton-Raphson solver. The remaining objects necessary for OTAC were written in the NPSS framework interpreted language. These new objects form the core of OTAC and are the BladeRow, BladeSegment, TransitionSection, Expander, Reducer, and OTACstart Elements. The BladeRow and BladeSegment consumed the initial bulk of the development effort and required determining the equations applicable to flow through turbomachinery blade rows given specific assumptions about the nature of that flow.
Once these objects were completed, OTAC was tested and found to agree with existing solutions from other codes; these tests included various meanline and streamline comparisons of axial compressors and turbines at design and off-design conditions.
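The Newton-Raphson solution of blade-row exit conditions that OTAC inherits from the NPSS solver can be sketched for a single unknown; the residual function below is an invented toy, not OTAC's governing equations:

```python
# Scalar Newton-Raphson with a finite-difference derivative, the iteration
# pattern an implicit flowthrough solver applies to its residual equations.
# The residual used here is a made-up flow-function balance for illustration.

def newton(residual, x0, tol=1e-10, max_iter=50):
    """Find x with residual(x) ~ 0 by Newton-Raphson iteration."""
    x = x0
    for _ in range(max_iter):
        r = residual(x)
        if abs(r) < tol:
            return x
        h = 1e-6 * max(abs(x), 1.0)
        drdx = (residual(x + h) - r) / h  # 1x1 finite-difference "Jacobian"
        x -= r / drdx
    raise RuntimeError("Newton iteration did not converge")

# toy residual: solve m*(1 - 0.2*m) = 0.672 for the root near m = 0.8
x = newton(lambda m: m * (1.0 - 0.2 * m) - 0.672, 0.5)
print(round(x, 6))
```

In OTAC the same idea runs over a vector of residuals, one per governing equation at each blade-row exit, with the NPSS solver managing the Jacobian.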
The CRONOS Code for Astrophysical Magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Kissmann, R.; Kleimann, J.; Krebl, B.; Wiengarten, T.
2018-06-01
We describe the magnetohydrodynamics (MHD) code CRONOS, which has been used in astrophysics and space-physics studies in recent years. CRONOS has been designed to be easily adaptable to the problem in hand, where the user can expand or exchange core modules or add new functionality to the code. This modularity comes about through its implementation using a C++ class structure. The core components of the code include solvers for both hydrodynamical (HD) and MHD problems. These problems are solved on different rectangular grids, which currently support Cartesian, spherical, and cylindrical coordinates. CRONOS uses a finite-volume description with different approximate Riemann solvers that can be chosen at runtime. Here, we describe the implementation of the code with a view toward its ongoing development. We illustrate the code’s potential through several (M)HD test problems and some astrophysical applications.
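The runtime-selectable Riemann solver design can be shown in miniature. CRONOS does this through its C++ class structure; the sketch below uses a simple registry of scalar flux functions, where the solver names and fluxes are illustrative stand-ins, not CRONOS internals:

```python
# A registry of interchangeable numerical flux functions, selected by name at
# runtime: the same modularity idea as CRONOS's pluggable Riemann solvers,
# reduced to two toy scalar fluxes.

def lax_friedrichs(uL, uR, f, alpha):
    """Local Lax-Friedrichs flux: central average plus dissipation."""
    return 0.5 * (f(uL) + f(uR)) - 0.5 * alpha * (uR - uL)

def central(uL, uR, f, alpha):
    """Plain central average (no dissipation)."""
    return 0.5 * (f(uL) + f(uR))

SOLVERS = {"llf": lax_friedrichs, "central": central}

def flux(name, uL, uR, f=lambda u: u, alpha=1.0):
    return SOLVERS[name](uL, uR, f, alpha)  # solver chosen at runtime by name

print(flux("llf", 1.0, 0.0))      # average plus dissipation term
print(flux("central", 1.0, 0.0))  # plain average
```

Swapping solvers then requires no change to the update loop, only a different registry key, which is what makes the modular design convenient for testing.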
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jeong, Hae-Yong; Ha, Kwi-Seok; Chang, Won-Pyo
The local blockage in a subassembly of a liquid metal-cooled reactor (LMR) is of importance to the plant safety because of the compact design and the high power density of the core. To analyze the thermal-hydraulic parameters in a subassembly of a liquid metal-cooled reactor with a flow blockage, the Korea Atomic Energy Research Institute has developed the MATRA-LMR-FB code. This code uses the distributed resistance model to describe the sweeping flow formed by the wire wrap around the fuel rods and to model the recirculation flow after a blockage. The hybrid difference scheme is also adopted for the description of the convective terms in the recirculating wake region of low velocity. Some state-of-the-art turbulent mixing models were implemented in the code, and the models suggested by Rehme and by Zhukov are analyzed and found to be appropriate for the description of the flow blockage in an LMR subassembly. The MATRA-LMR-FB code predicts accurately the experimental data of the Oak Ridge National Laboratory 19-pin bundle with a blockage for both the high-flow and low-flow conditions. The influences of the distributed resistance model, the hybrid difference method, and the turbulent mixing models are evaluated step by step with the experimental data. The appropriateness of the models also has been evaluated through a comparison with the results from the COMMIX code calculation. The flow blockage for the KALIMER design has been analyzed with the MATRA-LMR-FB code and is compared with the SABRE code to guarantee the design safety for the flow blockage.
Environment-based pin-power reconstruction method for homogeneous core calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leroyer, H.; Brosselard, C.; Girardi, E.
2012-07-01
Core calculation schemes are usually based on a classical two-step approach associated with assembly and core calculations. During the first step, infinite lattice assembly calculations relying on a fundamental mode approach are used to generate cross-section libraries for PWR core calculations. This fundamental mode hypothesis may be questioned when dealing with loading patterns involving several types of assemblies (UOX, MOX), burnable poisons, control rods and burn-up gradients. This paper proposes a calculation method able to take into account the heterogeneous environment of the assemblies when using homogeneous core calculations and an appropriate pin-power reconstruction. This methodology is applied to MOX assemblies, computed within an environment of UOX assemblies. The new environment-based pin-power reconstruction is then used on various clusters of 3x3 assemblies showing burn-up gradients and UOX/MOX interfaces, and compared to reference calculations performed with APOLLO-2. The results show that UOX/MOX interfaces are much better calculated with the environment-based calculation scheme when compared to the usual pin-power reconstruction method. The power peak is always better located and calculated with the environment-based pin-power reconstruction method on every cluster configuration studied. This study shows that taking into account the environment in transport calculations can significantly improve the pin-power reconstruction as long as it is consistent with the core loading pattern. (authors)
CoreTSAR: Core Task-Size Adapting Runtime
Scogland, Thomas R. W.; Feng, Wu-chun; Rountree, Barry; ...
2014-10-27
Heterogeneity continues to increase at all levels of computing, with the rise of accelerators such as GPUs, FPGAs, and other co-processors into everything from desktops to supercomputers. As a consequence, efficiently managing such disparate resources has become increasingly complex. CoreTSAR seeks to reduce this complexity by adaptively worksharing parallel-loop regions across compute resources without requiring any transformation of the code within the loop. Our results show performance improvements of up to three-fold over a current state-of-the-art heterogeneous task scheduler as well as linear performance scaling from a single GPU to four GPUs for many codes. In addition, CoreTSAR demonstrates a robust ability to adapt to both a variety of workloads and underlying system configurations.
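The adaptive worksharing idea can be sketched as a throughput-proportional re-split of loop iterations; the device timings below are invented for illustration, and CoreTSAR's actual scheduling model is more sophisticated:

```python
# Sketch of adaptive worksharing: each device's share of the next pass over a
# parallel loop is re-weighted by its measured throughput on the last pass.
# Device count, shares, and timings are invented examples.

def adapt_shares(iters, shares, times):
    """Redistribute loop iterations proportionally to observed throughput."""
    # iterations/second achieved by each device on the previous pass
    rates = [s * iters / t for s, t in zip(shares, times)]
    total = sum(rates)
    return [r / total for r in rates]

shares = [0.5, 0.5]  # start with an even CPU/GPU split of the loop
times = [2.0, 0.5]   # the GPU finished its half of the work 4x faster
shares = adapt_shares(1000, shares, times)
print(shares)        # the GPU now receives the larger share of iterations
```

Iterating this update drives both devices toward finishing each pass at the same time, which is the load-balance condition the scheduler targets.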
NASA Astrophysics Data System (ADS)
Mielikainen, Jarno; Huang, Bormin; Huang, Allen H.
2014-10-01
The Purdue-Lin scheme is a relatively sophisticated microphysics scheme in the Weather Research and Forecasting (WRF) model. The scheme includes six classes of hydrometeors: water vapor, cloud water, rain, cloud ice, snow and graupel. The scheme is very suitable for massively parallel computation as there are no interactions among horizontal grid points. In this paper, we accelerate the Purdue-Lin scheme using Intel Many Integrated Core Architecture (MIC) hardware. The Intel Xeon Phi is a high performance coprocessor consisting of up to 61 cores. The Xeon Phi is connected to a CPU via the PCI Express (PCIe) bus. In this paper, we will discuss in detail the code optimization issues encountered while tuning the Purdue-Lin microphysics Fortran code for the Xeon Phi. In particular, getting good performance required utilizing multiple cores, wide vector operations, and efficient use of memory. The results show that the optimizations improved performance of the original code on the Xeon Phi 5110P by a factor of 4.2x. Furthermore, the same optimizations improved performance on an Intel Xeon E5-2603 CPU by a factor of 1.2x compared to the original code.
Progress Towards a Rad-Hydro Code for Modern Computing Architectures LA-UR-10-02825
NASA Astrophysics Data System (ADS)
Wohlbier, J. G.; Lowrie, R. B.; Bergen, B.; Calef, M.
2010-11-01
We are entering an era of high performance computing where data movement is the overwhelming bottleneck to scalable performance, as opposed to the speed of floating-point operations per processor. All multi-core hardware paradigms, whether heterogeneous or homogeneous, be it the Cell processor, GPGPU, or multi-core x86, share this common trait. In multi-physics applications such as inertial confinement fusion or astrophysics, one may be solving multi-material hydrodynamics with tabular equation of state data lookups, radiation transport, nuclear reactions, and charged particle transport in a single time cycle. The algorithms are intensely data dependent, e.g., EOS, opacity, nuclear data, and multi-core hardware memory restrictions are forcing code developers to rethink code and algorithm design. For the past two years LANL has been funding a small effort referred to as Multi-Physics on Multi-Core to explore ideas for code design as pertaining to inertial confinement fusion and astrophysics applications. The near term goals of this project are to have a multi-material radiation hydrodynamics capability, with tabular equation of state lookups, on Cartesian and curvilinear block structured meshes. In the longer term we plan to add fully implicit multi-group radiation diffusion and material heat conduction, and block structured AMR. We will report on our progress to date.
Theoretical Studies of Dissociative Recombination of Electrons with SH+ Ions
NASA Astrophysics Data System (ADS)
Kashinski, D. O.; di Nallo, O. E.; Hickman, A. P.; Mezei, J. Zs.; Colboc, F.; Schneider, I. F.; Chakrabarti, K.; Talbi, D.
2017-04-01
We are investigating the dissociative recombination (DR) of electrons with the molecular ion SH+, i.e. e- + SH+ -> S + H. SH+ is found in the interstellar medium (ISM), and little is known concerning its chemistry. Understanding the role of DR of electrons with SH+ will lead to more accurate astrophysical models. Large active-space multi-reference configuration interaction (MRCI) electronic structure calculations were performed using the GAMESS code to obtain ground and excited 2 Π state potential energy curves (PECs) for several values of SH separation. Core-excited Rydberg states have proven to be of huge importance. The block diagonalization method was used to disentangle interacting states and form a diabatic representation of the PECs. Currently we are performing dynamics calculations using Multichannel Quantum Defect Theory (MQDT) to obtain DR rates. The status of the work will be presented at the conference. Work supported by the French CNRS, the NSF, the XSEDE, and USMA.
Theoretical Studies of Dissociative Recombination of Electrons with SH+ Ions
NASA Astrophysics Data System (ADS)
Kashinski, D. O.; di Nallo, O. E.; Hickman, A. P.; Mezei, J. Zs.; Colboc, F.; Schneider, I. F.; Chakrabarti, K.; Talbi, D.
2016-05-01
We are investigating the dissociative recombination (DR) of electrons with the molecular ion SH+, i.e. e- + SH+ -> S + H. SH+ is found in the interstellar medium (ISM), and little is known concerning its chemistry. Understanding the role of DR of electrons with SH+ will lead to more accurate astrophysical models. Large active-space multi-reference configuration interaction (MRCI) electronic structure calculations were performed using the GAMESS code to obtain ground and excited 2 Π state potential energy curves (PECs) for several values of SH separation. Core-excited Rydberg states have proven to be of huge importance. The block diagonalization method was used to disentangle interacting states and form a diabatic representation of the PECs. Currently we are performing dynamics calculations using Multichannel Quantum Defect Theory (MQDT) to obtain DR rates. The status of the work will be presented at the conference. Work supported by the French CNRS, the NSF, the XSEDE, and USMA.
Potential Application of a Graphical Processing Unit to Parallel Computations in the NUBEAM Code
NASA Astrophysics Data System (ADS)
Payne, J.; McCune, D.; Prater, R.
2010-11-01
NUBEAM is a comprehensive computational Monte Carlo based model for neutral beam injection (NBI) in tokamaks. NUBEAM computes NBI-relevant profiles in tokamak plasmas by tracking the deposition and the slowing of fast ions. At the core of NUBEAM are vector calculations used to track fast ions. These calculations have recently been parallelized to run on MPI clusters. However, cost and interlink bandwidth limit the ability to fully parallelize NUBEAM on an MPI cluster. Recent implementation of double precision capabilities for Graphical Processing Units (GPUs) presents a cost effective and high performance alternative or complement to MPI computation. Commercially available graphics cards can achieve up to 672 GFLOPS double precision and can handle hundreds of thousands of threads. The ability to execute at least one thread per particle simultaneously could significantly reduce the execution time and the statistical noise of NUBEAM. Progress on implementation on a GPU will be presented.
Correlation Energies from the Two-Component Random Phase Approximation.
Kühn, Michael
2014-02-11
The correlation energy within the two-component random phase approximation accounting for spin-orbit effects is derived. The resulting plasmon equation is rewritten, analogously to the scalar relativistic case, in terms of the trace of two Hermitian matrices for (Kramers-restricted) closed-shell systems, and is then represented as an integral over imaginary frequency using the resolution-of-the-identity approximation. The final expression is implemented in the TURBOMOLE program suite. The code is applied to the computation of equilibrium distances and vibrational frequencies of heavy diatomic molecules. The efficiency is demonstrated by calculation of the relative energies of the Oh-, D4h-, and C5v-symmetric isomers of Pb6. Results within the random phase approximation are obtained based on two-component Kohn-Sham reference-state calculations, using effective-core potentials. These values are finally compared to other two-component and scalar relativistic methods, as well as experimental data.
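The plasmon-type correlation energy referred to above is commonly written as a frequency integral; the standard textbook form, in our notation with Π(iω) = χ₀(iω)v, is (this is not transcribed from the paper, which derives the two-component generalization):

```latex
E_c^{\mathrm{RPA}}
  = \frac{1}{2\pi} \int_0^{\infty} \mathrm{d}\omega \,
    \operatorname{Tr}\!\left[
      \ln\!\bigl(\mathbf{1} - \boldsymbol{\Pi}(\mathrm{i}\omega)\bigr)
      + \boldsymbol{\Pi}(\mathrm{i}\omega)
    \right]
```

The resolution-of-the-identity step mentioned in the abstract makes the matrices inside the trace tractable by expanding products of orbitals in an auxiliary basis.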
WOMBAT: A Scalable and High-performance Astrophysical Magnetohydrodynamics Code
NASA Astrophysics Data System (ADS)
Mendygral, P. J.; Radcliffe, N.; Kandalla, K.; Porter, D.; O'Neill, B. J.; Nolting, C.; Edmon, P.; Donnert, J. M. F.; Jones, T. W.
2017-02-01
We present a new code for astrophysical magnetohydrodynamics specifically designed and optimized for high performance and scaling on modern and future supercomputers. We describe a novel hybrid OpenMP/MPI programming model that emerged from a collaboration between Cray, Inc. and the University of Minnesota. This design utilizes MPI-RMA optimized for thread scaling, which allows the code to run extremely efficiently at very high thread counts ideal for the latest generation of multi-core and many-core architectures. Such performance characteristics are needed in the era of “exascale” computing. We describe and demonstrate our high-performance design in detail with the intent that it may be used as a model for other, future astrophysical codes intended for applications demanding exceptional performance.
Research on Streamlines and Aerodynamic Heating for Unstructured Grids on High-Speed Vehicles
NASA Technical Reports Server (NTRS)
DeJarnette, Fred R.; Hamilton, H. Harris (Technical Monitor)
2001-01-01
Engineering codes are needed that can calculate convective heating rates accurately and expeditiously on the surfaces of high-speed vehicles. One code which has proven to meet these needs is the Langley Approximate Three-Dimensional Convective Heating (LATCH) code. It uses the axisymmetric analogue in an integral boundary-layer method to calculate laminar and turbulent heating rates along inviscid surface streamlines. It requires the solution of the inviscid flow field to provide the surface properties needed to calculate the streamlines and streamline metrics. The LATCH code has been used with inviscid codes which calculated the flow field on structured grids. Several more recent inviscid codes calculate flow field properties on unstructured grids. The present research develops a method to calculate inviscid surface streamlines, the streamline metrics, and heating rates using the properties calculated from inviscid flow fields on unstructured grids. Mr. Chris Riley, prior to his departure from NASA LaRC, developed a preliminary code in the C language, called "UNLATCH", to accomplish these goals. No publication was made on his research. The present research extends and improves on the code developed by Riley. Particular attention is devoted to the stagnation region, and the method is intended for programming in the FORTRAN 90 language.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, Guoping; D'Azevedo, Ed F; Zhang, Fan
2010-01-01
Calibration of groundwater models involves hundreds to thousands of forward solutions, each of which may solve many transient coupled nonlinear partial differential equations, resulting in a computationally intensive problem. We describe a hybrid MPI/OpenMP approach to exploit two levels of parallelism in software and hardware to reduce calibration time on multi-core computers. HydroGeoChem 5.0 (HGC5) is parallelized using OpenMP for direct solutions for a reactive transport model application and a field-scale coupled flow and transport model application. In the reactive transport model, a single parallelizable loop is identified, using GPROF, to account for over 97% of the total computational time. Addition of a few lines of OpenMP compiler directives to the loop yields a speedup of about 10 on a 16-core compute node. For the field-scale model, parallelizable loops in 14 of 174 HGC5 subroutines that require 99% of the execution time are identified. As these loops are parallelized incrementally, the scalability is found to be limited by a loop where Cray PAT detects a cache miss rate of over 90%. With this loop rewritten, a speedup similar to that of the first application is achieved. The OpenMP-parallelized code can be run efficiently on multiple workstations in a network or on multiple compute nodes of a cluster as slaves using parallel PEST to speed up model calibration. To run calibration on clusters as a single task, the Levenberg-Marquardt algorithm is added to HGC5, with the Jacobian calculation and lambda search parallelized using MPI. With this hybrid approach, 100 to 200 compute cores are used to reduce the calibration time from weeks to a few hours for these two applications. This approach is applicable to most existing groundwater model codes for many applications.
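The reported speedup of about 10 from parallelizing a loop covering roughly 97% of the runtime on a 16-core node is consistent with Amdahl's law. A minimal sketch (the function is ours, not part of HGC5):

```python
def amdahl_speedup(parallel_fraction, n_cores):
    """Amdahl's law: ideal speedup when only part of the runtime parallelizes."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores)

# A loop covering 97% of runtime, run on a 16-core compute node:
print(round(amdahl_speedup(0.97, 16), 1))  # 11.0, close to the observed ~10
```

The residual 3% serial fraction dominates at higher core counts, which is why the cache-bound loop mentioned above had to be rewritten before further scaling paid off.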
Cscibox: A Software System for Age-Model Construction and Evaluation
NASA Astrophysics Data System (ADS)
Bradley, E.; Anderson, K. A.; Marchitto, T. M., Jr.; de Vesine, L. R.; White, J. W. C.; Anderson, D. M.
2014-12-01
CSciBox is an integrated software system for the construction and evaluation of age models of paleo-environmental archives, both directly dated and cross dated. The time has come to encourage cross-pollination between earth science and computer science in dating paleorecords. This project addresses that need. The CSciBox code, which is being developed by a team of computer scientists and geoscientists, is open source and freely available on github. The system employs modern database technology to store paleoclimate proxy data and analysis results in an easily accessible and searchable form. This makes it possible to do analysis on the whole core at once, in an interactive fashion, or to tailor the analysis to a subset of the core without loading the entire data file. CSciBox provides a number of 'components' that perform the common steps in age-model construction and evaluation: calibrations, reservoir-age correction, interpolations, statistics, and so on. The user employs these components via a graphical user interface (GUI) to go from raw data to finished age model in a single tool: e.g., an IntCal09 calibration of 14C data from a marine sediment core, followed by a piecewise-linear interpolation. CSciBox's GUI supports plotting of any measurement in the core against any other measurement, or against any of the variables in the calculation of the age model, with or without explicit error representations. Using the GUI, CSciBox's user can import a new calibration curve or other background data set and define a new module that employs that information. Users can also incorporate other software (e.g., Calib, BACON) as 'plug-ins.' In the case of truly large data or significant computational effort, CSciBox is parallelizable across modern multicore processors, clusters, or even the cloud.
The next generation of the CSciBox code, currently in the testing stages, includes an automated reasoning engine that supports a more-thorough exploration of plausible age models and cross-dating scenarios.
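The piecewise-linear interpolation step mentioned above reduces to a depth-to-age lookup through dated tie points. A minimal stdlib sketch, with invented tie-point values purely for illustration:

```python
from bisect import bisect_right

def age_at_depth(depth, tie_points):
    """Piecewise-linear age-depth model through (depth, calibrated age) tie points."""
    depths = [d for d, _ in tie_points]
    ages = [a for _, a in tie_points]
    if not depths[0] <= depth <= depths[-1]:
        raise ValueError("depth outside the dated interval")
    # Locate the segment containing this depth and interpolate linearly within it.
    i = min(bisect_right(depths, depth), len(depths) - 1)
    d0, d1, a0, a1 = depths[i - 1], depths[i], ages[i - 1], ages[i]
    return a0 + (a1 - a0) * (depth - d0) / (d1 - d0)

# Hypothetical tie points: (depth in cm, calibrated age in years BP)
tie = [(0.0, 0.0), (100.0, 2000.0), (250.0, 7000.0)]
print(age_at_depth(50.0, tie))  # 1000.0
```

In CSciBox the tie points would come from calibrated 14C dates (e.g., the IntCal09 step), with uncertainty carried alongside; this sketch shows only the interpolation itself.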
NASA Technical Reports Server (NTRS)
Norment, H. G.
1980-01-01
Calculations can be performed for any atmospheric conditions and for all water drop sizes, from the smallest cloud droplet to large raindrops. Any subsonic, external, non-lifting flow can be accommodated; flow into, but not through, inlets also can be simulated. Experimental water drop drag relations are used in the water drop equations of motion and effects of gravity settling are included. Seven codes are described: (1) a code used to debug and plot body surface description data; (2) a code that processes the body surface data to yield the potential flow field; (3) a code that computes flow velocities at arrays of points in space; (4) a code that computes water drop trajectories from an array of points in space; (5) a code that computes water drop trajectories and fluxes to arbitrary target points; (6) a code that computes water drop trajectories tangent to the body; and (7) a code that produces stereo pair plots which include both the body and trajectories. Code descriptions include operating instructions, card inputs and printouts for example problems, and listing of the FORTRAN codes. Accuracy of the calculations is discussed, and trajectory calculation results are compared with prior calculations and with experimental data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowman, S.M.
1995-01-01
The requirements of ANSI/ANS 8.1 specify that calculational methods for away-from-reactor criticality safety analyses be validated against experimental measurements. If credit for the negative reactivity of the depleted (or spent) fuel isotopics is desired, it is necessary to benchmark computational methods against spent fuel critical configurations. This report summarizes a portion of the ongoing effort to benchmark away-from-reactor criticality analysis methods using critical configurations from commercial pressurized-water reactors. The analysis methodology selected for all the calculations reported herein is based on the codes and data provided in the SCALE-4 code system. The isotopic densities for the spent fuel assemblies in the critical configurations were calculated using the SAS2H analytical sequence of the SCALE-4 system. The sources of data and the procedures for deriving SAS2H input parameters are described in detail. The SNIKR code module was used to extract the necessary isotopic densities from the SAS2H results and provide the data in the format required by the SCALE criticality analysis modules. The CSASN analytical sequence in SCALE-4 was used to perform resonance processing of the cross sections. The KENO V.a module of SCALE-4 was used to calculate the effective multiplication factor (k_eff) of each case. The SCALE-4 27-group burnup library containing ENDF/B-IV (actinides) and ENDF/B-V (fission products) data was used for all the calculations. This volume of the report documents the SCALE system analysis of three reactor critical configurations for Sequoyah Unit 2 Cycle 3. This unit and cycle were chosen because of their relevance to spent fuel benchmark applications: (1) the unit had a significantly long downtime of 2.7 years during the middle of cycle (MOC) 3, and (2) the core consisted entirely of burned fuel at the MOC restart.
The first benchmark critical calculation was the MOC restart at hot, full-power (HFP) critical conditions. The other two benchmark critical calculations were the beginning-of-cycle (BOC) startup at both hot, zero-power (HZP) and HFP critical conditions. These latter calculations were used to check for consistency in the calculated results for different burnups and downtimes. The k_eff results were in the range of 1.00014 to 1.00259 with a standard deviation of less than 0.001.
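A quick consistency check on the quoted k_eff range: the implied reactivity bias follows directly from rho = (k_eff - 1)/k_eff. A sketch:

```python
def reactivity_pcm(k_eff):
    """Reactivity rho = (k-1)/k, expressed in pcm (1 pcm = 1e-5)."""
    return (k_eff - 1.0) / k_eff * 1e5

# The quoted benchmark range for the Sequoyah criticals:
print(round(reactivity_pcm(1.00014)))  # 14 pcm
print(round(reactivity_pcm(1.00259)))  # 258 pcm
```

Both values sit well within the few-hundred-pcm band typically accepted for spent-fuel critical benchmarks, consistent with the report's conclusion.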
Evanescent field characteristics of eccentric core optical fiber for distributed sensing.
Liu, Jianxia; Yuan, Libo
2014-03-01
Fundamental core-mode cutoff and the evanescent field are considered for an eccentric core optical fiber (ECOF). A method has been proposed to calculate the core-mode cutoff by solving the eigenvalue equations of an ECOF. Using conformal mapping, the asymmetric geometrical structure can be transformed into a simple, easily solved axisymmetric optical fiber with three layers. The variation of the fundamental core-mode cutoff frequency (Vc) is also calculated for different eccentric distances, wavelengths, core radii, and coating refractive indices. The fractional power of the evanescent field for the ECOF is likewise calculated as a function of eccentric distance and coating refractive index. These calculations are necessary to design the structural parameters of an ECOF for long-distance, single-mode distributed evanescent field absorption sensors.
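For orientation, the cutoff quantity being tracked is the normalized frequency V; for a conventional step-index fiber V = (2*pi*a/lambda)*sqrt(n_core^2 - n_clad^2), with single-mode operation below V ~ 2.405. A sketch with illustrative (not ECOF-specific) parameters:

```python
from math import pi, sqrt

def v_number(core_radius_m, wavelength_m, n_core, n_clad):
    """Normalized frequency of a step-index fiber."""
    na = sqrt(n_core**2 - n_clad**2)  # numerical aperture
    return 2.0 * pi * core_radius_m / wavelength_m * na

# Illustrative telecom-like parameters: 4.1 um core radius at 1550 nm
V = v_number(4.1e-6, 1550e-9, 1.450, 1.444)
print(round(V, 2), V < 2.405)  # 2.19 True -> single-mode at 1550 nm
```

The paper's contribution is that for an eccentric core this simple formula no longer applies directly; the conformal mapping restores an equivalent axisymmetric three-layer problem in which an effective Vc can again be computed.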
NASA Technical Reports Server (NTRS)
Rogge, Matthew D. (Inventor); Moore, Jason P. (Inventor)
2014-01-01
Shape of a multi-core optical fiber is determined by positioning the fiber in an arbitrary initial shape and measuring strain over the fiber's length using strain sensors. A three-coordinate p-vector is defined for each core as a function of the distance of the corresponding cores from a center point of the fiber and a bending angle of the cores. The method includes calculating, via a controller, an applied strain value of the fiber using the p-vector and the measured strain for each core, and calculating strain due to bending as a function of the measured and the applied strain values. Additionally, an apparent local curvature vector is defined for each core as a function of the calculated strain due to bending. Curvature and bend direction are calculated using the apparent local curvature vector, and fiber shape is determined via the controller using the calculated curvature and bend direction.
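The bending-strain relation underlying such multi-core shape sensing is eps_i = eps_axial + kappa*r*cos(theta_i - phi) for a core at radius r and azimuth theta_i, where kappa is the local curvature and phi the bend direction. With three cores at 120-degree spacing the inversion has a closed form. A sketch in our own notation, not the patented method itself:

```python
from math import cos, sin, atan2, hypot, radians

def curvature_from_strains(strains, core_angles_rad, r):
    """Recover (curvature, bend direction, axial strain) from 120-deg-spaced cores."""
    n = len(strains)
    eps_axial = sum(strains) / n  # common-mode strain averages out the bending term
    kx = 2.0 / (n * r) * sum(e * cos(t) for e, t in zip(strains, core_angles_rad))
    ky = 2.0 / (n * r) * sum(e * sin(t) for e, t in zip(strains, core_angles_rad))
    return hypot(kx, ky), atan2(ky, kx), eps_axial

# Synthetic check: kappa = 2 /m, bend direction 0.5 rad, cores 50 um off-axis
r, kappa, phi, eps0 = 50e-6, 2.0, 0.5, 1e-4
angles = [radians(a) for a in (0.0, 120.0, 240.0)]
eps = [eps0 + kappa * r * cos(t - phi) for t in angles]
k, p, e0 = curvature_from_strains(eps, angles, r)
print(round(k, 6), round(p, 6))  # recovers 2.0 and 0.5
```

Integrating the recovered curvature and bend direction along the fiber then yields the shape, which is the step the patent's controller performs.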
Core radial electric field and transport in Wendelstein 7-X plasmas
NASA Astrophysics Data System (ADS)
Pablant, N. A.; Langenberg, A.; Alonso, A.; Beidler, C. D.; Bitter, M.; Bozhenkov, S.; Burhenn, R.; Beurskens, M.; Delgado-Aparicio, L.; Dinklage, A.; Fuchert, G.; Gates, D.; Geiger, J.; Hill, K. W.; Höfel, U.; Hirsch, M.; Knauer, J.; Krämer-Flecken, A.; Landreman, M.; Lazerson, S.; Maaßberg, H.; Marchuk, O.; Massidda, S.; Neilson, G. H.; Pasch, E.; Satake, S.; Svennson, J.; Traverso, P.; Turkin, Y.; Valson, P.; Velasco, J. L.; Weir, G.; Windisch, T.; Wolf, R. C.; Yokoyama, M.; Zhang, D.; W7-X Team
2018-02-01
The results from the investigation of neoclassical core transport and the role of the radial electric field profile (Er) in the first operational phase of the Wendelstein 7-X (W7-X) stellarator are presented. In stellarator plasmas, the details of the Er profile are expected to have a strong effect on both the particle and heat fluxes. Investigation of the radial electric field is important in understanding neoclassical transport and in validation of neoclassical calculations. The radial electric field is closely related to the perpendicular plasma flow (u⊥) through the force balance equation. This allows the radial electric field to be inferred from measurements of the perpendicular flow velocity, which can be measured using the x-ray imaging crystal spectrometer and correlation reflectometry diagnostics. Large changes in the perpendicular rotation, on the order of Δu⊥ ~ 5 km/s (ΔEr ~ 12 kV/m), have been observed within a set of experiments where the heating power was stepped down from 2 MW to 0.6 MW. These experiments are examined in detail to explore the relationship between the heating power, temperature, and density profiles and the radial electric field. Finally, the inferred Er profiles are compared to initial neoclassical calculations based on measured plasma profiles. The results from several neoclassical codes, sfincs, fortec-3d, and dkes, are compared both with each other and with the measurements. These comparisons show good agreement, giving confidence in the applicability of the neoclassical calculations to the W7-X configuration.
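The quoted numbers are mutually consistent through the lowest-order radial force balance, Er ~ u⊥ x B: at the W7-X field of roughly 2.5 T (our assumed value; pressure-gradient terms are neglected in this sketch), a 5 km/s change in perpendicular flow maps to about 12 kV/m:

```python
def delta_Er(delta_u_perp_m_s, B_T):
    """Lowest-order radial force balance: dEr ~ du_perp * B (gradient terms dropped)."""
    return delta_u_perp_m_s * B_T

# 5 km/s rotation change in a ~2.5 T field:
print(delta_Er(5e3, 2.5) / 1e3, "kV/m")  # 12.5 kV/m, matching the quoted ~12 kV/m
```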
NASA Technical Reports Server (NTRS)
Norment, H. G.
1985-01-01
Subsonic, external flow about nonlifting bodies, lifting bodies or combinations of lifting and nonlifting bodies is calculated by a modified version of the Hess lifting code. Trajectory calculations can be performed for any atmospheric conditions and for all water drop sizes, from the smallest cloud droplet to large raindrops. Experimental water drop drag relations are used in the water drop equations of motion and effects of gravity settling are included. Inlet flow can be accommodated, and high Mach number compressibility effects are corrected for approximately. Seven codes are described: (1) a code used to debug and plot body surface description data; (2) a code that processes the body surface data to yield the potential flow field; (3) a code that computes flow velocities at arrays of points in space; (4) a code that computes water drop trajectories from an array of points in space; (5) a code that computes water drop trajectories and fluxes to arbitrary target points; (6) a code that computes water drop trajectories tangent to the body; and (7) a code that produces stereo pair plots which include both the body and trajectories. Accuracy of the calculations is discussed, and trajectory calculation results are compared with prior calculations and with experimental data.
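The role of the experimental water drop drag relations can be illustrated with a terminal-settling balance: gravity against drag with a Reynolds-number-dependent drag coefficient. The sketch below uses the Schiller-Naumann correlation as a stand-in for the experimental relations actually used in the codes:

```python
from math import pi

def terminal_velocity(d_m, rho_air=1.225, mu_air=1.81e-5, rho_water=1000.0, g=9.81):
    """Fixed-point iteration on m*g = 0.5 * rho_air * Cd(Re) * A * v^2."""
    m = rho_water * pi * d_m**3 / 6.0     # drop mass
    area = pi * d_m**2 / 4.0              # frontal area
    v = 1.0
    for _ in range(100):
        re = rho_air * v * d_m / mu_air
        cd = 24.0 / re * (1.0 + 0.15 * re**0.687)  # Schiller-Naumann drag law
        v = (2.0 * m * g / (rho_air * cd * area)) ** 0.5
    return v

print(round(terminal_velocity(1e-3), 2), "m/s")  # roughly 3.8-3.9 m/s for a 1 mm drop
```

The same force balance, with the relative velocity of drop and local airflow replacing the settling velocity, is what the trajectory codes integrate along each path.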
NASA Astrophysics Data System (ADS)
Serpa-Imbett, C. M.; Marín-Alfonso, J.; Gómez-Santamaría, C.; Betancur-Agudelo, L.; Amaya-Fernández, F.
2013-12-01
Space division multiplexing in multicore fibers is one of the most promising technologies for supporting the transmissions of next-generation peta-to-exaflop-scale supercomputers and mega data centers, owing to the cost and space savings of the new optical fibers with multiple cores. Additionally, multicore fibers allow photonic signal processing in optical communication systems by taking advantage of mode coupling phenomena. In this work, we have numerically simulated an optical MIMO-OFDM (multiple-input multiple-output orthogonal frequency division multiplexing) system using Alamouti coding, transmitted through a twin-core fiber with low coupling. Furthermore, an optical OFDM signal is transmitted through one core of a singlemode fiber using pilot-aided channel estimation. We compare the transmission performance in the twin-core fiber and in the singlemode fiber using numerical results for the bit-error rate, considering linear propagation and Gaussian noise in the optical fiber link. We carry out an optical fiber transmission of OFDM frames using 8 PSK and 16 QAM, with bit rates of 130 Gb/s and 170 Gb/s, respectively. We obtain a penalty of around 4 dB for the 8 PSK transmissions after 100 km of linear fiber-optic propagation, for both the singlemode and the twin-core fiber, and a penalty of around 6 dB for the 16 QAM transmissions under the same conditions. The transmission in a two-core fiber using Alamouti-coded OFDM-MIMO exhibits better performance, offering a good alternative for mitigating fiber impairments and allowing Alamouti coding to be extended to spatially multiplexed multichannel systems in multicore fibers.
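The Alamouti scheme at the heart of this transmission sends two symbols over two cores in two time slots and decodes them with a simple orthogonal combiner. A noiseless single-subcarrier sketch (our simplification; the simulated system applies this per OFDM subcarrier):

```python
# Alamouti 2x1 space-time block code over two fiber cores, one subcarrier.
s1, s2 = (1 + 1j) / 2**0.5, (1 - 1j) / 2**0.5   # two unit-energy PSK symbols
h1, h2 = 0.8 + 0.3j, 0.5 - 0.6j                 # flat complex channel gains per core

# Slot 1: core A sends s1, core B sends s2.  Slot 2: A sends -s2*, B sends s1*.
r1 = h1 * s1 + h2 * s2
r2 = -h1 * s2.conjugate() + h2 * s1.conjugate()

# Orthogonal combining recovers each symbol scaled by the total channel gain.
gain = abs(h1)**2 + abs(h2)**2
s1_hat = (h1.conjugate() * r1 + h2 * r2.conjugate()) / gain
s2_hat = (h2.conjugate() * r1 - h1 * r2.conjugate()) / gain
print(abs(s1_hat - s1) < 1e-12, abs(s2_hat - s2) < 1e-12)  # True True
```

Because the combiner needs only the per-core channel estimates, the diversity gain comes essentially for free once the two cores are treated as the two transmit branches.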
A neutron spectrum unfolding computer code based on artificial neural networks
NASA Astrophysics Data System (ADS)
Ortiz-Rodríguez, J. M.; Reyes Alfaro, A.; Reyes Haro, A.; Cervantes Viramontes, J. M.; Vega-Carrillo, H. R.
2014-02-01
The Bonner Spheres Spectrometer consists of a thermal neutron sensor placed at the center of a number of moderating polyethylene spheres of different diameters. From the measured readings, information can be derived about the spectrum of the neutron field where the measurements were made. Disadvantages of the Bonner system are the weight associated with each sphere and the need to sequentially irradiate the spheres, requiring long exposure periods. Provided a well-established response matrix and adequate irradiation conditions, the most delicate part of neutron spectrometry is the unfolding process. The derivation of the spectral information is not simple because the unknown is not given directly as a result of the measurements. The drawbacks associated with traditional unfolding procedures have motivated the need for complementary approaches. Novel methods based on Artificial Intelligence, mainly Artificial Neural Networks, have been widely investigated. In this work, a neutron spectrum unfolding code based on neural-net technology is presented. The code, called NSDann (Neutron Spectrometry and Dosimetry with Artificial Neural networks), was designed with a graphical interface. Its core is an embedded neural network architecture previously optimized using the robust design of artificial neural networks methodology. The code is easy to use and offers a friendly, intuitive interface. It was designed for a Bonner Sphere System based on a 6LiI(Eu) neutron detector and a response matrix expressed in 60 energy bins taken from an International Atomic Energy Agency compilation. A key feature is that only seven count rates, measured with seven Bonner spheres, are required as input for unfolding the neutron spectrum; simultaneously the code calculates 15 dosimetric quantities as well as the total flux for radiation protection purposes.
This code generates a full report with all information of the unfolding in the HTML format. NSDann unfolding code is freely available, upon request to the authors.
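Structurally, the unfolding step maps the seven Bonner-sphere count rates to a 60-bin spectrum through a small feedforward network. The sketch below shows only that shape, with random untrained weights; the real NSDann weights come from the robust-design optimization described above:

```python
import random, math

def mlp_forward(counts7, w1, b1, w2, b2):
    """One hidden layer: 7 count rates -> hidden units -> 60 spectrum bins."""
    hidden = [math.tanh(sum(w * x for w, x in zip(row, counts7)) + b)
              for row, b in zip(w1, b1)]
    # Softplus on the output keeps unfolded fluence values nonnegative.
    return [math.log1p(math.exp(sum(w * h for w, h in zip(row, hidden)) + b))
            for row, b in zip(w2, b2)]

random.seed(0)
n_hidden = 10  # illustrative size, not the optimized NSDann architecture
w1 = [[random.gauss(0, 0.1) for _ in range(7)] for _ in range(n_hidden)]
b1 = [0.0] * n_hidden
w2 = [[random.gauss(0, 0.1) for _ in range(n_hidden)] for _ in range(60)]
b2 = [0.0] * 60
spectrum = mlp_forward([1.0] * 7, w1, b1, w2, b2)
print(len(spectrum), all(s >= 0 for s in spectrum))  # 60 True
```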
Computational Analysis of a Pylon-Chevron Core Nozzle Interaction
NASA Technical Reports Server (NTRS)
Thomas, Russell H.; Kinzie, Kevin W.; Pao, S. Paul
2001-01-01
In typical engine installations, the pylon of an engine creates a flow disturbance that interacts with the engine exhaust flow. This interaction of the pylon with the exhaust flow from a dual stream nozzle was studied computationally. The dual stream nozzle simulates an engine with a bypass ratio of five. A total of five configurations were simulated, all at the take-off operating point. All computations were performed using the structured PAB3D code, which solves the steady, compressible, Reynolds-averaged Navier-Stokes equations. These configurations included a core nozzle with eight chevron noise reduction devices built into the nozzle trailing edge. Baseline cases had no chevron devices and were run with a pylon and without a pylon. Cases with the chevron were also studied with and without the pylon. Another case was run with the chevron rotated relative to the pylon. The fan nozzle did not have chevron devices attached. Solutions showed that the effect of the pylon is to distort the round jet plume and to destroy the symmetrical lobed pattern created by the core chevrons. Several overall flow field quantities were calculated that might be used in extensions of this work to find flow field parameters that correlate with changes in noise.
From Large-scale to Protostellar Disk Fragmentation into Close Binary Stars
NASA Astrophysics Data System (ADS)
Sigalotti, Leonardo Di G.; Cruz, Fidel; Gabbasov, Ruslan; Klapp, Jaime; Ramírez-Velasquez, José
2018-04-01
Recent observations of young stellar systems with the Atacama Large Millimeter/submillimeter Array (ALMA) and the Karl G. Jansky Very Large Array are helping to cement the idea that close companion stars form via fragmentation of a gravitationally unstable disk around a protostar early in the star formation process. As the disk grows in mass, it eventually becomes gravitationally unstable and fragments, forming one or more new protostars in orbit with the first at mean separations of 100 au or even less. Here, we report direct numerical calculations down to scales as small as ∼0.1 au, using a consistent Smoothed Particle Hydrodynamics code, that show the large-scale fragmentation of a cloud core into two protostars accompanied by small-scale fragmentation of their circumstellar disks. Our results demonstrate the two dominant mechanisms of star formation, where the disk forming around a protostar (which in turn results from the large-scale fragmentation of the cloud core) undergoes eccentric (m = 1) fragmentation to produce a close binary. We generate two-dimensional emission maps and simulated ALMA 1.3 mm continuum images of the structure and fragmentation of the disks that can help explain the dynamical processes occurring within collapsing cloud cores.
Study of positron annihilation with core electrons at the clean and oxygen covered Ag(001) surface
NASA Astrophysics Data System (ADS)
Joglekar, P.; Shastry, K.; Olenga, A.; Fazleev, N. G.; Weiss, A. H.
2013-03-01
In this paper we present measurements of the energy spectrum of electrons emitted as a result of positron-annihilation-induced Auger electron emission (PAES) from clean and oxygen-covered Ag(100) surfaces, using a series of incident beam energies ranging from 20 eV down to 2 eV. A peak was observed at ~40 eV corresponding to the N23VV Auger transition, in agreement with previous PAES studies. Experimental results were investigated theoretically through calculations of positron states and annihilation probabilities of surface-trapped positrons with the relevant core electrons at the clean and oxygen-covered Ag(100) surface. An ab initio investigation of the stability and associated electronic properties of different adsorption phases of oxygen on Ag(100) has been performed on the basis of density functional theory using the DMol3 code. The computed positron binding energy, positron surface-state wave function, and annihilation probabilities of surface-trapped positrons with relevant core electrons demonstrate their sensitivity to oxygen coverage, elemental content, atomic structure of the topmost surface layers, and charge-transfer effects. Theoretical results are compared with experimental data. This work was supported in part by the National Science Foundation Grant # DMR-0907679.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bokhari, Ishtiaq H.
2004-12-15
The Pakistan Research Reactor-1 (PARR-1) was converted from highly enriched uranium (HEU) to low-enriched uranium (LEU) fuel in 1991. The reactor is running successfully, with an upgraded power level of 10 MW. To save money on the purchase of costly fresh LEU fuel elements, the use of less burnt HEU spent fuel elements along with the present LEU fuel elements is being considered. The proposal calls for the HEU fuel elements to be placed near the thermal column to gain the required excess reactivity. In the present study the safety analysis of a proposed mixed-fuel core has been carried out at a calculated steady-state power level of 9.8 MW. Standard computer codes and correlations were employed to compute various parameters. Initiating events in reactivity-induced accidents involve various modes of reactivity insertion, namely, start-up accident, accidental drop of a fuel element on the core, flooding of a beam tube with water, and removal of an in-pile experiment during reactor operation. For each of these transients, time histories of reactor power, energy released, temperature, and reactivity were determined.
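Transients like the start-up accident listed above are typically screened with point-reactor kinetics. A one-delayed-group sketch with illustrative thermal-reactor parameters (not the PARR-1 values, which the study's standard codes would supply):

```python
def point_kinetics(rho, t_end, dt=1e-5, beta=0.0065, lam=0.08, gen_time=1e-4):
    """One-delayed-group point kinetics, explicit Euler, step reactivity rho."""
    n = 1.0                                # relative power
    c = beta * n / (lam * gen_time)        # equilibrium precursor concentration
    for _ in range(int(t_end / dt)):
        dn = ((rho - beta) / gen_time * n + lam * c) * dt
        dc = (beta / gen_time * n - lam * c) * dt
        n += dn
        c += dc
    return n

# A +0.001 step (about 15 cents): prompt jump to ~beta/(beta - rho) = 1.18,
# followed by a slow rise on the delayed-neutron period.
print(round(point_kinetics(0.001, 1.0), 2))
```

A full transient analysis couples equations like these to thermal-hydraulic feedback, which is what produces the power, energy, and temperature histories reported in the study.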
Monte Carlo modelling of TRIGA research reactor
NASA Astrophysics Data System (ADS)
El Bakkari, B.; Nacir, B.; El Bardouni, T.; El Younoussi, C.; Merroun, O.; Htet, A.; Boulaich, Y.; Zoubair, M.; Boukhal, H.; Chakir, M.
2010-10-01
The Moroccan 2 MW TRIGA MARK II research reactor at Centre des Etudes Nucléaires de la Maâmora (CENM) achieved initial criticality on May 2, 2007. The reactor is designed to effectively implement the various fields of basic nuclear research, manpower training, and production of radioisotopes for use in agriculture, industry, and medicine. This study deals with the neutronic analysis of the 2-MW TRIGA MARK II research reactor at CENM and validation of the results by comparison with the experimental, operational, and available final safety analysis report (FSAR) values. The study was prepared in collaboration between the Laboratory of Radiation and Nuclear Systems (ERSN-LMR) of the Faculty of Sciences of Tetuan (Morocco) and CENM. The 3-D continuous energy Monte Carlo code MCNP (version 5) was used to develop a versatile and accurate full model of the TRIGA core. The model represents all components of the core in detail, with essentially no physical approximation. Continuous energy cross-section data from recent nuclear data evaluations (ENDF/B-VI.8, ENDF/B-VII.0, JEFF-3.1, and JENDL-3.3), as well as S(α,β) thermal neutron scattering functions distributed with the MCNP code, were used. The cross-section libraries were generated using the NJOY99 system updated with its most recent patch file "up259". The consistency and accuracy of both the Monte Carlo simulation and the neutron transport physics were established by benchmarking against the TRIGA experiments. Core excess reactivity, total and integral control rod worths, as well as power peaking factors were used in the validation process. Results of calculations are analysed and discussed.
Are Military and Medical Ethics Necessarily Incompatible? A Canadian Case Study.
Rochon, Christiane; Williams-Jones, Bryn
2016-12-01
Military physicians are often perceived to be in a position of 'dual loyalty' because they have responsibilities towards their patients but also towards their employer, the military institution. Further, they must subscribe to and are bound by two distinct codes of ethics (i.e., medical and military), each with its own set of values and duties, that could at first glance be considered very different or even incompatible. How, then, can military physicians reconcile these two codes of ethics and their distinct professional/institutional values, and assume their responsibilities towards both their patients and the military institution? To clarify this situation, and to show how such a reconciliation might be possible, we compared the history and content of two national professional codes of ethics: the Defence Ethics of the Canadian Armed Forces and the Code of Ethics of the Canadian Medical Association. Interestingly, even if the medical code is more focused on duties and responsibility while the military code is more focused on core values and is supported by a comprehensive ethical training program, they also have many elements in common. Moreover, both are based on the same core values of loyalty and integrity, and both are broad in scope but relatively flexible in application. While there are still important sources of tension between and limits within these two codes of ethics, there are fewer differences than may appear at first glance, because the core values and principles of military and medical ethics are not so different.
Sensitivity analysis of Monju using ERANOS with JENDL-4.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tamagno, P.; Van Rooijen, W. F. G.; Takeda, T.
2012-07-01
This paper deals with sensitivity analysis using JENDL-4.0 nuclear data applied to the Monju reactor. In 2010 the Japan Atomic Energy Agency (JAEA) released a new set of nuclear data: JENDL-4.0. This new evaluation is expected to contain improved data on actinides and covariance matrices. Covariance matrices are a key point in quantification of uncertainties due to basic nuclear data. For sensitivity analysis, the well-established ERANOS [1] code was chosen because of its integrated modules that allow users to perform a sensitivity analysis of complex reactor geometries. A JENDL-4.0 cross-section library is not available for ERANOS. Therefore a cross-section library had to be made from the original nuclear data set, available as ENDF formatted files. This is achieved by using the following codes: NJOY, CALENDF, MERGE and GECCO, in order to create a library for the ECCO cell code (part of ERANOS). To verify the accuracy of the new ECCO library, two benchmark experiments have been analyzed: the MZA and MZB cores of the MOZART program measured at the ZEBRA facility in the UK. These were chosen due to their similarity to the Monju core. Using the JENDL-4.0 ECCO library we have analyzed the criticality of Monju during the restart in 2010. We have obtained good agreement with the measured criticality. Perturbation calculations have been performed between JENDL-3.3 and JENDL-4.0 based models. The isotopes {sup 239}Pu, {sup 238}U, {sup 241}Am and {sup 241}Pu account for a major part of the observed differences. (authors)
Preliminary Analysis of SiC BWR Channel Box Performance under Normal Operation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wirth, Brian; Singh, Gyanender P.; Gorton, Jacob
SiC-SiC composites are being considered for light water reactor core components, including the BWR channel box and fuel rod cladding, to improve accident tolerance. In the extreme nuclear reactor environment, core components like the BWR channel box will be exposed to neutron damage and a corrosive environment. To ensure reliable and safe operation of a SiC channel box, it is important to assess its deformation behavior under in-reactor conditions, including the expected neutron flux and temperature distributions. In particular, this work has evaluated the effect of non-uniform dimensional changes caused by spatially varying neutron flux and temperatures on the deformation behavior of the channel box over the course of one cycle of irradiation. These analyses have been performed using the fuel performance modeling code BISON and the commercial finite element analysis code Abaqus, with fast flux and temperature boundary conditions calculated using the neutronics and thermal-hydraulics codes Serpent2 and COBRA-TF, respectively. The dependence of dimensions and thermophysical properties on fast flux and temperature has been incorporated into the material models. These initial results indicate significant bowing of the channel box, with a lateral displacement greater than 6.5 mm. The channel box bowing is time dependent, driven by the temperature dependence of the SiC irradiation-induced swelling and by the neutron flux/fluence gradients. The bowing gradually recovers over the course of the operating cycle as the swelling of the SiC-SiC material saturates. However, the bending caused by temperature gradients does not fully recover, and residual bending remains after the swelling saturates throughout the channel box.
Optimizing legacy molecular dynamics software with directive-based offload
Michael Brown, W.; Carrillo, Jan-Michael Y.; Gavhane, Nitin; ...
2015-05-14
The directive-based programming models are one solution for exploiting many-core coprocessors to increase simulation rates in molecular dynamics. They offer the potential to reduce code complexity with offload models that can selectively target computations to run on the CPU, the coprocessor, or both. In this paper, we describe modifications to the LAMMPS molecular dynamics code to enable concurrent calculations on a CPU and coprocessor. We also demonstrate that standard molecular dynamics algorithms can run efficiently on both the CPU and an x86-based coprocessor using the same subroutines. As a consequence, we demonstrate that code optimizations for the coprocessor also result in speedups on the CPU; in extreme cases, up to 4.7X. We provide results for LAMMPS benchmarks and for production molecular dynamics simulations using the Stampede hybrid supercomputer with both Intel (R) Xeon Phi (TM) coprocessors and NVIDIA GPUs. The optimizations presented have increased simulation rates by over 2X for organic molecules and over 7X for liquid crystals on Stampede. The optimizations are available as part of the "Intel package" supplied with LAMMPS. (C) 2015 Elsevier B.V. All rights reserved.
Initial Comparison of Direct and Legacy Modeling Approaches for Radial Core Expansion Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shemon, Emily R.
2016-10-10
Radial core expansion in sodium-cooled fast reactors provides an important reactivity feedback effect. As the reactor power increases due to normal start-up conditions or accident scenarios, the core and surrounding materials heat up, causing both grid plate expansion and bowing of the assembly ducts. When the core restraint system is designed correctly, the resulting structural deformations introduce negative reactivity, which decreases the reactor power. Historically, an indirect procedure has been used to estimate the reactivity feedback due to structural deformation, relying on perturbation theory and on coupling legacy physics codes with limited geometry capabilities. With advancements in modeling and simulation, radial core expansion phenomena can now be modeled directly, providing an assessment of the accuracy of the reactivity feedback coefficients generated by indirect legacy methods. Recently a new capability was added to the PROTEUS-SN unstructured geometry neutron transport solver to analyze deformed meshes quickly and directly. By supplying the deformed mesh in addition to the base configuration input files, PROTEUS-SN automatically processes material adjustments, including calculation of region densities to conserve mass, calculation of isotopic densities according to material models (for example, sodium density as a function of temperature), and subsequent re-homogenization of materials. To verify the new capability of directly simulating deformed meshes, PROTEUS-SN was used to compute reactivity feedback for a series of contrived yet representative deformed configurations of the Advanced Burner Test Reactor design. The indirect legacy procedure was also performed to generate reactivity feedback coefficients for the same deformed configurations. Interestingly, the legacy procedure consistently overestimated reactivity feedbacks by 35% compared to direct simulations by PROTEUS-SN.
This overestimation indicates that the legacy procedures are in fact not conservative, since they could be overestimating reactivity feedback effects that are closely tied to reactor safety. We conclude that there is indeed value in performing direct simulation of deformed meshes despite the increased computational expense. PROTEUS-SN is already part of the SHARP multi-physics toolkit, where both thermal-hydraulic and structural-mechanical feedback modeling can be applied, but this is the first comparison of direct simulation to legacy techniques for radial core expansion.
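The mass-conserving density adjustment described above can be sketched in a few lines (a simplified illustration of the idea, not PROTEUS-SN's actual implementation): when a mesh region deforms from volume V to V', isotopic number densities are rescaled so that the total number of atoms in the region is preserved.

```python
def rescale_densities(isotopic_densities, volume_old, volume_new):
    """Rescale isotopic number densities so N_i * V stays constant
    when a mesh region deforms from volume_old to volume_new."""
    factor = volume_old / volume_new
    return {iso: n * factor for iso, n in isotopic_densities.items()}

# Region expands by 1% due to thermal growth; densities drop accordingly.
densities = {"U235": 7.0e20, "U238": 2.0e22}  # atoms/cm^3 (illustrative values)
expanded = rescale_densities(densities, volume_old=1.00, volume_new=1.01)
print(expanded["U238"] < densities["U238"])  # densities decrease on expansion
```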
NASA Technical Reports Server (NTRS)
Schutz, Bob E.; Baker, Gregory A.
1997-01-01
The recovery of a high resolution geopotential from satellite gradiometer observations motivates the examination of high performance computational techniques. The primary subject matter specifically addresses the use of satellite gradiometer and GPS observations to form and invert the normal matrix associated with a large degree and order geopotential solution. Memory-resident and out-of-core parallel linear algebra techniques, along with data parallel batch algorithms, form the foundation of the least squares application structure. A secondary topic includes the adoption of object oriented programming techniques to enhance modularity and reusability of code. Applications implementing the parallel and object oriented methods successfully calculate the degree variance for a degree and order 110 geopotential solution on 32 processors of the Cray T3E. The memory-resident gradiometer application exhibits an overall performance of 5.4 Gflops, and the out-of-core linear solver exhibits an overall performance of 2.4 Gflops. The combination solution derived from a sun-synchronous gradiometer orbit produces average geoid height variances of 17 millimeters.
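The normal-matrix formation and inversion at the heart of such a least squares solution can be sketched at toy scale with NumPy (illustrative dimensions only; the real problem forms a dense symmetric normal matrix for thousands of geopotential coefficients and uses out-of-core parallel factorization):

```python
import numpy as np

# Form and solve the normal equations N x = A^T b for a small toy problem.
rng = np.random.default_rng(0)
m, n = 200, 10                    # observations x unknowns (tiny toy sizes)
A = rng.standard_normal((m, n))   # design matrix (observation partials)
x_true = np.arange(1.0, n + 1)    # "true" coefficients
b = A @ x_true                    # noise-free observations, for clarity

N = A.T @ A                       # normal matrix (n x n, dense, symmetric)
rhs = A.T @ b
x_hat = np.linalg.solve(N, rhs)   # production solvers use a Cholesky factorization

print(np.allclose(x_hat, x_true))  # → True
```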
RAMONA-3B application to Browns Ferry ATWS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Slovik, G.C.; Neymotin, L.Y.; Saha, P.
1985-01-01
The Anticipated Transient Without Scram (ATWS) is known to be a dominant accident sequence for possible core melt in a Boiling Water Reactor (BWR). A recent Probabilistic Risk Assessment (PRA) analysis for the Browns Ferry nuclear power plant indicates that ATWS is the second most dominant transient for core melt in a BWR/4 with Mark I containment, the most dominant sequence being the failure of the long-term decay heat removal function of the Residual Heat Removal (RHR) system. Of all the various ATWS scenarios, the Main Steam Isolation Valve (MSIV) closure ATWS sequence was chosen for the present analysis because of its relatively high frequency of occurrence and its challenge to the residual heat removal system and containment integrity. The objective of this paper is to discuss four MSIV closure ATWS calculations using the RAMONA-3B code. The paper is a summary of a report being prepared for the USNRC Severe Accident Sequence Analysis (SASA) program, which should be referred to for details. 10 refs., 20 figs., 3 tabs.
Core-core and core-valence correlation
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.
1988-01-01
The effect of (1s) core correlation on properties and energy separations was analyzed using full configuration-interaction (FCI) calculations. The Be ¹S–¹P, C ³P–⁵S, and CH⁺ ¹Σ⁺–¹Π separations were studied, along with the CH⁺ spectroscopic constants, dipole moment, and ¹Σ⁺–¹Π transition dipole moment. The results of the FCI calculations are compared to those obtained using approximate methods. In addition, the generation of atomic natural orbital (ANO) basis sets, as a method for contracting a primitive basis set for both valence and core correlation, is discussed. When both core-core and core-valence correlation are included in the calculation, no suitable truncated CI approach consistently reproduces the FCI, and contraction of the basis set is very difficult. If the (nearly constant) core-core correlation is eliminated, and only the core-valence correlation is included, CASSCF/MRCI approaches reproduce the FCI results and basis set contraction is significantly easier.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kontogeorgakos, D.; Derstine, K.; Wright, A.
2013-06-01
The purpose of the TREAT reactor is to generate large transient neutron pulses in test samples, simulating fuel assembly accident conditions without overheating the core. The power transients in the present HEU core are inherently self-limiting, such that the core prevents itself from overheating even in the event of a reactivity insertion accident. The objective of this study was to support the assessment of the feasibility of the TREAT core conversion based on the present reactor performance metrics and the technical specifications of the HEU core. The LEU fuel assembly studied had the same overall design, materials (UO2 particles finely dispersed in graphite), and impurities content as the HEU fuel assembly. The Monte Carlo N-Particle code (MCNP) and the point kinetics code TREKIN were used in the analyses.
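Point-kinetics codes such as TREKIN integrate the standard point-reactor equations. A minimal one-delayed-group sketch is below; the parameters and the explicit Euler integrator are purely illustrative, not TREAT values or TREKIN's method:

```python
# Point-reactor kinetics, one delayed-neutron group:
#   dn/dt = ((rho - beta)/Lambda) * n + lam * C
#   dC/dt = (beta/Lambda) * n - lam * C
beta, Lambda, lam = 0.007, 1e-4, 0.08  # illustrative kinetics parameters
rho = 0.002                            # step reactivity insertion (< beta)

n = 1.0                                # initial relative power
C = beta * n / (Lambda * lam)          # equilibrium precursor concentration
dt = 1e-5
for _ in range(int(1.0 / dt)):         # integrate 1 s with explicit Euler
    dn = ((rho - beta) / Lambda) * n + lam * C
    dC = (beta / Lambda) * n - lam * C
    n += dn * dt
    C += dC * dt

print(n > 1.0)  # power rises after a positive, sub-prompt-critical insertion
```

After the prompt jump (roughly beta/(beta - rho) = 1.4 here), the power grows slowly on the delayed-neutron timescale, which is the self-limiting behavior point-kinetics models are used to study.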
VENTURE/PC manual: A multidimensional multigroup neutron diffusion code system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shapiro, A.; Huria, H.C.; Cho, K.W.
1991-12-01
VENTURE/PC is a recompilation of part of the Oak Ridge BOLD VENTURE code system that operates on an IBM PC or compatible computer. Neutron diffusion theory solutions are obtained for multidimensional, multigroup problems. This manual contains information associated with operating the code system. The purpose of the various modules used in the code system, and the input for these modules, are discussed. The PC code structure is also given. Version 2 included several enhancements not given in the original version of the code. In particular, flux iterations can be done in core rather than by reading and writing to disk, for problems which allow sufficient memory for such in-core iterations. This speeds up the iteration process. Version 3 does not include any of the special processors used in the previous versions. These special processors utilized formatted input for various elements of the code system. All such input data is now entered through the Input Processor, which produces standard interface files for the various modules in the code system. In addition, a Standard Interface File Handbook is included in the documentation distributed with the code, to assist in developing the input for the Input Processor.
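As a toy illustration of the diffusion-theory problems such codes solve (a one-group, one-dimensional sketch with made-up constants, far simpler than VENTURE's multidimensional multigroup solver), the finite-difference form of -D φ'' + Σₐ φ = S with zero-flux boundaries can be solved directly:

```python
import numpy as np

# -D * d2phi/dx2 + sigma_a * phi = S on [0, L], with phi = 0 at both ends.
D, sigma_a, S, L, N = 1.0, 0.1, 1.0, 10.0, 100  # made-up one-group constants
h = L / (N + 1)                                  # mesh spacing, N interior points

# Tridiagonal finite-difference operator.
A = np.zeros((N, N))
for i in range(N):
    A[i, i] = 2 * D / h**2 + sigma_a
    if i > 0:
        A[i, i - 1] = -D / h**2
    if i < N - 1:
        A[i, i + 1] = -D / h**2

phi = np.linalg.solve(A, np.full(N, S))
print(round(float(phi[N // 2]), 3))  # centerline flux, below the S/sigma_a = 10 limit
```

The computed centerline flux agrees with the analytic solution φ = (S/Σₐ)(1 − cosh((x − L/2)/Ld)/cosh(L/2Ld)), Ld = √(D/Σₐ), to within discretization error.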
A Multiple Sphere T-Matrix Fortran Code for Use on Parallel Computer Clusters
NASA Technical Reports Server (NTRS)
Mackowski, D. W.; Mishchenko, M. I.
2011-01-01
A general-purpose Fortran-90 code for calculation of the electromagnetic scattering and absorption properties of multiple sphere clusters is described. The code can calculate the efficiency factors and scattering matrix elements of the cluster for either fixed or random orientation with respect to the incident beam and for plane wave or localized-approximation Gaussian incident fields. In addition, the code can calculate maps of the electric field both interior and exterior to the spheres. The code is written with message passing interface instructions to enable use on distributed memory compute clusters, and for such platforms the code can make feasible the calculation of absorption, scattering, and general EM characteristics of systems containing several thousand spheres.
NASA Astrophysics Data System (ADS)
Darmawan, R.
2018-01-01
The nuclear power industry has faced uncertainties since the unfortunate accident at the Fukushima Daiichi Nuclear Power Plant. The issue of nuclear power plant safety has become a major hindrance in the planning of nuclear power programs for newcomer countries. Thus, understanding the behaviour of reactor systems is very important to ensure continuous development and improvement of reactor safety. Throughout the development of nuclear reactor technology, investigation and analysis of reactor safety have gone through several phases. In the early days, analytical and experimental methods were employed. For the last four decades, 1D system-level codes were widely used. The continuous development of nuclear reactor technology has brought about more complex systems and processes of nuclear reactor operation, and more detailed, higher-dimensional simulation codes are needed to assess these new reactors. Recently, 2D and 3D approaches such as CFD have been explored. This paper discusses a comparative study of two different CFD modelling approaches to reactor core cooling behaviour.
Automatic Parallelization of Numerical Python Applications using the Global Arrays Toolkit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daily, Jeffrey A.; Lewis, Robert R.
2011-11-30
Global Arrays is a software system from Pacific Northwest National Laboratory that enables an efficient, portable, and parallel shared-memory programming interface to manipulate distributed dense arrays. The NumPy module is the de facto standard for numerical calculation in the Python programming language, a language whose use is growing rapidly in the scientific and engineering communities. NumPy provides a powerful N-dimensional array class as well as other scientific computing capabilities. However, like the majority of the core Python modules, NumPy is inherently serial. Using a combination of Global Arrays and NumPy, we have reimplemented NumPy as a distributed drop-in replacement called Global Arrays in NumPy (GAiN). Serial NumPy applications can become parallel, scalable GAiN applications with only minor source code changes. Scalability studies of several different GAiN applications are presented, showing the utility of developing serial NumPy codes which can later run on more capable clusters or supercomputers.
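The drop-in idea is that serial NumPy code keeps its shape and essentially only the import changes. The sketch below runs as plain NumPy; the commented-out module name for the parallel replacement is illustrative (consult the GAiN documentation for the actual import path):

```python
import numpy as np      # serial baseline; GAiN's goal is to swap this import
# import gain as np     # hypothetical parallel drop-in (module name assumed)

# The same array code is intended to run unchanged under either module:
a = np.arange(1_000_000, dtype=np.float64)
b = np.sqrt(a) + 2.0 * a
result = float(b.sum())
print(result > 0.0)  # identical numerics expected from the drop-in replacement
```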
GW100: Benchmarking G0W0 for Molecular Systems.
van Setten, Michiel J; Caruso, Fabio; Sharifzadeh, Sahar; Ren, Xinguo; Scheffler, Matthias; Liu, Fang; Lischner, Johannes; Lin, Lin; Deslippe, Jack R; Louie, Steven G; Yang, Chao; Weigend, Florian; Neaton, Jeffrey B; Evers, Ferdinand; Rinke, Patrick
2015-12-08
We present the GW100 set. GW100 is a benchmark set of the ionization potentials and electron affinities of 100 molecules computed with the GW method using three independent GW codes and different GW methodologies. The quasi-particle energies of the highest-occupied molecular orbitals (HOMO) and lowest-unoccupied molecular orbitals (LUMO) are calculated for the GW100 set at the G0W0@PBE level using the software packages TURBOMOLE, FHI-aims, and BerkeleyGW. The use of these three codes allows for a quantitative comparison of the type of basis set (plane wave or local orbital) and handling of unoccupied states, the treatment of core and valence electrons (all electron or pseudopotentials), the treatment of the frequency dependence of the self-energy (full frequency or more approximate plasmon-pole models), and the algorithm for solving the quasi-particle equation. Primary results include reference values for future benchmarks, best practices for convergence within a particular approach, and average error bars for the most common approximations.
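The quasi-particle equation these codes solve can be illustrated with a toy linearized G0W0 update, E_QP = ε + Z(ε)[Σ(ε) − v_xc], using made-up numbers. This is purely schematic arithmetic, not a GW implementation:

```python
# Schematic linearized quasi-particle correction for a single HOMO level.
eps = -7.0           # KS eigenvalue, eV (made up)
v_xc = -12.0         # xc potential expectation value, eV (made up)
sigma = -13.5        # self-energy evaluated at eps, eV (made up)
dsigma_deps = -0.2   # energy derivative of the self-energy at eps (made up)

Z = 1.0 / (1.0 - dsigma_deps)    # quasi-particle renormalization factor
E_qp = eps + Z * (sigma - v_xc)  # linearized quasi-particle energy

print(round(E_qp, 3))  # → -8.25
```

The full-frequency and plasmon-pole treatments benchmarked in GW100 differ in how Σ(E) and its frequency dependence are computed, not in this outer update.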
GPU-accelerated phase-field simulation of dendritic solidification in a binary alloy
NASA Astrophysics Data System (ADS)
Yamanaka, Akinori; Aoki, Takayuki; Ogawa, Satoi; Takaki, Tomohiro
2011-03-01
The phase-field simulation of dendritic solidification of a binary alloy has been accelerated by using a graphics processing unit (GPU). To perform the phase-field simulation of alloy solidification on a GPU, a program code was developed with the compute unified device architecture (CUDA). In this paper, the implementation technique of the phase-field model on a GPU is presented. We also evaluated the acceleration performance of the three-dimensional solidification simulation by using a single NVIDIA TESLA C1060 GPU and the developed program code. The results showed that the GPU calculation for a 576³ computational grid achieved a performance of 170 GFLOPS by utilizing the shared memory as a software-managed cache. Furthermore, the computation with the GPU was demonstrated to be 100 times faster than that with a single CPU core. From the obtained results, we confirmed the feasibility of realizing a real-time full three-dimensional phase-field simulation of microstructure evolution on a personal desktop computer.
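The kind of stencil update such a GPU kernel performs every time step can be sketched with a vectorized explicit diffusion step (a schematic of the update pattern only, in 2D and in NumPy rather than the actual phase-field model or CUDA code):

```python
import numpy as np

def diffusion_step(phi, d, dt, dx):
    """One explicit Euler step of dphi/dt = d * laplacian(phi) on a 2D grid
    (interior points updated; boundary values held fixed). On a GPU, each
    grid point maps to one thread executing this same stencil."""
    lap = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
           np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4 * phi) / dx**2
    out = phi + dt * d * lap
    # Re-impose the fixed boundary values.
    out[0, :], out[-1, :] = phi[0, :], phi[-1, :]
    out[:, 0], out[:, -1] = phi[:, 0], phi[:, -1]
    return out

phi = np.zeros((64, 64))
phi[32, 32] = 1.0                  # point disturbance in the field
for _ in range(100):               # dt*d/dx^2 = 0.2 <= 0.25: stable step
    phi = diffusion_step(phi, d=1.0, dt=0.2, dx=1.0)
print(phi.sum())  # total stays close to 1.0 while the peak spreads out
```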
Toward performance portability of the Albany finite element analysis code using the Kokkos library
DOE Office of Scientific and Technical Information (OSTI.GOV)
Demeshko, Irina; Watkins, Jerry; Tezaur, Irina K.
Performance portability on heterogeneous high-performance computing (HPC) systems is a major challenge faced today by code developers: parallel code needs to be executed correctly as well as with high performance on machines with different architectures, operating systems, and software libraries. The finite element method (FEM) is a popular and flexible method for discretizing partial differential equations arising in a wide variety of scientific, engineering, and industrial applications that require HPC. This paper presents some preliminary results pertaining to our development of a performance portable implementation of the FEM-based Albany code. Performance portability is achieved using the Kokkos library. We present performance results for the Aeras global atmosphere dynamical core module in Albany. Finally, numerical experiments show that our single code implementation gives reasonable performance across three multicore/many-core architectures: NVIDIA Graphics Processing Units (GPUs), Intel Xeon Phis, and multicore CPUs.
Investigations of the Application of CFD to Flow Expected in the Lower Plenum of the Prismatic VHTR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richard W. Johnson; Tara Gallaway; Donna P. Guillen
2006-09-01
The Generation IV (Gen IV) very high temperature reactor (VHTR) will be either a prismatic (block) or a pebble bed design. However, a prismatic VHTR reference design, based on the General Atomics Gas Turbine-Modular Helium Reactor (GT-MHR) [General Atomics, 1996], has been developed for preliminary analysis purposes [MacDonald, et al., 2003]. The numerical simulation studies reported herein are based on this reference design. In the lower plenum of the prismatic reference design, the flow will be introduced by dozens of turbulent jets from the core above. The jet flow will encounter rows of columns that support the core. The flow from the core will have to turn ninety degrees and flow toward the exit duct as it passes through the forest of support columns. Due to the radial variation of the power density in the core, the jets will be at various temperatures at the inlet to the lower plenum. This presents some concerns, including that local hot spots may occur in the lower plenum. These may have a deleterious effect on the materials present, as well as cause a temperature variation to be present as the flow enters the power conversion system machinery, which could cause problems with the operation of the machinery. In the past, systems analysis codes have been used to model flow in nuclear reactor systems. It is recognized, however, that such codes are not capable of modeling the local physics of the flow well enough to analyze local mixing and temperature variations. This has led to the determination that computational fluid dynamics (CFD) codes be used, which are generally regarded as capable of accurately simulating local flow physics. Accurate flow modeling involves determining appropriate modeling strategies needed to obtain accurate analyses.
These include determining the fineness of the grid needed, the required iterative convergence tolerance, the numerical discretization method to use, and the turbulence model and wall treatment to employ. It also involves validating the computer code and turbulence model against a series of separate and combined flow phenomena and selecting the data used for the validation. This report describes progress made to identify proper modeling strategies for simulating the lower plenum flow for the task entitled “CFD software validation of jets in crossflow,” which was designed to investigate the issues pertaining to the validation process. The flow phenomenon previously chosen for investigation is flow in a staggered tube bank, because preliminary simulations show it to be the location of the highest turbulence intensity in the lower plenum. Numerical simulations were previously obtained assuming that the flow is steady. Various turbulence models were employed, along with strategies to reduce numerical error, to allow appropriate comparisons of the results. It was determined that the sophisticated Reynolds stress model (RSM) provided the best results. It was later determined that the flow is unsteady, wherein circulating eddies grow behind each tube and ‘peel off’ alternately from its top and bottom. Additional calculations show that the mean velocity is well predicted when the flow is modeled as unsteady. The results for the mean velocity U are clearly superior for the unsteady computations; the unsteady computations for the turbulence stress are similar to those for the steady calculations, showing the same trends.
Experimental investigation and CFD analysis on cross flow in the core of PMR200
Lee, Jeong -Hun; Yoon, Su -Jong; Cho, Hyoung -Kyu; ...
2015-04-16
The Prismatic Modular Reactor (PMR) is one of the major Very High Temperature Reactor (VHTR) concepts; it consists of hexagonal prismatic fuel blocks and reflector blocks made of nuclear grade graphite. However, the shape of the graphite blocks can be changed by neutron damage during reactor operation, and the shape change can create gaps between the blocks, inducing bypass flow. In the VHTR core, two types of gaps can form: a vertical gap and a horizontal gap, called the bypass gap and the cross gap, respectively. The cross gap complicates the flow field in the reactor core by connecting the coolant channels to the bypass gap, and it can lead to a loss of effective coolant flow in the fuel blocks. Thus, a cross flow experimental facility was constructed to investigate the cross flow phenomena in the core of the VHTR, and a series of experiments were carried out under varying flow rates and gap sizes. The results of the experiments were compared with CFD (Computational Fluid Dynamics) analysis results in order to verify its prediction capability for the cross flow phenomena. Fairly good agreement was seen between experimental results and CFD predictions, and the local characteristics of the cross flow were discussed in detail. Based on the calculation results, the pressure loss coefficient across the cross gap was evaluated, which is necessary for the thermo-fluid analysis of the VHTR core using a lumped parameter code.
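A pressure loss coefficient of the kind evaluated here is conventionally defined against the local dynamic pressure, K = Δp / (½ρv²). A minimal sketch with made-up measurements (not values from the experiment):

```python
def loss_coefficient(dp, rho, velocity):
    """Form-loss coefficient K = dp / (0.5 * rho * v^2) across a gap."""
    return dp / (0.5 * rho * velocity**2)

# Made-up air-test numbers, for illustration only.
dp = 120.0   # Pa, measured pressure drop across the cross gap
rho = 1.18   # kg/m^3, air density
v = 10.0     # m/s, gap velocity

K = loss_coefficient(dp, rho, v)
print(round(K, 3))  # → 2.034
```

A lumped parameter code would then apply K (correlated against gap size and Reynolds number) to each gap junction in its flow network.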
XGC developments for a more efficient XGC-GENE code coupling
NASA Astrophysics Data System (ADS)
Dominski, Julien; Hager, Robert; Ku, Seung-Hoe; Chang, C. S.
2017-10-01
In the Exascale Computing Program, the High-Fidelity Whole Device Modeling project initially aims at delivering a tightly-coupled simulation of plasma neoclassical and turbulence dynamics from the core to the edge of the tokamak. To permit such simulations, the gyrokinetic codes GENE and XGC will be coupled together. Numerical efforts are being made to improve the agreement of the numerical schemes in the coupling region. One of the difficulties of coupling these codes is the incompatibility of their grids: GENE is a continuum grid-based code, while XGC is a Particle-In-Cell code using an unstructured triangular mesh. A field-aligned filter has therefore been implemented in XGC. Even though XGC originally had an approximately field-following mesh, this field-aligned filter permits a perturbation discretization closer to the one solved in the field-aligned code GENE. Additionally, new XGC gyro-averaging matrices are implemented on a velocity grid adapted to the plasma properties, thus ensuring the same accuracy from the core to the edge regions.
SCEC Earthquake System Science Using High Performance Computing
NASA Astrophysics Data System (ADS)
Maechling, P. J.; Jordan, T. H.; Archuleta, R.; Beroza, G.; Bielak, J.; Chen, P.; Cui, Y.; Day, S.; Deelman, E.; Graves, R. W.; Minster, J. B.; Olsen, K. B.
2008-12-01
The SCEC Community Modeling Environment (SCEC/CME) collaboration performs basic scientific research using high performance computing, with the goal of developing a predictive understanding of earthquake processes and seismic hazards in California. SCEC/CME research areas include dynamic rupture modeling, wave propagation modeling, probabilistic seismic hazard analysis (PSHA), and full 3D tomography. SCEC/CME computational capabilities are organized around the development and application of robust, reusable, well-validated simulation systems we call computational platforms. The SCEC earthquake system science research program includes a wide range of numerical modeling efforts, and we continue to extend our numerical modeling codes to include more realistic physics and to run at higher and higher resolution. During this year, the SCEC/USGS OpenSHA PSHA computational platform was used to calculate PSHA hazard curves and hazard maps using the new UCERF2.0 ERF and new 2008 attenuation relationships. Three SCEC/CME modeling groups ran 1Hz ShakeOut simulations using different codes and computer systems and carefully compared the results. The DynaShake Platform was used to calculate several dynamic rupture-based source descriptions equivalent in magnitude and final surface slip to the ShakeOut 1.2 kinematic source description. A SCEC/CME modeler produced 10Hz synthetic seismograms for the ShakeOut 1.2 scenario rupture by combining 1Hz deterministic simulation results with 10Hz stochastic seismograms. SCEC/CME modelers ran an ensemble of seven ShakeOut-D simulations to investigate the variability of ground motions produced by dynamic rupture-based source descriptions. The CyberShake Platform was used to calculate more than 15 new probabilistic seismic hazard analysis (PSHA) hazard curves using full 3D waveform modeling and the new UCERF2.0 ERF. The SCEC/CME group has also produced significant computer science results this year.
Large-scale SCEC/CME high performance codes were run on NSF TeraGrid sites, including simulations that used the full PSC Big Ben supercomputer (4096 cores) and simulations that ran on more than 10K cores at TACC Ranger. The SCEC/CME group used scientific workflow tools and grid computing to run more than 1.5 million jobs at NCSA for the CyberShake project. Visualizations produced by a SCEC/CME researcher from the 10Hz ShakeOut 1.2 scenario simulation data were used by the USGS in ShakeOut publications and public outreach efforts. OpenSHA was ported onto an NSF supercomputer and was used to produce very high resolution PSHA maps containing more than 1.6 million hazard curves.
Multi-Core Processor Memory Contention Benchmark Analysis Case Study
NASA Technical Reports Server (NTRS)
Simon, Tyler; McGalliard, James
2009-01-01
Multi-core processors dominate current mainframe, server, and high performance computing (HPC) systems. This paper provides synthetic kernel and natural benchmark results from an HPC system at the NASA Goddard Space Flight Center that illustrate the performance impacts of multi-core (dual- and quad-core) vs. single-core processor systems. Analysis of processor design, application source code, and synthetic and natural test results all indicate that multi-core processors can suffer from significant memory subsystem contention compared to similar single-core processors.
Development of a GPU Compatible Version of the Fast Radiation Code RRTMG
NASA Astrophysics Data System (ADS)
Iacono, M. J.; Mlawer, E. J.; Berthiaume, D.; Cady-Pereira, K. E.; Suarez, M.; Oreopoulos, L.; Lee, D.
2012-12-01
The absorption of solar radiation and emission/absorption of thermal radiation are crucial components of the physics that drive Earth's climate and weather. Therefore, accurate radiative transfer calculations are necessary for realistic climate and weather simulations. Efficient radiation codes have been developed for this purpose, but their accuracy requirements still necessitate that as much as 30% of the computational time of a GCM is spent computing radiative fluxes and heating rates. The overall computational expense constitutes a limitation on a GCM's predictive ability if it becomes an impediment to adding new physics to or increasing the spatial and/or vertical resolution of the model. The emergence of Graphics Processing Unit (GPU) technology, which will allow the parallel computation of multiple independent radiative calculations in a GCM, will lead to a fundamental change in the competition between accuracy and speed. Processing time previously consumed by radiative transfer will now be available for the modeling of other processes, such as physics parameterizations, without any sacrifice in the accuracy of the radiative transfer. Furthermore, fast radiation calculations can be performed much more frequently and will allow the modeling of radiative effects of rapid changes in the atmosphere. The fast radiation code RRTMG, developed at Atmospheric and Environmental Research (AER), is utilized operationally in many dynamical models throughout the world. We will present the results from the first stage of an effort to create a version of the RRTMG radiation code designed to run efficiently in a GPU environment. This effort will focus on the RRTMG implementation in GEOS-5. 
RRTMG has an internal pseudo-spectral vector of length of order 100 that, when combined with the much greater length of the global horizontal grid vector from which the radiation code is called in GEOS-5, makes RRTMG/GEOS-5 particularly suited to achieving a significant speed improvement through GPU technology. This large number of independent cases will allow us to take full advantage of the computational power of the latest GPUs, ensuring that all thread cores in the GPU remain active, a key criterion for obtaining significant speedup. The CUDA (Compute Unified Device Architecture) Fortran compiler developed by PGI and Nvidia will allow us to construct this parallel implementation on the GPU while remaining in the Fortran language. This implementation will scale very well across various CUDA-supported GPUs such as the recently released Fermi Nvidia cards. We will present the computational speed improvements of the GPU-compatible code relative to the standard CPU-based RRTMG with respect to a very large and diverse suite of atmospheric profiles. This suite will also be utilized to demonstrate the minimal impact of the code restructuring on the accuracy of radiation calculations. The GPU-compatible version of RRTMG will be directly applicable to future versions of GEOS-5, but it is also likely to provide significant associated benefits for other GCMs that employ RRTMG.
Confirmation of a realistic reactor model for BNCT dosimetry at the TRIGA Mainz
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ziegner, Markus, E-mail: Markus.Ziegner.fl@ait.ac.at; Schmitz, Tobias; Hampel, Gabriele
2014-11-01
Purpose: In order to build up a reliable dose monitoring system for boron neutron capture therapy (BNCT) applications at the TRIGA reactor in Mainz, a computer model for the entire reactor was established, simulating the radiation field by means of the Monte Carlo method. The impact of different source definition techniques was compared and the model was validated by experimental fluence and dose determinations. Methods: The depletion calculation code ORIGEN2 was used to compute the burn-up and relevant material composition of each burned fuel element from the day of first reactor operation to its current core. The material composition of the current core was used in an MCNP5 model of the initial core developed earlier. To perform calculations for the region outside the reactor core, the model was expanded to include the thermal column and compared with the previously established ATTILA model. Subsequently, the computational model was simplified in order to reduce the calculation time. Both simulation models were validated by experiments with different setups using alanine dosimetry and gold activation measurements with two different types of phantoms. Results: The MCNP5-simulated neutron spectrum and source strength are found to be in good agreement with the previous ATTILA model, whereas the photon production is much lower. Both MCNP5 simulation models predict all experimental dose values with an accuracy of about 5%. The simulations reveal that a Teflon environment favorably reduces the gamma dose component as compared to a polymethyl methacrylate phantom. Conclusions: A computer model for BNCT dosimetry was established, allowing the prediction of dosimetric quantities without further calibration and within a reasonable computation time for clinical applications. The good agreement between the MCNP5 simulations and experiments demonstrates that the ATTILA model overestimates the gamma dose contribution.
The detailed model can be used for the planning of structural modifications in the thermal column irradiation channel or the use of irradiation sites other than the thermal column, e.g., the beam tubes.
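The ORIGEN2 burn-up step described above amounts to integrating coupled nuclide balance equations, dN/dt = A N, over each operating interval. A minimal sketch of that matrix-exponential depletion step follows, using a hypothetical two-nuclide chain; real depletion calculations couple hundreds of nuclides and include neutron-induced reactions, so this is only the mathematical skeleton:

```python
import numpy as np

def deplete(n0, transition_matrix, t):
    """Advance nuclide inventories by N(t) = expm(A t) N(0).

    Uses an eigendecomposition, which is adequate for this small
    illustrative matrix; production depletion codes use chain
    solutions or scaled matrix exponentials.
    """
    w, v = np.linalg.eig(transition_matrix)
    return np.real(v @ np.diag(np.exp(w * t)) @ np.linalg.inv(v) @ n0)

# Hypothetical parent (decay constant 1.0) feeding a stable daughter.
A = np.array([[-1.0, 0.0],
              [ 1.0, 0.0]])
n = deplete(np.array([1.0, 0.0]), A, 2.0)
```

After two mean lifetimes the parent falls to exp(-2) of its initial amount while total atom count is conserved, as the transition matrix's zero column sums require.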
Deterministic Modeling of the High Temperature Test Reactor with DRAGON-HEXPEDITE
DOE Office of Scientific and Technical Information (OSTI.GOV)
J. Ortensi; M.A. Pope; R.M. Ferrer
2010-10-01
The Idaho National Laboratory (INL) is tasked with the development of reactor physics analysis capability for the Next Generation Nuclear Plant (NGNP) project. In order to examine the INL’s current prismatic reactor analysis tools, the project is conducting a benchmark exercise based on modeling the High Temperature Test Reactor (HTTR). This exercise entails the development of a model for the initial criticality, a 19-fuel-column thin annular core, and the fully loaded core critical condition with 30 fuel columns. Special emphasis is devoted to physical phenomena and artifacts in the HTTR that are similar to phenomena and artifacts in the NGNP base design. The DRAGON code is used in this study since it offers significant ease and versatility in modeling prismatic designs. DRAGON can generate transport solutions via Collision Probability (CP), Method of Characteristics (MOC), and Discrete Ordinates (Sn) methods. A fine-group cross-section library based on the SHEM 281 energy structure is used in the DRAGON calculations. The results from this study show reasonable agreement in the calculation of the core multiplication factor with the MC methods, but a consistent bias of 2–3% with the experimental values is obtained. This systematic error has also been observed in other HTTR benchmark efforts and is well documented in the literature. The ENDF/B-VII graphite and U-235 cross sections appear to be the main source of the error. The isothermal temperature coefficients calculated with the fully loaded core configuration agree well with other benchmark participants but are 40% higher than the experimental values. This discrepancy with the measurement partially stems from the fact that during the experiments the control rods were adjusted to maintain criticality, whereas in the model, the rod positions were fixed. In addition, this work includes a brief study of a cross-section generation approach that seeks to decouple the domain in order to account for neighbor effects.
This spectral interpenetration is a dominant effect in annular HTR physics. This analysis methodology should be further explored in order to reduce the error that is systematically propagated in the traditional generation of cross sections.
Henneberg, M.F.; Strause, J.L.
2002-01-01
This report presents the instructions required to use the Scour Critical Bridge Indicator (SCBI) Code and Scour Assessment Rating (SAR) calculator developed by the Pennsylvania Department of Transportation (PennDOT) and the U.S. Geological Survey to identify Pennsylvania bridges with excessive scour conditions or a high potential for scour. Use of the calculator will enable PennDOT bridge personnel to quickly calculate these scour indices if site conditions change, new bridges are constructed, or new information needs to be included. Both indices are calculated for a bridge simultaneously because they must be used together to be interpreted accurately. The SCBI Code and SAR calculator program is run by a World Wide Web browser from a remote computer. The user can 1) add additional scenarios for bridges in the SCBI Code and SAR calculator database or 2) enter data for new bridges and run the program to calculate the SCBI Code and calculate the SAR. The calculator program allows the user to print the results and to save multiple scenarios for a bridge.
WOMBAT: A Scalable and High-performance Astrophysical Magnetohydrodynamics Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendygral, P. J.; Radcliffe, N.; Kandalla, K.
2017-02-01
We present a new code for astrophysical magnetohydrodynamics specifically designed and optimized for high performance and scaling on modern and future supercomputers. We describe a novel hybrid OpenMP/MPI programming model that emerged from a collaboration between Cray, Inc. and the University of Minnesota. This design utilizes MPI-RMA optimized for thread scaling, which allows the code to run extremely efficiently at very high thread counts ideal for the latest generation of multi-core and many-core architectures. Such performance characteristics are needed in the era of “exascale” computing. We describe and demonstrate our high-performance design in detail with the intent that it may be used as a model for other, future astrophysical codes intended for applications demanding exceptional performance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
D'Azevedo, Ed F; Nintcheu Fata, Sylvain
2012-01-01
A collocation boundary element code for solving the three-dimensional Laplace equation, publicly available from http://www.intetec.org, has been adapted to run on an Nvidia Tesla general-purpose graphics processing unit (GPU). Global matrix assembly and LU factorization of the resulting dense matrix were performed on the GPU. Out-of-core techniques were used to solve problems larger than the available GPU memory. The code achieved over eight times speedup in matrix assembly and about 56 Gflops/sec in the LU factorization using only 512 Mbytes of GPU memory. Details of the GPU implementation and comparisons with the standard sequential algorithm are included to illustrate the performance of the GPU code.
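Out-of-core dense LU factorization, as used in the work above, processes the matrix in panels so that only a block of columns needs to reside in fast (GPU) memory at a time. The sketch below shows the right-looking blocked update pattern behind that approach; it omits pivoting and the host-to-device streaming, so it is a structural illustration, not the cited implementation:

```python
import numpy as np

def blocked_lu(a, b):
    """Right-looking blocked LU without pivoting (in-place, L unit lower).

    Each iteration factors a b-wide diagonal block, solves the two
    panels touching it, and applies one trailing update; an out-of-core
    solver streams exactly these panels through device memory.
    """
    a = a.copy()
    n = a.shape[0]
    for k in range(0, n, b):
        e = min(k + b, n)
        for j in range(k, e):                       # unblocked LU of the diagonal block
            a[j+1:e, j] /= a[j, j]
            a[j+1:e, j+1:e] -= np.outer(a[j+1:e, j], a[j, j+1:e])
        l11 = np.tril(a[k:e, k:e], -1) + np.eye(e - k)
        a[k:e, e:] = np.linalg.solve(l11, a[k:e, e:])                      # U12 panel
        a[e:, k:e] = a[e:, k:e] @ np.linalg.inv(np.triu(a[k:e, k:e]))      # L21 panel
        a[e:, e:] -= a[e:, k:e] @ a[k:e, e:]                               # trailing update
    return a

rng = np.random.default_rng(1)
mat = rng.random((6, 6)) + 6.0 * np.eye(6)   # diagonally dominant: safe without pivoting
lu = blocked_lu(mat, b=2)
```

The trailing update is a large matrix-matrix multiply, which is why the bulk of the flops in such a factorization runs at near-peak GPU rates.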
NASA Technical Reports Server (NTRS)
Coakley, T. J.; Hsieh, T.
1985-01-01
Numerical simulations of steady and unsteady transonic diffuser flows using two different computer codes are discussed and compared with experimental data. The codes solve the Reynolds-averaged, compressible, Navier-Stokes equations using various turbulence models. One of the codes has been applied extensively to diffuser flows and uses the hybrid method of MacCormack. This code is relatively inefficient numerically. The second code, which was developed more recently, is fully implicit and is relatively efficient numerically. Simulations of steady flows using the implicit code are shown to be in good agreement with simulations using the hybrid code. Both simulations are in good agreement with experimental results. Simulations of unsteady flows using the two codes are in good qualitative agreement with each other, although the quantitative agreement is not as good as in the steady flow cases. The implicit code is shown to be eight times faster than the hybrid code for unsteady flow calculations and up to 32 times faster for steady flow calculations. Results of calculations using alternative turbulence models are also discussed.
Users manual for the NASA Lewis three-dimensional ice accretion code (LEWICE 3D)
NASA Technical Reports Server (NTRS)
Bidwell, Colin S.; Potapczuk, Mark G.
1993-01-01
A description of the methodology, the algorithms, and the input and output data along with an example case for the NASA Lewis 3D ice accretion code (LEWICE3D) has been produced. The manual has been designed to help the user understand the capabilities, the methodologies, and the use of the code. The LEWICE3D code is a conglomeration of several codes for the purpose of calculating ice shapes on three-dimensional external surfaces. A three-dimensional external flow panel code is incorporated which has the capability of calculating flow about arbitrary 3D lifting and nonlifting bodies with external flow. A fourth order Runge-Kutta integration scheme is used to calculate arbitrary streamlines. An Adams type predictor-corrector trajectory integration scheme has been included to calculate arbitrary trajectories. Schemes for calculating tangent trajectories, collection efficiencies, and concentration factors for arbitrary regions of interest for single droplets or droplet distributions have been incorporated. A LEWICE 2D based heat transfer algorithm can be used to calculate ice accretions along surface streamlines. A geometry modification scheme is incorporated which calculates the new geometry based on the ice accretions generated at each section of interest. The three-dimensional ice accretion calculation is based on the LEWICE 2D calculation. Both codes calculate the flow, pressure distribution, and collection efficiency distribution along surface streamlines. For both codes the heat transfer calculation is divided into two regions, one above the stagnation point and one below the stagnation point, and solved for each region assuming a flat plate with pressure distribution. Water is assumed to follow the surface streamlines, hence starting at the stagnation zone any water that is not frozen out at a control volume is assumed to run back into the next control volume. 
After the amount of frozen water at each control volume has been calculated the geometry is modified by adding the ice at each control volume in the surface normal direction.
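The fourth-order Runge-Kutta streamline integration cited in this abstract can be sketched compactly. The velocity field below is a hypothetical stand-in for the panel-code flow solution (solid-body rotation, chosen because its exact streamlines are circles and so the scheme's accuracy can be checked):

```python
import numpy as np

def rk4_streamline(velocity, x0, dt, steps):
    """Trace a streamline x' = v(x) with classical RK4.

    velocity: callable returning the local flow vector at a point.
    Returns the array of points along the integrated path.
    """
    x = np.asarray(x0, dtype=float)
    path = [x.copy()]
    for _ in range(steps):
        k1 = velocity(x)
        k2 = velocity(x + 0.5 * dt * k1)
        k3 = velocity(x + 0.5 * dt * k2)
        k4 = velocity(x + dt * k3)
        x = x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        path.append(x.copy())
    return np.array(path)

# Solid-body rotation about the origin: streamlines are unit circles.
circle = rk4_streamline(lambda p: np.array([-p[1], p[0]]), [1.0, 0.0], 0.01, 100)
```

After 100 steps of size 0.01 the point has swept one radian around the circle while staying on it to fourth-order accuracy, the behavior that makes RK4 a common choice for streamline and droplet-trajectory tracing.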
SWB: A modified Thornthwaite-Mather Soil-Water-Balance code for estimating groundwater recharge
Westenbroek, S.M.; Kelson, V.A.; Dripps, W.R.; Hunt, R.J.; Bradbury, K.R.
2010-01-01
A Soil-Water-Balance (SWB) computer code has been developed to calculate spatial and temporal variations in groundwater recharge. The SWB model calculates recharge by use of commonly available geographic information system (GIS) data layers in combination with tabular climatological data. The code is based on a modified Thornthwaite-Mather soil-water-balance approach, with components of the soil-water balance calculated at a daily timestep. Recharge calculations are made on a rectangular grid of computational elements that may be easily imported into a regional groundwater-flow model. Recharge estimates calculated by the code may be output as daily, monthly, or annual values.
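The daily bookkeeping at the heart of a Thornthwaite-Mather style balance can be sketched in a few lines: wet or dry the soil profile with precipitation minus evapotranspiration, and route any surplus above the soil's holding capacity to recharge. This toy version omits the interception, runoff, and snowmelt terms of the actual SWB code:

```python
def daily_recharge(precip, pet, capacity, soil0=0.0):
    """Minimal daily soil-water-balance loop (one grid cell).

    precip, pet: daily precipitation and potential evapotranspiration (mm).
    capacity: soil water holding capacity (mm).
    Returns the daily recharge series (mm): water in excess of
    capacity percolates below the root zone.
    """
    soil, recharge = soil0, []
    for p, e in zip(precip, pet):
        soil = max(soil + p - e, 0.0)    # moisture cannot go negative
        r = max(soil - capacity, 0.0)    # surplus becomes recharge
        soil -= r
        recharge.append(r)
    return recharge

r = daily_recharge([10, 0, 30], [2, 5, 2], capacity=20)
```

In this example recharge occurs only on the third day, when a large rainfall pushes stored moisture past the 20 mm capacity; the same calculation repeated on every cell of a rectangular grid yields the recharge arrays the abstract describes.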
Supercomputer simulations of structure formation in the Universe
NASA Astrophysics Data System (ADS)
Ishiyama, Tomoaki
2017-06-01
We describe the implementation and performance of our massively parallel MPI/OpenMP hybrid TreePM code for large-scale cosmological N-body simulations. For domain decomposition, a recursive multi-section algorithm is used, and the sizes of the domains are set automatically so that the total calculation time is the same for all processes. We developed a highly tuned gravity kernel for short-range forces and a novel communication algorithm for long-range forces. For a two-trillion-particle benchmark simulation, the average performance on the full system of the K computer (82,944 nodes, 663,552 cores in total) is 5.8 Pflops, which corresponds to 55% of the peak speed.
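The load-balancing idea behind a recursive multi-section decomposition can be illustrated with its simplest special case, recursive bisection: cut the particle set at the median coordinate along alternating axes so that every resulting domain carries an equal particle count (a proxy for equal work). The leaf size and the equal-count criterion here are illustrative simplifications of the cost-based balancing in the actual code:

```python
import numpy as np

def multisection(points, depth=0, leaf=64):
    """Recursively split particle positions into equal-count domains.

    points: array of shape (n_particles, n_dims).
    Each cut halves the particle count along a cycling axis, so all
    leaf domains end up with (nearly) the same number of particles.
    """
    if len(points) <= leaf:
        return [points]
    axis = depth % points.shape[1]             # cycle x, y, z, ...
    order = np.argsort(points[:, axis])
    half = len(points) // 2
    lo, hi = points[order[:half]], points[order[half:]]
    return multisection(lo, depth + 1, leaf) + multisection(hi, depth + 1, leaf)

domains = multisection(np.random.default_rng(0).random((512, 3)))
```

Starting from 512 particles with a leaf size of 64 yields exactly eight equally populated domains; in the production code, the cut positions are instead tuned so that measured calculation time, not particle count, is equalized across processes.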
SANTA BARBARA CLUSTER COMPARISON TEST WITH DISPH
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saitoh, Takayuki R.; Makino, Junichiro, E-mail: saitoh@elsi.jp
2016-06-01
The Santa Barbara cluster comparison project revealed that there is a systematic difference between the entropy profiles of clusters of galaxies obtained by Eulerian mesh and Lagrangian smoothed particle hydrodynamics (SPH) codes: mesh codes gave a core with a constant entropy, whereas SPH codes did not. One possible reason for this difference is that mesh codes are not Galilean invariant. Another possible reason is a problem with the SPH method itself, which might give too much “protection” to cold clumps because of the unphysical surface tension induced at contact discontinuities. In this paper, we apply the density-independent formulation of SPH (DISPH), which can handle contact discontinuities accurately, to simulations of a cluster of galaxies and compare the results with those of the standard SPH. We obtained the entropy core when we adopted DISPH. The size of the core is, however, significantly smaller than those obtained with mesh simulations and is comparable to those obtained with quasi-Lagrangian schemes such as “moving mesh” and “mesh free” schemes. We conclude that both the standard SPH without artificial conductivity and Eulerian mesh codes have serious problems even with such an idealized simulation, while DISPH, SPH with artificial conductivity, and quasi-Lagrangian schemes have sufficient capability to deal with it.
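The contact-discontinuity problem discussed above originates in the standard SPH density estimate, a kernel-weighted sum over neighbor masses that is then fed into the equation of state; DISPH instead smooths a pressure-like quantity. A one-dimensional toy of the standard estimate is sketched below, using a Gaussian kernel for brevity (production SPH codes use compactly supported kernels):

```python
import numpy as np

def sph_density(x, m, h):
    """Standard 1D SPH density estimate: rho_i = sum_j m_j W(x_i - x_j, h).

    x: particle positions, m: particle masses, h: smoothing length.
    Uses a normalized Gaussian kernel; this smoothed density is what
    becomes problematic at sharp contact discontinuities.
    """
    dx = x[:, None] - x[None, :]
    w = np.exp(-(dx / h) ** 2) / (h * np.sqrt(np.pi))   # normalized 1D Gaussian
    return w @ m

# Uniform lattice: spacing 0.1, mass 0.1, so the interior density should be ~1.
rho = sph_density(np.linspace(0.0, 1.0, 11), np.full(11, 0.1), h=0.2)
```

On this uniform lattice the estimate recovers the true density away from the edges; at a density jump, the same smoothing blurs the transition over a few kernel widths, which is where the spurious surface tension arises.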
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walter, Matthew; Yin, Shengjun; Stevens, Gary
2012-01-01
In past years, the authors have undertaken various studies of nozzles in both boiling water reactors (BWRs) and pressurized water reactors (PWRs) located in the reactor pressure vessel (RPV) adjacent to the core beltline region. Those studies described stress and fracture mechanics analyses performed to assess various RPV nozzle geometries, which were selected based on their proximity to the core beltline region, i.e., those nozzle configurations that are located close enough to the core region such that they may receive sufficient fluence prior to end-of-life (EOL) to require evaluation of embrittlement as part of the RPV analyses associated with pressure-temperature (P-T) limits. In this paper, additional stress and fracture analyses are summarized that were performed for additional PWR nozzles with the following objectives: To expand the population of PWR nozzle configurations evaluated, which was limited in the previous work to just two nozzles (one inlet and one outlet nozzle). To model and understand differences in stress results obtained for an internal pressure load case using a two-dimensional (2-D) axisymmetric finite element model (FEM) vs. a three-dimensional (3-D) FEM for these PWR nozzles. In particular, the ovalization (stress concentration) effect of two intersecting cylinders, which is typical of RPV nozzle configurations, was investigated. To investigate the applicability of previously recommended linear elastic fracture mechanics (LEFM) hand solutions for calculating the Mode I stress intensity factor for a postulated nozzle corner crack under pressure loading for these PWR nozzles. These analyses were performed to further expand earlier work completed to support potential revision and refinement of Title 10 of the U.S.
Code of Federal Regulations (CFR), Part 50, Appendix G, Fracture Toughness Requirements, and are intended to supplement similar evaluations of nozzles presented at the 2008, 2009, and 2011 Pressure Vessels and Piping (PVP) Conferences. This work is also relevant to the ongoing efforts of the American Society of Mechanical Engineers (ASME) Boiler and Pressure Vessel (B&PV) Code, Section XI, Working Group on Operating Plant Criteria (WGOPC) to incorporate nozzle fracture mechanics solutions into a revision to ASME B&PV Code, Section XI, Nonmandatory Appendix G.
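LEFM hand solutions of the kind evaluated above generally take the form K_I = F * sigma * sqrt(pi * a), with the membrane stress taken from thin-shell theory and all of the geometric complexity folded into the factor F. The sketch below shows only that generic form; the geometry factor of 2.5, the function name, and the numerical inputs are placeholders, not one of the recommended solutions from the studies cited:

```python
import math

def corner_crack_ki(pressure, radius, thickness, crack_depth, f=2.5):
    """Generic Mode I LEFM hand estimate, K_I = F * sigma * sqrt(pi * a).

    sigma is the thin-shell hoop stress p*R/t; F (placeholder value)
    is the geometry factor that nozzle-corner studies calibrate.
    Units are consistent but arbitrary (e.g., MPa and mm).
    """
    sigma = pressure * radius / thickness   # thin-shell hoop stress
    return f * sigma * math.sqrt(math.pi * crack_depth)

ki = corner_crack_ki(pressure=15.5, radius=2200.0, thickness=220.0, crack_depth=50.0)
```

The sqrt(a) dependence means doubling the postulated crack depth raises K_I by a factor of sqrt(2), which is why the postulated flaw size is a central input in P-T limit evaluations.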
Relativistic semiempirical-core-potential calculations in Ca+, Sr+, and Ba+ ions on Lagrange meshes
NASA Astrophysics Data System (ADS)
Filippin, Livio; Schiffmann, Sacha; Dohet-Eraly, Jérémy; Baye, Daniel; Godefroid, Michel
2018-01-01
Relativistic atomic structure calculations are carried out in alkaline-earth-metal ions using a semiempirical-core-potential approach. The systems are partitioned into frozen-core electrons and an active valence electron. The core orbitals are defined by a Dirac-Hartree-Fock calculation using the grasp2k package. The valence electron is described by a Dirac-like Hamiltonian involving a core-polarization potential to simulate the core-valence electron correlation. The associated equation is solved with the Lagrange-mesh method, which is an approximate variational approach having the form of a mesh calculation because of the use of a Gauss quadrature to calculate matrix elements. Properties involving the low-lying metastable
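The defining convenience of the Lagrange-mesh method mentioned above is that, with the Gauss quadrature consistent with the mesh, potential matrix elements become diagonal: V_ij is approximately delta_ij * V(x_i). The sketch below uses Gauss-Legendre nodes on [-1, 1] as the mesh for simplicity; the cited calculations use meshes adapted to the Coulomb problem, so this is a structural illustration only:

```python
import numpy as np

def potential_matrix(v, n):
    """Diagonal potential matrix on a Lagrange (Gauss-Legendre) mesh.

    With Lagrange basis functions and their associated Gauss
    quadrature, <f_i|V|f_j> reduces to delta_ij * v(x_i): no
    integrals need be evaluated for the potential term.
    """
    x, w = np.polynomial.legendre.leggauss(n)   # mesh points and quadrature weights
    return np.diag(v(x)), x, w

V, x, w = potential_matrix(lambda t: t**2, 8)
```

The same quadrature evaluates smooth integrals essentially exactly (here, the integral of x^2 over [-1, 1] is 2/3), which is why the method retains variational-quality accuracy despite its mesh-like simplicity.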
ERIC Educational Resources Information Center
Yamamoto, Kentaro; He, Qiwei; Shin, Hyo Jeong; von Davier, Mattias
2017-01-01
Approximately a third of the Programme for International Student Assessment (PISA) items in the core domains (math, reading, and science) are constructed-response items and require human coding (scoring). This process is time-consuming, expensive, and prone to error as often (a) humans code inconsistently, and (b) coding reliability in…