Sample records for model energy codes

  1. Impacts of Model Building Energy Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Athalye, Rahul A.; Sivaraman, Deepak; Elliott, Douglas B.

    The U.S. Department of Energy (DOE) Building Energy Codes Program (BECP) periodically evaluates national and state-level impacts associated with energy codes in residential and commercial buildings. Pacific Northwest National Laboratory (PNNL), funded by DOE, conducted an assessment of the prospective impacts of national model building energy codes from 2010 through 2040. A previous PNNL study evaluated the impact of the Building Energy Codes Program; this study looked more broadly at overall code impacts. This report describes the methodology used for the assessment and presents the impacts in terms of energy savings, consumer cost savings, and reduced CO2 emissions at the state level and at aggregated levels. This analysis does not represent all potential savings from energy codes in the U.S. because it excludes several states which have codes which are fundamentally different from the national model energy codes or which do not have state-wide codes. Energy codes follow a three-phase cycle that starts with the development of a new model code, proceeds with the adoption of the new code by states and local jurisdictions, and finishes when buildings comply with the code. The development of new model code editions creates the potential for increased energy savings. After a new model code is adopted, potential savings are realized in the field when new buildings (or additions and alterations) are constructed to comply with the new code. Delayed adoption of a model code and incomplete compliance with the code’s requirements erode potential savings. The contributions of all three phases are crucial to the overall impact of codes, and are considered in this assessment.

  2. Potential Job Creation in Rhode Island as a Result of Adopting New Residential Building Energy Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scott, Michael J.; Niemeyer, Jackie M.

    Are there advantages to states that adopt the most recent model building energy codes other than saving energy? For example, can the construction activity and energy savings associated with code-compliant housing units become significant sources of job creation for states if new building energy codes are adopted to cover residential construction? The U.S. Department of Energy (DOE) Building Energy Codes Program (BECP) asked Pacific Northwest National Laboratory (PNNL) to research and ascertain whether jobs would be created in individual states based on their adoption of model building energy codes. Each state in the country is dealing with high levels of unemployment, so job creation has become a top priority. Many programs have been created to combat unemployment with various degrees of failure and success. At the same time, many states still have not yet adopted the most current versions of the International Energy Conservation Code (IECC) model building energy code, when doing so could be a very effective tool in creating jobs to assist states in recovering from this economic downturn.

  3. Potential Job Creation in Minnesota as a Result of Adopting New Residential Building Energy Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scott, Michael J.; Niemeyer, Jackie M.

    Are there advantages to states that adopt the most recent model building energy codes other than saving energy? For example, can the construction activity and energy savings associated with code-compliant housing units become significant sources of job creation for states if new building energy codes are adopted to cover residential construction? The U.S. Department of Energy (DOE) Building Energy Codes Program (BECP) asked Pacific Northwest National Laboratory (PNNL) to research and ascertain whether jobs would be created in individual states based on their adoption of model building energy codes. Each state in the country is dealing with high levels of unemployment, so job creation has become a top priority. Many programs have been created to combat unemployment with various degrees of failure and success. At the same time, many states still have not yet adopted the most current versions of the International Energy Conservation Code (IECC) model building energy code, when doing so could be a very effective tool in creating jobs to assist states in recovering from this economic downturn.

  4. Potential Job Creation in Tennessee as a Result of Adopting New Residential Building Energy Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scott, Michael J.; Niemeyer, Jackie M.

    Are there advantages to states that adopt the most recent model building energy codes other than saving energy? For example, can the construction activity and energy savings associated with code-compliant housing units become significant sources of job creation for states if new building energy codes are adopted to cover residential construction? The U.S. Department of Energy (DOE) Building Energy Codes Program (BECP) asked Pacific Northwest National Laboratory (PNNL) to research and ascertain whether jobs would be created in individual states based on their adoption of model building energy codes. Each state in the country is dealing with high levels of unemployment, so job creation has become a top priority. Many programs have been created to combat unemployment with various degrees of failure and success. At the same time, many states still have not yet adopted the most current versions of the International Energy Conservation Code (IECC) model building energy code, when doing so could be a very effective tool in creating jobs to assist states in recovering from this economic downturn.

  5. Potential Job Creation in Nevada as a Result of Adopting New Residential Building Energy Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scott, Michael J.; Niemeyer, Jackie M.

    Are there advantages to states that adopt the most recent model building energy codes other than saving energy? For example, can the construction activity and energy savings associated with code-compliant housing units become significant sources of job creation for states if new building energy codes are adopted to cover residential construction? The U.S. Department of Energy (DOE) Building Energy Codes Program (BECP) asked Pacific Northwest National Laboratory (PNNL) to research and ascertain whether jobs would be created in individual states based on their adoption of model building energy codes. Each state in the country is dealing with high levels of unemployment, so job creation has become a top priority. Many programs have been created to combat unemployment with various degrees of failure and success. At the same time, many states still have not yet adopted the most current versions of the International Energy Conservation Code (IECC) model building energy code, when doing so could be a very effective tool in creating jobs to assist states in recovering from this economic downturn.

  6. Vibration Response Models of a Stiffened Aluminum Plate Excited by a Shaker

    NASA Technical Reports Server (NTRS)

    Cabell, Randolph H.

    2008-01-01

    Numerical models of structural-acoustic interactions are of interest to aircraft designers and the space program. This paper describes a comparison between two energy finite element codes, a statistical energy analysis code, a structural finite element code, and the experimentally measured response of a stiffened aluminum plate excited by a shaker. Different methods for modeling the stiffeners and the power input from the shaker are discussed. The results show that the energy codes (energy finite element and statistical energy analysis) accurately predicted the measured mean square velocity of the plate. In addition, predictions from an energy finite element code had the best spatial correlation with measured velocities. However, predictions from a considerably simpler, single subsystem, statistical energy analysis model also correlated well with the spatial velocity distribution. The results highlight a need for further work to understand the relationship between modeling assumptions and the prediction results.
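
    As background for the statistical energy analysis (SEA) result above, the sketch below illustrates the single-subsystem SEA power balance it rests on; the plate mass, loss factor, and input power used here are hypothetical placeholders, not values from the study.

      # Minimal single-subsystem SEA sketch (illustrative values only).
      # Steady-state power balance: P_in = omega * eta * E, with stored energy
      # E = M * <v^2>, so the band-averaged mean square velocity is
      # <v^2> = P_in / (omega * eta * M).
      import math

      def sea_mean_square_velocity(p_in_watts, freq_hz, loss_factor, mass_kg):
          """Band-averaged mean square velocity of a single SEA subsystem."""
          omega = 2.0 * math.pi * freq_hz              # band center frequency, rad/s
          energy = p_in_watts / (omega * loss_factor)  # stored vibrational energy, J
          return energy / mass_kg                      # <v^2> in (m/s)^2

      # Hypothetical stiffened-plate numbers, for illustration only.
      print(sea_mean_square_velocity(p_in_watts=0.1, freq_hz=1000.0,
                                     loss_factor=0.01, mass_kg=2.5))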

  7. Energy Savings Analysis of the Proposed NYStretch-Energy Code 2018

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Bing; Zhang, Jian; Chen, Yan

    This study was conducted by the Pacific Northwest National Laboratory (PNNL) in support of the stretch energy code development led by the New York State Energy Research and Development Authority (NYSERDA). In 2017 NYSERDA developed its 2016 Stretch Code Supplement to the 2016 New York State Energy Conservation Construction Code (hereinafter referred to as “NYStretch-Energy”). NYStretch-Energy is intended as a model energy code for statewide voluntary adoption that anticipates other code advancements culminating in the goal of a statewide Net Zero Energy Code by 2028. Since then, NYSERDA has continued to develop the NYStretch-Energy Code 2018 edition. To support the effort, PNNL conducted energy simulation analysis to quantify the energy savings of proposed commercial provisions of the NYStretch-Energy Code (2018) in New York. The focus of this project is the 20% improvement over existing commercial model energy codes. A key requirement of the proposed stretch code is that it be ‘adoptable’ as an energy code, meaning that it must align with current code scope and limitations, and primarily impact building components that are currently regulated by local building departments. It is largely limited to prescriptive measures, which are what most building departments and design projects are most familiar with. This report describes a set of energy-efficiency measures (EEMs) that demonstrate 20% energy savings over ANSI/ASHRAE/IES Standard 90.1-2013 (ASHRAE 2013) across a broad range of commercial building types and all three climate zones in New York. In collaboration with the New Buildings Institute, the EEMs were developed from national model codes and standards, high-performance building codes and standards, regional energy codes, and measures being proposed as part of the on-going code development process. PNNL analyzed these measures using whole building energy models for selected prototype commercial buildings and multifamily buildings representing buildings in New York. Section 2 of this report describes the analysis methodology, including the building types and construction area weights update for this analysis, the baseline, and the method to conduct the energy saving analysis. Section 3 provides detailed specifications of the EEMs and bundles. Section 4 summarizes the results of individual EEMs and EEM bundles by building type, energy end-use and climate zone. Appendix A documents detailed descriptions of the selected prototype buildings. Appendix B provides energy end-use breakdown results by building type for both the baseline code and stretch code in all climate zones.

  8. WEC3: Wave Energy Converter Code Comparison Project: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Combourieu, Adrien; Lawson, Michael; Babarit, Aurelien

    This paper describes the recently launched Wave Energy Converter Code Comparison (WEC3) project and presents preliminary results from this effort. The objectives of WEC3 are to verify and validate numerical modelling tools that have been developed specifically to simulate wave energy conversion devices and to inform the upcoming IEA OES Annex VI Ocean Energy Modelling Verification and Validation project. WEC3 is divided into two phases. Phase 1 consists of code-to-code verification and Phase 2 entails code-to-experiment validation. WEC3 focuses on mid-fidelity codes that simulate WECs using time-domain multibody dynamics methods to model device motions and hydrodynamic coefficients to model hydrodynamic forces. Consequently, high-fidelity numerical modelling tools, such as Navier-Stokes computational fluid dynamics simulation, and simple frequency domain modelling tools were not included in the WEC3 project.

  9. A long-term, integrated impact assessment of alternative building energy code scenarios in China

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Sha; Eom, Jiyong; Evans, Meredydd

    2014-04-01

    China is the second largest building energy user in the world, ranking first and third in residential and commercial energy consumption. Beginning in the early 1980s, the Chinese government has developed a variety of building energy codes to improve building energy efficiency and reduce total energy demand. This paper studies the impact of building energy codes on energy use and CO2 emissions by using a detailed building energy model that represents four distinct climate zones each with three building types, nested in a long-term integrated assessment framework GCAM. An advanced building stock module, coupled with the building energy model, is developed to reflect the characteristics of future building stock and its interaction with the development of building energy codes in China. This paper also evaluates the impacts of building codes on building energy demand in the presence of economy-wide carbon policy. We find that building energy codes would reduce Chinese building energy use by 13% - 22% depending on building code scenarios, with a similar effect preserved even under the carbon policy. The impact of building energy codes shows regional and sectoral variation due to regionally differentiated responses of heating and cooling services to shell efficiency improvement.

  10. WEC-SIM Phase 1 Validation Testing -- Numerical Modeling of Experiments: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruehl, Kelley; Michelen, Carlos; Bosma, Bret

    2016-08-01

    The Wave Energy Converter Simulator (WEC-Sim) is an open-source code jointly developed by Sandia National Laboratories and the National Renewable Energy Laboratory. It is used to model wave energy converters subjected to operational and extreme waves. In order for the WEC-Sim code to be beneficial to the wave energy community, code verification and physical model validation is necessary. This paper describes numerical modeling of the wave tank testing for the 1:33-scale experimental testing of the floating oscillating surge wave energy converter. The comparison between WEC-Sim and the Phase 1 experimental data set serves as code validation. This paper is a follow-up to the WEC-Sim paper on experimental testing, and describes the WEC-Sim numerical simulations for the floating oscillating surge wave energy converter.

  11. The Marriage of Residential Energy Codes and Rating Systems: Conflict Resolution or Just Conflict?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor, Zachary T.; Mendon, Vrushali V.

    2014-08-21

    After three decades of coexistence at a distance, model residential energy codes and residential energy rating systems have come together in the 2015 International Energy Conservation Code. At the October 2013 International Code Council’s Public Comment Hearing, a new compliance path based on an Energy Rating Index was added to the IECC. Although not specifically named in the code, RESNET’s HERS rating system is the likely candidate Index for most jurisdictions. While HERS has been a mainstay in various beyond-code programs for many years, its direct incorporation into the most popular model energy code raises questions about the equivalence of a HERS-based compliance path and the traditional IECC performance compliance path, especially because the two approaches use different efficiency metrics, are governed by different simulation rules, and have different scopes with regard to energy impacting house features. A detailed simulation analysis of more than 15,000 house configurations reveals a very large range of HERS Index values that achieve equivalence with the IECC’s performance path. This paper summarizes the results of that analysis and evaluates those results against the specific Energy Rating Index values required by the 2015 IECC. Based on the home characteristics most likely to result in disparities between HERS-based compliance and performance path compliance, potential impacts on the compliance process, state and local adoption of the new code, energy efficiency in the next generation of homes subject to this new code, and future evolution of model code formats are discussed.

  12. EnergyPlus Run Time Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, Tianzhen; Buhl, Fred; Haves, Philip

    2008-09-20

    EnergyPlus is a new-generation building performance simulation program offering many new modeling capabilities and more accurate performance calculations that integrate building components in sub-hourly time steps. However, EnergyPlus runs much slower than the current generation of simulation programs, which has become a major barrier to its widespread adoption by the industry. This paper analyzed EnergyPlus run time from comprehensive perspectives to identify key issues and challenges in speeding up EnergyPlus: studying the historical trends of EnergyPlus run time based on the advancement of computers and code improvements to EnergyPlus, comparing EnergyPlus with DOE-2 to understand and quantify the run time differences, identifying key simulation settings and model features that have significant impacts on run time, and performing code profiling to identify which EnergyPlus subroutines consume the most run time. The paper provides recommendations for improving EnergyPlus run time from the modeler's perspective, along with guidance on adequate computing platforms. Suggestions for software code and architecture changes to improve EnergyPlus run time, based on the code profiling results, are also discussed.

  13. Improvements to the nuclear model code GNASH for cross section calculations at higher energies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Young, P.G.; Chadwick, M.B.

    1994-05-01

    The nuclear model code GNASH, which in the past has been used predominantly for incident particle energies below 20 MeV, has been modified extensively for calculations at higher energies. The model extensions and improvements are described in this paper, and their significance is illustrated by comparing calculations with experimental data for incident energies up to 160 MeV.

  14. The random energy model in a magnetic field and joint source channel coding

    NASA Astrophysics Data System (ADS)

    Merhav, Neri

    2008-09-01

    We demonstrate that there is an intimate relationship between the magnetic properties of Derrida’s random energy model (REM) of spin glasses and the problem of joint source-channel coding in Information Theory. In particular, typical patterns of erroneously decoded messages in the coding problem have “magnetization” properties that are analogous to those of the REM in certain phases, where the non-uniformity of the distribution of the source in the coding problem plays the role of an external magnetic field applied to the REM. We also relate the ensemble performance (random coding exponents) of joint source-channel codes to the free energy of the REM in its different phases.
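
    As background for the analogy described above, the zero-field REM (the paper itself treats the model with an external magnetic field) has a standard free energy per spin; with 2^N levels drawn independently from a Gaussian of variance NJ^2/2, the usual textbook result is

      f(\beta) =
      \begin{cases}
        -\dfrac{\beta J^2}{4} - \dfrac{\ln 2}{\beta}, & \beta \le \beta_c = \dfrac{2\sqrt{\ln 2}}{J},\\[6pt]
        -J\sqrt{\ln 2}, & \beta > \beta_c,
      \end{cases}

    with the condensed (frozen) phase below T_c = 1/\beta_c playing the role that the paper maps onto the coding problem; normalization conventions for the energy variance differ between references.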

  15. Preserving Envelope Efficiency in Performance Based Code Compliance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thornton, Brian A.; Sullivan, Greg P.; Rosenberg, Michael I.

    2015-06-20

    The City of Seattle 2012 Energy Code (Seattle 2014), one of the most progressive in the country, is under revision for its 2015 edition. Additionally, city personnel participate in the development of the next generation of the Washington State Energy Code and the International Energy Code. Seattle has pledged carbon neutrality by 2050 including buildings, transportation and other sectors. The United States Department of Energy (DOE), through Pacific Northwest National Laboratory (PNNL), provided technical assistance to Seattle in order to understand the implications of one potential direction for its code development, limiting trade-offs of long-lived building envelope components less stringent than the prescriptive code envelope requirements by using better-than-code but shorter-lived lighting and heating, ventilation, and air-conditioning (HVAC) components through the total building performance modeled energy compliance path. Weaker building envelopes can permanently limit building energy performance even as lighting and HVAC components are upgraded over time, because retrofitting the envelope is less likely and more expensive. Weaker building envelopes may also increase the required size, cost and complexity of HVAC systems and may adversely affect occupant comfort. This report presents the results of this technical assistance. The use of modeled energy code compliance to trade off envelope components with shorter-lived building components is not unique to Seattle, and the lessons and possible solutions described in this report have implications for other jurisdictions and energy codes.

  16. Pulsed Inductive Thruster (PIT): Modeling and Validation Using the MACH2 Code

    NASA Technical Reports Server (NTRS)

    Schneider, Steven (Technical Monitor); Mikellides, Pavlos G.

    2003-01-01

    Numerical modeling of the Pulsed Inductive Thruster exercising the magnetohydrodynamics code MACH2 aims to provide bilateral validation of the thruster's measured performance and the code's capability of capturing the pertinent physical processes. Computed impulse values for helium and argon propellants demonstrate excellent correlation to the experimental data for a range of energy levels and propellant-mass values. The effects of the vacuum tank wall and mass-injection scheme were investigated and shown to produce trivial changes in the overall performance. An idealized model for these energy levels and propellants deduces that the energy expended to the internal energy modes and plasma dissipation processes is independent of the propellant type, mass, and energy level.

  17. Towards self-correcting quantum memories

    NASA Astrophysics Data System (ADS)

    Michnicki, Kamil

    This thesis presents a model of self-correcting quantum memories where quantum states are encoded using topological stabilizer codes and error correction is done using local measurements and local dynamics. Quantum noise poses a practical barrier to developing quantum memories. This thesis explores two types of models for suppressing noise. One model suppresses thermalizing noise energetically by engineering a Hamiltonian with a high energy barrier between code states. Thermalizing dynamics are modeled phenomenologically as a Markovian quantum master equation with only local generators. The second model suppresses stochastic noise with a cellular automaton that performs error correction using syndrome measurements and a local update rule. Several ways of visualizing and thinking about stabilizer codes are presented in order to design ones that have a high energy barrier: the non-local Ising model, the quasi-particle graph and the theory of welded stabilizer codes. I develop the theory of welded stabilizer codes and use it to construct a code with the highest known energy barrier in 3-d for spin Hamiltonians: the welded solid code. Although the welded solid code is not fully self-correcting, it has some self-correcting properties. It has an increased memory lifetime for an increased system size up to a temperature dependent maximum. One strategy for increasing the energy barrier is by mediating an interaction with an external system. I prove a no-go theorem for a class of Hamiltonians where the interaction terms are local, of bounded strength and commute with the stabilizer group. Under these conditions the energy barrier can only be increased by a multiplicative constant. I develop a cellular automaton to do error correction on a state encoded using the toric code. The numerical evidence indicates that while there is no threshold, the model can extend the memory lifetime significantly. While of less theoretical importance, this could be practical for real implementations of quantum memories. Numerical evidence also suggests that the cellular automaton could function as a decoder with a soft threshold.
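
    The toric code mentioned above is the standard testbed for such decoders; as a minimal illustration (not code from the thesis), the sketch below checks the defining property that every X-type star stabilizer commutes with every Z-type plaquette stabilizer on a small periodic lattice, since their supports always overlap on an even number of edges.

      # Toric-code sketch (illustrative): qubits live on the edges of an L x L
      # periodic square lattice; X- and Z-type Pauli operators commute iff their
      # supports share an even number of qubits.
      import itertools

      L = 4

      def h_edge(x, y):        # horizontal edge leaving vertex (x, y) to the right
          return ('h', x % L, y % L)

      def v_edge(x, y):        # vertical edge leaving vertex (x, y) upward
          return ('v', x % L, y % L)

      def star(x, y):          # X on the 4 edges touching vertex (x, y)
          return {h_edge(x, y), h_edge(x - 1, y), v_edge(x, y), v_edge(x, y - 1)}

      def plaquette(x, y):     # Z on the 4 edges bounding the face whose corner is (x, y)
          return {h_edge(x, y), h_edge(x, y + 1), v_edge(x, y), v_edge(x + 1, y)}

      assert all(len(star(a, b) & plaquette(c, d)) % 2 == 0
                 for a, b, c, d in itertools.product(range(L), repeat=4))
      print("all star/plaquette pairs commute")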

  18. Neutron displacement cross-sections for tantalum and tungsten at energies up to 1 GeV

    NASA Astrophysics Data System (ADS)

    Broeders, C. H. M.; Konobeyev, A. Yu.; Villagrasa, C.

    2005-06-01

    The neutron displacement cross-section has been evaluated for tantalum and tungsten at energies from 10^-5 eV up to 1 GeV. The nuclear optical model and the intranuclear cascade model combined with the pre-equilibrium and evaporation models were used for the calculations. The number of defects produced by recoil nuclei in materials was calculated by the Norgett-Robinson-Torrens (NRT) model and by an approach combining calculations using the binary collision approximation model with the results of molecular dynamics simulation. The numerical calculations were done using the NJOY code, the ECIS96 code, the MCNPX code and the IOTA code.
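
    For reference, the Norgett-Robinson-Torrens (NRT) model cited above estimates the number of stable displacements from the damage energy T_dam of a recoil (the recoil energy reduced by electronic losses) and the displacement threshold E_d; in its usual form, with minor variations in cutoff conventions between references,

      \nu_{\mathrm{NRT}}(T_{\mathrm{dam}}) =
      \begin{cases}
        0, & T_{\mathrm{dam}} < E_d,\\
        1, & E_d \le T_{\mathrm{dam}} < 2E_d/0.8,\\
        \dfrac{0.8\,T_{\mathrm{dam}}}{2E_d}, & T_{\mathrm{dam}} \ge 2E_d/0.8.
      \end{cases}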

  19. CEM2k and LAQGSM Codes as Event-Generators for Space Radiation Shield and Cosmic Rays Propagation Applications

    NASA Technical Reports Server (NTRS)

    Mashnik, S. G.; Gudima, K. K.; Sierk, A. J.; Moskalenko, I. V.

    2002-01-01

    Space radiation shield applications and studies of cosmic ray propagation in the Galaxy require reliable cross sections to calculate spectra of secondary particles and yields of the isotopes produced in nuclear reactions induced both by particles and nuclei at energies from threshold to hundreds of GeV per nucleon. Since the data often exist in a very limited energy range or sometimes not at all, the only way to obtain an estimate of the production cross sections is to use theoretical models and codes. Recently, we have developed improved versions of the Cascade-Exciton Model (CEM) of nuclear reactions: the codes CEM97 and CEM2k for description of particle-nucleus reactions at energies up to about 5 GeV. In addition, we have developed a LANL version of the Quark-Gluon String Model (LAQGSM) to describe reactions induced both by particles and nuclei at energies up to hundreds of GeV per nucleon. We have tested and benchmarked the CEM and LAQGSM codes against a large variety of experimental data and have compared their results with predictions by other currently available models and codes. Our benchmarks show that the CEM and LAQGSM codes have predictive powers no worse than other currently used codes and describe many reactions better than other codes; therefore both our codes can be used as reliable event-generators for space radiation shield and cosmic ray propagation applications. The CEM2k code is being incorporated into the transport code MCNPX (and several other transport codes), and we plan to incorporate LAQGSM into MCNPX in the near future. Here, we present the current status of the CEM2k and LAQGSM codes, and show results and applications to studies of cosmic ray propagation in the Galaxy.

  20. 12 CFR 1807.503 - Project completion.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... applicable: One of three model codes (Uniform Building Code (ICBO), National Building Code (BOCA), Standard (Southern) Building Code (SBCCI)); or the Council of American Building Officials (CABO) one or two family... must meet the current edition of the Model Energy Code published by the Council of American Building...

  1. 12 CFR 1807.503 - Project completion.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... applicable: One of three model codes (Uniform Building Code (ICBO), National Building Code (BOCA), Standard (Southern) Building Code (SBCCI)); or the Council of American Building Officials (CABO) one or two family... must meet the current edition of the Model Energy Code published by the Council of American Building...

  2. 12 CFR 1807.503 - Project completion.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... applicable: One of three model codes (Uniform Building Code (ICBO), National Building Code (BOCA), Standard (Southern) Building Code (SBCCI)); or the Council of American Building Officials (CABO) one or two family... must meet the current edition of the Model Energy Code published by the Council of American Building...

  3. 12 CFR 1807.503 - Project completion.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... applicable: One of three model codes (Uniform Building Code (ICBO), National Building Code (BOCA), Standard (Southern) Building Code (SBCCI)); or the Council of American Building Officials (CABO) one or two family... must meet the current edition of the Model Energy Code published by the Council of American Building...

  4. 76 FR 13101 - Building Energy Codes Program: Presenting and Receiving Comments to DOE Proposed Changes to the...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-10

    .... The IgCC is intended to provide a green model building code provisions for new and existing commercial... DEPARTMENT OF ENERGY 10 CFR Part 430 [Docket No. EERE-2011-BT-BC-0009] Building Energy Codes Program: Presenting and Receiving Comments to DOE Proposed Changes to the International Green Construction...

  5. An Energy Model of Place Cell Network in Three Dimensional Space.

    PubMed

    Wang, Yihong; Xu, Xuying; Wang, Rubin

    2018-01-01

    Place cells are important elements in the spatial representation system of the brain. A considerable amount of experimental data and classical models have accumulated in this area. However, an important question has not been addressed, which is how three dimensional space is represented by the place cells. This question is preliminarily surveyed by the energy coding method in this research. The energy coding method argues that neural information can be expressed by neural energy and that it is convenient to model and compute for neural systems due to the global and linearly addable properties of neural energy. Nevertheless, models of functional neural networks based on the energy coding method have not been established. In this work, we construct a place cell network model to represent three dimensional space on an energy level. Then we define the place field and place field center and test the locating performance in three dimensional space. The results imply that the model successfully simulates the basic properties of place cells. The individual place cell obtains unique spatial selectivity. The place fields in three dimensional space vary in size and energy consumption. Furthermore, the locating error is limited to a certain level and the simulated place field agrees with the experimental results. In conclusion, this is an effective model to represent three dimensional space by the energy method. The research verifies the energy efficiency principle of the brain during neural coding of three dimensional spatial information. It is the first step toward completing the three dimensional spatial representation system of the brain, and helps us further understand how the energy efficiency principle directs the locating, navigating, and path planning functions of the brain.

  6. Recent Progress in the Development of a Multi-Layer Green's Function Code for Ion Beam Transport

    NASA Technical Reports Server (NTRS)

    Tweed, John; Walker, Steven A.; Wilson, John W.; Tripathi, Ram K.

    2008-01-01

    To meet the challenge of future deep space programs, an accurate and efficient engineering code for analyzing the shielding requirements against high-energy galactic heavy radiation is needed. To address this need, a new Green's function code capable of simulating high charge and energy ions with either laboratory or space boundary conditions is currently under development. The computational model consists of combinations of physical perturbation expansions based on the scales of atomic interaction, multiple scattering, and nuclear reactive processes with use of the Neumann-asymptotic expansions with non-perturbative corrections. The code contains energy loss due to straggling, nuclear attenuation, nuclear fragmentation with energy dispersion and downshifts. Previous reports show that the new code accurately models the transport of ion beams through a single slab of material. Current research efforts are focused on enabling the code to handle multiple layers of material and the present paper reports on progress made towards that end.

  7. Phase 1 Validation Testing and Simulation for the WEC-Sim Open Source Code

    NASA Astrophysics Data System (ADS)

    Ruehl, K.; Michelen, C.; Gunawan, B.; Bosma, B.; Simmons, A.; Lomonaco, P.

    2015-12-01

    WEC-Sim is an open source code to model wave energy converter performance in operational waves, developed by Sandia and NREL and funded by the US DOE. The code is a time-domain modeling tool developed in MATLAB/SIMULINK using the multibody dynamics solver SimMechanics, and solves the WEC's governing equations of motion using the Cummins time-domain impulse response formulation in 6 degrees of freedom. The WEC-Sim code has undergone verification through code-to-code comparisons; however, validation of the code has been limited to publicly available experimental data sets. While these data sets provide preliminary code validation, the experimental tests were not explicitly designed for code validation, and as a result are limited in their ability to validate the full functionality of the WEC-Sim code. Therefore, dedicated physical model tests for WEC-Sim validation have been performed. This presentation provides an overview of the WEC-Sim validation experimental wave tank tests performed at Oregon State University's Directional Wave Basin at the Hinsdale Wave Research Laboratory. Phase 1 of experimental testing was focused on device characterization and was completed in Fall 2015. Phase 2 is focused on WEC performance and is scheduled for Winter 2015/2016. These experimental tests were designed explicitly to validate the performance of the WEC-Sim code and its new feature additions. Upon completion, the WEC-Sim validation data set will be made publicly available to the wave energy community. For the physical model test, a controllable model of a floating wave energy converter has been designed and constructed. The instrumentation includes state-of-the-art devices to measure pressure fields, motions in 6 DOF, multi-axial load cells, torque transducers, position transducers, and encoders. The model also incorporates a fully programmable Power-Take-Off system which can be used to generate or absorb wave energy. Numerical simulations of the experiments using WEC-Sim will be presented. These simulations highlight the code features included in the latest release of WEC-Sim (v1.2), including wave directionality, nonlinear hydrostatics and hydrodynamics, user-defined wave elevation time-series, state space radiation, and WEC-Sim compatibility with BEMIO (an open source AQWA/WAMI/NEMOH coefficient parser).
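
    The Cummins time-domain impulse-response formulation that the abstract refers to takes, in each of the six degrees of freedom, the generic form below; the split of the forcing into excitation, power-take-off (PTO), mooring, and viscous terms is written only schematically here:

      (M + A_\infty)\,\ddot{X}(t)
        + \int_0^t K(t-\tau)\,\dot{X}(\tau)\,\mathrm{d}\tau
        + C_{hs}\,X(t)
        = F_{\mathrm{exc}}(t) + F_{\mathrm{PTO}}(t) + F_{\mathrm{moor}}(t) + F_{\mathrm{visc}}(t),

    where M is the mass/inertia matrix, A_\infty the infinite-frequency added mass, K(t) the radiation impulse-response kernel, and C_{hs} the linearized hydrostatic stiffness.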

  8. Mechanism on brain information processing: Energy coding

    NASA Astrophysics Data System (ADS)

    Wang, Rubin; Zhang, Zhikang; Jiao, Xianfa

    2006-09-01

    According to the experimental result that signal transmission and neuronal energetic demands are tightly coupled to information coding in the cerebral cortex, the authors present a brand new scientific theory that offers a unique mechanism for brain information processing. They demonstrate that the neural coding produced by the activity of the brain is well described by the theory of energy coding. Because the energy coding model can reveal mechanisms of brain information processing based upon known biophysical properties, they can not only reproduce various experimental results of neuroelectrophysiology but also quantitatively explain the recent experimental results from neuroscientists at Yale University by means of the principle of energy coding. Because the theory of energy coding bridges the gap between functional connections within a biological neural network and energetic consumption, they estimate that the theory has very important consequences for quantitative research of cognitive function.

  9. Modeling Laboratory Astrophysics Experiments in the High-Energy-Density Regime Using the CRASH Radiation-Hydrodynamics Model

    NASA Astrophysics Data System (ADS)

    Grosskopf, M. J.; Drake, R. P.; Trantham, M. R.; Kuranz, C. C.; Keiter, P. A.; Rutter, E. M.; Sweeney, R. M.; Malamud, G.

    2012-10-01

    The radiation hydrodynamics code developed by the Center for Radiative Shock Hydrodynamics (CRASH) at the University of Michigan has been used to model experimental designs for high-energy-density physics campaigns on OMEGA and other high-energy laser facilities. This code is an Eulerian, block-adaptive AMR hydrodynamics code with implicit multigroup radiation transport and electron heat conduction. CRASH model results have shown good agreement with experimental results from a variety of applications, including radiative shock, Kelvin-Helmholtz and Rayleigh-Taylor experiments on the OMEGA laser, as well as laser-driven ablative plumes in experiments by the Astrophysical Collisionless Shocks Experiments with Lasers (ACSEL) collaboration. We report a series of results with the CRASH code in support of design work for upcoming high-energy-density physics experiments, as well as comparisons between existing experimental data and simulation results. This work is funded by the Predictive Sciences Academic Alliances Program in NNSA-ASC via grant DEFC52-08NA28616, by the NNSA-DS and SC-OFES Joint Program in High-Energy-Density Laboratory Plasmas, grant number DE-FG52-09NA29548, and by the National Laser User Facility Program, grant number DE-NA0000850.

  10. The FLUKA code for space applications: recent developments

    NASA Technical Reports Server (NTRS)

    Andersen, V.; Ballarini, F.; Battistoni, G.; Campanella, M.; Carboni, M.; Cerutti, F.; Empl, A.; Fasso, A.; Ferrari, A.; Gadioli, E.

    2004-01-01

    The FLUKA Monte Carlo transport code is widely used for fundamental research, radioprotection and dosimetry, hybrid nuclear energy system and cosmic ray calculations. The validity of its physical models has been benchmarked against a variety of experimental data over a wide range of energies, ranging from accelerator data to cosmic ray showers in the earth atmosphere. The code is presently undergoing several developments in order to better fit the needs of space applications. The generation of particle spectra according to up-to-date cosmic ray data as well as the effect of the solar and geomagnetic modulation have been implemented and already successfully applied to a variety of problems. The implementation of suitable models for heavy ion nuclear interactions has reached an operational stage. At medium/high energy FLUKA is using the DPMJET model. The major task of incorporating heavy ion interactions from a few GeV/n down to the threshold for inelastic collisions is also progressing and promising results have been obtained using a modified version of the RQMD-2.4 code. This interim solution is now fully operational, while waiting for the development of new models based on the FLUKA hadron-nucleus interaction code, a newly developed QMD code, and the implementation of the Boltzmann master equation theory for low energy ion interactions. © 2004 COSPAR. Published by Elsevier Ltd. All rights reserved.

  11. Pion and Kaon Lab Frame Differential Cross Sections for Intermediate Energy Nucleus-Nucleus Collisions

    NASA Technical Reports Server (NTRS)

    Norbury, John W.; Blattnig, Steve R.

    2008-01-01

    Space radiation transport codes require accurate models for hadron production in intermediate energy nucleus-nucleus collisions. Codes require cross sections to be written in terms of lab frame variables and it is important to be able to verify models against experimental data in the lab frame. Several models are compared to lab frame data. It is found that models based on algebraic parameterizations are unable to describe intermediate energy differential cross section data. However, simple thermal model parameterizations, when appropriately transformed from the center of momentum to the lab frame, are able to account for the data.
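
    The center-of-momentum to lab frame transformation referred to above is conveniently done through the Lorentz-invariant cross section; for a boost with velocity \beta along the beam axis (natural units), the standard relations are

      E\,\frac{d^3\sigma}{dp^3}\bigg|_{\mathrm{lab}} = E^*\,\frac{d^3\sigma}{dp^{*3}}\bigg|_{\mathrm{cm}},
      \qquad
      E^* = \gamma\,(E - \beta\,p_\parallel),\quad
      p_\parallel^* = \gamma\,(p_\parallel - \beta\,E),\quad
      p_\perp^* = p_\perp,

    so a thermal-model parameterization specified in the center-of-momentum frame can be evaluated at the transformed momentum to produce lab-frame spectra.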

  12. Multi-scale modeling of irradiation effects in spallation neutron source materials

    NASA Astrophysics Data System (ADS)

    Yoshiie, T.; Ito, T.; Iwase, H.; Kaneko, Y.; Kawai, M.; Kishida, I.; Kunieda, S.; Sato, K.; Shimakawa, S.; Shimizu, F.; Hashimoto, S.; Hashimoto, N.; Fukahori, T.; Watanabe, Y.; Xu, Q.; Ishino, S.

    2011-07-01

    Changes in the mechanical properties of Ni under irradiation by 3 GeV protons were estimated by multi-scale modeling. The code consisted of four parts. The first part was based on the Particle and Heavy-Ion Transport code System (PHITS) code for nuclear reactions, and modeled the interactions between high energy protons and nuclei in the target. The second part covered atomic collisions by particles without nuclear reactions. Because the energy of the particles was high, subcascade analysis was employed. The direct formation of clusters and the number of mobile defects were estimated using molecular dynamics (MD) and kinetic Monte-Carlo (kMC) methods in each subcascade. The third part considered damage structural evolutions estimated by reaction kinetic analysis. The fourth part involved the estimation of mechanical property change using three-dimensional discrete dislocation dynamics (DDD). Using the above four-part code, stress-strain curves for high energy proton irradiated Ni were obtained.

  13. Computer code for analyzing the performance of aquifer thermal energy storage systems

    NASA Astrophysics Data System (ADS)

    Vail, L. W.; Kincaid, C. T.; Kannberg, L. D.

    1985-05-01

    A code called the Aquifer Thermal Energy Storage System Simulator (ATESSS) has been developed to analyze the operational performance of ATES systems. The ATESSS code provides an ability to examine the interrelationships among design specifications, general operational strategies, and unpredictable variations in the demand for energy. Users of the code can vary the well field layout, heat exchanger size, and pumping/injection schedule. Unpredictable aspects of supply and demand may also be examined through the use of a stochastic model of selected system parameters. While employing a relatively simple model of the aquifer, the ATESSS code plays an important role in the design and operation of ATES facilities by augmenting experience provided by the relatively few field experiments and demonstration projects. ATESSS has been used to characterize the effect of different pumping/injection schedules on a hypothetical ATES system and to estimate the recovery at the St. Paul, Minnesota, field experiment.

  14. P.L. 102-486, "Energy Policy Act" (1992)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2011-12-13

    Amends the Energy Conservation and Production Act to set a deadline by which each State must certify to the Secretary of Energy whether its energy efficiency standards with respect to residential and commercial building codes meet or exceed those of the Council of American Building Officials (CABO) Model Energy Code, 1992, and of the American Society of Heating, Refrigerating, and Air-Conditioning Engineers, respectively.

  15. Hybrid model for simulation of plasma jet injection in tokamak

    NASA Astrophysics Data System (ADS)

    Galkin, Sergei A.; Bogatu, I. N.

    2016-10-01

    The hybrid kinetic model of plasma treats the ions as kinetic particles and the electrons as a charge-neutralizing massless fluid. The model is essentially applicable when most of the energy is concentrated in the ions rather than in the electrons, i.e. it is well suited for the high-density hyper-velocity C60 plasma jet. The hybrid model separates the slower ion time scale from the faster electron time scale, which can then be neglected. That is why hybrid codes consistently outperform traditional PIC codes in computational efficiency while still resolving kinetic ion effects. We discuss a 2D hybrid model and code with an exact energy conservation numerical algorithm and present some results of its application to simulation of C60 plasma jet penetration through a tokamak-like magnetic barrier. We also examine the 3D model/code extension and its possible applications to tokamak and ionospheric plasmas. The work is supported in part by US DOE DE-SC0015776 Grant.
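
    In a hybrid model of this kind the electric field follows from the massless-electron momentum balance; a standard form (scalar electron pressure, optional resistive term, quasi-neutrality n_e ≈ Z n_i) is

      \mathbf{E} = -\,\mathbf{u}_i \times \mathbf{B}
        + \frac{(\nabla \times \mathbf{B}) \times \mathbf{B}}{\mu_0\, e\, n_e}
        - \frac{\nabla p_e}{e\, n_e}
        \;(+\; \eta\,\mathbf{J}),

    with the kinetic ions advanced as particles in these fields; the exact closure used in the authors' code is not reproduced here.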

  16. Review of heavy charged particle transport in MCNP6.2

    NASA Astrophysics Data System (ADS)

    Zieb, K.; Hughes, H. G.; James, M. R.; Xu, X. G.

    2018-04-01

    The release of version 6.2 of the MCNP6 radiation transport code is imminent. To complement the newest release, a summary of the heavy charged particle physics models used in the 1 MeV to 1 GeV energy regime is presented. Several changes have been introduced into the charged particle physics models since the merger of the MCNP5 and MCNPX codes into MCNP6. This paper discusses the default models used in MCNP6 for continuous energy loss, energy straggling, and angular scattering of heavy charged particles. Explanations of the physics models' theories are included as well.
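
    As context for the continuous energy-loss models summarized above (the corrections actually applied in MCNP6 are more elaborate), the mean stopping power of a heavy charged particle of charge z is commonly quoted in the Bethe-Bloch form

      -\left\langle \frac{dE}{dx} \right\rangle =
      K z^2 \frac{Z}{A} \frac{1}{\beta^2}
      \left[ \frac{1}{2}\ln\!\frac{2 m_e c^2 \beta^2 \gamma^2 W_{\max}}{I^2}
        - \beta^2 - \frac{\delta(\beta\gamma)}{2} \right],
      \qquad K = 4\pi N_A r_e^2 m_e c^2,

    where W_max is the maximum energy transfer in a single collision, I the mean excitation energy, and \delta the density-effect correction; the straggling models describe fluctuations about this mean.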

  17. Review of heavy charged particle transport in MCNP6.2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zieb, Kristofer James Ekhart; Hughes, Henry Grady III; Xu, X. George

    The release of version 6.2 of the MCNP6 radiation transport code is imminent. To complement the newest release, a summary of the heavy charged particle physics models used in the 1 MeV to 1 GeV energy regime is presented. Several changes have been introduced into the charged particle physics models since the merger of the MCNP5 and MCNPX codes into MCNP6. Here, this article discusses the default models used in MCNP6 for continuous energy loss, energy straggling, and angular scattering of heavy charged particles. Explanations of the physics models’ theories are included as well.

  18. City Reach Code Technical Support Document

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Athalye, Rahul A.; Chen, Yan; Zhang, Jian

    This report describes and analyzes a set of energy efficiency measures that will save 20% energy over ASHRAE Standard 90.1-2013. The measures will be used to formulate a Reach Code for cities aiming to go beyond national model energy codes. A coalition of U.S. cities together with other stakeholders wanted to facilitate the development of voluntary guidelines and standards that can be implemented in stages at the city level to improve building energy efficiency. The coalition's efforts are being supported by the U.S. Department of Energy via Pacific Northwest National Laboratory (PNNL) and in collaboration with the New Buildings Institute.

  19. Validation of a multi-layer Green's function code for ion beam transport

    NASA Astrophysics Data System (ADS)

    Walker, Steven; Tweed, John; Tripathi, Ram; Badavi, Francis F.; Miller, Jack; Zeitlin, Cary; Heilbronn, Lawrence

    To meet the challenge of future deep space programs, an accurate and efficient engineering code for analyzing the shielding requirements against high-energy galactic heavy radiation is needed. In consequence, a new version of the HZETRN code capable of simulating high charge and energy (HZE) ions with either laboratory or space boundary conditions is currently under development. The new code, GRNTRN, is based on a Green's function approach to the solution of Boltzmann's transport equation and, like its predecessor, is deterministic in nature. The computational model consists of the lowest order asymptotic approximation followed by a Neumann series expansion with non-perturbative corrections. The physical description includes energy loss with straggling, nuclear attenuation, and nuclear fragmentation with energy dispersion and downshift. Code validation in the laboratory environment is addressed by showing that GRNTRN accurately predicts energy loss spectra as measured by solid-state detectors in ion beam experiments with multi-layer targets. In order to validate the code with space boundary conditions, measured particle fluences are propagated through several thicknesses of shielding using both GRNTRN and the current version of HZETRN. The excellent agreement obtained indicates that GRNTRN accurately models the propagation of HZE ions in the space environment as well as in laboratory settings and also provides verification of the HZETRN propagator.
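
    The transport problem both HZETRN and GRNTRN address is, in the straight-ahead, continuous-slowing-down form used throughout this family of codes, schematically

      \left[\frac{\partial}{\partial x}
        - \frac{\partial}{\partial E}\,\tilde{S}_j(E)
        + \sigma_j(E)\right]\phi_j(x,E)
        = \sum_{k}\int_E^{\infty}\sigma_{jk}(E,E')\,\phi_k(x,E')\,\mathrm{d}E',

    where \phi_j is the flux of ion species j, \tilde{S}_j its stopping power (some formulations scale it by the mass number), \sigma_j the total nuclear absorption cross section, and \sigma_{jk} the cross section for producing species j from collisions of species k; the Green's function approach in the paper solves this equation for multi-layer targets.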

  20. Code-to-Code Comparison, and Material Response Modeling of Stardust and MSL using PATO and FIAT

    NASA Technical Reports Server (NTRS)

    Omidy, Ali D.; Panerai, Francesco; Martin, Alexandre; Lachaud, Jean R.; Cozmuta, Ioana; Mansour, Nagi N.

    2015-01-01

    This report provides a code-to-code comparison between PATO, a recently developed high fidelity material response code, and FIAT, NASA's legacy code for ablation response modeling. The goal is to demonstrate that FIAT and PATO generate the same results when using the same models. Test cases of increasing complexity are used, drawn from both arc-jet testing and a flight experiment. When using the exact same physical models, material properties and boundary conditions, the two codes give results that agree to within 2%. The minor discrepancy is attributed to the inclusion of the gas phase heat capacity (cp) in the energy equation in PATO, and not in FIAT.

  1. Implementation of Energy Code Controls Requirements in New Commercial Buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosenberg, Michael I.; Hart, Philip R.; Hatten, Mike

    Most state energy codes in the United States are based on one of two national model codes: ANSI/ASHRAE/IES 90.1 (Standard 90.1) or the International Code Council (ICC) International Energy Conservation Code (IECC). Since 2004, covering the last four cycles of Standard 90.1 updates, about 30% of all new requirements have been related to building controls. These requirements can be difficult to implement and verification is beyond the expertise of most building code officials, yet the assumption in studies that measure the savings from energy codes is that they are implemented and working correctly. The objective of the current research is to evaluate the degree to which high impact controls requirements included in commercial energy codes are properly designed, commissioned and implemented in new buildings. This study also evaluates the degree to which these control requirements are realizing their savings potential. This was done using a three-step process. The first step involved interviewing commissioning agents to get a better understanding of their activities as they relate to energy code required controls measures. The second involved field audits of a sample of commercial buildings to determine whether the code-required control measures are being designed, commissioned and correctly implemented and functioning in new buildings. The third step includes compilation and analysis of the information gathered during the first two steps. Information gathered during these activities could be valuable to code developers, energy planners, designers, building owners, and building officials.

  2. A hadron-nucleus collision event generator for simulations at intermediate energies

    NASA Astrophysics Data System (ADS)

    Ackerstaff, K.; Bisplinghoff, J.; Bollmann, R.; Cloth, P.; Diehl, O.; Dohrmann, F.; Drüke, V.; Eisenhardt, S.; Engelhardt, H. P.; Ernst, J.; Eversheim, P. D.; Filges, D.; Fritz, S.; Gasthuber, M.; Gebel, R.; Greiff, J.; Gross, A.; Gross-Hardt, R.; Hinterberger, F.; Jahn, R.; Lahr, U.; Langkau, R.; Lippert, G.; Maschuw, R.; Mayer-Kuckuk, T.; Mertler, G.; Metsch, B.; Mosel, F.; Paetz gen. Schieck, H.; Petry, H. R.; Prasuhn, D.; von Przewoski, B.; Rohdjeß, H.; Rosendaal, D.; Roß, U.; von Rossen, P.; Scheid, H.; Schirm, N.; Schulz-Rojahn, M.; Schwandt, F.; Scobel, W.; Sterzenbach, G.; Theis, D.; Weber, J.; Wellinghausen, A.; Wiedmann, W.; Woller, K.; Ziegler, R.; EDDA-Collaboration

    2002-10-01

    Several available codes for hadronic event generation and shower simulation are discussed and their predictions are compared to experimental data in order to obtain a satisfactory description of hadronic processes in Monte Carlo studies of detector systems for medium energy experiments. The most reasonable description is found for the intra-nuclear-cascade (INC) model of Bertini, which employs a microscopic description of the INC, taking into account elastic and inelastic pion-nucleon and nucleon-nucleon scattering. The isobar model of Sternheimer and Lindenbaum is used to simulate the inelastic elementary collisions inside the nucleus via formation and decay of the Δ33-resonance, which, however, limits the model at higher energies. To overcome this limitation, the INC model has been extended by using the resonance model of the HADRIN code, considering all resonances in elementary collisions contributing more than 2% to the total cross-section up to kinetic energies of 5 GeV. In addition, angular distributions based on phase shift analysis are used for elastic nucleon-nucleon as well as elastic and charge exchange pion-nucleon scattering. Kaons and antinucleons can also be treated as projectiles. Good agreement with experimental data is found predominantly for lower projectile energies, i.e. in the regime of the Bertini code. The original as well as the extended Bertini model have been implemented as shower codes in the high energy detector simulation package GEANT-3.14, now allowing its use also in full Monte Carlo studies of detector systems at intermediate energies. GEANT-3.14 has been used here mainly for its powerful geometry and analysis packages, owing to the complexity of the EDDA detector system.

  3. The Energy Coding of a Structural Neural Network Based on the Hodgkin-Huxley Model.

    PubMed

    Zhu, Zhenyu; Wang, Rubin; Zhu, Fengyun

    2018-01-01

    Based on the Hodgkin-Huxley model, the present study established a fully connected structural neural network to simulate the neural activity and energy consumption of the network by neural energy coding theory. The numerical simulation results showed that the periodicity of the network energy distribution was positively correlated with the number of neurons and the coupling strength, but negatively correlated with the signal transmission delay. Moreover, a relationship was established between the energy distribution feature and the synchronous oscillation of the neural network, which showed that when the proportion of negative energy in the power consumption curve was high, the synchronous oscillation of the neural network was apparent. In addition, comparison with the simulation results of a structural neural network based on the Wang-Zhang biophysical model of neurons showed that both models were essentially consistent.
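
    For reference, the Hodgkin-Huxley membrane equation underlying the network above is

      C_m \frac{dV}{dt} =
        -\bar{g}_{\mathrm{Na}}\, m^3 h\,(V - E_{\mathrm{Na}})
        -\bar{g}_{\mathrm{K}}\, n^4\,(V - E_{\mathrm{K}})
        - g_L\,(V - E_L) + I_{\mathrm{ext}},
      \qquad
      \frac{dx}{dt} = \alpha_x(V)\,(1-x) - \beta_x(V)\,x \quad (x = m, h, n),

    and energy-coding studies of this type typically build the per-neuron power from products of these ionic currents with their driving potentials; the exact bookkeeping used in the paper is not reproduced here.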

  4. Overview of FAR-TECH's magnetic fusion energy research

    NASA Astrophysics Data System (ADS)

    Kim, Jin-Soo; Bogatu, I. N.; Galkin, S. A.; Spencer, J. Andrew; Svidzinski, V. A.; Zhao, L.

    2017-10-01

    FAR-TECH, Inc. has been working on magnetic fusion energy research for over two decades. Over the years, we have developed unique approaches to help understand the physics and resolve issues in magnetic fusion energy. The specific areas of work have been modeling RF waves in plasmas, MHD modeling and mode identification, and the nano-particle plasma jet and its application to disruption mitigation. Our research highlights from recent years will be presented with examples, specifically the developments of FullWave (a full wave RF code), PMARS (a parallelized MARS code), and HEM (a hybrid electromagnetic code). In addition, the nano-particle plasma jet (NPPJ) and its application to disruption mitigation will be presented. Work is supported by the U.S. DOE SBIR program.

  5. A finite-temperature Hartree-Fock code for shell-model Hamiltonians

    NASA Astrophysics Data System (ADS)

    Bertsch, G. F.; Mehlhaff, J. M.

    2016-10-01

    The codes HFgradZ.py and HFgradT.py find axially symmetric minima of a Hartree-Fock energy functional for a Hamiltonian supplied in a shell model basis. The functional to be minimized is the Hartree-Fock energy for zero-temperature properties or the Hartree-Fock grand potential for finite-temperature properties (thermal energy, entropy). The minimization may be subjected to additional constraints besides axial symmetry and nucleon numbers. A single-particle operator can be used to constrain the minimization by adding it to the single-particle Hamiltonian with a Lagrange multiplier. One can also constrain its expectation value in the zero-temperature code. Also the orbital filling can be constrained in the zero-temperature code, fixing the number of nucleons having given Kπ quantum numbers. This is particularly useful to resolve near-degeneracies among distinct minima.
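
    In the finite-temperature code the minimized quantity is the Hartree-Fock grand potential; schematically, with a constraint operator \hat{Q} added through a Lagrange multiplier \lambda as described above,

      \Omega = E_{\mathrm{HF}}[\rho] - T\,S[\rho] - \sum_{q=n,p}\mu_q N_q - \lambda\,\langle\hat{Q}\rangle,
      \qquad
      S = -\sum_k\bigl[f_k\ln f_k + (1-f_k)\ln(1-f_k)\bigr],
      \qquad
      f_k = \frac{1}{1 + e^{(\varepsilon_k-\mu)/T}},

    where f_k are the thermal occupations of the Hartree-Fock orbitals with energies \varepsilon_k (with \mu the chemical potential of the corresponding nucleon species); the zero-temperature code instead minimizes the energy with sharp occupations.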

  6. DREAM-3D and the importance of model inputs and boundary conditions

    NASA Astrophysics Data System (ADS)

    Friedel, Reiner; Tu, Weichao; Cunningham, Gregory; Jorgensen, Anders; Chen, Yue

    2015-04-01

    Recent work on radiation belt 3D diffusion codes, such as the Los Alamos "DREAM-3D" code, has demonstrated the ability of such codes to reproduce the relativistic electron dynamics of realistic magnetospheric storm events, as long as sufficient "event-oriented" boundary conditions and code inputs, such as wave powers, low-energy boundary conditions, background plasma densities, and the last closed drift shell (outer boundary), are available. In this talk we will argue that the main limiting factor in our modeling ability is no longer our inability to represent the key physical processes that govern the dynamics of the radiation belts (radial, pitch angle, and energy diffusion) but rather our limitations in specifying accurate boundary conditions and code inputs. We use DREAM-3D runs to show the sensitivity of the modeled outcomes to these boundary conditions and inputs, and also discuss alternate "proxy" approaches to obtain the required inputs from other (ground-based) sources.
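
    To illustrate the outer-boundary sensitivity discussed above, the sketch below integrates a bare-bones 1-D radial diffusion equation, df/dt = L^2 d/dL (D_LL L^-2 df/dL), with a commonly used Kp-parameterized magnetic diffusion coefficient (taken here as D_LL = 10^(0.506 Kp - 9.325) L^10 per day, an assumption) and compares the resulting profiles for two assumed outer-boundary phase-space-density values. The grid, time step, Kp, and boundary values are illustrative, not DREAM-3D inputs.

        import numpy as np

        def radial_diffusion(L, dt, nsteps, kp, f_outer):
            """Explicit update of df/dt = L^2 d/dL ( D_LL / L^2 * df/dL ), f in arbitrary units."""
            dL = L[1] - L[0]
            d_ll = 10.0 ** (0.506 * kp - 9.325) * L ** 10          # per day (assumed Kp scaling)
            d_face = 0.5 * (d_ll[:-1] + d_ll[1:]) / (0.5 * (L[:-1] + L[1:])) ** 2
            f = np.zeros_like(L)
            for _ in range(nsteps):
                flux = d_face * np.diff(f) / dL                    # (D_LL / L^2) df/dL at cell faces
                f[1:-1] += dt * L[1:-1] ** 2 * np.diff(flux) / dL
                f[0], f[-1] = 0.0, f_outer                         # fixed inner/outer boundaries
            return f

        L = np.linspace(2.0, 6.0, 81)
        idx = int(np.argmin(np.abs(L - 4.0)))
        for f_outer in (1.0, 2.0):                                 # two assumed outer-boundary values
            f_end = radial_diffusion(L, dt=1e-4, nsteps=200_000, kp=4.0, f_outer=f_outer)  # ~20 days
            print(f"outer boundary {f_outer}: f(L=4) after 20 days = {f_end[idx]:.3f}")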

  7. Modeling Laser-Driven Laboratory Astrophysics Experiments Using the CRASH Code

    NASA Astrophysics Data System (ADS)

    Grosskopf, Michael; Keiter, P.; Kuranz, C. C.; Malamud, G.; Trantham, M.; Drake, R.

    2013-06-01

    Laser-driven laboratory astrophysics experiments can provide important insight into the physical processes relevant to astrophysical systems. The radiation hydrodynamics code developed by the Center for Radiative Shock Hydrodynamics (CRASH) at the University of Michigan has been used to model experimental designs for high-energy-density laboratory astrophysics campaigns on OMEGA and other high-energy laser facilities. This code is an Eulerian, block-adaptive mesh refinement (AMR) hydrodynamics code with implicit multigroup radiation transport and electron heat conduction. The CRASH model has been used for many applications, including radiative shock, Kelvin-Helmholtz, and Rayleigh-Taylor experiments on the OMEGA laser, as well as laser-driven ablative plumes in experiments by the Astrophysical Collisionless Shocks Experiments with Lasers (ACSEL) collaboration. We report a series of results with the CRASH code in support of design work for upcoming high-energy-density physics experiments, as well as comparisons between existing experimental data and simulation results. This work is funded by the Predictive Sciences Academic Alliances Program in NNSA-ASC via grant DE-FC52-08NA28616, by the NNSA-DS and SC-OFES Joint Program in High-Energy-Density Laboratory Plasmas, grant number DE-FG52-09NA29548, and by the National Laser User Facility Program, grant number DE-NA0000850.

  8. Energy and Energy Cost Savings Analysis of the 2015 IECC for Commercial Buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jian; Xie, YuLong; Athalye, Rahul A.

    As required by statute (42 USC 6833), DOE recently issued a determination that ANSI/ASHRAE/IES Standard 90.1-2013 would achieve greater energy efficiency in buildings subject to the code compared to the 2010 edition of the standard. Pacific Northwest National Laboratory (PNNL) conducted an energy savings analysis for Standard 90.1-2013 in support of its determination. While Standard 90.1 is the model energy standard for commercial and multi-family residential buildings over three floors (42 USC 6833), many states have historically adopted the International Energy Conservation Code (IECC) for both residential and commercial buildings. This report provides an assessment as to whether buildings constructed to the commercial energy efficiency provisions of the 2015 IECC would save energy and energy costs as compared to the 2012 IECC. PNNL also compared the energy performance of the 2015 IECC with the corresponding Standard 90.1-2013. The goal of this analysis is to help states and local jurisdictions make informed decisions regarding model code adoption.

  9. Review of Heavy Charged Particle Transport in MCNP6.2

    DOE PAGES

    Zieb, Kristofer James Ekhart; Hughes, Henry Grady III; Xu, X. George; ...

    2018-01-05

    The release of version 6.2 of the MCNP6 radiation transport code is imminent. To complement the newest release, a summary of the heavy charged particle physics models used in the 1 MeV to 1 GeV energy regime is presented. Several changes have been introduced into the charged particle physics models since the merger of the MCNP5 and MCNPX codes into MCNP6. This article discusses the default models used in MCNP6 for continuous energy loss, energy straggling, and angular scattering of heavy charged particles. Explanations of the physics models' theories are included as well.

  10. Mandating better buildings: a global review of building codes and prospects for improvement in the United States

    DOE PAGES

    Sun, Xiaojing; Brown, Marilyn A.; Cox, Matt; ...

    2015-03-11

    This paper provides a global overview of the design, implementation, and evolution of building energy codes. Reflecting alternative policy goals, building energy codes differ significantly across the United States, the European Union, and China. This review uncovers numerous innovative practices including greenhouse gas emissions caps per square meter of building space, energy performance certificates with retrofit recommendations, and inclusion of renewable energy to achieve “nearly zero-energy buildings”. These innovations motivated an assessment of an aggressive commercial building code applied to all US states, requiring both new construction and buildings with major modifications to comply with the latest version of the ASHRAE 90.1 Standards. Using the National Energy Modeling System (NEMS), we estimate that by 2035, such building codes in the United States could reduce energy for space heating, cooling, water heating and lighting in commercial buildings by 16%, 15%, 20% and 5%, respectively. Impacts on different fuels and building types, energy rates and bills as well as pollution emission reductions are also examined.

  11. Transmutation Fuel Performance Code Thermal Model Verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gregory K. Miller; Pavel G. Medvedev

    2007-09-01

    The FRAPCON fuel performance code is being modified to model the performance of nuclear fuels of interest to the Global Nuclear Energy Partnership (GNEP). The present report documents the verification of the FRAPCON thermal model. It was found that, with minor modifications, the FRAPCON thermal model temperature calculations agree with those of the commercial software ABAQUS (Version 6.4-4). This report outlines the methodology of the verification, the code input, and the calculation results.

  12. Increasing Flexibility in Energy Code Compliance: Performance Packages

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, Philip R.; Rosenberg, Michael I.

    Energy codes and standards have provided significant increases in building efficiency over the last 38 years, since the first national energy code was published in late 1975. The most commonly used path in energy codes, the prescriptive path, appears to be reaching a point of diminishing returns. As the code matures, the prescriptive path becomes more complicated, and also more restrictive. It is likely that an approach that considers the building as an integrated system will be necessary to achieve the next real gains in building efficiency. Performance code paths are increasing in popularity; however, there remains a significant design-team overhead in following the performance path, especially for smaller buildings. This paper focuses on the development of one alternative format, prescriptive packages. A method to develop building-specific prescriptive packages is reviewed, based on multiple runs of prototypical building models that feed a parametric decision analysis to determine a set of packages with equivalent energy performance. The approach is designed to be cost-effective and flexible for the design team while achieving a desired level of energy efficiency performance. A demonstration of the approach based on mid-sized office buildings with two HVAC system types is shown, along with a discussion of potential applicability in the energy code process.
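
    A minimal sketch of the package-selection step described above, assuming a pre-computed parametric table rather than actual prototype simulation output: candidate measure combinations are enumerated, a stand-in energy model scores each one, and packages at or below an assumed target energy use intensity are kept as equivalent-performance options. All measure levels, the scoring function, and the target are hypothetical.

        from itertools import product

        # Hypothetical measure levels for a small office prototype (illustrative only).
        measures = {
            "wall_insulation_R": [13, 19, 25],
            "window_U": [0.45, 0.35, 0.28],
            "lighting_W_per_ft2": [0.9, 0.75, 0.6],
        }

        def simulated_eui(pkg):
            """Stand-in for a prototype-building simulation; returns kBtu/ft2-yr.
            A real workflow would run a whole-building simulation for each package."""
            base = 60.0
            base -= 0.25 * (pkg["wall_insulation_R"] - 13)
            base -= 20.0 * (0.45 - pkg["window_U"])
            base -= 15.0 * (0.9 - pkg["lighting_W_per_ft2"])
            return base

        target_eui = 55.0   # assumed code-equivalent target
        packages = [dict(zip(measures, combo)) for combo in product(*measures.values())]
        equivalent = [(p, simulated_eui(p)) for p in packages if simulated_eui(p) <= target_eui]
        for pkg, eui in sorted(equivalent, key=lambda t: t[1]):
            print(f"{eui:5.1f} kBtu/ft2-yr  {pkg}")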

  13. Development of a new EMP code at LANL

    NASA Astrophysics Data System (ADS)

    Colman, J. J.; Roussel-Dupré, R. A.; Symbalisty, E. M.; Triplett, L. A.; Travis, B. J.

    2006-05-01

    A new code for modeling the generation of an electromagnetic pulse (EMP) by a nuclear explosion in the atmosphere is being developed. The source of the EMP is the Compton current produced by the prompt radiation (γ-rays, X-rays, and neutrons) of the detonation. As a first step in building a multi-dimensional EMP code we have written three kinetic codes, Plume, Swarm, and Rad. Plume models the transport of energetic electrons in air. The Plume code solves the relativistic Fokker-Planck equation over a specified energy range that can include ~3 keV to 50 MeV and computes the resulting electron distribution function at each cell in a two-dimensional spatial grid. The energetic electrons are allowed to transport, scatter, and experience Coulombic drag. Swarm models the transport of lower energy electrons in air, spanning 0.005 eV to 30 keV. The Swarm code performs a full 2-D solution to the Boltzmann equation for electrons in the presence of an applied electric field. Over this energy range the relevant processes to be tracked are elastic scattering, three-body attachment, two-body attachment, rotational excitation, vibrational excitation, electronic excitation, and ionization. All of these occur due to collisions between the electrons and neutral bodies in air. The Rad code solves the full radiation transfer equation in the energy range of 1 keV to 100 MeV. It includes effects of photo-absorption, Compton scattering, and pair production. All of these codes employ a spherical coordinate system in momentum space and a cylindrical coordinate system in configuration space. The "z" axes of the momentum and configuration spaces are assumed to be parallel, and we are currently also assuming complete spatial symmetry around the "z" axis. Benchmarking for each of these codes will be discussed, as well as the way forward towards an integrated modern EMP code.

  14. Radiation and polarization signatures of the 3D multizone time-dependent hadronic blazar model

    DOE PAGES

    Zhang, Haocheng; Diltz, Chris; Bottcher, Markus

    2016-09-23

    We present a newly developed time-dependent three-dimensional multizone hadronic blazar emission model. By coupling a Fokker–Planck-based lepto-hadronic particle evolution code, 3DHad, with a polarization-dependent radiation transfer code, 3DPol, we are able to study the time-dependent radiation and polarization signatures of a hadronic blazar model for the first time. Our current code is limited to parameter regimes in which the hadronic γ-ray output is dominated by proton synchrotron emission, neglecting pion production. Our results demonstrate that the time-dependent flux and polarization signatures are generally dominated by the relation between the synchrotron cooling and the light-crossing timescale, which is largely independent of the exact model parameters. We find that unlike the low-energy polarization signatures, which can vary rapidly in time, the high-energy polarization signatures appear stable. Lastly, future high-energy polarimeters may be able to distinguish such signatures from the lower and more rapidly variable polarization signatures expected in leptonic models.

  15. 10 CFR 420.2 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Planning Organization means that organization required by the Department of Transportation, and designated... planning provisions in a Standard Metropolitan Statistical Area. Model Energy Code, 1993, including Errata, means the model building code published by the Council of American Building Officials, which is...

  16. Energetic properties' investigation of removing flattening filter at phantom surface: Monte Carlo study using BEAMnrc code, DOSXYZnrc code and BEAMDP code

    NASA Astrophysics Data System (ADS)

    Bencheikh, Mohamed; Maghnouj, Abdelmajid; Tajmouati, Jaouad

    2017-11-01

    The Monte Carlo calculation method is considered to be the most accurate method for dose calculation in radiotherapy and beam characterization investigations. In this study, the Varian Clinac 2100 medical linear accelerator with and without flattening filter (FF) was modelled. The objective of this study was to determine the flattening filter impact on particles' energy properties at the phantom surface in terms of energy fluence, mean energy, and energy fluence distribution. The Monte Carlo codes used in this study were the BEAMnrc code for simulating the linac head, the DOSXYZnrc code for simulating the absorbed dose in a water phantom, and BEAMDP for extracting energy properties. The field size was 10 × 10 cm², the simulated photon beam energy was 6 MV, and the SSD was 100 cm. The Monte Carlo geometry was validated by a gamma index acceptance rate of 99% in PDD and 98% in dose profiles; the gamma criteria were 3% for dose difference and 3 mm for distance to agreement. Without the FF, the energetic properties changed as follows: the electron contribution increased by more than 300% in energy fluence, almost 14% in mean energy, and 1900% in energy fluence distribution, whereas the photon contribution increased by 50% in energy fluence, almost 18% in mean energy, and almost 35% in energy fluence distribution. Removing the flattening filter increases the electron contamination energy relative to the photon energy; our study can contribute to the evolution of flattening-filter-free configurations in future linacs.
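
    A rough sketch of the 1-D global gamma-index test corresponding to the 3%/3 mm acceptance criteria quoted above; the depth-dose curves here are synthetic stand-ins, whereas the study itself compared full DOSXYZnrc dose distributions against measurements.

        import numpy as np

        def gamma_1d(x_ref, d_ref, x_eval, d_eval, dose_crit=0.03, dist_crit_mm=3.0):
            """Global 1-D gamma index: for each reference point, minimize the combined
            dose-difference / distance-to-agreement metric over the evaluated curve."""
            d_norm = dose_crit * d_ref.max()          # global dose criterion (3% of max dose)
            gammas = np.empty_like(d_ref)
            for i, (xr, dr) in enumerate(zip(x_ref, d_ref)):
                dist2 = ((x_eval - xr) / dist_crit_mm) ** 2
                dose2 = ((d_eval - dr) / d_norm) ** 2
                gammas[i] = np.sqrt(np.min(dist2 + dose2))
            return gammas

        # Synthetic depth-dose curves (depth in mm, arbitrary dose units) standing in for
        # a measured versus a simulated PDD.
        x = np.arange(0.0, 300.0, 1.0)
        pdd_meas = 100.0 * np.exp(-x / 180.0)
        pdd_calc = 100.5 * np.exp(-x / 182.0)          # slightly perturbed "simulation"
        g = gamma_1d(x, pdd_meas, x, pdd_calc)
        print(f"gamma pass rate (3%/3 mm): {100.0 * np.mean(g <= 1.0):.1f}%")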

  17. Energy Storage System Safety: Plan Review and Inspection Checklist

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cole, Pam C.; Conover, David R.

    Codes, standards, and regulations (CSR) governing the design, construction, installation, commissioning, and operation of the built environment are intended to protect the public health, safety, and welfare. While these documents change over time to address new technology and new safety challenges, there is generally some lag time between the introduction of a technology into the market and the time it is specifically covered in model codes and standards developed in the voluntary sector. After their development, there is also a timeframe of at least a year or two until the codes and standards are adopted. Until existing model codes and standards are updated or new ones are developed and then adopted, one seeking to deploy energy storage technologies or needing to verify the safety of an installation may be challenged in trying to apply currently implemented CSRs to an energy storage system (ESS). The Energy Storage System Guide for Compliance with Safety Codes and Standards (CG), developed in June 2016, is intended to help address the acceptability of the design and construction of stationary ESSs, their component parts, and the siting, installation, commissioning, operations, maintenance, and repair/renovation of ESS within the built environment.

  18. HYDRA-II: A hydrothermal analysis computer code: Volume 3, Verification/validation assessments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCann, R.A.; Lowery, P.S.

    1987-10-01

    HYDRA-II is a hydrothermal computer code capable of three-dimensional analysis of coupled conduction, convection, and thermal radiation problems. This code is especially appropriate for simulating the steady-state performance of spent fuel storage systems. The code has been evaluated for this application for the US Department of Energy's Commercial Spent Fuel Management Program. HYDRA-II provides a finite difference solution in cartesian coordinates to the equations governing the conservation of mass, momentum, and energy. A cylindrical coordinate system may also be used to enclose the cartesian coordinate system. This exterior coordinate system is useful for modeling cylindrical cask bodies. The difference equations for conservation of momentum are enhanced by the incorporation of directional porosities and permeabilities that aid in modeling solid structures whose dimensions may be smaller than the computational mesh. The equation for conservation of energy permits modeling of orthotropic physical properties and film resistances. Several automated procedures are available to model radiation transfer within enclosures and from fuel rod to fuel rod. The documentation of HYDRA-II is presented in three separate volumes. Volume I - Equations and Numerics describes the basic differential equations, illustrates how the difference equations are formulated, and gives the solution procedures employed. Volume II - User's Manual contains code flow charts, discusses the code structure, provides detailed instructions for preparing an input file, and illustrates the operation of the code by means of a model problem. This volume, Volume III - Verification/Validation Assessments, provides a comparison between the analytical solution and the numerical simulation for problems with a known solution. This volume also documents comparisons between the results of simulations of single- and multiassembly storage systems and actual experimental data. 11 refs., 55 figs., 13 tabs.

  19. Learning-Based Just-Noticeable-Quantization- Distortion Modeling for Perceptual Video Coding.

    PubMed

    Ki, Sehwan; Bae, Sung-Ho; Kim, Munchurl; Ko, Hyunsuk

    2018-07-01

    Conventional predictive video coding-based approaches are reaching the limit of their potential coding efficiency improvements because of severely increasing computational complexity. As an alternative approach, perceptual video coding (PVC) has attempted to achieve high coding efficiency by eliminating perceptual redundancy, using just-noticeable-distortion (JND) directed PVC. Previous JNDs were modeled by adding white Gaussian noise or specific signal patterns into the original images, which was not appropriate for finding JND thresholds due to distortion with energy reduction. In this paper, we present a novel discrete cosine transform-based energy-reduced JND model, called ERJND, that is more suitable for JND-based PVC schemes. The proposed ERJND model is then extended to two learning-based just-noticeable-quantization-distortion (JNQD) models that can be applied as preprocessing for perceptual video coding. The two JNQD models can automatically adjust JND levels based on given quantization step sizes. One of the two JNQD models, called LR-JNQD, is based on linear regression and determines the model parameters for JNQD from extracted handcrafted features. The other JNQD model, called CNN-JNQD, is based on a convolutional neural network (CNN). To the best of our knowledge, this paper is the first approach to automatically adjust JND levels according to quantization step sizes for preprocessing the input to video encoders. In experiments, both the LR-JNQD and CNN-JNQD models were applied to high efficiency video coding (HEVC) and yielded maximum (average) bitrate reductions of 38.51% (10.38%) and 67.88% (24.91%), respectively, with little subjective video quality degradation, compared with the input without preprocessing applied.
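
    A toy sketch of the linear-regression flavor of this idea (LR-JNQD): a JND level is predicted from handcrafted block features together with the quantization step size by ordinary least squares. The features, synthetic training targets, and coefficients below are placeholders, not the features or trained model of the paper.

        import numpy as np

        rng = np.random.default_rng(1)
        n_blocks = 500
        # Hypothetical handcrafted features per block: luminance, texture energy, and QP.
        luminance = rng.uniform(16, 235, n_blocks)
        texture = rng.uniform(0, 50, n_blocks)
        qp = rng.integers(22, 38, n_blocks).astype(float)
        X = np.column_stack([np.ones(n_blocks), luminance, texture, qp])

        # Synthetic "ground truth" JND thresholds: larger for bright, textured blocks and coarse QP.
        jnd = 0.02 * luminance + 0.1 * texture + 0.3 * qp + rng.normal(0, 0.5, n_blocks)

        w, *_ = np.linalg.lstsq(X, jnd, rcond=None)     # ordinary least squares fit

        def predict_jnd(lum, tex, q):
            """Predict a JND level for one block at quantization step q."""
            return float(w @ np.array([1.0, lum, tex, q]))

        print("predicted JND at QP=27:", round(predict_jnd(120.0, 20.0, 27.0), 2))
        print("predicted JND at QP=37:", round(predict_jnd(120.0, 20.0, 37.0), 2))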

  20. World Energy Projection System Plus Model Documentation: Coal Module

    EIA Publications

    2011-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) Coal Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  1. World Energy Projection System Plus Model Documentation: Transportation Module

    EIA Publications

    2017-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) International Transportation model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  2. World Energy Projection System Plus Model Documentation: Residential Module

    EIA Publications

    2016-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) Residential Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  3. World Energy Projection System Plus Model Documentation: Refinery Module

    EIA Publications

    2016-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) Refinery Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  4. World Energy Projection System Plus Model Documentation: Main Module

    EIA Publications

    2016-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) Main Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  5. World Energy Projection System Plus Model Documentation: Electricity Module

    EIA Publications

    2017-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) World Electricity Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  6. HYDRA-II: A hydrothermal analysis computer code: Volume 2, User's manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCann, R.A.; Lowery, P.S.; Lessor, D.L.

    1987-09-01

    HYDRA-II is a hydrothermal computer code capable of three-dimensional analysis of coupled conduction, convection, and thermal radiation problems. This code is especially appropriate for simulating the steady-state performance of spent fuel storage systems. The code has been evaluated for this application for the US Department of Energy's Commercial Spent Fuel Management Program. HYDRA-II provides a finite-difference solution in cartesian coordinates to the equations governing the conservation of mass, momentum, and energy. A cylindrical coordinate system may also be used to enclose the cartesian coordinate system. This exterior coordinate system is useful for modeling cylindrical cask bodies. The difference equations for conservation of momentum incorporate directional porosities and permeabilities that are available to model solid structures whose dimensions may be smaller than the computational mesh. The equation for conservation of energy permits modeling of orthotropic physical properties and film resistances. Several automated methods are available to model radiation transfer within enclosures and from fuel rod to fuel rod. The documentation of HYDRA-II is presented in three separate volumes. Volume 1 - Equations and Numerics describes the basic differential equations, illustrates how the difference equations are formulated, and gives the solution procedures employed. This volume, Volume 2 - User's Manual, contains code flow charts, discusses the code structure, provides detailed instructions for preparing an input file, and illustrates the operation of the code by means of a sample problem. The final volume, Volume 3 - Verification/Validation Assessments, provides a comparison between the analytical solution and the numerical simulation for problems with a known solution. 6 refs.

  7. Neural Energy Supply-Consumption Properties Based on Hodgkin-Huxley Model

    PubMed Central

    2017-01-01

    Electrical activity is the foundation of the neural system. Coding theories that describe neural electrical activity by the roles of action potential timing or frequency have been thoroughly studied. However, an alternative method to study coding questions is the energy method, which is more global and economical. In this study, we clearly defined and calculated neural energy supply and consumption based on the Hodgkin-Huxley model, during the firing of action potentials and subthreshold activities, using ion-counting and power-integral models. Furthermore, we analyzed the energy properties of each ion channel and found that, under the two circumstances, the power synchronization of ion channels and the energy utilization ratio differ significantly. This is particularly true of the energy utilization ratio, which can rise above 100% during subthreshold activity, revealing an overdraft property of energy use. These findings demonstrate the distinct status of the energy properties during neuronal firing and subthreshold activities. Meanwhile, after introducing a synapse energy model, this research can be generalized to the energy calculation of a neural network. This is potentially important for understanding the relationship between dynamical network activities and cognitive behaviors. PMID:28316842
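
    A compact sketch of the ion-counting approach mentioned above: integrate a standard single-compartment Hodgkin-Huxley neuron, accumulate the inward Na+ charge over the run (dominated by the spike), and convert it to an ATP cost assuming the Na+/K+ pump extrudes three Na+ ions per ATP. The HH parameters are the textbook values; the stimulus and the ATP conversion are illustrative assumptions, not the models of the papers above.

        import numpy as np

        # Standard Hodgkin-Huxley parameters (mS/cm^2, mV, uF/cm^2).
        g_na, g_k, g_l = 120.0, 36.0, 0.3
        e_na, e_k, e_l = 50.0, -77.0, -54.387
        c_m = 1.0

        def rates(v):
            a_m = 0.1 * (v + 40.0) / (1.0 - np.exp(-(v + 40.0) / 10.0))
            b_m = 4.0 * np.exp(-(v + 65.0) / 18.0)
            a_h = 0.07 * np.exp(-(v + 65.0) / 20.0)
            b_h = 1.0 / (1.0 + np.exp(-(v + 35.0) / 10.0))
            a_n = 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))
            b_n = 0.125 * np.exp(-(v + 65.0) / 80.0)
            return a_m, b_m, a_h, b_h, a_n, b_n

        dt, t_end = 0.01, 50.0                      # ms
        v, m, h, n = -65.0, 0.05, 0.6, 0.32         # initial state near rest
        q_na = 0.0                                  # accumulated inward Na+ charge (uC/cm^2)
        for step in range(int(t_end / dt)):
            t = step * dt
            i_stim = 15.0 if 5.0 <= t <= 7.0 else 0.0      # brief suprathreshold pulse (uA/cm^2)
            i_na = g_na * m**3 * h * (v - e_na)
            i_k = g_k * n**4 * (v - e_k)
            i_l = g_l * (v - e_l)
            a_m, b_m, a_h, b_h, a_n, b_n = rates(v)
            v += dt * (i_stim - i_na - i_k - i_l) / c_m     # forward-Euler update
            m += dt * (a_m * (1.0 - m) - b_m * m)
            h += dt * (a_h * (1.0 - h) - b_h * h)
            n += dt * (a_n * (1.0 - n) - b_n * n)
            q_na += dt * max(-i_na, 0.0) * 1e-3             # uA/cm^2 * ms -> nC/cm^2 -> uC/cm^2

        na_ions_per_cm2 = q_na * 1e-6 / 1.602e-19           # ions per cm^2 of membrane
        atp_per_cm2 = na_ions_per_cm2 / 3.0                 # assumed 3 Na+ extruded per ATP
        print(f"Na+ charge: {q_na:.2f} uC/cm^2, ATP cost: {atp_per_cm2:.2e} molecules/cm^2")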

  8. Neutron Capture Gamma-Ray Libraries for Nuclear Applications

    NASA Astrophysics Data System (ADS)

    Sleaford, B. W.; Firestone, R. B.; Summers, N.; Escher, J.; Hurst, A.; Krticka, M.; Basunia, S.; Molnar, G.; Belgya, T.; Revay, Z.; Choi, H. D.

    2011-06-01

    The neutron capture reaction is useful in identifying and analyzing the gamma-ray spectrum from an unknown assembly, as it gives unambiguous information on its composition. This can be done passively or actively, where an external neutron source is used to probe an unknown assembly. There are known capture gamma-ray data gaps in the ENDF libraries used by transport codes for various nuclear applications. The Evaluated Gamma-ray Activation File (EGAF) is a new thermal neutron capture database of discrete line spectra and cross sections for over 260 isotopes that was developed as part of an IAEA Coordinated Research Project. EGAF is being used to improve the capture gamma production in ENDF libraries. For medium to heavy nuclei the quasi-continuum contribution to the gamma cascades is not experimentally resolved. The continuum contains up to 90% of all the decay energy and is modeled here with the statistical nuclear structure code DICEBOX. This code also provides a consistency check of the level-scheme nuclear structure evaluation. The calculated continuum is of sufficient accuracy to include in the ENDF libraries. This analysis also determines new total thermal capture cross sections and provides an improved RIPL database. For higher energy neutron capture there is less experimental data available, making benchmarking of the modeling codes more difficult. We are investigating the capture spectra from higher energy neutrons experimentally using surrogate reactions and modeling them with Hauser-Feshbach codes. This can then be used to benchmark CASINO, a version of DICEBOX modified for neutron capture at higher energy, which can simulate spectra from neutron capture at incident neutron energies up to 20 MeV to improve the gamma-ray spectrum in neutron data libraries used for transport modeling of unknown assemblies.

  9. Development of Automated Procedures to Generate Reference Building Models for ASHRAE Standard 90.1 and India’s Building Energy Code and Implementation in OpenStudio

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parker, Andrew; Haves, Philip; Jegi, Subhash

    This paper describes a software system for automatically generating a reference (baseline) building energy model from the proposed (as-designed) building energy model. This system is built using the OpenStudio Software Development Kit (SDK) and is designed to operate on building energy models in the OpenStudio file format.

  10. World Energy Projection System Plus Model Documentation: Greenhouse Gases Module

    EIA Publications

    2011-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) Greenhouse Gases Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  11. World Energy Projection System Plus Model Documentation: Natural Gas Module

    EIA Publications

    2011-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) Natural Gas Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  12. World Energy Projection System Plus Model Documentation: District Heat Module

    EIA Publications

    2017-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) District Heat Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  13. World Energy Projection System Plus Model Documentation: Industrial Module

    EIA Publications

    2016-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) World Industrial Model (WIM). It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  14. Assessment of the MHD capability in the ATHENA code using data from the ALEX facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roth, P.A.

    1989-03-01

    The ATHENA (Advanced Thermal Hydraulic Energy Network Analyzer) code is a system transient analysis code with multi-loop, multi-fluid capabilities, which is available to the fusion community at the National Magnetic Fusion Energy Computing Center (NMFECC). The work reported here assesses the ATHENA magnetohydrodynamic (MHD) pressure drop model for liquid metals flowing through a strong magnetic field. An ATHENA model was developed for two simple-geometry, adiabatic test sections used in the Argonne Liquid Metal Experiment (ALEX) at Argonne National Laboratory (ANL). The pressure drops calculated by ATHENA agreed well with the experimental results from the ALEX facility.

  15. Combined Modeling of Acceleration, Transport, and Hydrodynamic Response in Solar Flares. 1; The Numerical Model

    NASA Technical Reports Server (NTRS)

    Liu, Wei; Petrosian, Vahe; Mariska, John T.

    2009-01-01

    Acceleration and transport of high-energy particles and the fluid dynamics of atmospheric plasma are interrelated aspects of solar flares, but for convenience and simplicity they were artificially separated in the past. We present here self-consistently combined Fokker-Planck modeling of particles and hydrodynamic simulation of flare plasma. Energetic electrons are modeled with the Stanford unified code of acceleration, transport, and radiation, while the plasma is modeled with the Naval Research Laboratory flux tube code. We calculated the collisional heating rate directly from the particle transport code, which is more accurate than those in previous studies based on approximate analytical solutions. We repeated the simulation of Mariska et al. with an injection of power-law, downward-beamed electrons using the new heating rate. For this case, a ~10% difference was found from their old result. We also used a more realistic spectrum of injected electrons provided by the stochastic acceleration model, which has a smooth transition from a quasi-thermal background at low energies to a nonthermal tail at high energies. The inclusion of low-energy electrons results in relatively more heating in the corona (versus the chromosphere) and thus a larger downward heat conduction flux. The interplay of electron heating, conduction, and radiative loss leads to stronger chromospheric evaporation than obtained in previous studies, which had a deficit in low-energy electrons due to an arbitrarily assumed low-energy cutoff. The energy and spatial distributions of energetic electrons and bremsstrahlung photons bear signatures of the changing density distribution caused by chromospheric evaporation. In particular, the density jump at the evaporation front gives rise to enhanced emission, which, in principle, can be imaged by X-ray telescopes. This model can be applied to investigate a variety of high-energy processes in solar, space, and astrophysical plasmas.

  16. Nonperturbative methods in HZE ion transport

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Badavi, Francis F.; Costen, Robert C.; Shinn, Judy L.

    1993-01-01

    A nonperturbative analytic solution of the high charge and energy (HZE) Green's function is used to implement a computer code for laboratory ion beam transport. The code is established to operate on the Langley Research Center nuclear fragmentation model used in engineering applications. Computational procedures are established to generate linear energy transfer (LET) distributions for a specified ion beam and target for comparison with experimental measurements. The code is highly efficient and compares well with the perturbation approximations.

  17. Low-energy electron dose-point kernel simulations using new physics models implemented in Geant4-DNA

    NASA Astrophysics Data System (ADS)

    Bordes, Julien; Incerti, Sébastien; Lampe, Nathanael; Bardiès, Manuel; Bordage, Marie-Claude

    2017-05-01

    When low-energy electrons, such as Auger electrons, interact with liquid water, they induce highly localized ionizing energy depositions over ranges comparable to cell diameters. Monte Carlo track structure (MCTS) codes are suitable tools for performing dosimetry at this level. One of the main MCTS codes, Geant4-DNA, is equipped with only two sets of cross section models for low-energy electron interactions in liquid water ("option 2" and its improved version, "option 4"). To provide Geant4-DNA users with new alternative physics models, a set of cross sections extracted from the CPA100 MCTS code has been added to Geant4-DNA. This new version is hereafter referred to as "Geant4-DNA-CPA100". In this study, "Geant4-DNA-CPA100" was used to calculate low-energy electron dose-point kernels (DPKs) between 1 keV and 200 keV. Such kernels represent the radial energy deposited by an isotropic point source, a parameter that is useful for dosimetry calculations in nuclear medicine. In order to assess the influence of different physics models on DPK calculations, DPKs were calculated using the existing Geant4-DNA models ("option 2" and "option 4"), the newly integrated CPA100 models, and the PENELOPE Monte Carlo code used in step-by-step mode for monoenergetic electrons. Additionally, a comparison was performed of two sets of DPKs simulated with "Geant4-DNA-CPA100": the first set using Geant4's default settings, and the second using the original CPA100 code's default settings. A maximum difference of 9.4% was found between the Geant4-DNA-CPA100 and PENELOPE DPKs. Between the two existing Geant4-DNA models, slight differences were observed between 1 keV and 10 keV. It was highlighted that the DPKs simulated with the two existing Geant4-DNA models were always broader than those generated with "Geant4-DNA-CPA100". The discrepancies observed between the DPKs generated using Geant4-DNA's existing models and "Geant4-DNA-CPA100" were caused solely by their different cross sections. The different scoring and interpolation methods used in CPA100 and Geant4 to calculate DPKs showed differences close to 3.0% near the source.

  18. Metabolic Free Energy and Biological Codes: A 'Data Rate Theorem' Aging Model.

    PubMed

    Wallace, Rodrick

    2015-06-01

    A famous argument by Maturana and Varela (Autopoiesis and cognition. Reidel, Dordrecht, 1980) holds that the living state is cognitive at every scale and level of organization. Since it is possible to associate many cognitive processes with 'dual' information sources, pathologies can sometimes be addressed using statistical models based on the Shannon Coding, the Shannon-McMillan Source Coding, the Rate Distortion, and the Data Rate Theorems, which impose necessary conditions on information transmission and system control. Deterministic-but-for-error biological codes do not directly invoke cognition, but may be essential subcomponents within larger cognitive processes. A formal argument, however, places such codes within a similar framework, with metabolic free energy serving as a 'control signal' stabilizing biochemical code-and-translator dynamics in the presence of noise. Demand beyond available energy supply triggers punctuated destabilization of the coding channel, affecting essential biological functions. Aging, normal or prematurely driven by psychosocial or environmental stressors, must interfere with the routine operation of such mechanisms, initiating the chronic diseases associated with senescence. Amyloid fibril formation, intrinsically disordered protein logic gates, and cell surface glycan/lectin 'kelp bed' logic gates are reviewed from this perspective. The results generalize beyond coding machineries having easily recognizable symmetry modes, and strip a layer of mathematical complication from the study of phase transitions in nonequilibrium biological systems.

  19. Implementation of an anomalous radial transport model for continuum kinetic edge codes

    NASA Astrophysics Data System (ADS)

    Bodi, K.; Krasheninnikov, S. I.; Cohen, R. H.; Rognlien, T. D.

    2007-11-01

    Radial plasma transport in magnetic fusion devices is often dominated by plasma turbulence compared to neoclassical collisional transport. Continuum kinetic edge codes [such as the (2d,2v) transport version of TEMPEST and also EGK] compute the collisional transport directly, but there is a need to model the anomalous transport from turbulence for long-time transport simulations. Such a model is presented and results are shown for its implementation in the TEMPEST gyrokinetic edge code. The model includes velocity-dependent convection and diffusion coefficients expressed as Hermite polynomials in velocity. The Hermite coefficients can be set, e.g., by specifying the ratio of particle to energy transport as in fluid transport codes. The anomalous transport terms preserve the property of no particle flux into unphysical regions of velocity space. TEMPEST simulations are presented showing the separate control of particle and energy anomalous transport, and comparisons are made with neoclassical transport also included.
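
    A small sketch of representing a velocity-dependent anomalous diffusion coefficient as a low-order Hermite series and checking the resulting particle- and energy-weighted averages over a Maxwellian, which is the kind of ratio a fluid code would prescribe. The polynomial order, coefficients, and normalization are illustrative assumptions, not the TEMPEST implementation.

        import numpy as np
        from numpy.polynomial.hermite_e import hermeval

        v_t = 1.0                                   # thermal speed (normalized units)
        v = np.linspace(-5.0 * v_t, 5.0 * v_t, 2001)
        dv = v[1] - v[0]
        coeffs = [1.0, 0.0, 0.3]                    # D(v) = c0*He0 + c1*He1 + c2*He2 (assumed values)
        d_of_v = hermeval(v / v_t, coeffs)          # velocity-dependent diffusion coefficient

        # Maxwellian-weighted moments: particle-like (density) and energy-like (v^2/2) weights.
        f_m = np.exp(-0.5 * (v / v_t) ** 2) / np.sqrt(2.0 * np.pi * v_t ** 2)
        particle_avg = np.sum(d_of_v * f_m) * dv
        energy_avg = np.sum(d_of_v * 0.5 * v ** 2 * f_m) * dv
        print("particle-weighted D:", round(particle_avg, 3))
        print("energy-weighted D:  ", round(energy_avg, 3))
        print("energy/particle ratio:", round(energy_avg / particle_avg, 3))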

  20. Overview of the Graphical User Interface for the GERM Code (GCR Event-Based Risk Model)

    NASA Technical Reports Server (NTRS)

    Kim, Myung-Hee; Cucinotta, Francis A.

    2010-01-01

    The descriptions of biophysical events from heavy ions are of interest in radiobiology, cancer therapy, and space exploration. The passage of heavy ions in tissue and shielding materials is best described by a stochastic approach that includes both ion track structure and nuclear interactions. A new computer model called the GCR Event-based Risk Model (GERM) code was developed for the description of biophysical events from heavy ion beams at the NASA Space Radiation Laboratory (NSRL). The GERM code calculates basic physical and biophysical quantities of high-energy protons and heavy ions that have been studied at NSRL for the purpose of simulating space radiobiological effects. For mono-energetic beams, the code evaluates the linear energy transfer (LET), range (R), and absorption in tissue-equivalent material for a given charge (Z), mass number (A), and kinetic energy (E) of an ion. In addition, a set of biophysical properties is evaluated, such as the Poisson distribution of ion or delta-ray hits for a specified cellular area, cell survival curves, and mutation and tumor probabilities. The GERM code also calculates the radiation transport of the beam line at either a fixed number of user-specified depths or multiple positions along the Bragg curve of the particle. The contributions from the primary ion and nuclear secondaries are evaluated. The GERM code accounts for the major nuclear interaction processes of importance for describing heavy ion beams, including nuclear fragmentation, elastic scattering, and knockout-cascade processes, by using the quantum multiple scattering fragmentation (QMSFRG) model. The QMSFRG model has been shown to be in excellent agreement with available experimental data for nuclear fragmentation cross sections, and has been used by the GERM code for application to thick target experiments. The GERM code provides scientists participating in NSRL experiments with the data needed for the interpretation of their experiments, including the ability to model the beam line, the shielding of samples and sample holders, and the estimates of basic physical and biological outputs of the designed experiments. We present an overview of the GERM code GUI, as well as training applications.
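
    The Poisson hit statistics mentioned above follow directly from the beam fluence and the cell's projected area: the mean number of primary-ion traversals is µ = fluence × area, and P(k hits) = exp(-µ) µ^k / k!. A short sketch under assumed fluence and nuclear-area values (not GERM's internal data):

        import math

        def hit_probabilities(fluence_per_cm2, area_um2, k_max=5):
            """Poisson probabilities of k primary-ion traversals of a cell nucleus."""
            area_cm2 = area_um2 * 1e-8                 # 1 um^2 = 1e-8 cm^2
            mean_hits = fluence_per_cm2 * area_cm2
            probs = [math.exp(-mean_hits) * mean_hits**k / math.factorial(k)
                     for k in range(k_max + 1)]
            return mean_hits, probs

        # Illustrative values: 2e6 ions/cm^2 beam fluence and a 100 um^2 nuclear area.
        mu, probs = hit_probabilities(2e6, 100.0)
        print(f"mean hits per nucleus: {mu:.2f}")
        for k, p in enumerate(probs):
            print(f"P({k} hits) = {p:.3f}")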

  1. MARS15

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mokhov, Nikolai

    MARS is a Monte Carlo code for inclusive and exclusive simulation of three-dimensional hadronic and electromagnetic cascades, muon, heavy-ion and low-energy neutron transport in accelerator, detector, spacecraft and shielding components in the energy range from a fraction of an electronvolt up to 100 TeV. Recent developments in the MARS15 physical models of hadron, heavy-ion and lepton interactions with nuclei and atoms include a new nuclear cross section library, a model for soft pion production, the cascade-exciton model, the quark gluon string models, deuteron-nucleus and neutrino-nucleus interaction models, detailed description of negative hadron and muon absorption and a unified treatment of muon, charged hadron and heavy-ion electromagnetic interactions with matter. New algorithms are implemented into the code and thoroughly benchmarked against experimental data. The code capabilities to simulate cascades and generate a variety of results in complex media have been also enhanced. Other changes in the current version concern the improved photo- and electro-production of hadrons and muons, improved algorithms for the 3-body decays, particle tracking in magnetic fields, synchrotron radiation by electrons and muons, significantly extended histograming capabilities and material description, and improved computational performance. In addition to direct energy deposition calculations, a new set of fluence-to-dose conversion factors for all particles including neutrino are built into the code. The code includes new modules for calculation of Displacement-per-Atom and nuclide inventory. The powerful ROOT geometry and visualization model implemented in MARS15 provides a large set of geometrical elements with a possibility of producing composite shapes and assemblies and their 3D visualization along with a possible import/export of geometry descriptions created by other codes (via the GDML format) and CAD systems (via the STEP format). The built-in MARS-MAD Beamline Builder (MMBLB) was redesigned for use with the ROOT geometry package that allows a very efficient and highly-accurate description, modeling and visualization of beam loss induced effects in arbitrary beamlines and accelerator lattices. The MARS15 code includes links to the MCNP-family codes for neutron and photon production and transport below 20 MeV, to the ANSYS code for thermal and stress analyses and to the STRUCT code for multi-turn particle tracking in large synchrotrons and collider rings.

  2. Applications of the microdosimetric function implemented in the macroscopic particle transport simulation code PHITS.

    PubMed

    Sato, Tatsuhiko; Watanabe, Ritsuko; Sihver, Lembit; Niita, Koji

    2012-01-01

    Microdosimetric quantities such as lineal energy are generally considered to be better indices than linear energy transfer (LET) for expressing the relative biological effectiveness (RBE) of high charge and energy particles. To calculate their probability densities (PD) in macroscopic matter, it is necessary to integrate microdosimetric tools such as track-structure simulation codes with macroscopic particle transport simulation codes. As an integration approach, the mathematical model for calculating the PD of microdosimetric quantities developed based on track-structure simulations was incorporated into the macroscopic particle transport simulation code PHITS (Particle and Heavy Ion Transport code System). The improved PHITS enables the PD in macroscopic matter to be calculated within a reasonable computation time, while taking their stochastic nature into account. The microdosimetric function of PHITS was applied to biological dose estimation for charged-particle therapy and risk estimation for astronauts. The former application was performed in combination with the microdosimetric kinetic model, while the latter employed the radiation quality factor expressed as a function of lineal energy. Owing to the unique features of the microdosimetric function, the improved PHITS has the potential to establish more sophisticated systems for radiological protection in space as well as for the treatment planning of charged-particle therapy.
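
    Given a lineal-energy probability density f(y) of the kind the PHITS microdosimetric function tallies, the frequency-mean and dose-mean lineal energies follow as yF = ∫ y f(y) dy and yD = ∫ y d(y) dy with d(y) = y f(y)/yF. A short sketch using a synthetic log-normal f(y) as a stand-in for a tallied distribution:

        import numpy as np

        def lineal_energy_means(y, f_y):
            """Frequency- and dose-mean lineal energy from a sampled density f(y)."""
            dy = np.diff(y)
            y_mid = 0.5 * (y[:-1] + y[1:])
            f_mid = 0.5 * (f_y[:-1] + f_y[1:])
            norm = np.sum(f_mid * dy)                    # ~1 for a properly normalized density
            y_f = np.sum(y_mid * f_mid * dy) / norm      # frequency-mean lineal energy
            d_y = y_mid * f_mid / y_f                    # dose distribution d(y) (unnormalized)
            y_d = np.sum(y_mid * d_y * dy) / norm        # dose-mean lineal energy
            return y_f, y_d

        # Synthetic log-normal f(y) in keV/um, standing in for a PHITS-tallied PD.
        y = np.logspace(-1, 3, 400)                      # 0.1 to 1000 keV/um
        sigma, median = 0.8, 10.0
        f_y = np.exp(-0.5 * ((np.log(y) - np.log(median)) / sigma) ** 2) \
              / (y * sigma * np.sqrt(2.0 * np.pi))
        y_f, y_d = lineal_energy_means(y, f_y)
        print(f"frequency-mean yF = {y_f:.2f} keV/um, dose-mean yD = {y_d:.2f} keV/um")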

  3. BRYNTRN: A baryon transport model

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Townsend, Lawrence W.; Nealy, John E.; Chun, Sang Y.; Hong, B. S.; Buck, Warren W.; Lamkin, S. L.; Ganapol, Barry D.; Khan, Ferdous; Cucinotta, Francis A.

    1989-01-01

    The development of an interaction data base and a numerical solution to the transport of baryons through an arbitrary shield material based on a straight ahead approximation of the Boltzmann equation are described. The code is most accurate for continuous energy boundary values, but gives reasonable results for discrete spectra at the boundary using even a relatively coarse energy grid (30 points) and large spatial increments (1 cm in H2O). The resulting computer code is self-contained, efficient and ready to use. The code requires only a very small fraction of the computer resources required for Monte Carlo codes.

  4. Liquid rocket combustor computer code development

    NASA Technical Reports Server (NTRS)

    Liang, P. Y.

    1985-01-01

    The Advanced Rocket Injector/Combustor Code (ARICC), developed to model the complete chemical/fluid/thermal processes occurring inside rocket combustion chambers, is highlighted. The code, derived from the CONCHAS-SPRAY code originally developed at Los Alamos National Laboratory, incorporates powerful features such as the ability to model complex injector combustion chamber geometries, Lagrangian tracking of droplets, full chemical equilibrium and kinetic reactions for multiple species, a fractional volume of fluid (VOF) description of liquid jet injection in addition to the gaseous phase fluid dynamics, and turbulent mass, energy, and momentum transport. Atomization and droplet dynamic models from earlier generation codes are transplanted into the present code. Currently, ARICC is specialized for liquid oxygen/hydrogen propellants, although other fuel/oxidizer pairs can be easily substituted.

  5. Coupling Legacy and Contemporary Deterministic Codes to Goldsim for Probabilistic Assessments of Potential Low-Level Waste Repository Sites

    NASA Astrophysics Data System (ADS)

    Mattie, P. D.; Knowlton, R. G.; Arnold, B. W.; Tien, N.; Kuo, M.

    2006-12-01

    Sandia National Laboratories (Sandia), a U.S. Department of Energy National Laboratory, has over 30 years of experience in radioactive waste disposal and is providing assistance internationally in a number of areas relevant to the safety assessment of radioactive waste disposal systems. International technology transfer efforts are often hampered by small budgets, time schedule constraints, and a lack of experienced personnel in countries with small radioactive waste disposal programs. In an effort to surmount these difficulties, Sandia has developed a system that utilizes a combination of commercially available codes and existing legacy codes for probabilistic safety assessment modeling that facilitates the technology transfer and maximizes limited available funding. Numerous codes developed and endorsed by the United States Nuclear Regulatory Commission and codes developed and maintained by the United States Department of Energy are generally available to foreign countries after addressing import/export control and copyright requirements. From a programmatic view, it is easier to utilize existing codes than to develop new codes. From an economic perspective, it is not possible for most countries with small radioactive waste disposal programs to maintain complex software which meets the rigors of both domestic regulatory requirements and international peer review. Therefore, re-vitalization of deterministic legacy codes, as well as an adaptation of contemporary deterministic codes, provides a credible and solid computational platform for constructing probabilistic safety assessment models. External model linkage capabilities in Goldsim and the techniques applied to facilitate this process will be presented using example applications, including Breach, Leach, and Transport-Multiple Species (BLT-MS), a U.S. NRC-sponsored code simulating release and transport of contaminants from a subsurface low-level waste disposal facility, used in a cooperative technology transfer project between Sandia National Laboratories and Taiwan's Institute of Nuclear Energy Research (INER) for the preliminary assessment of several candidate low-level waste repository sites. Sandia National Laboratories is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy under Contract DE-AC04-94AL85000.

  6. Nuclear Energy Advanced Modeling and Simulation Waste Integrated Performance and Safety Codes (NEAMS Waste IPSC).

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schultz, Peter Andrew

    The objective of the U.S. Department of Energy Office of Nuclear Energy Advanced Modeling and Simulation Waste Integrated Performance and Safety Codes (NEAMS Waste IPSC) is to provide an integrated suite of computational modeling and simulation (M&S) capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive-waste storage facility or disposal repository. Achieving the objective of modeling the performance of a disposal scenario requires describing processes involved in waste form degradation and radionuclide release at the subcontinuum scale, beginning with mechanistic descriptions of chemical reactions and chemical kinetics at the atomic scale, and upscaling into effective, validated constitutive models for input to high-fidelity continuum scale codes for coupled multiphysics simulations of release and transport. Verification and validation (V&V) is required throughout the system to establish evidence-based metrics for the level of confidence in M&S codes and capabilities, including at the subcontinuum scale and the constitutive models they inform or generate. This report outlines the nature of the V&V challenge at the subcontinuum scale, an approach to incorporate V&V concepts into subcontinuum scale modeling and simulation (M&S), and a plan to incrementally incorporate effective V&V into subcontinuum scale M&S destined for use in the NEAMS Waste IPSC work flow to meet requirements of quantitative confidence in the constitutive models informed by subcontinuum scale phenomena.

  7. Approximate Green's function methods for HZE transport in multilayered materials

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Badavi, Francis F.; Shinn, Judy L.; Costen, Robert C.

    1993-01-01

    A nonperturbative analytic solution of the high charge and energy (HZE) Green's function is used to implement a computer code for laboratory ion beam transport in multilayered materials. The code is established to operate on the Langley nuclear fragmentation model used in engineering applications. Computational procedures are established to generate linear energy transfer (LET) distributions for a specified ion beam and target for comparison with experimental measurements. The code was found to be highly efficient and compared well with the perturbation approximation.

  8. Multipacting studies in elliptic SRF cavities

    NASA Astrophysics Data System (ADS)

    Prakash, Ram; Jana, Arup Ratan; Kumar, Vinit

    2017-09-01

    Multipacting is a resonant process in which the number of unwanted electrons resulting from a parasitic discharge rapidly grows to a large value at some specific locations in a radio-frequency cavity. This results in a degradation of the cavity performance indicators (e.g. the quality factor Q and the maximum achievable accelerating gradient Eacc), and in the case of a superconducting radiofrequency (SRF) cavity, it leads to a quenching of superconductivity. Numerical simulations are essential to pre-empt the possibility of multipacting in SRF cavities, such that the design can be suitably refined to avoid this performance-limiting phenomenon. Readily available computer codes (e.g. FishPact, MultiPac, CST-PIC, etc.) are widely used to simulate the phenomenon of multipacting in such cases. Most of the contemporary two-dimensional (2D) codes such as FishPact and MultiPac are unable to detect the multipacting in elliptic cavities because they use a simplistic secondary emission model, where it is assumed that all the secondary electrons are emitted with the same energy. Some three-dimensional (3D) codes such as CST-PIC, which use a more realistic secondary emission model (the Furman model) by following a probability distribution for the emission energy of secondary electrons, are able to correctly predict the occurrence of multipacting. These 3D codes however require large data handling and are slower than the 2D codes. In this paper, we report a detailed analysis of the multipacting phenomenon in elliptic SRF cavities and the development of a 2D code to numerically simulate this phenomenon by employing the Furman model for the secondary emission process. Since our code is 2D, it is faster than the 3D codes. It is however as accurate as the contemporary 3D codes, since it uses the Furman model for secondary emission. We have also explored the possibility to further simplify the Furman model, which enables us to quickly estimate the growth rate of multipacting without performing any multi-particle simulation. This methodology has been employed along with the computer code for a detailed analysis of multipacting in βg = 0.61 and βg = 0.9, 650 MHz elliptic SRF cavities that we have recently designed for the medium and high energy sections of the proposed Indian Spallation Neutron Source (ISNS) project.
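
    A rough sketch of the kind of quick growth-rate estimate alluded to above: an assumed impact energy for a two-point resonance is fed into a Vaughan-type secondary emission yield curve, and the exponential growth rate follows from one impact per half RF period. The SEY parameters, impact energy, and resonance order are illustrative assumptions, not the simplified Furman model of the paper.

        import math

        def vaughan_sey(energy_ev, delta_max=2.0, e_max_ev=300.0, e0_ev=12.5):
            """Vaughan-type secondary emission yield (illustrative niobium-like parameters)."""
            if energy_ev <= e0_ev:
                return 0.0
            w = (energy_ev - e0_ev) / (e_max_ev - e0_ev)
            k = 0.56 if w < 1.0 else 0.25               # approximate shape exponents
            return delta_max * (w * math.exp(1.0 - w)) ** k

        f_rf = 650e6                     # RF frequency (Hz)
        order = 1                        # two-point multipacting order (one impact per half period)
        impact_energy_ev = 150.0         # assumed impact energy from the resonance condition
        delta = vaughan_sey(impact_energy_ev)
        dt_impact = order / (2.0 * f_rf)             # time between impacts (s)
        growth_rate = math.log(delta) / dt_impact    # 1/s; > 0 means the electron population grows
        print(f"SEY = {delta:.2f}, growth rate = {growth_rate:.2e} 1/s "
              f"({'multipacting possible' if delta > 1.0 else 'no growth'})")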

  9. Technical Support Document for Version 3.6.1 of the COMcheck Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bartlett, Rosemarie; Connell, Linda M.; Gowri, Krishnan

    2009-09-29

    This technical support document (TSD) is designed to explain the technical basis for the COMcheck software as originally developed based on the ANSI/ASHRAE/IES Standard 90.1-1989 (Standard 90.1-1989). Documentation for other national model codes and standards and specific state energy codes supported in COMcheck has been added to this report as appendices. These appendices are intended to provide technical documentation for features specific to the supported codes and for any changes made for state-specific codes that differ from the standard features that support compliance with the national model codes and standards.

  10. Spatially-Dependent Modelling of Pulsar Wind Nebula G0.9+0.1

    NASA Astrophysics Data System (ADS)

    van Rensburg, C.; Krüger, P. P.; Venter, C.

    2018-03-01

    We present results from a leptonic emission code that models the spectral energy distribution of a pulsar wind nebula by solving a Fokker-Planck-type transport equation and calculating inverse Compton and synchrotron emissivities. We have created this time-dependent, multi-zone model to investigate changes in the particle spectrum as they traverse the pulsar wind nebula, by considering a time and spatially-dependent B-field, spatially-dependent bulk particle speed implying convection and adiabatic losses, diffusion, as well as radiative losses. Our code predicts the radiation spectrum at different positions in the nebula, yielding the surface brightness versus radius and the nebular size as a function of energy. We compare our new model against more basic models using the observed spectrum of pulsar wind nebula G0.9+0.1, incorporating data from H.E.S.S. as well as radio and X-ray experiments. We show that simultaneously fitting the spectral energy distribution and the energy-dependent source size leads to more stringent constraints on several model parameters.

  11. Spatially dependent modelling of pulsar wind nebula G0.9+0.1

    NASA Astrophysics Data System (ADS)

    van Rensburg, C.; Krüger, P. P.; Venter, C.

    2018-07-01

    We present results from a leptonic emission code that models the spectral energy distribution of a pulsar wind nebula by solving a Fokker-Planck-type transport equation and calculating inverse Compton and synchrotron emissivities. We have created this time-dependent, multizone model to investigate changes in the particle spectrum as the particles traverse the pulsar wind nebula, by considering a time- and spatially dependent B-field, a spatially dependent bulk particle speed implying convection and adiabatic losses, diffusion, as well as radiative losses. Our code predicts the radiation spectrum at different positions in the nebula, yielding the surface brightness versus radius and the nebular size as a function of energy. We compare our new model against more basic models using the observed spectrum of pulsar wind nebula G0.9+0.1, incorporating data from H.E.S.S. as well as radio and X-ray experiments. We show that simultaneously fitting the spectral energy distribution and the energy-dependent source size leads to more stringent constraints on several model parameters.

  12. Verification and Validation: High Charge and Energy (HZE) Transport Codes and Future Development

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Tripathi, Ram K.; Mertens, Christopher J.; Blattnig, Steve R.; Clowdsley, Martha S.; Cucinotta, Francis A.; Tweed, John; Heinbockel, John H.; Walker, Steven A.; Nealy, John E.

    2005-01-01

    In the present paper, we give the formalism for further developing a fully three-dimensional HZETRN code using marching procedures, and the development of a new Green's function code is also discussed. The final Green's function code is capable of validation not only in the space environment but also in ground-based laboratories with directed beams of ions of specific energy, characterized with detailed diagnostic particle spectrometer devices. Special emphasis is given to verification of the computational procedures and validation of the resultant computational model using laboratory and spaceflight measurements. Due to historical requirements, two parallel development paths for computational model implementation, using marching procedures and Green's function techniques, are followed. A new version of the HZETRN code capable of simulating HZE ions with either laboratory or space boundary conditions is under development. Validation of computational models at this time is particularly important for President Bush's Initiative to develop infrastructure for human exploration, with a first target demonstration of the Crew Exploration Vehicle (CEV) in low Earth orbit in 2008.

  13. Stochastic analog neutron transport with TRIPOLI-4 and FREYA: Bayesian uncertainty quantification for neutron multiplicity counting

    DOE PAGES

    Verbeke, J. M.; Petit, O.

    2016-06-01

    From nuclear safeguards to homeland security applications, the need for better modeling of nuclear interactions has grown over the past decades. Current Monte Carlo radiation transport codes compute average quantities with great accuracy and performance; however, performance and averaging come at the price of limited interaction-by-interaction modeling. These codes often lack the capability of modeling interactions exactly: for a given collision, energy is not conserved, energies of emitted particles are uncorrelated, and multiplicities of prompt fission neutrons and photons are uncorrelated. Many modern applications require more exclusive quantities than averages, such as the fluctuations in certain observables (e.g., the neutron multiplicity) and correlations between neutrons and photons. In an effort to meet this need, the radiation transport Monte Carlo code TRIPOLI-4® was modified to provide a specific mode that models nuclear interactions in a fully analog way, replicating as much as possible the underlying physical process. Furthermore, the computational model FREYA (Fission Reaction Event Yield Algorithm) was coupled with TRIPOLI-4 to model complete fission events. As a result, FREYA automatically includes fluctuations as well as correlations resulting from conservation of energy and momentum.
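    The value of analog, event-by-event modeling can be illustrated with a toy comparison between an average-multiplicity treatment and direct sampling of a prompt-neutron multiplicity distribution; the P(nu) values below are hypothetical placeholders, not FREYA output.

      # Toy comparison of average vs. sampled ("analog") prompt-neutron multiplicity.
      # P_NU is a made-up multiplicity distribution used only to show that sampling
      # preserves event-by-event fluctuations that an average-only treatment discards.
      import random
      from collections import Counter

      P_NU = {0: 0.03, 1: 0.16, 2: 0.32, 3: 0.30, 4: 0.14, 5: 0.05}   # assumed P(nu)
      NU_BAR = sum(nu * p for nu, p in P_NU.items())

      def sample_nu(rng):
          r, cum = rng.random(), 0.0
          for nu, p in P_NU.items():
              cum += p
              if r < cum:
                  return nu
          return max(P_NU)

      rng = random.Random(1)
      events = [sample_nu(rng) for _ in range(100_000)]
      counts = Counter(events)
      print(f"sampled mean = {sum(events) / len(events):.3f}   nu_bar = {NU_BAR:.3f}")
      print("recovered P(nu):", {nu: round(counts[nu] / len(events), 3) for nu in sorted(counts)})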

  14. NEAMS Update. Quarterly Report for October - December 2011.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bradley, K.

    2012-02-16

    The Advanced Modeling and Simulation Office within the DOE Office of Nuclear Energy (NE) has been charged with revolutionizing the design tools used to build nuclear power plants during the next 10 years. To accomplish this, the DOE has brought together the national laboratories, U.S. universities, and the nuclear energy industry to establish the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Program. The mission of NEAMS is to modernize computer modeling of nuclear energy systems and improve the fidelity and validity of modeling results using contemporary software environments and high-performance computers. NEAMS will create a set of engineering-level codes aimed at designing and analyzing the performance and safety of nuclear power plants and reactor fuels. The truly predictive nature of these codes will be achieved by modeling the governing phenomena at the spatial and temporal scales that dominate the behavior. These codes will be executed within a simulation environment that orchestrates code integration with respect to spatial meshing, computational resources, and execution to give the user a common 'look and feel' for setting up problems and displaying results. NEAMS is building upon a suite of existing simulation tools, including those developed by the federal Scientific Discovery through Advanced Computing and Advanced Simulation and Computing programs. NEAMS also draws upon existing simulation tools for materials and nuclear systems, although many of these are limited in terms of scale, applicability, and portability (their ability to be integrated into contemporary software and hardware architectures). NEAMS investments have directly and indirectly supported additional NE research and development programs, including those devoted to waste repositories, safeguarded separations systems, and long-term storage of used nuclear fuel. NEAMS is organized into two broad efforts, each comprising four elements. The quarterly highlights for October-December 2011 are: (1) Version 1.0 of AMP, the fuel assembly performance code, was tested on the JAGUAR supercomputer and released on November 1, 2011; a detailed discussion of this new simulation tool is given. (2) A coolant sub-channel model and a preliminary UO2 smeared-cracking model were implemented in BISON, the single-pin fuel code; more information on how these models were developed and benchmarked is given. (3) The Object Kinetic Monte Carlo model was implemented to account for nucleation events in meso-scale simulations, and a discussion of the significance of this advance is given. (4) The SHARP neutronics module, PROTEUS, was expanded to be applicable to all types of reactors, and a discussion of the importance of PROTEUS is given. (5) A plan has been finalized for integrating the high-fidelity, three-dimensional reactor code SHARP with both the systems-level code RELAP7 and the fuel assembly code AMP; this is a new initiative. (6) Work began to evaluate the applicability of AMP to the problem of dry storage of used fuel and to define a relevant problem to test the applicability. (7) A code to obtain phonon spectra from the force-constant matrix for a crystalline lattice has been completed; this important bridge between subcontinuum and continuum phenomena is discussed. (8) Benchmarking was begun on the meso-scale, finite-element fuels code MARMOT to validate its new variable-splitting algorithm. (9) A very computationally demanding simulation of diffusion-driven nucleation of new microstructural features has been completed; an explanation of the difficulty of this simulation is given. (10) Experiments were conducted with deformed steel to validate a crystal plasticity finite-element code for body-centered cubic iron. (11) The Capability Transfer Roadmap was completed and published as an internal laboratory technical report. (12) The AMP fuel assembly code input generator was integrated into the NEAMS Integrated Computational Environment (NiCE); more details on the planned NEAMS computing environment are given. (13) The NEAMS program website (neams.energy.gov) is nearly ready to launch.

  15. Total reaction cross sections in CEM and MCNP6 at intermediate energies

    DOE PAGES

    Kerby, Leslie M.; Mashnik, Stepan G.

    2015-05-14

    Accurate total reaction cross section models are important to achieving reliable predictions from spallation and transport codes. The latest version of the Cascade Exciton Model (CEM) as incorporated in the code CEM03.03, and the Monte Carlo N-Particle transport code (MCNP6), both developed at Los Alamos National Laboratory (LANL), each use such cross sections. Having accurate total reaction cross section models in the intermediate energy region (50 MeV to 5 GeV) is very important for different applications, including analysis of space environments, use in medical physics, and accelerator design, to name just a few. The current inverse cross sections used in the preequilibrium and evaporation stages of CEM are based on the Dostrovsky et al. model, published in 1959. Better cross section models are now available. Implementing better cross section models in CEM and MCNP6 should yield improved predictions for particle spectra and total production cross sections, among other results.
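    As a point of reference for the energy range discussed above, a textbook-style geometric estimate with a Coulomb-barrier correction is sketched below; it is for orientation only and is neither the Dostrovsky et al. parameterization used in CEM nor any of the newer models.

      # Illustrative geometric reaction cross section with a Coulomb-barrier factor.
      # The radius parameter and the proton-on-27Al example are assumed values; this
      # sketch is not the Dostrovsky et al. (1959) model nor the CEM/MCNP6 update.
      import math

      R0 = 1.2     # fm, assumed radius parameter
      E2 = 1.44    # MeV*fm, Coulomb constant e^2/(4*pi*eps0)

      def sigma_reaction_mb(a_p, z_p, a_t, z_t, e_cm_mev):
          """Geometric sigma_R in millibarns (1 fm^2 = 10 mb)."""
          radius = R0 * (a_p ** (1 / 3) + a_t ** (1 / 3))   # interaction radius (fm)
          barrier = E2 * z_p * z_t / radius                 # Coulomb barrier (MeV)
          if e_cm_mev <= barrier:
              return 0.0
          return 10.0 * math.pi * radius**2 * (1.0 - barrier / e_cm_mev)

      if __name__ == "__main__":
          for e in (50, 100, 500, 1000):                    # intermediate energies (MeV)
              print(f"p + 27Al, E_cm = {e:5d} MeV : sigma_R ~ {sigma_reaction_mb(1, 1, 27, 13, e):7.1f} mb")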

  16. Total reaction cross sections in CEM and MCNP6 at intermediate energies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerby, Leslie M.; Mashnik, Stepan G.

    Accurate total reaction cross section models are important to achieving reliable predictions from spallation and transport codes. The latest version of the Cascade Exciton Model (CEM) as incorporated in the code CEM03.03, and the Monte Carlo N-Particle transport code (MCNP6), both developed at Los Alamos National Laboratory (LANL), each use such cross sections. Having accurate total reaction cross section models in the intermediate energy region (50 MeV to 5 GeV) is very important for different applications, including analysis of space environments, use in medical physics, and accelerator design, to name just a few. The current inverse cross sections used in the preequilibrium and evaporation stages of CEM are based on the Dostrovsky et al. model, published in 1959. Better cross section models are now available. Implementing better cross section models in CEM and MCNP6 should yield improved predictions for particle spectra and total production cross sections, among other results.

  17. Final Technical Report for SBIR entitled Four-Dimensional Finite-Orbit-Width Fokker-Planck Code with Sources, for Neoclassical/Anomalous Transport Simulation of Ion and Electron Distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harvey, R. W.; Petrov, Yu. V.

    2013-12-03

    Within the US Department of Energy/Office of Fusion Energy magnetic fusion research program, there is an important whole-plasma-modeling need for a radio-frequency/neutral-beam-injection (RF/NBI) transport-oriented finite-difference Fokker-Planck (FP) code with combined capabilities for 4D (2R2V) geometry near the fusion plasma periphery, and computationally less demanding 3D (1R2V) bounce-averaged capabilities for plasma in the core of fusion devices. Proof-of-principle achievement of this goal was demonstrated in research carried out under Phase I of the SBIR award. Two DOE-sponsored codes, the CQL3D bounce-averaged Fokker-Planck code in which CompX has specialized, and the COGENT 4D, plasma-edge-oriented Fokker-Planck code which has been constructed by Lawrence Livermore National Laboratory and Lawrence Berkeley Laboratory scientists, were coupled. Coupling was achieved by using CQL3D-calculated velocity distributions, including an energetic tail resulting from NBI, as boundary conditions for the COGENT code over the two-dimensional velocity space on a spatial interface (flux) surface at a given radius near the plasma periphery. The finite-orbit-width fast ions from the CQL3D distributions penetrated into the peripheral plasma modeled by the COGENT code. This combined code demonstrates the feasibility of the proposed 3D/4D code. By combining these codes, the greatest computational efficiency is achieved subject to present modeling needs in toroidally symmetric magnetic fusion devices. The more efficient 3D code can be used in its regions of applicability, coupled to the more computationally demanding 4D code in higher-collisionality edge plasma regions where that extended capability is necessary for accurate representation of the plasma. More efficient code leads to greater use and utility of the model. An ancillary aim of the project is to make the combined 3D/4D code user friendly. Achievement of full coupling of these two Fokker-Planck codes will advance computational modeling of plasma devices important to the US DOE magnetic fusion energy program, in particular the DIII-D tokamak at General Atomics, San Diego, the NSTX spherical tokamak at Princeton, New Jersey, and the MST reversed-field pinch in Madison, Wisconsin. Validation studies of the code against experiments will improve understanding of physics important for magnetic fusion and will increase our design capabilities for achieving the goals of the International Tokamak Experimental Reactor (ITER) project, in which the US is a participant and which seeks to demonstrate at least a factor of five in fusion power production divided by input power.

  18. Some Notes on Neutron Up-Scattering and the Doppler-Broadening of High-Z Scattering Resonances

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parsons, Donald Kent

    When neutrons are scattered by target nuclei at elevated temperatures, it is entirely possible that the neutron will actually gain energy (i.e., up-scatter) from the interaction. This phenomenon is in addition to the more usual case of the neutron losing energy (i.e., down-scatter). Furthermore, the motion of the target nuclei can also cause extended neutron down-scattering, i.e., the neutrons can and do scatter to energies lower than predicted by the simple asymptotic models. In recent years, more attention has been given to temperature-dependent scattering cross sections for materials in neutron multiplying systems. This has led to the inclusion of neutron up-scatter in deterministic codes like Partisn and to free gas scattering models for material temperature effects in Monte Carlo codes like MCNP and cross section processing codes like NJOY. The free gas scattering models have the effect of Doppler broadening the scattering cross section output spectra in energy and angle. The current state of Doppler-broadening numerical techniques used at Los Alamos for scattering resonances will be reviewed, and suggestions will be made for further developments. The focus will be on the free gas scattering models currently in use and the development of new models to include high-Z resonance scattering effects. These models change the neutron up-scattering behavior.
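    A minimal free-gas sketch (not the NJOY or MCNP algorithm) is given below: target velocities are drawn from a Maxwellian weighted by the relative speed, the neutron is scattered isotropically in the centre-of-mass frame, and the fraction of up-scattering events is counted. The temperature, target mass, and neutron energy are assumed values.

      # Toy free-gas elastic scattering: sample target velocities from a Maxwellian
      # weighted by relative speed, scatter isotropically in the CM frame, and count
      # up-scattering events.  Assumed parameters; not the NJOY or MCNP treatment.
      import math
      import random

      def maxwellian_velocity(rng, kT, a_mass):
          sigma = math.sqrt(kT / a_mass)                 # per-component thermal speed (m_n = 1 units)
          return [rng.gauss(0.0, sigma) for _ in range(3)]

      def isotropic_direction(rng):
          mu = 2.0 * rng.random() - 1.0
          phi = 2.0 * math.pi * rng.random()
          s = math.sqrt(1.0 - mu * mu)
          return [s * math.cos(phi), s * math.sin(phi), mu]

      def scatter_once(rng, e_n, kT, a_mass):
          v_n = [math.sqrt(2.0 * e_n), 0.0, 0.0]         # neutron velocity (E = v^2 / 2)
          while True:                                    # relative-speed-weighted rejection
              v_t = maxwellian_velocity(rng, kT, a_mass)
              v_rel = math.dist(v_n, v_t)
              if rng.random() * (math.hypot(*v_n) + math.hypot(*v_t)) < v_rel:
                  break
          v_cm = [(v_n[i] + a_mass * v_t[i]) / (1.0 + a_mass) for i in range(3)]
          speed_cm = math.dist(v_n, v_cm)                # neutron speed in CM (unchanged by elastic scatter)
          d = isotropic_direction(rng)
          v_out = [v_cm[i] + speed_cm * d[i] for i in range(3)]
          return 0.5 * sum(c * c for c in v_out)

      rng = random.Random(0)
      kT, a_mass, e0, n = 0.025, 238.0, 1.0, 50_000      # kT and E in eV; U-238-like target
      up = sum(scatter_once(rng, e0, kT, a_mass) > e0 for _ in range(n))
      print(f"up-scatter fraction at E = {e0} eV, kT = {kT} eV: {up / n:.3f}")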

  19. Nuclear Energy Advanced Modeling and Simulation (NEAMS) waste Integrated Performance and Safety Codes (IPSC) : gap analysis for high fidelity and performance assessment code development.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Joon H.; Siegel, Malcolm Dean; Arguello, Jose Guadalupe, Jr.

    2011-03-01

    This report describes a gap analysis performed in the process of developing the Waste Integrated Performance and Safety Codes (IPSC) in support of the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The goal of the Waste IPSC is to develop an integrated suite of computational modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with rigorous verification, validation, and software quality requirements. The gap analyses documented in this report were performed during an initial gap analysis to identify candidate codes and tools to support the development and integration of the Waste IPSC, and during follow-on activities that delved into more detailed assessments of the various codes that were acquired, studied, and tested. The current Waste IPSC strategy is to acquire and integrate the necessary Waste IPSC capabilities wherever feasible, and to develop only those capabilities that cannot be acquired or suitably integrated, verified, or validated. The gap analysis indicates that significant capabilities may already exist in the existing THC codes, although there is no single code able to fully account for all physical and chemical processes involved in a waste disposal system. Large gaps exist in modeling chemical processes and their couplings with other processes. The coupling of chemical processes with flow transport and mechanical deformation remains challenging. The data for extreme environments (e.g., for elevated temperature and high ionic strength media) that are needed for repository modeling are severely lacking. In addition, most of the existing reactive transport codes were developed for non-radioactive contaminants, and they need to be adapted to account for radionuclide decay and in-growth. Accessibility to the source codes is generally limited. Because the problems of interest for the Waste IPSC are likely to result in relatively large computational models, a compact memory-usage footprint and a fast, robust solution procedure will be needed. A robust massively parallel processing (MPP) capability will also be required to provide reasonable turnaround times on the analyses that will be performed with the code. A performance assessment (PA) calculation for a waste disposal system generally requires a large number (hundreds to thousands) of model simulations to quantify the effect of model parameter uncertainties on the predicted repository performance. A set of codes for a PA calculation must be sufficiently robust and fast in terms of code execution. A PA system as a whole must be able to provide multiple alternative models for a specific set of physical/chemical processes, so that users can choose various levels of modeling complexity based on their modeling needs. This requires PA codes, preferably, to be highly modularized. Most of the existing codes have difficulties meeting these requirements. Based on the gap analysis results, we have made the following recommendations for code selection and code development for the NEAMS Waste IPSC: (1) build fully coupled high-fidelity THCMBR codes using the existing SIERRA codes (e.g., ARIA and ADAGIO) and platform, and (2) use DAKOTA to build an enhanced performance assessment system (EPAS), and build a modular code architecture and key code modules for performance assessments. The key chemical calculation modules will be built by expanding the existing CANTERA capabilities as well as by extracting useful components from other existing codes.

  20. Benchmark of neutron production cross sections with Monte Carlo codes

    NASA Astrophysics Data System (ADS)

    Tsai, Pi-En; Lai, Bo-Lun; Heilbronn, Lawrence H.; Sheu, Rong-Jiun

    2018-02-01

    Aiming to provide critical information in the fields of heavy ion therapy, radiation shielding in space, and facility design for heavy-ion research accelerators, the physics models in three Monte Carlo simulation codes - PHITS, FLUKA, and MCNP6 - were systematically benchmarked against fifteen sets of experimental data for neutron production cross sections, which include various combinations of 12C, 20Ne, 40Ar, 84Kr and 132Xe projectiles and natLi, natC, natAl, natCu, and natPb target nuclides at incident energies between 135 MeV/nucleon and 600 MeV/nucleon. For neutron energies above 60% of the specific projectile energy per nucleon, LAQGSM03.03 in MCNP6, JQMD/JQMD-2.0 in PHITS, and RQMD-2.4 in FLUKA all show better agreement with data in heavy-projectile systems than in light-projectile systems, suggesting that the collective properties of projectile nuclei and nucleon interactions in the nucleus should be considered for light projectiles. For intermediate-energy neutrons, whose energies are below 60% of the projectile energy per nucleon and above 20 MeV, FLUKA is likely to overestimate the secondary neutron production, while MCNP6 tends towards underestimation. PHITS with JQMD shows a mild tendency for underestimation, but the JQMD-2.0 model, with a modified physics description for central collisions, generally improves the agreement between data and calculations. For low-energy neutrons (below 20 MeV), which are dominated by the evaporation mechanism, PHITS (which uses GEM linked with JQMD and JQMD-2.0) and FLUKA both tend to overestimate the production cross section, whereas MCNP6 underestimates more systems than it overestimates. For total neutron production cross sections, the trends of the benchmark results over the entire energy range are similar to the trends seen in the dominant energy region. Also, the comparison of GEM coupled with either JQMD or JQMD-2.0 in the PHITS code indicates that the model used to describe the first stage of a nucleus-nucleus collision also affects low-energy neutron production. Thus, in this case, a proper combination of two physics models is needed to reproduce the measured results. In addition, code users should be aware that certain models consistently produce secondary neutrons within a constant fraction of another model in certain energy regions, which may be correlated with the different physics treatments in the different models.

  1. Inventory of Safety-related Codes and Standards for Energy Storage Systems with some Experiences related to Approval and Acceptance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Conover, David R.

    The purpose of this document is to identify laws, rules, model codes, codes, standards, regulations, and specifications (CSR) related to safety that could apply to stationary energy storage systems (ESS), and experiences to date in securing approval of ESS in relation to CSR. This information is intended to assist in securing approval of ESS under current CSR and in identifying new CSR, or revisions to existing CSR, and the necessary supporting research and documentation that can foster the deployment of safe ESS.

  2. An Initial Study of the Sensitivity of Aircraft Vortex Spacing System (AVOSS) Spacing Sensitivity to Weather and Configuration Input Parameters

    NASA Technical Reports Server (NTRS)

    Riddick, Stephen E.; Hinton, David A.

    2000-01-01

    A study has been performed on a computer code modeling an aircraft wake vortex spacing system during final approach. This code represents an initial engineering model of a system to calculate the reduced approach separation criteria needed to increase airport productivity. This report evaluates the model's sensitivity to various weather conditions (crosswind, crosswind variance, turbulent kinetic energy, and thermal gradient), code configurations (approach corridor option and wake demise definition), and post-processing techniques (rounding of provided spacing values and controller time variance).

  3. COMBINED MODELING OF ACCELERATION, TRANSPORT, AND HYDRODYNAMIC RESPONSE IN SOLAR FLARES. I. THE NUMERICAL MODEL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu Wei; Petrosian, Vahe; Mariska, John T.

    2009-09-10

    Acceleration and transport of high-energy particles and the fluid dynamics of atmospheric plasma are interrelated aspects of solar flares, but for convenience and simplicity they were artificially separated in the past. We present here self-consistently combined Fokker-Planck modeling of particles and hydrodynamic simulation of flare plasma. Energetic electrons are modeled with the Stanford unified code of acceleration, transport, and radiation, while the plasma is modeled with the Naval Research Laboratory flux tube code. We calculated the collisional heating rate directly from the particle transport code, which is more accurate than those in previous studies based on approximate analytical solutions. We repeated the simulation of Mariska et al. with an injection of power-law, downward-beamed electrons using the new heating rate. For this case, a ~10% difference was found from their old result. We also used a more realistic spectrum of injected electrons provided by the stochastic acceleration model, which has a smooth transition from a quasi-thermal background at low energies to a nonthermal tail at high energies. The inclusion of low-energy electrons results in relatively more heating in the corona (versus the chromosphere) and thus a larger downward heat conduction flux. The interplay of electron heating, conduction, and radiative loss leads to stronger chromospheric evaporation than obtained in previous studies, which had a deficit in low-energy electrons due to an arbitrarily assumed low-energy cutoff. The energy and spatial distributions of energetic electrons and bremsstrahlung photons bear signatures of the changing density distribution caused by chromospheric evaporation. In particular, the density jump at the evaporation front gives rise to enhanced emission, which, in principle, can be imaged by X-ray telescopes. This model can be applied to investigate a variety of high-energy processes in solar, space, and astrophysical plasmas.

  4. Assessment of the MHD capability in the ATHENA code using data from the ALEX (Argonne Liquid Metal Experiment) facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roth, P.A.

    1988-10-28

    The ATHENA (Advanced Thermal Hydraulic Energy Network Analyzer) code is a system transient analysis code with multi-loop, multi-fluid capabilities, which is available to the fusion community at the National Magnetic Fusion Energy Computing Center (NMFECC). The work reported here assesses the ATHENA magnetohydrodynamic (MHD) pressure drop model for liquid metals flowing through a strong magnetic field. An ATHENA model was developed for two simple-geometry, adiabatic test sections used in the Argonne Liquid Metal Experiment (ALEX) at Argonne National Laboratory (ANL). The pressure drops calculated by ATHENA agreed well with the experimental results from the ALEX facility. 13 refs., 4 figs., 2 tabs.

  5. Implementation of a 3D halo neutral model in the TRANSP code and application to projected NSTX-U plasmas

    NASA Astrophysics Data System (ADS)

    Medley, S. S.; Liu, D.; Gorelenkova, M. V.; Heidbrink, W. W.; Stagner, L.

    2016-02-01

    A 3D halo neutral code developed at the Princeton Plasma Physics Laboratory and implemented for analysis using the TRANSP code is applied to projected National Spherical Torus eXperiment-Upgrade (NSTX-U) plasmas. The legacy TRANSP code did not handle halo neutrals properly, since they were distributed over the plasma volume rather than remaining in the vicinity of the neutral beam footprint as is actually the case. The 3D halo neutral code uses a 'beam-in-a-box' model that encompasses both injected beam neutrals and the resulting halo neutrals. Upon deposition by charge exchange, a subset of the full, one-half and one-third beam energy components produce first-generation halo neutrals that are tracked through successive generations until an ionization event occurs or the descendant halos exit the box. The 3D halo neutral model and neutral particle analyzer (NPA) simulator in the TRANSP code have been benchmarked with the Fast-Ion D-Alpha simulation (FIDAsim) code, which provides Monte Carlo simulations of beam neutral injection, attenuation, halo generation, halo spatial diffusion, and photoemission processes. When using the same atomic physics database, TRANSP and FIDAsim simulations achieve excellent agreement on the spatial profile and magnitude of beam and halo neutral densities and the NPA energy spectrum. The simulations show that the halo neutral density can be comparable to the beam neutral density. These halo neutrals can double the NPA flux, but they have minor effects on the NPA energy spectrum shape. The TRANSP and FIDAsim simulations also suggest that the magnitudes of beam and halo neutral densities are relatively sensitive to the choice of the atomic physics databases.

  6. Implementation of a 3D halo neutral model in the TRANSP code and application to projected NSTX-U plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Medley, S. S.; Liu, D.; Gorelenkova, M. V.

    2016-01-12

    A 3D halo neutral code developed at the Princeton Plasma Physics Laboratory and implemented for analysis using the TRANSP code is applied to projected National Spherical Torus eXperiment-Upgrade (NSTX-U) plasmas. The legacy TRANSP code did not handle halo neutrals properly, since they were distributed over the plasma volume rather than remaining in the vicinity of the neutral beam footprint as is actually the case. The 3D halo neutral code uses a 'beam-in-a-box' model that encompasses both injected beam neutrals and the resulting halo neutrals. Upon deposition by charge exchange, a subset of the full, one-half and one-third beam energy components produce first-generation halo neutrals that are tracked through successive generations until an ionization event occurs or the descendant halos exit the box. The 3D halo neutral model and neutral particle analyzer (NPA) simulator in the TRANSP code have been benchmarked with the Fast-Ion D-Alpha simulation (FIDAsim) code, which provides Monte Carlo simulations of beam neutral injection, attenuation, halo generation, halo spatial diffusion, and photoemission processes. When using the same atomic physics database, TRANSP and FIDAsim simulations achieve excellent agreement on the spatial profile and magnitude of beam and halo neutral densities and the NPA energy spectrum. The simulations show that the halo neutral density can be comparable to the beam neutral density. These halo neutrals can double the NPA flux, but they have minor effects on the NPA energy spectrum shape. The TRANSP and FIDAsim simulations also suggest that the magnitudes of beam and halo neutral densities are relatively sensitive to the choice of the atomic physics databases.

  7. The CCONE Code System and its Application to Nuclear Data Evaluation for Fission and Other Reactions

    NASA Astrophysics Data System (ADS)

    Iwamoto, O.; Iwamoto, N.; Kunieda, S.; Minato, F.; Shibata, K.

    2016-01-01

    A computer code system, CCONE, was developed for nuclear data evaluation within the JENDL project. The CCONE code system integrates various nuclear reaction models needed to describe reactions induced by nucleons, light charged nuclei up to alpha particles, and photons. The code is written in the C++ programming language using object-oriented technology. It was first applied to neutron-induced reaction data on actinides, which were compiled into the JENDL Actinide File 2008 and JENDL-4.0. It has since been used extensively in various nuclear data evaluations for both actinide and non-actinide nuclei. The CCONE code has been upgraded for nuclear data evaluation at higher incident energies for neutron-, proton-, and photon-induced reactions. It was also used for estimating β-delayed neutron emission. This paper describes the CCONE code system, indicating the concept and design of the coding and inputs. Details of the formulation for modeling the direct, pre-equilibrium and compound reactions are presented. Applications to nuclear data evaluations such as neutron-induced reactions on actinides and medium-heavy nuclei, high-energy nucleon-induced reactions, photonuclear reactions and β-delayed neutron emission are mentioned.

  8. OWL: A code for the two-center shell model with spherical Woods-Saxon potentials

    NASA Astrophysics Data System (ADS)

    Diaz-Torres, Alexis

    2018-03-01

    A Fortran-90 code for solving the two-center nuclear shell model problem is presented. The model is based on two spherical Woods-Saxon potentials and the potential separable expansion method. It describes the single-particle motion in low-energy nuclear collisions, and is useful for characterizing a broad range of phenomena from fusion to nuclear molecular structures.

  9. Modeling anomalous radial transport in kinetic transport codes

    NASA Astrophysics Data System (ADS)

    Bodi, K.; Krasheninnikov, S. I.; Cohen, R. H.; Rognlien, T. D.

    2009-11-01

    Anomalous transport is typically the dominant component of radial transport in magnetically confined plasmas, where the physical origin of this transport is believed to be plasma turbulence. A model is presented for anomalous transport that can be used in continuum kinetic edge codes like TEMPEST, NEO and the next-generation code being developed by the Edge Simulation Laboratory. The model can also be adapted to particle-based codes. It is demonstrated that the model, with velocity-dependent diffusion and convection terms, can match a diagonal gradient-driven transport matrix as found in contemporary fluid codes, but can also include off-diagonal effects. The anomalous transport model is also combined with particle drifts and a particle- and energy-conserving Krook collision operator to study possible synergistic effects with neoclassical transport. For the latter study, a velocity-independent anomalous diffusion coefficient is used to mimic the effect of long-wavelength E×B turbulence.
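    Schematically, the kind of anomalous radial flux described above can be written (in illustrative notation, not the exact form used in the codes) as

      \[
      \Gamma_a(\psi, \mathbf{v}) = -D(\psi, \mathbf{v})\,\frac{\partial f}{\partial \psi}
        + U(\psi, \mathbf{v})\, f(\psi, \mathbf{v}),
      \qquad
      \Gamma_n = \int \Gamma_a \, d^3v, \qquad
      Q = \int \tfrac{1}{2} m v^2 \, \Gamma_a \, d^3v,
      \]

    where f is the distribution function and ψ a radial coordinate; with a suitable choice of the velocity dependence of D and U, the density and energy moments (Γ_n, Q) reproduce a prescribed diagonal gradient-driven fluid transport matrix, while other choices introduce off-diagonal terms.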

  10. Galactic Cosmic Ray Event-Based Risk Model (GERM) Code

    NASA Technical Reports Server (NTRS)

    Cucinotta, Francis A.; Plante, Ianik; Ponomarev, Artem L.; Kim, Myung-Hee Y.

    2013-01-01

    This software describes the transport and energy deposition of galactic cosmic rays passing through astronaut tissues during space travel, or of heavy ion beams in patients in cancer therapy. Space radiation risk is a probability distribution, and time-dependent biological events must be accounted for in the physical description of space radiation transport in tissues and cells. A stochastic model can calculate the probability density directly, without unverified assumptions about the shape of the probability density function. The prior art of transport codes calculates the average flux and dose of particles behind spacecraft and tissue shielding. Because of the signaling times for activation and relaxation in the cell and tissue, a transport code must describe the temporal and microspatial density functions needed to correlate DNA and oxidative damage with non-targeted effects such as signaling and bystander effects. These effects are ignored or impossible to capture in the prior art. The GERM code provides scientists with data interpretation of experiments; modeling of the beam line, shielding of target samples, and sample holders; and estimation of the basic physical and biological outputs of their experiments. For mono-energetic ion beams, basic physical and biological properties are calculated for a selected ion type, such as kinetic energy, mass, charge number, absorbed dose, or fluence. Evaluated quantities are linear energy transfer (LET), range (R), absorption and fragmentation cross-sections, and the probability of nuclear interactions after 1 or 5 cm of water-equivalent material. In addition, a set of biophysical properties is evaluated, such as the Poisson distribution of particle traversals for a specified cellular area, cell survival curves, and DNA damage yields per cell. Also, the GERM code calculates the radiation transport of the beam line for either a fixed number of user-specified depths or at multiple positions along the Bragg curve of the particle in a selected material. The GERM code makes numerical estimates of basic physical and biophysical quantities of high-energy protons and heavy ions that have been studied at the NASA Space Radiation Laboratory (NSRL) for the purpose of simulating space radiation biological effects. In the first option, properties of monoenergetic beams are treated. In the second option, the transport of beams in different materials is treated. Similar biophysical properties as in the first option are evaluated for the primary ion and its secondary particles. Additional properties related to the nuclear fragmentation of the beam are evaluated. The GERM code is a computationally efficient Monte Carlo heavy-ion-beam model. It includes accurate models of LET, range, residual energy, and straggling, and the quantum multiple scattering fragmentation (QMSFRG) nuclear database.
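    One of the simpler quantities mentioned above, the Poisson distribution of particle traversals for a specified cellular area, can be reproduced in a few lines; the fluence and nuclear area below are hypothetical, and the snippet is not part of the GERM code.

      # Poisson distribution of particle traversals per cell nucleus for a given
      # fluence.  Hypothetical fluence and area; illustrative only, not GERM output.
      import math

      def hit_probabilities(fluence_per_um2, area_um2, n_max=6):
          lam = fluence_per_um2 * area_um2               # mean traversals per nucleus
          probs = [math.exp(-lam) * lam**n / math.factorial(n) for n in range(n_max + 1)]
          return lam, probs

      lam, probs = hit_probabilities(fluence_per_um2=0.02, area_um2=100.0)
      print(f"mean hits per nucleus = {lam:.1f}")
      for n, p in enumerate(probs):
          print(f"P({n} hits) = {p:.3f}")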

  11. Extension of the BRYNTRN code to monoenergetic light ion beams

    NASA Technical Reports Server (NTRS)

    Cucinotta, Francis A.; Wilson, John W.; Badavi, Francis F.

    1994-01-01

    A monoenergetic version of the BRYNTRN transport code is extended to beam transport of light ions (H-2, H-3, He-3, and He-4) in shielding materials (thick targets). The redistribution of energy in nuclear reactions is included in transport solutions that use nuclear fragmentation models. We also consider an equilibrium target-fragment spectrum for nuclei with mass number greater than four to include target fragmentation effects in the linear energy transfer (LET) spectrum. Illustrative results for water and aluminum shielding, including energy and LET spectra, are discussed for high-energy beams of H-2 and He-4.

  12. Improved continuum lowering calculations in screened hydrogenic model with l-splitting for high energy density systems

    NASA Astrophysics Data System (ADS)

    Ali, Amjad; Shabbir Naz, G.; Saleem Shahzad, M.; Kouser, R.; Aman-ur-Rehman; Nasim, M. H.

    2018-03-01

    The energy states of the bound electrons in high energy density systems (HEDS) are significantly affected by the electric field of the neighboring ions. Due to this effect, bound electrons require less energy to become free and move into the continuum. This phenomenon of reduction in potential is termed ionization potential depression (IPD) or continuum lowering (CL). The foremost parameter depicting this change is the average charge state; therefore, accurate modeling of CL is imperative when generating atomic data for the computation of radiative and thermodynamic properties of HEDS. In this paper, we present an improved model of CL in the screened hydrogenic model with l-splitting (SHML) proposed by G. Faussurier, C. Blancard and P. Renaudin [High Energy Density Physics 4 (2008) 114] and its effect on the average charge state. We propose a level-charge-dependent calculation of the CL potential energy and the inclusion of exchange and correlation energy in SHML. By doing this, we make our model more relevant to HEDS and free from an empirical CL parameter tied to the plasma environment. We have implemented both the original and the modified SHML models in our code, named OPASH, and benchmarked our results against experiments and other state-of-the-art simulation codes. We compared our results for the average charge state of carbon, beryllium, aluminum, iron and germanium against the published literature and found very reasonable agreement.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bose, Benjamin; Koyama, Kazuya, E-mail: benjamin.bose@port.ac.uk, E-mail: kazuya.koyama@port.ac.uk

    We develop a code to produce the power spectrum in redshift space based on standard perturbation theory (SPT) at 1-loop order. The code can be applied to a wide range of modified gravity and dark energy models using a recently proposed numerical method by A. Taruya to find the SPT kernels. This includes Horndeski's theory with a general potential, which accommodates both chameleon and Vainshtein screening mechanisms and provides a non-linear extension of the effective theory of dark energy up to third order. The focus is on a recent non-linear model of the redshift-space power spectrum which has been shown to model the anisotropy very well at the scales relevant for the SPT framework, as well as to capture the relevant non-linear effects typical of modified gravity theories. We provide consistency checks of the code against established results and elucidate its application in the light of upcoming high-precision RSD data.
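    As a consistency check of the kind mentioned above, a 1-loop redshift-space calculation should reduce, on large scales and for standard gravity, to the familiar linear (Kaiser) limit

      \[
      P_s^{\rm lin}(k, \mu) = \left(b + f \mu^2\right)^2 P^{\rm lin}_{\delta}(k),
      \]

    where b is the linear bias, f = d ln D / d ln a the linear growth rate, and μ the cosine of the angle between the wavevector and the line of sight. This is a standard reference formula rather than the specific non-linear model implemented in the code.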

  14. Whole Device Modeling of Compact Tori: Stability and Transport Modeling of C-2W

    NASA Astrophysics Data System (ADS)

    Dettrick, Sean; Fulton, Daniel; Lau, Calvin; Lin, Zhihong; Ceccherini, Francesco; Galeotti, Laura; Gupta, Sangeeta; Onofri, Marco; Tajima, Toshiki; TAE Team

    2017-10-01

    Recent experimental evidence from the C-2U FRC experiment shows that energy confinement improves with inverse collisionality, similar to other high-beta toroidal devices such as NSTX and MAST. This motivated the construction of a new FRC experiment, C-2W, to study the energy confinement scaling at higher electron temperature. Tri Alpha Energy is working towards catalysing a community-wide collaboration to develop a Whole Device Model (WDM) of Compact Tori. One application of the WDM is the study of the stability and transport properties of C-2W using two particle-in-cell codes, ANC and FPIC. These codes can be used to find new stable operating points and to make predictions of the turbulent transport at those points. They will be used in collaboration with the C-2W experimental program to validate the codes against C-2W, mitigate the experimental risk inherent in the exploration of new parameter regimes, accelerate the optimization of experimental operating scenarios, and find operating points for future FRC reactor designs.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Edwin S.

    Under the CRADA, NREL will provide assistance to NRGsim to debug and convert the EnergyPlus Hysteresis Phase Change Material ('PCM') model to C++ for adoption into the main code package of the EnergyPlus simulation engine.

  16. Investigation of plasma behavior during noble gas injection in the end-cell of GAMMA 10/PDX by using the multi-fluid code ‘LINDA’

    NASA Astrophysics Data System (ADS)

    Islam, M. S.; Nakashima, Y.; Hatayama, A.

    2017-12-01

    The linear divertor analysis with fluid model (LINDA) code has been developed in order to simulate plasma behavior in the end-cell of the linear fusion device GAMMA 10/PDX. This paper presents the basic structure and simulated results of the LINDA code. The atomic processes of hydrogen and impurities have been included in the present model in order to investigate energy loss processes and the mechanism of plasma detachment. A comparison among Ar, Kr and Xe shows that Xe is the most effective gas for reducing the electron and ion temperatures; Xe injection leads to a strong reduction in both. The energy loss terms for both the electrons and the ions are enhanced significantly during Xe injection. It is shown that the major energy loss channels for the ions and the electrons are charge-exchange loss and radiative power loss of the radiator gas, respectively. These outcomes indicate that Xe injection in the plasma edge region is effective for reducing the plasma energy and generating detached plasma in the linear device GAMMA 10/PDX.

  17. Potential capabilities of Reynolds stress turbulence model in the COMMIX-RSM code

    NASA Technical Reports Server (NTRS)

    Chang, F. C.; Bottoni, M.

    1994-01-01

    A Reynolds stress turbulence model has been implemented in the COMMIX code, together with transport equations describing turbulent heat fluxes, the variance of temperature fluctuations, and the dissipation of turbulence kinetic energy. The model has been partially verified by simulating homogeneous turbulent shear flow, and stable and unstable stratified shear flows in which buoyancy strongly suppresses or enhances turbulence. This article outlines the model, explains the verifications performed thus far, and discusses potential applications of the COMMIX-RSM code in several domains, including, but not limited to, analysis of thermal striping in engineering systems, simulation of turbulence in combustors, and prediction of bubbly and particulate flows.

  18. Evaluation and utilization of beam simulation codes for the SNS ion source and low energy beam transport development

    NASA Astrophysics Data System (ADS)

    Han, B. X.; Welton, R. F.; Stockli, M. P.; Luciano, N. P.; Carmichael, J. R.

    2008-02-01

    The beam simulation codes PBGUNS, SIMION, and LORENTZ-3D were evaluated by modeling the well-diagnosed SNS baseline ion source and low energy beam transport (LEBT) system. An investigation was then conducted using these codes to assist our ion source and LEBT development effort, which is directed at meeting the SNS operational goals as well as the power-upgrade project goals. A high-efficiency H- extraction system, as well as magnetic and electrostatic LEBT configurations capable of transporting up to 100 mA, are studied using these simulation tools.

  19. Sage Simulation Model for Technology Demonstration Convertor by a Step-by-Step Approach

    NASA Technical Reports Server (NTRS)

    Demko, Rikako; Penswick, L. Barry

    2006-01-01

    The development of a Stirling model using the 1-D Sage design code was completed using a step-by-step approach. This is a method of gradually increasing the complexity of the Sage model while observing the energy balance and energy losses at each step of the development. This step-by-step model development and energy-flow analysis can clarify where the losses occur and what their impact is, and can suggest possible opportunities for design improvement.

  20. Industrial Demand Module - NEMS Documentation

    EIA Publications

    2014-01-01

    Documents the objectives, analytical approach, and development of the National Energy Modeling System (NEMS) Industrial Demand Module. The report catalogues and describes model assumptions, computational methodology, parameter estimation techniques, and model source code.

  1. Transportation Sector Module - NEMS Documentation

    EIA Publications

    2017-01-01

    Documents the objectives, analytical approach and development of the National Energy Modeling System (NEMS) Transportation Model (TRAN). The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, model source code, and forecast results generated by the model.

  2. Dark matter annihilation in the circumgalactic medium at high redshifts

    NASA Astrophysics Data System (ADS)

    Schön, S.; Mack, K. J.; Wyithe, J. S. B.

    2018-03-01

    Annihilating dark matter (DM) models offer promising avenues for future DM detection, in particular via modification of astrophysical signals. However, when modelling such potential signals at high redshift, the emergence of both DM and baryonic structure, as well as the complexities of the energy transfer process, needs to be taken into account. In the following paper, we present a detailed energy deposition code and use this to examine the energy transfer efficiency of annihilating DM at high redshift, including the effects on baryonic structure. We employ the PYTHIA code to model neutralino-like DM candidates and their subsequent annihilation products for a range of masses and annihilation channels. We also compare different density profiles and mass-concentration relations for 10^5-10^7 M⊙ haloes at redshifts 20 and 40. For these DM halo and particle models, we show radially dependent ionization and heating curves and compare the deposited energy to the haloes' gravitational binding energy. We use the 'filtered' annihilation spectra escaping the halo to calculate the heating of the circumgalactic medium and show that the mass of the minimal star-forming object is increased by a factor of 2-3 at redshift 20 and 4-5 at redshift 40 for some DM models.

  3. Intercomparison of Monte Carlo radiation transport codes to model TEPC response in low-energy neutron and gamma-ray fields.

    PubMed

    Ali, F; Waker, A J; Waller, E J

    2014-10-01

    Tissue-equivalent proportional counters (TEPC) can potentially be used as portable and personal dosemeters in mixed neutron and gamma-ray fields, but what hinders this use is their typically large physical size. To formulate compact TEPC designs, the use of a Monte Carlo transport code is necessary to predict the performance of compact designs in these fields. To perform this modelling, three candidate codes were assessed: MCNPX 2.7.E, FLUKA 2011.2 and PHITS 2.24. In each code, benchmark simulations were performed involving the irradiation of a 5-in. TEPC with monoenergetic neutron fields and a 4-in. wall-less TEPC with monoenergetic gamma-ray fields. The frequency and dose mean lineal energies and dose distributions calculated from each code were compared with experimentally determined data. For the neutron benchmark simulations, PHITS produces data closest to the experimental values, and for the gamma-ray benchmark simulations, FLUKA yields data closest to the experimentally determined quantities.
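    For reference, the frequency and dose mean lineal energies compared above are the standard microdosimetric moments of the single-event lineal energy distribution f(y):

      \[
      \bar{y}_F = \int_0^{\infty} y \, f(y)\, dy,
      \qquad
      \bar{y}_D = \frac{1}{\bar{y}_F}\int_0^{\infty} y^2 \, f(y)\, dy .
      \]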

  4. Improving building energy efficiency in India: State-level analysis of building energy efficiency policies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Sha; Tan, Qing; Evans, Meredydd

    India is expected to add 40 billion m2 of new buildings by 2050. Buildings are responsible for one third of India's total energy consumption today, and building energy use is expected to continue growing, driven by rapid income and population growth. The implementation of the Energy Conservation Building Code (ECBC) is one of the measures to improve building energy efficiency. Using the Global Change Assessment Model, this study assesses growth in the buildings sector and the impacts of building energy policies in Gujarat, which would help the state adopt ECBC and expand building energy efficiency programs. Without building energy policies, building energy use in Gujarat would grow by 15 times in commercial buildings and 4 times in urban residential buildings between 2010 and 2050. ECBC improves energy efficiency in commercial buildings and could reduce building electricity use in Gujarat by 20% in 2050, compared to the no-policy scenario. Having energy codes for both commercial and residential buildings could result in an additional 10% savings in electricity use. To achieve these intended savings, it is critical to build capacity and institutions for robust code implementation.
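    The percentage savings quoted above compound straightforwardly; a back-of-envelope sketch with a hypothetical baseline (the additional 10% savings is read here as a further 10% of the no-policy level) is:

      # Back-of-envelope arithmetic for the savings quoted in the abstract.  The
      # 2050 no-policy baseline is a hypothetical placeholder value.
      baseline_2050 = 100.0                          # assumed no-policy electricity use (arbitrary units)
      ecbc_only = baseline_2050 * (1.0 - 0.20)       # commercial code (ECBC) only: 20% below no-policy
      both_codes = baseline_2050 * (1.0 - 0.30)      # commercial + residential codes: a further 10%
      print(f"no policy  : {baseline_2050:6.1f}")
      print(f"ECBC only  : {ecbc_only:6.1f}")
      print(f"both codes : {both_codes:6.1f}")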

  5. System for the Analysis of Global Energy Markets - Vol. II, Model Documentation

    EIA Publications

    2003-01-01

    The second volume provides a data implementation guide that lists all naming conventions and model constraints. In addition, Volume 1 has two appendixes that provide a schematic of the System for the Analysis of Global Energy Markets (SAGE) structure and a listing of the source code, respectively.

  6. Implementation of new physics models for low energy electrons in liquid water in Geant4-DNA.

    PubMed

    Bordage, M C; Bordes, J; Edel, S; Terrissol, M; Franceries, X; Bardiès, M; Lampe, N; Incerti, S

    2016-12-01

    A new alternative set of elastic and inelastic cross sections has been added to the very low energy extension of the Geant4 Monte Carlo simulation toolkit, Geant4-DNA, for the simulation of electron interactions in liquid water. These cross sections have been obtained from the CPA100 Monte Carlo track structure code, which has been a reference in the microdosimetry community for many years. They are compared to the default Geant4-DNA cross sections and show better agreement with published data. In order to verify the correct implementation of the CPA100 cross section models in Geant4-DNA, simulations of the number of interactions and of ranges were performed using Geant4-DNA with this new set of models, and the results were compared with corresponding results from the original CPA100 code. Good agreement is observed between the implementations, with relative differences lower than 1% regardless of the incident electron energy. Useful quantities related to the deposited energy at the scale of the cell or organ of interest for internal dosimetry, such as dose point kernels, are also calculated using these new physics models. They are compared with results obtained using the well-known PENELOPE Monte Carlo code.

  7. Non-adiabatic Excited State Molecule Dynamics Modeling of Photochemistry and Photophysics of Materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, Tammie Renee; Tretiak, Sergei

    2017-01-06

    Understanding and controlling excited state dynamics lies at the heart of all our efforts to design photoactive materials with desired functionality. This tailor-design approach has become the standard for many technological applications (e.g., solar energy harvesting), including the design of organic conjugated electronic materials with applications in photovoltaic and light-emitting devices. Over the years, our team has developed efficient LANL-based codes to model the relevant photophysical processes following photoexcitation (spatial energy transfer, excitation localization/delocalization, and/or charge separation). The developed approach allows the non-radiative relaxation to be followed on timescales of up to ~10 ps for large realistic molecules (hundreds of atoms in size) in a realistic solvent dielectric environment. The Collective Electronic Oscillator (CEO) code is used to compute electronic excited states, and the Non-adiabatic Excited State Molecular Dynamics (NA-ESMD) code is used to follow the non-adiabatic dynamics on multiple coupled Born-Oppenheimer potential energy surfaces. Our preliminary NA-ESMD simulations have revealed key photoinduced mechanisms controlling competing interactions and relaxation pathways in complex materials, including organic conjugated polymer materials, and have provided a detailed understanding of photochemical products and intermediates and of the internal conversion process during the initiation of energetic materials. This project will use the LANL-based CEO and NA-ESMD codes to model non-radiative relaxation in organic and energetic materials. The NA-ESMD and CEO codes belong to a class of electronic structure/quantum chemistry codes that require large memory and a 'long-queue-few-core' distribution of resources in order to make useful progress. The NA-ESMD simulations are trivially parallelizable, requiring ~300 processors for up to one week of runtime to reach a meaningful restart point.

  8. Nonlinear static and dynamic finite element analysis of an eccentrically loaded graphite-epoxy beam

    NASA Technical Reports Server (NTRS)

    Fasanella, Edwin L.; Jackson, Karen E.; Jones, Lisa E.

    1991-01-01

    The Dynamic Crash Analysis of Structures (DYCAST) and NIKE3D nonlinear finite element codes were used to model the static and impulsive response of an eccentrically loaded graphite-epoxy beam. A 48-ply unidirectional composite beam was tested under an eccentric axial compressive load until failure. This loading configuration was chosen to highlight the capabilities of the two finite element codes for modeling a highly nonlinear, large-deflection structural problem that has an exact solution. These codes are currently used to perform dynamic analyses of aircraft structures under impact loads to study crashworthiness and energy-absorbing capabilities. Both beam and plate element models were developed for comparison with the experimental data using the DYCAST and NIKE3D codes.

  9. Advanced Power Electronic Interfaces for Distributed Energy Systems, Part 2: Modeling, Development, and Experimental Evaluation of Advanced Control Functions for Single-Phase Utility-Connected Inverter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chakraborty, S.; Kroposki, B.; Kramer, W.

    Integrating renewable energy and distributed generation into the Smart Grid architecture requires power electronics (PE) for energy conversion. The key to successful Smart Grid implementation is to develop interoperable, intelligent, and advanced PE technology that improves and accelerates the use of distributed energy resource systems. This report describes the simulation, design, and testing of a single-phase DC-to-AC inverter developed to operate in both islanded and utility-connected modes. It provides results on both the simulations and the experiments conducted, demonstrating the ability of the inverter to provide advanced control functions such as power flow and VAR/voltage regulation. This report also analyzes two different techniques used for digital signal processor (DSP) code generation. Initially, the DSP code was written in the C programming language using Texas Instruments' Code Composer Studio. In a later stage of the research, the Simulink DSP toolbox was used to auto-generate code for the DSP. The successful tests using Simulink-generated DSP code show promise for fast prototyping of PE controls.

  10. Radiant Energy Measurements from a Scaled Jet Engine Axisymmetric Exhaust Nozzle for a Baseline Code Validation Case

    NASA Technical Reports Server (NTRS)

    Baumeister, Joseph F.

    1994-01-01

    A non-flowing, electrically heated test rig was developed to verify computer codes that calculate radiant energy propagation from nozzle geometries that represent aircraft propulsion nozzle systems. Since there are a variety of analysis tools used to evaluate thermal radiation propagation from partially enclosed nozzle surfaces, an experimental benchmark test case was developed for code comparison. This paper briefly describes the nozzle test rig and the developed analytical nozzle geometry used to compare the experimental and predicted thermal radiation results. A major objective of this effort was to make available the experimental results and the analytical model in a format to facilitate conversion to existing computer code formats. For code validation purposes this nozzle geometry represents one validation case for one set of analysis conditions. Since each computer code has advantages and disadvantages based on scope, requirements, and desired accuracy, the usefulness of this single nozzle baseline validation case can be limited for some code comparisons.

  11. Neutron Transport Models and Methods for HZETRN and Coupling to Low Energy Light Ion Transport

    NASA Technical Reports Server (NTRS)

    Blattnig, S.R.; Slaba, T.C.; Heinbockel, J.H.

    2008-01-01

    Exposure estimates inside space vehicles, surface habitats, and high altitude aircraft exposed to space radiation are highly influenced by secondary neutron production. The deterministic transport code HZETRN has been identified as a reliable and efficient tool for such studies, but improvements to the underlying transport models and numerical methods are still necessary. In this paper, the forward-backward (FB) and directionally coupled forward-backward (DC) neutron transport models are derived, numerical methods for the FB model are reviewed, and a computationally efficient numerical solution is presented for the DC model. Both models are compared to the Monte Carlo codes HETC-HEDS and FLUKA, and the DC model is shown to agree closely with the Monte Carlo results. Finally, it is found in the development of either model that the decoupling of low energy neutrons from the light ion (A<4) transport procedure adversely affects low energy light ion fluence spectra and exposure quantities. A first order correction is presented to resolve the problem, and it is shown to be both accurate and efficient.

  12. Study of fusion product effects in field-reversed mirrors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Driemeyer, D.E.

    1980-01-01

    The effect of fusion products (fps) on Field-Reversed Mirror (FRM) reactor concepts has been evaluated through the development of two new computer models. The first code (MCFRM) treats fps as test particles in a fixed background plasma, which is represented as a fluid. MCFRM includes a Monte Carlo treatment of Coulomb scattering and thus provides an accurate treatment of fp behavior even at lower energies where pitch-angle scattering becomes important. The second code (FRMOD) is a steady-state, globally averaged, two-fluid (ion and electron), point model of the FRM plasma that incorporates fp heating and ash buildup values which are consistent with the MCFRM calculations. These codes have been used extensively in the development of an advanced-fuel FRM reactor design (SAFFIRE). A Catalyzed-D version of the plant is also discussed along with an investigation of the steady-state energy distribution of fps in the FRM. User guides for the two computer codes are also included.

  13. An international survey of building energy codes and their implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evans, Meredydd; Roshchanka, Volha; Graham, Peter

    Buildings are key to low-carbon development everywhere, and many countries have introduced building energy codes to improve energy efficiency in buildings. Yet, building energy codes can only deliver results when the codes are implemented. For this reason, studies of building energy codes need to consider implementation of building energy codes in a consistent and comprehensive way. This research identifies elements and practices in implementing building energy codes, covering codes in 22 countries that account for 70% of global energy demand from buildings. Realizing the benefits of building energy codes depends on comprehensive coverage of buildings by type, age, size, and geographic location; an implementation framework that involves a certified agency to inspect construction at critical stages; and independently tested, rated, and labeled building energy materials. Training and supporting tools are another element of successful code implementation, and their role is growing in importance, given the increasing flexibility and complexity of building energy codes. Some countries have also introduced compliance evaluation and compliance checking protocols to improve implementation. This article provides examples of practices that countries have adopted to assist with implementation of building energy codes.

  14. On the implementation of the spherical collapse model for dark energy models

    NASA Astrophysics Data System (ADS)

    Pace, Francesco; Meyer, Sven; Bartelmann, Matthias

    2017-10-01

    In this work we review the theory of the spherical collapse model and critically analyse the aspects of the numerical implementation of its fundamental equations. By extending a recent work by [1], we show how different aspects, such as the initial integration time, the definition of constant infinity and the criterion for the extrapolation method (how close the inverse of the overdensity has to be to zero at the collapse time) can lead to an erroneous estimation (a few per mill error which translates to a few percent in the mass function) of the key quantity in the spherical collapse model: the linear critical overdensity δc, which plays a crucial role for the mass function of halos. We provide a better recipe to adopt in designing a code suitable to a generic smooth dark energy model and we compare our numerical results with analytic predictions for the EdS and the ΛCDM models. We further discuss the evolution of δc for selected classes of dark energy models as a general test of the robustness of our implementation. We finally outline which modifications need to be taken into account to extend the code to more general classes of models, such as clustering dark energy models and non-minimally coupled models.
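
    As a purely illustrative sketch of the procedure discussed above (restricted to the Einstein-de Sitter background and not taken from the authors' code), the snippet below integrates the standard nonlinear spherical collapse equation, bisects on the initial overdensity until collapse occurs at the chosen scale factor, and linearly extrapolates the initial condition to the collapse time; the threshold DELTA_INF standing in for "infinity" is exactly the kind of numerical choice identified above as a source of per-mill errors. All parameter values are assumptions.

      import numpy as np
      from scipy.integrate import solve_ivp

      # Nonlinear spherical collapse in an Einstein-de Sitter background,
      # with derivatives taken with respect to the scale factor a.
      def rhs(a, y):
          delta, ddelta = y
          return [ddelta,
                  -(1.5 / a) * ddelta
                  + (4.0 / 3.0) * ddelta**2 / (1.0 + delta)
                  + (1.5 / a**2) * delta * (1.0 + delta)]

      A_INI, A_COLL = 1e-5, 1.0   # initial and target collapse scale factors (assumed)
      DELTA_INF = 1e8             # numerical stand-in for "infinity" at collapse

      def a_collapse(delta_i):
          """Scale factor at which delta exceeds DELTA_INF (A_COLL if it never does)."""
          hit = lambda a, y: y[0] - DELTA_INF
          hit.terminal = True
          sol = solve_ivp(rhs, (A_INI, A_COLL), [delta_i, delta_i / A_INI],
                          events=hit, rtol=1e-8, atol=1e-12)
          return sol.t_events[0][0] if sol.t_events[0].size else A_COLL

      # Bisect on the initial overdensity so that collapse lands exactly at A_COLL.
      lo, hi = 1.0e-5, 3.0e-5
      for _ in range(50):
          mid = 0.5 * (lo + hi)
          if a_collapse(mid) < A_COLL:
              hi = mid            # collapses too early: initial overdensity too large
          else:
              lo = mid

      # In EdS the linear growing mode scales as a, so the linearly extrapolated
      # critical overdensity at collapse is delta_i * A_COLL / A_INI (expect ~1.686).
      print("delta_c ~", 0.5 * (lo + hi) * A_COLL / A_INI)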

  15. On the implementation of the spherical collapse model for dark energy models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pace, Francesco; Meyer, Sven; Bartelmann, Matthias, E-mail: francesco.pace@manchester.ac.uk, E-mail: sven.meyer@uni-heidelberg.de, E-mail: bartelmann@uni-heidelberg.de

    In this work we review the theory of the spherical collapse model and critically analyse the aspects of the numerical implementation of its fundamental equations. By extending a recent work by [1], we show how different aspects, such as the initial integration time, the definition of constant infinity and the criterion for the extrapolation method (how close the inverse of the overdensity has to be to zero at the collapse time) can lead to an erroneous estimation (a few per mill error which translates to a few percent in the mass function) of the key quantity in the spherical collapse model: the linear critical overdensity δc, which plays a crucial role for the mass function of halos. We provide a better recipe to adopt in designing a code suitable to a generic smooth dark energy model and we compare our numerical results with analytic predictions for the EdS and the ΛCDM models. We further discuss the evolution of δc for selected classes of dark energy models as a general test of the robustness of our implementation. We finally outline which modifications need to be taken into account to extend the code to more general classes of models, such as clustering dark energy models and non-minimally coupled models.

  16. Comparison of four large-eddy simulation research codes and effects of model coefficient and inflow turbulence in actuator-line-based wind turbine modeling

    DOE PAGES

    Martinez-Tossas, Luis A.; Churchfield, Matthew J.; Yilmaz, Ali Emre; ...

    2018-05-16

    Here, large-eddy simulation (LES) of a wind turbine under uniform inflow is performed using an actuator line model (ALM). Predictions from four LES research codes from the wind energy community are compared. The implementation of the ALM in all codes is similar and quantities along the blades are shown to match closely for all codes. The value of the Smagorinsky coefficient in the subgrid-scale turbulence model is shown to have a negligible effect on the time-averaged loads along the blades. Conversely, the breakdown location of the wake is strongly dependent on the Smagorinsky coefficient in uniform laminar inflow. Simulations are also performed using uniform mean velocity inflow with added homogeneous isotropic turbulence from a public database. The time-averaged loads along the blade do not depend on the inflow turbulence. Moreover, and in contrast to the uniform inflow cases, the Smagorinsky coefficient has a negligible effect on the wake profiles. It is concluded that for LES of wind turbines and wind farms using ALM, careful implementation and extensive cross-verification among codes can result in highly reproducible predictions. Moreover, the characteristics of the inflow turbulence appear to be more important than the details of the subgrid-scale modeling employed in the wake, at least for LES of wind energy applications at the resolutions tested in this work.
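
    As background to the two modeling ingredients compared here, the sketch below shows the standard Smagorinsky eddy-viscosity relation, nu_t = (C_s Delta)^2 |S|, and the Gaussian kernel commonly used to project actuator-line blade forces onto an LES grid. It is a generic illustration with assumed variable names and kernel width, not code from any of the four solvers in the comparison.

      import numpy as np

      def smagorinsky_nu_t(strain_rate, delta, c_s=0.16):
          """Eddy viscosity nu_t = (C_s * Delta)^2 * |S|, with |S| = sqrt(2 S_ij S_ij).
          strain_rate: array of shape (..., 3, 3)."""
          s_mag = np.sqrt(2.0 * np.einsum('...ij,...ij->...', strain_rate, strain_rate))
          return (c_s * delta) ** 2 * s_mag

      def alm_body_force(grid_points, line_points, line_forces, epsilon):
          """Spread actuator-line point forces onto grid points with the 3-D Gaussian
          kernel eta(r) = exp(-r^2 / eps^2) / (eps^3 * pi^(3/2))."""
          norm = 1.0 / (epsilon ** 3 * np.pi ** 1.5)
          f = np.zeros((grid_points.shape[0], 3))
          for x_b, F_b in zip(line_points, line_forces):
              r2 = np.sum((grid_points - x_b) ** 2, axis=1)
              f += norm * np.exp(-r2 / epsilon ** 2)[:, None] * F_b
          return f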

  17. Comparison of four large-eddy simulation research codes and effects of model coefficient and inflow turbulence in actuator-line-based wind turbine modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martinez-Tossas, Luis A.; Churchfield, Matthew J.; Yilmaz, Ali Emre

    Here, large-eddy simulation (LES) of a wind turbine under uniform inflow is performed using an actuator line model (ALM). Predictions from four LES research codes from the wind energy community are compared. The implementation of the ALM in all codes is similar and quantities along the blades are shown to match closely for all codes. The value of the Smagorinsky coefficient in the subgrid-scale turbulence model is shown to have a negligible effect on the time-averaged loads along the blades. Conversely, the breakdown location of the wake is strongly dependent on the Smagorinsky coefficient in uniform laminar inflow. Simulations are also performed using uniform mean velocity inflow with added homogeneous isotropic turbulence from a public database. The time-averaged loads along the blade do not depend on the inflow turbulence. Moreover, and in contrast to the uniform inflow cases, the Smagorinsky coefficient has a negligible effect on the wake profiles. It is concluded that for LES of wind turbines and wind farms using ALM, careful implementation and extensive cross-verification among codes can result in highly reproducible predictions. Moreover, the characteristics of the inflow turbulence appear to be more important than the details of the subgrid-scale modeling employed in the wake, at least for LES of wind energy applications at the resolutions tested in this work.

  18. Technical Support Document for Version 3.4.0 of the COMcheck Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bartlett, Rosemarie; Connell, Linda M.; Gowri, Krishnan

    2007-09-14

    COMcheck provides an optional way to demonstrate compliance with commercial and high-rise residential building energy codes. Commercial buildings include all use groups except single family and multifamily not over three stories in height. COMcheck was originally based on ANSI/ASHRAE/IES Standard 90.1-1989 (Standard 90.1-1989) requirements and is intended for use with various codes based on Standard 90.1, including the Codification of ASHRAE/IES Standard 90.1-1989 (90.1-1989 Code) (ASHRAE 1989a, 1993b) and ASHRAE/IESNA Standard 90.1-1999 (Standard 90.1-1999). This includes jurisdictions that have adopted the 90.1-1989 Code, Standard 90.1-1989, Standard 90.1-1999, or their own code based on one of these. We view Standard 90.1-1989 and the 90.1-1989 Code as having equivalent technical content and have used both as source documents in developing COMcheck. This technical support document (TSD) is designed to explain the technical basis for the COMcheck software as originally developed based on the ANSI/ASHRAE/IES Standard 90.1-1989 (Standard 90.1-1989). Documentation for other national model codes and standards and specific state energy codes supported in COMcheck has been added to this report as appendices. These appendices are intended to provide technical documentation for features specific to the supported codes and for any changes made for state-specific codes that differ from the standard features that support compliance with the national model codes and standards.

  19. Fast, Statistical Model of Surface Roughness for Ion-Solid Interaction Simulations and Efficient Code Coupling

    NASA Astrophysics Data System (ADS)

    Drobny, Jon; Curreli, Davide; Ruzic, David; Lasa, Ane; Green, David; Canik, John; Younkin, Tim; Blondel, Sophie; Wirth, Brian

    2017-10-01

    Surface roughness greatly impacts material erosion, and thus plays an important role in Plasma-Surface Interactions. Developing strategies for efficiently introducing rough surfaces into ion-solid interaction codes will be an important step towards whole-device modeling of plasma devices and future fusion reactors such as ITER. Fractal TRIDYN (F-TRIDYN) is an upgraded version of the Monte Carlo binary collision approximation (BCA) program TRIDYN developed for this purpose; it includes an explicit fractal model of surface roughness and extended input and output options for file-based code coupling. Code coupling with both plasma and material codes has been achieved and allows for multi-scale, whole-device modeling of plasma experiments. These code coupling results will be presented. F-TRIDYN has been further upgraded with an alternative, statistical model of surface roughness. The statistical model is significantly faster than and compares favorably to the fractal model. Additionally, the statistical model compares well to alternative computational surface roughness models and experiments. Theoretical links between the fractal and statistical models are made, and further connections to experimental measurements of surface roughness are explored. This work was supported by the PSI-SciDAC Project funded by the U.S. Department of Energy through contract DOE-DE-SC0008658.
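
    As a rough illustration of the kind of self-affine surface description that a fractal roughness model works with (not the F-TRIDYN implementation itself), the sketch below synthesizes a 1-D fractal surface profile spectrally from an assumed Hurst exponent and RMS height; for such a profile the fractal dimension is D = 2 - H.

      import numpy as np

      def fractal_profile(n_points, hurst, rms_height, rng=None):
          """Self-affine 1-D surface profile by spectral synthesis: amplitudes ~ k^-(H+1/2)
          with random phases, rescaled to the requested RMS roughness."""
          rng = rng or np.random.default_rng()
          k = np.fft.rfftfreq(n_points)[1:]                  # nonzero spatial frequencies
          amp = k ** (-(hurst + 0.5))
          phase = rng.uniform(0.0, 2.0 * np.pi, size=k.size)
          spectrum = np.concatenate(([0.0], amp * np.exp(1j * phase)))
          z = np.fft.irfft(spectrum, n=n_points)
          return z * rms_height / z.std()

      surface = fractal_profile(4096, hurst=0.7, rms_height=5.0)   # e.g. 5 nm RMS (assumed units)
      print("RMS roughness:", surface.std())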

  20. Axial deformed solution of the Skyrme-Hartree-Fock-Bogolyubov equations using the transformed harmonic oscillator Basis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perez, R. Navarro; Schunck, N.; Lasseri, R.

    2017-03-09

    HFBTHO is a physics computer code that is used to model the structure of the nucleus. It is an implementation of nuclear energy density functional theory (DFT), where the energy of the nucleus is obtained by integration over space of some phenomenological energy density, which is itself a functional of the neutron and proton densities. In HFBTHO, the energy density derives either from the zero-range Skyrme or the finite-range Gogny effective two-body interaction between nucleons. Nuclear superfluidity is treated within the Hartree-Fock-Bogoliubov (HFB) approximation, and axial symmetry of the nuclear shape is assumed. This version is the 3rd release of the program; the two previous versions were published in Computer Physics Communications [1,2]. The previous version was released at LLNL under the GPL 3 Open Source License and was given release code LLNL-CODE-573953.

  1. Country Report on Building Energy Codes in Australia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shui, Bin; Evans, Meredydd; Somasundaram, Sriram

    2009-04-02

    This report is part of a series of reports on building energy efficiency codes in countries associated with the Asian Pacific Partnership (APP) - Australia, South Korea, Japan, China, India, and the United States of America (U.S.). This report gives an overview of the development of building energy codes in Australia, including national energy policies related to building energy codes, the history of building energy codes, and recent national projects and activities to promote building energy codes. The report also provides a review of current building energy codes (such as building envelope, HVAC, and lighting) for commercial and residential buildings in Australia.

  2. The Simpsons program 6-D phase space tracking with acceleration

    NASA Astrophysics Data System (ADS)

    Machida, S.

    1993-12-01

    A particle tracking code, Simpsons, which tracks in 6-D phase space including energy ramping, has been developed to model proton synchrotrons and storage rings. We take time as the independent variable to change machine parameters and diagnose beam quality in much the same way as real machines, unlike existing synchrotron tracking codes which advance a particle element by element. Arbitrary energy ramping and rf voltage curves as functions of time are read from an input file that defines a machine cycle. The code is used to study beam dynamics with time-dependent parameters. Some examples from simulations of the Superconducting Super Collider (SSC) boosters are shown.
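
    A textbook-style sketch of time-domain longitudinal tracking with an energy ramp and an rf voltage curve supplied as functions of time is shown below; the lattice parameters, ramp shape, and update ordering are illustrative assumptions and not the Simpsons implementation.

      import numpy as np

      E0 = 938.272e6           # proton rest energy [eV]
      H_RF = 8                 # rf harmonic number (assumed)
      GAMMA_T = 5.0            # transition gamma (assumed)
      T_REV = 1.0e-6           # revolution period [s], held fixed for simplicity
      N_TURNS = 100_000

      e_sync = lambda t: 1.2e9 + 3.0e9 * t    # synchronous total energy ramp [eV] (assumed)
      v_rf = lambda t: 40.0e3                 # rf voltage curve [V] (assumed constant)

      phi, dE, t = 0.35, 0.0, 0.0             # test particle: phase [rad], energy offset [eV]
      for _ in range(N_TURNS):
          Es, V = e_sync(t), v_rf(t)
          sin_phi_s = (e_sync(t + T_REV) - Es) / V    # energy gain per turn set by the ramp
          gamma = Es / E0
          beta2 = 1.0 - 1.0 / gamma**2
          eta = 1.0 / GAMMA_T**2 - 1.0 / gamma**2     # slip factor (negative below transition)
          dE += V * (np.sin(phi) - sin_phi_s)         # rf kick relative to the synchronous particle
          phi += 2.0 * np.pi * H_RF * eta * dE / (beta2 * Es)
          t += T_REV

      print(f"after the ramp: energy offset {dE / 1e6:.3f} MeV, phase {phi:.3f} rad")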

  3. Residential Demand Module - NEMS Documentation

    EIA Publications

    2017-01-01

    Model Documentation - Documents the objectives, analytical approach, and development of the National Energy Modeling System (NEMS) Residential Sector Demand Module. The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, and FORTRAN source code.

  4. Large Eddy Simulation of wind turbine wakes: detailed comparisons of two codes focusing on effects of numerics and subgrid modeling

    NASA Astrophysics Data System (ADS)

    Martínez-Tossas, Luis A.; Churchfield, Matthew J.; Meneveau, Charles

    2015-06-01

    In this work we report results from a detailed comparative numerical study of two Large Eddy Simulation (LES) codes using the Actuator Line Model (ALM). The study focuses on prediction of wind turbine wakes and their breakdown when subject to uniform inflow. Previous studies have shown relative insensitivity to subgrid modeling in the context of a finite-volume code. The present study uses the low-dissipation pseudo-spectral LES code from Johns Hopkins University (LESGO) and the second-order, finite-volume OpenFOAM code (SOWFA) from the National Renewable Energy Laboratory. When subject to uniform inflow, the loads on the blades are found to be unaffected by subgrid models or numerics, as expected. The turbulence in the wake and the location of transition to a turbulent state are affected by the subgrid-scale model and the numerics.

  5. Large Eddy Simulation of Wind Turbine Wakes. Detailed Comparisons of Two Codes Focusing on Effects of Numerics and Subgrid Modeling

    DOE PAGES

    Martinez-Tossas, Luis A.; Churchfield, Matthew J.; Meneveau, Charles

    2015-06-18

    In this work we report results from a detailed comparative numerical study of two Large Eddy Simulation (LES) codes using the Actuator Line Model (ALM). The study focuses on prediction of wind turbine wakes and their breakdown when subject to uniform inflow. Previous studies have shown relative insensitivity to subgrid modeling in the context of a finite-volume code. The present study uses the low-dissipation pseudo-spectral LES code from Johns Hopkins University (LESGO) and the second-order, finite-volume OpenFOAM code (SOWFA) from the National Renewable Energy Laboratory. When subject to uniform inflow, the loads on the blades are found to be unaffected by subgrid models or numerics, as expected. The turbulence in the wake and the location of transition to a turbulent state are affected by the subgrid-scale model and the numerics.

  6. Analysis of multi-fragmentation reactions induced by relativistic heavy ions using the statistical multi-fragmentation model

    NASA Astrophysics Data System (ADS)

    Ogawa, T.; Sato, T.; Hashimoto, S.; Niita, K.

    2013-09-01

    The fragmentation cross-sections of relativistic energy nucleus-nucleus collisions were analyzed using the statistical multi-fragmentation model (SMM) incorporated into the Monte Carlo radiation transport code Particle and Heavy Ion Transport code System (PHITS). Comparison with the literature data showed that PHITS-SMM reproduces fragmentation cross-sections of heavy nuclei at relativistic energies better than the original PHITS by up to two orders of magnitude. It was also found that SMM does not degrade the neutron production cross-sections in heavy ion collisions or the fragmentation cross-sections of light nuclei, for which SMM has not been benchmarked. Therefore, SMM is a robust model that can supplement conventional nucleus-nucleus reaction models, enabling more accurate prediction of fragmentation cross-sections.

  7. Modeling and Simulation of Explosively Driven Electromechanical Devices

    NASA Astrophysics Data System (ADS)

    Demmie, Paul N.

    2002-07-01

    Components that store electrical energy in ferroelectric materials and produce currents when their permittivity is explosively reduced are used in a variety of applications. The modeling and simulation of such devices is a challenging problem since one has to represent the coupled physics of detonation, shock propagation, and electromagnetic field generation. The high fidelity modeling and simulation of complicated electromechanical devices was not feasible prior to having the Accelerated Strategic Computing Initiative (ASCI) computers and the ASCI developed codes at Sandia National Laboratories (SNL). The EMMA computer code is used to model such devices and simulate their operation. In this paper, I discuss the capabilities of the EMMA code for the modeling and simulation of one such electromechanical device, a slim-loop ferroelectric (SFE) firing set.

  8. Modeling emission lag after photoexcitation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jensen, Kevin L.; Petillo, John J.; Ovtchinnikov, Serguei

    A theoretical model of delayed emission following photoexcitation from metals and semiconductors is given. Its numerical implementation is designed for beam optics codes used to model photocathodes in rf photoinjectors. The model extends the Moments approach for predicting photocurrent and mean transverse energy as moments of an emitted electron distribution by incorporating time of flight and scattering events that result in emission delay on a sub-picosecond level. The model accounts for a dynamic surface extraction field and changes in the energy distribution and time of emission as a consequence of the laser penetration depth and multiple scattering events during transport. Usage in the Particle-in-Cell code MICHELLE to predict the bunch shape and duration with or without laser jitter is given. The consequences of delayed emission effects for ultra-short pulses are discussed.

  9. Modeling emission lag after photoexcitation

    DOE PAGES

    Jensen, Kevin L.; Petillo, John J.; Ovtchinnikov, Serguei; ...

    2017-10-28

    A theoretical model of delayed emission following photoexcitation from metals and semiconductors is given. Its numerical implementation is designed for beam optics codes used to model photocathodes in rf photoinjectors. The model extends the Moments approach for predicting photocurrent and mean transverse energy as moments of an emitted electron distribution by incorporating time of flight and scattering events that result in emission delay on a sub-picosecond level. The model accounts for a dynamic surface extraction field and changes in the energy distribution and time of emission as a consequence of the laser penetration depth and multiple scattering events during transport. Usage in the Particle-in-Cell code MICHELLE to predict the bunch shape and duration with or without laser jitter is given. The consequences of delayed emission effects for ultra-short pulses are discussed.

  10. A model for the accurate computation of the lateral scattering of protons in water

    NASA Astrophysics Data System (ADS)

    Bellinzona, E. V.; Ciocca, M.; Embriaco, A.; Ferrari, A.; Fontana, A.; Mairani, A.; Parodi, K.; Rotondi, A.; Sala, P.; Tessonnier, T.

    2016-02-01

    A pencil beam model for the calculation of the lateral scattering in water of protons for any therapeutic energy and depth is presented. It is based on the full Molière theory, taking into account the energy loss and the effects of mixtures and compounds. Concerning the electromagnetic part, the model has no free parameters and is in very good agreement with the FLUKA Monte Carlo (MC) code. The effects of the nuclear interactions are parametrized with a two-parameter tail function, adjusted on MC data calculated with FLUKA. The model, after the convolution with the beam and the detector response, is in agreement with recent proton data in water from HIT. The model gives results with the same accuracy of the MC codes based on Molière theory, with a much shorter computing time.
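
    A deliberately simplified sketch of the core-plus-tail structure described above is given below: a Gaussian core stands in for the (parameter-free) Molière electromagnetic part, and a two-parameter function adds the nuclear tail. The tail shape and all parameter values here are placeholders, not the fitted form used by the authors.

      import numpy as np

      def lateral_profile(r_cm, sigma_cm, w_nuc, b_cm, p):
          """Toy lateral profile: Gaussian core (standing in for the Moliere part)
          plus a two-parameter nuclear tail with relative weight w_nuc."""
          core = np.exp(-0.5 * (r_cm / sigma_cm) ** 2) / (2.0 * np.pi * sigma_cm ** 2)
          tail = 1.0 / (1.0 + (r_cm / b_cm) ** p)              # placeholder tail shape
          tail /= np.trapz(2.0 * np.pi * r_cm * tail, r_cm)    # normalize over the sampled radii
          return (1.0 - w_nuc) * core + w_nuc * tail

      r = np.linspace(1e-3, 10.0, 500)                          # radial distance [cm]
      profile = lateral_profile(r, sigma_cm=0.4, w_nuc=0.1, b_cm=1.5, p=3.5)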

  11. A model for the accurate computation of the lateral scattering of protons in water.

    PubMed

    Bellinzona, E V; Ciocca, M; Embriaco, A; Ferrari, A; Fontana, A; Mairani, A; Parodi, K; Rotondi, A; Sala, P; Tessonnier, T

    2016-02-21

    A pencil beam model for the calculation of the lateral scattering in water of protons for any therapeutic energy and depth is presented. It is based on the full Molière theory, taking into account the energy loss and the effects of mixtures and compounds. Concerning the electromagnetic part, the model has no free parameters and is in very good agreement with the FLUKA Monte Carlo (MC) code. The effects of the nuclear interactions are parametrized with a two-parameter tail function, adjusted on MC data calculated with FLUKA. The model, after the convolution with the beam and the detector response, is in agreement with recent proton data in water from HIT. The model gives results with the same accuracy of the MC codes based on Molière theory, with a much shorter computing time.

  12. An Energy-Efficient Compressive Image Coding for Green Internet of Things (IoT).

    PubMed

    Li, Ran; Duan, Xiaomeng; Li, Xu; He, Wei; Li, Yanling

    2018-04-17

    Aimed at the low energy consumption required by the Green Internet of Things (IoT), this paper presents an energy-efficient compressive image coding scheme, which provides a compressive encoder and a real-time decoder according to Compressive Sensing (CS) theory. The compressive encoder adaptively measures each image block based on the block-based gradient field, which models the distribution of block sparsity, and the real-time decoder linearly reconstructs each image block through a projection matrix learned by a Minimum Mean Square Error (MMSE) criterion. Both the encoder and decoder have low computational complexity, so that they consume only a small amount of energy. Experimental results show that the proposed scheme not only has low encoding and decoding complexity compared with traditional methods, but also provides good objective and subjective reconstruction quality. In particular, it presents better time-distortion performance than JPEG. Therefore, the proposed compressive image coding is a potential energy-efficient scheme for Green IoT.
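
    The encoder/decoder split described above can be illustrated with a short sketch: random block measurements y = Phi x on the sensing side, and a linear reconstruction matrix fitted offline to training blocks by a regularized minimum-mean-square-error criterion, so that decoding is a single matrix multiply. The block size, measurement count, random training data, and regularization below are all assumptions, not the authors' settings (real training would use natural-image blocks).

      import numpy as np

      rng = np.random.default_rng(0)
      B, M = 16, 64                    # block side and measurements per 16x16 block (assumed)
      N = B * B

      Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # random measurement matrix (encoder)

      X = rng.standard_normal((N, 5000))               # stand-in for a training set of blocks
      Y = Phi @ X                                      # their compressive measurements

      # Linear MMSE-style decoder: P minimizes ||X - P Y||_F^2 + lam ||P||_F^2,
      # giving P = X Y^T (Y Y^T + lam I)^(-1); decoding is then x_hat = P y.
      lam = 1e-3
      P = X @ Y.T @ np.linalg.inv(Y @ Y.T + lam * np.eye(M))

      x = rng.standard_normal(N)       # one "image block" to encode
      y = Phi @ x                      # encoder: a single matrix-vector product
      x_hat = P @ y                    # decoder: a single matrix-vector product
      print("relative error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))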

  13. Voltage-dependent K+ channels improve the energy efficiency of signalling in blowfly photoreceptors

    PubMed Central

    2017-01-01

    Voltage-dependent conductances in many spiking neurons are tuned to reduce action potential energy consumption, so improving the energy efficiency of spike coding. However, the contribution of voltage-dependent conductances to the energy efficiency of analogue coding, by graded potentials in dendrites and non-spiking neurons, remains unclear. We investigate the contribution of voltage-dependent conductances to the energy efficiency of analogue coding by modelling blowfly R1-6 photoreceptor membrane. Two voltage-dependent delayed rectifier K+ conductances (DRs) shape the membrane's voltage response and contribute to light adaptation. They make two types of energy saving. By reducing membrane resistance upon depolarization they convert the cheap, low bandwidth membrane needed in dim light to the expensive high bandwidth membrane needed in bright light. This investment of energy in bandwidth according to functional requirements can halve daily energy consumption. Second, DRs produce negative feedback that reduces membrane impedance and increases bandwidth. This negative feedback allows an active membrane with DRs to consume at least 30% less energy than a passive membrane with the same capacitance and bandwidth. Voltage-dependent conductances in other non-spiking neurons, and in dendrites, might be organized to make similar savings. PMID:28381642

  14. Voltage-dependent K+ channels improve the energy efficiency of signalling in blowfly photoreceptors.

    PubMed

    Heras, Francisco J H; Anderson, John; Laughlin, Simon B; Niven, Jeremy E

    2017-04-01

    Voltage-dependent conductances in many spiking neurons are tuned to reduce action potential energy consumption, so improving the energy efficiency of spike coding. However, the contribution of voltage-dependent conductances to the energy efficiency of analogue coding, by graded potentials in dendrites and non-spiking neurons, remains unclear. We investigate the contribution of voltage-dependent conductances to the energy efficiency of analogue coding by modelling blowfly R1-6 photoreceptor membrane. Two voltage-dependent delayed rectifier K+ conductances (DRs) shape the membrane's voltage response and contribute to light adaptation. They make two types of energy saving. By reducing membrane resistance upon depolarization they convert the cheap, low bandwidth membrane needed in dim light to the expensive high bandwidth membrane needed in bright light. This investment of energy in bandwidth according to functional requirements can halve daily energy consumption. Second, DRs produce negative feedback that reduces membrane impedance and increases bandwidth. This negative feedback allows an active membrane with DRs to consume at least 30% less energy than a passive membrane with the same capacitance and bandwidth. Voltage-dependent conductances in other non-spiking neurons, and in dendrites, might be organized to make similar savings. © 2017 The Author(s).

  15. Identification of Trends into Dose Calculations for Astronauts through Performing Sensitivity Analysis on Calculational Models Used by the Radiation Health Office

    NASA Technical Reports Server (NTRS)

    Adams, Thomas; VanBaalen, Mary

    2009-01-01

    The Radiation Health Office (RHO) determines each astronaut's cancer risk using models that associate that risk with the radiation dose astronauts receive from spaceflight missions. The baryon transport code BRYNTRN, the high charge (Z) and energy transport code HZETRN, and computer risk models are used to determine the effective dose received by astronauts in Low Earth orbit (LEO). This code uses an approximation of the Boltzmann transport equation. The purpose of the project is to run this code for various International Space Station (ISS) flight parameters in order to gain a better understanding of how this code responds to different scenarios. The project will determine how variations in one set of parameters, such as the point in the solar cycle and the altitude, can affect the radiation exposure of astronauts during ISS missions. This project will benefit NASA by improving mission dosimetry.

  16. Country Report on Building Energy Codes in Canada

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shui, Bin; Evans, Meredydd

    2009-04-06

    This report is part of a series of reports on building energy efficiency codes in countries associated with the Asian Pacific Partnership (APP) - Australia, South Korea, Japan, China, India, and the United States of America. This report gives an overview of the development of building energy codes in Canada, including national energy policies related to building energy codes, the history of building energy codes, and recent national projects and activities to promote building energy codes. The report also provides a review of current building energy codes (such as building envelope, HVAC, lighting, and water heating) for commercial and residential buildings in Canada.

  17. Linear energy transfer in water phantom within SHIELD-HIT transport code

    NASA Astrophysics Data System (ADS)

    Ergun, A.; Sobolevsky, N.; Botvina, A. S.; Buyukcizmeci, N.; Latysheva, L.; Ogul, R.

    2017-02-01

    The effect of irradiation in tissue is important in hadron therapy for dose measurement and treatment planning. This biological effect is defined by an equivalent dose H, which depends on the Linear Energy Transfer (LET). Usually, H can be expressed in terms of the absorbed dose D and the quality factor K of the radiation under consideration. In the literature, various types of transport codes have been used for modeling and simulation of the interaction of beams of protons and heavier ions with tissue-equivalent materials. In this presentation we used the SHIELD-HIT code to simulate the decomposition of the absorbed dose by LET in water for 16O beams. A more detailed description of the capabilities of the SHIELD-HIT code can be found in the literature.
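
    The H = K x D relationship quoted above is often evaluated with the ICRP 60 quality factor Q(L); the sketch below folds a made-up absorbed-dose-by-LET decomposition (the kind of output such a simulation provides) into an equivalent dose. The LET bins and dose values are illustrative only.

      import numpy as np

      def q_icrp60(let_kev_um):
          """ICRP 60 quality factor as a function of unrestricted LET in water [keV/um]."""
          L = np.asarray(let_kev_um, dtype=float)
          return np.where(L < 10.0, 1.0,
                 np.where(L <= 100.0, 0.32 * L - 2.2, 300.0 / np.sqrt(L)))

      # Hypothetical decomposition of the absorbed dose by LET bin.
      let_bins = np.array([2.0, 8.0, 20.0, 60.0, 150.0])    # keV/um
      dose_gy  = np.array([0.40, 0.30, 0.15, 0.10, 0.05])   # Gy deposited in each bin

      h_sv = np.sum(q_icrp60(let_bins) * dose_gy)           # equivalent dose H = sum Q(L) * D(L)
      print(f"absorbed dose {dose_gy.sum():.2f} Gy -> equivalent dose {h_sv:.2f} Sv")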

  18. High Energy Laser Propagation in Various Atmospheric Conditions Utilizing a New Accelerated Scaling Code

    DTIC Science & Technology

    2014-06-01

    [Full abstract not available. The report's table of contents indicates coverage of MODTRAN (preset atmospheric models, aerosol models, cloud and rain models), model validation, and results.]

  19. The MCNP6 Analytic Criticality Benchmark Suite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.

    2016-06-16

    Analytical benchmarks provide an invaluable tool for verifying computer codes used to simulate neutron transport. Several collections of analytical benchmark problems [1-4] are used routinely in the verification of production Monte Carlo codes such as MCNP® [5,6]. Verification of a computer code is a necessary prerequisite to the more complex validation process. The verification process confirms that a code performs its intended functions correctly. The validation process involves determining the absolute accuracy of code results vs. nature. In typical validations, results are computed for a set of benchmark experiments using a particular methodology (code, cross-section data with uncertainties, and modeling) and compared to the measured results from the set of benchmark experiments. The validation process determines bias, bias uncertainty, and possibly additional margins. Verification is generally performed by the code developers, while validation is generally performed by code users for a particular application space. The VERIFICATION_KEFF suite of criticality problems [1,2] was originally a set of 75 criticality problems found in the literature for which exact analytical solutions are available. Even though the spatial and energy detail is necessarily limited in analytical benchmarks, typically to a few regions or energy groups, the exact solutions obtained can be used to verify that the basic algorithms, mathematics, and methods used in complex production codes perform correctly. The present work has focused on revisiting this benchmark suite. A thorough review of the problems resulted in discarding some of them as not suitable for MCNP benchmarking. For the remaining problems, many of them were reformulated to permit execution in either multigroup mode or in the normal continuous-energy mode for MCNP. Execution of the benchmarks in continuous-energy mode provides a significant advance to MCNP verification methods.

  20. EMPIRE: A code for nuclear astrophysics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palumbo, A.

    The nuclear reaction code EMPIRE is presented as a useful tool for nuclear astrophysics. EMPIRE combines a variety of reaction models with a comprehensive library of input parameters, providing a diversity of options for the user. With the exclusion of direct-semidirect capture, all reaction mechanisms relevant to the nuclear astrophysics energy range of interest are implemented in the code. Comparisons to experimental data show consistent agreement for all relevant channels.

  1. Space Applications of the FLUKA Monte-Carlo Code: Lunar and Planetary Exploration

    NASA Technical Reports Server (NTRS)

    Anderson, V.; Ballarini, F.; Battistoni, G.; Campanella, M.; Carboni, M.; Cerutti, F.; Elkhayari, N.; Empl, A.; Fasso, A.; Ferrari, A.

    2004-01-01

    NASA has recognized the need for making additional heavy-ion collision measurements at the U.S. Brookhaven National Laboratory in order to support further improvement of several particle physics transport-code models for space exploration applications. FLUKA has been identified as one of these codes and we will review the nature and status of this investigation as it relates to high-energy heavy-ion physics.

  2. CPMC-Lab: A MATLAB package for Constrained Path Monte Carlo calculations

    NASA Astrophysics Data System (ADS)

    Nguyen, Huy; Shi, Hao; Xu, Jie; Zhang, Shiwei

    2014-12-01

    We describe CPMC-Lab, a MATLAB program for the constrained-path and phaseless auxiliary-field Monte Carlo methods. These methods have allowed applications ranging from the study of strongly correlated models, such as the Hubbard model, to ab initio calculations in molecules and solids. The present package implements the full ground-state constrained-path Monte Carlo (CPMC) method in MATLAB with a graphical interface, using the Hubbard model as an example. The package can perform calculations in finite supercells in any dimensions, under periodic or twist boundary conditions. Importance sampling and all other algorithmic details of a total energy calculation are included and illustrated. This open-source tool allows users to experiment with various model and run parameters and visualize the results. It provides a direct and interactive environment to learn the method and study the code with minimal overhead for setup. Furthermore, the package can be easily generalized for auxiliary-field quantum Monte Carlo (AFQMC) calculations in many other models for correlated electron systems, and can serve as a template for developing a production code for AFQMC total energy calculations in real materials. Several illustrative studies are carried out in one- and two-dimensional lattices on total energy, kinetic energy, potential energy, and charge- and spin-gaps.
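
    As a small illustration of the kind of supercell setup the package handles (written here in Python rather than MATLAB, with assumed parameters), the sketch below builds the one-body hopping matrix of a 1-D Hubbard chain with a twisted periodic boundary condition and evaluates the non-interacting (U = 0) ground-state kinetic energy; the CPMC method itself then treats the interacting problem stochastically.

      import numpy as np

      def hubbard_hopping_1d(n_sites, t=1.0, twist=0.0):
          """One-body hopping matrix of a 1-D Hubbard chain; the boundary link
          carries the twist phase exp(i*twist) (twist = 0 is plain periodic)."""
          H = np.zeros((n_sites, n_sites), dtype=complex)
          for i in range(n_sites - 1):
              H[i, i + 1] = H[i + 1, i] = -t
          H[n_sites - 1, 0] += -t * np.exp(1j * twist)
          H[0, n_sites - 1] += -t * np.exp(-1j * twist)
          return H

      # Non-interacting ground-state kinetic energy for 4 up + 4 down electrons on 16 sites.
      eps = np.sort(np.linalg.eigvalsh(hubbard_hopping_1d(16, t=1.0, twist=np.pi / 7)))
      print("U = 0 kinetic energy:", 2.0 * eps[:4].sum())    # factor 2 for the two spin species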

  3. A user's manual for DELSOL3: A computer code for calculating the optical performance and optimal system design for solar thermal central receiver plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kistler, B.L.

    DELSOL3 is a revised and updated version of the DELSOL2 computer program (SAND81-8237) for calculating collector field performance and layout and optimal system design for solar thermal central receiver plants. The code consists of a detailed model of the optical performance, a simpler model of the non-optical performance, an algorithm for field layout, and a searching algorithm to find the best system design based on energy cost. The latter two features are coupled to a cost model of central receiver components and an economic model for calculating energy costs. The code can handle flat, focused and/or canted heliostats, and external cylindrical, multi-aperture cavity, and flat plate receivers. The program optimizes the tower height, receiver size, field layout, heliostat spacings, and tower position at user specified power levels subject to flux limits on the receiver and land constraints for field layout. DELSOL3 maintains the advantages of speed and accuracy which are characteristics of DELSOL2.

  4. Flyer Target Acceleration and Energy Transfer at its Collision with Massive Targets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borodziuk, S.; Kasperczuk, A.; Pisarczyk, T.

    2006-01-15

    Numerical modelling was aimed at simulating the successive events resulting from the interaction of a laser beam with single and double targets. It was performed by means of the 2D Lagrangian hydrodynamics code ATLANT-HE. This code is based on a one-fluid, two-temperature model of plasma with electron and ion heat conductivity. The code has an advanced treatment of laser light propagation and absorption. This numerical modelling corresponds to the experiment, which was carried out with the use of the PALS facility. Two types of planar solid targets, i.e. single massive Al slabs and double targets consisting of a 6 μm thick Al foil and an Al slab, were applied. The targets were irradiated by iodine laser pulses of two wavelengths: 1.315 and 0.438 μm. A pulse duration of 0.4 ns and a focal spot diameter of 250 μm at a laser energy of 130 J were used. The numerical modelling allowed us to obtain a more detailed description of shock wave propagation and crater formation.

  5. Modeling of impulsive propellant reorientation

    NASA Technical Reports Server (NTRS)

    Hochstein, John I.; Patag, Alfredo E.; Chato, David J.

    1988-01-01

    The impulsive propellant reorientation process is modeled using the Energy Calculations for Liquid Propellants in a Space Environment (ECLIPSE) code. A brief description of the process and the computational model is presented. Code validation is documented via comparison to experimentally derived data for small-scale tanks. Predictions of reorientation performance are presented for two tanks designed for use in flight experiments and for a proposed full-scale OTV tank. A new dimensionless parameter is developed to correlate reorientation performance in geometrically similar tanks. Its success is demonstrated.

  6. Modeling Laboratory Astrophysics Experiments using the CRASH code

    NASA Astrophysics Data System (ADS)

    Trantham, Matthew; Drake, R. P.; Grosskopf, Michael; Bauerle, Matthew; Kuranz, Carolyn; Keiter, Paul; Malamud, Guy; Crash Team

    2013-10-01

    The understanding of high energy density systems can be advanced by laboratory astrophysics experiments. Computer simulations can assist in the design and analysis of these experiments. The Center for Radiative Shock Hydrodynamics (CRASH) at the University of Michigan developed a code that has been used to design and analyze high-energy-density experiments on OMEGA, NIF, and other large laser facilities. This Eulerian code uses block-adaptive mesh refinement (AMR) with implicit multigroup radiation transport and electron heat conduction. This poster/talk will demonstrate some of the experiments the CRASH code has helped design or analyze, including radiative shock experiments, Kelvin-Helmholtz experiments, Rayleigh-Taylor experiments, plasma sheet experiments, and interacting jet experiments. This work is funded by the Predictive Sciences Academic Alliances Program in NNSA-ASC via grant DE-FC52-08NA28616, by the NNSA-DS and SC-OFES Joint Program in High-Energy-Density Laboratory Plasmas, grant number DE-FG52-09NA29548, and by the National Laser User Facility Program, grant number DE-NA0000850.

  7. Development and preliminary verification of the 3D core neutronic code: COCO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, H.; Mo, K.; Li, W.

    With its recent booming economic growth and mounting environmental concerns, China is proactively pushing forward nuclear power development and encouraging the tapping of clean energy. Under this situation, CGNPC, as one of the largest energy enterprises in China, is planning to develop its own nuclear-related technology in order to support the growing number of nuclear plants either under construction or in operation. This paper introduces the recent progress in software development for CGNPC. The focus is placed on the physical models and preliminary verification results from the recent development of the 3D core neutronic code COCO. In the COCO code, the non-linear Green's function method is employed to calculate the neutron flux. In order to use the discontinuity factor, the Neumann (second kind) boundary condition is utilized in the Green's function nodal method. Additionally, the COCO code also includes the necessary physical models, e.g. a single-channel thermal-hydraulic module, a burnup module, a pin power reconstruction module and a cross-section interpolation module. The preliminary verification result shows that the COCO code is sufficient for reactor core design and analysis for pressurized water reactors (PWR). (authors)

  8. A Monte Carlo simulation code for calculating damage and particle transport in solids: The case for electron-bombarded solids for electron energies up to 900 MeV

    NASA Astrophysics Data System (ADS)

    Yan, Qiang; Shao, Lin

    2017-03-01

    Current popular Monte Carlo simulation codes for simulating electron bombardment in solids focus primarily on electron trajectories, instead of electron-induced displacements. Here we report a Monte Carlo simulation code, DEEPER (damage creation and particle transport in matter), developed for calculating 3-D distributions of displacements produced by electrons of incident energies up to 900 MeV. Electron elastic scattering is calculated by using full Mott cross sections for high accuracy, and primary knock-on atom (PKA)-induced damage cascades are modeled using the ZBL potential. We compare and show large differences in the 3-D distributions of displacements and electrons in electron-irradiated Fe. The distributions of total displacements are similar to those of PKAs at low electron energies, but they are substantially different for higher energy electrons due to the shifting of PKA energy spectra towards higher energies. The study is important for evaluating electron-induced radiation damage, for applications using high flux electron beams to intentionally introduce defects, and for using an electron analysis beam for microstructural characterization of nuclear materials.
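
    For orientation, displacement production from a PKA spectrum is often estimated with the standard NRT relation, N_d = 0.8 T_dam / (2 E_d) above the threshold region; the sketch below applies it to a few made-up damage energies with an assumed threshold displacement energy of 40 eV for Fe. DEEPER's explicit cascade treatment is, of course, more detailed than this.

      import numpy as np

      def nrt_displacements(damage_energy_ev, e_d_ev=40.0):
          """NRT estimate of stable displacements from a PKA of the given damage energy,
          with threshold displacement energy E_d."""
          T = np.asarray(damage_energy_ev, dtype=float)
          return np.where(T < e_d_ev, 0.0,
                 np.where(T < 2.5 * e_d_ev, 1.0, 0.8 * T / (2.0 * e_d_ev)))

      # Illustrative PKA damage energies [eV] and the displacements they imply.
      print(nrt_displacements([25.0, 60.0, 500.0, 5.0e3, 2.0e4], e_d_ev=40.0))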

  9. Manual of phosphoric acid fuel cell power plant optimization model and computer program

    NASA Technical Reports Server (NTRS)

    Lu, C. Y.; Alkasab, K. A.

    1984-01-01

    An optimized cost and performance model for a phosphoric acid fuel cell power plant system was derived and developed into a modular FORTRAN computer code. Cost, energy, mass, and electrochemical analyses were combined to develop a mathematical model for optimizing the steam-to-methane ratio in the reformer, the hydrogen utilization in the PAFC, and the number of plates per stack. The nonlinear programming code COMPUTE was used to solve this model, in which the method of a mixed penalty function combined with Hooke and Jeeves pattern search was chosen for this specific optimization problem.
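
    A compact sketch of the optimization strategy named here, a penalty function minimized by Hooke and Jeeves pattern search, is given below on a toy constrained problem; it illustrates the method only, not the COMPUTE code or the fuel-cell model.

      import numpy as np

      def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=10_000):
          """Hooke & Jeeves pattern search: coordinate-wise exploratory moves,
          followed by a pattern move through the improved point."""
          def explore(base, s):
              x = base.copy()
              for i in range(len(x)):
                  for d in (+s, -s):
                      trial = x.copy()
                      trial[i] += d
                      if f(trial) < f(x):
                          x = trial
                          break
              return x

          x = np.asarray(x0, dtype=float)
          while step >= tol and max_iter > 0:
              max_iter -= 1
              x_new = explore(x, step)
              if f(x_new) < f(x):
                  x_pat = explore(x_new + (x_new - x), step)   # pattern move
                  x = x_pat if f(x_pat) < f(x_new) else x_new
              else:
                  step *= shrink                               # no improvement: refine mesh
          return x

      # Toy problem: minimize (x-3)^2 + (y-2)^2 subject to x + y <= 4, handled with a
      # quadratic exterior penalty (a simplified stand-in for a mixed penalty method).
      def penalized(z, mu=100.0):
          x, y = z
          return (x - 3.0) ** 2 + (y - 2.0) ** 2 + mu * max(0.0, x + y - 4.0) ** 2

      print(hooke_jeeves(penalized, [0.0, 0.0]))   # approaches the constrained optimum near (2.5, 1.5)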

  10. Energy Codes at a Glance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cole, Pamala C.; Richman, Eric E.

    2008-09-01

    Feeling dim from energy code confusion? Read on to give your inspections a charge. The U.S. Department of Energy’s Building Energy Codes Program addresses hundreds of inquiries from the energy codes community every year. This article offers clarification for topics of confusion submitted to BECP Technical Support that are of interest to electrical inspectors, focusing on the residential and commercial energy code requirements based on the most recently published 2006 International Energy Conservation Code® and ANSI/ASHRAE/IESNA Standard 90.1-2004.

  11. Technical Support Document for Version 3.9.0 of the COMcheck Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bartlett, Rosemarie; Connell, Linda M.; Gowri, Krishnan

    2011-09-01

    COMcheck provides an optional way to demonstrate compliance with commercial and high-rise residential building energy codes. Commercial buildings include all use groups except single family and multifamily not over three stories in height. COMcheck was originally based on ANSI/ASHRAE/IES Standard 90.1-1989 (Standard 90.1-1989) requirements and is intended for use with various codes based on Standard 90.1, including the Codification of ASHRAE/IES Standard 90.1-1989 (90.1-1989 Code) (ASHRAE 1989a, 1993b) and ASHRAE/IESNA Standard 90.1-1999 (Standard 90.1-1999). This includes jurisdictions that have adopted the 90.1-1989 Code, Standard 90.1-1989, Standard 90.1-1999, or their own code based on one of these. We view Standard 90.1-1989 and the 90.1-1989 Code as having equivalent technical content and have used both as source documents in developing COMcheck. This technical support document (TSD) is designed to explain the technical basis for the COMcheck software as originally developed based on the ANSI/ASHRAE/IES Standard 90.1-1989 (Standard 90.1-1989). Documentation for other national model codes and standards and specific state energy codes supported in COMcheck has been added to this report as appendices. These appendices are intended to provide technical documentation for features specific to the supported codes and for any changes made for state-specific codes that differ from the standard features that support compliance with the national model codes and standards. Beginning with COMcheck version 3.8.0, support for 90.1-1989, 90.1-1999, and the 1998 IECC is no longer included, but those sections remain in this document for reference purposes.

  12. Technical Support Document for Version 3.9.1 of the COMcheck Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bartlett, Rosemarie; Connell, Linda M.; Gowri, Krishnan

    2012-09-01

    COMcheck provides an optional way to demonstrate compliance with commercial and high-rise residential building energy codes. Commercial buildings include all use groups except single family and multifamily not over three stories in height. COMcheck was originally based on ANSI/ASHRAE/IES Standard 90.1-1989 (Standard 90.1-1989) requirements and is intended for use with various codes based on Standard 90.1, including the Codification of ASHRAE/IES Standard 90.1-1989 (90.1-1989 Code) (ASHRAE 1989a, 1993b) and ASHRAE/IESNA Standard 90.1-1999 (Standard 90.1-1999). This includes jurisdictions that have adopted the 90.1-1989 Code, Standard 90.1-1989, Standard 90.1-1999, or their own code based on one of these. We view Standard 90.1-1989 and the 90.1-1989 Code as having equivalent technical content and have used both as source documents in developing COMcheck. This technical support document (TSD) is designed to explain the technical basis for the COMcheck software as originally developed based on the ANSI/ASHRAE/IES Standard 90.1-1989 (Standard 90.1-1989). Documentation for other national model codes and standards and specific state energy codes supported in COMcheck has been added to this report as appendices. These appendices are intended to provide technical documentation for features specific to the supported codes and for any changes made for state-specific codes that differ from the standard features that support compliance with the national model codes and standards. Beginning with COMcheck version 3.8.0, support for 90.1-1989, 90.1-1999, and the 1998 IECC, and beginning with version 3.9.0, support for the 2000 and 2001 IECC, is no longer included, but those sections remain in this document for reference purposes.

  13. Reducing EnergyPlus Run Time For Code Compliance Tools

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Athalye, Rahul A.; Gowri, Krishnan; Schultz, Robert W.

    2014-09-12

    Integration of the EnergyPlus™ simulation engine into performance-based code compliance software raises a concern about simulation run time, which impacts timely feedback of compliance results to the user. EnergyPlus annual simulations for proposed and code-baseline building models, and mechanical equipment sizing, result in simulation run times beyond acceptable limits. This paper presents a study that compares the results of a shortened simulation time period using 4 weeks of hourly weather data (one per quarter) to an annual simulation using the full 52 weeks of hourly weather data. Three representative building types based on DOE Prototype Building Models and three climate zones were used for determining the validity of using a shortened simulation run period. Further sensitivity analysis and run time comparisons were made to evaluate the robustness and run time savings of this approach. The results of this analysis show that the shortened simulation run period provides compliance index calculations within 1% of those predicted using annual simulation results, and typically saves about 75% of simulation run time.
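
    One plausible way to read the shortened-run-period idea, sketched below with synthetic numbers, is to simulate one week per quarter and scale each weekly total to its roughly 13-week quarter; the paper itself compares compliance indices rather than raw scaled energy, and the week selection and scaling here are assumptions.

      import numpy as np

      HOURS_PER_WEEK = 24 * 7

      def annual_estimate_from_quarter_weeks(hourly_energy, week_start_hours):
          """Scale four simulated weeks (one per quarter) to an annual total."""
          weekly = [hourly_energy[h:h + HOURS_PER_WEEK].sum() for h in week_start_hours]
          return 13.0 * sum(weekly)

      # Synthetic stand-in for 8760 hourly results from a full annual EnergyPlus run.
      rng = np.random.default_rng(1)
      hourly = rng.uniform(5.0, 20.0, size=8760)               # kWh per hour (made up)
      starts = [HOURS_PER_WEEK * w for w in (2, 15, 28, 41)]   # one week per quarter (assumed)

      print("annual (full run):   ", round(hourly.sum()))
      print("annual (4-week est.):", round(annual_estimate_from_quarter_weeks(hourly, starts)))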

  14. Groundwater flow with energy transport and water-ice phase change: Numerical simulations, benchmarks, and application to freezing in peat bogs

    USGS Publications Warehouse

    McKenzie, J.M.; Voss, C.I.; Siegel, D.I.

    2007-01-01

    In northern peatlands, subsurface ice formation is an important process that can control heat transport, groundwater flow, and biological activity. Temperature was measured over one and a half years in a vertical profile in the Red Lake Bog, Minnesota. To successfully simulate the transport of heat within the peat profile, the U.S. Geological Survey's SUTRA computer code was modified. The modified code simulates fully saturated, coupled porewater-energy transport, with freezing and melting porewater, and includes proportional heat capacity and thermal conductivity of water and ice, decreasing matrix permeability due to ice formation, and latent heat. The model is verified by correctly simulating the Lunardini analytical solution for ice formation in a porous medium with a mixed ice-water zone. The modified SUTRA model correctly simulates the temperature and ice distributions in the peat bog. Two possible benchmark problems for groundwater and energy transport with ice formation and melting are proposed that may be used by other researchers for code comparison. © 2006 Elsevier Ltd. All rights reserved.

  15. Potts glass reflection of the decoding threshold for qudit quantum error correcting codes

    NASA Astrophysics Data System (ADS)

    Jiang, Yi; Kovalev, Alexey A.; Pryadko, Leonid P.

    We map the maximum likelihood decoding threshold for qudit quantum error correcting codes to the multicritical point in generalized Potts gauge glass models, extending the map constructed previously for qubit codes. An n-qudit quantum LDPC code, where a qudit can be involved in up to m stabilizer generators, corresponds to a ℤd Potts model with n interaction terms which can couple up to m spins each. We analyze general properties of the phase diagram of the constructed model, give several bounds on the location of the transitions, bounds on the energy density of extended defects (non-local analogs of domain walls), and discuss the correlation functions which can be used to distinguish different phases in the original and the dual models. This research was supported in part by the Grants: NSF PHY-1415600 (AAK), NSF PHY-1416578 (LPP), and ARO W911NF-14-1-0272 (LPP).
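
    A schematic of the kind of ℤ_d model produced by such a mapping, in illustrative notation rather than the authors' exact conventions, is

      H = -\sum_{b=1}^{n} J_b\,\delta_d\!\Big(\sum_{i} c_{b i}\,s_i\Big),
      \qquad s_i \in \mathbb{Z}_d,\quad
      \delta_d(x) = \begin{cases} 1, & x \equiv 0 \ (\mathrm{mod}\ d)\\ 0, & \text{otherwise},\end{cases}

    where each of the n interaction terms b (one per qudit) couples the up-to-m spins s_i associated with the stabilizer generators acting on that qudit, and quenched disorder in the couplings J_b encodes the error model.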

  16. Nuclear Energy Knowledge and Validation Center (NEKVaC) Needs Workshop Summary Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gougar, Hans

    2015-02-01

    The Department of Energy (DOE) has made significant progress in developing simulation tools that predict the behavior of nuclear systems with greater accuracy and in extending the capability to predict the behavior of these systems outside the standard range of applications. These analytical tools require a more complex array of validation tests to accurately simulate the physics and the multiple length and time scales involved. Results from modern simulations will allow experiment designers to narrow the range of conditions needed to bound system behavior and to optimize the deployment of instrumentation, limiting the breadth and cost of a campaign. Modern validation, verification and uncertainty quantification (VVUQ) techniques enable analysts to extract information from experiments in a systematic manner and provide users with a quantified uncertainty estimate. Unfortunately, the capability to perform experiments that would take full advantage of the formalisms of these modern codes has progressed relatively little (with some notable exceptions in fuels and thermal-hydraulics); the majority of the experimental data available today is the "historic" data accumulated over the last decades of nuclear systems R&D. A validated code-model is a tool for users; an unvalidated code-model is useful for code developers to gain understanding, publish research results, attract funding, and so on. As nuclear analysis codes have become more sophisticated, so have the measurement and validation methods and the challenges that confront them. A successful yet cost-effective validation effort requires expertise possessed only by a few, resources possessed only by the well-capitalized (or a willing collective), and a clear, well-defined objective (validating a code that is developed to satisfy the needs of an actual user). To that end, the Idaho National Laboratory established the Nuclear Energy Knowledge and Validation Center (NEKVaC or the ‘Center’) to address the challenges of modern code validation and to manage the knowledge from past, current, and future experimental campaigns. By pulling together the best minds involved in code development, experiment design, and validation to establish and disseminate best practices and new techniques, the Center will be a resource for validation efforts across industry, DOE programs, and academia.

  17. Comparisons of anomalous and collisional radial transport with a continuum kinetic edge code

    NASA Astrophysics Data System (ADS)

    Bodi, K.; Krasheninnikov, S.; Cohen, R.; Rognlien, T.

    2009-05-01

    Modeling of anomalous (turbulence-driven) radial transport in controlled-fusion plasmas is necessary for long-time transport simulations. Here the focus is on continuum kinetic edge codes such as the (2-D, 2-V) transport version of TEMPEST, NEO, and the code being developed by the Edge Simulation Laboratory, but the model also has wider application. Our previously developed anomalous diagonal transport matrix model with velocity-dependent convection and diffusion coefficients allows contact with typical fluid transport models (e.g., UEDGE). Results are presented that combine the anomalous transport model with collisional transport owing to ion drift orbits, utilizing a Krook collision operator that conserves density and energy. The relative magnitudes and possible synergistic effects of the two processes are compared for typical tokamak device parameters.
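
    Schematically, the ingredients described above can be written as (illustrative notation, not the exact operators used in the cited codes)

      \Gamma_r(v) = -D(v)\,\frac{\partial f}{\partial r} + V(v)\,f,
      \qquad
      C[f] = -\nu\left(f - f_M[n, T]\right),

    where D(v) and V(v) are the velocity-dependent diffusion and convection coefficients of the diagonal transport matrix, and the Maxwellian f_M is constructed from the local density and energy moments of f so that \int C[f]\,d^{3}v = 0 and \int \tfrac{1}{2} m v^{2}\, C[f]\,d^{3}v = 0, i.e., the Krook operator conserves density and energy.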

  18. The Development of a Nonequilibrium Radiative Heat Transfer Computational Model for High Altitude Entry Vehicle Flowfield Methods

    NASA Technical Reports Server (NTRS)

    Carlson, Leland A.

    1995-01-01

    This final report concisely summarizes the activities and accomplishments associated with the NASA grant and includes pertinent documents in an appendix. The project initially had one primary and several secondary objectives. The original primary objective was to couple the Texas A&M University (TAMU) local thermodynamic nonequilibrium (LTNE) radiation model into the NASA Johnson Space Center (JSC) nonequilibrium chemistry Euler equation entry vehicle flowfield code, INEQ3D. This model had previously been developed and verified under NASA Langley and NASA Johnson sponsorship as part of a viscous shock layer entry vehicle flowfield code. The secondary objectives were: (1) to investigate the necessity of including the radiative flux term in the vibrational-electron-electronic (VEE) energy equation as well as in the global energy equation, (2) to determine the importance of including the small net change in electronic energy between products and reactants which occurs during a chemical reaction, and (3) to study the effect of atom-atom impact ionization reactions on entry vehicle nonequilibrium flowfield chemistry and radiation. For each of these objectives, it was assumed that the code would be applicable to lunar return entry conditions, i.e., altitude above 75 km and velocity greater than 11 km/sec, where nonequilibrium chemistry and radiative heating phenomena are significant. In addition, it was tacitly assumed that as part of the project the code would be applied to a variety of flight conditions and geometries.

  19. Investigation of the external flow analysis for density measurements at high altitude. [shuttle upper atmosphere mass spectrometer experiment

    NASA Technical Reports Server (NTRS)

    Bienkowski, G. K.

    1983-01-01

    A Monte Carlo program was developed for modeling the flow field around the space shuttle in the vicinity of the shuttle upper atmosphere mass spectrometer experiment. The operation of the EXTERNAL code is summarized. Issues associated with geometric modeling of the shuttle nose region and with the modeling of intermolecular collisions, including rotational energy exchange, are discussed, along with a preliminary analysis of vibrational excitation and dissociation effects. The selection of trial runs is described and the parameters used for them are justified. The original version and the modified INTERNAL code for the entrance problem are reviewed. The code listing is included.

  20. Spin distributions and cross sections of evaporation residues in the 28Si+176Yb reaction

    NASA Astrophysics Data System (ADS)

    Sudarshan, K.; Tripathi, R.; Sodaye, S.; Sharma, S. K.; Pujari, P. K.; Gehlot, J.; Madhavan, N.; Nath, S.; Mohanto, G.; Mukul, I.; Jhingan, A.; Mazumdar, I.

    2017-02-01

    Background: Non-compound-nucleus fission in the preactinide region has been an active area of investigation in the recent past. Based on measurements of fission-fragment mass distributions in the fission of 202Po, populated by reactions with varying entrance-channel mass asymmetry, the onset of non-compound-nucleus fission was proposed to be around ZpZt ~ 1000 [Phys. Rev. C 77, 024606 (2008), 10.1103/PhysRevC.77.024606], where Zp and Zt are the projectile and target proton numbers, respectively. Purpose: The present paper is aimed at measuring the cross sections and spin distributions of evaporation residues in the 28Si+176Yb reaction (ZpZt=980) to investigate fusion hindrance which, in turn, would give information about the contribution from non-compound-nucleus fission in this reaction. Method: Evaporation-residue cross sections were measured in the beam energy range of 129-166 MeV using the hybrid recoil mass analyzer (HYRA) operated in gas-filled mode. Evaporation-residue cross sections were also measured by the recoil catcher technique followed by off-line γ-ray spectrometry at a few intermediate energies. γ-ray multiplicities of evaporation residues were measured to infer their spin distributions. The measurements were carried out using the NaI(Tl) detector-based 4π spin spectrometer from the Tata Institute of Fundamental Research, Mumbai, coupled to the HYRA. Results: Evaporation-residue cross sections were significantly lower than those calculated using the statistical model code pace2 [Phys. Rev. C 21, 230 (1980), 10.1103/PhysRevC.21.230] with the coupled-channel fusion model code ccfus [Comput. Phys. Commun. 46, 187 (1987), 10.1016/0010-4655(87)90045-2] at beam energies close to the entrance-channel Coulomb barrier. At higher beam energies, experimental cross sections were close to those predicted by the model. Average γ-ray multiplicities, and hence angular momentum values, of evaporation residues were in agreement with the ccfus + pace2 calculations within the experimental uncertainties at all beam energies. Conclusions: The deviation of evaporation-residue cross sections from the "fusion + statistical model" predictions at beam energies close to the entrance-channel Coulomb barrier indicates fusion hindrance at these energies, which would lead to non-compound-nucleus fission. However, the reasonable agreement of the average angular momentum values of evaporation residues at these beam energies with those calculated using the coupled-channel fusion model and the statistical model (ccfus + pace2) suggests that the fusion suppression at beam energies close to the entrance-channel Coulomb barrier, where the populated l waves are low, is not l dependent.

  1. Finite element modelling of crash response of composite aerospace sub-floor structures

    NASA Astrophysics Data System (ADS)

    McCarthy, M. A.; Harte, C. G.; Wiggenraad, J. F. M.; Michielsen, A. L. P. J.; Kohlgrüber, D.; Kamoulakos, A.

    Composite energy-absorbing structures for use in aircraft are being studied within a European Commission research programme (CRASURV - Design for Crash Survivability). One of the aims of the project is to evaluate the current capabilities of crashworthiness simulation codes for composites modelling. This paper focuses on the computational analysis, using explicit finite element analysis, of a number of quasi-static and dynamic tests carried out within the programme. It describes the design of the structures, the analysis techniques used, and the results of the analyses in comparison to the experimental test results. It has been found that current multi-ply shell models are capable of modelling the main energy-absorbing processes at work in such structures. However, some deficiencies exist, particularly in modelling fabric composites. Developments within the finite element code are taking place as a result of this work which will enable better representation of composite fabrics.

  2. Building Energy Codes: Policy Overview and Good Practices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cox, Sadie

    2016-02-19

    Globally, 32% of total final energy consumption is attributed to the building sector. To reduce energy consumption, energy codes set minimum energy efficiency standards for the building sector. With effective implementation, building energy codes can support energy cost savings and complementary benefits associated with electricity reliability, air quality improvement, greenhouse gas emission reduction, increased comfort, and economic and social development. This policy brief seeks to support building code policymakers and implementers in designing effective building code programs.

  3. Identification and Analysis of Critical Gaps in Nuclear Fuel Cycle Codes Required by the SINEMA Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adrian Miron; Joshua Valentine; John Christenson

    2009-10-01

    The current state of the art in nuclear fuel cycle (NFC) modeling is an eclectic mixture of codes with various levels of applicability, flexibility, and availability. In support of advanced fuel cycle systems analyses, especially those by the Advanced Fuel Cycle Initiative (AFCI), the University of Cincinnati, in collaboration with Idaho State University, carried out a detailed review of the existing codes describing various aspects of the nuclear fuel cycle and identified the research and development needs required for a comprehensive model of the global nuclear energy infrastructure and the associated nuclear fuel cycles. Relevant information obtained on the NFC codes was compiled into a relational database that allows easy access to the various codes' properties. Additionally, the research analyzed the gaps in the NFC computer codes with respect to their potential integration into programs that perform comprehensive NFC analysis.

  4. Cavitation Modeling in Euler and Navier-Stokes Codes

    NASA Technical Reports Server (NTRS)

    Deshpande, Manish; Feng, Jinzhang; Merkle, Charles L.

    1993-01-01

    Many previous researchers have modeled sheet cavitation by means of a constant pressure solution in the cavity region coupled with a velocity potential formulation for the outer flow. The present paper discusses the issues involved in extending these cavitation models to Euler or Navier-Stokes codes. The approach taken is to start from a velocity potential model, to ensure our results are compatible with those of previous researchers and available experimental data, and then to implement this model in both Euler and Navier-Stokes codes. The model is then augmented in the Navier-Stokes code by the inclusion of the energy equation, which allows the effect of subcooling in the vicinity of the cavity interface to be modeled, taking into account the experimentally observed reduction in cavity pressures that occurs in cryogenic fluids such as liquid hydrogen. Although our goal is to assess the practicality of implementing these cavitation models in existing three-dimensional turbomachinery codes, the emphasis in the present paper is on two-dimensional computations, most specifically isolated airfoils and cascades. Comparisons between the velocity potential, Euler and Navier-Stokes implementations indicate they all produce consistent predictions. Comparisons with experimental results also indicate that the predictions are qualitatively correct and give a reasonable first estimate of sheet cavitation effects in both cryogenic and non-cryogenic fluids. The impact on CPU time and the code modifications required suggest that these models are appropriate for incorporation in current generation turbomachinery codes.
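
    For reference, the cavity closure condition and the cavitation number that typically parameterize such models are (standard definitions, not specific to this paper's formulation)

      \sigma = \frac{p_\infty - p_v(T)}{\tfrac{1}{2}\,\rho\,U_\infty^{2}},
      \qquad
      p_{\mathrm{cavity}} \approx p_v(T_{\mathrm{interface}}),

    so that in cryogenic fluids the evaporative cooling captured by the energy equation lowers the interface temperature, reduces the local vapor pressure p_v, and hence reduces the cavity pressure, as noted above.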

  5. MCNP (Monte Carlo Neutron Photon) capabilities for nuclear well logging calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forster, R.A.; Little, R.C.; Briesmeister, J.F.

    The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. The general-purpose continuous-energy Monte Carlo code MCNP (Monte Carlo Neutron Photon), part of the LARTCS, provides a computational predictive capability for many applications of interest to the nuclear well logging community. The generalized three-dimensional geometry of MCNP is well suited for borehole-tool models. SABRINA, another component of the LARTCS, is a graphics code that can be used to interactively create a complex MCNP geometry. Users can define many source and tally characteristics with standard MCNP features. The time-dependent capability of the code is essential when modeling pulsed sources. Problems with neutrons, photons, and electrons as either single particles or coupled particles can be calculated with MCNP. The physics of neutron and photon transport and interactions is modeled in detail using the latest available cross-section data. A rich collection of variance reduction features can greatly increase the efficiency of a calculation. MCNP is written in FORTRAN 77 and has been run on a variety of computer systems from scientific workstations to supercomputers. The next production version of MCNP will include features such as continuous-energy electron transport and a multitasking option. Areas of ongoing research of interest to the well logging community include angle biasing, adaptive Monte Carlo, improved discrete ordinates capabilities, and discrete ordinates/Monte Carlo hybrid development. Los Alamos has requested approval by the Department of Energy to create a Radiation Transport Computational Facility under their User Facility Program to increase external interactions with industry, universities, and other government organizations. 21 refs.

  6. ARC integration into the NEAMS Workbench

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stauff, N.; Gaughan, N.; Kim, T.

    2017-01-01

    One of the objectives of the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Integration Product Line (IPL) is to facilitate the deployment of the high-fidelity codes developed within the program. The Workbench initiative was launched in FY-2017 by the IPL to facilitate the transition from conventional tools to high fidelity tools. The Workbench provides a common user interface for model creation, real-time validation, execution, output processing, and visualization for integrated codes.

  7. INL Results for Phases I and III of the OECD/NEA MHTGR-350 Benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerhard Strydom; Javier Ortensi; Sonat Sen

    2013-09-01

    The Idaho National Laboratory (INL) Very High Temperature Reactor (VHTR) Technology Development Office (TDO) Methods Core Simulation group led the construction of the Organization for Economic Cooperation and Development (OECD) Modular High Temperature Reactor (MHTGR) 350 MW benchmark for comparing and evaluating prismatic VHTR analysis codes. The benchmark is sponsored by the OECD's Nuclear Energy Agency (NEA), and the project will yield a set of reference steady-state, transient, and lattice depletion problems that can be used by the Department of Energy (DOE), the Nuclear Regulatory Commission (NRC), and vendors to assess their code suites. The Methods group is responsible for defining the benchmark specifications, leading the data collection and comparison activities, and chairing the annual technical workshops. This report summarizes the latest INL results for Phase I (steady state) and Phase III (lattice depletion) of the benchmark. The INSTANT, Pronghorn and RattleSnake codes were used for the standalone core neutronics modeling of Exercise 1, and the results obtained from these codes are compared in Section 4. Exercise 2 of Phase I requires the standalone steady-state thermal fluids modeling of the MHTGR-350 design, and the results for the systems code RELAP5-3D are discussed in Section 5. The coupled neutronics and thermal fluids steady-state solution for Exercise 3 is reported in Section 6, utilizing the newly developed Parallel and Highly Innovative Simulation for INL Code System (PHISICS)/RELAP5-3D code suite. Finally, the lattice depletion models and results obtained for Phase III are compared in Section 7. The MHTGR-350 benchmark proved to be a challenging set of problems to model accurately, and even with the simplifications introduced in the benchmark specification this activity is an important step in the code-to-code verification of modern prismatic VHTR codes. A final OECD/NEA comparison report will compare the Phase I and III results of all other international participants in 2014, while the remaining Phase II transient case results will be reported in 2015.

  8. A biomolecular electrostatics solver using Python, GPUs and boundary elements that can handle solvent-filled cavities and Stern layers.

    PubMed

    Cooper, Christopher D; Bardhan, Jaydeep P; Barba, L A

    2014-03-01

    The continuum theory applied to biomolecular electrostatics leads to an implicit-solvent model governed by the Poisson-Boltzmann equation. Solvers relying on a boundary integral representation typically do not consider features like solvent-filled cavities or ion-exclusion (Stern) layers, due to the added difficulty of treating multiple boundary surfaces. This has hindered meaningful comparisons with volume-based methods, and the effects on accuracy of including these features have remained unknown. This work presents a solver called PyGBe that uses a boundary-element formulation and can handle multiple interacting surfaces. It was used to study the effects of solvent-filled cavities and Stern layers on the accuracy of calculating solvation energy and binding energy of proteins, using the well-known APBS finite-difference code for comparison. The results suggest that if the accuracy required for an application allows errors larger than about 2% in solvation energy, then the simpler, single-surface model can be used. When calculating binding energies, the need for a multi-surface model is problem-dependent, becoming more critical when ligand and receptor are of comparable size. Compared with the APBS solver, the boundary-element solver is faster when the accuracy requirements are higher. The cross-over point for the PyGBe code is on the order of 1-2% error, when running on one GPU card (NVIDIA Tesla C2075), compared with APBS running on six Intel Xeon CPU cores. PyGBe achieves algorithmic acceleration of the boundary element method using a treecode, and hardware acceleration using GPUs via PyCuda from a user-visible code that is all Python. The code is open-source under the MIT license.
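
    A common analytical sanity check for implicit-solvent Poisson-Boltzmann solvers of this kind is the Born ion, whose solvation free energy has a closed form. The sketch below is illustrative only and is not part of either PyGBe or APBS.

      import math

      E_CHARGE = 1.602176634e-19     # C
      EPS0 = 8.8541878128e-12        # F/m
      AVOGADRO = 6.02214076e23
      J_PER_KCAL = 4184.0

      def born_solvation_energy(q_e, radius_angstrom, eps_solvent=80.0, eps_solute=1.0):
          """Born solvation free energy (kcal/mol) of a point charge in a spherical cavity."""
          q = q_e * E_CHARGE
          a = radius_angstrom * 1e-10
          dG = (q ** 2 / (8.0 * math.pi * EPS0 * a)) * (1.0 / eps_solvent - 1.0 / eps_solute)
          return dG * AVOGADRO / J_PER_KCAL

      # A +1 charge in a 3 Angstrom cavity with a water-like dielectric:
      print(round(born_solvation_energy(1.0, 3.0), 1), "kcal/mol")   # roughly -55 kcal/mol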

  9. The radiation fields around a proton therapy facility: A comparison of Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Ottaviano, G.; Picardi, L.; Pillon, M.; Ronsivalle, C.; Sandri, S.

    2014-02-01

    A proton therapy test facility with a beam current lower than 10 nA on average, and an energy of up to 150 MeV, is planned to be sited at the Frascati ENEA Research Center, in Italy. The accelerator is composed of a sequence of linear sections. The first is a commercial 7 MeV proton linac, from which the beam is injected into a SCDTL (Side Coupled Drift Tube Linac) structure reaching an energy of 52 MeV. A conventional CCL (Coupled Cavity Linac) with side coupling cavities then completes the accelerator. The linear structure has the important advantage that the main radiation losses during the acceleration process occur for protons with energy below 20 MeV, with a consequently low production of neutrons and secondary radiation. From the radiation protection point of view, the source of radiation for this facility is therefore almost completely located at the final target. Physical and geometrical models of the device have been developed and implemented into radiation transport computer codes based on the Monte Carlo method. The aim is the assessment of the radiation field around the main source in support of the safety analysis. For this assessment, independent researchers used two different Monte Carlo computer codes, named FLUKA (FLUktuierende KAskade) and MCNPX (Monte Carlo N-Particle eXtended). Both are general purpose tools for calculations of particle transport and interactions with matter, covering an extended range of applications including proton beam analysis. Nevertheless, each one uses its own nuclear cross section libraries and specific physics models for particle types and energies. The models implemented into the codes are described and the results are presented. The differences between the two calculations are reported and discussed, pointing out the advantages and disadvantages of each code in this specific application.

  10. A biomolecular electrostatics solver using Python, GPUs and boundary elements that can handle solvent-filled cavities and Stern layers

    NASA Astrophysics Data System (ADS)

    Cooper, Christopher D.; Bardhan, Jaydeep P.; Barba, L. A.

    2014-03-01

    The continuum theory applied to biomolecular electrostatics leads to an implicit-solvent model governed by the Poisson-Boltzmann equation. Solvers relying on a boundary integral representation typically do not consider features like solvent-filled cavities or ion-exclusion (Stern) layers, due to the added difficulty of treating multiple boundary surfaces. This has hindered meaningful comparisons with volume-based methods, and the effects on accuracy of including these features have remained unknown. This work presents a solver called PyGBe that uses a boundary-element formulation and can handle multiple interacting surfaces. It was used to study the effects of solvent-filled cavities and Stern layers on the accuracy of calculating solvation energy and binding energy of proteins, using the well-known APBS finite-difference code for comparison. The results suggest that if required accuracy for an application allows errors larger than about 2% in solvation energy, then the simpler, single-surface model can be used. When calculating binding energies, the need for a multi-surface model is problem-dependent, becoming more critical when ligand and receptor are of comparable size. Comparing with the APBS solver, the boundary-element solver is faster when the accuracy requirements are higher. The cross-over point for the PyGBe code is on the order of 1-2% error, when running on one GPU card (NVIDIA Tesla C2075), compared with APBS running on six Intel Xeon CPU cores. PyGBe achieves algorithmic acceleration of the boundary element method using a treecode, and hardware acceleration using GPUs via PyCuda from a user-visible code that is all Python. The code is open-source under MIT license.

  11. Accelerated GPU based SPECT Monte Carlo simulations.

    PubMed

    Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris

    2016-06-07

    Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphical processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three isotopes were considered: (99m)Tc, (111)In and (131)I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator and a high energy general purpose (HEGP) collimator, respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE Infinia II 3/8" gamma camera. Both simulation platforms yielded similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between the GATE and GGEMS platforms derived from the same cylindrical phantom acquisition was between 18 and 27 for the different radioisotopes. In addition, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor of up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving the computational efficiency of SPECT imaging simulations.

  12. An international survey of building energy codes and their implementation

    DOE PAGES

    Evans, Meredydd; Roshchanka, Volha; Graham, Peter

    2017-08-01

    Buildings are key to low-carbon development everywhere, and many countries have introduced building energy codes to improve energy efficiency in buildings. Yet, building energy codes can only deliver results when the codes are implemented. For this reason, studies of building energy codes need to consider implementation of building energy codes in a consistent and comprehensive way. This research identifies elements and practices in implementing building energy codes, covering codes in 22 countries that account for 70% of global energy use in buildings. These elements and practices include: comprehensive coverage of buildings by type, age, size, and geographic location; an implementation framework that involves a certified agency to inspect construction at critical stages; and building materials that are independently tested, rated, and labeled. Training and supporting tools are another element of successful code implementation. Some countries have also introduced compliance evaluation studies, which suggested that tightening energy requirements would only be meaningful when also addressing gaps in implementation (Pitt & Sherry, 2014; U.S. DOE, 2016b). This article provides examples of practices that countries have adopted to assist with implementation of building energy codes.

  13. An international survey of building energy codes and their implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evans, Meredydd; Roshchanka, Volha; Graham, Peter

    Buildings are key to low-carbon development everywhere, and many countries have introduced building energy codes to improve energy efficiency in buildings. Yet, building energy codes can only deliver results when the codes are implemented. For this reason, studies of building energy codes need to consider implementation of building energy codes in a consistent and comprehensive way. This research identifies elements and practices in implementing building energy codes, covering codes in 22 countries that account for 70% of global energy use in buildings. These elements and practices include: comprehensive coverage of buildings by type, age, size, and geographic location; an implementation framework that involves a certified agency to inspect construction at critical stages; and building materials that are independently tested, rated, and labeled. Training and supporting tools are another element of successful code implementation. Some countries have also introduced compliance evaluation studies, which suggested that tightening energy requirements would only be meaningful when also addressing gaps in implementation (Pitt & Sherry, 2014; U.S. DOE, 2016b). This article provides examples of practices that countries have adopted to assist with implementation of building energy codes.

  14. GUI to Facilitate Research on Biological Damage from Radiation

    NASA Technical Reports Server (NTRS)

    Cucinotta, Frances A.; Ponomarev, Artem Lvovich

    2010-01-01

    A graphical-user-interface (GUI) computer program has been developed to facilitate research on the damage caused by highly energetic particles and photons impinging on living organisms. The program brings together, into one computational workspace, computer codes that have been developed over the years, plus codes that will be developed in the foreseeable future, to address diverse aspects of radiation damage. These include codes that implement radiation-track models, codes for biophysical models of breakage of deoxyribonucleic acid (DNA) by radiation, pattern-recognition programs for extracting quantitative information from biological assays, and image-processing programs that aid visualization of DNA breaks. The radiation-track models are based on transport models of interactions of radiation with matter and on solution of the Boltzmann transport equation using both theoretical and numerical models. The biophysical models of breakage of DNA by radiation include biopolymer coarse-grained and atomistic models of DNA, stochastic-process models of deposition of energy, and Markov-based probabilistic models of placement of double-strand breaks in DNA. The program is designed for use in the NT, 95, 98, 2000, ME, and XP variants of the Windows operating system.

  15. A Web 2.0 Interface to Ion Stopping Power and Other Physics Routines for High Energy Density Physics Applications

    NASA Astrophysics Data System (ADS)

    Stoltz, Peter; Veitzer, Seth

    2008-04-01

    We present a new Web 2.0-based interface to physics routines for High Energy Density Physics applications. These routines include models for ion stopping power, sputtering, secondary electron yields and energies, impact ionization cross sections, and atomic radiated power. The Web 2.0 interface allows users to easily explore the results of the models before using the routines within other codes or to analyze experimental results. We discuss how we used various Web 2.0 tools, including Python 2.5, Django, and the Yahoo User Interface (YUI) library. Finally, we demonstrate the interface by showing, as an example, the stopping power algorithms researchers are currently using within the Hydra code to analyze warm, dense matter experiments underway at the Neutralized Drift Compression Experiment facility at Lawrence Berkeley National Laboratory.
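
    A minimal sketch of how such a routine might be exposed through a Django endpoint is shown below. The view name, URL pattern, and the toy 1/E scaling are placeholders (the project's actual routines and interface are not reproduced here), and the snippet is meant to live inside a configured Django project.

      from django.http import JsonResponse
      from django.urls import path

      def toy_stopping_power(z_ion, energy_mev):
          # Placeholder scaling (roughly dE/dx ~ Z^2 / E); not a physical model.
          return z_ion ** 2 / max(energy_mev, 1e-6)

      def stopping_power_view(request):
          z = int(request.GET.get("z", 1))
          e = float(request.GET.get("energy_mev", 1.0))
          return JsonResponse({"z": z, "energy_mev": e,
                               "stopping_power_arb_units": toy_stopping_power(z, e)})

      urlpatterns = [path("stopping-power/", stopping_power_view)]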

  16. Transport of Light Ions in Matter

    NASA Technical Reports Server (NTRS)

    Wilson, J. W.; Cucinotta, F. A.; Tai, H.; Shinn, J. L.; Chun, S. Y.; Tripathi, R. K.; Sihver, L.

    1998-01-01

    A recent set of light ion experiments is analyzed using the Green's function method of solving the Boltzmann equation for ions of high charge and energy (the GRNTRN transport code) and the NUCFRG2 fragmentation database generator code. Although NUCFRG2 reasonably represents the fragmentation of heavy ions, capturing the effects of light ion fragmentation requires a more detailed nuclear model including shell structure and short range correlations, which appear as tightly bound clusters in the light ion nucleus. The most recent NUCFRG2 code is augmented with a quasielastic alpha knockout model and semiempirical adjustments (up to 30 percent in charge removal) in the fragmentation process, allowing reasonable agreement with the experiments to be obtained. A final resolution of the appropriate cross sections must await the full development of a coupled channel reaction model in which shell structure and clustering can be accurately evaluated.

  17. Entropy Production during Fatigue as a Criterion for Failure. The Critical Entropy Threshold: A Mathematical Model for Fatigue.

    DTIC Science & Technology

    1983-08-15


  18. Model documentation report: Transportation sector model of the National Energy Modeling System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1994-03-01

    This report documents the objectives, analytical approach, and development of the National Energy Modeling System (NEMS) Transportation Model (TRAN). The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, model source code, and forecast results generated by the model. This document serves three purposes. First, it is a reference document providing a detailed description of TRAN for model analysts, users, and the public. Second, this report meets the legal requirements of the Energy Information Administration (EIA) to provide adequate documentation in support of its statistical and forecast reports (Public Law 93-275, 57(b)(1)). Third, it permits continuity in model development by providing documentation from which energy analysts can undertake model enhancements, data updates, and parameter refinements.

  19. The origin of neutron biological effectiveness as a function of energy.

    PubMed

    Baiocco, G; Barbieri, S; Babini, G; Morini, J; Alloni, D; Friedland, W; Kundrát, P; Schmitt, E; Puchalska, M; Sihver, L; Ottolenghi, A

    2016-09-22

    The understanding of the impact of radiation quality on early and late responses of biological targets to ionizing radiation exposure is necessarily grounded in the results of mechanistic studies starting from physical interactions. This is particularly true when the radiation field is mixed already at the physical stage, as is the case for neutron exposure. Neutron Relative Biological Effectiveness (RBE) is energy dependent, maximal for energies of ~1 MeV, and varies significantly among different experiments. The aim of this work is to shed light on neutron biological effectiveness as a function of field characteristics with a comprehensive modeling approach that brings together transport calculations of neutrons through matter (with the code PHITS) and the predictive power of the biophysical track structure code PARTRAC for DNA damage evaluation. Two energy-dependent neutron RBE models are proposed: the first is phenomenological and based only on the characterization of linear energy transfer on a microscopic scale; the second is purely ab initio and based on the induction of complex DNA damage. Results for the two models are compared and found to be in good qualitative agreement with current standards for radiation protection factors, which are agreed upon on the basis of RBE data.
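
    For reference, the RBE quoted above is defined, for a given biological endpoint, as the ratio of the reference-radiation dose (typically photons) to the neutron dose producing the same level of effect:

      \mathrm{RBE}(E_n) = \left.\frac{D_{\mathrm{ref}}}{D_{n}}\right|_{\text{iso-effect}},

    which is why it depends on both the neutron energy spectrum and the chosen endpoint.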

  20. The origin of neutron biological effectiveness as a function of energy

    NASA Astrophysics Data System (ADS)

    Baiocco, G.; Barbieri, S.; Babini, G.; Morini, J.; Alloni, D.; Friedland, W.; Kundrát, P.; Schmitt, E.; Puchalska, M.; Sihver, L.; Ottolenghi, A.

    2016-09-01

    The understanding of the impact of radiation quality on early and late responses of biological targets to ionizing radiation exposure is necessarily grounded in the results of mechanistic studies starting from physical interactions. This is particularly true when the radiation field is mixed already at the physical stage, as is the case for neutron exposure. Neutron Relative Biological Effectiveness (RBE) is energy dependent, maximal for energies of ~1 MeV, and varies significantly among different experiments. The aim of this work is to shed light on neutron biological effectiveness as a function of field characteristics with a comprehensive modeling approach that brings together transport calculations of neutrons through matter (with the code PHITS) and the predictive power of the biophysical track structure code PARTRAC for DNA damage evaluation. Two energy-dependent neutron RBE models are proposed: the first is phenomenological and based only on the characterization of linear energy transfer on a microscopic scale; the second is purely ab initio and based on the induction of complex DNA damage. Results for the two models are compared and found to be in good qualitative agreement with current standards for radiation protection factors, which are agreed upon on the basis of RBE data.

  1. The origin of neutron biological effectiveness as a function of energy

    PubMed Central

    Baiocco, G.; Barbieri, S.; Babini, G.; Morini, J.; Alloni, D.; Friedland, W.; Kundrát, P.; Schmitt, E.; Puchalska, M.; Sihver, L.; Ottolenghi, A.

    2016-01-01

    The understanding of the impact of radiation quality on early and late responses of biological targets to ionizing radiation exposure is necessarily grounded in the results of mechanistic studies starting from physical interactions. This is particularly true when the radiation field is mixed already at the physical stage, as is the case for neutron exposure. Neutron Relative Biological Effectiveness (RBE) is energy dependent, maximal for energies of ~1 MeV, and varies significantly among different experiments. The aim of this work is to shed light on neutron biological effectiveness as a function of field characteristics with a comprehensive modeling approach that brings together transport calculations of neutrons through matter (with the code PHITS) and the predictive power of the biophysical track structure code PARTRAC for DNA damage evaluation. Two energy-dependent neutron RBE models are proposed: the first is phenomenological and based only on the characterization of linear energy transfer on a microscopic scale; the second is purely ab initio and based on the induction of complex DNA damage. Results for the two models are compared and found to be in good qualitative agreement with current standards for radiation protection factors, which are agreed upon on the basis of RBE data. PMID:27654349

  2. Macroeconomic Activity Module - NEMS Documentation

    EIA Publications

    2016-01-01

    Documents the objectives, analytical approach, and development of the National Energy Modeling System (NEMS) Macroeconomic Activity Module (MAM) used to develop the Annual Energy Outlook for 2016 (AEO2016). The report catalogues and describes the module assumptions, computations, methodology, parameter estimation techniques, and mainframe source code.

  3. Integrated modelling framework for short pulse high energy density physics experiments

    NASA Astrophysics Data System (ADS)

    Sircombe, N. J.; Hughes, S. J.; Ramsay, M. G.

    2016-03-01

    Modelling experimental campaigns on the Orion laser at AWE, and developing a viable point-design for fast ignition (FI), call for a multi-scale approach; a complete description of the problem would require an extensive range of physics which cannot realistically be included in a single code. For modelling the laser-plasma interaction (LPI) we need a fine mesh which can capture the dispersion of electromagnetic waves, and a kinetic model for each plasma species. In the dense material of the bulk target, away from the LPI region, collisional physics dominates. The transport of hot particles generated by the action of the laser depends on their slowing and stopping in the dense material and on their need to draw a return current. These effects heat the target, which in turn influences transport. On longer timescales, the hydrodynamic response of the target begins to play a role as the pressure generated by isochoric heating takes effect. Recent effort at AWE [1] has focussed on the development of an integrated code suite based on: the particle-in-cell code EPOCH, to model LPI; the Monte-Carlo electron transport code THOR, to model the onward transport of hot electrons; and the radiation hydrodynamics code CORVUS, to model the hydrodynamic response of the target. We outline the methodology adopted, explain the advantages of a robustly integrated code suite compared to a single-code approach, demonstrate the integrated code suite's application to modelling the heating of buried layers on Orion, and assess the potential of such experiments for the validation of modelling capability in advance of more ambitious HEDP experiments, as a step towards a predictive modelling capability for FI.
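
    The staged hand-off described above (LPI, then hot-electron transport, then hydrodynamics) can be illustrated with the toy driver below. The three stage functions are crude, runnable placeholders standing in for EPOCH-, THOR-, and CORVUS-like codes; none of the numbers are physical and the real interfaces are not reproduced.

      def lpi_stage(laser_energy_j, conversion_efficiency=0.2):
          """Energy (J) carried by hot electrons generated in the laser-plasma interaction."""
          return conversion_efficiency * laser_energy_j

      def transport_stage(hot_electron_energy_j, n_cells=10, stopping_fraction=0.3):
          """Deposit hot-electron energy into a 1-D column of cells (crude exponential-like profile)."""
          deposited, remaining = [], hot_electron_energy_j
          for _ in range(n_cells):
              d = stopping_fraction * remaining
              deposited.append(d)
              remaining -= d
          return deposited                      # J per cell

      def hydro_stage(temperatures_ev, deposited_j, heat_capacity_j_per_ev=1.0e-3):
          """Update cell temperatures from the deposited energy (isochoric heating)."""
          return [t + e / heat_capacity_j_per_ev for t, e in zip(temperatures_ev, deposited_j)]

      temps = [10.0] * 10                       # initial cell temperatures (eV)
      hot_e = lpi_stage(laser_energy_j=500.0)   # short-pulse energy, order of magnitude only
      temps = hydro_stage(temps, transport_stage(hot_e))
      print([round(t, 1) for t in temps])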

  4. Coupled Neutron Transport for HZETRN

    NASA Technical Reports Server (NTRS)

    Slaba, Tony C.; Blattnig, Steve R.

    2009-01-01

    Exposure estimates inside space vehicles, surface habitats, and high-altitude aircraft exposed to space radiation are highly influenced by secondary neutron production. The deterministic transport code HZETRN has been identified as a reliable and efficient tool for such studies, but improvements to the underlying transport models and numerical methods are still necessary. In this paper, the forward-backward (FB) and directionally coupled forward-backward (DC) neutron transport models are derived, numerical methods for the FB model are reviewed, and a computationally efficient numerical solution is presented for the DC model. Both models are compared to the Monte Carlo codes HETC-HEDS, FLUKA, and MCNPX, and the DC model is shown to agree closely with the Monte Carlo results. Finally, it is found in the development of either model that the decoupling of low energy neutrons from the light particle transport procedure adversely affects low energy light ion fluence spectra and exposure quantities. A first order correction is presented to resolve the problem, and it is shown to be both accurate and efficient.

  5. OBERON: OBliquity and Energy balance Run on N-body systems

    NASA Astrophysics Data System (ADS)

    Forgan, Duncan H.

    2016-08-01

    OBERON (OBliquity and Energy balance Run on N-body systems) models the climate of Earthlike planets under the effects of an arbitrary number and arrangement of other bodies, such as stars, planets and moons. The code, written in C++, simultaneously computes N body motions using a 4th order Hermite integrator, simulates climates using a 1D latitudinal energy balance model, and evolves the orbital spin of bodies using the equations of Laskar (1986a,b).
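
    A minimal explicit step of a diffusive 1-D latitudinal energy balance model, in the spirit of the climate component described above, is sketched below. The coefficients are generic textbook-style values, not OBERON's actual parameters, and the orbital/N-body coupling is omitted.

      import numpy as np

      def ebm_step(T, lat_deg, dt, S0=1361.0, albedo=0.3, A=203.3, B=2.09, D=0.5, C=2.1e8):
          """Advance T (deg C) on a latitude grid by one explicit step of
          C dT/dt = S(x)(1 - albedo) - (A + B T) + D d/dx[(1 - x^2) dT/dx], x = sin(latitude)."""
          x = np.sin(np.radians(lat_deg))
          S = 0.25 * S0 * (1.0 - 0.482 * 0.5 * (3.0 * x ** 2 - 1.0))   # annual-mean insolation
          transport = D * np.gradient((1.0 - x ** 2) * np.gradient(T, x), x)
          return T + dt * (S * (1.0 - albedo) - (A + B * T) + transport) / C

      lat = np.linspace(-85.0, 85.0, 18)
      T = np.full_like(lat, 10.0)
      for _ in range(1000):                     # roughly three years of daily steps
          T = ebm_step(T, lat, dt=86400.0)
      print(T.round(1))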

  6. A new response matrix for a 6LiI scintillator BSS system

    NASA Astrophysics Data System (ADS)

    Lacerda, M. A. S.; Méndez-Villafañe, R.; Lorente, A.; Ibañez, S.; Gallego, E.; Vega-Carrillo, H. R.

    2017-10-01

    A new response matrix was calculated for a Bonner Sphere Spectrometer (BSS) with a 6LiI(Eu) scintillator, using the Monte Carlo N-Particle radiation transport code MCNPX. Responses were calculated for 6 spheres and the bare detector, for energies varying from 1.059E-9 MeV to 105.9 MeV, with 20 equal-log(E)-width bins per energy decade, totaling 221 energy groups. A comparison was made between the responses obtained in this work and others published elsewhere for the same detector model. The calculated response functions were inserted into the response input file of the MAXED code and used to unfold the total and direct neutron spectra generated by the 241Am-Be source of the Universidad Politécnica de Madrid (UPM). These spectra were compared with those obtained using the same unfolding code with the Mares and Schraube response matrix.
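
    The group structure quoted above can be reproduced directly; the short sketch below generates the 221-point log-spaced energy grid (whether the published values are bin edges or bin centres is not stated here, so the sketch simply builds the grid).

      import numpy as np

      e_min, e_max = 1.059e-9, 105.9                             # MeV; spans exactly 11 decades
      n_points = int(round(np.log10(e_max / e_min) * 20)) + 1    # 20 bins/decade -> 221 points
      energies = np.logspace(np.log10(e_min), np.log10(e_max), n_points)
      print(n_points, energies[0], energies[-1])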

  7. runDM: Running couplings of Dark Matter to the Standard Model

    NASA Astrophysics Data System (ADS)

    D'Eramo, Francesco; Kavanagh, Bradley J.; Panci, Paolo

    2018-02-01

    runDM calculates the running of the couplings of Dark Matter (DM) to the Standard Model (SM) in simplified models with vector mediators. By specifying the mass of the mediator and the couplings of the mediator to SM fields at high energy, the code can calculate the couplings at low energy, taking into account the mixing of all dimension-6 operators. runDM can also extract the operator coefficients relevant for direct detection, namely low energy couplings to up, down and strange quarks and to protons and neutrons.
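
    Schematically, the running performed by such a code solves a renormalization-group equation for the vector of dimension-6 operator coefficients (a leading-log sketch, not the code's exact implementation):

      \mu\,\frac{d\vec{C}(\mu)}{d\mu} = \hat{\gamma}^{T}\,\vec{C}(\mu)
      \quad\Longrightarrow\quad
      \vec{C}(\mu_{N}) = \exp\!\left[\hat{\gamma}^{T}\ln\frac{\mu_{N}}{\Lambda}\right]\vec{C}(\Lambda),

    where \Lambda is the high (mediator) scale, \mu_N the low scale at which quark-level coefficients are matched onto proton and neutron couplings for direct detection, and \hat{\gamma} the anomalous-dimension (mixing) matrix.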

  8. PHISICS/RELAP5-3D Adaptive Time-Step Method Demonstrated for the HTTR LOFC#1 Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Robin Ivey; Balestra, Paolo; Strydom, Gerhard

    A collaborative effort between the Japan Atomic Energy Agency (JAEA) and Idaho National Laboratory (INL), as part of the Civil Nuclear Energy Working Group, is underway to model the High Temperature Engineering Test Reactor (HTTR) loss of forced cooling (LOFC) transient that was performed in December 2010. The coupled version of RELAP5-3D, a thermal fluids code, and PHISICS, a neutronics code, was used to model the transient. The focus of this report is to summarize the changes made to the PHISICS/RELAP5-3D code to implement an adaptive time step methodology for the first time, and to test it using the full HTTR PHISICS/RELAP5-3D model developed by JAEA and INL and the LOFC simulation. Various adaptive schemes are available based on flux or power convergence criteria that allow significantly larger time steps to be taken by the neutronics module. The report includes a description of the HTTR and the associated PHISICS/RELAP5-3D model test results, as well as the University of Rome subcontractor report documenting the adaptive time step theory and methodology implemented in PHISICS/RELAP5-3D. Two versions of the HTTR model were tested using 8 and 26 energy groups. It was found that most of the new adaptive methods lead to significant improvements in the LOFC simulation time required, without significant accuracy penalties in the prediction of the fission power and the fuel temperature. In the best performing 8-group model scenarios, a LOFC simulation of 20 hours could be completed in real time, or even less than real time, compared with the previous version of the code, which completed the same transient 3-8 times slower than real time. A few of the combinations of available methodologies and tolerance settings did, however, result in unacceptably high errors or insignificant gains in simulation time. The study concludes with recommendations on which methods to use for this HTTR model. An important caveat is that these findings are very model-specific and cannot be generalized to other PHISICS/RELAP5-3D models.
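
    A generic adaptive time-step controller of the kind described above grows the step while a convergence metric (here, the relative change in total power) stays below tolerance and shrinks it otherwise. The thresholds, factors, and synthetic power history below are illustrative, not the values implemented in PHISICS/RELAP5-3D.

      import math

      def next_time_step(dt, rel_power_change, tol=1e-3, grow=1.5, shrink=0.5,
                         dt_min=1e-4, dt_max=10.0):
          if rel_power_change > tol:
              return max(dt * shrink, dt_min)     # transient too fast: refine the step
          if rel_power_change < 0.1 * tol:
              return min(dt * grow, dt_max)       # solution is smooth: take larger steps
          return dt

      # Example: a decaying power transient lets the step size grow as it flattens out.
      dt, power_old, t = 0.01, 100.0, 0.0
      while t < 50.0:
          power_new = 30.0 + 70.0 * math.exp(-0.2 * t)
          dt = next_time_step(dt, abs(power_new - power_old) / power_old)
          power_old, t = power_new, t + dt
      print(round(dt, 3), round(t, 2))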

  9. Web-based reactive transport modeling using PFLOTRAN

    NASA Astrophysics Data System (ADS)

    Zhou, H.; Karra, S.; Lichtner, P. C.; Versteeg, R.; Zhang, Y.

    2017-12-01

    Actionable understanding of system behavior in the subsurface is required for a wide spectrum of societal and engineering needs by commercial firms, government entities, and academia. These needs include, for example, water resource management, precision agriculture, contaminant remediation, unconventional energy production, CO2 sequestration monitoring, and climate studies. Such understanding requires the ability to numerically model various coupled processes that occur across different temporal and spatial scales as well as multiple physical domains (reservoir-overburden, surface-subsurface, groundwater-surface water, saturated-unsaturated zone). Currently, this ability is typically met through an in-house approach where computational resources, model expertise, and data for model parameterization are brought together to meet modeling needs. However, such an approach has multiple drawbacks which limit the application of high-end reactive transport codes such as the Department of Energy-funded PFLOTRAN code. In addition, while many end users have a need for the capabilities provided by high-end reactive transport codes, they do not have the expertise - nor the time required to obtain the expertise - to use these codes effectively. We have developed and are actively enhancing a cloud-based software platform through which diverse users are able to easily configure, execute, visualize, share, and interpret PFLOTRAN models. This platform consists of a web application and on-demand HPC computational infrastructure. The web application consists of (1) a browser-based graphical user interface which allows users to configure models and visualize results interactively, (2) a central server with back-end relational databases which hold configuration data, modeling results, and Python scripts for model configuration, and (3) an HPC environment for on-demand model execution. We will discuss lessons learned in the development of this platform, the rationale for different interfaces, implementation choices, and the planned path forward.

  10. Development and Demonstration of a Computational Tool for the Analysis of Particle Vitiation Effects in Hypersonic Propulsion Test Facilities

    NASA Technical Reports Server (NTRS)

    Perkins, Hugh Douglas

    2010-01-01

    In order to improve the understanding of particle vitiation effects in hypersonic propulsion test facilities, a quasi-one-dimensional numerical tool was developed to efficiently model reacting particle-gas flows over a wide range of conditions. Features of this code include gas-phase finite-rate kinetics, a global porous-particle combustion model, mass, momentum and energy interactions between phases, and subsonic and supersonic particle drag and heat transfer models. The basic capabilities of this tool were validated against available data or other validated codes. To demonstrate the capabilities of the code, a series of computations was performed for a model hypersonic propulsion test facility and scramjet. The parameters studied were simulated flight Mach number, particle size, particle mass fraction and particle material.
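
    The interphase exchange terms named above take a generic form such as (an illustrative formulation, not necessarily the exact source terms in this tool)

      m_p\,\frac{d u_p}{dt} = \tfrac{1}{2}\,\rho_g\,C_D\,A_p\,\lvert u_g - u_p\rvert\,(u_g - u_p),
      \qquad
      m_p c_p\,\frac{d T_p}{dt} = h\,A_p\,(T_g - T_p) + \dot m_{\mathrm{burn}}\,\Delta h_{\mathrm{comb}},

    where the drag coefficient C_D and the heat-transfer coefficient h (via a Nusselt-number correlation) switch between subsonic and supersonic correlations, and the last term is a schematic heat release from the porous-particle combustion model.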

  11. BADGER v1.0: A Fortran equation of state library

    NASA Astrophysics Data System (ADS)

    Heltemes, T. A.; Moses, G. A.

    2012-12-01

    The BADGER equation of state library was developed to enable inertial confinement fusion plasma codes to more accurately model plasmas in the high-density, low-temperature regime. The code has the capability to calculate 1- and 2-T plasmas using the Thomas-Fermi model and an individual electron accounting model. Ion equation of state data can be calculated using an ideal gas model or via a quotidian equation of state with scaled binding energies. Electron equation of state data can be calculated via the ideal gas model or with an adaptation of the screened hydrogenic model with ℓ-splitting. The ionization and equation of state calculations can be done in local thermodynamic equilibrium or in a non-LTE mode using a variant of the Busquet equivalent temperature method. The code was written as a stand-alone Fortran library for ease of implementation by external codes. EOS results for aluminum are presented that show good agreement with the SESAME library, and ionization calculations show good agreement with the FLYCHK code.

    Program summary
    Program title: BADGERLIB v1.0
    Catalogue identifier: AEND_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEND_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 41 480
    No. of bytes in distributed program, including test data, etc.: 2 904 451
    Distribution format: tar.gz
    Programming language: Fortran 90
    Computer: 32- or 64-bit PC, or Mac
    Operating system: Windows, Linux, MacOS X
    RAM: 249.496 kB plus 195.630 kB per isotope record in memory
    Classification: 19.1, 19.7
    Nature of problem: Equation of state (EOS) calculations are necessary for the accurate simulation of high energy density plasmas. Historically, most EOS codes used in these simulations have relied on an ideal gas model. This model is inadequate for low-temperature, high-density plasma conditions; the gaseous and liquid phases; and the solid phase. The BADGER code was developed to give more realistic EOS data in these regimes.
    Solution method: BADGER has multiple, user-selectable models to treat the ions, the average-atom ionization state, and the electrons. Ion models are ideal gas and quotidian equation of state (QEOS); ionization models are Thomas-Fermi and the individual accounting method (IEM) formulation of the screened hydrogenic model (SHM) with ℓ-splitting; electron ionization models are ideal gas and a Helmholtz free energy minimization method derived from the SHM. The default equation of state and ionization models are appropriate for plasmas in local thermodynamic equilibrium (LTE). The code can calculate non-LTE equation of state and ionization data using a simplified form of the Busquet equivalent-temperature method.
    Restrictions: Physical data are only provided for elements Z=1 to Z=86. Multiple solid phases are not currently supported. Liquid, gas and plasma phases are combined into a generalized "fluid" phase.
    Unusual features: BADGER divorces the calculation of average-atom ionization from the electron equation of state model, allowing the user to select the ionization and electron EOS models most appropriate to the simulation. The included ion ideal gas model uses ground-state nuclear spin data to differentiate between isotopes of a given element.
    Running time: The example provided takes only a few seconds to run.

  12. Dakota Uncertainty Quantification Methods Applied to the CFD code Nek5000

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Delchini, Marc-Olivier; Popov, Emilian L.; Pointer, William David

    This report presents the state of advancement of a Nuclear Energy Advanced Modeling and Simulation (NEAMS) project to characterize the uncertainty of the computational fluid dynamics (CFD) code Nek5000 using the Dakota package for flows encountered in the nuclear engineering industry. Nek5000 is a high-order spectral element CFD code developed at Argonne National Laboratory for high-resolution spectral-filtered large eddy simulations (LESs) and unsteady Reynolds-averaged Navier-Stokes (URANS) simulations.

  13. How to obtain the National Energy Modeling System (NEMS)

    EIA Publications

    2013-01-01

    The National Energy Modeling System (NEMS) is used by modelers at the U.S. Energy Information Administration (EIA) who understand its structure and programming. NEMS has only been used by a few organizations outside of the EIA, because most organizations that requested NEMS found it too difficult or rigid to use. NEMS is not typically used for state-level analysis and is poorly suited for application to other countries. However, many do obtain the model simply to use the data in its input files or to examine the source code.

  14. Influence of the plasma environment on atomic structure using an ion-sphere model

    DOE PAGES

    Belkhiri, Madeny Jean; Fontes, Christopher John; Poirier, Michel

    2015-09-03

    Plasma environment effects on atomic structure are analyzed using various atomic structure codes. To monitor the effect of high free-electron density or low temperatures, Fermi-Dirac and Maxwell-Boltzmann statistics are compared. After a discussion of the implementation of the Fermi-Dirac approach within the ion-sphere model, several applications are considered. In order to check the consistency of the modifications brought here to extant codes, calculations have been performed using the Los Alamos Cowan Atomic Structure (CATS) code in its Hartree-Fock or Hartree-Fock-Slater form and the parametric potential Flexible Atomic Code (FAC). The ground-state energy shifts due to the plasma effects for the six most ionized aluminum ions have been calculated using the FAC and CATS codes and agree fairly well. For the intercombination resonance line in Fe 22+, the plasma effect within the uniform electron gas model results in a positive shift that agrees with the MCDF value of B. Saha et al.

  15. Influence of the plasma environment on atomic structure using an ion-sphere model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Belkhiri, Madeny Jean; Fontes, Christopher John; Poirier, Michel

    Plasma environment effects on atomic structure are analyzed using various atomic structure codes. To monitor the effect of high free-electron density or low temperatures, Fermi-Dirac and Maxwell-Boltzmann statistics are compared. After a discussion of the implementation of the Fermi-Dirac approach within the ion-sphere model, several applications are considered. In order to check the consistency of the modifications brought here to extant codes, calculations have been performed using the Los Alamos Cowan Atomic Structure (CATS) code in its Hartree-Fock or Hartree-Fock-Slater form and the parametric potential Flexible Atomic Code (FAC). The ground-state energy shifts due to the plasma effects for the six most ionized aluminum ions have been calculated using the FAC and CATS codes and agree fairly well. For the intercombination resonance line in Fe 22+, the plasma effect within the uniform electron gas model results in a positive shift that agrees with the MCDF value of B. Saha et al.

  16. MCNP capabilities for nuclear well logging calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forster, R.A.; Little, R.C.; Briesmeister, J.F.

    The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. This paper discusses how the general-purpose continuous-energy Monte Carlo code MCNP (Monte Carlo neutron photon), part of the LARTCS, provides a computational predictive capability for many applications of interest to the nuclear well logging community. The generalized three-dimensional geometry of MCNP is well suited for borehole-tool models. SABRINA, another component of the LARTCS, is a graphics code that can be used to interactively create a complex MCNP geometry. Users can define many source and tally characteristics with standard MCNP features. The time-dependent capability of the code is essential when modeling pulsed sources. Problems with neutrons, photons, and electrons as either single particle or coupled particles can be calculated with MCNP. The physics of neutron and photon transport and interactions is modeled in detail using the latest available cross-section data.

  17. RTE: A computer code for Rocket Thermal Evaluation

    NASA Technical Reports Server (NTRS)

    Naraghi, Mohammad H. N.

    1995-01-01

    The numerical model for a rocket thermal analysis code (RTE) is discussed. RTE is a comprehensive code for the thermal analysis of regeneratively cooled rocket engines. The input to the code consists of the composition of the fuel/oxidant mixture and flow rates, chamber pressure, coolant temperature and pressure, dimensions of the engine, materials, and the number of nodes in different parts of the engine. The code allows for temperature variation in the axial, radial and circumferential directions. By implementing an iterative scheme, it provides the nodal temperature distribution, rates of heat transfer, and hot gas and coolant thermal and transport properties. The fuel/oxidant mixture ratio can be varied along the thrust chamber. This feature allows the user to incorporate a non-equilibrium model or an energy release model for the hot-gas side. The user has the option of bypassing the hot-gas-side calculations and directly inputting the gas-side fluxes. This feature is used to link RTE to a boundary layer module for the hot-gas-side heat flux calculations.

  18. 10 CFR 434.99 - Explanation of numbering system for codes.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 3 2011-01-01 2011-01-01 false Explanation of numbering system for codes. 434.99 Section 434.99 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY CODE FOR NEW FEDERAL COMMERCIAL AND MULTI-FAMILY HIGH RISE RESIDENTIAL BUILDINGS § 434.99 Explanation of numbering system for codes. (a) For...

  19. 10 CFR 434.99 - Explanation of numbering system for codes.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 3 2010-01-01 2010-01-01 false Explanation of numbering system for codes. 434.99 Section 434.99 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY CODE FOR NEW FEDERAL COMMERCIAL AND MULTI-FAMILY HIGH RISE RESIDENTIAL BUILDINGS § 434.99 Explanation of numbering system for codes. (a) For...

  20. Synchrotron characterization of nanograined UO 2 grain growth

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mo, Kun; Miao, Yinbin; Yun, Di

    2015-09-30

    This activity is supported by the US Nuclear Energy Advanced Modeling and Simulation (NEAMS) Fuels Product Line (FPL) and aims at providing experimental data for the validation of the mesoscale simulation code MARMOT. MARMOT is a mesoscale multiphysics code that predicts the coevolution of microstructure and properties within reactor fuel during its lifetime in the reactor. It is an important component of the Moose-Bison-Marmot (MBM) code suite that has been developed by Idaho National Laboratory (INL) to enable next generation fuel performance modeling capability as part of the NEAMS Program FPL. In order to ensure the accuracy of the microstructure-based materials models being developed within the MARMOT code, extensive validation efforts must be carried out. In this report, we summarize our preliminary synchrotron radiation experiments at APS to determine the grain size of nanograin UO 2. The methodology and experimental setup developed in this experiment can directly apply to the proposed in-situ grain growth measurements. The investigation of the grain growth kinetics was conducted based on isothermal annealing and grain growth characterization as functions of duration and temperature. The kinetic parameters such as activation energy for grain growth for UO 2 with different stoichiometry are obtained and compared with molecular dynamics (MD) simulations.

  1. Supplying materials needed for grain growth characterizations of nano-grained UO 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mo, Kun; Miao, Yinbin; Yun, Di

    2015-09-30

    This activity is supported by the US Nuclear Energy Advanced Modeling and Simulation (NEAMS) Fuels Product Line (FPL) and aims at providing experimental data for the validation of the mesoscale simulation code MARMOT. MARMOT is a mesoscale multiphysics code that predicts the coevolution of microstructure and properties within reactor fuel during its lifetime in the reactor. It is an important component of the Moose-Bison-Marmot (MBM) code suite that has been developed by Idaho National Laboratory (INL) to enable next generation fuel performance modeling capability as part of the NEAMS Program FPL. In order to ensure the accuracy of the microstructure-based materials models being developed within the MARMOT code, extensive validation efforts must be carried out. In this report, we summarize our preliminary synchrotron radiation experiments at APS to determine the grain size of nanograin UO 2. The methodology and experimental setup developed in this experiment can directly apply to the proposed in-situ grain growth measurements. The investigation of the grain growth kinetics was conducted based on isothermal annealing and grain growth characterization as functions of duration and temperature. The kinetic parameters such as activation energy for grain growth for UO 2 with different stoichiometry are obtained and compared with molecular dynamics (MD) simulations.

  2. University of Washington/ Northwest National Marine Renewable Energy Center Tidal Current Technology Test Protocol, Instrumentation, Design Code, and Oceanographic Modeling Collaboration: Cooperative Research and Development Final Report, CRADA Number CRD-11-452

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Driscoll, Frederick R.

    The University of Washington (UW) - Northwest National Marine Renewable Energy Center (UW-NNMREC) and the National Renewable Energy Laboratory (NREL) will collaborate to advance research and development (R&D) of Marine Hydrokinetic (MHK) renewable energy technology, specifically renewable energy captured from ocean tidal currents. UW-NNMREC is endeavoring to establish infrastructure, capabilities and tools to support in-water testing of marine energy technology. NREL is leveraging its experience and capabilities in field testing of wind systems to develop protocols and instrumentation to advance field testing of MHK systems. Under this work, UW-NNMREC and NREL will work together to develop a common instrumentation system and testing methodologies, standards and protocols. UW-NNMREC is also establishing simulation capabilities for MHK turbines and turbine arrays. NREL has extensive experience in wind turbine array modeling and is developing several computer-based numerical simulation capabilities for MHK systems. Under this CRADA, UW-NNMREC and NREL will work together to augment single device and array modeling codes. As part of this effort UW-NNMREC will also work with NREL to run simulations on NREL's high performance computer system.

  3. The Use of a Code-generating System for the Derivation of the Equations for Wind Turbine Dynamics

    NASA Astrophysics Data System (ADS)

    Ganander, Hans

    2003-10-01

    For many reasons, the size of wind turbines on the rapidly growing wind energy market is increasing. Relations between the aeroelastic properties of these new large turbines change. Modifications of turbine designs and control concepts are also influenced by the growing size. All these trends require the development of computer codes for design and certification. Moreover, there is a strong desire for design optimization procedures, which require fast codes. General codes, e.g. finite element codes, normally allow such modifications and improvements of existing wind turbine models, and this is done relatively easily. However, the calculation times of such codes are unfavourably long, certainly for optimization use. The use of an automatic code-generating system is an alternative that addresses both key issues: the code itself and design optimization. This technique can be used for rapid generation of codes for particular wind turbine simulation models. These ideas have been followed in the development of new versions of the wind turbine simulation code VIDYN. The equations of the simulation model were derived according to the Lagrange equation using Mathematica®, which was directed to output the results in Fortran code format. In this way the simulation code is automatically adapted to an actual turbine model, in terms of subroutines containing the equations of motion, definitions of parameters and degrees of freedom. Since the start in 1997, these methods, constituting a systematic way of working, have been used to develop specific, efficient calculation codes. The experience with this technique has been very encouraging, inspiring the continued development of new versions of the simulation code as the need has arisen, and the interest in design optimization is growing.
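    To make the "symbolic derivation, then generated simulation code" workflow concrete, the following is a hedged sketch in Python/SymPy rather than Mathematica. A single-degree-of-freedom pendulum stands in for a turbine model, and the symbols m, l, g and the use of sympy.fcode are illustrative assumptions, not part of VIDYN:

        # Derive the equation of motion of a pendulum from its Lagrangian and
        # emit Fortran source, mirroring the code-generation idea described above.
        import sympy as sp
        from sympy.calculus.euler import euler_equations

        t = sp.symbols('t')
        m, l, g = sp.symbols('m l g', positive=True)
        theta = sp.Function('theta')(t)

        # Lagrangian L = T - V for a point-mass pendulum
        L = sp.Rational(1, 2) * m * (l * theta.diff(t))**2 + m * g * l * sp.cos(theta)

        # Euler-Lagrange equation, solved for the angular acceleration
        eom = euler_equations(L, [theta], t)[0]
        theta_ddot = sp.solve(eom, theta.diff(t, 2))[0]

        # Print Fortran code for the acceleration, as a code generator would
        print(sp.fcode(sp.simplify(theta_ddot), standard=95))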

  4. The FLUKA Code: An Overview

    NASA Technical Reports Server (NTRS)

    Ballarini, F.; Battistoni, G.; Campanella, M.; Carboni, M.; Cerutti, F.; Empl, A.; Fasso, A.; Ferrari, A.; Gadioli, E.; Garzelli, M. V.; et al.

    2006-01-01

    FLUKA is a multipurpose Monte Carlo code which can transport a variety of particles over a wide energy range in complex geometries. The code is a joint project of INFN and CERN: part of its development is also supported by the University of Houston and NASA. FLUKA is successfully applied in several fields including, but not limited to, particle physics, cosmic ray physics, dosimetry, radioprotection, hadron therapy, space radiation, accelerator design and neutronics. The code is the standard tool used at CERN for dosimetry, radioprotection and beam-machine interaction studies. Here we give a glimpse into the code physics models with a particular emphasis on the hadronic and nuclear sector.

  5. ecode - Electron Transport Algorithm Testing v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Franke, Brian C.; Olson, Aaron J.; Bruss, Donald Eugene

    2016-10-05

    ecode is a Monte Carlo code used for testing algorithms related to electron transport. The code can read basic physics parameters, such as energy-dependent stopping powers and screening parameters. The code permits simple planar geometries of slabs or cubes. Parallelization consists of domain replication, with work distributed at the start of the calculation and statistical results gathered at the end of the calculation. Some basic routines (such as input parsing, random number generation, and statistics processing) are shared with the Integrated Tiger Series codes. A variety of algorithms for uncertainty propagation are incorporated based on the stochastic collocation and stochastic Galerkin methods. These permit uncertainty only in the total and angular scattering cross sections. The code contains algorithms for simulating stochastic mixtures of two materials. The physics is approximate, ranging from mono-energetic and isotropic scattering to screened Rutherford angular scattering and Rutherford energy-loss scattering (simple electron transport models). No production of secondary particles is implemented, and no photon physics is implemented.
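    As an illustration of one of the simple angular models named above, a screened Rutherford distribution, f(mu) proportional to 1/(1 + 2*eta - mu)^2 on [-1, 1], can be sampled by inverting its cumulative distribution. The sketch below is generic Python, not ecode; the screening parameter eta and the sample count are placeholders:

        import numpy as np

        def sample_screened_rutherford(eta, n, rng=np.random.default_rng()):
            """Sample n polar-angle cosines from a screened Rutherford distribution."""
            xi = rng.random(n)
            # Inverse-CDF formula: xi = 0 gives mu = 1, xi = 1 gives mu = -1
            return 1.0 - 2.0 * eta * xi / (1.0 + eta - xi)

        mu = sample_screened_rutherford(eta=0.01, n=100_000)
        print(mu.mean())  # close to 1 for strongly forward-peaked scattering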

  6. Extension of the energy range of experimental activation cross-sections data of deuteron induced nuclear reactions on indium up to 50MeV.

    PubMed

    Tárkányi, F; Ditrói, F; Takács, S; Hermanne, A; Ignatyuk, A V

    2015-11-01

    The energy range of our earlier measured activation cross-section data for longer-lived products of deuteron-induced nuclear reactions on indium was extended from 40 MeV up to 50 MeV. The traditional stacked-foil irradiation technique and non-destructive gamma spectrometry were used. No experimental data were found in the literature for this higher energy range. Experimental cross-sections for the formation of the radionuclides (113,110)Sn, (116m,115m,114m,113m,111,110g,109)In and (115)Cd are reported in the 37-50 MeV energy range; for the production of (110)Sn and (110g,109)In these are the first measurements ever. The experimental data were compared with the results of cross-section calculations of the ALICE and EMPIRE nuclear model codes and of the TALYS 1.6 nuclear model code as listed in the on-line library TENDL-2014. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. ADAPTION OF NONSTANDARD PIPING COMPONENTS INTO PRESENT DAY SEISMIC CODES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D. T. Clark; M. J. Russell; R. E. Spears

    2009-07-01

    With spiraling energy demand and flat energy supply, there is a need to extend the life of older nuclear reactors. This sometimes requires that existing systems be evaluated to present-day seismic codes. Older reactors built in the 1960s and early 1970s often used fabricated piping components that were code compliant during their initial construction time period, but are outside the standard parameters of present-day piping codes. There are several approaches available to the analyst in evaluating these non-standard components to modern codes. The simplest approach is to use the flexibility factors and stress indices for similar standard components with the assumption that the non-standard component’s flexibility factors and stress indices will be very similar. This approach can require significant engineering judgment. A more rational approach available in Section III of the ASME Boiler and Pressure Vessel Code, which is the subject of this paper, involves calculation of flexibility factors using finite element analysis of the non-standard component. Such analysis allows modeling of geometric and material nonlinearities. Flexibility factors based on these analyses are sensitive to the load magnitudes used in their calculation, load magnitudes that need to be consistent with those produced by the linear system analyses where the flexibility factors are applied. This can lead to iteration, since the magnitude of the loads produced by the linear system analysis depends on the magnitude of the flexibility factors. After the loading applied to the nonstandard component finite element model has been matched to loads produced by the associated linear system model, the component finite element model can then be used to evaluate the performance of the component under the loads with the nonlinear analysis provisions of the Code, should the load levels lead to calculated stresses in excess of allowable stresses. This paper details the application of component-level finite element modeling to account for geometric and material nonlinear component behavior in a linear elastic piping system model. Note that this technique can be applied to the analysis of B31 piping systems.
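    The load/flexibility coupling described above lends itself to a simple fixed-point iteration. The following is a schematic sketch only; both callables are hypothetical stand-ins for the component finite element analysis and the linear piping-system analysis, not an ASME procedure:

        def iterate_flexibility(system_load, component_flexibility, k0=1.0,
                                tol=1.0e-4, max_iter=50):
            """Iterate until the flexibility factor and the system load are consistent."""
            k = k0
            for _ in range(max_iter):
                load = system_load(k)                # linear system analysis with current k
                k_new = component_flexibility(load)  # component FE analysis at that load
                if abs(k_new - k) <= tol * abs(k):
                    return k_new, load
                k = k_new
            raise RuntimeError("flexibility-factor iteration did not converge")

        # Toy closures standing in for the two analyses
        k_star, load_star = iterate_flexibility(
            system_load=lambda k: 100.0 / (1.0 + 0.1 * k),
            component_flexibility=lambda load: 1.0 + 0.002 * load)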

  8. NREL: International Activities - U.S.-China Renewable Energy Partnership

    Science.gov Websites

    Current projects under this partnership include recommendations for photovoltaic (PV) and wind grid code updates, Solar PV and TC88 Wind working groups, micrositing, and collaboration on innovative business models and financing solutions for solar PV deployment.

  9. A Binary-Encounter-Bethe Approach to Simulate DNA Damage by the Direct Effect

    NASA Technical Reports Server (NTRS)

    Plante, Ianik; Cucinotta, Francis A.

    2013-01-01

    DNA damage is of crucial importance in understanding the effects of ionizing radiation. The main mechanisms of DNA damage are the direct effect of radiation (e.g., direct ionization) and the indirect effect (e.g., damage by •OH radicals created by the radiolysis of water). Despite years of research in this area, many questions on the formation of DNA damage remain. To refine existing DNA damage models, an approach based on the Binary-Encounter-Bethe (BEB) model was developed [1]. This model calculates differential cross sections for ionization of the molecular orbitals of the DNA bases, sugars and phosphates using the electron binding energy, the mean kinetic energy and the occupancy number of the orbital. This cross section has an analytic form which is quite convenient to use and allows the sampling of the energy loss occurring during an ionization event. To simulate the radiation track structure, the code RITRACKS developed at the NASA Johnson Space Center is used [2]. This code calculates all the energy deposition events and the formation of the radiolytic species by the ion and the secondary electrons as well. We have also developed a technique to use the integrated BEB cross section for the bases, sugars and phosphates in the radiation transport code RITRACKS. These techniques should allow the simulation of DNA damage by ionizing radiation, and understanding of the formation of double-strand breaks caused by clustered damage in different conditions.
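    For reference, the BEB total ionization cross section per molecular orbital has a compact closed form built from the binding energy B, the mean orbital kinetic energy U and the occupation number N mentioned above. The sketch below is a generic implementation of that published formula; the example orbital values are invented and are not RITRACKS inputs:

        import math

        A0 = 0.529177e-8   # Bohr radius, cm
        RYD = 13.6057      # Rydberg energy, eV

        def beb_cross_section(T, B, U, N):
            """BEB total ionization cross section (cm^2) for incident electron energy T (eV)."""
            if T <= B:
                return 0.0
            t, u = T / B, U / B
            S = 4.0 * math.pi * A0**2 * N * (RYD / B)**2
            term = (math.log(t) / 2.0) * (1.0 - 1.0 / t**2) \
                   + 1.0 - 1.0 / t - math.log(t) / (t + 1.0)
            return S / (t + u + 1.0) * term

        # Hypothetical orbital: B = 10 eV, U = 20 eV, N = 2 electrons
        print(beb_cross_section(100.0, 10.0, 20.0, 2))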

  10. Verification and Validation of the k-kL Turbulence Model in FUN3D and CFL3D Codes

    NASA Technical Reports Server (NTRS)

    Abdol-Hamid, Khaled S.; Carlson, Jan-Renee; Rumsey, Christopher L.

    2015-01-01

    The implementation of the k-kL turbulence model using multiple computational fluid dynamics (CFD) codes is reported herein. The k-kL model is a two-equation turbulence model based on Abdol-Hamid's closure and Menter's modification to Rotta's two-equation model. Rotta shows that a reliable transport equation can be formed from the turbulent length scale L and the turbulent kinetic energy k. Rotta's equation is well suited for term-by-term modeling and displays useful features compared to other two-equation models. An important difference is that this formulation leads to the inclusion of higher-order velocity derivatives in the source terms of the scale equations. This can enhance the ability of the Reynolds-averaged Navier-Stokes (RANS) solvers to simulate unsteady flows. The present report documents the formulation of the model as implemented in the CFD codes FUN3D and CFL3D. Methodology, verification and validation examples are shown. Attached and separated flow cases are documented and compared with experimental data. The results show generally very good comparisons with canonical and experimental data, as well as matching results code-to-code. The results from this formulation are similar or better than results using the SST turbulence model.

  11. Some Progress in Large-Eddy Simulation using the 3-D Vortex Particle Method

    NASA Technical Reports Server (NTRS)

    Winckelmans, G. S.

    1995-01-01

    This two-month visit at CTR was devoted to investigating possibilities in LES modeling in the context of the 3-D vortex particle method (= vortex element method, VEM) for unbounded flows. A dedicated code was developed for that purpose. Although O(N^2) and thus slow, it offers the advantage that it can easily be modified to try out many ideas on problems involving up to N ≈ 10^4 particles. Energy spectra (which require O(N^2) operations per wavenumber) are also computed. Progress was realized in the following areas: particle redistribution schemes, relaxation schemes to maintain the solenoidal condition on the particle vorticity field, simple LES models and their VEM extension, and possible new avenues in LES. Model problems that involve strong interaction between vortex tubes were computed, together with diagnostics: total vorticity, linear and angular impulse, energy and energy spectrum, and enstrophy. More work is needed, however, especially regarding relaxation schemes and further validation and development of LES models for VEM. Finally, what works well will eventually have to be incorporated into the fast parallel tree code.
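    The O(N^2) cost mentioned above comes from the direct Biot-Savart summation over all particle pairs. A minimal sketch of that kernel follows (generic NumPy, not the CTR code; the algebraic smoothing with core radius sigma and the random particle set are illustrative assumptions):

        import numpy as np

        def induced_velocity(x, alpha, sigma=0.1):
            """Velocity induced at each particle by all vector-valued strengths alpha."""
            u = np.zeros_like(x)
            for i in range(x.shape[0]):
                r = x[i] - x                              # separations to particle i
                r2 = np.sum(r * r, axis=1) + sigma**2     # regularized squared distance
                kernel = 1.0 / (4.0 * np.pi * r2**1.5)
                u[i] = np.sum(np.cross(alpha, r) * kernel[:, None], axis=0)
            return u

        rng = np.random.default_rng(0)
        x = rng.random((1000, 3))
        alpha = rng.random((1000, 3)) - 0.5
        u = induced_velocity(x, alpha)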

  12. Extending the Coyote emulator to dark energy models with standard w0-wa parametrization of the equation of state

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Casarini, L.; Bonometto, S.A.; Tessarotto, E.

    2016-08-01

    We discuss an extension of the Coyote emulator to predict non-linear matter power spectra of dark energy (DE) models with a scale-factor-dependent equation of state of the form w = w0 + (1 - a) wa. The extension is based on the mapping rule between non-linear spectra of DE models with constant equation of state and those with a time-varying one, originally introduced in ref. [40]. Using a series of N-body simulations we show that the spectral equivalence is accurate to sub-percent level across the same range of modes and redshift covered by the Coyote suite. Thus, the extended emulator provides a very efficient and accurate tool to predict non-linear power spectra for DE models with the w0-wa parametrization. According to the same criteria we have developed a numerical code that we have implemented in a dedicated module for the CAMB code, which can be used in combination with the Coyote Emulator in likelihood analyses of non-linear matter power spectrum measurements. All codes can be found at https://github.com/luciano-casarini/pkequal.
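    For orientation, the w0-wa (CPL) equation of state w(a) = w0 + (1 - a) wa gives a closed-form dark-energy density history, rho_DE(a)/rho_DE(1) = a^(-3(1 + w0 + wa)) exp(-3 wa (1 - a)). The sketch below evaluates the corresponding dimensionless Hubble rate for a flat cosmology; the parameter values are placeholders, not Coyote or CAMB settings:

        import numpy as np

        def E_of_a(a, omega_m=0.3, w0=-0.9, wa=0.2):
            """Dimensionless Hubble rate H(a)/H0 for a flat w0-wa cosmology."""
            omega_de = 1.0 - omega_m
            rho_de = a**(-3.0 * (1.0 + w0 + wa)) * np.exp(-3.0 * wa * (1.0 - a))
            return np.sqrt(omega_m * a**-3 + omega_de * rho_de)

        a = np.linspace(0.1, 1.0, 10)
        print(E_of_a(a))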

  13. Comparative Dosimetric Estimates of a 25 keV Electron Micro-beam with three Monte Carlo Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mainardi, Enrico; Donahue, Richard J.; Blakely, Eleanor A.

    2002-09-11

    The calculations presented compare the different performances of the three Monte Carlo codes PENELOPE-1999, MCNP-4C and PITS for the evaluation of dose profiles from a 25 keV electron micro-beam traversing individual cells. The overall model of a cell is a water cylinder equivalent for the three codes but with a different internal scoring geometry: hollow cylinders for PENELOPE and MCNP, whereas spheres are used for the PITS code. A cylindrical cell geometry with scoring volumes with the shape of hollow cylinders was initially selected for PENELOPE and MCNP because of its superior simulation of the actual shape and dimensions of a cell and for its improved computer-time efficiency if compared to spherical internal volumes. Some of the transfer points and energy transfers that constitute a radiation track may actually fall in the space between spheres, which would be outside the spherical scoring volume. This internal geometry, along with the PENELOPE algorithm, drastically reduced the computer time when using this code compared with event-by-event Monte Carlo codes like PITS. This preliminary work has been important to address dosimetric estimates at low electron energies. It demonstrates that codes like PENELOPE can be used for dose evaluation, even with such small geometries and energies involved, which are far below the normal use for which the code was created. Further work (initiated in Summer 2002) is still needed, however, to create a user-code for PENELOPE that allows uniform comparison of exact cell geometries, integral volumes and also microdosimetric scoring quantities, a field where track-structure codes like PITS, written for this purpose, are believed to be superior.

  14. A comparison of models for supernova remnants including cosmic rays

    NASA Astrophysics Data System (ADS)

    Kang, Hyesung; Drury, L. O'C.

    1992-11-01

    A simplified model which can follow the dynamical evolution of a supernova remnant including the acceleration of cosmic rays without carrying out full numerical simulations has been proposed by Drury, Markiewicz, & Voelk in 1989. To explore the accuracy and the merits of using such a model, we have recalculated with the simplified code the evolution of the supernova remnants considered in Jones & Kang, in which more detailed and accurate numerical simulations were done using a full hydrodynamic code based on the two-fluid approximation. For the total energy transferred to cosmic rays the two codes are in good agreement, the acceleration efficiency being the same within a factor of 2 or so. The dependence of the results of the two codes on the closure parameters for the two-fluid approximation is also qualitatively similar. The agreement is somewhat degraded in those cases where the shock is smoothed out by the cosmic rays.

  15. Commercial Demand Module - NEMS Documentation

    EIA Publications

    2017-01-01

    Documents the objectives, analytical approach and development of the National Energy Modeling System (NEMS) Commercial Sector Demand Module. The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, model source code, and forecast results generated through the synthesis and scenario development based on these components.

  16. Final Technical Report for GO17004 Regulatory Logic: Codes and Standards for the Hydrogen Economy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakarado, Gary L.

    The objectives of this project are to: develop a robust supporting research and development program to provide critical hydrogen behavior data and a detailed understanding of hydrogen combustion and safety across a range of scenarios, needed to establish setback distances in building codes and minimize the overall data gaps in code development; support and facilitate the completion of technical specifications by the International Organization for Standardization (ISO) for gaseous hydrogen refueling (TS 20012) and standards for on-board liquid (ISO 13985) and gaseous or gaseous blend (ISO 15869) hydrogen storage by 2007; support and facilitate the effort, led by the NFPA, to complete the draft Hydrogen Technologies Code (NFPA 2) by 2008; with experimental data and input from Technology Validation Program element activities, support and facilitate the completion of standards for bulk hydrogen storage (e.g., NFPA 55) by 2008; facilitate the adoption of the most recently available model codes (e.g., from the International Code Council [ICC]) in key regions; complete preliminary research and development on hydrogen release scenarios to support the establishment of setback distances in building codes and provide a sound basis for model code development and adoption; support and facilitate the development of Global Technical Regulations (GTRs) by 2010 for hydrogen vehicle systems under the United Nations Economic Commission for Europe, World Forum for Harmonization of Vehicle Regulations and Working Party on Pollution and Energy Program (ECE-WP29/GRPE); and to support and facilitate the completion by 2012 of necessary codes and standards needed for the early commercialization and market entry of hydrogen energy technologies.

  17. Model documentation report: Residential sector demand module of the national energy modeling system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    This report documents the objectives, analytical approach, and development of the National Energy Modeling System (NEMS) Residential Sector Demand Module. The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, and FORTRAN source code. This reference document provides a detailed description for energy analysts, other users, and the public. The NEMS Residential Sector Demand Module is currently used for mid-term forecasting purposes and energy policy analysis over the forecast horizon of 1993 through 2020. The model generates forecasts of energy demand for the residential sector by service, fuel, and Census Division. Policy impacts resulting from new technologies, market incentives, and regulatory changes can be estimated using the module. 26 refs., 6 figs., 5 tabs.

  18. Nonlinear 3D visco-resistive MHD modeling of fusion plasmas: a comparison between numerical codes

    NASA Astrophysics Data System (ADS)

    Bonfiglio, D.; Chacon, L.; Cappello, S.

    2008-11-01

    Fluid plasma models (and, in particular, the MHD model) are extensively used in the theoretical description of laboratory and astrophysical plasmas. We present here a successful benchmark between two nonlinear, three-dimensional, compressible visco-resistive MHD codes. One is the fully implicit, finite volume code PIXIE3D [1,2], which is characterized by many attractive features, notably the generalized curvilinear formulation (which makes the code applicable to different geometries) and the possibility to include in the computation the energy transport equation and the extended MHD version of Ohm's law. In addition, the parallel version of the code features excellent scalability properties. Results from this code, obtained in cylindrical geometry, are compared with those produced by the semi-implicit cylindrical code SpeCyl, which uses finite differences radially, and spectral formulation in the other coordinates [3]. Both single and multi-mode simulations are benchmarked, regarding both reversed field pinch (RFP) and ohmic tokamak magnetic configurations. [1] L. Chacon, Computer Physics Communications 163, 143 (2004). [2] L. Chacon, Phys. Plasmas 15, 056103 (2008). [3] S. Cappello, Plasma Phys. Control. Fusion 46, B313 (2004) & references therein.

  19. METHES: A Monte Carlo collision code for the simulation of electron transport in low temperature plasmas

    NASA Astrophysics Data System (ADS)

    Rabie, M.; Franck, C. M.

    2016-06-01

    We present a freely available MATLAB code for the simulation of electron transport in arbitrary gas mixtures in the presence of uniform electric fields. For steady-state electron transport, the program provides the transport coefficients, reaction rates and the electron energy distribution function. The program uses established Monte Carlo techniques and is compatible with the electron scattering cross section files from the open-access Plasma Data Exchange Project LXCat. The code is written in object-oriented design, allowing the tracing and visualization of the spatiotemporal evolution of electron swarms and the temporal development of the mean energy and the electron number due to attachment and/or ionization processes. We benchmark our code with well-known model gases as well as the real gases argon, N2, O2, CF4, SF6 and mixtures of N2 and O2.
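    A standard building block in electron swarm codes of this type (whether METHES uses exactly this variant is not stated in the abstract) is the null-collision free flight: sample the time to the next trial collision with a constant upper-bound collision frequency, then accept it as a real collision with probability proportional to the actual frequency. The sketch below illustrates only that generic technique; nu_max and nu_actual are schematic stand-ins for quantities built from LXCat cross sections:

        import numpy as np

        rng = np.random.default_rng(42)

        def free_flight_time(nu_max):
            """Exponentially distributed time to the next trial (real or null) collision."""
            return -np.log(rng.random()) / nu_max

        def is_real_collision(nu_actual, nu_max):
            """Accept the trial collision with probability nu_actual / nu_max."""
            return rng.random() < nu_actual / nu_max

        dt = free_flight_time(nu_max=1.0e9)       # seconds
        real = is_real_collision(5.0e8, 1.0e9)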

  20. State-to-state models of vibrational relaxation in Direct Simulation Monte Carlo (DSMC)

    NASA Astrophysics Data System (ADS)

    Oblapenko, G. P.; Kashkovsky, A. V.; Bondar, Ye A.

    2017-02-01

    In the present work, the application of state-to-state models of vibrational energy exchanges to the Direct Simulation Monte Carlo (DSMC) method is considered. A state-to-state model for VT transitions of vibrational energy in nitrogen and oxygen, based on the application of the inverse Laplace transform to results of quasiclassical trajectory calculations (QCT) of vibrational energy transitions, along with the Forced Harmonic Oscillator (FHO) state-to-state model, is implemented in a DSMC code and applied to flows around blunt bodies. Comparisons are made with the widely used Larsen-Borgnakke model and the influence of multi-quantum VT transitions is assessed.

  1. Extension to Higher Mass Numbers of an Improved Knockout-Ablation-Coalescence Model for Secondary Neutron and Light Ion Production in Cosmic Ray Interactions

    NASA Astrophysics Data System (ADS)

    Indi Sriprisan, Sirikul; Townsend, Lawrence; Cucinotta, Francis A.; Miller, Thomas M.

    Purpose: An analytical knockout-ablation-coalescence model capable of making quantitative predictions of the neutron spectra from high-energy nucleon-nucleus and nucleus-nucleus collisions is being developed for use in space radiation protection studies. The FORTRAN computer code that implements this model is called UBERNSPEC. The knockout or abrasion stage of the model is based on Glauber multiple scattering theory. The ablation part of the model uses the classical evaporation model of Weisskopf-Ewing. In earlier work, the knockout-ablation model has been extended to incorporate important coalescence effects into the formalism. Recently, alpha coalescence has been incorporated, and the ability to predict light ion spectra with the coalescence model added. The earlier versions were limited to nuclei with mass numbers less than 69. In this work, the UBERNSPEC code has been extended to make predictions of secondary neutrons and light ion production from the interactions of heavy charged particles with higher mass numbers (as large as 238). The predictions are compared with published measurements of neutron spectra and light ion energy for a variety of collision pairs. Furthermore, the predicted spectra from this work are compared with the predictions from the recently-developed heavy ion event generator incorporated in the Monte Carlo radiation transport code HETC-HEDS.

  2. Delamination modeling of laminate plate made of sublaminates

    NASA Astrophysics Data System (ADS)

    Kormaníková, Eva; Kotrasová, Kamila

    2017-07-01

    The paper presents the mixed-mode delamination of plates made of sublaminates. For this purpose, an opening load mode of delamination is proposed as the failure model. The failure model is implemented in the ANSYS code to calculate the mixed-mode delamination response as an energy release rate. The analysis is based on interface techniques. Within the interface finite element modeling, the individual components of the damage parameters, such as spring reaction forces, relative displacements and energy release rates, are calculated along the delamination front.

  3. Monte Carlo dose calculations in homogeneous media and at interfaces: a comparison between GEPTS, EGSnrc, MCNP, and measurements.

    PubMed

    Chibani, Omar; Li, X Allen

    2002-05-01

    Three Monte Carlo photon/electron transport codes (GEPTS, EGSnrc, and MCNP) are benchmarked against dose measurements in homogeneous (both low- and high-Z) media as well as at interfaces. A brief overview of the physical models used by each code for photon and electron (positron) transport is given. Absolute calorimetric dose measurements for 0.5 and 1 MeV electron beams incident on homogeneous and multilayer media are compared with the predictions of the three codes. Comparison with dose measurements in two-layer media exposed to a 60Co gamma source is also performed. In addition, comparisons between the codes (including the EGS4 code) are done for (a) 0.05 to 10 MeV electron beams and positron point sources in lead, (b) high-energy photons (10 and 20 MeV) irradiating a multilayer phantom (water/steel/air), and (c) simulation of a 90Sr/90Y brachytherapy source. A good agreement is observed between the calorimetric electron dose measurements and predictions of GEPTS and EGSnrc in both homogeneous and multilayer media. MCNP outputs are found to be dependent on the energy-indexing method (Default/ITS style). This dependence is significant in homogeneous media as well as at interfaces. MCNP(ITS) fits the experimental data more closely than MCNP(DEF), except for the case of Be. At low energy (0.05 and 0.1 MeV), MCNP(ITS) dose distributions in lead show higher maxima in comparison with GEPTS and EGSnrc. EGS4 produces too penetrating electron-dose distributions in high-Z media, especially at low energy (<0.1 MeV). For positrons, differences between GEPTS and EGSnrc are observed in lead because GEPTS distinguishes positrons from electrons for both elastic multiple scattering and bremsstrahlung emission models. For the 60Co source, a quite good agreement between calculations and measurements is observed with regard to the experimental uncertainty. For the other cases (10 and 20 MeV photon sources and the 90Sr/90Y beta source), a good agreement is found between the three codes. In conclusion, differences between GEPTS and EGSnrc results are found to be very small for almost all media and energies studied. MCNP results depend significantly on the electron energy-indexing method.

  4. Data Collection Handbook to Support Modeling Impacts of Radioactive Material in Soil and Building Structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Charley; Kamboj, Sunita; Wang, Cheng

    2015-09-01

    This handbook is an update of the 1993 version of the Data Collection Handbook and the Radionuclide Transfer Factors Report to support modeling the impact of radioactive material in soil. Many new parameters have been added to the RESRAD Family of Codes, and new measurement methodologies are available. A detailed review of available parameter databases was conducted in preparation of this new handbook. This handbook is a companion document to the user manuals when using the RESRAD (onsite) and RESRAD-OFFSITE codes. It can also be used for the RESRAD-BUILD code because some of the building-related parameters are included in this handbook. The RESRAD (onsite) code has been developed for implementing U.S. Department of Energy Residual Radioactive Material Guidelines. Hydrogeological, meteorological, geochemical, geometrical (size, area, depth), crops and livestock, human intake, source characteristic, and building characteristic parameters are used in the RESRAD (onsite) code. The RESRAD-OFFSITE code is an extension of the RESRAD (onsite) code and can also model the transport of radionuclides to locations outside the footprint of the primary contamination. This handbook discusses parameter definitions, typical ranges, variations, and measurement methodologies. It also provides references for sources of additional information. Although this handbook was developed primarily to support the application of the RESRAD Family of Codes, the discussions and values are valid for use of other pathway analysis models and codes.

  5. Nuclear Energy Advanced Modeling and Simulation (NEAMS) Waste Integrated Performance and Safety Codes (IPSC) : FY10 development and integration.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Criscenti, Louise Jacqueline; Sassani, David Carl; Arguello, Jose Guadalupe, Jr.

    2011-02-01

    This report describes the progress in fiscal year 2010 in developing the Waste Integrated Performance and Safety Codes (IPSC) in support of the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The goal of the Waste IPSC is to develop an integrated suite of computational modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with robust verification, validation, and software quality requirements. Waste IPSC activities in fiscal year 2010 focused on specifying a challenge problem to demonstrate proof of concept, developing a verification and validation plan, and performing an initial gap analysis to identify candidate codes and tools to support the development and integration of the Waste IPSC. The current Waste IPSC strategy is to acquire and integrate the necessary Waste IPSC capabilities wherever feasible, and develop only those capabilities that cannot be acquired or suitably integrated, verified, or validated. This year-end progress report documents the FY10 status of acquisition, development, and integration of thermal-hydrologic-chemical-mechanical (THCM) code capabilities, frameworks, and enabling tools and infrastructure.

  6. Overview of Particle and Heavy Ion Transport Code System PHITS

    NASA Astrophysics Data System (ADS)

    Sato, Tatsuhiko; Niita, Koji; Matsuda, Norihiro; Hashimoto, Shintaro; Iwamoto, Yosuke; Furuta, Takuya; Noda, Shusaku; Ogawa, Tatsuhiko; Iwase, Hiroshi; Nakashima, Hiroshi; Fukahori, Tokio; Okumura, Keisuke; Kai, Tetsuya; Chiba, Satoshi; Sihver, Lembit

    2014-06-01

    A general purpose Monte Carlo Particle and Heavy Ion Transport code System, PHITS, is being developed through the collaboration of several institutes in Japan and Europe. The Japan Atomic Energy Agency is responsible for managing the entire project. PHITS can deal with the transport of nearly all particles, including neutrons, protons, heavy ions, photons, and electrons, over wide energy ranges using various nuclear reaction models and data libraries. It is written in Fortran language and can be executed on almost all computers. All components of PHITS such as its source, executable and data-library files are assembled in one package and then distributed to many countries via the Research organization for Information Science and Technology, the Data Bank of the Organization for Economic Co-operation and Development's Nuclear Energy Agency, and the Radiation Safety Information Computational Center. More than 1,000 researchers have been registered as PHITS users, and they apply the code to various research and development fields such as nuclear technology, accelerator design, medical physics, and cosmic-ray research. This paper briefly summarizes the physics models implemented in PHITS, and introduces some important functions useful for specific applications, such as an event generator mode and beam transport functions.

  7. Measurement of charge- and mass-changing cross sections for 4He+12C collisions in the energy range 80-220 MeV/u for applications in ion beam therapy

    NASA Astrophysics Data System (ADS)

    Horst, Felix; Schuy, Christoph; Weber, Uli; Brinkmann, Kai-Thomas; Zink, Klemens

    2017-08-01

    4He ions are being considered for use in hadron radiotherapy due to their favorable physical and radiobiological properties. For an accurate dose calculation, the fragmentation of the primary 4He ions occurring as a result of nuclear collisions must be taken into account. Therefore, precise nuclear reaction models need to be implemented in the radiation transport codes used for dose calculation. A fragmentation experiment using thin graphite targets was conducted at the Heidelberg Ion Beam Therapy Center (HIT) to obtain new and precise 4He-nucleus cross-section data in the clinically relevant energy range. Measured values for the charge-changing cross section, the mass-changing cross section, as well as the inclusive 3He production cross section for 4He+12C collisions at energies between 80 and 220 MeV/u are presented. These data are compared to the 4He-nucleus reaction model by DeVries and Peng as well as to the parametrizations by Tripathi et al. and by Cucinotta et al., which are implemented in the treatment planning code TRiP98 and several other radiation transport codes.
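    Conceptually, such thin-target cross sections follow from beam attenuation: N_out = N_in exp(-n_t sigma), with n_t the number of target nuclei per unit area. The sketch below applies that generic relation; the counts and target thickness are invented for illustration and are not HIT data:

        import math

        N_A = 6.02214076e23  # Avogadro constant, 1/mol

        def attenuation_cross_section(n_in, n_out, areal_density, molar_mass):
            """Cross section in mb; areal_density in g/cm^2, molar_mass in g/mol."""
            n_t = N_A * areal_density / molar_mass   # target nuclei per cm^2
            sigma_cm2 = math.log(n_in / n_out) / n_t
            return sigma_cm2 * 1.0e27                # 1 mb = 1e-27 cm^2

        # Example: 1.0 g/cm^2 graphite target, 4% of the projectiles change mass
        print(attenuation_cross_section(1.0e6, 0.96e6, 1.0, 12.011))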

  8. Advancements in OSeMOSYS - the Open Source energy MOdelling SYStem

    NASA Astrophysics Data System (ADS)

    Gardumi, Francesco; Almulla, Youssef; Shivakumar, Abhishek; Taliotis, Constantinos; Howells, Mark

    2017-04-01

    This work provides a review of the latest developments and applications of the OSeMOSYS energy systems model generator. OSeMOSYS was launched at Oxford University in 2011, including co-authors from UCL, UNIDO, UCT, Stanford, PSI and other institutions. It was designed to fill a gap in the energy modelling toolkit, where no open source optimising model generators were available at the time. OSeMOSYS is free, open source and accessible. Written in the GNU MathProg programming language, it can generate anything from small village energy models up to global multi-resource integrated (Climate, Land, Energy, Water) models. In its most widespread version it calculates what investments to make, when, at what capacity and how to operate them, to meet given final demands and policy targets at the lowest cost. OSeMOSYS is structured into blocks of functionalities, each consisting of a stand-alone set of equations which can be plugged into the core code to add specific insights for the case-study of interest. Originally, seven blocks of functionalities for the objective function, costs, storage, capacity adequacy, energy balance, constraints, and emissions were provided, documented by plain English descriptions and algebraic formulations. Recently, the block for storage was deeply revised and developed, while new blocks of functionality for studying short-term implications of energy planning on the electricity system were designed. These include equations for computing (1) the reserve capacity dispatch, (2) the costs of flexible operation of power plants, and (3) the reserve capacity demand as a function of the penetration of intermittent renewables. Additionally, a revision of the whole code was completed, as the result of a public call launched and led by UNite Ideas. This allowed the computational time to be greatly reduced and opened up the path to refinements of the scales of analysis. Finally, the code was made available in the Python and GAMS programming languages, thus engaging two of the widest existing communities of programmers. Such developments allowed a number of applications to be produced at different scales. Regional and country models were generated for the whole of South America and Sub-Saharan Africa. A Pan-European model is under development. Models of Cyprus and Tunisia, detailed down to the individual power plant, are among the latest applications. Finally, integrated assessment water-energy models have been generated for regions in Central Asia and the Balkans, in the framework of the UNECE Water Convention. These look into trans-boundary issues related to water and energy management along river basins, including detailed representations of water storage and cascading power plants. This multiplicity of developments and applications of OSeMOSYS engages a wide community of users and decision-makers and fosters the use of modelling tools for energy planning. This fulfils a scientific and social mission to empower communities with the development of solutions for better access to energy.
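    The "lowest-cost investment" calculation at the heart of such model generators is a linear program. The toy sketch below conveys the flavor in Python with SciPy rather than GNU MathProg; the two technologies, their costs and availabilities, and the 100 MW peak demand are made-up numbers, not OSeMOSYS equations or data:

        from scipy.optimize import linprog

        capital_cost = [100.0, 70.0]   # cost per MW of installed capacity [gas, wind]
        availability = [0.9, 0.35]     # fraction of capacity available at peak
        peak_demand = 100.0            # MW that must be met at peak

        # linprog minimizes c @ x subject to A_ub @ x <= b_ub, so the adequacy
        # constraint availability @ x >= peak_demand is written with flipped signs.
        res = linprog(c=capital_cost,
                      A_ub=[[-availability[0], -availability[1]]],
                      b_ub=[-peak_demand],
                      bounds=[(0, None), (0, None)])
        print(res.x, res.fun)          # optimal capacities (MW) and total cost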

  9. Modeling of Flow Blockage in a Liquid Metal-Cooled Reactor Subassembly with a Subchannel Analysis Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeong, Hae-Yong; Ha, Kwi-Seok; Chang, Won-Pyo

    The local blockage in a subassembly of a liquid metal-cooled reactor (LMR) is of importance to the plant safety because of the compact design and the high power density of the core. To analyze the thermal-hydraulic parameters in a subassembly of a liquid metal-cooled reactor with a flow blockage, the Korea Atomic Energy Research Institute has developed the MATRA-LMR-FB code. This code uses the distributed resistance model to describe the sweeping flow formed by the wire wrap around the fuel rods and to model the recirculation flow after a blockage. The hybrid difference scheme is also adopted for the description of the convective terms in the recirculating wake region of low velocity. Some state-of-the-art turbulent mixing models were implemented in the code, and the models suggested by Rehme and by Zhukov are analyzed and found to be appropriate for the description of the flow blockage in an LMR subassembly. The MATRA-LMR-FB code predicts accurately the experimental data of the Oak Ridge National Laboratory 19-pin bundle with a blockage for both the high-flow and low-flow conditions. The influences of the distributed resistance model, the hybrid difference method, and the turbulent mixing models are evaluated step by step with the experimental data. The appropriateness of the models also has been evaluated through a comparison with the results from the COMMIX code calculation. The flow blockage for the KALIMER design has been analyzed with the MATRA-LMR-FB code and is compared with the SABRE code to guarantee the design safety for the flow blockage.
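    The hybrid difference scheme mentioned above blends central and upwind differencing according to the cell Peclet number. A generic one-dimensional finite-volume form of the neighbor coefficient (Patankar-style, not the MATRA-LMR-FB implementation) is sketched below; D and F are the diffusion conductance and the convective mass flux at a face:

        def hybrid_face_coefficient(D, F):
            """Neighbor coefficient a_E: central-like for |F/D| < 2, upwind otherwise."""
            return max(-F, D - F / 2.0, 0.0)

        print(hybrid_face_coefficient(D=1.0, F=0.5))  # low Peclet number: 0.75 (central-like)
        print(hybrid_face_coefficient(D=1.0, F=5.0))  # high Peclet number: 0.0 (upwind, outflow face)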

  10. Data collection handbook to support modeling the impacts of radioactive material in soil

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, C.; Cheng, J.J.; Jones, L.G.

    1993-04-01

    A pathway analysis computer code called RESRAD has been developed for implementing US Department of Energy Residual Radioactive Material Guidelines. Hydrogeological, meteorological, geochemical, geometrical (size, area, depth), and material-related (soil, concrete) parameters are used in the RESRAD code. This handbook discusses parameter definitions, typical ranges, variations, measurement methodologies, and input screen locations. Although this handbook was developed primarily to support the application of RESRAD, the discussions and values are valid for other model applications.

  11. [Medical Applications of the PHITS Code I: Recent Improvements and Biological Dose Estimation Model].

    PubMed

    Sato, Tatsuhiko; Furuta, Takuya; Hashimoto, Shintaro; Kuga, Naoya

    2015-01-01

    PHITS is a general purpose Monte Carlo particle transport simulation code developed through the collaboration of several institutes mainly in Japan. It can analyze the motion of nearly all radiations over wide energy ranges in 3-dimensional matters. It has been used for various applications including medical physics. This paper reviews the recent improvements of the code, together with the biological dose estimation method developed on the basis of the microdosimetric function implemented in PHITS.

  12. FASTGRASS implementation in BISON and Fission gas behavior characterization in UO 2 and connection to validating MARMOT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yun, Di; Mo, Kun; Ye, Bei

    2015-09-30

    This activity is supported by the US Nuclear Energy Advanced Modeling and Simulation (NEAMS) Fuels Product Line (FPL). Two major accomplishments in FY 15 are summarized in this report: (1) implementation of the FASTGRASS module in the BISON code; and (2) a Xe implantation experiment for large-grained UO 2. Both the BISON and MARMOT codes have been developed by Idaho National Laboratory (INL) to enable next generation fuel performance modeling capability as part of the NEAMS Program FPL. To contribute to the development of the Moose-Bison-Marmot (MBM) code suite, we have implemented the FASTGRASS fission gas model as a module in the BISON code. Based on rate theory formulations, the coupled FASTGRASS module in BISON is capable of modeling LWR oxide fuel fission gas behavior and fission gas release. In addition, we conducted a Xe implantation experiment at the Argonne Tandem Linac Accelerator System (ATLAS) in order to produce the needed UO 2 samples with the desired bubble morphology. With these samples, further experiments to study the fission gas diffusivity are planned to provide validation data for the fission gas release model in the MARMOT code.

  13. Extreme Scale Plasma Turbulence Simulations on Top Supercomputers Worldwide

    DOE PAGES

    Tang, William; Wang, Bei; Ethier, Stephane; ...

    2016-11-01

    The goal of the extreme scale plasma turbulence studies described in this paper is to expedite the delivery of reliable predictions on confinement physics in large magnetic fusion systems by using world-class supercomputers to carry out simulations with unprecedented resolution and temporal duration. This has involved architecture-dependent optimizations of performance scaling and addressing code portability and energy issues, with the metrics for multi-platform comparisons being 'time-to-solution' and 'energy-to-solution'. Realistic results addressing how confinement losses caused by plasma turbulence scale from present-day devices to the much larger $25 billion international ITER fusion facility have been enabled by innovative advances in the GTC-P code, including (i) implementation of one-sided communication from the MPI 3.0 standard; (ii) creative optimization techniques on Xeon Phi processors; and (iii) development of a novel performance model for the key kernels of the PIC code. Our results show that modeling data movement is sufficient to predict performance on modern supercomputer platforms.
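    A data-movement-centred performance model of the kind alluded to above can be sketched very compactly: predicted kernel time is the larger of a compute bound and a memory-traffic bound. This is only a generic roofline-style illustration, not the paper's model; the hardware and kernel numbers below are placeholders, not GTC-P measurements:

        def kernel_time(flops, bytes_moved, peak_flops=1.0e12, bandwidth=2.0e11):
            """Predicted time (s): max of compute-bound and bandwidth-bound estimates."""
            return max(flops / peak_flops, bytes_moved / bandwidth)

        # Hypothetical particle-push kernel: 200 flops and 144 bytes per particle
        n_particles = 1.0e8
        print(kernel_time(200 * n_particles, 144 * n_particles))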

  14. Energy spectrum of 208Pb(n,x) reactions

    NASA Astrophysics Data System (ADS)

    Tel, E.; Kavun, Y.; Özdoǧan, H.; Kaplan, A.

    2018-02-01

    Fission and fusion reactor technologies have been investigated worldwide since the 1950s. Investigations of fission and fusion reactions play an important role in improving new-generation reactor technologies, and neutron reaction studies in particular have an important place in the development of nuclear materials. Neutron effects on materials should therefore be studied both theoretically and experimentally to improve reactor design. Nuclear reaction codes are very useful tools when experimental data are unavailable; for such circumstances scientists have created many nuclear reaction codes, such as ALICE/ASH, CEM95, PCROSS, TALYS, GEANT, and FLUKA. In this study we used the ALICE/ASH, PCROSS and CEM95 codes to calculate the energy spectra of particles emitted from Pb bombarded by neutrons. While the Weisskopf-Ewing model has been used for the equilibrium process in the calculations, the full exciton, hybrid, and geometry-dependent hybrid nuclear reaction models have been used for the pre-equilibrium process. The calculated results are discussed and compared with experimental data taken from EXFOR.

  15. Scalable nanohelices for predictive studies and enhanced 3D visualization.

    PubMed

    Meagher, Kwyn A; Doblack, Benjamin N; Ramirez, Mercedes; Davila, Lilian P

    2014-11-12

    Spring-like materials are ubiquitous in nature and of interest in nanotechnology for energy harvesting, hydrogen storage, and biological sensing applications. For predictive simulations, it has become increasingly important to be able to model the structure of nanohelices accurately. To study the effect of local structure on the properties of these complex geometries one must develop realistic models. To date, software packages are rather limited in creating atomistic helical models. This work focuses on producing atomistic models of silica glass (SiO₂) nanoribbons and nanosprings for molecular dynamics (MD) simulations. Using an MD model of "bulk" silica glass, two computational procedures to precisely create the shape of nanoribbons and nanosprings are presented. The first method employs the AWK programming language and open-source software to effectively carve various shapes of silica nanoribbons from the initial bulk model, using desired dimensions and parametric equations to define a helix. With this method, accurate atomistic silica nanoribbons can be generated for a range of pitch values and dimensions. The second method involves a more robust code which allows flexibility in modeling nanohelical structures. This approach utilizes a C++ code particularly written to implement pre-screening methods as well as the mathematical equations for a helix, resulting in greater precision and efficiency when creating nanospring models. Using these codes, well-defined and scalable nanoribbons and nanosprings suited for atomistic simulations can be effectively created. An added value in both open-source codes is that they can be adapted to reproduce different helical structures, independent of material. In addition, a MATLAB graphical user interface (GUI) is used to enhance learning through visualization of and interaction with the atomistic helical structures for general users. One application of these methods is the recent study of nanohelices via MD simulations for mechanical energy harvesting purposes.
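
    A minimal sketch of the helix-carving idea described above, assuming the bulk model is supplied as an (N, 3) array of Cartesian coordinates; the function name, its parameters, and the random "bulk" block are illustrative stand-ins, not the authors' AWK or C++ tools.

```python
import numpy as np
from scipy.spatial import cKDTree

def carve_nanospring(atoms, radius, pitch, n_turns, wire_radius, samples_per_turn=200):
    """Keep only the atoms lying within `wire_radius` of a parametric helix.

    atoms       : (N, 3) array of Cartesian coordinates from a bulk model
    radius      : helix radius R
    pitch       : rise per full turn
    n_turns     : number of turns
    wire_radius : cross-sectional radius of the carved nanospring wire
    """
    # Discretize the helix x = R cos t, y = R sin t, z = pitch * t / (2 pi)
    t = np.linspace(0.0, 2.0 * np.pi * n_turns, int(samples_per_turn * n_turns))
    helix = np.column_stack((radius * np.cos(t),
                             radius * np.sin(t),
                             pitch * t / (2.0 * np.pi)))
    # Distance from every atom to the nearest sampled point on the helix path
    dist, _ = cKDTree(helix).query(atoms)
    return atoms[dist <= wire_radius]

# Demonstration on a random block standing in for the bulk silica model
rng = np.random.default_rng(0)
bulk = rng.uniform([-30.0, -30.0, 0.0], [30.0, 30.0, 50.0], size=(20000, 3))
spring = carve_nanospring(bulk, radius=20.0, pitch=15.0, n_turns=3, wire_radius=4.0)
print(f"kept {len(spring)} of {len(bulk)} atoms")
```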

  16. Correlated prompt fission data in transport simulations

    DOE PAGES

    Talou, P.; Vogt, R.; Randrup, J.; ...

    2018-01-24

    Detailed information on the fission process can be inferred from the observation, modeling and theoretical understanding of prompt fission neutron and γ-ray observables. Beyond simple average quantities, the study of distributions and correlations in prompt data, e.g., multiplicity-dependent neutron and γ-ray spectra, angular distributions of the emitted particles, n - n, n - γ, and γ - γ correlations, can place stringent constraints on fission models and parameters that would otherwise be free to be tuned separately to represent individual fission observables. The FREYA and CGMF codes have been developed to follow the sequential emissions of prompt neutrons and γ rays from the initial excited fission fragments produced right after scission. Both codes implement Monte Carlo techniques to sample initial fission fragment configurations in mass, charge and kinetic energy and sample probabilities of neutron and γ emission at each stage of the decay. This approach naturally leads to using simple but powerful statistical techniques to infer distributions and correlations among many observables and model parameters. The comparison of model calculations with experimental data provides a rich arena for testing various nuclear physics models such as those related to the nuclear structure and level densities of neutron-rich nuclei, the γ-ray strength functions of dipole and quadrupole transitions, the mechanism for dividing the excitation energy between the two nascent fragments near scission, and the mechanisms behind the production of angular momentum in the fragments, etc. Beyond the obvious interest from a fundamental physics point of view, such studies are also important for addressing data needs in various nuclear applications. The inclusion of the FREYA and CGMF codes into the MCNP6.2 and MCNPX - PoliMi transport codes, for instance, provides a new and powerful tool to simulate correlated fission events in neutron transport calculations important in nonproliferation, safeguards, nuclear energy, and defense programs. Here, this review provides an overview of the topic, starting from theoretical considerations of the fission process, with a focus on correlated signatures. It then explores the status of experimental correlated fission data and current efforts to address some of the known shortcomings. Numerical simulations employing the FREYA and CGMF codes are compared to experimental data for a wide range of correlated fission quantities. The inclusion of those codes into the MCNP6.2 and MCNPX - PoliMi transport codes is described and discussed in the context of relevant applications. The accuracy of the model predictions and their sensitivity to model assumptions and input parameters are discussed. Lastly, a series of important experimental and theoretical questions that remain unanswered are presented, suggesting a renewed effort to address these shortcomings.

  17. Correlated prompt fission data in transport simulations

    NASA Astrophysics Data System (ADS)

    Talou, P.; Vogt, R.; Randrup, J.; Rising, M. E.; Pozzi, S. A.; Verbeke, J.; Andrews, M. T.; Clarke, S. D.; Jaffke, P.; Jandel, M.; Kawano, T.; Marcath, M. J.; Meierbachtol, K.; Nakae, L.; Rusev, G.; Sood, A.; Stetcu, I.; Walker, C.

    2018-01-01

    Detailed information on the fission process can be inferred from the observation, modeling and theoretical understanding of prompt fission neutron and γ-ray observables. Beyond simple average quantities, the study of distributions and correlations in prompt data, e.g., multiplicity-dependent neutron and γ-ray spectra, angular distributions of the emitted particles, n - n, n - γ, and γ - γ correlations, can place stringent constraints on fission models and parameters that would otherwise be free to be tuned separately to represent individual fission observables. The FREYA and CGMF codes have been developed to follow the sequential emissions of prompt neutrons and γ rays from the initial excited fission fragments produced right after scission. Both codes implement Monte Carlo techniques to sample initial fission fragment configurations in mass, charge and kinetic energy and sample probabilities of neutron and γ emission at each stage of the decay. This approach naturally leads to using simple but powerful statistical techniques to infer distributions and correlations among many observables and model parameters. The comparison of model calculations with experimental data provides a rich arena for testing various nuclear physics models such as those related to the nuclear structure and level densities of neutron-rich nuclei, the γ-ray strength functions of dipole and quadrupole transitions, the mechanism for dividing the excitation energy between the two nascent fragments near scission, and the mechanisms behind the production of angular momentum in the fragments, etc. Beyond the obvious interest from a fundamental physics point of view, such studies are also important for addressing data needs in various nuclear applications. The inclusion of the FREYA and CGMF codes into the MCNP6.2 and MCNPX - PoliMi transport codes, for instance, provides a new and powerful tool to simulate correlated fission events in neutron transport calculations important in nonproliferation, safeguards, nuclear energy, and defense programs. This review provides an overview of the topic, starting from theoretical considerations of the fission process, with a focus on correlated signatures. It then explores the status of experimental correlated fission data and current efforts to address some of the known shortcomings. Numerical simulations employing the FREYA and CGMF codes are compared to experimental data for a wide range of correlated fission quantities. The inclusion of those codes into the MCNP6.2 and MCNPX - PoliMi transport codes is described and discussed in the context of relevant applications. The accuracy of the model predictions and their sensitivity to model assumptions and input parameters are discussed. Finally, a series of important experimental and theoretical questions that remain unanswered are presented, suggesting a renewed effort to address these shortcomings.

  18. Correlated prompt fission data in transport simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Talou, P.; Vogt, R.; Randrup, J.

    Detailed information on the fission process can be inferred from the observation, modeling and theoretical understanding of prompt fission neutron and γ-ray observables. Beyond simple average quantities, the study of distributions and correlations in prompt data, e.g., multiplicity-dependent neutron and γ-ray spectra, angular distributions of the emitted particles, n - n, n - γ, and γ - γ correlations, can place stringent constraints on fission models and parameters that would otherwise be free to be tuned separately to represent individual fission observables. The FREYA and CGMF codes have been developed to follow the sequential emissions of prompt neutrons and γ rays from the initial excited fission fragments produced right after scission. Both codes implement Monte Carlo techniques to sample initial fission fragment configurations in mass, charge and kinetic energy and sample probabilities of neutron and γ emission at each stage of the decay. This approach naturally leads to using simple but powerful statistical techniques to infer distributions and correlations among many observables and model parameters. The comparison of model calculations with experimental data provides a rich arena for testing various nuclear physics models such as those related to the nuclear structure and level densities of neutron-rich nuclei, the γ-ray strength functions of dipole and quadrupole transitions, the mechanism for dividing the excitation energy between the two nascent fragments near scission, and the mechanisms behind the production of angular momentum in the fragments, etc. Beyond the obvious interest from a fundamental physics point of view, such studies are also important for addressing data needs in various nuclear applications. The inclusion of the FREYA and CGMF codes into the MCNP6.2 and MCNPX - PoliMi transport codes, for instance, provides a new and powerful tool to simulate correlated fission events in neutron transport calculations important in nonproliferation, safeguards, nuclear energy, and defense programs. Here, this review provides an overview of the topic, starting from theoretical considerations of the fission process, with a focus on correlated signatures. It then explores the status of experimental correlated fission data and current efforts to address some of the known shortcomings. Numerical simulations employing the FREYA and CGMF codes are compared to experimental data for a wide range of correlated fission quantities. The inclusion of those codes into the MCNP6.2 and MCNPX - PoliMi transport codes is described and discussed in the context of relevant applications. The accuracy of the model predictions and their sensitivity to model assumptions and input parameters are discussed. Lastly, a series of important experimental and theoretical questions that remain unanswered are presented, suggesting a renewed effort to address these shortcomings.
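
    The Monte Carlo logic sketched in these records (sample an excited fragment, then emit neutrons sequentially until the remaining excitation energy drops below the separation energy) can be illustrated with a deliberately simplified toy model. The separation energy, nuclear temperature, and excitation-energy distribution below are arbitrary placeholders; this is not the FREYA or CGMF physics, only a sketch of the sequential-emission bookkeeping.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_evaporation_energy(T):
    """Sample a neutron kinetic energy from a Weisskopf-like spectrum ~ e * exp(-e/T),
    i.e. a Gamma(2, T) variate."""
    return rng.gamma(shape=2.0, scale=T)

def emit_neutrons(E_star, Sn=5.0, T=1.0):
    """Toy sequential emission: emit neutrons while the excitation energy exceeds Sn."""
    energies = []
    while E_star > Sn:
        eps = sample_evaporation_energy(T)
        if eps > E_star - Sn:       # not enough energy left for this emission
            break
        E_star -= Sn + eps          # pay the separation energy plus the kinetic energy
        energies.append(eps)
    return energies

# Correlate multiplicity with a (made-up) initial excitation-energy distribution
events = [emit_neutrons(rng.normal(20.0, 5.0)) for _ in range(100000)]
mult = np.array([len(e) for e in events])
print("mean neutron multiplicity:", mult.mean())
```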

  19. National Cost-effectiveness of ASHRAE Standard 90.1-2010 Compared to ASHRAE Standard 90.1-2007

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thornton, Brian; Halverson, Mark A.; Myer, Michael

    Pacific Northwest National Laboratory (PNNL) completed this project for the U.S. Department of Energy’s (DOE’s) Building Energy Codes Program (BECP). DOE’s BECP supports upgrading building energy codes and standards, and the states’ adoption, implementation, and enforcement of upgraded codes and standards. Building energy codes and standards set minimum requirements for energy-efficient design and construction for new and renovated buildings, and impact energy use and greenhouse gas emissions for the life of buildings. Continuous improvement of building energy efficiency is achieved by periodically upgrading energy codes and standards. Ensuring that changes in the code that may alter costs (for building components, initial purchase and installation, replacement, maintenance and energy) are cost-effective encourages their acceptance and implementation. ANSI/ASHRAE/IESNA Standard 90.1 is the energy standard for commercial and multi-family residential buildings over three floors.

  20. Cost-effectiveness of ASHRAE Standard 90.1-2010 Compared to ASHRAE Standard 90.1-2007

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thornton, Brian A.; Halverson, Mark A.; Myer, Michael

    Pacific Northwest National Laboratory (PNNL) completed this project for the U.S. Department of Energy’s (DOE’s) Building Energy Codes Program (BECP). DOE’s BECP supports upgrading building energy codes and standards, and the states’ adoption, implementation, and enforcement of upgraded codes and standards. Building energy codes and standards set minimum requirements for energy-efficient design and construction for new and renovated buildings, and impact energy use and greenhouse gas emissions for the life of buildings. Continuous improvement of building energy efficiency is achieved by periodically upgrading energy codes and standards. Ensuring that changes in the code that may alter costs (for building components, initial purchase and installation, replacement, maintenance and energy) are cost-effective encourages their acceptance and implementation. ANSI/ASHRAE/IESNA Standard 90.1 is the energy standard for commercial and multi-family residential buildings over three floors.

  1. Monte Carlo simulations of {sup 3}He ion physical characteristics in a water phantom and evaluation of radiobiological effectiveness

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taleei, Reza; Guan, Fada; Peeler, Chris

    Purpose: {sup 3}He ions may hold great potential for clinical therapy because of both their physical and biological properties. In this study, the authors investigated the physical properties, i.e., the depth-dose curves from primary and secondary particles, and the energy distributions of helium ({sup 3}He) ions. A relative biological effectiveness (RBE) model was applied to assess the biological effectiveness on survival of multiple cell lines. Methods: In light of the lack of experimental measurements and cross sections, the authors used Monte Carlo methods to study the energy deposition of {sup 3}He ions. The transport of {sup 3}He ions in water was simulated by using three Monte Carlo codes (FLUKA, GEANT4, and MCNPX) for incident beams with Gaussian energy distributions with average energies of 527 and 699 MeV and a full width at half maximum of 3.3 MeV in both cases. The RBE of each was evaluated by using the repair-misrepair-fixation model. In all of the simulations with each of the three Monte Carlo codes, the same geometry and primary beam parameters were used. Results: Energy deposition as a function of depth and energy spectra with high resolution were calculated on the central axis of the beam. Secondary proton dose from the primary {sup 3}He beams was predicted quite differently by the three Monte Carlo systems. The predictions differed by as much as a factor of 2. Microdosimetric parameters such as dose mean lineal energy (y{sub D}), frequency mean lineal energy (y{sub F}), and frequency mean specific energy (z{sub F}) were used to characterize the radiation beam quality at four depths of the Bragg curve. Calculated RBE values were close to 1 at the entrance, reached on average 1.8 and 1.6 for prostate and head and neck cancer cell lines at the Bragg peak for both energies, but showed some variations between the different Monte Carlo codes. Conclusions: Although the Monte Carlo codes provided different results in energy deposition and especially in secondary particle production (most of the differences between the three codes were observed close to the Bragg peak, where the energy spectrum broadens), the results in terms of RBE were generally similar.
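
    The microdosimetric averages quoted above follow directly from the single-event lineal-energy distribution: the frequency mean is y_F = <y> and the dose mean is y_D = <y^2>/<y>. A small sketch of that bookkeeping, using a log-normal sample as a stand-in for a scored or measured spectrum (the numbers are not from the study):

```python
import numpy as np

def lineal_energy_means(y):
    """Frequency-mean (y_F) and dose-mean (y_D) lineal energy from sampled events.

    y : 1-D array of single-event lineal energies (e.g. keV/um), as would be
        scored in a Monte Carlo tally or measured with a TEPC.
    """
    y = np.asarray(y, dtype=float)
    y_F = y.mean()                       # y_F = <y>
    y_D = (y ** 2).mean() / y.mean()     # y_D = <y^2> / <y>
    return y_F, y_D

# Illustrative only: a log-normal stand-in for a lineal-energy distribution
rng = np.random.default_rng(1)
y = rng.lognormal(mean=1.0, sigma=0.6, size=50000)
print(lineal_energy_means(y))
```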

  2. The NETL MFiX Suite of multiphase flow models: A brief review and recent applications of MFiX-TFM to fossil energy technologies

    DOE PAGES

    Li, Tingwen; Rogers, William A.; Syamlal, Madhava; ...

    2016-07-29

    Here, the MFiX suite of multiphase computational fluid dynamics (CFD) codes is being developed at U.S. Department of Energy's National Energy Technology Laboratory (NETL). It includes several different approaches to multiphase simulation: MFiX-TFM, a two-fluid (Eulerian–Eulerian) model; MFiX-DEM, an Eulerian fluid model with a Lagrangian Discrete Element Model for the solids phase; and MFiX-PIC, an Eulerian fluid model with Lagrangian particle ‘parcels’ representing particle groups. These models are undergoing continuous development and application, with verification, validation, and uncertainty quantification (VV&UQ) as integrated activities. After a brief summary of recent progress in VV&UQ, this article highlights two recent accomplishments in the application of MFiX-TFM to fossil energy technology development. First, recent application of MFiX to the pilot-scale KBR TRIG™ Transport Gasifier located at DOE's National Carbon Capture Center (NCCC) is described. Gasifier performance over a range of operating conditions was modeled and compared to NCCC operational data to validate the ability of the model to predict parametric behavior. Second, comparison of code predictions at a detailed fundamental scale is presented studying solid sorbents for the post-combustion capture of CO2 from flue gas. Specifically designed NETL experiments are being used to validate hydrodynamics and chemical kinetics for the sorbent-based carbon capture process.

  3. Efficient Coding and Energy Efficiency Are Promoted by Balanced Excitatory and Inhibitory Synaptic Currents in Neuronal Network

    PubMed Central

    Yu, Lianchun; Shen, Zhou; Wang, Chen; Yu, Yuguo

    2018-01-01

    Selective pressure may drive neural systems to process as much information as possible with the lowest energy cost. Recent experimental evidence revealed that the ratio between synaptic excitation and inhibition (E/I) in local cortex is generally maintained at a certain value, which may influence the efficiency of energy consumption and information transmission of neural networks. To understand this issue deeply, we constructed a typical recurrent Hodgkin-Huxley network model and studied the general principles that govern the relationship among the E/I synaptic current ratio, the energy cost and the total amount of information transmission. We observed that such a network has an optimal E/I synaptic current ratio at which information transmission achieves its maximum with relatively low energy cost. The coding energy efficiency, defined as the mutual information divided by the energy cost, achieved its maximum with balanced synaptic currents. Although background noise degrades information transmission and imposes an additional energy cost, we find an optimal noise intensity that yields the largest information transmission and energy efficiency at this optimal E/I synaptic transmission ratio. The maximization of energy efficiency also requires a certain part of the energy cost to be associated with spontaneous spiking and synaptic activities. We further proved this finding with an analytical solution based on the response function of bistable neurons, and demonstrated that optimal net synaptic currents are capable of maximizing both the mutual information and the energy efficiency. These results revealed that the development of E/I synaptic current balance could lead a cortical network to operate at a highly efficient information transmission rate at a relatively low energy cost. The generality of the neuronal models and the recurrent network configuration used here suggests that the existence of an optimal E/I cell ratio for energy-efficient information maximization is a potential principle for cortical circuit networks. Summary: We conducted numerical simulations and mathematical analysis to examine the energy efficiency of neural information transmission in a recurrent network as a function of the ratio of excitatory and inhibitory synaptic connections. We obtained a general solution showing that there exists an optimal E/I synaptic ratio in a recurrent network at which the information transmission as well as the energy efficiency of this network achieves a global maximum. These results reflect general mechanisms for sensory coding processes, which may give insight into the energy efficiency of neural communication and coding. PMID:29773979

  4. Efficient Coding and Energy Efficiency Are Promoted by Balanced Excitatory and Inhibitory Synaptic Currents in Neuronal Network.

    PubMed

    Yu, Lianchun; Shen, Zhou; Wang, Chen; Yu, Yuguo

    2018-01-01

    Selective pressure may drive neural systems to process as much information as possible with the lowest energy cost. Recent experimental evidence revealed that the ratio between synaptic excitation and inhibition (E/I) in local cortex is generally maintained at a certain value, which may influence the efficiency of energy consumption and information transmission of neural networks. To understand this issue deeply, we constructed a typical recurrent Hodgkin-Huxley network model and studied the general principles that govern the relationship among the E/I synaptic current ratio, the energy cost and the total amount of information transmission. We observed that such a network has an optimal E/I synaptic current ratio at which information transmission achieves its maximum with relatively low energy cost. The coding energy efficiency, defined as the mutual information divided by the energy cost, achieved its maximum with balanced synaptic currents. Although background noise degrades information transmission and imposes an additional energy cost, we find an optimal noise intensity that yields the largest information transmission and energy efficiency at this optimal E/I synaptic transmission ratio. The maximization of energy efficiency also requires a certain part of the energy cost to be associated with spontaneous spiking and synaptic activities. We further proved this finding with an analytical solution based on the response function of bistable neurons, and demonstrated that optimal net synaptic currents are capable of maximizing both the mutual information and the energy efficiency. These results revealed that the development of E/I synaptic current balance could lead a cortical network to operate at a highly efficient information transmission rate at a relatively low energy cost. The generality of the neuronal models and the recurrent network configuration used here suggests that the existence of an optimal E/I cell ratio for energy-efficient information maximization is a potential principle for cortical circuit networks. We conducted numerical simulations and mathematical analysis to examine the energy efficiency of neural information transmission in a recurrent network as a function of the ratio of excitatory and inhibitory synaptic connections. We obtained a general solution showing that there exists an optimal E/I synaptic ratio in a recurrent network at which the information transmission as well as the energy efficiency of this network achieves a global maximum. These results reflect general mechanisms for sensory coding processes, which may give insight into the energy efficiency of neural communication and coding.
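
    The efficiency measure used in these two records is simply mutual information divided by energy cost. The sketch below evaluates it over a hypothetical scan of E/I ratios and picks the optimum; all numbers are placeholders for illustration, not simulation output from the paper.

```python
import numpy as np

def energy_efficiency(mutual_info, energy_cost):
    """Coding energy efficiency as defined above: mutual information per unit energy cost."""
    return np.asarray(mutual_info, dtype=float) / np.asarray(energy_cost, dtype=float)

# Hypothetical scan over E/I synaptic current ratios (placeholder values)
ei_ratio    = np.array([0.5, 1.0, 2.0, 4.0, 6.0, 8.0])
mutual_info = np.array([0.8, 1.6, 2.9, 3.6, 3.7, 3.5])   # bits per symbol (illustrative)
energy_cost = np.array([1.0, 1.3, 1.9, 2.8, 4.0, 5.5])   # arbitrary energy units (illustrative)

eff = energy_efficiency(mutual_info, energy_cost)
print("optimal E/I ratio in this toy scan:", ei_ratio[np.argmax(eff)])
```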

  5. An Improved Analytic Model for Microdosimeter Response

    NASA Technical Reports Server (NTRS)

    Shinn, Judy L.; Wilson, John W.; Xapsos, Michael A.

    2001-01-01

    An analytic model used to predict energy deposition fluctuations in a microvolume by ions through direct events is improved to include indirect delta ray events. The new model can now account for the increase in flux at low lineal energy when the ions are of very high energy. Good agreement is obtained between the calculated results and available data for laboratory ion beams. Comparison of GCR (galactic cosmic ray) flux between Shuttle TEPC (tissue equivalent proportional counter) flight data and current calculations draws a different assessment of developmental work required for the GCR transport code (HZETRN) than previously concluded.

  6. HZEFRG1: An energy-dependent semiempirical nuclear fragmentation model

    NASA Technical Reports Server (NTRS)

    Townsend, Lawrence W.; Wilson, John W.; Tripathi, Ram K.; Norbury, John W.; Badavi, Francis F.; Khan, Ferdous

    1993-01-01

    Methods for calculating cross sections for the breakup of high-energy heavy ions by the combined nuclear and Coulomb fields of the interacting nuclei are presented. The nuclear breakup contributions are estimated with an abrasion-ablation model of heavy ion fragmentation that includes an energy-dependent mean free path. The electromagnetic dissociation contributions arising from the interacting Coulomb fields are estimated by using Weizsacker-Williams theory extended to include electric dipole and electric quadrupole contributions. The complete computer code that implements the model is included as an appendix. Extensive comparisons of cross section predictions with available experimental data are made.

  7. The Easy Way of Finding Parameters in IBM (EWofFP-IBM)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turkan, Nureddin

    E2/M1 multipole mixing ratios of even-even nuclei in the transitional region, as well as B(E2) and B(M1) values, can be calculated by using the PHINT and/or NP-BOS codes. Such calculations require correctly calculated energies, which in turn require the correct parameter values. The logic of the codes is based on the mathematical and physical statements describing the interacting boson model (IBM), one of the models of nuclear structure physics. The main problem is to find the best-fitted parameter values of the model. By using the Easy Way of Finding Parameters in IBM (EWofFP-IBM), the best parameter values of the IBM Hamiltonian for {sup 102-110}Pd and {sup 102-110}Ru isotopes were first obtained and the energies were then calculated. The calculated results are in good agreement with the experimental ones, and the energy values obtained using the EWofFP-IBM are clearly better than previous theoretical data.

  8. A NetCDF version of the two-dimensional energy balance model based on the full multigrid algorithm

    NASA Astrophysics Data System (ADS)

    Zhuang, Kelin; North, Gerald R.; Stevens, Mark J.

    A NetCDF version of the two-dimensional energy balance model based on the full multigrid method in Fortran is introduced for both pedagogical and research purposes. Based on the land-sea-ice distribution, orbital elements, greenhouse gas concentrations, and albedo, the code calculates the global seasonal surface temperature. A step-by-step guide with examples is provided for practice.
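
    For orientation, the core of an energy balance model can be written in a few lines in its zero-dimensional form, C dT/dt = S0(1 - albedo)/4 - eps*sigma*T^4. This is only a pedagogical stand-in for the two-dimensional multigrid code described above; the round-number solar constant, albedo, effective emissivity, and heat capacity are assumptions, not the model's actual inputs.

```python
# Zero-dimensional analogue of an energy balance model: C dT/dt = S0*(1-albedo)/4 - eps*sigma*T^4
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def equilibrium_temperature(S0=1361.0, albedo=0.3, emissivity=0.61,
                            heat_capacity=4.0e8, dt=86400.0, n_steps=20000):
    """March a global-mean temperature forward in time until it settles near equilibrium."""
    T = 255.0  # initial guess, K
    for _ in range(n_steps):
        net_flux = S0 * (1.0 - albedo) / 4.0 - emissivity * SIGMA * T ** 4
        T += dt * net_flux / heat_capacity
    return T

print(f"equilibrium surface temperature: {equilibrium_temperature():.1f} K")
```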

  9. Spacecraft Solar Particle Event (SPE) Shielding: Shielding Effectiveness as a Function of SPE model as Determined with the FLUKA Radiation Transport Code

    NASA Technical Reports Server (NTRS)

    Koontz, Steve; Atwell, William; Reddell, Brandon; Rojdev, Kristina

    2010-01-01

    Analysis of both satellite and surface neutron monitor data demonstrates that the widely utilized Exponential model of solar particle event (SPE) proton kinetic energy spectra can seriously underestimate SPE proton flux, especially at the highest kinetic energies. The more recently developed Band model produces better agreement with neutron monitor data for ground level events (GLEs) and is believed to be considerably more accurate at high kinetic energies. Here, we report the results of modeling and simulation studies in which the radiation transport code FLUKA (FLUktuierende KAskade) is used to determine the changes in total ionizing dose (TID) and single-event environments (SEE) behind aluminum, polyethylene, carbon, and titanium shielding masses when the assumed form (i.e., Band or Exponential) of the solar particle event (SPE) kinetic energy spectra is changed. The FLUKA simulations are fully three-dimensional, with an isotropic particle flux incident on a concentric spherical shell shielding mass and detector structure. The effects are reported for both energetic primary protons penetrating the shield mass and secondary particle showers caused by energetic primary protons colliding with shielding mass nuclei. Our results, in agreement with previous studies, show that use of the Exponential form of the event

  10. Update on the Code Intercomparison and Benchmark for Muon Fluence and Absorbed Dose Induced by an 18 GeV Electron Beam After Massive Iron Shielding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fasso, A.; Ferrari, A.; Ferrari, A.

    In 1974, Nelson, Kase and Svensson published an experimental investigation on muon shielding around SLAC high-energy electron accelerators [1]. They measured muon fluence and absorbed dose induced by 14 and 18 GeV electron beams hitting a copper/water beam dump and attenuated in a thick steel shielding. In their paper, they compared the results with the theoretical models available at that time. In order to compare their experimental results with present model calculations, we use the modern transport Monte Carlo codes MARS15, FLUKA2011 and GEANT4 to model the experimental setup and run simulations. The results are then compared between the codes, and with the SLAC data.

  11. BayeSED: A General Approach to Fitting the Spectral Energy Distribution of Galaxies

    NASA Astrophysics Data System (ADS)

    Han, Yunkun; Han, Zhanwen

    2014-11-01

    We present a newly developed version of BayeSED, a general Bayesian approach to the spectral energy distribution (SED) fitting of galaxies. The new BayeSED code has been systematically tested on a mock sample of galaxies. The comparison between the estimated and input values of the parameters shows that BayeSED can recover the physical parameters of galaxies reasonably well. We then applied BayeSED to interpret the SEDs of a large Ks-selected sample of galaxies in the COSMOS/UltraVISTA field with stellar population synthesis models. Using the new BayeSED code, a Bayesian model comparison of stellar population synthesis models has been performed for the first time. We found that the 2003 model by Bruzual & Charlot, statistically speaking, has greater Bayesian evidence than the 2005 model by Maraston for the Ks-selected sample. In addition, while setting the stellar metallicity as a free parameter obviously increases the Bayesian evidence of both models, varying the initial mass function has a notable effect only on the Maraston model. Meanwhile, the physical parameters estimated with BayeSED are found to be generally consistent with those obtained using the popular grid-based FAST code, while the former parameters exhibit more natural distributions. Based on the estimated physical parameters of the galaxies in the sample, we qualitatively classified the galaxies in the sample into five populations that may represent galaxies at different evolution stages or in different environments. We conclude that BayeSED could be a reliable and powerful tool for investigating the formation and evolution of galaxies from the rich multi-wavelength observations currently available. A binary version of the BayeSED code parallelized with Message Passing Interface is publicly available at https://bitbucket.org/hanyk/bayesed.

  12. BayeSED: A GENERAL APPROACH TO FITTING THE SPECTRAL ENERGY DISTRIBUTION OF GALAXIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, Yunkun; Han, Zhanwen, E-mail: hanyk@ynao.ac.cn, E-mail: zhanwenhan@ynao.ac.cn

    2014-11-01

    We present a newly developed version of BayeSED, a general Bayesian approach to the spectral energy distribution (SED) fitting of galaxies. The new BayeSED code has been systematically tested on a mock sample of galaxies. The comparison between the estimated and input values of the parameters shows that BayeSED can recover the physical parameters of galaxies reasonably well. We then applied BayeSED to interpret the SEDs of a large K{sub s}-selected sample of galaxies in the COSMOS/UltraVISTA field with stellar population synthesis models. Using the new BayeSED code, a Bayesian model comparison of stellar population synthesis models has been performed for the first time. We found that the 2003 model by Bruzual and Charlot, statistically speaking, has greater Bayesian evidence than the 2005 model by Maraston for the K{sub s}-selected sample. In addition, while setting the stellar metallicity as a free parameter obviously increases the Bayesian evidence of both models, varying the initial mass function has a notable effect only on the Maraston model. Meanwhile, the physical parameters estimated with BayeSED are found to be generally consistent with those obtained using the popular grid-based FAST code, while the former parameters exhibit more natural distributions. Based on the estimated physical parameters of the galaxies in the sample, we qualitatively classified the galaxies in the sample into five populations that may represent galaxies at different evolution stages or in different environments. We conclude that BayeSED could be a reliable and powerful tool for investigating the formation and evolution of galaxies from the rich multi-wavelength observations currently available. A binary version of the BayeSED code parallelized with Message Passing Interface is publicly available at https://bitbucket.org/hanyk/bayesed.
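
    The model comparison referred to in these two records reduces to comparing Bayesian evidences: given the log-evidences returned by a sampler for two models of the same data, the Bayes factor is their exponentiated difference. The log-evidence values below are placeholders for illustration, not values from the paper.

```python
import math

def bayes_factor(ln_evidence_a, ln_evidence_b):
    """Bayes factor K = Z_A / Z_B from the log-evidences of two competing models."""
    return math.exp(ln_evidence_a - ln_evidence_b)

# Hypothetical log-evidences for two stellar population synthesis models of one galaxy SED
lnZ_model_a, lnZ_model_b = -1234.2, -1236.9   # placeholder numbers
K = bayes_factor(lnZ_model_a, lnZ_model_b)
print(f"Bayes factor in favour of model A: {K:.1f}")
```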

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dillon, Heather E.; Antonopoulos, Chrissi A.; Solana, Amy E.

    As the model energy codes are improved to reach efficiency levels 50 percent greater than current codes, use of on-site renewable energy generation is likely to become a code requirement. This requirement will be needed because traditional mechanisms for code improvement, including envelope, mechanical and lighting measures, have been pressed to the end of reasonable limits. Research has been conducted to determine the mechanism for implementing this requirement (Kaufmann 2011). Kaufmann et al. determined that the most appropriate way to structure an on-site renewable requirement for commercial buildings is to define the requirement in terms of an installed power density per unit of roof area. This provides a mechanism that is suitable for the installation of photovoltaic (PV) systems on future buildings to offset electricity and reduce the total building energy load. Kaufmann et al. suggested that an appropriate maximum for the requirement in the commercial sector would be 4 W/ft{sup 2} of roof area or 0.5 W/ft{sup 2} of conditioned floor area. As with all code requirements, there must be an alternative compliance path for buildings that may not reasonably meet the renewables requirement. This might include conditions like shading (which makes rooftop PV arrays less effective), unusual architecture, undesirable roof pitch, unsuitable building orientation, or other issues. In the short term, alternative compliance paths including high-performance mechanical equipment, dramatic envelope changes, or controls changes may be feasible. These options may be less expensive than many renewable systems, which will require careful balancing of energy measures when setting the code requirement levels. As the stringency of the code continues to increase, however, efficiency trade-offs will be maximized, requiring alternative compliance options to be focused solely on renewable electricity trade-offs or equivalent programs. One alternative compliance path includes the purchase of Renewable Energy Credits (RECs). Each REC represents a specified amount of renewable electricity production and provides an offset of environmental externalities associated with non-renewable electricity production. The purpose of this paper is to explore the possible issues with RECs and comparable alternative compliance options. Existing codes have been examined to determine energy equivalence between the energy generation requirement and the RECs alternative over the life of the building. The price equivalence of the requirement and the alternative is determined to consider the economic drivers for a market decision. This research includes case studies that review how the few existing codes have incorporated RECs and some of the issues inherent in REC markets. Section 1 of the report reviews compliance options including RECs, green energy purchase programs, shared solar agreements and leases, and other options. Section 2 provides detailed case studies on codes that include RECs and community-based alternative compliance methods. The ways in which existing code requirements structure alternative compliance options like RECs are the focus of the case studies. Section 3 explores the possible structure of the renewable energy generation requirement in the context of energy and price equivalence. REC prices have shown high variation by market and over time, which makes it critical that code language for a renewable energy generation requirement be updated frequently, or the requirement will not remain price-equivalent over time. Section 4 of the report provides a maximum-case estimate of the impact on the PV market and the REC market based on the requirement levels proposed by Kaufmann et al. If all new buildings in the commercial sector complied with the requirement to install rooftop PV arrays, nearly 4,700 MW of solar would be installed in 2012, a major increase from EIA estimates of 640 MW of solar generation capacity installed in 2009. The residential sector could contribute roughly an additional 2,300 MW based on the same code requirement levels of 4 W/ft{sup 2} of roof area. Section 5 of the report provides a basic framework for draft code language recommendations based on the analysis of the alternative compliance levels.
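
    The arithmetic behind a roof-area-based requirement is straightforward: installed capacity is the power density times the roof area, and the REC-equivalent follows from the resulting annual generation at 1 REC per MWh. The capacity factor and example roof area below are assumptions for illustration, not figures from the report; only the 4 W/ft{sup 2} power density comes from the text above.

```python
def installed_pv_capacity_w(roof_area_ft2, power_density_w_per_ft2=4.0):
    """Installed PV capacity implied by a roof-area-based requirement (e.g. 4 W per square foot)."""
    return roof_area_ft2 * power_density_w_per_ft2

def annual_generation_kwh(capacity_w, capacity_factor=0.17):
    """First-year generation, for comparing against a REC-based alternative.
    The 17% capacity factor is an assumed value, not taken from the report."""
    return capacity_w / 1000.0 * capacity_factor * 8760.0

roof_area = 20000.0   # square feet, hypothetical commercial building
cap = installed_pv_capacity_w(roof_area)
gen = annual_generation_kwh(cap)
print(f"required capacity: {cap/1000:.0f} kW")
print(f"approx. annual generation: {gen:,.0f} kWh, i.e. about {gen/1000:.0f} RECs at 1 REC per MWh")
```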

  14. A survey of modelling methods for high-fidelity wind farm simulations using large eddy simulation.

    PubMed

    Breton, S-P; Sumner, J; Sørensen, J N; Hansen, K S; Sarmast, S; Ivanell, S

    2017-04-13

    Large eddy simulations (LES) of wind farms have the capability to provide valuable and detailed information about the dynamics of wind turbine wakes. For this reason, their use within the wind energy research community is on the rise, spurring the development of new models and methods. This review surveys the most common schemes available to model the rotor, atmospheric conditions and terrain effects within current state-of-the-art LES codes, of which an overview is provided. A summary of the experimental research data available for validation of LES codes within the context of single and multiple wake situations is also supplied. Some typical results for wind turbine and wind farm flows are presented to illustrate best practices for carrying out high-fidelity LES of wind farms under various atmospheric and terrain conditions. This article is part of the themed issue 'Wind energy in complex terrains'. © 2017 The Author(s).

  15. A survey of modelling methods for high-fidelity wind farm simulations using large eddy simulation

    PubMed Central

    Breton, S.-P.; Sumner, J.; Sørensen, J. N.; Hansen, K. S.; Sarmast, S.; Ivanell, S.

    2017-01-01

    Large eddy simulations (LES) of wind farms have the capability to provide valuable and detailed information about the dynamics of wind turbine wakes. For this reason, their use within the wind energy research community is on the rise, spurring the development of new models and methods. This review surveys the most common schemes available to model the rotor, atmospheric conditions and terrain effects within current state-of-the-art LES codes, of which an overview is provided. A summary of the experimental research data available for validation of LES codes within the context of single and multiple wake situations is also supplied. Some typical results for wind turbine and wind farm flows are presented to illustrate best practices for carrying out high-fidelity LES of wind farms under various atmospheric and terrain conditions. This article is part of the themed issue ‘Wind energy in complex terrains’. PMID:28265021

  16. Experimental differential cross sections, level densities, and spin cutoffs as a testing ground for nuclear reaction codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Voinov, Alexander V.; Grimes, Steven M.; Brune, Carl R.

    Proton double-differential cross sections from 59Co(α,p)62Ni, 57Fe(α,p)60Co, 56Fe(7Li,p)62Ni, and 55Mn(6Li,p)60Co reactions have been measured with 21-MeV α and 15-MeV lithium beams. Cross sections have been compared against calculations with the empire reaction code. Different input level density models have been tested. It was found that the Gilbert and Cameron [A. Gilbert and A. G. W. Cameron, Can. J. Phys. 43, 1446 (1965)] level density model best reproduces the experimental data. Level densities and spin cutoff parameters for 62Ni and 60Co above the excitation energy range of discrete levels (in continuum) have been obtained with a Monte Carlo technique. Furthermore, excitation energy dependencies were found to be inconsistent with the Fermi-gas model.
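
    The Gilbert-Cameron prescription favoured by this comparison is a composite form: a constant-temperature level density below a matching energy, joined to a back-shifted Fermi-gas form above it. A sketch with illustrative parameters only (not fitted values for 62Ni or 60Co):

```python
import numpy as np

def fermi_gas_density(E, a, pairing, spin_cutoff):
    """Back-shifted Fermi-gas total level density."""
    U = E - pairing
    return np.exp(2.0 * np.sqrt(a * U)) / (
        12.0 * np.sqrt(2.0) * spin_cutoff * a ** 0.25 * U ** 1.25)

def constant_temperature_density(E, T, E0):
    """Constant-temperature level density used below the matching energy."""
    return np.exp((E - E0) / T) / T

# Illustrative parameters (MeV^-1, MeV, dimensionless, MeV, MeV, MeV)
a, pairing, spin_cutoff = 7.0, 1.5, 3.5
T, E0, E_match = 1.3, -0.5, 7.0

def gilbert_cameron(E):
    """Composite Gilbert-Cameron level density: CT form below E_match, Fermi gas above."""
    E = np.asarray(E, dtype=float)
    return np.where(E < E_match,
                    constant_temperature_density(E, T, E0),
                    fermi_gas_density(E, a, pairing, spin_cutoff))

print(gilbert_cameron([3.0, 6.0, 9.0, 12.0]))
```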

  17. Event-by-Event Simulations of Early Gluon Fields in High Energy Nuclear Collisions

    NASA Astrophysics Data System (ADS)

    Nickel, Matthew; Rose, Steven; Fries, Rainer

    2017-09-01

    Collisions of heavy ions are carried out at ultra relativistic speeds at the Relativistic Heavy Ion Collider and the Large Hadron Collider to create Quark Gluon Plasma. The earliest stages of such collisions are dominated by the dynamics of classical gluon fields. The McLerran-Venugopalan (MV) model of color glass condensate provides a model for this process. Previous research has provided an analytic solution for event-averaged observables in the MV model. Using the High Performance Research Computing Center (HPRC) at Texas A&M, we have developed a C++ code to explicitly calculate the initial gluon fields and energy-momentum tensor event by event using the analytic recursive solution. The code has been tested against previously known analytic results up to fourth order. We have also been able to test the convergence of the recursive solution at high orders in time and studied the time evolution of color glass condensate.

  18. Influence of the plasma environment on atomic structure using an ion-sphere model

    NASA Astrophysics Data System (ADS)

    Belkhiri, Madeny; Fontes, Christopher J.; Poirier, Michel

    2015-09-01

    Plasma environment effects on atomic structure are analyzed using various atomic structure codes. To monitor the effect of high free-electron density or low temperatures, Fermi-Dirac and Maxwell-Boltzmann statistics are compared. After a discussion of the implementation of the Fermi-Dirac approach within the ion-sphere model, several applications are considered. In order to check the consistency of the modifications brought here to extant codes, calculations have been performed using the Los Alamos Cowan Atomic Structure (cats) code in its Hartree-Fock or Hartree-Fock-Slater form and the parametric potential Flexible Atomic Code (fac). The ground-state energy shifts due to the plasma effects for the six most ionized aluminum ions have been calculated using the fac and cats codes and agree fairly well. For the intercombination resonance line in Fe22+, the plasma effect within the uniform electron gas model results in a positive shift that agrees with the multiconfiguration Dirac-Fock value of B. Saha and S. Fritzsche [J. Phys. B 40, 259 (2007), 10.1088/0953-4075/40/2/002]. Last, the present model is compared to experimental data in titanium measured on the terawatt Astra facility and provides values for electron temperature and density in agreement with the maria code.

  19. Measurement of Thick Target Neutron Yields at 0-Degree Bombarded With 140-MeV, 250-MeV And 350-MeV Protons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iwamoto, Yosuke; Taniguchi, Shingo

    Neutron energy spectra at 0{sup o} produced from stopping-length graphite, aluminum, iron and lead targets bombarded with 140, 250 and 350 MeV protons were measured at the neutron TOF course in RCNP of Osaka University. The neutron energy spectra were obtained by using the time-of-flight technique in the energy range from 10 MeV to incident proton energy. To compare the experimental results, Monte Carlo calculations with the PHITS and MCNPX codes were performed using the JENDL-HE and the LA150 evaluated nuclear data files, the ISOBAR model implemented in PHITS, and the LAHET code in MCNPX. It was found that these calculated results at 0{sup o} generally agreed with the experimental results in the energy range above 20 MeV except for graphite at 250 and 350 MeV.
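
    The time-of-flight technique mentioned above converts an arrival time over a known flight path into a relativistic kinetic energy, E_k = m_n c^2 (gamma - 1) with beta = L/(c t). A minimal sketch; the 100 m flight-path length and 500 ns arrival time are hypothetical numbers, since the abstract does not give the actual geometry.

```python
import math

C = 299_792_458.0          # speed of light, m/s
M_N_C2 = 939.565           # neutron rest energy, MeV

def neutron_kinetic_energy(flight_path_m, tof_s):
    """Relativistic kinetic energy of a neutron from its time of flight over a known path."""
    beta = flight_path_m / (C * tof_s)
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return M_N_C2 * (gamma - 1.0)

# Example: hypothetical 100 m flight path and 500 ns flight time
print(f"{neutron_kinetic_energy(100.0, 500e-9):.1f} MeV")
```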

  20. Analysis of CRRES PHA Data for Low-Energy-Deposition Events

    NASA Technical Reports Server (NTRS)

    McNulty, P. J.; Hardage, Donna

    2004-01-01

    This effort analyzed the low-energy deposition Pulse Height Analyzer (PHA) data from the Combined Release and Radiation Effects Satellite (CRRES). The high-energy deposition data had been previously analyzed and shown to be in agreement with spallation reactions predicted by the Clemson University Proton Interactions in Devices (CUPID) simulation model and existing environmental and orbit positioning models (AP-8 with USAF B-L coordinates). The scope of this project was to develop and improve the CUPID model by increasing its range to lower incident particle energies, and to expand the modeling to include contributions from elastic interactions. Before making changes, it was necessary to identify experimental data suitable for benchmarking the codes; then the models could be applied to the CRRES PHA data. It was also planned to test the model against available low-energy proton or neutron SEU data obtained with mono-energetic beams.

  1. High-Energy Activation Simulation Coupling TENDL and SPACS with FISPACT-II

    NASA Astrophysics Data System (ADS)

    Fleming, Michael; Sublet, Jean-Christophe; Gilbert, Mark

    2018-06-01

    To address the needs of activation-transmutation simulation in incident-particle fields with energies above a few hundred MeV, the FISPACT-II code has been extended to splice TENDL standard ENDF-6 nuclear data with extended nuclear data forms. The JENDL-2007/HE and HEAD-2009 libraries were processed for FISPACT-II and used to demonstrate the capabilities of the new code version. Tests of the libraries and comparisons against both experimental yield data and the most recent intra-nuclear cascade model results demonstrate that there is need for improved nuclear data libraries up to and above 1 GeV. Simulations on lead targets show that important radionuclides, such as 148Gd, can vary by more than an order of magnitude where more advanced models find agreement within the experimental uncertainties.

  2. Self-consistent modeling of electron cyclotron resonance ion sources

    NASA Astrophysics Data System (ADS)

    Girard, A.; Hitz, D.; Melin, G.; Serebrennikov, K.; Lécot, C.

    2004-05-01

    In order to predict the performances of electron cyclotron resonance ion sources (ECRIS), it is necessary to perfectly model the different parts of these sources: (i) magnetic configuration; (ii) plasma characteristics; (iii) extraction system. The magnetic configuration is easily calculated via commercial codes; different codes also simulate the ion extraction, either in two dimensions or even in three dimensions (to take into account the shape of the plasma at extraction, which is influenced by the hexapole). However, the characteristics of the plasma are not always mastered. This article describes the self-consistent modeling of ECRIS: we have developed a code which takes into account the most important construction parameters: the size of the plasma (length, diameter), the mirror ratio and axial magnetic profile, and whether a biased probe is installed or not. These input parameters are used to feed a self-consistent code, which calculates the characteristics of the plasma: electron density and energy, charge state distribution, and plasma potential. The code is briefly described, and some of its most interesting results are presented. Comparisons are made between the calculations and the results obtained experimentally.

  3. A fast analytical undulator model for realistic high-energy FEL simulations

    NASA Astrophysics Data System (ADS)

    Tatchyn, R.; Cremer, T.

    1997-02-01

    A number of leading FEL simulation codes used for modeling gain in the ultralong undulators required for SASE saturation in the <100 Å range employ simplified analytical models both for field and error representations. Although it is recognized that both the practical and theoretical validity of such codes could be enhanced by incorporating realistic undulator field calculations, the computational cost of doing this can be prohibitive, especially for point-to-point integration of the equations of motion through each undulator period. In this paper we describe a simple analytical model suitable for modeling realistic permanent magnet (PM), hybrid/PM, and non-PM undulator structures, and discuss selected techniques for minimizing computation time.

  4. Ultra-high-energy cosmic rays from low-luminosity active galactic nuclei

    NASA Astrophysics Data System (ADS)

    Duţan, Ioana; Caramete, Laurenţiu I.

    2015-03-01

    We investigate the production of ultra-high-energy cosmic rays (UHECRs) in relativistic jets from low-luminosity active galactic nuclei (LLAGN). We start by proposing a model for the UHECR contribution from the black holes (BHs) in LLAGN, which have jet powers Pj ⩽ 10^46 erg s^-1. This is in contrast to the opinion that only high-luminosity AGN can accelerate particles to energies ⩾ 50 EeV. We rewrite the equations which describe the synchrotron self-absorbed emission of a non-thermal particle distribution to obtain the observed radio flux density from sources with a flat-spectrum core and its relationship to the jet power. We found that the UHECR flux depends on the observed radio flux density, the distance to the AGN, and the BH mass, where the particle acceleration regions can be sustained by the magnetic energy extraction from the BH at the center of the AGN. We use a complete sample of 29 radio sources with a total flux density at 5 GHz greater than 0.5 Jy to make predictions for the maximum particle energy, luminosity, and flux of the UHECRs from nearby AGN. These predictions are then used in a semi-analytical code developed in Mathematica (SAM code) as inputs for the Monte-Carlo simulations to obtain the distribution of the arrival directions at the Earth and the energy spectrum of the UHECRs, taking into account their deflection in the intergalactic magnetic fields. For comparison, we also use the CRPropa code with the same initial conditions as for the SAM code. Importantly, to calculate the energy spectrum we also include the weighting of the UHECR flux for each UHECR source. Next, we compare the energy spectrum of the UHECRs with that obtained by the Pierre Auger Observatory.
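
    For scale, maximum particle energies quoted for such sources are often compared against the generic Hillas confinement limit, E_max ~ 0.9 Z beta (B/uG)(R/kpc) EeV. The sketch below uses that textbook order-of-magnitude estimate with assumed jet parameters; it is not the specific jet model developed in the paper.

```python
def hillas_max_energy_eev(charge_Z, B_microgauss, size_kpc, beta=1.0):
    """Generic Hillas-type confinement limit, E_max ~ 0.9 * Z * beta * (B/uG) * (R/kpc) EeV."""
    return 0.9 * charge_Z * beta * B_microgauss * size_kpc

# Illustrative numbers for a sub-kpc jet acceleration region (assumed, not from the paper)
print(f"{hillas_max_energy_eev(charge_Z=1, B_microgauss=100.0, size_kpc=0.1):.0f} EeV")
```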

  5. Single event upsets in semiconductor devices induced by highly ionising particles.

    PubMed

    Sannikov, A V

    2004-01-01

    A new model of single event upsets (SEUs), created in memory cells by heavy ions and high energy hadrons, has been developed. The model takes into account the spatial distribution of charge collection efficiency over the cell area, which was not considered in previous approaches. Three-dimensional calculations made by the HADRON code have shown good agreement with experimental data for the energy dependence of proton SEU cross sections, sensitive depths and other SEU observables. The model is promising for prediction of SEU rates for memory chips exposed in space and in high-energy experiments as well as for the development of a high-energy neutron dosemeter based on the SEU effect.

  6. Interplay of Laser-Plasma Interactions and Inertial Fusion Hydrodynamics.

    PubMed

    Strozzi, D J; Bailey, D S; Michel, P; Divol, L; Sepke, S M; Kerbel, G D; Thomas, C A; Ralph, J E; Moody, J D; Schneider, M B

    2017-01-13

    The effects of laser-plasma interactions (LPI) on the dynamics of inertial confinement fusion hohlraums are investigated via a new approach that self-consistently couples reduced LPI models into radiation-hydrodynamics numerical codes. The interplay between hydrodynamics and LPI (specifically, stimulated Raman scatter and crossed-beam energy transfer, CBET) mostly occurs via momentum and energy deposition into Langmuir and ion acoustic waves. This spatially redistributes energy coupling to the target, which affects the background plasma conditions and thus modifies laser propagation. This model shows reduced CBET and significant laser energy depletion by Langmuir waves, which reduce the discrepancy between modeling and data from hohlraum experiments on wall x-ray emission and capsule implosion shape.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strozzi, D. J.; Bailey, D. S.; Michel, P.

    The effects of laser-plasma interactions (LPI) on the dynamics of inertial confinement fusion hohlraums are investigated in this work via a new approach that self-consistently couples reduced LPI models into radiation-hydrodynamics numerical codes. The interplay between hydrodynamics and LPI (specifically, stimulated Raman scatter and crossed-beam energy transfer, CBET) mostly occurs via momentum and energy deposition into Langmuir and ion acoustic waves. This spatially redistributes energy coupling to the target, which affects the background plasma conditions and thus modifies laser propagation. In conclusion, this model shows reduced CBET and significant laser energy depletion by Langmuir waves, which reduce the discrepancy between modeling and data from hohlraum experiments on wall x-ray emission and capsule implosion shape.

  8. 40% Whole-House Energy Savings in the Hot-Humid Climate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    none,

    This guide book is a resource to help builders design and construct highly energy-efficient homes, while addressing building durability, indoor air quality, and occupant health, safety, and comfort. With the measures described in this guide, builders in the hot-humid climate can build homes that achieve whole house energy savings of 40% over the Building America benchmark (the 1993 Model Energy Code) with no added overall costs for consumers.

  9. 40% Whole-House Energy Savings in the Mixed-Humid Climate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baechler, Michael C.; Gilbride, T. L.; Hefty, M. G.

    2011-09-01

    This guide book is a resource to help builders design and construct highly energy-efficient homes, while addressing building durability, indoor air quality, and occupant health, safety, and comfort. With the measures described in this guide, builders in the mixed-humid climate can build homes that achieve whole house energy savings of 40% over the Building America benchmark (the 1993 Model Energy Code) with no added overall costs for consumers.

  10. Calculation of thermodynamic functions of aluminum plasma for high-energy-density systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shumaev, V. V., E-mail: shumaev@student.bmstu.ru

    The results of calculating the degree of ionization, the pressure, and the specific internal energy of aluminum plasma in a wide temperature range are presented. The TERMAG computational code based on the Thomas–Fermi model was used at temperatures T > 10^5 K, and the ionization equilibrium model (Saha model) was applied at lower temperatures. Quantitatively similar results were obtained in the temperature range where both models are applicable. This suggests that the obtained data may be joined to produce a wide-range equation of state.

  11. A Validation and Code-to-Code Verification of FAST for a Megawatt-Scale Wind Turbine with Aeroelastically Tailored Blades

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guntur, Srinivas; Jonkman, Jason; Sievers, Ryan

    This paper presents validation and code-to-code verification of the latest version of the U.S. Department of Energy, National Renewable Energy Laboratory wind turbine aeroelastic engineering simulation tool, FAST v8. A set of 1,141 test cases, for which experimental data from a Siemens 2.3 MW machine have been made available and which were in accordance with the International Electrotechnical Commission 61400-13 guidelines, were identified. These conditions were simulated using FAST as well as the Siemens in-house aeroelastic code, BHawC. This paper presents a detailed analysis comparing results from FAST with those from BHawC as well as experimental measurements, using statistics including the means and the standard deviations along with the power spectral densities of select turbine parameters and loads. Results indicate good agreement among the predictions using FAST, BHawC, and the experimental measurements. These agreements are discussed in detail, along with some comments regarding the differences seen in these comparisons relative to the inherent uncertainties in such a model-based analysis.

  12. A Validation and Code-to-Code Verification of FAST for a Megawatt-Scale Wind Turbine with Aeroelastically Tailored Blades

    DOE PAGES

    Guntur, Srinivas; Jonkman, Jason; Sievers, Ryan; ...

    2017-08-29

    This paper presents validation and code-to-code verification of the latest version of the U.S. Department of Energy, National Renewable Energy Laboratory wind turbine aeroelastic engineering simulation tool, FAST v8. A set of 1,141 test cases, for which experimental data from a Siemens 2.3 MW machine have been made available and which were in accordance with the International Electrotechnical Commission 61400-13 guidelines, were identified. These conditions were simulated using FAST as well as the Siemens in-house aeroelastic code, BHawC. This paper presents a detailed analysis comparing results from FAST with those from BHawC as well as experimental measurements, using statistics including the means and the standard deviations along with the power spectral densities of select turbine parameters and loads. Results indicate good agreement among the predictions using FAST, BHawC, and the experimental measurements. These agreements are discussed in detail, along with some comments regarding the differences seen in these comparisons relative to the inherent uncertainties in such a model-based analysis.

  13. Monte Carlo track structure for radiation biology and space applications

    NASA Technical Reports Server (NTRS)

    Nikjoo, H.; Uehara, S.; Khvostunov, I. G.; Cucinotta, F. A.; Wilson, W. E.; Goodhead, D. T.

    2001-01-01

    Over the past two decades, event-by-event Monte Carlo track structure codes have increasingly been used for biophysical modelling and radiotherapy. The advent of these codes has helped to shed light on many aspects of microdosimetry and the mechanisms of damage by ionising radiation in the cell. These codes have continuously been modified to include new, improved cross sections and computational techniques. This paper provides a summary of input data for ionization, excitation and elastic scattering cross sections for event-by-event Monte Carlo track structure simulations of electrons and ions in the form of parametric equations, which makes it easy to reproduce the data. Stopping powers and radial distributions of dose are presented for ions and compared with experimental data. A model is described for simulation of the full slowing down of proton tracks in water in the range 1 keV to 1 MeV. Modelling and calculations are presented for the response of a TEPC proportional counter irradiated with 5 MeV alpha-particles. Distributions are presented for the walled and wall-less counters. The data show a contribution of indirect effects to the lineal energy distribution for the walled counter responses even at such a low ion energy.

  14. Microdosimetric investigation of the spectra from YAYOI by use of the Monte Carlo code PHITS.

    PubMed

    Nakao, Minoru; Baba, Hiromi; Oishi, Ayumu; Onizuka, Yoshihiko

    2010-07-01

    The purpose of this study was to obtain the neutron energy spectrum on the surface of the moderator of the Tokyo University reactor YAYOI and to investigate the origins of peaks observed in the neutron energy spectrum by use of the Monte Carlo Code PHITS for evaluating biological studies. The moderator system was modeled with the use of details from an article that reported a calculation result and a measurement result for a neutron spectrum on the surface of the moderator of the reactor. Our calculation results with PHITS were compared to those obtained with the discrete ordinate code ANISN described in the article. In addition, the changes in the neutron spectrum at the boundaries of materials in the moderator system were examined with PHITS. Also, microdosimetric energy distributions of secondary charged particles from neutron recoil or reaction were calculated by use of PHITS and compared with a microdosimetric experiment. Our calculations of the neutron energy spectrum with PHITS showed good agreement with the results of ANISN in terms of the energy and structure of the peaks. However, the microdosimetric dose distribution spectrum with PHITS showed a remarkable discrepancy with the experimental one. The experimental spectrum could not be explained by PHITS when we used neutron beams of two mono-energies.

  15. Numerical modelling of biomass combustion: Solid conversion processes in a fixed bed furnace

    NASA Astrophysics Data System (ADS)

    Karim, Md. Rezwanul; Naser, Jamal

    2017-06-01

    Increasing demand for energy and rising concerns over global warming have urged the use of renewable energy sources to support the sustainable development of the world. Biomass is a renewable energy source which has become an important fuel for producing thermal energy or electricity. It is an eco-friendly source of energy as it reduces carbon dioxide emissions. Combustion of solid biomass is a complex phenomenon due to its large variety of fuels and physical structures. Among various systems, fixed bed combustion is the most commonly used technique for thermal conversion of solid biomass, but inadequate knowledge of the complex solid conversion processes has limited the development of such combustion systems. Numerical modelling of this combustion system has some advantages over experimental analysis: many important system parameters (e.g. temperature, density, solid fraction) can be estimated inside the entire domain under different working conditions. In this work, a complete numerical model is used for the solid conversion processes of biomass combustion in a fixed bed furnace. The combustion system is divided into solid and gas phases. The model includes several sub-models to characterize the solid phase of the combustion with several variables. User-defined subroutines are used to introduce the solid phase variables into a commercial CFD code, while the gas phase of combustion is resolved using the built-in modules of the CFD code. The heat transfer model is modified to predict the temperatures of the solid and gas phases, with a special radiation heat transfer solution to account for the high absorptivity of the medium. Considering all solid conversion processes, the solid phase variables are evaluated. The results obtained are discussed with reference to an experimental burner.

  16. Validation of a Three-Dimensional Ablation and Thermal Response Simulation Code

    NASA Technical Reports Server (NTRS)

    Chen, Yih-Kanq; Milos, Frank S.; Gokcen, Tahir

    2010-01-01

    The 3dFIAT code simulates pyrolysis, ablation, and shape change of thermal protection materials and systems in three dimensions. The governing equations, which include energy conservation, a three-component decomposition model, and a surface energy balance, are solved with a moving grid system to simulate the shape change due to surface recession. This work is the first part of a code validation study for new capabilities that were added to 3dFIAT. These expanded capabilities include a multi-block moving grid system and an orthotropic thermal conductivity model. This paper focuses on conditions with minimal shape change in which the fluid/solid coupling is not necessary. Two groups of test cases of 3dFIAT analyses of Phenolic Impregnated Carbon Ablator in an arc-jet are presented. In the first group, axisymmetric iso-q shaped models are studied to check the accuracy of three-dimensional multi-block grid system. In the second group, similar models with various through-the-thickness conductivity directions are examined. In this group, the material thermal response is three-dimensional, because of the carbon fiber orientation. Predictions from 3dFIAT are presented and compared with arcjet test data. The 3dFIAT predictions agree very well with thermocouple data for both groups of test cases.

  17. Study of premixing phase of steam explosion with JASMINE code in ALPHA program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moriyama, Kiyofumi; Yamano, Norihiro; Maruyama, Yu

    The premixing phase of steam explosion has been studied in the ALPHA Program at the Japan Atomic Energy Research Institute (JAERI). An analytical model to simulate the premixing phase, JASMINE (JAERI Simulator for Multiphase Interaction and Explosion), has been developed based on the multi-dimensional multi-phase thermal hydraulics code MISTRAL (by Fuji Research Institute Co.). The original code was extended to simulate the physics of the premixing phenomena. The first stage of the code validation was performed by analyzing two mixing experiments with solid particles and water: the isothermal experiment by Gilbertson et al. (1992) and the hot particle experiment by Angelini et al. (1993) (MAGICO). The code predicted the experiments reasonably well. The effectiveness of the TVD scheme employed in the code was also demonstrated.

  18. Effects of Nonequilibrium Chemistry and Darcy-Forchheimer Pyrolysis Flow for Charring Ablator

    NASA Technical Reports Server (NTRS)

    Chen, Yih-Kanq; Milos, Frank S.

    2013-01-01

    The fully implicit ablation and thermal response code simulates pyrolysis and ablation of thermal protection materials and systems. The governing equations, which include energy conservation, a three-component decomposition model, and a surface energy balance, are solved with a moving grid. This work describes new modeling capabilities that are added to a special version of the code. These capabilities include a time-dependent pyrolysis gas flow momentum equation with Darcy-Forchheimer terms and pyrolysis gas species conservation equations with finite-rate homogeneous chemical reactions. The total energy conservation equation is also enhanced for consistency with these new additions. Two groups of parametric studies of the phenolic impregnated carbon ablator are performed. In the first group, an Orion flight environment for a proposed lunar-return trajectory is considered. In the second group, various test conditions for arcjet models are examined. The central focus of these parametric studies is to understand the effect of pyrolysis gas momentum transfer on the material's in-depth thermal response with finite-rate, equilibrium, or frozen homogeneous gas chemistry. Results indicate that the presence of chemical nonequilibrium pyrolysis gas flow does not significantly alter the in-depth thermal response predicted using the chemical equilibrium gas model.

  19. Flowfield Comparisons from Three Navier-Stokes Solvers for an Axisymmetric Separate Flow Jet

    NASA Technical Reports Server (NTRS)

    Koch, L. Danielle; Bridges, James; Khavaran, Abbas

    2002-01-01

    To meet new noise reduction goals, many concepts to enhance mixing in the exhaust jets of turbofan engines are being studied. Accurate steady state flowfield predictions from state-of-the-art computational fluid dynamics (CFD) solvers are needed as input to the latest noise prediction codes. The main intent of this paper was to ascertain that similar Navier-Stokes solvers run at different sites would yield comparable results for an axisymmetric two-stream nozzle case. Predictions from the WIND and the NPARC codes are compared to previously reported experimental data and results from the CRAFT Navier-Stokes solver. Similar k-epsilon turbulence models were employed in each solver, and identical computational grids were used. Agreement between experimental data and predictions from each code was generally good for mean values. All three codes underpredict the maximum value of turbulent kinetic energy. The predicted locations of the maximum turbulent kinetic energy were farther downstream than seen in the data. A grid study was conducted using the WIND code, and comments about convergence criteria and grid requirements for CFD solutions to be used as input for noise prediction computations are given. Additionally, noise predictions from the MGBK code, using the CFD results from the CRAFT code, NPARC, and WIND as input are compared to data.

  20. Development of 1D Liner Compression Code for IDL

    NASA Astrophysics Data System (ADS)

    Shimazu, Akihisa; Slough, John; Pancotti, Anthony

    2015-11-01

    A 1D liner compression code is developed to model liner implosion dynamics in the Inductively Driven Liner Experiment (IDL), where an FRC plasmoid is compressed via inductively driven metal liners. The driver circuit, magnetic field, Joule heating, and liner dynamics calculations are performed at each time step in sequence to couple these effects in the code. To obtain more realistic magnetic field results for a given drive coil geometry, 2D and 3D effects are incorporated into the 1D field calculation through a correction-factor table-lookup approach. The commercial low-frequency electromagnetic field solver ANSYS Maxwell 3D is used to solve the magnetic field profile for a static liner condition at various liner radii in order to derive correction factors for the 1D field calculation in the code. The liner dynamics results from the code are verified to be in good agreement with results from the commercial explicit dynamics solver ANSYS Explicit Dynamics and with a previous liner experiment. The developed code is used to optimize the capacitor bank and driver coil design for better energy transfer and coupling. FRC gain calculations are also performed using the liner compression data from the code for the conceptual design of a reactor-sized system for fusion energy gains.
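
    The abstract describes a sequential (operator-split) coupling of circuit, field, heating, and liner-dynamics updates at each time step. The minimal Python sketch below only illustrates that kind of loop; the lumped circuit parameters, liner geometry, and simplified long-solenoid field estimate are assumptions for illustration, not the IDL code's actual equations or values.

      # Hypothetical operator-split time loop in the spirit of a 1D liner compression model.
      # All parameters and the simplified physics below are illustrative assumptions.
      import math

      C, L0, R0 = 1e-3, 2e-7, 5e-3      # capacitor bank, stray inductance, resistance (assumed)
      m, r, v = 0.05, 0.10, 0.0         # liner mass [kg], radius [m], radial velocity [m/s]
      length = 0.3                      # liner length [m]
      mu0 = 4e-7 * math.pi
      q, i = C * 20e3, 0.0              # initial charge (20 kV bank) and coil current
      dt, t_end = 1e-7, 5e-5

      t = 0.0
      while t < t_end and r > 0.01:
          # 1) driver circuit: series RLC discharge into the coil (assumed lumped model)
          didt = (q / C - R0 * i) / L0
          i += didt * dt
          q -= i * dt
          # 2) magnetic field at the liner surface (assumed long-solenoid estimate; a
          #    correction factor derived from a 3D field solve would multiply this)
          correction = 1.0
          b = correction * mu0 * i / length
          # 3) magnetic pressure drives the liner inward
          p_mag = b**2 / (2.0 * mu0)
          force = -p_mag * 2.0 * math.pi * r * length
          v += force / m * dt
          r += v * dt
          t += dt

      print(f"final radius = {r*100:.2f} cm at t = {t*1e6:.1f} us")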

  1. REX3DV1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holm, Elizabeth A.

    2002-03-28

    This code is a FORTRAN code for three-dimensional Monte Carlo Potts Model (MCPM) recrystallization and grain growth. A continuum grain structure is mapped onto a three-dimensional lattice. The mapping procedure is analogous to color bitmapping the grain structure; grains are clusters of pixels (sites) of the same color (spin). The total system energy is given by the Potts Hamiltonian, and the kinetics of grain growth are determined through a Monte Carlo technique with a nonconserved order parameter (Glauber dynamics). The code can be compiled and run on UNIX/Linux platforms.
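
    REX3D itself is FORTRAN; the short Python sketch below only illustrates the update rule the abstract describes (a 3D lattice of spins evolved by single-site flips accepted with the Glauber probability). The lattice size, number of spin states Q, and temperature are assumed toy values, not parameters of the released code.

      # Minimal illustrative sketch of Monte Carlo Potts Model (MCPM) updates on a 3D lattice
      # with Glauber dynamics; N, Q, and kT are assumed toy values.
      import numpy as np

      rng = np.random.default_rng(0)
      N, Q, kT = 32, 20, 0.5                      # lattice edge, number of spin states, temperature
      spins = rng.integers(0, Q, size=(N, N, N))  # each site holds a grain "color" (spin)

      def site_energy(s, x, y, z, value):
          """Potts energy of one site: +1 for each unlike nearest neighbor (J = 1)."""
          e = 0
          for dx, dy, dz in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
              if s[(x+dx) % N, (y+dy) % N, (z+dz) % N] != value:
                  e += 1
          return e

      def glauber_step(s):
          """Attempt one spin flip: propose a new color, accept with the Glauber probability."""
          x, y, z = rng.integers(0, N, size=3)
          old, new = s[x, y, z], rng.integers(0, Q)
          dE = site_energy(s, x, y, z, new) - site_energy(s, x, y, z, old)
          if rng.random() < 1.0 / (1.0 + np.exp(dE / kT)):   # Glauber acceptance
              s[x, y, z] = new

      for _ in range(10000):          # a short run; real grain-growth studies need many full sweeps
          glauber_step(spins)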

  2. Nonequilibrium radiation and chemistry models for aerocapture vehicle flowfields, volume 1

    NASA Technical Reports Server (NTRS)

    Carlson, Leland A.

    1991-01-01

    The following subject areas are covered: the development of detailed nonequilibrium radiation models for molecules along with appropriate models for atoms; the inclusion of nongray radiation gasdynamic coupling in the VSL (Viscous Shock Layer) code; the development and evaluation of various electron-electronic energy models; and an examination of the effects of shock slip.

  3. Model documentation: Electricity Market Module, Electricity Fuel Dispatch Submodule

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    This report documents the objectives, analytical approach and development of the National Energy Modeling System Electricity Fuel Dispatch Submodule (EFD), a submodule of the Electricity Market Module (EMM). The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, model source code, and forecast results generated through the synthesis and scenario development based on these components.

  4. GFSSP Training Course Lectures

    NASA Technical Reports Server (NTRS)

    Majumdar, Alok K.

    2008-01-01

    GFSSP has been extended to model conjugate heat transfer. Fluid-solid network elements include: a) fluid nodes and flow branches; b) solid nodes and ambient nodes; c) conductors connecting fluid-solid, solid-solid and solid-ambient nodes. Heat conduction equations are solved simultaneously with the fluid conservation equations for mass, momentum, energy, and the equation of state. The extended code was verified by comparison with the analytical solution for a simple conduction-convection problem. The code was applied to model: a) pressurization of a cryogenic tank; b) freezing and thawing of metal; c) chilldown of a cryogenic transfer line; d) boil-off from a cryogenic tank.

  5. A Proposed Benchmark Problem for Scatter Calculations in Radiographic Modelling

    NASA Astrophysics Data System (ADS)

    Jaenisch, G.-R.; Bellon, C.; Schumm, A.; Tabary, J.; Duvauchelle, Ph.

    2009-03-01

    Code validation is a permanent concern in computer modelling, and has been addressed repeatedly in eddy current and ultrasonic modelling. A good benchmark problem is sufficiently simple to be taken into account by various codes without strong requirements on geometry representation capabilities, focuses on few or even a single aspect of the problem at hand to facilitate interpretation and to avoid that compound errors compensate each other, yields a quantitative result, and is experimentally accessible. In this paper we attempt to address code validation for one aspect of radiographic modelling, the prediction of scattered radiation. Many NDT applications cannot neglect scattered radiation, and the scatter calculation is thus important to faithfully simulate the inspection situation. Our benchmark problem covers the wall thickness range of 10 to 50 mm for single-wall inspections, with energies ranging from 100 to 500 keV in the first stage, and up to 1 MeV with wall thicknesses up to 70 mm in the extended stage. A simple plate geometry is sufficient for this purpose, and the scatter data are compared at the photon level, without a film model, which allows for comparisons with reference codes like MCNP. We compare results of three Monte Carlo codes (McRay, Sindbad and Moderato) as well as an analytical first-order scattering code (VXI), and confront them with results obtained with MCNP. The comparison with an analytical scatter model provides insights into the application domain where this kind of approach can successfully replace Monte Carlo calculations.

  6. Kinetic Modeling of Next-Generation High-Energy, High-Intensity Laser-Ion Accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Albright, Brian James; Yin, Lin; Stark, David James

    One of the long-standing problems in the community is the question of how we can model “next-generation” laser-ion acceleration in a computationally tractable way. A new particle tracking capability in the LANL VPIC kinetic plasma modeling code has enabled us to solve this long-standing problem.

  7. Methodology for Evaluating Cost-effectiveness of Commercial Energy Code Changes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, Philip R.; Liu, Bing

    This document lays out the U.S. Department of Energy’s (DOE’s) method for evaluating the cost-effectiveness of energy code proposals and editions. The evaluation is applied to provisions or editions of the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) Standard 90.1 and the International Energy Conservation Code (IECC). The method follows standard life-cycle cost (LCC) economic analysis procedures. Cost-effectiveness evaluation requires three steps: 1) evaluating the energy and energy cost savings of code changes, 2) evaluating the incremental and replacement costs related to the changes, and 3) determining the cost-effectiveness of energy code changes based on those costs and savings over time.
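
    The three-step evaluation amounts to a life-cycle cost comparison of discounted energy cost savings against incremental and replacement costs. The sketch below is a generic LCC calculation under assumed inputs (discount rate, analysis period, illustrative costs); it is not DOE's published parameter set, only an illustration of the arithmetic.

      # Generic life-cycle cost (LCC) sketch for an energy code change; all inputs are
      # assumed illustrative values, not DOE's published parameters.
      def present_value_factor(rate: float, years: int) -> float:
          """Uniform present value factor for a stream of equal annual amounts."""
          return sum(1.0 / (1.0 + rate) ** n for n in range(1, years + 1))

      def lcc_savings(incremental_cost, annual_energy_cost_savings, rate=0.03, years=30,
                      replacement_cost=0.0):
          """Net present savings = discounted energy cost savings - first cost - replacements."""
          pv_savings = annual_energy_cost_savings * present_value_factor(rate, years)
          return pv_savings - incremental_cost - replacement_cost

      # Example: a code change adding $1,200 of first cost and saving $150/yr in energy cost.
      net = lcc_savings(incremental_cost=1200.0, annual_energy_cost_savings=150.0)
      print(f"net present savings: ${net:,.0f} -> {'cost-effective' if net > 0 else 'not cost-effective'}")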

  8. MCNP6 Simulation of Light and Medium Nuclei Fragmentation at Intermediate Energies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mashnik, Stepan Georgievich; Kerby, Leslie Marie

    2015-05-22

    MCNP6, the latest and most advanced LANL Monte Carlo transport code, representing a merger of MCNP5 and MCNPX, is actually much more than the sum of those two computer codes; MCNP6 is available to the public via RSICC at Oak Ridge, TN, USA. In the present work, MCNP6 was validated and verified (V&V) against different experimental data on intermediate-energy fragmentation reactions, and results by several other codes, using mainly the latest modifications of the Cascade-Exciton Model (CEM) and of the Los Alamos version of the Quark-Gluon String Model (LAQGSM) event generators CEM03.03 and LAQGSM03.03. It was found that MCNP6 using CEM03.03 and LAQGSM03.03 describes well fragmentation reactions induced on light and medium target nuclei by protons and light nuclei of energies around 1 GeV/nucleon and below, and can serve as a reliable simulation tool for different applications, like cosmic-ray-induced single event upsets (SEU’s), radiation protection, and cancer therapy with proton and ion beams, to name just a few. Future improvements of the predicting capabilities of MCNP6 for such reactions are possible, and are discussed in this work.

  9. Numerical MHD codes for modeling astrophysical flows

    NASA Astrophysics Data System (ADS)

    Koldoba, A. V.; Ustyugova, G. V.; Lii, P. S.; Comins, M. L.; Dyda, S.; Romanova, M. M.; Lovelace, R. V. E.

    2016-05-01

    We describe a Godunov-type magnetohydrodynamic (MHD) code based on the Miyoshi and Kusano (2005) solver which can be used to solve various astrophysical hydrodynamic and MHD problems. The energy equation is in the form of entropy conservation. The code has been implemented on several different coordinate systems: 2.5D axisymmetric cylindrical coordinates, 2D Cartesian coordinates, 2D plane polar coordinates, and fully 3D cylindrical coordinates. Viscosity and diffusivity are implemented in the code to control the accretion rate in the disk and the rate of penetration of the disk matter through the magnetic field lines. The code has been utilized for the numerical investigations of a number of different astrophysical problems, several examples of which are shown.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lao, Lang L.; St John, Holger; Staebler, Gary M.

    This report describes the work done under U.S. Department of Energy grant number DE-FG02-07ER54935 for the period ending July 31, 2010. The goal of this project was to provide predictive transport analysis to the PTRANSP code. Our contribution to this effort consisted of three parts: (a) a predictive solver suitable for use with highly non-linear transport models and installation of the turbulent confinement models GLF23 and TGLF, (b) an interface of this solver with the PTRANSP code, and (c) initial development of an EPED1 edge pedestal model interface with PTRANSP. PTRANSP has been installed locally on this cluster by importing a complete PTRANSP build environment that always contains the proper version of the libraries and other object files that PTRANSP requires. The GCNMP package and its interface code have been added to the SVN repository at PPPL.

  11. Staggering of angular momentum distribution in fission

    NASA Astrophysics Data System (ADS)

    Tamagno, Pierre; Litaize, Olivier

    2018-03-01

    We review here the role of angular momentum distributions in the fission process. To do so, the algorithm implemented in the FIFRELIN code [?] is detailed, with special emphasis on the role of fission fragment angular momenta. The usual Rayleigh distribution used for the angular momentum distribution is presented and the related model derivation is recalled. Arguments are given to justify why this distribution should not hold for low excitation energy of the fission fragments. An alternative ad hoc expression taking into account low-lying collectiveness, as implemented in the FIFRELIN code, is presented. Yet no dramatic impact has been found on the observables currently provided by the code. To quantify the magnitude of the impact of the low-lying staggering in the angular momentum distribution, a textbook case is considered for the decay of the 144Ba nucleus at low excitation energy.

  12. Exposure calculation code module for reactor core analysis: BURNER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vondy, D.R.; Cunningham, G.W.

    1979-02-01

    The code module BURNER for nuclear reactor exposure calculations is presented. The computer requirements are shown, as are the reference data and interface data file requirements, and the programmed equations and procedure of calculation are described. The operating history of a reactor is followed over the period between solutions of the space, energy neutronics problem. The end-of-period nuclide concentrations are determined given the necessary information. A steady-state, continuous fueling model is treated in addition to the usual fixed fuel model. The control options provide flexibility to select among an unusually wide variety of programmed procedures. The code also provides a user option to make a number of auxiliary calculations and print such information as the local gamma source, cumulative exposure, and a fine-scale power density distribution in a selected zone. The code is used locally in a system for computation which contains the VENTURE diffusion theory neutronics code and other modules.
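
    The exposure step described here amounts to integrating nuclide depletion equations over the period between neutronics solutions. The sketch below shows such a step for a single fictitious two-nuclide chain; the chain, cross sections, and one-group flux are placeholder assumptions for illustration and are not BURNER data or its solution method.

      # Illustrative one-step depletion solve over an exposure period, in the spirit of a
      # burnup module: dN/dt = A N, with A holding decay and transmutation rates.
      # The chain, cross sections, and flux below are placeholder assumptions.
      import numpy as np
      from scipy.linalg import expm

      barn = 1.0e-24                               # cm^2
      phi = 3.0e14                                 # one-group flux [n/cm^2/s], assumed
      sigma_abs = np.array([50.0, 10.0]) * barn    # absorption cross sections of nuclides A, B
      lam = np.array([0.0, 1.0e-6])                # decay constants [1/s]

      # Burnup matrix: nuclide A transmutes to B by absorption; B is lost by absorption and decay.
      A = np.array([
          [-(sigma_abs[0] * phi),                     0.0           ],
          [  sigma_abs[0] * phi,   -(sigma_abs[1] * phi + lam[1])   ],
      ])

      N0 = np.array([1.0e21, 0.0])                 # beginning-of-period concentrations [atoms/cm^3]
      period = 30.0 * 24 * 3600                    # 30-day exposure step

      N_end = expm(A * period) @ N0                # end-of-period concentrations
      print(N_end)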

  13. SPECabq v. 2.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chambers, Robert S.; Neidigk, Matthew A.

    Sandia SPECabq is a FORTRAN code that defines the user-supplied subroutines needed to perform nonlinear viscoelastic analyses in the ABAQUS commercial finite element code based on the Simplified Potential Energy Clock (SPEC) model. The SPEC model was published in the open literature in 2009. It must be compiled and linked with the ABAQUS libraries under the user-supplied subroutine option of the ABAQUS executable script. The subroutine is used to analyze the thermomechanical behavior of isotropic polymers, predicting, for example, how a polymer may undergo stress or volume relaxation under different temperature and loading environments. This subroutine enables the ABAQUS finite element code to be used for analyzing the thermo-mechanical behavior of samples and parts that are made from glassy polymers.

  14. 78 FR 23550 - Department of Energy's (DOE) Participation in Development of the International Energy...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-19

    ...: Notice. SUMMARY: The DOE participates in the code development process of the International Code Council... notice outlines the process by which DOE produces code change proposals, and participates in the ICC code development process. FOR FURTHER INFORMATION CONTACT: Jeremiah Williams, U.S. Department of Energy, Office of...

  15. NEAMS FPL M2 Milestone Report: Development of a UO₂ Grain Size Model using Multiscale Modeling and Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tonks, Michael R; Zhang, Yongfeng; Bai, Xianming

    2014-06-01

    This report summarizes development work funded by the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program's Fuels Product Line (FPL) to develop a mechanistic model for the average grain size in UO₂ fuel. The model is developed using a multiscale modeling and simulation approach involving atomistic simulations, as well as mesoscale simulations using INL's MARMOT code.

  16. Monte Carlo simulations and benchmark measurements on the response of TE(TE) and Mg(Ar) ionization chambers in photon, electron and neutron beams

    NASA Astrophysics Data System (ADS)

    Lin, Yi-Chun; Huang, Tseng-Te; Liu, Yuan-Hao; Chen, Wei-Lin; Chen, Yen-Fu; Wu, Shu-Wei; Nievaart, Sander; Jiang, Shiang-Huei

    2015-06-01

    The paired ionization chamber (IC) technique is commonly employed to determine neutron and photon doses in radiology or radiotherapy neutron beams, where the neutron dose depends strongly on the accuracy of the accompanying high-energy photon dose. During the dose derivation, it is an important issue to evaluate the photon and electron response functions of two commercially available ionization chambers, denoted as TE(TE) and Mg(Ar), used in our reactor-based epithermal neutron beam. Nowadays, most perturbation corrections for accurate dose determination and many treatment planning systems are based on the Monte Carlo technique. We used the general-purpose Monte Carlo codes MCNP5, EGSnrc, FLUKA and GEANT4 for benchmark verifications among them and against carefully measured values for a precise estimation of the chamber current from the absorbed dose rate of the cavity gas. Also, energy-dependent response functions of the two chambers were calculated in a parallel beam with mono-energies from 20 keV to 20 MeV for photons and electrons, using both the simple spherical and detailed IC models. The measurements were performed in well-defined (a) four primary M-80, M-100, M-120 and M-150 X-ray calibration fields, (b) a primary 60Co calibration beam, (c) 6 MV and 10 MV photon and (d) 6 MeV and 18 MeV electron LINAC beams in hospital, and (e) a BNCT clinical trial neutron beam. For the TE(TE) chamber, all codes were almost identical over the whole photon energy range. For the Mg(Ar) chamber, MCNP5 showed a lower response than the other codes for photon energies below 0.1 MeV and a similar response above 0.2 MeV (agreeing within 5% in the simple spherical model). With increasing electron energy, the response difference between MCNP5 and the other codes became larger in both chambers. Compared with the measured currents, MCNP5 agreed with the measurement data within 5% for the 60Co, 6 MV, 10 MV, 6 MeV and 18 MeV LINAC beams; for the Mg(Ar) chamber, however, the deviations reached 7.8-16.5% below 120 kVp X-ray beams. In this study, where we are especially interested in BNCT doses and the low-energy photon contribution is not negligible, the MCNP model is recognized as the most suitable for simulating the broad photon-electron and neutron energy-distributed responses of the paired ICs. MCNP also provides the best prediction of BNCT source adjustment via the detector's neutron and photon responses.
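
    The paired-chamber technique mentioned above is usually formulated as two readings that mix photon and neutron dose with different sensitivities, which are then separated by solving a small linear system. The sketch below shows that separation step with made-up sensitivities and readings; the numbers are illustrative assumptions, not calibration data from this study.

      # Sketch of the paired ionization chamber separation of neutron and photon dose.
      # Each chamber reading is modeled as M = h*D_gamma + k*D_n, where h and k are photon
      # and neutron sensitivities; all numbers below are made-up illustrative values.
      import numpy as np

      # rows: TE(TE) chamber, Mg(Ar) chamber; columns: photon sensitivity h, neutron sensitivity k
      S = np.array([
          [1.00, 0.95],   # TE(TE): responds to both photons and neutrons
          [1.00, 0.05],   # Mg(Ar): nearly photon-only response
      ])
      readings = np.array([3.2, 1.4])          # normalized chamber readings (assumed)

      d_gamma, d_n = np.linalg.solve(S, readings)
      print(f"photon dose = {d_gamma:.2f}, neutron dose = {d_n:.2f} (arbitrary units)")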

  17. Effects of the plasma profiles on photon and pair production in ultrahigh intensity laser solid interaction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tian, Y. X.; Jin, X. L., E-mail: jinxiaolin@uestc.edu.cn; Yan, W. Z.

    The model of photon and pair production in strong-field quantum electrodynamics is implemented into our 1D3V particle-in-cell code with a Monte Carlo algorithm. Using this code, the evolution of the particles in ultrahigh-intensity laser (∼10^23 W/cm^2) interaction with an aluminum foil target is observed. Four different initial plasma profiles are considered in the simulations. The effects of the initial plasma profiles on photon and pair production, energy spectra, and energy evolution are analyzed. The results imply that one can set an optimal initial plasma profile to obtain the desired photon distributions.

  18. Development of the US3D Code for Advanced Compressible and Reacting Flow Simulations

    NASA Technical Reports Server (NTRS)

    Candler, Graham V.; Johnson, Heath B.; Nompelis, Ioannis; Subbareddy, Pramod K.; Drayna, Travis W.; Gidzak, Vladimyr; Barnhardt, Michael D.

    2015-01-01

    Aerothermodynamics and hypersonic flows involve complex multi-disciplinary physics, including finite-rate gas-phase kinetics, finite-rate internal energy relaxation, gas-surface interactions with finite-rate oxidation and sublimation, transition to turbulence, large-scale unsteadiness, shock-boundary layer interactions, fluid-structure interactions, and thermal protection system ablation and thermal response. Many of the flows have a large range of length and time scales, requiring large computational grids, implicit time integration, and long solution run times. The University of Minnesota NASA US3D code was designed for the simulation of these complex, highly coupled flows. It has many of the features of the well-established DPLR code, but uses unstructured grids and has many advanced numerical capabilities and physical models for multi-physics problems. The main capabilities of the code are described, the physical modeling approaches are discussed, the different types of numerical flux functions and time integration approaches are outlined, and the parallelization strategy is overviewed. Comparisons between US3D and the NASA DPLR code are presented, and several advanced simulations are shown to illustrate some of the novel features of the code.

  19. Computer modeling of pulsed CO2 lasers for lidar applications

    NASA Technical Reports Server (NTRS)

    Spiers, Gary D.; Smithers, Martin E.; Murty, Rom

    1991-01-01

    The experimental results will enable a comparison of the numerical code output with experimental data, ensuring verification of the validity of the code. The measurements were made on a modified commercial CO2 laser. The results are as follows. (1) The dependence of pulse shape and energy on gas pressure was measured. (2) The intrapulse frequency chirp due to plasma and laser-induced medium perturbation effects was determined; a simple numerical model showed quantitative agreement with these measurements. The pulse-to-pulse frequency stability was also determined. (3) The dependence of the laser transverse mode stability on cavity length was measured, and a simple analysis of this dependence in terms of changes to the equivalent Fresnel number and the cavity magnification was performed. (4) An analysis was made of the discharge pulse shape, which enabled the low efficiency of the laser to be explained in terms of poor coupling of the electrical energy into the vibrational levels. (5) The existing laser resonator code was changed to allow it to run on the Cray XMP under the new operating system.

  20. Role of tunnelling in complete and incomplete fusion induced by 9Be on 169Tm and 187Re targets at around barrier energies

    NASA Astrophysics Data System (ADS)

    Kharab, Rajesh; Chahal, Rajiv; Kumar, Rajiv

    2017-04-01

    We have analyzed the complete and incomplete fusion excitation functions for the 9Be + 169Tm, 187Re reactions at around-barrier energies using the code PLATYPUS, which is based on a classical dynamical model. A quantum mechanical tunnelling correction is incorporated at near- and sub-barrier energies, which significantly improves the agreement between the data and the predictions.

  1. Open-Source Development of the Petascale Reactive Flow and Transport Code PFLOTRAN

    NASA Astrophysics Data System (ADS)

    Hammond, G. E.; Andre, B.; Bisht, G.; Johnson, T.; Karra, S.; Lichtner, P. C.; Mills, R. T.

    2013-12-01

    Open-source software development has become increasingly popular in recent years. Open-source encourages collaborative and transparent software development and promotes unlimited free redistribution of source code to the public. Open-source development is good for science as it reveals implementation details that are critical to scientific reproducibility, but generally excluded from journal publications. In addition, research funds that would have been spent on licensing fees can be redirected to code development that benefits more scientists. In 2006, the developers of PFLOTRAN open-sourced their code under the U.S. Department of Energy SciDAC-II program. Since that time, the code has gained popularity among code developers and users from around the world seeking to employ PFLOTRAN to simulate thermal, hydraulic, mechanical and biogeochemical processes in the Earth's surface/subsurface environment. PFLOTRAN is a massively-parallel subsurface reactive multiphase flow and transport simulator designed from the ground up to run efficiently on computing platforms ranging from the laptop to leadership-class supercomputers, all from a single code base. The code employs domain decomposition for parallelism and is founded upon the well-established and open-source parallel PETSc and HDF5 frameworks. PFLOTRAN leverages modern Fortran (i.e. Fortran 2003-2008) in its extensible object-oriented design. The use of this progressive, yet domain-friendly programming language has greatly facilitated collaboration in the code's software development. Over the past year, PFLOTRAN's top-level data structures were refactored as Fortran classes (i.e. extendible derived types) to improve the flexibility of the code, ease the addition of new process models, and enable coupling to external simulators. For instance, PFLOTRAN has been coupled to the parallel electrical resistivity tomography code E4D to enable hydrogeophysical inversion while the same code base can be used as a third-party library to provide hydrologic flow, energy transport, and biogeochemical capability to the community land model, CLM, part of the open-source community earth system model (CESM) for climate. In this presentation, the advantages and disadvantages of open source software development in support of geoscience research at government laboratories, universities, and the private sector are discussed. Since the code is open-source (i.e. it's transparent and readily available to competitors), the PFLOTRAN team's development strategy within a competitive research environment is presented. Finally, the developers discuss their approach to object-oriented programming and the leveraging of modern Fortran in support of collaborative geoscience research as the Fortran standard evolves among compiler vendors.

  2. Energy-based aeroelastic analysis of a morphing wing

    NASA Astrophysics Data System (ADS)

    De Breuker, Roeland; Abdalla, Mostafa; Gürdal, Zafer; Lindner, Douglas

    2007-04-01

    Aircraft are often confronted with distinct circumstances during different parts of their mission. Ideally, the aircraft should fly optimally in terms of aerodynamic performance and other criteria in each of these mission segments. This requires in principle as many different aircraft configurations as there are flight conditions, so a morphing aircraft would be the ideal solution. A morphing aircraft is a flying vehicle that i) changes its state substantially, ii) provides superior system capability, and iii) uses a design that integrates innovative technologies. It is important for such aircraft that the gains due to adaptability to the flight condition are not nullified by the energy consumed in carrying out the morphing manoeuvre. Therefore an aeroelastic numerical tool that takes into account the morphing energy is needed to analyse the net gain of morphing. The code couples a three-dimensional beam finite element model in a co-rotational framework to a lifting-line aerodynamic code. The morphing energy is calculated by summing the actuation moments, applied at the beam nodes, multiplied by the required angular rotations of the beam elements. The code is validated against the NASTRAN Aeroelasticity Module and found to be in agreement. Finally, the applicability of the code is tested for a sweep morphing manoeuvre, and it is demonstrated that sweep morphing can improve the aerodynamic performance of an aircraft and that the inclusion of aeroelastic effects is important.
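
    The morphing-energy bookkeeping described above is essentially a sum of nodal actuation moments times required rotations. The short sketch below illustrates that sum; the nodal moments and rotation increments are made-up values, not results from the paper's beam model.

      # Sketch of the morphing-energy estimate: sum over beam nodes of actuation moment
      # times required angular rotation. The values below are made-up illustrative inputs.
      import math

      actuation_moments = [120.0, 95.0, 60.0, 30.0]      # N*m at each actuated beam node (assumed)
      required_rotations_deg = [2.0, 1.5, 1.0, 0.5]      # required angular rotations (assumed)

      morphing_energy = sum(
          m * math.radians(dtheta)
          for m, dtheta in zip(actuation_moments, required_rotations_deg)
      )
      print(f"estimated morphing energy: {morphing_energy:.2f} J")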

  3. Sandia National Laboratories analysis code data base

    NASA Astrophysics Data System (ADS)

    Peterson, C. W.

    1994-11-01

    Sandia National Laboratories' mission is to solve important problems in the areas of national defense, energy security, environmental integrity, and industrial technology. The laboratories' strategy for accomplishing this mission is to conduct research to provide an understanding of the important physical phenomena underlying any problem, and then to construct validated computational models of the phenomena which can be used as tools to solve the problem. In the course of implementing this strategy, Sandia's technical staff has produced a wide variety of numerical problem-solving tools which they use regularly in the design, analysis, performance prediction, and optimization of Sandia components, systems, and manufacturing processes. This report provides the relevant technical and accessibility data on the numerical codes used at Sandia, including information on the technical competency or capability area that each code addresses, code 'ownership' and release status, and references describing the physical models and numerical implementation.

  4. The updated algorithm of the Energy Consumption Program (ECP): A computer model simulating heating and cooling energy loads in buildings

    NASA Technical Reports Server (NTRS)

    Lansing, F. L.; Strain, D. M.; Chai, V. W.; Higgins, S.

    1979-01-01

    The Energy Consumption Program (ECP) was developed to simulate building heating and cooling loads and compute thermal and electric energy consumption and cost. This article reports on the new additional algorithms and modifications made in an effort to widen the areas of application. The program structure was rewritten accordingly to refine and advance the building model and to further reduce processing time and cost. The program is noted for its very low cost and ease of use compared to other available codes. The accuracy of the computations is not sacrificed, however, since the results are expected to lie within ±10% of actual energy meter readings.

  5. Energy coding in biological neural networks

    PubMed Central

    Zhang, Zhikang

    2007-01-01

    Motivated by the experimental result that signal transmission and neuronal energetic demands are tightly coupled to information coding in the cerebral cortex, we present a new scientific theory that offers a unique mechanism for brain information processing. We demonstrate that the neural coding produced by the activity of the brain is well described by our theory of energy coding. Because the energy coding model reveals mechanisms of brain information processing based upon known biophysical properties, we can not only reproduce various experimental results of neuro-electrophysiology, but also quantitatively explain recent experimental results from neuroscientists at Yale University by means of the principle of energy coding. Because the theory of energy coding bridges the gap between functional connections within a biological neural network and energetic consumption, we estimate that the theory has very important consequences for quantitative research on cognitive function. PMID:19003513

  6. Local dynamic subgrid-scale models in channel flow

    NASA Technical Reports Server (NTRS)

    Cabot, William H.

    1994-01-01

    The dynamic subgrid-scale (SGS) model has given good results in the large-eddy simulation (LES) of homogeneous isotropic or shear flow, and in the LES of channel flow, using averaging in two or three homogeneous directions (the DA model). In order to simulate flows in general, complex geometries (with few or no homogeneous directions), the dynamic SGS model needs to be applied at a local level in a numerically stable way. Channel flow, which is inhomogeneous and wall-bounded flow in only one direction, provides a good initial test for local SGS models. Tests of the dynamic localization model were performed previously in channel flow using a pseudospectral code and good results were obtained. Numerical instability due to persistently negative eddy viscosity was avoided by either constraining the eddy viscosity to be positive or by limiting the time that eddy viscosities could remain negative by co-evolving the SGS kinetic energy (the DLk model). The DLk model, however, was too expensive to run in the pseudospectral code due to a large near-wall term in the auxiliary SGS kinetic energy (k) equation. One objective was then to implement the DLk model in a second-order central finite difference channel code, in which the auxiliary k equation could be integrated implicitly in time at great reduction in cost, and to assess its performance in comparison with the plane-averaged dynamic model or with no model at all, and with direct numerical simulation (DNS) and/or experimental data. Other local dynamic SGS models have been proposed recently, e.g., constrained dynamic models with random backscatter, and with eddy viscosity terms that are averaged in time over material path lines rather than in space. Another objective was to incorporate and test these models in channel flow.

  7. The role of water vapor in the ITCZ response to hemispherically asymmetric forcings

    NASA Astrophysics Data System (ADS)

    Clark, S.; Ming, Y.; Held, I.

    2016-12-01

    Studies using both comprehensive and simplified models have shown that changes to the inter-hemispheric energy budget can lead to changes in the position of the ITCZ. In these studies, the mean position of the ITCZ tends to shift toward the hemisphere receiving more energy. While included in many studies using comprehensive models, the role of the water vapor-radiation feedback in influencing ITCZ shifts has not been focused on in isolation in an idealized setting. Here we use an aquaplanet idealized moist general circulation model initially developed by Dargan Frierson, without clouds, newly coupled to a full radiative transfer code to investigate the role of water vapor in the ITCZ response to hemispherically asymmetric forcings. We induce a southward ITCZ shift by reducing the incoming solar radiation in the northern hemisphere. To isolate the radiative impact of water vapor, we run simulations where the radiation code sees the prognostic water vapor field, which responds dynamically to temperature, parameterized convection, and the circulation and also run simulations where the radiation code sees a prescribed static climatological water vapor field. We find that under Earth-like climate conditions, a shifting water vapor distribution's interaction with longwave radiation amplifies the latitudinal displacement of the ITCZ in response to a given hemispherically asymmetric forcing roughly by a factor of two; this effect appears robust to the convection scheme used. We argue that this amplifying effect can be explained using the energy flux equator theory for the position of the ITCZ.

  8. Simulations of ultra-high energy cosmic rays in the local Universe and the origin of cosmic magnetic fields

    NASA Astrophysics Data System (ADS)

    Hackstein, S.; Vazza, F.; Brüggen, M.; Sorce, J. G.; Gottlöber, S.

    2018-04-01

    We simulate the propagation of cosmic rays at ultra-high energies, ≳1018 eV, in models of extragalactic magnetic fields in constrained simulations of the local Universe. We use constrained initial conditions with the cosmological magnetohydrodynamics code ENZO. The resulting models of the distribution of magnetic fields in the local Universe are used in the CRPROPA code to simulate the propagation of ultra-high energy cosmic rays. We investigate the impact of six different magneto-genesis scenarios, both primordial and astrophysical, on the propagation of cosmic rays over cosmological distances. Moreover, we study the influence of different source distributions around the Milky Way. Our study shows that different scenarios of magneto-genesis do not have a large impact on the anisotropy measurements of ultra-high energy cosmic rays. However, at high energies above the Greisen-Zatsepin-Kuzmin (GZK)-limit, there is anisotropy caused by the distribution of nearby sources, independent of the magnetic field model. This provides a chance to identify cosmic ray sources with future full-sky measurements and high number statistics at the highest energies. Finally, we compare our results to the dipole signal measured by the Pierre Auger Observatory. All our source models and magnetic field models could reproduce the observed dipole amplitude with a pure iron injection composition. Our results indicate that the dipole is observed due to clustering of secondary nuclei in direction of nearby sources of heavy nuclei. A light injection composition is disfavoured, since the increase in dipole angular power from 4 to 8 EeV is too slow compared to observation by the Pierre Auger Observatory.

  9. GPU accelerated population annealing algorithm

    NASA Astrophysics Data System (ADS)

    Barash, Lev Yu.; Weigel, Martin; Borovský, Michal; Janke, Wolfhard; Shchur, Lev N.

    2017-11-01

    Population annealing is a promising recent approach for Monte Carlo simulations in statistical physics, in particular for the simulation of systems with complex free-energy landscapes. It is a hybrid method, combining importance sampling through Markov chains with elements of sequential Monte Carlo in the form of population control. While it appears to provide algorithmic capabilities for the simulation of such systems that are roughly comparable to those of more established approaches such as parallel tempering, it is intrinsically much more suitable for massively parallel computing. Here, we tap into this structural advantage and present a highly optimized implementation of the population annealing algorithm on GPUs that promises speed-ups of several orders of magnitude as compared to a serial implementation on CPUs. While the sample code is for simulations of the 2D ferromagnetic Ising model, it should be easily adapted for simulations of other spin models, including disordered systems. Our code includes implementations of some advanced algorithmic features that have only recently been suggested, namely the automatic adaptation of temperature steps and a multi-histogram analysis of the data at different temperatures.

    Program Files doi: http://dx.doi.org/10.17632/sgzt4b7b3m.1
    Licensing provisions: Creative Commons Attribution license (CC BY 4.0)
    Programming language: C, CUDA
    External routines/libraries: NVIDIA CUDA Toolkit 6.5 or newer
    Nature of problem: The program calculates the internal energy, specific heat, several magnetization moments, entropy and free energy of the 2D Ising model on square lattices of edge length L with periodic boundary conditions as a function of inverse temperature β.
    Solution method: The code uses population annealing, a hybrid method combining Markov chain updates with population control. The code is implemented for NVIDIA GPUs using the CUDA language and employs advanced techniques such as multi-spin coding, adaptive temperature steps and multi-histogram reweighting.
    Additional comments: Code repository at https://github.com/LevBarash/PAising. The system size and size of the population of replicas are limited depending on the memory of the GPU device used. For the default parameter values used in the sample programs, L = 64, θ = 100, β0 = 0, βf = 1, Δβ = 0.005, R = 20 000, a typical run time on an NVIDIA Tesla K80 GPU is 151 seconds for the single-spin-coded (SSC) and 17 seconds for the multi-spin-coded (MSC) program (see Section 2 for a description of these parameters).
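
    The published program is a C/CUDA implementation; the serial Python sketch below only illustrates the algorithm itself (resample the replica population with weights exp(-Δβ E), then apply Metropolis sweeps at the new temperature) for the 2D Ising model with tiny assumed parameters, and makes no attempt to reproduce the GPU code's optimizations.

      # Schematic serial sketch of population annealing for the 2D Ising model.
      # Lattice size, population size, and temperature schedule are small assumed values.
      import numpy as np

      rng = np.random.default_rng(1)
      L, R, d_beta, n_sweeps = 12, 100, 0.05, 1   # lattice edge, population size, beta step, sweeps per step

      def energy(s):
          return -np.sum(s * np.roll(s, 1, 0)) - np.sum(s * np.roll(s, 1, 1))

      def metropolis_sweep(s, beta):
          for _ in range(s.size):
              i, j = rng.integers(0, L, size=2)
              nb = s[(i+1) % L, j] + s[(i-1) % L, j] + s[i, (j+1) % L] + s[i, (j-1) % L]
              dE = 2 * s[i, j] * nb
              if dE <= 0 or rng.random() < np.exp(-beta * dE):
                  s[i, j] *= -1

      population = [rng.choice([-1, 1], size=(L, L)) for _ in range(R)]   # start at beta = 0
      beta = 0.0
      while beta < 1.0:
          energies = np.array([energy(s) for s in population])
          # resample replicas with weights exp(-d_beta * E), keeping the population near size R
          weights = np.exp(-d_beta * (energies - energies.min()))
          counts = rng.poisson(R * weights / weights.sum())
          population = [s.copy() for s, c in zip(population, counts) for _ in range(c)]
          beta += d_beta
          for s in population:
              for _ in range(n_sweeps):
                  metropolis_sweep(s, beta)
          mean_e = np.mean([energy(s) for s in population])
          print(f"beta={beta:.2f}  replicas={len(population)}  <E>={mean_e:.1f}")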

  10. Residential Building Energy Code Field Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    R. Bartlett, M. Halverson, V. Mendon, J. Hathaway, Y. Xie

    This document presents a methodology for assessing baseline energy efficiency in new single-family residential buildings and quantifying related savings potential. The approach was developed by Pacific Northwest National Laboratory (PNNL) for the U.S. Department of Energy (DOE) Building Energy Codes Program with the objective of assisting states as they assess energy efficiency in residential buildings and implementation of their building energy codes, as well as to target areas for improvement through energy codes and broader energy-efficiency programs. It is also intended to facilitate a consistent and replicable approach to research studies of this type and establish a transparent data set to represent baseline construction practices across U.S. states.

  11. Development of an energy storage tank model

    NASA Astrophysics Data System (ADS)

    Buckley, Robert Christopher

    A linearized, one-dimensional energy storage tank model employing an implicit finite difference method is developed, programmed in MATLAB, and demonstrated for different applications. A set of nodal energy equations is developed by considering the energy interactions on a small control volume. The general method of solving these equations is described, as are other features of the simulation program. Two modeling applications are presented: the first using a hot water storage tank with a solar collector and an absorption chiller to cool a building in the summer, the second using a molten salt storage system with a solar collector and steam power plant to generate electricity. Recommendations for further study as well as all of the source code generated in the project are also provided.
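
    The thesis model itself is in MATLAB; the Python sketch below only illustrates the kind of implicit (backward Euler) one-dimensional finite difference step such a tank model takes, using an assumed node count, effective diffusivity, and boundary temperatures rather than the thesis's actual nodal energy equations.

      # Illustrative implicit (backward Euler) finite difference step for 1D thermal
      # stratification in a storage tank; geometry, properties, and boundary temperatures
      # are assumed values, not those of the thesis model.
      import numpy as np

      n, height = 20, 2.0                    # nodes along the tank axis, tank height [m]
      dz = height / n
      alpha = 1.5e-7                         # thermal diffusivity of water [m^2/s] (conduction only)
      dt = 60.0                              # time step [s]
      r = alpha * dt / dz**2

      T = np.full(n, 20.0)                   # initial tank temperature [C]
      T_top, T_bottom = 80.0, 15.0           # fixed boundary temperatures (charging at the top)

      # Assemble the tridiagonal system (I + r*K) T_new = T_old with Dirichlet ends.
      A = np.zeros((n, n))
      for i in range(n):
          if i in (0, n - 1):
              A[i, i] = 1.0                  # boundary rows: temperature prescribed
          else:
              A[i, i - 1], A[i, i], A[i, i + 1] = -r, 1.0 + 2.0 * r, -r

      for _ in range(240):                   # march 4 hours of simulated time
          b = T.copy()
          b[0], b[-1] = T_bottom, T_top
          T = np.linalg.solve(A, b)

      print(np.round(T, 1))                  # axial temperature profile after charging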

  12. Interplay of Laser-Plasma Interactions and Inertial Fusion Hydrodynamics

    DOE PAGES

    Strozzi, D. J.; Bailey, D. S.; Michel, P.; ...

    2017-01-12

    The effects of laser-plasma interactions (LPI) on the dynamics of inertial confinement fusion hohlraums are investigated in this work via a new approach that self-consistently couples reduced LPI models into radiation-hydrodynamics numerical codes. The interplay between hydrodynamics and LPI—specifically stimulated Raman scatter and crossed-beam energy transfer (CBET)—mostly occurs via momentum and energy deposition into Langmuir and ion acoustic waves. This spatially redistributes energy coupling to the target, which affects the background plasma conditions and thus modifies laser propagation. In conclusion, this model shows reduced CBET and significant laser energy depletion by Langmuir waves, which reduce the discrepancy between modeling and data from hohlraum experiments on wall x-ray emission and capsule implosion shape.

  13. Subgroup A : nuclear model codes report to the Sixteenth Meeting of the WPEC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Talou, P.; Chadwick, M. B.; Dietrich, F. S.

    2004-01-01

    The Subgroup A activities focus on the development of nuclear reaction models and codes used in evaluation work for nuclear reactions from the unresolved energy region up to the pion production threshold, and for target nuclides from the low teens and heavier. Much of the effort is devoted by each participant to the continuing development of their own institution's codes. Progress in this arena is reported in detail for each code in the present document. EMPIRE-II is publicly accessible. The release of the TALYS code has been announced for the ND2004 Conference in Santa Fe, NM, October 2004. McGNASH is still under development and is not expected to be released in the very near future. In addition, Subgroup A members have demonstrated a growing interest in working on common modeling and code capabilities, which would significantly reduce the amount of duplicated work, help manage efficiently the growing lines of existing codes, and render code inter-comparisons much easier. A recent and important activity of the Subgroup A has therefore been to develop the framework and the first bricks of the ModLib library, which is constituted of mostly independent pieces of code written in Fortran 90 (and above) to be used in existing and future nuclear reaction codes. Significant progress in the development of ModLib has been made during the past year. Several physics modules have been added to the library, and a few more have been planned in detail for the coming year.

  14. Benchmark Simulations of the Thermal-Hydraulic Responses during EBR-II Inherent Safety Tests using SAM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Rui; Sumner, Tyler S.

    2016-04-17

    An advanced system analysis tool, SAM, is being developed for fast-running, improved-fidelity, whole-plant transient analyses at Argonne National Laboratory under DOE-NE’s Nuclear Energy Advanced Modeling and Simulation (NEAMS) program. As an important part of code development, companion validation activities are being conducted to ensure the performance and validity of the SAM code. This paper presents benchmark simulations of two EBR-II tests, SHRT-45R and BOP-302R, whose data are available through the support of DOE-NE’s Advanced Reactor Technology (ART) program. The code predictions of major primary coolant system parameters are compared with the test results. Additionally, SAS4A/SASSYS-1 code simulation results are also included for a code-to-code comparison.

  15. A Study of Neutron Leakage in Finite Objects

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Slaba, Tony C.; Badavi, Francis F.; Reddell, Brandon D.; Bahadori, Amir A.

    2015-01-01

    A computationally efficient 3DHZETRN code capable of simulating High charge (Z) and Energy (HZE) and light ions (including neutrons) under space-like boundary conditions with enhanced neutron and light ion propagation was recently developed for simple shielded objects. Monte Carlo (MC) benchmarks were used to verify the 3DHZETRN methodology in slab and spherical geometry, and it was shown that 3DHZETRN agrees with MC codes to the degree that various MC codes agree among themselves. One limitation in the verification process is that all of the codes (3DHZETRN and three MC codes) utilize different nuclear models/databases. In the present report, the new algorithm, with well-defined convergence criteria, is used to quantify the neutron leakage from simple geometries to provide means of verifying 3D effects and to provide guidance for further code development.

  16. The application of simulation modeling to the cost and performance ranking of solar thermal power plants

    NASA Technical Reports Server (NTRS)

    Rosenberg, L. S.; Revere, W. R.; Selcuk, M. K.

    1981-01-01

    A computer simulation code was employed to evaluate several generic types of solar power systems (up to 10 MWe). Details of the simulation methodology and the solar plant concepts are given along with cost and performance results. The Solar Energy Simulation computer code (SESII) was used, which optimizes the size of the collector field and energy storage subsystem for given engine-generator and energy-transport characteristics. Nine plant types were examined which employed combinations of different technology options, such as: distributed or central receivers with one- or two-axis tracking or no tracking; point- or line-focusing concentrators; central or distributed power conversion; Rankine, Brayton, or Stirling thermodynamic cycles; and thermal or electrical storage. Optimal cost curves were plotted as a function of levelized busbar energy cost and annualized plant capacity. Point-focusing distributed receiver systems were found to be most efficient (17-26 percent).

  17. Update On the Status of the FLUKA Monte Carlo Transport Code*

    NASA Technical Reports Server (NTRS)

    Ferrari, A.; Lorenzo-Sentis, M.; Roesler, S.; Smirnov, G.; Sommerer, F.; Theis, C.; Vlachoudis, V.; Carboni, M.; Mostacci, A.; Pelliccioni, M.

    2006-01-01

    The FLUKA Monte Carlo transport code is a well-known simulation tool in High Energy Physics. FLUKA is a dynamic tool in the sense that it is being continually updated and improved by the authors. We review the progress achieved since the last CHEP Conference on the physics models, some technical improvements to the code and some recent applications. From the point of view of the physics, improvements have been made with the extension of PEANUT to higher energies for p, n, pi, pbar/nbar and for nbars down to the lowest energies, the addition of the online capability to evolve radioactive products and get subsequent dose rates, and upgrading of the treatment of EM interactions with the elimination of the need to separately prepare preprocessed files. A new coherent photon scattering model, an updated treatment of the photo-electric effect, an improved pair production model, and new photon cross sections from the LLNL Cullen database have been implemented. In the field of nucleus-nucleus interactions the electromagnetic dissociation of heavy ions has been added along with the extension of the interaction models for some nuclide pairs to energies below 100 MeV/A using the BME approach, as well as the development of an improved QMD model for intermediate energies. Both DPMJET 2.53 and 3 remain available along with rQMD 2.4 for heavy ion interactions above 100 MeV/A. Technical improvements include the ability to use parentheses in setting up the combinatorial geometry, the introduction of pre-processor directives in the input stream, a new random number generator with full 64-bit randomness, and new routines for mathematical special functions (adapted from SLATEC). Finally, work is progressing on the deployment of a user-friendly GUI input interface as well as a CAD-like geometry creation and visualization tool. On the application front, FLUKA has been used to extensively evaluate the potential space radiation effects on astronauts for future deep space missions, the activation dose for beam target areas, and dose calculations for radiation therapy, as well as being adapted for use in the simulation of events in the ALICE detector at the LHC.

  18. Large and small-scale structures and the dust energy balance problem in spiral galaxies

    NASA Astrophysics Data System (ADS)

    Saftly, W.; Baes, M.; De Geyter, G.; Camps, P.; Renaud, F.; Guedes, J.; De Looze, I.

    2015-04-01

    The interstellar dust content in galaxies can be traced in extinction at optical wavelengths, or in emission in the far-infrared. Several studies have found that radiative transfer models that successfully explain the optical extinction in edge-on spiral galaxies generally underestimate the observed FIR/submm fluxes by a factor of about three. In order to investigate this so-called dust energy balance problem, we use two Milky Way-like galaxies produced by high-resolution hydrodynamical simulations. We create mock optical edge-on views of these simulated galaxies (using the radiative transfer code SKIRT), and we then fit the parameters of a basic spiral galaxy model to these images (using the fitting code FitSKIRT). The basic model includes smooth axisymmetric distributions: a Sérsic bulge and an exponential disc for the stars, and a second exponential disc for the dust. We find that the dust mass recovered by the fitted models is about three times smaller than the known dust mass of the hydrodynamical input models. This factor is in agreement with previous energy balance studies of real edge-on spiral galaxies. On the other hand, fitting the same basic model to less complex input models (e.g. a smooth exponential disc with a spiral perturbation or with random clumps) does recover the dust mass of the input model almost perfectly. Thus it seems that the complex asymmetries and the inhomogeneous structure of real and hydrodynamically simulated galaxies are a lot more efficient at hiding dust than the rather contrived geometries in typical quasi-analytical models. This effect may help explain the discrepancy between the dust emission predicted by radiative transfer models and the observed emission in energy balance studies for edge-on spiral galaxies.

  19. TRANSP: status and planning

    NASA Astrophysics Data System (ADS)

    Andre, R.; Carlsson, J.; Gorelenkova, M.; Jardin, S.; Kaye, S.; Poli, F.; Yuan, X.

    2016-10-01

    TRANSP is an integrated interpretive and predictive transport analysis tool that incorporates state-of-the-art heating/current drive sources and transport models. The treatments and transport solvers are becoming increasingly sophisticated and comprehensive. For instance, the ISOLVER component provides a free boundary equilibrium solution, while the PT-SOLVER transport solver is especially suited for stiff transport models such as TGLF. TRANSP incorporates high fidelity heating and current drive source models, such as NUBEAM for neutral beam injection, the beam tracing code TORBEAM for EC, TORIC for ICRF, and the ray tracing codes TORAY and GENRAY for EC. The implementation of selected components makes efficient use of MPI for speed-up of code calculations. Recently the GENRAY-CQL3D solver for modeling of LH heating and current drive has been implemented and is currently being extended to multiple antennas, to allow modeling of EAST discharges. Also, GENRAY+CQL3D is being extended to the use of EC/EBW and of HHFW for NSTX-U. This poster will describe present uses of the code worldwide, as well as plans for upgrading the physics modules and code framework. Work supported by the US Department of Energy under DE-AC02-CH0911466.

  20. Simulation of Containment Atmosphere Mixing and Stratification Experiment in the ThAI Facility with a CFD Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Babic, Miroslav; Kljenak, Ivo; Mavko, Borut

    2006-07-01

    The CFD code CFX4.4 was used to simulate an experiment in the ThAI facility, which was designed for investigation of thermal-hydraulic processes during a severe accident inside a Light Water Reactor containment. In the considered experiment, air was initially present in the vessel, and helium and steam were injected during different phases of the experiment at various mass flow rates and at different locations. The main purpose of the proposed work was to assess the capabilities of the CFD code to reproduce the atmosphere structure with a three-dimensional model, coupled with condensation models proposed by the authors. A three-dimensional model of the ThAI vessel for the CFX4.4 code was developed. The flow in the simulation domain was modeled as single-phase. Steam condensation on vessel walls was modeled as a sink of mass and energy using a correlation that was originally developed for an integral approach. A simple model of bulk phase change was also included. Calculated time-dependent variables together with temperature and volume fraction distributions at the end of different experiment phases are compared to experimental results. (authors)
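
    The wall-condensation treatment described above amounts to converting a correlation-based heat flux into sink terms for the near-wall cells. The snippet below is a minimal sketch of that bookkeeping, assuming a user-supplied heat-transfer coefficient; the coefficient, cell geometry, and latent heat value are placeholders, not the authors' correlation.

    ```python
    # Illustrative sketch: wall condensation treated as volumetric sinks of mass
    # and energy in a near-wall cell.  The heat-transfer coefficient, geometry,
    # and latent heat below are placeholder values, not a validated correlation.
    def condensation_sinks(T_bulk, T_wall, wall_area, cell_volume,
                           h_cond=500.0,     # heat-transfer coefficient [W/m^2/K]
                           h_fg=2.26e6):     # latent heat of steam [J/kg]
        """Return (mass sink [kg/s/m^3], energy sink [W/m^3]) for one cell."""
        q_wall = h_cond * wall_area * max(T_bulk - T_wall, 0.0)  # heat flow [W]
        m_dot = q_wall / h_fg                                    # condensed steam [kg/s]
        return m_dot / cell_volume, q_wall / cell_volume

    m_sink, e_sink = condensation_sinks(T_bulk=120.0, T_wall=60.0,
                                        wall_area=0.5, cell_volume=0.05)
    print(f"mass sink {m_sink:.3e} kg/s/m^3, energy sink {e_sink:.3e} W/m^3")
    ```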

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sprague, Michael A.

    Enabled by petascale supercomputing, the next generation of computer models for wind energy will simulate a vast range of scales and physics, spanning from turbine structural dynamics and blade-scale turbulence to mesoscale atmospheric flow. A single model covering all scales and physics is not feasible. Thus, these simulations will require the coupling of different models/codes, each for different physics, interacting at their domain boundaries.

  2. NASA Radiation Protection Research for Exploration Missions

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Cucinotta, Francis A.; Tripathi, Ram K.; Heinbockel, John H.; Tweed, John; Mertens, Christopher J.; Walker, Steve A.; Blattnig, Steven R.; Zeitlin, Cary J.

    2006-01-01

    The HZETRN code was used in recent trade studies for renewed lunar exploration and currently used in engineering development of the next generation of space vehicles, habitats, and EVA equipment. A new version of the HZETRN code capable of simulating high charge and energy (HZE) ions, light-ions and neutrons with either laboratory or space boundary conditions with enhanced neutron and light-ion propagation is under development. Atomic and nuclear model requirements to support that development will be discussed. Such engineering design codes require establishing validation processes using laboratory ion beams and space flight measurements in realistic geometries. We discuss limitations of code validation due to the currently available data and recommend priorities for new data sets.

  3. The Martian climate: Energy balance models with CO2/H2O atmospheres

    NASA Technical Reports Server (NTRS)

    Hoffert, M. I.

    1984-01-01

    Progress in the development of a multi-reservoir, time dependent energy balance climate model for Mars driven by prescribed insolation at the top of the atmosphere is reported. The first approximately half-year of the program was devoted to assembling and testing components of the full model. Specific accomplishments were made on a longwave radiation code, coupling seasonal solar input to a ground temperature simulation, and conceptualizing an approach to modeling the seasonal pressure waves that develop in the Martian atmosphere as a result of sublimation and condensation of CO2 in polar regions.

  4. 75 FR 20833 - Building Energy Codes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-21

    ... DEPARTMENT OF ENERGY Office of Energy Efficiency and Renewable Energy [Docket No. EERE-2010-BT-BC-0012] Building Energy Codes AGENCY: Office of Energy Efficiency and Renewable Energy, Department of Energy. ACTION: Request for Information. SUMMARY: The U.S. Department of Energy (DOE) is soliciting...

  5. Clean Energy in City Codes: A Baseline Analysis of Municipal Codification across the United States

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cook, Jeffrey J.; Aznar, Alexandra; Dane, Alexander

    Municipal governments in the United States are well positioned to influence clean energy (energy efficiency and alternative energy) and transportation technology and strategy implementation within their jurisdictions through planning, programs, and codification. Municipal governments are leveraging planning processes and programs to shape their energy futures. There is limited understanding in the literature related to codification, the primary way that municipal governments enact enforceable policies. The authors fill the gap in the literature by documenting the status of municipal codification of clean energy and transportation across the United States. More directly, we leverage online databases of municipal codes to develop national and state-specific representative samples of municipal governments by population size. Our analysis finds that municipal governments with the authority to set residential building energy codes within their jurisdictions frequently do so. In some cases, communities set codes higher than their respective state governments. Examination of codes across the nation indicates that municipal governments are employing their code as a policy mechanism to address clean energy and transportation.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mueller, Bernhard; Janka, Hans-Thomas; Dimmelmeier, Harald, E-mail: bjmuellr@mpa-garching.mpg.d, E-mail: thj@mpa-garching.mpg.d, E-mail: harrydee@mpa-garching.mpg.d

    We present a new general relativistic code for hydrodynamical supernova simulations with neutrino transport in spherical and azimuthal symmetry (one dimension and two dimensions, respectively). The code is a combination of the COCONUT hydro module, which is a Riemann-solver-based, high-resolution shock-capturing method, and the three-flavor, fully energy-dependent VERTEX scheme for the transport of massless neutrinos. VERTEX integrates the coupled neutrino energy and momentum equations with a variable Eddington factor closure computed from a model Boltzmann equation and uses the 'ray-by-ray plus' approximation in two dimensions, assuming the neutrino distribution to be axially symmetric around the radial direction at every point in space, and thus the neutrino flux to be radial. Our spacetime treatment employs the Arnowitt-Deser-Misner 3+1 formalism with the conformal flatness condition for the spatial three metric. This approach is exact for the one-dimensional case and has previously been shown to yield very accurate results for spherical and rotational stellar core collapse. We introduce new formulations of the energy equation to improve total energy conservation in relativistic and Newtonian hydro simulations with grid-based Eulerian finite-volume codes. Moreover, a modified version of the VERTEX scheme is developed that simultaneously conserves energy and lepton number in the neutrino transport with better accuracy and higher numerical stability in the high-energy tail of the spectrum. To verify our code, we conduct a series of tests in spherical symmetry, including a detailed comparison with published results of the collapse, shock formation, shock breakout, and accretion phases. Long-time simulations of proto-neutron star cooling until several seconds after core bounce both demonstrate the robustness of the new COCONUT-VERTEX code and show the approximate treatment of relativistic effects by means of an effective relativistic gravitational potential as in PROMETHEUS-VERTEX to be remarkably accurate in spherical symmetry.

  7. CHIC - Coupling Habitability, Interior and Crust

    NASA Astrophysics Data System (ADS)

    Noack, Lena; Labbe, Francois; Boiveau, Thomas; Rivoldini, Attilio; Van Hoolst, Tim

    2014-05-01

    We present a new code developed for simulating convection in terrestrial planets and icy moons. The code CHIC is written in Fortran and employs the finite volume method and finite difference method for solving energy, mass and momentum equations in either silicate or icy mantles. The code uses either Cartesian (2D and 3D box) or spherical coordinates (2D cylinder or annulus). It furthermore contains a 1D parametrised model to obtain temperature profiles in specific regions, for example in the iron core or in the silicate mantle (solving only the energy equation). The 2D/3D convection model uses the same input parameters as the 1D model, which allows for comparison of the different models and adaptation of the 1D model, if needed. The code has already been benchmarked for the following aspects: - viscosity-dependent rheology (Blankenbach et al., 1989) - pseudo-plastic deformation (Tosi et al., in preparation phase) - subduction mechanism and plastic deformation (Quinquis et al., in preparation phase) New features that are currently developed and benchmarked include: - compressibility (following King et al., 2009 and Leng and Zhong, 2008) - different melt modules (Plesa et al., in preparation phase) - freezing of an inner core (comparison with GAIA code, Huettig and Stemmer, 2008) - build-up of oceanic and continental crust (Noack et al., in preparation phase) The code represents a useful tool to couple the interior with the surface of a planet (e.g. via build-up and erosion of crust) and its atmosphere (via outgassing on the one hand and, on the other, subduction of hydrated crust and carbonates back into the mantle). It will be applied to investigate several factors that might influence the habitability of a terrestrial planet, and will also be used to simulate icy bodies with high-pressure ice phases. References: Blankenbach et al. (1989). A benchmark comparison for mantle convection codes. GJI 98, 23-38. Huettig and Stemmer (2008). Finite volume discretization for dynamic viscosities on Voronoi grids. PEPI 171(1-4), 137-146. King et al. (2009). A Community Benchmark for 2D Cartesian Compressible Convection in the Earth's Mantle. GJI 179, 1-11. Leng and Zhong (2008). Viscous heating, adiabatic heating and energetic consistency in compressible mantle convection. GJI 173, 693-702.
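
    The 1D parametrised branch mentioned above solves only the energy equation for a radial temperature profile. As a generic illustration of that type of calculation (not the CHIC parameterisation itself), the sketch below solves steady conduction with uniform internal heating in a solid sphere by finite differences and checks the result against the analytic profile; all values are placeholders.

    ```python
    import numpy as np

    # Steady conduction with uniform internal heating Q in a solid sphere of
    # radius R with a fixed surface temperature T_s:
    #   (1/r^2) d/dr ( r^2 k dT/dr ) = -Q,
    # analytic solution: T(r) = T_s + Q (R^2 - r^2) / (6 k).
    # All values are generic placeholders, not the CHIC parameterisation.
    N = 50                      # number of radial intervals
    R = 1.0e6                   # radius [m]
    k = 4.0                     # thermal conductivity [W/m/K]
    Q = 1.0e-8                  # volumetric heating [W/m^3]
    T_s = 300.0                 # surface temperature [K]

    dr = R / N
    r = np.linspace(0.0, R, N + 1)
    A = np.zeros((N + 1, N + 1))
    b = np.full(N + 1, -Q * dr**2 / k)

    A[0, 0], A[0, 1], b[0] = -1.0, 1.0, 0.0     # symmetry (zero gradient) at the centre
    A[N, N], b[N] = 1.0, T_s                    # fixed surface temperature
    for i in range(1, N):
        rp = (r[i] + 0.5 * dr) ** 2             # face areas divided by 4*pi
        rm = (r[i] - 0.5 * dr) ** 2
        A[i, i - 1] = rm / r[i] ** 2
        A[i, i] = -(rp + rm) / r[i] ** 2
        A[i, i + 1] = rp / r[i] ** 2

    T = np.linalg.solve(A, b)
    T_exact = T_s + Q * (R**2 - r**2) / (6.0 * k)
    print(f"centre temperature: numerical {T[0]:.1f} K, analytic {T_exact[0]:.1f} K")
    ```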

  8. Prediction of Turbulence-Generated Noise in Unheated Jets. Part 2; JeNo Users' Manual (Version 1.0)

    NASA Technical Reports Server (NTRS)

    Khavaran, Abbas; Wolter, John D.; Koch, L. Danielle

    2009-01-01

    JeNo (Version 1.0) is a Fortran90 computer code that calculates the far-field sound spectral density produced by axisymmetric, unheated jets at a user-specified observer location and frequency range. The user must provide a structured computational grid and a mean flow solution from a Reynolds-Averaged Navier-Stokes (RANS) code as input. Turbulence kinetic energy and its dissipation rate from a k-epsilon or k-omega turbulence model must also be provided. JeNo is a research code, and as such, its development is ongoing. The goal is to create a code that is able to accurately compute far-field sound pressure levels for jets at all observer angles and all operating conditions. In order to achieve this goal, current theories must be combined with the best practices in numerical modeling, all of which must be validated by experiment. Since the acoustic predictions from JeNo are based on the mean flow solutions from a RANS code, quality predictions depend on accurate aerodynamic input. This is why acoustic source modeling and turbulence modeling, together with the development of advanced measurement systems, are the leading areas of jet noise research at NASA Glenn Research Center.
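
    Acoustic source models of this family build local length, time, and velocity scales from the RANS turbulence quantities, which is why the k and dissipation-rate fields are required input. The snippet below shows the commonly used scalings as a hedged illustration; the proportionality constants are generic placeholders and JeNo's actual source formulation is not reproduced here.

    ```python
    # Commonly used turbulence scalings for acoustic source modeling from RANS
    # k-epsilon output.  The proportionality constants below are generic
    # placeholders; JeNo's actual source formulation is not reproduced here.
    def source_scales(k, eps, c_len=1.0, c_tau=1.0):
        """Return (length scale [m], time scale [s], velocity scale [m/s])."""
        u = (2.0 * k / 3.0) ** 0.5          # turbulence velocity scale
        length = c_len * k ** 1.5 / eps     # integral length scale ~ k^(3/2)/eps
        tau = c_tau * k / eps               # turbulence time scale ~ k/eps
        return length, tau, u

    # Example values typical of a shear-layer cell in a RANS solution (placeholders).
    L, tau, u = source_scales(k=150.0, eps=4.0e4)
    print(f"length {L*1e3:.2f} mm, time {tau*1e6:.1f} us, velocity {u:.1f} m/s")
    ```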

  9. Assessment of the Potential to Achieve very Low Energy Use in Public Buildings in China with Advanced Window and Shading Systems

    DOE PAGES

    Lee, Eleanor; Pang, Xiufeng; McNeil, Andrew; ...

    2015-05-29

    Here, as rapid growth in the construction industry continues to occur in China, the increased demand for a higher standard of living is driving significant growth in energy use and demand across the country. Building codes and standards have been implemented to head off this trend, tightening prescriptive requirements for fenestration component measures using methods similar to the US model energy code American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) 90.1. The objective of this study is to (a) provide an overview of applicable code requirements and current efforts within China to enable characterization and comparison of window and shading products, and (b) quantify the load reduction and energy savings potential of several key advanced window and shading systems, given the divergent views on how space conditioning requirements will be met in the future. System-level heating and cooling loads and energy use performance were evaluated for a code-compliant large office building using the EnergyPlus building energy simulation program. Commercially available, highly insulating, low-emittance windows were found to produce 24-66% lower perimeter zone HVAC electricity use compared to the mandated energy-efficiency standard in force (GB 50189-2005) in cold climates like Beijing. Low-e windows with operable exterior shading produced up to 30-80% reductions in perimeter zone HVAC electricity use in Beijing and 18-38% reductions in Shanghai compared to the standard. The economic context of China is unique since the cost of labor and materials for the building industry is so low. Broad deployment of these commercially available technologies with the proper supporting infrastructure for design, specification, and verification in the field would enable significant reductions in energy use and greenhouse gas emissions in the near term.

  10. Assessment of the Potential to Achieve very Low Energy Use in Public Buildings in China with Advanced Window and Shading Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Eleanor; Pang, Xiufeng; McNeil, Andrew

    Here, as rapid growth in the construction industry continues to occur in China, the increased demand for a higher standard of living is driving significant growth in energy use and demand across the country. Building codes and standards have been implemented to head off this trend, tightening prescriptive requirements for fenestration component measures using methods similar to the US model energy code American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) 90.1. The objective of this study is to (a) provide an overview of applicable code requirements and current efforts within China to enable characterization and comparison of window and shading products, and (b) quantify the load reduction and energy savings potential of several key advanced window and shading systems, given the divergent views on how space conditioning requirements will be met in the future. System-level heating and cooling loads and energy use performance were evaluated for a code-compliant large office building using the EnergyPlus building energy simulation program. Commercially available, highly insulating, low-emittance windows were found to produce 24-66% lower perimeter zone HVAC electricity use compared to the mandated energy-efficiency standard in force (GB 50189-2005) in cold climates like Beijing. Low-e windows with operable exterior shading produced up to 30-80% reductions in perimeter zone HVAC electricity use in Beijing and 18-38% reductions in Shanghai compared to the standard. The economic context of China is unique since the cost of labor and materials for the building industry is so low. Broad deployment of these commercially available technologies with the proper supporting infrastructure for design, specification, and verification in the field would enable significant reductions in energy use and greenhouse gas emissions in the near term.

  11. The application of simulation modeling to the cost and performance ranking of solar thermal power plants

    NASA Technical Reports Server (NTRS)

    Rosenberg, L. S.; Revere, W. R.; Selcuk, M. K.

    1981-01-01

    Small solar thermal power systems (up to 10 MWe in size) were tested. The solar thermal power plant ranking study was performed to aid in experiment activity and support decisions for the selection of the most appropriate technological approach. The cost and performance were determined for insolation conditions by utilizing the Solar Energy Simulation computer code (SESII). This model optimizes the size of the collector field and energy storage subsystem for given engine generator and energy transport characteristics. The development of the simulation tool, its operation, and the results achieved from the analysis are discussed.

  12. Preparation macroconstants to simulate the core of VVER-1000 reactor

    NASA Astrophysics Data System (ADS)

    Seleznev, V. Y.

    2017-01-01

    A dynamic model is used in simulators of the VVER-1000 reactor for training of operating staff and students. The DYNCO code is used to simulate the neutron-physical characteristics; it allows calculations of stationary, transient and emergency processes in real time for different geometries of the reactor lattice [1]. To perform calculations using this code, macroconstants must be prepared for each FA. One way of obtaining macroconstants is to use the WIMS code, which is based on its own 69-group constants library. This paper presents the results of FA calculations obtained with the WIMS code for the VVER-1000 reactor with different fuel and coolant parameters, as well as the method of selecting energy groups for the subsequent calculation of macroconstants.
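
    Macroconstant preparation of this kind typically condenses a fine-group library into a few broad groups by flux weighting so that reaction rates are preserved. The sketch below shows that generic collapse step; the group structure, flux spectrum, and cross sections are made-up placeholders, not WIMS data.

    ```python
    import numpy as np

    # Generic flux-weighted condensation of fine-group cross sections into broad
    # groups.  The fine-group flux and cross sections below are hypothetical
    # placeholders, not values from the WIMS 69-group library.
    def collapse(sigma_fine, flux_fine, fine_to_broad):
        """fine_to_broad[g] gives the broad-group index of fine group g."""
        n_broad = max(fine_to_broad) + 1
        sigma_broad = np.zeros(n_broad)
        flux_broad = np.zeros(n_broad)
        for g, G in enumerate(fine_to_broad):
            sigma_broad[G] += sigma_fine[g] * flux_fine[g]
            flux_broad[G] += flux_fine[g]
        return sigma_broad / flux_broad   # reaction-rate-preserving average

    sigma_fine = np.array([1.2, 1.0, 0.8, 0.6, 0.5, 0.4])   # [1/cm], placeholder
    flux_fine = np.array([0.1, 0.3, 0.6, 1.0, 0.7, 0.2])    # relative spectrum
    fine_to_broad = [0, 0, 0, 1, 1, 1]                      # 6 fine -> 2 broad groups
    print(collapse(sigma_fine, flux_fine, fine_to_broad))
    ```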

  13. A New Non-LTE Model based on Super Configurations

    NASA Astrophysics Data System (ADS)

    Bar-Shalom, A.; Klapisch, M.

    1996-11-01

    Non-LTE effects are vital for the simulation of radiation in hot plasmas involving even medium Z materials. However, the exceedingly large number of atomic energy levels forbids using a detailed collisional radiative model on-line in the hydrodynamic simulations. For this purpose, greatly simplified models are required. We recently implemented Busquet's model (M. Busquet, Phys. Fluids B, 5, 4191 (1993)) in NRL's RAD2D Hydro code in conservative form (M. Klapisch et al., Bull. Am. Phys. Soc., 40, 1806 (1995), and poster at this meeting). This model is quick and the results make sense, but in the absence of precisely defined experiments, it is difficult to assess its accuracy. We present here a new collisional radiative model based on superconfigurations (A. Bar-Shalom, J. Oreg, J. F. Seely, U. Feldman, C. M. Brown, B. A. Hammel, R. W. Lee and C. A. Back, Phys. Rev. E, 52, 6686 (1995)), intended to be a benchmark for approximate models used in hydro-codes. It uses accurate rates from the HULLAC Code. Results for various elements will be presented and compared with RADIOM.

  14. Energy Savings Analysis of the Proposed Revision of the Washington D.C. Non-Residential Energy Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosenberg, Michael I.; Athalye, Rahul A.; Hart, Philip R.

    This report presents the results of an assessment of savings for the proposed Washington D.C. energy code relative to ASHRAE Standard 90.1-2010. It includes annual and life cycle savings for site energy, source energy, energy cost, and carbon dioxide emissions that would result from adoption and enforcement of the proposed code for newly constructed buildings in Washington D.C. over a five year period.

  15. LATIS3D: The Goal Standard for Laser-Tissue-Interaction Modeling

    NASA Astrophysics Data System (ADS)

    London, R. A.; Makarewicz, A. M.; Kim, B. M.; Gentile, N. A.; Yang, T. Y. B.

    2000-03-01

    The goal of this LDRD project has been to create LATIS3D-the world's premier computer program for laser-tissue interaction modeling. The development was based on recent experience with the 2D LATIS code and the ASCI code, KULL. With LATIS3D, important applications in laser medical therapy were researched including dynamical calculations of tissue emulsification and ablation, photothermal therapy, and photon transport for photodynamic therapy. This project also enhanced LLNL's core competency in laser-matter interactions and high-energy-density physics by pushing simulation codes into new parameter regimes and by attracting external expertise. This will benefit both existing LLNL programs such as ICF and SBSS and emerging programs in medical technology and other laser applications. The purpose of this project was to develop and apply a computer program for laser-tissue interaction modeling to aid in the development of new instruments and procedures in laser medicine.

  16. Simulation of the hybrid and steady state advanced operating modes in ITER

    NASA Astrophysics Data System (ADS)

    Kessel, C. E.; Giruzzi, G.; Sips, A. C. C.; Budny, R. V.; Artaud, J. F.; Basiuk, V.; Imbeaux, F.; Joffrin, E.; Schneider, M.; Murakami, M.; Luce, T.; St. John, Holger; Oikawa, T.; Hayashi, N.; Takizuka, T.; Ozeki, T.; Na, Y.-S.; Park, J. M.; Garcia, J.; Tucillo, A. A.

    2007-09-01

    Integrated simulations are performed to establish a physics basis, in conjunction with present tokamak experiments, for the operating modes in the International Thermonuclear Experimental Reactor (ITER). Simulations of the hybrid mode are done using both fixed and free-boundary 1.5D transport evolution codes including CRONOS, ONETWO, TSC/TRANSP, TOPICS and ASTRA. The hybrid operating mode is simulated using the GLF23 and CDBM05 energy transport models. The injected powers are limited to the negative ion neutral beam, ion cyclotron and electron cyclotron heating systems. Several plasma parameters and source parameters are specified for the hybrid cases to provide a comparison of 1.5D core transport modelling assumptions, source physics modelling assumptions, as well as numerous peripheral physics modelling. Initial results indicate that very strict guidelines will need to be imposed on the application of GLF23, for example, to make useful comparisons. Some of the variations among the simulations are due to source models which vary widely among the codes used. In addition, there are a number of peripheral physics models that should be examined, some of which include fusion power production, bootstrap current, treatment of fast particles and treatment of impurities. The hybrid simulations project to fusion gains of 5.6-8.3, βN values of 2.1-2.6 and fusion powers ranging from 350 to 500 MW, under the assumptions outlined in section 3. Simulations of the steady state operating mode are done with the same 1.5D transport evolution codes cited above, except the ASTRA code. In these cases the energy transport model is more difficult to prescribe, so that energy confinement models will range from theory based to empirically based. The injected powers include the same sources as used for the hybrid with the possible addition of lower hybrid. The simulations of the steady state mode project to fusion gains of 3.5-7, βN values of 2.3-3.0 and fusion powers of 290 to 415 MW, under the assumptions described in section 4. These simulations will be presented and compared with particular focus on the resulting temperature profiles, source profiles and peripheral physics profiles. The steady state simulations are at an early stage and are focused on developing a range of safety factor profiles with 100% non-inductive current.

  17. On the Stefan Problem with Volumetric Energy Generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    John Crepeau; Ali Siahpush; Blaine Spotten

    2009-11-01

    This paper presents results of solid-liquid phase change, driven by volumetric energy generation, in a vertical cylinder. We show excellent agreement between a quasi-static, approximate analytical solution valid for Stefan numbers less than one, and a computational model solved using the CFD code FLUENT®. A computational study also shows the effect that the volumetric energy generation has on both the mushy zone thickness and convection in the melt during phase change.
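
    For orientation, the quasi-static approach replaces the transient temperature field by its steady profile and advances the melt front from the interface energy balance. The sketch below applies it to the simpler constant-wall-temperature slab case with no volumetric generation, purely to illustrate the small-Stefan-number method; it is not the paper's cylindrical, internally heated solution, and the material values are placeholders.

    ```python
    import math

    # Quasi-static one-phase Stefan problem in a slab with a constant hot wall:
    #   rho * L * ds/dt = k * (T_w - T_m) / s(t)
    # Analytic quasi-static solution: s(t) = sqrt(2 * k * dT * t / (rho * L)).
    # Material values are placeholders chosen for illustration only.
    rho, L, k = 800.0, 2.0e5, 0.5          # density [kg/m^3], latent heat [J/kg], k [W/m/K]
    dT = 10.0                              # wall superheat above the melting point [K]
    dt, t_end = 1.0, 3600.0                # time step and duration [s]

    s, t = 1.0e-4, 0.0                     # small initial melt thickness [m]
    while t < t_end:
        s += dt * k * dT / (rho * L * s)   # explicit update of the front position
        t += dt

    s_exact = math.sqrt(2.0 * k * dT * t_end / (rho * L))
    print(f"numerical front {s:.4f} m, quasi-static analytic {s_exact:.4f} m")
    ```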

  18. Integrated Earth System Model (iESM)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thornton, Peter Edmond; Mao, Jiafu; Shi, Xiaoying

    2016-12-02

    The iESM is a simulation code that represents the physical and biological aspects of Earth's climate system, and also includes the macro-economic and demographic properties of human societies. The human aspect of the simulation code is focused in particular on the effects of human activities on land use and land cover change, but also includes aspects such as energy economies. The time frame for predictions with iESM is approximately 1970 through 2100.

  19. Load management strategy for Particle-In-Cell simulations in high energy particle acceleration

    NASA Astrophysics Data System (ADS)

    Beck, A.; Frederiksen, J. T.; Dérouillat, J.

    2016-09-01

    In the wake of the intense effort made for the experimental CILEX project, numerical simulation campaigns have been carried out in order to finalize the design of the facility and to identify optimal laser and plasma parameters. These simulations bring, of course, important insight into the fundamental physics at play. As a by-product, they also characterize the quality of our theoretical and numerical models. In this paper, we compare the results given by different codes and point out algorithmic limitations both in terms of physical accuracy and computational performances. These limitations are illustrated in the context of electron laser wakefield acceleration (LWFA). The main limitation we identify in state-of-the-art Particle-In-Cell (PIC) codes is computational load imbalance. We propose an innovative algorithm to deal with this specific issue as well as milestones towards a modern, accurate high-performance PIC code for high energy particle acceleration.
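
    Load imbalance of the kind discussed above is usually quantified as the ratio of the most loaded domain to the mean load, and mitigated by moving patch boundaries so that each process holds a similar number of macro-particles. The sketch below shows that balancing step in its simplest one-dimensional form; it is a generic illustration with a synthetic particle distribution, not the algorithm proposed in the paper.

    ```python
    import numpy as np

    # Generic 1-D load balancing for a PIC decomposition: choose patch boundaries
    # so each MPI rank holds roughly the same number of macro-particles.
    # The particle distribution below is a synthetic stand-in for a wakefield run.
    def balanced_boundaries(particle_x, n_ranks, x_min, x_max):
        """Return n_ranks + 1 boundary positions equalizing particle counts."""
        quantiles = np.linspace(0.0, 1.0, n_ranks + 1)
        bounds = np.quantile(particle_x, quantiles)
        bounds[0], bounds[-1] = x_min, x_max    # pin the domain edges
        return bounds

    rng = np.random.default_rng(0)
    x = rng.normal(loc=0.7, scale=0.05, size=100_000)   # strongly bunched beam
    x = np.clip(x, 0.0, 1.0)

    uniform = np.linspace(0.0, 1.0, 9)                  # 8 equal-size patches
    counts_uniform = np.histogram(x, bins=uniform)[0]
    counts_balanced = np.histogram(x, bins=balanced_boundaries(x, 8, 0.0, 1.0))[0]
    print("imbalance (max/mean), uniform :", counts_uniform.max() / counts_uniform.mean())
    print("imbalance (max/mean), balanced:", counts_balanced.max() / counts_balanced.mean())
    ```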

  20. FLUSH: A tool for the design of slush hydrogen flow systems

    NASA Technical Reports Server (NTRS)

    Hardy, Terry L.

    1990-01-01

    As part of the National Aerospace Plane Project an analytical model was developed to perform calculations for in-line transfer of solid-liquid mixtures of hydrogen. This code, called FLUSH, calculates pressure drop and solid fraction loss for the flow of slush hydrogen through pipe systems. The model solves the steady-state, one-dimensional equation of energy to obtain slush loss estimates. A description of the code is provided as well as a guide for users of the program. Preliminary results are also presented showing the anticipated degradation of slush hydrogen solid content for various piping systems.
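
    A calculation of this type can be pictured as a marching momentum and energy balance along the pipe. The sketch below is a generic illustration under simplified assumptions (constant properties, constant Darcy friction factor, a user-specified heat leak per unit length); it is not the FLUSH code, and the property values are approximate placeholders.

    ```python
    import math

    # Generic marching balance for slush flow in a pipe: pressure drop from
    # Darcy-Weisbach friction and solid-fraction loss from an ambient heat leak.
    # All property values and the heat-leak term are illustrative placeholders.
    rho = 81.5        # slush hydrogen density [kg/m^3], approximate placeholder
    h_sl = 58.2e3     # heat of fusion of hydrogen [J/kg], approximate placeholder
    D = 0.05          # pipe diameter [m]
    f = 0.02          # Darcy friction factor (assumed constant)
    m_dot = 0.5       # mass flow rate [kg/s]
    q_leak = 5.0      # heat leak per unit length [W/m], placeholder
    L_pipe, dz = 50.0, 0.5

    area = math.pi * D**2 / 4.0
    v = m_dot / (rho * area)                          # bulk velocity [m/s]

    p_drop, x_solid, z = 0.0, 0.5, 0.0                # initial solid mass fraction 0.5
    while z < L_pipe:
        p_drop += f * (dz / D) * 0.5 * rho * v**2     # Darcy-Weisbach increment [Pa]
        x_solid -= q_leak * dz / (m_dot * h_sl)       # melting from the heat leak
        z += dz

    print(f"pressure drop {p_drop/1e3:.2f} kPa, outlet solid fraction {x_solid:.3f}")
    ```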

  1. ScintSim1: A new Monte Carlo simulation code for transport of optical photons in 2D arrays of scintillation detectors

    PubMed Central

    Mosleh-Shirazi, Mohammad Amin; Zarrini-Monfared, Zinat; Karbasi, Sareh; Zamani, Ali

    2014-01-01

    Two-dimensional (2D) arrays of thick segmented scintillators are of interest as X-ray detectors for both 2D and 3D image-guided radiotherapy (IGRT). Their detection process involves ionizing radiation energy deposition followed by production and transport of optical photons. Only a very limited number of optical Monte Carlo simulation models exist, which has limited the number of modeling studies that have considered both stages of the detection process. We present ScintSim1, an in-house optical Monte Carlo simulation code for 2D arrays of scintillation crystals, developed in the MATLAB programming environment. The code was rewritten and revised based on an existing program for single-element detectors, with the additional capability to model 2D arrays of elements with configurable dimensions, material, etc. The code generates and follows each optical photon history through the detector element (and, in case of cross-talk, the surrounding ones) until it reaches a configurable receptor, or is attenuated. The new model was verified by testing against relevant theoretically known behaviors or quantities and the results of a validated single-element model. For both sets of comparisons, the discrepancies in the calculated quantities were all <1%. The results validate the accuracy of the new code, which is a useful tool in scintillation detector optimization. PMID:24600168
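
    In its simplest form, the optical transport stage reduces to generating isotropic photons, sampling attenuation, and testing whether each history reaches the receptor face. The toy Monte Carlo below, for a bare one-dimensional scintillator slab with no reflective surfaces, is offered only to illustrate the photon-history loop; it does not represent ScintSim1's 2D array geometry or optics.

    ```python
    import math
    import random

    # Toy optical Monte Carlo: photons created inside a scintillator slab of
    # thickness T_SLAB are emitted isotropically, attenuated with mean free path
    # MFP, and counted if they reach the bottom (receptor) face.  Geometry and
    # numbers are illustrative placeholders, far simpler than a 2-D segmented array.
    T_SLAB = 10.0      # slab thickness [mm]
    MFP = 50.0         # optical attenuation length [mm]
    N_PHOTONS = 100_000
    random.seed(1)

    collected = 0
    for _ in range(N_PHOTONS):
        z = random.uniform(0.0, T_SLAB)          # emission depth above the receptor
        mu = random.uniform(-1.0, 1.0)           # cosine of polar angle (isotropic)
        if mu >= 0.0:                            # moving away from the receptor: lost
            continue
        path = z / -mu                           # straight-line distance to the bottom face
        if random.random() < math.exp(-path / MFP):
            collected += 1                       # survived attenuation -> detected

    print(f"collection efficiency: {collected / N_PHOTONS:.3f}")
    ```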

  2. ScintSim1: A new Monte Carlo simulation code for transport of optical photons in 2D arrays of scintillation detectors.

    PubMed

    Mosleh-Shirazi, Mohammad Amin; Zarrini-Monfared, Zinat; Karbasi, Sareh; Zamani, Ali

    2014-01-01

    Two-dimensional (2D) arrays of thick segmented scintillators are of interest as X-ray detectors for both 2D and 3D image-guided radiotherapy (IGRT). Their detection process involves ionizing radiation energy deposition followed by production and transport of optical photons. Only a very limited number of optical Monte Carlo simulation models exist, which has limited the number of modeling studies that have considered both stages of the detection process. We present ScintSim1, an in-house optical Monte Carlo simulation code for 2D arrays of scintillation crystals, developed in the MATLAB programming environment. The code was rewritten and revised based on an existing program for single-element detectors, with the additional capability to model 2D arrays of elements with configurable dimensions, material, etc. The code generates and follows each optical photon history through the detector element (and, in case of cross-talk, the surrounding ones) until it reaches a configurable receptor, or is attenuated. The new model was verified by testing against relevant theoretically known behaviors or quantities and the results of a validated single-element model. For both sets of comparisons, the discrepancies in the calculated quantities were all <1%. The results validate the accuracy of the new code, which is a useful tool in scintillation detector optimization.

  3. Super Configuration Accounting Equation of State for WDM and HED plasma

    NASA Astrophysics Data System (ADS)

    Lee, T. G.; Busquet, M.; Gilles, D.; Klapisch, M.

    2017-10-01

    Rad-Hydro numerical codes require Equation of State (EOS) and opacities. We describe a procedure to obtain an EOS compatible with our STA opacity model. We use our relativistic super-configuration code, STA-2.5, to compute the average 〈Z〉, the excitation-ionization internal energy U, and the chemical potential. These and other data will serve as basic inputs into a Yukawa Monte-Carlo improved version of quotidian EOS, known as YMCQ. The screening in the Yukawa potential describing the ion-ion interaction is modified by the data from STA. This integrated procedure yields the excess internal energy due to the non-ideal behavior of the EOS concordant with our opacity model and allows us to have an EOS model applicable from solid matter to very tenuous plasmas as found in laser fusion, astrophysics, or tokamaks. We shall present its application to Carbon, Aluminum and Iron. This work is made possible by financial support from DOE/NNSA.

  4. A Kinetics Model for KrF Laser Amplifiers

    NASA Astrophysics Data System (ADS)

    Giuliani, J. L.; Kepple, P.; Lehmberg, R.; Obenschain, S. P.; Petrov, G.

    1999-11-01

    A computer kinetics code has been developed to model the temporal and spatial behavior of an e-beam pumped KrF laser amplifier. The deposition of the primary beam electrons is assumed to be spatially uniform and the energy distribution function of the nascent electron population is calculated to be near Maxwellian below 10 eV. For an initial Kr/Ar/F2 composition, the code calculates the densities of 24 species subject to over 100 reactions with 1-D spatial resolution (typically 16 zones) along the longitudinal lasing axis. Enthalpy accounting for each process is performed to partition the energy into internal, thermal, and radiative components. The electron as well as the heavy particle temperatures are followed for energy conservation and excitation rates. Transport of the lasing photons is performed along the axis on a dense subgrid using the method of characteristics. Amplified spontaneous emission is calculated using a discrete ordinates approach and includes contributions to the local intensity from the whole amplifier volume. Specular reflection off side walls and the rear mirror are included. Results of the model will be compared with data from the NRL NIKE laser and other published results.

  5. Integral Transport Analysis Results for Ions Flowing Through Neutral Gas

    NASA Astrophysics Data System (ADS)

    Emmert, Gilbert; Santarius, John

    2017-10-01

    Results of a computational model for the flow of energetic ions and neutrals through a background neutral gas will be presented. The method models reactions as creating a new source of ions or neutrals if the energy or charge state of the resulting particle is changed. For a given source boundary condition, the creation and annihilation of the various species is formulated as a 1-D Volterra integral equation that can quickly be solved numerically by finite differences. The present work focuses on multiple-pass, 1-D ion flow through neutral gas and a nearly transparent, concentric anode and cathode pair in spherical, cylindrical, or linear geometry. This has been implemented as a computer code for atomic (3He, 3He+, 3He++) and molecular (D, D2, D-, D+, D2+, D3+) ion and neutral species, and applied to modeling inertial-electrostatic confinement (IEC) devices. The code yields detailed energy spectra of the various ions and energetic neutral species. Calculations for several University of Wisconsin IEC and ion implantation devices will be presented. Research supported by US Dept. of Homeland Security Grant 2015-DN-077-ARI095, Dept. of Energy Grant DE-FG02-04ER54745, and the Grainger Foundation.
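
    A Volterra integral equation of the second kind can indeed be marched through in a single pass by finite differences, because the unknown at each node depends only on earlier nodes. The sketch below shows a generic trapezoidal solver with a simple test kernel whose exact solution is exp(x); it illustrates the numerical approach only and is not the paper's species source equations.

    ```python
    import math

    # Generic trapezoidal solver for a Volterra integral equation of the second
    # kind, f(x) = g(x) + integral_0^x K(x, t) f(t) dt, solved by marching in x.
    # The test kernel below (K = 1, g = 1, exact solution f = exp(x)) only checks
    # the scheme; it is not the ion/neutral source formulation of the paper.
    def solve_volterra(g, K, x_max, n):
        h = x_max / n
        x = [i * h for i in range(n + 1)]
        f = [g(x[0])]
        for i in range(1, n + 1):
            s = 0.5 * K(x[i], x[0]) * f[0]                    # trapezoid end point
            s += sum(K(x[i], x[j]) * f[j] for j in range(1, i))
            f_i = (g(x[i]) + h * s) / (1.0 - 0.5 * h * K(x[i], x[i]))
            f.append(f_i)
        return x, f

    x, f = solve_volterra(g=lambda x: 1.0, K=lambda x, t: 1.0, x_max=1.0, n=100)
    print(f"f(1) = {f[-1]:.5f}  (exact e = {math.e:.5f})")
    ```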

  6. FastDart: a fast, accurate and friendly version of DART code.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rest, J.; Taboada, H.

    2000-11-08

    A new, enhanced, visual version of the DART code is presented. DART is a mechanistic-model-based code developed for the performance calculation and assessment of aluminum dispersion fuel. Major features of this new version are a new, time-saving calculation routine able to run on a PC, a friendly visual input interface, and a plotting facility. This version, available for silicide and U-Mo fuels, adds faster execution and visual interfaces to the classical accuracy of DART models for fuel performance prediction. It is part of a collaboration agreement between ANL and CNEA in the area of Low Enriched Uranium Advanced Fuels, held under the Implementation Arrangement for Technical Exchange and Cooperation in the Area of Peaceful Uses of Nuclear Energy.

  7. Code dependencies of pre-supernova evolution and nucleosynthesis in massive stars: evolution to the end of core helium burning

    DOE PAGES

    Jones, S.; Hirschi, R.; Pignatari, M.; ...

    2015-01-15

    We present a comparison of 15 M⊙, 20 M⊙ and 25 M⊙ stellar models from three different codes (GENEC, KEPLER and MESA) and their nucleosynthetic yields. The models are calculated from the main sequence up to the pre-supernova (pre-SN) stage and do not include rotation. The GENEC and KEPLER models hold physics assumptions that are characteristic of the two codes. The MESA code is generally more flexible; overshooting of the convective core during the hydrogen and helium burning phases in MESA is chosen such that the CO core masses are consistent with those in the GENEC models. Full nucleosynthesis calculations are performed for all models using the NuGrid post-processing tool MPPNP, and the key energy-generating nuclear reaction rates are the same for all codes. We are thus able to highlight the key differences between the models that are caused by the contrasting physics assumptions and numerical implementations of the three codes. A reasonable agreement is found between the surface abundances predicted by the models computed using the different codes, with GENEC exhibiting the strongest enrichment of H-burning products and KEPLER exhibiting the weakest. There are large variations in both the structure and composition of the models (the 15 M⊙ and 20 M⊙ in particular) at the pre-SN stage from code to code, caused primarily by convective shell merging during the advanced stages. For example, the C-shell abundances of O, Ne and Mg predicted by the three codes span one order of magnitude in the 15 M⊙ models. For the alpha elements between Si and Fe the differences are even larger. The s-process abundances in the C shell are modified by the merging of convective shells; the modification is strongest in the 15 M⊙ model, in which the C-shell material is exposed to O-burning temperatures and the γ-process is activated. The variation in the s-process abundances across the codes is smallest in the 25 M⊙ models, where it is comparable to the impact of nuclear reaction rate uncertainties. In general the differences in the results from the three codes are due to their contrasting physics assumptions (e.g. prescriptions for mass loss and convection). The broadly similar evolution of the 25 M⊙ models gives us reassurance that different stellar evolution codes do produce similar results. For the 15 M⊙ and 20 M⊙ models, however, the different input physics and the interplay between the various convective zones lead to important differences in both the pre-supernova structure and nucleosynthesis predicted by the three codes. For the KEPLER models the core masses are different and therefore an exact match could not be expected.

  8. Airside HVAC BESTEST: HVAC Air-Distribution System Model Test Cases for ASHRAE Standard 140

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Judkoff, Ronald; Neymark, Joel; Kennedy, Mike D.

    This paper summarizes recent work to develop new airside HVAC equipment model analytical verification test cases for ANSI/ASHRAE Standard 140, Standard Method of Test for the Evaluation of Building Energy Analysis Computer Programs. The analytical verification test method allows comparison of simulation results from a wide variety of building energy simulation programs with quasi-analytical solutions, further described below. Standard 140 is widely cited for evaluating software for use with performance-path energy efficiency analysis, in conjunction with well-known energy-efficiency standards including ASHRAE Standard 90.1, the International Energy Conservation Code, and other international standards. Airside HVAC equipment is a common area of modelling not previously explicitly tested by Standard 140. Integration of the completed test suite into Standard 140 is in progress.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roth, Amir; Goldwasser, David; Parker, Andrew

    The OpenStudio software development kit has played a significant role in the adoption of the EnergyPlus whole building energy modeling engine and in the development and launch of new applications that use EnergyPlus for a variety of purposes, from design to auditing to code compliance and management of large portfolios. One of the most powerful features of the OpenStudio platform is Measure, a scripting facility similar to Excel's Visual Basic macros. Measures can be used to apply energy conservation measures to models (hence the name), to create reports and visualizations, and even to sew together custom workflows. Measures automate tedious tasks, increasing modeler productivity and reducing error. Measures have also become a currency in the OpenStudio tools ecosystem, a way to codify knowledge and protocol and transfer it from one modeler to another, either within an organization or within the global modeling community. This paper describes some of the many applications of Measures.

  10. Plasma separation process. Betacell (BCELL) code, user's manual

    NASA Astrophysics Data System (ADS)

    Taherzadeh, M.

    1987-11-01

    The emergence of clearly defined applications for (small or large) amounts of long-life and reliable power sources has given the design and production of betavoltaic systems a new life. Moreover, because of the availability of the Plasma Separation Program (PSP) at TRW, it is now possible to separate the most desirable radioisotopes for betacell power generating devices. A computer code, named BCELL, has been developed to model the betavoltaic concept by utilizing the available up-to-date source/cell parameters. In this program, attempts have been made to determine the betacell energy device maximum efficiency, degradation due to the emitting source radiation, and source/cell lifetime power reduction processes. Additionally, comparison is made between the Schottky and PN junction devices for betacell battery design purposes. Certain computer code runs have been made to determine the JV distribution function and the upper limit of the betacell generated power for specified energy sources. A Ni beta-emitting radioisotope was used for the energy source and certain semiconductors were used for the converter subsystem of the betacell system. Some results for a Promethium source are also given here for comparison.

  11. Intercomparison of Radiation Codes in Climate Models (ICRCCM) Infrared (Clear-Sky) Line-by Line Radiative Fluxes (DB1002)

    DOE Data Explorer

    Arking, A.; Ridgeway, B.; Clough, T.; Iacono, M.; Fomin, B.; Trotsenko, A.; Freidenreich, S.; Schwarzkopf, D.

    1994-01-01

    The intercomparison of Radiation Codes in Climate Models (ICRCCM) study was launched under the auspices of the World Meteorological Organization and with the support of the U.S. Department of Energy to document differences in results obtained with various radiation codes and radiation parameterizations in general circulation models (GCMs). ICRCCM produced benchmark, longwave, line-by-line (LBL) fluxes that may be compared against each other and against models of lower spectral resolution. During ICRCCM, infrared fluxes and cooling rates for several standard model atmospheres with varying concentrations of water vapor, carbon dioxide, and ozone were calculated with LBL methods at resolutions of 0.01 cm-1 or higher. For comparison with other models, values were summed for the IR spectrum and given at intervals of 5 or 10 cm-1. This archive contains fluxes for ICRCCM-prescribed clear-sky cases. Radiative flux and cooling-rate profiles are given for specified atmospheric profiles for temperature, water vapor, and ozone-mixing ratios. The archive contains 328 files, including spectral summaries, formatted data files, and a variety of programs (i.e., C-shell scripts, FORTRAN codes, and IDL programs) to read, reformat, and display data. Collectively, these files require approximately 59 MB of disk space.
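
    Comparisons against lower-resolution models use the line-by-line output summed into coarse spectral intervals, as noted above. The snippet below shows that binning step in generic form on a synthetic spectrum; it is not a reader for the archive's actual file formats, which are handled by the read programs included with the data.

    ```python
    import numpy as np

    # Sum a synthetic high-resolution spectrum into 10 cm^-1 intervals, the kind
    # of coarse binning used when comparing line-by-line fluxes with band models.
    # The spectrum here is random placeholder data, not an ICRCCM archive file.
    rng = np.random.default_rng(42)
    wavenumber = np.arange(0.0, 2500.0, 0.01)            # 0.01 cm^-1 grid
    spectral_flux = rng.random(wavenumber.size) * 1e-3   # [W m^-2 (cm^-1)^-1]

    bin_width = 10.0
    edges = np.arange(0.0, 2500.0 + bin_width, bin_width)
    # Integrate within each bin: sum of (flux * grid spacing).
    binned, _ = np.histogram(wavenumber, bins=edges, weights=spectral_flux * 0.01)

    print("total flux [W/m^2]:", binned.sum())
    print("first three 10 cm^-1 bins:", np.round(binned[:3], 4))
    ```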

  12. Development and application of computational aerothermodynamics flowfield computer codes

    NASA Technical Reports Server (NTRS)

    Venkatapathy, Ethiraj

    1994-01-01

    Research was performed in the area of computational modeling and application of hypersonic, high-enthalpy, thermo-chemical nonequilibrium flow (aerothermodynamics) problems. A number of computational fluid dynamic (CFD) codes were developed and applied to simulate high-altitude rocket plumes, the Aeroassist Flight Experiment (AFE), hypersonic base flow for planetary probes, the single expansion ramp nozzle (SERN) connected with the National Aerospace Plane, hypersonic drag devices, hypersonic ramp flows, ballistic range models, shock tunnel facility nozzles, transient and steady flows in the shock tunnel facility, arc-jet flows, thermochemical nonequilibrium flows around simple and complex bodies, axisymmetric ionized flows of interest to re-entry, unsteady shock-induced combustion phenomena, high-enthalpy pulsed facility simulations, and unsteady shock boundary layer interactions in shock tunnels. Computational modeling involved developing appropriate numerical schemes for the flows of interest and developing, applying, and validating appropriate thermochemical processes. As part of improving the accuracy of the numerical predictions, adaptive grid algorithms were explored, and a user-friendly, self-adaptive code (SAGE) was developed. Aerothermodynamic flows of interest included energy transfer due to strong radiation, and a significant level of effort was spent in developing computational codes for calculating radiation and radiation modeling.

  13. Li-SF(6) Combustion in Stored Chemical Energy Propulsion Systems

    DTIC Science & Technology

    1990-07-01

    Appropriate thermodynamic models and thermo-chemical data for multicomponents and immiscible phases have been incorporated into a code ... The structure of SF6 jets in molten Li is described by a simplified integral model which was improved by the use of the local homogeneous flow approximation, equilibrium combustion model and Kc-C-g

  14. Review on Monte-Carlo Tools for Simulating Relativistic Runaway Electron Avalanches and the Propagation of Terrestrial Gamma-Ray Flashes in the Atmosphere

    NASA Astrophysics Data System (ADS)

    Sarria, D.

    2016-12-01

    The field of High Energy Atmospheric Physics (HEAP) includes the study of energetic events related to thunderstorms, such as Terrestrial Gamma-ray Flashes (TGF), associated electron-positron beams (TEB), gamma-ray glows and Thunderstorm Ground Enhancements (TGE). Understanding these phenomena requires accurate models for the interaction of particles with atmospheric air and electro-magnetic fields in the <100 MeV energy range. This study is the next step of the work presented in [C. Rutjes et al., 2016] that compared the performance of various codes in the absence of electro-magnetic fields. In the first part, we quantify simple but informative test cases of electrons in various electric field profiles. We will compare the avalanche length (of the Relativistic Runaway Electron Avalanche (RREA) process), the photon/electron spectra and spatial scattering. In particular, we test the effect of the low-energy threshold, which was found to be very important [Skeltved et al., 2014]. Note that even without a field, it was found to be important because of the straggling effect [C. Rutjes et al., 2016]. For this first part, we will be comparing GEANT4 (different flavours), FLUKA and the custom made code GRRR. In the second part, we test the propagation of these high energy particles in the atmosphere, from production altitude (around 10 km to 18 km) to satellite altitude (600 km). We use a simple and clearly fixed set-up for the atmospheric density, the geomagnetic field, the initial conditions, and the detection conditions of the particles. For this second part, we will be comparing GEANT4 (different flavours), FLUKA/CORSIKA and the custom made code MC-PEPTITA. References: C. Rutjes et al., 2016. Evaluation of Monte Carlo tools for high energy atmospheric physics. Geosci. Model Dev. Under review. Skeltved, A. B. et al., 2014. Modelling the relativistic runaway electron avalanche and the feedback mechanism with geant4. JGRA, doi:10.1002/2014JA020504.

  15. DEM code-based modeling of energy accumulation and release in structurally heterogeneous rock masses

    NASA Astrophysics Data System (ADS)

    Lavrikov, S. V.; Revuzhenko, A. F.

    2015-10-01

    Based on the discrete element method, the authors model the loading of a physical specimen in order to describe its capacity to accumulate and release elastic energy. The specimen is modeled as a packing of particles with viscoelastic coupling and friction. The external elastic boundary of the packing is represented by particles connected by elastic springs; this amounts to introducing an additional interaction potential between the boundary particles that acts even when they are not in direct contact. On the whole, the model specimen represents an element of a medium capable of accumulating deformation energy in the form of internal stresses. The numerical modeling of specimen compression and the laboratory test results show good qualitative consistency.
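
    As a rough illustration of the contact-force ingredients named above (viscoelastic coupling, friction, and boundary springs), the following sketch implements a spring-dashpot normal force with a Coulomb-limited tangential drag and an always-on boundary spring; all stiffness, damping, and friction values are invented for illustration and are not taken from the record above.

```python
import numpy as np

# Illustrative constants (not from the paper).
KN, CN = 1.0e6, 50.0     # normal spring stiffness [N/m], normal dashpot damping [N*s/m]
CT, MU = 10.0, 0.5       # tangential damping [N*s/m], Coulomb friction coefficient
K_BOUNDARY = 2.0e5       # stiffness of the springs linking boundary particles [N/m]

def viscoelastic_contact_force(xi, xj, vi, vj, ri, rj):
    """Spring-dashpot normal force plus Coulomb-limited tangential drag
    acting on particle i due to contact with particle j (2-D toy version)."""
    dx = xj - xi
    dist = np.linalg.norm(dx)
    overlap = (ri + rj) - dist
    if overlap <= 0.0:
        return np.zeros(2)               # no contact
    n = dx / dist                        # unit normal, pointing i -> j
    vrel = vj - vi
    vn = np.dot(vrel, n)                 # normal relative speed
    fn = KN * overlap - CN * vn          # viscoelastic (spring + dashpot) normal force
    vt = vrel - vn * n                   # tangential relative velocity
    ft = CT * vt                         # tangential drag on particle i
    ft_max = MU * abs(fn)                # Coulomb friction limit
    ft_norm = np.linalg.norm(ft)
    if ft_norm > ft_max and ft_norm > 0.0:
        ft = ft * (ft_max / ft_norm)
    return -fn * n + ft                  # total contact force acting on particle i

def boundary_spring_force(xi, xk, rest_length):
    """Elastic spring between two boundary particles; acts even without contact."""
    dx = xk - xi
    dist = np.linalg.norm(dx)
    return K_BOUNDARY * (dist - rest_length) * dx / dist

# Example: two slightly overlapping particles of radius 0.01 m, j approaching i.
f = viscoelastic_contact_force(np.zeros(2), np.array([0.019, 0.0]),
                               np.zeros(2), np.array([-0.1, 0.0]), 0.01, 0.01)
print("contact force on particle i:", f)
```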

  16. Simulation of neutron leakage using a heterogeneous B1 model for fast-neutron and light-water reactors

    NASA Astrophysics Data System (ADS)

    Faure, Bastien

    The neutronic calculation of a reactor core is usually done in two steps. After solving the neutron transport equation over an elementary domain of the core, a set of parameters, namely macroscopic cross sections and possibly diffusion coefficients, is defined in order to perform a full-core calculation. In the first step, the cell or assembly is calculated using fundamental-mode theory, the pattern being inserted in an infinite lattice of periodic structures. This simple representation allows a precise treatment of the geometry and the energy variable and can be handled within transport theory with minimal approximations. However, it supposes that the reactor core can be treated as a periodic lattice of elementary domains, which is itself a strong assumption, and it cannot, at first sight, take into account neutron leakage between two different zones or out of the core. Leakage models correct the transport equation with an additional leakage term in order to represent this phenomenon. For historical reasons, numerical methods for solving the transport equation being limited by computer hardware (processor speeds and memory sizes), the leakage term is in most cases modeled by a homogeneous and isotropic probability within a "homogeneous leakage model". Driven by advances in computing, "heterogeneous leakage models" have been developed and implemented in several neutron transport calculation codes. This work focuses on a study of some of those models, including the TIBERE model from the DRAGON-3 code developed at Ecole Polytechnique de Montreal, as well as the heterogeneous model from the APOLLO-3 code developed at the Commissariat a l'Energie Atomique et aux energies alternatives. The research, based on sodium-cooled fast reactors and light water reactors, demonstrates the advantages of those models over a homogeneous leakage model. In particular, it has been shown that a heterogeneous model has a significant impact on the calculation of the out-of-core leakage rate, which permits a better estimate of the transport-equation eigenvalue keff. Neutron streaming between two zones of different composition was also shown to be calculated more accurately.

  17. Prediction of high-energy radiation belt electron fluxes using a combined VERB-NARMAX model

    NASA Astrophysics Data System (ADS)

    Pakhotin, I. P.; Balikhin, M. A.; Shprits, Y.; Subbotin, D.; Boynton, R.

    2013-12-01

    This study is concerned with the modelling and forecasting of energetic electron fluxes that endanger satellites in space. By combining data-driven predictions from the NARMAX methodology with the physics-based VERB code, it becomes possible to predict electron fluxes with a high level of accuracy over radial distances ranging from inside the local acceleration region to beyond geosynchronous orbit. The model coupling also makes it possible to avoid accounting for seed electron variations at the outer boundary. Conversely, combining a convection code with the VERB and NARMAX models has the potential to provide even greater forecasting accuracy that is not limited to geostationary orbit but makes predictions across the entire outer radiation belt region.
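
    A minimal sketch of the data-driven half of such a coupling is given below: a polynomial NARX-type one-step-ahead predictor fitted by least squares (the moving-average noise terms of a full NARMAX model are omitted). The synthetic "flux" series and "solar wind" driver are purely illustrative and are not related to the VERB or NARMAX implementations referenced above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for an electron-flux series y and a solar-wind driver u.
n = 500
u = rng.normal(size=n)
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.6 * y[t - 1] - 0.2 * y[t - 2] + 0.4 * u[t - 1] \
           + 0.1 * y[t - 1] * u[t - 1] + 0.05 * rng.normal()

def narx_design(y, u):
    """Polynomial NARX regressors: lagged outputs, lagged input, one cross term, bias."""
    rows = []
    for t in range(2, len(y)):
        rows.append([y[t - 1], y[t - 2], u[t - 1], y[t - 1] * u[t - 1], 1.0])
    return np.array(rows), y[2:]

X, target = narx_design(y, u)
theta, *_ = np.linalg.lstsq(X, target, rcond=None)   # least-squares fit of the model
print("estimated coefficients:", np.round(theta, 3))

# One-step-ahead prediction for the last sample.
print("one-step prediction vs actual:", X[-1] @ theta, target[-1])
```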

  18. Coal Market Module - NEMS Documentation

    EIA Publications

    2014-01-01

    Documents the objectives and the conceptual and methodological approach used in the development of the National Energy Modeling System's (NEMS) Coal Market Module (CMM) used to develop the Annual Energy Outlook 2014 (AEO2014). This report catalogues and describes the assumptions, methodology, estimation techniques, and source code of CMM's two submodules. These are the Coal Production Submodule (CPS) and the Coal Distribution Submodule (CDS).

  19. Modeling multi-GeV class laser-plasma accelerators with INF&RNO

    NASA Astrophysics Data System (ADS)

    Benedetti, Carlo; Schroeder, Carl; Bulanov, Stepan; Geddes, Cameron; Esarey, Eric; Leemans, Wim

    2016-10-01

    Laser plasma accelerators (LPAs) can produce accelerating gradients on the order of tens to hundreds of GV/m, making them attractive as compact particle accelerators for radiation production or as drivers for future high-energy colliders. Understanding and optimizing the performance of LPAs requires detailed numerical modeling of the nonlinear laser-plasma interaction. We present simulation results, obtained with the computationally efficient PIC/fluid code INF&RNO (INtegrated Fluid & paRticle simulatioN cOde), concerning present (multi-GeV stages) and future (10 GeV stages) LPA experiments performed with the BELLA PW laser system at LBNL. In particular, we illustrate the issues related to the guiding of a high-intensity, short-pulse laser when a realistic description of both the laser driver and the background plasma is adopted. Work supported by the U.S. Department of Energy under contract No. DE-AC02-05CH11231.

  20. Fast Faraday cup for fast ion beam TOF measurements in deuterium filled plasma focus device and correlation with Lee model

    NASA Astrophysics Data System (ADS)

    Damideh, Vahid; Ali, Jalil; Saw, Sor Heoh; Rawat, Rajdeep Singh; Lee, Paul; Chaudhary, Kashif Tufail; Rizvi, Zuhaib Haider; Dabagh, Shadab; Ismail, Fairuz Diyana; Sing, Lee

    2017-06-01

    In this work, the design and construction of a 50 Ω fast Faraday cup are presented, together with its results, in correlation with the Lee Model Code, for fast ion beam and ion time-of-flight measurements in a deuterium-filled plasma focus device. Fast ion beam properties such as ion flux, fluence, speed, and energy at 2-8 Torr deuterium are studied. The narrowest ion signal, with a full width at half maximum of 34 ns, was captured by the Faraday cup at 12 kV and 3 Torr deuterium in the INTI PF. The maximum ion energy of 67 ± 5 keV was detected by the Faraday cup at 4 Torr deuterium. Ion time-of-flight measurements with the Faraday cup show consistent correlation with Lee Model Code results for deuterium, especially near optimum pressures.
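
    The time-of-flight inference behind such measurements reduces to E = (1/2) m (d/t)^2 for a non-relativistic ion. The short sketch below evaluates this relation for a deuteron using an illustrative drift length and signal delay; the numbers are hypothetical and do not describe the INTI PF geometry.

```python
# Kinetic energy of a deuteron from time-of-flight: E = 0.5 * m * (d / t)**2.
M_DEUTERON = 3.3436e-27      # kg
E_CHARGE   = 1.602e-19       # J per eV

def ion_energy_keV(flight_distance_m, flight_time_s):
    """Non-relativistic deuteron kinetic energy inferred from a TOF measurement."""
    v = flight_distance_m / flight_time_s
    return 0.5 * M_DEUTERON * v**2 / E_CHARGE / 1e3

# Hypothetical example: 0.30 m drift length, 120 ns delay of the ion signal.
print(f"{ion_energy_keV(0.30, 120e-9):.1f} keV")
```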

  1. Extension of the energy range of the experimental activation cross-sections data of longer-lived products of proton induced nuclear reactions on dysprosium up to 65 MeV.

    PubMed

    Tárkányi, F; Ditrói, F; Takács, S; Hermanne, A; Ignatyuk, A V

    2015-04-01

    Activation cross-section data for longer-lived products of proton-induced nuclear reactions on dysprosium were extended up to 65 MeV by using the stacked-foil irradiation and gamma spectrometry experimental methods. Experimental cross-section data for the formation of the radionuclides (159)Dy, (157)Dy, (155)Dy, (161)Tb, (160)Tb, (156)Tb, (155)Tb, (154m2)Tb, (154m1)Tb, (154g)Tb, (153)Tb, (152)Tb and (151)Tb are reported in the 36-65 MeV energy range and compared with an old dataset from 1964. The experimental data were also compared with the results of cross-section calculations with the ALICE and EMPIRE nuclear model codes and with the TALYS nuclear reaction model code, as listed in the latest on-line TENDL-2013 library.
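
    For orientation, the sketch below evaluates the idealized thin-target activation relation sigma = A_EOB / (n_t * phi * (1 - exp(-lambda * t_irr))) that underlies such measurements; it is not the full stacked-foil analysis used in the paper, and all numbers are illustrative.

```python
import math

def activation_cross_section_mb(A_eob_bq, half_life_s, areal_density_atoms_cm2,
                                beam_particles_per_s, t_irr_s):
    """Thin-target production cross section (millibarn) from the end-of-bombardment
    activity A_EOB = n_t * sigma * phi * (1 - exp(-lambda * t_irr))."""
    lam = math.log(2.0) / half_life_s
    saturation = 1.0 - math.exp(-lam * t_irr_s)
    sigma_cm2 = A_eob_bq / (areal_density_atoms_cm2 * beam_particles_per_s * saturation)
    return sigma_cm2 / 1.0e-27          # 1 mb = 1e-27 cm^2

# Illustrative numbers only: 50 kBq at EOB, 3.7 d half-life, 1e20 atoms/cm^2 foil,
# 100 nA proton beam (about 6.24e11 protons/s), 1 h irradiation.
print(f"{activation_cross_section_mb(5.0e4, 3.7 * 86400, 1.0e20, 6.24e11, 3600.0):.1f} mb")
```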

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adamek, Julian; Daverio, David; Durrer, Ruth

    We present a new N-body code, gevolution, for the evolution of large-scale structure in the Universe. Our code is based on a weak-field expansion of General Relativity and calculates all six metric degrees of freedom in Poisson gauge. N-body particles are evolved by solving the geodesic equation, which we write in terms of a canonical momentum such that it remains valid also for relativistic particles. We validate the code by considering the Schwarzschild solution and, in the Newtonian limit, by comparing with the Newtonian N-body codes Gadget-2 and RAMSES. We then proceed with a simulation of large-scale structure in a Universe with massive neutrinos, where we study the gravitational slip induced by the neutrino shear stress. The code can be extended to include different kinds of dark energy or modified gravity models, going beyond the usually adopted quasi-static approximation. Our code is publicly available.
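
    As a toy illustration of why a momentum formulation remains valid for relativistic particles, the sketch below advances a particle with a kick-drift update written in terms of momentum, using v = p / sqrt(m^2 + |p|^2) in units with c = 1. This is not the gevolution geodesic integrator, and the force field is an invented placeholder.

```python
import numpy as np

def kick_drift(x, p, accel, dt, m=1.0):
    """One kick-drift step with momentum as the evolved variable (units with c = 1).
    The velocity p / sqrt(m^2 + |p|^2) stays below 1 even for large momenta,
    which is why a momentum formulation works for relativistic particles."""
    p = p + m * accel(x) * dt                      # kick
    v = p / np.sqrt(m**2 + np.dot(p, p))           # relativistic velocity
    x = x + v * dt                                 # drift
    return x, p

# Toy example: an attractive central acceleration (not a cosmological force law).
accel = lambda x: -0.1 * x / (np.linalg.norm(x)**3 + 1e-3)

x, p = np.array([1.0, 0.0, 0.0]), np.array([0.0, 5.0, 0.0])   # |p| >> m: relativistic
for _ in range(1000):
    x, p = kick_drift(x, p, accel, dt=0.01)
print("final position:", x, " speed:", np.linalg.norm(p) / np.sqrt(1 + np.dot(p, p)))
```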

  3. Calibration and comparison of the NASA Lewis free-piston Stirling engine model predictions with RE-1000 test data

    NASA Technical Reports Server (NTRS)

    Geng, Steven M.

    1987-01-01

    A free-piston Stirling engine performance code is being upgraded and validated at the NASA Lewis Research Center under an interagency agreement between the Department of Energy's Oak Ridge National Laboratory and NASA Lewis. Many modifications were made to the free-piston code in an attempt to decrease the calibration effort. A procedure was developed that made the code calibration process more systematic. Engine-specific calibration parameters are often used to bring predictions and experimental data into better agreement. The code was calibrated to a matrix of six experimental data points. Predictions of the calibrated free-piston code are compared with RE-1000 free-piston Stirling engine sensitivity test data taken at NASA Lewis. Reasonable agreement was obtained between the code prediction and the experimental data over a wide range of engine operating conditions.

  4. Calibration and comparison of the NASA Lewis free-piston Stirling engine model predictions with RE-1000 test data

    NASA Technical Reports Server (NTRS)

    Geng, Steven M.

    1987-01-01

    A free-piston Stirling engine performance code is being upgraded and validated at the NASA Lewis Research Center under an interagency agreement between the Department of Energy's Oak Ridge National Laboratory and NASA Lewis. Many modifications were made to the free-piston code in an attempt to decrease the calibration effort. A procedure was developed that made the code calibration process more systematic. Engine-specific calibration parameters are often used to bring predictions and experimental data into better agreement. The code was calibrated to a matrix of six experimental data points. Predictions of the calibrated free-piston code are compared with RE-1000 free-piston Stirling engine sensitivity test data taken at NASA Lewis. Reasonable agreement was obtained between the code predictions and the experimental data over a wide range of engine operating conditions.

  5. Theoretical studies of Resonance Enhanced Stimulated Raman Scattering (RESRS) of frequency doubled Alexandrite laser wavelength in cesium vapor

    NASA Technical Reports Server (NTRS)

    Lawandy, Nabil M.

    1987-01-01

    The third phase of research will focus on the propagation and energy extraction of the pump and SERS beams in a variety of configurations, including oscillator structures. In order to address these questions, a numerical code capable of allowing for saturation and full transverse beam evolution is required. The method proposed is based on a discretized propagation and energy extraction model which uses a Kirchhoff integral propagator coupled to the three-level Raman model already developed. The model will have the resolution required by diffraction limits and will use the previous density matrix results in the adiabatic following limit. Owing to its large computational requirements, such a code must be implemented on a vector array processor. One code on the Cyber is being tested by using previously understood two-level laser models as guidelines for interpreting the results. Two tests were implemented: the evolution of modes in a passive resonator and the evolution of a stable state of the adiabatically eliminated laser equations. These results show mode shapes and diffraction losses for the first case and relaxation oscillations for the second. Finally, in order to clarify how the computing methodology exploits the Cyber's computational speed, the time required to run both of the computations described above on the Cyber and on the VAX 730 must be measured. Also included is a short description of the current laser model (CAVITY.FOR) and a flow chart of the test computations.

  6. FlexibleSUSY-A spectrum generator generator for supersymmetric models

    NASA Astrophysics Data System (ADS)

    Athron, Peter; Park, Jae-hyeon; Stöckinger, Dominik; Voigt, Alexander

    2015-05-01

    We introduce FlexibleSUSY, a Mathematica and C++ package, which generates a fast, precise C++ spectrum generator for any SUSY model specified by the user. The generated code is designed with both speed and modularity in mind, making it easy to adapt and extend with new features. The model is specified by supplying the superpotential, gauge structure and particle content in a SARAH model file; specific boundary conditions e.g. at the GUT, weak or intermediate scales are defined in a separate FlexibleSUSY model file. From these model files, FlexibleSUSY generates C++ code for self-energies, tadpole corrections, renormalization group equations (RGEs) and electroweak symmetry breaking (EWSB) conditions and combines them with numerical routines for solving the RGEs and EWSB conditions simultaneously. The resulting spectrum generator is then able to solve for the spectrum of the model, including loop-corrected pole masses, consistent with user specified boundary conditions. The modular structure of the generated code allows for individual components to be replaced with an alternative if available. FlexibleSUSY has been carefully designed to grow as alternative solvers and calculators are added. Predefined models include the MSSM, NMSSM, E6SSM, USSM, R-symmetric models and models with right-handed neutrinos.
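
    To indicate the kind of calculation a generated spectrum generator automates, the sketch below integrates the one-loop MSSM gauge-coupling RGEs, dg_i/d ln mu = b_i g_i^3 / (16 pi^2) with b = (33/5, 1, -3), from approximate weak-scale inputs. It is a Python illustration only; FlexibleSUSY generates C++ code that solves the full coupled two-loop system with boundary conditions and thresholds.

```python
import numpy as np
from scipy.integrate import solve_ivp

B = np.array([33.0 / 5.0, 1.0, -3.0])     # one-loop MSSM beta coefficients

def rge(t, g):
    """dg_i/dt = b_i g_i^3 / (16 pi^2), with t = ln(mu / GeV)."""
    return B * g**3 / (16.0 * np.pi**2)

# Approximate GUT-normalized couplings at the Z mass (illustrative values).
g_mz = np.array([0.462, 0.652, 1.221])
sol = solve_ivp(rge, (np.log(91.19), np.log(2.0e16)), g_mz,
                dense_output=True, rtol=1e-8)

mu = 2.0e16                                # near the apparent unification scale
g_high = sol.sol(np.log(mu))
print("alpha_i^{-1} at mu = 2e16 GeV:", np.round(4.0 * np.pi / g_high**2, 2))
```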

  7. Flexible configuration-interaction shell-model many-body solver

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Calvin W.; Ormand, W. Erich; McElvain, Kenneth S.

    BIGSTICK is a flexible configuration-interaction open-source shell-model code for the many-fermion problem in a shell-model (occupation representation) framework. BIGSTICK can generate energy spectra, static and transition one-body densities, and expectation values of scalar operators. Using the built-in Lanczos algorithm, one can compute transition probability distributions and decompose wave functions into components defined by group theory.

  8. Electron cyclotron thruster new modeling results preparation for initial experiments

    NASA Technical Reports Server (NTRS)

    Hooper, E. Bickford

    1993-01-01

    The following topics are discussed: a whistler-based electron cyclotron resonance heating (ECRH) thruster; cross-field coupling in the helicon approximation; wave propagation; wave structure; plasma density; wave absorption; the electron distribution function; isothermal and adiabatic plasma flow; ECRH thruster modeling; a PIC code model; electron temperature; electron energy; and initial experimental tests. The discussion is presented in vugraph form.

  9. Computer modeling of pulsed CO2 lasers for lidar applications

    NASA Technical Reports Server (NTRS)

    Spiers, Gary D.

    1993-01-01

    The object of this effort is to develop code to enable the accurate prediction of the performance of pulsed transversely excited (TE) CO2 lasers prior to their construction. This is of particular benefit to the NASA Laser Atmospheric Wind Sounder (LAWS) project. A benefit of the completed code is that although developed specifically for the pulsed CO2 laser, much of the code can be modified to model other laser systems of interest to the lidar community. A Boltzmann equation solver has been developed which enables the electron excitation rates for the vibrational levels of CO2 and N2, together with the electron ionization and attachment coefficients, to be determined for any CO2 laser gas mixture consisting of a combination of CO2, N2, CO, and He. The validity of the model has been verified by comparison with published material. The results from the Boltzmann equation solver have been used as input to the laser kinetics code which is currently under development. A numerical code to model the laser-induced medium perturbation (LIMP) arising from the relaxation of the lower laser level has been developed and used to determine the effect of LIMP on the frequency spectrum of the LAWS laser output pulse. The enclosed figures show representative results for a laser operating at 0.5 atm with a discharge cross-section of 4.5 cm to produce a 20 J pulse with a FWHM of 3.1 microns. The first four plots show the temporal evolution of the laser pulse power, energy evolution, LIMP frequency chirp, and electric field magnitude. The electric field magnitude is obtained by beating the calculated complex electric field with a local oscillator signal. The remaining two figures show the power spectrum and energy distribution in the pulse as a function of the varying pulse frequency. The LIMP theory has been compared with experimental data from the NOAA Windvan lidar and has been found to be in good agreement.
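
    As a toy illustration of the heterodyne step described above, the sketch below beats a chirped Gaussian pulse against a local oscillator and extracts the instantaneous frequency and power spectrum with an FFT; the pulse and oscillator parameters are invented and bear no relation to the LAWS laser.

```python
import numpy as np

# Illustrative pulse: Gaussian envelope with a linear frequency chirp (made-up values).
fs = 200.0e6                                 # sample rate [Hz]
t = np.arange(0, 20e-6, 1.0 / fs)            # 20 microseconds of signal
f_if, chirp_rate = 10.0e6, 0.2e12            # 10 MHz offset, 0.2 MHz per microsecond chirp
envelope = np.exp(-((t - 6e-6) / 2e-6) ** 2)
signal = envelope * np.exp(1j * 2 * np.pi * (f_if * t + 0.5 * chirp_rate * t**2))

f_lo = 2.0e6                                 # local oscillator frequency
lo = np.exp(1j * 2 * np.pi * f_lo * t)
beat = signal * np.conj(lo)                  # heterodyne mixing (complex beat note)

# Instantaneous frequency (the "chirp") and the pulse power spectrum.
inst_freq = np.gradient(np.unwrap(np.angle(beat)), t) / (2 * np.pi)
spectrum = np.abs(np.fft.fftshift(np.fft.fft(beat)))**2
freqs = np.fft.fftshift(np.fft.fftfreq(t.size, 1.0 / fs))
print("spectrum peaks at %.2f MHz" % (freqs[np.argmax(spectrum)] / 1e6))
print("instantaneous frequency sweeps %.2f -> %.2f MHz across the pulse"
      % (inst_freq[t.searchsorted(4e-6)] / 1e6, inst_freq[t.searchsorted(8e-6)] / 1e6))
```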

  10. XSEOS: An Open Software for Chemical Engineering Thermodynamics

    ERIC Educational Resources Information Center

    Castier, Marcelo

    2008-01-01

    An Excel add-in--XSEOS--that implements several excess Gibbs free energy models and equations of state has been developed for educational use. Several traditional and modern thermodynamic models are available in the package with a user-friendly interface. XSEOS has open code, is freely available, and should be useful for instructors and students…

  11. Warthog: Progress on Coupling BISON and PROTEUS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, Shane W.D.

    The Nuclear Energy Advanced Modeling and Simulation (NEAMS) program from the Office of Nuclear Energy at the Department of Energy (DOE) provides a robust toolkit for modeling and simulation of current and future advanced nuclear reactor designs. This toolkit provides these technologies organized across product lines, with two divisions targeted at fuels and end-to-end reactor modeling, and a third for integration, coupling, and high-level workflow management. The Fuels Product Line (FPL) and the Reactor Product Line (RPL) provide advanced computational technologies that serve each respective field effectively. There is currently a lack of integration between the product lines, impeding future improvements of simulation solution fidelity. In order to mix and match tools across the product lines, a new application called Warthog was produced. Warthog is built on the Multiphysics Object-Oriented Simulation Environment (MOOSE) framework developed at Idaho National Laboratory (INL). This report details the continuing efforts to provide the Integration Product Line (IPL) with interoperability using the Warthog code. Currently, this application strives to couple the BISON fuel performance application from the FPL with the PROTEUS core neutronics application from the RPL. Warthog leverages as much prior work from the NEAMS program as possible, enabling interoperability between the independently developed MOOSE and SHARP frameworks, and the libMesh and MOAB mesh data formats. Previous work on Warthog allowed it to couple a pin cell between the two codes. However, as the temperature changed due to the BISON calculation, the cross sections were not recalculated, leading to errors as the temperature moved further away from the initial conditions. XSProc from the SCALE code suite was used to calculate the cross sections as needed. The remainder of this report discusses the changes to Warthog to allow for the implementation of XSProc as an external code. It also discusses the changes made to Warthog to allow it to fit more cleanly into the MultiApp syntax of the MOOSE framework. The capabilities, design, and limitations of Warthog are described, in addition to some of the test cases that were used to demonstrate the code. Future plans for Warthog are also discussed, including continuation of the modifications to the input and coupling to other SHARP codes such as Nek5000.

  12. A Computational Model for Predicting Gas Breakdown

    NASA Astrophysics Data System (ADS)

    Gill, Zachary

    2017-10-01

    Pulsed-inductive discharges are a common method of producing a plasma. They provide a mechanism for quickly and efficiently generating a large volume of plasma for rapid use and are seen in applications including propulsion, fusion power, and high-power lasers. However, some common designs see a delayed response time due to the plasma forming when the magnitude of the magnetic field in the thruster is at a minimum. New designs are difficult to evaluate due to the amount of time needed to construct a new geometry and the high monetary cost of changing the power generation circuit. To more quickly evaluate new designs and better understand the shortcomings of existing designs, a computational model is developed. This model uses a modified single-electron model as the basis for a Mathematica code to determine how the energy distribution in a system changes with respect to time and location. By analyzing this energy distribution, the approximate time and location of initial plasma breakdown can be predicted. The results from this code are then compared to existing data to show its validity and shortcomings. Missouri S&T APLab.
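
    A toy version of the single-electron picture is sketched below: the electron's equation of motion is stepped through a damped sinusoidal electric field (a stand-in for a pulsed-inductive discharge) and the time at which its kinetic energy first exceeds an ionization threshold is recorded. Field amplitude, frequency, damping, and threshold are all invented values, not parameters from the model described above.

```python
import numpy as np

QE, ME = 1.602e-19, 9.109e-31         # electron charge [C] and mass [kg]
E_ION_EV = 15.6                        # illustrative ionization threshold [eV]

def e_field(t):
    """Damped sinusoidal field as a stand-in for a pulsed-inductive discharge [V/m]."""
    return 150.0 * np.exp(-t / 5.0e-6) * np.sin(2 * np.pi * 1.0e6 * t)

# Explicit time stepping of dv/dt = -(e/m) E(t); the electron starts at rest.
dt, v, t = 1.0e-10, 0.0, 0.0
t_breakdown = None
while t < 10.0e-6:
    v += -(QE / ME) * e_field(t) * dt
    t += dt
    energy_ev = 0.5 * ME * v**2 / QE
    if t_breakdown is None and energy_ev > E_ION_EV:
        t_breakdown = t
print("kinetic energy first exceeds the threshold at t =", t_breakdown, "s")
```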

  13. Effective field theory of cosmic acceleration: Constraining dark energy with CMB data

    NASA Astrophysics Data System (ADS)

    Raveri, Marco; Hu, Bin; Frusciante, Noemi; Silvestri, Alessandra

    2014-08-01

    We introduce EFTCAMB/EFTCosmoMC as publicly available patches to the commonly used CAMB/CosmoMC codes. We briefly describe the structure of the codes, their applicability and main features. To illustrate the use of these patches, we obtain constraints on parametrized pure effective field theory and designer f(R) models, both on ΛCDM and wCDM background expansion histories, using data from Planck temperature and lensing potential spectra, WMAP low-ℓ polarization spectra (WP), and baryon acoustic oscillations (BAO). Upon inspecting the theoretical stability of the models on the given background, we find nontrivial parameter spaces that we translate into viability priors. We use different combinations of data sets to show their individual effects on cosmological and model parameters. Our data analysis results show that, depending on the adopted data sets, in the wCDM background case these viability priors could dominate the marginalized posterior distributions. Interestingly, with Planck+WP+BAO+lensing data, in f(R) gravity models, we get very strong constraints on the constant dark energy equation of state, w0 ∈ (-1, -0.9997) (95% C.L.).

  14. Energy Referencing in LANL HE-EOS Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leiding, Jeffery Allen; Coe, Joshua Damon

    2017-10-19

    Here, we briefly describe the choice of energy referencing in LANL's HE-EOS codes, HEOS and MAGPIE. Understanding this is essential to comparing energies produced by different EOS codes, as well as to the correct calculation of shock Hugoniots of HEs and other materials. In all equations after (3) throughout this report, all energies, enthalpies, and volumes are assumed to be molar quantities.

  15. Overview of NASA Multi-dimensional Stirling Convertor Code Development and Validation Effort

    NASA Technical Reports Server (NTRS)

    Tew, Roy C.; Cairelli, James E.; Ibrahim, Mounir B.; Simon, Terrence W.; Gedeon, David

    2002-01-01

    A NASA grant has been awarded to Cleveland State University (CSU) to develop a multi-dimensional (multi-D) Stirling computer code with the goals of improving loss predictions and identifying component areas for improvements. The University of Minnesota (UMN) and Gedeon Associates are teamed with CSU. Development of test rigs at UMN and CSU and validation of the code against test data are part of the effort. The one-dimensional (1-D) Stirling codes used for design and performance prediction do not rigorously model regions of the working space where abrupt changes in flow area occur (such as manifolds and other transitions between components). Certain hardware experiences have demonstrated large performance gains by varying manifolds and heat exchanger designs to improve flow distributions in the heat exchangers. 1-D codes were not able to predict these performance gains. An accurate multi-D code should improve understanding of the effects of area changes along the main flow axis, sensitivity of performance to slight changes in internal geometry, and, in general, the understanding of various internal thermodynamic losses. The commercial CFD-ACE code has been chosen for development of the multi-D code. This 2-D/3-D code has highly developed pre- and post-processors and moving-boundary capability. Preliminary attempts at validation of CFD-ACE models of MIT gas spring and "two space" test rigs were encouraging. Also, CSU's simulations of the UMN oscillating-flow rig compare well with flow visualization results from UMN. A complementary Department of Energy (DOE) Regenerator Research effort is aiding in development of regenerator matrix models that will be used in the multi-D Stirling code. This paper reports on the progress and challenges of this effort.

  16. Operational advances in ring current modeling using RAM-SCB

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Welling, Daniel T; Jordanova, Vania K; Zaharia, Sorin G

    The Ring current Atmosphere interaction Model with Self-Consistently calculated 3D Magnetic field (RAM-SCB) combines a kinetic model of the ring current with a force-balanced model of the magnetospheric magnetic field to create an inner magnetospheric model that is magnetically self-consistent. RAM-SCB produces a wealth of outputs that are valuable to space weather applications. For example, the anisotropic particle distribution of the keV-energy population calculated by the code is key for predicting surface charging on spacecraft. Furthermore, radiation belt codes stand to benefit substantially from RAM-SCB calculated magnetic field values and plasma wave growth rates - both important for determining the evolution of relativistic electron populations. RAM-SCB is undergoing development to bring these benefits to the space weather community. Data-model validation efforts are underway to assess the performance of the system. 'Virtual Satellite' capability has been added to yield satellite-specific particle distribution and magnetic field output. The code's outer boundary is being expanded to 10 Earth radii to encompass previously neglected geosynchronous orbits and to allow the code to be driven completely by either empirical or first-principles-based inputs. These advances are culminating in a new, real-time version of the code, rtRAM-SCB, that can monitor inner magnetosphere conditions on both a global and a spacecraft-specific level. This paper summarizes these new features as well as the benefits they provide to the space weather community.

  17. On fully three-dimensional resistive wall mode and feedback stabilization computations

    NASA Astrophysics Data System (ADS)

    Strumberger, E.; Merkel, P.; Sempf, M.; Günter, S.

    2008-05-01

    Resistive walls, located close to the plasma boundary, reduce the growth rates of external kink modes to resistive time scales. For such slowly growing resistive wall modes, the stabilization by an active feedback system becomes feasible. The fully three-dimensional stability code STARWALL and the feedback optimization code OPTIM have been developed [P. Merkel and M. Sempf, 21st IAEA Fusion Energy Conference 2006, Chengdu, China (International Atomic Energy Agency, Vienna, 2006), paper TH/P3-8] to compute the growth rates of resistive wall modes in the presence of nonaxisymmetric, multiply connected wall structures and to model the active feedback stabilization of these modes. In order to demonstrate the capabilities of the codes and to study the effect of the toroidal mode coupling caused by multiply connected wall structures, the codes are applied to test equilibria using the resistive wall structures currently under debate for ITER [M. Shimada et al., Nucl. Fusion 47, S1 (2007)] and ASDEX Upgrade [W. Köppendörfer et al., Proceedings of the 16th Symposium on Fusion Technology, London, 1990 (Elsevier, Amsterdam, 1991), Vol. 1, p. 208].

  18. On fully three-dimensional resistive wall mode and feedback stabilization computations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strumberger, E.; Merkel, P.; Sempf, M.

    2008-05-15

    Resistive walls, located close to the plasma boundary, reduce the growth rates of external kink modes to resistive time scales. For such slowly growing resistive wall modes, the stabilization by an active feedback system becomes feasible. The fully three-dimensional stability code STARWALL and the feedback optimization code OPTIM have been developed [P. Merkel and M. Sempf, 21st IAEA Fusion Energy Conference 2006, Chengdu, China (International Atomic Energy Agency, Vienna, 2006), paper TH/P3-8] to compute the growth rates of resistive wall modes in the presence of nonaxisymmetric, multiply connected wall structures and to model the active feedback stabilization of these modes. In order to demonstrate the capabilities of the codes and to study the effect of the toroidal mode coupling caused by multiply connected wall structures, the codes are applied to test equilibria using the resistive wall structures currently under debate for ITER [M. Shimada et al., Nucl. Fusion 47, S1 (2007)] and ASDEX Upgrade [W. Köppendörfer et al., Proceedings of the 16th Symposium on Fusion Technology, London, 1990 (Elsevier, Amsterdam, 1991), Vol. 1, p. 208].

  19. Energy Efficiency Program Administrators and Building Energy Codes

    EPA Pesticide Factsheets

    Explore how energy efficiency program administrators have helped advance building energy codes at federal, state, and local levels—using technical, institutional, financial, and other resources—and discusses potential next steps.

  20. STAR-CCM+ Verification and Validation Plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pointer, William David

    2016-09-30

    The commercial Computational Fluid Dynamics (CFD) code STAR-CCM+ provides general-purpose finite volume method solutions for fluid dynamics and energy transport. This document defines plans for verification and validation (V&V) of the base code and models implemented within the code by the Consortium for Advanced Simulation of Light water reactors (CASL). The software quality assurance activities described herein are part of the overall software life cycle defined in the CASL Software Quality Assurance (SQA) Plan [Sieger, 2015]. STAR-CCM+ serves as the principal foundation for development of an advanced predictive multi-phase boiling simulation capability within CASL. The CASL Thermal Hydraulics Methods (THM) team develops advanced closure models required to describe the subgrid-resolution behavior of secondary fluids or fluid phases in multiphase boiling flows within the Eulerian-Eulerian framework of the code. These include wall heat partitioning models that describe the formation of vapor on the surface and the forces that define bubble/droplet dynamic motion. The CASL models are implemented as user coding or field functions within the general framework of the code. This report defines procedures and requirements for V&V of the multi-phase CFD capability developed by CASL THM. Results of V&V evaluations will be documented in a separate STAR-CCM+ V&V assessment report. This report is expected to be a living document and will be updated as additional validation cases are identified and adopted as part of the CASL THM V&V suite.

  1. Implementing nationally determined contributions: building energy policies in India’s mitigation strategy

    NASA Astrophysics Data System (ADS)

    Yu, Sha; Evans, Meredydd; Kyle, Page; Vu, Linh; Tan, Qing; Gupta, Ashu; Patel, Pralit

    2018-03-01

    The Nationally Determined Contributions are allowing countries to examine options for reducing emissions through a range of domestic policies. India, like many developing countries, has committed to reducing emissions through specific policies, including building energy codes. Here we assess the potential of these sectoral policies to help in achieving mitigation targets. Collectively, it is critically important to see the potential impact of such policies across developing countries in meeting national and global emission goals. Buildings accounted for around one third of global final energy use in 2010, and building energy consumption is expected to increase as income grows in developing countries. Using the Global Change Assessment Model, this study finds that implementing a range of energy efficiency policies robustly can reduce total Indian building energy use by 22% and lower total Indian carbon dioxide emissions by 9% in 2050 compared to the business-as-usual scenario. Among various policies, energy codes for new buildings can result in the most significant savings. For all building energy policies, well-coordinated, consistent implementation is critical, which requires coordination across different departments and agencies, improving capacity of stakeholders, and developing appropriate institutions to facilitate policy implementation.

  2. System Advisor Model (SAM) General Description (Version 2017.9.5)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freeman, Janine M; DiOrio, Nicholas A; Blair, Nathan J

    This document describes the capabilities of the System Advisor Model (SAM) developed and distributed by the U.S. Department of Energy's National Renewable Energy Laboratory. The document is for potential users and others wanting to learn about the model's capabilities. SAM is a techno-economic computer model that calculates performance and financial metrics of renewable energy projects. Project developers, policy makers, equipment manufacturers, and researchers use graphs and tables of SAM results in the process of evaluating financial, technology, and incentive options for renewable energy projects. SAM simulates the performance of photovoltaic, concentrating solar power, solar water heating, wind, geothermal, biomass, and conventional power systems. The financial models are for projects that either buy and sell electricity at retail rates (residential and commercial) or sell electricity at a price determined in a power purchase agreement (PPA). SAM's simulation tools facilitate parametric and sensitivity analyses, Monte Carlo simulation, and weather variability (P50/P90) studies. SAM can also read input variables from Microsoft Excel worksheets. For software developers, the SAM software development kit (SDK) makes it possible to use SAM simulation modules in their applications written in C/C++, C#, Java, Python, MATLAB, and other languages. NREL provides both SAM and the SDK as free downloads at https://sam.nrel.gov. SAM is an open source project, so its source code is available to the public. Researchers can study the code to understand the model algorithms, and software programmers can contribute their own models and enhancements to the project. Technical support and more information about the software are available on the website.

  3. Low Energy Electrons in the Mars Plasma Environment

    NASA Technical Reports Server (NTRS)

    Link, Richard

    2001-01-01

    The ionosphere of Mars is rather poorly understood. The only direct measurements were performed by the Viking 1 and 2 landers in 1976, both of which carried a Retarding Potential Analyzer. The RPA was designed to measure ion properties during the descent, although electron fluxes were estimated from changes in the ion currents. Using these derived low-energy electron fluxes, Mantas and Hanson studied the photoelectron and the solar wind electron interactions with the atmosphere and ionosphere of Mars. Unanswered questions remain regarding the origin of the low-energy electron fluxes in the vicinity of the Mars plasma boundary. Crider, in an analysis of Mars Global Surveyor Magnetometer/Electron Reflectometer measurements, has attributed the formation of the magnetic pile-up boundary to electron impact ionization of exospheric neutral species by solar wind electrons. However, the role of photoelectrons escaping from the lower ionosphere was not determined. In the proposed work, we will examine the role of solar wind and ionospheric photoelectrons in producing ionization in the upper ionosphere of Mars. Low-energy (< 4 keV) electrons will be modeled using the two-stream electron transport code of Link. The code models both external (solar wind) and internal (photoelectron) sources of ionization, and accounts for Auger electron production. The code will be used to analyze Mars Global Surveyor measurements of solar wind and photoelectrons down to altitudes below 200 km in the Mars ionosphere, in order to determine the relative roles of solar wind and escaping photoelectrons in maintaining plasma densities in the region of the Mars plasma boundary.

  4. Electron-Impact Excitation Cross Sections for Modeling Non-Equilibrium Gas

    NASA Technical Reports Server (NTRS)

    Huo, Winifred M.; Liu, Yen; Panesi, Marco; Munafo, Alessandro; Wray, Alan; Carbon, Duane F.

    2015-01-01

    In order to provide a database for modeling hypersonic entry in a partially ionized gas under non-equilibrium, the electron-impact excitation cross sections of atoms have been calculated using perturbation theory. The energy levels covered in the calculation are retrieved from the level list in the HyperRad code. The downstream flow-field is determined by solving a set of continuity equations for each component. The individual structure of each energy level is included. These equations are then complemented by the Euler system of equations. Finally, the radiation field is modeled by solving the radiative transfer equation.

  5. Validation of DRAGON4/DONJON4 simulation methodology for a typical MNSR by calculating reactivity feedback coefficient and neutron flux

    NASA Astrophysics Data System (ADS)

    Al Zain, Jamal; El Hajjaji, O.; El Bardouni, T.; Boukhal, H.; Jaï, Otman

    2018-06-01

    The MNSR is a pool-type research reactor, which is difficult to model because of the importance of neutron leakage. The aim of this study is to evaluate a 2-D transport model of the reactor compatible with the latest release of the DRAGON code and a 3-D diffusion model with the DONJON code. The DRAGON code is used to generate the group macroscopic cross sections needed for the full-core diffusion calculations. The DONJON diffusion code is then used to compute the effective multiplication factor (keff), the feedback reactivity coefficients, and the neutron flux; the coefficients accounting for variations in fuel and moderator temperature, as well as the void coefficient, have been calculated using the DRAGON and DONJON codes for the MNSR research reactor. The cross sections of all the reactor components at different temperatures were generated using the DRAGON code. These group constants were then used in the DONJON code to calculate the multiplication factor and the neutron spectrum at different water and fuel temperatures using 69 energy groups. Only one parameter was changed at a time, while all other parameters were kept constant. Finally, good agreement between the calculated and measured values has been obtained for each of the feedback reactivity coefficients and for the neutron flux.
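
    The arithmetic behind a temperature feedback coefficient is simply the reactivity rho = (keff - 1)/keff evaluated at two temperatures and differenced, as in the sketch below; the keff values are made up for illustration and are not DRAGON/DONJON results.

```python
def reactivity_pcm(keff):
    """Reactivity in pcm (1e-5) from an effective multiplication factor."""
    return (keff - 1.0) / keff * 1.0e5

# Hypothetical keff values at two fuel temperatures (not actual MNSR results).
k_300, k_400 = 1.00350, 1.00210
rho_300, rho_400 = reactivity_pcm(k_300), reactivity_pcm(k_400)

alpha_fuel = (rho_400 - rho_300) / (400.0 - 300.0)     # pcm per kelvin
print(f"fuel temperature coefficient ~ {alpha_fuel:.2f} pcm/K")
```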

  6. Compliance Verification Paths for Residential and Commercial Energy Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Conover, David R.; Makela, Eric J.; Fannin, Jerica D.

    2011-10-10

    This report looks at different ways to verify energy code compliance and to ensure that the energy efficiency goals of an adopted document are achieved. Conformity assessment is the body of work that ensures compliance, including activities that can ensure residential and commercial buildings satisfy energy codes and standards. This report identifies and discusses conformity-assessment activities and provides guidance for conducting assessments.

  7. Searching for Physics Beyond the Standard Model: Strongly-Coupled Field Theories at the Intensity and Energy Frontiers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brower, Richard C.

    This proposal is to develop the software and algorithmic infrastructure needed for the numerical study of quantum chromodynamics (QCD), and of theories that have been proposed to describe physics beyond the Standard Model (BSM) of high energy physics, on current and future computers. This infrastructure will enable users (1) to improve the accuracy of QCD calculations to the point where they no longer limit what can be learned from high-precision experiments that seek to test the Standard Model, and (2) to determine the predictions of BSM theories in order to understand which of them are consistent with the data that will soon be available from the LHC. Work will include the extension and optimizations of community codes for the next generation of leadership class computers, the IBM Blue Gene/Q and the Cray XE/XK, and for the dedicated hardware funded for our field by the Department of Energy. Members of our collaboration at Brookhaven National Laboratory and Columbia University worked on the design of the Blue Gene/Q, and have begun to develop software for it. Under this grant we will build upon their experience to produce high-efficiency production codes for this machine. Cray XE/XK computers with many thousands of GPU accelerators will soon be available, and the dedicated commodity clusters we obtain with DOE funding include growing numbers of GPUs. We will work with our partners in NVIDIA's Emerging Technology group to scale our existing software to thousands of GPUs, and to produce highly efficient production codes for these machines. Work under this grant will also include the development of new algorithms for the effective use of heterogeneous computers, and their integration into our codes. It will include improvements of Krylov solvers and the development of new multigrid methods in collaboration with members of the FASTMath SciDAC Institute, using their HYPRE framework, as well as work on improved symplectic integrators.
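
    As a reference point for the solver work mentioned above, the sketch below shows the textbook unpreconditioned conjugate-gradient kernel, the basic Krylov building block that preconditioning and multigrid methods improve upon. It is generic Python, not lattice-QCD code, and uses a small random symmetric positive-definite matrix as a stand-in for the large sparse Dirac-operator systems.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Unpreconditioned CG for a symmetric positive-definite system A x = b."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Small random SPD system as a stand-in for the large sparse systems in lattice QCD.
rng = np.random.default_rng(1)
M = rng.normal(size=(50, 50))
A = M @ M.T + 50.0 * np.eye(50)
b = rng.normal(size=50)
x = conjugate_gradient(A, b)
print("residual norm:", np.linalg.norm(b - A @ x))
```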

  8. Turbulent, Extreme Multi-zone Model for Simulating Flux and Polarization Variability in Blazars

    NASA Astrophysics Data System (ADS)

    Marscher, Alan P.

    2014-01-01

    The author presents a model for variability of the flux and polarization of blazars in which turbulent plasma flowing at a relativistic speed down a jet crosses a standing conical shock. The shock compresses the plasma and accelerates electrons to energies up to γmax ≳ 10^4 times their rest-mass energy, with the value of γmax determined by the direction of the magnetic field relative to the shock front. The turbulence is approximated in a computer code as many cells, each with a uniform magnetic field whose direction is selected randomly. The density of high-energy electrons in the plasma changes randomly with time in a manner consistent with the power spectral density of flux variations derived from observations of blazars. The variations in flux and polarization are therefore caused by continuous noise processes rather than by singular events such as explosive injection of energy at the base of the jet. Sample simulations illustrate the behavior of flux and linear polarization versus time that such a model produces. The variations in γ-ray flux generated by the code are often, but not always, correlated with those at lower frequencies, and many of the flares are sharply peaked. The mean degree of polarization of synchrotron radiation is higher and its timescale of variability shorter toward higher frequencies, while the polarization electric vector sometimes randomly executes apparent rotations. The slope of the spectral energy distribution exhibits sharper breaks than can arise solely from energy losses. All of these results correspond to properties observed in blazars.
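
    A small Monte Carlo of the many-cell idea is sketched below: each cell contributes equal intensity with maximal synchrotron polarization p_max ≈ 0.75 at a random position angle, and the net degree of polarization falls off roughly as p_max/sqrt(N). The cell counts and p_max are generic synchrotron values, not parameters from the model above.

```python
import numpy as np

P_MAX = 0.75     # typical maximum synchrotron polarization for a power-law electron spectrum
rng = np.random.default_rng(42)

def net_polarization(n_cells):
    """Degree and angle of polarization from n_cells equal-intensity cells,
    each with a uniformly random polarization position angle."""
    chi = rng.uniform(0.0, np.pi, n_cells)          # position angles of the cells
    q = np.sum(P_MAX * np.cos(2.0 * chi))           # Stokes Q (per unit intensity)
    u = np.sum(P_MAX * np.sin(2.0 * chi))           # Stokes U
    p = np.hypot(q, u) / n_cells                    # net fractional polarization
    evpa = 0.5 * np.arctan2(u, q)                   # net electric-vector position angle
    return p, np.degrees(evpa)

for n in (10, 100, 1000):
    p, evpa = net_polarization(n)
    print(f"N={n:5d}: P = {100*p:5.1f}%  (expect ~{100*P_MAX/np.sqrt(n):4.1f}%), "
          f"EVPA = {evpa:6.1f} deg")
```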

  9. A comparison between detailed and configuration-averaged collisional-radiative codes applied to nonlocal thermal equilibrium plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poirier, M.; Gaufridy de Dortan, F. de

    A collisional-radiative model describing nonlocal-thermodynamic-equilibrium plasmas is developed. It is based on the HULLAC (Hebrew University Lawrence Livermore Atomic Code) suite for the transition rates, in the zero-temperature radiation field hypothesis. Two variants of the model are presented: the first one is configuration averaged, while the second one is a detailed level version. Comparisons are made between them in the case of a carbon plasma; they show that the configuration-averaged code gives correct results for an electronic temperature Te = 10 eV (or higher) but fails at lower temperatures such as Te = 1 eV. The validity of the configuration-averaged approximation is discussed: the intuitive criterion requiring that the average configuration-energy dispersion must be less than the electron thermal energy turns out to be a necessary but far from sufficient condition. Another condition based on the resolution of a modified rate-equation system is proposed. Its efficiency is emphasized in the case of low-temperature plasmas. Finally, it is shown that near-threshold autoionization cascade processes may induce a severe failure of the configuration-average formalism.

  10. Energy-Constrained Recharge, Assimilation, and Fractional Crystallization (EC-RAχFC): A Visual Basic computer code for calculating trace element and isotope variations of open-system magmatic systems

    NASA Astrophysics Data System (ADS)

    Bohrson, Wendy A.; Spera, Frank J.

    2007-11-01

    Volcanic and plutonic rocks provide abundant evidence for complex processes that occur in magma storage and transport systems. The fingerprint of these processes, which include fractional crystallization, assimilation, and magma recharge, is captured in the petrologic and geochemical characteristics of suites of cogenetic rocks. Quantitatively evaluating the relative contributions of each process requires integration of mass, species, and energy constraints, applied in a self-consistent way. The energy-constrained model Energy-Constrained Recharge, Assimilation, and Fractional Crystallization (EC-RAχFC) tracks the trace element and isotopic evolution of a magmatic system (melt + solids) undergoing simultaneous fractional crystallization, recharge, and assimilation. Mass, thermal, and compositional (trace element and isotope) output is provided for melt in the magma body, cumulates, enclaves, and anatectic (i.e., country rock) melt. Theory of the EC computational method has been presented by Spera and Bohrson (2001, 2002, 2004), and applications to natural systems have been elucidated by Bohrson and Spera (2001, 2003) and Fowler et al. (2004). The purpose of this contribution is to make the final version of the EC-RAχFC computer code available and to provide instructions for code implementation, a description of input and output parameters, and estimates of typical values for some input parameters. A brief discussion highlights measures by which the user may evaluate the quality of the output and also provides some guidelines for implementing nonlinear productivity functions. The EC-RAχFC computer code is written in Visual Basic, the programming language of Excel. The code therefore launches in Excel and is compatible with both PC and Mac platforms. The code is available on the authors' Web sites (http://magma.geol.ucsb.edu/ and http://www.geology.cwu.edu/ecrafc) as well as in the auxiliary material.
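
    For orientation, the sketch below evaluates the Rayleigh fractionation limit to which such trace-element calculations reduce when recharge and assimilation are switched off, C_l = C_0 F^(D-1); the bulk partition coefficient and melt fractions are illustrative, and the full energy-constrained EC-RAχFC formulation is considerably more involved.

```python
import numpy as np

def rayleigh_fc(c0, D, F):
    """Trace-element concentration in residual melt after Rayleigh fractional
    crystallization: C_l = C_0 * F**(D - 1), with F the remaining melt fraction."""
    return c0 * F ** (D - 1.0)

# Illustrative: an incompatible element (bulk D = 0.1) starting at 50 ppm in the melt.
F = np.array([1.0, 0.8, 0.6, 0.4, 0.2])
for f, c in zip(F, rayleigh_fc(50.0, 0.1, F)):
    print(f"melt fraction {f:.1f}: {c:6.1f} ppm")
```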

  11. FORCE2: A state-of-the-art two-phase code for hydrodynamic calculations

    NASA Astrophysics Data System (ADS)

    Ding, Jianmin; Lyczkowski, R. W.; Burge, S. W.

    1993-02-01

    A three-dimensional computer code for two-phase flow named FORCE2 has been developed by Babcock and Wilcox (B & W) in close collaboration with Argonne National Laboratory (ANL). FORCE2 is capable of both transient and steady-state simulations. This Cartesian-coordinate computer program is a finite control volume, industrial grade and quality embodiment of the pilot-scale FLUFIX/MOD2 code and contains features such as three-dimensional blockages, volume and surface porosities to account for various obstructions in the flow field, and distributed resistance modeling to account for pressure drops caused by baffles, distributor plates, and large tube banks. Recently computed results demonstrated the significance of and necessity for three-dimensional models of hydrodynamics and erosion. This paper describes the process whereby ANL's pilot-scale FLUFIX/MOD2 models and numerics were implemented into FORCE2. A description of the quality control to assess the accuracy of the new code and the validation using some of the measured data from the Illinois Institute of Technology (IIT) and the University of Illinois at Urbana-Champaign (UIUC) are given. It is envisioned that one day FORCE2, with additional modules such as radiation heat transfer, combustion kinetics, and multi-solids, together with user-friendly pre- and post-processor software, and tailored for massively parallel multiprocessor shared-memory computational platforms, will be used by industry and researchers to assist in reducing and/or eliminating the environmental and economic barriers which limit full consideration of coal, shale, and biomass as energy sources, to retain energy security, and to remediate waste and ecological problems.

  12. Application of ARC/INFO to regional scale hydrogeologic modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wurstner, S.K.; McWethy, G.; Devary, J.L.

    1993-05-01

    A Geographic Information System (GIS) can be a useful tool in data preparation for groundwater flow modeling, especially when studying large regional systems. ARC/INFO is being used in conjunction with GRASS to support data preparation for input to the CFEST (Coupled Fluid, Energy, and Solute Transport) groundwater modeling code. Simulations will be performed with CFEST to model three-dimensional, regional groundwater flow in the West Siberian Basin.

  13. Physical models, cross sections, and numerical approximations used in MCNP and GEANT4 Monte Carlo codes for photon and electron absorbed fraction calculation.

    PubMed

    Yoriyaz, Hélio; Moralles, Maurício; Siqueira, Paulo de Tarso Dalledone; Guimarães, Carla da Costa; Cintra, Felipe Belonsi; dos Santos, Adimir

    2009-11-01

    Radiopharmaceutical applications in nuclear medicine require a detailed dosimetry estimate of the radiation energy delivered to the human tissues. Over the past years, several publications addressed the problem of internal dose estimation in volumes of several sizes considering photon and electron sources. Most of them used Monte Carlo radiation transport codes. Despite the widespread use of these codes due to the variety of resources and potentials they offer to carry out dose calculations, several aspects like physical models, cross sections, and numerical approximations used in the simulations still remain an object of study. Accurate dose estimation depends on the correct selection of a set of simulation options that should be carefully chosen. This article presents an analysis of several simulation options provided by two of the most used codes worldwide: MCNP and GEANT4. For this purpose, comparisons of absorbed fraction estimates obtained with different physical models, cross sections, and numerical approximations are presented for spheres of several sizes composed of five different biological tissues. Considerable discrepancies have been found in some cases, not only between the different codes but also between different cross sections and algorithms in the same code. Maximum differences found between the two codes are 5.0% and 10%, respectively, for photons and electrons. Even for problems as simple as spheres and uniform radiation sources, the set of parameters chosen by any Monte Carlo code significantly affects the final results of a simulation, demonstrating the importance of the correct choice of parameters in the simulation.

  14. Object-Oriented/Data-Oriented Design of a Direct Simulation Monte Carlo Algorithm

    NASA Technical Reports Server (NTRS)

    Liechty, Derek S.

    2014-01-01

    Over the past decade, there has been much progress towards improved phenomenological modeling and algorithmic updates for the direct simulation Monte Carlo (DSMC) method, which provides a probabilistic physical simulation of gas flows. These improvements have largely been based on the work of the originator of the DSMC method, Graeme Bird. Of primary importance are improved chemistry, internal energy, and physics modeling and a reduction in time to solution. These allow for an expanded range of possible solutions in altitude and velocity space. NASA's current production code, the DSMC Analysis Code (DAC), is well-established and based on Bird's 1994 algorithms written in Fortran 77 and has proven difficult to upgrade. A new DSMC code is being developed in the C++ programming language using object-oriented and data-oriented design paradigms to facilitate the inclusion of the recent improvements and future development activities. The development efforts on the new code, the Multiphysics Algorithm with Particles (MAP), are described, and performance comparisons are made with DAC.
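
    A minimal illustration of the data-oriented design choice is sketched below: particle properties are stored as parallel arrays (structure of arrays) so that a move kernel becomes a vectorized operation over contiguous memory. This is a generic Python/NumPy sketch, not code from DAC or MAP.

```python
import numpy as np

class ParticleSoA:
    """Structure-of-arrays particle container: each property is a contiguous array,
    so kernels operate on whole arrays instead of looping over particle objects."""
    def __init__(self, n, rng):
        self.x = rng.uniform(0.0, 1.0, (n, 3))           # positions in a unit box [m]
        self.v = rng.normal(0.0, 300.0, (n, 3))          # thermal velocities [m/s]
        self.cell = np.zeros(n, dtype=np.int32)          # owning cell index

    def move(self, dt):
        """Free-flight advection with specular reflection off the walls at 0 and 1."""
        self.x += self.v * dt
        below, above = self.x < 0.0, self.x > 1.0
        self.x[below] = -self.x[below]                   # mirror position back inside
        self.x[above] = 2.0 - self.x[above]
        self.v[below | above] *= -1.0                    # reverse the reflected component

rng = np.random.default_rng(7)
particles = ParticleSoA(100_000, rng)
for _ in range(10):
    particles.move(1.0e-6)
print("mean speed [m/s]:", np.linalg.norm(particles.v, axis=1).mean())
```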

  15. The escape of high explosive products: An exact-solution problem for verification of hydrodynamics codes

    DOE PAGES

    Doebling, Scott William

    2016-10-22

    This paper documents the escape of high explosive (HE) products problem. The problem, first presented by Fickett & Rivard, tests the implementation and numerical behavior of a high explosive detonation and energy release model and its interaction with an associated compressible hydrodynamics simulation code. The problem simulates the detonation of a finite-length, one-dimensional piece of HE that is driven by a piston from one end and adjacent to a void at the other end. The HE equation of state is modeled as a polytropic ideal gas. The HE detonation is assumed to be instantaneous with an infinitesimal reaction zone. Via judicious selection of the material specific heat ratio, the problem has an exact solution with linear characteristics, enabling a straightforward calculation of the physical variables as a function of time and space. Lastly, implementation of the exact solution in the Python code ExactPack is discussed, as are verification cases for the exact solution code.

  16. Structured Set Intra Prediction With Discriminative Learning in a Max-Margin Markov Network for High Efficiency Video Coding

    PubMed Central

    Dai, Wenrui; Xiong, Hongkai; Jiang, Xiaoqian; Chen, Chang Wen

    2014-01-01

    This paper proposes a novel model on intra coding for High Efficiency Video Coding (HEVC), which simultaneously predicts blocks of pixels with optimal rate distortion. It utilizes the spatial statistical correlation for the optimal prediction based on 2-D contexts, in addition to formulating the data-driven structural interdependences to make the prediction error coherent with the probability distribution, which is desirable for successful transform and coding. The structured set prediction model incorporates a max-margin Markov network (M3N) to regulate and optimize multiple block predictions. The model parameters are learned by discriminating the actual pixel value from other possible estimates to maximize the margin (i.e., decision boundary bandwidth). Compared to existing methods that focus on minimizing prediction error, the M3N-based model adaptively maintains the coherence for a set of predictions. Specifically, the proposed model concurrently optimizes a set of predictions by associating the loss for individual blocks to the joint distribution of succeeding discrete cosine transform coefficients. When the sample size grows, the prediction error is asymptotically upper bounded by the training error under the decomposable loss function. As an internal step, we optimize the underlying Markov network structure to find states that achieve the maximal energy using expectation propagation. For validation, we integrate the proposed model into HEVC for optimal mode selection on rate-distortion optimization. The proposed prediction model obtains up to 2.85% bit rate reduction and achieves better visual quality in comparison to the HEVC intra coding. PMID:25505829

  17. The efficiency of convective energy transport in the sun

    NASA Technical Reports Server (NTRS)

    Schatten, Kenneth H.

    1988-01-01

    Mixing length theory (MLT) utilizes adiabatic expansion (as well as radiative transport) to diminish the energy content of rising convective elements. Thus in MLT, the rising elements lose their energy to the environment most efficiently and consequently transport heat with the least efficiency. On the other hand Malkus proposed that convection would maximize the efficiency of energy transport. A new stellar envelope code is developed to first examine this other extreme, wherein rising turbulent elements transport heat with the greatest possible efficiency. This other extreme model differs from MLT by providing a small reduction in the upper convection zone temperatures but greatly diminished turbulent velocities below the top few hundred kilometers. Using the findings of deep atmospheric models with the Navier-Stokes equation allows the calculation of an intermediate solar envelope model. Consideration is given to solar observations, including recent helioseismology, to examine the position of the solar envelope compared with the envelope models.

  18. Modeling of Non-Homogeneous Containment Atmosphere in the ThAI Experimental Facility Using a CFD Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Babic, Miroslav; Kljenak, Ivo; Mavko, Borut

    2006-07-01

    The CFD code CFX4.4 was used to simulate an experiment in the ThAI facility, which was designed for investigation of thermal-hydraulic processes during a severe accident inside a Light Water Reactor containment. In the considered experiment, air was initially present in the vessel, and helium and steam were injected during different phases of the experiment at various mass flow rates and at different locations. The main purpose of the simulation was to reproduce the non-homogeneous temperature and species concentration distributions in the ThAI experimental facility. A three-dimensional model of the ThAI vessel for the CFX4.4 code was developed. The flow in the simulation domain was modeled as single-phase. Steam condensation on vessel walls was modeled as a sink of mass and energy using a correlation that was originally developed for an integral approach. A simple model of bulk phase change was also introduced. The calculated time-dependent variables together with temperature and concentration distributions at the end of experiment phases are compared to experimental results. (authors)

  19. Sierra/SolidMechanics 4.48 User's Guide: Addendum for Shock Capabilities.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Plews, Julia A.; Crane, Nathan K; de Frias, Gabriel Jose

    This is an addendum to the Sierra/SolidMechanics 4.48 User's Guide that documents additional capabilities available only in alternate versions of the Sierra/SolidMechanics (Sierra/SM) code. These alternate versions are enhanced to provide capabilities that are regulated under the U.S. Department of State's International Traffic in Arms Regulations (ITAR) export-control rules. The ITAR regulated codes are only distributed to entities that comply with the ITAR export-control requirements. The ITAR enhancements to Sierra/SM include material models with an energy-dependent pressure response (appropriate for very large deformations and strain rates) and capabilities for blast modeling. Since this is an addendum to the standard Sierra/SM user's guide, please refer to that document first for general descriptions of code capability and use.

  20. Theoretical Thermal Evaluation of Energy Recovery Incinerators

    DTIC Science & Technology

    1985-12-01

  1. SCALE Continuous-Energy Eigenvalue Sensitivity Coefficient Calculations

    DOE PAGES

    Perfetti, Christopher M.; Rearden, Bradley T.; Martin, William R.

    2016-02-25

    Sensitivity coefficients describe the fractional change in a system response that is induced by changes to system parameters and nuclear data. The Tools for Sensitivity and UNcertainty Analysis Methodology Implementation (TSUNAMI) code within the SCALE code system makes use of eigenvalue sensitivity coefficients for an extensive number of criticality safety applications, including quantifying the data-induced uncertainty in the eigenvalue of critical systems, assessing the neutronic similarity between different critical systems, and guiding nuclear data adjustment studies. The need to model geometrically complex systems with improved fidelity and the desire to extend TSUNAMI analysis to advanced applications has motivated the development of a methodology for calculating sensitivity coefficients in continuous-energy (CE) Monte Carlo applications. The Contributon-Linked eigenvalue sensitivity/Uncertainty estimation via Tracklength importance CHaracterization (CLUTCH) and Iterated Fission Probability (IFP) eigenvalue sensitivity methods were recently implemented in the CE-KENO framework of the SCALE code system to enable TSUNAMI-3D to perform eigenvalue sensitivity calculations using continuous-energy Monte Carlo methods. This work provides a detailed description of the theory behind the CLUTCH method and describes in detail its implementation. This work explores the improvements in eigenvalue sensitivity coefficient accuracy that can be gained through the use of continuous-energy sensitivity methods and also compares several sensitivity methods in terms of computational efficiency and memory requirements.
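
    For readers unfamiliar with the quantity being estimated, the sketch below illustrates the definition of an eigenvalue sensitivity coefficient, S = (Sigma/k)(dk/dSigma), evaluated with a brute-force central difference on a toy one-group response. It is only a hedged illustration of the definition; CLUTCH and IFP compute these coefficients within a single forward Monte Carlo calculation rather than by repeated perturbed runs.

        # Hedged sketch: a brute-force central-difference sensitivity coefficient,
        # S = (dk/k) / (dSigma/Sigma), for comparison with adjoint-based estimators
        # such as CLUTCH or IFP (which avoid repeated perturbed calculations).
        def sensitivity_coefficient(keff_of, sigma0, rel_step=0.01):
            """keff_of: callable returning k-eff for a given cross-section value."""
            d = rel_step * sigma0
            k_plus, k_minus, k0 = keff_of(sigma0 + d), keff_of(sigma0 - d), keff_of(sigma0)
            dk_dsigma = (k_plus - k_minus) / (2.0 * d)
            return (sigma0 / k0) * dk_dsigma

        # Example with a hypothetical one-group response k(Sigma_f) = nu*Sigma_f / Sigma_a:
        if __name__ == "__main__":
            nu, sigma_a = 2.4, 0.10
            k = lambda sigma_f: nu * sigma_f / sigma_a
            print(sensitivity_coefficient(k, sigma0=0.05))   # -> 1.0 (k is linear in Sigma_f)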

  2. Porting ONETEP to graphical processing unit-based coprocessors. 1. FFT box operations.

    PubMed

    Wilkinson, Karl; Skylaris, Chris-Kriton

    2013-10-30

    We present the first graphical processing unit (GPU) coprocessor-enabled version of the Order-N Electronic Total Energy Package (ONETEP) code for linear-scaling first principles quantum mechanical calculations on materials. This work focuses on porting to the GPU the parts of the code that involve atom-localized fast Fourier transform (FFT) operations. These are among the most computationally intensive parts of the code and are used in core algorithms such as the calculation of the charge density, the local potential integrals, the kinetic energy integrals, and the nonorthogonal generalized Wannier function gradient. We have found that direct porting of the isolated FFT operations did not provide any benefit. Instead, it was necessary to tailor the port to each of the aforementioned algorithms to optimize data transfer to and from the GPU. A detailed discussion of the methods used and tests of the resulting performance are presented, which show that individual steps in the relevant algorithms are accelerated by a significant amount. However, the transfer of data between the GPU and host machine is a significant bottleneck in the reported version of the code. In addition, an initial investigation into a dynamic precision scheme for the ONETEP energy calculation has been performed to take advantage of the enhanced single precision capabilities of GPUs. The methods used here result in no disruption to the existing code base. Furthermore, as the developments reported here concern the core algorithms, they will benefit the full range of ONETEP functionality. Our use of a directive-based programming model ensures portability to other forms of coprocessors and will allow this work to form the basis of future developments to the code designed to support emerging high-performance computing platforms. Copyright © 2013 Wiley Periodicals, Inc.
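
    As a rough illustration of what an atom-localized FFT box operation looks like (independent of ONETEP's implementation or its GPU data-transfer strategy), the numpy sketch below extracts a small box from a large real-space grid, applies a reciprocal-space kernel, and writes the result back. The grid sizes and the identity kernel are placeholders.

        # Illustrative numpy sketch of a localized "FFT box" operation of the
        # general kind discussed above; it is not ONETEP code and says nothing
        # about the GPU port or its data-transfer strategy.
        import numpy as np

        def fft_box_apply(grid, corner, box_shape, kernel_recip):
            """Apply a reciprocal-space kernel to one localized box of a large grid."""
            sl = tuple(slice(c, c + n) for c, n in zip(corner, box_shape))
            box = grid[sl]
            box_k = np.fft.fftn(box)                 # forward FFT on the small box only
            out = np.fft.ifftn(box_k * kernel_recip).real
            grid = grid.copy()
            grid[sl] = out                           # write the filtered box back
            return grid

        grid = np.random.default_rng(3).random((64, 64, 64))
        kernel = np.ones((12, 12, 12))               # identity kernel as a placeholder
        out = fft_box_apply(grid, corner=(10, 20, 30), box_shape=(12, 12, 12), kernel_recip=kernel)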

  3. hi-class: Horndeski in the Cosmic Linear Anisotropy Solving System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zumalacárregui, Miguel; Bellini, Emilio; Sawicki, Ignacy

    We present the public version of hi-class (www.hiclass-code.net), an extension of the Boltzmann code CLASS to a broad ensemble of modifications to general relativity. In particular, hi-class can calculate predictions for models based on Horndeski's theory, which is the most general scalar-tensor theory described by second-order equations of motion and encompasses any perfect-fluid dark energy, quintessence, Brans-Dicke, f ( R ) and covariant Galileon models. hi-class has been thoroughly tested and can be readily used to understand the impact of alternative theories of gravity on linear structure formation as well as for cosmological parameter extraction.

  4. TRAX-CHEM: A pre-chemical and chemical stage extension of the particle track structure code TRAX in water targets

    NASA Astrophysics Data System (ADS)

    Boscolo, D.; Krämer, M.; Durante, M.; Fuss, M. C.; Scifoni, E.

    2018-04-01

    The production, diffusion, and interaction of particle beam induced water-derived radicals is studied with the pre-chemical and chemical module of the Monte Carlo particle track structure code TRAX, based on a step-by-step approach. After a description of the model implemented, the chemical evolution of the most important products of water radiolysis is studied for electron, proton, helium, and carbon ion radiation at different energies. The validity of the model is verified by comparing the calculated time and LET dependent yield with experimental data from literature and other simulation approaches.

  5. Multiple Detector Optimization for Hidden Radiation Source Detection

    DTIC Science & Technology

    2015-03-26

    important in achieving operationally useful methods for optimizing detector emplacement, the 2-D attenuation model approach promises to speed up the...process of hidden source detection significantly. The model focused on detection of the full energy peak of a radiation source. Methods to optimize... radioisotope identification is possible without using a computationally intensive stochastic model such as the Monte Carlo n-Particle (MCNP) code

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    This document contains the State Building Energy Codes Status prepared by Pacific Northwest National Laboratory for the U.S. Department of Energy under Contract DE-AC06-76RL01830 and dated September 1996. The U.S. Department of Energy's Office of Codes and Standards has developed this document to provide an information resource for individuals interested in energy efficiency of buildings and the relevant building energy codes in each state and U.S. territory. This is considered to be an evolving document and will be updated twice a year. In addition, special state updates will be issued as warranted.

  7. Ion absorption of the high harmonic fast wave in the National Spherical Torus Experiment

    NASA Astrophysics Data System (ADS)

    Rosenberg, Adam Lewis

    Ion absorption of the high harmonic fast wave in a spherical torus is of critical importance to assessing the viability of the wave as a means of heating and driving current. Analysis of recent NSTX shots has revealed that under some conditions when neutral beam and RF power are injected into the plasma simultaneously, a fast ion population with energy above the beam injection energy is sustained by the wave. In agreement with modeling, these experiments find the RF-induced fast ion tail strength and neutron rate at lower B-fields to be less enhanced, likely due to a larger β profile, which promotes greater off-axis absorption where the fast ion population is small. Ion loss codes find the increased loss fraction with decreased B insufficient to account for the changes in tail strength, providing further evidence that this is an RF interaction effect. Though greater ion absorption is predicted with lower k∥, surprisingly little variation in the tail was observed, along with a neutron rate enhancement with higher k∥. Data from the neutral particle analyzer, neutron detectors, x-ray crystal spectrometer, and Thomson scattering are presented, along with results from the TRANSP transport analysis code, the ray-tracing codes HPRT and CURRAY, the full-wave code AORSA, the quasilinear code CQL3D, and the ion loss codes EIGOL and CONBEAM.

  8. Dynamic modeling of potentially conflicting energy reduction strategies for residential structures in semi-arid climates.

    PubMed

    Hester, Nathan; Li, Ke; Schramski, John R; Crittenden, John

    2012-04-30

    Globally, residential energy consumption continues to rise due to a variety of trends such as increasing access to modern appliances, overall population growth, and the overall increase of electricity distribution. Currently, residential energy consumption accounts for approximately one-fifth of total U.S. energy consumption. This research analyzes the effectiveness of a range of energy-saving measures for residential houses in semi-arid climates. These energy-saving measures include: structural insulated panels (SIP) for exterior wall construction, daylight control, increased window area, efficient window glass suitable for the local weather, and several combinations of these. Our model determined that energy consumption is reduced by up to 6.1% when multiple energy savings technologies are combined. In addition, pre-construction technologies (structural insulated panels (SIPs), daylight control, and increased window area) provide roughly 4 times the energy savings when compared to post-construction technologies (window blinds and efficient window glass). The model also illuminated the importance of variations in local climate and building configuration, highlighting the site-specific nature of this type of energy consumption quantification for policy and building code considerations. Published by Elsevier Ltd.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dobromir Panayotov; Andrew Grief; Brad J. Merrill

    'Fusion for Energy' (F4E) develops, designs, and implements the European Test Blanket Systems (TBS) in ITER: Helium-Cooled Lithium-Lead (HCLL) and Helium-Cooled Pebble-Bed (HCPB). Safety demonstration is an essential element for the integration of TBS in ITER, and accident analyses are one of its critical segments. A systematic approach to the accident analyses was acquired under the F4E contract on TBS safety analyses. F4E technical requirements and AMEC and INL efforts resulted in the development of a comprehensive methodology for fusion breeding blanket accident analyses. It addresses the specificity of the breeding blanket designs, materials, and phenomena and at the same time is consistent with the methodology already applied to ITER accident analyses. The methodology consists of several phases. First, the reference scenarios are selected on the basis of FMEA studies. Second, in elaborating the accident analysis specifications, phenomena identification and ranking tables are used to identify the requirements to be met by the code(s) and TBS models. The limitations of the codes are thus identified and possible solutions to be built into the models are proposed; these include, among others, the loose coupling of different codes or code versions in order to simulate multi-fluid flows and phenomena. Code selection and issue of the accident analysis specifications conclude this second step. The breeding blanket and ancillary system models are then built. This work shares the challenges met and the solutions used in the development of both MELCOR and RELAP5 models of the HCLL and HCPB TBSs. Next, the developed models are qualified by comparison with finite element analyses, by code-to-code comparison, and by sensitivity studies. Finally, the qualified models are used for the execution of the accident analyses of specific scenarios. Where possible, the methodology phases are illustrated in the paper by a limited number of tables and figures. Detailed descriptions of each phase and its results, as well as the methodology applications to the EU HCLL and HCPB TBSs, will be published in separate papers. The developed methodology is applicable to accident analyses of other TBSs to be tested in ITER, as well as to DEMO breeding blankets.

  10. SutraPrep, a pre-processor for SUTRA, a model for ground-water flow with solute or energy transport

    USGS Publications Warehouse

    Provost, Alden M.

    2002-01-01

    SutraPrep facilitates the creation of three-dimensional (3D) input datasets for the USGS ground-water flow and transport model SUTRA Version 2D3D.1. It is most useful for applications in which the geometry of the 3D model domain and the spatial distribution of physical properties and boundary conditions is relatively simple. SutraPrep can be used to create a SUTRA main input ('.inp') file, an initial conditions ('.ics') file, and a 3D plot of the finite-element mesh in Virtual Reality Modeling Language (VRML) format. Input and output are text-based. The code can be run on any platform that has a standard FORTRAN-90 compiler. Executable code is available for Microsoft Windows.

  11. Neutron flux and power in RTP core-15

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rabir, Mohamad Hairie, E-mail: m-hairie@nuclearmalaysia.gov.my; Zin, Muhammad Rawi Md; Usang, Mark Dennis

    PUSPATI TRIGA Reactor achieved initial criticality on June 28, 1982. The reactor is designed to effectively implement the various fields of basic nuclear research, manpower training, and production of radioisotopes. This paper describes the reactor parameters calculation for the PUSPATI TRIGA REACTOR (RTP), focusing on the application of the developed reactor 3D model for criticality calculation and analysis of power and neutron flux distribution of the TRIGA core. The 3D continuous energy Monte Carlo code MCNP was used to develop a versatile and accurate full model of the TRIGA reactor. The model represents in detail all important components of the core with literally no physical approximation. The consistency and accuracy of the developed RTP MCNP model was established by comparing calculations to the available experimental results and TRIGLAV code calculation.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Zhiming; Abdelaziz, Omar; Qu, Ming

    This paper introduces a first-order physics-based model that accounts for the fundamental heat and mass transfer between a humid-air vapor stream on the feed side and another flow stream on the permeate side. The model comprises a few optional submodels for membrane mass transport, and it adopts a segment-by-segment method for discretizing the heat and mass transfer governing equations for flow streams on the feed and permeate sides. The model is able to simulate both dehumidifiers and energy recovery ventilators in parallel-flow, cross-flow, and counter-flow configurations. The predicted results compare reasonably well with the measurements. The open-source codes are written in C++. The model and open-source codes are expected to become a fundamental tool for the analysis of membrane-based dehumidification in the future.
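
    The segment-by-segment idea can be sketched for the simplest (parallel-flow) arrangement as below: each segment carries a share of the overall mass-transfer conductance and UA, and the two streams are marched together from inlet to outlet. The coefficients, geometry, and the omission of latent-heat coupling are all simplifying assumptions of this sketch, not features of the authors' C++ model; counter-flow and cross-flow require an iterative sweep that is not shown.

        # Hedged sketch of segment-by-segment marching for a parallel-flow membrane
        # exchanger: each segment transfers moisture and sensible heat between the
        # feed and permeate streams using lumped coefficients (placeholders only).
        def parallel_flow_membrane(n_seg, m_feed, m_perm, w_feed, w_perm, T_feed, T_perm,
                                   k_m_area, ua, cp=1006.0):
            """March both streams through n_seg segments; returns outlet states."""
            km_seg, ua_seg = k_m_area / n_seg, ua / n_seg
            for _ in range(n_seg):
                dm = km_seg * (w_feed - w_perm)   # kg/s of water vapor across the membrane
                dq = ua_seg * (T_feed - T_perm)   # W of sensible heat across the membrane
                w_feed -= dm / m_feed
                w_perm += dm / m_perm
                T_feed -= dq / (m_feed * cp)      # latent-heat coupling omitted in this sketch
                T_perm += dq / (m_perm * cp)
            return w_feed, w_perm, T_feed, T_perm

        # Example: a hypothetical 20-segment dehumidifier with humid feed air.
        print(parallel_flow_membrane(20, m_feed=0.1, m_perm=0.1, w_feed=0.015, w_perm=0.005,
                                     T_feed=303.0, T_perm=293.0, k_m_area=0.05, ua=50.0))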

  13. Energy and technology review: Engineering modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cabayan, H.S.; Goudreau, G.L.; Ziolkowski, R.W.

    1986-10-01

    This report presents information concerning: Modeling Canonical Problems in Electromagnetic Coupling Through Apertures; Finite-Element Codes for Computing Electrostatic Fields; Finite-Element Modeling of Electromagnetic Phenomena; Modeling Microwave-Pulse Compression in a Resonant Cavity; Lagrangian Finite-Element Analysis of Penetration Mechanics; Crashworthiness Engineering; Computer Modeling of Metal-Forming Processes; Thermal-Mechanical Modeling of Tungsten Arc Welding; Modeling Air Breakdown Induced by Electromagnetic Fields; Iterative Techniques for Solving Boltzmann's Equations for p-Type Semiconductors; Semiconductor Modeling; and Improved Numerical-Solution Techniques in Large-Scale Stress Analysis.

  14. Development of a two-equation turbulence model for hypersonic flows. Volume 1; Evaluation of a low Reynolds number correction to the Kappa - epsilon two equation compressible turbulence model

    NASA Technical Reports Server (NTRS)

    Knight, Doyle D.; Becht, Robert J.

    1995-01-01

    The objective of the current research is the development of an improved k-epsilon two-equation compressible turbulence model for turbulent boundary layer flows experiencing strong viscous-inviscid interactions. The development of an improved model is important in the design of hypersonic vehicles such as the National Aerospace Plane (NASP) and the High Speed Civil Transport (HSCT). Improvements have been made to the low Reynolds number functions in the eddy viscosity and dissipation of solenoidal dissipation of the k-epsilon turbulence model. These corrections offer easily applicable modifications that may be utilized for more complex geometries. The low Reynolds number corrections are functions of the turbulent Reynolds number and are therefore independent of the coordinate system. The proposed model offers advantages over some current models, which are based upon the physical distance from the wall, modify the constants of the standard model, or make more corrections than are necessary to the governing equations. The code has been developed to solve the Favre averaged, boundary layer equations for mass, momentum, energy, turbulence kinetic energy, and dissipation of solenoidal dissipation using Keller's box scheme and the Newton spatial marching method. The code has been validated by removing the turbulent terms and comparing the solution with the Blasius solution, and by comparing the turbulent solution with an existing k-epsilon model code using wall function boundary conditions. Excellent agreement is seen between the computed solution and the Blasius solution, and between the two codes. The model has been tested for both subsonic and supersonic flat-plate turbulent boundary layer flow by comparing the computed skin friction with the Van Driest II theory and the experimental data of Weighardt; by comparing the transformed velocity profile with the data of Weighardt, and the Law of the Wall and the Law of the Wake; and by comparing the computed results of an adverse pressure gradient with the experimental data of Fernando and Smits. Good agreement is obtained with the experimental correlations for all flow conditions.
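
    The kind of coordinate-free low-Reynolds-number correction described above can be illustrated with a damping function that depends only on the turbulent Reynolds number Re_t = k^2/(nu*eps). The sketch below uses a common textbook (Launder-Sharma-like) form for f_mu as a stand-in; the specific correction developed in this report is not reproduced here.

        # Hedged sketch of a low-Reynolds-number eddy-viscosity correction that
        # depends only on Re_t = k^2/(nu*eps), as described in the abstract; the
        # damping-function shape is an assumed textbook form, not this report's.
        import math

        C_MU = 0.09

        def eddy_viscosity(k, eps, nu):
            """nu_t = C_mu * f_mu(Re_t) * k^2 / eps, with a coordinate-free f_mu."""
            re_t = k * k / (nu * eps)
            f_mu = math.exp(-3.4 / (1.0 + re_t / 50.0) ** 2)   # assumed damping function
            return C_MU * f_mu * k * k / eps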

  15. Preliminary Three-Dimensional Simulation of Sediment and Cesium Transport in the Ogi Dam Reservoir using FLESCOT – Task 6, Subtask 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Onishi, Yasuo; Kurikami, Hiroshi; Yokuda, Satoru T.

    2014-03-28

    After the accident at the Fukushima Daiichi Nuclear Power Plant in March 2011, the Japan Atomic Energy Agency and the Pacific Northwest National Laboratory initiated a collaborative project on environmental restoration. In October 2013, the collaborative team started a task of three-dimensional modeling of sediment and cesium transport in the Fukushima environment using the FLESCOT (Flow, Energy, Salinity, Sediment Contaminant Transport) code. As the first trial, we applied it to the Ogi Dam Reservoir that is one of the reservoirs in the Japan Atomic Energy Agency’s (JAEA’s) investigation project. Three simulation cases under the following different temperature conditions were studied: • incoming rivers and the Ogi Dam Reservoir have the same water temperature • incoming rivers have lower water temperature than that of the reservoir • incoming rivers have higher water temperature than that of the reservoir. The preliminary simulations suggest that seasonal temperature changes influence the sediment and cesium transport. The preliminary results showed the following: • Suspended sand, and cesium adsorbed by sand, coming into the reservoirs from upstream rivers is deposited near the reservoir entrance. • Suspended silt, and cesium adsorbed by silt, is deposited farther in the reservoir. • Suspended clay, and cesium adsorbed by clay, travels the farthest into the reservoir. With sufficient time, the dissolved cesium reaches the downstream end of the reservoir. This preliminary modeling also suggests the possibility of a suitable dam operation to control the cesium migration farther downstream from the dam. JAEA has been sampling in the Ogi Dam Reservoir, but these data were not yet available for the current model calibration and validation for this reservoir. Nonetheless these preliminary FLESCOT modeling results were qualitatively valid and confirmed the applicability of the FLESCOT code to the Ogi Dam Reservoir, and in general to other reservoirs in the Fukushima environment. The issues to be addressed in future are the following: • Validate the simulation results by comparison with the investigation data. • Confirm the applicability of the FLESCOT code to Fukushima coastal areas. • Increase computation speed by parallelizing the FLESCOT code.

  16. Structural mechanics simulations

    NASA Technical Reports Server (NTRS)

    Biffle, Johnny H.

    1992-01-01

    Sandia National Laboratory has a very broad structural capability. Work has been performed in support of reentry vehicles, nuclear reactor safety, weapons systems and components, nuclear waste transport, strategic petroleum reserve, nuclear waste storage, wind and solar energy, drilling technology, and submarine programs. The analysis environment contains both commercial and internally developed software. Included are mesh generation capabilities, structural simulation codes, and visual codes for examining simulation results. To effectively simulate a wide variety of physical phenomena, a large number of constitutive models have been developed.

  17. Simulation model for wind energy storage systems. Volume II. Operation manual. [SIMWEST code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warren, A.W.; Edsinger, R.W.; Burroughs, J.D.

    1977-08-01

    The effort developed a comprehensive computer program for the modeling of wind energy/storage systems utilizing any combination of five types of storage (pumped hydro, battery, thermal, flywheel and pneumatic). An acronym for the program is SIMWEST (Simulation Model for Wind Energy Storage). The level of detail of SIMWEST is consistent with a role of evaluating the economic feasibility as well as the general performance of wind energy systems. The software package consists of two basic programs and a library of system, environmental, and load components. Volume II, the SIMWEST operation manual, describes the usage of the SIMWEST program, the design of the library components, and a number of simple example simulations intended to familiarize the user with the program's operation. Volume II also contains a listing of each SIMWEST library subroutine.

  18. Description and availability of the SMARTS spectral model for photovoltaic applications

    NASA Astrophysics Data System (ADS)

    Myers, Daryl R.; Gueymard, Christian A.

    2004-11-01

    The limited spectral response range of photovoltaic (PV) devices requires that device performance be characterized with respect to widely varying terrestrial solar spectra. The FORTRAN code "Simple Model for Atmospheric Transmission of Sunshine" (SMARTS) was developed for various clear-sky solar renewable energy applications. The model is partly based on parameterizations of transmittance functions in the MODTRAN/LOWTRAN band model family of radiative transfer codes. SMARTS computes spectra with a resolution of 0.5 nanometers (nm) below 400 nm, 1.0 nm from 400 nm to 1700 nm, and 5 nm from 1700 nm to 4000 nm. Fewer than 20 input parameters are required to compute spectral irradiance distributions including spectral direct beam, total, and diffuse hemispherical radiation, and up to 30 other spectral parameters. A spreadsheet-based graphical user interface can be used to simplify the construction of input files for the model. The model is the basis for new terrestrial reference spectra developed by the American Society for Testing and Materials (ASTM) for photovoltaic and materials degradation applications. We describe the model accuracy, functionality, and the availability of source and executable code. Applications to PV rating and efficiency and the combined effects of spectral selectivity and varying atmospheric conditions are briefly discussed.
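
    The piecewise spectral resolution quoted above translates directly into a wavelength grid; a hedged numpy sketch is shown below. The 280 nm lower limit is an assumption of the sketch, and the grid is illustrative only, not generated by the SMARTS code itself.

        # Sketch of the piecewise spectral grid described above: 0.5 nm steps below
        # 400 nm, 1 nm steps from 400-1700 nm, and 5 nm steps from 1700-4000 nm.
        # The 280 nm lower limit is an assumption, not taken from the SMARTS source.
        import numpy as np

        def smarts_like_grid(lo_nm=280.0):
            uv = np.arange(lo_nm, 400.0, 0.5)
            vis_nir = np.arange(400.0, 1700.0, 1.0)
            swir = np.arange(1700.0, 4000.0 + 5.0, 5.0)
            return np.concatenate([uv, vis_nir, swir])

        grid = smarts_like_grid()
        print(len(grid), grid[0], grid[-1])   # grid spans roughly 280-4000 nm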

  19. Advances in Geologic Disposal System Modeling and Application to Crystalline Rock

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mariner, Paul E.; Stein, Emily R.; Frederick, Jennifer M.

    The Used Fuel Disposition Campaign (UFDC) of the U.S. Department of Energy (DOE) Office of Nuclear Energy (NE), Office of Fuel Cycle Technology (OFCT) is conducting research and development (R&D) on geologic disposal of used nuclear fuel (UNF) and high-level nuclear waste (HLW). Two of the high priorities for UFDC disposal R&D are design concept development and disposal system modeling (DOE 2011). These priorities are directly addressed in the UFDC Generic Disposal Systems Analysis (GDSA) work package, which is charged with developing a disposal system modeling and analysis capability for evaluating disposal system performance for nuclear waste in geologic media (e.g., salt, granite, clay, and deep borehole disposal). This report describes specific GDSA activities in fiscal year 2016 (FY 2016) toward the development of the enhanced disposal system modeling and analysis capability for geologic disposal of nuclear waste. The GDSA framework employs the PFLOTRAN thermal-hydrologic-chemical multi-physics code and the Dakota uncertainty sampling and propagation code. Each code is designed for massively-parallel processing in a high-performance computing (HPC) environment. Multi-physics representations in PFLOTRAN are used to simulate various coupled processes including heat flow, fluid flow, waste dissolution, radionuclide release, radionuclide decay and ingrowth, precipitation and dissolution of secondary phases, and radionuclide transport through engineered barriers and natural geologic barriers to the biosphere. Dakota is used to generate sets of representative realizations and to analyze parameter sensitivity.
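
    The sampling-and-propagation workflow that Dakota provides in this framework can be sketched generically: draw parameter realizations, run a forward model for each, and rank inputs by a simple sensitivity measure. The forward model, parameter ranges, and sensitivity metric below are throwaway stand-ins, not PFLOTRAN physics or GDSA parameters.

        # Hedged sketch of sampling-based uncertainty propagation and a crude
        # rank-correlation sensitivity ranking; the "forward model" is a stand-in.
        import numpy as np

        rng = np.random.default_rng(4)

        def forward_model(params):
            perm, solubility, kd = params              # placeholder physics
            return solubility * perm / (1.0 + kd)      # stand-in "peak release" metric

        n = 200
        samples = np.column_stack([10 ** rng.uniform(-18, -15, n),   # permeability (m^2)
                                   10 ** rng.uniform(-6, -3, n),     # solubility (mol/L)
                                   rng.uniform(0.0, 10.0, n)])       # sorption Kd (L/kg)
        responses = np.array([forward_model(p) for p in samples])

        def rank(a):
            """Ranks (0..n-1) of the entries of a."""
            return np.argsort(np.argsort(a))

        # Spearman-style rank correlation of each input with the response.
        sens = [np.corrcoef(rank(samples[:, i]), rank(responses))[0, 1] for i in range(3)]
        print([round(s, 2) for s in sens])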

  20. JASMIN: Japanese-American study of muon interactions and neutron detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakashima, Hiroshi; /JAEA, Ibaraki; Mokhov, N.V.

    Experimental studies of shielding and radiation effects at Fermi National Accelerator Laboratory (FNAL) have been carried out under collaboration between FNAL and Japan, aiming at benchmarking of simulation codes and study of irradiation effects for upgrade and design of new high-energy accelerator facilities. The purposes of this collaboration are (1) acquisition of shielding data in a proton beam energy domain above 100 GeV; (2) further evaluation of predictive accuracy of the PHITS and MARS codes; (3) modification of physics models and data in these codes if needed; (4) establishment of irradiation field for radiation effect tests; and (5) development of a code module for improved description of radiation effects. A series of experiments has been performed at the Pbar target station and NuMI facility, using irradiation of targets with 120 GeV protons for antiproton and neutrino production, as well as the M-test beam line (M-test) for measuring nuclear data and detector responses. Various nuclear and shielding data have been measured by activation methods with chemical separation techniques as well as by other detectors such as a Bonner ball counter. Analyses with the experimental data are in progress for benchmarking the PHITS and MARS15 codes. In this presentation recent activities and results are reviewed.

  1. The equation of state package FEOS for high energy density matter

    NASA Astrophysics Data System (ADS)

    Faik, Steffen; Tauschwitz, Anna; Iosilevskiy, Igor

    2018-06-01

    Adequate equation of state (EOS) data is of high interest in the growing field of high energy density physics and especially essential for hydrodynamic simulation codes. The semi-analytical method used in the newly developed Frankfurt equation of state (FEOS) package provides an easy and fast access to the EOS of - in principle - arbitrary materials. The code is based on the well known QEOS model (More et al., 1988; Young and Corey, 1995) and is a further development of the MPQeos code (Kemp and Meyer-ter Vehn, 1988; Kemp and Meyer-ter Vehn, 1998) from the Max-Planck-Institut für Quantenoptik (MPQ) in Garching, Germany. The list of features contains the calculation of homogeneous mixtures of chemical elements and the description of the liquid-vapor two-phase region with or without a Maxwell construction. Full flexibility of the package is assured by its structure: a program library provides the EOS with an interface designed for Fortran or C/C++ codes. Two additional software tools allow for the generation of EOS tables in different file output formats and for the calculation and visualization of isolines and Hugoniot shock adiabats. As an example the EOS of fused silica (SiO2) is calculated and compared to experimental data and other EOS codes.

  2. Spallation reactions: A successful interplay between modeling and applications

    NASA Astrophysics Data System (ADS)

    David, J.-C.

    2015-06-01

    The spallation reactions are a type of nuclear reaction which occur in space by interaction of the cosmic rays with interstellar bodies. The first spallation reactions induced with an accelerator took place in 1947 at the Berkeley cyclotron (University of California) with 200 MeV deuterons and 400 MeV alpha beams. They highlighted the multiple emission of neutrons and charged particles and the production of a large number of residual nuclei far different from the target nuclei. In the same year, R. Serber described the reaction in two steps: a first and fast one with high-energy particle emission leading to an excited remnant nucleus, and a second one, much slower, the de-excitation of the remnant. In 2010 IAEA organized a workshop to present the results of the most widely used spallation codes within a benchmark of spallation models. While one of the goals was to understand the deficiencies, if any, in each code, one remarkable outcome was the overall high quality of some models and thus the great improvement achieved since Serber. Particle transport codes can then rely on such spallation models to treat the reactions between a light particle and an atomic nucleus with energies spanning from a few tens of MeV up to some GeV. An overview of spallation reaction modeling is presented in order to point out the incomparable contribution of models based on basic physics to numerous applications where such reactions occur. Validations or benchmarks, which are necessary steps in the improvement process, are also addressed, as well as the potential future domains of development. Spallation reaction modeling is a representative case of continuous studies aimed at understanding a reaction mechanism that end up providing a powerful tool.

  3. Flow Analysis of a Gas Turbine Low- Pressure Subsystem

    NASA Technical Reports Server (NTRS)

    Veres, Joseph P.

    1997-01-01

    The NASA Lewis Research Center is coordinating a project to numerically simulate aerodynamic flow in the complete low-pressure subsystem (LPS) of a gas turbine engine. The numerical model solves the three-dimensional Navier-Stokes flow equations through all components within the low-pressure subsystem as well as the external flow around the engine nacelle. The Advanced Ducted Propfan Analysis Code (ADPAC), which is being developed jointly by Allison Engine Company and NASA, is the Navier-Stokes flow code being used for LPS simulation. The majority of the LPS project is being done under a NASA Lewis contract with Allison. Other contributors to the project are NYMA and the University of Toledo. For this project, the Energy Efficient Engine designed by GE Aircraft Engines is being modeled. This engine includes a low-pressure system and a high-pressure system. An inlet, a fan, a booster stage, a bypass duct, a lobed mixer, a low-pressure turbine, and a jet nozzle comprise the low-pressure subsystem within this engine. The tightly coupled flow analysis evaluates aerodynamic interactions between all components of the LPS. The high-pressure core engine of this engine is simulated with a one-dimensional thermodynamic cycle code in order to provide boundary conditions to the detailed LPS model. This core engine consists of a high-pressure compressor, a combustor, and a high-pressure turbine. The three-dimensional LPS flow model is coupled to the one-dimensional core engine model to provide a "hybrid" flow model of the complete gas turbine Energy Efficient Engine. The resulting hybrid engine model evaluates the detailed interaction between the LPS components at design and off-design engine operating conditions while considering the lumped-parameter performance of the core engine.

  4. Sandia’s Current Energy Conversion module for the Flexible-Mesh Delft3D flow solver v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chartand, Chris; Jagers, Bert

    The DOE has funded Sandia National Labs (SNL) to develop an open-source modeling tool to guide the design and layout of marine hydrokinetic (MHK) arrays to maximize power production while minimizing environmental effects. This modeling framework simulates flows through and around MHK arrays while quantifying environmental responses. An augmented version of Delft3D, the environmental hydrodynamics code developed by the Dutch company Deltares, SNL-Delft3D-CEC-FM includes a new module that simulates energy conversion (momentum withdrawal) by MHK current energy conversion devices, with commensurate changes in the turbulent kinetic energy and its dissipation rate. SNL-Delft3D-CEC-FM modifies the Delft3D flexible-mesh flow solver, DFlowFM.
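
    Momentum withdrawal by a current energy converter is commonly represented with an actuator-disk-style source term; a hedged sketch of that general idea is below. The thrust and power coefficients, device area, and cell volume are placeholders, and this is not the SNL-Delft3D-CEC-FM implementation.

        # Hedged sketch of actuator-disk-style momentum withdrawal, the kind of
        # source term a current-energy-converter module adds to a flow solver's
        # momentum equations; all coefficients below are placeholders.
        def mhk_momentum_sink(u, rho=1025.0, c_t=0.8, area=50.0, cell_volume=500.0):
            """Momentum sink per unit volume (N/m^3) for local flow speed u (m/s)."""
            thrust = 0.5 * rho * c_t * area * u * abs(u)   # actuator-disk thrust (N)
            return -thrust / cell_volume                   # distributed over the host cell

        # Extracted power under the same idealization, with a hypothetical power coefficient:
        def mhk_power(u, rho=1025.0, c_p=0.4, area=50.0):
            return 0.5 * rho * c_p * area * abs(u) ** 3    # W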

  5. Software Tools for Stochastic Simulations of Turbulence

    DTIC Science & Technology

    2015-08-28

    client interface to FTI. Specific client programs using this interface include the weather forecasting code WRF; the high energy physics code, FLASH; and two locally constructed fluid...

  6. EMPIRE: Nuclear Reaction Model Code System for Data Evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herman, M.; Capote, R.; Carlson, B.V.

    EMPIRE is a modular system of nuclear reaction codes, comprising various nuclear models, and designed for calculations over a broad range of energies and incident particles. A projectile can be a neutron, proton, any ion (including heavy-ions) or a photon. The energy range extends from the beginning of the unresolved resonance region for neutron-induced reactions (~keV) and goes up to several hundred MeV for heavy-ion induced reactions. The code accounts for the major nuclear reaction mechanisms, including direct, pre-equilibrium and compound nucleus ones. Direct reactions are described by a generalized optical model (ECIS03) or by the simplified coupled-channels approach (CCFUS). The pre-equilibrium mechanism can be treated by a deformation dependent multi-step direct (ORION + TRISTAN) model, by a NVWY multi-step compound one or by either a pre-equilibrium exciton model with cluster emission (PCROSS) or by another with full angular momentum coupling (DEGAS). Finally, the compound nucleus decay is described by the full featured Hauser-Feshbach model with γ-cascade and width-fluctuations. Advanced treatment of the fission channel takes into account transmission through a multiple-humped fission barrier with absorption in the wells. The fission probability is derived in the WKB approximation within the optical model of fission. Several options for nuclear level densities include the EMPIRE-specific approach, which accounts for the effects of the dynamic deformation of a fast rotating nucleus, the classical Gilbert-Cameron approach and pre-calculated tables obtained with a microscopic model based on HFB single-particle level schemes with collective enhancement. A comprehensive library of input parameters covers nuclear masses, optical model parameters, ground state deformations, discrete levels and decay schemes, level densities, fission barriers, moments of inertia and γ-ray strength functions. The results can be converted into ENDF-6 formatted files using the accompanying code EMPEND and completed with neutron resonances extracted from the existing evaluations. The package contains the full EXFOR (CSISRS) library of experimental reaction data that are automatically retrieved during the calculations. Publication quality graphs can be obtained using the powerful and flexible plotting package ZVView. The graphic user interface, written in Tcl/Tk, provides for easy operation of the system. This paper describes the capabilities of the code, outlines physical models and indicates parameter libraries used by EMPIRE to predict reaction cross sections and spectra, mainly for nucleon-induced reactions. Selected applications of EMPIRE are discussed, the most important being an extensive use of the code in evaluations of neutron reactions for the new US library ENDF/B-VII.0. Future extensions of the system are outlined, including a neutron resonance module as well as capabilities of generating covariances, using both KALMAN and Monte-Carlo methods, that are still being advanced and refined.

  7. Time-dependent simulations of disk-embedded planetary atmospheres

    NASA Astrophysics Data System (ADS)

    Stökl, A.; Dorfi, E. A.

    2014-03-01

    At the early stages of evolution of planetary systems, young Earth-like planets still embedded in the protoplanetary disk accumulate disk gas gravitationally into planetary atmospheres. The established way to study such atmospheres is hydrostatic models, even though in many cases the assumption of stationarity is unlikely to be fulfilled. Furthermore, such models rely on the specification of a planetary luminosity, attributed to a continuous, highly uncertain accretion of planetesimals onto the surface of the solid core. We present for the first time time-dependent, dynamic simulations of the accretion of nebula gas into an atmosphere around a proto-planet and the evolution of such embedded atmospheres while integrating the thermal energy budget of the solid core. The spherical symmetric models computed with the TAPIR-Code (short for The adaptive, implicit RHD-Code) range from the surface of the rocky core up to the Hill radius where the surrounding protoplanetary disk provides the boundary conditions. The TAPIR-Code includes the hydrodynamics equations, gray radiative transport and convective energy transport. The results indicate that disk-embedded planetary atmospheres evolve along comparatively simple outlines and in particular settle, dependent on the mass of the solid core, at characteristic surface temperatures and planetary luminosities, quite independent of numerical parameters and initial conditions. For sufficiently massive cores, this evolution ultimately also leads to runaway accretion and the formation of a gas planet.

  8. Energy and Environment Guide to Action - Chapter 4.3: Building Codes for Energy Efficiency

    EPA Pesticide Factsheets

    Provides guidance and recommendations for establishing, implementing, and evaluating state building codes for energy efficiency, which improve energy efficiency in new construction and major renovations. State success stories are included for reference.

  9. Evaluation of a new neutron energy spectrum unfolding code based on an Adaptive Neuro-Fuzzy Inference System (ANFIS).

    PubMed

    Hosseini, Seyed Abolfazl; Esmaili Paeen Afrakoti, Iman

    2018-01-17

    The purpose of the present study was to reconstruct the energy spectrum of a poly-energetic neutron source using an algorithm developed based on an Adaptive Neuro-Fuzzy Inference System (ANFIS). ANFIS is a kind of artificial neural network based on the Takagi-Sugeno fuzzy inference system. The ANFIS algorithm uses the advantages of both fuzzy inference systems and artificial neural networks to improve the effectiveness of algorithms in various applications such as modeling, control and classification. The neutron pulse height distributions used as input data in the training procedure for the ANFIS algorithm were obtained from the simulations performed by the MCNPX-ESUT computational code (MCNPX-Energy engineering of Sharif University of Technology). Taking into account the normalization condition of each energy spectrum, 4300 neutron energy spectra were generated randomly. (The value in each bin was generated randomly, and finally a normalization of each generated energy spectrum was performed.) The randomly generated neutron energy spectra were considered as output data of the developed ANFIS computational code in the training step. To calculate the neutron energy spectrum using conventional methods, an inverse problem with an approximately singular response matrix (with the determinant of the matrix close to zero) should be solved. Solving the inverse problem with conventional methods unfolds the neutron energy spectrum with low accuracy. Applying iterative algorithms to such a problem, or utilizing intelligent algorithms (in which there is no need to solve the inverse problem), is usually preferred for unfolding of the energy spectrum. Therefore, the main reason for development of intelligent algorithms like ANFIS for unfolding of neutron energy spectra is to avoid solving the inverse problem. In the present study, the unfolded neutron energy spectra of 252Cf and 241Am-9Be neutron sources using the developed computational code were found to have excellent agreement with the reference data. Also, the unfolded energy spectra of the neutron sources as obtained using ANFIS were more accurate than the results reported from calculations performed using artificial neural networks in previously published papers. © The Author(s) 2018. Published by Oxford University Press on behalf of The Japan Radiation Research Society and Japanese Society for Radiation Oncology.
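
    To make the ill-conditioning argument concrete, the sketch below shows the conventional route the abstract warns about: unfolding a spectrum phi from measured pulse heights m through a nearly singular response matrix R (m = R phi), stabilized with Tikhonov regularization. It is a hedged contrast case, not the ANFIS method, and the toy response matrix is synthetic.

        # Hedged sketch of conventional (regularized) unfolding for contrast with
        # the ANFIS approach: solve m = R @ phi with a nearly singular R via
        # Tikhonov regularization. Matrix values are synthetic placeholders.
        import numpy as np

        def tikhonov_unfold(R, m, alpha=1e-3):
            """Solve min ||R phi - m||^2 + alpha ||phi||^2 in closed form."""
            n = R.shape[1]
            return np.linalg.solve(R.T @ R + alpha * np.eye(n), R.T @ m)

        rng = np.random.default_rng(1)
        R = np.triu(np.ones((8, 8))) * 0.1 + 1e-6 * rng.random((8, 8))   # ill-conditioned toy response
        phi_true = rng.random(8)
        m = R @ phi_true
        print(np.round(tikhonov_unfold(R, m, alpha=1e-8), 3), np.round(phi_true, 3))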

  10. Assessing Potential Energy Cost Savings from Increased Energy Code Compliance in Commercial Buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosenberg, Michael I.; Hart, Philip R.; Athalye, Rahul A.

    The US Department of Energy’s most recent commercial energy code compliance evaluation efforts focused on determining a percent compliance rating for states to help them meet requirements under the American Recovery and Reinvestment Act (ARRA) of 2009. That approach included a checklist of code requirements, each of which was graded pass or fail. Percent compliance for any given building was simply the percent of individual requirements that passed. With its binary approach to compliance determination, the previous methodology failed to answer some important questions. In particular, how much energy cost could be saved by better compliance with the commercial energy code and what are the relative priorities of code requirements from an energy cost savings perspective? This paper explores an analytical approach and pilot study using a single building type and climate zone to answer those questions.
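
    The shift the paper explores, from counting passed checklist items to weighting each requirement by its potential energy cost savings, can be illustrated with a small hedged example; the items and dollar figures below are hypothetical, not taken from the pilot study.

        # Hedged sketch of the two compliance metrics contrasted above: the legacy
        # percent-of-items-passed score versus a score weighted by each
        # requirement's potential energy cost savings (items are hypothetical).
        def checklist_compliance(results):
            """results: list of (passed: bool, annual_cost_savings_usd) tuples."""
            return 100.0 * sum(p for p, _ in results) / len(results)

        def cost_weighted_compliance(results):
            total = sum(s for _, s in results)
            captured = sum(s for p, s in results if p)
            return 100.0 * captured / total

        items = [(True, 120.0),    # e.g. lighting power allowance met
                 (True, 40.0),     # e.g. vestibule requirement met
                 (False, 900.0)]   # e.g. economizer requirement failed
        print(round(checklist_compliance(items), 1), round(cost_weighted_compliance(items), 1))
        # -> 66.7 vs 15.1: the same building looks very different under the two metrics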

  11. Comparison of equilibrium ohmic and nonequilibrium swarm models for monitoring conduction electron evolution in high-altitude EMP calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pusateri, Elise N.; Morris, Heidi E.; Nelson, Eric

    2016-10-17

    Here, atmospheric electromagnetic pulse (EMP) events are important physical phenomena that occur through both man-made and natural processes. Radiation-induced currents and voltages in EMP can couple with electrical systems, such as those found in satellites, and cause significant damage. Due to the disruptive nature of EMP, it is important to accurately predict EMP evolution and propagation with computational models. CHAP-LA (Compton High Altitude Pulse-Los Alamos) is a state-of-the-art EMP code that solves Maxwell's equations for gamma source-induced electromagnetic fields in the atmosphere. In EMP, low-energy, conduction electrons constitute a conduction current that limits the EMP by opposing the Compton current. CHAP-LA calculates the conduction current using an equilibrium ohmic model. The equilibrium model works well at low altitudes, where the electron energy equilibration time is short compared to the rise time or duration of the EMP. At high altitudes, the equilibration time increases beyond the EMP rise time and the predicted equilibrium ionization rate becomes very large. The ohmic model predicts an unphysically large production of conduction electrons which prematurely and abruptly shorts the EMP in the simulation code. An electron swarm model, which implicitly accounts for the time evolution of the conduction electron energy distribution, can be used to overcome the limitations exhibited by the equilibrium ohmic model. We have developed and validated an electron swarm model previously in Pusateri et al. (2015). Here we demonstrate EMP damping behavior caused by the ohmic model at high altitudes and show improvements on high-altitude, upward EMP modeling obtained by integrating a swarm model into CHAP-LA.

  12. Comparison of equilibrium ohmic and nonequilibrium swarm models for monitoring conduction electron evolution in high-altitude EMP calculations

    NASA Astrophysics Data System (ADS)

    Pusateri, Elise N.; Morris, Heidi E.; Nelson, Eric; Ji, Wei

    2016-10-01

    Atmospheric electromagnetic pulse (EMP) events are important physical phenomena that occur through both man-made and natural processes. Radiation-induced currents and voltages in EMP can couple with electrical systems, such as those found in satellites, and cause significant damage. Due to the disruptive nature of EMP, it is important to accurately predict EMP evolution and propagation with computational models. CHAP-LA (Compton High Altitude Pulse-Los Alamos) is a state-of-the-art EMP code that solves Maxwell's equations for gamma source-induced electromagnetic fields in the atmosphere. In EMP, low-energy, conduction electrons constitute a conduction current that limits the EMP by opposing the Compton current. CHAP-LA calculates the conduction current using an equilibrium ohmic model. The equilibrium model works well at low altitudes, where the electron energy equilibration time is short compared to the rise time or duration of the EMP. At high altitudes, the equilibration time increases beyond the EMP rise time and the predicted equilibrium ionization rate becomes very large. The ohmic model predicts an unphysically large production of conduction electrons which prematurely and abruptly shorts the EMP in the simulation code. An electron swarm model, which implicitly accounts for the time evolution of the conduction electron energy distribution, can be used to overcome the limitations exhibited by the equilibrium ohmic model. We have developed and validated an electron swarm model previously in Pusateri et al. (2015). Here we demonstrate EMP damping behavior caused by the ohmic model at high altitudes and show improvements on high-altitude, upward EMP modeling obtained by integrating a swarm model into CHAP-LA.
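
    The contrast between the two conduction-current treatments can be caricatured in zero dimensions: an equilibrium ohmic model responds to the field instantly (J = sigma E), while a swarm-like treatment lets the current relax toward that value over a finite equilibration time. The sketch below is only a hedged illustration of that qualitative difference; the time constants and field shape are placeholders, not CHAP-LA or swarm-model parameters.

        # Hedged, zero-dimensional sketch: equilibrium ohmic conduction current
        # versus a relaxation model with a finite equilibration time tau.
        import numpy as np

        def conduction_currents(E_of_t, sigma_eq, tau, dt, n_steps):
            t = np.arange(n_steps) * dt
            j_ohmic = sigma_eq * E_of_t(t)                 # equilibrium model: J = sigma*E
            j_relax = np.zeros(n_steps)                    # swarm-like relaxation model
            for i in range(1, n_steps):
                dj = (sigma_eq * E_of_t(t[i]) - j_relax[i - 1]) / tau
                j_relax[i] = j_relax[i - 1] + dt * dj
            return t, j_ohmic, j_relax

        # A fast-rising EMP-like field: the relaxation current lags the ohmic one,
        # which is the regime (high altitude, long equilibration time) where the
        # equilibrium model over-damps the pulse.
        t, j_eq, j_sw = conduction_currents(lambda t: 1.0 - np.exp(-t / 2e-9),
                                            sigma_eq=1.0, tau=50e-9, dt=1e-10, n_steps=400)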

  13. On-the-Fly Generation of Differential Resonance Scattering Probability Distribution Functions for Monte Carlo Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davidson, Eva E.; Martin, William R.

    Current Monte Carlo codes use one of three models: (1) the asymptotic scattering model, (2) the free gas scattering model, or (3) the S(α,β) model, depending on the neutron energy and the specific Monte Carlo code. This thesis addresses the consequences of using the free gas scattering model, which assumes that the neutron interacts with atoms in thermal motion in a monatomic gas in thermal equilibrium at material temperature, T. Most importantly, the free gas model assumes the scattering cross section is constant over the neutron energy range, which is usually a good approximation for light nuclei, but not for heavy nuclei where the scattering cross section may have several resonances in the epithermal region. Several researchers in the field have shown that the exact resonance scattering model is temperature-dependent, and neglecting the resonances in the lower epithermal range can under-predict resonance absorption due to the upscattering phenomenon mentioned above, leading to an over-prediction of keff by several hundred pcm. Existing methods to address this issue involve changing the neutron weights or implementing an extra rejection scheme in the free gas sampling scheme, and these all involve performing the collision analysis in the center-of-mass frame, followed by a conversion back to the laboratory frame to continue the random walk of the neutron. The goal of this paper was to develop a sampling methodology that (1) accounted for the energy-dependent scattering cross sections in the collision analysis and (2) was performed in the laboratory frame, avoiding the conversion to the center-of-mass frame. The energy dependence of the scattering cross section was modeled with even-ordered polynomials (2nd and 4th order) to approximate the scattering cross section in Blackshaw’s equations for the moments of the differential scattering PDFs. These moments were used to sample the outgoing neutron speed and angle in the laboratory frame on-the-fly during the random walk of the neutron. Results for criticality studies on fuel pin and fuel assembly calculations using methods developed in this dissertation showed very close comparison to results using the reference Doppler-broadened rejection correction (DBRC) scheme.
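
    For context, the reference scheme mentioned at the end (DBRC) augments the standard free-gas target-velocity sampling with one extra acceptance test on the 0 K scattering cross section evaluated at the relative speed. The sketch below follows that widely published recipe for sampling the target speed and direction cosine; it is not the laboratory-frame, polynomial-moment method developed in this dissertation, and sigma_s below is a hypothetical resonance shape.

        # Hedged sketch of free-gas target-velocity sampling with the extra DBRC
        # rejection step, which restores the resonance structure that the plain
        # constant-cross-section free-gas model ignores.
        import numpy as np

        rng = np.random.default_rng(2)

        def sample_target_speed_dbrc(v_n, beta, sigma_s, sigma_max):
            """v_n: neutron speed; beta = sqrt(M/(2kT)); returns (V, mu, v_rel)."""
            y = beta * v_n
            while True:
                # Proposal ~ (v_n + V) V^2 exp(-beta^2 V^2), sampled as a mixture.
                if rng.random() * (np.sqrt(np.pi) * y + 2.0) < np.sqrt(np.pi) * y:
                    x = np.sqrt(-np.log(rng.random())
                                - np.log(rng.random()) * np.cos(0.5 * np.pi * rng.random()) ** 2)
                else:
                    x = np.sqrt(-np.log(rng.random()) - np.log(rng.random()))
                V = x / beta
                mu = 2.0 * rng.random() - 1.0
                v_rel = np.sqrt(v_n ** 2 + V ** 2 - 2.0 * v_n * V * mu)
                if rng.random() * (v_n + V) > v_rel:          # standard free-gas rejection
                    continue
                if rng.random() * sigma_max > sigma_s(v_rel):  # extra DBRC rejection
                    continue
                return V, mu, v_rel

        # Example with a hypothetical single-resonance shape (arbitrary units):
        sigma_s = lambda vr: 1.0 + 50.0 / (1.0 + ((vr - 3000.0) / 50.0) ** 2)
        print(sample_target_speed_dbrc(v_n=2980.0, beta=1.0 / 400.0,
                                       sigma_s=sigma_s, sigma_max=51.0))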

  14. On-the-Fly Generation of Differential Resonance Scattering Probability Distribution Functions for Monte Carlo Codes

    DOE PAGES

    Davidson, Eva E.; Martin, William R.

    2017-05-26

    Current Monte Carlo codes use one of three models: (1) the asymptotic scattering model, (2) the free gas scattering model, or (3) the S(α,β) model, depending on the neutron energy and the specific Monte Carlo code. This thesis addresses the consequences of using the free gas scattering model, which assumes that the neutron interacts with atoms in thermal motion in a monatomic gas in thermal equilibrium at material temperature, T. Most importantly, the free gas model assumes the scattering cross section is constant over the neutron energy range, which is usually a good approximation for light nuclei, but not for heavy nuclei where the scattering cross section may have several resonances in the epithermal region. Several researchers in the field have shown that the exact resonance scattering model is temperature-dependent, and neglecting the resonances in the lower epithermal range can under-predict resonance absorption due to the upscattering phenomenon mentioned above, leading to an over-prediction of keff by several hundred pcm. Existing methods to address this issue involve changing the neutron weights or implementing an extra rejection scheme in the free gas sampling scheme, and these all involve performing the collision analysis in the center-of-mass frame, followed by a conversion back to the laboratory frame to continue the random walk of the neutron. The goal of this paper was to develop a sampling methodology that (1) accounted for the energy-dependent scattering cross sections in the collision analysis and (2) was performed in the laboratory frame, avoiding the conversion to the center-of-mass frame. The energy dependence of the scattering cross section was modeled with even-ordered polynomials (2nd and 4th order) to approximate the scattering cross section in Blackshaw’s equations for the moments of the differential scattering PDFs. These moments were used to sample the outgoing neutron speed and angle in the laboratory frame on-the-fly during the random walk of the neutron. Results for criticality studies on fuel pin and fuel assembly calculations using methods developed in this dissertation showed very close comparison to results using the reference Doppler-broadened rejection correction (DBRC) scheme.

  15. ASR4: A computer code for fitting and processing 4-gage anelastic strain recovery data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warpinski, N.R.

    A computer code for analyzing four-gage Anelastic Strain Recovery (ASR) data has been modified for use on a personal computer. This code fits the viscoelastic model of Warpinski and Teufel to measured ASR data, calculates the stress orientation directly, and computes stress magnitudes if sufficient input data are available. The code also calculates the stress orientation using strain-rosette equations, and it calculates stress magnitudes using Blanton's approach, assuming sufficient input data are available. The program is written in FORTRAN, compiled with Ryan-McFarland Version 2.4. Graphics use PLOT88 software by Plotworks, Inc., but the graphics software must be obtained by the user because of licensing restrictions. A version without graphics can also be run. This code is available through the National Energy Software Center (NESC), operated by Argonne National Laboratory. 5 refs., 3 figs.

  16. The effects of nuclear data library processing on Geant4 and MCNP simulations of the thermal neutron scattering law

    NASA Astrophysics Data System (ADS)

    Hartling, K.; Ciungu, B.; Li, G.; Bentoumi, G.; Sur, B.

    2018-05-01

    Monte Carlo codes such as MCNP and Geant4 rely on a combination of physics models and evaluated nuclear data files (ENDF) to simulate the transport of neutrons through various materials and geometries. The grid representation used to represent the final-state scattering energies and angles associated with neutron scattering interactions can significantly affect the predictions of these codes. In particular, the default thermal scattering libraries used by MCNP6.1 and Geant4.10.3 do not accurately reproduce the ENDF/B-VII.1 model in simulations of the double-differential cross section for thermal neutrons interacting with hydrogen nuclei in a thin layer of water. However, agreement between model and simulation can be achieved within the statistical error by re-processing ENDF/B-VII.1 thermal scattering libraries with the NJOY code. The structure of the thermal scattering libraries and sampling algorithms in MCNP and Geant4 are also reviewed.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosenberg, Michael; Jonlin, Duane; Nadel, Steven

    Today’s building energy codes focus on prescriptive requirements for features of buildings that are directly controlled by the design and construction teams and verifiable by municipal inspectors. Although these code requirements have had a significant impact, they fail to influence a large slice of the building energy use pie – including not only miscellaneous plug loads, cooking equipment and commercial/industrial processes, but the maintenance and optimization of the code-mandated systems as well. Currently, code compliance is verified only through the end of construction, and there are no limits or consequences for the actual energy use in an occupied building. In the future, our suite of energy regulations will likely expand to include building efficiency, energy use or carbon emission budgets over their full life cycle. Intelligent building systems, extensive renewable energy, and a transition from fossil fuel to electric heating systems will likely be required to meet ultra-low-energy targets. This paper lays out the authors’ perspectives on how buildings may evolve over the course of the 21st century and the roles that codes and regulations will play in shaping those buildings of the future.

  18. Plasma Separation Process: Betacell (BCELL) code: User's manual. [Bipolar barrier junction]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taherzadeh, M.

    1987-11-13

    The emergence of clearly defined applications for (small or large) amounts of long-life and reliable power sources has given the design and production of betavoltaic systems a new life. Moreover, because of the availability of the plasma separation program (PSP) at TRW, it is now possible to separate the most desirable radioisotopes for betacell power generating devices. A computer code, named BCELL, has been developed to model the betavoltaic concept by utilizing the available up-to-date source/cell parameters. In this program, attempts have been made to determine the betacell energy device maximum efficiency, degradation due to the emitting source radiation, and source/cell lifetime power reduction processes. Additionally, comparison is made between the Schottky and PN junction devices for betacell battery design purposes. Certain computer code runs have been made to determine the JV distribution function and the upper limit of the betacell generated power for specified energy sources. A Ni beta-emitting radioisotope was used for the energy source and certain semiconductors were used for the converter subsystem of the betacell system. Some results for a Promethium source are also given here for comparison. 16 refs.

  19. Ultraviolet Communication for Medical Applications

    DTIC Science & Technology

    2015-06-01

    In the previous Phase I effort, Directed Energy Inc.’s (DEI) parent company Imaging Systems Technology (IST) demonstrated feasibility of several key...accurately model high path loss. Custom photon scatter code was rewritten for parallel execution on a graphics processing unit (GPU). The NVidia CUDA

  20. Building America Top Innovations 2012: Building Science-Based Climate Maps

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    none,

    2013-01-01

    This Building America Top Innovations profile describes the Building America-developed climate zone map, which serves as a consistent framework for energy-efficiency requirements in the national model energy code starting with the 2004 IECC Supplement and the ASHRAE 90.1 2004 edition. The map also provides a critical foundation for climate-specific guidance in the widely disseminated EEBA Builder Guides and Building America Best Practice Guides.

  1. Use of SCALE Continuous-Energy Monte Carlo Tools for Eigenvalue Sensitivity Coefficient Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perfetti, Christopher M; Rearden, Bradley T

    2013-01-01

    The TSUNAMI code within the SCALE code system makes use of eigenvalue sensitivity coefficients for an extensive number of criticality safety applications, such as quantifying the data-induced uncertainty in the eigenvalue of critical systems, assessing the neutronic similarity between different critical systems, and guiding nuclear data adjustment studies. The need to model geometrically complex systems with improved fidelity and the desire to extend TSUNAMI analysis to advanced applications has motivated the development of a methodology for calculating sensitivity coefficients in continuous-energy (CE) Monte Carlo applications. The CLUTCH and Iterated Fission Probability (IFP) eigenvalue sensitivity methods were recently implemented in the CE KENO framework to generate the capability for TSUNAMI-3D to perform eigenvalue sensitivity calculations in continuous-energy applications. This work explores the improvements in accuracy that can be gained in eigenvalue and eigenvalue sensitivity calculations through the use of the SCALE CE KENO and CE TSUNAMI continuous-energy Monte Carlo tools as compared to multigroup tools. The CE KENO and CE TSUNAMI tools were used to analyze two difficult models of critical benchmarks, and produced eigenvalue and eigenvalue sensitivity coefficient results that showed a marked improvement in accuracy. The CLUTCH sensitivity method in particular excelled in terms of efficiency and computational memory requirements.

  2. ORBIT modelling of fast particle redistribution induced by sawtooth instability

    NASA Astrophysics Data System (ADS)

    Kim, Doohyun; Podestà, Mario; Poli, Francesca; Princeton Plasma Physics Laboratory Team

    2017-10-01

    Initial tests on NSTX-U show that introducing energy selectivity for sawtooth (ST) induced fast ion redistribution improves the agreement between experimental and simulated quantities, e.g. neutron rate. Thus, it is expected that a proper description of the fast particle redistribution due to ST can improve the modelling of ST instability and interpretation of experiments using a transport code. In this work, we use the ORBIT code to characterise the redistribution of fast particles. In order to simulate a ST crash, a spatial and temporal displacement is implemented as $\xi(\rho, t, \theta, \phi) = \sum_{m,n} \xi_{mn}(\rho, t) \cos(m\theta + n\phi)$ to produce perturbed magnetic fields from the equilibrium field $\vec{B}$, $\delta\vec{B} = \nabla \times (\vec{\xi} \times \vec{B})$, which affect the fast particle distribution. From ORBIT simulations, we find suitable amplitudes of $\xi$ for each ST crash to reproduce the experimental results. The comparison of the simulation and the experimental results will be discussed, as well as the dependence of fast ion redistribution on fast ion phase space variables (i.e. energy, magnetic moment and toroidal angular momentum). Work supported by the U.S. Department of Energy, Office of Science, Office of Fusion Energy Sciences under Contract Number DE-AC02-09CH11466.
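
    The displacement above is straightforward to evaluate numerically. The short sketch below evaluates a single (m, n) = (1, 1) helical mode with an assumed radial and temporal envelope; the envelope, amplitude, and mode numbers are illustrative guesses, not the values used in the ORBIT runs.

      import numpy as np

      def xi_mn(rho, t, amplitude=1e-3, rho_mix=0.4, tau=1e-4):
          # Assumed envelope: linear ramp in time, vanishing outside the mixing radius.
          radial = max(1.0 - rho / rho_mix, 0.0)
          return amplitude * radial * min(t / tau, 1.0)

      def displacement(rho, t, theta, phi, modes=((1, 1),)):
          # xi(rho, t, theta, phi) = sum over (m, n) of xi_mn(rho, t) * cos(m*theta + n*phi)
          return sum(xi_mn(rho, t) * np.cos(m * theta + n * phi) for m, n in modes)

      print(displacement(rho=0.2, t=5e-5, theta=0.3, phi=1.0))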

  3. Final Report from The University of Texas at Austin for DEGAS: Dynamic Global Address Space programming environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erez, Mattan; Yelick, Katherine; Sarkar, Vivek

    The Dynamic, Exascale Global Address Space programming environment (DEGAS) project will develop the next generation of programming models and runtime systems to meet the challenges of Exascale computing. Our approach is to provide an efficient and scalable programming model that can be adapted to application needs through the use of dynamic runtime features and domain-specific languages for computational kernels. We address the following technical challenges: Programmability: Rich set of programming constructs based on a Hierarchical Partitioned Global Address Space (HPGAS) model, demonstrated in UPC++. Scalability: Hierarchical locality control, lightweight communication (extended GASNet), and efficient synchronization mechanisms (Phasers). Performance Portability: Just-in-time specialization (SEJITS) for generating hardware-specific code and scheduling libraries for domain-specific adaptive runtimes (Habanero). Energy Efficiency: Communication-optimal code generation to optimize energy efficiency by reducing data movement. Resilience: Containment Domains for flexible, domain-specific resilience, using state capture mechanisms and lightweight, asynchronous recovery mechanisms. Interoperability: Runtime and language interoperability with MPI and OpenMP to encourage broad adoption.

  4. Capturing atmospheric effects on 3D millimeter wave radar propagation patterns

    NASA Astrophysics Data System (ADS)

    Cook, Richard D.; Fiorino, Steven T.; Keefer, Kevin J.; Stringer, Jeremy

    2016-05-01

    Traditional radar propagation modeling is done using a path transmittance with little to no input for weather and atmospheric conditions. As radar advances into the millimeter wave (MMW) regime, atmospheric effects such as attenuation and refraction become more pronounced than at traditional radar wavelengths. The DoD High Energy Laser Joint Technology Office's High Energy Laser End-to-End Operational Simulation (HELEEOS), in combination with the Laser Environmental Effects Definition and Reference (LEEDR) code, has shown great promise simulating atmospheric effects on laser propagation. Indeed, the LEEDR radiative transfer code has been validated in the UV through RF. Our research attempts to apply these models to characterize the far field radar pattern in three dimensions as a signal propagates from an antenna towards a point in space. Furthermore, we do so using realistic three dimensional atmospheric profiles. The results from these simulations are compared to those from traditional radar propagation software packages. In summary, a fast running method has been investigated which can be incorporated into computational models to enhance understanding and prediction of MMW propagation through various atmospheric and weather conditions.

  5. Department of Energy's Virtual Lab Infrastructure for Integrated Earth System Science Data

    NASA Astrophysics Data System (ADS)

    Williams, D. N.; Palanisamy, G.; Shipman, G.; Boden, T.; Voyles, J.

    2014-12-01

    The U.S. Department of Energy (DOE) Office of Biological and Environmental Research (BER) Climate and Environmental Sciences Division (CESD) produces a diversity of data, information, software, and model codes across its research and informatics programs and facilities. This information includes raw and reduced observational and instrumentation data, model codes, model-generated results, and integrated data products. Currently, most of this data and information are prepared and shared for program-specific activities, corresponding to CESD organization research. A major challenge facing BER CESD is how best to inventory, integrate, and deliver these vast and diverse resources for the purpose of accelerating Earth system science research. This talk provides a concept for a CESD Integrated Data Ecosystem and an initial roadmap for its implementation to address this integration challenge in the "Big Data" domain. Towards this end, a new BER Virtual Laboratory Infrastructure will be presented, which will include services and software connecting the heterogeneous CESD data holdings and will be constructed with open source software based on industry standards, protocols, and state-of-the-art technology.

  6. 76 FR 57982 - Building Energy Codes Cost Analysis

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-19

    ... DEPARTMENT OF ENERGY Office of Energy Efficiency and Renewable Energy [Docket No. EERE-2011-BT-BC-0046] Building Energy Codes Cost Analysis Correction In notice document 2011-23236 beginning on page... heading ``Table 1. Cash flow components'' should read ``Table 7. Cash flow components''. [FR Doc. C1-2011...

  7. Energy Extraction from a Hypothetical MHK Array in a Section of the Mississippi River

    NASA Astrophysics Data System (ADS)

    Barco, J.; James, S. C.; Roberts, J. D.; Jones, C. A.; Jepsen, R. A.

    2010-12-01

    The world is facing many challenges meeting the energy demands for the future. Growing populations and developing economies as well as increasing energy expenditures highlight the need for a spectrum of energy sources. Concerns about pollution and climate change have led to increased interest in all forms of renewable energy to stabilize or decrease consumption of fossil fuels. One promising renewable is marine and hydrokinetic (MHK) energy, which has the potential to make important contributions to energy portfolios of the future. However, a primary question remains: How much energy can be extracted from MHK devices in rivers and oceans without significant environmental effects? This study focuses on the potential energy extraction from different hypothetical MHK array configurations in a section of the Mississippi River located near Scotlandville Bend, Louisiana. Bathymetry data, obtained from Free Flow Power Corporation (FFP) via the US Army Corps bathymetry survey library, were interpolated onto a DELFT3D curvilinear, orthogonal grid of the system using ArcGIS 9.3.1. Boundary conditions are constrained by the upstream and downstream river flow rates and gage heights obtained from the USGS website. Acoustic Doppler Current Profiler (ADCP) measurements obtained from FFP are used for pre-array model validation. Energy extraction is simulated using momentum sinks recently coded into SNL-EFDC, which is an augmented version of US EPA’s Environmental Fluid Dynamics Code (EFDC). The SNL-EFDC model includes a new module that accounts for energy removal by MHK devices and commensurate changes to the turbulent kinetic energy and turbulent kinetic energy dissipation rate. As expected, average velocities decrease downstream of each MHK device due to energy extraction and blunt-body form drag from the MHK support structures. Changes in the flow field can alter sediment transport dynamics around and downstream of an MHK array; various hypothetical scenarios are examined. This study highlights concepts that should be considered when planning, designing, and optimizing MHK device arrays in riverine resources. Future efforts will focus on validating and verifying these sorts of models as data become available.
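
    A momentum-sink representation of a turbine can be sketched in a few lines. The toy functions below mimic the general idea (an actuator-disk style thrust removed from the grid cell containing the device); the thrust and power coefficients, rotor area, and cell volume are illustrative assumptions and this is not the SNL-EFDC implementation.

      def mhk_momentum_sink(u, thrust_coeff=0.8, rotor_area=20.0, cell_volume=500.0,
                            rho_water=1000.0):
          # Momentum sink (N/m^3) opposing the local flow velocity u (m/s).
          thrust = 0.5 * rho_water * thrust_coeff * rotor_area * u * abs(u)
          return -thrust / cell_volume

      def mhk_power(u, power_coeff=0.4, rotor_area=20.0, rho_water=1000.0):
          # Power extracted by the device (W) for the same local velocity.
          return 0.5 * rho_water * power_coeff * rotor_area * abs(u) ** 3

      print(mhk_momentum_sink(2.0), mhk_power(2.0))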

  8. Finite element code development for modeling detonation of HMX composites

    NASA Astrophysics Data System (ADS)

    Duran, Adam V.; Sundararaghavan, Veera

    2017-01-01

    In this work, we present a hydrodynamics code for modeling shock and detonation waves in HMX. A stable, efficient solution strategy based on a Taylor-Galerkin finite element (FE) discretization was developed to solve the reactive Euler equations. In our code, well-calibrated equations of state for the solid unreacted material and gaseous reaction products have been implemented, along with a chemical reaction scheme and a mixing rule to define the properties of partially reacted states. A linear Gruneisen equation of state was employed for the unreacted HMX, calibrated from experiments. The JWL form was used to model the EOS of gaseous reaction products. It is assumed that the unreacted explosive and reaction products are in both pressure and temperature equilibrium. The overall specific volume and internal energy were computed using the rule of mixtures. An Arrhenius kinetics scheme was integrated to model the chemical reactions. A locally controlled dissipation was introduced that induces a non-oscillatory stabilized scheme for the shock front. The FE model was validated using analytical solutions for the Sod shock tube and ZND strong detonation models. Benchmark problems are presented for geometries in which a single HMX crystal is subjected to a shock condition.
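
    For reference, the JWL product equation of state mentioned above has the closed form p(v, e) = A(1 - ω/(R1 v))exp(-R1 v) + B(1 - ω/(R2 v))exp(-R2 v) + ωe/v. The sketch below evaluates it with representative HMX-class constants chosen only for illustration; they are not the calibration used in the paper.

      import math

      def jwl_pressure(v_rel, e_vol, A=778.3e9, B=7.071e9, R1=4.2, R2=1.0, omega=0.30):
          # v_rel: specific volume relative to the unreacted solid (dimensionless)
          # e_vol: internal energy per unit initial volume (Pa); constants are illustrative
          return (A * (1.0 - omega / (R1 * v_rel)) * math.exp(-R1 * v_rel)
                  + B * (1.0 - omega / (R2 * v_rel)) * math.exp(-R2 * v_rel)
                  + omega * e_vol / v_rel)

      print(f"{jwl_pressure(2.2, 6.0e9):.3e} Pa")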

  9. Spherical combustion clouds in explosions

    NASA Astrophysics Data System (ADS)

    Kuhl, A. L.; Bell, J. B.; Beckner, V. E.; Balakrishnan, K.; Aspden, A. J.

    2013-05-01

    This study explores the properties of spherical combustion clouds in explosions. Two cases are investigated: (1) detonation of a TNT charge and combustion of its detonation products with air, and (2) shock dispersion of aluminum powder and its combustion with air. The evolution of the blast wave and ensuing combustion cloud dynamics are studied via numerical simulations with our adaptive mesh refinement combustion code. The code solves the multi-phase conservation laws for a dilute heterogeneous continuum as formulated by Nigmatulin. Single-phase combustion (e.g., TNT with air) is modeled in the fast-chemistry limit. Two-phase combustion (e.g., Al powder with air) uses an induction time model based on Arrhenius fits to Boiko's shock tube data, along with an ignition temperature criterion based on fits to Gurevich's data, and an ignition probability model that accounts for multi-particle effects on cloud ignition. Equations of state are based on polynomial fits to thermodynamic calculations with the Cheetah code, assuming frozen reactants and equilibrium products. Adaptive mesh refinement is used to resolve thin reaction zones and capture the energy-bearing scales of turbulence on the computational mesh (ILES approach). Taking advantage of the symmetry of the problem, azimuthal averaging was used to extract the mean and rms fluctuations from the numerical solution, including: thermodynamic profiles, kinematic profiles, and reaction-zone profiles across the combustion cloud. Fuel consumption was limited to ~60-70%, due to the limited amount of air a spherical combustion cloud can entrain before the turbulent velocity field decays away. Turbulent kinetic energy spectra of the solution were found to have both rotational and dilatational components, due to compressibility effects. The dilatational component was typically about 1% of the rotational component; both seemed to preserve their spectra as they decayed. Kinetic energy of the blast wave decayed due to the pressure field. Turbulent kinetic energy of the combustion cloud decayed due to enstrophy $\overline{\omega^2}$ and dilatation $\overline{\Delta^2}$.

  10. Proteus two-dimensional Navier-Stokes computer code, version 2.0. Volume 1: Analysis description

    NASA Technical Reports Server (NTRS)

    Towne, Charles E.; Schwab, John R.; Bui, Trong T.

    1993-01-01

    A computer code called Proteus 2D was developed to solve the two-dimensional planar or axisymmetric, Reynolds-averaged, unsteady compressible Navier-Stokes equations in strong conservation law form. The objective in this effort was to develop a code for aerospace propulsion applications that is easy to use and easy to modify. Code readability, modularity, and documentation were emphasized. The governing equations are solved in generalized nonorthogonal body-fitted coordinates, by marching in time using a fully-coupled ADI solution procedure. The boundary conditions are treated implicitly. All terms, including the diffusion terms, are linearized using second-order Taylor series expansions. Turbulence is modeled using either an algebraic or two-equation eddy viscosity model. The thin-layer or Euler equations may also be solved. The energy equation may be eliminated by the assumption of constant total enthalpy. Explicit and implicit artificial viscosity may be used. Several time step options are available for convergence acceleration. The documentation is divided into three volumes. This is the Analysis Description, and presents the equations and solution procedure. The governing equations, the turbulence model, the linearization of the equations and boundary conditions, the time and space differencing formulas, the ADI solution procedure, and the artificial viscosity models are described in detail.

  11. Proteus three-dimensional Navier-Stokes computer code, version 1.0. Volume 1: Analysis description

    NASA Technical Reports Server (NTRS)

    Towne, Charles E.; Schwab, John R.; Bui, Trong T.

    1993-01-01

    A computer code called Proteus 3D has been developed to solve the three dimensional, Reynolds averaged, unsteady compressible Navier-Stokes equations in strong conservation law form. The objective in this effort has been to develop a code for aerospace propulsion applications that is easy to use and easy to modify. Code readability, modularity, and documentation have been emphasized. The governing equations are solved in generalized non-orthogonal body-fitted coordinates by marching in time using a fully-coupled ADI solution procedure. The boundary conditions are treated implicitly. All terms, including the diffusion terms, are linearized using second-order Taylor series expansions. Turbulence is modeled using either an algebraic or two-equation eddy viscosity model. The thin-layer or Euler equations may also be solved. The energy equation may be eliminated by the assumption of constant total enthalpy. Explicit and implicit artificial viscosity may be used. Several time step options are available for convergence acceleration. The documentation is divided into three volumes. This is the Analysis Description, and presents the equations and solution procedure. It describes in detail the governing equations, the turbulence model, the linearization of the equations and boundary conditions, the time and space differencing formulas, the ADI solution procedure, and the artificial viscosity models.

  12. First results of coupled IPS/NIMROD/GENRAY simulations

    NASA Astrophysics Data System (ADS)

    Jenkins, Thomas; Kruger, S. E.; Held, E. D.; Harvey, R. W.; Elwasif, W. R.; Schnack, D. D.

    2010-11-01

    The Integrated Plasma Simulator (IPS) framework, developed by the SWIM Project Team, facilitates self-consistent simulations of complicated plasma behavior via the coupling of various codes modeling different spatial/temporal scales in the plasma. Here, we apply this capability to investigate the stabilization of tearing modes by ECCD. Under IPS control, the NIMROD code (MHD) evolves fluid equations to model bulk plasma behavior, while the GENRAY code (RF) calculates the self-consistent propagation and deposition of RF power in the resulting plasma profiles. GENRAY data is then used to construct moments of the quasilinear diffusion tensor (induced by the RF) which influence the dynamics of momentum/energy evolution in NIMROD's equations. We present initial results from these coupled simulations and demonstrate that they correctly capture the physics of magnetic island stabilization [Jenkins et al, PoP 17, 012502 (2010)] in the low-beta limit. We also discuss the process of code verification in these simulations, demonstrating good agreement between NIMROD and GENRAY predictions for the flux-surface-averaged, RF-induced currents. An overview of ongoing model development (synthetic diagnostics/plasma control systems; neoclassical effects; etc.) is also presented. Funded by US DoE.

  13. Comparison of Model Calculations of Biological Damage from Exposure to Heavy Ions with Measurements

    NASA Technical Reports Server (NTRS)

    Kim, Myung-Hee Y.; Hada, Megumi; Cucinotta, Francis A.; Wu, Honglu

    2014-01-01

    The space environment consists of a varying field of radiation particles including high-energy ions, with spacecraft shielding material providing the major protection to astronauts from harmful exposure. Unlike low-LET gamma or X rays, the presence of shielding does not always reduce the radiation risks for energetic charged-particle exposure. Dose delivered by the charged particle increases sharply at the Bragg peak. However, the Bragg curve does not necessarily represent the biological damage along the particle path since biological effects are influenced by the track structures of both primary and secondary particles. Therefore, the "biological Bragg curve" is dependent on the energy and the type of the primary particle and may vary for different biological end points. Measurements of the induction of micronuclei (MN) have been made across the Bragg curve in human fibroblasts exposed to energetic silicon and iron ions in vitro at two different energies, 300 MeV/nucleon and 1 GeV/nucleon. Although the data did not reveal an increased yield of MN at the location of the Bragg peak, increased inhibition of cell progression, which is related to cell death, was found at the Bragg peak location. These results are compared to calculations of biological damage using a stochastic Monte-Carlo track structure model, the Galactic Cosmic Ray Event-based Risk Model (GERM) code (Cucinotta, et al., 2011). The GERM code estimates the basic physical properties along the passage of heavy ions in tissue and shielding materials, by which the experimental set-up can be interpreted. The code can also be used to describe the biophysical events of interest in radiobiology, cancer therapy, and space exploration. The calculation has shown that the severely damaged cells at the Bragg peak are more likely to go through reproductive death, the so-called "overkill".

  14. Fenestration system energy performance research, implementation, and international harmonization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGowan, Raymond F

    The research conducted by the NFRC and its contractors adds significantly to the understanding of several areas of investigation. NFRC enables manufacturers to rate fenestration energy performance to comply with building energy codes, participate in ENERGY STAR, and compete fairly. NFRC continuously seeks to improve its ratings and also seeks to simplify the rating process. Several research projects investigated rating improvement potential, such as • Complex Product VT Rating Research • Window 6 and Therm 6 Validation Research Project. Conclusions from these research projects led to important changes and increased confidence in the existing NFRC rating process. Conclusions from the Window 6/Therm 6 project will enable window manufacturers to rate an expanded array of products and improve existing ratings. Some research led to an improved new rating method called the Component Modeling Approach (CMA). A primary goal of the CMA was a simplification of the commercial energy rating process to increase participation and make the commercial industry more competitive and code compliant. The project below contributed toward CMA development: • Component Modeling Approach Condensation Resistance Research. NFRC continues to implement the Component Modeling Approach program. The program includes the CMA software tool, CMAST, and several procedural documents to govern the certification process. This significant accomplishment was a response to the commercial fenestration industry’s need for a simplification of the present NFRC energy rating method (named site built). To date, most commercial fenestration is self-rated by a variety of techniques. The CMA enables commercial fenestration manufacturers to rate according to NFRC 100/200, as most commercial energy codes require. International Harmonization: NFRC achieved significant international harmonization success by continuing its licensing agreements with the Australian Fenestration Rating Council and the Association of Architectural Aluminum Manufacturers of South Africa (AAAMSA) to produce NFRC certified product ratings in their respective nations. NFRC worked in several other nations to introduce the NFRC ratings system: • India • China • Japan • Canada • Thailand • South Africa • Brazil • Korea. NFRC attended or hosted several meetings in each of these nations, establishing academic, commercial, industrial, and governmental contacts. NFRC presented the NFRC process and the infrastructure steps necessary to achieve harmonization with the NFRC labeling system. NFRC looks forward to continued work toward harmonization in these nations and potentially others.

  15. TEA: A Code Calculating Thermochemical Equilibrium Abundances

    NASA Astrophysics Data System (ADS)

    Blecic, Jasmina; Harrington, Joseph; Bowman, M. Oliver

    2016-07-01

    We present an open-source Thermochemical Equilibrium Abundances (TEA) code that calculates the abundances of gaseous molecular species. The code is based on the methodology of White et al. and Eriksson. It applies Gibbs free-energy minimization using an iterative, Lagrangian optimization scheme. Given elemental abundances, TEA calculates molecular abundances for a particular temperature and pressure or a list of temperature-pressure pairs. We tested the code against the method of Burrows & Sharp, the free thermochemical equilibrium code Chemical Equilibrium with Applications (CEA), and the example given by Burrows & Sharp. Using their thermodynamic data, TEA reproduces their final abundances, but with higher precision. We also applied the TEA abundance calculations to models of several hot-Jupiter exoplanets, producing expected results. TEA is written in Python in a modular format. There is a start guide, a user manual, and a code document in addition to this theory paper. TEA is available under a reproducible-research, open-source license via https://github.com/dzesmin/TEA.
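
    The core idea, Gibbs free-energy minimization subject to element-balance constraints, can be sketched for a toy hydrogen-oxygen mixture. TEA itself uses the iterative Lagrangian scheme of White et al.; the sketch below substitutes SciPy's SLSQP optimizer, and the dimensionless standard-state potentials are invented numbers, so it illustrates only the structure of the problem, not the TEA code.

      import numpy as np
      from scipy.optimize import minimize

      species = ["H2", "O2", "H2O"]
      mu0 = np.array([-15.0, -20.0, -60.0])   # g_i/(RT), illustrative values only
      # Element-balance matrix: rows are elements (H, O), columns are species
      A = np.array([[2, 0, 2],
                    [0, 2, 1]], dtype=float)
      b = np.array([2.0, 1.0])                # total moles of H and O in the mixture

      def gibbs(n, pressure_bar=1.0):
          # Dimensionless total Gibbs energy of an ideal-gas mixture
          n = np.clip(n, 1e-12, None)
          return float(np.sum(n * (mu0 + np.log(pressure_bar * n / n.sum()))))

      result = minimize(
          gibbs, x0=np.array([0.5, 0.3, 0.5]),
          constraints=[{"type": "eq", "fun": lambda n: A @ n - b}],
          bounds=[(1e-12, None)] * len(species), method="SLSQP")

      for name, moles in zip(species, result.x):
          print(f"{name}: {moles:.4f} mol")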

  16. TEA: A CODE CALCULATING THERMOCHEMICAL EQUILIBRIUM ABUNDANCES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blecic, Jasmina; Harrington, Joseph; Bowman, M. Oliver, E-mail: jasmina@physics.ucf.edu

    2016-07-01

    We present an open-source Thermochemical Equilibrium Abundances (TEA) code that calculates the abundances of gaseous molecular species. The code is based on the methodology of White et al. and Eriksson. It applies Gibbs free-energy minimization using an iterative, Lagrangian optimization scheme. Given elemental abundances, TEA calculates molecular abundances for a particular temperature and pressure or a list of temperature–pressure pairs. We tested the code against the method of Burrows and Sharp, the free thermochemical equilibrium code Chemical Equilibrium with Applications (CEA), and the example given by Burrows and Sharp. Using their thermodynamic data, TEA reproduces their final abundances, but with higher precision. We also applied the TEA abundance calculations to models of several hot-Jupiter exoplanets, producing expected results. TEA is written in Python in a modular format. There is a start guide, a user manual, and a code document in addition to this theory paper. TEA is available under a reproducible-research, open-source license via https://github.com/dzesmin/TEA.

  17. Coupled Finite Element-Potts Model Simulations of Grain Growth in Copper Interconnects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Radhakrishnan, Balasubramaniam; Gorti, Sarma B

    The paper addresses grain growth in copper interconnects in the presence of thermal expansion mismatch stresses. The evolution of grain structure and texture in copper in the simultaneous presence of two driving forces, curvature and elastic stored energy difference, is modeled by using a hybrid Potts model simulation approach. The elastic stored energy is calculated by using the commercial finite element code ABAQUS, where the effect of elastic anisotropy on the thermal mismatch stress and strain distribution within a polycrystalline grain structure is modeled through a user material (UMAT) interface. Parametric studies on the effect of trench width and the height of the overburden were carried out. The results show that the grain structure and texture evolution are significantly altered by the presence of elastic strain energy.
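
    A minimal sketch of the kind of two-driving-force Potts update described above is given below: one Metropolis flip whose energy change combines unlike-neighbour (grain boundary) energy with a per-orientation stored elastic energy. Lattice size, temperature, and the stored-energy values are arbitrary illustrations, and this is not the hybrid finite element/Potts code of the paper.

      import numpy as np

      rng = np.random.default_rng(0)
      N, Q = 64, 8                                 # lattice size, number of grain orientations
      spins = rng.integers(Q, size=(N, N))
      stored = rng.uniform(0.0, 0.5, size=Q)       # assumed stored elastic energy per orientation
      J, kT = 1.0, 0.3

      def site_energy(spins, i, j, q):
          nbrs = [spins[(i + 1) % N, j], spins[(i - 1) % N, j],
                  spins[i, (j + 1) % N], spins[i, (j - 1) % N]]
          boundary = J * sum(q != s for s in nbrs)  # unlike-neighbour (boundary) energy
          return boundary + stored[q]

      def metropolis_step(spins):
          i, j = rng.integers(N, size=2)
          q_old, q_new = spins[i, j], rng.integers(Q)
          dE = site_energy(spins, i, j, q_new) - site_energy(spins, i, j, q_old)
          if dE <= 0 or rng.random() < np.exp(-dE / kT):
              spins[i, j] = q_new

      def total_energy(spins):
          # Counts each boundary twice and stored energy once per site; used only as a trend check.
          return sum(site_energy(spins, i, j, spins[i, j]) for i in range(N) for j in range(N))

      E0 = total_energy(spins)
      for _ in range(20000):
          metropolis_step(spins)
      print(f"energy before: {E0:.0f}, after: {total_energy(spins):.0f}")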

  18. Numerical computation of viscous flow about unconventional airfoil shapes

    NASA Technical Reports Server (NTRS)

    Ahmed, S.; Tannehill, J. C.

    1990-01-01

    A new two-dimensional computer code was developed to analyze the viscous flow around unconventional airfoils at various Mach numbers and angles of attack. The Navier-Stokes equations are solved using an implicit, upwind, finite-volume scheme. Both laminar and turbulent flows can be computed. A new nonequilibrium turbulence closure model was developed for computing turbulent flows. This two-layer eddy viscosity model was motivated by the success of the Johnson-King model in separated flow regions. The influence of history effects is described by an ordinary differential equation developed from the turbulent kinetic energy equation. The performance of the present code was evaluated by solving the flow around three airfoils using the Reynolds time-averaged Navier-Stokes equations. Excellent results were obtained for both attached and separated flows about the NACA 0012 airfoil, the RAE 2822 airfoil, and the Integrated Technology A 153W airfoil. Based on the comparison of the numerical solutions with the available experimental data, it is concluded that the present code in conjunction with the new nonequilibrium turbulence model gives excellent results.

  19. Supersonic propulsion simulation by incorporating component models in the large perturbation inlet (LAPIN) computer code

    NASA Technical Reports Server (NTRS)

    Cole, Gary L.; Richard, Jacques C.

    1991-01-01

    An approach to simulating the internal flows of supersonic propulsion systems is presented. The approach is based on a fairly simple modification of the Large Perturbation Inlet (LAPIN) computer code. LAPIN uses a quasi-one dimensional, inviscid, unsteady formulation of the continuity, momentum, and energy equations. The equations are solved using a shock capturing, finite difference algorithm. The original code, developed for simulating supersonic inlets, includes engineering models of unstart/restart, bleed, bypass, and variable duct geometry, by means of source terms in the equations. The source terms also provide a mechanism for incorporating, with the inlet, propulsion system components such as compressor stages, combustors, and turbine stages. This requires each component to be distributed axially over a number of grid points. Because of the distributed nature of such components, this representation should be more accurate than a lumped parameter model. Components can be modeled by performance map(s), which in turn are used to compute the source terms. The general approach is described. Then, simulation of a compressor/fan stage is discussed to show the approach in detail.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Temporal, Mauro; Canaud, Benoit; Cayzac, Witold

    The alpha-particle energy deposition mechanism modifies the ignition conditions of the thermonuclear Deuterium-Tritium fusion reactions, and constitutes a key issue in achieving high gain in Inertial Confinement Fusion implosions. One-dimensional hydrodynamic calculations have been performed with the code Multi-IFE to simulate the implosion of a capsule directly irradiated by a laser beam. The diffusion approximation for the alpha energy deposition has been used to optimize three laser profiles corresponding to different implosion velocities. A Monte-Carlo package has been included in Multi-IFE to calculate the alpha energy transport, and in this case the energy deposition uses both the LP and the BPS stopping power models. Homothetic transformations that maintain a constant implosion velocity have been used to map out the transition region between marginally-igniting and high-gain configurations. Furthermore, the results provided by the two models have been compared and it is found that – close to the ignition threshold – in order to produce the same fusion energy, the calculations performed with the BPS model require about 10% more invested energy with respect to the LP model.

  1. Initialization of hydrodynamics in relativistic heavy ion collisions with an energy-momentum transport model

    NASA Astrophysics Data System (ADS)

    Naboka, V. Yu.; Akkelin, S. V.; Karpenko, Iu. A.; Sinyukov, Yu. M.

    2015-01-01

    A key ingredient of hydrodynamical modeling of relativistic heavy ion collisions is thermal initial conditions, an input that is the consequence of a prethermal dynamics which is not yet completely understood. In the paper we employ a recently developed energy-momentum transport model of the prethermal stage to study the influence of alternative initial states in nucleus-nucleus collisions on flow and energy density distributions of the matter at the starting time of hydrodynamics. In particular, the dependence of the results on isotropic and anisotropic initial states is analyzed. It is found that at the thermalization time the transverse flow is larger and the maximal energy density is higher for the longitudinally squeezed initial momentum distributions. The results are also sensitive to the relaxation time parameter, equation of state at the thermalization time, and transverse profile of initial energy density distribution: Gaussian approximation, Glauber Monte Carlo profiles, etc. Also, test results ensure that the numerical code based on the energy-momentum transport model is capable of providing both averaged and fluctuating initial conditions for the hydrodynamic simulations of relativistic nuclear collisions.

  2. Effect of Surface Nonequilibrium Thermochemistry in Simulation of Carbon Based Ablators

    NASA Technical Reports Server (NTRS)

    Chen, Yih-Kanq; Gokcen, Tahir

    2012-01-01

    This study demonstrates that coupling of a material thermal response code and a flow solver using a finite-rate gas/surface interaction model provides time-accurate solutions for multidimensional ablation of carbon based charring ablators. The material thermal response code used in this study is the Two-dimensional Implicit Thermal Response and Ablation Program (TITAN), which predicts charring material thermal response and shape change on hypersonic space vehicles. Its governing equations include total energy balance, pyrolysis gas momentum conservation, and a three-component decomposition model. The flow code solves the reacting Navier-Stokes equations using the Data Parallel Line Relaxation (DPLR) method. Loose coupling between the material response and flow codes is performed by solving the surface mass balance in DPLR and the surface energy balance in TITAN. Thus, the material surface recession is predicted by finite-rate gas/surface interaction boundary conditions implemented in DPLR, and the surface temperature and pyrolysis gas injection rate are computed in TITAN. Two sets of gas/surface interaction chemistry between air and the carbon surface, developed by Park and Zhluktov, respectively, are studied. Coupled fluid-material response analyses of stagnation tests conducted in NASA Ames Research Center arc-jet facilities are considered. The ablating material used in these arc-jet tests was a Phenolic Impregnated Carbon Ablator (PICA). Computational predictions of in-depth material thermal response and surface recession are compared with the experimental measurements for stagnation cold wall heat flux ranging from 107 to 1100 Watts per square centimeter.

  3. Effect of Non-Equilibrium Surface Thermochemistry in Simulation of Carbon Based Ablators

    NASA Technical Reports Server (NTRS)

    Chen, Yih-Kanq; Gokcen, Tahir

    2012-01-01

    This study demonstrates that coupling of a material thermal response code and a flow solver using a non-equilibrium gas/surface interaction model provides time-accurate solutions for the multidimensional ablation of carbon based charring ablators. The material thermal response code used in this study is the Two-dimensional Implicit Thermal-response and AblatioN Program (TITAN), which predicts charring material thermal response and shape change on hypersonic space vehicles. Its governing equations include total energy balance, pyrolysis gas mass conservation, and a three-component decomposition model. The flow code solves the reacting Navier-Stokes equations using the Data Parallel Line Relaxation (DPLR) method. Loose coupling between the material response and flow codes is performed by solving the surface mass balance in DPLR and the surface energy balance in TITAN. Thus, the material surface recession is predicted by finite-rate gas/surface interaction boundary conditions implemented in DPLR, and the surface temperature and pyrolysis gas injection rate are computed in TITAN. Two sets of nonequilibrium gas/surface interaction chemistry between air and the carbon surface, developed by Park and Zhluktov, respectively, are studied. Coupled fluid-material response analyses of stagnation tests conducted in NASA Ames Research Center arc-jet facilities are considered. The ablating material used in these arc-jet tests was Phenolic Impregnated Carbon Ablator (PICA). Computational predictions of in-depth material thermal response and surface recession are compared with the experimental measurements for stagnation cold wall heat flux ranging from 107 to 1100 Watts per square centimeter.
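
    The loose-coupling strategy described in this and the preceding record reduces to a fixed-point exchange of surface quantities between the two solvers. The sketch below replaces DPLR and TITAN with toy algebraic stand-ins purely to show the iteration structure; the heat-flux, blowing, and temperature relations are invented for illustration.

      def flow_solver(T_wall, blowing_rate):
          # Toy flow-solver stand-in: hot-wall heat flux (W/cm^2) reduced by pyrolysis blowing.
          q_cold_wall = 500.0
          blockage = 1.0 / (1.0 + 5.0 * blowing_rate)
          return q_cold_wall * blockage * (1.0 - T_wall / 4000.0)

      def material_solver(q_wall):
          # Toy material-response stand-in: wall temperature (K) and pyrolysis blowing rate.
          T_wall = 300.0 + 6.0 * q_wall ** 0.75
          blowing_rate = 2.0e-4 * q_wall
          return T_wall, blowing_rate

      T_wall, blowing = 2000.0, 0.05
      for iteration in range(50):
          q_wall = flow_solver(T_wall, blowing)
          T_new, blowing = material_solver(q_wall)
          converged = abs(T_new - T_wall) < 0.1
          T_wall = T_new
          if converged:
              break
      print(f"{iteration} iterations: q_wall = {q_wall:.1f} W/cm^2, T_wall = {T_wall:.0f} K")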

  4. Building Standards and Codes for Energy Conservation

    ERIC Educational Resources Information Center

    Gross, James G.; Pierlert, James H.

    1977-01-01

    Current activity intended to lead to energy conservation measures in building codes and standards is reviewed by members of the Office of Building Standards and Codes Services of the National Bureau of Standards. For journal availability see HE 508 931. (LBH)

  5. Numerical Modeling and Testing of an Inductively-Driven and High-Energy Pulsed Plasma Thrusters

    NASA Technical Reports Server (NTRS)

    Parma, Brian

    2004-01-01

    Pulsed Plasma Thrusters (PPTs) are advanced electric space propulsion devices that are characterized by simplicity and robustness. They suffer, however, from low thrust efficiencies. This summer, two approaches to improve the thrust efficiency of PPTs will be investigated through both numerical modeling and experimental testing. The first approach, an inductively-driven PPT, uses a double-ignition circuit to fire two PPTs in succession. This effectively changes the PPT's configuration from an LRC circuit to an LR circuit. The LR circuit is expected to provide better impedance matching and improve the efficiency of the energy transfer to the plasma. An added benefit of the LR circuit is an exponential decay of the current, whereas a traditional PPT's underdamped LRC circuit experiences the characteristic "ringing" of its current. The exponential decay may provide improved lifetime and sustained electromagnetic acceleration. The second approach, a high-energy PPT, is a traditional PPT with a variable size capacitor bank. This PPT will be simulated and tested at energy levels between 100 and 450 joules in order to investigate the relationship between efficiency and energy level. The Arbitrary Coordinate Hydromagnetic (MACH2) code is used. The MACH2 code, designed by the Center for Plasma Theory and Computation at the Air Force Research Laboratory, has been used to gain insight into a variety of plasma problems, including electric plasma thrusters. The goals for this summer include numerical predictions of performance for both the inductively-driven PPT and high-energy PPT, experimental validation of the numerical models, and numerical optimization of the designs. These goals will be met through numerical and experimental investigation of the PPTs' current waveforms, mass loss (or ablation), and impulse bit characteristics.
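
    The circuit-level contrast drawn above is easy to reproduce. The sketch below compares an underdamped LRC capacitor discharge with the exponential decay of an LR loop; the inductance, resistance, capacitance, and initial conditions are illustrative numbers, not measured thruster parameters.

      import numpy as np

      L, R, C = 100e-9, 30e-3, 20e-6          # henries, ohms, farads (illustrative)
      V0, I0 = 1500.0, 5000.0                 # initial capacitor voltage, initial loop current
      t = np.linspace(0.0, 20e-6, 1000)

      # Underdamped LRC discharge: damped sinusoid ("ringing")
      alpha = R / (2.0 * L)
      omega_d = np.sqrt(1.0 / (L * C) - alpha ** 2)
      i_lrc = (V0 / (omega_d * L)) * np.exp(-alpha * t) * np.sin(omega_d * t)

      # LR circuit: simple exponential decay with time constant L/R
      i_lr = I0 * np.exp(-R * t / L)

      print(f"LRC peak current: {i_lrc.max():.0f} A, ring period: {2 * np.pi / omega_d * 1e6:.2f} us, "
            f"LR decay constant: {L / R * 1e6:.2f} us")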

  6. Three dimensional δf simulations of beams in the SSC

    NASA Astrophysics Data System (ADS)

    Koga, J.; Tajima, T.; Machida, S.

    1993-12-01

    A three dimensional δf strong-strong algorithm has been developed to apply to the study of such effects as space charge and beam-beam interaction phenomena in the Superconducting Super Collider (SSC). The algorithm is obtained by merging the particle tracking code Simpsons, used for 3 dimensional space charge effects, with a δf code. The δf method is used to follow the evolution of the non-gaussian part of the beam distribution. The advantages of this method are twofold. First, the Simpsons code utilizes a realistic accelerator model including synchrotron oscillations and energy ramping in 6 dimensional phase space, with electromagnetic fields of the beams calculated using a realistic 3 dimensional field solver. Second, the beams evolve in the fully self-consistent strong-strong sense, with finite-particle fluctuation noise greatly reduced, as opposed to the weak-strong models where one beam is fixed.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doebling, Scott William

    This paper documents the escape of high explosive (HE) products problem. The problem, first presented by Fickett & Rivard, tests the implementation and numerical behavior of a high explosive detonation and energy release model and its interaction with an associated compressible hydrodynamics simulation code. The problem simulates the detonation of a finite-length, one-dimensional piece of HE that is driven by a piston from one end and adjacent to a void at the other end. The HE equation of state is modeled as a polytropic ideal gas. The HE detonation is assumed to be instantaneous with an infinitesimal reaction zone. Via judicious selection of the material specific heat ratio, the problem has an exact solution with linear characteristics, enabling a straightforward calculation of the physical variables as a function of time and space. Lastly, implementation of the exact solution in the Python code ExactPack is discussed, as are verification cases for the exact solution code.

  8. Cosmic-ray propagation with DRAGON2: I. numerical solver and astrophysical ingredients

    NASA Astrophysics Data System (ADS)

    Evoli, Carmelo; Gaggero, Daniele; Vittino, Andrea; Di Bernardo, Giuseppe; Di Mauro, Mattia; Ligorini, Arianna; Ullio, Piero; Grasso, Dario

    2017-02-01

    We present version 2 of the DRAGON code designed for computing realistic predictions of the CR densities in the Galaxy. The code numerically solves the interstellar CR transport equation (including inhomogeneous and anisotropic diffusion in both space and momentum, advective transport, and energy losses), under realistic conditions. The new version includes an updated numerical solver and several models for the astrophysical ingredients involved in the transport equation. Improvements in the accuracy of the numerical solution are proved against analytical solutions and in reference diffusion scenarios. The novel features implemented in the code allow simulation of the diverse scenarios proposed to reproduce the most recent measurements of local and diffuse CR fluxes, going beyond the limitations of the homogeneous galactic transport paradigm. To this end, several applications using DRAGON2 are presented as well. This new version makes it easier for users to include their own physical models by means of a modular C++ structure.
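
    As a much-reduced illustration of the kind of transport equation the code solves, the sketch below integrates a 1-D diffusion-loss equation dN/dt = D d²N/dz² + Q - N/τ with a thin-disc source and free-escape boundaries. It is a toy, not DRAGON2: the grid, coefficients, and source profile are arbitrary, and the anisotropic, momentum-space, and advective terms are all omitted.

      import numpy as np

      nz, L = 201, 8.0                        # grid points, halo half-height (kpc)
      z = np.linspace(-L, L, nz)
      dz = z[1] - z[0]
      D, tau_loss = 3.0, 20.0                 # kpc^2/Myr and Myr (illustrative)
      Q = np.exp(-(z / 0.2) ** 2)             # sources concentrated in a thin disc

      N = np.zeros(nz)
      dt = 0.25 * dz ** 2 / D                 # explicit diffusion stability limit
      for _ in range(100000):
          lap = np.zeros(nz)
          lap[1:-1] = (N[2:] - 2 * N[1:-1] + N[:-2]) / dz ** 2
          N += dt * (D * lap + Q - N / tau_loss)
          N[0] = N[-1] = 0.0                  # free-escape halo boundaries

      print(f"approximate steady-state density in the disc: {N[nz // 2]:.2f}")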

  9. SCORE-EVET: a computer code for the multidimensional transient thermal-hydraulic analysis of nuclear fuel rod arrays. [BWR; PWR]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benedetti, R. L.; Lords, L. V.; Kiser, D. M.

    1978-02-01

    The SCORE-EVET code was developed to study multidimensional transient fluid flow in nuclear reactor fuel rod arrays. The conservation equations used were derived by volume averaging the transient compressible three-dimensional local continuum equations in Cartesian coordinates. No assumptions associated with subchannel flow have been incorporated into the derivation of the conservation equations. In addition to the three-dimensional fluid flow equations, the SCORE-EVET code contains: (a) a one-dimensional steady state solution scheme to initialize the flow field, (b) steady state and transient fuel rod conduction models, and (c) comprehensive correlation packages to describe fluid-to-fuel rod interfacial energy and momentum exchange. Velocity and pressure boundary conditions can be specified as a function of time and space to model reactor transient conditions such as a hypothesized loss-of-coolant accident (LOCA) or flow blockage.

  10. Systematic Comparison of Photoionized Plasma Codes with Application to Spectroscopic Studies of AGN in X-Rays

    NASA Technical Reports Server (NTRS)

    Mehdipour, M.; Kaastra, J. S.; Kallman, T.

    2016-01-01

    Atomic data and plasma models play a crucial role in the diagnosis and interpretation of astrophysical spectra, thus influencing our understanding of the Universe. In this investigation we present a systematic comparison of the leading photoionization codes to determine how much their intrinsic differences impact X-ray spectroscopic studies of hot plasmas in photoionization equilibrium. We carry out our computations using the Cloudy, SPEX, and XSTAR photoionization codes, and compare their derived thermal and ionization states for various ionizing spectral energy distributions. We examine the resulting absorption-line spectra from these codes for the case of ionized outflows in active galactic nuclei. By comparing the ionic abundances as a function of ionization parameter ξ, we find that on average there is about 30% deviation between the codes in the ξ where ionic abundances peak. For H-like to B-like sequence ions alone, this deviation in ξ is smaller, at about 10% on average. The comparison of the absorption-line spectra in the X-ray band shows that there is on average about 30% deviation between the codes in the optical depth of the lines produced at log ξ ~ 1 to 2, reducing to about 20% deviation at log ξ ~ 3. We also simulate spectra of the ionized outflows with the current and upcoming high-resolution X-ray spectrometers, on board XMM-Newton, Chandra, Hitomi, and Athena. From these simulations we obtain the deviation on the best-fit model parameters arising from the use of different photoionization codes, which is about 10% to 40%. We compare the modeling uncertainties with the observational uncertainties from the simulations. The results highlight the importance of continuous development and enhancement of photoionization codes for the upcoming era of X-ray astronomy with Athena.

  11. The ePLAS Code for Ignition Studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mason, Rodney J

    2012-09-20

    Inertial Confinement Fusion (ICF) presents unique opportunities for the extraction of clean energy from Fusion. Intense lasers and particle beams can create and interact with such plasmas, potentially yielding sufficient energy to satisfy all our national needs. However, few models are available to help aid the scientific community in the study and optimization of such interactions. This project enhanced and disseminated the computer code ePLAS for the early understanding and control of Ignition in ICF. ePLAS is a unique simulation code that tracks the transport of laser light to a target, the absorption of that light resulting in the generation and transport of hot electrons, and the heating and flow dynamics of the background plasma. It uses an implicit electromagnetic field-solving method to greatly reduce computing demands, so that useful target interaction studies can often be completed in 15 minutes on a portable 2.1 GHz PC. The code permits the rapid scoping of calculations for the optimization of laser target interactions aimed at fusion. Recent efforts have initiated the use of analytic equations of state (EOS), K-alpha image rendering graphics, allocatable memory for source-free usage, and adaption to the latest Mac and Linux Operating Systems. The speed and utility of ePLAS are unequaled in the ICF simulation community. This project evaluated the effects of its new EOSs on target heating, compared fluid and particle models for the ions, initiated the simultaneous use of both ion models in the code, and studied long time scale 500 ps hot electron deposition for shock ignition. ePLAS has been granted EAR99 export control status, permitting export without a license to most foreign countries. Beta-test versions of ePLAS have been granted to several Universities and Commercial users. The net Project was aimed at achieving early success in the laboratory ignition of thermonuclear targets and the mastery of controlled fusion power for the nation.

  12. Modeling and Simulations on the Effects of Shortwave Energy on Micropartile and Nanoparticle Filled Composites

    DTIC Science & Technology

    2014-06-01


  13. Modeling Well Sampled Composite Spectral Energy Distributions of Distant Galaxies via an MCMC-driven Inference Framework

    NASA Astrophysics Data System (ADS)

    Pasha, Imad; Kriek, Mariska; Johnson, Benjamin; Conroy, Charlie

    2018-01-01

    Using a novel, MCMC-driven inference framework, we have modeled the stellar and dust emission of 32 composite spectral energy distributions (SEDs), which span from the near-ultraviolet (NUV) to far infrared (FIR). The composite SEDs were originally constructed in a previous work from the photometric catalogs of the NEWFIRM Medium-Band Survey, in which SEDs of individual galaxies at 0.5 < z < 2.0 were iteratively matched and sorted into types based on their rest-frame UV-to-NIR photometry. In a subsequent work, MIPS 24 μm was added for each SED type, and in this work, PACS 100 μm, PACS 160 μm, SPIRE 250 μm, and SPIRE 350 μm photometry have been added to extend the range of the composite SEDs into the FIR. We fit the composite SEDs with the Prospector code, which utilizes MCMC sampling to explore the parameter space for models created by the Flexible Stellar Population Synthesis (FSPS) code, in order to investigate how specific star formation rate (sSFR), dust temperature, and other galaxy properties vary with SED type. This work is also being used to better constrain the SPS models within FSPS.

  14. Latest astronomical constraints on some non-linear parametric dark energy models

    NASA Astrophysics Data System (ADS)

    Yang, Weiqiang; Pan, Supriya; Paliathanasis, Andronikos

    2018-04-01

    We consider non-linear redshift-dependent equation-of-state parameters as dark energy models in a spatially flat Friedmann-Lemaître-Robertson-Walker universe. To depict the expansion history of the universe in such cosmological scenarios, we take into account the large-scale behaviour of these parametric models and fit them using a set of the latest observational data of distinct origins, including cosmic microwave background radiation, Type Ia supernovae, baryon acoustic oscillations, redshift-space distortion, weak gravitational lensing, Hubble parameter measurements from cosmic chronometers, and finally the local Hubble constant from the Hubble Space Telescope. The fitting uses the publicly available Cosmological Monte Carlo code (CosmoMC) to extract the cosmological information from these parametric dark energy models. From our analysis, it follows that these models can describe the late-time accelerating phase of the universe, while remaining distinguishable from Λ-cosmology.
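
    To make the role of a redshift-dependent equation of state concrete, the sketch below evaluates the dimensionless expansion rate E(z) = H(z)/H0 for a flat universe given a choice of w(z). The CPL form w(z) = w0 + wa z/(1+z) used here is an assumed stand-in; the paper's non-linear parametrizations and best-fit values differ.

```python
# Illustrative sketch of how a redshift-dependent dark-energy equation of state
# enters the expansion history. The CPL form and the parameter values below are
# assumptions for illustration, not the paper's parametrizations or fits.
import numpy as np
from scipy.integrate import quad

def w(z, w0=-0.95, wa=0.2):
    """Assumed CPL equation of state w(z) = w0 + wa*z/(1+z)."""
    return w0 + wa * z / (1.0 + z)

def de_density_ratio(z, w0=-0.95, wa=0.2):
    """rho_DE(z)/rho_DE(0) = exp( 3 * int_0^z [1 + w(z')]/(1 + z') dz' )."""
    integral, _ = quad(lambda zp: (1.0 + w(zp, w0, wa)) / (1.0 + zp), 0.0, z)
    return np.exp(3.0 * integral)

def E(z, Om=0.3, w0=-0.95, wa=0.2):
    """Dimensionless Hubble rate H(z)/H0 for a spatially flat universe."""
    return np.sqrt(Om * (1.0 + z) ** 3 + (1.0 - Om) * de_density_ratio(z, w0, wa))

for z in (0.0, 0.5, 1.0, 2.0):
    print(f"z = {z:.1f}  E(z) = {E(z):.3f}")
```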

  15. Energy dynamics and current sheet structure in fluid and kinetic simulations of decaying magnetohydrodynamic turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makwana, K. D., E-mail: kirit.makwana@gmx.com; Cattaneo, F.; Zhdankin, V.

    Simulations of decaying magnetohydrodynamic (MHD) turbulence are performed with a fluid and a kinetic code. The initial condition is an ensemble of long-wavelength, counter-propagating, shear-Alfvén waves, which interact and rapidly generate strong MHD turbulence. The total energy is conserved, and the rate of turbulent energy decay is very similar in both codes, although the fluid code has numerical dissipation whereas the kinetic code has kinetic dissipation. The inertial-range power spectrum index is similar in both codes. The fluid code shows a perpendicular wavenumber spectral slope of k⊥^(−1.3). The kinetic code shows a spectral slope of k⊥^(−1.5) for the smaller simulation domain and k⊥^(−1.3) for the larger domain. We estimate that collisionless damping mechanisms in the kinetic code can account for the dissipation of the observed nonlinear energy cascade. Current sheets are geometrically characterized. Their lengths and widths are in good agreement between the two codes. The length scales linearly with the driving scale of the turbulence. In the fluid code, their thickness is determined by the grid resolution, as there is no explicit diffusivity. In the kinetic code, their thickness is very close to the skin depth, irrespective of the grid resolution. This work shows that kinetic codes can reproduce the MHD inertial-range dynamics at large scales, while at the same time capturing important kinetic physics at small scales.
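
    The spectral slopes quoted above come from the simulations' perpendicular energy spectra; the short sketch below shows, on an assumed synthetic spectrum, how such a slope can be estimated with a log-log least-squares fit restricted to the inertial range. It is illustrative only and does not use data from either code.

```python
# Generic sketch of estimating an inertial-range spectral slope (e.g. the
# k_perp^-1.3 exponent quoted above) from a 1D energy spectrum.
# The synthetic spectrum and the fitting range are assumptions.
import numpy as np

k = np.logspace(0, 3, 200)                       # perpendicular wavenumbers
E_k = k**-1.3 * np.exp(-k / 400.0)               # toy spectrum: power law + dissipation cutoff
E_k *= 1.0 + 0.05 * np.random.default_rng(1).normal(size=k.size)  # mild noise

# Fit only the inertial range, away from driving and dissipation scales
mask = (k > 3.0) & (k < 100.0)
slope, intercept = np.polyfit(np.log10(k[mask]), np.log10(E_k[mask]), 1)
print(f"fitted spectral slope: {slope:.2f}")     # close to -1.3
```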

  16. Verification of fluid-structure-interaction algorithms through the method of manufactured solutions for actuator-line applications

    NASA Astrophysics Data System (ADS)

    Vijayakumar, Ganesh; Sprague, Michael

    2017-11-01

    Demonstrating expected convergence rates with spatial- and temporal-grid refinement is the "gold standard" of code and algorithm verification. However, the lack of analytical solutions and the difficulty of generating manufactured solutions present challenges for verifying codes for complex systems. The application of the method of manufactured solutions (MMS) to the verification of coupled multi-physics phenomena like fluid-structure interaction (FSI) has only recently been investigated. While many FSI algorithms for aeroelastic phenomena have focused on boundary-resolved CFD simulations, the actuator-line representation of the structure is widely used for FSI simulations in wind-energy research. In this work, we demonstrate the verification of an FSI algorithm using MMS for actuator-line CFD simulations with a simplified spring-mass-damper (SMD) structural model. We use a manufactured solution for the fluid velocity field and the displacement of the SMD system. We demonstrate the convergence of both the fluid and structural solvers to second-order accuracy with grid and time-step refinement. This work was funded by the U.S. Department of Energy, Office of Energy Efficiency and Renewable Energy, Wind Energy Technologies Office, under Contract No. DE-AC36-08-GO28308 with the National Renewable Energy Laboratory.
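
    For readers unfamiliar with MMS, the sketch below applies the method to a deliberately simple surrogate problem rather than the FSI system above: a manufactured solution u(x) = sin(pi x) generates the forcing term for a 1-D Poisson solve, and the observed order of accuracy is recovered from the error under grid refinement. The problem and solution choices are assumptions for illustration.

```python
# Minimal MMS sketch on a surrogate problem (NOT the FSI solver above):
# manufacture u(x) = sin(pi x), derive f = -u'' = pi^2 sin(pi x), solve
# -u'' = f numerically, and check the observed order of accuracy.
import numpy as np

def l2_error(n):
    """Second-order central-difference solve of -u'' = f with u(0)=u(1)=0."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)               # interior nodes
    f = np.pi**2 * np.sin(np.pi * x)             # forcing manufactured from u = sin(pi x)
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    u = np.linalg.solve(A, f)
    return np.sqrt(h * np.sum((u - np.sin(np.pi * x)) ** 2))   # discrete L2 error

results = [(1.0 / (n + 1), l2_error(n)) for n in (16, 32, 64, 128)]
for (h1, e1), (h2, e2) in zip(results, results[1:]):
    order = np.log(e1 / e2) / np.log(h1 / h2)
    print(f"observed order of accuracy: {order:.2f}")          # ~2.0
```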

  17. Statistical mechanics of light elements at high pressure. IV - A model free energy for the metallic phase. [for Jovian type planet interiors]

    NASA Technical Reports Server (NTRS)

    Dewitt, H. E.; Hubbard, W. B.

    1976-01-01

    A large quantity of data on the thermodynamic properties of hydrogen-helium metallic liquids has been obtained in extended computer calculations using a Monte Carlo code essentially identical to that described by Hubbard (1972). A model free energy for metallic hydrogen with a relatively small mass fraction of helium is discussed, taking into account the definition of variables, a procedure for choosing the free energy, values for the fitting parameters, and the evaluation of the entropy constants. Possibilities for using the obtained data in studies of the interiors of the outer planets are briefly considered.

  18. Review of the Microdosimetric Studies for High-Energy Charged Particle Beams Using a Tissue-Equivalent Proportional Counter

    NASA Astrophysics Data System (ADS)

    Tsuda, Shuichi; Sato, Tatsuhiko; Ogawa, Tatsuhiko; Sasaki, Shinichi

    Lineal energy (y) distributions were measured for various types of charged particles, such as protons and iron ions, with kinetic energies of up to 500 MeV/u, using a wall-less tissue-equivalent proportional counter (TEPC). Radial dependencies of the y distributions were also evaluated experimentally to investigate the track structures of proton, carbon, and iron beams. This paper reviews the series of data measured with the aforementioned TEPC and assesses the systematic verification of a microdosimetric calculation model of the y distribution incorporated into the Particle and Heavy Ion Transport code System (PHITS), together with its associated track-structure models.
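
    For orientation, the standard microdosimetric definition of lineal energy is reproduced below; this is the general ICRU-style definition rather than anything specific to the measurements reviewed in the paper, and the 1 μm sphere mentioned in the comments is an assumed example geometry.

```latex
% Standard definition of lineal energy; the 1-um sphere is an assumed example.
\[
  y \;=\; \frac{\varepsilon_s}{\bar{l}},
  \qquad
  \bar{l}_{\mathrm{sphere}} \;=\; \frac{2d}{3},
\]
% where \varepsilon_s is the energy imparted to the sensitive volume in a single
% energy-deposition event and \bar{l} is the mean chord length of that volume
% (2d/3 for a sphere of diameter d, i.e. about 0.67 um for a 1-um sphere).
```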

  19. Evaluating the benefits of commercial building energy codes and improving federal incentives for code adoption.

    PubMed

    Gilbraith, Nathaniel; Azevedo, Inês L; Jaramillo, Paulina

    2014-12-16

    The federal government has the goal of decreasing commercial building energy consumption and pollutant emissions by incentivizing the adoption of commercial building energy codes. Quantitative estimates of code benefits at the state level, which could inform the size and allocation of these incentives, are not available. We estimate the state-level climate, environmental, and health benefits (i.e., social benefits) and reductions in energy bills (private benefits) of a more stringent code (ASHRAE 90.1-2010) relative to a baseline code (ASHRAE 90.1-2007). We find that reductions in site energy use intensity range from 93 MJ/m² of new construction per year (California) to 270 MJ/m² of new construction per year (North Dakota). Annual benefits from the more stringent code total $506 million across all states, of which $372 million come from reductions in energy bills and $134 million from social benefits. These total benefits range from $0.6 million in Wyoming to $49 million in Texas. Private benefits range from $0.38 per square meter in Washington State to $1.06 per square meter in New Hampshire. Social benefits range from $0.20 per square meter annually in California to $2.50 per square meter in Ohio. Reductions in human/environmental damages and future climate damages account for nearly equal shares of the social benefits.
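
    The state-level totals above follow from a simple aggregation: annual benefits scale with new commercial floor area times the per-square-meter private and social benefits. The sketch below restates that arithmetic with hypothetical placeholder inputs; the floor area and per-m² values are not taken from the study (only the quoted ranges loosely constrain them).

```python
# Back-of-the-envelope sketch of the state-level aggregation described above:
# annual benefits = new commercial floor area x (private + social benefit per m^2).
# All numeric inputs below are hypothetical placeholders, not study values.
new_floor_area_m2 = 2.5e6          # hypothetical new construction in one state, m^2/yr
private_benefit_per_m2 = 0.75      # $/m^2/yr, within the reported $0.38-$1.06 range
social_benefit_per_m2 = 1.10       # $/m^2/yr, within the reported $0.20-$2.50 range

total_benefit = new_floor_area_m2 * (private_benefit_per_m2 + social_benefit_per_m2)
print(f"illustrative annual benefit: ${total_benefit / 1e6:.1f} million")
```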

  20. Energy Savings Modeling and Inspection Guidelines for Commercial Building Federal Tax Deductions for Buildings in 2016 and Later

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deru, Michael; Field-Macumber, Kristin

    This document provides guidance for modeling and inspecting energy-efficient property in commercial buildings for certification of the energy and power cost savings related to Section 179D of the Internal Revenue Code (IRC), enacted in Section 1331 of the Energy Policy Act of 2005 (EPAct 2005), noted in Internal Revenue Service (IRS) Notices 2006-52 (IRS 2006), 2008-40 (IRS 2008), and 2012-26 (IRS 2012), and updated by the Protecting Americans from Tax Hikes (PATH) Act of 2015. Specifically, Section 179D provides federal tax deductions for energy-efficient property related to a commercial building's envelope; interior lighting; heating, ventilating, and air conditioning (HVAC); and service hot water (SHW) systems. This document applies to buildings placed in service on or after January 1, 2016.
