Sample records for core design codes

  1. A New Capability for Nuclear Thermal Propulsion Design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amiri, Benjamin W.; Nuclear and Radiological Engineering Department, University of Florida, Gainesville, FL 32611; Kapernick, Richard J.

    2007-01-30

    This paper describes a new capability for Nuclear Thermal Propulsion (NTP) design that has been developed, and presents the results of some analyses performed with this design tool. The purpose of the tool is to design to specified mission and material limits, while maximizing system thrust to weight. The head end of the design tool utilizes the ROCket Engine Transient Simulation (ROCETS) code to generate a system design and system design requirements as inputs to the core analysis. ROCETS is a modular system-level code which has been used extensively in the liquid rocket engine industry for many years. The core design tool performs high-fidelity reactor core nuclear and thermal-hydraulic design analysis. At the heart of this process are two codes, TMSS-NTP and NTPgen, which together greatly automate the analysis, providing the capability to rapidly produce designs that meet all specified requirements while minimizing mass. A Perl-based command script, called CORE DESIGNER, controls the execution of these two codes and checks for convergence throughout the process. TMSS-NTP is executed first, to produce a suite of core designs that meet the specified reactor core mechanical, thermal-hydraulic and structural requirements. The suite of designs consists of a set of core layouts and, for each core layout, specific designs that span a range of core fuel volumes. NTPgen generates MCNPX models for each of the core designs from TMSS-NTP. Iterative analyses are performed in NTPgen until a reactor design (fuel volume) is identified for each core layout that meets cold and hot operation reactivity requirements and that is zoned to meet a radial core power distribution requirement.

  2. Core Physics and Kinetics Calculations for the Fissioning Plasma Core Reactor

    NASA Technical Reports Server (NTRS)

    Butler, C.; Albright, D.

    2007-01-01

    Highly efficient, compact nuclear reactors would provide high specific impulse spacecraft propulsion. This analysis and numerical simulation effort has focused on the technical feasibility issues related to the nuclear design characteristics of a novel reactor design. The Fissioning Plasma Core Reactor (FPCR) is a shockwave-driven gaseous-core nuclear reactor, which uses magnetohydrodynamic effects to generate electric power to be used for propulsion. The nuclear design of the system depends on two major calculations: core physics calculations and kinetics calculations. Presently, core physics calculations have concentrated on the use of the MCNP4C code; however, initial results from other codes such as COMBINE/VENTURE and SCALE4a are also shown. Several significant modifications were made to the ISR-developed QCALC1 kinetics analysis code. These modifications include testing the state of the core materials, an improvement to the calculation of the material properties of the core, the addition of an adiabatic core temperature model, and improvement of the first-order reactivity correction model. The accuracy of these modifications has been verified, and the accuracy of the point-core kinetics model used by the QCALC1 code has also been validated. Previously calculated kinetics results for the FPCR were described in the ISR report, "QCALC1: A Code for FPCR Kinetics Model Feasibility Analysis," dated June 1, 2002.
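
    To make the point-core kinetics model mentioned above concrete, here is a minimal sketch of one-delayed-group point kinetics integrated with explicit Euler. QCALC1 itself is not publicly documented, so the constants and function names below are illustrative textbook assumptions, not its actual interface or FPCR data.

    ```python
    # Minimal one-delayed-group point-kinetics sketch (illustrative only;
    # the constants below are typical textbook values, not FPCR data).
    BETA = 0.0065      # delayed neutron fraction
    LAM = 0.08         # lumped precursor decay constant (1/s)
    GEN_TIME = 1e-5    # mean neutron generation time (s)

    def point_kinetics(rho, n0=1.0, t_end=0.1, dt=1e-6):
        """Integrate dn/dt and dC/dt for a constant reactivity step rho."""
        n = n0
        c = BETA * n0 / (LAM * GEN_TIME)  # equilibrium precursor population
        for _ in range(int(t_end / dt)):
            dn = ((rho - BETA) / GEN_TIME) * n + LAM * c
            dc = (BETA / GEN_TIME) * n - LAM * c
            n += dn * dt
            c += dc * dt
        return n

    print(point_kinetics(rho=0.001))  # step below prompt critical: slow power rise
    ```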

  3. Recent improvements of reactor physics codes in MHI

    NASA Astrophysics Data System (ADS)

    Kosaka, Shinya; Yamaji, Kazuya; Kirimura, Kazuki; Kamiyama, Yohei; Matsumoto, Hideki

    2015-12-01

    This paper introduces recent improvements to the reactor physics codes of Mitsubishi Heavy Industries, Ltd. (MHI). MHI has developed a new neutronics design code system, Galaxy/Cosmo-S (GCS), for PWR core analysis. After TEPCO's Fukushima Daiichi accident, it became necessary to consider design extension conditions that had not been covered explicitly by the former safety licensing analyses. Under these circumstances, MHI made several improvements to the GCS code system. A new resonance calculation model for the lattice physics code and a homogeneous cross-section representation model for the core simulator have been developed to cover a wider range of core conditions, corresponding to severe accident states such as anticipated transient without scram (ATWS) analysis and criticality evaluation of a dried-up spent fuel pit. As a result of these improvements, the GCS code system has very wide applicability with good accuracy for any core condition as long as the fuel is not damaged. In this paper, the outline of the GCS code system is described briefly and recent relevant development activities are presented.

  4. Recent improvements of reactor physics codes in MHI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kosaka, Shinya, E-mail: shinya-kosaka@mhi.co.jp; Yamaji, Kazuya; Kirimura, Kazuki

    2015-12-31

    This paper introduces recent improvements to the reactor physics codes of Mitsubishi Heavy Industries, Ltd. (MHI). MHI has developed a new neutronics design code system, Galaxy/Cosmo-S (GCS), for PWR core analysis. After TEPCO's Fukushima Daiichi accident, it became necessary to consider design extension conditions that had not been covered explicitly by the former safety licensing analyses. Under these circumstances, MHI made several improvements to the GCS code system. A new resonance calculation model for the lattice physics code and a homogeneous cross-section representation model for the core simulator have been developed to cover a wider range of core conditions, corresponding to severe accident states such as anticipated transient without scram (ATWS) analysis and criticality evaluation of a dried-up spent fuel pit. As a result of these improvements, the GCS code system has very wide applicability with good accuracy for any core condition as long as the fuel is not damaged. In this paper, the outline of the GCS code system is described briefly and recent relevant development activities are presented.

  5. Development of an extensible dual-core wireless sensing node for cyber-physical systems

    NASA Astrophysics Data System (ADS)

    Kane, Michael; Zhu, Dapeng; Hirose, Mitsuhito; Dong, Xinjun; Winter, Benjamin; Häckell, Moritz; Lynch, Jerome P.; Wang, Yang; Swartz, A.

    2014-04-01

    The introduction of wireless telemetry into the design of monitoring and control systems has been shown to reduce system costs while simplifying installations. To date, wireless nodes proposed for sensing and actuation in cyber-physical systems have been designed using microcontrollers with one computational pipeline (i.e., single-core microcontrollers). While concurrent code execution can be implemented on single-core microcontrollers, concurrency is emulated by splitting the pipeline's resources to support multiple threads of code execution. For many applications, this approach to multi-threading is acceptable in terms of speed and function. However, some applications such as feedback control demand deterministic timing of code execution and maximum computational throughput. For these applications, the adoption of multi-core processor architectures represents one effective solution. Multi-core microcontrollers have multiple computational pipelines that can execute embedded code in parallel and can be interrupted independently of one another. In this study, a new wireless platform named Martlet is introduced with a dual-core microcontroller adopted in its design. The dual-core design allows Martlet to dedicate one core to standard wireless sensor operations while the other core is reserved for embedded data processing and real-time feedback control law execution. Another distinct feature of Martlet is a standardized hardware interface that allows specialized daughter boards (termed wing boards) to be interfaced to the Martlet baseboard. This extensibility opens the opportunity to encapsulate specialized sensing and actuation functions in a wing board without altering the design of Martlet. In addition to describing the design of Martlet, a few example wings are detailed, along with experiments showing Martlet's ability to monitor and control physical systems such as wind turbines and buildings.
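
    The division of duties across the two cores can be mimicked in ordinary Python with two processes and a queue. This is only a conceptual sketch of the design idea; the function names and the toy control law are invented here, not Martlet firmware.

    ```python
    import multiprocessing as mp
    import time

    # Conceptual sketch of the Martlet dual-core split (hypothetical API):
    # one process stands in for the "wireless/sensing" core, the other for
    # the "control-law" core that must never be blocked by radio traffic.
    def sensing_core(q):
        for k in range(5):
            sample = {"t": time.time(), "accel": 0.01 * k}  # fake sensor read
            q.put(sample)        # hand data to the control core
            time.sleep(0.1)      # radio/housekeeping duties happen here
        q.put(None)              # end-of-run sentinel

    def control_core(q):
        gain = -2.0              # toy proportional feedback law
        while (sample := q.get()) is not None:
            u = gain * sample["accel"]
            print(f"t={sample['t']:.2f}  command={u:+.3f}")

    if __name__ == "__main__":
        q = mp.Queue()
        p1 = mp.Process(target=sensing_core, args=(q,))
        p2 = mp.Process(target=control_core, args=(q,))
        p1.start(); p2.start(); p1.join(); p2.join()
    ```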

  6. The development of a thermal hydraulic feedback mechanism with a quasi-fixed point iteration scheme for control rod position modeling for the TRIGSIMS-TH application

    NASA Astrophysics Data System (ADS)

    Karriem, Veronica V.

    Nuclear reactor design incorporates the study and application of nuclear physics, nuclear thermal hydraulics, and nuclear safety. Theoretical models and numerical methods implemented in computer programs are utilized to analyze and design nuclear reactors. The focus of this PhD study is the development of an advanced high-fidelity multi-physics code system to perform reactor core analysis for design and safety evaluations of research TRIGA-type reactors. The fuel management and design code system TRIGSIMS was further developed to fulfill the function of a reactor design and analysis code system for the Pennsylvania State Breazeale Reactor (PSBR). TRIGSIMS, which is currently in use at the PSBR, is a fuel management tool which incorporates the depletion code ORIGEN-S (part of the SCALE system) and the Monte Carlo neutronics solver MCNP. The diffusion theory code ADMARC-H is used within TRIGSIMS to accelerate the MCNP calculations. It manages the fuel isotopic content data and stores it for future burnup calculations. The contribution of this work is the development of an improved version of TRIGSIMS, named TRIGSIMS-TH. TRIGSIMS-TH incorporates a thermal-hydraulic module based on the advanced subchannel code COBRA-TF (CTF). CTF provides the temperature feedback needed in the multi-physics calculations as well as the thermal-hydraulic modeling capability for the reactor core. The temperature feedback model uses the CTF-provided local moderator and fuel temperatures in the cross-section modeling for the ADMARC-H and MCNP calculations. To perform efficient critical control rod calculations, a methodology for applying a control rod position was implemented in TRIGSIMS-TH, making this code system a modeling and design tool for future core loadings. The new TRIGSIMS-TH is a computer program that interlinks various other functional reactor analysis tools. It consists of MCNP5, ADMARC-H, ORIGEN-S, and CTF. CTF was coupled with both MCNP and ADMARC-H to provide the heterogeneous temperature distribution throughout the core. Each of these codes is written in its own computer language, performs its own function, and outputs its own set of data; TRIGSIMS-TH provides effective data manipulation and transfer between the different codes. With the implementation of the feedback and control-rod-position modeling methodologies, the TRIGSIMS-TH calculations are more accurate and in better agreement with measured data. The PSBR is unique in many ways, and there are no "off-the-shelf" codes which can model this design in its entirety. In particular, the PSBR has an open core design, which is cooled by natural convection. Combining several codes into a unique system brings many challenges, and it requires substantial knowledge of both the operation and the core design of the PSBR. This reactor has been in operation for decades, and there is a fair amount of prior study and development in both PSBR thermal hydraulics and neutronics. Measured data are also available for various core loadings and can be used for validation activities. The previous studies and developments in PSBR modeling also serve as a guide for assessing the findings of the work herein. In order to incorporate new methods and codes into the existing TRIGSIMS, a re-evaluation of various components of the code was performed to assure the accuracy and efficiency of the existing CTF/MCNP5/ADMARC-H multi-physics coupling. A new set of ADMARC-H diffusion coefficients and cross sections was generated using the SERPENT code. This was needed because the previous data had not been generated with thermal-hydraulic feedback and the all-rods-out (ARO) position had been used as the critical rod position. The B4C data was re-evaluated for this update. The data exchange between ADMARC-H and MCNP5 was modified. The basic core model was given flexibility to allow for various changes, and this feature was implemented in TRIGSIMS-TH. The PSBR core in the new code model can be expanded and changed, which allows the new code to be used as a modeling tool for design and analyses of future core loadings.

  7. Coding for parallel execution of hardware-in-the-loop millimeter-wave scene generation models on multicore SIMD processor architectures

    NASA Astrophysics Data System (ADS)

    Olson, Richard F.

    2013-05-01

    Rendering of point-scatterer-based radar scenes for millimeter wave (mmW) seeker tests in real-time hardware-in-the-loop (HWIL) scene generation requires efficient algorithms and vector-friendly computer architectures for complex signal synthesis. New processor technology from Intel implements an extended 256-bit vector SIMD instruction set (AVX, AVX2) in a multi-core CPU design providing peak execution rates of hundreds of GigaFLOPS (GFLOPS) on one chip. Real-world mmW scene generation code can approach peak SIMD execution rates only after careful algorithm and source code design. An effective software design will maintain high computing intensity, emphasizing register-to-register SIMD arithmetic operations over data movement between CPU caches or off-chip memories. Engineers at the U.S. Army Aviation and Missile Research, Development and Engineering Center (AMRDEC) applied two basic parallel coding methods to assess new 256-bit SIMD multi-core architectures for mmW scene generation in HWIL. These include the use of POSIX threads built on vector library functions and more portable, high-level parallel code based on compiler technology (e.g., OpenMP pragmas and SIMD autovectorization). Since CPU technology is rapidly advancing toward high processor core counts and TeraFLOPS peak SIMD execution rates, it is imperative that coding methods be identified which produce efficient and maintainable parallel code. This paper describes the algorithms used in point scatterer target model rendering, the parallelization of those algorithms, and the execution performance achieved on an AVX multi-core machine using the two basic parallel coding methods. The paper concludes with estimates for scale-up performance on upcoming multi-core technology.
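
    The heart of such a renderer is a coherent sum of phasors over many point scatterers. The paper's code uses AVX intrinsics and OpenMP in C; the numpy sketch below is only a stand-in that shows the same high-computing-intensity arithmetic pattern, with a made-up carrier frequency and fake ranges and amplitudes.

    ```python
    import numpy as np

    # Vectorized point-scatterer return synthesis (numpy stand-in for the
    # AVX/OpenMP C code described in the paper; all inputs are fake).
    C = 3.0e8                     # speed of light (m/s)
    FC = 94.0e9                   # assumed mmW carrier frequency (Hz)

    rng = np.random.default_rng(0)
    ranges = rng.uniform(900.0, 1100.0, 100_000)  # scatterer ranges (m)
    amps = rng.uniform(0.0, 1.0, 100_000)         # scatterer amplitudes

    def coherent_return(ranges, amps, fc=FC):
        """Sum complex phasors from all scatterers in one vector pass."""
        phase = 4.0 * np.pi * fc * ranges / C     # two-way propagation phase
        return np.sum(amps * np.exp(1j * phase))  # register-friendly SIMD-style math

    print(abs(coherent_return(ranges, amps)))
    ```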

  8. WOMBAT: A Scalable and High-performance Astrophysical Magnetohydrodynamics Code

    NASA Astrophysics Data System (ADS)

    Mendygral, P. J.; Radcliffe, N.; Kandalla, K.; Porter, D.; O'Neill, B. J.; Nolting, C.; Edmon, P.; Donnert, J. M. F.; Jones, T. W.

    2017-02-01

    We present a new code for astrophysical magnetohydrodynamics specifically designed and optimized for high performance and scaling on modern and future supercomputers. We describe a novel hybrid OpenMP/MPI programming model that emerged from a collaboration between Cray, Inc. and the University of Minnesota. This design utilizes MPI-RMA optimized for thread scaling, which allows the code to run extremely efficiently at very high thread counts, ideal for the latest generation of multi-core and many-core architectures. Such performance characteristics are needed in the era of “exascale” computing. We describe and demonstrate our high-performance design in detail with the intent that it may be used as a model for other, future astrophysical codes intended for applications demanding exceptional performance.
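
    As a rough illustration of the hybrid "ranks × threads" decomposition described here, the mpi4py sketch below gives each MPI rank a thread pool that updates strips of a local patch. WOMBAT itself is Fortran built on MPI-RMA, so this is an analogy for the execution model, not its implementation; the grid size, strip layout, and update are invented.

    ```python
    # Hybrid MPI + threads decomposition sketch (run: mpiexec -n 4 python wombat_sketch.py).
    from mpi4py import MPI
    from concurrent.futures import ThreadPoolExecutor
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    grid = np.ones((64, 64)) * rank        # this rank's patch of the domain

    def update_strip(i0, i1):
        grid[i0:i1] *= 0.5                 # stand-in for an MHD stencil update
        return i1 - i0

    strips = [(0, 16), (16, 32), (32, 48), (48, 64)]
    with ThreadPoolExecutor(max_workers=4) as pool:   # threads within the rank
        rows = sum(pool.map(lambda s: update_strip(*s), strips))

    total = comm.allreduce(rows, op=MPI.SUM)          # ranks coordinate via MPI
    if rank == 0:
        print(f"{size} ranks updated {total} rows in parallel")
    ```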

  9. WOMBAT: A Scalable and High-performance Astrophysical Magnetohydrodynamics Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mendygral, P. J.; Radcliffe, N.; Kandalla, K.

    2017-02-01

    We present a new code for astrophysical magnetohydrodynamics specifically designed and optimized for high performance and scaling on modern and future supercomputers. We describe a novel hybrid OpenMP/MPI programming model that emerged from a collaboration between Cray, Inc. and the University of Minnesota. This design utilizes MPI-RMA optimized for thread scaling, which allows the code to run extremely efficiently at very high thread counts, ideal for the latest generation of multi-core and many-core architectures. Such performance characteristics are needed in the era of “exascale” computing. We describe and demonstrate our high-performance design in detail with the intent that it may be used as a model for other, future astrophysical codes intended for applications demanding exceptional performance.

  10. Development of 3D pseudo pin-by-pin calculation methodology in ANC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, B.; Mayhue, L.; Huria, H.

    2012-07-01

    Advanced core and fuel assembly designs have been developed to improve operational flexibility and economic performance and to further enhance the safety features of nuclear power plants. The simulation of these new designs, along with strongly heterogeneous fuel loading, has brought new challenges to the reactor physics methodologies currently employed in the industrial codes for core analyses. Control rod insertion during normal operation is one operational feature of the AP1000® plant, the Westinghouse next-generation Pressurized Water Reactor (PWR) design. This design improves operational flexibility and efficiency but significantly challenges the conventional reactor physics methods, especially in pin power calculations. The mixed loading of fuel assemblies with significantly different neutron spectra causes a strong interaction between different fuel assembly types that is not fully captured with the current core design codes. To overcome the weaknesses of the conventional methods, Westinghouse has developed a state-of-the-art 3D Pin-by-Pin Calculation Methodology (P3C) and successfully implemented it in the Westinghouse core design code ANC. The new methodology has been qualified and licensed for pin power prediction. The 3D P3C methodology, along with its application and validation, is discussed in the paper. (authors)

  11. Preliminary engineering design of sodium-cooled CANDLE core

    NASA Astrophysics Data System (ADS)

    Takaki, Naoyuki; Namekawa, Azuma; Yoda, Tomoyuki; Mizutani, Akihiko; Sekimoto, Hiroshi

    2012-06-01

    The CANDLE burning process is characterized by the autonomous shifting of the burning region with constant reactivity and a constant spatial power distribution. Evaluations of such a critical burning process using widely used neutron diffusion and burnup codes under realistic engineering constraints are valuable to confirm the technical feasibility of the CANDLE concept and to put the idea into a concrete core design. The first part of this paper discusses whether the sustainable and stable CANDLE burning process can be reproduced even with conventional core analysis tools such as SLAROM and CITATION-FBR. As a result, it is certainly possible to demonstrate it if the proper core configuration and initial fuel composition required for a CANDLE core are applied in the analysis. In the latter part, an example of a concrete image of a sodium-cooled, metal-fuel, 2000 MWt rating CANDLE core is presented, assuming the emerging and likely inevitable technology of recladding. The core satisfies the engineering design criteria, including cladding temperature, pressure drop, linear heat rate, cumulative damage fraction (CDF) of the cladding, fast neutron fluence, and sodium void reactivity, which are defined in the Japanese FBR design project. It can be concluded that it is feasible to design a CANDLE core using conventional codes while satisfying realistic engineering design constraints, assuming that recladding at certain time intervals is technically feasible.

  12. Nuclear thermal propulsion engine system design analysis code development

    NASA Astrophysics Data System (ADS)

    Pelaccio, Dennis G.; Scheil, Christine M.; Petrosky, Lyman J.; Ivanenok, Joseph F.

    1992-01-01

    A Nuclear Thermal Propulsion (NTP) Engine System Design Analysis Code has recently been developed to characterize key NTP engine system design features. Such a versatile, standalone NTP system performance and engine design code is required to support ongoing and future engine system and vehicle design efforts associated with proposed Space Exploration Initiative (SEI) missions of interest. Key areas of interest in the engine system modeling effort were the reactor, shielding, and the inclusion of an engine multi-redundant propellant pump feed system design option. A solid-core nuclear thermal reactor and internal shielding code model was developed to estimate the reactor's thermal-hydraulic and physical parameters based on a prescribed thermal output; this model was integrated into a state-of-the-art engine system design model. The reactor code module has the capability to model graphite, composite, or carbide fuels. Key output from the model consists of reactor parameters such as thermal power, pressure drop, thermal profile, and heat generation in cooled structures (reflector, shield, and core supports), as well as engine system parameters such as weight, dimensions, pressures, temperatures, mass flows, and performance. The model's overall analysis methodology and its key assumptions and capabilities are summarized in this paper.
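
    A back-of-the-envelope illustration of why such a design tool trades reactor temperature against mass: ideal-nozzle rocket theory ties NTP specific impulse directly to core exit temperature. The sketch below uses standard formulas with assumed hydrogen properties and an assumed pressure ratio; it is not part of the code described in the paper.

    ```python
    import math

    # Ideal-nozzle specific impulse for hot hydrogen: standard rocket theory,
    # shown to illustrate why NTP design pushes core temperature limits.
    G0 = 9.80665          # standard gravity (m/s^2)
    RU = 8.314            # universal gas constant (J/mol-K)

    def ideal_isp(tc_kelvin, gamma=1.4, molar_mass=2.016e-3, pressure_ratio=100.0):
        """Isp (s) for an ideally expanded nozzle; all inputs are assumptions."""
        expansion = 1.0 - pressure_ratio ** (-(gamma - 1.0) / gamma)
        ve = math.sqrt(2.0 * gamma / (gamma - 1.0) * RU / molar_mass
                       * tc_kelvin * expansion)   # exhaust velocity (m/s)
        return ve / G0

    for tc in (2500.0, 2700.0, 2900.0):
        print(f"Tc = {tc:.0f} K  ->  Isp ~ {ideal_isp(tc):.0f} s")
    ```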

  13. Heuristic rules embedded genetic algorithm for in-core fuel management optimization

    NASA Astrophysics Data System (ADS)

    Alim, Fatih

    The objective of this study was to develop a unique methodology and a practical tool for designing the loading pattern (LP) and burnable poison (BP) pattern for a given Pressurized Water Reactor (PWR) core. Because of the large number of possible combinations for the fuel assembly (FA) loading in the core, the design of the core configuration is a complex optimization problem. It requires finding an optimal FA arrangement and BP placement in order to achieve maximum cycle length while satisfying the safety constraints. Genetic Algorithms (GA) have already been used to solve this problem for LP optimization for both the PWR and the Boiling Water Reactor (BWR). The GA, which is a stochastic method, works with a group of solutions and uses random variables to make decisions. Based on the theory of evolution, the GA involves natural selection and reproduction of the individuals in the population for the next generation. The GA works by creating an initial population, evaluating it, and then improving the population by using the evolutionary operators. To solve this optimization problem, a LP optimization package, the GARCO (Genetic Algorithm Reactor Code Optimization) code, was developed in the framework of this thesis. This code is applicable to all types of PWR cores having different geometries and structures, with an unlimited number of FA types in the inventory. To reach this goal, an innovative GA was developed by modifying the classical representation of the genotype. To obtain the best result in a shorter time, not only the representation but also the algorithm was changed, to use in-core fuel management heuristic rules. The improved GA code was tested to demonstrate and verify the advantages of the new enhancements. The developed methodology is explained in this thesis, and preliminary results are shown for the hexagonal-geometry VVER-1000 reactor core and the TMI-1 PWR. The core physics code used for the VVER in this research is Moby-Dick, which was developed by SKODA Inc. to analyze the VVER. The SIMULATE-3 code, which is an advanced two-group nodal code, is used to analyze the TMI-1.
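
    GARCO itself is not public, so the sketch below only illustrates the GA machinery the abstract describes: a permutation genotype standing in for a loading pattern, elitist selection, order-preserving crossover, and a swap mutation. The toy fitness function is an invented stand-in for a core-simulator evaluation of cycle length.

    ```python
    import random

    # Minimal GA sketch for a loading-pattern-style permutation problem.
    # GARCO's genotype and operators are more elaborate; everything here
    # (toy fitness, sizes) is an illustrative assumption.
    random.seed(1)
    N = 8                          # number of core positions (toy-sized)
    TARGET = list(range(N))        # pretend this ordering maximizes cycle length

    def fitness(pattern):
        """Stand-in for a core-simulator evaluation (e.g., cycle length)."""
        return sum(a == b for a, b in zip(pattern, TARGET))

    def crossover(p1, p2):
        cut = random.randrange(1, N)
        head = p1[:cut]
        return head + [g for g in p2 if g not in head]  # keeps a valid permutation

    def mutate(p):
        i, j = random.sample(range(N), 2)
        p[i], p[j] = p[j], p[i]    # swap two assemblies
        return p

    pop = [random.sample(range(N), N) for _ in range(30)]
    for _ in range(50):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:10]                               # elitist selection
        kids = [mutate(crossover(*random.sample(elite, 2))) for _ in range(20)]
        pop = elite + kids

    print(pop[0], fitness(pop[0]))
    ```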

  14. Development of a three-dimensional transient code for reactivity-initiated events of BWRs (boiling water reactors) - Models and code verifications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uematsu, Hitoshi; Yamamoto, Toru; Izutsu, Sadayuki

    1990-06-01

    A reactivity-initiated event is a design-basis accident for the safety analysis of boiling water reactors. It is defined as a rapid transient of reactor power caused by a reactivity insertion of over $1.0 due to a postulated drop or abnormal withdrawal of a control rod from the core. Strong space-dependent feedback effects are associated with the local power increase due to control rod movement. A realistic treatment of the core status in a transient by a code with a detailed core model is recommended in evaluating this event. A three-dimensional transient code, ARIES, has been developed to meet this need. The code simulates the event with three-dimensional neutronics coupled with multichannel thermal hydraulics, based on a nonequilibrium separated-flow model. The experimental data obtained in reactivity accident tests performed with the SPERT III-E core are used to verify the entire code, including the thermal-hydraulic models.

  15. Preliminary LOCA analysis of the Westinghouse Small Modular Reactor using the WCOBRA/TRAC-TF2 thermal-hydraulics code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, J.; Kucukboyaci, V. N.; Nguyen, L.

    2012-07-01

    The Westinghouse Small Modular Reactor (SMR) is an 800 MWt (> 225 MWe) integral pressurized water reactor (iPWR) with all primary components, including the steam generator and the pressurizer, located inside the reactor vessel. The reactor core is based on a partial-height 17x17 fuel assembly design used in the AP1000® reactor core. The Westinghouse SMR utilizes passive safety systems and proven components from the AP1000 plant design, with a compact containment that houses the integral reactor vessel and the passive safety systems. A preliminary loss-of-coolant accident (LOCA) analysis of the Westinghouse SMR has been performed using the WCOBRA/TRAC-TF2 code, simulating a transient caused by a double-ended guillotine (DEG) break in the direct vessel injection (DVI) line. WCOBRA/TRAC-TF2 is a new-generation Westinghouse LOCA thermal-hydraulics code evolving from the US NRC licensed WCOBRA/TRAC code. It is designed to simulate PWR LOCA events from the smallest break size to the largest break size (DEG cold leg). A significant number of fluid dynamics models and heat transfer models were developed or improved in WCOBRA/TRAC-TF2. A large number of separate-effects and integral-effects tests were performed for a rigorous code assessment and validation. WCOBRA/TRAC-TF2 was introduced into the Westinghouse SMR design phase to assist in a quick and robust passive cooling system design and to identify thermal-hydraulic phenomena for the development of the SMR Phenomena Identification Ranking Table (PIRT). The LOCA analysis of the Westinghouse SMR demonstrates that the DEG DVI break LOCA is mitigated by the injection and venting from the Westinghouse SMR passive safety systems without core heat-up, achieving long-term core cooling. (authors)

  16. Comparison of ENDF/B-VII.1 and JEFF-3.2 in VVER-1000 operational data calculation

    NASA Astrophysics Data System (ADS)

    Frybort, Jan

    2017-09-01

    Safe operation of a nuclear reactor requires extensive calculational support. Operational data are determined by full-core calculations during the design phase of a fuel loading. The loading pattern and the design of the fuel assemblies are adjusted to meet safety requirements and optimize reactor operation. The nodal diffusion code ANDREA is used for this task in the case of the Czech VVER-1000 reactors. Nuclear data for this diffusion code are prepared regularly by the lattice code HELIOS. These calculations are conducted in 2D at the fuel assembly level. There is also the possibility of calculating these macroscopic data with the Monte Carlo code Serpent, which can make use of alternative evaluated libraries. All calculations are affected by inherent uncertainties in nuclear data. It is useful to see the results of full-core calculations based on two sets of diffusion data obtained by Serpent calculations with ENDF/B-VII.1 and JEFF-3.2 nuclear data, including the corresponding decay data and fission yield libraries. The comparison is based directly on the fuel-assembly-level macroscopic data and the resulting operational data. This study illustrates the effect of the evaluated nuclear data library on full-core calculations of a large PWR core. The level of difference that results exclusively from the nuclear data selection can help in understanding the inherent uncertainties of such full-core calculations.

  17. Fuel burnup analysis for IRIS reactor using MCNPX and WIMS-D5 codes

    NASA Astrophysics Data System (ADS)

    Amin, E. A.; Bashter, I. I.; Hassan, Nabil M.; Mustafa, S. S.

    2017-02-01

    The International Reactor Innovative and Secure (IRIS) reactor is a compact power reactor designed with special features. It contains Integral Fuel Burnable Absorber (IFBA). The core is heterogeneous both axially and radially. This work provides a full-core burnup analysis for the IRIS reactor using the MCNPX and WIMS-D5 codes. Criticality calculations, radial and axial power distributions, and the nuclear peaking factor at the different stages of burnup were studied. Effective multiplication factor values for the core were estimated by coupling the MCNPX code with the WIMS-D5 code and compared with SAS2H/KENO-V values at different stages of burnup. The two calculation codes show good agreement and correlation. The values of the radial and axial powers for the full core were also compared with published results given by the SAS2H/KENO-V code (at the beginning and end of reactor operation). The behavior of both the radial and axial power distributions is quite similar to the data published for the SAS2H/KENO-V code. The peaking factor values estimated in the present work are close to those calculated by the SAS2H/KENO-V code.
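
    For readers unfamiliar with the term, the nuclear peaking factor reported here is simply the ratio of peak local power density to the core average. A minimal sketch with a made-up pin-power map:

    ```python
    import numpy as np

    # Nuclear peaking factor = peak local power / core-average power.
    # The 4x4 "pin power" map below is invented data for illustration.
    pin_power = np.array([
        [0.92, 1.01, 1.05, 0.95],
        [1.01, 1.12, 1.18, 1.04],
        [1.05, 1.18, 1.21, 1.06],
        [0.95, 1.04, 1.06, 0.97],
    ])

    f_q = pin_power.max() / pin_power.mean()
    print(f"peaking factor Fq = {f_q:.3f}")
    ```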

  18. ARCADIA® - A New Generation of Coupled Neutronics / Core Thermal-Hydraulics Code System at AREVA NP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Curca-Tivig, Florin; Merk, Stephan; Pautz, Andreas

    2007-07-01

    Anticipating future needs of its customers and wishing to concentrate the synergies and competences existing in the company for their benefit, AREVA NP decided in 2002 to develop the next generation of coupled neutronics/core thermal-hydraulics (TH) code systems for fuel assembly and core design calculations for both PWR and BWR applications. The global CONVERGENCE project was born: after a feasibility study of one year (2002) and a conceptual phase of another year (2003), development was started at the beginning of 2004. The present paper introduces the CONVERGENCE project, presents the main features of the new code system ARCADIA®, and concludes on customer benefits. ARCADIA® is designed to meet AREVA NP market and customers' requirements worldwide. Besides state-of-the-art physical modeling, numerical performance and industrial functionality, the ARCADIA® system features state-of-the-art software engineering. The new code system will bring a series of benefits for customers: e.g., improved accuracy for heterogeneous cores (MOX/UOX, Gd...), better description of nuclide chains, and access to local neutronics/thermal-hydraulics and possibly thermal-mechanical information (3D pin-by-pin full core modeling). ARCADIA is a registered trademark of AREVA NP. (authors)

  19. Three-dimensional pin-to-pin analyses of VVER-440 cores by the MOBY-DICK code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehmann, M.; Mikolas, P.

    1994-12-31

    Nuclear design for the Dukovany (EDU) VVER-440s nuclear power plant is routinely performed by the MOBY-DICK system. After its implementation on Hewlett Packard series 700 workstations, it is able to routinely perform three-dimensional pin-to-pin core analyses. For purposes of code validation, a benchmark prepared from EDU operational data was solved.

  20. Development and validation of a low-frequency modeling code for high-moment transmitter rod antennas

    NASA Astrophysics Data System (ADS)

    Jordan, Jared Williams; Sternberg, Ben K.; Dvorak, Steven L.

    2009-12-01

    The goal of this research is to develop and validate a low-frequency modeling code for high-moment transmitter rod antennas to aid in the design of future low-frequency TX antennas with high magnetic moments. To accomplish this goal, a quasi-static modeling algorithm was developed to simulate finite-length, permeable-core, rod antennas. This quasi-static analysis is applicable for low frequencies where eddy currents are negligible, and it can handle solid or hollow cores with winding insulation thickness between the antenna's windings and its core. The theory was programmed in Matlab, and the modeling code has the ability to predict the TX antenna's gain, maximum magnetic moment, saturation current, series inductance, and core series loss resistance, provided the user enters the corresponding complex permeability for the desired core magnetic flux density. In order to utilize the linear modeling code to model the effects of nonlinear core materials, it is necessary to use the correct complex permeability for a specific core magnetic flux density. In order to test the modeling code, we demonstrated that it can accurately predict changes in the electrical parameters associated with variations in the rod length and the core thickness for antennas made out of low carbon steel wire. These tests demonstrate that the modeling code was successful in predicting the changes in the rod antenna characteristics under high-current nonlinear conditions due to changes in the physical dimensions of the rod provided that the flux density in the core was held constant in order to keep the complex permeability from changing.
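
    The physics behind such a rod-antenna model can be sketched with standard magnetostatics: a finite permeable rod's effective permeability is limited by its demagnetizing factor, which in turn sets the transmitter moment. The prolate-spheroid approximation and the example numbers below are assumptions for illustration, not the paper's Matlab code.

    ```python
    import math

    # Effective permeability of a finite permeable rod (demagnetization-limited)
    # and the resulting transmitter magnetic moment. Standard magnetostatics;
    # the prolate-spheroid approximation assumes length/diameter >> 1.
    def rod_moment(mu_r, length, diameter, turns, current):
        k = length / diameter                     # aspect ratio
        n_d = (math.log(2.0 * k) - 1.0) / k**2    # demagnetizing factor (approx.)
        mu_rod = mu_r / (1.0 + n_d * (mu_r - 1.0))
        area = math.pi * (diameter / 2.0) ** 2
        return mu_rod * turns * current * area    # moment in A*m^2

    # Example (made-up): 1 m x 2 cm rod, mu_r = 2000, 500 turns, 10 A
    print(f"moment ~ {rod_moment(2000.0, 1.0, 0.02, 500, 10.0):.0f} A*m^2")
    ```

    Note how the demagnetizing factor, not the material permeability, dominates the result for realistic core materials; this is why the abstract stresses using the complex permeability matched to the operating flux density.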

  1. Overview and Current Status of Analyses of Potential LEU Design Concepts for TREAT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Connaway, H. M.; Kontogeorgakos, D. C.; Papadias, D. D.

    2015-10-01

    Neutronic and thermal-hydraulic analyses have been performed to evaluate the performance of different low-enriched uranium (LEU) fuel design concepts for the conversion of the Transient Reactor Test Facility (TREAT) from its current high-enriched uranium (HEU) fuel. TREAT is an experimental reactor developed to generate high neutron flux transients for the testing of nuclear fuels. The goal of this work was to identify an LEU design which can maintain the performance of the existing HEU core while continuing to operate safely. A wide variety of design options were considered, with a focus on minimizing peak fuel temperatures and optimizing the power coupling between the TREAT core and test samples. Designs were also evaluated to ensure that they provide sufficient reactivity and shutdown margin for each control rod bank. Analyses were performed using the core loading and experiment configuration of historic M8 Power Calibration experiments (M8CAL). The Monte Carlo code MCNP was utilized for steady-state analyses, and transient calculations were performed with the point kinetics code TREKIN. Thermal analyses were performed with the COMSOL multi-physics code. Using the results of this study, a new LEU Baseline design concept is being established, which will be evaluated in detail in a future report.

  2. Thermal hydraulic-severe accident code interfaces for SCDAP/RELAP5/MOD3.2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coryell, E.W.; Siefken, L.J.; Harvego, E.A.

    1997-07-01

    The SCDAP/RELAP5 computer code is designed to describe the overall reactor coolant system thermal-hydraulic response, core damage progression, and fission product release during severe accidents. The code is being developed at the Idaho National Engineering Laboratory under the primary sponsorship of the Office of Nuclear Regulatory Research of the U.S. Nuclear Regulatory Commission. The code is the result of merging the RELAP5, SCDAP, and COUPLE codes. The RELAP5 portion of the code calculates the overall reactor coolant system thermal-hydraulics and associated reactor system responses. The SCDAP portion of the code describes the response of the core and associated vessel structures. The COUPLE portion of the code describes the response of the lower plenum structures and debris and the failure of the lower head. The code uses a modular approach, with the overall structure, input/output processing, and data structures following the pattern established for RELAP5. The code uses a building-block approach to allow the code user to easily represent a wide variety of systems and conditions through a powerful input processor. The user can represent a wide variety of experiments or reactor designs by selecting fuel rods and other assembly structures from a range of representative core component models and arranging them in a variety of patterns within the thermal-hydraulic network. The COUPLE portion of the code uses two-dimensional representations of the lower plenum structures and debris beds. The flow of information between the different portions of the code occurs at each system-level time step advancement. The RELAP5 portion of the code describes the fluid transport around the system. These fluid conditions are used as thermal and mass transport boundary conditions for the SCDAP and COUPLE structures and debris beds.

  3. MPACT Standard Input User's Manual, Version 2.2.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collins, Benjamin S.; Downar, Thomas; Fitzgerald, Andrew

    The MPACT (Michigan PArallel Characteristics based Transport) code is designed to perform high-fidelity light water reactor (LWR) analysis using whole-core pin-resolved neutron transport calculations on modern parallel-computing hardware. The code consists of several libraries which provide the functionality necessary to solve steady-state eigenvalue problems. Several transport capabilities are available within MPACT, including both 2-D and 3-D Method of Characteristics (MOC). A three-dimensional whole-core solution based on the 2D-1D solution method provides the capability for full-core depletion calculations.
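
    The steady-state eigenvalue problem MPACT solves has the generic form M*phi = (1/k)*F*phi, conventionally attacked by power iteration on the fission source. The sketch below shows that iteration on a made-up 3x3 loss/fission operator pair; MPACT's MOC transport operators are of course far richer, so this is only the shape of the algorithm.

    ```python
    import numpy as np

    # Power-iteration sketch for a k-eigenvalue problem, M*phi = (1/k)*F*phi.
    # The loss and fission matrices below are invented numbers chosen only
    # to make the problem well-posed.
    M = np.array([[ 0.20, -0.05,  0.00],
                  [-0.05,  0.25, -0.05],
                  [ 0.00, -0.05,  0.20]])   # destruction + leakage (loss) operator
    F = np.diag([0.15, 0.18, 0.15])         # fission production operator

    phi = np.ones(3)
    k = 1.0
    for _ in range(200):
        src = F @ phi / k                   # fission source from previous iterate
        phi_new = np.linalg.solve(M, src)   # "transport sweep" stand-in
        k *= phi_new.sum() / phi.sum()      # update the eigenvalue estimate
        phi = phi_new

    print(f"k-eff ~ {k:.5f}")
    ```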

  4. Improvements and applications of COBRA-TF for stand-alone and coupled LWR safety analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Avramova, M.; Cuervo, D.; Ivanov, K.

    2006-07-01

    The advanced thermal-hydraulic subchannel code COBRA-TF has recently been improved and applied for stand-alone and coupled LWR core calculations at the Pennsylvania State Univ. in cooperation with AREVA NP GmbH (Germany) and the Technical Univ. of Madrid. To enable COBRA-TF for academic and industrial applications, including safety margin evaluations and LWR core design analyses, the code programming, numerics, and basic models were revised and substantially improved. The code has undergone an extensive validation, verification, and qualification program. (authors)

  5. Progress Towards a Rad-Hydro Code for Modern Computing Architectures LA-UR-10-02825

    NASA Astrophysics Data System (ADS)

    Wohlbier, J. G.; Lowrie, R. B.; Bergen, B.; Calef, M.

    2010-11-01

    We are entering an era of high performance computing where data movement is the overwhelming bottleneck to scalable performance, as opposed to the speed of floating-point operations per processor. All multi-core hardware paradigms, whether heterogeneous or homogeneous, be it the Cell processor, GPGPU, or multi-core x86, share this common trait. In multi-physics applications such as inertial confinement fusion or astrophysics, one may be solving multi-material hydrodynamics with tabular equation of state data lookups, radiation transport, nuclear reactions, and charged particle transport in a single time cycle. The algorithms are intensely data dependent (e.g., EOS, opacity, nuclear data), and multi-core hardware memory restrictions are forcing code developers to rethink code and algorithm design. For the past two years LANL has been funding a small effort, referred to as Multi-Physics on Multi-Core, to explore ideas for code design as pertaining to inertial confinement fusion and astrophysics applications. The near-term goals of this project are to have a multi-material radiation hydrodynamics capability, with tabular equation of state lookups, on Cartesian and curvilinear block-structured meshes. In the longer term we plan to add fully implicit multi-group radiation diffusion and material heat conduction, and block-structured AMR. We will report on our progress to date.
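
    The "intensely data dependent" lookups mentioned above are typified by tabular EOS interpolation, performed per cell per cycle. A minimal bilinear-lookup sketch on a fake density/temperature grid (assumes queries strictly inside the table; real codes add clamping and log-spaced axes):

    ```python
    import numpy as np

    # Bilinear table lookup of the kind a tabular-EOS rad-hydro code performs
    # per cell per cycle; the grid and pressures below are invented.
    rho_axis = np.array([0.1, 1.0, 10.0, 100.0])     # density (g/cc)
    T_axis = np.array([1e3, 1e4, 1e5, 1e6])          # temperature (K)
    P_table = np.outer(rho_axis, T_axis) * 8.3e-2    # fake ideal-gas-like pressures

    def eos_pressure(rho, T):
        """Bilinear interpolation in (rho, T); queries must lie inside the grid."""
        i = np.searchsorted(rho_axis, rho) - 1
        j = np.searchsorted(T_axis, T) - 1
        fr = (rho - rho_axis[i]) / (rho_axis[i + 1] - rho_axis[i])
        ft = (T - T_axis[j]) / (T_axis[j + 1] - T_axis[j])
        return ((1 - fr) * (1 - ft) * P_table[i, j]
                + fr * (1 - ft) * P_table[i + 1, j]
                + (1 - fr) * ft * P_table[i, j + 1]
                + fr * ft * P_table[i + 1, j + 1])

    print(eos_pressure(2.5, 3.0e4))
    ```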

  6. Convergence studies of deterministic methods for LWR explicit reflector methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Canepa, S.; Hursin, M.; Ferroukhi, H.

    2013-07-01

    The standard approach in modern 3-D core simulators, employed either for steady-state or transient simulations, is to use albedo coefficients or explicit reflectors at the core axial and radial boundaries. In the latter approach, few-group homogenized nuclear data are produced a priori with lattice transport codes using 2-D reflector models. Recently, the explicit reflector methodology of the deterministic CASMO-4/SIMULATE-3 code system was identified as potentially constituting one of the main sources of errors for core analyses of the Swiss operating LWRs, which all belong to the GII design. Considering that some of the new GIII designs will rely on very different reflector concepts, a review and assessment of the reflector methodology for various LWR designs appeared relevant. Therefore, the purpose of this paper is to first recall the concepts of the explicit reflector modelling approach as employed by CASMO/SIMULATE. Then, for selected reflector configurations representative of both GII and GIII designs, a benchmarking of the few-group nuclear data produced with the deterministic lattice code CASMO-4 and its successor CASMO-5 is conducted. On this basis, a convergence study with regard to geometrical requirements when using deterministic methods with 2-D homogeneous models is conducted, and the effect on the downstream 3-D core analysis accuracy is evaluated for a typical GII reflector design in order to assess the results against available plant measurements. (authors)

  7. Rubus: A compiler for seamless and extensible parallelism.

    PubMed

    Adnan, Muhammad; Aslam, Faisal; Nawaz, Zubair; Sarwar, Syed Mansoor

    2017-01-01

    Nowadays, a typical processor may have multiple processing cores on a single chip. Furthermore, a special purpose processing unit called the Graphic Processing Unit (GPU), originally designed for 2D/3D games, is now available for general purpose use in computers and mobile devices. However, the traditional programming languages, which were designed to work with machines having single-core CPUs, cannot utilize the parallelism available on multi-core processors efficiently. Therefore, to exploit the extraordinary processing power of multi-core processors, researchers are working on new tools and techniques to facilitate parallel programming. To this end, languages like CUDA and OpenCL have been introduced, which can be used to write code with parallelism. The main shortcoming of these languages is that the programmer needs to specify all the complex details manually in order to parallelize the code across multiple cores. Therefore, the code written in these languages is difficult to understand, debug and maintain. Furthermore, parallelizing legacy code can require rewriting a significant portion of it in CUDA or OpenCL, which can consume significant time and resources. Thus, the amount of parallelism achieved is proportional to the skills of the programmer and the time spent in code optimizations. This paper proposes a new open source compiler, Rubus, to achieve seamless parallelism. The Rubus compiler relieves the programmer from manually specifying the low-level details. It analyses and transforms a sequential program into a parallel program automatically, without any user intervention. This achieves massive speedup and better utilization of the underlying hardware without requiring a programmer's expertise in parallel programming. For five different benchmarks, an average speedup of 34.54 times has been achieved by Rubus as compared to Java on a basic GPU having only 96 cores, while for a matrix multiplication benchmark an average execution speedup of 84 times has been achieved by Rubus on the same GPU. Moreover, Rubus achieves this performance without drastically increasing the memory footprint of a program.

  8. Rubus: A compiler for seamless and extensible parallelism

    PubMed Central

    Adnan, Muhammad; Aslam, Faisal; Sarwar, Syed Mansoor

    2017-01-01

    Nowadays, a typical processor may have multiple processing cores on a single chip. Furthermore, a special purpose processing unit called the Graphic Processing Unit (GPU), originally designed for 2D/3D games, is now available for general purpose use in computers and mobile devices. However, the traditional programming languages, which were designed to work with machines having single-core CPUs, cannot utilize the parallelism available on multi-core processors efficiently. Therefore, to exploit the extraordinary processing power of multi-core processors, researchers are working on new tools and techniques to facilitate parallel programming. To this end, languages like CUDA and OpenCL have been introduced, which can be used to write code with parallelism. The main shortcoming of these languages is that the programmer needs to specify all the complex details manually in order to parallelize the code across multiple cores. Therefore, the code written in these languages is difficult to understand, debug and maintain. Furthermore, parallelizing legacy code can require rewriting a significant portion of it in CUDA or OpenCL, which can consume significant time and resources. Thus, the amount of parallelism achieved is proportional to the skills of the programmer and the time spent in code optimizations. This paper proposes a new open source compiler, Rubus, to achieve seamless parallelism. The Rubus compiler relieves the programmer from manually specifying the low-level details. It analyses and transforms a sequential program into a parallel program automatically, without any user intervention. This achieves massive speedup and better utilization of the underlying hardware without requiring a programmer's expertise in parallel programming. For five different benchmarks, an average speedup of 34.54 times has been achieved by Rubus as compared to Java on a basic GPU having only 96 cores, while for a matrix multiplication benchmark an average execution speedup of 84 times has been achieved by Rubus on the same GPU. Moreover, Rubus achieves this performance without drastically increasing the memory footprint of a program. PMID:29211758

  9. Analytical methods in the high conversion reactor core design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeggel, W.; Oldekop, W.; Axmann, J.K.

    High conversion reactor (HCR) design methods have been used at the Technical University of Braunschweig (TUBS) with the technological support of Kraftwerk Union (KWU). The present state and objectives of this cooperation between KWU and TUBS in the field of HCRs have been described using existing design models and current activities aimed at further development and validation of the codes. The hard physical and thermal-hydraulic boundary conditions of pressurized water reactor (PWR) cores with a high degree of fuel utilization result from the tight packing of the HCR fuel rods and the high fissionable plutonium content of the fuel. In terms of design, the problem will be solved with rod bundles whose fuel rods are adjusted by helical spacers to the proposed small rod pitches. These HCR properties require novel computational models for neutron physics, thermal hydraulics, and fuel rod design. By means of a survey of the codes, the analytical procedure for present-day HCR core design is presented. The design programs are currently under intensive development, as design tools with a solid, scientific foundation and with essential parameters that are widely valid and are required for a promising optimization of the HCR core. Design results and a survey of future HCR development are given. In this connection, the reoptimization of the PWR core in the direction of an HCR is considered a fascinating scientific task, with respect to both economic and safety aspects.

  10. Development of a Next Generation Concurrent Framework for the ATLAS Experiment

    NASA Astrophysics Data System (ADS)

    Calafiura, P.; Lampl, W.; Leggett, C.; Malon, D.; Stewart, G.; Wynne, B.

    2015-12-01

    The ATLAS experiment has successfully used its Gaudi/Athena software framework for data taking and analysis during the first LHC run, with billions of events successfully processed. However, the design of Gaudi/Athena dates from the early 2000s, and the software and the physics code have been written using a single-threaded, serial design. This programming model has increasing difficulty in exploiting the potential of current CPUs, which offer their best performance only through taking full advantage of multiple cores and wide vector registers. Future CPU evolution will intensify this trend, with core counts increasing and memory per core falling. With current memory consumption for 64-bit ATLAS reconstruction in a high-luminosity environment approaching 4 GB, it will become impossible to fully occupy all cores in a machine without exhausting available memory. However, since maximizing performance per watt will be a key metric, a mechanism must be found to use all cores as efficiently as possible. In this paper we report on our progress with a practical demonstration of the use of multithreading in the ATLAS reconstruction software, using the GaudiHive framework. We have expanded support to Calorimeter, Inner Detector, and Tracking code, discussing what changes, both to the framework and to the tools and algorithms used, were necessary in order to allow the serially designed ATLAS code to run. We report on the performance gains and on the general lessons learned about the code patterns that had been employed in the software, identifying which patterns were particularly problematic for multi-threading. We also present our findings on implementing a hybrid multi-threaded/multi-process framework, to take advantage of the strengths of each type of concurrency while avoiding some of their corresponding limitations.
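
    The scheduling idea behind GaudiHive, running each algorithm as soon as its declared data dependencies are available, can be sketched with a thread pool and a "whiteboard" of produced data. The algorithm names, dependencies, and timings below are invented; this is not ATLAS code.

    ```python
    from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED
    import time

    # Data-flow scheduling sketch in the spirit of GaudiHive: an "algorithm"
    # runs as soon as its declared inputs exist on the whiteboard.
    ALGS = {                    # name: (inputs, output, seconds of fake work)
        "Calo":     ((), "calo_hits", 0.2),
        "InnerDet": ((), "id_hits", 0.2),
        "Tracking": (("id_hits",), "tracks", 0.3),
        "Combined": (("calo_hits", "tracks"), "event", 0.1),
    }

    def run_alg(name):
        time.sleep(ALGS[name][2])          # stand-in for reconstruction work
        return ALGS[name][1]

    store, pending, running = set(), dict(ALGS), {}
    with ThreadPoolExecutor(max_workers=4) as pool:
        while pending or running:
            ready = [n for n, (ins, _, _) in pending.items() if set(ins) <= store]
            for name in ready:             # launch every unblocked algorithm
                running[pool.submit(run_alg, name)] = name
                del pending[name]
            done, _ = wait(running, return_when=FIRST_COMPLETED)
            for fut in done:
                store.add(fut.result())    # publish output; unblocks consumers
                print(f"{running.pop(fut)} finished")
    ```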

  11. A Monte Carlo model system for core analysis and epithermal neutron beam design at the Washington State University Radiation Center

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burns, T.D. Jr.

    1996-05-01

    The Monte Carlo Model System (MCMS) for the Washington State University (WSU) Radiation Center provides a means through which core criticality and power distributions can be calculated, as well as providing a method for the neutron and photon transport necessary for BNCT epithermal neutron beam design. The computational code used in this Model System is MCNP4A. The geometric capability of this Monte Carlo code allows the WSU system to be modeled very accurately. A working knowledge of the MCNP4A neutron transport code increases the flexibility of the Model System and is recommended; however, the eigenvalue/power density problems can be run with little direct knowledge of MCNP4A. Neutron and photon particle transport require more experience with the MCNP4A code. The Model System consists of two coupled subsystems: the Core Analysis and Source Plane Generator Model (CASP), and the BeamPort Shell Particle Transport Model (BSPT). The CASP Model incorporates the S(α,β) thermal treatment and is run as a criticality problem, yielding the system eigenvalue (k_eff), the core power distribution, and an implicit surface source for subsequent particle transport in the BSPT Model. The BSPT Model uses the source plane generated by a CASP run to transport particles through the thermal column beamport. The user can create filter arrangements in the beamport and then calculate the characteristics necessary for assessing the BNCT potential of a given filter arrangement. Examples of the characteristics to be calculated are neutron fluxes, neutron currents, fast neutron KERMAs, and gamma KERMAs. The MCMS is a useful tool for the WSU system. Those unfamiliar with the MCNP4A code can use the MCMS transparently for core analysis, while more experienced users will find the particle transport capabilities very powerful for BNCT filter design.

  12. Portable LQCD Monte Carlo code using OpenACC

    NASA Astrophysics Data System (ADS)

    Bonati, Claudio; Calore, Enrico; Coscetti, Simone; D'Elia, Massimo; Mesiti, Michele; Negro, Francesco; Fabio Schifano, Sebastiano; Silvi, Giorgio; Tripiccione, Raffaele

    2018-03-01

    Varying from multi-core CPU processors to many-core GPUs, the present scenario of HPC architectures is extremely heterogeneous. In this context, code portability is increasingly important for easy maintainability of applications; this is relevant in scientific computing where code changes are numerous and frequent. In this talk we present the design and optimization of a state-of-the-art production level LQCD Monte Carlo application, using the OpenACC directives model. OpenACC aims to abstract parallel programming to a descriptive level, where programmers do not need to specify the mapping of the code on the target machine. We describe the OpenACC implementation and show that the same code is able to target different architectures, including state-of-the-art CPUs and GPUs.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harrison, Cyrus; Larsen, Matt; Brugger, Eric

    Strawman is a system designed to explore the in situ visualization and analysis needs of simulation code teams running multi-physics calculations on many-core HPC architectures. It provides rendering pipelines that can leverage both many-core CPUs and GPUs to render images of simulation meshes.

  14. The Modeling of Advanced BWR Fuel Designs with the NRC Fuel Depletion Codes PARCS/PATHS

    DOE PAGES

    Ward, Andrew; Downar, Thomas J.; Xu, Y.; ...

    2015-04-22

    The PATHS (PARCS Advanced Thermal Hydraulic Solver) code was developed at the University of Michigan in support of U.S. Nuclear Regulatory Commission research to solve the steady-state, two-phase, thermal-hydraulic equations for a boiling water reactor (BWR) and to provide thermal-hydraulic feedback for BWR depletion calculations with the neutronics code PARCS (Purdue Advanced Reactor Core Simulator). The simplified solution methodology, including a three-equation drift flux formulation and an optimized iteration scheme, yields very fast run times in comparison to conventional thermal-hydraulic systems codes used in the industry, while still retaining sufficient accuracy for applications such as BWR depletion calculations. Lastly, the capability to model advanced BWR fuel designs with part-length fuel rods and heterogeneous axial channel flow geometry has been implemented in PATHS, and the code has been validated against previously benchmarked advanced core simulators as well as BWR plant and experimental data. We describe the modifications to the codes and the results of the validation in this paper.
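
    The drift-flux closure at the core of such a solver is commonly written in Zuber-Findlay form, relating void fraction to flow quality through a distribution parameter C0 and a drift velocity Vgj. A textbook sketch with illustrative BWR-like numbers (the C0/Vgj values are assumptions, not PATHS coefficients):

    ```python
    # Zuber-Findlay drift-flux void fraction (textbook form; the saturation
    # properties and C0/Vgj values below are illustrative assumptions).
    def void_fraction(x, G, rho_g, rho_l, c0=1.13, v_gj=0.24):
        """x: flow quality, G: mass flux (kg/m^2-s), v_gj: drift velocity (m/s)."""
        jg = x / rho_g                     # vapor superficial velocity per unit G
        j = x / rho_g + (1.0 - x) / rho_l  # total superficial velocity per unit G
        return jg / (c0 * j + v_gj / G)

    # Saturated water/steam near 7 MPa (typical BWR operating pressure)
    print(f"alpha = {void_fraction(x=0.10, G=1500.0, rho_g=36.5, rho_l=737.0):.3f}")
    ```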

  15. Multi-Core Processor Memory Contention Benchmark Analysis Case Study

    NASA Technical Reports Server (NTRS)

    Simon, Tyler; McGalliard, James

    2009-01-01

    Multi-core processors dominate current mainframe, server, and high performance computing (HPC) systems. This paper provides synthetic kernel and natural benchmark results from an HPC system at the NASA Goddard Space Flight Center that illustrate the performance impacts of multi-core (dual- and quad-core) vs. single-core processor systems. Analysis of processor design, application source code, and synthetic and natural test results all indicate that multi-core processors can suffer from significant memory subsystem contention compared to similar single-core processors.

  16. Low-Power Embedded DSP Core for Communication Systems

    NASA Astrophysics Data System (ADS)

    Tsao, Ya-Lan; Chen, Wei-Hao; Tan, Ming Hsuan; Lin, Maw-Ching; Jou, Shyh-Jye

    2003-12-01

    This paper proposes a parameterized digital signal processor (DSP) core for an embedded digital signal processing system designed to achieve demodulation/synchronization with better performance and flexibility. The features of this DSP core include a parameterized data path, dual MAC unit, subword MAC, and optional function-specific blocks for accelerating communication system modulation operations. This DSP core also has a low-power structure, which includes a gray-code addressing mode, pipeline sharing, and advanced hardware looping. Users can select the parameters and special functional blocks based on the character of their applications and then generate a DSP core. The DSP core has been implemented via a cell-based design method using synthesizable Verilog code with TSMC 0.35 µm SPQM and 0.25 µm 1P5M libraries. The equivalent gate count of the core area without memory is approximately 50 k. Moreover, the maximum operating frequency of one version is 100 MHz (0.35 µm) and 140 MHz (0.25 µm).
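
    The gray-code addressing mode mentioned above exploits the fact that consecutive Gray codes differ in exactly one bit, so sequential memory accesses toggle fewer address lines and dissipate less switching power. A minimal sketch of the standard conversions (illustrative only, not the authors' RTL):

        #include <cstdint>
        #include <cstdio>

        // Binary-to-Gray: adjacent values differ in exactly one bit.
        uint32_t to_gray(uint32_t b) { return b ^ (b >> 1); }

        // Gray-to-binary: fold the prefix XOR back down.
        uint32_t from_gray(uint32_t g) {
            for (uint32_t s = 1; s < 32; s <<= 1) g ^= g >> s;
            return g;
        }

        int main() {
            for (uint32_t a = 0; a < 8; ++a)
                std::printf("addr %u -> gray %u -> back %u\n",
                            a, to_gray(a), from_gray(to_gray(a)));
        }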

  17. Statistical core design methodology using the VIPRE thermal-hydraulics code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lloyd, M.W.; Feltus, M.A.

    1994-12-31

    The Penn State Statistical Core Design Methodology (PSSCDM) is unique because it not only includes the EPRI correlation/test data standard deviation but also the computational uncertainty for the VIPRE code model and the new composite box design correlation. The resultant PSSCDM equation mimics the EPRI DNBR correlation results well, with an uncertainty of 0.0389. The combined uncertainty yields a new DNBR limit of 1.18 that will provide more plant operational flexibility. This methodology and its associated correlation and unique coefficients are for a very particular VIPRE model; thus, the correlation will be specifically linked with the lumped channel and subchannel layout. The results of this research and methodology, however, can be applied to plant-specific VIPRE models.
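
    For illustration only (the PSSCDM's actual coefficients are tied to its specific VIPRE model and correlation), statistically independent uncertainty components are typically combined by root-sum-square, and a one-sided tolerance factor K then converts the combined uncertainty into a design limit:

        \sigma_{total} = \sqrt{\sigma_{correlation}^2 + \sigma_{code}^2}, \qquad
        DNBR_{limit} = \frac{1}{1 - K\,\sigma_{total}}

    With K chosen for a 95/95 criterion, a larger combined uncertainty pushes the limit up; the 1.18 limit quoted above comes from the full methodology, not from this simplified form.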

  18. Investigation on the Core Bypass Flow in a Very High Temperature Reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hassan, Yassin

    2013-10-22

    Uncertainties associated with the core bypass flow are some of the key issues that directly influence the coolant mass flow distribution and magnitude, and thus the operational core temperature profiles, in the very high-temperature reactor (VHTR). Designers will attempt to configure the core geometry so the core cooling flow rate magnitude and distribution conform to the design values. The objective of this project is to study the bypass flow both experimentally and computationally. Researchers will develop experimental data using state-of-the-art particle image velocimetry in a small test facility. The team will attempt to obtain full field temperature distribution using racks of thermocouples. The experimental data are intended to benchmark computational fluid dynamics (CFD) codes by providing detailed information, and are urgently needed for validation of the CFD codes. The project tasks are as follows:
    • Construct a small-scale bench-top experiment to resemble the bypass flow between the graphite blocks, varying parameters to address their impact on bypass flow. Wall roughness of the graphite block walls, spacing between the blocks, and temperature of the blocks are some of the parameters to be tested.
    • Perform CFD to evaluate pre- and post-test calculations and turbulence models, including sensitivity studies to achieve high accuracy.
    • Develop state-of-the-art large eddy simulation (LES) using appropriate subgrid modeling.
    • Develop models to be used in systems thermal-hydraulics codes to account for and estimate the bypass flows. These computer programs include, among others, RELAP3D, MELCOR, GAMMA, and GAS-NET.
    Actual core bypass flow rate may vary considerably from the design value. Although the uncertainty of the bypass flow rate is not known, some sources have stated that the bypass flow rates in the Fort St. Vrain reactor were between 8 and 25 percent of the total reactor mass flow rate. If bypass flow rates are on the high side, the quantity of cooling flow through the core may be considerably less than the nominal design value, causing some regions of the core to operate at temperatures in excess of the design values. These effects are postulated to lead to localized hot regions in the core that must be considered when evaluating VHTR operational and accident scenarios.

  19. Investigation of Abnormal Heat Transfer and Flow in a VHTR Reactor Core

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kawaji, Masahiro; Valentin, Francisco I.; Artoun, Narbeh

    2015-12-21

    The main objective of this project was to identify and characterize the conditions under which abnormal heat transfer phenomena would occur in a Very High Temperature Reactor (VHTR) with a prismatic core. High pressure/high temperature experiments have been conducted to obtain data that could be used for validation of VHTR design and safety analysis codes. The focus of these experiments was on the generation of benchmark data for design and off-design heat transfer for forced, mixed and natural circulation in a VHTR core. In particular, a flow laminarization phenomenon was intensely investigated since it could give rise to hot spots in the VHTR core.

  20. Design and optimization of a portable LQCD Monte Carlo code using OpenACC

    NASA Astrophysics Data System (ADS)

    Bonati, Claudio; Coscetti, Simone; D'Elia, Massimo; Mesiti, Michele; Negro, Francesco; Calore, Enrico; Schifano, Sebastiano Fabio; Silvi, Giorgio; Tripiccione, Raffaele

    The present panorama of HPC architectures is extremely heterogeneous, ranging from traditional multi-core CPU processors, supporting a wide class of applications but delivering moderate computing performance, to many-core Graphics Processing Units (GPUs), exploiting aggressive data-parallelism and delivering higher performance for streaming computing applications. In this scenario, code portability (and performance portability) becomes necessary for easy maintainability of applications; this is very relevant in scientific computing, where code changes are very frequent and keeping different code versions aligned is tedious and error-prone. In this work, we present the design and optimization of a state-of-the-art production-level LQCD Monte Carlo application, using the directive-based OpenACC programming model. OpenACC abstracts parallel programming to a descriptive level, relieving programmers from specifying how codes should be mapped onto the target architecture. We describe the implementation of a code fully written in OpenACC, and show that we are able to target several different architectures, including state-of-the-art traditional CPUs and GPUs, with the same code. We also measure performance, evaluating the computing efficiency of our OpenACC code on several architectures, comparing with GPU-specific implementations and showing that a good level of performance portability can be reached.
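
    A minimal sketch of the directive-based OpenACC style described here (illustrative only, not the authors' LQCD code): the directives describe parallelism and data movement, while the mapping onto CPU cores or GPU threads is left entirely to the compiler, e.g. nvc++ -acc=multicore versus nvc++ -acc=gpu. With a compiler that ignores the pragmas, the same source still runs serially.

        #include <cstdio>
        #include <vector>

        int main() {
            const int n = 1 << 20;
            std::vector<double> x(n, 1.0), y(n, 2.0);
            double* px = x.data();
            double* py = y.data();
            double dot = 0.0;

            // Descriptive directives: what to offload and which data to move,
            // not how to schedule it on the target machine.
            #pragma acc data copyin(px[0:n], py[0:n])
            {
                #pragma acc parallel loop reduction(+:dot)
                for (int i = 0; i < n; ++i)
                    dot += px[i] * py[i];
            }
            std::printf("dot = %f\n", dot);
        }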

  1. Full 3D visualization tool-kit for Monte Carlo and deterministic transport codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frambati, S.; Frignani, M.

    2012-07-01

    We propose a package of tools capable of translating the geometric inputs and outputs of many Monte Carlo and deterministic radiation transport codes into open source file formats. These tools are aimed at bridging the gap between trusted, widely-used radiation analysis codes and very powerful, more recent and commonly used visualization software, thus supporting the design process and helping with shielding optimization. Three main lines of development were followed: mesh-based analysis of Monte Carlo codes, mesh-based analysis of deterministic codes and Monte Carlo surface meshing. The developed kit is considered a powerful and cost-effective tool for radiation transport code users in the nuclear world, and in particular in the fields of core design and radiation analysis. (authors)
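
    As a flavour of what translating outputs into open formats involves, the sketch below (hypothetical, not the authors' toolkit) writes a point-wise tally as a legacy-ASCII VTK file that visualization packages such as ParaView or VisIt open directly:

        #include <fstream>
        #include <vector>

        struct Point { double x, y, z; };

        // Write a point cloud with one scalar ("flux") per point in legacy VTK.
        void write_vtk(const char* path, const std::vector<Point>& pts,
                       const std::vector<double>& flux) {
            std::ofstream f(path);
            f << "# vtk DataFile Version 3.0\nmesh tally\nASCII\n";
            f << "DATASET POLYDATA\nPOINTS " << pts.size() << " double\n";
            for (const auto& p : pts) f << p.x << ' ' << p.y << ' ' << p.z << '\n';
            f << "POINT_DATA " << flux.size() << "\nSCALARS flux double 1\n";
            f << "LOOKUP_TABLE default\n";
            for (double v : flux) f << v << '\n';
        }

        int main() {
            write_vtk("tally.vtk", {{0,0,0}, {1,0,0}, {0,1,0}}, {1.0, 2.0, 3.0});
        }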

  2. The CRONOS Code for Astrophysical Magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Kissmann, R.; Kleimann, J.; Krebl, B.; Wiengarten, T.

    2018-06-01

    We describe the magnetohydrodynamics (MHD) code CRONOS, which has been used in astrophysics and space-physics studies in recent years. CRONOS has been designed to be easily adaptable to the problem in hand, where the user can expand or exchange core modules or add new functionality to the code. This modularity comes about through its implementation using a C++ class structure. The core components of the code include solvers for both hydrodynamical (HD) and MHD problems. These problems are solved on different rectangular grids, which currently support Cartesian, spherical, and cylindrical coordinates. CRONOS uses a finite-volume description with different approximate Riemann solvers that can be chosen at runtime. Here, we describe the implementation of the code with a view toward its ongoing development. We illustrate the code’s potential through several (M)HD test problems and some astrophysical applications.
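
    The modularity described, with exchangeable core modules behind a C++ class structure, might look roughly like the following sketch (illustrative only, not CRONOS's actual hierarchy): approximate Riemann solvers behind a common interface, selected at runtime.

        #include <algorithm>
        #include <cmath>
        #include <memory>
        #include <stdexcept>
        #include <string>

        struct State { double rho, u, p; };   // 1D hydrodynamic state

        struct RiemannSolver {
            virtual double flux(const State& L, const State& R) const = 0;
            virtual ~RiemannSolver() = default;
        };

        // Local Lax-Friedrichs (Rusanov) mass flux for an ideal gas, gamma = 1.4.
        struct Rusanov : RiemannSolver {
            double flux(const State& L, const State& R) const override {
                double cL = std::sqrt(1.4 * L.p / L.rho);
                double cR = std::sqrt(1.4 * R.p / R.rho);
                double smax = std::max(std::abs(L.u) + cL, std::abs(R.u) + cR);
                return 0.5 * (L.rho * L.u + R.rho * R.u) - 0.5 * smax * (R.rho - L.rho);
            }
        };

        // Runtime selection, e.g. from a configuration file keyword.
        std::unique_ptr<RiemannSolver> make_solver(const std::string& name) {
            if (name == "rusanov") return std::make_unique<Rusanov>();
            throw std::invalid_argument("unknown solver: " + name);
        }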

  3. Current and anticipated uses of thermal hydraulic codes at the Japan Atomic Energy Research Institute

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akimoto, Hajime; Kukita; Ohnuki, Akira

    1997-07-01

    The Japan Atomic Energy Research Institute (JAERI) is conducting several research programs related to thermal-hydraulic and neutronic behavior of light water reactors (LWRs). These include LWR safety research projects, which are conducted in accordance with the Nuclear Safety Commission's research plan, and reactor engineering projects for the development of innovative reactor designs or core/fuel designs. Thermal-hydraulic and neutronic codes are used for various purposes including experimental analysis, nuclear power plant (NPP) safety analysis, and design assessment.

  4. Impact of thorium based molten salt reactor on the closure of the nuclear fuel cycle

    NASA Astrophysics Data System (ADS)

    Jaradat, Safwan Qasim Mohammad

    The molten salt reactor (MSR) is one of the six reactor concepts selected by the Generation IV International Forum (GIF). The liquid fluoride thorium reactor (LFTR) is an MSR concept based on the thorium fuel cycle. LFTR uses liquid fluoride salts as a nuclear fuel, with 232Th and 233U as the fertile and fissile materials, respectively. Fluoride salt of these nuclides is dissolved in a mixed carrier salt of lithium and beryllium (FLiBe). The objective of this research was to complete feasibility studies of a small commercial thermal LFTR. The focus was on neutronic calculations in order to prescribe core design parameters such as core size, fuel block pitch (p), fuel channel radius, fuel path, reflector thickness, fuel salt composition, and power. To achieve this objective, the applicability of the Monte Carlo N-Particle Transport Code (MCNP) to MSR modeling was first verified. A conceptual small thermal LFTR was then prescribed, and the relevant calculations were performed using MCNP to determine the main neutronic parameters of the reactor core. The MCNP code was used to study the reactor physics characteristics of the FUJI-U3 reactor, and the results were compared with those obtained for the original FUJI-U3 using the reactor physics code SRAC95 and the burnup analysis code ORIPHY2; the two sets of results agreed well. Based on these results, MCNP was found to be a reliable code for modeling a small thermal LFTR and studying all the related reactor physics characteristics. The results of this study were promising and successful in demonstrating a preliminary small commercial LFTR design. A small core with a diameter/height of 280/260 cm that would operate for more than five years at a power level of 150 MWth was studied. The fuel system 7LiF - BeF2 - ThF4 - UF4 with (233U/232Th) = 2.01% was the candidate fuel for this reactor core.

  5. Parameter study of dual-mode space nuclear fission solid core power and propulsion systems, NUROC3A. AMS report No. 1239c

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, W.W.; Layton, J.P.

    1976-09-13

    The three-volume report describes a dual-mode nuclear space power and propulsion system concept that employs an advanced solid-core nuclear fission reactor coupled via heat pipes to one of several electric power conversion systems. The NUROC3A systems analysis code was designed to provide the user with performance characteristics of the dual-mode system. Volume 3 describes utilization of the NUROC3A code to produce a detailed parameter study of the system.

  6. Analysis and Design of ITER 1 MV Core Snubber

    NASA Astrophysics Data System (ADS)

    Wang, Haitian; Li, Ge

    2012-11-01

    The core snubber, as a passive protection device, can suppress arc current and absorb energy stored in stray capacitance during electrical breakdown in the accelerating electrodes of the ITER NBI. To design the ITER core snubber, the control parameters of the arc peak current were first analyzed by the Fink-Baker-Owren (FBO) method, the method used for designing the DIIID 100 kV snubber. The B-H curve can be derived from the measured voltage and current waveforms, and the hysteresis loss of the core snubber can be derived using the revised parallelogram method. The core snubber can be represented in simplified form as an equivalent parallel resistance and inductance, an effect neglected by the FBO method. A simulation code including the parallel equivalent resistance and inductance has been set up. Both simulation and experiment show dramatically larger arc shorting currents due to the parallel inductance effect. The case shows that the core snubber designed with the FBO method gives a more compact design.

  7. The change of radial power factor distribution due to RCCA insertion at the first cycle core of AP1000

    NASA Astrophysics Data System (ADS)

    Susilo, J.; Suparlina, L.; Deswandri; Sunaryo, G. R.

    2018-02-01

    Computer programs have been used for the analysis of PWR core neutronic design parameters in several previous studies, including validation of the codes against neutronic parameter values obtained from measurements and benchmark calculations. In this study, the AP1000 first cycle core radial power peaking factor validation and analysis were performed using the CITATION module of the SRAC2006 computer code. The code has also been validated, with good results, against the criticality values of the VERA benchmark core. The AP1000 core power distribution calculation was done in two-dimensional X-Y geometry through quarter-core modeling. The purpose of this research is to determine the accuracy of the SRAC2006 code, as well as the safety performance of the AP1000 core during its first operating cycle. The core calculations were carried out for several conditions: without Rod Cluster Control Assembly (RCCA) insertion, with a single RCCA inserted (AO, M1, M2, MA, MB, MC, MD), and with multiple RCCAs inserted (MA + MB, MA + MB + MC, MA + MB + MC + MD, and MA + MB + MC + MD + M1). The maximum power factor of the fuel rods in a fuel assembly was assumed to be approximately 1.406. The analysis showed that the 2-dimensional CITATION module of the SRAC2006 code is accurate for the AP1000 power distribution calculation without RCCA and with MA + MB RCCA insertion. The power peaking factors in the first operating cycle of the AP1000 core without RCCA, as well as with single and multiple RCCA insertion, are still below the safety limit (less than about 1.798). In terms of the thermal power generated by the fuel assemblies, the AP1000 core can therefore be considered safe during its first operating cycle.
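
    For reference, the power peaking factor referred to here is the usual ratio of peak to core-average power,

        F = \frac{\max_i P_i}{\frac{1}{N}\sum_{i=1}^{N} P_i}

    so the calculated peaking values (about 1.406 at the assembly level) are judged against the quoted 1.798 safety limit.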

  8. Initial Neutronics Analyses for HEU to LEU Fuel Conversion of the Transient Reactor Test Facility (TREAT) at the Idaho National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kontogeorgakos, D.; Derstine, K.; Wright, A.

    2013-06-01

    The purpose of the TREAT reactor is to generate large transient neutron pulses in test samples, simulating fuel assembly accident conditions without overheating the core. The power transients in the present HEU core are inherently self-limiting, such that the core prevents itself from overheating even in the event of a reactivity insertion accident. The objective of this study was to support the assessment of the feasibility of the TREAT core conversion based on the present reactor performance metrics and the technical specifications of the HEU core. The LEU fuel assembly studied had the same overall design, materials (UO2 particles finely dispersed in graphite) and impurities content as the HEU fuel assembly. The Monte Carlo N-Particle code (MCNP) and the point kinetics code TREKIN were used in the analyses.

  9. Preliminary Analysis of the BASALA-H Experimental Programme

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blaise, Patrick; Fougeras, Philippe; Philibert, Herve

    2002-07-01

    This paper focuses on the preliminary analysis of results obtained on the first cores of the first phase of the BASALA (Boiling water reactor Advanced core physics Study Aimed at mox fuel Lattice) programme, which studies the neutronic parameters of an ABWR core in hot conditions and is currently under investigation in the French EOLE critical facility, within the framework of a cooperation between NUPEC, CEA and Cogema. The first 'on-line' analysis of the results has been made using a new preliminary design and safety scheme based on the French APOLLO-2 code in its 2.4 qualified version and the associated CEA-93 V4 (JEF-2.2) library, which will enable the Experimental Physics Division (SPEx) to perform future core designs. The paper describes the scheme adopted and the results obtained in various cases, going from the critical size determination to the reactivity worth of the perturbed configurations (voided, over-moderated, and poisoned with Gd2O3-UO2 pins). A preliminary study of the experimental results on MISTRAL-4 is summarized, and APOLLO-2 calculations are compared with MCNP-4C calculations on these cores. The results show very good agreement between the two codes, and versus the experiment. This work opens the way to the future full analysis of the experimental results by the qualification teams with completely validated schemes, based on the new 2.5 version of the APOLLO-2 code. (authors)

  10. Development of direct-inverse 3-D methods for applied transonic aerodynamic wing design and analysis

    NASA Technical Reports Server (NTRS)

    Carlson, Leland A.

    1989-01-01

    An inverse wing design method was developed around an existing transonic wing analysis code. The original analysis code, TAWFIVE, has as its core the numerical potential flow solver, FLO30, developed by Jameson and Caughey. Features of the analysis code include a finite-volume formulation; wing and fuselage fitted, curvilinear grid mesh; and a viscous boundary layer correction that also accounts for viscous wake thickness and curvature. The development of the inverse methods as an extension of previous methods existing for design in Cartesian coordinates is presented. Results are shown for inviscid wing design cases in super-critical flow regimes. The test cases selected also demonstrate the versatility of the design method in designing an entire wing or discontinuous sections of a wing.

  11. Performance study of deterministic solvers on a sodium-cooled fast core

    NASA Astrophysics Data System (ADS)

    Bay, Charlotte

    Next-generation reactors, in particular the sodium-cooled fast reactor (SFR), pose a real challenge for current codes and solvers, which are used mainly for thermal cores. There is no guarantee that their capabilities carry over directly to a fast neutron spectrum or to major design differences, so it is necessary to assess the validity of these solvers and their potential shortcomings for fast neutron reactors. Conducted during an internship at CEA (France) at the instigation of the EPM Nuclear Institute, this study covers the following codes: DRAGON/DONJON, ERANOS, PARIS and APOLLO3. Accuracy was assessed against the Monte Carlo code TRIPOLI4. Only core calculation was considered, namely the precision and speed of the numerical methods; lattice calculation (nuclear data, self-shielding, isotopic compositions) was not part of the study, nor were burnup or time-evolution effects. The study consists of two main steps: first, evaluating the sensitivity of each solver to its calculation parameters to obtain an optimal calculation setup; then comparing the solvers' precision and speed by collecting the usual quantities (effective multiplication factor, reaction rate maps) as well as quantities crucial to SFR design, namely control rod worth and sodium void effect. Calculation time is also a key factor. Whatever conclusions or recommendations are drawn from this study, they should be applied only within similar frameworks, that is, small fast-neutron cores with hexagonal geometry; any extension to large cores would have to be demonstrated in follow-on work.

  12. Integration of the SSPM and STAGE with the MPACT Virtual Facility Distributed Test Bed.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cipiti, Benjamin B.; Shoman, Nathan

    The Material Protection Accounting and Control Technologies (MPACT) program within DOE NE is working toward a 2020 milestone to demonstrate a Virtual Facility Distributed Test Bed. The goal of the Virtual Test Bed is to link all MPACT modeling tools, technology development, and experimental work to create a Safeguards and Security by Design capability for fuel cycle facilities. The Separation and Safeguards Performance Model (SSPM) forms the core safeguards analysis tool, and the Scenario Toolkit and Generation Environment (STAGE) code forms the core physical security tool. These models are used to design and analyze safeguards and security systems and generate performance metrics. Work over the past year has focused on how these models will integrate with the other capabilities in the MPACT program, and on specific model changes to enable more streamlined integration in the future. This report describes the model changes and the plans for how the models will be used more collaboratively. The Virtual Facility is not designed to integrate all capabilities into one master code, but rather to maintain stand-alone capabilities that communicate results between codes more effectively.

  13. Modifying scoping codes to accurately calculate TMI-cores with lifetimes greater than 500 effective full-power days

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bai, D.; Levine, S.L.; Luoma, J.

    1992-01-01

    The Three Mile Island unit 1 core reloads have been designed using fast but accurate scoping codes, PSUI-LEOPARD and ADMARC. PSUI-LEOPARD has been normalized to EPRI-CPM2 results and used to calculate the two-group constants, whereas ADMARC is a modern two-dimensional, two-group diffusion theory nodal code. Problems in accuracy were encountered for cycles 8 and higher as the core lifetime was increased beyond 500 effective full-power days. This is because the cores more heavily loaded in both 235U and 10B have harder neutron spectra, which produces a change in the transport effect in the baffle reflector region, and the burnable poison (BP) simulations were not accurate enough for the cores containing the increased amount of 10B required in the BP rods. In the authors' study, a technique has been developed to take into account the change in the transport effect in the baffle region by modifying the fast neutron diffusion coefficient as a function of cycle length and core exposure or burnup. A more accurate BP simulation method is also developed, using integral transport theory and CPM2 data, to calculate the BP contribution to the equivalent fuel assembly (supercell) two-group constants. The net result is that the accuracy of the scoping codes is as good as that produced by CASMO/SIMULATE or CPM2/SIMULATE when compared with measured data.

  14. Development and preliminary verification of the 3D core neutronic code: COCO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, H.; Mo, K.; Li, W.

    Driven by rapid economic growth and mounting environmental concerns, China is proactively pushing forward nuclear power development and encouraging the tapping of clean energy. In this situation, CGNPC, as one of the largest energy enterprises in China, is planning to develop its own nuclear technology in order to support the growing number of nuclear plants under construction or in operation. This paper introduces the recent progress in software development at CGNPC, focusing on the physical models and preliminary verification results from the recent development of the 3D core neutronic code COCO. In the COCO code, the non-linear Green's function method is employed to calculate the neutron flux. In order to use the discontinuity factor, the Neumann (second kind) boundary condition is utilized in the Green's function nodal method. Additionally, the COCO code includes the other necessary physical models, e.g. a single-channel thermal-hydraulic module, burnup module, pin power reconstruction module and cross-section interpolation module. The preliminary verification results show that the COCO code is sufficient for reactor core design and analysis for pressurized water reactors (PWRs). (authors)

  15. Automating the generation of finite element dynamical cores with Firedrake

    NASA Astrophysics Data System (ADS)

    Ham, David; Mitchell, Lawrence; Homolya, Miklós; Luporini, Fabio; Gibson, Thomas; Kelly, Paul; Cotter, Colin; Lange, Michael; Kramer, Stephan; Shipton, Jemma; Yamazaki, Hiroe; Paganini, Alberto; Kärnä, Tuomas

    2017-04-01

    The development of a dynamical core is an increasingly complex software engineering undertaking. As the equations become more complete, the discretisations more sophisticated and the hardware acquires ever more fine-grained parallelism and deeper memory hierarchies, the problem of building, testing and modifying dynamical cores becomes increasingly complex. Here we present Firedrake, a code generation system for the finite element method with specialist features designed to support the creation of geoscientific models. Using Firedrake, the dynamical core developer writes the partial differential equations in weak form in a high level mathematical notation. Appropriate function spaces are chosen and time stepping loops written at the same high level. When the programme is run, Firedrake generates high performance C code for the resulting numerics, which are executed in parallel. Models in Firedrake typically take a tiny fraction of the lines of code required by traditional hand-coding techniques. They support more sophisticated numerics than are easily achieved by hand, and the resulting code is frequently higher performance. Critically, debugging, modifying and extending a model written in Firedrake is vastly easier than by traditional methods due to the small, highly mathematical code base. Firedrake supports a wide range of key features for dynamical core creation:
    • A vast range of discretisations, including both continuous and discontinuous spaces and mimetic (C-grid-like) elements which optimally represent force balances in geophysical flows.
    • High aspect ratio layered meshes suitable for ocean and atmosphere domains.
    • Curved elements for high accuracy representations of the sphere.
    • Support for non-finite element operators, such as parametrisations.
    • Access to PETSc, a world-leading library of programmable linear and nonlinear solvers.
    • High performance adjoint models generated automatically by symbolically reasoning about the forward model.
    This poster will present the key features of the Firedrake system, as well as those of Gusto, an atmospheric dynamical core, and Thetis, a coastal ocean model, both of which are written in Firedrake.

  16. Evolution of the ATLAS Software Framework towards Concurrency

    NASA Astrophysics Data System (ADS)

    Jones, R. W. L.; Stewart, G. A.; Leggett, C.; Wynne, B. M.

    2015-05-01

    The ATLAS experiment has successfully used its Gaudi/Athena software framework for data taking and analysis during the first LHC run, with billions of events successfully processed. However, the design of Gaudi/Athena dates from early 2000 and the software and the physics code has been written using a single threaded, serial design. This programming model has increasing difficulty in exploiting the potential of current CPUs, which offer their best performance only through taking full advantage of multiple cores and wide vector registers. Future CPU evolution will intensify this trend, with core counts increasing and memory per core falling. Maximising performance per watt will be a key metric, so all of these cores must be used as efficiently as possible. In order to address the deficiencies of the current framework, ATLAS has embarked upon two projects: first, a practical demonstration of the use of multi-threading in our reconstruction software, using the GaudiHive framework; second, an exercise to gather requirements for an updated framework, going back to the first principles of how event processing occurs. In this paper we report on both these aspects of our work. For the hive based demonstrators, we discuss what changes were necessary in order to allow the serially designed ATLAS code to run, both to the framework and to the tools and algorithms used. We report on what general lessons were learned about the code patterns that had been employed in the software and which patterns were identified as particularly problematic for multi-threading. These lessons were fed into our considerations of a new framework and we present preliminary conclusions on this work. In particular we identify areas where the framework can be simplified in order to aid the implementation of a concurrent event processing scheme. Finally, we discuss the practical difficulties involved in migrating a large established code base to a multi-threaded framework and how this can be achieved for LHC Run 3.

  17. Processor-in-memory-and-storage architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeBenedictis, Erik

    A method and apparatus for performing reliable general-purpose computing. Each sub-core of a plurality of sub-cores of a processor core processes the same instruction at the same time. A code analyzer receives, from the plurality of sub-cores, a plurality of residues that represents a code word corresponding to the same instruction, and an indication of whether the code word is a memory address code or a data code. The code analyzer determines whether the plurality of residues are consistent or inconsistent. The code analyzer and the plurality of sub-cores perform a set of operations based on whether the code word is a memory address code or a data code and on the determination of whether the plurality of residues are consistent or inconsistent.
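
    The consistency check can be illustrated with a redundant residue number system (illustrative sketch only, not the patented design): with moduli {3, 5, 7} and legitimate code words restricted to 0..14 (the product of the first two moduli), the third residue is redundant, so a single corrupted residue reconstructs to a value outside the legitimate range and is flagged.

        #include <array>
        #include <cstdio>

        constexpr std::array<int, 3> M = {3, 5, 7};   // pairwise coprime moduli
        constexpr int LEGIT = 15;                     // legitimate code words: 0..14 (3*5)

        std::array<int, 3> encode(int x) { return {x % M[0], x % M[1], x % M[2]}; }

        // Reconstruct by exhaustive search over the full dynamic range (the CRT
        // guarantees a unique match); residues are consistent only if the
        // reconstructed value lies in the legitimate range.
        bool consistent(const std::array<int, 3>& r) {
            for (int x = 0; x < M[0] * M[1] * M[2]; ++x)
                if (x % M[0] == r[0] && x % M[1] == r[1] && x % M[2] == r[2])
                    return x < LEGIT;
            return false;
        }

        int main() {
            auto r = encode(12);
            std::printf("intact: %s\n", consistent(r) ? "consistent" : "inconsistent");
            r[1] = (r[1] + 1) % M[1];   // inject a fault in one sub-core's residue
            std::printf("faulty: %s\n", consistent(r) ? "consistent" : "inconsistent");
        }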

  18. Method for depleting BWRs using optimal control rod patterns

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taner, M.S.; Levine, S.H.; Hsiao, M.Y.

    1991-01-01

    Control rod (CR) programming is an essential core management activity for boiling water reactors (BWRs). After establishing a core reload design for a BWR, CR programming is performed to develop a sequence of exposure-dependent CR patterns that assure the safe and effective depletion of the core through a reactor cycle. A time-variant target power distribution approach has been assumed in this study. The authors have developed OCTOPUS to implement a new two-step method for designing semioptimal CR programs for BWRs. The optimization procedure of OCTOPUS is based on the method of approximation programming and uses the SIMULATE-E code for nucleonics calculations.

  19. Neutron flux and power in RTP core-15

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rabir, Mohamad Hairie, E-mail: m-hairie@nuclearmalaysia.gov.my; Zin, Muhammad Rawi Md; Usang, Mark Dennis

    The PUSPATI TRIGA Reactor achieved initial criticality on June 28, 1982. The reactor is designed to effectively support various fields of basic nuclear research, manpower training, and production of radioisotopes. This paper describes the reactor parameter calculations for the PUSPATI TRIGA REACTOR (RTP), focusing on the application of the developed 3D reactor model for criticality calculation and analysis of the power and neutron flux distribution of the TRIGA core. The 3D continuous energy Monte Carlo code MCNP was used to develop a versatile and accurate full model of the TRIGA reactor. The model represents in detail all important components of the core with literally no physical approximation. The consistency and accuracy of the developed RTP MCNP model were established by comparing calculations to the available experimental results and TRIGLAV code calculations.

  20. Flow Analysis of a Gas Turbine Low- Pressure Subsystem

    NASA Technical Reports Server (NTRS)

    Veres, Joseph P.

    1997-01-01

    The NASA Lewis Research Center is coordinating a project to numerically simulate aerodynamic flow in the complete low-pressure subsystem (LPS) of a gas turbine engine. The numerical model solves the three-dimensional Navier-Stokes flow equations through all components within the low-pressure subsystem as well as the external flow around the engine nacelle. The Advanced Ducted Propfan Analysis Code (ADPAC), which is being developed jointly by Allison Engine Company and NASA, is the Navier-Stokes flow code being used for LPS simulation. The majority of the LPS project is being done under a NASA Lewis contract with Allison. Other contributors to the project are NYMA and the University of Toledo. For this project, the Energy Efficient Engine designed by GE Aircraft Engines is being modeled. This engine includes a low-pressure system and a high-pressure system. An inlet, a fan, a booster stage, a bypass duct, a lobed mixer, a low-pressure turbine, and a jet nozzle comprise the low-pressure subsystem within this engine. The tightly coupled flow analysis evaluates aerodynamic interactions between all components of the LPS. The high-pressure core engine of this engine is simulated with a one-dimensional thermodynamic cycle code in order to provide boundary conditions to the detailed LPS model. This core engine consists of a high-pressure compressor, a combustor, and a high-pressure turbine. The three-dimensional LPS flow model is coupled to the one-dimensional core engine model to provide a "hybrid" flow model of the complete gas turbine Energy Efficient Engine. The resulting hybrid engine model evaluates the detailed interaction between the LPS components at design and off-design engine operating conditions while considering the lumped-parameter performance of the core engine.

  1. Flow Field and Acoustic Predictions for Three-Stream Jets

    NASA Technical Reports Server (NTRS)

    Simmons, Shaun Patrick; Henderson, Brenda S.; Khavaran, Abbas

    2014-01-01

    Computational fluid dynamics was used to analyze a three-stream nozzle parametric design space. The study varied bypass-to-core area ratio, tertiary-to-core area ratio and jet operating conditions. The flowfield solutions from the Reynolds-Averaged Navier-Stokes (RANS) code Overflow 2.2e were used to pre-screen experimental models for a future test in the Aero-Acoustic Propulsion Laboratory (AAPL) at the NASA Glenn Research Center (GRC). Flowfield solutions were considered in conjunction with the jet-noise-prediction code JeNo to screen the design concepts. A two-stream versus three-stream computation based on equal mass flow rates showed a reduction in peak turbulent kinetic energy (TKE) for the three-stream jet relative to that for the two-stream jet which resulted in reduced acoustic emission. Additional three-stream solutions were analyzed for salient flowfield features expected to impact farfield noise. As tertiary power settings were increased there was a corresponding near nozzle increase in shear rate that resulted in an increase in high frequency noise and a reduction in peak TKE. As tertiary-to-core area ratio was increased the tertiary potential core elongated and the peak TKE was reduced. The most noticeable change occurred as secondary-to-core area ratio was increased thickening the secondary potential core, elongating the primary potential core and reducing peak TKE. As forward flight Mach number was increased the jet plume region decreased and reduced peak TKE.

  2. Deterministic Modeling of the High Temperature Test Reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ortensi, J.; Cogliati, J. J.; Pope, M. A.

    2010-06-01

    Idaho National Laboratory (INL) is tasked with the development of reactor physics analysis capability for the Next Generation Nuclear Plant (NGNP) project. In order to examine INL's current prismatic reactor deterministic analysis tools, the project is conducting a benchmark exercise based on modeling the High Temperature Test Reactor (HTTR). This exercise entails the development of a model for the initial criticality, a 19-column thin annular core, and the fully loaded core critical condition with 30 columns. Special emphasis is devoted to the annular core modeling, which shares more characteristics with the NGNP base design. The DRAGON code is used in this study because it offers significant ease and versatility in modeling prismatic designs. Despite some geometric limitations, the code performs quite well compared to other lattice physics codes. DRAGON can generate transport solutions via collision probability (CP), method of characteristics (MOC), and discrete ordinates (Sn). A fine-group cross section library based on the SHEM 281 energy structure is used in the DRAGON calculations. HEXPEDITE is the hexagonal-z full core solver used in this study and is based on the Green's function solution of the transverse integrated equations. In addition, two Monte Carlo (MC) based codes, MCNP5 and PSG2/SERPENT, provide benchmarking capability for the DRAGON and nodal diffusion solver codes. The results from this study show a consistent bias of 2-3% for the core multiplication factor. This systematic error has also been observed in other HTTR benchmark efforts and is well documented in the literature. The ENDF/B VII graphite and 235U cross sections appear to be the main source of the error. The isothermal temperature coefficients calculated with the fully loaded core configuration agree well with other benchmark participants but are 40% higher than the experimental values. This discrepancy with the measurement stems from the fact that during the experiments the control rods were adjusted to maintain criticality, whereas in the model the rod positions were fixed. In addition, this work includes a brief study of a cross section generation approach that seeks to decouple the domain in order to account for neighbor effects. This spectral interpenetration is a dominant effect in annular HTR physics. This analysis methodology should be further explored in order to reduce the error that is systematically propagated in the traditional generation of cross sections.

  3. Implicit time-integration method for simultaneous solution of a coupled non-linear system

    NASA Astrophysics Data System (ADS)

    Watson, Justin Kyle

    Historically, large physical problems have been divided into smaller problems based on the physics involved. This is no different in reactor safety analysis. The problem of analyzing a nuclear reactor for design basis accidents is handled by a handful of computer codes, each solving a portion of the problem. The reactor thermal-hydraulic response to an event is determined using a system code like the TRAC RELAP Advanced Computational Engine (TRACE). The core power response to the same accident scenario is determined using a core physics code like the Purdue Advanced Core Simulator (PARCS). Containment response to the reactor depressurization in a Loss Of Coolant Accident (LOCA) type event is calculated by a separate code. Sub-channel analysis is performed with yet another computer code. This is just a sample of the computer codes used to solve the overall problem of nuclear reactor design basis accidents. Traditionally each of these codes operates independently from the others, using only the global results from one calculation as boundary conditions to another. Industry's drive to uprate reactor power has motivated analysts to move from a conservative approach to design basis accidents towards a best-estimate method. To achieve a best-estimate calculation, efforts have been aimed at coupling the individual physics models to improve the accuracy of the analysis and reduce margins. The current coupling techniques are sequential in nature: during a calculation time-step, data are passed between the two codes, and the individual codes solve their portion of the calculation and converge to a solution before the calculation is allowed to proceed to the next time-step. This thesis presents a fully implicit method of simultaneously solving the neutron balance equations, heat conduction equations and the constitutive fluid dynamics equations. It discusses the problems involved in coupling different physics phenomena within multi-physics codes and presents a solution to these problems. The thesis also outlines the basic concepts behind the nodal balance equations, heat transfer equations and thermal-hydraulic equations, which are coupled to form a fully implicit nonlinear system of equations. The coupling of separate physics models to solve a larger problem and improve the accuracy and efficiency of a calculation is not a new idea; however, implementing the coupling in an implicit manner and solving the system simultaneously is, and the application to reactor safety codes, with thermal-hydraulics and neutronics on realistic problems, is also new. The coupling technique described in this thesis is applicable to other similar coupled thermal-hydraulic and core physics reactor safety codes. The technique is demonstrated using coupled input decks to show that the system is solved correctly, and is then verified using two derivative test problems based on international benchmarks: the OECD/NRC Three Mile Island (TMI) Main Steam Line Break (MSLB) problem (representative of pressurized water reactor analysis) and the OECD/NRC Peach Bottom (PB) Turbine Trip (TT) benchmark (representative of boiling water reactor analysis).
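
    The difference between sequential coupling and simultaneous solution is easiest to see on a toy system. The sketch below (hypothetical stand-in equations, not the thesis models) applies Newton's method to a coupled "power/temperature" pair, updating both unknowns from a single Jacobian solve per iteration rather than alternating between the two equations:

        #include <cmath>
        #include <cstdio>

        int main() {
            // Unknowns: "power" P and "temperature" T with a feedback coupling.
            double P = 1.0, T = 1.0;
            for (int it = 0; it < 20; ++it) {
                double f1 = P - std::exp(-0.1 * T);   // neutronics-like relation
                double f2 = T - (0.5 + 2.0 * P);      // heat-balance-like relation
                // Analytic Jacobian of [f1, f2] with respect to [P, T].
                double a = 1.0,  b = 0.1 * std::exp(-0.1 * T);
                double c = -2.0, d = 1.0;
                double det = a * d - b * c;
                // Solve J * [dP, dT] = -[f1, f2] by Cramer's rule.
                double dP = (-f1 * d + b * f2) / det;
                double dT = (-a * f2 + c * f1) / det;
                P += dP; T += dT;
                if (std::abs(f1) + std::abs(f2) < 1e-12) break;
            }
            std::printf("converged: P = %.6f, T = %.6f\n", P, T);
        }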

  4. Power Peaking Effect of OTTO Fuel Scheme Pebble Bed Reactor

    NASA Astrophysics Data System (ADS)

    Setiadipura, T.; Suwoto; Zuhair; Bakhri, S.; Sunaryo, G. R.

    2018-02-01

    The pebble bed reactor (PBR) type of High Temperature Gas-cooled Reactor (HTGR) is a very interesting nuclear reactor design for meeting growing electricity and heat demand with superior passive safety features. Efforts to introduce the PBR design to the market can be strengthened by simplifying its system with the once-through-then-out (OTTO) cycle PBR, in which the pebble fuel passes through the core only once. An important challenge in the OTTO fuel scheme is the power peaking effect, which limits the maximum nominal power or burnup of the design. A parametric survey is performed in this study to investigate the contribution of different design parameters to the power peaking effect of the OTTO cycle PBR. The PEBBED code is utilized to perform the equilibrium PBR core analysis for the different design parameters and fuel schemes. The parameters include core diameter, height-per-diameter ratio (H/D), power density, and core nominal power. The results show that the effects of diameter and H/D are stronger than those of power density and nominal core power. These results may provide important guidance for design optimization of the OTTO fuel scheme PBR.

  5. Optimizing zonal advection of the Advanced Research WRF (ARW) dynamics for Intel MIC

    NASA Astrophysics Data System (ADS)

    Mielikainen, Jarno; Huang, Bormin; Huang, Allen H.

    2014-10-01

    The Weather Research and Forecasting (WRF) model is the most widely used community weather forecast and research model in the world. There are two distinct varieties of WRF. The Advanced Research WRF (ARW) is an experimental, advanced research version featuring very high resolution. The WRF Nonhydrostatic Mesoscale Model (WRF-NMM) has been designed for forecasting operations. WRF consists of dynamics code and several physics modules. The WRF-ARW core is based on an Eulerian solver for the fully compressible nonhydrostatic equations. In this paper, we use the Intel Many Integrated Core (MIC) architecture to substantially increase the performance of a zonal advection subroutine, one of the most time-consuming routines in the ARW dynamics core. Advection advances the explicit perturbation horizontal momentum equations by adding in the large-timestep tendency along with the small-timestep pressure gradient tendency. We describe the challenges we met during the development of a high-speed dynamics code subroutine for the MIC architecture and discuss lessons learned from the code optimization process. The results show that the optimizations improved performance of the original code on the Xeon Phi 5110P by a factor of 2.4x.
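
    For context, a zonal advection kernel in its simplest form is a one-dimensional upwind update. The sketch below (illustrative only, far simpler than the ARW routine) shows the loop shape that vector-width directives target on wide-vector hardware like MIC; the OpenMP simd directive here stands in for whatever vectorization hints a given compiler uses.

        #include <vector>

        void advect_zonal(std::vector<double>& q, const std::vector<double>& u,
                          double dt_over_dx) {
            const int n = static_cast<int>(q.size());
            std::vector<double> qn(q);
            #pragma omp simd
            for (int i = 1; i < n - 1; ++i) {
                // First-order upwind: take the gradient from the side the wind comes from.
                double dq = (u[i] > 0.0) ? q[i] - q[i - 1] : q[i + 1] - q[i];
                qn[i] = q[i] - u[i] * dt_over_dx * dq;
            }
            q.swap(qn);
        }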

  6. Fundamental approaches for analysis thermal hydraulic parameter for Puspati Research Reactor

    NASA Astrophysics Data System (ADS)

    Hashim, Zaredah; Lanyau, Tonny Anak; Farid, Mohamad Fairus Abdul; Kassim, Mohammad Suhaimi; Azhar, Noraishah Syahirah

    2016-01-01

    The 1-MW PUSPATI Research Reactor (RTP) is the only pool-type nuclear research reactor in Malaysia, developed by General Atomics (GA). It was installed at the Malaysian Nuclear Agency and reached first criticality on 8 June 1982. Based on the initial core, which comprised 80 standard TRIGA fuel elements, a fundamental thermal-hydraulic model was investigated for steady-state operation using the PARET code. The main objective of this paper is to determine the temperature profiles and Departure from Nucleate Boiling Ratio (DNBR) of the RTP at full power operation. The second objective is to confirm that the values obtained from the PARET code are in agreement with the Safety Analysis Report (SAR) for the RTP. The code was employed for the hot and average channels in the core in order to calculate the fuel centerline and surface, cladding, and coolant temperatures, as well as the DNBR values. It was found that the thermal-hydraulic parameters related to safety for the initial core, which was cooled by natural convection, were in agreement with the design values and the safety limits in the SAR.

  7. Design Study of Modular Nuclear Power Plant with Small Long Life Gas Cooled Fast Reactors Utilizing MOX Fuel

    NASA Astrophysics Data System (ADS)

    Ilham, Muhammad; Su'ud, Zaki

    2017-01-01

    Growing energy demand due to the increasing world population encourages the development of nuclear power plant technology and science, particularly regarding safety and security. This research presents a design study of a small, long-life modular gas-cooled fast reactor (GCFR) that can be operated for over 20 years. A neutronic design study of the GCFR with mixed oxide (UO2-PuO2) fuel was conducted over a power range of 100-200 MWth with a fuel fraction variation of 50-60%, using cylindrical pin cell and cylindrical reactor core geometry. The calculations used the SRAC-CITATION code. The results obtained are the effective multiplication factor and the core power density (with geometry optimization), from which the optimum core design is determined; the optimum core power is 200 MWth with a 55% fuel fraction and fuel percentages of 9-13%.

  8. VLSI design of lossless frame recompression using multi-orientation prediction

    NASA Astrophysics Data System (ADS)

    Lee, Yu-Hsuan; You, Yi-Lun; Chen, Yi-Guo

    2016-01-01

    The pursuit of high-end visual quality drives demand for higher display resolutions and higher frame rates. Hence, many powerful coding tools are aggregated in emerging video coding standards to improve coding efficiency. This also makes video coding standards suffer from two design challenges: heavy computation and tremendous memory bandwidth. The first issue can be properly solved by careful hardware architecture design with advanced semiconductor processes. Nevertheless, the second has become a critical design bottleneck for modern video coding systems. In this article, a lossless frame recompression technique using multi-orientation prediction is proposed to overcome this bottleneck. This work was realised as a silicon chip in a TSMC 0.18 µm CMOS process. Its encoding capability can reach full-HD (1920 × 1080) at 48 fps. The chip power consumption is 17.31 mW at 100 MHz. Core area and chip area are 0.83 × 0.83 mm2 and 1.20 × 1.20 mm2, respectively. Experimental results demonstrate that this work exhibits outstanding lossless compression ratios with competitive hardware performance.
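
    A minimal sketch of the underlying idea, assuming a single left-neighbour predictor (the paper's multi-orientation scheme instead selects among several prediction directions per block): keep only residuals, which an entropy coder then shrinks, and reverse the prediction exactly on decode.

        #include <cstdint>
        #include <vector>

        // Encode: residual of each pixel against its left neighbour.
        std::vector<int16_t> encode_row(const std::vector<uint8_t>& row) {
            std::vector<int16_t> res(row.size());
            uint8_t pred = 0;                       // predictor for the first pixel
            for (std::size_t i = 0; i < row.size(); ++i) {
                res[i] = static_cast<int16_t>(row[i]) - pred;  // small if image is smooth
                pred = row[i];
            }
            return res;                             // residuals then go to an entropy coder
        }

        // Decode: add residuals back; reconstruction is bit-exact (lossless).
        std::vector<uint8_t> decode_row(const std::vector<int16_t>& res) {
            std::vector<uint8_t> row(res.size());
            uint8_t pred = 0;
            for (std::size_t i = 0; i < res.size(); ++i) {
                row[i] = static_cast<uint8_t>(pred + res[i]);
                pred = row[i];
            }
            return row;
        }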

  9. Design and implementation of H.264 based embedded video coding technology

    NASA Astrophysics Data System (ADS)

    Mao, Jian; Liu, Jinming; Zhang, Jiemin

    2016-03-01

    In this paper, an embedded system for remote online video monitoring was designed and developed to capture and record real-time conditions in an elevator. To improve the efficiency of video acquisition and processing, the Samsung S5PV210 chip, which integrates a graphics processing unit, was selected as the core processor, and video was encoded in the H.264 format for efficient storage and transmission. Based on the S5PV210 chip, hardware video coding was investigated, which is more efficient than software coding. Testing proved that hardware video coding can significantly reduce system cost and produce smoother video display. It can be widely applied to security supervision [1].

  10. Neutron dose rate analysis on HTGR-10 reactor using Monte Carlo code

    NASA Astrophysics Data System (ADS)

    Suwoto; Adrial, H.; Hamzah, A.; Zuhair; Bakhri, S.; Sunaryo, G. R.

    2018-02-01

    The HTGR-10 is a helium-cooled reactor with a cylinder-shaped core fuelled with TRISO coated fuel particle kernels in spherical pebbles. The helium coolant outlet temperature from the reactor core is designed to be 700 °C. One advantage of the HTGR type reactor is its co-generation capability: in addition to generating electricity, the reactor is designed to produce high-temperature heat that can be used for other processes. Each spherical fuel pebble contains 8335 TRISO UO2 coated kernel particles, with enrichments of 10% and 17%, dispersed in a graphite matrix. The main purpose of this study was to analyse the distribution of neutron dose rates generated by the HTGR-10 reactor. The calculation and analysis of the neutron dose rate in the HTGR-10 reactor core were performed using the Monte Carlo MCNP5v1.6 code. The double heterogeneity of the TRISO coated fuel particles and the spherical fuel pebbles in the HTGR-10 core is modelled well with the MCNP5v1.6 code. The neutron flux to dose conversion factors taken from the International Commission on Radiological Protection (ICRP-74) were used to determine the dose rate passing through the active core, reflectors, core barrel, reactor pressure vessel (RPV) and the biological shield. The calculated neutron dose rates for radiation workers in the radial direction outside the RPV (radial position 220 cm from the core center of HTGR-10) are 9.22E-4 μSv/h and 9.58E-4 μSv/h for 10% and 17% enrichment, respectively. These values comply with BAPETEN Chairman's Regulation Number 4 Year 2013 on Radiation Protection and Safety in Nuclear Energy Utilization, which sets the limit for the average effective dose for radiation workers at 20 mSv/year, or 10 μSv/h. Thus the protection and safety requirements for keeping radiation workers safe from the radiation source are fulfilled. It can be concluded that the calculated neutron dose rates for the HTGR-10 core meet the required radiation safety standards.

  11. Design of 9.271-pressure-ratio 5-stage core compressor and overall performance for first 3 stages

    NASA Technical Reports Server (NTRS)

    Steinke, Ronald J.

    1986-01-01

    Overall aerodynamic design information is given for all five stages of an axial-flow core compressor (74A) having a 9.271 pressure ratio and 29.710 kg/sec flow. For the inlet stage group (first three stages), detailed blade element design information and experimental overall performance are given. At the rotor 1 inlet, the tip speed was 430.291 m/sec and the hub-to-tip radius ratio was 0.488. A low number of blades per row was achieved by the use of low-aspect-ratio blading of moderate solidity. The high-reaction stages have about equal energy addition. Radial energy addition was varied to give constant total pressure at the rotor exit. The blade element profile and shock losses and the incidence and deviation angles were based on relevant experimental data. Blade shapes are mostly double circular arc. Analysis by a three-dimensional Euler code verified the experimentally measured high flow at design speed and IGV-stator setting angles. An optimization code gave an optimal IGV-stator reset schedule for higher measured efficiency at all speeds.

  12. Profugus

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evans, Thomas; Hamilton, Steven; Slattery, Stuart

    Profugus is an open-source mini-application (mini-app) for radiation transport and reactor applications. It contains the fundamental computational kernels used in the Exnihilo code suite from Oak Ridge National Laboratory. However, Exnihilo is a production code with a substantial user base, and it is also export controlled, which makes collaboration with computer scientists and computer engineers difficult. Profugus is designed to bridge that gap. By encapsulating the core numerical algorithms in an abbreviated, open-source code base, computer scientists can analyze the algorithms and easily make code-architectural changes to test performance without compromising the production code values of Exnihilo. Profugus is not meant to be production software with respect to problem analysis: its computational kernels are designed to analyze performance, not correctness. Nonetheless, users of Profugus can set up and run problems with enough real-world features to be useful as proof-of-concept for actual production work.

  13. Extension of the XGC code for global gyrokinetic simulations in stellarator geometry

    NASA Astrophysics Data System (ADS)

    Cole, Michael; Moritaka, Toseo; White, Roscoe; Hager, Robert; Ku, Seung-Hoe; Chang, Choong-Seock

    2017-10-01

    In this work, the total-f, gyrokinetic particle-in-cell code XGC is extended to treat stellarator geometries. Improvements to meshing tools and the code itself have enabled the first physics studies, including single particle tracing and flux surface mapping in the magnetic geometry of the heliotron LHD and quasi-isodynamic stellarator Wendelstein 7-X. These have provided the first successful test cases for our approach. XGC is uniquely placed to model the complex edge physics of stellarators. A roadmap to such a global confinement modeling capability will be presented. Single particle studies will include the physics of energetic particles' global stochastic motions and their effect on confinement. Good confinement of energetic particles is vital for a successful stellarator reactor design. These results can be compared in the core region with those of other codes, such as ORBIT3d. In subsequent work, neoclassical transport and turbulence can then be considered and compared to results from codes such as EUTERPE and GENE. After sufficient verification in the core region, XGC will move into the stellarator edge region including the material wall and neutral particle recycling.

  14. Documentation to the NCES Common Core of Data Public Elementary/Secondary School Locale Code File: School Year 2003-04. Version 1a. NCES 2006-332

    ERIC Educational Resources Information Center

    Geverdt, Douglas; Phan, Tai

    2006-01-01

    The Common Core of Data (CCD) Nonfiscal surveys consist of data submitted annually by state education agencies (SEAs) to the National Center for Education Statistics (NCES). School, local education agency, and state data are sent to NCES by SEA personnel who are designated CCD Coordinators. The data are edited and maintained in machine-readable…

  15. The MELTSPREAD Code for Modeling of Ex-Vessel Core Debris Spreading Behavior, Code Manual – Version 3-beta

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farmer, M. T.

    MELTSPREAD3 is a transient one-dimensional computer code that has been developed to predict the gravity-driven flow and freezing behavior of molten reactor core materials (corium) in containment geometries. Predictions can be made for corium flowing across surfaces under either dry or wet cavity conditions. The spreading surfaces that can be selected are steel, concrete, a user-specified material (e.g., a ceramic), or an arbitrary combination thereof. The corium can have a wide range of compositions of reactor core materials that includes distinct oxide phases (predominantly Zr and steel oxides) plus metallic phases (predominantly Zr and steel). The code requires input that describes the containment geometry, melt “pour” conditions, and cavity atmospheric conditions (i.e., pressure, temperature, and cavity flooding information). For cases in which the cavity contains a preexisting water layer at the time of RPV failure, melt jet breakup and particle bed formation can be calculated mechanistically given the time-dependent melt pour conditions (input data) as well as the heatup and boiloff of water in the melt impingement zone (calculated). For core debris impacting either the containment floor or previously spread material, the code calculates the transient hydrodynamics and heat transfer which determine the spreading and freezing behavior of the melt. The code predicts conditions at the end of the spreading stage, including melt relocation distance, depth and material composition profiles, substrate ablation profile, and wall heatup. Code output can be used as input to other models such as CORQUENCH that evaluate long term core-concrete interaction behavior following the transient spreading stage. MELTSPREAD3 was originally developed to investigate BWR Mark I liner vulnerability, but has been substantially upgraded and applied to other reactor designs (e.g., the EPR), and more recently to the plant accidents at Fukushima Daiichi. The most recent round of improvements that are documented in this report have been specifically implemented to support industry in developing Severe Accident Water Management (SAWM) strategies for Boiling Water Reactors.

  16. Representing metabolic pathway information: an object-oriented approach.

    PubMed

    Ellis, L B; Speedie, S M; McLeish, R

    1998-01-01

    The University of Minnesota Biocatalysis/Biodegradation Database (UM-BBD) is a website providing information and dynamic links for microbial metabolic pathways, enzyme reactions, and their substrates and products. The Compound, Organism, Reaction and Enzyme (CORE) object-oriented database management system was developed to contain and serve this information. CORE was developed using Java, an object-oriented programming language, and PSE persistent object classes from Object Design, Inc. CORE dynamically generates descriptive web pages for reactions, compounds and enzymes, and reconstructs ad hoc pathway maps starting from any UM-BBD reaction. CORE code is available from the authors upon request. CORE is accessible through the UM-BBD at: http://www.labmed.umn.edu/umbbd/index.html.
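
    A minimal sketch of the object-oriented idea described above: Compound, Reaction, and Enzyme objects linked so that a pathway map can be reconstructed from any starting reaction. It is written in Python rather than Java, and the class names, fields, and example reaction are illustrative only, not the actual CORE schema.

      from dataclasses import dataclass, field

      @dataclass
      class Compound:
          name: str

      @dataclass
      class Enzyme:
          name: str
          ec_number: str

      @dataclass
      class Reaction:
          substrate: Compound
          product: Compound
          enzyme: Enzyme
          next: list["Reaction"] = field(default_factory=list)  # downstream reactions

      def pathway_from(start: Reaction):
          """Walk reaction links depth-first, yielding one pathway step per line."""
          seen, stack = set(), [start]
          while stack:
              rxn = stack.pop()
              if id(rxn) in seen:
                  continue
              seen.add(id(rxn))
              yield f"{rxn.substrate.name} -[{rxn.enzyme.name}]-> {rxn.product.name}"
              stack.extend(rxn.next)

      toluene, cresol = Compound("toluene"), Compound("p-cresol")
      step = Reaction(toluene, cresol, Enzyme("toluene 4-monooxygenase", "1.14.13.-"))
      print(list(pathway_from(step)))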

  17. FPGA acceleration of rigid-molecule docking codes

    PubMed Central

    Sukhwani, B.; Herbordt, M.C.

    2011-01-01

    Modelling the interactions of biological molecules, or docking, is critical both to understanding basic life processes and to designing new drugs. The field programmable gate array (FPGA) based acceleration of a recently developed, complex, production docking code is described. The authors found that it is necessary to extend their previous three-dimensional (3D) correlation structure in several ways, most significantly to support simultaneous computation of several correlation functions. The result for small-molecule docking is a 100-fold speed-up of a section of the code that represents over 95% of the original run-time. An additional 2% is accelerated through a previously described method, yielding a total acceleration of 36× over a single core and 10× over a quad-core. This approach is found to be an ideal complement to graphics processing unit (GPU) based docking, which excels in the protein–protein domain. PMID:21857870

  18. Design and Synthesis of Tight Pitch FLCs for Analog Voltage-Limited Electro-Optic Modulators

    DTIC Science & Technology

    1994-05-27

    …a wide variety of cores can be synthesized. This is the preferred method for making about 40% of the cores shown in Figure 1. …methanol, and finally dehydration to the nitrile 51 using phosphorus pentoxide.

  19. Candidate molten salt investigation for an accelerator driven subcritical core

    NASA Astrophysics Data System (ADS)

    Sooby, E.; Baty, A.; Beneš, O.; McIntyre, P.; Pogue, N.; Salanne, M.; Sattarov, A.

    2013-09-01

    We report a design for accelerator-driven subcritical fission in a molten salt core (ADSMS) that utilizes a fuel salt composed of NaCl and transuranic (TRU) chlorides. The ADSMS core is designed for fast neutronics (28% of neutrons >1 MeV) to optimize TRU destruction. The choice of a NaCl-based salt offers benefits for corrosion, operating temperature, and actinide solubility as compared with LiF-based fuel salts. A molecular dynamics (MD) code has been used to estimate properties of the molten salt system which are important for ADSMS design but have never been measured experimentally. Results from the MD studies are reported. Experimental measurements of fuel salt properties and studies of corrosion and radiation damage on candidate metals for the core vessel are anticipated. A special thanks is due to Prof. Paul Madden for introducing the ADSMS group to the concept of using the molten salt as the spallation target, rather than a conventional heavy metal spallation target. This feature helps to optimize this core as a Pu/TRU burner.

  20. Monte Carlo Analysis of the Battery-Type High Temperature Gas Cooled Reactor

    NASA Astrophysics Data System (ADS)

    Grodzki, Marcin; Darnowski, Piotr; Niewiński, Grzegorz

    2017-12-01

    The paper presents a neutronic analysis of a battery-type 20 MWth high-temperature gas cooled reactor. The reactor model is based on publicly available data for an `early design' variant of the U-battery. The investigated core is a battery-type small modular reactor: a graphite-moderated, uranium-fueled, prismatic, helium-cooled high-temperature gas cooled reactor with a graphite reflector. Two alternative core designs were investigated: the first has a central reflector and 30×4 prismatic fuel blocks, and the second has no central reflector and 37×4 blocks. The SERPENT Monte Carlo reactor physics code, with ENDF and JEFF nuclear data libraries, was applied. Several nuclear design static criticality calculations were performed and compared with available reference results. The analysis covered single assembly models and full core simulations for two geometry models: homogeneous and heterogeneous (explicit). A sensitivity analysis of the reflector graphite density was performed. An acceptable agreement between calculations and the reference design was obtained. All calculations were performed for the fresh core state.

  1. Revisiting Molecular Dynamics on a CPU/GPU system: Water Kernel and SHAKE Parallelization.

    PubMed

    Ruymgaart, A Peter; Elber, Ron

    2012-11-13

    We report Graphics Processing Unit (GPU) and OpenMP parallel implementations of water-specific force calculations and of bond constraints for use in Molecular Dynamics simulations. We focus on a typical laboratory computing environment in which a CPU with a few cores is attached to a GPU. We discuss the design of the code in detail and illustrate performance comparable to highly optimized codes such as GROMACS. Besides speed, our code shows excellent energy conservation. Utilization of water-specific lists allows the efficient calculation of non-bonded interactions that include water molecules and results in a speed-up factor of more than 40 on the GPU compared to code optimized on a single CPU core for systems larger than 20,000 atoms. This is a four-fold improvement over the factor of 10 reported in our initial GPU implementation, which did not include water-specific code. Another optimization is the implementation of constrained dynamics entirely on the GPU. The routine, which enforces constraints of all bonds, runs in parallel on multiple OpenMP cores or entirely on the GPU. It is based on a Conjugate Gradient solution for the Lagrange multipliers (CG SHAKE). The GPU implementation is partially in double precision and requires no communication with the CPU during the execution of the SHAKE algorithm. The (parallel) implementation of SHAKE allows an increase of the time step to 2.0 fs while maintaining excellent energy conservation. Interestingly, CG SHAKE is faster than the usual bond relaxation algorithm even on a single core if high accuracy is expected. The significant speedup of the optimized components transfers the computational bottleneck of the MD calculation to the reciprocal part of Particle Mesh Ewald (PME).
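
    Since the abstract names the method but not its mechanics, the following is a small NumPy sketch of the CG SHAKE idea: linearize the bond-length constraints, solve the resulting symmetric positive definite system for the Lagrange multipliers with conjugate gradient, and apply mass-weighted position corrections. This is a dense, serial illustration under assumed conventions, not the authors' GPU implementation, and all names and parameter values are invented.

      import numpy as np

      def cg(A, b, tol=1e-12):
          # Plain conjugate gradient for a symmetric positive definite matrix A.
          x = np.zeros_like(b)
          res = b - A @ x
          p = res.copy()
          rs = res @ res
          for _ in range(10 * len(b)):
              Ap = A @ p
              alpha = rs / (p @ Ap)
              x += alpha * p
              res -= alpha * Ap
              rs_new = res @ res
              if np.sqrt(rs_new) < tol:
                  break
              p = res + (rs_new / rs) * p
              rs = rs_new
          return x

      def shake_cg(r, bonds, d2, mass, tol=1e-10, iters=100):
          # r: (N,3) positions; bonds: (K,2) atom index pairs;
          # d2: (K,) squared target bond lengths; mass: (N,) atom masses.
          invm = np.repeat(1.0 / mass, 3)              # diagonal of M^-1
          for _ in range(iters):
              rij = r[bonds[:, 0]] - r[bonds[:, 1]]
              sigma = (rij * rij).sum(1) - d2          # constraint violations
              if np.abs(sigma).max() < tol:
                  break
              K, N = len(bonds), len(r)
              J = np.zeros((K, 3 * N))                 # constraint Jacobian
              for k, (i, j) in enumerate(bonds):
                  J[k, 3*i:3*i+3] = 2 * rij[k]
                  J[k, 3*j:3*j+3] = -2 * rij[k]
              A = J @ (invm[:, None] * J.T)            # SPD system J M^-1 J^T
              lam = cg(A, sigma)                       # Lagrange multipliers
              r = r - (invm * (J.T @ lam)).reshape(N, 3)
          return r

      # Toy use: one O-H pair stretched past its 0.1 nm constraint.
      r = shake_cg(np.array([[0.0, 0.0, 0.0], [0.12, 0.0, 0.0]]),
                   np.array([[0, 1]]), np.array([0.01]), np.array([16.0, 1.0]))
      print(r)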

  2. Summary of papers on current and anticipated uses of thermal-hydraulic codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caruso, R.

    1997-07-01

    The author reviews a range of recent papers which discuss possible uses and future development needs for thermal-hydraulic codes in the nuclear industry. From this review, eight common recommendations are extracted. They are: improve the user interface so that more people can use the code, so that models are easier and less expensive to prepare and maintain, and so that the results are scrutable; design the code so that it can easily be coupled to other codes, such as core physics, containment, and fission product behaviour during severe accidents; improve the numerical methods to make the code more robust and especially faster running, particularly for low pressure transients; ensure that future code development includes assessment of code uncertainties as an integral part of code verification and validation; provide extensive user guidelines or structure the code so that the 'user effect' is minimized; include the capability to model multiple fluids (gas and liquid phase); design the code in a modular fashion so that new models can be added easily; provide the ability to include detailed or simplified component models; and build on work previously done with other codes (RETRAN, RELAP, TRAC, CATHARE) and other code validation efforts (CSAU, CSNI SET and IET matrices).

  3. Development and Implementation of CFD-Informed Models for the Advanced Subchannel Code CTF

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blyth, Taylor S.; Avramova, Maria

    The research described in this PhD thesis contributes to the development of efficient methods for utilization of high-fidelity models and codes to inform low-fidelity models and codes in the area of nuclear reactor core thermal-hydraulics. The objective is to increase the accuracy of predictions of quantities of interest using high-fidelity CFD models while preserving the efficiency of low-fidelity subchannel core calculations. An original methodology named Physics-based Approach for High-to-Low Model Information has been further developed and tested. The overall physical phenomena and corresponding localized effects, which are introduced by the presence of spacer grids in light water reactor (LWR) cores, are dissected into four corresponding basic building processes, and the corresponding models are informed using high-fidelity CFD codes. These models are a spacer grid-directed cross-flow model, a grid-enhanced turbulent mixing model, a heat transfer enhancement model, and a spacer grid pressure loss model. The localized CFD models are developed and tested using the CFD code STAR-CCM+, and the corresponding global model development and testing in subchannel formulation is performed in the thermal-hydraulic subchannel code CTF. The improved CTF simulations utilize data files derived from CFD STAR-CCM+ simulation results covering the spacer grid design desired for inclusion in the CTF calculation. The current implementation of these models is examined and possibilities for improvement and further development are suggested. The validation experimental database is extended by including the OECD/NRC PSBT benchmark data. The outcome is an enhanced accuracy of CTF predictions while preserving the computational efficiency of a low-fidelity subchannel code.

  4. Development and Implementation of CFD-Informed Models for the Advanced Subchannel Code CTF

    NASA Astrophysics Data System (ADS)

    Blyth, Taylor S.

    The research described in this PhD thesis contributes to the development of efficient methods for utilization of high-fidelity models and codes to inform low-fidelity models and codes in the area of nuclear reactor core thermal-hydraulics. The objective is to increase the accuracy of predictions of quantities of interest using high-fidelity CFD models while preserving the efficiency of low-fidelity subchannel core calculations. An original methodology named Physics-based Approach for High-to-Low Model Information has been further developed and tested. The overall physical phenomena and corresponding localized effects, which are introduced by the presence of spacer grids in light water reactor (LWR) cores, are dissected into four corresponding basic building processes, and the corresponding models are informed using high-fidelity CFD codes. These models are a spacer grid-directed cross-flow model, a grid-enhanced turbulent mixing model, a heat transfer enhancement model, and a spacer grid pressure loss model. The localized CFD models are developed and tested using the CFD code STAR-CCM+, and the corresponding global model development and testing in subchannel formulation is performed in the thermal-hydraulic subchannel code CTF. The improved CTF simulations utilize data files derived from CFD STAR-CCM+ simulation results covering the spacer grid design desired for inclusion in the CTF calculation. The current implementation of these models is examined and possibilities for improvement and further development are suggested. The validation experimental database is extended by including the OECD/NRC PSBT benchmark data. The outcome is an enhanced accuracy of CTF predictions while preserving the computational efficiency of a low-fidelity subchannel code.

  5. Mechatronic Materials and Systems. Design and Demonstration of High Authority Shape Morphing Structures

    DTIC Science & Technology

    2005-09-01

    …thermal expansion of these truss elements. One side of the structure is fully clamped, while the other is free to displace. As in prior assessments [6]…, by using the finite element package ABAQUS. To simulate the complete actuation system, the core and Kagome face members are modeled using linear Timoshenko-type beams, while the solid…

  6. Creep relaxation of fuel pin bending and ovalling stresses. [BEND code, OVAL code, MARC-CDC code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chan, D.P.; Jackson, R.J.

    1981-10-01

    Analytical methods for calculating fuel pin cladding bending and ovalling stresses due to pin bundle-duct mechanical interaction, taking into account nonlinear creep, are presented. Calculated results are in agreement with finite element results from the MARC-CDC program. The methods are used to investigate the effect of creep on the FTR fuel cladding bending and ovalling stresses. It is concluded that the cladding of 316 SS 20 percent CW and reference design has creep rates in the FTR core region high enough to keep the bending and ovalling stresses at acceptable levels. 6 refs.

  7. Optimization and parallelization of the thermal–hydraulic subchannel code CTF for high-fidelity multi-physics applications

    DOE PAGES

    Salko, Robert K.; Schmidt, Rodney C.; Avramova, Maria N.

    2014-11-23

    This study describes major improvements to the computational infrastructure of the CTF subchannel code so that full-core, pincell-resolved (i.e., one computational subchannel per real bundle flow channel) simulations can now be performed in much shorter run times, either in stand-alone mode or as part of coupled-code multi-physics calculations. These improvements support the goals of the Department of Energy Consortium for Advanced Simulation of Light Water Reactors (CASL) Energy Innovation Hub to develop high-fidelity multi-physics simulation tools for nuclear energy design and analysis.

  8. Chargemaster maintenance: think 'spring cleaning' all year round.

    PubMed

    Barton, Shawn; Lancaster, Dani; Bieker, Mike

    2008-11-01

    Steps toward maintaining a standardized chargemaster include: Building a corporate chargemaster maintenance team. Developing a core research function. Designating hospital liaisons. Publishing timely reports on facility compliance. Using system codes to identify charges. Selecting chargemaster maintenance software. Developing a standard chargemaster data repository. Educating staff.

  9. Programming for 1.6 Million cores: Early experiences with IBM's BG/Q SMP architecture

    NASA Astrophysics Data System (ADS)

    Glosli, James

    2013-03-01

    With the stall in clock cycle improvements a decade ago, the drive for computational performance has continued along a path of increasing core counts on a processor. The multi-core evolution has been expressed both in symmetric multiprocessor (SMP) architectures and in CPU/GPU architectures. Debates rage in the high performance computing (HPC) community over which architecture best serves HPC. In this talk I will not attempt to resolve that debate but perhaps fuel it. I will discuss the experience of exploiting Sequoia, a 98,304-node IBM Blue Gene/Q SMP at Lawrence Livermore National Laboratory. The advantages and challenges of leveraging the computational power of BG/Q will be detailed through the discussion of two applications. The first application is a Molecular Dynamics code called ddcMD, developed over the last decade at LLNL and ported to BG/Q. The second application is a cardiac modeling code called Cardioid, recently designed and developed at LLNL to exploit the fine-scale parallelism of BG/Q's SMP architecture. Through the lenses of these efforts I'll illustrate the need to rethink how we express and implement our computational approaches. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  10. Conceptual Underpinnings of the Quality of Life in Neurological Disorders (Neuro-QoL): Comparisons of Core Sets for Stroke, Multiple Sclerosis, Spinal Cord Injury, and Traumatic Brain Injury.

    PubMed

    Wong, Alex W K; Lau, Stephen C L; Fong, Mandy W M; Cella, David; Lai, Jin-Shei; Heinemann, Allen W

    2018-04-03

    To determine the extent to which the content of the Quality of Life in Neurological Disorders (Neuro-QoL) covers the International Classification of Functioning, Disability and Health (ICF) Core Sets for multiple sclerosis (MS), stroke, spinal cord injury (SCI), and traumatic brain injury (TBI) using summary linkage indicators. Content analysis by linking content of the Neuro-QoL to corresponding ICF codes of each Core Set for MS, stroke, SCI, and TBI. Three academic centers. None. None. Four summary linkage indicators proposed by MacDermid et al were estimated to compare the content coverage between Neuro-QoL and the ICF codes of Core Sets for MS, stroke, SCI, and TBI. Neuro-QoL represented 20% to 30% of Core Set codes for the different conditions; more codes in the Core Sets for MS (29%), stroke (28%), and TBI (28%) were covered than for SCI in the long-term (20%) and early postacute (19%) contexts. Neuro-QoL represented nearly half of the unique Activity and Participation codes (43%-49%) and less than one third of the unique Body Function codes (12%-32%). It represented fewer Environmental Factors codes (2%-6%) and no Body Structures codes. Absolute linkage indicators found that at least 60% of Neuro-QoL items were linked to Core Set codes (63%-95%), but many items covered the same codes as revealed by unique linkage indicators (7%-13%), suggesting high concept redundancy among items. The Neuro-QoL links more closely to ICF Core Sets for stroke, MS, and TBI than to those for SCI, and primarily covers activity and participation ICF domains. Other instruments are needed to address concepts not measured by the Neuro-QoL when a comprehensive health assessment is needed.
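
    The summary indicators reduce to simple set arithmetic over linked codes. A hedged Python illustration with invented ICF-like codes follows; it mimics the kind of coverage percentages described above rather than reproducing the exact MacDermid et al formulas.

      core_set = {"b110", "b130", "b152", "d450", "d510", "e310"}  # hypothetical Core Set
      item_links = {                                               # item -> linked ICF codes
          "item1": {"b130", "d450"},
          "item2": {"d450"},
          "item3": {"b152"},
      }
      linked = set().union(*item_links.values())
      unique_coverage = 100 * len(core_set & linked) / len(core_set)
      absolute_linkage = 100 * sum(1 for codes in item_links.values()
                                   if codes & core_set) / len(item_links)
      print(f"{unique_coverage:.0f}% of Core Set codes represented; "
            f"{absolute_linkage:.0f}% of items linked to the Core Set")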

  11. Comparative Study on Various Geometrical Core Design of 300 MWth Gas Cooled Fast Reactor with UN-PuN Fuel Longlife without Refuelling

    NASA Astrophysics Data System (ADS)

    Dewi Syarifah, Ratna; Su'ud, Zaki; Basar, Khairul; Irwanto, Dwi

    2017-07-01

    Nuclear power has seen progressive improvement in the operating performance of existing reactors while ensuring the economic competitiveness of nuclear electricity around the world. The GFR uses a gas coolant and a fast neutron spectrum. This research uses helium coolant, which has low neutron moderation, is chemically inert, and remains single phase. A comparative study on various geometrical core designs for a modular GFR with UN-PuN fuel, long life without refuelling, has been done. The calculations use the SRAC2006 code, both the PIJ and CITATION calculations, with the JENDL-4.0 data library. The fuel fraction is varied from 40% to 65%. In this research, we varied the reactor core geometry to find the optimum geometry design. The first geometry variant is a balanced cylinder, in which the active core diameter (D) equals the active core height (H); the second is a pancake cylinder (D > H); and the third is a tall cylinder (D < H).
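
    To make the three variants concrete, the snippet below computes cylinder dimensions for a fixed active-core volume at several diameter-to-height ratios; the volume and the ratios are assumed values for illustration, not the paper's design data.

      import math

      def cylinder_dims(volume, aspect):
          """Return (D, H) of a cylinder of given volume with aspect = D / H."""
          h = (4.0 * volume / (math.pi * aspect ** 2)) ** (1.0 / 3.0)
          return aspect * h, h

      V = 10.0  # m^3, hypothetical active-core volume
      for label, aspect in [("balanced (D = H)", 1.0),
                            ("pancake (D > H)", 1.5),
                            ("tall (D < H)", 0.6)]:
          D, H = cylinder_dims(V, aspect)
          print(f"{label}: D = {D:.2f} m, H = {H:.2f} m")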

  12. Elaborate SMART MCNP Modelling Using ANSYS and Its Applications

    NASA Astrophysics Data System (ADS)

    Song, Jaehoon; Surh, Han-bum; Kim, Seung-jin; Koo, Bonsueng

    2017-09-01

    An MCNP 3-dimensional model can be widely used to evaluate various design parameters such as a core design or shielding design. Conventionally, a simplified 3-dimensional MCNP model is applied to calculate these parameters because of the cumbersomeness of modelling by hand. ANSYS has a function for converting the CAD 'stp' format into an MCNP input in the geometry part. Using ANSYS and a 3-dimensional CAD file, a very detailed and sophisticated MCNP 3-dimensional model can be generated. The MCNP model is applied to evaluate the assembly weighting factor at the ex-core detector of SMART, and the result is compared with a simplified MCNP SMART model and with the assembly weighting factor calculated by DORT, a deterministic Sn code.

  13. Mean Flow and Noise Prediction for a Separate Flow Jet With Chevron Mixers

    NASA Technical Reports Server (NTRS)

    Koch, L. Danielle; Bridges, James; Khavaran, Abbas

    2004-01-01

    Experimental and numerical results are presented here for a separate flow nozzle employing chevrons arranged in an alternating pattern on the core nozzle. Comparisons of these results demonstrate that the combination of the WIND/MGBK suite of codes can predict the noise reduction trends measured between separate flow jets with and without chevrons on the core nozzle. Mean flow predictions were validated against Particle Image Velocimetry (PIV), pressure, and temperature data, and noise predictions were validated against acoustic measurements recorded in the NASA Glenn Aeroacoustic Propulsion Lab. Comparisons are also made to results from the CRAFT code. The work presented here is part of an on-going assessment of the WIND/MGBK suite for use in designing the next generation of quiet nozzles for turbofan engines.

  14. Modeling and simulation of CANDU reactor and its regulating system

    NASA Astrophysics Data System (ADS)

    Javidnia, Hooman

    Analytical computer codes are indispensable tools in the design, optimization, and control of nuclear power plants. Numerous codes have been developed to perform different types of analyses related to nuclear power plants, and a large number of them are designed to perform safety analyses. In the context of safety analyses, the control system is often neglected. Although there are good reasons for such a decision, that does not mean that the study of control systems in nuclear power plants should be neglected altogether. In this thesis, a proof-of-concept code is developed as a tool that can be used in the design, optimization, and operation stages of the control system. The main objective in the design of this computer code is to provide a tool that is easy to use by its target audience and is capable of producing high-fidelity results that can be trusted to design the control system and optimize its performance. Since the overall plant control system covers a very wide range of processes, this thesis focuses on one particular module of the overall plant control system, namely the reactor regulating system. The center of the reactor regulating system is the CANDU reactor. A nodal model for the reactor is used to represent the spatial neutronic kinetics of the core. The nodal model produces better results than the point kinetics model, which is often used in the design and analysis of control systems for nuclear reactors, and can capture spatial effects to some extent, although it is not as detailed as finite difference methods. The criteria for choosing a nodal model of the core are: (1) the model should provide more detail than point kinetics and capture spatial effects; (2) it should not be so complex or overly detailed that it slows down the simulation or provides details that are extraneous or unnecessary for a control engineer. Other than the reactor itself, there are auxiliary models that describe the dynamics of different phenomena related to the transfer of energy from the core. The main function of the reactor regulating system is to control the power of the reactor. This is achieved by using a set of detectors, reactivity devices, and digital control algorithms. Three main reactivity devices that are activated during short-term or intermediate-term transients are modeled in this thesis. The main elements of the digital control system are implemented in accordance with the program specifications for the actual control system in CANDU reactors. The simulation results are validated against requirements of the reactor regulating system, actual plant data, and pre-validated data from other computer codes. The validation process shows that the simulation results can be trusted in making engineering decisions regarding the reactor regulating system and in predicting the system performance in response to upset conditions or disturbances. KEYWORDS: CANDU reactors, reactor regulating system, nodal model, spatial kinetics, reactivity devices, simulation.

  15. Initial verification and validation of RAZORBACK - A research reactor transient analysis code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Talley, Darren G.

    2015-09-01

    This report describes the work and results of the initial verification and validation (V&V) of the beta release of the Razorback code. Razorback is a computer code designed to simulate the operation of a research reactor (such as the Annular Core Research Reactor (ACRR)) by a coupled numerical solution of the point reactor kinetics equations, the energy conservation equation for fuel element heat transfer, and the mass, momentum, and energy conservation equations for the water cooling of the fuel elements. This initial V&V effort was intended to confirm that the code work to date shows good agreement between simulation and actual ACRR operations, indicating that the subsequent V&V effort for the official release of the code will be successful.
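
    For readers unfamiliar with the first of the coupled equation sets, the following is a minimal one-delayed-group point reactor kinetics sketch integrated with SciPy; the parameter values are generic illustrations, not ACRR or Razorback data.

      from scipy.integrate import solve_ivp

      beta, lam, big_lambda = 0.0065, 0.08, 1.0e-4  # delayed fraction, precursor decay
                                                    # constant (1/s), generation time (s)
      rho = 0.001                                   # step reactivity (about 0.15 dollars)

      def rhs(t, y):
          n, c = y                                  # relative power, precursor conc.
          dn = (rho - beta) / big_lambda * n + lam * c
          dc = beta / big_lambda * n - lam * c
          return [dn, dc]

      y0 = [1.0, beta / (lam * big_lambda)]         # precursors in equilibrium at n = 1
      sol = solve_ivp(rhs, (0.0, 5.0), y0, max_step=1e-3)
      print(f"relative power after 5 s: {sol.y[0, -1]:.3f}")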

  16. Imprinting Community College Computer Science Education with Software Engineering Principles

    ERIC Educational Resources Information Center

    Hundley, Jacqueline Holliday

    2012-01-01

    Although the two-year curriculum guide includes coverage of all eight software engineering core topics, the computer science courses taught in Alabama community colleges limit student exposure to the programming, or coding, phase of the software development lifecycle and offer little experience in requirements analysis, design, testing, and…

  17. Construction, classification and parametrization of complex Hadamard matrices

    NASA Astrophysics Data System (ADS)

    Szöllősi, Ferenc

    To improve the design of nuclear systems, high-fidelity neutron fluxes are required. Leadership-class machines provide platforms on which very large problems can be solved. Computing such fluxes efficiently requires numerical methods with good convergence properties and algorithms that can scale to hundreds of thousands of cores. Many 3-D deterministic transport codes are decomposable in space and angle only, limiting them to tens of thousands of cores. Most codes rely on methods such as Gauss Seidel for fixed source problems and power iteration for eigenvalue problems, which can be slow to converge for challenging problems like those with highly scattering materials or high dominance ratios. Three methods have been added to the 3-D SN transport code Denovo that are designed to improve convergence and enable the full use of cutting-edge computers. The first is a multigroup Krylov solver that converges more quickly than Gauss Seidel and parallelizes the code in energy such that Denovo can use hundreds of thousand of cores effectively. The second is Rayleigh quotient iteration (RQI), an old method applied in a new context. This eigenvalue solver finds the dominant eigenvalue in a mathematically optimal way and should converge in fewer iterations than power iteration. RQI creates energy-block-dense equations that the new Krylov solver treats efficiently. However, RQI can have convergence problems because it creates poorly conditioned systems. This can be overcome with preconditioning. The third method is a multigrid-in-energy preconditioner. The preconditioner takes advantage of the new energy decomposition because the grids are in energy rather than space or angle. The preconditioner greatly reduces iteration count for many problem types and scales well in energy. It also allows RQI to be successful for problems it could not solve otherwise. The methods added to Denovo accomplish the goals of this work. They converge in fewer iterations than traditional methods and enable the use of hundreds of thousands of cores. Each method can be used individually, with the multigroup Krylov solver and multigrid-in-energy preconditioner being particularly successful on their own. The largest benefit, though, comes from using these methods in concert.
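
    As a reference point for the eigenvalue discussion, here is a dense NumPy sketch of Rayleigh quotient iteration; it shows the shifted solves and the cubic convergence for symmetric matrices, but none of the transport-specific structure or multigrid preconditioning used in Denovo.

      import numpy as np

      def rqi(A, tol=1e-12, iters=50):
          rng = np.random.default_rng(0)
          x = rng.standard_normal(A.shape[0])
          x /= np.linalg.norm(x)
          mu = x @ A @ x                            # Rayleigh quotient estimate
          for _ in range(iters):
              try:
                  y = np.linalg.solve(A - mu * np.eye(len(x)), x)  # shifted solve
              except np.linalg.LinAlgError:
                  break                             # shift landed on an eigenvalue
              x = y / np.linalg.norm(y)
              mu_new = x @ A @ x
              if abs(mu_new - mu) < tol:
                  return mu_new, x
              mu = mu_new
          return mu, x

      val, vec = rqi(np.array([[4.0, 1.0], [1.0, 3.0]]))
      print(val)  # one of the eigenvalues (about 4.618 or 2.382)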

  18. RAZORBACK - A Research Reactor Transient Analysis Code Version 1.0 - Volume 3: Verification and Validation Report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Talley, Darren G.

    2017-04-01

    This report describes the work and results of the verification and validation (V&V) of the version 1.0 release of the Razorback code. Razorback is a computer code designed to simulate the operation of a research reactor (such as the Annular Core Research Reactor (ACRR)) by a coupled numerical solution of the point reactor kinetics equations, the energy conservation equation for fuel element heat transfer, the equation of motion for fuel element thermal expansion, and the mass, momentum, and energy conservation equations for the water cooling of the fuel elements. This V&V effort was intended to confirm that the code shows good agreement between simulation and actual ACRR operations.

  19. Ex-Vessel Core Melt Modeling Comparison between MELTSPREAD-CORQUENCH and MELCOR 2.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robb, Kevin R.; Farmer, Mitchell; Francis, Matthew W.

    System-level code analyses by both United States and international researchers predict major core melting, bottom head failure, and corium-concrete interaction for Fukushima Daiichi Unit 1 (1F1). Although system codes such as MELCOR and MAAP are capable of capturing a wide range of accident phenomena, they currently do not contain detailed models for evaluating some ex-vessel core melt behavior. However, specialized codes containing more detailed modeling are available for melt spreading, such as MELTSPREAD, as well as for long-term molten corium-concrete interaction (MCCI) and debris coolability, such as CORQUENCH. In a preceding study, Enhanced Ex-Vessel Analysis for Fukushima Daiichi Unit 1: Melt Spreading and Core-Concrete Interaction Analyses with MELTSPREAD and CORQUENCH, the MELTSPREAD-CORQUENCH codes predicted that the 1F1 core melt readily cooled, in contrast to predictions by MELCOR. The user community has taken notice and is in the process of updating their system codes, specifically MAAP and MELCOR, to improve and reduce conservatism in their ex-vessel core melt models. This report investigates why the MELCOR v2.1 code, compared to the MELTSPREAD and CORQUENCH 3.03 codes, yields differing predictions of ex-vessel melt progression. To accomplish this, the differences in the treatment of the ex-vessel melt with respect to melt spreading and long-term coolability are examined. The differences in modeling approaches are summarized, and a comparison of example code predictions is provided.

  20. VERAIn

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simunovic, Srdjan

    2015-02-16

    CASL's modeling and simulation technology, the Virtual Environment for Reactor Applications (VERA), incorporates coupled physics and science-based models, state-of-the-art numerical methods, modern computational science, integrated uncertainty quantification (UQ) and validation against data from operating pressurized water reactors (PWRs), single-effect experiments, and integral tests. The computational simulation component of VERA is the VERA Core Simulator (VERA-CS). The core simulator is the specific collection of multi-physics computer codes used to model and deplete a LWR core over multiple cycles. The core simulator has a single common input file that drives all of the different physics codes. The parser code, VERAIn, converts VERA input into an XML file that is used as input to different VERA codes.
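
    The record does not show the input grammar, so the following is only a toy illustration of the parse-to-XML pattern: simple key-value lines under bracketed block headers converted into an XML tree with the Python standard library. The block and key names are invented, not actual VERA input.

      import xml.etree.ElementTree as ET

      def parse_to_xml(text):
          root = ET.Element("case")
          block = root
          for line in text.splitlines():
              line = line.strip()
              if not line or line.startswith("!"):          # skip blanks and comments
                  continue
              if line.startswith("[") and line.endswith("]"):
                  block = ET.SubElement(root, line[1:-1].lower())
              else:
                  key, _, value = line.partition(" ")
                  ET.SubElement(block, key).text = value.strip()
          return root

      sample = "[CORE]\nsize 15\nrated_power 3411\n[ASSEMBLY]\nnpin 17\n"
      print(ET.tostring(parse_to_xml(sample), encoding="unicode"))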

  1. Preliminary Assessment of the Impact on Reactor Vessel dpa Rates Due to Installation of a Proposed Low Enriched Uranium (LEU) Core in the High Flux Isotope Reactor (HFIR)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daily, Charles R.

    2015-10-01

    An assessment of the impact on the High Flux Isotope Reactor (HFIR) reactor vessel (RV) displacements-per-atom (dpa) rates due to operations with the proposed low enriched uranium (LEU) core described by Ilas and Primm has been performed and is presented herein. The analyses documented herein support the conclusion that conversion of HFIR to low-enriched uranium (LEU) core operations using the LEU core design of Ilas and Primm will have no negative impact on HFIR RV dpa rates. Since its inception, HFIR has been operated with highly enriched uranium (HEU) cores. As part of an effort sponsored by the National Nuclear Security Administration (NNSA), conversion to LEU cores is being considered for future HFIR operations. The HFIR LEU configurations analyzed are consistent with the LEU core models used by Ilas and Primm and the HEU balance-of-plant models used by Risner and Blakeman in the latest analyses performed to support the HFIR materials surveillance program. The Risner and Blakeman analyses, as well as the studies documented herein, are the first to apply the hybrid transport methods available in the Automated Variance reduction Generator (ADVANTG) code to HFIR RV dpa rate calculations. These calculations have been performed on the Oak Ridge National Laboratory (ORNL) Institutional Cluster (OIC) with version 1.60 of the Monte Carlo N-Particle 5 (MCNP5) computer code.

  2. Aeroacoustic Codes For Rotor Harmonic and BVI Noise--CAMRAD.Mod1/HIRES

    NASA Technical Reports Server (NTRS)

    Brooks, Thomas F.; Boyd, D. Douglas, Jr.; Burley, Casey L.; Jolly, J. Ralph, Jr.

    1996-01-01

    This paper presents a status of non-CFD aeroacoustic codes at NASA Langley Research Center for the prediction of helicopter harmonic and Blade-Vortex Interaction (BVI) noise. The prediction approach incorporates three primary components: CAMRAD.Mod1, a substantially modified version of the performance/trim/wake code CAMRAD; HIRES, a high resolution blade loads post-processor; and WOPWOP, an acoustic code. The functional capabilities and physical modeling in CAMRAD.Mod1/HIRES are summarized and illustrated. A new multi-core roll-up wake modeling approach is introduced and validated. Predictions of rotor wake and radiated noise are compared with the results of the HART program, a model BO-105 wind tunnel test at the DNW in Europe. Additional comparisons are made to results from a DNW test of a contemporary design four-bladed rotor, as well as from a Langley test of a single proprotor (tiltrotor) three-bladed model configuration. Because the method is shown to help eliminate the necessity of guesswork in setting code parameters between different rotor configurations, it should prove useful as a rotor noise design tool.

  3. Optimizing meridional advection of the Advanced Research WRF (ARW) dynamics for Intel Xeon Phi coprocessor

    NASA Astrophysics Data System (ADS)

    Mielikainen, Jarno; Huang, Bormin; Huang, Allen H.-L.

    2015-05-01

    The most widely used community weather forecast and research model in the world is the Weather Research and Forecast (WRF) model. Two distinct varieties of WRF exist. The one we are interested in, the Advanced Research WRF (ARW), is an experimental, advanced research version featuring very high resolution. The WRF Nonhydrostatic Mesoscale Model (WRF-NMM) has been designed for forecasting operations. WRF consists of dynamics code and several physics modules. The WRF-ARW core is based on an Eulerian solver for the fully compressible nonhydrostatic equations. In this paper, we optimize a meridional (north-south direction) advection subroutine for the Intel Xeon Phi coprocessor. Advection is one of the most time-consuming routines in the ARW dynamics core. It advances the explicit perturbation horizontal momentum equations by adding in the large-timestep tendency along with the small-timestep pressure gradient tendency. We describe the challenges we met during the development of a high-speed dynamics code subroutine for the MIC architecture, and discuss lessons learned from the code optimization process. The results show that the optimizations improved performance of the original code on the Xeon Phi 7120P by a factor of 1.2x.
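
    As a shape for what such a kernel computes, here is a toy vectorized meridional advection tendency in NumPy: a second-order centered contribution of -v du/dy on a 2-D grid. It illustrates the data access pattern only and is not the WRF-ARW routine or its Xeon Phi optimization; all grid values are invented.

      import numpy as np

      def meridional_advection_tendency(u, v, dy):
          """Centered-difference contribution of -v * du/dy on a (y, x) grid."""
          dudt = np.zeros_like(u)
          dudt[1:-1, :] = -v[1:-1, :] * (u[2:, :] - u[:-2, :]) / (2.0 * dy)
          return dudt

      ny, nx, dy = 128, 128, 1000.0             # grid size and spacing (m)
      u = np.random.default_rng(1).standard_normal((ny, nx))
      v = np.full((ny, nx), 5.0)                # uniform 5 m/s northward wind
      print(meridional_advection_tendency(u, v, dy).shape)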

  4. Core Dynamics Analysis for Reactivity Insertion and Loss of Coolant Flow Tests Using the High Temperature Engineering Test Reactor

    NASA Astrophysics Data System (ADS)

    Takamatsu, Kuniyoshi; Nakagawa, Shigeaki; Takeda, Tetsuaki

    Safety demonstration tests using the High Temperature Engineering Test Reactor (HTTR) are in progress to verify its inherent safety features and improve the safety technology and design methodology for High-temperature Gas-cooled Reactors (HTGRs). The reactivity insertion test is one of the safety demonstration tests for the HTTR. This test simulates the rapid increase in the reactor power by withdrawing the control rod without operating the reactor power control system. In addition, loss of coolant flow tests have been conducted to simulate the rapid decrease in the reactor power by tripping one, two or all three gas circulators. The experimental results have revealed the inherent safety features of HTGRs, such as the negative reactivity feedback effect. The numerical analysis code, named ACCORD, was developed to analyze the reactor dynamics including the flow behavior in the HTTR core. We have modified this code to use a model with four parallel channels and twenty temperature coefficients. Furthermore, we added another analytical model of the core for calculating the heat conduction between the fuel channels and the core in the case of the loss of coolant flow tests. This paper describes the validation results for the newly developed code using the experimental results. Moreover, the effect of the model is formulated quantitatively with our proposed equation. Finally, the pre-analytical result of the loss of coolant flow test by tripping all gas circulators is also discussed.

  5. A front-end automation tool supporting design, verification and reuse of SOC.

    PubMed

    Yan, Xiao-lang; Yu, Long-li; Wang, Jie-bing

    2004-09-01

    This paper describes an in-house developed language tool called VPerl used in developing a 250 MHz 32-bit high-performance low-power embedded CPU core. The authors showed that use of this tool can compress the Verilog code by more than a factor of 5, increase the efficiency of the front-end design, and reduce the bug rate significantly. This tool can be used to enhance the reusability of an intellectual property model and facilitate porting a design to different platforms.

  6. Expert system for maintenance management of a boiling water reactor power plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong Shen; Liou, L.W.; Levine, S.

    1992-01-01

    An expert system code has been developed for the maintenance of two boiling water reactor units in Berwick, Pennsylvania, that are operated by the Pennsylvania Power and Light Company (PP and L). The objective of this expert system code, where the knowledge of experienced operators and engineers is captured and implemented, is to support the decisions regarding which components can be safely and reliably removed from service for maintenance. It can also serve as a query-answering facility for checking the plant system status and for training purposes. The operating and maintenance information of a large number of support systems, which must be available for emergencies and/or in the event of an accident, is stored in the data base of the code. It identifies the relevant technical specifications and management rules for shutting down any one of the systems or removing a component from service to support maintenance. Because of the complexity and time needed to incorporate a large number of systems and their components, the first phase of the expert system develops a prototype code, which includes only the reactor core isolation coolant system, the high-pressure core injection system, the instrument air system, the service water system, and the plant electrical system. The next phase is scheduled to expand the code to include all other systems. This paper summarizes the prototype code and the design concept of the complete expert system code for maintenance management of all plant systems and components.

  7. Coupled Analysis of an Inlet and Fan for a Quiet Supersonic Aircraft

    NASA Technical Reports Server (NTRS)

    Chima, Rodrick V.; Conners, Timothy R.; Wayman, Thomas R.

    2009-01-01

    A computational analysis of a Gulfstream isentropic external compression supersonic inlet coupled to a Rolls-Royce fan was completed. The inlet was designed for a small, low sonic boom supersonic vehicle with a design cruise condition of M = 1.6 at 45,000 feet. The inlet design included an annular bypass duct that routed flow subsonically around an engine-mounted gearbox and diverted flow with high shock losses away from the fan tip. Two Reynolds-averaged Navier-Stokes codes were used for the analysis: an axisymmetric code called AVCS for the inlet and a 3-D code called SWIFT for the fan. The codes were coupled at a mixing plane boundary using a separate code for data exchange. The codes were used to determine the performance of the inlet/fan system at the design point and to predict the performance and operability of the system over the flight profile. At the design point the core inlet had a recovery of 96 percent, and the fan operated near its peak efficiency and pressure ratio. A large hub radial distortion generated in the inlet was not eliminated by the fan and could pose a challenge for subsequent booster stages. The system operated stably at all points along the flight profile. Reduced stall margin was seen at low altitude and Mach number where flow separated on the interior lips of the cowl and bypass ducts. The coupled analysis gave consistent solutions at all points on the flight profile that would be difficult or impossible to predict by analysis of isolated components.

  8. Coupled Analysis of an Inlet and Fan for a Quiet Supersonic Jet

    NASA Technical Reports Server (NTRS)

    Chima, Rodrick V.; Conners, Timothy R.; Wayman, Thomas R.

    2010-01-01

    A computational analysis of a Gulfstream isentropic external compression supersonic inlet coupled to a Rolls-Royce fan has been completed. The inlet was designed for a small, low sonic boom supersonic vehicle with a design cruise condition of M = 1.6 at 45,000 ft. The inlet design included an annular bypass duct that routed flow subsonically around an engine-mounted gearbox and diverted flow with high shock losses away from the fan tip. Two Reynolds-averaged Navier-Stokes codes were used for the analysis: an axisymmetric code called AVCS for the inlet and a three dimensional (3-D) code called SWIFT for the fan. The codes were coupled at a mixing plane boundary using a separate code for data exchange. The codes were used to determine the performance of the inlet/fan system at the design point and to predict the performance and operability of the system over the flight profile. At the design point the core inlet had a recovery of 96 percent, and the fan operated near its peak efficiency and pressure ratio. A large hub radial distortion generated in the inlet was not eliminated by the fan and could pose a challenge for subsequent booster stages. The system operated stably at all points along the flight profile. Reduced stall margin was seen at low altitude and Mach number where flow separated on the interior lips of the cowl and bypass ducts. The coupled analysis gave consistent solutions at all points on the flight profile that would be difficult or impossible to predict by analysis of isolated components.

  9. Numerical optimization of perturbative coils for tokamaks

    NASA Astrophysics Data System (ADS)

    Lazerson, Samuel; Park, Jong-Kyu; Logan, Nikolas; Boozer, Allen; NSTX-U Research Team

    2014-10-01

    Numerical optimization of coils which apply three dimensional (3D) perturbative fields to tokamaks is presented. The application of perturbative 3D magnetic fields in tokamaks is now commonplace for control of error fields, resistive wall modes, resonant field drive, and neoclassical toroidal viscosity (NTV) torques. The design of such systems has focused on control of toroidal mode number, with coil shapes based on simple window-pane designs. In this work, a numerical optimization suite based on the STELLOPT 3D equilibrium optimization code is presented. The new code, IPECOPT, replaces the VMEC equilibrium code with the IPEC perturbed equilibrium code, and targets NTV torque by coupling to the PENT code. Fixed boundary optimizations of the 3D fields for the NSTX-U experiment are underway. Initial results suggest NTV torques can be driven by normal field spectrums which are not pitch-resonant with the magnetic field lines. Work has focused on driving core torque with n = 1 and edge torques with n = 3 fields. Optimizations of the coil currents for the planned NSTX-U NCC coils highlight the code's free boundary capability. This manuscript has been authored by Princeton University under Contract Number DE-AC02-09CH11466 with the U.S. Department of Energy.

  10. PFLOTRAN: Reactive Flow & Transport Code for Use on Laptops to Leadership-Class Supercomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hammond, Glenn E.; Lichtner, Peter C.; Lu, Chuan

    PFLOTRAN, a next-generation reactive flow and transport code for modeling subsurface processes, has been designed from the ground up to run efficiently on machines ranging from leadership-class supercomputers to laptops. Based on an object-oriented design, the code is easily extensible to incorporate additional processes. It can interface seamlessly with Fortran 9X, C and C++ codes. Domain decomposition parallelism is employed, with the PETSc parallel framework used to manage parallel solvers, data structures and communication. Features of the code include a modular input file, implementation of high-performance I/O using parallel HDF5, the ability to perform multiple-realization simulations with multiple processors per realization in a seamless manner, and multiple modes for multiphase flow and multicomponent geochemical transport. Chemical reactions currently implemented in the code include homogeneous aqueous complexing reactions and heterogeneous mineral precipitation/dissolution, ion exchange, surface complexation and a multirate kinetic sorption model. PFLOTRAN has demonstrated petascale performance using 2^17 processor cores with over 2 billion degrees of freedom. Accomplishments achieved to date include applications to the Hanford 300 Area and modeling CO2 sequestration in deep geologic formations.

  11. LIDAR TS for ITER core plasma. Part II: simultaneous two wavelength LIDAR TS

    NASA Astrophysics Data System (ADS)

    Gowers, C.; Nielsen, P.; Salzmann, H.

    2017-12-01

    We have shown recently, and in more detail at this conference (Salzmann et al), that the LIDAR approach to ITER core TS measurements requires only two mirrors in the inaccessible port plug area of the machine. This leads to simplified and robust alignment, lower risk of mirror damage by plasma contamination and much simpler calibration, compared with the awkward and vulnerable optical geometry of the conventional imaging TS approach currently under development by ITER. In the present work we have extended the simulation code used previously to include the case of launching two laser pulses, of different wavelengths, simultaneously in LIDAR geometry. The aim of this approach is to broaden the choice of lasers available for the diagnostic. In the simulation code it is assumed that two short duration (300 ps) laser pulses of different wavelengths, from a Nd:YAG laser, are launched through the plasma simultaneously. The temperature and density profiles are deduced in the usual way but from the resulting combined scattered signals in the different spectral channels of the single spectrometer. The spectral response and quantum efficiencies of the detectors used in the simulation are taken from catalogue data for commercially available Hamamatsu MCP-PMTs. The response times, gateability and tolerance to stray light levels of this type of photomultiplier have already been demonstrated in the JET LIDAR system and give sufficient spatial resolution to meet the ITER specification. Here we present the new simulation results from the code. They demonstrate that when the detectors are combined with this two-laser LIDAR approach, the full range of the specified ITER core plasma Te and ne can be measured with sufficient accuracy. So, with commercially available detectors and a simple modification of a Nd:YAG laser similar to that used in the conventional ITER core TS design mentioned above, the ITER requirements can be met.

  12. Nuclear modules for space electric propulsion

    NASA Technical Reports Server (NTRS)

    Difilippo, F. C.

    1998-01-01

    Analysis of interplanetary cargo and piloted missions requires calculations of the performances and masses of subsystems to be integrated in a final design. In a preliminary and scoping stage the designer needs to evaluate options iteratively by using fast computer simulations. The Oak Ridge National Laboratory (ORNL) has been involved in the development of models and calculational procedures for the analysis (neutronic and thermal-hydraulic) of power sources for nuclear electric propulsion. The nuclear modules will be integrated into the whole simulation of the nuclear electric propulsion system. The vehicles use either a Brayton direct-conversion cycle, using the heated helium from a NERVA-type reactor, or a potassium Rankine cycle, with the working fluid heated on the secondary side of a heat exchanger and lithium on the primary side coming from a fast reactor. Given a set of input conditions, the codes calculate the composition, dimensions, volumes, and masses of the core, reflector, control system, pressure vessel, and neutron and gamma shields, as well as the thermal-hydraulic conditions of the coolant, clad and fuel. Input conditions are power, core life, pressure and temperature of the coolant at the inlet of the core, either the temperature of the coolant at the outlet of the core or the coolant mass flow, and the fluences and integrated doses at the cargo area. Using state-of-the-art neutron cross sections and transport codes, a database was created for the neutronic performance of both reactor designs. The free parameters of the models are the moderator/fuel mass ratio for the NERVA reactor and the enrichment and the pitch of the lattice for the fast reactor. Reactivity and energy balance equations are simultaneously solved to find the reactor design. Thermal-hydraulic conditions are calculated by solving the one-dimensional versions of the equations of conservation of mass, energy, and momentum with compressible flow.

  13. SMT-Aware Instantaneous Footprint Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roy, Probir; Liu, Xu; Song, Shuaiwen

    Modern architectures employ simultaneous multithreading (SMT) to increase thread-level parallelism. SMT threads share many functional units and the whole memory hierarchy of a physical core. Without a careful code design, SMT threads can easily contend with each other for these shared resources, causing severe performance degradation. Minimizing SMT thread contention for HPC applications running on dedicated platforms is very challenging, because they usually spawn threads within Single Program Multiple Data (SPMD) models. To address this important issue, we introduce a simple scheme for SMT-aware code optimization, which aims to reduce the memory contention across SMT threads.

  14. Production Level CFD Code Acceleration for Hybrid Many-Core Architectures

    NASA Technical Reports Server (NTRS)

    Duffy, Austen C.; Hammond, Dana P.; Nielsen, Eric J.

    2012-01-01

    In this work, a novel graphics processing unit (GPU) distributed sharing model for hybrid many-core architectures is introduced and employed in the acceleration of a production-level computational fluid dynamics (CFD) code. The latest generation graphics hardware allows multiple processor cores to simultaneously share a single GPU through concurrent kernel execution. This feature has allowed the NASA FUN3D code to be accelerated in parallel with up to four processor cores sharing a single GPU. For codes to scale and fully use resources on these and the next generation machines, codes will need to employ some type of GPU sharing model, as presented in this work. Findings include the effects of GPU sharing on overall performance. A discussion of the inherent challenges that parallel unstructured CFD codes face in accelerator-based computing environments is included, with considerations for future generation architectures. This work was completed by the author in August 2010, and reflects the analysis and results of the time.

  15. Network Coding on Heterogeneous Multi-Core Processors for Wireless Sensor Networks

    PubMed Central

    Kim, Deokho; Park, Karam; Ro, Won W.

    2011-01-01

    While network coding is well known for its efficiency and usefulness in wireless sensor networks, the excessive costs associated with decoding computation and complexity still hinder its adoption into practical use. On the other hand, high-performance microprocessors with heterogeneous multi-cores are expected to be used as processing nodes of wireless sensor networks in the near future. To this end, this paper introduces an efficient network coding algorithm developed for heterogeneous multi-core processors. The proposed idea is fully tested on one of the currently available heterogeneous multi-core processors, the Cell Broadband Engine. PMID:22164053
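
    The decoding step whose cost motivates the paper is, at bottom, Gaussian elimination over a finite field. The sketch below is a generic random linear network coding round over GF(2), written in Python for illustration; it is not the authors' Cell Broadband Engine implementation, which parallelizes exactly this kind of elimination across heterogeneous cores.

        import numpy as np

        rng = np.random.default_rng(1)

        def encode(packets, n_coded):
            # Combine k source packets with random GF(2) coefficients.
            k = len(packets)
            coeffs = rng.integers(0, 2, size=(n_coded, k), dtype=np.uint8)
            coded = coeffs @ np.asarray(packets, dtype=np.uint8) % 2
            return coeffs, coded

        def decode(coeffs, coded):
            # Solve coeffs * X = coded over GF(2) by Gauss-Jordan elimination.
            a = np.concatenate([coeffs, coded], axis=1).astype(np.uint8)
            k = coeffs.shape[1]
            row = 0
            for col in range(k):
                pivot = next(r for r in range(row, len(a)) if a[r, col])
                a[[row, pivot]] = a[[pivot, row]]
                for r in range(len(a)):
                    if r != row and a[r, col]:
                        a[r] ^= a[row]
                row += 1
            return a[:k, k:]

        packets = rng.integers(0, 2, size=(4, 16), dtype=np.uint8)  # 4 source packets
        while True:
            coeffs, coded = encode(packets, 5)    # 5 random combinations on the wire
            try:                                  # retry on a rank-deficient draw
                assert (decode(coeffs, coded) == packets).all()
                break
            except StopIteration:
                continue
        print("decoded all packets")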

  16. Design of an MR image processing module on an FPGA chip

    NASA Astrophysics Data System (ADS)

    Li, Limin; Wyrwicz, Alice M.

    2015-06-01

    We describe the design and implementation of an image processing module on a single-chip Field-Programmable Gate Array (FPGA) for real-time image processing. We also demonstrate that through graphical coding the design work can be greatly simplified. The processing module is based on a 2D FFT core. Our design is distinguished from previously reported designs in two respects. First, no off-chip hardware resources are required, which increases the portability of the core. Second, the direct matrix transposition usually required for execution of a 2D FFT is completely avoided using our newly designed address generation unit, which saves considerable on-chip block RAM and clock cycles. The image processing module was tested by reconstructing multi-slice MR images from both phantom and animal data. The tests on static data show that the processing module is capable of reconstructing 128 × 128 images at a speed of 400 frames/second. The tests on simulated real-time streaming data demonstrate that the module works properly under the timing conditions necessary for MRI experiments.
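
    The arithmetic the module implements in hardware is a centered 2D FFT per slice. A minimal NumPy sketch of that reconstruction step, on synthetic k-space data, is given below; the FPGA design performs the same transform without the explicit transposition that a software library hides.

        import numpy as np

        def reconstruct_slice(kspace):
            """kspace: 128x128 complex array of raw MR samples."""
            # Centered 2-D inverse FFT: shift DC to the corner, transform, shift back.
            img = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(kspace)))
            return np.abs(img)  # magnitude image

        # Synthetic test: uniform k-space reconstructs to a point at the center.
        kspace = np.ones((128, 128), dtype=complex)
        image = reconstruct_slice(kspace)
        print(image.shape, np.unravel_index(image.argmax(), image.shape))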

  17. Design of an MR image processing module on an FPGA chip

    PubMed Central

    Li, Limin; Wyrwicz, Alice M.

    2015-01-01

    We describe the design and implementation of an image processing module on a single-chip Field-Programmable Gate Array (FPGA) for real-time image processing. We also demonstrate that through graphical coding the design work can be greatly simplified. The processing module is based on a 2D FFT core. Our design is distinguished from previously reported designs in two respects. First, no off-chip hardware resources are required, which increases the portability of the core. Second, the direct matrix transposition usually required for execution of a 2D FFT is completely avoided using our newly designed address generation unit, which saves considerable on-chip block RAM and clock cycles. The image processing module was tested by reconstructing multi-slice MR images from both phantom and animal data. The tests on static data show that the processing module is capable of reconstructing 128 × 128 images at a speed of 400 frames/second. The tests on simulated real-time streaming data demonstrate that the module works properly under the timing conditions necessary for MRI experiments. PMID:25909646

  18. Development of high-fidelity multiphysics system for light water reactor analysis

    NASA Astrophysics Data System (ADS)

    Magedanz, Jeffrey W.

    There has been a tendency in recent years toward greater heterogeneity in reactor cores, due to the use of mixed-oxide (MOX) fuel, burnable absorbers, and longer cycles with consequently higher fuel burnup. The resulting asymmetry of the neutron flux and energy spectrum between regions with different compositions causes a need to account for the directional dependence of the neutron flux, instead of the traditional diffusion approximation. Furthermore, the presence of both MOX and high-burnup fuel in the core increases the complexity of the heat conduction. The heat transfer properties of the fuel pellet change with irradiation, and the thermal and mechanical expansion of the pellet and cladding strongly affect the size of the gap between them and its consequent thermal resistance. These operational tendencies require higher-fidelity multi-physics modeling capabilities, and this need is addressed by the developments performed within this PhD research. The dissertation describes the development of a High-Fidelity Multi-Physics System for Light Water Reactor Analysis. It consists of three coupled codes -- CTF for thermal hydraulics, TORT-TD for neutron kinetics, and FRAPTRAN for fuel performance. It is meant to address these modeling challenges in three ways: (1) by resolving the state of the system at the level of each fuel pin, rather than homogenizing entire fuel assemblies, (2) by using the multi-group Discrete Ordinates method to account for the directional dependence of the neutron flux, and (3) by using a fuel-performance code, rather than a thermal-hydraulics code's simplified fuel model, to account for the material behavior of the fuel and its feedback to the hydraulic and neutronic behavior of the system. While the first two are improvements, the third, the use of a fuel-performance code for feedback, constitutes an innovation in this PhD project. Also important to this work is the manner in which such a coupling is written. While a coupling combines codes into a single executable, they are usually still developed and maintained separately. It should thus be a design objective to minimize the changes to those codes, and to keep the changes to each code free of dependence on the details of the other codes. This will ease the incorporation of new versions of each code into the coupling, as well as re-use of parts of the coupling to couple with different codes. In order to fulfill this objective, an interface for each code was created in the form of an object-oriented abstract data type. Object-oriented programming is an effective method for enforcing a separation between different parts of a program, and for clarifying the communication between them. The interfaces enable the main program to control the codes in terms of high-level functionality. This differs from the established practice of a master/slave relationship, in which the slave code is incorporated into the master code as a set of subroutines. While this PhD research continues previous work with a coupling between CTF and TORT-TD, it makes two major original contributions: (1) using a fuel-performance code, instead of a thermal-hydraulics code's simplified built-in models, to model the feedback from the fuel rods, and (2) the design of an object-oriented interface as an innovative method to interact with a coupled code in a high-level, easily understandable manner.
The resulting code system will serve as a tool to study the question of under what conditions, and to what extent, these higher-fidelity methods will provide benefits to reactor core analysis. (Abstract shortened by UMI.)
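
    The flavor of the object-oriented interface advocated above can be sketched briefly. The following Python fragment is hypothetical (the dissertation's interfaces are not specified at this level of detail): each physics code hides behind the same abstract data type, and the driver composes one feedback pass from high-level calls only.

        from abc import ABC, abstractmethod

        class CoupledCode(ABC):
            """Abstract interface a driver uses; CTF, TORT-TD, and FRAPTRAN
            would each be wrapped by a concrete subclass."""
            @abstractmethod
            def initialize(self, common_input): ...
            @abstractmethod
            def advance(self, dt): ...           # advance this physics one step
            @abstractmethod
            def get_field(self, name): ...       # e.g. "fuel_temperature"
            @abstractmethod
            def set_field(self, name, values): ...

        def feedback_pass(neutronics, fuel, thermal_hydraulics, dt):
            """One fixed-point pass of the three-way feedback loop."""
            neutronics.advance(dt)
            fuel.set_field("power", neutronics.get_field("pin_power"))
            fuel.advance(dt)
            thermal_hydraulics.set_field("wall_heat_flux", fuel.get_field("heat_flux"))
            thermal_hydraulics.advance(dt)
            # Close the loop: coolant conditions feed back into the neutronics.
            neutronics.set_field("coolant_density",
                                 thermal_hydraulics.get_field("coolant_density"))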

  19. The nucleotide composition of microbial genomes indicates differential patterns of selection on core and accessory genomes.

    PubMed

    Bohlin, Jon; Eldholm, Vegard; Pettersson, John H O; Brynildsrud, Ola; Snipen, Lars

    2017-02-10

    The core genome consists of genes shared by the vast majority of a species and is therefore assumed to have been subjected to substantially stronger purifying selection than the more mobile elements of the genome, also known as the accessory genome. Here we examine intragenic base composition differences in core genomes and corresponding accessory genomes in 36 species, represented by the genomes of 731 bacterial strains, to assess the impact of selective forces on base composition in microbes. We also explore, in turn, how these results compare with findings for whole-genome intragenic regions. We found that GC content in coding regions is significantly higher in core genomes than in accessory genomes and whole genomes. Likewise, GC content variation within coding regions was significantly lower in core genomes than in accessory genomes and whole genomes. Relative entropy in coding regions, measured as the difference between observed and expected trinucleotide frequencies estimated from mononucleotide frequencies, was significantly higher in the core genomes than in accessory and whole genomes. Relative entropy was positively associated with coding region GC content within the accessory genomes, but not within the corresponding coding regions of core or whole genomes. The higher intragenic GC content and relative entropy, as well as the lower GC content variation, observed in the core genomes are most likely associated with selective constraints. It is unclear whether the positive association between GC content and relative entropy in the more mobile accessory genomes constitutes a signature of selection or of selectively neutral processes.
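
    Both summary statistics used in the study are simple to state. The sketch below computes GC content and the relative entropy between observed trinucleotide frequencies and those expected from mononucleotide frequencies alone, following the definition given above; the test sequence is an arbitrary placeholder.

        from collections import Counter
        from math import log

        def gc_content(seq):
            return (seq.count("G") + seq.count("C")) / len(seq)

        def relative_entropy(seq):
            n = len(seq)
            mono = {b: seq.count(b) / n for b in "ACGT"}
            tri = Counter(seq[i:i + 3] for i in range(n - 2))
            total = sum(tri.values())
            d = 0.0
            for word, count in tri.items():
                obs = count / total
                exp = mono[word[0]] * mono[word[1]] * mono[word[2]]
                d += obs * log(obs / exp)
            return d  # nats; zero iff trinucleotide usage is explained by base composition

        seq = "ATGGCGCGTAAAGGCGCGATGACC" * 50
        print(gc_content(seq), relative_entropy(seq))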

  20. Neutronics Analysis of SMART Small Modular Reactor using SRAC 2006 Code

    NASA Astrophysics Data System (ADS)

    Ramdhani, Rahmi N.; Prastyo, Puguh A.; Waris, Abdul; Widayani; Kurniadi, Rizal

    2017-07-01

    Small modular reactors (SMRs) are part of a new generation of nuclear reactors being developed worldwide. One of the advantages of SMRs is the flexibility to adopt advanced design concepts and technology. SMART (System-integrated Modular Advanced ReacTor) is a small, integral-type PWR with a thermal power of 330 MW that has been developed by KAERI (Korea Atomic Energy Research Institute). The SMART core consists of 57 fuel assemblies based on the well-proven 17×17 array that has been used in Korean commercial PWRs. SMART is soluble-boron free, and the high initial reactivity is mainly controlled by burnable absorbers. The goal of this study is to perform a neutronics evaluation of the SMART core with UO2 as the main fuel. The neutronics calculation was performed using the PIJ and CITATION modules of the SRAC 2006 code with JENDL-3.3 as the nuclear data library.

  1. Study of a Fine Grained Threaded Framework Design

    NASA Astrophysics Data System (ADS)

    Jones, C. D.

    2012-12-01

    Traditionally, HEP experiments exploit the multiple cores in a CPU by having each core process one event. However, future PC designs are expected to use CPUs which double the number of processing cores at the same rate as the cost of memory falls by a factor of two. This effectively means the amount of memory per processing core will remain constant. This is a major challenge for LHC processing frameworks, since the LHC is expected to deliver more complex events (e.g., greater pileup) in the coming years while the LHC experiments' frameworks are already memory constrained. Therefore, in the not so distant future, we may need to be able to efficiently use multiple cores to process one event. In this presentation we will discuss a design for an HEP processing framework which allows very fine-grained parallelization within one event as well as supporting processing of multiple events simultaneously, while minimizing the memory footprint of the job. The design is built around the libdispatch framework created by Apple Inc. (a port for Linux is available), whose central concept is the use of task queues. This design also accommodates the reality that not all code will be thread safe, and therefore allows one to easily mark modules or sub-parts of modules as thread-unsafe. In addition, the design efficiently handles the requirement that events in one run must all be processed before starting to process events from a different run. After explaining the design, we will provide measurements from simulating different processing scenarios, where the processing times used for the simulation are drawn from processing times measured from actual CMS event processing.
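
    A toy analogue of the task-queue idea, written with Python thread pools rather than libdispatch, is sketched below. Thread-safe modules for an event run concurrently, a thread-unsafe module is funneled through a serial section (the stand-in for a libdispatch serial queue), and several events are in flight at once. All module names are invented.

        import threading
        from concurrent.futures import ThreadPoolExecutor, wait

        module_pool = ThreadPoolExecutor(max_workers=4)   # per-event module tasks
        event_pool = ThreadPoolExecutor(max_workers=2)    # concurrent events
        serial = threading.Lock()                         # serializes unsafe code

        def tracker(event):                 # thread-safe module
            return f"tracks({event})"

        def legacy_calib(event):            # thread-unsafe module
            with serial:                    # only one instance runs at a time
                return f"calib({event})"

        def process_event(event):
            futures = [module_pool.submit(m, event) for m in (tracker, legacy_calib)]
            wait(futures)
            return [f.result() for f in futures]

        print(list(event_pool.map(process_event, range(8))))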

  2. Analysis of C/E results of fission rate ratio measurements in several fast lead VENUS-F cores

    NASA Astrophysics Data System (ADS)

    Kochetkov, Anatoly; Krása, Antonín; Baeten, Peter; Vittiglio, Guido; Wagemans, Jan; Bécares, Vicente; Bianchini, Giancarlo; Fabrizio, Valentina; Carta, Mario; Firpo, Gabriele; Fridman, Emil; Sarotto, Massimo

    2017-09-01

    During the GUINEVERE FP6 European project (2006-2011), the zero-power VENUS water-moderated reactor was modified into VENUS-F, a mock-up of a lead-cooled fast spectrum system with solid components that can be operated in both critical and subcritical mode. The Fast Reactor Experiments for hybrid Applications (FREYA) FP7 project was launched in 2011 to support the designs of the MYRRHA Accelerator Driven System (ADS) and the ALFRED Lead Fast Reactor (LFR). Three VENUS-F critical core configurations simulating the complex MYRRHA core design, and one configuration devoted to the LFR ALFRED core conditions, were investigated in 2015. The MYRRHA-related cores simulated, step by step, design peculiarities such as the BeO reflector and in-pile sections. For all of these cores the fuel assemblies were of a simple design consisting of 30% enriched metallic uranium, lead rodlets to simulate the coolant, and Al2O3 rodlets to simulate the oxide fuel. Fission rate ratios of minor actinides such as Np-237 and Am-241, as well as of Pu-239, Pu-240, Pu-242, and U-238 to U-235, were measured in these VENUS-F critical assemblies with small fission chambers in specially designed locations, to determine the spectral indices in the different neutron spectrum conditions. The measurements have been analyzed using advanced computational tools, including deterministic and stochastic codes, and different nuclear data sets such as JEFF-3.1, JEFF-3.2, ENDF/B-VII.1, and JENDL-4.0. The analysis of the C/E discrepancies will help to improve the nuclear data in the specific energy region of fast neutron reactor spectra.

  3. Analysis of Phenix end-of-life natural convection test with the MARS-LMR code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeong, H. Y.; Ha, K. S.; Lee, K. L.

    The end-of-life test of the Phenix reactor performed by the CEA provided an opportunity to obtain reliable and valuable test data for the validation and verification of an SFR system analysis code. KAERI joined this international program for the analysis of the Phenix end-of-life natural circulation test, coordinated by the IAEA, in 2008. The main objectives of this study were to evaluate the capability of the existing SFR system analysis code MARS-LMR and to identify any limitations of the code. The analysis was performed in three stages: pre-test analysis, blind post-test analysis, and final post-test analysis. In the pre-test analysis, the design conditions provided by the CEA were used to obtain a prediction of the test. The blind post-test analysis was based on the test conditions measured during the tests, but the test results were not provided by the CEA. The final post-test analysis was performed to predict the test results as accurately as possible by improving the previous modeling of the test. Based on the pre-test and blind post-test analyses, the modeling of heat structures in the hot pool and cold pool, steel structures in the core, heat loss from the roof and vessel, and the flow path at the core outlet was reinforced in the final analysis. The results of the final post-test analysis can be characterized in three phases. In the early phase, MARS-LMR simulated the heat-up process correctly due to the enhanced heat structure modeling. In the mid phase, before the opening of the SG casing, the code reproduced the decrease of core outlet temperature successfully. Finally, in the later phase, the increase of heat removal due to the opening of the SG casing was well predicted with the MARS-LMR code. (authors)

  4. HACC: Simulating sky surveys on state-of-the-art supercomputing architectures

    NASA Astrophysics Data System (ADS)

    Habib, Salman; Pope, Adrian; Finkel, Hal; Frontiere, Nicholas; Heitmann, Katrin; Daniel, David; Fasel, Patricia; Morozov, Vitali; Zagaris, George; Peterka, Tom; Vishwanath, Venkatram; Lukić, Zarija; Sehrish, Saba; Liao, Wei-keng

    2016-01-01

    Current and future surveys of large-scale cosmic structure are associated with a massive and complex datastream to study, characterize, and ultimately understand the physics behind the two major components of the 'Dark Universe', dark energy and dark matter. In addition, the surveys also probe primordial perturbations and carry out fundamental measurements, such as determining the sum of neutrino masses. Large-scale simulations of structure formation in the Universe play a critical role in the interpretation of the data and extraction of the physics of interest. Just as survey instruments continue to grow in size and complexity, so do the supercomputers that enable these simulations. Here we report on HACC (Hardware/Hybrid Accelerated Cosmology Code), a recently developed and evolving cosmology N-body code framework, designed to run efficiently on diverse computing architectures and to scale to millions of cores and beyond. HACC can run on all current supercomputer architectures and supports a variety of programming models and algorithms. It has been demonstrated at scale on Cell- and GPU-accelerated systems, standard multi-core node clusters, and Blue Gene systems. HACC's design allows for ease of portability, and at the same time, high levels of sustained performance on the fastest supercomputers available. We present a description of the design philosophy of HACC, the underlying algorithms and code structure, and outline implementation details for several specific architectures. We show selected accuracy and performance results from some of the largest high resolution cosmological simulations so far performed, including benchmarks evolving more than 3.6 trillion particles.

  5. HACC: Simulating sky surveys on state-of-the-art supercomputing architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habib, Salman; Pope, Adrian; Finkel, Hal

    2016-01-01

    Current and future surveys of large-scale cosmic structure are associated with a massive and complex datastream to study, characterize, and ultimately understand the physics behind the two major components of the ‘Dark Universe’, dark energy and dark matter. In addition, the surveys also probe primordial perturbations and carry out fundamental measurements, such as determining the sum of neutrino masses. Large-scale simulations of structure formation in the Universe play a critical role in the interpretation of the data and extraction of the physics of interest. Just as survey instruments continue to grow in size and complexity, so do the supercomputers that enable these simulations. Here we report on HACC (Hardware/Hybrid Accelerated Cosmology Code), a recently developed and evolving cosmology N-body code framework, designed to run efficiently on diverse computing architectures and to scale to millions of cores and beyond. HACC can run on all current supercomputer architectures and supports a variety of programming models and algorithms. It has been demonstrated at scale on Cell- and GPU-accelerated systems, standard multi-core node clusters, and Blue Gene systems. HACC’s design allows for ease of portability, and at the same time, high levels of sustained performance on the fastest supercomputers available. We present a description of the design philosophy of HACC, the underlying algorithms and code structure, and outline implementation details for several specific architectures. We show selected accuracy and performance results from some of the largest high resolution cosmological simulations so far performed, including benchmarks evolving more than 3.6 trillion particles.

  6. Development of the V4.2m5 and V5.0m0 Multigroup Cross Section Libraries for MPACT for PWR and BWR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Kang Seog; Clarno, Kevin T.; Gentry, Cole

    2017-03-01

    The MPACT neutronics module of the Consortium for Advanced Simulation of Light Water Reactors (CASL) core simulator is a 3-D whole-core transport code being developed for the CASL toolset, the Virtual Environment for Reactor Analysis (VERA). Key characteristics of the MPACT code include (1) a subgroup method for resonance self-shielding and (2) a whole-core transport solver with a 2-D/1-D synthesis method. The MPACT code requires a cross section library to support all of its core simulation capabilities; this library is among the components with the greatest influence on simulation accuracy.

  7. Safety and core design of large liquid-metal cooled fast breeder reactors

    NASA Astrophysics Data System (ADS)

    Qvist, Staffan Alexander

    In light of the scientific evidence for changes in the climate caused by greenhouse-gas emissions from human activities, the world is in ever more desperate need of new, inexhaustible, safe, and clean primary energy sources. A viable solution to this problem is the widespread adoption of nuclear breeder reactor technology. Innovative breeder reactor concepts using liquid-metal coolants such as sodium or lead will be able to utilize the waste produced by the current light water reactor fuel cycle to power the entire world for several centuries to come. Breed & burn (B&B) type fast reactor cores can unlock the energy potential of readily available fertile material such as depleted uranium without the need for chemical reprocessing. Using B&B technology, nuclear waste generation, uranium mining needs, and proliferation concerns can be greatly reduced, and after a transitional period, enrichment facilities may no longer be needed. In this dissertation, new passively operating safety systems for fast reactor cores are presented. New analysis and optimization methods for B&B core design have been developed, along with a comprehensive computer code that couples neutronics, thermal-hydraulics, and structural mechanics and enables a completely automated and optimized fast reactor core design process. In addition, an experiment that expands the knowledge base of corrosion issues of lead-based coolants in nuclear reactors was designed and built. The motivation behind the work presented in this thesis is to help facilitate the widespread adoption of safe and efficient fast reactor technology.

  8. Evolvix BEST Names for semantic reproducibility across code2brain interfaces

    PubMed Central

    Scheuer, Katherine S.; Keel, Seth A.; Vyas, Vaibhav; Liblit, Ben; Hanlon, Bret; Ferris, Michael C.; Yin, John; Dutra, Inês; Pietsch, Anthony; Javid, Christine G.; Moog, Cecilia L.; Meyer, Jocelyn; Dresel, Jerdon; McLoone, Brian; Loberger, Sonya; Movaghar, Arezoo; Gilchrist‐Scott, Morgaine; Sabri, Yazeed; Sescleifer, Dave; Pereda‐Zorrilla, Ivan; Zietlow, Andrew; Smith, Rodrigo; Pietenpol, Samantha; Goldfinger, Jacob; Atzen, Sarah L.; Freiberg, Erika; Waters, Noah P.; Nusbaum, Claire; Nolan, Erik; Hotz, Alyssa; Kliman, Richard M.; Mentewab, Ayalew; Fregien, Nathan; Loewe, Martha

    2016-01-01

    Names in programming are vital for understanding the meaning of code and big data. We define code2brain (C2B) interfaces as maps in compilers and brains between meaning and naming syntax, which help to understand executable code. While working toward an Evolvix syntax for general‐purpose programming that makes accurate modeling easy for biologists, we observed how names affect C2B quality. To protect learning and coding investments, C2B interfaces require long‐term backward compatibility and semantic reproducibility (accurate reproduction of computational meaning from coder‐brains to reader‐brains by code alone). Semantic reproducibility is often assumed until confusing synonyms degrade modeling in biology to deciphering exercises. We highlight empirical naming priorities from diverse individuals and roles of names in different modes of computing to show how naming easily becomes impossibly difficult. We present the Evolvix BEST (Brief, Explicit, Summarizing, Technical) Names concept for reducing naming priority conflicts, test it on a real challenge by naming subfolders for the Project Organization Stabilizing Tool system, and provide naming questionnaires designed to facilitate C2B debugging by improving names used as keywords in a stabilizing programming language. Our experiences inspired us to develop Evolvix using a flipped programming language design approach with some unexpected features and BEST Names at its core. PMID:27918836

  9. Medical reliable network using concatenated channel codes through GSM network.

    PubMed

    Ahmed, Emtithal; Kohno, Ryuji

    2013-01-01

    Although the 4th generation (4G) of the global mobile communication network, i.e., Long Term Evolution (LTE), coexisting with the 3rd generation (3G), has successfully started, the 2nd generation (2G), i.e., the Global System for Mobile communication (GSM), is still playing an important role in many developing countries. Without any other reliable network infrastructure, GSM can be applied for tele-monitoring applications, where high mobility and low cost are necessary. A core objective of this paper is to introduce the design of a more reliable and dependable Medical Network Channel Code system (MNCC) through the GSM network. The MNCC design is based on a simple concatenated channel code, which is a cascade of an inner code (GSM) and an extra outer code (convolutional code), in order to protect medical data more robustly against channel errors than other data using the existing GSM network. In this paper, the MNCC system provides a Bit Error Rate (BER) suitable for medical tele-monitoring of physiological signals, which is 10^-5 or less. The performance of the MNCC has been proven and investigated using computer simulations under different channel conditions, such as Additive White Gaussian Noise (AWGN), Rayleigh noise, and burst noise. Generally, the MNCC system provides better performance than GSM alone.
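
    The benefit of wrapping an extra outer code around an unreliable inner link can be shown with a toy simulation. In the sketch below a rate-1/3 repetition code stands in for the paper's convolutional outer code, and a binary symmetric channel stands in for the residual errors of the GSM inner link; both substitutions are simplifications for illustration.

        import numpy as np

        rng = np.random.default_rng(0)

        def bsc(bits, p):                      # inner link: flips bits w.p. p
            return bits ^ (rng.random(bits.shape) < p)

        def outer_encode(bits):                # repetition-3 stand-in outer code
            return np.repeat(bits, 3)

        def outer_decode(bits):                # majority vote per 3-bit group
            return bits.reshape(-1, 3).sum(axis=1) > 1

        data = rng.integers(0, 2, 100_000, dtype=np.uint8).astype(bool)
        p = 0.01                               # residual BER of the inner link
        raw = bsc(data, p)
        coded = outer_decode(bsc(outer_encode(data), p))
        print("uncoded BER:", np.mean(raw != data))    # ~1e-2
        print("coded BER:  ", np.mean(coded != data))  # ~3e-4, roughly 3p^2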

  10. The VENUS/NWChem software package. Tight coupling between chemical dynamics simulations and electronic structure theory

    NASA Astrophysics Data System (ADS)

    Lourderaj, Upakarasamy; Sun, Rui; Kohale, Swapnil C.; Barnes, George L.; de Jong, Wibe A.; Windus, Theresa L.; Hase, William L.

    2014-03-01

    The interface between VENUS and NWChem, and the resulting software package for direct dynamics simulations, are described. The coupling of the two codes is considered a tight coupling, since the two codes are compiled and linked together and act as one executable, with data passed between the two codes through routine calls. The advantages of this type of coupling are discussed. The interface has been designed to have as little interference as possible with the core codes of both VENUS and NWChem. VENUS is the code that propagates the direct dynamics trajectories and, therefore, is the program that drives the overall execution of VENUS/NWChem. VENUS has remained an essentially sequential code, which uses the highly parallel structure of NWChem. Subroutines of the interface that accomplish the data transmission and communication between the two computer programs are described. Recent examples of the use of VENUS/NWChem for direct dynamics simulations are summarized.
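
    Operationally, "tight coupling" means the trajectory integrator obtains energies and gradients through a routine call at every step instead of exchanging files. The sketch below shows that control flow with a velocity-Verlet propagator; the harmonic-potential function is a placeholder for the linked-in NWChem evaluation.

        import numpy as np

        def qm_gradient(x):                   # placeholder for the QM routine call
            k = 1.0
            return 0.5 * k * x @ x, k * x     # energy, gradient

        def velocity_verlet(x, v, mass, dt, nsteps, grad=qm_gradient):
            _, g = grad(x)
            for _ in range(nsteps):
                v -= 0.5 * dt * g / mass      # half kick (force = -gradient)
                x += dt * v                   # drift
                _, g = grad(x)                # one electronic-structure call per step
                v -= 0.5 * dt * g / mass      # second half kick
            return x, v

        x, v = velocity_verlet(np.array([1.0, 0.0]), np.zeros(2), 1.0, 0.05, 200)
        print(x, v)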

  11. GPU Optimizations for a Production Molecular Docking Code*

    PubMed Central

    Landaverde, Raphael; Herbordt, Martin C.

    2015-01-01

    Modeling molecular docking is critical to both understanding life processes and designing new drugs. In previous work we created the first published GPU-accelerated docking code (PIPER), which achieved a roughly 5× speed-up over a contemporaneous 4-core CPU. Advances in GPU architecture and in the CPU code, however, have since reduced this relative performance by a factor of 10. In this paper we describe the upgrade of GPU PIPER. This required an entire rewrite, including algorithm changes and moving most remaining non-accelerated CPU code onto the GPU. The result is a 7× improvement in GPU performance and a 3.3× speedup over the CPU-only code. We find that this difference in time is almost entirely due to the difference in run times of the 3D FFT library functions on CPU (MKL) and GPU (cuFFT), respectively. The GPU code has been integrated into the ClusPro docking server, which has over 4000 active users. PMID:26594667
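
    The 3D FFT workload that dominates the MKL/cuFFT comparison above comes from scoring all rigid translations of a ligand grid against a receptor grid at once via FFT correlation. A minimal NumPy sketch of that kernel, with random grids in place of real energy terms, is shown below.

        import numpy as np

        def fft_correlate(receptor, ligand):
            """Score every integer translation of `ligand` over `receptor`."""
            R = np.fft.fftn(receptor)
            L = np.fft.fftn(ligand, s=receptor.shape)   # zero-pad to full grid
            return np.fft.ifftn(R * np.conj(L)).real

        rng = np.random.default_rng(0)
        receptor = rng.standard_normal((64, 64, 64))
        ligand = rng.standard_normal((16, 16, 16))
        scores = fft_correlate(receptor, ligand)
        print("best translation:", np.unravel_index(scores.argmax(), scores.shape))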

  12. GPU Optimizations for a Production Molecular Docking Code.

    PubMed

    Landaverde, Raphael; Herbordt, Martin C

    2014-09-01

    Modeling molecular docking is critical to both understanding life processes and designing new drugs. In previous work we created the first published GPU-accelerated docking code (PIPER), which achieved a roughly 5× speed-up over a contemporaneous 4-core CPU. Advances in GPU architecture and in the CPU code, however, have since reduced this relative performance by a factor of 10. In this paper we describe the upgrade of GPU PIPER. This required an entire rewrite, including algorithm changes and moving most remaining non-accelerated CPU code onto the GPU. The result is a 7× improvement in GPU performance and a 3.3× speedup over the CPU-only code. We find that this difference in time is almost entirely due to the difference in run times of the 3D FFT library functions on CPU (MKL) and GPU (cuFFT), respectively. The GPU code has been integrated into the ClusPro docking server, which has over 4000 active users.

  13. An Architecture for Coexistence with Multiple Users in Frequency Hopping Cognitive Radio Networks

    DTIC Science & Technology

    2013-03-01

    The implementation combines the base WARP system, a custom IP core written in VHDL, and the Virtex IV's embedded PowerPC core running C code to implement the radio and hopset control. The FPGA bus structure and subsystem functionality are documented in the report's appendices, along with all VHDL code necessary to implement the IP core. A total of 1,430 lines of VHDL code were implemented for this research.

  14. 2nd Generation QUATARA Flight Computer Project

    NASA Technical Reports Server (NTRS)

    Falker, Jay; Keys, Andrew; Fraticelli, Jose Molina; Capo-Iugo, Pedro; Peeples, Steven

    2015-01-01

    Single-core flight computer boards have been designed, developed, and tested (DD&T) to be flown in small satellites for the last few years. In this project, a prototype flight computer will be designed as a distributed multi-core system containing four microprocessors running code in parallel. This flight computer will be capable of performing multiple computationally intensive tasks such as processing digital and/or analog data, controlling actuator systems, managing cameras, operating robotic manipulators, and transmitting to/receiving from a ground station. In addition, this flight computer will be designed to be fault tolerant, both by creating a robust physical hardware connection and by using a software voting scheme to determine the processors' performance. This voting scheme will leverage the work done for the Space Launch System (SLS) flight software. The prototype flight computer will be constructed with Commercial Off-The-Shelf (COTS) components, which are estimated to survive for two years in a low-Earth orbit.
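
    The voting idea can be made concrete with a few lines. The sketch below is a generic majority voter over per-core outputs, not the QUATARA or SLS implementation; the core identifiers and fail-safe policy are invented for illustration.

        from collections import Counter

        def vote(outputs):
            """outputs: list of (core_id, value). Returns (value, suspect_cores)."""
            counts = Counter(v for _, v in outputs)
            winner, n = counts.most_common(1)[0]
            if n <= len(outputs) // 2:
                raise RuntimeError("no majority; fail safe")
            suspects = [cid for cid, v in outputs if v != winner]
            return winner, suspects

        value, suspects = vote([(0, 42), (1, 42), (2, 41), (3, 42)])
        print(value, suspects)   # 42, [2] -> core 2 is flagged for disagreement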

  15. An approach for coupled-code multiphysics core simulations from a common input

    DOE PAGES

    Schmidt, Rodney; Belcourt, Kenneth; Hooper, Russell; ...

    2014-12-10

    This study describes an approach for coupled-code multiphysics reactor core simulations that is being developed by the Virtual Environment for Reactor Applications (VERA) project in the Consortium for Advanced Simulation of Light-Water Reactors (CASL). In this approach a user creates a single problem description, called the “VERAIn” common input file, to define and setup the desired coupled-code reactor core simulation. A preprocessing step accepts the VERAIn file and generates a set of fully consistent input files for the different physics codes being coupled. The problem is then solved using a single-executable coupled-code simulation tool applicable to the problem, which is built using VERA infrastructure software tools and the set of physics codes required for the problem of interest. The approach is demonstrated by performing an eigenvalue and power distribution calculation of a typical three-dimensional 17 × 17 assembly with thermal–hydraulic and fuel temperature feedback. All neutronics aspects of the problem (cross-section calculation, neutron transport, power release) are solved using the Insilico code suite and are fully coupled to a thermal–hydraulic analysis calculated by the Cobra-TF (CTF) code. The single-executable coupled-code (Insilico-CTF) simulation tool is created using several VERA tools, including LIME (Lightweight Integrating Multiphysics Environment for coupling codes), DTK (Data Transfer Kit), Trilinos, and TriBITS. Parallel calculations are performed on the Titan supercomputer at Oak Ridge National Laboratory using 1156 cores, and a synopsis of the solution results and code performance is presented. Finally, ongoing development of this approach is also briefly described.
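
    The preprocessing step can be pictured as one source of truth expanded into native decks. The fragment below invents a tiny common-input schema and two writers purely for illustration; it is not the actual VERAIn format.

        # One user-facing description of the problem (hypothetical schema).
        vera_in = {
            "assembly": {"lattice": "17x17", "enrichment_pct": 3.1},
            "state": {"power_MW": 17.7, "inlet_temp_K": 565.0},
        }

        def write_neutronics_input(common):
            a, s = common["assembly"], common["state"]
            return (f"lattice {a['lattice']}\n"
                    f"enrichment {a['enrichment_pct']}\n"
                    f"power {s['power_MW']} MW\n")

        def write_thermal_hydraulics_input(common):
            s = common["state"]
            return (f"* CTF-style deck (illustrative)\n"
                    f"inlet_temperature {s['inlet_temp_K']} K\n"
                    f"power {s['power_MW']} MW\n")

        # Both decks derive from the same source, so they cannot drift apart.
        print(write_neutronics_input(vera_in))
        print(write_thermal_hydraulics_input(vera_in))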

  16. Posttest analysis of the FFTF inherent safety tests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Padilla, A. Jr.; Claybrook, S.W.

    Inherent safety tests were performed during 1986 in the 400-MW (thermal) Fast Flux Test Facility (FFTF) reactor to demonstrate the effectiveness of an inherent shutdown device called the gas expansion module (GEM). The GEM device provided a strong negative reactivity feedback during loss-of-flow conditions by increasing the neutron leakage as a result of an expanding gas bubble. The best-estimate pretest calculations for these tests were performed using the IANUS plant analysis code (Westinghouse Electric Corporation proprietary code) and the MELT/SIEX3 core analysis code. These two codes were also used to perform the required operational safety analyses for the FFTF reactor and plant. Although it was intended to also use the SASSYS systems (core and plant) analysis code, the calibration of the SASSYS code for FFTF core and plant analysis was not completed in time to perform pretest analyses. The purpose of this paper is to present the results of the posttest analysis of the 1986 FFTF inherent safety tests using the SASSYS code.

  17. Optimization Methods for the 2D Parameters of the Reflector in a Pressurized Water Reactor

    NASA Astrophysics Data System (ADS)

    Clerc, Thomas

    With a third of the reactors in operation worldwide, the Pressurized Water Reactor (PWR) is today the most used reactor design in the world. This technology equips all 19 EDF power plants. PWRs fit into the category of thermal reactors, because it is mainly the thermal neutrons that contribute to the fission reaction. Pressurized light water is used both as the moderator of the reaction and as the coolant. The active part of the core is composed of uranium, slightly enriched in uranium-235. The reflector is a region surrounding the active core, containing mostly water and stainless steel. The purpose of the reflector is to protect the vessel from radiation, and also to slow down the neutrons and reflect them into the core. Given that the neutrons sustain the fission reaction, the study of their behavior within the core is essential to understanding how the reactor works. The neutron behavior is governed by the transport equation, which is very complex to solve numerically and requires very long calculations. This is why the core codes used in this study solve simplified equations to approximate the neutron behavior in the core in an acceptable calculation time. In particular, we focus our study on the diffusion equation and on approximated transport equations, such as the SPN or SN equations. The physical properties of the reflector are radically different from those of the fissile core, and this structural change causes an important tilt in the neutron flux at the core/reflector interface. This is why it is very important to accurately design the reflector, in order to precisely recover the neutron behavior over the whole core. Existing reflector calculation techniques are based on the Lefebvre-Lebigot method. This method is only valid if the energy continuum of the neutrons is discretized into two energy groups and if the diffusion equation is used; it leads to the calculation of a homogeneous reflector. The aim of this study is to create a computational scheme able to compute the parameters of heterogeneous, multi-group reflectors, with both diffusion and SPN/SN operators. For this purpose, two computational schemes are designed to perform such a reflector calculation. The strategy used in both schemes is to minimize the discrepancies between a power distribution computed with a core code and a reference distribution, obtained with an APOLLO2 calculation based on the Method Of Characteristics (MOC). In both computational schemes, the optimization parameters, also called control variables, are the diffusion coefficients in each zone of the reflector for diffusion calculations, and the P1-corrected macroscopic total cross sections in each zone of the reflector for SPN/SN calculations (or correction factors on these parameters). After a first validation of our computational schemes, the results are computed by optimizing the fast diffusion coefficient for each zone of the reflector. All the tools of data assimilation have been used to reflect the different behavior of the solvers in the different parts of the core. Moreover, the reflector is refined into six separate zones, corresponding to the physical structure of the reflector; there are then six control variables for the optimization algorithms. [special characters omitted]. Our computational schemes are thus able to compute heterogeneous, 2-group or multi-group reflectors, using diffusion or SPN/SN operators.
The optimization performed reduces the discrepancies between the power distribution computed with the core codes and the reference power distribution. However, there are two main limitations to this study: first, the homogeneous modeling of the reflector assemblies does not properly describe its physical structure near the core/reflector interface; second, the fissile assemblies are modeled in an infinite medium, and this model reaches its limit at the core/reflector interface. These two problems should be tackled in future studies. (Abstract shortened by UMI.)
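
    The optimization loop itself has a simple shape: choose the six per-zone control variables to minimize the squared discrepancy between a core-code power map and the reference map. The sketch below shows that loop with a placeholder model standing in for the full diffusion/SPN core calculation; the numbers are invented.

        import numpy as np
        from scipy.optimize import minimize

        reference_power = np.array([1.02, 1.05, 1.00, 0.98, 0.97, 0.98])

        def core_code_power(diff_coeffs):
            # Placeholder response; the real scheme runs a core calculation here.
            return 1.0 + 0.1 * np.tanh(diff_coeffs - 1.0)

        def objective(diff_coeffs):
            dp = core_code_power(diff_coeffs) - reference_power
            return float(dp @ dp)          # squared L2 power discrepancy

        x0 = np.ones(6)                    # one control variable per reflector zone
        result = minimize(objective, x0, method="Nelder-Mead")
        print(result.x, result.fun)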

  18. MATCHED-INDEX-OF-REFRACTION FLOW FACILITY FOR FUNDAMENTAL AND APPLIED RESEARCH

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piyush Sabharwall; Carl Stoots; Donald M. McEligot

    2014-11-01

    Significant challenges face reactor designers with regard to thermal hydraulic design and associated modeling for advanced reactor concepts. Computational thermal hydraulic codes solve only a piece of the core; there is a need for a whole-core dynamics system code with local resolution to investigate and understand flow behavior with all the relevant physics and thermo-mechanics. The matched index of refraction (MIR) flow facility at Idaho National Laboratory (INL) has a unique capability to contribute to the development of validated computational fluid dynamics (CFD) codes through the use of state-of-the-art optical measurement techniques, such as Laser Doppler Velocimetry (LDV) and Particle Image Velocimetry (PIV). PIV is a non-intrusive velocity measurement technique that tracks flow by imaging the movement of small tracer particles within a fluid. At the heart of a PIV calculation is the cross-correlation algorithm, which is used to estimate the displacement of particles in some small part of the image over the time span between two images. Generally, the displacement is indicated by the location of the largest correlation peak. To quantify these measurements accurately, sophisticated processing algorithms correlate the locations of particles within the image to estimate the velocity (Ref. 1). Prior to use in reactor design, the CFD codes have to be experimentally validated, which requires rigorous experimental measurements to produce high-quality, multi-dimensional flow field data with error quantification methodologies. Computational techniques with supporting test data may be needed to address the heat transfer from the fuel to the coolant during the transition from turbulent to laminar flow, including the possibility of an early laminarization of the flow (Refs. 2 and 3); laminarization occurs when the coolant velocity is theoretically in the turbulent regime, but the heat transfer properties are indicative of the coolant velocity being in the laminar regime. Such studies are complicated enough that CFD models may not converge to the same conclusion. Thus, experimentally scaled thermal hydraulic data with uncertainties should be developed to support modeling and simulation for verification and validation activities. The fluid/solid index-of-refraction matching technique allows optical access in and around geometries that would otherwise be impossible, while the large test section of the INL system provides better spatial and temporal resolution than comparable facilities. Benchmark data for assessing computational fluid dynamics codes can be acquired for external flows, internal flows, and coupled internal/external flows for better understanding of the physical phenomena of interest. The core objective of this study is to describe MIR and its capabilities, and to mention current development areas for uncertainty quantification, mainly the uncertainty surface method and the cross-correlation method. Using these methods, we anticipate establishing a suitable approach to quantify PIV uncertainty for experiments performed in the MIR.
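
    The cross-correlation step at the heart of PIV is compact enough to sketch. The fragment below estimates the displacement between two interrogation windows as the location of the FFT-based correlation peak; it is a textbook version, not the facility's processing chain.

        import numpy as np

        def piv_displacement(win_a, win_b):
            a = win_a - win_a.mean()
            b = win_b - win_b.mean()
            corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
            peak = np.unravel_index(corr.argmax(), corr.shape)
            # Map wrapped indices to signed shifts.
            return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

        rng = np.random.default_rng(0)
        frame = rng.random((32, 32))
        shifted = np.roll(frame, (3, -2), axis=(0, 1))   # known particle motion
        print(piv_displacement(shifted, frame))          # -> (3, -2)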

  19. Hardware accelerated high performance neutron transport computation based on AGENT methodology

    NASA Astrophysics Data System (ADS)

    Xiao, Shanjie

    The spatial heterogeneity of next-generation Gen-IV nuclear reactor core designs brings challenges to neutron transport analysis. The Arbitrary Geometry Neutron Transport (AGENT) code is a three-dimensional neutron transport analysis code being developed at the Laboratory for Neutronics and Geometry Computation (NEGE) at Purdue University. It can accurately describe spatial heterogeneity in a hierarchical structure through the R-function solid modeler. The previous version of AGENT coupled the 2D transport MOC solver and the 1D diffusion NEM solver to solve the three-dimensional Boltzmann transport equation. In this research, the 2D/1D coupling methodology was expanded to couple two transport solvers, the radial 2D MOC solver and the axial 1D MOC solver, for better accuracy. The expansion was benchmarked with the widely applied C5G7 benchmark models and two fast breeder reactor models, and showed good agreement with the reference Monte Carlo results. In practice, accurate neutron transport analysis for a full reactor core is still time-consuming, which limits its application. Therefore, the second part of this research is focused on designing specific hardware, based on the reconfigurable computing technique, to accelerate AGENT computations. This is the first time an application of this type has been used in reactor physics and neutron transport for reactor design. The most time-consuming part of the AGENT algorithm was identified, and the architecture of the AGENT acceleration system was designed based on this analysis. Through parallel computation on the specially designed, highly efficient architecture, the FPGA-based acceleration design achieves high performance at a much lower working frequency than CPUs. Design simulations show that the acceleration design would be able to speed up large-scale AGENT computations by about a factor of 20. The high-performance AGENT acceleration system will drastically shorten the computation time for 3D full-core neutron transport analysis, making the AGENT methodology unique and advantageous, and thus extending the possible application range of neutron transport analysis in both industrial engineering and academic research.

  20. Adaptive Core Simulation Employing Discrete Inverse Theory - Part II: Numerical Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abdel-Khalik, Hany S.; Turinsky, Paul J.

    2005-07-15

    Use of adaptive simulation is intended to improve the fidelity and robustness of important core attribute predictions such as core power distribution, thermal margins, and core reactivity. Adaptive simulation utilizes a selected set of past and current reactor measurements of reactor observables, i.e., in-core instrumentation readings, to adapt the simulation in a meaningful way. The companion paper, ''Adaptive Core Simulation Employing Discrete Inverse Theory - Part I: Theory,'' describes in detail the theoretical background of the proposed adaptive techniques. This paper, Part II, demonstrates several computational experiments conducted to assess the fidelity and robustness of the proposed techniques. The intent is to check the ability of the adapted core simulator model to predict future core observables that are not included in the adaption or core observables that are recorded at core conditions that differ from those at which adaption is completed. Also, this paper demonstrates successful utilization of an efficient sensitivity analysis approach to calculate the sensitivity information required to perform the adaption for millions of input core parameters. Finally, this paper illustrates a useful application for adaptive simulation - reducing the inconsistencies between two different core simulator code systems, where the multitudes of input data to one code are adjusted to enhance the agreement between both codes for important core attributes, i.e., core reactivity and power distribution. Also demonstrated is the robustness of such an application.
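
    The shape of the underlying adaption step can be sketched, though the paper's actual formulation is richer. Below, a damped (regularized) least-squares update adjusts input parameters so predicted observables move toward measurements; the sensitivity matrix and misfit are random placeholders.

        import numpy as np

        rng = np.random.default_rng(0)
        n_obs, n_par = 40, 200
        S = rng.standard_normal((n_obs, n_par))     # sensitivities dS/dp (placeholder)
        misfit = rng.standard_normal(n_obs)         # measured minus predicted

        alpha = 1.0                                 # regularization strength
        lhs = S.T @ S + alpha * np.eye(n_par)
        dp = np.linalg.solve(lhs, S.T @ misfit)     # damped least-squares update
        print("residual reduced to:",
              np.linalg.norm(misfit - S @ dp) / np.linalg.norm(misfit))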

  1. Modeling of Flow Blockage in a Liquid Metal-Cooled Reactor Subassembly with a Subchannel Analysis Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeong, Hae-Yong; Ha, Kwi-Seok; Chang, Won-Pyo

    The local blockage in a subassembly of a liquid metal-cooled reactor (LMR) is of importance to the plant safety because of the compact design and the high power density of the core. To analyze the thermal-hydraulic parameters in a subassembly of a liquid metal-cooled reactor with a flow blockage, the Korea Atomic Energy Research Institute has developed the MATRA-LMR-FB code. This code uses the distributed resistance model to describe the sweeping flow formed by the wire wrap around the fuel rods and to model the recirculation flow after a blockage. The hybrid difference scheme is also adopted for the description of the convective terms in the recirculating wake region of low velocity. Some state-of-the-art turbulent mixing models were implemented in the code; the models suggested by Rehme and by Zhukov were analyzed and found to be appropriate for the description of flow blockage in an LMR subassembly. The MATRA-LMR-FB code accurately predicts the experimental data of the Oak Ridge National Laboratory 19-pin bundle with a blockage for both high-flow and low-flow conditions. The influences of the distributed resistance model, the hybrid difference method, and the turbulent mixing models are evaluated step by step against the experimental data. The appropriateness of the models has also been evaluated through a comparison with results from the COMMIX code calculation. The flow blockage for the KALIMER design has been analyzed with the MATRA-LMR-FB code and compared with the SABRE code to guarantee the design safety with respect to flow blockage.
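
    The hybrid difference idea mentioned above switches discretizations with the local cell Peclet number. The sketch below shows the rule for a single 1-D face flux; the property values are placeholders, and the real code applies this within a full subchannel momentum/energy solution.

        def hybrid_face_flux(phi_up, phi_down, velocity, rho, gamma, dx):
            """Convective-plus-diffusive flux at a face between two cells."""
            peclet = rho * velocity * dx / gamma
            if abs(peclet) < 2.0:          # diffusion matters: central differencing
                phi_face = 0.5 * (phi_up + phi_down)
                diff = gamma * (phi_up - phi_down) / dx
            else:                          # convection dominates: pure upwind
                phi_face = phi_up if velocity > 0 else phi_down
                diff = 0.0                 # diffusion neglected at high Peclet
            return rho * velocity * phi_face + diff

        print(hybrid_face_flux(1.0, 0.0, 0.01, 850.0, 0.07, 0.005))  # low-Pe wake
        print(hybrid_face_flux(1.0, 0.0, 5.00, 850.0, 0.07, 0.005))  # high-Pe channel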

  2. A comparative analysis of moral principles and behavioral norms in eight ethical codes relevant to health sciences librarianship, medical informatics, and the health professions.

    PubMed

    Byrd, Gary D; Winkelstein, Peter

    2014-10-01

    Based on the authors' shared interest in the interprofessional challenges surrounding health information management, this study explores the degree to which librarians, informatics professionals, and core health professionals in medicine, nursing, and public health share common ethical behavior norms grounded in moral principles. Using the "Principlism" framework from a widely cited textbook of biomedical ethics, the authors analyze the statements in the ethical codes for associations of librarians (Medical Library Association [MLA], American Library Association, and Special Libraries Association), informatics professionals (American Medical Informatics Association [AMIA] and American Health Information Management Association), and core health professionals (American Medical Association, American Nurses Association, and American Public Health Association). This analysis focuses on whether and how the statements in these eight codes specify core moral norms (Autonomy, Beneficence, Non-Maleficence, and Justice), core behavioral norms (Veracity, Privacy, Confidentiality, and Fidelity), and other norms that are empirically derived from the code statements. These eight ethical codes share a large number of common behavioral norms based most frequently on the principle of Beneficence, then on Autonomy and Justice, but rarely on Non-Maleficence. The MLA and AMIA codes share the largest number of common behavioral norms, and these two associations also share many norms with the other six associations. The shared core of behavioral norms among these professions, all grounded in core moral principles, point to many opportunities for building effective interprofessional communication and collaboration regarding the development, management, and use of health information resources and technologies.

  3. A comparative analysis of moral principles and behavioral norms in eight ethical codes relevant to health sciences librarianship, medical informatics, and the health professions

    PubMed Central

    Byrd, Gary D.; Winkelstein, Peter

    2014-01-01

    Objective: Based on the authors' shared interest in the interprofessional challenges surrounding health information management, this study explores the degree to which librarians, informatics professionals, and core health professionals in medicine, nursing, and public health share common ethical behavior norms grounded in moral principles. Methods: Using the “Principlism” framework from a widely cited textbook of biomedical ethics, the authors analyze the statements in the ethical codes for associations of librarians (Medical Library Association [MLA], American Library Association, and Special Libraries Association), informatics professionals (American Medical Informatics Association [AMIA] and American Health Information Management Association), and core health professionals (American Medical Association, American Nurses Association, and American Public Health Association). This analysis focuses on whether and how the statements in these eight codes specify core moral norms (Autonomy, Beneficence, Non-Maleficence, and Justice), core behavioral norms (Veracity, Privacy, Confidentiality, and Fidelity), and other norms that are empirically derived from the code statements. Results: These eight ethical codes share a large number of common behavioral norms based most frequently on the principle of Beneficence, then on Autonomy and Justice, but rarely on Non-Maleficence. The MLA and AMIA codes share the largest number of common behavioral norms, and these two associations also share many norms with the other six associations. Implications: The shared core of behavioral norms among these professions, all grounded in core moral principles, point to many opportunities for building effective interprofessional communication and collaboration regarding the development, management, and use of health information resources and technologies. PMID:25349543

  4. Verification of combined thermal-hydraulic and heat conduction analysis code FLOWNET/TRUMP

    NASA Astrophysics Data System (ADS)

    Maruyama, Soh; Fujimoto, Nozomu; Kiso, Yoshihiro; Murakami, Tomoyuki; Sudo, Yukio

    1988-09-01

    This report presents the verification results of the combined thermal-hydraulic and heat conduction analysis code FLOWNET/TRUMP, which has been utilized for core thermal hydraulic design, especially for the analysis of flow distribution among fuel block coolant channels, the determination of thermal boundary conditions for fuel block stress analysis, and the estimation of fuel temperature in the case of a fuel block coolant channel blockage accident. These analyses support the design of the High Temperature Engineering Test Reactor (HTTR), which the Japan Atomic Energy Research Institute has been planning to construct in order to establish basic technologies for future advanced very high temperature gas-cooled reactors and to serve as an irradiation test reactor for the promotion of innovative high-temperature new-frontier technologies. The verification of the code was done through comparison between analytical results and experimental results from the Helium Engineering Demonstration Loop Multi-channel Test Section (HENDEL T1-M) with simulated fuel rods and fuel blocks.
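
    The flow-distribution problem FLOWNET addresses has a classic structure: parallel channels sharing one pressure drop. The toy sketch below splits a total flow among channels with quadratic resistances; the K values are invented, and the real code solves a full network with heat input.

        import numpy as np

        K = np.array([2.0e5, 2.5e5, 3.0e5, 2.2e5])   # channel resistances (placeholders)
        m_total = 1.2                                 # total mass flow [kg/s]

        # dp = K_i * m_i**2 and dp is common, so m_i = sqrt(dp/K_i);
        # choose dp so the channel flows sum to m_total.
        c = (1.0 / np.sqrt(K)).sum()
        dp = (m_total / c) ** 2
        m = np.sqrt(dp / K)
        print("dp [Pa]:", round(dp, 1), "flows [kg/s]:", m.round(4), "sum:", m.sum())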

  5. The SAS4A/SASSYS-1 Safety Analysis Code System, Version 5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fanning, T. H.; Brunett, A. J.; Sumner, T.

    The SAS4A/SASSYS-1 computer code is developed by Argonne National Laboratory for thermal, hydraulic, and neutronic analysis of power and flow transients in liquid-metal-cooled nuclear reactors (LMRs). SAS4A was developed to analyze severe core disruption accidents with coolant boiling and fuel melting and relocation, initiated by a very low probability coincidence of an accident precursor and failure of one or more safety systems. SASSYS-1, originally developed to address loss-of-decay-heat-removal accidents, has evolved into a tool for margin assessment in design basis accident (DBA) analysis and for consequence assessment in beyond-design-basis accident (BDBA) analysis. SAS4A contains detailed, mechanistic models of transient thermal, hydraulic, neutronic, and mechanical phenomena to describe the response of the reactor core, its coolant, fuel elements, and structural members to accident conditions. The core channel models in SAS4A provide the capability to analyze the initial phase of core disruptive accidents, through coolant heat-up and boiling, fuel element failure, and fuel melting and relocation. Originally developed to analyze oxide fuel clad with stainless steel, the models in SAS4A have been extended and specialized to metallic fuel with advanced alloy cladding. SASSYS-1 provides the capability to perform a detailed thermal/hydraulic simulation of the primary and secondary sodium coolant circuits and the balance-of-plant steam/water circuit. These sodium and steam circuit models include component models for heat exchangers, pumps, valves, turbines, and condensers, and thermal/hydraulic models of pipes and plena. SASSYS-1 also contains a plant protection and control system modeling capability, which provides digital representations of reactor, pump, and valve controllers and their response to input signal changes.

  6. Nuclear design analysis of square-lattice honeycomb space nuclear rocket engine

    NASA Astrophysics Data System (ADS)

    Widargo, Reza; Anghaie, Samim

    1999-01-01

    The square-lattice honeycomb reactor is designed based on a cylindrical core that is determined to have a critical diameter and length of 0.50 m and 0.50 m, respectively. A 0.10-m thick radial graphite reflector, in addition to a 0.20-m thick axial graphite reflector, is used to reduce neutron leakage from the reactor. The core is fueled with a solid solution of 93% enriched (U, Zr, Nb)C, which is one of several ternary uranium carbides being considered for this concept. The fuel is to be fabricated as 2 mm grooved (U, Zr, Nb)C wafers. The fuel wafers are used to form square-lattice honeycomb fuel assemblies, 0.10 m in length with 30% cross-sectional flow area. Five fuel assemblies are stacked up axially to form the reactor core. Based on the 30% void fraction, the width of the square flow channel is about 1.3 mm. The hydrogen propellant is passed through these flow channels and removes the heat from the reactor core. To perform the nuclear design analysis, a series of neutron transport and diffusion codes is used. The preliminary results are obtained using a simple four-group cross-section model. To optimize the nuclear design, the fuel densities are varied for each assembly. Tantalum, hafnium, and tungsten are considered as replacements for niobium in the fuel material to provide water-submersion sub-criticality for the reactor. Axial and radial neutron flux and power density distributions are calculated for the core. Results of the neutronic analysis indicate that the core has a relatively fast spectrum. From the results of the thermal hydraulic analyses, eight axial temperature zones are chosen for the calculation of group-averaged cross sections. An iterative process is conducted to couple the neutronic calculations with the thermal hydraulics calculations. Results of the nuclear design analysis indicate that a compact core can be designed based on ternary uranium carbide square-lattice honeycomb fuel. This design provides a relatively high thrust-to-weight ratio.
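
    The notion of a critical size can be illustrated with a far simpler model than the four-group analysis used here: a one-group, 1-D slab diffusion eigenvalue solved by power iteration. The cross sections below are invented placeholders chosen so k_eff lands near unity for a 0.50 m slab.

        import numpy as np

        n, size = 50, 0.50                        # 50 meshes over 0.50 m
        dx = size / n
        D, sig_a, nu_sig_f = 0.0092, 0.15, 0.52   # one-group constants (placeholders, SI)

        # Diffusion operator -D d2/dx2 + sig_a with zero-flux edges.
        A = np.zeros((n, n))
        for i in range(n):
            A[i, i] = 2 * D / dx**2 + sig_a
            if i > 0:
                A[i, i - 1] = -D / dx**2
            if i < n - 1:
                A[i, i + 1] = -D / dx**2

        phi, k = np.ones(n), 1.0
        for _ in range(200):                      # power iteration on the fission source
            source = nu_sig_f * phi
            phi_new = np.linalg.solve(A, source / k)
            k *= (nu_sig_f * phi_new).sum() / source.sum()
            phi = phi_new / np.linalg.norm(phi_new)
        print("k_eff ~", round(k, 4))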

  7. A neutron spectrum unfolding computer code based on artificial neural networks

    NASA Astrophysics Data System (ADS)

    Ortiz-Rodríguez, J. M.; Reyes Alfaro, A.; Reyes Haro, A.; Cervantes Viramontes, J. M.; Vega-Carrillo, H. R.

    2014-02-01

    The Bonner Spheres Spectrometer consists of a thermal neutron sensor placed at the center of a number of moderating polyethylene spheres of different diameters. From the measured readings, information can be derived about the spectrum of the neutron field where the measurements were made. Disadvantages of the Bonner system are the weight associated with each sphere and the need to irradiate the spheres sequentially, requiring long exposure periods. Provided a well-established response matrix and adequate irradiation conditions, the most delicate part of neutron spectrometry is the unfolding process. The derivation of the spectral information is not simple because the unknown is not given directly as a result of the measurements. The drawbacks associated with traditional unfolding procedures have motivated the need for complementary approaches. Novel methods based on Artificial Intelligence, mainly Artificial Neural Networks, have been widely investigated. In this work, a neutron spectrum unfolding code based on neural network technology is presented. The code, called the Neutron Spectrometry and Dosimetry with Artificial Neural networks (NSDann) unfolding code, was designed with a graphical interface. The core of the code is an embedded neural network architecture previously optimized using the robust design of artificial neural networks methodology. The code is easy to use, friendly, and intuitive for the user. It was designed for a Bonner Sphere System based on a 6LiI(Eu) neutron detector and a response matrix expressed in 60 energy bins taken from an International Atomic Energy Agency compilation. As input for unfolding the neutron spectrum, only seven count rates measured with seven Bonner spheres are required; the code simultaneously calculates 15 dosimetric quantities as well as the total flux for radiation protection purposes. The code generates a full report of the unfolding in HTML format. The NSDann unfolding code is freely available upon request to the authors.
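
    Schematically, the embedded network maps the seven measured count rates to a 60-bin spectrum in a single forward pass. The sketch below shows that mapping with an assumed hidden-layer size and random stand-in weights; a real unfolding code would load its trained weights instead.

        # Forward pass of a small feed-forward unfolding network:
        # 7 Bonner-sphere count rates in, a 60-bin spectrum out.
        # Layer width and weights are stand-ins, not the published network.
        import numpy as np

        rng = np.random.default_rng(0)
        W1, b1 = rng.normal(size=(7, 24)), np.zeros(24)    # hidden layer width assumed
        W2, b2 = rng.normal(size=(24, 60)), np.zeros(60)   # 60 output energy bins

        def unfold(counts):
            """Map 7 normalized count rates to a 60-bin neutron spectrum estimate."""
            h = np.tanh(counts @ W1 + b1)
            phi = np.exp(h @ W2 + b2)          # keeps the spectrum positive
            return phi / phi.sum()             # normalized bin fluences

        counts = rng.uniform(size=7)           # placeholder measured rates
        spectrum = unfold(counts)
        # Dosimetric quantities then follow as dot products of the spectrum
        # with fluence-to-dose conversion coefficient vectors.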

  8. New Multi-group Transport Neutronics (PHISICS) Capabilities for RELAP5-3D and its Application to Phase I of the OECD/NEA MHTGR-350 MW Benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerhard Strydom; Cristian Rabiti; Andrea Alfonsi

    2012-10-01

    PHISICS is a neutronics code system currently under development at the Idaho National Laboratory (INL). Its goal is to provide state-of-the-art simulation capability to reactor designers. The modules of PHISICS currently under development are a nodal and semi-structured transport core solver (INSTANT), a depletion module (MRTAU), and a cross-section interpolation module (MIXER). The INSTANT module is the most developed of those mentioned above. Basic functionalities are ready to use, but the code is still under continuous development to extend its capabilities. This paper reports on the effort of coupling the nodal kinetics code package PHISICS (INSTANT/MRTAU/MIXER) to the thermal-hydraulics system code RELAP5-3D to enable full core and system modeling. This makes it possible to model coupled (thermal-hydraulics and neutronics) problems with more options for 3D neutron kinetics than the existing diffusion-theory neutron kinetics module in RELAP5-3D (NESTLE) offers. In the second part of the paper, an overview of the OECD/NEA MHTGR-350 MW benchmark is given. This benchmark has been approved by the OECD and is based on the General Atomics 350 MW Modular High Temperature Gas Reactor (MHTGR) design. The benchmark includes coupled neutronics/thermal-hydraulics exercises that require more capabilities than RELAP5-3D with NESTLE offers. Therefore, the MHTGR benchmark makes extensive use of the new PHISICS/RELAP5-3D coupling capabilities. The paper presents the preliminary results of the three steady-state exercises specified in Phase I of the benchmark using PHISICS/RELAP5-3D.
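
    A coupled kinetics/thermal-hydraulics calculation of this kind exchanges data once per time step: the neutronics solver advances the power with temperatures frozen, then the system code advances the temperatures with the updated power. The sketch below shows that explicit operator splitting with two placeholder one-line "solvers"; it is an illustration of the pattern, not the PHISICS/RELAP5-3D interface itself.

        # Explicit operator-split coupling over a transient; both "solvers"
        # are invented placeholders standing in for the real codes.
        def neutronics_step(power, T_fuel, dt):
            # Placeholder: power relaxes toward a temperature-dependent level.
            target = 1.0 - 5.0e-4 * (T_fuel - 900.0)
            return power + dt * (target - power) / 0.5

        def thermal_step(T_fuel, power, dt, tau=5.0):
            # Placeholder: fuel temperature follows power with a 5 s lag.
            return T_fuel + dt * ((300.0 + 600.0 * power) - T_fuel) / tau

        power, T_fuel, dt = 1.0, 950.0, 0.01        # start 50 K off equilibrium
        for step in range(int(20.0 / dt)):          # 20 s coupled transient
            power = neutronics_step(power, T_fuel, dt)   # neutronics, frozen T
            T_fuel = thermal_step(T_fuel, power, dt)     # TH, updated power
        print(f"power: {power:.4f}, fuel temperature: {T_fuel:.1f} K")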

  9. The ADVANCE Code of Conduct for collaborative vaccine studies.

    PubMed

    Kurz, Xavier; Bauchau, Vincent; Mahy, Patrick; Glismann, Steffen; van der Aa, Lieke Maria; Simondon, François

    2017-04-04

    Lessons learnt from the 2009 (H1N1) flu pandemic highlighted factors limiting the capacity to collect European data on vaccine exposure, safety and effectiveness, including lack of rapid access to available data sources or expertise, difficulties in establishing efficient interactions between multiple parties, lack of confidence between the private and public sectors, concerns about possible or actual conflicts of interest (or perceptions thereof), and inadequate funding mechanisms. The Innovative Medicines Initiative's Accelerated Development of VAccine benefit-risk Collaboration in Europe (ADVANCE) consortium was established to create an efficient and sustainable infrastructure for rapid and integrated monitoring of post-approval benefit-risk of vaccines, including a code of conduct and governance principles for collaborative studies. The development of the code of conduct was guided by three core and common values (best science, strengthening public health, transparency) and a review of existing guidance and relevant published articles. The ADVANCE Code of Conduct includes 45 recommendations in 10 topics (Scientific integrity, Scientific independence, Transparency, Conflicts of interest, Study protocol, Study report, Publication, Subject privacy, Sharing of study data, Research contract). Each topic includes a definition, a set of recommendations and a list of additional reading. The concept of the study team is introduced as a key component of the ADVANCE Code of Conduct, with a core set of roles and responsibilities. It is hoped that adoption of the ADVANCE Code of Conduct by all partners involved in a study will facilitate and speed up its initiation, design, conduct and reporting. Adoption of the ADVANCE Code of Conduct should be stated in the study protocol, study report and publications, and journal editors are encouraged to use it as an indication that good principles of public health, science and transparency were followed throughout the study. Copyright © 2017. Published by Elsevier Ltd.

  10. Decay Heat Removal in GEN IV Gas-Cooled Fast Reactors

    DOE PAGES

    Cheng, Lap-Yan; Wei, Thomas Y. C.

    2009-01-01

    The safety goal of the current designs of advanced high-temperature thermal gas-cooled reactors (HTRs) is that no core meltdown would occur in a depressurization event with a combination of concurrent safety system failures. This study focused on the analysis of passive decay heat removal (DHR) in a GEN IV direct-cycle gas-cooled fast reactor (GFR) which is based on the technology developments of the HTRs. Given the different criteria and design characteristics of the GFR, an approach different from that taken for the HTRs for passive DHR would have to be explored. Different design options based on maintaining core flow were evaluated by performing transient analysis of a depressurization accident using the system code RELAP5-3D. The study also reviewed the conceptual design of autonomous systems for shutdown decay heat removal and recommends that future work in this area should be focused on the potential for Brayton cycle DHRs.

  11. Unstructured Grids for Sonic Boom Analysis and Design

    NASA Technical Reports Server (NTRS)

    Campbell, Richard L.; Nayani, Sudheer N.

    2015-01-01

    An evaluation of two methods for improving the process for generating unstructured CFD grids for sonic boom analysis and design has been conducted. The process involves two steps: the generation of an inner core grid using a conventional unstructured grid generator such as VGRID, followed by the extrusion of a sheared and stretched collar grid through the outer boundary of the core grid. The first method evaluated, known as COB, automatically creates a cylindrical outer boundary definition for use in VGRID that makes the extrusion process more robust. The second method, BG, generates the collar grid by extrusion in a very efficient manner. Parametric studies have been carried out and new options evaluated for each of these codes with the goal of establishing guidelines for best practices for maintaining boom signature accuracy with as small a grid as possible. In addition, a preliminary investigation examining the use of the CDISC design method for reducing sonic boom utilizing these grids was conducted, with initial results confirming the feasibility of a new remote design approach.

  12. Optimizing Irregular Applications for Energy and Performance on the Tilera Many-core Architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chavarría-Miranda, Daniel; Panyala, Ajay R.; Halappanavar, Mahantesh

    Optimizing applications simultaneously for energy and performance is a complex problem. High-performance, parallel, irregular applications are notoriously hard to optimize due to their data-dependent memory accesses, lack of structured locality, and complex data structures and code patterns. Irregular kernels are growing in importance in applications such as machine learning, graph analytics and combinatorial scientific computing. Performance- and energy-efficient implementation of these kernels on modern, energy-efficient multicore and many-core platforms is therefore an important and challenging problem. We present results from optimizing two irregular applications, the Louvain method for community detection (Grappolo) and high-performance conjugate gradient (HPCCG), on the Tilera many-core system. We have significantly extended MIT's OpenTuner auto-tuning framework to conduct a detailed study of platform-independent and platform-specific optimizations to improve performance as well as reduce total energy consumption. We explore the optimization design space along three dimensions: memory layout schemes, compiler-based code transformations, and optimization of parallel loop schedules. Using auto-tuning, we demonstrate whole-node energy savings of up to 41% relative to a baseline instantiation, and up to 31% relative to manually optimized variants.
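
    The search itself can be pictured as sampling a cross-product configuration space and keeping the measurement with the lowest energy. The skeleton below is a deliberately simplified stand-in for the extended OpenTuner framework, with a fake measure() in place of actually building, running, and reading hardware energy counters.

        # Generic random-search auto-tuner over the three dimensions named
        # above; NOT the OpenTuner API, and measure() returns fake numbers.
        import random

        space = {
            "layout":   ["AoS", "SoA", "blocked"],       # memory layout schemes
            "unroll":   [1, 2, 4, 8],                    # code-transformation knob
            "schedule": ["static", "dynamic", "guided"], # parallel loop schedule
        }

        def measure(cfg):
            # Placeholder: derive a fake (runtime s, energy J) from the config.
            r = random.Random(str(sorted(cfg.items())))
            t = r.uniform(1.0, 2.0)
            return t, t * r.uniform(80.0, 120.0)

        best = None
        for _ in range(50):                              # sampling budget
            cfg = {k: random.choice(v) for k, v in space.items()}
            runtime, energy = measure(cfg)
            if best is None or energy < best[0]:
                best = (energy, runtime, cfg)
        print("lowest-energy configuration found:", best)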

  13. On the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods

    PubMed Central

    Lee, Anthony; Yau, Christopher; Giles, Michael B.; Doucet, Arnaud; Holmes, Christopher C.

    2011-01-01

    We present a case-study on the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods. Graphics cards, containing multiple Graphics Processing Units (GPUs), are self-contained parallel computational devices that can be housed in conventional desktop and laptop computers and can be thought of as prototypes of the next generation of many-core processors. For certain classes of population-based Monte Carlo algorithms they offer massively parallel simulation, with the added advantage over conventional distributed multi-core processors that they are cheap, easily accessible, easy to maintain, easy to code, dedicated local devices with low power consumption. On a canonical set of stochastic simulation examples including population-based Markov chain Monte Carlo methods and Sequential Monte Carlo methods, we find speedups from 35- to 500-fold over conventional single-threaded computer code. Our findings suggest that GPUs have the potential to facilitate the growth of statistical modelling into complex data rich domains through the availability of cheap and accessible many-core computation. We believe the speedup we observe should motivate wider use of parallelizable simulation methods and greater methodological attention to their design. PMID:22003276
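
    Population-based algorithms suit GPUs because every chain or particle advances independently. The NumPy sketch below shows the same data-parallel pattern on a CPU, advancing many random-walk Metropolis chains in lockstep; on a GPU each chain would map to a thread. The Gaussian target is a toy example.

        # Many independent random-walk Metropolis chains advanced in lockstep,
        # targeting N(3, 1); vectorization stands in for per-thread GPU work.
        import numpy as np

        rng = np.random.default_rng(1)
        n_chains, n_steps = 100_000, 500

        x = np.zeros(n_chains)
        for _ in range(n_steps):
            prop = x + rng.normal(scale=0.5, size=n_chains)
            log_alpha = 0.5 * ((x - 3.0) ** 2 - (prop - 3.0) ** 2)  # log target ratio
            accept = np.log(rng.uniform(size=n_chains)) < log_alpha
            x = np.where(accept, prop, x)                           # per-chain accept/reject
        print("posterior mean estimate:", x.mean())                 # should be close to 3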

  14. Concept Inventory Development Reveals Common Student Misconceptions about Microbiology †

    PubMed Central

    Briggs, Amy G.; Hughes, Lee E.; Brennan, Robert E.; Buchner, John; Horak, Rachel E. A.; Amburn, D. Sue Katz; McDonald, Ann H.; Primm, Todd P.; Smith, Ann C.; Stevens, Ann M.; Yung, Sunny B.; Paustian, Timothy D.

    2017-01-01

    Misconceptions, or alternative conceptions, are incorrect understandings that students have incorporated into their prior knowledge. The goal of this study was the identification of misconceptions in microbiology held by undergraduate students upon entry into an introductory, general microbiology course. This work was the first step in developing a microbiology concept inventory based on the American Society for Microbiology’s Recommended Curriculum Guidelines for Undergraduate Microbiology. Responses to true/false (T/F) questions accompanied by written explanations by undergraduate students at a diverse set of institutions were used to reveal misconceptions for fundamental microbiology concepts. These data were analyzed to identify the most difficult core concepts, misalignment between explanations and answer choices, and the most common misconceptions for each core concept. From across the core concepts, nineteen misconception themes found in at least 5% of the coded answers for a given question were identified. The top five misconceptions, with coded responses ranging from 19% to 43% of the explanations, are described, along with suggested classroom interventions. Identification of student misconceptions in microbiology provides a foundation upon which to understand students’ prior knowledge and to design appropriate tools for improving instruction in microbiology. PMID:29854046

  15. Simulation on reactor TRIGA Puspati core kinetics fueled with thorium (Th) based fuel element

    NASA Astrophysics Data System (ADS)

    Mohammed, Abdul Aziz; Pauzi, Anas Muhamad; Rahman, Shaik Mohmmed Haikhal Abdul; Zin, Muhamad Rawi Muhammad; Jamro, Rafhayudi; Idris, Faridah Mohamad

    2016-01-01

    In confronting global energy requirements and the search for better technologies, there is a real case for widening the range of potential variations in the design of nuclear power plants. Smaller and simpler reactors are attractive, provided they can meet safety and security standards and address non-proliferation issues. On the fuel cycle side, thorium fuel cycles produce much less plutonium and other radioactive transuranic elements than uranium fuel cycles. Although not fissile itself, Th-232 will absorb slow neutrons to produce uranium-233 (233U), which is fissile. By introducing thorium, the number of highly enriched uranium fuel elements can be reduced while maintaining the core neutronic performance. This paper describes the core kinetics of a small TRIGA-like research reactor core fueled with a Th-filled fuel element matrix, using the general-purpose Monte Carlo N-Particle (MCNP) code.

  16. Simulation on reactor TRIGA Puspati core kinetics fueled with thorium (Th) based fuel element

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohammed, Abdul Aziz, E-mail: azizM@uniten.edu.my; Rahman, Shaik Mohmmed Haikhal Abdul; Pauzi, Anas Muhamad, E-mail: anas@uniten.edu.my

    2016-01-22

    In confronting global energy requirements and the search for better technologies, there is a real case for widening the range of potential variations in the design of nuclear power plants. Smaller and simpler reactors are attractive, provided they can meet safety and security standards and address non-proliferation issues. On the fuel cycle side, thorium fuel cycles produce much less plutonium and other radioactive transuranic elements than uranium fuel cycles. Although not fissile itself, Th-232 will absorb slow neutrons to produce uranium-233 (233U), which is fissile. By introducing thorium, the number of highly enriched uranium fuel elements can be reduced while maintaining the core neutronic performance. This paper describes the core kinetics of a small TRIGA-like research reactor core fueled with a Th-filled fuel element matrix, using the general-purpose Monte Carlo N-Particle (MCNP) code.

  17. Design of an MR image processing module on an FPGA chip.

    PubMed

    Li, Limin; Wyrwicz, Alice M

    2015-06-01

    We describe the design and implementation of an image processing module on a single-chip Field-Programmable Gate Array (FPGA) for real-time image processing. We also demonstrate that through graphical coding the design work can be greatly simplified. The processing module is based on a 2D FFT core. Our design is distinguished from previously reported designs in two respects. First, no off-chip hardware resources are required, which increases the portability of the core. Second, the direct matrix transposition usually required for execution of a 2D FFT is completely avoided using our newly designed address generation unit, which saves considerable on-chip block RAM and clock cycles. The image processing module was tested by reconstructing multi-slice MR images from both phantom and animal data. The tests on static data show that the processing module is capable of reconstructing 128×128 images at a speed of 400 frames/second. The tests on simulated real-time streaming data demonstrate that the module works properly under the timing conditions necessary for MRI experiments. Copyright © 2015 Elsevier Inc. All rights reserved.
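
    The transpose-free trick can be imitated in software: run the row FFTs in a flat buffer, then run the column FFTs through strided index arithmetic, which is what an address generation unit does in hardware. A NumPy sketch of the idea, checked against np.fft.fft2:

        # Row-column 2D FFT without an explicit transpose: the second pass
        # addresses columns via strided indices into the same flat buffer.
        import numpy as np

        def fft2_rowcol(img):
            n = img.shape[0]                          # assume a square n x n image
            buf = img.astype(complex).ravel().copy()
            for r in range(n):                        # pass 1: FFT each row
                buf[r * n:(r + 1) * n] = np.fft.fft(buf[r * n:(r + 1) * n])
            for c in range(n):                        # pass 2: FFT each column in place;
                idx = np.arange(c, n * n, n)          # strided addresses replace transposition
                buf[idx] = np.fft.fft(buf[idx])
            return buf.reshape(n, n)

        img = np.random.rand(128, 128)
        assert np.allclose(fft2_rowcol(img), np.fft.fft2(img))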

  18. Moats and Drawbridges: An Isolation Primitive for Reconfigurable Hardware Based Systems

    DTIC Science & Technology

    2007-05-01

    [OCR fragments from the report body:] "...these systems, and after being run through an optimizing CAD tool the resulting circuit is a single entangled mess of gates and wires. To prevent the... translates MATLAB [48] algorithms into HDL, logic synthesis translates this HDL into a netlist, a synthesis tool uses a place-and-route algorithm to..." [Figure residue, recoverable labels only: a design flow relating MATLAB algorithms and C code (gcc, executable) to HDL, logic synthesis, netlist, place-and-route, bitstream, EDK, and hard/soft microprocessor (µP) cores.]

  19. Ada Integrated Environment III Computer Program Development Specification. Volume III. Ada Optimizing Compiler.

    DTIC Science & Technology

    1981-12-01

    [OCR fragments from specification listings:] Symbol Map: library-file.library-unit{.subunit}.SYMAP; Statement Map: library-file.library-unit{.subunit}.SMAP; Type Map: library-file.library-unit{.subunit}.TMAP. The library... Code generator outputs: SYMAP (Symbol Map), SMAP (updated Statement Map), TMAP (Type Map). A.3.5 The PUNIT Command. The PUNIT... "...Smap (Core.Stmtmap) NAME Tmap (Core.Typemap) END" (Example A-3, Compiler Command Stream for the Code Generator; Texas Instruments, A-5, Ada Optimizing Compiler).

  20. Web buckling behavior under in-plane compression and shear loads for web reinforced composite sandwich core

    NASA Astrophysics Data System (ADS)

    Toubia, Elias Anis

    Sandwich construction is one of the most functional forms of composite structures developed by the composite industry. Due to the increasing demand for web-reinforced core in composite sandwich construction, a research study is needed to investigate web plate instability under shear, compression, and combined loading. If the web, which is an integral part of the three-dimensional web core sandwich structure, happens to be slender with respect to one or two of its spatial dimensions, then buckling phenomena become an issue in that they must be quantified as part of a comprehensive strength model for a fiber-reinforced core. In order to understand the thresholds of thickness, web weight, and foam type, and whether buckling will occur before material yielding, a thorough investigation needs to be conducted and buckling design equations need to be developed. Often in conducting a parametric study, a special-purpose analysis is preferred over a general-purpose analysis code, such as a finite element code, due to the cost and effort usually involved in generating a large number of results. A suitable methodology based on an energy method is presented to solve the stability of symmetrical and specially orthotropic laminated plates on an elastic foundation. Design buckling equations were developed for the web modeled as a laminated plate resting on an elastic foundation. The proposed equations allow for parametric studies without limitation on foam stiffness, geometric dimensions, or mechanical properties. General behavioral trends of orthotropic and symmetrical anisotropic plates show a pronounced contribution of the elastic foundation and fiber orientations to the buckling resistance of the plate. The effects of flexural anisotropy on the buckling behavior of long rectangular plates subjected to pure shear loading are well represented in the model. The reliability of the buckling equations as a design tool is confirmed by comparison with experimental results. Compared with predicted values, the experimental plate shear test results differ by 15 to 35 percent, depending on the boundary conditions considered. The compression testing yielded conservative results and, as such, can provide a valuable tool for the designer.
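
    For orientation, design equations of this type generalize the classical energy-method result for a simply supported, specially orthotropic plate resting on a Winkler foundation of modulus k under uniaxial compression, minimized over the buckling mode numbers. A hedged textbook form, not the paper's final equations:

        % Classical buckling load of a simply supported, specially orthotropic
        % plate on a Winkler foundation (modulus k) under uniaxial compression.
        \[
          N_{x,\mathrm{cr}} = \min_{m,n}\;
          \frac{D_{11}\alpha_m^4 + 2\left(D_{12} + 2D_{66}\right)\alpha_m^2\beta_n^2
                + D_{22}\beta_n^4 + k}{\alpha_m^2},
          \qquad
          \alpha_m = \frac{m\pi}{a},\quad \beta_n = \frac{n\pi}{b}.
        \]

    The foundation term k raises the critical load, which is the pronounced foundation contribution noted above.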

  1. Evaluation of HFIR LEU Fuel Using the COMSOL Multiphysics Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Primm, Trent; Ruggles, Arthur; Freels, James D

    2009-03-01

    A finite element computational approach to simulation of the High Flux Isotope Reactor (HFIR) core thermal-fluid behavior is developed. These models were developed to facilitate design of a low enriched core for the HFIR, which will have different axial and radial flux profiles from the current HEU core and thus will require fuel and poison load optimization. This report outlines a stepwise implementation of this modeling approach using the commercial finite element code, COMSOL, with initial assessment of fuel, poison and clad conduction modeling capability, followed by assessment of mating of the fuel conduction models to a one-dimensional fluid model typical of legacy simulation techniques for the HFIR core. The model is then extended to fully couple 2-dimensional conduction in the fuel to a 2-dimensional thermo-fluid model of the coolant for a HFIR core cooling sub-channel, with additional assessment of simulation outcomes. Finally, 3-dimensional simulations of a fuel plate and cooling channel are presented.

  2. Simultaneous optimization of loading pattern and burnable poison placement for PWRs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alim, F.; Ivanov, K.; Yilmaz, S.

    2006-07-01

    To solve the in-core fuel management optimization problem, GARCO-PSU (Genetic Algorithm Reactor Core Optimization - Pennsylvania State Univ.) is developed. This code is applicable to all types and geometries of PWR core structures with an unlimited number of fuel assembly (FA) types in the inventory. For this reason an innovative genetic algorithm is developed by modifying the classical representation of the genotype. In-core fuel management heuristic rules are introduced into GARCO. The core reload design optimization has two parts, loading pattern (LP) optimization and burnable poison (BP) placement optimization. These parts depend on each other, but it is difficult to solve the combined problem due to its large size. Separating the problem into two parts provides a practical way to solve it; however, the result of this method does not reflect the true optimal solution. GARCO-PSU solves the LP optimization and BP placement optimization simultaneously in an efficient manner. (authors)
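
    In outline, a genetic algorithm for loading-pattern search encodes a candidate core as an arrangement of fuel assemblies and evolves a population of such arrangements. The skeleton below is a minimal illustration in that spirit, with a toy peaking objective standing in for the core simulator; it is not the GARCO-PSU genotype.

        # Minimal GA skeleton for loading-pattern search: the chromosome is a
        # permutation of assemblies over positions; fitness() is a toy stand-in.
        import random

        N_POS = 16                                   # core positions (toy 1/8-core)
        inventory = [4.0] * 4 + [3.2] * 6 + [2.4] * 6  # FA enrichments, assumed inventory

        def fitness(lp):
            # Toy objective: penalize the hottest adjacent-enrichment pair.
            peaking = max(lp[i] + lp[(i + 1) % N_POS] for i in range(N_POS))
            return -peaking                          # higher fitness = flatter core

        def mutate(lp):
            i, j = random.sample(range(N_POS), 2)    # swap two assemblies
            lp = lp[:]
            lp[i], lp[j] = lp[j], lp[i]
            return lp

        pop = [random.sample(inventory, N_POS) for _ in range(40)]
        for gen in range(200):
            pop.sort(key=fitness, reverse=True)
            pop = pop[:20] + [mutate(random.choice(pop[:20])) for _ in range(20)]
        pop.sort(key=fitness, reverse=True)
        print("best pattern:", pop[0], "peaking metric:", -fitness(pop[0]))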

  3. Affinity-aware checkpoint restart

    DOE PAGES

    Saini, Ajay; Rezaei, Arash; Mueller, Frank; ...

    2014-12-08

    Current checkpointing techniques employed to overcome faults for HPC applications result in inferior application performance after restart from a checkpoint for a number of applications. This is due to a lack of page and core affinity awareness in the checkpoint/restart (C/R) mechanism, i.e., application tasks originally pinned to cores may be restarted on different cores, and in the case of non-uniform memory architectures (NUMA), quite common today, memory pages associated with tasks on a NUMA node may be associated with a different NUMA node after restart. Here, this work contributes a novel design technique for C/R mechanisms to preserve task-to-core maps and NUMA-node-specific page affinities across restarts. Experimental results with BLCR, a C/R mechanism, enhanced with affinity awareness demonstrate significant performance benefits of 37%-73% for the NAS Parallel Benchmark codes and 6-12% for NAMD with negligible overheads, compared with execution times up to nearly four times longer without affinity-aware restarts on 16 cores.

  4. Affinity-aware checkpoint restart

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saini, Ajay; Rezaei, Arash; Mueller, Frank

    Current checkpointing techniques employed to overcome faults for HPC applications result in inferior application performance after restart from a checkpoint for a number of applications. This is due to a lack of page and core affinity awareness in the checkpoint/restart (C/R) mechanism, i.e., application tasks originally pinned to cores may be restarted on different cores, and in the case of non-uniform memory architectures (NUMA), quite common today, memory pages associated with tasks on a NUMA node may be associated with a different NUMA node after restart. Here, this work contributes a novel design technique for C/R mechanisms to preserve task-to-core maps and NUMA-node-specific page affinities across restarts. Experimental results with BLCR, a C/R mechanism, enhanced with affinity awareness demonstrate significant performance benefits of 37%-73% for the NAS Parallel Benchmark codes and 6-12% for NAMD with negligible overheads, compared with execution times up to nearly four times longer without affinity-aware restarts on 16 cores.
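
    The task-to-core half of the bookkeeping can be sketched at the process level with the Linux affinity calls exposed in Python: record the map at checkpoint time and re-pin after restart. This is a simplified illustration of the idea, not BLCR's implementation; NUMA page affinity would need additional machinery (e.g., page migration), which is omitted, and the file name and layout are made up.

        # Record and restore task-to-core maps around a checkpoint (Linux-only).
        import json, os

        def save_affinity(pids, path="affinity.json"):
            amap = {pid: sorted(os.sched_getaffinity(pid)) for pid in pids}
            with open(path, "w") as f:
                json.dump(amap, f)                    # checkpoint-time core map

        def restore_affinity(path="affinity.json"):
            with open(path) as f:
                amap = json.load(f)
            for pid, cores in amap.items():
                os.sched_setaffinity(int(pid), cores) # re-pin task to original cores

        save_affinity([os.getpid()])
        restore_affinity()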

  5. Neutronics calculation of RTP core

    NASA Astrophysics Data System (ADS)

    Rabir, Mohamad Hairie B.; Zin, Muhammad Rawi B. Mohamed; Karim, Julia Bt. Abdul; Bayar, Abi Muttaqin B. Jalal; Usang, Mark Dennis Anak; Mustafa, Muhammad Khairul Ariff B.; Hamzah, Na'im Syauqi B.; Said, Norfarizan Bt. Mohd; Jalil, Muhammad Husamuddin B.

    2017-01-01

    Reactor calculation and simulation are critically important to ensure the safety and better utilization of a research reactor. Malaysia's PUSPATI TRIGA Reactor (RTP) achieved initial criticality on June 28, 1982. The reactor is designed to effectively support various fields of basic nuclear research, manpower training, and the production of radioisotopes. Since the early 1990s, neutronics modelling has been used as part of its routine in-core fuel management activities. Several computer codes have been used for RTP since then, based on 1D neutron diffusion, 2D neutron diffusion and 3D Monte Carlo neutron transport methods. This paper describes current progress in, and gives an overview of, neutronics modelling development for RTP. Several important parameters were analysed, such as keff, reactivity, neutron flux, power distribution and fission product build-up, for the latest core configuration. The developed core neutronics model was validated by comparison with experimental and measurement data. Along with the RTP core model, a calculation procedure was also developed to establish better prediction capability for RTP's behaviour.

  6. Unified transform architecture for AVC, AVS, VC-1 and HEVC high-performance codecs

    NASA Astrophysics Data System (ADS)

    Dias, Tiago; Roma, Nuno; Sousa, Leonel

    2014-12-01

    A unified architecture for fast and efficient computation of the set of two-dimensional (2-D) transforms adopted by the most recent state-of-the-art digital video standards is presented in this paper. In contrast to other designs with similar functionality, the presented architecture is supported on a scalable, modular and completely configurable processing structure. This flexible structure not only allows the architecture to be easily reconfigured to support different transform kernels, but also permits its resizing to efficiently support transforms of different orders (e.g. order-4, order-8, order-16 and order-32). Consequently, it is not only highly suitable for realizing high-performance multi-standard transform cores, but it also offers highly efficient implementations of specialized processing structures addressing only the reduced subset of transforms used by a specific video standard. The experimental results obtained by prototyping several configurations of this processing structure in a Xilinx Virtex-7 FPGA show the superior performance and hardware efficiency levels provided by the proposed unified architecture for the implementation of transform cores for the Advanced Video Coding (AVC), Audio Video coding Standard (AVS), VC-1 and High Efficiency Video Coding (HEVC) standards. In addition, such results also demonstrate the ability of this processing structure to realize multi-standard transform cores supporting all the standards mentioned above and capable of processing the 8k Ultra High Definition Television (UHDTV) video format (7,680 × 4,320 at 30 fps) in real time.

  7. MO-E-18C-04: Advanced Computer Simulation and Visualization Tools for Enhanced Understanding of Core Medical Physics Concepts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naqvi, S

    2014-06-15

    Purpose: Most medical physics programs emphasize proficiency in routine clinical calculations and QA. The formulaic aspect of these calculations and the prescriptive nature of measurement protocols obviate the need to frequently apply basic physical principles, which therefore gradually decay from memory. For example, few students appreciate the role of electron transport in photon dose, making it difficult to understand key concepts such as dose buildup, electronic disequilibrium effects and Bragg-Gray theory. These conceptual deficiencies manifest when the physicist encounters a new system requiring knowledge beyond routine activities. Methods: Two interactive computer simulation tools were developed to facilitate deeper learning of physical principles. One is a Monte Carlo code written with a strong educational aspect. The code can “label” regions and interactions to highlight specific aspects of the physics, e.g., certain regions can be designated as “starters” or “crossers,” and any interaction type can be turned on and off. Full 3D tracks with specific portions highlighted further enhance the visualization of radiation transport problems. The second code calculates and displays the trajectories of a collection of electrons under an arbitrary space- and time-dependent Lorentz force using relativistic kinematics. Results: Using the Monte Carlo code, the student can interactively study photon and electron transport through visualization of dose components, particle tracks, and interaction types. The code can, for instance, be used to study the kerma-dose relationship, explore electronic disequilibrium near interfaces, or visualize kernels by using interaction forcing. The electromagnetic simulator enables the student to explore accelerating mechanisms and particle optics in devices such as cyclotrons and linacs. Conclusion: The proposed tools are designed to enhance understanding of abstract concepts by highlighting various aspects of the physics. The simulations serve as virtual experiments that give deeper and longer-lasting understanding of core principles. The student can then make sound judgements in novel situations encountered beyond routine clinical activities.
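
    A toy version of the "labeled" transport idea: 1D photons tracked through two slabs, tagged "starter" until they cross the interface and "crosser" afterwards, with surface tracking used to resample the free path at the boundary. The cross sections and absorption probability below are made up for illustration.

        # Toy 1D attenuation Monte Carlo with region labels; forward-only
        # transport with a fixed absorption probability per collision.
        import math, random
        from collections import Counter

        MU = {"A": 0.2, "B": 0.5}            # attenuation coefficients (1/cm), assumed
        BOUND, END, P_ABS = 5.0, 10.0, 0.3   # A: [0,5) cm, B: [5,10) cm, absorption prob.

        def track():
            x, tag = 0.0, "starter"
            while x < END:
                region = "A" if x < BOUND else "B"
                step = -math.log(1.0 - random.random()) / MU[region]
                if region == "A" and x + step >= BOUND:
                    x, tag = BOUND, "crosser"    # hit the interface; resample in B
                    continue
                x += step
                if x < END and random.random() < P_ABS:
                    return tag, "absorbed"       # collision ends the history
            return tag, "transmitted"

        print(Counter(track() for _ in range(10_000)))  # fates, split by label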

  8. The effects of stainless steel radial reflector on core reactivity for small modular reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kang, Jung Kil, E-mail: jkkang@email.kings.ac.kr; Hah, Chang Joo, E-mail: changhah@kings.ac.kr; Cho, Sung Ju, E-mail: sungju@knfc.co.kr

    A commercial PWR core is surrounded by a radial reflector, which consists of a baffle and water. The radial reflector is designed to reflect neutrons back into the core region to improve the neutron efficiency of the reactor and to protect the reactor vessel from the embrittling effects caused by irradiation during power operation. The reflector also helps to flatten the neutron flux and power distributions in the reactor core. The conceptual nuclear design for a boron-free small modular reactor (SMR) under development in Korea calls for a cycle length of 4∼5 years, a rated power of 180 MWth and an enrichment of less than 5 w/o. The aim of this paper is to analyze the effects of a stainless steel radial reflector on the performance of the SMR using UO2 fuels. Three types of reflectors, water, a water/stainless steel 304 mixture and stainless steel 304, are selected to investigate the effect on core reactivity. Additionally, the thickness of the stainless steel and a double-layer reflector type are also investigated. The CASMO-4/SIMULATE-3 code system is used for this analysis. The results of the analysis show that a single-layer stainless steel reflector is the most efficient reflector.

  9. ANNarchy: a code generation approach to neural simulations on parallel hardware

    PubMed Central

    Vitay, Julien; Dinkelbach, Helge Ü.; Hamker, Fred H.

    2015-01-01

    Many modern neural simulators focus on the simulation of networks of spiking neurons on parallel hardware. Another important framework in computational neuroscience, rate-coded neural networks, is mostly difficult or impossible to implement using these simulators. We present here the ANNarchy (Artificial Neural Networks architect) neural simulator, which makes it easy to define and simulate rate-coded and spiking networks, as well as combinations of both. The Python interface has been designed to be close to the PyNN interface, while the definition of neuron and synapse models can be specified using an equation-oriented mathematical description similar to the Brian neural simulator. This information is used to generate C++ code that will efficiently perform the simulation on the chosen parallel hardware (multi-core system or graphical processing unit). Several numerical methods are available to transform ordinary differential equations into efficient C++ code. We compare the parallel performance of the simulator to existing solutions. PMID:26283957
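
    The rate-coded half of such a simulator ultimately reduces to integrating coupled rate equations of the form tau dr/dt = -r + f(W r + I). A generated kernel corresponds to a loop like the minimal NumPy version below; sizes, weights and constants are illustrative, not an ANNarchy model definition.

        # Forward-Euler integration of a random rate-coded network.
        import numpy as np

        rng = np.random.default_rng(2)
        N, tau, dt = 100, 10.0, 0.1               # neurons, time constant (ms), step (ms)
        W = rng.normal(scale=1.0 / np.sqrt(N), size=(N, N))  # recurrent weights
        I = rng.uniform(0.0, 1.0, size=N)         # constant external input
        r = np.zeros(N)                           # firing rates

        for _ in range(int(200.0 / dt)):          # simulate 200 ms
            r += dt / tau * (-r + np.tanh(W @ r + I))
        print("mean firing rate:", r.mean())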

  10. Educating CPE supervisors: a grounded theory study.

    PubMed

    Ragsdale, Judith R; Holloway, Elizabeth L; Ivy, Steven S

    2009-01-01

    This qualitative study was designed to cull the wisdom of CPE supervisors doing especially competent supervisory education and to develop a theory of CPE supervisory education. Grounded theory methodology included interviewing 11 supervisors and coding the data to identify themes. Four primary dimensions emerged along with a reciprocal core dimension, Supervisory Wisdom, which refers to work the supervisors do in terms of their continuing growth and development.

  11. Self-Shielded Flux Cored Wire Evaluation

    DTIC Science & Technology

    1980-12-01

    [Fragments of the standard report documentation page and body:] Performing organization: Naval Surface Warfare Center CD, Code 2230 - Design Integration Tools. Distribution/availability statement: Approved for public release. ...tensile and yield strength, percent elongation, and percent reduction of area reported. This testing was performed with a Satec 400 WHVP tensile...

  12. Hepatitis C Virus core+1/ARF Protein Modulates the Cyclin D1/pRb Pathway and Promotes Carcinogenesis.

    PubMed

    Moustafa, Savvina; Karakasiliotis, Ioannis; Mavromara, Penelope

    2018-05-01

    Viruses often encompass overlapping reading frames and unconventional translation mechanisms in order to maximize the output from a minimum genome and to orchestrate their timely gene expression. Hepatitis C virus (HCV) possesses such an unconventional open reading frame (ORF) within the core-coding region, encoding an additional protein, initially designated ARFP, F, or core+1. Two predominant isoforms of core+1/ARFP have been reported, core+1/L, initiating from codon 26, and core+1/S, initiating from codons 85/87 of the polyprotein coding region. The biological significance of core+1/ARFP expression remains elusive. The aim of the present study was to gain insight into the functional and pathological properties of core+1/ARFP through its interaction with the host cell, combining in vitro and in vivo approaches. Our data provide strong evidence that the core+1/ARFP of HCV-1a stimulates cell proliferation in Huh7-based cell lines expressing either core+1/S or core+1/L isoforms and in transgenic liver disease mouse models expressing core+1/S protein in a liver-specific manner. Both isoforms of core+1/ARFP increase the levels of cyclin D1 and phosphorylated Rb, thus promoting the cell cycle. In addition, core+1/S was found to enhance liver regeneration and oncogenesis in transgenic mice. The induction of the cell cycle together with increased mRNA levels of cell proliferation-related oncogenes in cells expressing the core+1/ARFP proteins argue for an oncogenic potential of these proteins and an important role in HCV-associated pathogenesis. IMPORTANCE This study sheds light on the biological importance of a unique HCV protein. We show here that core+1/ARFP of HCV-1a interacts with the host machinery, leading to acceleration of the cell cycle and enhancement of liver carcinogenesis. This pathological mechanism(s) may complement the action of other viral proteins with oncogenic properties, leading to the development of hepatocellular carcinoma. In addition, given that immunological responses to core+1/ARFP have been correlated with liver disease severity in chronic HCV patients, we expect that the present work will assist in clarifying the pathophysiological relevance of this protein as a biomarker of disease progression. Copyright © 2018 American Society for Microbiology.

  13. SOFIP: A Short Orbital Flux Integration Program

    NASA Technical Reports Server (NTRS)

    Stassinopoulos, E. G.; Hebert, J. J.; Butler, E. L.; Barth, J. L.

    1979-01-01

    A computer code was developed to evaluate the space radiation environment encountered by geocentric satellites. The Short Orbital Flux Integration Program (SOFIP) is a compact routine of modular composition, designed mostly with structured programming techniques in order to provide core and time economy and ease of use. The program in its simplest form produces, for a given input trajectory, a composite integral orbital spectrum of either protons or electrons. Additional features are available separately or in combination through the inclusion of the corresponding (optional) modules. The code is described in detail, and the function and usage of the various modules are explained. A program listing and sample outputs are attached.

  14. Development and Validation of a Supersonic Helium-Air Coannular Jet Facility

    NASA Technical Reports Server (NTRS)

    Carty, Atherton A.; Cutler, Andrew D.

    1999-01-01

    Data are acquired in a simple coannular He/air supersonic jet suitable for validation of CFD (Computational Fluid Dynamics) codes for high speed propulsion. Helium is employed as a non-reacting hydrogen fuel simulant, constituting the core of the coannular flow while the coflow is composed of air. The mixing layer interface between the two flows in the near field and the plume region which develops further downstream constitute the primary regions of interest, similar to those present in all hypersonic air breathing propulsion systems. A computational code has been implemented from the experiment's inception, serving as a tool for model design during the development phase.

  15. A portable platform for accelerated PIC codes and its application to GPUs using OpenACC

    NASA Astrophysics Data System (ADS)

    Hariri, F.; Tran, T. M.; Jocksch, A.; Lanti, E.; Progsch, J.; Messmer, P.; Brunner, S.; Gheller, C.; Villard, L.

    2016-10-01

    We present a portable platform, called PIC_ENGINE, for accelerating Particle-In-Cell (PIC) codes on heterogeneous many-core architectures such as Graphics Processing Units (GPUs). The aim of this development is efficient simulation on future exascale systems, allowing different parallelization strategies depending on the application problem and the specific architecture. To this end, the platform contains the basic steps of the PIC algorithm and has been designed as a test bed for different algorithmic options and data structures. Among the architectures that this engine can explore, particular attention is given here to systems equipped with GPUs. The study demonstrates that our portable PIC implementation based on the OpenACC programming model can achieve performance closely matching theoretical predictions. Using the Cray XC30 system, Piz Daint, at the Swiss National Supercomputing Centre (CSCS), we show that PIC_ENGINE running on an NVIDIA Kepler K20X GPU can outperform the same code on an Intel Sandy Bridge 8-core CPU by a factor of 3.4.

  16. Contextual Constraint Treatment for coarse coding deficit in adults with right hemisphere brain damage: Generalization to narrative discourse comprehension

    PubMed Central

    Blake, Margaret Lehman; Tompkins, Connie A.; Scharp, Victoria L.; Meigh, Kimberly M.; Wambaugh, Julie

    2014-01-01

    Coarse coding is the activation of broad semantic fields that can include multiple word meanings and a variety of features, including those peripheral to a word’s core meaning. It is a partially domain-general process related to general discourse comprehension and contributes to both literal and non-literal language processing. Adults with damage to the right cerebral hemisphere (RHD) and a coarse coding deficit are particularly slow to activate features of words that are relatively distant or peripheral. This manuscript reports a pre-efficacy study of Contextual Constraint Treatment (CCT), a novel, implicit treatment designed to increase the efficiency of coarse coding with the goal of improving narrative comprehension and other language performance that relies on coarse coding. Participants were four adults with RHD. The study used a single-subject controlled experimental design across subjects and behaviors. The treatment involves pre-stimulation, using a hierarchy of strong- and moderately-biased contexts, to prime the intended distantly-related features of critical stimulus words. Three of the four participants exhibited gains in auditory narrative discourse comprehension, the primary outcome measure. All participants exhibited generalization to untreated items. No strong generalization to processing nonliteral language was evident. The results indicate that CCT yields both improved efficiency of the coarse coding process and generalization to narrative comprehension. PMID:24983133

  17. Interface requirements to couple thermal-hydraulic codes to severe accident codes: ATHLET-CD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trambauer, K.

    1997-07-01

    The system code ATHLET-CD is being developed by GRS in cooperation with IKE and IPSN. Its field of application comprises the whole spectrum of leaks and large breaks, as well as operational and abnormal transients for LWRs and VVERs. At present the analyses cover the in-vessel thermal-hydraulics, the early phases of core degradation, and fission product and aerosol release from the core and their transport in the reactor coolant system. The aim of the code development is to extend the simulation of core degradation up to failure of the reactor pressure vessel and to cover all physically reasonable accident sequences for western and eastern LWRs, including RBMKs. The ATHLET-CD structure is highly modular in order to include a manifold spectrum of models and to offer an optimum basis for further development. The code consists of four general modules to describe the reactor coolant system thermal-hydraulics, the core degradation, the fission product core release, and fission product and aerosol transport. Each general module consists of basic modules which correspond to the process to be simulated or to its specific purpose. Besides the code structure based on the physical modelling, the code follows four strictly separated steps during the course of a calculation: (1) input of structure, geometrical data, and initial and boundary conditions, (2) initialization of derived quantities, (3) steady-state calculation or input of restart data, and (4) transient calculation. In this paper, the transient solution method is briefly presented and the coupling methods are discussed. Three aspects have to be considered for the coupling of different modules in one code system. The first is the conservation of mass and energy in the different subsystems, namely the fluid, the structures, and the fission products and aerosols. The second is the convergence of the numerical solution and the stability of the calculation. The third aspect is related to code performance and running time.

  18. Understanding the haling power depletion (HPD) method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levine, S.; Blyth, T.; Ivanov, K.

    2012-07-01

    The Pennsylvania State Univ. (PSU) is using the university version of the Studsvik Scandpower Code System (CMS) for research and education purposes. Preparations have been made to incorporate the CMS into the PSU Nuclear Engineering graduate course 'Nuclear Fuel Management'. The information presented in this paper was developed during the preparation of the material for the course, in which the Haling Power Depletion (HPD) was presented for the first time. The HPD method has been criticized as not valid by many in the field, even though it has been successfully applied at PSU for the past 20 years. It was noticed that the radial power distribution (RPD) for low leakage cores during depletion remained similar to that of the HPD during most of the cycle. Thus, the HPD may be used conveniently mainly for low leakage cores. Studies were then made to better understand the HPD, and the results are presented in this paper. Many different core configurations can be computed quickly with the HPD, without using Burnable Poisons (BP), to produce several excellent low leakage core configurations that are viable for power production. Once the HPD core configuration is chosen for further analysis, techniques are available for establishing the BP design to prevent violating any of the safety constraints in such HPD-calculated cores. In summary, this paper has shown that the HPD method can be used to guide the design of a low leakage core. (authors)
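
    The HPD can be viewed as a fixed-point problem: find the power shape that reproduces itself at end of cycle when the core is depleted with that shape held constant. The two-node toy below illustrates only the iteration; the reactivity model is invented for illustration and is not a CMS calculation.

        # Toy Haling fixed point: deplete with a constant shape S, map the
        # end-of-cycle reactivity back to a power shape, and iterate.
        import numpy as np

        k0 = np.array([1.25, 1.15])            # BOC multiplication, 2 nodes, assumed
        slope = 0.08                           # reactivity loss per unit burnup, assumed
        E_cycle = 1.5                          # cycle burnup in the same units

        def eoc_shape(S):
            k_eoc = k0 - slope * S * E_cycle   # each node depleted with constant shape S
            return k_eoc / k_eoc.sum() * 2     # toy mapping: power follows local k

        S = np.array([1.0, 1.0])               # start from a flat shape
        for _ in range(100):
            S = 0.5 * S + 0.5 * eoc_shape(S)   # damped fixed-point iteration
        print("Haling shape:", np.round(S, 4)) # self-consistent constant power shape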

  19. Scaling Studies for Advanced High Temperature Reactor Concepts, Final Technical Report: October 2014—December 2017

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woods, Brian; Gutowska, Izabela; Chiger, Howard

    Computer simulations of nuclear reactor thermal-hydraulic phenomena are often used in the design and licensing of nuclear reactor systems. In order to assess the accuracy of these computer simulations, computer codes and methods are often validated against experimental data. This experimental data must be of sufficiently high quality in order to conduct a robust validation exercise. In addition, this experimental data is generally collected at experimental facilities that are of a smaller scale than the reactor systems being simulated, due to cost considerations. Therefore, smaller scale test facilities must be designed and constructed in such a fashion as to ensure that the prototypical behavior of a particular nuclear reactor system is preserved. The work completed through this project has resulted in scaling analyses and conceptual design development for a test facility capable of collecting code validation data for the following high temperature gas reactor systems and events: (1) passive natural circulation core cooling system, (2) pebble bed gas reactor concept, (3) General Atomics Energy Multiplier Module reactor, and (4) prismatic block design steam-water ingress event. In the event that code validation data for these systems or events is needed in the future, significant progress in the design of an appropriate integral-type test facility has already been completed as a result of this project. Where applicable, the next step would be to begin detailed design development and material procurement. As part of this project, applicable scaling analyses were completed and test facility design requirements developed. Conceptual designs were developed for the implementation of these design requirements at the Oregon State University (OSU) High Temperature Test Facility (HTTF). The original HTTF is based on a ¼-scale model of a high temperature gas reactor concept with the capability for both forced and natural circulation flow through a prismatic core with an electrical heat source. The peak core region temperature capability is 1400°C. As part of this project, an inventory of test facilities that could be used for these experimental programs was completed. Several of these facilities showed some promise; however, upon further investigation it became clear that only the OSU HTTF had the power and/or peak temperature limits that would allow for the experimental programs envisioned herein. Thus the conceptual design and feasibility study development focused on examining the feasibility of configuring the current HTTF to collect validation data for these experimental programs. In addition to the scaling analyses and conceptual design development, a test plan was developed for the envisioned modified test facility. This test plan included a discussion of an appropriate shakedown test program as well as the specific matrix tests. Finally, a feasibility study was completed to determine the cost and schedule considerations that would be important to any test program developed to investigate these designs and events.

  20. Pretest predictions of the Fast Flux Test Facility Passive Safety Test Phase IIB transients using United States derived computer codes and methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heard, F.J.; Harris, R.A.; Padilla, A.

    The SASSYS/SAS4A systems analysis code was used to simulate a series of unprotected loss of flow (ULOF) tests planned at the Fast Flux Test Facility (FFTF). The subject tests were designed to investigate the transient performance of the FFTF during various ULOF scenarios for two different loading patterns designed to produce extremes in the assembly load pad clearance and the direction of the initial assembly bows. The tests are part of an international program designed to extend the existing data base on the performance of liquid metal reactors (LMRs). The analyses demonstrate that a wide range of power-to-flow ratios can be reached during the transients and, therefore, will yield valuable data on the dynamic character of the structural feedbacks in LMRs. These analyses will be repeated once the actual FFTF core loadings for the tests are available. These predictions, similar ones obtained by other international participants in the FFTF program, and post-test analyses will be used to upgrade and further verify the computer codes used to predict the behavior of LMRs.

  1. Design of a fuel element for a lead-cooled fast reactor

    NASA Astrophysics Data System (ADS)

    Sobolev, V.; Malambu, E.; Abderrahim, H. Aït

    2009-03-01

    Options for a fourth-generation (GEN-IV) lead-cooled fast reactor (LFR) with an electric power of 600 MW are investigated in the ELSY Project. Fuel selection, design and optimization are important steps of the project. Three types of fuel are considered as candidates: highly enriched Pu-U mixed oxide (MOX) fuel for the first core, MOX containing between 2.5% and 5.0% of the minor actinides (MA) for the next core, and Pu-U-MA nitride fuel as an advanced option. Reference fuel rods with claddings made of T91 ferrite-martensitic steel and two alternative fuel assembly designs (one uses a closed hexagonal wrapper and the other is an open square variant without a wrapper) have been assessed. This study focuses on the core variant with the closed hexagonal fuel assemblies. Based on the neutronic parameters provided by Monte Carlo modeling with the MCNP5 and ALEPH codes, simulations have been carried out to assess the long-term thermal-mechanical behaviour of the hottest fuel rods. A modified version of the fuel performance code FEMAXI-SCK-1, adapted for a fast neutron spectrum and for the new fuels, cladding materials and coolant, was utilized for these calculations. The obtained results show that the fuel rods can withstand more than four effective full power years under normal operating conditions without pellet-cladding mechanical interaction (PCMI). In the variant with solid fuel pellets, a mild PCMI can appear during the fifth year; however, it remains at an acceptable level up to the end of operation, when the peak fuel pellet burnup of ∼80 MW d kg-1 of heavy metal (HM) and a maximum clad damage of about 82 displacements per atom (dpa) are reached. Annular pellets make it possible to delay PCMI by about one year. Based on the results of this simulation, further steps are envisioned for the optimization of the fuel rod design, aiming at achieving a fuel burnup of 100 MW d kg-1 of HM.

  2. Coding, Constant Comparisons, and Core Categories: A Worked Example for Novice Constructivist Grounded Theorists.

    PubMed

    Giles, Tracey M; de Lacey, Sheryl; Muir-Cochrane, Eimear

    2016-01-01

    Grounded theory method has been described extensively in the literature. Yet, the varying processes portrayed can be confusing for novice grounded theorists. This article provides a worked example of the data analysis phase of a constructivist grounded theory study that examined family presence during resuscitation in acute health care settings. Core grounded theory methods are exemplified, including initial and focused coding, constant comparative analysis, memo writing, theoretical sampling, and theoretical saturation. The article traces the construction of the core category "Conditional Permission" from initial and focused codes, subcategories, and properties, through to its position in the final substantive grounded theory.

  3. RAMONA-4B a computer code with three-dimensional neutron kinetics for BWR and SBWR system transient - user`s manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rohatgi, U.S.; Cheng, H.S.; Khan, H.J.

    This document is the User's Manual for the Boiling Water Reactor (BWR) and Simplified Boiling Water Reactor (SBWR) systems transient code RAMONA-4B. The code uses a three-dimensional neutron-kinetics model coupled with a multichannel, nonequilibrium, drift-flux, two-phase-flow model of the thermal hydraulics of the reactor vessel. The code is designed to analyze a wide spectrum of BWR core and system transients. Chapter 1 gives an overview of the code's capabilities and limitations; Chapter 2 describes the code's structure, lists major subroutines, and discusses the computer requirements. Chapter 3 covers the code, auxiliary codes, and instructions for running RAMONA-4B on Sun SPARC and IBM workstations. Chapter 4 contains component descriptions and detailed card-by-card input instructions. Chapter 5 provides samples of the tabulated output for the steady-state and transient calculations and discusses the plotting procedures for the steady-state and transient calculations. Three appendices contain important user and programmer information: lists of plot variables (Appendix A), listings of the input deck for a sample problem (Appendix B), and a description of the plotting program PAD (Appendix C). 24 refs., 18 figs., 11 tabs.

  4. GrowYourIC: an open access Python code to facilitate comparison between kinematic models of inner core evolution and seismic observations

    NASA Astrophysics Data System (ADS)

    Lasbleis, M.; Day, E. A.; Waszek, L.

    2017-12-01

    The complex nature of inner core structure has been well-established from seismic studies, with heterogeneities at various length scales, both radially and laterally. Despite this, no geodynamic model has successfully explained all of the observed seismic features. To facilitate comparisons between seismic observations and geodynamic models of inner core growth we have developed a new, open access Python tool - GrowYourIC - that allows users to compare models of inner core structure. The code allows users to simulate different evolution models of the inner core, with user-defined rates of inner core growth, translation and rotation. Once the user has "grown" an inner core with their preferred parameters they can then explore the effect of "their" inner core's evolution on the relative age and growth rate in different regions of the inner core. The code will convert these parameters into seismic properties using either built-in mineral physics models, or user-supplied ones that calculate these seismic properties with users' own preferred mineralogical models. The 3D model of isotropic inner core properties can then be used to calculate the predicted seismic travel time anomalies for a random, or user-specified, set of seismic ray paths through the inner core. A real dataset of inner core body-wave differential travel times is included for the purpose of comparing user-generated models of inner core growth to actual observed travel time anomalies in the top 100km of the inner core. Here, we explore some of the possibilities of our code. We investigate the effect of the limited illumination of the inner core by seismic waves on the robustness of kinematic model interpretation. We test the impact on seismic differential travel time observations of several kinematic models of inner core growth: fast lateral translation; slow differential growth; and inner core super-rotation. We find that a model of inner core evolution incorporating both differential growth and slow super-rotation is able to recreate some of the more intricate details of the seismic observations. Specifically we are able to "grow" an inner core that has an asymmetric shift in isotropic hemisphere boundaries with increasing depth in the inner core.
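
    The kinematic bookkeeping behind such models is compact enough to sketch in a few lines of plain Python. The sketch below illustrates the idea only and is not the GrowYourIC API: under an assumed sqrt(t) growth law with super-rotation, a point's crystallization time follows from its radius, and its original longitude follows from the rotation accumulated since. The rates and age are illustrative.

        # Kinematic growth + super-rotation bookkeeping for an inner-core point.
        R_IC = 1221.0                          # present inner-core radius (km)
        AGE_IC = 0.5e9                         # assumed inner-core age (yr)
        OMEGA = 1.0e-7                         # assumed super-rotation rate (deg/yr)

        def time_since_freezing(r):
            # Assume the radius grows as r(t) = R_IC * sqrt(t / AGE_IC).
            return AGE_IC * (1.0 - (r / R_IC) ** 2)

        def longitude_at_freezing(r, phi_now_deg):
            # Undo the super-rotation accumulated since crystallization.
            return (phi_now_deg - OMEGA * time_since_freezing(r)) % 360.0

        r = R_IC - 100.0                       # a point 100 km below the ICB
        print(time_since_freezing(r) / 1e6, "Myr since freezing;",
              longitude_at_freezing(r, 30.0), "deg original longitude")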

  5. Development of Safety Analysis Code System of Beam Transport and Core for Accelerator Driven System

    NASA Astrophysics Data System (ADS)

    Aizawa, Naoto; Iwasaki, Tomohiko

    2014-06-01

    A safety analysis code system for the beam transport and core of an accelerator-driven system (ADS) has been developed for the analysis of beam transients such as changes in the shape and position of the incident beam. The code system consists of a beam transport analysis part and a core analysis part. TRACE 3-D is employed in the beam transport analysis part, and the shape and incident position of the beam at the target are calculated. In the core analysis part, the neutronics, thermal-hydraulics, and cladding-failure analyses are performed with the ADS dynamic calculation code ADSE, on the basis of an external source database calculated by PHITS and a cross-section database calculated by SRAC, together with cladding-failure analysis programs for thermoelastic deformation and creep. Using this code system, beam transient analyses were performed for the ADS proposed by the Japan Atomic Energy Agency. The results show that the cladding temperature increases rapidly and plastic deformation occurs within several seconds; in addition, the cladding is predicted to fail by creep within a hundred seconds. These results show that beam transients can cause cladding failure.
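
    A common way to estimate time-to-failure under a temperature transient of this kind is a life-fraction (time-fraction) rule: integrate dt / t_r(T, σ), where the rupture time t_r comes from a creep correlation such as a Larson-Miller fit, and predict failure when the accumulated fraction reaches one. The sketch below is generic, with made-up correlation constants; it is not the failure model of the referenced code system.

        # Generic creep life-fraction estimate for a cladding temperature transient.
        # Correlation constants here are illustrative placeholders, not design data.
        import numpy as np

        def rupture_time_hours(T_kelvin, stress_mpa, C=20.0, a=25000.0, b=3000.0):
            # Larson-Miller form: LMP = T*(C + log10(t_r)), with a crude linear
            # stress dependence LMP(sigma) = a - b*log10(sigma) (placeholder fit).
            lmp = a - b * np.log10(stress_mpa)
            return 10.0 ** (lmp / T_kelvin - C)

        def life_fraction(times_s, temps_k, stress_mpa):
            # Accumulate dt / t_r over the transient; failure predicted at >= 1.
            damage, hours = 0.0, 3600.0
            for dt, T in zip(np.diff(times_s), temps_k[1:]):
                damage += (dt / hours) / rupture_time_hours(T, stress_mpa)
            return damage

        t = np.linspace(0.0, 100.0, 1001)   # 100 s transient
        T = 900.0 + 6.0 * t                 # cladding heating ~6 K/s (illustrative)
        print("accumulated life fraction:", life_fraction(t, T, stress_mpa=80.0))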

  6. Porting ONETEP to graphical processing unit-based coprocessors. 1. FFT box operations.

    PubMed

    Wilkinson, Karl; Skylaris, Chris-Kriton

    2013-10-30

    We present the first graphical processing unit (GPU) coprocessor-enabled version of the Order-N Electronic Total Energy Package (ONETEP) code for linear-scaling first principles quantum mechanical calculations on materials. This work focuses on porting to the GPU the parts of the code that involve atom-localized fast Fourier transform (FFT) operations. These are among the most computationally intensive parts of the code and are used in core algorithms such as the calculation of the charge density, the local potential integrals, the kinetic energy integrals, and the nonorthogonal generalized Wannier function gradient. We have found that direct porting of the isolated FFT operations did not provide any benefit. Instead, it was necessary to tailor the port to each of the aforementioned algorithms to optimize data transfer to and from the GPU. A detailed discussion of the methods used and tests of the resulting performance are presented, which show that individual steps in the relevant algorithms are accelerated by a significant amount. However, the transfer of data between the GPU and host machine is a significant bottleneck in the reported version of the code. In addition, an initial investigation into a dynamic precision scheme for the ONETEP energy calculation has been performed to take advantage of the enhanced single precision capabilities of GPUs. The methods used here result in no disruption to the existing code base. Furthermore, as the developments reported here concern the core algorithms, they will benefit the full range of ONETEP functionality. Our use of a directive-based programming model ensures portability to other forms of coprocessors and will allow this work to form the basis of future developments to the code designed to support emerging high-performance computing platforms. Copyright © 2013 Wiley Periodicals, Inc.

  7. Optimization of 3D Field Design

    NASA Astrophysics Data System (ADS)

    Logan, Nikolas; Zhu, Caoxiang

    2017-10-01

    Recent progress in 3D tokamak modeling is now leveraged to create a conceptual design of new external 3D field coils for the DIII-D tokamak. Using the IPEC dominant mode as a target spectrum, the Finding Optimized Coils Using Space-curves (FOCUS) code optimizes the currents and 3D geometry of multiple coils to maximize the total set's resonant coupling. The optimized coils are individually distorted in space, creating toroidal "arrays" containing a variety of shapes that often wrap around a significant poloidal extent of the machine. The generalized perturbed equilibrium code (GPEC) is used to determine optimally efficient spectra for driving total, core, and edge neoclassical toroidal viscosity (NTV) torque, and these too provide targets for the optimization of 3D coil designs. These conceptual designs represent a fundamentally new approach to 3D coil design for tokamaks targeting desired plasma physics phenomena. Optimized coil sets based on plasma response theory will be relevant to designs for future reactors or on any active machine. External coils, in particular, must be optimized for reliable and efficient fusion reactor designs. Work supported by the US Department of Energy under DE-AC02-09CH11466.
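
    The essence of such coil optimization can be sketched compactly: parameterize each coil as a closed space curve (e.g., a truncated Fourier series), evaluate its field on the plasma boundary by Biot-Savart, and let an optimizer adjust the curve coefficients to match a target spectrum. The toy example below is not the FOCUS implementation; the geometry, figure of merit, and target pattern are simplified assumptions for illustration.

        # Toy space-curve coil optimization in the spirit of FOCUS (not its code).
        import numpy as np
        from scipy.optimize import minimize

        MU0 = 4e-7 * np.pi
        theta = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)

        # Evaluation ring standing in for the plasma boundary (radius 1 m, z = 0).
        ring = np.stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)], axis=1)
        normal = ring.copy()                  # outward normals of the ring
        target = np.cos(2.0 * theta)          # target normal-field pattern (n = 2)

        def coil_points(c):
            # Closed curve from low-order Fourier coefficients: base circle of
            # radius 1.5 m plus radial and vertical n = 2 distortions.
            r = 1.5 + c[0] * np.cos(2 * theta) + c[1] * np.sin(2 * theta)
            z = 0.3 + c[2] * np.cos(2 * theta) + c[3] * np.sin(2 * theta)
            return np.stack([r * np.cos(theta), r * np.sin(theta), z], axis=1)

        def bnormal(c, current=1.0e5):
            # Discrete Biot-Savart from straight segments of the closed curve.
            pts = coil_points(c)
            seg_mid = 0.5 * (pts + np.roll(pts, -1, axis=0))
            seg_dl = np.roll(pts, -1, axis=0) - pts
            b = np.zeros_like(ring)
            for mid, dl in zip(seg_mid, seg_dl):
                rvec = ring - mid
                dist = np.linalg.norm(rvec, axis=1, keepdims=True)
                b += MU0 * current / (4 * np.pi) * np.cross(dl, rvec) / dist**3
            return np.sum(b * normal, axis=1)

        def cost(c):
            # Mismatch between the produced and target normal-field shapes.
            bn = bnormal(c)
            return np.sum((bn / np.max(np.abs(bn)) - target) ** 2)

        best = minimize(cost, x0=np.zeros(4), method="Nelder-Mead")
        print("optimized Fourier coefficients:", best.x)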

  8. The extent of food advertising to children on UK television in 2008.

    PubMed

    Boyland, Emma J; Harrold, Joanne A; Kirkham, Tim C; Halford, Jason C G

    2011-10-01

    To provide the most comprehensive analysis to date of the extent of food advertising on UK television channels popular with young people following regulatory reform of this type of marketing activity, UK television was recorded 06:00-22:00 h for one weekday and one weekend day every month between January and December 2008, for 14 of the most popular commercial channels broadcasting children's/family viewing. Recordings were screened for advertisements, which were coded according to predefined categories, including whether they were broadcast in peak or non-peak children's viewing time. Food advertisements were coded as core (healthy), non-core (unhealthy), or miscellaneous foods. Food and drinks were the third most heavily advertised product category, and a significantly greater proportion of advertisements were for food/drinks during peak compared to non-peak children's viewing times. A significantly greater proportion of the advertisements broadcast around soap operas than around children's programmes were for food/drinks. Children's channels broadcast a significantly greater proportion of non-core food advertisements than the family channels. There were significant differences between recording months in the proportion of core/non-core/miscellaneous food advertisements. Despite regulation, children in the UK are exposed to more TV advertising for unhealthy than healthy food items, even at peak children's viewing times. There remains scope to strengthen the rules regarding advertising of HFSS (high in fat, salt, or sugar) foods around programming popular with children and adults alike, where current regulations do not apply. Ongoing, systematic monitoring is essential for evaluating the effectiveness of regulations designed to reduce children's exposure to HFSS food advertising on television in the UK.

  9. Computational mechanics analysis tools for parallel-vector supercomputers

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.; Nguyen, Duc T.; Baddourah, Majdi; Qin, Jiangning

    1993-01-01

    Computational algorithms for structural analysis on parallel-vector supercomputers are reviewed. These parallel algorithms, developed by the authors, are for the assembly of structural equations, 'out-of-core' strategies for linear equation solution, massively distributed-memory equation solution, unsymmetric equation solution, general eigensolution, geometrically nonlinear finite element analysis, design sensitivity analysis for structural dynamics, optimization search analysis and domain decomposition. The source code for many of these algorithms is available.

  10. Application of ATHLET/DYN3D coupled codes system for fast liquid metal cooled reactor steady state simulation

    NASA Astrophysics Data System (ADS)

    Ivanov, V.; Samokhin, A.; Danicheva, I.; Khrennikov, N.; Bouscuet, J.; Velkov, K.; Pasichnyk, I.

    2017-01-01

    In this paper, the approaches used to develop a test model of the BN-800 reactor and to validate coupled neutron-physics and thermal-hydraulic calculations are described. The coupled codes ATHLET 3.0 (a code for thermal-hydraulic calculations of reactor transients) and DYN3D (a three-dimensional neutron-kinetics code) are used for the calculations. The main calculation results for the reactor steady-state condition are provided. The 3-D model used for the neutron calculations was developed for the initial BN-800 core loading. A homogeneous approach is used to describe the reactor assemblies. Along with the main simplifications, the main BN-800 core zones are described (LEZ, MEZ, HEZ, MOX, blankets). The 3D neutron-physics calculations were performed with a 28-group library based on the evaluated nuclear data library ENDF/B-VII.0. The SCALE code was used for the preparation of group constants. The nodalization hydraulic model has boundary conditions given by coolant mass flow rate at the core inlet and by pressure and enthalpy at the core outlet, which can be chosen depending on the reactor state. Core inlet and outlet temperatures were chosen according to the reactor nominal state. The profiling of coolant mass flow rate through the core is based on the reactor power distribution. Test thermal-hydraulic calculations made with the developed model showed acceptable results for the coolant mass flow rate distribution through the reactor core and for the axial temperature and pressure distributions. The model will be upgraded in the future for the analysis of different transients in liquid-metal-cooled fast reactors of the BN type, including reactivity transients (control rod withdrawal, stop of the main circulation pump, etc.).

  11. Biobar-coded gold nanoparticles and DNAzyme-based dual signal amplification strategy for ultrasensitive detection of protein by electrochemiluminescence.

    PubMed

    Xia, Hui; Li, Lingling; Yin, Zhouyang; Hou, Xiandeng; Zhu, Jun-Jie

    2015-01-14

    A dual signal amplification strategy for an electrochemiluminescence (ECL) aptasensor was designed based on biobar-coded gold nanoparticles (Au NPs) and a DNAzyme. CdSeTe@ZnS quantum dots (QDs) were chosen as the ECL signal probes. To verify the proposed ultrasensitive ECL aptasensor for biomolecules, we detected thrombin (Tb) as a proof-of-principle analyte. The hairpin DNA designed for the recognition of the protein consists of two parts: the sequence of the catalytic 8-17 DNAzyme and the thrombin aptamer. Only in the presence of thrombin could the hairpin DNA be opened, followed by a recycling cleavage of excess substrates by the catalytic core of the DNAzyme to induce the first-step amplification. One part of the fragments was captured to open the capture DNA modified on the Au electrode, which further connected with the prepared biobar-coded Au NPs-CdSeTe@ZnS QDs to yield the final dual-amplified ECL signal. The limit of detection for Tb was 0.28 fM with excellent selectivity, and the proposed method performed well in real sample analysis. This design introduces the concept of dual signal amplification by a biobar-coded system and DNAzyme recycling into ECL determination, and it can likely be extended to provide a highly sensitive platform for various target biomolecules.

  12. Proceedings of the Workshop on software tools for distributed intelligent control systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herget, C.J.

    1990-09-01

    The Workshop on Software Tools for Distributed Intelligent Control Systems was organized by Lawrence Livermore National Laboratory for the United States Army Headquarters Training and Doctrine Command and the Defense Advanced Research Projects Agency. The goals of the workshop were to identify the current state of the art in tools that support control systems engineering design and implementation; to identify research issues associated with writing software tools that would provide a design environment to assist engineers in multidisciplinary control design and implementation; to formulate a potential investment strategy to resolve the research issues and develop public-domain code that can form the core of more powerful engineering design tools; and to recommend test cases to focus the software development process and test associated performance metrics. Recognizing that the development of software tools for distributed intelligent control systems will require a multidisciplinary effort, experts in systems engineering, control systems engineering, and computer science were invited to participate in the workshop. In particular, experts who could address the following topics were selected: operating systems; engineering data representation and manipulation; emerging standards for manufacturing data; mathematical foundations; coupling of symbolic and numerical computation; user interfaces; system identification; system representation at different levels of abstraction; system specification; system design; verification and validation; automatic code generation; and integration of modular, reusable code.

  13. Translation of the first upstream ORF in the hepatitis B virus pregenomic RNA modulates translation at the core and polymerase initiation codons

    PubMed Central

    Chen, Augustine; Kao, Y. F.; Brown, Chris M.

    2005-01-01

    The human hepatitis B virus (HBV) has a compact genome encoding four major overlapping coding regions: the core, polymerase, surface, and X. The polymerase initiation codon is preceded by the partially overlapping core initiation codon and four or more upstream initiation codons. There is evidence that several mechanisms are used to enable the synthesis of the polymerase protein, including leaky scanning and ribosome reinitiation. We have examined the first AUG in the pregenomic RNA, which precedes that of the core. It initiates a previously uncharacterized short upstream open reading frame (uORF), highly conserved in all HBV subtypes, which we designated the C0 ORF. This arrangement suggested that expression of the core and polymerase may be affected by this uORF. Initiation at the C0 ORF was confirmed in reporter constructs in transfected cells. The C0 ORF had an inhibitory role in downstream expression from the core initiation site in HepG2 cells and in vitro, but also stimulated reinitiation at the polymerase start when in an optimal context. Our results indicate that the C0 ORF is a determinant in balancing the synthesis of the core and polymerase proteins. PMID:15731337

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salko, Robert K; Sung, Yixing; Kucukboyaci, Vefa

    The Virtual Environment for Reactor Applications core simulator (VERA-CS), being developed by the Consortium for Advanced Simulation of Light Water Reactors (CASL), includes coupled neutronics, thermal-hydraulics, and fuel temperature components with an isotopic depletion capability. The neutronics capability employed is based on MPACT, a three-dimensional (3-D) whole-core transport code. The thermal-hydraulics and fuel temperature models are provided by the COBRA-TF (CTF) subchannel code. As part of the CASL development program, the VERA-CS (MPACT/CTF) code system was applied to model and simulate the reactor core response with respect to departure from nucleate boiling ratio (DNBR) at the limiting time step of a postulated pressurized water reactor (PWR) main steamline break (MSLB) event initiated at hot zero power (HZP), either with offsite power available and the reactor coolant pumps in operation (high-flow case) or without offsite power, where the reactor core is cooled through natural circulation (low-flow case). The VERA-CS simulation was based on core boundary conditions from RETRAN-02 system transient calculations and STAR-CCM+ computational fluid dynamics (CFD) core inlet distribution calculations. The evaluation indicated that the VERA-CS code system is capable of modeling and simulating quasi-steady-state reactor core response under the steamline break (SLB) accident condition, that the results are insensitive to uncertainties in the inlet flow distributions from the CFD simulations, and that the high-flow case is more DNB-limiting than the low-flow case.

  15. Benchmark of Atucha-2 PHWR RELAP5-3D control rod model by Monte Carlo MCNP5 core calculation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pecchia, M.; D'Auria, F.; Mazzantini, O.

    2012-07-01

    Atucha-2 is a Siemens-designed PHWR under construction in the Republic of Argentina. Its geometrical complexity and peculiarities require the adoption of advanced Monte Carlo codes to perform realistic neutronic simulations; therefore, core models of the Atucha-2 PHWR were developed using MCNP5. In this work, a methodology was set up to collect the flux in the hexagonal mesh by which the Atucha-2 core is represented. The scope of this activity is to evaluate the effect of obliquely inserted control rods on the neutron flux, in order to validate the RELAP5-3D©/NESTLE three-dimensional neutron-kinetics coupled thermal-hydraulic model applied by GRNSPG/UNIPI for performing selected transients of Chapter 15 of the Atucha-2 FSAR. (authors)

  16. Decay Heat Removal from a GFR Core by Natural Convection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Wesley C.; Hejzlar, Pavel; Driscoll, Michael J.

    2004-07-01

    One of the primary challenges for Gas-cooled Fast Reactors (GFR) is decay heat removal after a loss of coolant accident (LOCA). Because the thermal gas cooled reactors currently under design rely on passive mechanisms to dissipate decay heat, there is a strong motivation to accomplish GFR core cooling through natural phenomena. This work investigates the potential of post-LOCA decay heat removal from a GFR core to a heat sink using an external convection loop. A model was developed in the form of the LOCA-COLA (Loss of Coolant Accident - Convection Loop Analysis) computer code as a means for 1-D steady-state convective heat transfer loop analysis. The results show that decay heat removal by means of gas-cooled natural circulation is feasible under elevated post-LOCA containment pressure conditions. (authors)
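
    In a 1-D steady-state loop model of this kind, the governing balance is that the buoyancy head generated by the density difference between the hot riser and cold downcomer equals the friction and form losses around the loop; solving that scalar balance for the mass flow rate, with the temperature rise itself tied to the decay power, gives the natural-circulation operating point. The sketch below solves such a balance for an ideal-gas coolant; the geometry, loss coefficient, and properties are illustrative assumptions, not LOCA-COLA inputs.

        # Minimal natural-circulation balance for a gas loop (illustrative, not LOCA-COLA).
        from scipy.optimize import brentq

        # Illustrative loop parameters
        Q = 2.0e5        # decay power removed, W
        H = 10.0         # thermal-center elevation difference, m
        A = 0.05         # flow area, m^2
        K = 8.0          # lumped loop loss coefficient
        CP = 5193.0      # helium specific heat, J/kg-K
        P = 7.0e5        # elevated post-LOCA pressure, Pa
        RGAS = 2077.0    # helium gas constant, J/kg-K
        T_COLD = 400.0   # heat-sink return temperature, K

        def residual(mdot):
            # Energy balance fixes the hot-leg temperature for a given flow...
            T_hot = T_COLD + Q / (mdot * CP)
            rho_c = P / (RGAS * T_COLD)      # ideal-gas densities
            rho_h = P / (RGAS * T_hot)
            # ...then buoyancy head minus loop pressure losses must vanish.
            buoyancy = (rho_c - rho_h) * 9.81 * H
            losses = K * mdot**2 / (2.0 * rho_h * A**2)
            return buoyancy - losses

        mdot = brentq(residual, 1e-4, 50.0)
        print(f"natural-circulation flow: {mdot:.3f} kg/s, "
              f"core dT: {Q/(mdot*CP):.1f} K")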

  17. Reconfigurable Very Long Instruction Word (VLIW) Processor

    NASA Technical Reports Server (NTRS)

    Velev, Miroslav N.

    2015-01-01

    Future NASA missions will depend on radiation-hardened, power-efficient processing systems-on-a-chip (SOCs) that consist of a range of processor cores custom tailored for space applications. Aries Design Automation, LLC, has developed a processing SOC that is optimized for software-defined radio (SDR) uses. The innovation implements the Institute of Electrical and Electronics Engineers (IEEE) RazorII voltage management technique, a microarchitectural mechanism that allows processor cores to self-monitor, self-analyze, and self-heal after timing errors, regardless of their cause (e.g., radiation; chip aging; variations in voltage, frequency, temperature, or manufacturing process). This highly automated SOC can also execute legacy binary code for the PowerPC 750 instruction set architecture (ISA), which is used in the flight-control computers of many previous NASA space missions. In developing this innovation, Aries Design Automation has made significant contributions to the fields of formal verification of complex pipelined microprocessors and Boolean satisfiability (SAT), and has developed highly efficient electronic design automation tools that hold promise for future developments.

  18. Design study of long-life PWR using thorium cycle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Subkhi, Moh. Nurul; Su'ud, Zaki; Waris, Abdul

    2012-06-06

    A design study of a long-life Pressurized Water Reactor (PWR) using the thorium cycle has been performed. The thorium cycle in general has a higher conversion ratio in the thermal spectrum domain than the uranium cycle. Cell, burn-up, and multigroup diffusion calculations were performed with the PIJ and CITATION modules of the SRAC code system, using libraries based on JENDL-3.2. The neutronic analysis of the infinite cell calculation shows that ²³¹Pa is better than ²³⁷Np as a burnable poison in a thorium fuel system. A thorium oxide system with 8% ²³³U enrichment and 7.6-8% ²³¹Pa is the most suitable fuel for a small long-life PWR core because it gives a reactivity swing of less than 1% Δk/k and a longer burn-up period (more than 20 years). Using this result, a small long-life PWR core can be designed for long-term operation with excess reactivity as low as 0.53% Δk/k and reduced power peaking during operation.

  19. Epitaxial Growth of Hetero-Ln-MOF Hierarchical Single Crystals for Domain- and Orientation-Controlled Multicolor Luminescence 3D Coding Capability.

    PubMed

    Pan, Mei; Zhu, Yi-Xuan; Wu, Kai; Chen, Ling; Hou, Ya-Jun; Yin, Shao-Yun; Wang, Hai-Ping; Fan, Ya-Nan; Su, Cheng-Yong

    2017-11-13

    Core-shell or striped heteroatomic lanthanide metal-organic framework hierarchical single crystals were obtained by liquid-phase anisotropic epitaxial growth, maintaining identical periodic organization while simultaneously exhibiting spatially segregated structure. Different types of domain- and orientation-controlled multicolor photophysical models are presented, which show either visually distinguishable or visible/near-infrared (NIR) emissive colors. This provides a new bottom-up strategy toward the design of hierarchical molecular systems, offering high-throughput and multiplexed luminescence color tunability and readability. The unique capability of combining spectroscopic coding with 3D (three-dimensional) microscale spatial coding is established, providing potential applications in anti-counterfeiting, color barcoding, and other types of integrated and miniaturized optoelectronic materials and devices. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. HowTo - Easy use of global unique identifier

    NASA Astrophysics Data System (ADS)

    Czerniak, A.; Fleischer, D.; Schirnick, C.

    2013-12-01

    The GEOMAR sample and core repository holds several thousand samples and cores collected over the last decades. In the current project, we bring this collection up to date by tagging every sample and core with a unique identifier, in our case the International Geo Sample Number (IGSN). This work is done with our digital-ink and handwriting-recognition implementation; the smart-pen technology saves time and resources when recording the information for every sample or core. The recording procedure involves several systematic steps: 1. Gather all information about the core or sample, such as cruise number, responsible person, and so on. 2. Tag it with a unique identifier, in our case a QR code. 3. Write down the storage location of the sample or core. After the information is transmitted from the smart pen (currently via USB, though wireless is an option) to our server infrastructure, the linking to other information begins. Once linked in our Virtual Research Environment (VRE) via the unique identifier (IGSN), a sample or core can be located, and the QR code links back from the core or sample to the IGSN and additional scientific information.
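
    Generating such identifier tags is straightforward to script. The sketch below is an illustration, not the GEOMAR workflow: it uses the Python qrcode package to encode a resolvable IGSN-style URL for each sample, where the sample identifiers and the landing-page URL pattern are made up for the example.

        # Illustrative QR-code tag generation for sample identifiers (not GEOMAR's code).
        # Requires: pip install qrcode[pil]
        import qrcode

        # Hypothetical sample identifiers and resolver URL pattern.
        sample_ids = ["XYZ0001A", "XYZ0001B", "XYZ0002A"]

        for sid in sample_ids:
            url = f"https://example.org/igsn/{sid}"  # placeholder resolver, not a real IGSN handle
            img = qrcode.make(url)                   # build the QR code image
            img.save(f"label_{sid}.png")             # one printable label per sample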

  1. Uncertainty quantification and sensitivity analysis with CASL Core Simulator VERA-CS

    DOE PAGES

    Brown, C. S.; Zhang, Hongbin

    2016-05-24

    Uncertainty quantification and sensitivity analysis are important for nuclear reactor safety design and analysis. A 2x2 fuel assembly core design was developed and simulated with the Virtual Environment for Reactor Applications, Core Simulator (VERA-CS), a coupled neutronics and thermal-hydraulics code under development by the Consortium for Advanced Simulation of Light Water Reactors (CASL). An approach to uncertainty quantification and sensitivity analysis with VERA-CS was developed, and a new toolkit was created to perform uncertainty quantification and sensitivity analysis with fourteen uncertain input parameters. The minimum departure from nucleate boiling ratio (MDNBR), maximum fuel centerline temperature, and maximum outer clad surface temperature were chosen as the figures of merit. Pearson, Spearman, and partial correlation coefficients were considered for all of the figures of merit in the sensitivity analysis, and coolant inlet temperature was consistently the most influential parameter. Parameters used as inputs to the critical heat flux calculation with the W-3 correlation were shown to be the most influential on the MDNBR, maximum fuel centerline temperature, and maximum outer clad surface temperature.
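
    The sensitivity measures named above are simple to compute from a sampled input matrix and the resulting figures of merit. The sketch below computes Pearson, Spearman, and (regression-residual-based) partial correlation coefficients for one input against one output; the data are randomly generated stand-ins, not VERA-CS results.

        # Pearson, Spearman, and partial correlation from sampled UQ data (stand-in data).
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(42)
        n = 200
        X = rng.normal(size=(n, 3))                  # three sampled uncertain inputs
        y = 2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.3, size=n)  # toy response

        pearson = stats.pearsonr(X[:, 0], y)[0]
        spearman = stats.spearmanr(X[:, 0], y)[0]

        def partial_corr(x, y, Z):
            # Correlate the residuals of x and y after removing the linear
            # influence of the remaining inputs Z.
            A = np.column_stack([np.ones(len(x)), Z])
            rx = x - A @ np.linalg.lstsq(A, x, rcond=None)[0]
            ry = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
            return stats.pearsonr(rx, ry)[0]

        partial = partial_corr(X[:, 0], y, X[:, 1:])
        print(f"Pearson {pearson:.3f}, Spearman {spearman:.3f}, partial {partial:.3f}")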

  3. Nebo: An efficient, parallel, and portable domain-specific language for numerically solving partial differential equations

    DOE PAGES

    Earl, Christopher; Might, Matthew; Bagusetty, Abhishek; ...

    2016-01-26

    This study presents Nebo, a declarative domain-specific language embedded in C++ for discretizing partial differential equations for transport phenomena on multiple architectures. Application programmers use Nebo to write code that appears sequential but can be run in parallel, without editing the code. Currently Nebo supports single-thread execution, multi-thread execution, and many-core (GPU-based) execution. With single-thread execution, Nebo performs on par with code written by domain experts. With multi-thread execution, Nebo can linearly scale (with roughly 90% efficiency) up to 12 cores, compared to its single-thread execution. Moreover, Nebo’s many-core execution can be over 140x faster than its single-thread execution.
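
    The core trick of an embedded DSL like the one described is to have field expressions build an expression tree instead of evaluating eagerly, so one source expression can later be executed by different backends (serial loop, threads, GPU). The Python sketch below mimics that pattern with operator overloading and a deliberately tiny interpreter; it only illustrates the idea and bears no relation to Nebo's C++ implementation.

        # Minimal lazy expression-template pattern in Python (concept demo only).
        import numpy as np

        class Expr:
            # Build a tree; evaluate only when a backend asks for it.
            def __add__(self, other): return BinOp(np.add, self, wrap(other))
            def __mul__(self, other): return BinOp(np.multiply, self, wrap(other))

        class Field(Expr):
            def __init__(self, data): self.data = np.asarray(data, dtype=float)
            def evaluate(self): return self.data

        class Const(Expr):
            def __init__(self, value): self.value = value
            def evaluate(self): return self.value

        class BinOp(Expr):
            def __init__(self, op, lhs, rhs): self.op, self.lhs, self.rhs = op, lhs, rhs
            def evaluate(self):
                # A real backend could split this work over threads or offload it
                # to a GPU; here the "backend" is a single vectorized numpy call.
                return self.op(self.lhs.evaluate(), self.rhs.evaluate())

        def wrap(x): return x if isinstance(x, Expr) else Const(x)

        phi = Field([1.0, 2.0, 3.0])
        rhs = phi * 0.5 + Field([10.0, 20.0, 30.0])  # looks sequential, builds a tree
        print(rhs.evaluate())                        # backend executes it: [10.5 21. 31.5]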

  4. Optimizing Performance of Combustion Chemistry Solvers on Intel's Many Integrated Core (MIC) Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sitaraman, Hariswaran; Grout, Ray W

    This work investigates novel algorithm designs and optimization techniques for restructuring chemistry integrators in zero- and multidimensional combustion solvers, so that they can be used effectively on the emerging generation of Intel Many Integrated Core (MIC)/Xeon Phi processors. These processors offer increased computing performance via a large number of lightweight cores at relatively lower clock speeds than the traditional processors (e.g., Intel Sandy Bridge/Ivy Bridge) used in current supercomputers. This style of processor can be productively used for chemistry integrators that form a costly part of computational combustion codes, in spite of their relatively lower clock speeds. Performance commensurate with traditional processors is achieved here through a combination of careful memory layout, exposing multiple levels of fine-grained parallelism, and extensive use of vendor-supported libraries (Cilk Plus and the Math Kernel Library). Important optimization techniques for efficient memory usage and vectorization have been identified and quantified. These optimizations resulted in speed-ups of a factor of ~3 using the Intel 2013 compiler and ~1.5 using the Intel 2017 compiler for large chemical mechanisms, compared to the unoptimized version on the Intel Xeon Phi. The strategies, especially with respect to memory usage and vectorization, should also be beneficial for general-purpose computational fluid dynamics codes.

  6. Design of an Object-Oriented Turbomachinery Analysis Code: Initial Results

    NASA Technical Reports Server (NTRS)

    Jones, Scott M.

    2015-01-01

    Performance prediction of turbomachines is a significant part of aircraft propulsion design. In the conceptual design stage, there is an important need to quantify compressor and turbine aerodynamic performance and develop initial geometry parameters at the 2-D level prior to more extensive Computational Fluid Dynamics (CFD) analyses. The Object-oriented Turbomachinery Analysis Code (OTAC) is being developed to perform 2-D meridional flowthrough analysis of turbomachines using an implicit formulation of the governing equations to solve for the conditions at the exit of each blade row. OTAC is designed to perform meanline or streamline calculations; for streamline analyses simple radial equilibrium is used as a governing equation to solve for spanwise property variations. While the goal for OTAC is to allow simulation of physical effects and architectural features unavailable in other existing codes, it must first prove capable of performing calculations for conventional turbomachines. OTAC is being developed using the interpreted language features available in the Numerical Propulsion System Simulation (NPSS) code described by Claus et al (1991). Using the NPSS framework came with several distinct advantages, including access to the pre-existing NPSS thermodynamic property packages and the NPSS Newton-Raphson solver. The remaining objects necessary for OTAC were written in the NPSS framework interpreted language. These new objects form the core of OTAC and are the BladeRow, BladeSegment, TransitionSection, Expander, Reducer, and OTACstart Elements. The BladeRow and BladeSegment consumed the initial bulk of the development effort and required determining the equations applicable to flow through turbomachinery blade rows given specific assumptions about the nature of that flow. Once these objects were completed, OTAC was tested and found to agree with existing solutions from other codes; these tests included various meanline and streamline comparisons of axial compressors and turbines at design and off-design conditions.
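
    The "simple radial equilibrium" mentioned above is a standard relation balancing the radial pressure gradient against the centripetal acceleration of the swirling flow at a computing station; it is stated here for reference from the general turbomachinery literature (not quoted from OTAC documentation), with the full non-isentropic form alongside:

        \frac{\partial p}{\partial r} \;=\; \frac{\rho\, v_\theta^{2}}{r},
        \qquad
        \frac{d h_0}{d r} \;=\; T\frac{d s}{d r} + v_x \frac{d v_x}{d r}
          + \frac{v_\theta}{r}\,\frac{d\,(r v_\theta)}{d r},

    where p is the static pressure, ρ the density, v_θ and v_x the tangential and axial velocities, h_0 the stagnation enthalpy, s the entropy, and r the radius. Solving this relation across the span at each blade-row exit is what produces the spanwise property variations referred to in the abstract.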

  7. Reactivity Coefficient Calculation for AP1000 Reactor Using the NODAL3 Code

    NASA Astrophysics Data System (ADS)

    Pinem, Surian; Malem Sembiring, Tagor; Tukiran; Deswandri; Sunaryo, Geni Rina

    2018-02-01

    The reactivity coefficient is a very important parameter for the inherent safety and stability of nuclear reactor operation. To support the safety analysis of the reactor, the calculation of reactivity changes caused by temperature is necessary because it is related to reactor operation. In this paper, the fuel and moderator temperature reactivity coefficients of the AP1000 core are calculated, as well as the moderator density and boron concentration coefficients. All of these coefficients are calculated at the hot full power (HFP) condition. All neutron diffusion constants as functions of temperature, water density, and boron concentration were generated by the SRAC2006 code. The core calculations for determining the reactivity coefficient parameters were performed with the NODAL3 code. The calculation results show that the fuel temperature, moderator temperature, and boron reactivity coefficients are in the ranges of -2.613 pcm/°C to -4.657 pcm/°C, -1.00518 pcm/°C to 1.00649 pcm/°C, and -9.11361 pcm/ppm to -8.0751 pcm/ppm, respectively. For the water density reactivity coefficients, positive reactivity occurs at water temperatures below 190 °C. The calculated reactivity coefficients are in very good agreement with the design values.
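
    For reference, a temperature reactivity coefficient of the kind tabulated above is the derivative of core reactivity with respect to the perturbed parameter, and in branch calculations it is commonly evaluated by a finite difference between two converged core solutions (a standard definition, not a formula taken from the paper):

        \rho \;=\; \frac{k_{\mathrm{eff}} - 1}{k_{\mathrm{eff}}},
        \qquad
        \alpha_T \;\approx\; \frac{\rho(T + \Delta T) - \rho(T)}{\Delta T},

    with α_T expressed in pcm/°C (1 pcm = 10⁻⁵ Δk/k) for the fuel and moderator temperature coefficients, or in pcm/ppm for the boron coefficient.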

  8. Shielding and activation calculations around the reactor core for the MYRRHA ADS design

    NASA Astrophysics Data System (ADS)

    Ferrari, Anna; Mueller, Stefan; Konheiser, J.; Castelliti, D.; Sarotto, M.; Stankovskiy, A.

    2017-09-01

    In the frame of the FP7 European project MAXSIMA, an extensive simulation study has been performed to assess the main shielding problems in view of the construction of the MYRRHA accelerator-driven system at SCK·CEN in Mol (Belgium). An innovative method based on the combined use of the two state-of-the-art Monte Carlo codes MCNPX and FLUKA has been used, with the goal of characterizing the complex, realistic neutron fields around the core barrel, to be used as source terms in detailed analyses of the radiation fields due to the system in operation and of the coupled residual radiation. The main results of the shielding analysis are presented, as well as the construction of an activation database for all the key structural materials. The results demonstrate a powerful way to analyse shielding and activation problems, with direct and clear implications for the design solutions.

  9. Gas Core Reactor Numerical Simulation Using a Coupled MHD-MCNP Model

    NASA Technical Reports Server (NTRS)

    Kazeminezhad, F.; Anghaie, S.

    2008-01-01

    Analysis is provided in this report of the use of two head-on magnetohydrodynamic (MHD) shocks to achieve supercritical nuclear fission in an axially elongated cylinder filled with UF4 gas as an energy source for deep space missions. The motivation for each aspect of the design is explained and supported by theory and numerical simulations; a subsequent report will provide detail on relevant experimental work to validate the concept. Here the focus is on the theory of and simulations for the proposed gas core reactor conceptual design, from the onset of shock generation to the supercritical state achieved when the shocks collide. The MHD model is coupled to a standard nuclear code (MCNP) to observe the neutron flux and fission power attributed to the supercritical state brought about by the shock collisions. Throughout the modeling, realistic parameters are used for the initial ambient gaseous state and currents to ensure a resulting supercritical state upon shock collision.

  10. INL Results for Phases I and III of the OECD/NEA MHTGR-350 Benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerhard Strydom; Javier Ortensi; Sonat Sen

    2013-09-01

    The Idaho National Laboratory (INL) Very High Temperature Reactor (VHTR) Technology Development Office (TDO) Methods Core Simulation group led the construction of the Organization for Economic Cooperation and Development (OECD) Modular High-Temperature Gas-Cooled Reactor (MHTGR) 350 MW benchmark for comparing and evaluating prismatic VHTR analysis codes. The benchmark is sponsored by the OECD's Nuclear Energy Agency (NEA), and the project will yield a set of reference steady-state, transient, and lattice depletion problems that can be used by the Department of Energy (DOE), the Nuclear Regulatory Commission (NRC), and vendors to assess their code suites. The Methods group is responsible for defining the benchmark specifications, leading the data collection and comparison activities, and chairing the annual technical workshops. This report summarizes the latest INL results for Phase I (steady state) and Phase III (lattice depletion) of the benchmark. The INSTANT, Pronghorn, and RattleSnake codes were used for the standalone core neutronics modeling of Exercise 1, and the results obtained from these codes are compared in Section 4. Exercise 2 of Phase I requires the standalone steady-state thermal-fluids modeling of the MHTGR-350 design, and the results for the systems code RELAP5-3D are discussed in Section 5. The coupled neutronics and thermal-fluids steady-state solution for Exercise 3 is reported in Section 6, utilizing the newly developed Parallel and Highly Innovative Simulation for INL Code System (PHISICS)/RELAP5-3D code suite. Finally, the lattice depletion models and results obtained for Phase III are compared in Section 7. The MHTGR-350 benchmark proved to be a challenging set of simulation problems to model accurately, and even with the simplifications introduced in the benchmark specification this activity is an important step in the code-to-code verification of modern prismatic VHTR codes. A final OECD/NEA comparison report will compare the Phase I and III results of all international participants in 2014, while the remaining Phase II transient case results will be reported in 2015.

  11. Investigations of the Application of CFD to Flow Expected in the Lower Plenum of the Prismatic VHTR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richard W.Johnson; Tara Gallaway; Donna P. Guillen

    2006-09-01

    The Generation IV (Gen IV) very high temperature reactor (VHTR) will be either a prismatic (block) or a pebble bed design. However, a prismatic VHTR reference design, based on the General Atomics Gas Turbine-Modular Helium Reactor (GT-MHR) [General Atomics, 1996], has been developed for preliminary analysis purposes [MacDonald, et al., 2003]. The numerical simulation studies reported herein are based on this reference design. In the lower plenum of the prismatic reference design, the flow is introduced by dozens of turbulent jets from the core above. The jet flow encounters rows of columns that support the core; the flow from the core must turn ninety degrees and flow toward the exit duct as it passes through the forest of support columns. Due to the radial variation of the power density in the core, the jets are at various temperatures at the inlet to the lower plenum. This presents some concerns, including that local hot spots may occur in the lower plenum. This may have a deleterious effect on the materials present, as well as cause a variation in temperature to be present as the flow enters the power conversion system machinery, which could cause problems with the operation of that machinery. In the past, systems analysis codes have been used to model flow in nuclear reactor systems. It is recognized, however, that such codes are not capable of modeling the local physics of the flow well enough to analyze local mixing and temperature variations. This has led to the determination that computational fluid dynamics (CFD) codes, which are generally regarded as capable of accurately simulating local flow physics, should be used. Accurate flow modeling involves determining the modeling strategies needed to obtain accurate analyses, including the fineness of the grid, the required iterative convergence tolerance, the numerical discretization method, and the turbulence model and wall treatment to employ. It also involves validating the computer code and turbulence model against a series of separate and combined flow phenomena and selecting the data used for the validation. This report describes progress made to identify proper modeling strategies for simulating the lower plenum flow for the task entitled "CFD software validation of jets in crossflow," which was designed to investigate issues pertaining to the validation process. The flow phenomenon chosen for investigation is flow in a staggered tube bank, because preliminary simulations show it to be the location of the highest turbulence intensity in the lower plenum. Numerical simulations were previously obtained assuming that the flow is steady; various turbulence models were employed, along with strategies to reduce numerical error, to allow appropriate comparisons of the results. It was determined that the sophisticated Reynolds stress model (RSM) provided the best results. It was later determined that the flow is in fact unsteady: circulating eddies grow behind the tube and 'peel off' alternately from the top and the bottom of the tube. Additional calculations show that the mean velocity is well predicted when the flow is modeled as unsteady. The results for the mean velocity U are clearly superior for the unsteady computations; the unsteady computations for the turbulence stress are similar to those for the steady calculations, showing the same trends.

  12. DESIGN CHARACTERISTICS OF THE IDAHO NATIONAL LABORATORY HIGH-TEMPERATURE GAS-COOLED TEST REACTOR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sterbentz, James; Bayless, Paul; Strydom, Gerhard

    2016-11-01

    Uncertainty and sensitivity analysis is an indispensable element of any substantial attempt at reactor simulation validation. The quantification of uncertainties in nuclear engineering has grown more important, and the IAEA Coordinated Research Program (CRP) on High-Temperature Gas Cooled Reactors (HTGRs), initiated in 2012, aims to investigate the various uncertainty quantification methodologies for this type of reactor. The first phase of the CRP is dedicated to the estimation of cell and lattice model uncertainties due to the neutron cross-section covariances. Phase II is oriented towards the investigation of uncertainties propagated from the lattice to the coupled neutronics/thermal-hydraulics core calculations. Nominal results for the prismatic single block (Ex. I-2a) and super-cell models (Ex. I-2c) have been obtained using the SCALE 6.1.3 two-dimensional lattice code NEWT coupled to the TRITON sequence for cross-section generation. In this work, the TRITON/NEWT flux-weighted cross sections obtained for Ex. I-2a and various models of Ex. I-2c are utilized to perform a sensitivity analysis of the MHTGR-350 core power densities and eigenvalues. The core solutions are obtained with the INL coupled code PHISICS/RELAP5-3D, utilizing a fixed-temperature feedback for Ex. II-1a. It is observed that the core power density does not vary significantly in shape, but the magnitude of the variations increases as the moderator-to-fuel ratio increases in the super-cell lattice models.

  13. Feasibility study on AFR-100 fuel conversion from uranium-based fuel to thorium-based fuel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heidet, F.; Kim, T.; Grandy, C.

    2012-07-30

    Although thorium has long been considered as an alternative to uranium-based fuels, most of the reactors built to date have been fueled with uranium-based fuel, with the exception of a few reactors. The decision to use uranium-based fuels was initially made based on their technology maturity compared to thorium-based fuels. As a result of this experience, a great deal of knowledge and data has been accumulated for uranium-based fuels, making them the predominant nuclear fuel type for existing nuclear power. However, following recent concerns about the extent and availability of uranium resources, thorium-based fuels have regained significant interest worldwide. Thorium is more abundant than uranium and can be readily exploited in many countries, and thus is now seen as a possible alternative. As thorium-based fuel technologies mature, fuel conversion from uranium to thorium is expected to become a major interest in both thermal and fast reactors. In this study, the feasibility of fuel conversion in a fast reactor is assessed and several possible approaches are proposed. The analyses are performed using the Advanced Fast Reactor (AFR-100) design, a fast reactor core concept recently developed by ANL. The AFR-100 is a small 100 MWe reactor developed under the US DOE program, relying on innovative fast reactor technologies and advanced structural and cladding materials. It was designed to be inherently safe and offers sufficient margins with respect to the fuel melting temperature and the fuel-cladding eutectic temperature when using U-10Zr binary metal fuel. Thorium-based metal fuel was preferred to other thorium fuel forms because of its higher heavy metal density and because it does not need to be alloyed with zirconium to reduce its radiation swelling. The approaches explored cover the use of pure thorium fuel as well as thorium mixed with transuranics (TRU). Sensitivity studies were performed for the different scenarios envisioned in order to determine the best core performance characteristics for each. With the exception of the fuel type and enrichment, the reference AFR-100 core design characteristics were kept unchanged, including the general core layout and dimensions, assembly dimensions, materials, and power rating. In addition, the mass of ²³⁵U required was kept within a reasonable range of that of the reference AFR-100 design. The core performance characteristics, kinetics parameters, and reactivity feedback coefficients were calculated using the ANL suite of fast reactor analysis codes. Orifice design calculations and the steady-state thermal-hydraulic analyses were performed using the SE2-ANL code. The thermal margins were evaluated by comparing the peak temperatures to the design limits for parameters such as the fuel melting temperature and the fuel-cladding eutectic temperature. The inherent safety features of the proposed AFR-100 cores were assessed using the integral reactivity parameters of the quasi-static reactivity balance analysis. The design objectives and requirements, the computation methods used, and a description of the core concept are provided in Section 2. The three major approaches considered are introduced in Section 3, and the neutronics performance of those approaches is discussed in the same section. The orifice zoning strategies used and the steady-state thermal-hydraulic performance are provided in Section 4. The kinetics and reactivity coefficients, including the inherent safety characteristics, are provided in Section 5, and the conclusions in Section 6. Other scenarios and sensitivity studies are provided in the Appendix.

  14. Comparison of the PLTEMP code flow instability predictions with measurements made with electrically heated channels for the advanced test reactor.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feldman, E.

    When the University of Missouri Research Reactor (MURR) was designed in the 1960s, the potential for fuel element burnout by a phenomenon referred to at that time as 'autocatalytic vapor binding' was of serious concern. This type of burnout was observed to occur at power levels considerably lower than those known to cause critical heat flux. The conversion of the MURR from HEU fuel to LEU fuel will probably require significant design changes, such as changes in coolant channel thicknesses, that could affect the thermal-hydraulic behavior of the reactor core. Therefore, the redesign of the MURR to accommodate an LEU core must address the same issues of fuel element burnout that were of concern in the 1960s. The Advanced Test Reactor (ATR) was designed at about the same time as the MURR and had similar concerns with regard to fuel element burnout. These concerns were addressed in the ATR by two groups of thermal-hydraulic tests that employed electrically heated simulated fuel channels: the Croft (1964) tests, Reference 1, performed at ANL, and the Waters (1966) tests, Reference 2, performed at Hanford Laboratories in Richland, Washington. Since fuel element surface temperatures rise rapidly as burnout conditions are approached, channel surface temperatures were carefully monitored in these experiments; for self-protection, the experimental facilities were designed to cut off the electric power when rapidly increasing surface temperatures were detected. In both the ATR and the tests with electrically heated channels, the heated length of the fuel plate was 48 inches, about twice that of the MURR. Whittle and Forgan (1967) independently conducted tests with electrically heated rectangular channels that were similar to the tests by Croft and by Waters; in the Whittle and Forgan tests, the heated length of the channel varied among the tests, between 16 and 24 inches. Both Waters and Whittle and Forgan show that the cause of the fuel element burnout is a form of flow instability. Whittle and Forgan provide a formula that predicts when this flow instability will occur, and this formula is included in the PLTEMP/ANL code. Olson has shown that the PLTEMP/ANL code accurately predicts the powers at which flow instability occurs in the Whittle and Forgan experiments; he also considered the electrically heated tests performed in the ANS Thermal-Hydraulic Test Loop at ORNL reported by M. Siman-Tov et al. The purpose of this memorandum is to demonstrate that the PLTEMP/ANL code accurately predicts the Croft and the Waters tests. This demonstration should provide sufficient confidence that the PLTEMP/ANL code can adequately predict the onset of flow instability for the converted MURR. The MURR core uses light water as a coolant, has a 24-inch active fuel length with downward flow in the core, and has an average core velocity of about 7 m/s. The inlet temperature is about 50 °C, and the peak outlet temperature is about 20 °C higher than the inlet for reactor operation at 10 MW. The core pressures range from about 4 to about 5 bar. The peak heat flux is about 110 W/cm². Section 2 describes the mechanism that causes flow instability. Section 3 describes the Whittle and Forgan formula for flow instability. Section 4 briefly describes both the Croft and the Waters experiments. Section 5 describes the PLTEMP/ANL models. Section 6 compares the PLTEMP/ANL predictions based on the Whittle and Forgan formula with the Croft measurements. Section 7 does the same for the Waters measurements. Section 8 provides the range of parameters for the Whittle and Forgan tests. Section 9 discusses the results and provides conclusions. In conclusion, although there is no single test that by itself closely matches the limiting conditions in the MURR, the preponderance of measured data, and the ability of the Whittle and Forgan correlation as implemented in PLTEMP/ANL to predict the onset of flow instability for these tests, leads to the conclusion that the same method should be able to predict the onset of flow instability in the MURR reasonably well.
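
    For context, the Whittle and Forgan correlation is usually quoted as a critical value of the ratio R of the coolant temperature rise to the inlet subcooling at the onset of flow instability; in its commonly cited form (stated from the general literature, with η a constant fitted to their data, not reproduced from this memorandum):

        R \;=\; \frac{T_{\mathrm{out}} - T_{\mathrm{in}}}{T_{\mathrm{sat}} - T_{\mathrm{in}}}
          \;=\; \frac{1}{1 + \eta\, D_h / L_h},
        \qquad \eta \approx 25,

    where D_h is the heated equivalent diameter and L_h the heated length of the channel. For a given geometry and flow, the channel power that raises R to this critical value marks the predicted onset of flow instability.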

  15. Structural response of 1/20-scale models of the Clinch River Breeder Reactor to a simulated hypothetical core-disruptive accident

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romander, C M; Cagliostro, D J

    Five experiments were performed to help evaluate the structural integrity of the reactor vessel and head design and to verify code predictions. In the first experiment (SM 1), a detailed model of the head was loaded statically to determine its stiffness. In the remaining four experiments (SM 2 to SM 5), models of the vessel and head were loaded dynamically under a simulated 661 MW-s hypothetical core disruptive accident (HCDA). Models SM 2 to SM 4, each of increasing complexity, systematically showed the effects of upper internals structures, a thermal liner, core support platform, and torospherical bottom on vessel response. Model SM 5, identical to SM 4 but more heavily instrumented, demonstrated experimental reproducibility and provided more comprehensive data. The models consisted of a Ni 200 vessel and core barrel, a head with shielding and simulated component masses, and an upper internals structure (UIS).

  16. Evolvix BEST Names for semantic reproducibility across code2brain interfaces.

    PubMed

    Loewe, Laurence; Scheuer, Katherine S; Keel, Seth A; Vyas, Vaibhav; Liblit, Ben; Hanlon, Bret; Ferris, Michael C; Yin, John; Dutra, Inês; Pietsch, Anthony; Javid, Christine G; Moog, Cecilia L; Meyer, Jocelyn; Dresel, Jerdon; McLoone, Brian; Loberger, Sonya; Movaghar, Arezoo; Gilchrist-Scott, Morgaine; Sabri, Yazeed; Sescleifer, Dave; Pereda-Zorrilla, Ivan; Zietlow, Andrew; Smith, Rodrigo; Pietenpol, Samantha; Goldfinger, Jacob; Atzen, Sarah L; Freiberg, Erika; Waters, Noah P; Nusbaum, Claire; Nolan, Erik; Hotz, Alyssa; Kliman, Richard M; Mentewab, Ayalew; Fregien, Nathan; Loewe, Martha

    2017-01-01

    Names in programming are vital for understanding the meaning of code and big data. We define code2brain (C2B) interfaces as maps in compilers and brains between meaning and naming syntax, which help to understand executable code. While working toward an Evolvix syntax for general-purpose programming that makes accurate modeling easy for biologists, we observed how names affect C2B quality. To protect learning and coding investments, C2B interfaces require long-term backward compatibility and semantic reproducibility (accurate reproduction of computational meaning from coder-brains to reader-brains by code alone). Semantic reproducibility is often assumed until confusing synonyms degrade modeling in biology to deciphering exercises. We highlight empirical naming priorities from diverse individuals and roles of names in different modes of computing to show how naming easily becomes impossibly difficult. We present the Evolvix BEST (Brief, Explicit, Summarizing, Technical) Names concept for reducing naming priority conflicts, test it on a real challenge by naming subfolders for the Project Organization Stabilizing Tool system, and provide naming questionnaires designed to facilitate C2B debugging by improving names used as keywords in a stabilizing programming language. Our experiences inspired us to develop Evolvix using a flipped programming language design approach with some unexpected features and BEST Names at its core. © 2016 The Authors. Annals of the New York Academy of Sciences published by Wiley Periodicals, Inc. on behalf of New York Academy of Sciences.

  17. WWER-1000 core and reflector parameters investigation in the LR-0 reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zaritsky, S. M.; Alekseev, N. I.; Bolshagin, S. N.

    2006-07-01

    Measurements and calculations carried out in the core and reflector of the WWER-1000 mock-up are discussed: the determination of the pin-to-pin power distribution in the core by means of gamma-scanning of fuel pins and pin-to-pin calculations with the Monte Carlo code MCU-REA and the diffusion codes MOBY-DICK (with WIMS-D4 cell constant preparation) and RADAR; the fast neutron spectra measurements by the proton recoil method inside the experimental channel in the core and inside the channel in the baffle, and corresponding calculations in the P₃S₈ approximation of the discrete ordinates method with the code DORT and the BUGLE-96 library; and the neutron spectra evaluations (adjustment) in the same channels in the energy region 0.5 eV-18 MeV, based on the activation and solid state track detector measurements. (authors)

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rouxelin, Pascal Nicolas; Strydom, Gerhard

    Best-estimate plus uncertainty analysis of reactors is replacing the traditional conservative (stacked uncertainty) method for safety and licensing analysis. To facilitate uncertainty analysis applications, a comprehensive approach and methodology must be developed and applied. High temperature gas cooled reactors (HTGRs) have several features that require techniques not used in light-water reactor analysis (e.g., coated-particle design and large graphite quantities at high temperatures). The International Atomic Energy Agency has therefore launched the Coordinated Research Project on HTGR Uncertainty Analysis in Modeling to study uncertainty propagation in the HTGR analysis chain. The benchmark problem defined for the prismatic design is represented by the General Atomics Modular HTGR 350. The main focus of this report is the compilation and discussion of the results obtained for various permutations of Exercise I-2c and the use of the cross section data in Exercise II-1a of the prismatic benchmark, which are defined as the last and first steps of the lattice and core simulation phases, respectively. The report summarizes the Idaho National Laboratory (INL) best-estimate results obtained for Exercise I-2a (fresh single-fuel block), Exercise I-2b (depleted single-fuel block), and Exercise I-2c (super cell), in addition to the first results of an investigation into the cross section generation effects for the super-cell problem. The two-dimensional deterministic code known as the New ESC-based Weighting Transport (NEWT) included in the Standardized Computer Analyses for Licensing Evaluation (SCALE) 6.1.2 package was used for the cross section evaluation, and the results obtained were compared to the three-dimensional stochastic SCALE module KENO VI. The NEWT cross section libraries were generated for several permutations of the current benchmark super-cell geometry and were then provided as input to the Phase II core calculation of the stand-alone neutronics Exercise II-1a. The steady-state core calculations were simulated with the INL coupled-code system known as the Parallel and Highly Innovative Simulation for INL Code System (PHISICS) and the system thermal-hydraulics code known as the Reactor Excursion and Leak Analysis Program (RELAP5-3D), using the nuclear data libraries previously generated with NEWT. It was observed that significant differences in terms of multiplication factor and neutron flux exist between the various permutations of the Phase I super-cell lattice calculations. The use of these cross section libraries leads only to minor changes in the Phase II core simulation results for fresh fuel but shows significantly larger discrepancies for spent fuel cores. Furthermore, large incongruities were found between the SCALE NEWT and KENO VI results for the super cells, and while some trends could be identified, a final conclusion on this issue could not yet be reached. This report will be revised in mid-2016 with more detailed analyses of the super-cell problems and their effects on the core models, using the latest version of SCALE (6.2). The super-cell models seem to show substantial improvements in terms of neutron flux as compared to single-block models, particularly at thermal energies.

  19. Computational mechanics analysis tools for parallel-vector supercomputers

    NASA Technical Reports Server (NTRS)

    Storaasli, O. O.; Nguyen, D. T.; Baddourah, M. A.; Qin, J.

    1993-01-01

    Computational algorithms for structural analysis on parallel-vector supercomputers are reviewed. These parallel algorithms, developed by the authors, are for the assembly of structural equations, 'out-of-core' strategies for linear equation solution, massively distributed-memory equation solution, unsymmetric equation solution, general eigen-solution, geometrically nonlinear finite element analysis, design sensitivity analysis for structural dynamics, optimization algorithm and domain decomposition. The source code for many of these algorithms is available from NASA Langley.
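
    As a conceptual illustration of the 'out-of-core' strategy mentioned above, the sketch below streams a disk-resident matrix through memory one row block at a time, so memory use stays bounded; it is a generic NumPy toy, not the Fortran solvers distributed by NASA Langley:

```python
# A minimal sketch of an out-of-core strategy: the coefficient matrix
# lives on disk (memory-mapped) and is processed in row blocks.
import numpy as np

n, block = 2048, 256
a = np.memmap("matrix.dat", dtype=np.float64, mode="w+", shape=(n, n))
a[:] = np.random.rand(n, n)          # stand-in for assembled equations
x = np.random.rand(n)

y = np.empty(n)
for i in range(0, n, block):         # stream one row block at a time
    y[i:i + block] = a[i:i + block] @ x
```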

  20. Investigating inertial confinement fusion target fuel conditions through x-ray spectroscopy

    NASA Astrophysics Data System (ADS)

    Hansen, Stephanie B.

    2012-05-01

    Inertial confinement fusion (ICF) targets are designed to produce hot, dense fuel in a neutron-producing core that is surrounded by a shell of compressing material. The x-rays emitted from ICF plasmas can be analyzed to reveal details of the temperatures, densities, gradients, velocities, and mix characteristics of ICF targets. Such diagnostics are critical to understand the target performance and to improve the predictive power of simulation codes.

  1. Methodology, status and plans for development and assessment of the code ATHLET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teschendorff, V.; Austregesilo, H.; Lerchl, G.

    1997-07-01

    The thermal-hydraulic computer code ATHLET (Analysis of THermal-hydraulics of LEaks and Transients) is being developed by the Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) for the analysis of anticipated and abnormal plant transients, small and intermediate leaks as well as large breaks in light water reactors. The aim of the code development is to cover the whole spectrum of design basis and beyond design basis accidents (without core degradation) for PWRs and BWRs with only one code. The main code features are: advanced thermal-hydraulics; modular code architecture; separation between physical models and numerical methods; pre- and post-processing tools; portability. The code has features that are of special interest for applications to small leaks and transients with accident management, e.g. initialization by a steady-state calculation, full-range drift-flux model, dynamic mixture level tracking. The General Control Simulation Module of ATHLET is a flexible tool for the simulation of the balance-of-plant and control systems including the various operator actions in the course of accident sequences with AM measures. The code development is accompanied by a systematic and comprehensive validation program. A large number of integral experiments and separate effect tests, including the major International Standard Problems, have been calculated by GRS and by independent organizations. The ATHLET validation matrix is a well balanced set of integral and separate effects tests derived from the CSNI proposal emphasizing, however, the German combined ECC injection system which was investigated in the UPTF, PKL and LOBI test facilities.
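
    For context, the full-range drift-flux model mentioned above belongs to the Zuber-Findlay family of relations; a generic form (not necessarily the exact ATHLET correlation) gives the void fraction in terms of the phase superficial velocities:

    \[
    \alpha = \frac{j_g}{C_0\,(j_g + j_f) + \bar{v}_{gj}},
    \]

    where j_g and j_f are the gas and liquid superficial velocities, C_0 is the distribution parameter, and \bar{v}_{gj} is the drift velocity.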

  2. Full-f version of GENE for turbulence in open-field-line systems

    NASA Astrophysics Data System (ADS)

    Pan, Q.; Told, D.; Shi, E. L.; Hammett, G. W.; Jenko, F.

    2018-06-01

    Unique properties of plasmas in the tokamak edge, such as large amplitude fluctuations and plasma-wall interactions in the open-field-line regions, require major modifications of existing gyrokinetic codes originally designed for simulating core turbulence. To this end, the global version of the 3D2V gyrokinetic code GENE, so far employing a δf-splitting technique, is extended to simulate electrostatic turbulence in straight open-field-line systems. The major extensions are the inclusion of the velocity-space nonlinearity, the development of a conducting-sheath boundary, and the implementation of the Lenard-Bernstein collision operator. With these developments, the code can be run as a full-f code and can handle particle loss to and reflection from the wall. The extended code is applied to modeling turbulence in the Large Plasma Device (LAPD), with a reduced mass ratio and a much lower collisionality. Similar to turbulence in a tokamak scrape-off layer, LAPD turbulence involves collisions, parallel streaming, cross-field turbulent transport with steep profiles, and particle loss at the parallel boundary.
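
    For reference, the Lenard-Bernstein collision operator mentioned above takes, in its one-dimensional velocity-space form (the gyrokinetic implementation generalizes this), the shape

    \[
    C[f] = \nu\,\frac{\partial}{\partial v}\left(v\,f + v_t^2\,\frac{\partial f}{\partial v}\right),
    \]

    where \nu is the collision frequency and v_t the thermal speed; the operator conserves particle number while relaxing f toward a Maxwellian.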

  3. Motion-adaptive model-assisted compatible coding with spatiotemporal scalability

    NASA Astrophysics Data System (ADS)

    Lee, JaeBeom; Eleftheriadis, Alexandros

    1997-01-01

    We introduce the concept of motion-adaptive spatio-temporal model-assisted compatible (MA-STMAC) coding, a technique to selectively encode areas of different importance to the human eye in terms of space and time in moving images, taking object motion into account. The previous STMAC approach was proposed based on the fact that human 'eye contact' and 'lip synchronization' are very important in person-to-person communication. Several areas including the eyes and lips need different types of quality, since different areas have different perceptual significance to human observers. The approach provides a better rate-distortion tradeoff than conventional image coding techniques based on MPEG-1, MPEG-2, H.261, as well as H.263. STMAC coding is applied on top of an encoder, taking full advantage of its core design. Model motion tracking in our previous STMAC approach was not automatic. The proposed MA-STMAC coding considers the motion of the human face within the STMAC concept using automatic area detection. Experimental results are given using ITU-T H.263, addressing very low bit-rate compression.

  4. Design of permanent magnet synchronous motor speed control system based on SVPWM

    NASA Astrophysics Data System (ADS)

    Wu, Haibo

    2017-04-01

    The control system realizes a permanent magnet synchronous motor speed controller based on the TMS320F28335 and applies it to the all-electric injection molding machine. The control method uses SVPWM: by sampling the motor currents and the rotor position from a resolver, it realizes double closed-loop control of speed and current. The hardware floating-point core of the TMS320F28335 allows the motor control algorithms to run in floating-point arithmetic, replacing the earlier fixed-point implementation and improving the efficiency of the code.
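
    As a rough illustration of the inner current loop in such a scheme (Clarke-Park transform feeding two PI regulators whose outputs go to the SVPWM stage), the following sketch uses invented gains and is not the TMS320F28335 implementation described in the paper:

```python
# A minimal floating-point sketch of a d-q current loop for a PMSM.
# Gains, period, and structure are illustrative assumptions only.
import math

KP, KI, DT = 0.8, 120.0, 1e-4      # assumed PI gains and control period

def park(i_a, i_b, theta):
    """Clarke + Park: two stator phase currents -> rotating d-q frame."""
    i_alpha = i_a
    i_beta = (i_a + 2.0 * i_b) / math.sqrt(3.0)
    i_d = i_alpha * math.cos(theta) + i_beta * math.sin(theta)
    i_q = -i_alpha * math.sin(theta) + i_beta * math.cos(theta)
    return i_d, i_q

class PI:
    def __init__(self, kp, ki):
        self.kp, self.ki, self.acc = kp, ki, 0.0
    def step(self, error):
        self.acc += self.ki * error * DT   # integrator state
        return self.kp * error + self.acc

pi_d, pi_q = PI(KP, KI), PI(KP, KI)

def current_loop(i_a, i_b, theta, id_ref, iq_ref):
    """One control step: returns d-q voltage commands for the SVPWM stage."""
    i_d, i_q = park(i_a, i_b, theta)
    return pi_d.step(id_ref - i_d), pi_q.step(iq_ref - i_q)

print(current_loop(1.0, -0.5, 0.3, id_ref=0.0, iq_ref=2.0))
```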

  5. Results of the Simulation of the HTR-Proteus Core 4.2 Using PEBBED-COMBINE: FY10 Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hans Gougar

    2010-07-01

    The Idaho National Laboratory’s deterministic neutronics analysis codes and methods were applied to the computation of the core multiplication factor of the HTR-Proteus pebble bed reactor critical facility. This report is a follow-on to INL/EXT-09-16620, in which the same calculation was performed using earlier versions of the codes and less developed methods. In that report, results indicated that the cross sections generated using COMBINE-7.0 did not yield satisfactory estimates of keff. It was concluded in the report that the modeling of control rods was not satisfactory. In the past year, improvements to the homogenization capability in COMBINE have enabled the explicit modeling of TRISO particles, pebbles, and heterogeneous core zones, including control rod regions, using a new multi-scale version of COMBINE in which the 1-dimensional discrete ordinates transport code ANISN has been integrated. The new COMBINE is shown to yield benchmark-quality results for pebble unit cell models, the first step in preparing few-group diffusion parameters for core simulations. In this report, the full critical core is modeled once again but with cross sections generated using the capabilities and physics of the improved COMBINE code. The new PEBBED-COMBINE model enables the exact modeling of the pebbles and control rod region along with a better approximation to structures in the reflector. Initial results for the core multiplication factor indicate significant improvement in the INL’s tools for modeling the neutronic properties of a pebble bed reactor. Errors on the order of 1.6-2.5% in keff are obtained, a significant improvement over the 5-6% error observed in the earlier analysis. This is acceptable for a code system and model in the early stages of development but still too high for a production code. Analysis of a simpler core model indicates an over-prediction of the flux in the low end of the thermal spectrum. Causes of this discrepancy are under investigation. New homogenization techniques and assumptions were used in this analysis and, as such, they require further confirmation and validation. Further refinement and review of the complex Proteus core model are likely to reduce the errors even further.

  6. Scaling up Planetary Dynamo Modeling to Massively Parallel Computing Systems: The Rayleigh Code at ALCF

    NASA Astrophysics Data System (ADS)

    Featherstone, N. A.; Aurnou, J. M.; Yadav, R. K.; Heimpel, M. H.; Soderlund, K. M.; Matsui, H.; Stanley, S.; Brown, B. P.; Glatzmaier, G.; Olson, P.; Buffett, B. A.; Hwang, L.; Kellogg, L. H.

    2017-12-01

    In the past three years, CIG's Dynamo Working Group has successfully ported the Rayleigh code to the Argonne Leadership Computing Facility's Mira BG/Q device. In this poster, we present some of our first results, showing simulations of 1) convection in the solar convection zone; 2) dynamo action in Earth's core and 3) convection in the Jovian deep atmosphere. These simulations have made efficient use of 131 thousand cores, 131 thousand cores and 232 thousand cores, respectively, on Mira. In addition to our novel results, the joys and logistical challenges of carrying out such large runs will also be discussed.

  7. Porting plasma physics simulation codes to modern computing architectures using the libmrc framework

    NASA Astrophysics Data System (ADS)

    Germaschewski, Kai; Abbott, Stephen

    2015-11-01

    Available computing power has continued to grow exponentially even after single-core performance saturated in the last decade. The increase has since been driven by more parallelism, both using more cores and having more parallelism in each core, e.g. in GPUs and Intel Xeon Phi. Adapting existing plasma physics codes is challenging, in particular as there is no single programming model that covers current and future architectures. We will introduce the open-source libmrc framework that has been used to modularize and port three plasma physics codes: The extended MHD code MRCv3 with implicit time integration and curvilinear grids; the OpenGGCM global magnetosphere model; and the particle-in-cell code PSC. libmrc consolidates basic functionality needed for simulations based on structured grids (I/O, load balancing, time integrators), and also introduces a parallel object model that makes it possible to maintain multiple implementations of computational kernels, on e.g. conventional processors and GPUs. It handles data layout conversions and enables us to port performance-critical parts of a code to a new architecture step-by-step, while the rest of the code can remain unchanged. We will show examples of the performance gains and some physics applications.
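
    The 'parallel object model' idea described above can be illustrated with a toy dispatch pattern: one logical operation with several interchangeable kernel implementations, selected when the object is created. All names below are invented for illustration; the real libmrc framework is C code with a richer interface:

```python
# Toy backend-dispatch pattern: callers stay unchanged when the
# kernel implementation is swapped.
import numpy as np

class VecOps:
    backends = {}

    @classmethod
    def register(cls, name):
        def deco(impl):
            cls.backends[name] = impl
            return impl
        return deco

    def __init__(self, backend):
        self.axpy = self.backends[backend]   # kernel chosen per object

@VecOps.register("reference")                # plain-Python kernel
def axpy_ref(a, x, y):
    return [a * xi + yi for xi, yi in zip(x, y)]

@VecOps.register("numpy")                    # stand-in for a GPU/vector kernel
def axpy_np(a, x, y):
    return np.asarray(x) * a + np.asarray(y)

ops = VecOps(backend="numpy")
print(ops.axpy(2.0, [1.0, 2.0], [3.0, 4.0]))
```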

  8. Advanced Test Reactor Core Modeling Update Project Annual Report for Fiscal Year 2012

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    David W. Nigg, Principal Investigator; Kevin A. Steuhm, Project Manager

    Legacy computational reactor physics software tools and protocols currently used for support of Advanced Test Reactor (ATR) core fuel management and safety assurance, and to some extent, experiment management, are inconsistent with the state of modern nuclear engineering practice, and are difficult, if not impossible, to properly verify and validate (V&V) according to modern standards. Furthermore, the legacy staff knowledge required for application of these tools and protocols from the 1960s and 1970s is rapidly being lost due to staff turnover and retirements. In late 2009, the Idaho National Laboratory (INL) initiated a focused effort, the ATR Core Modeling Update Project, to address this situation through the introduction of modern high-fidelity computational software and protocols. This aggressive computational and experimental campaign will have a broad strategic impact on the operation of the ATR, both in terms of improved computational efficiency and accuracy for support of ongoing DOE programs as well as in terms of national and international recognition of the ATR National Scientific User Facility (NSUF). The ATR Core Modeling Update Project, targeted for full implementation in phase with the next anticipated ATR Core Internals Changeout (CIC) in the 2014-2015 time frame, began during the last quarter of Fiscal Year 2009, and has just completed its third full year. Key accomplishments so far have encompassed both computational as well as experimental work. A new suite of stochastic and deterministic transport theory based reactor physics codes and their supporting nuclear data libraries (HELIOS, KENO6/SCALE, NEWT/SCALE, ATTILA, and an extended implementation of MCNP5) has been installed at the INL under various licensing arrangements. Corresponding models of the ATR and ATRC are now operational with all five codes, demonstrating the basic feasibility of the new code packages for their intended purpose. Of particular importance, a set of as-run core depletion HELIOS calculations for all ATR cycles since August 2009, Cycle 145A through Cycle 151B, was successfully completed during 2012. This major effort supported a decision late in the year to proceed with the phased incorporation of the HELIOS methodology into the ATR Core Safety Analysis Package (CSAP) preparation process, in parallel with the established PDQ-based methodology, beginning late in Fiscal Year 2012. Acquisition of the advanced SERPENT (VTT-Finland) and MC21 (DOE-NR) Monte Carlo stochastic neutronics simulation codes was also initiated during the year and some initial applications of SERPENT to ATRC experiment analysis were demonstrated. These two new codes will offer significant additional capability, including the possibility of full-3D Monte Carlo fuel management support capabilities for the ATR at some point in the future. Finally, a capability for rigorous sensitivity analysis and uncertainty quantification based on the TSUNAMI system has been implemented and initial computational results have been obtained. This capability will have many applications as a tool for understanding the margins of uncertainty in the new models as well as for validation experiment design and interpretation.

  9. Patient Core Data Set. Standard for a longitudinal health/medical record.

    PubMed

    Renner, A L; Swart, J C

    1997-01-01

    Blue Chip Computers Company, in collaboration with Wright State University-Miami Valley College of Nursing and Health, with support from the Agency for Health Care Policy and Research, Public Health Service, completed Small Business Innovation Research (SBIR) work to design a comprehensive integrated Patient Information System. The Wright State University consultants undertook the development of a Patient Core Data Set (PCDS) in response to the lack of uniform standards for minimum data sets and the lack of standards for data transfer to support continuity of care. The purpose of the Patient Core Data Set is to develop a longitudinal patient health record and medical history using a common set of standard data elements with uniform definitions and coding consistent with the Health Level 7 (HL7) protocol and the American Society for Testing and Materials (ASTM) standards. The PCDS, intended for transfer across all patient-care settings, is essential information for clinicians, administrators, researchers, and health policy makers.

  10. TORT/MCNP coupling method for the calculation of neutron flux around a core of BWR.

    PubMed

    Kurosawa, Masahiko

    2005-01-01

    For the analysis of BWR neutronics performance, accurate neutron flux distributions over the in-reactor pressure vessel equipment are required, taking into account the detailed geometrical arrangement. The TORT code can calculate the neutron flux around a BWR core in a three-dimensional geometry model, but it has difficulty with fine geometrical modelling and demands large computer resources. On the other hand, the MCNP code enables the calculation of the neutron flux with a detailed geometry model, but it requires very long sampling times to obtain a sufficient number of particles. Therefore, a TORT/MCNP coupling method has been developed to eliminate these two problems in each code. In this method, the TORT code calculates the angular flux distribution on the core surface and the MCNP code calculates the neutron spectrum at the points of interest using that flux distribution. The coupling method will be used as the DOT-DOMINO-MORSE code system. The TORT/MCNP coupling method was applied to calculate the neutron flux at points where induced radioactivity data were measured for 54Mn and 60Co, and the radioactivity calculations based on the neutron flux obtained from the above method were compared with the measured data.
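
    The hand-off described above can be illustrated generically: the deterministic stage tabulates angular flux in direction bins on the coupling surface, and the Monte Carlo stage samples source directions from that table. The sketch below uses invented bin values and is not the actual TORT or MCNP interface:

```python
# Inverse-CDF sampling of source directions from a tabulated angular flux.
import bisect
import random

mu_edges = [-1.0, -0.5, 0.0, 0.5, 1.0]   # direction-cosine bin edges
flux = [0.1, 0.3, 0.9, 1.7]              # tabulated angular flux per bin

total = sum(flux)
cdf, acc = [], 0.0
for f in flux:                           # cumulative distribution over bins
    acc += f / total
    cdf.append(acc)

def sample_mu(rng=random):
    """Pick a bin by inverse-CDF lookup, then sample uniformly inside it."""
    i = min(bisect.bisect_left(cdf, rng.random()), len(flux) - 1)
    lo, hi = mu_edges[i], mu_edges[i + 1]
    return lo + (hi - lo) * rng.random()

print([round(sample_mu(), 3) for _ in range(5)])
```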

  11. An approach to model reactor core nodalization for deterministic safety analysis

    NASA Astrophysics Data System (ADS)

    Salim, Mohd Faiz; Samsudin, Mohd Rafie; Mamat @ Ibrahim, Mohd Rizal; Roslan, Ridha; Sadri, Abd Aziz; Farid, Mohd Fairus Abd

    2016-01-01

    Adopting a good nodalization strategy is essential to produce an accurate, high-quality input model for Deterministic Safety Analysis (DSA) using a System Thermal-Hydraulic (SYS-TH) computer code. The purpose of such analysis is to demonstrate compliance with regulatory requirements and to verify the behavior of the reactor during normal and accident conditions as it was originally designed. Numerous studies in the past have been devoted to the development of nodalization strategies for small research reactors (e.g., 250 kW) up to bigger research reactors (e.g., 30 MW). As such, this paper aims to discuss the state-of-the-art thermal-hydraulic channel to be employed in the nodalization of the RTP-TRIGA Research Reactor, specifically for the reactor core. At present, the required thermal-hydraulic parameters for the reactor core, such as core geometrical data (length, coolant flow area, hydraulic diameters, and axial power profile) and material properties (including the UZrH1.6 fuel, stainless steel clad, and graphite reflector), have been collected, analyzed, and consolidated in the Reference Database of RTP using a standardized methodology, mainly derived from the available technical documentation. Based on the available information in the database, the assumptions made in the nodalization approach and the calculations performed will be discussed and presented. The development and identification of the thermal-hydraulic channels for the reactor core will be implemented during the SYS-TH calculation using the RELAP5-3D® computer code. The activity presented in this paper is part of the development of the overall nodalization description for the RTP-TRIGA Research Reactor under the IAEA Norwegian Extra-Budgetary Programme (NOKEBP) mentoring project on Expertise Development through the Analysis of Reactor Thermal-Hydraulics for Malaysia, denoted as EARTH-M.

  12. An approach to model reactor core nodalization for deterministic safety analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salim, Mohd Faiz, E-mail: mohdfaizs@tnb.com.my; Samsudin, Mohd Rafie, E-mail: rafies@tnb.com.my; Mamat Ibrahim, Mohd Rizal, E-mail: m-rizal@nuclearmalaysia.gov.my

    Adopting a good nodalization strategy is essential to produce an accurate, high-quality input model for Deterministic Safety Analysis (DSA) using a System Thermal-Hydraulic (SYS-TH) computer code. The purpose of such analysis is to demonstrate compliance with regulatory requirements and to verify the behavior of the reactor during normal and accident conditions as it was originally designed. Numerous studies in the past have been devoted to the development of nodalization strategies for small research reactors (e.g., 250 kW) up to bigger research reactors (e.g., 30 MW). As such, this paper aims to discuss the state-of-the-art thermal-hydraulic channel to be employed in the nodalization of the RTP-TRIGA Research Reactor, specifically for the reactor core. At present, the required thermal-hydraulic parameters for the reactor core, such as core geometrical data (length, coolant flow area, hydraulic diameters, and axial power profile) and material properties (including the UZrH{sub 1.6} fuel, stainless steel clad, and graphite reflector), have been collected, analyzed, and consolidated in the Reference Database of RTP using a standardized methodology, mainly derived from the available technical documentation. Based on the available information in the database, the assumptions made in the nodalization approach and the calculations performed will be discussed and presented. The development and identification of the thermal-hydraulic channels for the reactor core will be implemented during the SYS-TH calculation using the RELAP5-3D{sup ®} computer code. The activity presented in this paper is part of the development of the overall nodalization description for the RTP-TRIGA Research Reactor under the IAEA Norwegian Extra-Budgetary Programme (NOKEBP) mentoring project on Expertise Development through the Analysis of Reactor Thermal-Hydraulics for Malaysia, denoted as EARTH-M.

  13. Life jacket design affects dorsal head and chest exposure, core cooling, and cognition in 10 degrees C water.

    PubMed

    Lockhart, Tamara L; Jamieson, Christopher P; Steinman, Alan M; Giesbrecht, Gordon G

    2005-10-01

    Personal floatation devices (PFDs) differ in whether they maintain the head out of the water or allow the dorsum of the head to be immersed. Partial head submersion may hasten systemic cooling, incapacitation, and death in cold water. Six healthy male volunteers (mean age = 26.8 yr; height = 184 cm; weight = 81 kg; body fat = 20%) were immersed in 10 degrees C water for 65 min, or until core temperature = 34 degrees C, under three conditions: PFD#1 maintained the head and upper chest out of the water; PFD#2 allowed the dorsal head and whole body to be immersed; and an insulated drysuit (control) allowed the dorsal head to be immersed. Mental performance tests included: logic reasoning test; Stroop word-color test; digit symbol coding; backward digit span; and paced auditory serial addition test (PASAT). Core cooling was significantly faster for PFD#2 (2.8 +/- 1.6 degrees C/h) than for PFD#1 (1.5 +/- 0.7 degrees C/h) or for the drysuit (0.4 +/- 0.2 degrees C/h). Although no statistically significant effects on cognitive performance were noted for the individual PFDs and drysuit, when analyzed as a group, four of the tests of cognitive performance (Stroop word-color, digit symbol coding, backward digit span, and PASAT) showed significant correlations between decreasing core temperature to 34 degrees C and diminished cognitive performance. Performance in more complicated mental tasks was adversely affected as core temperature decreased to 34 degrees C. The PFD that kept the head and upper chest out of the water preserved body heat and mental performance better than the PFD that produced horizontal flotation.

  14. Computer program for optimal BWR control rod programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taner, M.S.; Levine, S.H.; Carmody, J.M.

    1995-12-31

    A fully automated computer program has been developed for designing optimal control rod (CR) patterns for boiling water reactors (BWRs). The new program, called OCTOPUS-3, is based on the OCTOPUS code and employs SIMULATE-3 (Ref. 2) for the analysis. There are three aspects of OCTOPUS-3 that make it successful for use at PECO Energy. It incorporates a new feasibility algorithm that makes the CR design meet all constraints, it has been coupled to a Bourne Shell program to allow the user to run the code interactively without the need for a manual, and it develops a low axial peak to extend the cycle. For PECO Energy Co.'s Limerick units it increased the energy output by 1 to 2% over the traditional PECO Energy design. The objective of the optimization in OCTOPUS-3 is to approximate a very low axially peaked target power distribution while maintaining criticality, keeping the nodal and assembly peaks below the allowed maximum, and meeting the other constraints. The user-specified input for each exposure point includes: CR groups allowed to move, target k{sub eff}, and amount of core flow. The OCTOPUS-3 code uses the CR pattern from the previous step as the initial guess unless indicated otherwise.

  15. Progress on the Development of the hPIC Particle-in-Cell Code

    NASA Astrophysics Data System (ADS)

    Dart, Cameron; Hayes, Alyssa; Khaziev, Rinat; Marcinko, Stephen; Curreli, Davide; Laboratory of Computational Plasma Physics Team

    2017-10-01

    Advancements were made in the development of the kinetic-kinetic electrostatic Particle-in-Cell code, hPIC, designed for large-scale simulation of the Plasma-Material Interface. hPIC achieved a weak scaling efficiency of 87% using the Algebraic Multigrid Solver BoomerAMG from the PETSc library on more than 64,000 cores of the Blue Waters supercomputer at the University of Illinois at Urbana-Champaign. The code successfully simulates two-stream instability and a volume of plasma over several square centimeters of surface extending out to the presheath in kinetic-kinetic mode. Results from a parametric study of the plasma sheath in strongly magnetized conditions will be presented, as well as a detailed analysis of the plasma sheath structure at grazing magnetic angles. The distribution function and its moments will be reported for plasma species in the simulation domain and at the material surface for plasma sheath simulations.

  16. Using Intel Xeon Phi to accelerate the WRF TEMF planetary boundary layer scheme

    NASA Astrophysics Data System (ADS)

    Mielikainen, Jarno; Huang, Bormin; Huang, Allen

    2014-05-01

    The Weather Research and Forecasting (WRF) model is designed for numerical weather prediction and atmospheric research. The WRF software infrastructure consists of several components such as dynamic solvers and physics schemes. Numerical models are used to resolve the large-scale flow. However, subgrid-scale parameterizations are for an estimation of small-scale properties (e.g., boundary layer turbulence and convection, clouds, radiation). Those have a significant influence on the resolved scale due to the complex nonlinear nature of the atmosphere. For the cloudy planetary boundary layer (PBL), it is fundamental to parameterize vertical turbulent fluxes and subgrid-scale condensation in a realistic manner. A parameterization based on the Total Energy - Mass Flux (TEMF) that unifies turbulence and moist convection components produces a better result than the other PBL schemes. For that reason, the TEMF scheme is chosen as the PBL scheme we optimized for Intel Many Integrated Core (MIC), which ushers in a new era of supercomputing speed, performance, and compatibility. It allows the developers to run code at trillions of calculations per second using the familiar programming model. In this paper, we present our optimization results for the TEMF planetary boundary layer scheme. The optimizations that were performed were quite generic in nature. Those optimizations included vectorization of the code to utilize vector units inside each CPU. Furthermore, memory access was improved by scalarizing some of the intermediate arrays. The results show that the optimization improved MIC performance by 14.8x. Furthermore, the optimizations increased CPU performance by 2.6x compared to the original multi-threaded code on a quad core Intel Xeon E5-2603 running at 1.8 GHz. Compared to the optimized code running on a single CPU socket the optimized MIC code is 6.2x faster.
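
    The two optimization patterns mentioned above, vectorizing an element-wise loop and removing a full-size intermediate array, can be sketched with a NumPy analogue (the actual changes were made in the WRF Fortran source; this only demonstrates the pattern):

```python
# Loop with a persistent temporary vs. a single vectorized expression.
import numpy as np

n = 100_000
a, b = np.random.rand(n), np.random.rand(n)

# Original pattern: explicit loop plus a full-size intermediate array.
tmp = np.empty(n)
out_loop = np.empty(n)
for i in range(n):
    tmp[i] = a[i] * b[i]        # intermediate value stored for every i
    out_loop[i] = tmp[i] + a[i]

# Optimized pattern: one vectorized expression, with no named full-size
# temporary that later code has to re-read from memory.
out_vec = a * b + a

assert np.allclose(out_loop, out_vec)
```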

  17. Using Coding Apps to Support Literacy Instruction and Develop Coding Literacy

    ERIC Educational Resources Information Center

    Hutchison, Amy; Nadolny, Larysa; Estapa, Anne

    2016-01-01

    In this article the authors present the concept of Coding Literacy and describe the ways in which coding apps can support the development of Coding Literacy and disciplinary and digital literacy skills. Through detailed examples, we describe how coding apps can be integrated into literacy instruction to support learning of the Common Core English…

  18. Thermal-hydraulic modeling needs for passive reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kelly, J.M.

    1997-07-01

    The U.S. Nuclear Regulatory Commission has received an application for design certification from the Westinghouse Electric Corporation for an Advanced Light Water Reactor design known as the AP600. As part of the design certification process, the USNRC uses its thermal-hydraulic system analysis codes to independently audit the vendor calculations. The focus of this effort has been the small break LOCA transients that rely upon the passive safety features of the design to depressurize the primary system sufficiently so that gravity driven injection can provide a stable source for long term cooling. Of course, large break LOCAs have also been considered, but as the involved phenomena do not appear to be appreciably different from those of current plants, they are not discussed in this paper. Although the SBLOCA scenario does not appear to threaten core coolability - indeed, heatup is not even expected to occur - there have been concerns as to the performance of the passive safety systems. For example, the passive systems drive flows with small heads, consequently requiring more precision in the analysis for passive plants than for current plants with active systems. For the analysis of SBLOCAs and operating transients, the USNRC uses the RELAP5 thermal-hydraulic system analysis code. To assure the applicability of RELAP5 to the analysis of these transients for the AP600 design, a four year long program of code development and assessment has been undertaken.

  19. Verifying Architectural Design Rules of the Flight Software Product Line

    NASA Technical Reports Server (NTRS)

    Ganesan, Dharmalingam; Lindvall, Mikael; Ackermann, Chris; McComas, David; Bartholomew, Maureen

    2009-01-01

    This paper presents experiences of verifying architectural design rules of the NASA Core Flight Software (CFS) product line implementation. The goal of the verification is to check whether the implementation is consistent with the CFS architectural rules derived from the developer's guide. The results indicate that consistency checking helps a) identify architecturally significant deviations that eluded code reviews, b) clarify the design rules to the team, and c) assess the overall implementation quality. Furthermore, it helps connect business goals to architectural principles, and to the implementation. This paper is the first step in the definition of a method for analyzing and evaluating product line implementations from an architecture-centric perspective.
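
    As a minimal sketch of what automated architectural-rule checking can look like (the rule and layer names below are invented, and the CFS product line is C flight software rather than Python):

```python
# Flag imports that cross a forbidden layer boundary in a source tree.
import ast
import pathlib

# Invented rule: application-layer modules must not import platform internals.
FORBIDDEN = {("apps", "platform_internal")}

def top_layer(dotted_name):
    return dotted_name.split(".")[0]

def violations(src_dir):
    src = pathlib.Path(src_dir)
    for path in src.rglob("*.py"):
        importer = top_layer(path.relative_to(src).as_posix().replace("/", "."))
        tree = ast.parse(path.read_text())
        for node in ast.walk(tree):
            names = []
            if isinstance(node, ast.Import):
                names = [a.name for a in node.names]
            elif isinstance(node, ast.ImportFrom) and node.module:
                names = [node.module]
            for name in names:
                if (importer, top_layer(name)) in FORBIDDEN:
                    yield path, name

for path, name in violations("src"):
    print(f"rule violation: {path} imports {name}")
```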

  20. THR-TH: a high-temperature gas-cooled nuclear reactor core thermal hydraulics code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vondy, D.R.

    1984-07-01

    The ORNL version of PEBBLE, the (RZ) pebble bed thermal hydraulics code, has been extended for application to a prismatic gas cooled reactor core. The supplemental treatment is of one-dimensional coolant flow in up to a three-dimensional core description. Power density data from a neutronics and exposure calculation are used as the basic information for the thermal hydraulics calculation of heat removal. Two-dimensional neutronics results may be expanded for a three-dimensional hydraulics calculation. The geometric description for the hydraulics problem is the same as used by the neutronics code. A two-dimensional thermal cell model is used to predict temperatures in the fuel channel. The capability is available in the local BOLD VENTURE computation system for reactor core analysis with capability to account for the effect of temperature feedback by nuclear cross section correlation. Some enhancements have also been added to the original code to add pebble bed modeling flexibility and to generate useful auxiliary results. For example, an estimate is made of the distribution of fuel temperatures based on average and extreme conditions regularly calculated at a number of locations.

  1. Procedure of recovery of pin-by-pin fields of energy release in the core of VVER-type reactor for the BIPR-8 code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gordienko, P. V., E-mail: gorpavel@vver.kiae.ru; Kotsarev, A. V.; Lizorkin, M. P.

    2014-12-15

    The procedure for recovering pin-by-pin energy-release fields for the BIPR-8 code is briefly described, together with the BIPR-8 nodal core-calculation algorithm on which the recovery of the pin-by-pin energy-release fields is based. The verification of the recovery module against the TVS-M program is described and its results are given.
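
    Generically, pin-power recovery procedures of this kind redistribute the coarse nodal solution over individual pins using precomputed form functions from lattice calculations. The sketch below illustrates that idea with invented numbers; it is not the BIPR-8 algorithm itself:

```python
# Pin-power recovery: nodal average power times normalized pin form factors.
import numpy as np

node_avg_power = 1.25                 # from the nodal core calculation
form = np.array([[0.95, 1.02, 1.08],  # pin form functions from a lattice
                 [0.99, 1.05, 0.98],  # calculation (values invented here)
                 [0.93, 1.01, 0.99]])
form /= form.mean()                   # normalize so the mean is exactly 1.0

pin_power = node_avg_power * form     # recovered pin-by-pin field
print(pin_power.round(3))
```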

  2. Performance Study of Monte Carlo Codes on Xeon Phi Coprocessors — Testing MCNP 6.1 and Profiling ARCHER Geometry Module on the FS7ONNi Problem

    NASA Astrophysics Data System (ADS)

    Liu, Tianyu; Wolfe, Noah; Lin, Hui; Zieb, Kris; Ji, Wei; Caracappa, Peter; Carothers, Christopher; Xu, X. George

    2017-09-01

    This paper contains two parts revolving around Monte Carlo transport simulation on Intel Many Integrated Core coprocessors (MIC, also known as Xeon Phi). (1) MCNP 6.1 was recompiled into multithreading (OpenMP) and multiprocessing (MPI) forms, respectively, without modification to the source code. The new codes were tested on a 60-core 5110P MIC. The test case was FS7ONNi, a radiation shielding problem used in MCNP's verification and validation suite. It was observed that both codes became slower on the MIC than on a 6-core X5650 CPU, by a factor of 4 for the MPI code and, abnormally, 20 for the OpenMP code, and both exhibited limited strong-scaling capability. (2) We have recently added a Constructive Solid Geometry (CSG) module to our ARCHER code to provide better support for geometry modelling in radiation shielding simulation. The functions of this module are frequently called in the particle random walk process. To identify the performance bottleneck we developed a CSG proxy application and profiled the code using the geometry data from FS7ONNi. The profiling data showed that the code was primarily memory latency bound on the MIC. This study suggests that despite the low initial porting effort, Monte Carlo codes do not naturally lend themselves to the MIC platform, just as they do not to GPUs, and that the memory latency problem needs to be addressed in order to achieve a decent performance gain.

  3. Preliminary weight and costs of sandwich panels to distribute concentrated loads

    NASA Technical Reports Server (NTRS)

    Belleman, G.; Mccarty, J. E.

    1976-01-01

    Minimum mass honeycomb sandwich panels were sized for transmitting a concentrated load to a uniform reaction through various distances. The form skin gages were fully stressed with a finite element computer code. The panel general stability was evaluated with a buckling computer code labeled STAGS-B. Two skin materials were considered: aluminum and graphite-epoxy. The core was constant thickness aluminum honeycomb. Various panel sizes and load levels were considered. The computer generated data were generalized to allow preliminary least mass panel designs for a wide range of panel sizes and load intensities. An assessment of panel fabrication cost was also conducted. Various comparisons between panel mass, panel size, panel loading, and panel cost are presented in both tabular and graphical form.

  4. EBT reactor systems analysis and cost code: description and users guide (Version 1)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santoro, R.T.; Uckan, N.A.; Barnes, J.M.

    1984-06-01

    An ELMO Bumpy Torus (EBT) reactor systems analysis and cost code that incorporates the most recent advances in EBT physics has been written. The code determines a set of reactors that fall within an allowed operating window determined from the coupling of ring and core plasma properties and the self-consistent treatment of the coupled ring-core stability and power balance requirements. The essential elements of the systems analysis and cost code are described, along with the calculational sequences leading to the specification of the reactor options and their associated costs. The input parameters, the constraints imposed upon them, and the operating range over which the code provides valid results are discussed. A sample problem and the interpretation of the results are also presented.

  5. The effects of temperatures on the pebble flow in a pebble bed high temperature reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sen, R. S.; Cogliati, J. J.; Gougar, H. D.

    2012-07-01

    The core of a pebble bed high temperature reactor (PBHTR) moves during operation, a feature which leads to better fuel economy (online refueling with no burnable poisons) and lower fuel stress. The pebbles are loaded at the top and trickle to the bottom of the core, after which the burnup of each is measured. The pebbles that are not fully burned are recirculated through the core until the target burnup is achieved. The flow pattern of the pebbles through the core is of importance for core simulations because it couples the burnup distribution to the core temperature and power profiles, especially in cores with two or more radial burnup 'zones'. The pebble velocity profile is a strong function of the core geometry and the friction between the pebbles and the surrounding structures (other pebbles or graphite reflector blocks). The friction coefficient for graphite in a helium environment is inversely related to the temperature. The Thorium High Temperature Reactor (THTR) operated in Germany between 1983 and 1989. It featured a two-zone core, an inner core (IC) and outer core (OC), with different fuel mixtures loaded in each zone. The rate at which the IC was refueled relative to the OC in THTR was designed to be 0.56. During its operation, however, this ratio was measured to be 0.76, suggesting the pebbles in the inner core traveled faster than expected. It has been postulated that the positive feedback effect between inner core temperature, burnup, and pebble flow was underestimated in THTR. Because of the power shape, the center of the core in a typical cylindrical PBHTR operates at a higher temperature than the region next to the side reflector. The friction between pebbles in the IC is lower than that in the OC, perhaps causing a higher relative flow rate and lower average burnup, which in turn yield a higher local power density. Furthermore, the pebbles in the center region have higher velocities than the pebbles next to the side reflector due to the interaction between the pebbles and the immobile graphite reflector as well as the geometry of the discharge conus near the bottom of the core. In this paper, the coupling between the temperature profile and the pebble flow dynamics was analyzed with the PEBBED/THERMIX and PEBBLES codes by modeling the HTR-10 reactor in China. Two extreme and opposing velocity profiles are used as a starting point for the iterations. The PEBBED/THERMIX code is used to calculate the burnup, power, and temperature profiles with one of the velocity profiles as input. The resulting temperature profile is then passed to the PEBBLES code to calculate the updated pebble velocity profile taking the new temperature profile into account. If the aforementioned hypothesis is correct, the strong temperature effect upon the friction coefficients would cause the two cases to converge to different final velocity and temperature profiles. The results of this analysis indicate that a single zone pebble bed core is self-stabilizing in terms of the pebble velocity profile and that the effect of the temperature profile on the pebble flow is insignificant. (authors)
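
    The coupling described above is a fixed-point (Picard) iteration between two solvers that exchange fields until the iterate stops changing. The skeleton below uses stand-in one-parameter 'solvers' in place of PEBBED/THERMIX and PEBBLES; the numbers are illustrative only:

```python
# Picard iteration between a thermal solver and a pebble-flow solver.
def thermal_solve(velocity):
    # stand-in: hotter core when the (scalar) flow speed is higher
    return 800.0 + 50.0 * velocity

def pebble_flow_solve(temperature):
    # stand-in: friction falls with temperature, so flow speeds up
    return 1.0 + 1.0e-4 * (temperature - 800.0)

velocity, tol = 1.0, 1e-6
for it in range(100):
    temperature = thermal_solve(velocity)
    new_velocity = pebble_flow_solve(temperature)
    if abs(new_velocity - velocity) < tol:   # converged to a fixed point
        break
    velocity = new_velocity

print(it, round(velocity, 6), round(temperature, 3))
```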

  6. Aeroelastic tailoring and integrated wing design

    NASA Technical Reports Server (NTRS)

    Love, Mike H.; Bohlmann, Jon

    1989-01-01

    Much has been learned from the TSO optimization code over the years in determining aeroelastic tailoring's place in the integrated design process. Indeed, it has become apparent that aeroelastic tailoring is and should be deeply embedded in design. Aeroelastic tailoring can have tremendous effects on the design loads, and design loads affect every aspect of the design process. While optimization enables the evaluation of design sensitivities, valid computational simulations are required to make these sensitivities valid. Aircraft maneuvers simulated must adequately cover the plane's intended flight envelope, realistic design criteria must be included, and models among the various disciplines must be calibrated among themselves and with any hard-core (e.g., wind tunnel) data available. The information gained and benefits derived from aeroelastic tailoring provide a focal point for the various disciplines to become involved and communicate with one another to reach the best design possible.

  7. Verification of Three Dimensional Triangular Prismatic Discrete Ordinates Transport Code ENSEMBLE-TRIZ by Comparison with Monte Carlo Code GMVP

    NASA Astrophysics Data System (ADS)

    Homma, Yuto; Moriwaki, Hiroyuki; Ohki, Shigeo; Ikeda, Kazumi

    2014-06-01

    This paper deals with the verification of the three-dimensional triangular prismatic discrete ordinates transport calculation code ENSEMBLE-TRIZ by comparison with the multi-group Monte Carlo calculation code GMVP in a large fast breeder reactor. The reactor is a 750 MWe sodium cooled reactor. Nuclear characteristics are calculated at the beginning of cycle of an initial core and at the beginning and end of cycle of an equilibrium core. According to the calculations, the differences between the two methodologies are smaller than 0.0002 Δk in the multiplication factor, about 1% (relative) in the control rod reactivity, and 1% in the sodium void reactivity.

  8. MILC Code Performance on High End CPU and GPU Supercomputer Clusters

    NASA Astrophysics Data System (ADS)

    DeTar, Carleton; Gottlieb, Steven; Li, Ruizi; Toussaint, Doug

    2018-03-01

    With recent developments in parallel supercomputing architecture, many-core, multi-core, and GPU processors are now commonplace, resulting in more levels of parallelism, memory hierarchy, and programming complexity. It has been necessary to adapt the MILC code to these new processors starting with NVIDIA GPUs, and more recently, the Intel Xeon Phi processors. We report on our efforts to port and optimize our code for the Intel Knights Landing architecture. We consider performance of the MILC code with MPI and OpenMP, and optimizations with QOPQDP and QPhiX. For the latter approach, we concentrate on the staggered conjugate gradient and gauge force. We also consider performance on recent NVIDIA GPUs using the QUDA library.

  9. Recent plant studies using Victoria 2.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    BIXLER,NATHAN E.; GASSER,RONALD D.

    2000-03-08

    VICTORIA 2.0 is a mechanistic computer code designed to analyze fission product behavior within the reactor coolant system (RCS) during a severe nuclear reactor accident. It provides detailed predictions of the release of radioactive and nonradioactive materials from the reactor core and the transport and deposition of these materials within the RCS and secondary circuits. These predictions account for the chemical and aerosol processes that affect radionuclide behavior. VICTORIA 2.0 was released in early 1999; a new version, VICTORIA 2.1, is now under development. The largest improvements in VICTORIA 2.1 are connected with the thermochemical database, which is being revised and expanded following the recommendations of a peer review. Three risk-significant severe accident sequences have recently been investigated using the VICTORIA 2.0 code. The focus here is on how various chemistry options affect the predictions. Additionally, the VICTORIA predictions are compared with ones made using the MELCOR code. The three sequences are a station blackout in a GE BWR and steam generator tube rupture (SGTR) and pump-seal LOCA sequences in a 3-loop Westinghouse PWR. These sequences cover a range of system pressures, from fully depressurized to full system pressure. The chief results of this study are the fission product fractions that are retained in the core, RCS, secondary, and containment and the fractions that are released into the environment.

  10. Summary of comparison and analysis of results from exercises 1 and 2 of the OECD PBMR coupled neutronics/thermal hydraulics transient benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mkhabela, P.; Han, J.; Tyobeka, B.

    2006-07-01

    The Nuclear Energy Agency (NEA) of the Organization for Economic Cooperation and Development (OECD) has accepted, through the Nuclear Science Committee (NSC), the inclusion of the Pebble-Bed Modular Reactor 400 MW design (PBMR-400) coupled neutronics/thermal hydraulics transient benchmark problem as part of their official activities. The scope of the benchmark is to establish a well-defined problem, based on a common given library of cross sections, to compare methods and tools in core simulation and thermal hydraulics analysis with a specific focus on transient events through a set of multi-dimensional computational test problems. The benchmark includes three steady state exercises and six transient exercises. This paper describes the first two steady state exercises, their objectives and the international participation in terms of organization, country and computer code utilized. This description is followed by a comparison and analysis of the participants' results submitted for these two exercises. The comparison of results from different codes allows for an assessment of the sensitivity of a result to the method employed and can thus help to focus the development efforts on the most critical areas. The two first exercises also allow for removing of user-related modeling errors and prepare core neutronics and thermal-hydraulics models of the different codes for the rest of the exercises in the benchmark. (authors)

  11. SPOC Benchmark Case: SNRE Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vishal Patel; Michael Eades; Claude Russel Joyner II

    The Small Nuclear Rocket Engine (SNRE) was modeled in the Center for Space Nuclear Research’s (CSNR) Space Propulsion Optimization Code (SPOC). SPOC aims to create nuclear thermal propulsion (NTP) geometries quickly to perform parametric studies on design spaces of historic and new NTP designs. The SNRE geometry was modeled in SPOC and a critical core with a reasonable amount of criticality margin was found. The fuel, tie-tubes, reflector, and control drum masses were predicted rather well. These are all very important for neutronics calculations so the active reactor geometries created with SPOC can continue to be trusted. Thermal calculations of the average and hot fuel channels agreed very well. The specific impulse calculations used historically and in SPOC disagree so mass flow rates and impulses differed. Modeling peripheral and power balance components that do not affect nuclear characteristics of the core is not a feature of SPOC and as such, these components should continue to be designed using other tools. A full paper detailing the available SNRE data and comparisons with SPOC outputs will be submitted as a follow-up to this abstract.

  12. A New Methodology for Open Pit Slope Design in Karst-Prone Ground Conditions Based on Integrated Stochastic-Limit Equilibrium Analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Ke; Cao, Ping; Ma, Guowei; Fan, Wenchen; Meng, Jingjing; Li, Kaihui

    2016-07-01

    Using the Chengmenshan Copper Mine as a case study, a new methodology for open pit slope design in karst-prone ground conditions is presented based on integrated stochastic-limit equilibrium analysis. The numerical modeling and optimization design procedure contains a collection of drill core data, karst cave stochastic model generation, SLIDE simulation and bisection method optimization. Borehole investigations are performed, and the statistical result shows that the length of the karst cave fits a negative exponential distribution model, but the length of carbonatite does not exactly follow any standard distribution. The inverse transform method and acceptance-rejection method are used to reproduce the length of the karst cave and carbonatite, respectively. A code for karst cave stochastic model generation, named KCSMG, is developed. The stability of the rock slope with the karst cave stochastic model is analyzed by combining the KCSMG code and the SLIDE program. This approach is then applied to study the effect of the karst cave on the stability of the open pit slope, and a procedure to optimize the open pit slope angle is presented.
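
    The two sampling methods named above can be sketched as follows; the exponential rate and the triangular stand-in density are placeholders, not the fitted Chengmenshan distributions:

```python
# Inverse transform sampling and acceptance-rejection sampling.
import math
import random

def karst_length_inverse_transform(rate, rng=random):
    """Negative exponential via inverse transform: F^-1(u) = -ln(1-u)/rate."""
    return -math.log(1.0 - rng.random()) / rate

def length_by_rejection(pdf, x_max, pdf_max, rng=random):
    """Acceptance-rejection against a uniform envelope on [0, x_max]."""
    while True:
        x = x_max * rng.random()
        if rng.random() * pdf_max <= pdf(x):
            return x

# Stand-in target: triangular density on [0, 10] m with mode at 2 m.
def tri(x):
    return x / 10.0 if x <= 2.0 else (10.0 - x) / 40.0

print(round(karst_length_inverse_transform(rate=0.5), 3))
print([round(length_by_rejection(tri, 10.0, 0.2), 2) for _ in range(5)])
```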

  13. Using the International Classification of Functioning, Disability, and Health to identify outcome domains for a core outcome set for aphasia: a comparison of stakeholder perspectives.

    PubMed

    Wallace, Sarah J; Worrall, Linda; Rose, Tanya; Le Dorze, Guylaine

    2017-11-12

    This study synthesised the findings of three separate consensus processes exploring the perspectives of key stakeholder groups about important aphasia treatment outcomes. This process was conducted to generate recommendations for outcome domains to be included in a core outcome set for aphasia treatment trials. International Classification of Functioning, Disability, and Health codes were examined to identify where the groups of: (1) people with aphasia, (2) family members, (3) aphasia researchers, and (4) aphasia clinicians/managers, demonstrated congruence in their perspectives regarding important treatment outcomes. Codes were contextualized using qualitative data. Congruence across three or more stakeholder groups was evident for ICF chapters: Mental functions; Communication; and Services, systems, and policies. Quality of life was explicitly identified by clinicians/managers and researchers, while people with aphasia and their families identified outcomes known to be determinants of quality of life. Core aphasia outcomes include: language, emotional wellbeing, communication, patient-reported satisfaction with treatment and impact of treatment, and quality of life. International Classification of Functioning, Disability, and Health coding can be used to compare stakeholder perspectives and identify domains for core outcome sets. Pairing coding with qualitative data may ensure important nuances of meaning are retained. Implications for rehabilitation The outcomes measured in treatment research should be relevant to stakeholders and support health care decision making. Core outcome sets (agreed, minimum set of outcomes, and outcome measures) are increasingly being used to ensure the relevancy and consistency of the outcomes measured in treatment studies. Important aphasia treatment outcomes span all components of the International Classification of Functioning, Disability, and Health. Stakeholders demonstrated congruence in the identification of important outcomes which related to Mental functions; Communication; Services, systems, and policies; and Quality of life. A core outcome set for aphasia treatment research should include measures relating to: language, emotional wellbeing, communication, patient-reported satisfaction with treatment and impact of treatment, and quality of life. Coding using the International Classification of Functioning, Disability, and Health presents a novel methodology for the comparison of stakeholder perspectives to inform recommendations for outcome constructs to be included in a core outcome set. Coding can be paired with qualitative data to ensure nuances of meaning are retained.

  14. Variation of SNOMED CT coding of clinical research concepts among coding experts.

    PubMed

    Andrews, James E; Richesson, Rachel L; Krischer, Jeffrey

    2007-01-01

    To compare the consistency of coding among professional SNOMED CT coders representing three commercial providers of coding services when coding clinical research concepts with SNOMED CT. A sample of clinical research questions from case report forms (CRFs) generated by the NIH-funded Rare Disease Clinical Research Network (RDCRN) was sent to three coding companies with instructions to code the core concepts using SNOMED CT. The sample consisted of 319 question/answer pairs from 15 separate studies. The companies were asked to select SNOMED CT concepts (in any form, including post-coordinated) that capture the core concept(s) reflected in the question. They were also asked to state their level of certainty, as well as how precise they felt their coding was. Basic frequencies were calculated to determine raw agreement among the companies, along with other descriptive information. Krippendorff's alpha was used to determine a statistical measure of agreement among the coding companies for several measures (semantic, certainty, and precision). No significant level of agreement among the experts was found. There is little semantic agreement in the coding of clinical research data items across coders from three professional coding services, even using a very liberal definition of agreement.
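
    For readers unfamiliar with the agreement statistic used here, the following is a minimal sketch of Krippendorff's alpha for nominal data, built from the coincidence matrix of value pairs within units; the three "coders" and their labels are toy data, not the RDCRN sample.

        import numpy as np
        from itertools import permutations

        def krippendorff_alpha_nominal(data):
            # Krippendorff's alpha for nominal data.  `data` is a list of coder
            # rows over the same units; None marks a missing code.
            values = sorted({v for unit in zip(*data) for v in unit if v is not None})
            idx = {v: i for i, v in enumerate(values)}
            o = np.zeros((len(values), len(values)))       # coincidence matrix
            for unit in zip(*data):                        # iterate over units
                codes = [v for v in unit if v is not None]
                m = len(codes)
                if m < 2:
                    continue                               # units with <2 codes carry no pairs
                for a, b in permutations(codes, 2):        # ordered value pairs
                    o[idx[a], idx[b]] += 1.0 / (m - 1)
            n_c = o.sum(axis=1)
            n = n_c.sum()
            d_o = o.sum() - np.trace(o)                    # observed disagreement (off-diagonal mass)
            d_e = (np.outer(n_c, n_c).sum() - (n_c ** 2).sum()) / (n - 1)  # expected disagreement
            return 1.0 - d_o / d_e

        # Toy example: three hypothetical coders assigning labels to five items
        coder_a = ["pain", "fever", "pain",  "rash", None  ]
        coder_b = ["pain", "fever", "cough", "rash", "pain"]
        coder_c = ["pain", "cough", "cough", "rash", "pain"]
        print(krippendorff_alpha_nominal([coder_a, coder_b, coder_c]))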

  15. Structural response of 1/20-scale models of the Clinch River Breeder Reactor to a simulated hypothetical core disruptive accident. Technical report 4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romander, C. M.; Cagliostro, D. J.

    Five experiments were performed to help evaluate the structural integrity of the reactor vessel and head design and to verify code predictions. In the first experiment (SM 1), a detailed model of the head was loaded statically to determine its stiffness. In the remaining four experiments (SM 2 to SM 5), models of the vessel and head were loaded dynamically under a simulated 661 MW-sec hypothetical core disruptive accident (HCDA). Models SM 2 to SM 4, each of increasing complexity, systematically showed the effects of upper internals structures, a thermal liner, core support platform, and torospherical bottom on vessel response. Model SM 5, identical to SM 4 but more heavily instrumented, demonstrated experimental reproducibility and provided more comprehensive data. The models consisted of a Ni 200 vessel and core barrel, a head with shielding and simulated component masses, an upper internals structure (UIS), and, in the more complex models SM 4 and SM 5, a Ni 200 thermal liner and core support structure. Water simulated the liquid sodium coolant and a low-density explosive simulated the HCDA loads.

  16. Systematic void fraction studies with RELAP5, FRANCESCA and HECHAN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stosic, Z.; Preusche, G.

    1996-08-01

    When the application of standard thermal-hydraulic codes is extended beyond their native capabilities, i.e. through coupling with a one- and/or three-dimensional core kinetics model, the void fraction transferred from the thermal-hydraulics to the core model plays a determining role in the normal operating range and at high core flow, as the generated heat and the axial power profiles are direct functions of the void distribution in the core. Hence, it is very important to know whether the void-quality models in the programs to be coupled are compatible enough to allow the interactive exchange of data based on these constitutive void-quality relations. The presented void fraction study is performed in order to provide a basis for deciding whether a transient core simulation using the RELAP5 void fractions can calculate the axial power shapes adequately. To that end, the void fractions calculated with RELAP5 are compared with those calculated by FRANCESCA, a BWR safety code used for licensing, and by HECHAN, a best-estimate model for pre- and post-dryout calculation in a BWR heated channel. In addition, a comparison with standard experimental void-quality benchmark tube data is performed for the HECHAN code.
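
    As a concrete example of a constitutive void-quality relation of the kind being compared, the sketch below evaluates the classic Zuber-Findlay drift-flux form; it is a generic textbook correlation with illustrative parameter values, not the specific models implemented in RELAP5, FRANCESCA or HECHAN.

        import numpy as np

        def void_fraction_drift_flux(x, G, rho_l, rho_g, C0=1.13, v_gj=0.25):
            # Zuber-Findlay drift-flux void-quality relation:
            #   alpha = x / ( C0 * (x + (rho_g/rho_l) * (1 - x)) + rho_g * v_gj / G )
            # x: flow quality (-), G: mass flux (kg/m^2 s),
            # rho_l/rho_g: liquid/vapour density (kg/m^3),
            # C0: distribution parameter, v_gj: drift velocity (m/s).
            x = np.asarray(x, dtype=float)
            return x / (C0 * (x + (rho_g / rho_l) * (1.0 - x)) + rho_g * v_gj / G)

        # Illustrative BWR-like conditions near 7 MPa: rho_l ~ 740, rho_g ~ 36 kg/m^3
        quality = np.linspace(0.0, 0.3, 7)
        alpha = void_fraction_drift_flux(quality, G=1500.0, rho_l=740.0, rho_g=36.0)
        for xq, a in zip(quality, alpha):
            print(f"x = {xq:5.2f}  ->  void fraction = {a:5.3f}")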

  17. Upgrade of Irradiation Test Capability of the Experimental Fast Reactor Joyo

    NASA Astrophysics Data System (ADS)

    Sekine, Takashi; Aoyama, Takafumi; Suzuki, Soju; Yamashita, Yoshioki

    2003-06-01

    The JOYO MK-II core was operated from 1983 to 2000 as a fast neutron irradiation bed. In order to meet various requirements for irradiation tests supporting FBR development, the JOYO upgrading project, named the MK-III program, was initiated. The irradiation capability of the MK-III core will be about four times larger than that of the MK-II core. Advanced irradiation test subassemblies, such as a capsule-type subassembly and an on-line instrumentation rig, are planned. As an innovative reactor safety system, an irradiation test of the Self-Actuated Shutdown System (SASS) will be conducted. In order to improve the accuracy of the neutron fluence, the core management code system was upgraded, and a Monte Carlo code and Helium Accumulation Fluence Monitors (HAFM) were applied. The MK-III core is planned to achieve initial criticality in July 2003.

  18. Locality Aware Concurrent Start for Stencil Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shrestha, Sunil; Gao, Guang R.; Manzano Franco, Joseph B.

    Stencil computations are at the heart of many physical simulations used in scientific codes. Thus, there exists a plethora of optimization efforts for this family of computations. Among these techniques, tiling techniques that allow concurrent start have proven to be very efficient in providing better performance for these critical kernels. Nevertheless, with many-core designs being the norm, these optimization techniques might not be able to fully exploit locality (both spatial and temporal) on multiple levels of the memory hierarchy without compromising parallelism. It is no longer true that the machine can be seen as a homogeneous collection of nodes with caches, main memory and an interconnect network. New architectural designs exhibit complex groupings of nodes, cores, threads, caches and memory connected by an ever-evolving network-on-chip design. These new designs may benefit greatly from carefully crafted schedules and groupings that encourage parallel actors (i.e. threads, cores or nodes) to be aware of the computational history of other actors in close proximity. In this paper, we provide an efficient tiling technique that allows hierarchical concurrent start for memory-hierarchy-aware tile groups. Each execution schedule and tile shape exploits the available parallelism, load balance and locality present in the given applications. We demonstrate our technique on the Intel Xeon Phi architecture with selected and representative stencil kernels. We show improvement ranging from 5.58% to 31.17% over existing state-of-the-art techniques.
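
    The locality benefit of tiling can be illustrated with a far simpler example than the paper's hierarchical scheme: the sketch below performs one Jacobi sweep either by streaming the whole grid or tile by tile, where each tile's working set is cache-sized and the tiles are mutually independent (and so could start concurrently). This shows spatial blocking only, not the authors' concurrent-start tile shapes.

        import numpy as np

        def jacobi_sweep_naive(a):
            # One 5-point Jacobi sweep over the full interior (streams the whole grid).
            out = a.copy()
            out[1:-1, 1:-1] = 0.25 * (a[:-2, 1:-1] + a[2:, 1:-1]
                                      + a[1:-1, :-2] + a[1:-1, 2:])
            return out

        def jacobi_sweep_tiled(a, tile=64):
            # Same sweep, visiting the interior tile by tile so each working set
            # fits in cache; the tiles are independent of one another.
            out = a.copy()
            n, m = a.shape
            for i0 in range(1, n - 1, tile):
                for j0 in range(1, m - 1, tile):
                    i1, j1 = min(i0 + tile, n - 1), min(j0 + tile, m - 1)
                    out[i0:i1, j0:j1] = 0.25 * (a[i0-1:i1-1, j0:j1] + a[i0+1:i1+1, j0:j1]
                                                + a[i0:i1, j0-1:j1-1] + a[i0:i1, j0+1:j1+1])
            return out

        grid = np.random.default_rng(0).random((512, 512))
        assert np.allclose(jacobi_sweep_naive(grid), jacobi_sweep_tiled(grid))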

  19. Conceptual Core Analysis of Long Life PWR Utilizing Thorium-Uranium Fuel Cycle

    NASA Astrophysics Data System (ADS)

    Rouf; Su'ud, Zaki

    2016-08-01

    A conceptual core analysis of a long-life PWR utilizing thorium-uranium based fuel has been conducted. The purpose of this study is to evaluate the neutronic behavior of a reactor core using combined thorium and enriched uranium fuel. With this fuel composition, the reactor core has a higher conversion ratio than a conventionally fueled core, which allows a longer operation length. The simulation was performed using the SRAC Code System with the SRACLIB-JDL32 library. The calculations were carried out for (Th-U)O2 and (Th-U)C fuel with a uranium fraction of 30 - 40% and gadolinium (Gd2O3) as burnable poison at 0.0125%. The fuel composition was adjusted to obtain a burnup length of 10 - 15 years at a thermal power of 600 - 1000 MWt. Key parameters such as uranium enrichment, fuel volume fraction, and percentage of uranium are evaluated. The core calculation in this study adopted an R-Z geometry divided into 3 regions, each with a different uranium enrichment. The results show the multiplication factor at every burnup step over a 15-year operation length, the power distribution behavior, the power peaking factor, and the conversion ratio. The optimum core design is achieved at a thermal power of 600 MWt, a uranium percentage of 35%, and a U-235 enrichment of 11 - 13%, with a 14-year operation length and axial and radial power peaking factors of about 1.5 and 1.2, respectively.

  20. PRIZMA predictions of in-core detection indications in the VVER-1000 reactor

    NASA Astrophysics Data System (ADS)

    Kandiev, Yadgar Z.; Kashayeva, Elena A.; Malyshin, Gennady N.; Modestov, Dmitry G.; Khatuntsev, Kirill E.

    2014-06-01

    The paper describes calculations performed with the PRIZMA code to predict the indications of in-core rhodium detectors in the VVER-1000 reactor for selected core fragments, with allowance for fuel and rhodium burnout.

  1. Parallel Index and Query for Large Scale Data Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chou, Jerry; Wu, Kesheng; Ruebel, Oliver

    2011-07-18

    Modern scientific datasets present numerous data management and analysis challenges. State-of-the-art index and query technologies are critical for facilitating interactive exploration of large datasets, but numerous challenges remain in terms of designing a system for processing general scientific datasets. The system needs to be able to run on distributed multi-core platforms, efficiently utilize the underlying I/O infrastructure, and scale to massive datasets. We present FastQuery, a novel software framework that addresses these challenges. FastQuery utilizes a state-of-the-art index and query technology (FastBit) and is designed to process massive datasets on modern supercomputing platforms. We apply FastQuery to the processing of a massive 50TB dataset generated by a large-scale accelerator modeling code and demonstrate the scalability of the tool to 11,520 cores. Motivated by the scientific need to search for interesting particles in this dataset, we use our framework to reduce search time from hours to tens of seconds.
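
    To make the index-and-query idea concrete, the toy sketch below builds an equality-encoded bitmap index over binned values and answers a range query by OR-ing bin bitmaps, in the spirit of FastBit; it illustrates the concept only and is not the FastQuery API.

        import numpy as np

        class BitmapIndex:
            # Minimal equality-encoded bitmap index over binned float data.
            # Range queries are answered by OR-ing per-bin bitmaps, so candidate
            # rows are found without rescanning all raw values.
            def __init__(self, values, n_bins=16):
                self.edges = np.linspace(values.min(), values.max(), n_bins + 1)
                bins = np.clip(np.digitize(values, self.edges) - 1, 0, n_bins - 1)
                self.bitmaps = [(bins == b) for b in range(n_bins)]  # one boolean mask per bin

            def query_greater(self, threshold, values):
                hit_bins = np.nonzero(self.edges[1:] > threshold)[0]  # bins that may contain hits
                candidates = np.zeros_like(self.bitmaps[0])
                for b in hit_bins:
                    candidates |= self.bitmaps[b]                     # OR the bin bitmaps
                rows = np.nonzero(candidates)[0]
                return rows[values[rows] > threshold]                 # exact check on candidates only

        rng = np.random.default_rng(1)
        energy = rng.exponential(scale=1.0, size=1_000_000)   # stand-in for a particle dataset
        index = BitmapIndex(energy)
        hits = index.query_greater(6.0, energy)
        assert np.array_equal(hits, np.nonzero(energy > 6.0)[0])
        print(len(hits), "rows matched")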

  2. Self-Powered Forward Error-Correcting Biosensor Based on Integration of Paper-Based Microfluidics and Self-Assembled Quick Response Codes.

    PubMed

    Yuan, Mingquan; Liu, Keng-Ku; Singamaneni, Srikanth; Chakrabartty, Shantanu

    2016-10-01

    This paper extends our previous work on silver-enhancement based self-assembling structures for designing reliable, self-powered biosensors with forward error correcting (FEC) capability. At the core of the proposed approach is the integration of paper-based microfluidics with quick response (QR) codes that can be optically scanned using a smart-phone. The scanned information is first decoded to obtain the location of a web-server which further processes the self-assembled QR image to determine the concentration of target analytes. The integration substrate for the proposed FEC biosensor is polyethylene and the patterning of the QR code on the substrate has been achieved using a combination of low-cost ink-jet printing and a regular ballpoint dispensing pen. A paper-based microfluidics channel has been integrated underneath the substrate for acquiring, mixing and flowing the sample to areas on the substrate where different parts of the code can self-assemble in presence of immobilized gold nanorods. In this paper we demonstrate the proof-of-concept detection using prototypes of QR encoded FEC biosensors.
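
    The role of forward error correction in QR codes can be illustrated with the third-party python-qrcode package (an assumption about tooling, unrelated to the paper's instrumentation): generating a code at error-correction level H means roughly 30% of the modules can be damaged or missing while the code remains decodable, which is what lets a partially self-assembled code still be read. The data URL below is hypothetical.

        # Minimal sketch of QR-level error correction (pip install qrcode[pil]).
        import qrcode

        qr = qrcode.QRCode(
            error_correction=qrcode.constants.ERROR_CORRECT_H,  # level H: ~30% of modules recoverable
            box_size=10,
            border=4,
        )
        qr.add_data("https://example.org/assay?id=DEMO")  # hypothetical web-server URL
        qr.make(fit=True)
        img = qr.make_image(fill_color="black", back_color="white")
        img.save("fec_biosensor_tag.png")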

  3. Determination of the NPP Krško spent fuel decay heat

    NASA Astrophysics Data System (ADS)

    Kromar, Marjan; Kurinčič, Bojan

    2017-07-01

    Nuclear fuel is designed to sustain the fission process in a reactor core. Some of the isotopes formed during fission decay and produce decay heat and radiation. Accurate knowledge of the nuclide inventory producing decay heat is important after reactor shutdown, during fuel storage, and during subsequent reprocessing or disposal. In this paper, the possibility of calculating the fuel isotopic composition and determining the fuel decay heat with the Serpent code is investigated. Serpent is a well-known Monte Carlo code used primarily for the calculation of neutron transport in a reactor, and it has been validated for burnup calculations. In the calculation of fuel decay heat, a different set of isotopes is important than in the neutron transport case. A comparison with the Origen code is performed to verify that Serpent takes into account all isotopes important for assessing the fuel decay heat. After the code validation, a sensitivity study is carried out, analyzing the influence of several factors such as enrichment, fuel temperature, moderator temperature (density), soluble boron concentration, average power, burnable absorbers, and burnup.
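
    Conceptually, a decay-heat calculation of this kind reduces to a summation over the nuclide inventory, P(t) = sum_i lambda_i N_i(0) exp(-lambda_i t) Q_i. The sketch below evaluates that sum for a three-nuclide toy inventory; the half-lives, inventories and per-decay energies are rough illustrative numbers, not evaluated library data or Serpent/Origen output.

        import numpy as np

        LN2 = np.log(2.0)
        YEAR = 3.156e7        # seconds per year
        MEV_TO_W = 1.602e-13  # 1 MeV/s in watts

        nuclides = {          # half-life [s],  atoms N(0),  recoverable energy per decay [MeV]
            "Cs-137": (30.1 * YEAR, 1.0e24, 0.8),
            "Sr-90":  (28.8 * YEAR, 8.0e23, 1.1),   # roughly includes the Y-90 daughter
            "Co-60":  (5.27 * YEAR, 5.0e21, 2.6),
        }

        def decay_heat(t_seconds):
            # P(t) = sum_i lambda_i * N_i(0) * exp(-lambda_i t) * Q_i
            p = 0.0
            for half_life, n0, q_mev in nuclides.values():
                lam = LN2 / half_life
                p += lam * n0 * np.exp(-lam * t_seconds) * q_mev * MEV_TO_W
            return p

        for years in (1, 5, 10, 40):
            print(f"{years:3d} y after discharge: {decay_heat(years * YEAR):8.1f} W")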

  4. The COPERNIC3 project: how AREVA is successfully developing an advanced global fuel rod performance code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garnier, Ch.; Mailhe, P.; Sontheimer, F.

    2007-07-01

    Fuel performance is a key factor for minimizing operating costs in nuclear plants. One of the important aspects of fuel performance is fuel rod design, based upon reliable tools able to verify the safety of current fuel solutions, prevent potential issues in new core managements and guide the invention of tomorrow's fuels. AREVA is developing its future global fuel rod code COPERNIC3, which is able to calculate the thermal-mechanical behavior of advanced fuel rods in nuclear plants. Some of the best practices to achieve this goal are described, by reviewing the three pillars of a fuel rod code: the database, the modelling and the computer and numerical aspects. At first, the COPERNIC3 database content is described, accompanied by the tools developed to effectively exploit the data. Then is given an overview of the main modelling aspects, by emphasizing the thermal, fission gas release and mechanical sub-models. In the last part, numerical solutions are detailed in order to increase the computational performance of the code, with a presentation of software configuration management solutions. (authors)

  5. CoreTSAR: Core Task-Size Adapting Runtime

    DOE PAGES

    Scogland, Thomas R. W.; Feng, Wu-chun; Rountree, Barry; ...

    2014-10-27

    Heterogeneity continues to increase at all levels of computing, with the rise of accelerators such as GPUs, FPGAs, and other co-processors into everything from desktops to supercomputers. As a consequence, efficiently managing such disparate resources has become increasingly complex. CoreTSAR seeks to reduce this complexity by adaptively worksharing parallel-loop regions across compute resources without requiring any transformation of the code within the loop. Our results show performance improvements of up to three-fold over a current state-of-the-art heterogeneous task scheduler, as well as linear performance scaling from a single GPU to four GPUs for many codes. In addition, CoreTSAR demonstrates a robust ability to adapt to both a variety of workloads and underlying system configurations.

  6. Application of a GPU-Assisted Maxwell Code to Electromagnetic Wave Propagation in ITER

    NASA Astrophysics Data System (ADS)

    Kubota, S.; Peebles, W. A.; Woodbury, D.; Johnson, I.; Zolfaghari, A.

    2014-10-01

    The Low Field Side Reflectometer (LSFR) on ITER is envisioned to provide capabilities for electron density profile and fluctuations measurements in both the plasma core and edge. The current design for the Equatorial Port Plug 11 (EPP11) employs seven monostatic antennas for use with both fixed-frequency and swept-frequency systems. The present work examines the characteristics of this layout using the 3-D version of the GPU-Assisted Maxwell Code (GAMC-3D). Previous studies in this area were performed with either 2-D full wave codes or 3-D ray- and beam-tracing. GAMC-3D is based on the FDTD method and can be run with either a fixed-frequency or modulated (e.g. FMCW) source, and with either a stationary or moving target (e.g. Doppler backscattering). The code is designed to run on a single NVIDIA Tesla GPU accelerator, and utilizes a technique based on the moving window method to overcome the size limitation of the onboard memory. Effects such as beam drift, linear mode conversion, and diffraction/scattering will be examined. Comparisons will be made with beam-tracing calculations using the complex eikonal method. Supported by U.S. DoE Grants DE-FG02-99ER54527 and DE-AC02-09CH11466, and the DoE SULI Program at PPPL.
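
    The FDTD method at the heart of GAMC-3D can be sketched in its simplest textbook 1-D form: leapfrog updates of staggered E and H fields in normalized units. The sketch below is that standard Yee scheme only; the 3-D geometry, plasma dielectric response, GPU execution and moving-window technique of GAMC-3D all lie beyond it.

        import numpy as np

        # Textbook 1-D Yee FDTD scheme in normalized units (Courant number 0.5).
        nx, nt, kc = 400, 500, 200
        ex = np.zeros(nx)            # electric field
        hy = np.zeros(nx)            # magnetic field, staggered half a cell from ex

        for n in range(nt):
            ex[1:] += 0.5 * (hy[:-1] - hy[1:])                  # update E from curl H
            ex[kc] += np.exp(-0.5 * ((40.0 - n) / 12.0) ** 2)   # soft Gaussian source
            hy[:-1] += 0.5 * (ex[:-1] - ex[1:])                 # update H from curl E
            # (no absorbing boundaries: pulses reflect at the grid edges)

        print("peak |Ex| after propagation:", float(np.abs(ex).max()))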

  7. ISE: An Integrated Search Environment. The manual

    NASA Technical Reports Server (NTRS)

    Chu, Lon-Chan

    1992-01-01

    Integrated Search Environment (ISE), a software package that implements hierarchical searches with meta-control, is described in this manual. ISE is a collection of problem-independent routines that support the solution of search problems; these core routines control the searches and maintain search-related statistics. By separating the problem-dependent and problem-independent components in ISE, new search methods based on a combination of existing methods can be developed by coding a single master control program. Further, new applications solved by searches can be developed by coding the problem-dependent parts and reusing the problem-independent parts already developed. Potential users of ISE are designers of new application solvers and new search algorithms, and users of experimental application solvers and search algorithms. ISE is designed to be user-friendly and information rich. In this manual, the organization of ISE is described, and several experiments carried out with ISE are also described.

  8. Projected impact of the ICD-10-CM/PCS conversion on longitudinal data and the Joint Commission Core Measures.

    PubMed

    Fenton, Susan H; Benigni, Mary Sue

    2014-01-01

    The transition from ICD-9-CM to ICD-10-CM/PCS is expected to result in longitudinal data discontinuities, as occurred with cause-of-death coding in 1999. The General Equivalence Maps (GEMs), while useful for suggesting potential maps, do not provide guidance regarding the frequency of any matches. Longitudinal data comparisons can only be reliable if they use comparability ratios or factors which have been calculated using records coded in both classification systems. This study utilized 3,969 de-identified dually coded records to examine raw comparability ratios, as well as the comparability ratios for the Joint Commission Core Measures. The raw comparability factor results range from 16.216 for Nicotine dependence, unspecified, uncomplicated to 118.009 for Chronic obstructive pulmonary disease, unspecified. The Joint Commission Core Measure comparability factor results range from 27.15 for Acute Respiratory Failure to 130.16 for Acute Myocardial Infarction. These results indicate significant differences in comparability between ICD-9-CM and ICD-10-CM code assignment, including when the codes are used for external reporting such as the Joint Commission Core Measures. To prevent errors in decision-making and reporting, all stakeholders relying on longitudinal data for measure reporting and other purposes should investigate the impact of the conversion on their data.
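
    A comparability factor of the kind reported here is simply the number of records a measure captures under the new classification divided by the number it captures under the old one, scaled by 100 and computed over the same dually coded records. The sketch below illustrates this with four hypothetical records and a hypothetical AMI value set; none of the codes or counts come from the study.

        # Comparability factor from dually coded records (toy data).
        records = [
            {"icd9": {"410.01"}, "icd10": {"I21.09"}},   # captured by both definitions
            {"icd9": {"410.11"}, "icd10": {"I21.19"}},
            {"icd9": {"486"},    "icd10": {"I21.4"}},    # captured only by the ICD-10 definition
            {"icd9": {"786.50"}, "icd10": {"R07.9"}},    # captured by neither
        ]

        MEASURE_ICD9 = {"410.01", "410.11", "410.91"}    # hypothetical AMI value sets
        MEASURE_ICD10 = {"I21.09", "I21.19", "I21.4"}

        n9 = sum(bool(r["icd9"] & MEASURE_ICD9) for r in records)
        n10 = sum(bool(r["icd10"] & MEASURE_ICD10) for r in records)
        # Values near 100 mean the measure is largely unaffected by the transition.
        print(f"comparability factor = {100.0 * n10 / n9:.2f}")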

  9. Initial results on computational performance of Intel Many Integrated Core (MIC) architecture: implementation of the Weather and Research Forecasting (WRF) Purdue-Lin microphysics scheme

    NASA Astrophysics Data System (ADS)

    Mielikainen, Jarno; Huang, Bormin; Huang, Allen H.

    2014-10-01

    The Purdue-Lin scheme is a relatively sophisticated microphysics scheme in the Weather Research and Forecasting (WRF) model. The scheme includes six classes of hydrometeors: water vapor, cloud water, rain, cloud ice, snow and graupel. The scheme is very suitable for massively parallel computation as there are no interactions among horizontal grid points. In this paper, we accelerate the Purdue-Lin scheme using Intel Many Integrated Core Architecture (MIC) hardware. The Intel Xeon Phi is a high-performance coprocessor consisting of up to 61 cores. The Xeon Phi is connected to a CPU via the PCI Express (PCIe) bus. In this paper, we discuss in detail the code optimization issues encountered while tuning the Purdue-Lin microphysics Fortran code for the Xeon Phi. In particular, getting good performance required utilizing multiple cores and the wide vector operations, and making efficient use of memory. The results show that the optimizations improved performance of the original code on the Xeon Phi 5110P by a factor of 4.2x. Furthermore, the same optimizations improved performance on an Intel Xeon E5-2603 CPU by a factor of 1.2x compared to the original code.

  10. Failure Predictions for VHTR Core Components using a Probabilistic Continuum Damage Mechanics Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fok, Alex

    2013-10-30

    The proposed work addresses the key research need for the development of constitutive models and overall failure models for graphite and high temperature structural materials, with the long-term goal being to maximize the design life of the Next Generation Nuclear Plant (NGNP). To this end, the capability of a Continuum Damage Mechanics (CDM) model, which has been used successfully for modeling fracture of virgin graphite, will be extended as a predictive and design tool for the core components of the very high-temperature reactor (VHTR). Specifically, irradiation and environmental effects pertinent to the VHTR will be incorporated into the model to allow fracture of graphite and ceramic components under in-reactor conditions to be modeled explicitly using the finite element method. The model uses a combined stress-based and fracture mechanics-based failure criterion, so it can simulate both the initiation and propagation of cracks. Modern imaging techniques, such as x-ray computed tomography and digital image correlation, will be used during material testing to help define the baseline material damage parameters. Monte Carlo analysis will be performed to address inherent variations in material properties, the aim being to reduce the arbitrariness and uncertainties associated with the current statistical approach. The results can potentially contribute to the current development of American Society of Mechanical Engineers (ASME) codes for the design and construction of VHTR core components.
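
    The Monte Carlo treatment of property scatter can be illustrated in a few lines: sample component strengths from a Weibull distribution, count failures against a fixed peak stress, and check the estimate against the closed-form Weibull CDF. The modulus and characteristic strength below are illustrative placeholders, not nuclear graphite data.

        import numpy as np

        rng = np.random.default_rng(5)

        m = 8.0                  # Weibull modulus (lower m = more scatter)
        sigma_0 = 30.0           # characteristic strength, MPa (illustrative)
        peak_stress = 18.0       # applied peak stress, MPa (illustrative)
        n = 200_000

        # Sample strengths and count how many fall below the applied stress.
        strengths = sigma_0 * rng.weibull(m, size=n)
        p_fail_mc = np.mean(strengths < peak_stress)
        p_fail_analytic = 1.0 - np.exp(-(peak_stress / sigma_0) ** m)   # Weibull CDF
        print(f"Monte Carlo P_f = {p_fail_mc:.4f}   analytic P_f = {p_fail_analytic:.4f}")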

  11. Design of an MSAT-X mobile transceiver and related base and gateway stations

    NASA Technical Reports Server (NTRS)

    Fang, Russell J. F.; Bhaskar, Udaya; Hemmati, Farhad; Mackenthun, Kenneth M.; Shenoy, Ajit

    1987-01-01

    This paper summarizes the results of a design study of the mobile transceiver, base station, and gateway station for NASA's proposed Mobile Satellite Experiment (MSAT-X). Major ground segment system design issues such as frequency stability control, modulation method, linear predictive coding vocoder algorithm, and error control technique are addressed. The modular and flexible transceiver design is described in detail, including the core, RF/IF, modem, vocoder, forward error correction codec, amplitude-companded single sideband, and input/output modules, as well as the flexible interface. Designs for a three-carrier base station and a 10-carrier gateway station are also discussed, including the interface with the controllers and with the public-switched telephone networks at the gateway station. Functional specifications are given for the transceiver, the base station, and the gateway station.

  12. Design of an MSAT-X mobile transceiver and related base and gateway stations

    NASA Astrophysics Data System (ADS)

    Fang, Russell J. F.; Bhaskar, Udaya; Hemmati, Farhad; Mackenthun, Kenneth M.; Shenoy, Ajit

    This paper summarizes the results of a design study of the mobile transceiver, base station, and gateway station for NASA's proposed Mobile Satellite Experiment (MSAT-X). Major ground segment system design issues such as frequency stability control, modulation method, linear predictive coding vocoder algorithm, and error control technique are addressed. The modular and flexible transceiver design is described in detail, including the core, RF/IF, modem, vocoder, forward error correction codec, amplitude-companded single sideband, and input/output modules, as well as the flexible interface. Designs for a three-carrier base station and a 10-carrier gateway station are also discussed, including the interface with the controllers and with the public-switched telephone networks at the gateway station. Functional specifications are given for the transceiver, the base station, and the gateway station.

  13. Preliminary Analysis of the Transient Reactor Test Facility (TREAT) with PROTEUS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Connaway, H. M.; Lee, C. H.

    The neutron transport code PROTEUS has been used to perform preliminary simulations of the Transient Reactor Test Facility (TREAT). TREAT is an experimental reactor designed for the testing of nuclear fuels and other materials under transient conditions. It operated from 1959 to 1994, when it was placed on non-operational standby. The restart of TREAT to support the U.S. Department of Energy’s resumption of transient testing is currently underway. Both single assembly and assembly-homogenized full core models have been evaluated. Simulations were performed using a historic set of WIMS-ANL-generated cross-sections as well as a new set of Serpent-generated cross-sections. To support this work, further analyses were also performed using additional codes in order to investigate particular aspects of TREAT modeling. DIF3D and the Monte-Carlo codes MCNP and Serpent were utilized in these studies. MCNP and Serpent were used to evaluate the effect of geometry homogenization on the simulation results and to support code-to-code comparisons. New meshes for the PROTEUS simulations were created using the CUBIT toolkit, with additional meshes generated via conversion of selected DIF3D models to support code-to-code verifications. All current analyses have focused on code-to-code verifications, with additional verification and validation studies planned. The analysis of TREAT with PROTEUS-SN is an ongoing project. This report documents the studies that have been performed thus far, and highlights key challenges to address in future work.

  14. Mongoose: Creation of a Rad-Hard MIPS R3000

    NASA Technical Reports Server (NTRS)

    Lincoln, Dan; Smith, Brian

    1993-01-01

    This paper describes the development of a 32-bit, full MIPS R3000 code-compatible Rad-Hard CPU, code named Mongoose. Mongoose progressed from contract award, through the design cycle, to operational silicon in 12 months to meet a space mission for NASA. The goal was the creation of a fully static device capable of operation at the maximum Mil-883 derated speed, worst-case post-rad exposure, with full operational integrity. This included consideration of features for functional enhancements relating to mission compatibility and removal of commercial practices not supported by Rad-Hard technology. Mongoose developed from an evolution of LSI Logic's MIPS-I embedded processor, the LR33000, code named Cobra, to its Rad-Hard 'equivalent', Mongoose. The term 'equivalent' is used to indicate that the core of the processor is functionally identical, allowing the same use and optimizations of the MIPS-I Instruction Set software tool suite for compilation, software program trace, etc. This activity was started in September of 1991 under a contract from NASA-Goddard Space Flight Center (GSFC)-Flight Data Systems. The approach effected a teaming of NASA-GSFC for program development, LSI Logic for system and ASIC design coupled with the Rad-Hard process technology, and Harris (GASD) for Rad-Hard microprocessor design expertise. The program culminated with the generation of Rad-Hard Mongoose prototypes one year later.

  15. Performance comparison of a fiber optic communication system based on optical OFDM and an optical OFDM-MIMO with Alamouti code by using numerical simulations

    NASA Astrophysics Data System (ADS)

    Serpa-Imbett, C. M.; Marín-Alfonso, J.; Gómez-Santamaría, C.; Betancur-Agudelo, L.; Amaya-Fernández, F.

    2013-12-01

    Space division multiplexing in multicore fibers is one of the most promising technologies for supporting the transmissions of next-generation peta-to-exaflop-scale supercomputers and mega data centers, owing to the cost and space savings of the new optical fibers with multiple cores. Additionally, multicore fibers allow photonic signal processing in optical communication systems, taking advantage of mode coupling phenomena. In this work, we have numerically simulated an optical MIMO-OFDM (multiple-input multiple-output orthogonal frequency division multiplexing) system using the Alamouti code, transmitted through a twin-core fiber with low coupling. For comparison, an optical OFDM signal is transmitted through one core of a singlemode fiber, using pilot-aided channel estimation. We compare the transmission performance in the twin-core fiber and in the singlemode fiber in terms of numerical results for the bit-error rate, considering linear propagation and Gaussian noise through an optical fiber link. We carry out an optical fiber transmission of OFDM frames using 8 PSK and 16 QAM, with bit rates of 130 Gb/s and 170 Gb/s, respectively. We obtain a penalty around 4 dB for the 8 PSK transmissions after 100 km of linear fiber optic propagation, for both the singlemode and the twin-core fiber, and a penalty around 6 dB for the 16 QAM transmissions under the same conditions. The transmission in a two-core fiber using Alamouti-coded OFDM-MIMO exhibits better performance, offering a good alternative for mitigating fiber impairments and allowing Alamouti coding to be extended to multichannel systems spatially multiplexed over multicore fibers.
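
    The Alamouti scheme underlying the MIMO transmission can be sketched at baseband for a 2x1 flat channel: two symbols are sent over two slots as (s1, s2) and (-s2*, s1*), and linear combining at the receiver recovers each symbol with the full diversity gain |h1|^2 + |h2|^2 and no cross-interference. The sketch below verifies this numerically; the QPSK symbols, channel statistics and SNR are illustrative, not the paper's fiber simulation.

        import numpy as np

        rng = np.random.default_rng(7)

        def alamouti_demo(n_pairs=4, snr_db=20.0):
            # 2x1 Alamouti space-time block code over a flat complex channel.
            qpsk = np.array([1+1j, 1-1j, -1+1j, -1-1j]) / np.sqrt(2)
            s = rng.choice(qpsk, size=2 * n_pairs)
            h1, h2 = (rng.normal(size=2) + 1j * rng.normal(size=2)) / np.sqrt(2)
            sigma = 10.0 ** (-snr_db / 20.0)

            s_hat = np.empty_like(s)
            for k in range(0, len(s), 2):
                s1, s2 = s[k], s[k + 1]
                # slot 1: the two cores send (s1, s2); slot 2: (-s2*, s1*)
                r1 = h1 * s1 + h2 * s2 + sigma * (rng.normal() + 1j * rng.normal())
                r2 = (-h1 * np.conj(s2) + h2 * np.conj(s1)
                      + sigma * (rng.normal() + 1j * rng.normal()))
                gain = abs(h1) ** 2 + abs(h2) ** 2
                s_hat[k] = (np.conj(h1) * r1 + h2 * np.conj(r2)) / gain       # linear combining
                s_hat[k + 1] = (np.conj(h2) * r1 - h1 * np.conj(r2)) / gain
            return s, s_hat

        sent, est = alamouti_demo()
        print(np.round(est - sent, 2))   # residuals ~ noise level: no cross-interference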

  16. From core to coax: extending core RF modelling to include SOL, Antenna, and PFC

    NASA Astrophysics Data System (ADS)

    Shiraiwa, Syun'ichi

    2017-10-01

    A new technique for the calculation of RF waves in toroidal geometry enables the simultaneous incorporation of antenna geometry, plasma facing components (PFCs), the scrape-off layer (SOL), and core propagation. Traditionally, core RF wave propagation and antenna coupling have been calculated separately, both using rather simplified SOL plasmas. The new approach instead captures wave propagation in the SOL and its interactions with non-conforming PFCs, permitting self-consistent calculation of core absorption and edge power loss, as well as investigation of far- and near-field impurity generation from RF sheaths and of breakdown issues arising from antenna electric fields. Our approach combines the field solutions obtained from a core spectral code with a hot plasma dielectric and an edge FEM code using a cold plasma approximation, via a surface-admittance-like matrix. The approach was verified using the TORIC core ICRF spectral code and the commercial COMSOL FEM package, and was extended to a 3D torus using the open-source scalable MFEM library. The simulation results revealed that as the core wave damping gets weaker, the wave absorption in the edge can become non-negligible. The three-dimensional capabilities with a non-axisymmetric edge are being applied to study the difference in antenna characteristics between the field-aligned and toroidally aligned antennas on Alcator C-Mod, as well as the surface wave excitation on NSTX-U. Work supported by the U.S. DoE, OFES, using User Facility Alcator C-Mod, DE-FC02-99ER54512 and Contract No. DE-FC02-01ER54648.

  17. Development of CFD model for augmented core tripropellant rocket engine

    NASA Astrophysics Data System (ADS)

    Jones, Kenneth M.

    1994-10-01

    The Space Shuttle era has made major advances in technology and vehicle design to the point that the concept of a single-stage-to-orbit (SSTO) vehicle appears more feasible. NASA presently is conducting studies into the feasibility of certain advanced concept rocket engines that could be utilized in a SSTO vehicle. One such concept is a tripropellant system which burns kerosene and hydrogen initially and at altitude switches to hydrogen. This system will attain a larger mass fraction because LOX-kerosene engines have a greater average propellant density and greater thrust-to-weight ratio. This report describes the investigation to model the tripropellant augmented core engine. The physical aspects of the engine, the CFD code employed, and results of the numerical model for a single modular thruster are discussed.

  18. Multi level optimization of burnable poison utilization for advanced PWR fuel management

    NASA Astrophysics Data System (ADS)

    Yilmaz, Serkan

    The objective of this study was to develop a unique methodology and a practical tool for designing the burnable poison (BP) pattern for a given PWR core. Two techniques were studied in developing this tool. First, a deterministic technique called the Modified Power Shape Forced Diffusion (MPSFD) method, followed by a fine-tuning algorithm based on some heuristic rules, was developed to achieve this goal. Second, an efficient and practical genetic algorithm (GA) tool was developed and applied successfully to the BP placement optimization problem for a reference Three Mile Island-1 (TMI-1) core. This thesis presents the step-by-step progress in developing such a tool. The developed deterministic method performed as expected, and the GA technique produced excellent BP designs. It was discovered that the Beginning of Cycle (BOC) Kinf of a BP fuel assembly (FA) design is a good filter to eliminate invalid BP designs created during the optimization process. By eliminating all BP designs having a BOC Kinf above a set limit, the computational time was greatly reduced, since the evaluation with reactor physics calculations for an invalid solution is canceled. Moreover, the GA was applied to develop the BP loading pattern so as to minimize the total gadolinium (Gd) amount in the core together with the residual binding at End-of-Cycle (EOC), while keeping the maximum peak pin power during core depletion and the soluble boron concentration at BOC below their limit values. The number of UO2/Gd2O3 pins and the Gd2O3 concentrations for each fresh fuel location in the core are the decision variables, and the total amount of Gd in the core and the maximum peak pin power during core depletion enter the fitness functions. The use of different fitness function definitions and forcing the solution movement towards the desired region in the solution space accelerated the GA runs. Special emphasis is given to minimizing the residual binding, to increase core lifetime, as well as to minimizing the total Gd amount in the core. The GA code produced many good solutions that satisfy all of the design constraints. For these solutions, the EOC soluble boron concentration ranges from 68.9 to 97.2 ppm. It is important to note that the difference of 28.3 ppm between the best and the worst solution in the good-solutions region represents a potential saving of 12.5 Effective Full Power Days (EFPD) in cycle length. As a comparison, the best BP loading design has a soluble boron concentration of 97.2 ppm at EOC, while the BP loading with available vendors' U/Gd FA designs has 94.4 ppm at EOC. It was estimated that the difference of 2.8 ppm reflects a potential saving of 1.25 EFPD in cycle length. Moreover, the total Gd amount was reduced by 6.89% in mass, which provides extra savings in fuel cost compared to the BP loading pattern with available vendors' U/Gd FA designs. (Abstract shortened by UMI.)
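
    The GA structure described above can be illustrated with a toy skeleton: integer-coded designs (pin counts and concentration indices per location), truncation selection, one-point crossover, mutation, and a fitness that minimizes total Gd while filtering out designs whose BOC Kinf exceeds a limit. The "physics" below is a crude linear surrogate invented for the sketch, not the thesis' reactor physics calculations, and every parameter is illustrative.

        import numpy as np

        rng = np.random.default_rng(3)
        N_LOC, MAX_PINS = 12, 16                                  # toy design space
        CONCS = np.array([0.0, 2.0, 4.0, 6.0, 8.0])               # Gd2O3 concentration levels

        def random_design():
            return np.stack([rng.integers(0, MAX_PINS + 1, N_LOC),   # Gd pins per location
                             rng.choice(len(CONCS), N_LOC)])         # concentration index

        def boc_kinf(d):
            # Crude surrogate: more poison lowers BOC Kinf (illustrative only).
            return 1.30 - 0.004 * (d[0] * CONCS[d[1]]).sum() / N_LOC

        def fitness(d):
            # Minimize total Gd; filter out designs whose surrogate Kinf is too high.
            total_gd = (d[0] * CONCS[d[1]]).sum()
            return total_gd + (1e6 if boc_kinf(d) > 1.20 else 0.0)

        pop = [random_design() for _ in range(40)]
        for gen in range(200):
            pop.sort(key=fitness)
            parents = pop[:20]                                    # truncation selection
            children = []
            for _ in range(20):
                a, b = rng.choice(20, 2, replace=False)
                cut = rng.integers(1, N_LOC)                      # one-point crossover
                child = np.concatenate([parents[a][:, :cut], parents[b][:, cut:]], axis=1)
                child[1, rng.integers(N_LOC)] = rng.choice(len(CONCS))   # mutation
                children.append(child)
            pop = parents + children

        best = min(pop, key=fitness)
        total_gd = (best[0] * CONCS[best[1]]).sum()
        print("total Gd (arb. units):", total_gd, " surrogate BOC Kinf:", round(boc_kinf(best), 3))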

  19. Encoder fault analysis system based on Moire fringe error signal

    NASA Astrophysics Data System (ADS)

    Gao, Xu; Chen, Wei; Wan, Qiu-hua; Lu, Xin-ran; Xie, Chun-yu

    2018-02-01

    To address the faults and erroneous codes that arise in the practical application of photoelectric shaft encoders, a fast and accurate encoder fault analysis system is developed based on Moire fringe photoelectric signal processing. A DSP28335 is selected as the core processor, and a high-speed serial A/D converter acquisition card is used. A temperature-measuring circuit based on the AD7420 is designed. Discrete samples of the Moire fringe error signal are collected at different temperatures and sent to the host computer through wireless transmission. The error signal quality index and the fault type are displayed on the host computer based on the error signal identification method. The error signal quality can be used to diagnose the state of the error code through the human-machine interface.

  20. Are Military and Medical Ethics Necessarily Incompatible? A Canadian Case Study.

    PubMed

    Rochon, Christiane; Williams-Jones, Bryn

    2016-12-01

    Military physicians are often perceived to be in a position of 'dual loyalty' because they have responsibilities towards their patients but also towards their employer, the military institution. Further, they have to ascribe to and are bound by two distinct codes of ethics (i.e., medical and military), each with its own set of values and duties, that could at first glance be considered to be very different or even incompatible. How, then, can military physicians reconcile these two codes of ethics and their distinct professional/institutional values, and assume their responsibilities towards both their patients and the military institution? To clarify this situation, and to show how such a reconciliation might be possible, we compared the history and content of two national professional codes of ethics: the Defence Ethics of the Canadian Armed Forces and the Code of Ethics of the Canadian Medical Association. Interestingly, even if the medical code is more focused on duties and responsibility while the military code is more focused on core values and is supported by a comprehensive ethical training program, they also have many elements in common. Further, both are based on the same core values of loyalty and integrity, and they are broad in scope but are relatively flexible in application. While there are still important sources of tension between and limits within these two codes of ethics, there are fewer differences than may appear at first glance because the core values and principles of military and medical ethics are not so different.

  1. Square lattice honeycomb reactor for space power and propulsion

    NASA Astrophysics Data System (ADS)

    Gouw, Reza; Anghaie, Samim

    2000-01-01

    The most recent nuclear design study at the Innovative Nuclear Space Power and Propulsion Institute (INSPI) is the Moderated Square-Lattice Honeycomb (M-SLHC) reactor design utilizing a solid solution of ternary carbide fuels. The reactor is fueled with a solid solution of 93% enriched (U,Zr,Nb)C. The square-lattice honeycomb design provides high strength and is amenable to the processing complexities of these ultrahigh temperature fuels. The optimum core configuration requires a balance between high specific impulse and thrust level performance while maintaining the temperature and strength limits of the fuel. The M-SLHC design is based on a cylindrical core with a critical radius and length of 37 cm and 50 cm, respectively. The design utilizes zirconium hydride as moderator. The fuel subassemblies are designed as cylindrical tubes 12 cm in diameter and 10 cm in length. Five fuel subassemblies are stacked axially to form one complete fuel assembly. These fuel assemblies are then arranged in a circular pattern to form two fuel regions: the first consists of six fuel assemblies, and the second of 18. A 10-cm radial beryllium reflector, in addition to a 10-cm top axial beryllium reflector, is used to reduce neutron leakage from the system. To perform the nuclear design analysis of the M-SLHC design, a series of neutron transport and diffusion codes are used. To optimize the system design, five axial regions are specified, and in each axial region the temperature and fuel density are varied. The axial and radial power distributions for the system are calculated, as well as the axial and radial flux distributions and the temperature coefficients of the system. A water submersion accident scenario is also analyzed. Results of the nuclear design analysis indicate that a compact core can be designed based on ternary uranium carbide square-lattice honeycomb fuel, which provides a relatively high thrust-to-weight ratio.
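
    A back-of-the-envelope check on a bare finite cylinder of roughly these dimensions can be done with one-group diffusion theory, using the geometric buckling of a cylinder and k_eff = k_inf / (1 + M^2 B^2). The k_inf and migration area in the sketch below are illustrative placeholders, not (U,Zr,Nb)C data, and the crude "reflector savings" adjustment only gestures at the effect of the beryllium reflector.

        import numpy as np

        def k_eff(radius_cm, height_cm, k_inf=1.35, m2_cm2=35.0):
            # One-group, bare finite-cylinder estimate:
            #   geometric buckling  B^2 = (2.405 / R)^2 + (pi / H)^2
            #   k_eff ~ k_inf / (1 + M^2 B^2), with migration area M^2.
            b2 = (2.405 / radius_cm) ** 2 + (np.pi / height_cm) ** 2
            return k_inf / (1.0 + m2_cm2 * b2)

        print(f"k_eff(R=37 cm, H=50 cm) = {k_eff(37.0, 50.0):.3f}")
        # Crudely mimic reflector savings by enlarging the effective dimensions:
        print(f"with crude reflector savings: {k_eff(37.0 + 5.0, 50.0 + 5.0):.3f}")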

  2. Molecular diagnostic development for begomovirus-betasatellite complexes undergoing diversification: A case study.

    PubMed

    Brown, Judith K; Ur-Rehman, Muhammad Zia; Avelar, Sofia; Chingandu, N; Hameed, Usman; Haider, Saleem; Ilyas, Muhammad

    2017-09-15

    At least five begomoviral species that cause leaf curl disease of cotton have emerged recently in Asia and Africa, reducing fiber quality and yield. The potential for the spread of these viruses to other cotton-vegetable growing regions throughout the world is extensive, owing to routine, global transport of alternative hosts of the leaf curl viruses, especially ornamentals. The research reported here describes the design and validation of polymerase chain reaction (PCR) primers undertaken to facilitate molecular detection of the two most prevalent leaf curl-associated begomovirus-betasatellite complexes in the Indian Subcontinent and Africa, the Cotton leaf curl Kokhran virus-Burewala strain and Cotton leaf curl Gezira virus, endemic to Asia and Africa, respectively. Ongoing genomic diversification of these begomoviral-satellite complexes was evident based on nucleotide sequence alignments and analysis of single nucleotide polymorphisms, both factors that created new challenges for primer design. The additional requirement for species-, strain-, and betasatellite-specific primers imposes further constraints on primer design and validation, because the large number of related species and strains extant in the 'core leaf curl virus complex', now with expanded distribution in south Asia, the Pacific region, and the Africa-Arabian Peninsula, have relatively highly conserved coding and non-coding regions, which precludes much of the genome-betasatellite sequence when selecting primer 'targets'. Here, PCR primers were successfully designed and validated for detection of cloned viral genomes and betasatellites for representative 'core leaf curl' strains and species, distant relatives, and total DNA isolated from selected plant species. The application of molecular diagnostics to screen plant imports prior to export or release from ports of entry is expected to greatly reduce the likelihood of exotic leaf curl virus introductions that could dramatically affect the production of cotton as well as vegetable and ornamental crop hosts.

  3. VENTURE/PC manual: A multidimensional multigroup neutron diffusion code system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shapiro, A.; Huria, H.C.; Cho, K.W.

    1991-12-01

    VENTURE/PC is a recompilation of part of the Oak Ridge BOLD VENTURE code system, which will operate on an IBM PC or compatible computer. Neutron diffusion theory solutions are obtained for multidimensional, multigroup problems. This manual contains information associated with operating the code system. The purpose of the various modules used in the code system, and the input for these modules, are discussed. The PC code structure is also given. Version 2 included several enhancements not given in the original version of the code. In particular, flux iterations can be done in core rather than by reading and writing to disk, for problems which allow sufficient memory for such in-core iterations. This speeds up the iteration process. Version 3 does not include any of the special processors used in the previous versions. These special processors utilized formatted input for various elements of the code system. All such input data is now entered through the Input Processor, which produces standard interface files for the various modules in the code system. In addition, a Standard Interface File Handbook is included in the documentation which is distributed with the code, to assist in developing the input for the Input Processor.

  4. Comparative study between single core model and detail core model of CFD modelling on reactor core cooling behaviour

    NASA Astrophysics Data System (ADS)

    Darmawan, R.

    2018-01-01

    The nuclear power industry has been facing uncertainties since the unfortunate accident at the Fukushima Daiichi Nuclear Power Plant. The issue of nuclear power plant safety has become the major hindrance in the planning of nuclear power programs for new-build countries. Thus, understanding the behaviour of reactor systems is very important to ensure the continuous development and improvement of reactor safety. Throughout the development of nuclear reactor technology, investigation and analysis of reactor safety have gone through several phases. In the early days, analytical and experimental methods were employed. For the last four decades, 1D system-level codes have been widely used. The continuous development of nuclear reactor technology has brought about more complex systems and processes of nuclear reactor operation, and more spatially detailed simulation codes are needed to assess these new reactors. Recently, 2D and 3D system-level codes such as CFD have been explored. This paper discusses a comparative study of two different approaches to CFD modelling of reactor core cooling behaviour.

  5. HACC: Extreme Scaling and Performance Across Diverse Architectures

    NASA Astrophysics Data System (ADS)

    Habib, Salman; Morozov, Vitali; Frontiere, Nicholas; Finkel, Hal; Pope, Adrian; Heitmann, Katrin

    2013-11-01

    Supercomputing is evolving towards hybrid and accelerator-based architectures with millions of cores. The HACC (Hardware/Hybrid Accelerated Cosmology Code) framework exploits this diverse landscape at the largest scales of problem size, obtaining high scalability and sustained performance. Developed to satisfy the science requirements of cosmological surveys, HACC melds particle and grid methods using a novel algorithmic structure that flexibly maps across architectures, including CPU/GPU, multi/many-core, and Blue Gene systems. We demonstrate the success of HACC on two very different machines, the CPU/GPU system Titan and the BG/Q systems Sequoia and Mira, attaining unprecedented levels of scalable performance. We demonstrate strong and weak scaling on Titan, obtaining up to 99.2% parallel efficiency, evolving 1.1 trillion particles. On Sequoia, we reach 13.94 PFlops (69.2% of peak) and 90% parallel efficiency on 1,572,864 cores, with 3.6 trillion particles, the largest cosmological benchmark yet performed. HACC design concepts are applicable to several other supercomputer applications.

  6. Fault Mitigation Schemes for Future Spaceflight Multicore Processors

    NASA Technical Reports Server (NTRS)

    Alexander, James W.; Clement, Bradley J.; Gostelow, Kim P.; Lai, John Y.

    2012-01-01

    Future planetary exploration missions demand significant advances in on-board computing capabilities over current avionics architectures based on a single-core processing element. The state-of-the-art multi-core processor provides much promise in meeting such challenges while introducing new fault tolerance problems when applied to space missions. Software-based schemes are being presented in this paper that can achieve system-level fault mitigation beyond that provided by radiation-hard-by-design (RHBD). For mission and time critical applications such as the Terrain Relative Navigation (TRN) for planetary or small body navigation, and landing, a range of fault tolerance methods can be adapted by the application. The software methods being investigated include Error Correction Code (ECC) for data packet routing between cores, virtual network routing, Triple Modular Redundancy (TMR), and Algorithm-Based Fault Tolerance (ABFT). A robust fault tolerance framework that provides fail-operational behavior under hard real-time constraints and graceful degradation will be demonstrated using TRN executing on a commercial Tilera(R) processor with simulated fault injections.
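
    One of the software schemes listed above, Triple Modular Redundancy, reduces to a bitwise majority vote across three redundant copies of a value; a bit is corrupted in the voted result only if two copies are upset in the same position. The toy sketch below injects random bit flips and votes; the flip probability and seed are arbitrary.

        import random

        random.seed(11)

        def bitflip(word, p=0.05):
            # Simulated radiation upset: flip each of 32 bits with probability p.
            for b in range(32):
                if random.random() < p:
                    word ^= 1 << b
            return word

        def tmr_vote(a, b, c):
            # Bitwise majority vote: a bit survives if at least two copies agree.
            return (a & b) | (a & c) | (b & c)

        correct = 0xDEADBEEF
        copies = [bitflip(correct) for _ in range(3)]   # three independently upset copies
        voted = tmr_vote(*copies)
        print([hex(c) for c in copies], "->", hex(voted), "ok:", voted == correct)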

  7. Toward performance portability of the Albany finite element analysis code using the Kokkos library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Demeshko, Irina; Watkins, Jerry; Tezaur, Irina K.

    Performance portability on heterogeneous high-performance computing (HPC) systems is a major challenge faced today by code developers: parallel code needs to be executed correctly as well as with high performance on machines with different architectures, operating systems, and software libraries. The finite element method (FEM) is a popular and flexible method for discretizing partial differential equations arising in a wide variety of scientific, engineering, and industrial applications that require HPC. This paper presents some preliminary results pertaining to our development of a performance portable implementation of the FEM-based Albany code. Performance portability is achieved using the Kokkos library. We present performance results for the Aeras global atmosphere dynamical core module in Albany. Finally, numerical experiments show that our single code implementation gives reasonable performance across three multicore/many-core architectures: NVIDIA General Processing Units (GPU’s), Intel Xeon Phis, and multicore CPUs.

  8. Toward performance portability of the Albany finite element analysis code using the Kokkos library

    DOE PAGES

    Demeshko, Irina; Watkins, Jerry; Tezaur, Irina K.; ...

    2018-02-05

    Performance portability on heterogeneous high-performance computing (HPC) systems is a major challenge faced today by code developers: parallel code needs to be executed correctly as well as with high performance on machines with different architectures, operating systems, and software libraries. The finite element method (FEM) is a popular and flexible method for discretizing partial differential equations arising in a wide variety of scientific, engineering, and industrial applications that require HPC. This paper presents some preliminary results pertaining to our development of a performance portable implementation of the FEM-based Albany code. Performance portability is achieved using the Kokkos library. We present performance results for the Aeras global atmosphere dynamical core module in Albany. Finally, numerical experiments show that our single code implementation gives reasonable performance across three multicore/many-core architectures: NVIDIA General Processing Units (GPU’s), Intel Xeon Phis, and multicore CPUs.

  9. Depletion optimization of lumped burnable poisons in pressurized water reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kodah, Z.H.

    1982-01-01

    Techniques were developed to construct a set of basic poison depletion curves which deplete in a monotonic manner. These curves were combined to match a required optimized depletion profile by utilizing either linear or non-linear programming methods. Three computer codes, LEOPARD, XSDRN, and EXTERMINATOR-2, were used in the analyses. A depletion routine was developed and incorporated into the XSDRN code to allow the depletion of fuel, fission products, and burnable poisons. The Three Mile Island Unit-1 reactor core was used in this work as a typical PWR core. Two fundamental burnable poison rod designs were studied: a solid cylindrical poison rod and an annular cylindrical poison rod with water filling the central region. These two designs have either a uniform mixture of burnable poisons or lumped spheroids of burnable poisons in the poison region. Boron and gadolinium are the two burnable poisons which were investigated in this project. Thermal self-shielding factor calculations for solid and annular poison rods were conducted. Also, expressions for overall thermal self-shielding factors for one or more size groups of poison spheroids inside solid and annular poison rods were derived and studied. Poison spheroids deplete at a slower rate than the poison mixture because each spheroid exhibits some self-shielding effects of its own. The larger the spheroid, the higher the self-shielding effects due to the increase in poison concentration.

  10. XGC developments for a more efficient XGC-GENE code coupling

    NASA Astrophysics Data System (ADS)

    Dominski, Julien; Hager, Robert; Ku, Seung-Hoe; Chang, Cs

    2017-10-01

    In the Exascale Computing Program, the High-Fidelity Whole Device Modeling project initially aims at delivering a tightly coupled simulation of plasma neoclassical and turbulence dynamics from the core to the edge of the tokamak. To permit such simulations, the gyrokinetic codes GENE and XGC will be coupled together, and numerical efforts are being made to improve the agreement of the numerical schemes in the coupling region. One of the difficulties of coupling these codes is the incompatibility of their grids: GENE is a continuum grid-based code, while XGC is a particle-in-cell code using an unstructured triangular mesh. A field-aligned filter has therefore been implemented in XGC. Although XGC originally had an approximately field-following mesh, this field-aligned filter permits a perturbation discretization closer to the one solved in the field-aligned code GENE. Additionally, new XGC gyro-averaging matrices are implemented on a velocity grid adapted to the plasma properties, thus ensuring the same accuracy from the core to the edge regions.

  11. Status of VICTORIA: NRC peer review and recent code applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bixler, N.E.; Schaperow, J.H.

    1997-12-01

    VICTORIA is a mechanistic computer code designed to analyze fission product behavior within a nuclear reactor coolant system (RCS) during a severe accident. It provides detailed predictions of the release of radioactive and nonradioactive materials from the reactor core and the transport and deposition of these materials within the RCS. A summary of the results and recommendations of an independent peer review of VICTORIA by the US Nuclear Regulatory Commission (NRC) is presented, along with recent applications of the code. The latter include analyses of a temperature-induced steam generator tube rupture sequence and post-test analyses of the Phebus FPT-1 test. The next planned Phebus test, FPT-4, will focus on fission product releases from a rubble bed, especially those of the less-volatile elements, and on the speciation of the released elements. Pretest analyses using VICTORIA to estimate the magnitude and timing of releases are presented. The predicted release of uranium is a matter of particular importance because of concern about filter plugging during the test.

  12. Embedded Palmprint Recognition System Using OMAP 3530

    PubMed Central

    Shen, Linlin; Wu, Shipei; Zheng, Songhao; Ji, Zhen

    2012-01-01

    We have proposed in this paper an embedded palmprint recognition system using the dual-core OMAP 3530 platform. An improved algorithm based on palm code was proposed first. In this method, a Gabor wavelet is first convolved with the palmprint image to produce a response image, where local binary patterns are then applied to code the relation among the magnitude of the wavelet response at the central pixel with that of its neighbors. The method is fully tested using the public PolyU palmprint database. While palm code achieves only about 89% accuracy, over 96% accuracy is achieved by the proposed G-LBP approach. The proposed algorithm was then deployed to the DSP processor of the OMAP 3530 to work together with the ARM processor for feature extraction. When complicated algorithms run on the DSP processor, the ARM processor can focus on image capture, user interface and peripheral control. Integrated with an image sensing module and central processing board, the designed device can achieve accurate and real time performance. PMID:22438721
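
    The G-LBP idea pairs a Gabor wavelet response with local binary patterns; the sketch below reproduces that pipeline with scikit-image (an assumed dependency), using a stock test image in place of a palmprint and illustrative filter/LBP parameters rather than the authors' settings.

        import numpy as np
        from skimage.data import camera
        from skimage.feature import local_binary_pattern
        from skimage.filters import gabor

        # Gabor-filter the image, then code each pixel by comparing its local
        # magnitude response with its neighbours via LBP; the code histogram
        # serves as the matching feature.
        image = camera().astype(float)
        real, imag = gabor(image, frequency=0.1)      # complex Gabor wavelet response
        magnitude = np.hypot(real, imag)

        # LBP expects an integer-valued image; rescale the magnitude to 8 bits.
        mag8 = (255 * (magnitude - magnitude.min())
                / (np.ptp(magnitude) + 1e-12)).astype(np.uint8)
        codes = local_binary_pattern(mag8, P=8, R=1, method="uniform")
        hist, _ = np.histogram(codes, bins=int(codes.max()) + 1, density=True)
        print("G-LBP feature vector length:", hist.size)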

  13. Embedded palmprint recognition system using OMAP 3530.

    PubMed

    Shen, Linlin; Wu, Shipei; Zheng, Songhao; Ji, Zhen

    2012-01-01

    We have proposed in this paper an embedded palmprint recognition system using the dual-core OMAP 3530 platform. An improved algorithm based on palm code was proposed first. In this method, a Gabor wavelet is first convolved with the palmprint image to produce a response image, where local binary patterns are then applied to code the relation between the magnitude of the wavelet response at the central pixel and that of its neighbors. The method is fully tested using the public PolyU palmprint database. While palm code achieves only about 89% accuracy, over 96% accuracy is achieved by the proposed G-LBP approach. The proposed algorithm was then deployed to the DSP processor of OMAP 3530, which works together with the ARM processor for feature extraction. When complicated algorithms run on the DSP processor, the ARM processor can focus on image capture, the user interface, and peripheral control. Integrated with an image sensing module and central processing board, the designed device can achieve accurate and real-time performance.

  14. A computer program for processing impedance cardiographic data: Improving accuracy through user-interactive software

    NASA Technical Reports Server (NTRS)

    Cowings, Patricia S.; Naifeh, Karen; Thrasher, Chet

    1988-01-01

    This report contains the source code and documentation for a computer program used to process impedance cardiography data. The cardiodynamic measures derived from impedance cardiography are ventricular stroke volume, cardiac output, cardiac index, and the Heather index. The program digitizes data collected from the Minnesota Impedance Cardiograph, electrocardiography (ECG), and respiratory cycles and then stores these data on hard disk. It computes the cardiodynamic functions using interactive graphics and stores the means and standard deviations of each 15-sec data epoch on floppy disk. This software was designed on a Digital PRO380 microcomputer and used version 2.0 of P/OS, with (minimally) a 4-channel 16-bit analog/digital (A/D) converter. The applications software is written in FORTRAN 77 and uses Digital's Pro-Tool Kit Real Time Interface Library, CORE Graphic Library, and laboratory routines. The source code can be readily modified to accommodate alternative detection, A/D conversion, and interactive graphics. The object code, utilizing overlays and multitasking, has a maximum of 50 Kbytes.
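
    The per-epoch reduction described above, means and standard deviations over successive 15-second windows of the digitized signals, is simple to state; the sketch below is a modern illustration of that step (not the original FORTRAN 77 code, and the sampling rate fs is an assumed parameter of the A/D setup).

        import numpy as np

        def epoch_stats(signal, fs, epoch_s=15.0):
            """Mean and standard deviation of each epoch of a digitized signal.

            signal  : 1D array of samples from one A/D channel
            fs      : sampling rate in Hz (assumed known from the A/D setup)
            epoch_s : epoch length in seconds (15 s in the report above)
            """
            n = int(fs * epoch_s)                       # samples per epoch
            n_epochs = len(signal) // n                 # drop a partial tail
            epochs = signal[:n_epochs * n].reshape(n_epochs, n)
            return epochs.mean(axis=1), epochs.std(axis=1)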

  15. Status and future of the 3D MAFIA group of codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ebeling, F.; Klatt, R.; Krawzcyk, F.

    1988-12-01

    The group of fully three-dimensional computer codes for solving Maxwell's equations for a wide range of applications, MAFIA, is already well established. Extensive comparisons with measurements have demonstrated the accuracy of the computations. A large number of components have been designed for accelerators, such as kicker magnets, non-cylindrical cavities, ferrite-loaded cavities, vacuum chambers with slots and transitions, etc. The latest additions to the system include a new static solver that can calculate 3D magneto- and electrostatic fields, and a self-consistent version of the 2D-BCI that solves the field equations and the equations of motion in parallel. Work on new eddy-current modules has started, which will allow treatment of laminated and/or solid iron cores excited by low-frequency currents. Based on our experience with the present releases 1 and 2, we have started a complete revision of the whole user interface and data structure, which will make the codes even more user-friendly and flexible.

  16. Implementation of metal-friendly EAM/FS-type semi-empirical potentials in HOOMD-blue: A GPU-accelerated molecular dynamics software

    NASA Astrophysics Data System (ADS)

    Yang, Lin; Zhang, Feng; Wang, Cai-Zhuang; Ho, Kai-Ming; Travesset, Alex

    2018-04-01

    We present an implementation of EAM and FS interatomic potentials, which are widely used in simulating metallic systems, in HOOMD-blue, a software package designed to perform classical molecular dynamics simulations using GPU acceleration. We first discuss the details of our implementation and then report extensive benchmark tests. We demonstrate that single-precision floating point operations efficiently implemented on GPUs can produce sufficient accuracy when compared against double-precision codes, as demonstrated in test calculations of the glass-transition temperature of Cu64.5Zr35.5 and the pair correlation function g(r) of liquid Ni3Al. Our code scales well with the size of the simulated system on NVIDIA Tesla M40 and P100 GPUs. Compared with the popular LAMMPS software running on 32 cores of AMD Opteron 6220 processors, the GPU/CPU performance ratio can reach as high as 4.6. The source code can be accessed through the HOOMD-blue web page for free by any interested user.
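
    For reference, the total energy evaluated by such potentials has the standard embedded-atom form shown below (in LaTeX; F_i is the embedding function, rho_j the electron-density contribution of neighbor j, and phi_ij the pair interaction; the FS variant differs mainly in allowing pair-specific density functions):

        E = \sum_i F_i\!\left( \sum_{j \neq i} \rho_j(r_{ij}) \right)
            + \frac{1}{2} \sum_i \sum_{j \neq i} \phi_{ij}(r_{ij})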

  17. Performance Prediction Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chennupati, Gopinath; Santhi, Nandakishore; Eidenbenz, Stephen

    The Performance Prediction Toolkit (PPT) is a scalable co-design tool that contains the hardware and middleware models, which accept proxy applications as input for runtime prediction. PPT relies on Simian, a parallel discrete event simulation engine in Python or Lua, that uses the process concept, where each computing unit (host, node, core) is a Simian entity. Processes perform their tasks through message exchanges to remain active, sleep, wake up, begin, and end. The PPT hardware model of a compute core (such as a Haswell core) consists of a set of parameters, such as clock speed, memory hierarchy levels, their respective sizes, cache lines, access times for different cache levels, average cycle counts of ALU operations, etc. These parameters are ideally read off a spec sheet or are learned using regression models fitted to hardware counter (PAPI) data. The compute core model offers an API to the software model, a function called time_compute(), which takes as input a tasklist. A tasklist is an unordered set of ALU and other CPU-type operations (in particular virtual memory loads and stores). The PPT application model mimics the loop structure of the application and replaces the computational kernels with a call to the hardware model's time_compute() function, giving tasklists as input that model the compute kernel. A PPT application model thus consists of tasklists representing kernels and the higher-level loop structure that we like to think of as pseudo code. The key challenge for the hardware model's time_compute() function is to translate virtual memory accesses into actual cache hierarchy level hits and misses. PPT also contains another CPU core level hardware model, the Analytical Memory Model (AMM). The AMM solves this challenge soundly; our previous alternatives explicitly included the L1, L2, and L3 hit rates as inputs to the tasklists. Explicit hit rates inevitably only reflect the application modeler's best guess, perhaps informed by a few small test problems using hardware counters; also, hard-coded hit rates make the hardware model insensitive to changes in cache sizes. Alternatively, we use reuse distance distributions in the tasklists. In general, reuse profiles require the application modeler to run a very expensive trace analysis on the real code that realistically can be done at best for small examples.
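
    The tasklist/time_compute() interplay can be sketched with a toy model (illustrative only; the class name, all parameter values, and the use of explicit per-level hit rates are assumptions, and PPT's AMM replaces such hit rates with reuse-distance profiles): the core model converts operation counts into cycles using per-level access costs, then divides by the clock speed.

        # Toy sketch of a PPT-style core model (made-up parameters).
        class ToyCore:
            def __init__(self, clock_hz=2.3e9, alu_cycles=1.0,
                         cache_cycles=(4, 12, 36), dram_cycles=200):
                self.clock_hz = clock_hz
                self.alu_cycles = alu_cycles
                self.cache_cycles = cache_cycles   # L1, L2, L3 access costs
                self.dram_cycles = dram_cycles

            def time_compute(self, tasklist):
                """tasklist: dict of ALU op and memory access counts plus
                explicit hit rates (hit_rates[i] = fraction of accesses
                reaching level i that hit there) -- the simplification
                the AMM avoids."""
                cycles = tasklist["alu_ops"] * self.alu_cycles
                remaining = tasklist["mem_accesses"]
                for hit_rate, cost in zip(tasklist["hit_rates"],
                                          self.cache_cycles):
                    hits = remaining * hit_rate
                    cycles += hits * cost
                    remaining -= hits
                cycles += remaining * self.dram_cycles   # misses go to DRAM
                return cycles / self.clock_hz            # predicted seconds

        # application model: loop structure kept, kernel replaced by a tasklist
        core = ToyCore()
        t = sum(core.time_compute({"alu_ops": 1e6, "mem_accesses": 2e5,
                                   "hit_rates": (0.9, 0.7, 0.5)})
                for _ in range(100))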

  18. A Dosimetry Assessment for the Core Restraint of an Advanced Gas Cooled Reactor

    NASA Astrophysics Data System (ADS)

    Thornton, D. A.; Allen, D. A.; Tyrrell, R. J.; Meese, T. C.; Huggon, A. P.; Whiley, G. S.; Mossop, J. R.

    2009-08-01

    This paper describes calculations of neutron damage rates within the core restraint structures of Advanced Gas Cooled Reactors (AGRs). Using advanced features of the Monte Carlo radiation transport code MCBEND, and neutron source data from core follow calculations performed with the reactor physics code PANTHER, a detailed model of the reactor cores of two of British Energy's AGR power plants has been developed for this purpose. Because there are no relevant neutron fluence measurements directly supporting this assessment, results of benchmark comparisons and successful validation of MCBEND for Magnox reactors have been used to estimate systematic and random uncertainties on the predictions. In particular, it has been necessary to address the known under-prediction of lower energy fast neutron responses associated with the penetration of large thicknesses of graphite.

  19. HTR-PROTEUS pebble bed experimental program cores 9 & 10: columnar hexagonal point-on-point packing with a 1:1 moderator-to-fuel pebble ratio

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bess, John D.

    2014-03-01

    PROTEUS is a zero-power research reactor based on a cylindrical graphite annulus with a central cylindrical cavity. The graphite annulus remains basically the same for all experimental programs, but the contents of the central cavity are changed according to the type of reactor being investigated. Through most of its service history, PROTEUS has represented light-water reactors, but from 1992 to 1996 PROTEUS was configured as a pebble-bed reactor (PBR) critical facility and designated as HTR-PROTEUS. The nomenclature was used to indicate that this series consisted of High Temperature Reactor experiments performed in the PROTEUS assembly. During this period, seventeen critical configurations were assembled and various reactor physics experiments were conducted. These experiments included measurements of criticality, differential and integral control rod and safety rod worths, kinetics, reaction rates, water ingress effects, and small sample reactivity effects (Ref. 3). HTR-PROTEUS was constructed, and the experimental program was conducted, for the purpose of providing experimental benchmark data for assessment of reactor physics computer codes. Considerable effort was devoted to benchmark calculations as a part of the HTR-PROTEUS program. References 1 and 2 provide detailed data for use in constructing models for codes to be assessed. Reference 3 is a comprehensive summary of the HTR-PROTEUS experiments and the associated benchmark program. This document draws freely from these references. Only Cores 9 and 10 are evaluated in this benchmark report due to similarities in their construction. The other core configurations of the HTR-PROTEUS program are evaluated in their respective reports as outlined in Section 1.0. Cores 9 and 10 were evaluated and determined to be acceptable benchmark experiments.

  20. HTR-PROTEUS PEBBLE BED EXPERIMENTAL PROGRAM CORES 5, 6, 7, & 8: COLUMNAR HEXAGONAL POINT-ON-POINT PACKING WITH A 1:2 MODERATOR-TO-FUEL PEBBLE RATIO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    John D. Bess

    2013-03-01

    PROTEUS is a zero-power research reactor based on a cylindrical graphite annulus with a central cylindrical cavity. The graphite annulus remains basically the same for all experimental programs, but the contents of the central cavity are changed according to the type of reactor being investigated. Through most of its service history, PROTEUS has represented light-water reactors, but from 1992 to 1996 PROTEUS was configured as a pebble-bed reactor (PBR) critical facility and designated as HTR-PROTEUS. The nomenclature was used to indicate that this series consisted of High Temperature Reactor experiments performed in the PROTEUS assembly. During this period, seventeen critical configurations were assembled and various reactor physics experiments were conducted. These experiments included measurements of criticality, differential and integral control rod and safety rod worths, kinetics, reaction rates, water ingress effects, and small sample reactivity effects (Ref. 3). HTR-PROTEUS was constructed, and the experimental program was conducted, for the purpose of providing experimental benchmark data for assessment of reactor physics computer codes. Considerable effort was devoted to benchmark calculations as a part of the HTR-PROTEUS program. References 1 and 2 provide detailed data for use in constructing models for codes to be assessed. Reference 3 is a comprehensive summary of the HTR-PROTEUS experiments and the associated benchmark program. This document draws freely from these references. Only Cores 5, 6, 7, and 8 are evaluated in this benchmark report due to similarities in their construction. The other core configurations of the HTR-PROTEUS program are evaluated in their respective reports as outlined in Section 1.0. Cores 5, 6, 7, and 8 were evaluated and determined to be acceptable benchmark experiments.

  1. HTR-PROTEUS PEBBLE BED EXPERIMENTAL PROGRAM CORES 9 & 10: COLUMNAR HEXAGONAL POINT-ON-POINT PACKING WITH A 1:1 MODERATOR-TO-FUEL PEBBLE RATIO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    John D. Bess

    2013-03-01

    PROTEUS is a zero-power research reactor based on a cylindrical graphite annulus with a central cylindrical cavity. The graphite annulus remains basically the same for all experimental programs, but the contents of the central cavity are changed according to the type of reactor being investigated. Through most of its service history, PROTEUS has represented light-water reactors, but from 1992 to 1996 PROTEUS was configured as a pebble-bed reactor (PBR) critical facility and designated as HTR-PROTEUS. The nomenclature was used to indicate that this series consisted of High Temperature Reactor experiments performed in the PROTEUS assembly. During this period, seventeen critical configurations were assembled and various reactor physics experiments were conducted. These experiments included measurements of criticality, differential and integral control rod and safety rod worths, kinetics, reaction rates, water ingress effects, and small sample reactivity effects (Ref. 3). HTR-PROTEUS was constructed, and the experimental program was conducted, for the purpose of providing experimental benchmark data for assessment of reactor physics computer codes. Considerable effort was devoted to benchmark calculations as a part of the HTR-PROTEUS program. References 1 and 2 provide detailed data for use in constructing models for codes to be assessed. Reference 3 is a comprehensive summary of the HTR-PROTEUS experiments and the associated benchmark program. This document draws freely from these references. Only Cores 9 and 10 are evaluated in this benchmark report due to similarities in their construction. The other core configurations of the HTR-PROTEUS program are evaluated in their respective reports as outlined in Section 1.0. Cores 9 and 10 were evaluated and determined to be acceptable benchmark experiments.

  2. On the effective implementation of a boundary element code on graphics processing units using an out-of-core LU algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D'Azevedo, Ed F; Nintcheu Fata, Sylvain

    2012-01-01

    A collocation boundary element code for solving the three-dimensional Laplace equation, publicly available from http://www.intetec.org, has been adapted to run on an Nvidia Tesla general purpose graphics processing unit (GPU). Global matrix assembly and LU factorization of the resulting dense matrix were performed on the GPU. Out-of-core techniques were used to solve problems larger than the available GPU memory. The code achieved an over eightfold speedup in matrix assembly and about 56 Gflops/sec in the LU factorization using only 512 Mbytes of GPU memory. Details of the GPU implementation and comparisons with the standard sequential algorithm are included to illustrate the performance of the GPU code.
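
    The out-of-core strategy, factoring a dense matrix larger than device memory by streaming blocks through the GPU, rests on a blocked LU decomposition. The sketch below shows the in-memory skeleton of such a factorization (a simplified illustration without pivoting, not the code above; in an out-of-core version each panel and trailing-matrix tile would be staged through limited device memory, with the trailing GEMM update doing the bulk of the GPU work).

        import numpy as np
        from scipy.linalg import solve_triangular

        def blocked_lu(A, nb=256):
            """In-place blocked right-looking LU; A is overwritten by its
            unit-lower L and upper U factors. No pivoting, so A should be
            well conditioned (e.g., diagonally dominant) for this sketch."""
            n = A.shape[0]
            for k in range(0, n, nb):
                e = min(k + nb, n)
                # 1) unblocked LU of the tall panel A[k:n, k:e]
                for j in range(k, e):
                    A[j + 1:, j] /= A[j, j]
                    A[j + 1:, j + 1:e] -= np.outer(A[j + 1:, j], A[j, j + 1:e])
                if e < n:
                    # 2) triangular solve for the row block U12 = L11^-1 A12
                    A[k:e, e:] = solve_triangular(A[k:e, k:e], A[k:e, e:],
                                                  lower=True, unit_diagonal=True)
                    # 3) trailing update: the GEMM that maps well to GPUs
                    A[e:, e:] -= A[e:, k:e] @ A[k:e, e:]
            return A

        # usage: a diagonally dominant test matrix, safe without pivoting
        A = np.random.rand(1000, 1000) + 1000.0 * np.eye(1000)
        LU = blocked_lu(A.copy())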

  3. Automatic Between-Pulse Analysis of DIII-D Experimental Data Performed Remotely on a Supercomputer at Argonne Leadership Computing Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kostuk, M.; Uram, T. D.; Evans, T.

    For the first time, an automatically triggered, between-pulse fusion science analysis code was run on-demand at a remotely located supercomputer at the Argonne Leadership Computing Facility (ALCF, Lemont, IL) in support of in-process experiments being performed at DIII-D (San Diego, CA). This represents a new paradigm for combining geographically distant experimental and high performance computing (HPC) facilities to provide enhanced data analysis that is quickly available to researchers. Enhanced analysis improves the understanding of the current pulse, translating into more efficient use of experimental resources and higher-quality science. The analysis code used here, called SURFMN, calculates the magnetic structure of the plasma using a Fourier transform. Increasing the number of Fourier components provides a more accurate determination of the stochastic boundary layer near the plasma edge by better resolving magnetic islands, but requires 26 minutes to complete using local DIII-D resources, putting it well outside the useful time range for between-pulse analysis. These islands relate to confinement and edge localized mode (ELM) suppression, and may be controlled by adjusting coil currents for the next pulse. Argonne has ensured on-demand execution of SURFMN by providing a reserved queue, a specialized service that launches the code after receiving an automatic trigger, and network access from the worker nodes for data transfer. Runs are executed on 252 cores of ALCF's Cooley cluster and the data is available locally at DIII-D within three minutes of triggering. The original SURFMN design limits additional improvements with more cores; however, our work shows a path forward where codes that benefit from thousands of processors can run between pulses.

  4. Automatic Between-Pulse Analysis of DIII-D Experimental Data Performed Remotely on a Supercomputer at Argonne Leadership Computing Facility

    DOE PAGES

    Kostuk, M.; Uram, T. D.; Evans, T.; ...

    2018-02-01

    For the first time, an automatically triggered, between-pulse fusion science analysis code was run on-demand at a remotely located supercomputer at the Argonne Leadership Computing Facility (ALCF, Lemont, IL) in support of in-process experiments being performed at DIII-D (San Diego, CA). This represents a new paradigm for combining geographically distant experimental and high performance computing (HPC) facilities to provide enhanced data analysis that is quickly available to researchers. Enhanced analysis improves the understanding of the current pulse, translating into more efficient use of experimental resources and higher-quality science. The analysis code used here, called SURFMN, calculates the magnetic structure of the plasma using a Fourier transform. Increasing the number of Fourier components provides a more accurate determination of the stochastic boundary layer near the plasma edge by better resolving magnetic islands, but requires 26 minutes to complete using local DIII-D resources, putting it well outside the useful time range for between-pulse analysis. These islands relate to confinement and edge localized mode (ELM) suppression, and may be controlled by adjusting coil currents for the next pulse. Argonne has ensured on-demand execution of SURFMN by providing a reserved queue, a specialized service that launches the code after receiving an automatic trigger, and network access from the worker nodes for data transfer. Runs are executed on 252 cores of ALCF's Cooley cluster and the data is available locally at DIII-D within three minutes of triggering. The original SURFMN design limits additional improvements with more cores; however, our work shows a path forward where codes that benefit from thousands of processors can run between pulses.

  5. Development and Assessment of CTF for Pin-resolved BWR Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salko, Robert K; Wysocki, Aaron J; Collins, Benjamin S

    2017-01-01

    CTF is the modernized and improved version of the subchannel code COBRA-TF. It has been adopted by the Consortium for Advanced Simulation of Light Water Reactors (CASL) for subchannel analysis applications and thermal-hydraulic feedback calculations in the Virtual Environment for Reactor Applications Core Simulator (VERA-CS). CTF is now jointly developed by Oak Ridge National Laboratory and North Carolina State University. Until now, CTF has been used for pressurized water reactor modeling and simulation in CASL, but in the future it will be extended to boiling water reactor designs. This required development activities to integrate the code into the VERA-CS workflow and to make it more efficient for full-core, pin-resolved simulations. Additionally, there is a significant emphasis in CASL on producing high-quality tools that follow a regimented software quality assurance plan. Part of this plan involves performing validation and verification assessments on the code that are easily repeatable and tied to specific code versions. This work has resulted in the CTF validation and verification matrix being expanded to include several two-phase flow experiments, including the General Electric 3x3 facility and the BWR Full-Size Fine Mesh Bundle Tests (BFBT). Agreement with both experimental databases is reasonable, but the BFBT analysis reveals a tendency of CTF to overpredict void, especially in the slug flow regime. The execution of these tests is fully automated, the analysis is documented in the CTF Validation and Verification manual, and the tests have become part of the CASL continuous regression testing system. This paper summarizes these recent developments and some of the two-phase assessments that have been performed on CTF.

  6. Mechanic: The MPI/HDF code framework for dynamical astronomy

    NASA Astrophysics Data System (ADS)

    Słonina, Mariusz; Goździewski, Krzysztof; Migaszewski, Cezary

    2015-01-01

    We introduce the Mechanic, a new open-source code framework. It is designed to reduce the development effort of scientific applications by providing a unified API (Application Programming Interface) for configuration, data storage, and task management. The communication layer is based on the well-established Message Passing Interface (MPI) standard, which is widely used on a variety of parallel computers and CPU clusters. The data storage is performed within the Hierarchical Data Format (HDF5). The design of the code follows a core-module approach which reduces the user's codebase and makes it portable for single- and multi-CPU environments. The framework may be used in a local user's environment, without administrative access to the cluster, under the PBS or Slurm job schedulers. It may become a helper tool for a wide range of astronomical applications, particularly those focused on processing large data sets, such as dynamical studies of long-term orbital evolution of planetary systems with Monte Carlo methods, dynamical maps, or evolutionary algorithms. It has already been applied in numerical experiments conducted for the Kepler-11 (Migaszewski et al., 2012) and ν Octantis (Goździewski et al., 2013) planetary systems. In this paper we describe the basics of the framework, including code listings for the implementation of a sample user's module. The code is illustrated on a model Hamiltonian introduced by Froeschlé et al. (2000) presenting the Arnold diffusion. The Arnold web is shown with the help of the MEGNO (Mean Exponential Growth of Nearby Orbits) fast indicator (Goździewski et al., 2008a) applied to the symplectic SABAn integrator family (Laskar and Robutel, 2001).

  7. NATCRCTR: One-dimensional thermal-hydraulics analysis code for natural-circulation TRIGA reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feltus, M.A.; Rubinaccio, G.

    1996-12-31

    The Pennsylvania State University nuclear engineering department is evaluating the upgrade of the Reed College (Portland, Oregon) TRIGA reactor from 250 kW to 1 MW in two areas: thermal-hydraulics and steady-state neutronics analysis. This analysis was initiated as a cooperative effort between Penn State and Reed College as a training project for two International Atomic Energy Agency (IAEA) fellows from Ghana. The two Ghanaian IAEA fellows were assisted by G. Rubinaccio, an undergraduate, who undertook the task of writing the new computer programs for the thermal-hydraulic and physics evaluation as a three-credit special design project course. The Reed College TRIGA, which has a fixed graphite radial reflector, is cooled by natural circulation without external cross-flow, whereas the Penn State Breazeale Reactor has significant cross-flow into its sides. To model the Reed TRIGA, the NATCRCTR program has been developed from first principles using the following assumptions: 1. The core is surrounded by the fixed reflector structure, which acts as a one-dimensional channel. 2. The core inlet temperature distribution is constant at the core bottom. 3. The axial heat flux distribution is a chopped cosine shape. 4. The heat transfer in the fuel is primarily in the radial direction. 5. A small gap between the fuel and cladding exists. The NATCRCTR code is used to find the peak centerline fuel, gap, and cladding surface temperatures, based on assumed flux and engineering peaking factors.
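
    Assumptions 3-5 lead to the classic one-dimensional radial resistance chain for a cylindrical fuel element; the sketch below illustrates that textbook calculation (generic formulas, not the NATCRCTR source; all parameter names and values are placeholders): at axial position z, the chopped-cosine linear heat rate drives film, clad, gap, and fuel temperature rises that stack on top of the coolant temperature.

        import numpy as np

        def centerline_temp(z, q0, Le, T_cool, r_f, r_ci, r_co,
                            k_f, k_c, h_gap, h_film):
            """Fuel centerline temperature from a 1D radial resistance chain.

            q0    : peak linear heat rate [W/m]; Le: extrapolated height [m]
            r_f   : fuel radius; r_ci/r_co: clad inner/outer radii [m]
            k_f, k_c      : fuel and clad conductivities [W/m-K]
            h_gap, h_film : gap and film conductances [W/m^2-K]
            """
            q = q0 * np.cos(np.pi * z / Le)           # chopped-cosine shape
            dT_film = q / (2 * np.pi * r_co * h_film)  # coolant film rise
            dT_clad = q * np.log(r_co / r_ci) / (2 * np.pi * k_c)
            dT_gap = q / (2 * np.pi * r_f * h_gap)     # fuel-clad gap rise
            dT_fuel = q / (4 * np.pi * k_f)            # conduction in fuel
            return T_cool + dT_film + dT_clad + dT_gap + dT_fuel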

  8. SANTA BARBARA CLUSTER COMPARISON TEST WITH DISPH

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saitoh, Takayuki R.; Makino, Junichiro, E-mail: saitoh@elsi.jp

    2016-06-01

    The Santa Barbara cluster comparison project revealed that there is a systematic difference between entropy profiles of clusters of galaxies obtained by Eulerian mesh and Lagrangian smoothed particle hydrodynamics (SPH) codes: mesh codes gave a core with a constant entropy, whereas SPH codes did not. One possible reason for this difference is that mesh codes are not Galilean invariant. Another possible reason is the problem of the SPH method, which might give too much “protection” to cold clumps because of the unphysical surface tension induced at contact discontinuities. In this paper, we apply the density-independent formulation of SPH (DISPH), which can handle contact discontinuities accurately, to simulations of a cluster of galaxies and compare the results with those with the standard SPH. We obtained the entropy core when we adopted DISPH. The size of the core is, however, significantly smaller than those obtained with mesh simulations and is comparable to those obtained with quasi-Lagrangian schemes such as “moving mesh” and “mesh free” schemes. We conclude that both the standard SPH without artificial conductivity and Eulerian mesh codes have serious problems even with such an idealized simulation, while DISPH, SPH with artificial conductivity, and quasi-Lagrangian schemes have sufficient capability to deal with it.
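
    The formulation difference at issue can be stated compactly (a minimal sketch in LaTeX, following the usual SPH notation with kernel weights W_ij and particle masses m_j and specific internal energies u_j): standard SPH smooths the mass density, while the density-independent formulation smooths an internal-energy density and derives the pressure from it, avoiding the spurious surface tension at contact discontinuities.

        \rho_i = \sum_j m_j W_{ij} \qquad \text{(standard SPH)}

        q_i = \sum_j m_j u_j W_{ij}, \qquad P_i = (\gamma - 1)\, q_i \qquad \text{(DISPH)}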

  9. User's Guide for TOUGH2-MP - A Massively Parallel Version of the TOUGH2 Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Earth Sciences Division; Zhang, Keni

    TOUGH2-MP is a massively parallel (MP) version of the TOUGH2 code, designed for computationally efficient parallel simulation of isothermal and nonisothermal flows of multicomponent, multiphase fluids in one-, two-, and three-dimensional porous and fractured media. In recent years, computational requirements have become increasingly intensive in large or highly nonlinear problems for applications in areas such as radioactive waste disposal, CO2 geological sequestration, environmental assessment and remediation, reservoir engineering, and groundwater hydrology. The primary objective of developing the parallel-simulation capability is to significantly improve the computational performance of the TOUGH2 family of codes. The particular goal for the parallel simulator is to achieve orders-of-magnitude improvement in computational time for models with ever-increasing complexity. TOUGH2-MP is designed to perform parallel simulation on multi-CPU computational platforms. An earlier version of TOUGH2-MP (V1.0) was based on TOUGH2 Version 1.4 with the EOS3, EOS9, and T2R3D modules, software previously qualified for applications in the Yucca Mountain project, and was designed for execution on CRAY T3E and IBM SP supercomputers. The current version of TOUGH2-MP (V2.0) includes all fluid property modules of the standard version TOUGH2 V2.0. It provides computationally efficient capabilities using supercomputers, Linux clusters, or multi-core PCs, and also offers many user-friendly features. The parallel simulator inherits all process capabilities from V2.0 together with additional capabilities for handling fractured media from V1.4. This report provides a quick-start guide on how to set up and run the TOUGH2-MP program for users with a basic knowledge of running the (standard) version of the TOUGH2 code. The report also gives a brief technical description of the code, including a discussion of the parallel methodology, code structure, and the mathematical and numerical methods used. To familiarize users with the parallel code, illustrative sample problems are presented.

  10. caGrid 1.0: an enterprise Grid infrastructure for biomedical research.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oster, S.; Langella, S.; Hastings, S.

    To develop software infrastructure that will provide support for discovery, characterization, integrated access, and management of diverse and disparate collections of information sources, analysis methods, and applications in biomedical research. Design: An enterprise Grid software infrastructure, called caGrid version 1.0 (caGrid 1.0), has been developed as the core Grid architecture of the NCI-sponsored cancer Biomedical Informatics Grid (caBIG™) program. It is designed to support a wide range of use cases in basic, translational, and clinical research, including (1) discovery, (2) integrated and large-scale data analysis, and (3) coordinated study. Measurements: The caGrid is built as a Grid software infrastructure and leverages Grid computing technologies and the Web Services Resource Framework standards. It provides a set of core services, toolkits for the development and deployment of new community-provided services, and application programming interfaces for building client applications. Results: The caGrid 1.0 was released to the caBIG community in December 2006. It is built on open source components and the caGrid source code is publicly and freely available under a liberal open source license. The core software, associated tools, and documentation can be downloaded from the following URL: .

  11. Developing a Machine-Supported Coding System for Constructed-Response Items in PISA. Research Report. ETS RR-17-47

    ERIC Educational Resources Information Center

    Yamamoto, Kentaro; He, Qiwei; Shin, Hyo Jeong; von Davier, Mattias

    2017-01-01

    Approximately a third of the Programme for International Student Assessment (PISA) items in the core domains (math, reading, and science) are constructed-response items and require human coding (scoring). This process is time-consuming, expensive, and prone to error as often (a) humans code inconsistently, and (b) coding reliability in…

  12. TOUGH+ v1.5 Core Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moridis, George J.

    TOUGH+ v1.5 is a numerical code for the simulation of multi-phase, multi-component flow and transport of mass and heat through porous and fractured media, and represents the third update of the code since its first release [Moridis et al., 2008]. TOUGH+ is a successor to the TOUGH2 [Pruess et al., 1991; 2012] family of codes for multi-component, multiphase fluid and heat flow developed at the Lawrence Berkeley National Laboratory. It is written in standard FORTRAN 95/2003 and can be run on any computational platform (workstation, PC, Macintosh). TOUGH+ v1.5 employs dynamic memory allocation, thus minimizing storage requirements. It has a completely modular structure, follows the tenets of Object-Oriented Programming (OOP), and involves the advanced features of FORTRAN 95/2003, i.e., modules, derived data types, the use of pointers, lists and trees, data encapsulation, defined operators and assignments, operator extension and overloading, use of generic procedures, and maximum use of the powerful intrinsic vector and matrix processing operations. TOUGH+ v1.5 is the core code for its family of applications, i.e., the part of the code that is common to all its applications. It provides a description of the underlying physics and thermodynamics of non-isothermal flow, of the mathematical and numerical approaches, as well as a detailed explanation of the general (common to all applications) input requirements, options, capabilities, and output specifications. The core code cannot run by itself: it needs to be coupled with the code for the specific TOUGH+ application option that describes a particular type of problem. The additional input requirements specific to particular TOUGH+ application options and related illustrative examples can be found in the corresponding User's Manual.

  13. Developing and validating advanced divertor solutions on DIII-D for next-step fusion devices

    NASA Astrophysics Data System (ADS)

    Guo, H. Y.; Hill, D. N.; Leonard, A. W.; Allen, S. L.; Stangeby, P. C.; Thomas, D.; Unterberg, E. A.; Abrams, T.; Boedo, J.; Briesemeister, A. R.; Buchenauer, D.; Bykov, I.; Canik, J. M.; Chrobak, C.; Covele, B.; Ding, R.; Doerner, R.; Donovan, D.; Du, H.; Elder, D.; Eldon, D.; Lasa, A.; Groth, M.; Guterl, J.; Jarvinen, A.; Hinson, E.; Kolemen, E.; Lasnier, C. J.; Lore, J.; Makowski, M. A.; McLean, A.; Meyer, B.; Moser, A. L.; Nygren, R.; Owen, L.; Petrie, T. W.; Porter, G. D.; Rognlien, T. D.; Rudakov, D.; Sang, C. F.; Samuell, C.; Si, H.; Schmitz, O.; Sontag, A.; Soukhanovskii, V.; Wampler, W.; Wang, H.; Watkins, J. G.

    2016-12-01

    A major challenge facing the design and operation of next-step high-power steady-state fusion devices is to develop a viable divertor solution with order-of-magnitude increases in power handling capability relative to present experience, while having acceptable divertor target plate erosion and being compatible with maintaining good core plasma confinement. A new initiative has been launched on DIII-D to develop the scientific basis for design, installation, and operation of an advanced divertor to evaluate boundary plasma solutions applicable to next-step fusion experiments beyond ITER. Developing the scientific basis for fusion reactor divertor solutions must necessarily follow three lines of research, which we plan to pursue in DIII-D: (1) Advance scientific understanding and predictive capability through development and comparison between state-of-the-art computational models and enhanced measurements using targeted parametric scans; (2) Develop and validate key divertor design concepts and codes through innovative variations in physical structure and magnetic geometry; (3) Assess candidate materials, determining the implications for core plasma operation and control, and develop mitigation techniques for any deleterious effects, incorporating development of plasma-material interaction models. These efforts will lead to design, installation, and evaluation of an advanced divertor for DIII-D to enable highly dissipative divertor operation at the core density (n_e/n_GW), neutral fueling, and impurity influx most compatible with high performance plasma scenarios and reactor-relevant plasma facing components (PFCs). This paper highlights the current progress and near-term strategies of boundary/PMI research on DIII-D.

  14. Developing and validating advanced divertor solutions on DIII-D for next-step fusion devices

    DOE PAGES

    Guo, H. Y.; Hill, D. N.; Leonard, A. W.; ...

    2016-09-14

    A major challenge facing the design and operation of next-step high-power steady-state fusion devices is to develop a viable divertor solution with order-of-magnitude increases in power handling capability relative to present experience, while having acceptable divertor target plate erosion and being compatible with maintaining good core plasma confinement. A new initiative has been launched on DIII-D to develop the scientific basis for design, installation, and operation of an advanced divertor to evaluate boundary plasma solutions applicable to next-step fusion experiments beyond ITER. Developing the scientific basis for fusion reactor divertor solutions must necessarily follow three lines of research, which we plan to pursue in DIII-D: (1) Advance scientific understanding and predictive capability through development and comparison between state-of-the-art computational models and enhanced measurements using targeted parametric scans; (2) Develop and validate key divertor design concepts and codes through innovative variations in physical structure and magnetic geometry; (3) Assess candidate materials, determining the implications for core plasma operation and control, and develop mitigation techniques for any deleterious effects, incorporating development of plasma-material interaction models. These efforts will lead to design, installation, and evaluation of an advanced divertor for DIII-D to enable highly dissipative divertor operation at the core density (n_e/n_GW), neutral fueling, and impurity influx most compatible with high performance plasma scenarios and reactor-relevant plasma facing components (PFCs). In conclusion, this paper highlights the current progress and near-term strategies of boundary/PMI research on DIII-D.

  15. Results of a Neutronic Simulation of HTR-Proteus Core 4.2 using PEBBED and other INL Reactor Physics Tools: FY-09 Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hans D. Gougar

    The Idaho National Laboratory's deterministic neutronics analysis codes and methods were applied to the computation of the core multiplication factor of the HTR-Proteus pebble bed reactor critical facility. A combination of unit cell calculations (COMBINE-PEBDAN), 1-D discrete ordinates transport (SCAMP), and nodal diffusion calculations (PEBBED) were employed to yield keff and flux profiles. Preliminary results indicate that these tools, as currently configured and used, do not yield satisfactory estimates of keff. If control rods are not modeled, these methods deliver much better agreement with experimental core eigenvalues, which suggests that development efforts should focus on modeling control rod and other absorber regions. Under some assumptions and in 1D subcore analyses, diffusion theory agrees well with transport. This suggests that developments in specific areas can produce a viable core simulation approach. Some corrections have been identified and can be further developed, specifically: treatment of the upper void region, treatment of inter-pebble streaming, and explicit (multiscale) transport modeling of TRISO fuel particles as a first step in cross section generation. Until corrections are made that yield better agreement with experiment, conclusions from core design and burnup analyses should be regarded as qualitative and not of benchmark quality.

  16. Experimental results from the VENUS-F critical reference state for the GUINEVERE accelerator driven system project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uyttenhove, W.; Baeten, P.; Ban, G.

    The GUINEVERE (Generation of Uninterrupted Intense Neutron pulses at the lead Venus Reactor) project was launched in 2006 within the framework of FP6 EUROTRANS in order to validate on-line reactivity monitoring and subcriticality level determination in Accelerator Driven Systems. Therefore, the VENUS reactor at SCK.CEN in Mol (Belgium) was modified towards a fast core (VENUS-F) and coupled to the GENEPI-3C accelerator built by CNRS. The accelerator can operate in both continuous and pulsed mode. The VENUS-F core is loaded with enriched uranium and reflected with solid lead. A well-chosen critical reference state is indispensable for the validation of the on-line subcriticality monitoring methodology. Moreover, a benchmarking tool is required for nuclear data research and code validation. In this paper, the design and the importance of the critical reference state for the GUINEVERE project are motivated. The results of the first experimental phase on the critical core are presented. The control rod worth is determined by the rod drop technique, and the application of the Modified Source Multiplication (MSM) method allows the determination of the worth of the safety rods. The results are implemented in the VENUS-F core certificate for full exploitation of the critical core. (authors)

  17. Experimental results from the VENUS-F critical reference state for the GUINEVERE accelerator driven system project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uyttenhove, W.; Baeten, P.; Kochetkov, A.

    The GUINEVERE (Generation of Uninterrupted Intense Neutron pulses at the lead Venus Reactor) project was launched in 2006 within the framework of FP6 EUROTRANS in order to validate online reactivity monitoring and subcriticality level determination in accelerator driven systems (ADS). Therefore, the VENUS reactor at SCK.CEN in Mol, Belgium, was modified towards a fast core (VENUS-F) and coupled to the GENEPI-3C accelerator built by CNRS. The accelerator can operate in both continuous and pulsed mode. The VENUS-F core is loaded with enriched uranium and reflected with solid lead. A well-chosen critical reference state is indispensable for the validation of the online subcriticality monitoring methodology. Moreover, a benchmarking tool is required for nuclear data research and code validation. In this paper, the design and the importance of the critical reference state for the GUINEVERE project are motivated. The results of the first experimental phase on the critical core are presented. The control rod worth is determined by the positive period method, and the application of the Modified Source Multiplication (MSM) method allows the determination of the worth of the safety rods. The results are implemented in the VENUS-F core certificate for full exploitation of the critical core. (authors)

  18. Thermal neutron filter design for the neutron radiography facility at the LVR-15 reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soltes, Jaroslav; Faculty of Nuclear Sciences and Physical Engineering, CTU in Prague; Viererbl, Ladislav

    2015-07-01

    In 2011 a decision was made to build a neutron radiography facility at one of the unused horizontal channels of the LVR-15 research reactor in Rez, Czech Republic. One of the key conditions for operating an effective radiography facility is the delivery of a high-intensity, homogeneous, and collimated thermal neutron beam at the sample location. Additionally, the intensity of fast neutrons has to be kept as low as possible, as fast neutrons may damage the detectors used for neutron imaging. Because the spectrum in the empty horizontal channel roughly copies the spectrum in the reactor core, which has a high ratio of fast neutrons, neutron filter components have to be installed inside the channel in order to achieve the desired beam parameters. Since the channel design does not allow the installation of complex filters and collimators, an optimal solution is a neutron filter made from a large single-crystal ingot of suitable material composition. Single-crystal silicon was chosen as a favorable filter material for its wide availability in sufficient dimensions. Besides reasonably lowering the ratio of fast neutrons while keeping high intensities of thermal neutrons, its large dimensions make it suitable as shielding against gamma radiation from the reactor core. The Monte Carlo MCNP transport code was used to design the necessary filter dimensions. As the code does not provide neutron cross-section libraries for thermal neutron transport through single-crystalline silicon, these had to be created by approximating the theory of thermal neutron scattering and modifying the original cross-section data provided with the code. A series of calculations showed that a filter thickness of 1 m yields a beam with the desired parameters and a low gamma background. After the filter was mounted inside the channel, several measurements of the neutron field were made at the beam exit; the results confirmed the calculated values. Following the successful filter installation and the series of measurements, the first test neutron radiography runs with test samples could be carried out. (authors)
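
    The filtering action of such a block is governed by simple exponential attenuation with a strongly energy-dependent macroscopic removal cross section (a generic relation, stated in LaTeX; for single-crystal silicon, Σ(E) is much larger for fast than for thermal neutrons, which is why a thick ingot suppresses the fast component while transmitting the thermal beam):

        I(t) = I_0 \, e^{-\Sigma(E)\, t}

    Choosing the thickness t is then a trade-off between fast-neutron suppression and thermal-beam intensity; the 1 m figure quoted above is the value the MCNP calculations found adequate.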

  19. Characterization of a New Mach 9 Nozzle for the HEAT Hypersonic Wind Tunnel

    NASA Astrophysics Data System (ADS)

    Baccarella, D.; Passaro, A.; Caredda, P.; Cristofolini, A.; Neretti, G.; Granciu, V. M.; Schettino, A.; Battista, F.; D'Ambrosio, D.

    2009-01-01

    A new Mach 9 contoured nozzle for use with air was designed and built at Alta SpA with the aim of producing a uniform core flow with a diameter of at least 80 mm. The design was iteratively carried out using engineering codes and CFD simulations by CIRA. The characterization activity was carried out by mapping the complete test section in terms of pitot pressure and total enthalpy and by measuring the pressure and heat flux distributions on the nozzle internal walls. The flow upstream of the convergent was characterized by means of total pressure measurements and spectroscopy. A numerical rebuilding of the test was performed by CIRA and PoliTO and compared with the experimental data. The paper briefly describes the design phase and presents all the characterization results.

  20. Support the Design of Improved IUE NEWSIPS High Dispersion Extraction Algorithms: Improved IUE High Dispersion Extraction Algorithms

    NASA Technical Reports Server (NTRS)

    Lawton, Pat

    2004-01-01

    The objective of this work was to support the design of improved IUE NEWSIPS high dispersion extraction algorithms. The purpose of this work was to evaluate use of the Linearized Image (LIHI) file versus the Re-Sampled Image (SIHI) file, to evaluate various extraction methods, and to design algorithms for evaluation of IUE High Dispersion spectra. It was concluded that the use of the Re-Sampled Image (SIHI) file was acceptable. Since the Gaussian profile worked well for the core and the Lorentzian profile worked well for the wings, the Voigt profile was chosen for use in the extraction algorithm. It was found that the gamma and sigma parameters varied significantly across the detector, so gamma and sigma masks for the SWP detector were developed. Extraction code was written.
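
    The choice follows from the definition of the Voigt profile as the convolution of a Gaussian (matching the line core) with a Lorentzian (matching the wings); in LaTeX, with σ the Gaussian width and γ the Lorentzian half-width mentioned above:

        V(x; \sigma, \gamma) = \int_{-\infty}^{\infty} G(x'; \sigma)\, L(x - x'; \gamma)\, dx',

        G(x; \sigma) = \frac{e^{-x^2 / 2\sigma^2}}{\sigma \sqrt{2\pi}}, \qquad
        L(x; \gamma) = \frac{\gamma / \pi}{x^2 + \gamma^2}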

  1. The extent and nature of food advertising to children on Spanish television in 2012 using an international food-based coding system and the UK nutrient profiling model.

    PubMed

    Royo-Bordonada, M Á; León-Flández, K; Damián, J; Bosqued-Estefanía, M J; Moya-Geromini, M Á; López-Jurado, L

    2016-08-01

    To examine the extent and nature of food television advertising directed at children in Spain using an international food-based system and the United Kingdom nutrient profile model (UKNPM). Cross-sectional study of advertisements of food and drinks shown on five television channels over 7 days in 2012 (8am-midnight). The showing time and duration of each advertisement were recorded. Advertisements were classified as core (nutrient-rich/calorie-low products), non-core, or miscellaneous based on the international system, and as either healthy or less healthy, i.e., high in saturated fats, trans-fatty acids, salt, or free sugars (HFSS), according to the UKNPM. The food industry accounted for 23.7% of the advertisements (4212 out of 17,722), with 7.5 advertisements per hour of broadcasting. The international food-based coding system classified 60.2% of adverts as non-core, and the UKNPM classified 64.0% as HFSS. Up to 31.5% of core, 86.8% of non-core, and 8.3% of miscellaneous advertisements were for HFSS products. The percentage of advertisements for HFSS products was higher during reinforced protected viewing times (69.0%), on weekends (71.1%), on channels of particular appeal to children and teenagers (67.8%), and on broadcasts regulated by the Spanish Code of self-regulation of the advertising of food products directed at children (70.7%). Both schemes identified that a majority of foods advertised were unhealthy, although some classification differences between the two systems are important to consider. The food advertising Code is not limiting Spanish children's exposure to advertisements for HFSS products, which were more frequent on Code-regulated broadcasts and during reinforced protected viewing time. Copyright © 2016 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.

  2. Reconfigurable Polymer Shells on Shape-Anisotropic Gold Nanoparticle Cores.

    PubMed

    Kim, Juyeong; Song, Xiaohui; Kim, Ahyoung; Luo, Binbin; Smith, John W; Ou, Zihao; Wu, Zixuan; Chen, Qian

    2018-05-03

    Reconfigurable hybrid nanoparticles made by decorating flexible polymer shells on rigid inorganic nanoparticle cores can provide a unique means to build stimuli-responsive functional materials. The polymer shell reconfiguration has been expected to depend on the local core shape details, but limited systematic investigations have been undertaken. Here, two literature methods are adapted to coat either thiol-terminated polystyrene (PS) or polystyrene-poly(acrylic acid) (PS-b-PAA) shells onto a series of anisotropic gold nanoparticles of shapes not studied previously, including octahedron, concave cube, and bipyramid. These core shapes are complex, rendering shell contours with nanoscale details (e.g., local surface curvature, shell thickness) that are imaged and analyzed quantitatively using the authors' customized analysis codes. It is found that the hybrid nanoparticles based on the chosen core shapes, when coated with the above two polymer shells, exhibit distinct shell segregations upon a variation in solvent polarity or temperature. It is demonstrated that, for the PS-b-PAA-coated hybrid nanoparticles, the shell segregation is maintained even after a further decoration of the shell periphery with gold seeds; these seeds can potentially facilitate subsequent deposition of other nanostructures to enrich structural and functional diversity. These synthesis, imaging, and analysis methods for the hybrid nanoparticles of anisotropically shaped cores can potentially aid in their predictive design for materials reconfigurable from the bottom up. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Sequence and structural implications of a bovine corneal keratan sulfate proteoglycan core protein. Protein 37B represents bovine lumican and proteins 37A and 25 are unique

    NASA Technical Reports Server (NTRS)

    Funderburgh, J. L.; Funderburgh, M. L.; Brown, S. J.; Vergnes, J. P.; Hassell, J. R.; Mann, M. M.; Conrad, G. W.; Spooner, B. S. (Principal Investigator)

    1993-01-01

    Amino acid sequence from tryptic peptides of three different bovine corneal keratan sulfate proteoglycan (KSPG) core proteins (designated 37A, 37B, and 25) showed similarities to the sequence of a chicken KSPG core protein, lumican. Bovine lumican cDNA was isolated from a bovine corneal expression library by screening with chicken lumican cDNA. The bovine cDNA codes for a 342-amino acid protein, M(r) 38,712, containing amino acid sequences identified in the 37B KSPG core protein. The bovine lumican is 68% identical to chicken lumican, with an 83% identity excluding the N-terminal 40 amino acids. The locations of the 6 cysteine and 4 consensus N-glycosylation sites in the bovine sequence were identical to those in chicken lumican. Bovine lumican had about 50% identity to bovine fibromodulin and 20% identity to bovine decorin and biglycan. About two-thirds of the lumican protein consists of a series of 10 amino acid leucine-rich repeats that occur in regions of calculated high beta-hydrophobic moment, suggesting that the leucine-rich repeats contribute to beta-sheet formation in these proteins. Sequences obtained from the 37A and 25 core proteins were absent in bovine lumican, thus predicting a unique primary structure and separate mRNA for each of the three bovine KSPG core proteins.

  4. The RAVE/VERTIGO vertex reconstruction toolkit and framework

    NASA Astrophysics Data System (ADS)

    Waltenberger, W.; Mitaroff, W.; Moser, F.; Pflugfelder, B.; Riedel, H. V.

    2008-07-01

    A detector-independent toolkit for vertex reconstruction (RAVE) is being developed, along with a standalone framework (VERTIGO) for testing, analyzing, and debugging. The core algorithms represent the state of the art for geometric vertex finding and fitting by both linear (Kalman filter) and robust estimation methods. The main design goals are ease of use, flexibility for embedding into existing software frameworks, extensibility, and openness. The implementation is based on modern object-oriented techniques, is coded in C++ with interfaces for Java and Python, and follows an open-source approach. A beta release is available.

  5. Ultra-miniature wireless temperature sensor for thermal medicine applications.

    PubMed

    Khairi, Ahmad; Hung, Shih-Chang; Paramesh, Jeyanandh; Fedder, Gary; Rabin, Yoed

    2011-01-01

    This study presents a prototype design of an ultra-miniature, wireless, battery-less, and implantable temperature sensor, with applications to thermal medicine such as cryosurgery, hyperthermia, and thermal ablation. The design aims at a sensory device smaller than 1.5 mm in diameter and 3 mm in length, to enable minimally invasive deployment through a hypodermic needle. While the new device may be used for local temperature monitoring, simultaneous data collection from an array of such sensors can be used to reconstruct the 3D temperature field in the treated area, offering a unique capability in thermal medicine. The new sensory device consists of three major subsystems: a temperature-sensing core, a wireless data-communication unit, and a wireless power reception and management unit. Power is delivered wirelessly to the implant from an external source using an inductive link. To meet size requirements while enhancing reliability and minimizing cost, the implant is fully integrated in a regular foundry CMOS technology (0.15 μm in the current study), including the implant-side inductor of the power link. A temperature-sensing core that consists of a proportional-to-absolute-temperature (PTAT) circuit has been designed and characterized. It employs a microwatt chopper-stabilized op-amp and dynamic element-matched current sources to achieve high absolute accuracy. A second-order sigma-delta (Σ-Δ) analog-to-digital converter (ADC) is designed to convert the temperature reading to a digital code, which is transmitted by backscatter through the same antenna used for receiving power. A high-efficiency multi-stage differential CMOS rectifier has been designed to provide a DC supply to the sensing and communication subsystems. This paper focuses on the development of the all-CMOS temperature-sensing core circuitry of the device, and briefly reviews the wireless power delivery and communication subsystems.
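
    The PTAT principle underlying such a sensing core is the standard bipolar-junction relation (a textbook identity, not taken from the paper): the difference between two base-emitter voltages operated at a current-density ratio N is proportional to absolute temperature, with k Boltzmann's constant and q the electron charge:

        \Delta V_{BE} = V_{BE1} - V_{BE2} = \frac{kT}{q} \ln N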

  6. MCNP-model for the OAEP Thai Research Reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gallmeier, F.X.; Tang, J.S.; Primm, R.T. III

    An MCNP input was prepared for the Thai Research Reactor, making extensive use of the MCNP geometry's lattice feature, which allows a flexible and easy rearrangement of the core components and the adjustment of the control elements. The geometry was checked for overdefined or undefined zones by two-dimensional plots of cuts through the core configuration with the MCNP geometry plotting capabilities, and by a three-dimensional view of the core configuration with the SABRINA code. Cross sections were defined for a hypothetical core of 67 standard fuel elements and 38 low-enriched uranium fuel elements, all filled with fresh fuel. Three test calculations were performed with the MCNP4B code to obtain the multiplication factor for the cases with control elements fully inserted, fully withdrawn, and at a working position.

  7. Test case for VVER-1000 complex modeling using MCU and ATHLET

    NASA Astrophysics Data System (ADS)

    Bahdanovich, R. B.; Bogdanova, E. V.; Gamtsemlidze, I. D.; Nikonov, S. P.; Tikhomirov, G. V.

    2017-01-01

    The correct modeling of processes occurring in the fuel core of the reactor is very important. In the design and operation of nuclear reactors it is necessary to cover the entire range of reactor physics. Very often the calculations are carried out within the framework of only one domain, for example, structural analysis, neutronics (NT) or thermal hydraulics (TH). However, this is not always adequate, as the impact of related physical processes occurring simultaneously can be significant. It is therefore recommended to perform coupled calculations. The paper provides a test case for the coupled neutronics-thermal hydraulics calculation of the VVER-1000 using the precise neutronics code MCU and the system engineering code ATHLET. The model is based on the fuel assembly (type 2M). A test case for the calculation of power distribution, fuel and coolant temperature, coolant density, etc. has been developed. It is assumed that the test case will be used for simulation of the VVER-1000 reactor and in calculations using other programs, for example, for code cross-verification. A detailed description of the codes (MCU, ATHLET), the geometry and material composition of the model, and an iterative calculation scheme is given in the paper. A script in the Perl language was written to couple the codes.
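
    The iterative scheme mentioned above can be pictured as a fixed-point (Picard) loop that passes fields between the two codes until they stop changing. The sketch below shows that coupling pattern in Python; both physics functions are stand-ins for the real MCU and ATHLET runs (the paper drives the actual codes with a Perl script).

        import numpy as np

        def run_neutronics(fuel_T, coolant_rho):
            """Stand-in for an MCU run: power peaks where the coolant is denser."""
            p = coolant_rho / coolant_rho.mean()
            return p / p.mean()

        def run_thermal_hydraulics(power):
            """Stand-in for an ATHLET run: linear feedback to local power."""
            fuel_T = 600.0 + 300.0 * power               # K
            coolant_rho = 750.0 - 50.0 * (power - 1.0)   # kg/m^3
            return fuel_T, coolant_rho

        power = np.ones(10)                              # normalized axial power shape
        fuel_T, coolant_rho = run_thermal_hydraulics(power)
        for it in range(50):                             # fixed-point (Picard) iteration
            new_power = run_neutronics(fuel_T, coolant_rho)
            if np.abs(new_power - power).max() < 1e-8:
                break                                    # fields are mutually consistent
            power = 0.5 * power + 0.5 * new_power        # under-relaxation for stability
            fuel_T, coolant_rho = run_thermal_hydraulics(power)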

  8. VORCOR: A computer program for calculating characteristics of wings with edge vortex separation by using a vortex-filament and -core model

    NASA Technical Reports Server (NTRS)

    Pao, J. L.; Mehrotra, S. C.; Lan, C. E.

    1982-01-01

    A computer code based on an improved vortex filament/vortex core method for predicting the aerodynamic characteristics of slender wings with edge vortex separation is developed. The code is applicable to cambered wings, straked wings, or wings with leading-edge vortex flaps at subsonic speeds. The prediction of lifting pressure distribution and the computer time are improved by using a pair of concentrated vortex cores above the wing surface. The main features of this computer program are: (1) an arbitrary camber shape may be defined, and an option for exactly defining the leading-edge flap geometry is also provided; (2) the side-edge vortex system is incorporated.

  9. Applications of Derandomization Theory in Coding

    NASA Astrophysics Data System (ADS)

    Cheraghchi, Mahdi

    2011-07-01

    Randomized techniques play a fundamental role in theoretical computer science and discrete mathematics, in particular for the design of efficient algorithms and construction of combinatorial objects. The basic goal in derandomization theory is to eliminate or reduce the need for randomness in such randomized constructions. In this thesis, we explore some applications of the fundamental notions in derandomization theory to problems outside the core of theoretical computer science, in particular certain problems related to coding theory. First, we consider the wiretap channel problem, which involves a communication system in which an intruder can eavesdrop on a limited portion of the transmissions, and construct efficient and information-theoretically optimal communication protocols for this model. Then we consider the combinatorial group testing problem. In this classical problem, one aims to determine a set of defective items within a large population by asking a number of queries, where each query reveals whether a defective item is present within a specified group of items. We use randomness condensers to explicitly construct optimal, or nearly optimal, group testing schemes for a setting where the query outcomes can be highly unreliable, as well as the threshold model where a query returns positive if the number of defectives passes a certain threshold. Finally, we design ensembles of error-correcting codes that achieve the information-theoretic capacity of a large class of communication channels, and then use the obtained ensembles for the construction of explicit capacity-achieving codes. [This is a shortened version of the actual abstract in the thesis.]
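
    As a toy version of the classical group testing setting described above, the sketch below uses a random non-adaptive design with the simple decoder that clears every item appearing in a negative test; the thesis replaces the random matrix with explicit condenser-based constructions.

        import numpy as np

        rng = np.random.default_rng(0)
        n, d, t = 200, 3, 60                       # items, defectives, tests
        A = rng.random((t, n)) < 1.0 / (d + 1)     # random t x n binary test design
        defective = np.zeros(n, dtype=bool)
        defective[rng.choice(n, size=d, replace=False)] = True

        # a test is positive iff it contains at least one defective item
        outcomes = (A.astype(int) @ defective.astype(int)) > 0
        candidates = np.ones(n, dtype=bool)
        for row, positive in zip(A, outcomes):
            if not positive:
                candidates &= ~row                 # items in a negative test are clean

        print("true:", np.flatnonzero(defective))
        print("decoded:", np.flatnonzero(candidates))   # superset of the true set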

  10. VENTURE/PC manual: A multidimensional multigroup neutron diffusion code system. Version 3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shapiro, A.; Huria, H.C.; Cho, K.W.

    1991-12-01

    VENTURE/PC is a recompilation of part of the Oak Ridge BOLD VENTURE code system, which will operate on an IBM PC or compatible computer. Neutron diffusion theory solutions are obtained for multidimensional, multigroup problems. This manual contains information associated with operating the code system. The purpose of the various modules used in the code system, and the input for these modules are discussed. The PC code structure is also given. Version 2 included several enhancements not given in the original version of the code. In particular, flux iterations can be done in core rather than by reading and writing to disk, for problems which allow sufficient memory for such in-core iterations. This speeds up the iteration process. Version 3 does not include any of the special processors used in the previous versions. These special processors utilized formatted input for various elements of the code system. All such input data is now entered through the Input Processor, which produces standard interface files for the various modules in the code system. In addition, a Standard Interface File Handbook is included in the documentation which is distributed with the code, to assist in developing the input for the Input Processor.
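
    To illustrate the flux iterations described above, here is a minimal outer (power) iteration for a bare one-group 1-D slab; the material data are made up and none of VENTURE's modules or input conventions are reproduced.

        import numpy as np

        N, h = 50, 1.0                            # mesh cells, cell width (cm)
        D, sig_a, nu_sig_f = 1.0, 0.01, 0.014     # illustrative one-group data

        # finite-difference diffusion operator with zero-flux boundaries
        A = np.zeros((N, N))
        for i in range(N):
            A[i, i] = 2.0 * D / h**2 + sig_a
            if i > 0:
                A[i, i - 1] = -D / h**2
            if i < N - 1:
                A[i, i + 1] = -D / h**2

        phi, k = np.ones(N), 1.0
        for _ in range(200):                      # outer iteration on the fission source
            src = nu_sig_f * phi
            phi_new = np.linalg.solve(A, src / k)
            k *= (nu_sig_f * phi_new).sum() / src.sum()
            phi = phi_new / phi_new.max()         # renormalize the flux shape
        print(f"k_eff = {k:.4f}")                 # ~1.0 for these made-up data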

  11. Post-test analysis of dryout test 7B' of the W-1 Sodium Loop Safety Facility Experiment with the SABRE-2P code. [LMFBR]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rose, S.D.; Dearing, J.F.

    An understanding of the conditions that may cause sodium boiling and boiling propagation, which may lead to dryout and fuel failure, is crucial in liquid-metal fast-breeder reactor safety. In this study, the SABRE-2P subchannel analysis code has been used to analyze the ultimate transient of the in-core W-1 Sodium Loop Safety Facility experiment. This code has a simple, nondynamic 3-D boiling model that is able to predict the flow instability that caused dryout. In other analyses dryout has been predicted for out-of-core test bundles, so this study provides additional confirmation of the model.

  12. TRAC-PD2 posttest analysis of the CCTF Evaluation-Model Test C1-19 (Run 38). [PWR]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Motley, F.

    The results of a Transient Reactor Analysis Code (TRAC) posttest analysis of the Cylindrical Core Test Facility Evaluation-Model Test agree very well with the results of the experiment. The good agreement obtained verifies the multidimensional analysis capability of the TRAC code. Because of the steep radial power profile, the importance of using fine noding in the core region was demonstrated (as compared with the poorer results obtained from an earlier pretest prediction that used a coarsely noded model).

  13. Responsive Image Inline Filter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freeman, Ian

    2016-10-20

    RIIF is a contributed module for the Drupal PHP web application framework (drupal.org). It is written as a helper or sub-module of other code which is part of version 8 "core Drupal" and is intended to extend its functionality. It allows Drupal to resize images uploaded through the user-facing text editor within the Drupal GUI (a.k.a. "inline images") for various browser widths. This resizing is already done for other images through the parent "Responsive Image" core module. This code extends that functionality to inline images.

  14. Analysis of dosimetry from the H.B. Robinson unit 2 pressure vessel benchmark using RAPTOR-M3G and ALPAN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fischer, G.A.

    2011-07-01

    Document available in abstract form only, full text of document follows: The dosimetry from the H. B. Robinson Unit 2 Pressure Vessel Benchmark is analyzed with a suite of Westinghouse-developed codes and data libraries. The radiation transport from the reactor core to the surveillance capsule and ex-vessel locations is performed by RAPTOR-M3G, a parallel deterministic radiation transport code that calculates high-resolution neutron flux information in three dimensions. The cross-section library used in this analysis is the ALPAN library, an Evaluated Nuclear Data File (ENDF)/B-VII.0-based library designed for reactor dosimetry and fluence analysis applications. Dosimetry is evaluated with the industry-standard SNLRML reactor dosimetry cross-section data library. (authors)

  15. CoRes utilization for building PCK in pre-service teacher education on the digestive system topic

    NASA Astrophysics Data System (ADS)

    Nugraha, Ikmanda

    2017-05-01

    Teachers' knowledge in classroom learning activities has a close relationship with how well and how much students learn. Recently, a promising development in teacher education has appeared that centers on the academic construct of pedagogical content knowledge (PCK). This study was an exploratory study of a science teacher education program that seeks to build the foundations on which pre-service teachers can begin to build their PCK. The program involved the use of Content Representations (CoRes), which were initially applied as a component of a strategy for exploring and gaining insights into the PCK of in-service science teachers. This study involved the researcher and 20 third-year students in a pre-service science teacher education course (School Science I) as the students worked on a content analysis of the digestive system topic. During the course, the students made their own CoRes for the digestive system topic through a workshop, working individually, in pairs, and in whole-class discussion. Data were recorded from students' CoRes, student reflective journals, interviews, and field notes recorded in the researcher's reflective journal. Pre-service teachers' comments from interviews and reflective journals were coded in relation to references about: (1) the effectiveness of the various strategies in building the knowledge bases required to design a CoRe and (2) their awareness and/or development of tentative components of future PCK for the digestive system topic as a result of CoRe construction. Observational data were examined for indications of increasing independence and competency on the part of the student teachers when locating appropriate information for designing their CoRes. From this study, it is hoped that the pre-service science teachers are able to build knowledge and then transform it into a form of PCK for the digestive system topic, for their first classroom planning and teaching, to teach digestive system content effectively.

  16. An efficient modeling method for thermal stratification simulation in a BWR suppression pool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haihua Zhao; Ling Zou; Hongbin Zhang

    2012-09-01

    The suppression pool in a BWR plant not only is the major heat sink within the containment system, but also provides the major source of emergency cooling water for the reactor core. In several accident scenarios, such as a LOCA and an extended station blackout, thermal stratification tends to form in the pool after the initial rapid venting stage. Accurately predicting the pool stratification phenomenon is important because it affects the peak containment pressure, and the pool temperature distribution also affects the NPSHa (Available Net Positive Suction Head) and therefore the performance of the pump which draws cooling water back to the core. Current safety analysis codes use 0-D lumped-parameter methods to calculate the energy and mass balance in the pool and therefore have large uncertainty in the prediction of scenarios in which stratification and mixing are important. While 3-D CFD methods can be used to analyze realistic 3-D configurations, these methods normally require very fine grid resolution to resolve thin substructures such as jets and wall boundaries, and therefore long simulation times. For mixing in stably stratified large enclosures, the BMIX++ code has been developed to implement a highly efficient analysis method for stratification, where the ambient fluid volume is represented by 1-D transient partial differential equations and substructures such as free or wall jets are modeled with 1-D integral models. This allows very large reductions in computational effort compared to 3-D CFD modeling. The POOLEX experiments in Finland, which were designed to study phenomena relevant to Nordic-design BWR suppression pools, including thermal stratification and mixing, are used for validation. GOTHIC lumped-parameter models are used to obtain boundary conditions for the BMIX++ code and the CFD simulations. A comparison of the BMIX++, GOTHIC, and CFD calculations against the POOLEX experimental data is discussed in detail.
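
    A toy rendition of the 1-D ambient-volume idea: the pool reduces to a vertical transient diffusion equation, which is why the approach is so much cheaper than 3-D CFD. Properties are illustrative and the jet integral models are omitted.

        import numpy as np

        nz, H = 100, 5.0                    # vertical cells, pool depth (m)
        dz = H / nz
        alpha = 1.5e-7                      # thermal diffusivity of water (m^2/s)
        dt = 0.4 * dz**2 / alpha            # explicit stability limit honored
        T = np.full(nz, 300.0)              # initially uniform pool (K)
        T[-10:] = 330.0                     # hot stratified layer near the surface

        for _ in range(5000):               # march the 1-D diffusion equation in time
            T[1:-1] += alpha * dt / dz**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
            T[0], T[-1] = T[1], T[-2]       # adiabatic bottom and surface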

  17. LDRD final report on massively-parallel linear programming : the parPCx system.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parekh, Ojas; Phillips, Cynthia Ann; Boman, Erik Gunnar

    2005-02-01

    This report summarizes the research and development performed from October 2002 to September 2004 at Sandia National Laboratories under the Laboratory-Directed Research and Development (LDRD) project "Massively-Parallel Linear Programming". We developed a linear programming (LP) solver designed to use a large number of processors. LP is the optimization of a linear objective function subject to linear constraints. Companies and universities have expended huge efforts over decades to produce fast, stable serial LP solvers. Previous parallel codes run on shared-memory systems and have little or no distribution of the constraint matrix. We have seen no reports of general LP solver runs on large numbers of processors. Our parallel LP code is based on an efficient serial implementation of Mehrotra's interior-point predictor-corrector algorithm (PCx). The computational core of this algorithm is the assembly and solution of a sparse linear system. We have substantially rewritten the PCx code and based it on Trilinos, the parallel linear algebra library developed at Sandia. Our interior-point method can use either direct or iterative solvers for the linear system. To achieve a good parallel data distribution of the constraint matrix, we use a (pre-release) version of a hypergraph partitioner from the Zoltan partitioning library. We describe the design and implementation of our new LP solver called parPCx and give preliminary computational results. We summarize a number of issues related to efficient parallel solution of LPs with interior-point methods including data distribution, numerical stability, and solving the core linear system using both direct and iterative methods. We describe a number of applications of LP specific to US Department of Energy mission areas and we summarize our efforts to integrate parPCx (and parallel LP solvers in general) into Sandia's massively-parallel integer programming solver PICO (Parallel Integer and Combinatorial Optimizer). We conclude with directions for long-term future algorithmic research and for near-term development that could improve the performance of parPCx.
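
    The computational core named above reduces, at each interior-point step, to assembling and solving the normal equations A D^2 A^T dy = r, with D refreshed from the current iterate. A minimal SciPy sketch on made-up data (not the parPCx/Trilinos implementation) follows.

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        rng = np.random.default_rng(1)
        m, n = 50, 120                       # constraints, variables (toy sizes)
        A = sp.random(m, n, density=0.1, random_state=1, format="csr")
        x = rng.random(n) + 0.1              # strictly positive primal iterate
        s = rng.random(n) + 0.1              # strictly positive dual slacks
        r = rng.random(m)                    # RHS assembled from the KKT residuals

        D2 = sp.diags(x / s)                 # D^2 = X S^{-1}, changes every iteration
        # small shift guards against rank deficiency in the random demo data
        M = (A @ D2 @ A.T + 1e-10 * sp.eye(m)).tocsc()
        dy = spla.spsolve(M, r)              # direct solve; CG is the iterative route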

  18. Xenon-induced power oscillations in a generic small modular reactor

    NASA Astrophysics Data System (ADS)

    Kitcher, Evans Damenortey

    As world demand for energy continues to grow at unprecedented rates, the world energy portfolio of the future will inevitably include a nuclear energy contribution. It has been suggested that the Small Modular Reactor (SMR) could play a significant role in the spread of civilian nuclear technology to nations previously without nuclear energy. As part of the design process, the SMR design must be assessed for the threat to operations posed by xenon-induced power oscillations. In this research, a generic SMR design was analyzed with respect to just such a threat. In order to do so, a multi-physics coupling routine was developed with MCNP/MCNPX as the neutronics solver. Thermal-hydraulic assessments were performed using a single-channel analysis tool developed in Python. Fuel and coolant temperature profiles were implemented in the form of temperature-dependent fuel cross sections generated using the SIGACE code and reactor core coolant densities. The Power Axial Offset (PAO) and Xenon Axial Offset (XAO) parameters were chosen to quantify any oscillatory behavior observed. The methodology was benchmarked against literature results of startup tests performed at a four-loop PWR in Korea. The developed benchmark model replicated the pertinent features of the reactor within ten percent of the literature values. The results of the benchmark demonstrated that the developed methodology captured the desired phenomena accurately. Subsequently, a high-fidelity SMR core model was developed and assessed. Results of the analysis revealed an inherently stable SMR design at beginning of core life and end of core life under full-power and half-power conditions. The effect of axial discretization, stochastic noise and convergence of the Monte Carlo tallies in the calculations of the PAO and XAO parameters was investigated. All were found to be quite small, and the inherently stable nature of the core design with respect to xenon-induced power oscillations was confirmed. Finally, a preliminary investigation into excess reactivity control options for the SMR design was conducted, confirming the generally held notion that existing PWR control mechanisms can be used in iPWR SMRs with similar effectiveness. With the desire to operate the SMR with boron-free coolant, erbium oxide integral burnable absorber fuel rods were identified as a possible replacement that retains the dispersed-absorber effect of soluble boron in the reactor coolant.
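
    The PAO and XAO are simple functionals of the axial distribution; the sketch below assumes the common definition (top-half minus bottom-half power over total), which the abstract itself does not spell out.

        import numpy as np

        def axial_offset(p):
            """(P_top - P_bottom) / (P_top + P_bottom) for an axial power profile
            ordered bottom to top (even number of axial nodes assumed)."""
            p = np.asarray(p, dtype=float)
            bottom, top = np.split(p, 2)
            return (top.sum() - bottom.sum()) / p.sum()

        print(axial_offset([1.0, 1.2, 1.2, 1.0]))   # symmetric shape -> 0.0
        print(axial_offset([0.8, 1.0, 1.2, 1.4]))   # top-peaked -> +0.18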

  19. I-NERI Quarterly Technical Report (April 1 to June 30, 2005)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang Oh; Prof. Hee Cheon NO; Prof. John Lee

    2005-06-01

    The objective of this Korean/United States laboratory/university collaboration is to develop new advanced computational methods for safety analysis codes for very-high-temperature gas-cooled reactors (VHTGRs) and to numerically and experimentally validate these computer codes. This study consists of five tasks for FY-03: (1) development of computational methods for the VHTGR, (2) theoretical modification of the aforementioned computer codes for molecular diffusion (RELAP5/ATHENA) and modeling of CO and CO2 equilibrium (MELCOR), (3) development of a state-of-the-art methodology for VHTGR neutronic analysis and calculation of accurate power distributions and decay heat deposition rates, (4) a reactor cavity cooling system experiment, and (5) a graphite oxidation experiment. Second quarter of Year 3: (A) Prof. NO and Kim continued Task 1. As a further plant application of the GAMMA code, we conducted two analyses: an IAEA GT-MHR benchmark calculation for LPCC and an air ingress analysis for a 600 MWt PMR. The GAMMA code shows a peak fuel temperature trend comparable to those of other countries' codes. The analysis results for air ingress show a much different trend from that of the previous PBR analysis: a later onset of natural circulation and a less significant rise in graphite temperature. (B) Prof. Park continued Task 2. We have designed a new separate-effect test device having the same heat transfer area but a different diameter and total number of U-bends of the air cooling pipe. The new design has a smaller pressure drop in the air cooling pipe than the previous one, as it was designed with a larger diameter and fewer U-bends. With the device, additional experiments have been performed to obtain temperature distributions of the water tank and of the surface and center of the cooling pipe along its axis. The results will be used to optimize the design of the SNU-RCCS. (C) Prof. NO continued Task 3. The experimental work on air ingress is proceeding without any concern: with nuclear graphite IG-110, various kinetic parameters and reaction rates for the C/CO2 reaction were measured. The rates of the C/CO2 reaction were then compared to those of the C/O2 reaction. The rate equation for C/CO2 has been developed. (D) INL added models to RELAP5/ATHENA to calculate the chemical reactions in a VHTR during an air ingress accident. Limited testing of the models indicates that they are calculating a correct spatial distribution of gas compositions. (E) INL benchmarked NACOK natural circulation data. (F) Professor Lee et al. at the University of Michigan (UM) continued Task 5. The funding was received from the DOE Richland Office at the end of May and the subcontract paperwork was delivered to the UM on the sixth of June. The objective of this task is to develop a state-of-the-art neutronics model for determining power distributions and decay heat deposition rates in a VHTGR core. Our effort during the reporting period covered reactor physics analysis of coated particles and coupled nuclear-thermal-hydraulic (TH) calculations, together with initial calculations of decay heat deposition rates in the core.

  20. Experimental and code simulation of a station blackout scenario for APR1400 with test facility ATLAS and MARS code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, X. G.; Kim, Y. S.; Choi, K. Y.

    2012-07-01

    A station blackout (SBO) experiment named SBO-01 was performed at the full-pressure IET (Integral Effect Test) facility ATLAS (Advanced Test Loop for Accident Simulation), which is scaled down from the APR1400 (Advanced Power Reactor 1400 MWe). In this study, the transient of SBO-01 is discussed and subdivided into three phases: the SG fluid loss phase, the RCS fluid loss phase, and the core coolant depletion and core heatup phase. In addition, the typical phenomena in the SBO-01 test - SG dryout, natural circulation, core coolant boiling, PRZ fill-up, core heat-up - are identified. Furthermore, the SBO-01 test is reproduced by a MARS code calculation with the ATLAS model, which represents the ATLAS test facility. The experimental and calculated transients are then compared and discussed. The comparison reveals malfunctions of equipment: SG leakage through an SG MSSV and a measurement error in the loop flow meter. As the ATLAS model is validated against the experimental results, it can be further employed to investigate other possible SBO scenarios and to study the scaling distortions in ATLAS. (authors)

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Downar, Thomas

    This report summarizes the current status of VERA-CS Verification and Validation (V&V) for PWR Core Follow operation and proposes a multi-phase plan for continuing VERA-CS V&V in FY17 and FY18. The proposed plan recognizes the hierarchical nature of a multi-physics code system such as VERA-CS and the importance of first achieving an acceptable level of V&V on each of the single-physics codes before focusing on the V&V of the coupled-physics solution. The report summarizes the V&V of each of the single-physics code systems currently used for core follow analysis (i.e., MPACT, CTF, Multigroup Cross Section Generation, and BISON / Fuel Temperature Tables) and proposes specific actions to achieve a uniformly acceptable level of V&V in FY17. The report also recognizes the ongoing development of other codes important for PWR Core Follow (e.g., TIAMAT, MAMBA3D) and proposes Phase II (FY18) VERA-CS V&V activities in which those codes will also reach an acceptable level of V&V. The report then summarizes the current status of VERA-CS multi-physics V&V for PWR Core Follow and the ongoing PWR Core Follow V&V activities for FY17. An automated procedure and output data format are proposed for standardizing the output of core follow calculations and automatically generating tables and figures for the VERA-CS LaTeX file. A set of acceptance metrics is also proposed for the evaluation and assessment of core follow results; these would be used within the script to automatically flag any results which require further analysis or more detailed explanation prior to being added to the VERA-CS validation base. After the automation scripts have been completed and tested using BEAVRS, the VERA-CS plan proposes that the Watts Bar cycle depletion cases be performed with the new cross-section library and be included in the first draft of the new VERA-CS manual for release at the end of PoR15. Also, within the constraints imposed by the proprietary nature of plant data, as many as possible of the FY17 AMA Plant Core Follow cases should also be included in the VERA-CS manual at the end of PoR15. After completion of the ongoing development of TIAMAT for fully coupled, full-core calculations with VERA-CS / BISON 1.5D, and after the completion of the refactoring of MAMBA3D for CIPS analysis in FY17, selected cases from the VERA-CS validation base should be performed, beginning with the legacy cases of Watts Bar and BEAVRS in PoR16. Finally, as potential Phase III future work, some additional considerations are identified for extending the VERA-CS V&V to other reactor types such as the BWR.

  2. THC-MP: High performance numerical simulation of reactive transport and multiphase flow in porous media

    NASA Astrophysics Data System (ADS)

    Wei, Xiaohui; Li, Weishan; Tian, Hailong; Li, Hongliang; Xu, Haixiao; Xu, Tianfu

    2015-07-01

    The numerical simulation of multiphase flow and reactive transport in porous media for complex subsurface problems is a computationally intensive application. To meet the increasing computational requirements, this paper presents a parallel computing method and architecture. Derived from TOUGHREACT, a well-established code for simulating subsurface multiphase flow and reactive transport problems, we developed the high-performance computing code THC-MP for massively parallel computers, which greatly extends the computational capability of the original code. The domain decomposition method was applied to the coupled numerical computing procedure in THC-MP. We designed the distributed data structures and implemented the data initialization and exchange between the computing nodes and the core solving module using hybrid parallel iterative and direct solvers. The numerical accuracy of THC-MP was verified through a CO2 injection-induced reactive transport problem by comparing the results obtained from the parallel computing and sequential computing (original code). Execution efficiency and code scalability were examined through field-scale carbon sequestration applications on a multicore cluster. The results successfully demonstrate the enhanced performance of THC-MP on parallel computing facilities.
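
    A minimal sketch of the ghost-cell exchange such a domain decomposition performs on each iteration, written with mpi4py for brevity and run under mpiexec; the actual THC-MP data structures and exchange pattern are not reproduced here.

        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        n_local = 64                                # cells owned by this rank
        u = np.zeros(n_local + 2)                   # +2 ghost cells at the ends
        u[1:-1] = rank                              # fill interior with rank id (demo)

        left = rank - 1 if rank > 0 else MPI.PROC_NULL
        right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

        # send first interior cell left, receive right neighbor's into right ghost
        comm.Sendrecv(u[1:2], dest=left, recvbuf=u[-1:], source=right)
        # send last interior cell right, receive left neighbor's into left ghost
        comm.Sendrecv(u[-2:-1], dest=right, recvbuf=u[0:1], source=left)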

  3. Post-test analysis of PIPER-ONE PO-IC-2 experiment by RELAP5/MOD3 codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bovalini, R.; D`Auria, F.; Galassi, G.M.

    1996-11-01

    RELAP5/MOD3.1 was applied to the PO-IC-2 experiment performed in the PIPER-ONE facility, which has been modified to reproduce typical isolation condenser thermal-hydraulic conditions. RELAP5 is a well-known code widely used at the University of Pisa during the past seven years. RELAP5/MOD3.1 was the latest version of the code made available by the Idaho National Engineering Laboratory at the time of the reported study. PIPER-ONE is an experimental facility simulating a General Electric BWR-6 with volume and height scaling ratios of 1/2200 and 1/1, respectively. In the frame of the present activity, a once-through heat exchanger immersed in a pool of ambient-temperature water, installed approximately 10 m above the core, was utilized to qualitatively reproduce the phenomena expected for the Isolation Condenser in the simplified BWR (SBWR). The PO-IC-2 experiment is the flood-up of PO-SD-8 and has been designed to solve some of the problems encountered in the analysis of the PO-SD-8 experiment. A very wide-ranging analysis is presented hereafter, including the use of different code versions.

  4. Coupling procedure for TRANSURANUS and KTF codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jimenez, J.; Alglave, S.; Avramova, M.

    2012-07-01

    The nuclear industry aims to ensure safe and economic operation of each single fuel rod introduced in the reactor core. This goal is even more challenging nowadays due to the current strategy of going for higher burn-up (fuel cycles of 18 or 24 months) and longer residence times. In order to achieve that goal, fuel modeling is the key to predicting fuel rod behavior and lifetime under thermal and pressure loads, corrosion and irradiation. In this context, fuel performance codes, such as TRANSURANUS, are used to improve the fuel rod design. The modeling capabilities of the above-mentioned tools can be significantly improved if they are coupled with a thermal-hydraulic code in order to have a better description of the flow conditions within the rod bundle. For LWR applications, a good representation of the two-phase flow within the fuel assembly is necessary in order to have a best-estimate calculation of the heat transfer inside the bundle. In this paper we present the coupling methodology of TRANSURANUS with KTF (Karlsruhe Two-phase Flow subchannel code) as well as selected results of the coupling proof of principle. (authors)

  5. SASS-1--SUBASSEMBLY STRESS SURVEY CODE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friedrich, C.M.

    1960-01-01

    SASS-1, an IBM-704 FORTRAN code, calculates pressure, thermal, and combined stresses in a nuclear reactor core subassembly. In addition to cross-section stresses, the code calculates axial shear stresses needed to keep plane cross sections plane under axial variations of temperature. The input and output nomenclature, arrangement, and formats are described. (B.O.G.)

  6. Coupling of TRAC-PF1/MOD2, Version 5.4.25, with NESTLE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knepper, P.L.; Hochreiter, L.E.; Ivanov, K.N.

    1999-09-01

    A three-dimensional (3-D) spatial kinetics capability within a thermal-hydraulics system code provides a more correct description of the core physics during reactor transients that involve significant variations in the neutron flux distribution. Coupled codes provide the ability to forecast safety margins in a best-estimate manner. The behavior of a reactor core and the feedback to the plant dynamics can be accurately simulated. For each time step, coupled codes are capable of resolving system interaction effects on neutronics feedback and are capable of describing local neutronics effects caused by the thermal hydraulics and neutronics coupling. With the improvements in computational technology, modeling complex reactor behaviors with coupled thermal hydraulics and spatial kinetics is feasible. Previously, reactor analysis codes were limited to either a detailed thermal-hydraulics model with simplified kinetics or multidimensional neutron kinetics with a simplified thermal-hydraulics model. The authors discuss the coupling of the Transient Reactor Analysis Code (TRAC)-PF1/MOD2, Version 5.4.25, with the NESTLE code.

  7. Characterization of open-cycle coal-fired MHD generators. Quarterly technical summary report No. 6, October 1--December 31, 1977. [PACKAGE code]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kolb, C.E.; Yousefian, V.; Wormhoudt, J.

    1978-01-30

    Research has included theoretical modeling of important plasma chemical effects such as: conductivity reductions due to condensed slag/electron interactions; conductivity and generator efficiency reductions due to the formation of slag-related negative ion species; and the loss of alkali seed due to chemical combination with condensed slag. A summary of the major conclusions in each of these areas is presented. A major output of the modeling effort has been the development of an MHD plasma chemistry core flow model. This model has been formulated into a computer program designated the PACKAGE code (Plasma Analysis, Chemical Kinetics, And Generator Efficiency). The PACKAGE code is designed to calculate the effect of coal rank, ash percentage, ash composition, air preheat temperatures, equivalence ratio, and various generator channel parameters on the overall efficiency of open-cycle, coal-fired MHD generators. A complete description of the PACKAGE code and a preliminary version of the PACKAGE user's manual are included. A laboratory measurements program involving direct, mass spectrometric sampling of the positive and negative ions formed in a one-atmosphere coal combustion plasma was also completed during the contract's initial phase. The relative ion concentrations formed in a plasma due to the methane-augmented combustion of pulverized Montana Rosebud coal with potassium carbonate seed and preheated air are summarized. Positive ions measured include K⁺, KO⁺, Na⁺, Rb⁺, Cs⁺, and CsO⁺, while negative ions identified include PO₃⁻, PO₂⁻, BO₂⁻, OH⁻, SH⁻, and probably HCrO₃⁻, HMoO₄⁻, and HWO₃⁻. Comparisons of the measurements with PACKAGE code predictions are presented. Preliminary design considerations for a mass spectrometric sampling probe capable of characterizing coal combustion plasmas from full-scale combustors and flow trains are presented and discussed.

  8. N-body simulation for self-gravitating collisional systems with a new SIMD instruction set extension to the x86 architecture, Advanced Vector eXtensions

    NASA Astrophysics Data System (ADS)

    Tanikawa, Ataru; Yoshikawa, Kohji; Okamoto, Takashi; Nitadori, Keigo

    2012-02-01

    We present a high-performance N-body code for self-gravitating collisional systems accelerated with the aid of a new SIMD instruction set extension of the x86 architecture: Advanced Vector eXtensions (AVX), an enhanced version of the Streaming SIMD Extensions (SSE). With one processor core of an Intel Core i7-2600 processor (8 MB cache and 3.40 GHz) based on the Sandy Bridge micro-architecture, we implemented a fourth-order Hermite scheme with an individual timestep scheme (Makino and Aarseth, 1992), and achieved a performance of ~20 giga floating-point operations per second (GFLOPS) for double-precision accuracy, which is two times and five times higher than that of the previously developed code implemented with the SSE instructions (Nitadori et al., 2006b), and that of a code implemented without any explicit use of SIMD instructions with the same processor core, respectively. We have parallelized the code by using the so-called NINJA scheme (Nitadori et al., 2006a), and achieved ~90 GFLOPS for a system containing more than N = 8192 particles with 8 MPI processes on four cores. We expect to achieve about 10 tera FLOPS (TFLOPS) for a self-gravitating collisional system with N ~ 10^5 on massively parallel systems with at most 800 cores with the Sandy Bridge micro-architecture. This performance will be comparable to that of Graphics Processing Unit (GPU) cluster systems, such as one with about 200 Tesla C1070 GPUs (Spurzem et al., 2010). This paper offers an alternative to collisional N-body simulations with GRAPEs and GPUs.
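
    The kernel the AVX intrinsics accelerate is the O(N^2) direct summation of pairwise gravitational forces. A plain NumPy transcription of that kernel is shown below; the Hermite predictor-corrector integration and the SIMD details are omitted.

        import numpy as np

        def accelerations(pos, mass, eps2=1e-6):
            """Softened Newtonian accelerations, G = 1; pos (N,3), mass (N,)."""
            dx = pos[None, :, :] - pos[:, None, :]        # dx[i, j] = r_j - r_i
            r2 = (dx**2).sum(axis=-1) + eps2              # softened squared distances
            inv_r3 = r2**-1.5
            np.fill_diagonal(inv_r3, 0.0)                 # drop self-interaction
            return (dx * (mass[None, :] * inv_r3)[:, :, None]).sum(axis=1)

        rng = np.random.default_rng(2)
        N = 1024
        acc = accelerations(rng.normal(size=(N, 3)), np.full(N, 1.0 / N))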

  9. Best estimate plus uncertainty analysis of departure from nucleate boiling limiting case with CASL core simulator VERA-CS in response to PWR main steam line break event

    DOE PAGES

    Brown, Cameron S.; Zhang, Hongbin; Kucukboyaci, Vefa; ...

    2016-09-07

    VERA-CS (Virtual Environment for Reactor Applications, Core Simulator) is a coupled neutron transport and thermal-hydraulics subchannel code under development by the Consortium for Advanced Simulation of Light Water Reactors (CASL). VERA-CS was used to simulate a typical pressurized water reactor (PWR) full core response with 17x17 fuel assemblies for a main steam line break (MSLB) accident scenario with the most reactive rod cluster control assembly stuck out of the core. The accident scenario was initiated at the hot zero power (HZP) at the end of the first fuel cycle with return to power state points that were determined by a system analysis code and the most limiting state point was chosen for core analysis. The best estimate plus uncertainty (BEPU) analysis method was applied using Wilks' nonparametric statistical approach. In this way, 59 full core simulations were performed to provide the minimum departure from nucleate boiling ratio (MDNBR) at the 95/95 (95% probability with 95% confidence level) tolerance limit. The results show that this typical PWR core remains within MDNBR safety limits for the MSLB accident.
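
    The 59 simulations follow from Wilks' formula for a one-sided, first-order 95/95 tolerance bound: the smallest n for which the sample maximum covers the 95th percentile with 95% confidence, i.e. 1 - 0.95^n >= 0.95.

        # Smallest n with 1 - 0.95**n >= 0.95 (one-sided first-order Wilks bound)
        n = 1
        while 1.0 - 0.95**n < 0.95:
            n += 1
        print(n)   # -> 59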

  11. Numerical optimization of three-dimensional coils for NSTX-U

    DOE PAGES

    Lazerson, S. A.; Park, J. -K.; Logan, N.; ...

    2015-09-03

    A tool for the calculation of optimal three-dimensional (3D) perturbative magnetic fields in tokamaks has been developed. The IPECOPT code builds upon the stellarator optimization code STELLOPT to allow for optimization of the linear ideal magnetohydrodynamic perturbed equilibrium (IPEC). This tool has been applied to NSTX-U equilibria, addressing which fields are the most effective at driving NTV torques. The NTV torque calculation is performed by the PENT code. Optimization of the normal field spectrum shows that fields with n = 1 character can drive a large core torque. It is also shown that fields with n = 3 features are capable of driving edge torque and some core torque. Coil current optimization (using the planned in-vessel and existing RWM coils) on NSTX-U suggests the planned coil set is adequate for core and edge torque control. In conclusion, comparison between error field correction experiments on DIII-D and the optimizer shows good agreement.

  12. Assessment of the prevailing physics codes: LEOPARD, LASER, and EPRI-CELL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lan, J.S.

    1981-01-01

    In order to analyze core performance and fuel management, it is necessary to verify reactor physics codes in great detail. This kind of work not only serves the purpose of understanding and controlling the characteristics of each code, but also ensures reliability as codes continually change due to constant modifications and machine transfers. This paper presents the results of a comprehensive verification of three code packages: LEOPARD, LASER, and EPRI-CELL.

  13. Results of comparative RBMK neutron computation using VNIIEF codes (cell computation, 3D statics, 3D kinetics). Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grebennikov, A.N.; Zhitnik, A.K.; Zvenigorodskaya, O.A.

    1995-12-31

    In conformity with the protocol of the Workshop under the contract "Assessment of RBMK reactor safety using modern Western codes," VNIIEF performed a series of neutronics computations to compare Western and VNIIEF codes and to assess whether VNIIEF codes are suitable for RBMK-type reactor safety assessment computations. The work was carried out in close collaboration with M.I. Rozhdestvensky and L.M. Podlazov, NIKIET employees. The effort involved: (1) cell computations with the WIMS and EKRAN codes (an improved modification of the LOMA code) and the S-90 code (VNIIEF Monte Carlo), covering cell, polycell, and burnup computations; (2) 3D computation of static states with the KORAT-3D and NEU codes and comparison with results of computation with the NESTLE code (USA), performed in the geometry and with the neutron constants provided by the American party; and (3) 3D computation of neutron kinetics with the KORAT-3D and NEU codes. These kinetics computations were performed in two formulations, both developed in collaboration with NIKIET. The formulation of the first problem agrees as closely as possible with one of the NESTLE problems and imitates gas bubble travel through a core. The second problem is a model of the RBMK as a whole with imitation of control and protection system (CPS) rod movement in the core.

  14. Computational performance of a smoothed particle hydrodynamics simulation for shared-memory parallel computing

    NASA Astrophysics Data System (ADS)

    Nishiura, Daisuke; Furuichi, Mikito; Sakaguchi, Hide

    2015-09-01

    The computational performance of a smoothed particle hydrodynamics (SPH) simulation is investigated for three types of current shared-memory parallel computer devices: many integrated core (MIC) processors, graphics processing units (GPUs), and multi-core CPUs. We are especially interested in efficient shared-memory allocation methods for each chipset, because the efficient data access patterns differ between compute unified device architecture (CUDA) programming for GPUs and OpenMP programming for MIC processors and multi-core CPUs. We first introduce several parallel implementation techniques for the SPH code, and then examine these on our target computer architectures to determine the most effective algorithms for each processor unit. In addition, we evaluate the effective computing performance and power efficiency of the SPH simulation on each architecture, as these are critical metrics for overall performance in a multi-device environment. In our benchmark test, the GPU is found to produce the best arithmetic performance as a standalone device unit, and gives the most efficient power consumption. The multi-core CPU obtains the most effective computing performance. The computational speed of the MIC processor on Xeon Phi approached that of two Xeon CPUs. This indicates that using MICs is an attractive choice for existing SPH codes on multi-core CPUs parallelized by OpenMP, as it gains computational acceleration without the need for significant changes to the source code.
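
    The hot loop being tuned on each device is a neighbor-weighted summation. Below is a brute-force NumPy rendering of the SPH density sum, with a Gaussian kernel standing in for a production kernel and neighbor lists omitted.

        import numpy as np

        def sph_density(pos, mass, h):
            """Density at each particle: rho_i = sum_j m_j W(|r_i - r_j|, h)."""
            r2 = ((pos[:, None, :] - pos[None, :, :])**2).sum(axis=-1)
            w = np.exp(-r2 / h**2) / (np.pi**1.5 * h**3)   # normalized Gaussian kernel
            return w @ mass

        rng = np.random.default_rng(4)
        pos = rng.random((500, 3))                         # particles in a unit box
        rho = sph_density(pos, np.full(500, 1.0 / 500), h=0.1)
        print(rho.mean())                                  # of order the mean box density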

  15. Time-dependent simulations of disk-embedded planetary atmospheres

    NASA Astrophysics Data System (ADS)

    Stökl, A.; Dorfi, E. A.

    2014-03-01

    At the early stages of the evolution of planetary systems, young Earth-like planets still embedded in the protoplanetary disk accumulate disk gas gravitationally into planetary atmospheres. The established way to study such atmospheres is hydrostatic models, even though in many cases the assumption of stationarity is unlikely to be fulfilled. Furthermore, such models rely on the specification of a planetary luminosity, attributed to a continuous, highly uncertain accretion of planetesimals onto the surface of the solid core. We present for the first time time-dependent, dynamic simulations of the accretion of nebula gas into an atmosphere around a proto-planet and the evolution of such embedded atmospheres while integrating the thermal energy budget of the solid core. The spherically symmetric models computed with the TAPIR-Code (short for The adaptive, implicit RHD-Code) range from the surface of the rocky core up to the Hill radius, where the surrounding protoplanetary disk provides the boundary conditions. The TAPIR-Code includes the hydrodynamics equations, gray radiative transport and convective energy transport. The results indicate that disk-embedded planetary atmospheres evolve along comparatively simple outlines and in particular settle, depending on the mass of the solid core, at characteristic surface temperatures and planetary luminosities, quite independent of numerical parameters and initial conditions. For sufficiently massive cores, this evolution ultimately also leads to runaway accretion and the formation of a gas planet.

  16. Bumper 3 Update for IADC Protection Manual

    NASA Technical Reports Server (NTRS)

    Christiansen, Eric L.; Nagy, Kornel; Hyde, Jim

    2016-01-01

    The Bumper code has been the standard in use by NASA and contractors to perform meteoroid/debris risk assessments since 1990. It has undergone extensive revisions and updates [NASA JSC HITF website; Christiansen et al., 1992, 1997]. NASA Johnson Space Center (JSC) has applied BUMPER to risk assessments for Space Station, Shuttle, Mir, Extravehicular Mobility Units (EMU) space suits, and other spacecraft (e.g., LDEF, Iridium, TDRS, and Hubble Space Telescope). Bumper continues to be updated with changes in the ballistic limit equations describing failure threshold of various spacecraft components, as well as changes in the meteoroid and debris environment models. Significant efforts are expended to validate Bumper and benchmark it to other meteoroid/debris risk assessment codes. Bumper 3 is a refactored version of Bumper II. The structure of the code was extensively modified to improve maintenance, performance and flexibility. The architecture was changed to separate the frequently updated ballistic limit equations from the relatively stable common core functions of the program. These updates allow NASA to produce specific editions of Bumper 3 that are tailored for specific customer requirements. The core consists of common code necessary to process the Micrometeoroid and Orbital Debris (MMOD) environment models, assess shadowing and calculate MMOD risk. The library of target response subroutines includes a broad range of different types of MMOD shield ballistic limit equations as well as equations describing damage to various spacecraft subsystems or hardware (thermal protection materials, windows, radiators, solar arrays, cables, etc.). The core and library of ballistic response subroutines are maintained under configuration control. A change in the core will affect all editions of the code, whereas a change in one or more of the response subroutines will affect all editions of the code that contain the particular response subroutines which are modified. Note that the Bumper II program is no longer maintained or distributed by NASA.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, C. S.; Zhang, Hongbin

    Uncertainty quantification and sensitivity analysis are important for nuclear reactor safety design and analysis. A 2x2 fuel assembly core design was developed and simulated by the Virtual Environment for Reactor Applications, Core Simulator (VERA-CS), a coupled neutronics and thermal-hydraulics code under development by the Consortium for Advanced Simulation of Light Water Reactors (CASL). An approach to uncertainty quantification and sensitivity analysis with VERA-CS was developed, and a new toolkit was created to perform uncertainty quantification and sensitivity analysis with fourteen uncertain input parameters. The minimum departure from nucleate boiling ratio (MDNBR), maximum fuel centerline temperature, and maximum outer clad surface temperature were chosen as the figures of merit. Pearson, Spearman, and partial correlation coefficients were considered for all of the figures of merit in the sensitivity analysis, and coolant inlet temperature was consistently the most influential parameter. Parameters used as inputs to the critical heat flux calculation with the W-3 correlation were shown to be among the most influential on the MDNBR, maximum fuel centerline temperature, and maximum outer clad surface temperature.
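
    A small sketch of the correlation measures named above, applied to synthetic stand-in samples (not the paper's data): inlet temperature against MDNBR.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        t_inlet = rng.normal(565.0, 2.0, size=200)      # sampled inlet temperature (K)
        mdnbr = 2.2 - 0.05 * (t_inlet - 565.0) + rng.normal(0.0, 0.02, size=200)

        pearson, _ = stats.pearsonr(t_inlet, mdnbr)     # linear association
        spearman, _ = stats.spearmanr(t_inlet, mdnbr)   # rank (monotonic) association
        print(f"Pearson {pearson:+.2f}, Spearman {spearman:+.2f}")   # both near -1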

  18. Validation of the WIMSD4M cross-section generation code with benchmark results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deen, J.R.; Woodruff, W.L.; Leal, L.E.

    1995-01-01

    The WIMSD4 code has been adopted for cross-section generation in support of the Reduced Enrichment Research and Test Reactor (RERTR) program at Argonne National Laboratory (ANL). Subsequently, the code has undergone several updates, and significant improvements have been achieved. The capability of generating group-collapsed micro- or macroscopic cross sections from the ENDF/B-V library and the more recent evaluation, ENDF/B-VI, in the ISOTXS format makes the modified version of the WIMSD4 code, WIMSD4M, very attractive, not only for the RERTR program, but also for the reactor physics community. The intent of the present paper is to validate the WIMSD4M cross-section libraries for reactor modeling of fresh water moderated cores. The results of calculations performed with multigroup cross-section data generated with the WIMSD4M code will be compared against experimental results. These results correspond to calculations carried out with thermal reactor benchmarks of the Oak Ridge National Laboratory (ORNL) unreflected HEU critical spheres, the TRX LEU critical experiments, and calculations of a modified Los Alamos HEU D{sub 2}O moderated benchmark critical system. The benchmark calculations were performed with the discrete-ordinates transport code, TWODANT, using WIMSD4M cross-section data. Transport calculations using the XSDRNPM module of the SCALE code system are also included. In addition to transport calculations, diffusion calculations with the DIF3D code were also carried out, since the DIF3D code is used in the RERTR program for reactor analysis and design. For completeness, Monte Carlo results of calculations performed with the VIM and MCNP codes are also presented.

  19. Embedded 3D shape measurement system based on a novel spatio-temporal coding method

    NASA Astrophysics Data System (ADS)

    Xu, Bin; Tian, Jindong; Tian, Yong; Li, Dong

    2016-11-01

    Structured light measurement has been widely used since the 1970s in industrial component detection, reverse engineering, 3D molding, robot navigation, medicine and many other fields. In order to satisfy the demand for high-speed, high-precision and high-resolution 3-D measurement on embedded systems, new patterns combining binary and Gray coding principles in space are designed and projected onto the object surface in sequence. Each pixel corresponds to the designed sequence of gray values in the time domain, which is treated as a feature vector. The unique gray-value vector is then dimensionally reduced to a scalar which can be used as characteristic information for binocular matching. In this method, the number of projected structured light patterns is reduced, and the time-consuming phase unwrapping of traditional phase-shift methods is avoided. The algorithm is implemented on a DM3730 embedded system for 3-D measurement, which consists of an ARM and a DSP core and has a strong capability for digital signal processing. Experimental results demonstrate the feasibility of the proposed method.
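
    A sketch of the pattern-generation side of such a scheme: Gray-coded stripe patterns give every projector column a unique time-domain bit vector, with adjacent columns differing in a single bit. This is illustrative only; the paper's hybrid binary/Gray layout and scalar reduction are not reproduced.

        import numpy as np

        def gray_patterns(width, n_bits):
            """(n_bits, width) stripe patterns; column index encoded in Gray code."""
            cols = np.arange(width)
            gray = cols ^ (cols >> 1)                   # binary -> Gray conversion
            shifts = np.arange(n_bits - 1, -1, -1)
            return ((gray[None, :] >> shifts[:, None]) & 1).astype(np.uint8)

        pats = gray_patterns(width=1024, n_bits=10)     # 10 projected frames
        # decode each pixel's time sequence back to its column index
        g = (pats * (1 << np.arange(9, -1, -1))[:, None]).sum(axis=0)
        b = g.copy()
        for shift in (1, 2, 4, 8):                      # Gray -> binary
            b ^= b >> shift
        assert (b == np.arange(1024)).all()             # round trip recovers every column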

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dokhane, A.; Canepa, S.; Ferroukhi, H.

    For stability analyses of the Swiss operating Boiling Water Reactors (BWRs), the methodology employed and validated so far at the Paul Scherrer Institute (PSI) was based on the RAMONA-3 code with a hybrid upstream static lattice/core analysis approach using CASMO-4 and PRESTO-2. More recently, steps were undertaken towards a new methodology based on the SIMULATE-3K (S3K) code for the dynamical analyses, combined with the CMSYS system relying on the CASMO/SIMULATE-3 suite of codes, which was established at PSI to serve as a framework for the development and validation of reference core models of all the Swiss reactors and operated cycles. This paper presents a first validation of the new methodology on the basis of a benchmark recently organised by a Swiss utility and including the participation of several international organisations with various codes/methods. In parallel, a transition from CASMO-4E (C4E) to CASMO-5M (C5M) as the basis for the CMSYS core models was also recently initiated at PSI. Consequently, it was considered adequate to address the impact of this transition both for the steady-state core analyses as well as for the stability calculations and to achieve thereby an integral approach for the validation of the new S3K methodology. Therefore, a comparative assessment of C4E versus C5M is also presented in this paper, with particular emphasis on the void coefficients and their impact on the downstream stability analysis results. (authors)

  1. Development of a New 47-Group Library for the CASL Neutronics Simulators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Kang Seog; Williams, Mark L; Wiarda, Dorothea

    The CASL core simulator MPACT is under development for coupled neutronics and thermal-hydraulics simulation of pressurized light water reactors. The key characteristics of the MPACT code include a subgroup method for resonance self-shielding and a whole-core solver with a 1D/2D synthesis method. The ORNL AMPX/SCALE code packages have been significantly improved to support various intermediate resonance self-shielding approximations such as the subgroup and embedded self-shielding methods. New 47-group AMPX and MPACT libraries based on ENDF/B-VII.0, whose group structure comes from the HELIOS library, have been generated for the CASL core simulator MPACT. The new 47-group MPACT library includes all nuclear data required for static and transient core simulations. This study discusses the detailed procedure used to generate the 47-group AMPX and MPACT libraries and benchmark results for the VERA progression problems.

  2. Design and Simulation of Material-Integrated Distributed Sensor Processing with a Code-Based Agent Platform and Mobile Multi-Agent Systems

    PubMed Central

    Bosse, Stefan

    2015-01-01

    Multi-agent systems (MAS) can be used for decentralized and self-organizing data processing in a distributed system, like a resource-constrained sensor network, enabling distributed information extraction, for example, based on pattern recognition and self-organization, by decomposing complex tasks into simpler cooperative agents. Reliable MAS-based data processing approaches can aid the material-integration of structural-monitoring applications, with agent processing platforms scaled to the microchip level. The agent behavior, based on a dynamic activity-transition graph (ATG) model, is implemented with program code storing the control and the data state of an agent, which is novel. The program code can be modified by the agent itself using code morphing techniques and is capable of migrating in the network between nodes. The program code is a self-contained unit (a container) and embeds the agent data, the initialization instructions and the ATG behavior implementation. The microchip agent processing platform used for the execution of the agent code is a standalone multi-core stack machine with a zero-operand instruction format, leading to a small-sized agent program code, low system complexity and high system performance. The agent processing is token-queue-based, similar to Petri-nets. The agent platform can be implemented in software, too, offering compatibility at the operational and code level, supporting agent processing in strongly heterogeneous networks. In this work, the agent platform embedded in a large-scale distributed sensor network is simulated at the architectural level by using agent-based simulation techniques. PMID:25690550

  3. Design and simulation of material-integrated distributed sensor processing with a code-based agent platform and mobile multi-agent systems.

    PubMed

    Bosse, Stefan

    2015-02-16

    Multi-agent systems (MAS) can be used for decentralized and self-organizing data processing in a distributed system, like a resource-constrained sensor network, enabling distributed information extraction, for example, based on pattern recognition and self-organization, by decomposing complex tasks into simpler cooperative agents. Reliable MAS-based data processing approaches can aid the material integration of structural-monitoring applications, with agent processing platforms scaled to the microchip level. A novel aspect is that the agent behavior, based on a dynamic activity-transition graph (ATG) model, is implemented with program code that stores both the control and the data state of an agent. The program code can be modified by the agent itself using code morphing techniques and is capable of migrating in the network between nodes. The program code is a self-contained unit (a container) and embeds the agent data, the initialization instructions and the ATG behavior implementation. The microchip agent processing platform used for the execution of the agent code is a standalone multi-core stack machine with a zero-operand instruction format, leading to a small-sized agent program code, low system complexity and high system performance. The agent processing is token-queue-based, similar to Petri nets. The agent platform can also be implemented in software, offering compatibility at the operational and code level and supporting agent processing in strongly heterogeneous networks. In this work, the agent platform embedded in a large-scale distributed sensor network is simulated at the architectural level by using agent-based simulation techniques.

  4. Characterization, Modeling, and Failure Analysis of Composite Structure Materials under Static and Dynamic Loading

    NASA Astrophysics Data System (ADS)

    Werner, Brian Thomas

    Composite structures have long been used in many industries where it is advantageous to reduce weight while maintaining high stiffness and strength. Composites can now be found in an ever-broadening range of applications: sporting equipment, automobiles, marine and aerospace structures, and energy production. These structures are typically sandwich panels composed of fiber-reinforced polymer composite (FRPC) facesheets, which provide the stiffness and strength, and a low-density polymeric foam core that adds bending rigidity with little additional weight. The expanding use of composite structures exposes them to high-energy, high-velocity dynamic loadings which produce multi-axial dynamic states of stress. This circumstance can present quite a challenge to designers, as composite structures are highly anisotropic and display properties that are sensitive to loading rates. Computer codes are continually in development to assist designers in the creation of safe, efficient structures. While the design of an optimal composite structure is more complex, engineers can take advantage of the enhanced energy dissipation displayed by a composite when loaded at high strain rates. In order to build and verify effective computer codes, the underlying assumptions must be verified by laboratory experiments. Many of these codes use a micromechanical approach to determine the response of the structure. For this, the material properties of the constituent materials must be verified, three-dimensional constitutive laws must be developed, and the failure of these materials must be investigated under static and dynamic loading conditions. In this study, simple models are sought not only to ease their implementation into such codes, but also to allow for efficient characterization of new materials that may be developed. Characterization of composite materials and sandwich structures is a costly, time-intensive process. A constituent-based design approach evaluates potential combinations of materials in a much faster and more efficient manner.

  5. Code dependencies of pre-supernova evolution and nucleosynthesis in massive stars: evolution to the end of core helium burning

    DOE PAGES

    Jones, S.; Hirschi, R.; Pignatari, M.; ...

    2015-01-15

    We present a comparison of 15 M⊙, 20 M⊙ and 25 M⊙ stellar models from three different codes (GENEC, KEPLER and MESA) and their nucleosynthetic yields. The models are calculated from the main sequence up to the pre-supernova (pre-SN) stage and do not include rotation. The GENEC and KEPLER models hold physics assumptions that are characteristic of the two codes. The MESA code is generally more flexible; overshooting of the convective core during the hydrogen and helium burning phases in MESA is chosen such that the CO core masses are consistent with those in the GENEC models. Full nucleosynthesis calculations are performed for all models using the NuGrid post-processing tool MPPNP, and the key energy-generating nuclear reaction rates are the same for all codes. We are thus able to highlight the key differences between the models that are caused by the contrasting physics assumptions and numerical implementations of the three codes. A reasonable agreement is found between the surface abundances predicted by the models computed using the different codes, with GENEC exhibiting the strongest enrichment of H-burning products and KEPLER exhibiting the weakest. There are large variations in both the structure and composition of the models (the 15 M⊙ and 20 M⊙ models in particular) at the pre-SN stage from code to code, caused primarily by convective shell merging during the advanced stages. For example, the C-shell abundances of O, Ne and Mg predicted by the three codes span one order of magnitude in the 15 M⊙ models. For the alpha elements between Si and Fe the differences are even larger. The s-process abundances in the C shell are modified by the merging of convective shells; the modification is strongest in the 15 M⊙ model, in which the C-shell material is exposed to O-burning temperatures and the γ-process is activated. The variation in the s-process abundances across the codes is smallest in the 25 M⊙ models, where it is comparable to the impact of nuclear reaction rate uncertainties. In general the differences in the results from the three codes are due to their contrasting physics assumptions (e.g. prescriptions for mass loss and convection). The broadly similar evolution of the 25 M⊙ models gives us reassurance that different stellar evolution codes do produce similar results. For the 15 M⊙ and 20 M⊙ models, however, the different input physics and the interplay between the various convective zones lead to important differences in both the pre-supernova structure and the nucleosynthesis predicted by the three codes. For the KEPLER models the core masses are different and therefore an exact match could not be expected.

  6. A Common Set of Core Values - The Foundation for a More Effective Joint Force

    DTIC Science & Technology

    2015-05-18

    these codes stopped short of codifying a set of core values and instead focused on right and wrong behaviors. This adherence to sets of rules and...Armed Forces independently recognized the limitations of compliance-based rules and the criticality of establishing a strong foundation with core...institutional values vice core values? The knee-jerk reaction of the 1990s and a subsequent lack of a formal effort to institute a single set of core

  7. Effect of discharge instructions on readmission of hospitalised patients with heart failure: do all of the Joint Commission on Accreditation of Healthcare Organizations heart failure core measures reflect better care?

    PubMed Central

    VanSuch, Monica; Naessens, James M; Stroebel, Robert J; Huddleston, Jeanne M; Williams, Arthur R

    2006-01-01

    Background Most nationally standardised quality measures use widely accepted evidence-based processes as their foundation, but the discharge instruction component of the United States standards of the Joint Commission on Accreditation of Healthcare Organizations heart failure core measure appears to be based on expert opinion alone. Objective To determine whether documentation of compliance with any or all of the six required discharge instructions is correlated with readmission to hospital or mortality. Research design A retrospective study at a single tertiary care hospital was conducted on randomly sampled patients hospitalised for heart failure from July 2002 to September 2003. Participants Applying the Joint Commission on Accreditation of Healthcare Organizations criteria, 782 of 1121 patients were found eligible to receive discharge instructions. Eligibility was determined by age, principal diagnosis codes and discharge status codes. Measures The primary outcome measures were time to readmission for heart failure, time to readmission for any cause, and time to death. Results In all, 68% of patients received all instructions, whereas 6% received no instructions. Patients who received all instructions were significantly less likely to be readmitted for any cause (p = 0.003) and for heart failure (p = 0.035) than those who missed at least one type of instruction. Documentation of discharge instructions is correlated with reduced readmission rates. However, there was no association between documentation of discharge instructions and mortality (p = 0.521). Conclusions Including discharge instructions among other evidence-based heart failure core measures appears justified. PMID:17142589

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bess, John D.; Sterbentz, James W.; Snoj, Luka

    PROTEUS is a zero-power research reactor based on a cylindrical graphite annulus with a central cylindrical cavity. The graphite annulus remains basically the same for all experimental programs, but the contents of the central cavity are changed according to the type of reactor being investigated. Through most of its service history, PROTEUS has represented light-water reactors, but from 1992 to 1996 PROTEUS was configured as a pebble-bed reactor (PBR) critical facility and designated as HTR-PROTEUS. The nomenclature was used to indicate that this series consisted of High Temperature Reactor experiments performed in the PROTEUS assembly. During this period, seventeen critical configurations were assembled and various reactor physics experiments were conducted. These experiments included measurements of criticality, differential and integral control rod and safety rod worths, kinetics, reaction rates, water ingress effects, and small sample reactivity effects (Ref. 3). HTR-PROTEUS was constructed, and the experimental program was conducted, for the purpose of providing experimental benchmark data for assessment of reactor physics computer codes. Considerable effort was devoted to benchmark calculations as a part of the HTR-PROTEUS program. References 1 and 2 provide detailed data for use in constructing models for codes to be assessed. Reference 3 is a comprehensive summary of the HTR-PROTEUS experiments and the associated benchmark program. This document draws freely from these references. Only Cores 9 and 10 are evaluated in this benchmark report due to similarities in their construction. The other core configurations of the HTR-PROTEUS program are evaluated in their respective reports as outlined in Section 1.0. Cores 9 and 10 were evaluated and determined to be acceptable benchmark experiments.

  9. Coupled Monte Carlo neutronics and thermal hydraulics for power reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernnat, W.; Buck, M.; Mattes, M.

    The availability of high-performance computing resources increasingly enables the use of detailed Monte Carlo models even for full-core power reactors. The detailed structure of the core can be described by lattices, modeled by so-called repeated structures in Monte Carlo codes such as MCNP5 or MCNPX. For cores with largely uniform material compositions and fuel and moderator temperatures, constructing core models poses no problem. However, when the material composition and the temperatures vary strongly, a huge number of different material cells must be described, which complicates the input and in many cases exceeds code or memory limits. A second problem arises with the preparation of the corresponding temperature-dependent cross sections and thermal scattering laws. Only if these problems can be solved is a realistic coupling of Monte Carlo neutronics with an appropriate thermal-hydraulics model possible. In this paper a method for the treatment of detailed material and temperature distributions in MCNP5 is described, based on user-specified internal functions which assign distinct elements of the core cells to material specifications (e.g. water density) and temperatures from a thermal-hydraulics code. The core grid itself can be described with a uniform material specification. The temperature dependency of cross sections and thermal neutron scattering laws is taken into account by interpolation, requiring only a limited number of data sets generated for different temperatures. Applications will be shown for the stationary part of the Purdue PWR benchmark using ATHLET for thermal-hydraulics and for a generic modular high-temperature reactor using THERMIX for thermal-hydraulics. (authors)
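
    The interpolation step can be sketched compactly. A common recipe for temperature-dependent cross sections is linear interpolation in the square root of temperature between data sets tabulated at bracketing temperatures; the Python fragment below is a hedged illustration of that idea, not the authors' MCNP5 implementation, and the numbers are toy values.

        import numpy as np

        # Interpolate a cross section between two tabulated temperatures
        # T0 < T1, linear in sqrt(T) -- a common recipe for Doppler-broadened
        # data (illustrative only).
        def interp_xs(sigma_T0, sigma_T1, T0, T1, T):
            w = (np.sqrt(T) - np.sqrt(T0)) / (np.sqrt(T1) - np.sqrt(T0))
            return (1.0 - w) * sigma_T0 + w * sigma_T1

        sigma_600K = np.array([10.2, 8.7, 5.1])   # barns, toy values
        sigma_900K = np.array([10.9, 9.3, 5.6])
        print(interp_xs(sigma_600K, sigma_900K, 600.0, 900.0, 750.0))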

  10. A CFD-based aerodynamic design procedure for hypersonic wind-tunnel nozzles

    NASA Technical Reports Server (NTRS)

    Korte, John J.

    1993-01-01

    A new procedure which unifies the best of current classical design practices, computational fluid dynamics (CFD), and optimization procedures is demonstrated for designing the aerodynamic lines of hypersonic wind-tunnel nozzles. The new procedure can be used to design hypersonic wind tunnel nozzles with thick boundary layers, where the classical design procedure has been shown to break down. An efficient CFD code, which solves the parabolized Navier-Stokes (PNS) equations using an explicit upwind algorithm, is coupled to a least-squares (LS) optimization procedure. An LS problem is formulated to minimize the difference between the computed flow field and the objective function, consisting of the centerline Mach number distribution and the exit Mach number and flow angle profiles. The aerodynamic lines of the nozzle are defined using a cubic spline, the slopes of which are optimized with the design procedure. The advantages of the new procedure are that it allows full use of powerful CFD codes in the design process, solves an optimization problem to determine the new contour, can be used to design new nozzles or improve sections of existing nozzles, and automatically compensates the nozzle contour for viscous effects as part of the unified design procedure. The new procedure is demonstrated by designing two Mach 15 nozzles, a Mach 12 nozzle, and a Mach 18 helium nozzle. The flexibility of the procedure is demonstrated by designing the two Mach 15 nozzles using different constraints, the first nozzle for a fixed length and exit diameter and the second nozzle for a fixed length and throat diameter. The computed flow field for the Mach 15 least squares parabolized Navier-Stokes (LS/PNS) designed nozzle is compared with the classically designed nozzle and demonstrates a significant improvement in the flow expansion process and uniform core region.
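
    The least-squares formulation can be sketched in a few lines. In the Python fragment below the CFD evaluation is replaced by a toy forward model (this illustrates only the coupling idea, not the PNS solver): the design variables are the spline slopes of the contour and the residual is the mismatch with a hypothetical target centerline Mach distribution.

        import numpy as np
        from scipy.optimize import least_squares

        x = np.linspace(0.0, 1.0, 50)
        target_mach = 1.0 + 14.0 * x**0.8      # hypothetical Mach 15 target profile

        def centerline_mach(slopes):
            # Toy stand-in for the CFD solve: integrate the contour slopes to
            # a radius distribution and map its rise monotonically to Mach.
            s = np.interp(x, np.linspace(0.0, 1.0, slopes.size), slopes)
            radius = 1.0 + np.cumsum(s) * (x[1] - x[0])
            rise = radius - radius[0]
            return 1.0 + 14.0 * rise / (rise[-1] + 1e-12)

        def residuals(slopes):
            return centerline_mach(slopes) - target_mach

        fit = least_squares(residuals, x0=np.ones(6))   # optimize the spline slopes
        print("optimized slopes:", fit.x.round(3))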

  11. Evaluation of a Variable-Impedance Ceramic Matrix Composite Acoustic Liner

    NASA Technical Reports Server (NTRS)

    Jones, M. G.; Watson, W. R.; Nark, D. M.; Howerton, B. M.

    2014-01-01

    As a result of significant progress in the reduction of fan and jet noise, there is growing concern regarding core noise. One method for achieving core noise reduction is the use of acoustic liners. However, these liners must be constructed with materials suitable for high-temperature environments and should be designed for optimum absorption of the broadband core noise spectrum. This paper presents results of tests conducted in the NASA Langley Liner Technology Facility to evaluate a variable-impedance ceramic matrix composite acoustic liner that offers the potential to achieve each of these goals. One concern is the porosity of the ceramic matrix composite material, and whether this might affect the predictability of liners constructed with this material. Comparisons between two variable-depth liners, one constructed with ceramic matrix composite material and the other constructed via stereolithography, are used to demonstrate that this material porosity is not a concern. Also, some interesting observations are noted regarding the orientation of variable-depth liners. Finally, two propagation codes are validated via comparisons of predicted and measured acoustic pressure profiles for a variable-depth liner.

  12. Experimental demonstration of high spectral efficient 4 × 4 MIMO SCMA-OFDM/OQAM radio over multi-core fiber system.

    PubMed

    Liu, Chang; Deng, Lei; He, Jiale; Li, Di; Fu, Songnian; Tang, Ming; Cheng, Mengfan; Liu, Deming

    2017-07-24

    In this paper, a 4 × 4 multiple-input multiple-output (MIMO) radio over 7-core fiber system based on sparse code multiple access (SCMA) and OFDM/OQAM techniques is proposed. No cyclic prefix (CP) is required, owing to the proper design of the prototype filters in the OFDM/OQAM modulator, and the non-orthogonally overlaid codewords of SCMA help serve more users simultaneously with the same number of time and frequency resources as OFDMA, increasing spectral efficiency (SE) and system capacity. In our experiment, an 11.04 Gb/s 4 × 4 MIMO SCMA-OFDM/OQAM signal is successfully transmitted over 20 km of 7-core fiber and a 0.4 m air distance in both uplink and downlink. As a comparison, a 6.681 Gb/s traditional MIMO-OFDM signal with the same occupied bandwidth has been evaluated for both uplink and downlink transmission. The experimental results show that the SE can be increased by 65.2% with no bit error rate (BER) performance degradation compared with the traditional MIMO-OFDM technique.
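
    Since both signals occupy the same bandwidth, the quoted spectral-efficiency gain follows directly from the reported line rates; a two-line check using the numbers from the abstract:

        scma_rate, ofdm_rate = 11.04, 6.681          # Gb/s, from the abstract
        print(f"SE gain: {(scma_rate - ofdm_rate) / ofdm_rate:.1%}")   # -> 65.2%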

  13. Comparison of MELCOR and SCDAP/RELAP5 results for a low-pressure, short-term station blackout at Browns Ferry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carbajo, J.J.

    1995-12-31

    This study compares results obtained with two U.S. Nuclear Regulatory Commission (NRC)-sponsored codes, MELCOR version 1.8.3 (1.8PQ) and SCDAP/RELAP5 Mod3.1 release C, for the same transient - a low-pressure, short-term station blackout accident at the Browns Ferry nuclear plant. This work is part of MELCOR assessment activities to compare core damage progression calculations of MELCOR against SCDAP/RELAP5 since the two codes model core damage progression very differently.

  14. Simulations of Rayleigh Taylor Instabilities in the presence of a Strong Radiative shock

    NASA Astrophysics Data System (ADS)

    Trantham, Matthew; Kuranz, Carolyn; Shvarts, Dov; Drake, R. P.

    2016-10-01

    Recent supernova Rayleigh-Taylor experiments on the National Ignition Facility (NIF) are relevant to the evolution of core-collapse supernovae in which red supergiant stars explode. Here we report simulations of these experiments using the CRASH code. The CRASH code, developed at the University of Michigan to design and analyze high-energy-density experiments, is an Eulerian code with block-adaptive mesh refinement, multigroup diffusive radiation transport, and electron heat conduction. We explore two cases, one in which the shock is strongly radiative, and another with negligible radiation. The experiments in all cases produced structures at embedded interfaces via the Rayleigh-Taylor instability. The environment behind the weaker shock is cooler, and the instability grows classically. The strongly radiative shock produces a warm environment near the instability, ablates the interface, and alters the growth. We compare the simulated results with the experimental data and attempt to explain the differences. This work is funded by the NNSA-DS and SC-OFES Joint Program in High-Energy-Density Laboratory Plasmas, Grant Number DE-NA0002956.

  15. Implementation of metal-friendly EAM/FS-type semi-empirical potentials in HOOMD-blue: A GPU-accelerated molecular dynamics software

    DOE PAGES

    Yang, Lin; Zhang, Feng; Wang, Cai-Zhuang; ...

    2018-01-12

    We present an implementation of EAM and FS interatomic potentials, which are widely used in simulating metallic systems, in HOOMD-blue, a software package designed to perform classical molecular dynamics simulations using GPU acceleration. We first discuss the details of our implementation and then report extensive benchmark tests. We demonstrate that single-precision floating-point operations efficiently implemented on GPUs can produce sufficient accuracy when compared against double-precision codes, as demonstrated in test calculations of the glass-transition temperature of Cu64.5Zr35.5 and the pair correlation function of liquid Ni3Al. Our code scales well with the size of the simulated system on NVIDIA Tesla M40 and P100 GPUs. Compared with the popular software LAMMPS running on 32 cores of AMD Opteron 6220 processors, the GPU/CPU performance ratio can reach as high as 4.6. In conclusion, the source code can be accessed through the HOOMD-blue web page for free by any interested user.
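
    The energy model being implemented has a simple structure: in EAM/FS potentials the total energy is a pair term plus an embedding functional of the host electron density at each atom. The NumPy toy below uses illustrative closed-form functions, not the tabulated potentials that the HOOMD-blue implementation reads.

        import numpy as np

        # Toy EAM energy: E = sum_i F(rho_i) + (1/2) sum_{i!=j} phi(r_ij),
        # with rho_i = sum_{j!=i} f(r_ij). Functional forms are invented.
        def eam_energy(positions, f=lambda r: np.exp(-r),
                       phi=lambda r: 1.0 / r**6,
                       F=lambda rho: -np.sqrt(rho)):
            n = len(positions)
            e_pair, rho = 0.0, np.zeros(n)
            for i in range(n):
                for j in range(i + 1, n):
                    r = np.linalg.norm(positions[i] - positions[j])
                    e_pair += phi(r)          # pair contribution, counted once
                    rho[i] += f(r)            # density each atom sees
                    rho[j] += f(r)
            return e_pair + F(rho).sum()      # add embedding energy

        atoms = np.random.default_rng(0).uniform(0.0, 3.0, size=(8, 3))
        print("toy EAM energy:", eam_energy(atoms))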

  16. Implementation of metal-friendly EAM/FS-type semi-empirical potentials in HOOMD-blue: A GPU-accelerated molecular dynamics software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Lin; Zhang, Feng; Wang, Cai-Zhuang

    We present an implementation of EAM and FS interatomic potentials, which are widely used in simulating metallic systems, in HOOMD-blue, a software package designed to perform classical molecular dynamics simulations using GPU acceleration. We first discuss the details of our implementation and then report extensive benchmark tests. We demonstrate that single-precision floating-point operations efficiently implemented on GPUs can produce sufficient accuracy when compared against double-precision codes, as demonstrated in test calculations of the glass-transition temperature of Cu64.5Zr35.5 and the pair correlation function of liquid Ni3Al. Our code scales well with the size of the simulated system on NVIDIA Tesla M40 and P100 GPUs. Compared with the popular software LAMMPS running on 32 cores of AMD Opteron 6220 processors, the GPU/CPU performance ratio can reach as high as 4.6. In conclusion, the source code can be accessed through the HOOMD-blue web page for free by any interested user.

  17. Workshop on Models for Plasma Spectroscopy

    NASA Astrophysics Data System (ADS)

    1993-09-01

    A meeting was held at St John's College, Oxford, from Monday 27th to Thursday 30th of September 1993 to bring together a group of physicists working on computational modelling of plasma spectroscopy. The group came from the UK, France, Israel and the USA. The meeting was organized by me, Dr. Steven Rose of RAL and Dr. R.W. Lee of LLNL. It was funded by the U.S. European Office of Aerospace Research and Development and by LLNL. The meeting grew out of a wish by a group of core participants to make available to practicing plasma physicists (particularly those engaged in the design and analysis of experiments) sophisticated numerical models of plasma physics. Additional plasma physicists attended the meeting in Oxford by invitation. These were experimentalists and users of plasma physics simulation codes whose input to the meeting was to advise the core group as to what was really needed.

  18. Study on Utilization of Super Grade Plutonium in Molten Salt Reactor FUJI-U3 using CITATION Code

    NASA Astrophysics Data System (ADS)

    Wulandari, Cici; Waris, Abdul; Pramuditya, Syeilendra; Asril, Pramutadi AM; Novitrian

    2017-07-01

    The FUJI-U3 type of Molten Salt Reactor (MSR) has a unique design, consisting of three core regions in order to avoid the replacement of the graphite moderator. The MSR uses a fluoride nuclear fuel salt, the most popular chemical composition being LiF-BeF2-ThF4-233UF4. ThF4 and 233UF4 are the fertile and fissile materials, respectively, while LiF and BeF2 serve as both fuel solvent and heat transfer medium. In this study, super-grade plutonium is utilized as a substitute for 233U, since plutonium is easier to obtain than 233U as a main fuel. The neutronics calculations were performed using the PIJ and CITATION modules of the SRAC 2002 code with JENDL-3.2 as the nuclear data library.
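
    CITATION itself is a finite-difference diffusion code; the eigenvalue problem it solves can be illustrated with a one-group, one-dimensional power-iteration toy (arbitrary constants, not the FUJI-U3 model):

        import numpy as np

        # Solve -D u'' + Sa u = (1/k) nuSf u on a slab with zero-flux edges.
        N, h = 50, 0.5                    # mesh cells, cell width (cm)
        D, Sa, nuSf = 1.3, 0.030, 0.032   # toy one-group constants
        A = np.zeros((N, N))
        for i in range(N):                # finite-difference diffusion operator
            A[i, i] = 2.0 * D / h**2 + Sa
            if i > 0:
                A[i, i - 1] = -D / h**2
            if i < N - 1:
                A[i, i + 1] = -D / h**2
        flux, k = np.ones(N), 1.0
        for _ in range(200):              # power iteration on the fission source
            new = np.linalg.solve(A, nuSf * flux / k)
            k *= (nuSf * new).sum() / (nuSf * flux).sum()
            flux = new
        print("k_eff ~ %.5f" % k)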

  19. Strategies to Improve Efficiency and Specificity of Degenerate Primers in PCR.

    PubMed

    Campos, Maria Jorge; Quesada, Alberto

    2017-01-01

    PCR with degenerate primers can be used to identify the coding sequence of an unknown protein or to detect a genetic variant within a gene family. These primers, which are complex mixtures of slightly different oligonucleotide sequences, can be optimized to increase the efficiency and/or specificity of PCR in the amplification of a sequence of interest by introducing mismatches with the target sequence and balancing their position toward the primers' 5'- or 3'-ends. In this work, we explain in detail examples of rational primer design in two different applications, including the use of specific determinants at the 3'-end, to: (1) improve PCR efficiency with coding sequences for members of a protein family by full degeneration at a core box of conserved genetic information, with reduced degeneration at the 5'-end, and (2) optimize the specificity of allelic discrimination of closely related orthologues by 5'-end degenerate primers.
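
    For example, the degeneracy of a primer (the number of distinct oligonucleotides in the mixture) is the product of the IUPAC-code multiplicities at each position, which is what full degeneration at the core box inflates. A short Python helper (the primer shown is hypothetical):

        # Multiplicity of each IUPAC nucleotide code.
        IUPAC = {"A": 1, "C": 1, "G": 1, "T": 1, "R": 2, "Y": 2, "S": 2,
                 "W": 2, "K": 2, "M": 2, "B": 3, "D": 3, "H": 3, "V": 3, "N": 4}

        def degeneracy(primer):
            # Product of per-position multiplicities = number of sequences mixed.
            prod = 1
            for base in primer.upper():
                prod *= IUPAC[base]
            return prod

        print(degeneracy("GAYGGNMGNTAYCC"))   # prints 128 for this hypothetical primer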

  20. Decay heat uncertainty quantification of MYRRHA

    NASA Astrophysics Data System (ADS)

    Fiorito, Luca; Buss, Oliver; Hoefer, Axel; Stankovskiy, Alexey; Eynde, Gert Van den

    2017-09-01

    MYRRHA is a lead-bismuth cooled MOX-fueled accelerator driven system (ADS) currently in the design phase at SCK·CEN in Belgium. The correct evaluation of the decay heat and of its uncertainty level is very important for the safety demonstration of the reactor. In the first part of this work we assessed the decay heat released by the MYRRHA core using the ALEPH-2 burnup code. The second part of the study focused on the nuclear data uncertainty and covariance propagation to the MYRRHA decay heat. Radioactive decay data, independent fission yield and cross section uncertainties/covariances were propagated using two nuclear data sampling codes, namely NUDUNA and SANDY. According to the results, 238U cross sections and fission yield data are the largest contributors to the MYRRHA decay heat uncertainty. The calculated uncertainty values are deemed acceptable from the safety point of view as they are well within the available regulatory limits.
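
    The sampling approach used by codes such as NUDUNA and SANDY can be caricatured in a few lines: draw correlated perturbations of the nuclear data from their covariance matrix, re-evaluate the response for each sample, and take the spread as the uncertainty. The fragment below is a toy of that workflow, with an invented response model and covariance (not ALEPH-2 or MYRRHA data):

        import numpy as np

        rng = np.random.default_rng(1)
        mean = np.array([1.00, 1.00])            # normalized nuclear-data parameters
        cov = np.array([[0.0004, 0.0001],        # toy relative covariance matrix
                        [0.0001, 0.0009]])

        def decay_heat(params):
            # Hypothetical linear response of decay heat to the parameters.
            return 3.5 * params[0] + 1.2 * params[1]   # MW, invented

        samples = rng.multivariate_normal(mean, cov, size=5000)
        heats = np.array([decay_heat(p) for p in samples])
        print(f"decay heat = {heats.mean():.3f} +/- {heats.std():.3f} MW")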

  1. GPU accelerated implementation of NCI calculations using promolecular density.

    PubMed

    Rubez, Gaëtan; Etancelin, Jean-Matthieu; Vigouroux, Xavier; Krajecki, Michael; Boisson, Jean-Charles; Hénon, Eric

    2017-05-30

    The NCI approach is a modern tool to reveal noncovalent chemical interactions and is particularly attractive for describing ligand-protein binding. A custom implementation of NCI using promolecular density is presented. It is designed to leverage the computational power of NVIDIA graphics processing unit (GPU) accelerators through the CUDA programming model. The performance of three code versions is examined on a test set of 144 systems. NCI calculations are particularly well suited to the GPU architecture, which drastically reduces the computational time. On a single compute node, the dual-GPU version leads to a 39-fold improvement for the biggest instance compared to the optimal OpenMP parallel run (C code, icc compiler) with 16 CPU cores. Energy consumption measurements carried out on both CPU and GPU NCI tests show that the GPU approach provides substantial energy savings. © 2017 Wiley Periodicals, Inc.
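
    The descriptor underlying the NCI analysis is the reduced density gradient, which with a promolecular density is evaluated from summed atomic densities:

        s(\mathbf{r}) = \frac{|\nabla \rho(\mathbf{r})|}{2\,(3\pi^{2})^{1/3}\,\rho(\mathbf{r})^{4/3}}

    Regions of low s at low density mark noncovalent interactions, and because the evaluation is independent at every grid point, the workload maps naturally onto GPU threads, consistent with the speedups reported above.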

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steiner, J.L.; Lime, J.F.; Elson, J.S.

    One-dimensional TRAC transient calculations of the process inherent ultimate safety (PIUS) advanced reactor design were performed for a pump-trip SCRAM. The TRAC calculations showed that the reactor power response and shutdown were in qualitative agreement with the one-dimensional analyses presented in the PIUS Preliminary Safety Information Document (PSID) submitted by Asea Brown Boveri (ABB) to the US Nuclear Regulatory Commission for preapplication safety review. The PSID analyses were performed with the ABB-developed RIGEL code. The TRAC-calculated phenomena and trends were also similar to those calculated with another one-dimensional PIUS model, the Brookhaven National Laboratory-developed PIPA code. A TRAC pump-trip SCRAM transient has also been calculated with a TRAC model containing a multi-dimensional representation of the PIUS internal flow structures and core region. The results obtained using the TRAC fully one-dimensional PIUS model are compared to the RIGEL, PIPA, and TRAC multi-dimensional results.

  3. GsTL: the geostatistical template library in C++

    NASA Astrophysics Data System (ADS)

    Remy, Nicolas; Shtuka, Arben; Levy, Bruno; Caers, Jef

    2002-10-01

    The development of geostatistics has been mostly accomplished by application-oriented engineers in the past 20 years. The focus on concrete applications gave birth to many algorithms and computer programs designed to address different issues, such as estimating or simulating a variable while possibly accounting for secondary information such as seismic data, or integrating geological and geometrical data. At the core of any geostatistical data integration methodology is a well-designed algorithm. Yet, despite their obvious differences, all these algorithms share many commonalities on which to build a geostatistics programming library; otherwise the resulting library would be poorly reusable and difficult to expand. Building on this observation, we design a comprehensive, yet flexible and easily reusable, library of geostatistics algorithms in C++. The recent advent of the generic programming paradigm allows us to express the commonalities of the geostatistical algorithms elegantly in computer code. Generic programming, also referred to as "programming with concepts", provides a high level of abstraction without loss of efficiency. This last point is a major gain over object-oriented programming, which often trades efficiency for abstraction. It is not enough for a numerical library to be reusable; it also has to be fast. Because generic programming is "programming with concepts", the essential step in the library design is the careful identification and thorough definition of the concepts shared by most geostatistical algorithms. Building on these definitions, a generic and expandable code can be developed. To show the advantages of such a generic library, we use GsTL to build two sequential simulation programs working on two different types of grids (a surface with faults and an unstructured grid) without requiring any change to the GsTL code.
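
    The claim is easiest to see in code. The sketch below mimics the idea in Python duck typing rather than C++ templates, so it illustrates the concept without GsTL's compile-time efficiency: a single simulation skeleton runs unchanged on two grid types that merely satisfy the same "concept", here a nodes() method.

        import random

        class StructuredGrid:
            # Regular nx-by-ny grid: nodes are (i, j) index pairs.
            def __init__(self, nx, ny):
                self.nx, self.ny = nx, ny
            def nodes(self):
                return ((i, j) for i in range(self.nx) for j in range(self.ny))

        class UnstructuredGrid:
            # Arbitrary node collection, e.g. a faulted surface mesh.
            def __init__(self, node_ids):
                self.ids = node_ids
            def nodes(self):
                return iter(self.ids)

        def sequential_simulation(grid, draw):
            # Generic skeleton: visit every node and draw a simulated value;
            # only the nodes() "concept" is required of the grid type.
            return {node: draw(node) for node in grid.nodes()}

        gauss = lambda node: random.gauss(0.0, 1.0)
        print(len(sequential_simulation(StructuredGrid(4, 3), gauss)))            # 12
        print(len(sequential_simulation(UnstructuredGrid(["a", "b", "c"]), gauss)))  # 3

    In C++, the same dispatch is resolved at compile time, which is the "abstraction without loss of efficiency" the abstract emphasizes.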

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    S. Bays; W. Skerjanc; M. Pope

    A comparative analysis and comparison of results obtained between 2-D lattice calculations and 3-D full core nodal calculations, in the frame of MOX fuel design, was conducted. This study revealed a set of advantages and disadvantages, with respect to each method, which can be used to guide the level of accuracy desired for future fuel and fuel cycle calculations. For the purpose of isotopic generation for fuel cycle analyses, the approach of using a 2-D lattice code (i.e., fuel assembly in infinite lattice) gave reasonable predictions of uranium and plutonium isotope concentrations at the predicted 3-D core simulation batch averagemore » discharge burnup. However, it was found that the 2-D lattice calculation can under-predict the power of pins located along a shared edge between MOX and UO2 by as much as 20%. In this analysis, this error did not occur in the peak pin. However, this was a coincidence and does not rule out the possibility that the peak pin could occur in a lattice position with high calculation uncertainty in future un-optimized studies. Another important consideration in realistic fuel design is the prediction of the peak axial burnup and neutron fluence. The use of 3-D core simulation gave peak burnup conditions, at the pellet level, to be approximately 1.4 times greater than what can be predicted using back-of-the-envelope assumptions of average specific power and irradiation time.« less

  5. A Versatile Multichannel Digital Signal Processing Module for Microcalorimeter Arrays

    NASA Astrophysics Data System (ADS)

    Tan, H.; Collins, J. W.; Walby, M.; Hennig, W.; Warburton, W. K.; Grudberg, P.

    2012-06-01

    Different techniques have been developed for reading out microcalorimeter sensor arrays: individual outputs for small arrays, and time-division, frequency-division or code-division multiplexing for large arrays. Typically, raw waveform data are first read out from the arrays using one of these techniques and then stored on computer hard drives for offline optimum filtering, leading not only to requirements for large storage space but also to limitations on the achievable count rate. Thus, a read-out module capable of processing microcalorimeter signals in real time is highly desirable. We have developed multichannel digital signal processing electronics that are capable of on-board, real-time processing of microcalorimeter sensor signals from multiplexed or individual pixel arrays. It is a 3U PXI module consisting of a standardized core processor board and a set of daughter boards. Each daughter board is designed to interface a specific type of microcalorimeter array to the core processor. The combination of the standardized core plus this set of easily designed and modified daughter boards results in a versatile data acquisition module that not only can easily expand to future detector systems, but is also low cost. In this paper, we first present the core processor/daughter board architecture, and then report the performance of an 8-channel daughter board, which digitizes individual pixel outputs at 1 MSPS with 16-bit precision. We also introduce a time-division-multiplexing daughter board, which takes in time-division multiplexed signals through fiber-optic cables and then processes the digital signals to generate energy spectra in real time.
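
    The "optimum filtering" such a module moves on board is, for white noise, a matched filter: correlate each record with a normalized pulse template to estimate its amplitude. A toy NumPy version with an invented pulse shape and noise level:

        import numpy as np

        n = 1024
        t = np.arange(n)
        template = np.exp(-t / 100.0) - np.exp(-t / 10.0)   # toy two-exponential pulse
        template /= np.dot(template, template)              # normalize the filter

        rng = np.random.default_rng(2)
        pulse = 7.3 * (np.exp(-t / 100.0) - np.exp(-t / 10.0))   # true amplitude 7.3
        record = pulse + rng.normal(0.0, 0.05, n)                # add white noise
        print("estimated amplitude:", np.dot(template, record))  # ~7.3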

  6. Soft-core processor study for node-based architectures.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Houten, Jonathan Roger; Jarosz, Jason P.; Welch, Benjamin James

    2008-09-01

    Node-based architecture (NBA) designs for future satellite projects hold the promise of decreasing system development time and costs, size, weight, and power, and of positioning the laboratory to address other emerging mission opportunities quickly. Reconfigurable Field Programmable Gate Array (FPGA) based modules will comprise the core of several of the NBA nodes. Microprocessing capabilities will be necessary, with varying degrees of mission-specific performance requirements, on these nodes. To enable the flexibility of these reconfigurable nodes, it is advantageous to incorporate the microprocessor into the FPGA itself, either as a hard-core processor built into the FPGA or as a soft-core processor built out of FPGA elements. This document describes the evaluation of three reconfigurable FPGA-based processors for use in future NBA systems: two soft cores (MicroBlaze and non-fault-tolerant LEON) and one hard core (PowerPC 405). Two standard performance benchmark applications were developed for each processor. The first, Dhrystone, is a fixed-point operation metric. The second, Whetstone, is a floating-point operation metric. Several trials were run at varying code locations, loop counts, processor speeds, and cache configurations. FPGA resource utilization was recorded for each configuration. Cache configurations impacted the results greatly; for optimal processor efficiency it is necessary to enable caches on the processors. Processor caches carry a penalty; cache error mitigation is necessary when operating in a radiation environment.

  7. 2D and 3D Models of Convective Turbulence and Oscillations in Intermediate-Mass Main-Sequence Stars

    NASA Astrophysics Data System (ADS)

    Guzik, Joyce Ann; Morgan, Taylor H.; Nelson, Nicholas J.; Lovekin, Catherine; Kitiashvili, Irina N.; Mansour, Nagi N.; Kosovichev, Alexander

    2015-08-01

    We present multidimensional modeling of convection and oscillations in main-sequence stars somewhat more massive than the sun, using three separate approaches: 1) Applying the spherical 3D MHD ASH (Anelastic Spherical Harmonics) code to simulate the core convection and radiative zone. Our goal is to determine whether core convection can excite low-frequency gravity modes, and thereby explain the presence of low frequencies for some hybrid gamma Dor/delta Sct variables for which the envelope convection zone is too shallow for the convective blocking mechanism to drive g modes; 2) Using the 3D planar ‘StellarBox’ radiation hydrodynamics code to model the envelope convection zone and part of the radiative zone. Our goals are to examine the interaction of stellar pulsations with turbulent convection in the envelope, excitation of acoustic modes, and the role of convective overshooting; 3) Applying the ROTORC 2D stellar evolution and dynamics code to calculate evolution with a variety of initial rotation rates and extents of core convective overshooting. The nonradial adiabatic pulsation frequencies of these nonspherical models will be calculated using the 2D pulsation code NRO of Clement. We will present new insights into gamma Dor and delta Sct pulsations gained by multidimensional modeling compared to 1D model expectations.

  8. Hybrid parallel code acceleration methods in full-core reactor physics calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Courau, T.; Plagne, L.; Ponicot, A.

    2012-07-01

    When dealing with nuclear reactor calculation schemes, the need for three-dimensional (3D) transport-based reference solutions is essential for both validation and optimization purposes. Considering a benchmark problem, this work investigates the potential of discrete ordinates (Sn) transport methods applied to 3D pressurized water reactor (PWR) full-core calculations. First, the benchmark problem is described. It involves a pin-by-pin description of a 3D PWR first core, and uses an 8-group cross-section library prepared with the DRAGON cell code. Then, a convergence analysis is performed using the PENTRAN parallel Sn Cartesian code. It discusses the spatial refinement and the associated angular quadrature required to properly describe the problem physics. It also shows that initializing the Sn solution with the EDF SPN solver COCAGNE reduces the number of iterations required to converge by nearly a factor of 6. Using a best-estimate model, PENTRAN results are then compared to multigroup Monte Carlo results obtained with the MCNP5 code. Good consistency is observed between the two methods (Sn and Monte Carlo), with discrepancies of less than 25 pcm for k_eff, and less than 2.1% and 1.6% for the flux at the pin-cell level and for the pin-power distribution, respectively. (authors)
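
    The discrete ordinates method at the heart of this comparison can be illustrated in one dimension: sweep each quadrature direction through the mesh and iterate on the scattering source. The fragment below is a toy one-group S4 slab solver with diamond differencing (illustrative only; PENTRAN is a parallel 3-D code):

        import numpy as np

        N, h = 100, 0.1                        # cells, cell width (cm)
        sig_t, sig_s, q = 1.0, 0.5, 1.0        # total, scattering, fixed source
        mus = np.array([-0.8611363, -0.3399810, 0.3399810, 0.8611363])  # S4 ordinates
        wts = np.array([0.3478548, 0.6521452, 0.6521452, 0.3478548])    # weights (sum 2)
        phi = np.zeros(N)
        for _ in range(200):                   # source iteration
            src = 0.5 * (sig_s * phi + q)      # isotropic emission density
            phi_new = np.zeros(N)
            for mu, w in zip(mus, wts):        # sweep each ordinate
                psi_in = 0.0                   # vacuum boundary
                cells = range(N) if mu > 0 else range(N - 1, -1, -1)
                for i in cells:
                    a = 2.0 * abs(mu) / h
                    psi_c = (src[i] + a * psi_in) / (sig_t + a)
                    phi_new[i] += w * psi_c            # accumulate scalar flux
                    psi_in = 2.0 * psi_c - psi_in      # diamond-difference closure
            phi = phi_new
        print("midplane scalar flux:", phi[N // 2])    # ~2 for these toy data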

  9. 3D Field Modifications of Core Neutral Fueling In the EMC3-EIRENE Code

    NASA Astrophysics Data System (ADS)

    Waters, Ian; Frerichs, Heinke; Schmitz, Oliver; Ahn, Joon-Wook; Canal, Gustavo; Evans, Todd; Feng, Yuehe; Kaye, Stanley; Maingi, Rajesh; Soukhanovskii, Vsevolod

    2017-10-01

    The application of 3-D magnetic field perturbations to the edge plasmas of tokamaks has long been seen as a viable way to control damaging Edge Localized Modes (ELMs). These 3-D fields have also been correlated with a density drop in the core plasmas of tokamaks, known as 'pump-out'. While pump-out is typically explained as the result of enhanced outward transport, degraded fueling of the core may also play a role. By altering the temperature and density of the plasma edge, 3-D fields will impact the distribution function of high-energy neutral particles produced through ion-neutral energy exchange processes. Starved of the deeply penetrating neutral source, the core density will decrease. Numerical studies carried out with the EMC3-EIRENE code on National Spherical Torus eXperiment-Upgrade (NSTX-U) equilibria show that this change to core fueling by high-energy neutrals may be a significant contributor to the overall particle balance in the NSTX-U tokamak: deep-core (Ψ < 0.5) fueling from neutral ionization sources is decreased by 40-60% with RMPs. This work was funded by the US Department of Energy under Grant DE-SC0012315.

  10. RAVE—a Detector-independent vertex reconstruction toolkit

    NASA Astrophysics Data System (ADS)

    Waltenberger, Wolfgang; Mitaroff, Winfried; Moser, Fabian

    2007-10-01

    A detector-independent toolkit for vertex reconstruction (RAVE) is being developed, along with a standalone framework (VERTIGO) for testing, analyzing and debugging. The core algorithms represent the state of the art for geometric vertex finding and fitting by both linear (Kalman filter) and robust estimation methods. Main design goals are ease of use, flexibility for embedding into existing software frameworks, extensibility, and openness. The implementation is based on modern object-oriented techniques, is coded in C++ with interfaces for Java and Python, and follows an open-source approach. A beta release is available. VERTIGO = "vertex reconstruction toolkit and interface to generic objects".

  11. Do Librarians Have a Shared Set of Values? A Comparative Study of 36 Codes of Ethics Based on Gorman's "Enduring Values"

    ERIC Educational Resources Information Center

    Foster, Catherine; McMenemy, David

    2012-01-01

    Thirty-six ethical codes from national professional associations were studied, the aim to test whether librarians have global shared values or if political and cultural contexts have significantly influenced the codes' content. Gorman's eight core values of stewardship, service, intellectual freedom, rationalism, literacy and learning, equity of…

  12. Enabling Data Intensive Science through Service Oriented Science: Virtual Laboratories and Science Gateways

    NASA Astrophysics Data System (ADS)

    Lescinsky, D. T.; Wyborn, L. A.; Evans, B. J. K.; Allen, C.; Fraser, R.; Rankine, T.

    2014-12-01

    We present collaborative work on a generic, modular infrastructure for virtual laboratories (VLs, similar to science gateways) that combine online access to data, scientific code, and computing resources as services that support multiple data-intensive scientific computing needs across a wide range of science disciplines. We are leveraging access to 10+ PB of earth science data on Lustre filesystems at Australia's National Computational Infrastructure (NCI) Research Data Storage Infrastructure (RDSI) node, co-located with NCI's 1.2 PFlop Raijin supercomputer and a 3000-CPU-core research cloud. The development, maintenance and sustainability of VLs is best accomplished through modularisation and standardisation of interfaces between components. Our approach has been to break up tightly-coupled, specialised application packages into modules, with identified best techniques and algorithms repackaged either as data services or as scientific tools that are accessible across domains. The data services can be used to manipulate, visualise and transform multiple data types, whilst the scientific tools can be used in concert with multiple scientific codes. We are currently designing a scalable generic infrastructure that will handle scientific code as modularised services and thereby enable the rapid/easy deployment of new codes or versions of codes. The goal is to build open-source libraries/collections of scientific tools, scripts and modelling codes that can be combined in specially designed deployments. Additional services in development include: provenance, publication of results, monitoring, workflow tools, etc. The generic VL infrastructure will be hosted at NCI, but can access alternative computing infrastructures (i.e., public/private cloud, HPC). The Virtual Geophysics Laboratory (VGL) was developed as a pilot project to demonstrate the underlying technology. This base is now being redesigned and generalised to develop a Virtual Hazards Impact and Risk Laboratory (VHIRL); any enhancements and new capabilities will be incorporated into a generic VL infrastructure. At the same time, we are scoping seven new VLs and, in the process, identifying other common components to prioritise and focus development.

  13. Complete synthetic seismograms based on a spherical self-gravitating Earth model with an atmosphere-ocean-mantle-core structure

    NASA Astrophysics Data System (ADS)

    Wang, Rongjiang; Heimann, Sebastian; Zhang, Yong; Wang, Hansheng; Dahm, Torsten

    2017-04-01

    A hybrid method is proposed to calculate complete synthetic seismograms based on a spherically symmetric and self-gravitating Earth with a multi-layered structure of atmosphere, ocean, mantle, liquid core and solid core. For large wavelengths, a numerical scheme is used to solve the geodynamic boundary-value problem without any approximation on the deformation and gravity coupling. With the decreasing wavelength, the gravity effect on the deformation becomes negligible and the analytical propagator scheme can be used. Many useful approaches are used to overcome the numerical problems that may arise in both analytical and numerical schemes. Some of these approaches have been established in the seismological community and the others are developed for the first time. Based on the stable and efficient hybrid algorithm, an all-in-one code QSSP is implemented to cover the complete spectrum of seismological interests. The performance of the code is demonstrated by various tests including the curvature effect on teleseismic body and surface waves, the appearance of multiple reflected, teleseismic core phases, the gravity effect on long period surface waves and free oscillations, the simulation of near-field displacement seismograms with the static offset, the coupling of tsunami and infrasound waves, and free oscillations of the solid Earth, the atmosphere and the ocean. QSSP is open source software that can be used as a stand-alone FORTRAN code or may be applied in combination with a Python toolbox to calculate and handle Green's function databases for efficient coding of source inversion problems.

  14. Complete synthetic seismograms based on a spherical self-gravitating Earth model with an atmosphere-ocean-mantle-core structure

    NASA Astrophysics Data System (ADS)

    Wang, Rongjiang; Heimann, Sebastian; Zhang, Yong; Wang, Hansheng; Dahm, Torsten

    2017-09-01

    A hybrid method is proposed to calculate complete synthetic seismograms based on a spherically symmetric and self-gravitating Earth with a multilayered structure of atmosphere, ocean, mantle, liquid core and solid core. For large wavelengths, a numerical scheme is used to solve the geodynamic boundary-value problem without any approximation on the deformation and gravity coupling. With decreasing wavelength, the gravity effect on the deformation becomes negligible and the analytical propagator scheme can be used. Many useful approaches are used to overcome the numerical problems that may arise in both analytical and numerical schemes. Some of these approaches have been established in the seismological community and the others are developed for the first time. Based on the stable and efficient hybrid algorithm, an all-in-one code QSSP is implemented to cover the complete spectrum of seismological interests. The performance of the code is demonstrated by various tests including the curvature effect on teleseismic body and surface waves, the appearance of multiple reflected, teleseismic core phases, the gravity effect on long period surface waves and free oscillations, the simulation of near-field displacement seismograms with the static offset, the coupling of tsunami and infrasound waves, and free oscillations of the solid Earth, the atmosphere and the ocean. QSSP is open source software that can be used as a stand-alone FORTRAN code or may be applied in combination with a Python toolbox to calculate and handle Green's function databases for efficient coding of source inversion problems.

  15. PanCoreGen - Profiling, detecting, annotating protein-coding genes in microbial genomes.

    PubMed

    Paul, Sandip; Bhardwaj, Archana; Bag, Sumit K; Sokurenko, Evgeni V; Chattopadhyay, Sujay

    2015-12-01

    A large amount of genomic data, especially from multiple isolates of a single species, has opened new vistas for microbial genomics analysis. Analyzing the pan-genome (i.e. the sum of genetic repertoire) of microbial species is crucial in understanding the dynamics of molecular evolution, where virulence evolution is of major interest. Here we present PanCoreGen - a standalone application for pan- and core-genomic profiling of microbial protein-coding genes. PanCoreGen overcomes key limitations of the existing pan-genomic analysis tools, and develops an integrated annotation-structure for a species-specific pan-genomic profile. It provides important new features for annotating draft genomes/contigs and detecting unidentified genes in annotated genomes. It also generates user-defined group-specific datasets within the pan-genome. Interestingly, analyzing an example set of Salmonella genomes, we detect potential footprints of adaptive convergence of horizontally transferred genes in two human-restricted pathogenic serovars - Typhi and Paratyphi A. Overall, PanCoreGen represents a state-of-the-art tool for microbial phylogenomics and pathogenomics study.

  16. Issues and opportunities: beam simulations for heavy ion fusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friedman, A

    1999-07-15

    This preprint (UCRL-JC-134975) describes a code offering 3-D, axisymmetric, and "transverse slice" (steady flow) geometries, with a hierarchy of models for the "lattice" of focusing, bending, and accelerating elements. Interactive and script-driven code steering is afforded through an interpreter interface. The code runs with good parallel scaling on the T3E. Detailed simulations of machine segments and of complete small experiments, as well as simplified full-system runs, have been carried out, partially benchmarking the code. A magnetoinductive model, with module impedance and multi-beam effects, is under study. Experiments include an injector scalable to multi-beam arrays, a high-current beam transport and acceleration experiment, and a scaled final-focusing experiment. These "phase I" projects are laying the groundwork for the next major step in HIF development, the Integrated Research Experiment (IRE). Simulations aimed directly at the IRE must enable us to: design a facility with maximum power on target at minimal cost; set requirements for hardware tolerances, beam steering, etc.; and evaluate proposed chamber propagation modes. Finally, simulations must enable us to study all issues which arise in the context of a fusion driver, and must facilitate the assessment of driver options. In all of this, maximum advantage must be taken of emerging terascale computer architectures, requiring an aggressive code development effort. An organizing principle should be the pursuit of integrated and detailed source-to-target simulation, with methods for analysis of the beam dynamics in the various machine concepts, using moment-based methods for purposes of design, waveform synthesis, steering-algorithm synthesis, etc. Three classes of discrete-particle models should be coupled: (1) electrostatic/magnetoinductive PIC simulations should track the beams from the source through the final-focusing optics, passing details of the time-dependent distribution function to (2) electromagnetic or magnetoinductive PIC or hybrid PIC/fluid simulations in the fusion chamber (which would finally pass their particle trajectory information to the radiation-hydrodynamics codes used for target design); in parallel, (3) detailed PIC, delta-f, core/test-particle, and perhaps continuum Vlasov codes should be used to study individual sections of the driver and chamber very carefully; consistency may be assured by linking data from the PIC sequence, and knowledge gained may feed back into that sequence.

  17. caGrid 1.0: An Enterprise Grid Infrastructure for Biomedical Research

    PubMed Central

    Oster, Scott; Langella, Stephen; Hastings, Shannon; Ervin, David; Madduri, Ravi; Phillips, Joshua; Kurc, Tahsin; Siebenlist, Frank; Covitz, Peter; Shanbhag, Krishnakant; Foster, Ian; Saltz, Joel

    2008-01-01

    Objective To develop software infrastructure that will provide support for discovery, characterization, integrated access, and management of diverse and disparate collections of information sources, analysis methods, and applications in biomedical research. Design An enterprise Grid software infrastructure, called caGrid version 1.0 (caGrid 1.0), has been developed as the core Grid architecture of the NCI-sponsored cancer Biomedical Informatics Grid (caBIG™) program. It is designed to support a wide range of use cases in basic, translational, and clinical research, including 1) discovery, 2) integrated and large-scale data analysis, and 3) coordinated study. Measurements The caGrid is built as a Grid software infrastructure and leverages Grid computing technologies and the Web Services Resource Framework standards. It provides a set of core services, toolkits for the development and deployment of new community provided services, and application programming interfaces for building client applications. Results The caGrid 1.0 was released to the caBIG community in December 2006. It is built on open source components and caGrid source code is publicly and freely available under a liberal open source license. The core software, associated tools, and documentation can be downloaded from the following URL: https://cabig.nci.nih.gov/workspaces/Architecture/caGrid. Conclusions While caGrid 1.0 is designed to address use cases in cancer research, the requirements associated with discovery, analysis and integration of large scale data, and coordinated studies are common in other biomedical fields. In this respect, caGrid 1.0 is the realization of a framework that can benefit the entire biomedical community. PMID:18096909

  18. Design of batch audio/video conversion platform based on JavaEE

    NASA Astrophysics Data System (ADS)

    Cui, Yansong; Jiang, Lianpin

    2018-03-01

    With the rapid development of the digital publishing industry, audio/video publishing is characterized by a diversity of coding standards for audio and video files, massive data volumes, and other significant features. Faced with massive and diverse data, converting it quickly and efficiently to a unified coding format poses great difficulties for digital publishing organizations. In view of this demand and the present situation, this paper proposes a distributed online audio and video format conversion platform with a B/S structure, based on the Spring+SpringMVC+MyBatis development architecture combined with the open-source FFMPEG format conversion tool. Based on the Java language, the key technologies and strategies of the platform architecture are analyzed, and an efficient audio and video format conversion system is designed and developed, composed of a front display system, a core scheduling server and a conversion server. The test results show that, compared with an ordinary audio and video conversion scheme, the batch conversion platform can effectively improve the conversion efficiency of audio and video files and reduce the complexity of the work. Practice has proved that the key technology discussed in this paper can be applied in the field of large-batch file processing and has practical application value.
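
    The conversion server's core task reduces to driving FFMPEG over a batch of inputs. A minimal sketch of that loop is shown below in Python for brevity (the platform itself is implemented in Java; the directory names are hypothetical):

        import pathlib
        import subprocess

        # Normalize a batch of input files to one target coding standard
        # (H.264 video, AAC audio in an MP4 container) with FFmpeg.
        SRC, DST = pathlib.Path("incoming"), pathlib.Path("converted")
        DST.mkdir(exist_ok=True)
        for src in SRC.glob("*.*"):
            out = DST / (src.stem + ".mp4")
            subprocess.run(
                ["ffmpeg", "-y", "-i", str(src),
                 "-c:v", "libx264", "-c:a", "aac", str(out)],
                check=True)   # raise if a conversion fails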

  19. The Oceanographic Multipurpose Software Environment (OMUSE v1.0)

    NASA Astrophysics Data System (ADS)

    Pelupessy, Inti; van Werkhoven, Ben; van Elteren, Arjen; Viebahn, Jan; Candy, Adam; Portegies Zwart, Simon; Dijkstra, Henk

    2017-08-01

    In this paper we present the Oceanographic Multipurpose Software Environment (OMUSE). OMUSE aims to provide a homogeneous environment for existing or newly developed numerical ocean simulation codes, simplifying their use and deployment. In this way, numerical experiments that combine ocean models representing different physics or spanning different ranges of physical scales can be easily designed. Rapid development of simulation models is made possible through the creation of simple high-level scripts. The low-level core of the abstraction in OMUSE is designed to deploy these simulations efficiently on heterogeneous high-performance computing resources. Cross-verification of simulation models with different codes and numerical methods is facilitated by the unified interface that OMUSE provides. Reproducibility in numerical experiments is fostered by allowing complex numerical experiments to be expressed in portable scripts that conform to a common OMUSE interface. Here, we present the design of OMUSE as well as the modules and model components currently included, which range from a simple conceptual quasi-geostrophic solver to the global circulation model POP (Parallel Ocean Program). The uniform access to the codes' simulation state and the extensive automation of data transfer and conversion operations aids the implementation of model couplings. We discuss the types of couplings that can be implemented using OMUSE. We also present example applications that demonstrate the straightforward model initialization and the concurrent use of data analysis tools on a running model. We give examples of multiscale and multiphysics simulations by embedding a regional ocean model into a global ocean model and by coupling a surface wave propagation model with a coastal circulation model.
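
    The unifying idea, every community code exposing its parameters, its state, and an evolve-to-time call behind one common interface so that dissimilar models can be driven and coupled from a single script, can be illustrated with a minimal stand-in. The classes below are placeholders written for this summary; they are not the actual OMUSE community codes or API, though the evolve_model naming follows the convention of the AMUSE family of frameworks on which OMUSE is modeled.

      # Two toy "codes" with different internal time steps, driven uniformly.
      class ModelInterface:
          def __init__(self):
              self.model_time = 0.0          # seconds
          def evolve_model(self, t_end):
              while self.model_time < t_end:
                  self.step()
          def step(self):
              raise NotImplementedError

      class GlobalOcean(ModelInterface):
          def step(self):
              self.model_time += 3600.0      # coarse global step

      class RegionalOcean(ModelInterface):
          def step(self):
              self.model_time += 600.0       # finer regional step

      # Nested-model driver: advance both codes to a common sync time,
      # then exchange boundary data (omitted) before the next window.
      global_model, regional_model = GlobalOcean(), RegionalOcean()
      for hour in range(1, 25):
          t_sync = hour * 3600.0
          global_model.evolve_model(t_sync)
          regional_model.evolve_model(t_sync)
          # here: map global fields onto the regional model's boundaries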

  20. Optimizing Tensor Contraction Expressions for Hybrid CPU-GPU Execution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Wenjing; Krishnamoorthy, Sriram; Villa, Oreste

    2013-03-01

    Tensor contractions are generalized multidimensional matrix multiplication operations that widely occur in quantum chemistry. Efficient execution of tensor contractions on Graphics Processing Units (GPUs) requires several challenges to be addressed, including index permutation and small dimension sizes reducing thread block utilization. Moreover, to apply the same optimizations to various expressions, we need a code generation tool. In this paper, we present our approach to automatically generate CUDA code to execute tensor contractions on GPUs, including management of data movement between CPU and GPU. To evaluate our tool, GPU-enabled code is generated for the most expensive contractions in CCSD(T), a key coupled cluster method, and incorporated into NWChem, a popular computational chemistry suite. For this method, we demonstrate speedup over a factor of 8.4 using one GPU (instead of one core per node) and over 2.6 when utilizing the entire system using a hybrid CPU+GPU solution with 2 GPUs and 5 cores (instead of 7 cores per node). Finally, we analyze the implementation behavior on future GPU systems.
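
    To make the operation concrete, the sketch below (in Python/NumPy rather than CUDA, for brevity) shows a contraction of the kind that dominates coupled cluster methods and how, after an index permutation, it reduces to a single matrix multiplication, which is the form a GPU implementation ultimately executes. The labels and toy dimensions are illustrative assumptions, not taken from the paper.

      import numpy as np

      o, v = 8, 24                      # toy occupied/virtual dimensions
      t2 = np.random.rand(o, o, v, v)   # amplitude-like tensor t2[i,j,a,b]
      w  = np.random.rand(v, v, v, v)   # integral-like tensor w[a,b,c,d]

      # r[i,j,c,d] = sum_{a,b} t2[i,j,a,b] * w[a,b,c,d]
      r = np.einsum("ijab,abcd->ijcd", t2, w)

      # The same contraction as an explicit reshape to one matrix multiply:
      r2 = (t2.reshape(o * o, v * v) @ w.reshape(v * v, v * v)).reshape(o, o, v, v)
      assert np.allclose(r, r2)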

  1. Demonstration of Efficient Core Heating of Magnetized Fast Ignition in FIREX project

    NASA Astrophysics Data System (ADS)

    Johzaki, Tomoyuki

    2017-10-01

    Extensive theoretical and experimental research in the FIREX-I project over the past decade revealed that the large angular divergence of the laser-generated electron beam is one of the most critical problems inhibiting efficient core heating in electron-driven fast ignition. To solve this problem, beam guiding using an externally applied kilotesla-class magnetic field was proposed, and its feasibility has recently been demonstrated numerically. In 2016, integrated experiments at ILE, Osaka University demonstrated core heating efficiencies exceeding 5% and heated core temperatures of 1.7 keV. In these experiments, a kilotesla-class magnetic field was applied to a cone-attached Cu(II) oleate spherical solid target using a laser-driven capacitor-coil. The target was then imploded by the G-XII laser and heated by the PW-class LFEX laser. The heating efficiency was evaluated by measuring the number of Cu-Kα photons emitted. The heated core temperature was estimated from the X-ray intensity ratio of Cu Li-like and He-like emission lines. To understand the detailed dynamics of the core heating process, we carried out integrated simulations using the FI3 code system. Effects of magnetic fields on the implosion and electron beam transport, detailed core heating dynamics, and the resultant heating efficiency and core temperature will be presented. I will also discuss the prospect for an ignition-scale design of magnetized fast ignition using a solid-ball target. This work is partially supported by JSPS KAKENHI Grant Numbers JP16H02245, JP26400532, JP15K21767, JP26400532, JP16K05638, and is performed with the support and under the auspices of the NIFS Collaboration Research program (NIFS12KUGK057, NIFS15KUGK087).

  2. Hybrid Reduced Order Modeling Algorithms for Reactor Physics Calculations

    NASA Astrophysics Data System (ADS)

    Bang, Youngsuk

    Reduced order modeling (ROM) has been recognized as an indispensable approach when the engineering analysis requires many executions of high-fidelity simulation codes. Examples of such engineering analyses in nuclear reactor core calculations, representing the focus of this dissertation, include the functionalization of the homogenized few-group cross-sections in terms of the various core conditions, e.g. burn-up, fuel enrichment, temperature, etc. This is done via assembly calculations which are executed many times to generate the required functionalization for use in the downstream core calculations. Other examples are sensitivity analysis, used to determine important core attribute variations due to input parameter variations, and uncertainty quantification, employed to estimate core attribute uncertainties originating from input parameter uncertainties. ROM constructs a surrogate model with quantifiable accuracy which can replace the original code for subsequent engineering analysis calculations. This is achieved by reducing the effective dimensionality of the input parameter, state variable, or output response spaces by projection onto the so-called active subspaces. Confining the variations to the active subspace allows one to construct a ROM model of reduced complexity which can be solved more efficiently. This dissertation introduces a new algorithm that renders reduction with the reduction errors bounded according to a user-defined error tolerance, addressing the main challenge of existing ROM techniques. Bounding the error is the key to ensuring that the constructed ROM models are robust for all possible applications, and providing such error bounds represents one of the algorithmic contributions of this dissertation to the ROM state-of-the-art. Recognizing that ROM techniques have been developed to render reduction at different levels, e.g. the input parameter space, the state space, and the response space, this dissertation offers a set of novel hybrid ROM algorithms which can be readily integrated into existing methods and offer higher computational efficiency and defensible accuracy of the reduced models. For example, the snapshots ROM algorithm is hybridized with the range-finding algorithm to render reduction in the state space, e.g. the flux in reactor calculations. In another implementation, the perturbation theory used to calculate first-order derivatives of responses with respect to parameters is hybridized with a forward sensitivity analysis approach to render reduction in the parameter space. Reduction in the state and parameter spaces can be combined to render further reduction at the interface between different physics codes in a multi-physics model, with the accuracy quantified in a similar manner to the single-physics case. Although the proposed algorithms are generic in nature, we focus here on radiation transport models used in support of the design and analysis of nuclear reactor cores. In particular, we focus on replacing the traditional assembly calculations by ROM models to facilitate the generation of homogenized cross-sections for downstream core calculations. The implication is that assembly calculations could be done instantaneously, thereby precluding the need for the expensive evaluation of the few-group cross-sections for all possible core conditions. Given the generic nature of the algorithms, we make an effort to introduce the material in a general form to allow non-nuclear engineers to benefit from this work.
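
    As a concrete instance of these ideas, the sketch below implements a generic snapshot-based range finder in Python: random probes of a model generate snapshots, an SVD extracts an orthonormal basis for the active subspace, and the discarded singular values are tested against a user-defined tolerance, which is what bounds the reduction error. This is a textbook-style illustration, not the dissertation's algorithm, and the rank-5 toy model is an assumption.

      import numpy as np

      def range_finder(model, dim_in, n_samples, tol):
          """Return an orthonormal basis capturing the model's active subspace."""
          snapshots = np.column_stack(
              [model(np.random.randn(dim_in)) for _ in range(n_samples)])
          u, s, _ = np.linalg.svd(snapshots, full_matrices=False)
          tail = np.cumsum((s ** 2)[::-1])[::-1]    # tail[i] = sum_{j>=i} s_j^2
          k = int(np.searchsorted(tail < tol ** 2, True))
          return u[:, :max(k, 1)]

      # Toy "high-fidelity code": a linear map of effective rank 5.
      A = np.random.randn(200, 5) @ np.random.randn(5, 50)
      basis = range_finder(lambda x: A @ x, dim_in=50, n_samples=20, tol=1e-8)
      print(basis.shape)   # (200, 5): the low-dimensional subspace is recovered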

  3. Design calculations for NIF convergent ablator experiments.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Callahan, Debra; Leeper, Ramon Joe; Spears, B. K.

    2010-11-01

    Design calculations for NIF convergent ablator experiments will be described. The convergent ablator experiments measure the implosion trajectory, velocity, and ablation rate of an x-ray driven capsule and are an important component of the U.S. National Ignition Campaign at NIF. The design calculations are post-processed to provide simulations of the key diagnostics: (1) Dante measurements of hohlraum x-ray flux and spectrum, (2) streaked radiographs of the imploding ablator shell, (3) wedge range filter measurements of D-He3 proton output spectra, and (4) GXD measurements of the imploded core. The simulated diagnostics will be compared to the experimental measurements to provide an assessment of the accuracy of the design code predictions of hohlraum radiation temperature, capsule ablation rate, implosion velocity, shock flash areal density, and x-ray bang time. Post-shot versions of the design calculations are used to enhance the understanding of the experimental measurements and will assist in choosing parameters for subsequent shots and the path towards optimal ignition capsule tuning.

  4. Pebble bed modular reactor safeguards: developing new approaches and implementing safeguards by design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beyer, Brian David; Beddingfield, David H; Durst, Philip

    2010-01-01

    The design of the Pebble Bed Modular Reactor (PBMR) does not fit the IAEA safeguards approach under the categories of light water reactor (LWR), on-load refueled reactor (OLR, i.e., CANDU), or Other (prismatic HTGR), because the fuel is in bulk form rather than discrete items. Because the nuclear fuel is a collection of nuclear material inserted in tennis-ball-sized spheres containing structural and moderating material, and a PBMR core will contain a bulk load on the order of 500,000 spheres, it could be classified as a 'Bulk-Fuel Reactor.' Hence, the IAEA should develop unique safeguards criteria. In a multi-lab DOE study, it was found that an optimized blend of (i) developing techniques to verify the plutonium content in spent fuel pebbles, (ii) improving burn-up computer codes for PBMR spent fuel to provide better understanding of the core and spent fuel makeup, and (iii) utilizing bulk verification techniques for PBMR spent fuel storage bins should be combined with the historic IAEA and South African approaches of containment and surveillance to verify and maintain continuity of knowledge of PBMR fuel. For all of these techniques to work, the design of the reactor will need to accommodate safeguards and material accountancy measures to a far greater extent than has thus far been the case. The implementation of safeguards-by-design as the PBMR design progresses provides an approach to meet these safeguards and accountancy needs.

  5. Center for Technology for Advanced Scientific Component Software (TASCS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Govindaraju, Madhusudhan

    Advanced Scientific Computing Research, Computer Science, FY 2010 Report. Center for Technology for Advanced Scientific Component Software: Distributed CCA. State University of New York, Binghamton, NY 13902.

    Summary: The overall objective of Binghamton's involvement is to work on enhancements of the CCA environment, motivated by the applications and research initiatives discussed in the proposal. This year we re-focused our design and development efforts on proof-of-concept implementations that have the potential to significantly impact scientific components. We worked on developing parallel implementations of non-hydrostatic code and on a model coupling interface for biogeochemical computations coded in MATLAB. We also worked on the design and implementation of modules that will be required for the emerging MapReduce model to be effective for scientific applications. Finally, we focused on optimizing the processing of scientific datasets on multi-core processors.

    Research Details: We worked on the following research projects, which we are applying to CCA-based scientific applications.

    1. Non-hydrostatic hydrodynamics: Non-hydrostatic hydrodynamics are significantly more accurate at modeling internal waves that may be important in lake ecosystems. Non-hydrostatic codes, however, are significantly more computationally expensive, often prohibitively so. We have worked with Chin Wu at the University of Wisconsin to parallelize a non-hydrostatic code, obtaining a maximum speedup of about 26 times. Although this is significant progress, we hope to improve the performance further, such that it becomes a practical alternative to hydrostatic codes.

    2. Model coupling for water-based ecosystems: Answering pressing questions about water resources requires that physical models (hydrodynamics) be coupled with biological and chemical models. Most hydrodynamics codes are written in Fortran, however, while most ecologists work in MATLAB. This disconnect creates a great barrier. To address this, we are working on a model coupling interface that will allow biogeochemical computations written in MATLAB to couple with Fortran codes. This will greatly improve the productivity of ecosystem scientists.

    3. Low-overhead and elastic MapReduce implementation optimized for memory- and CPU-intensive applications: Since its inception, MapReduce has frequently been associated with Hadoop and large-scale datasets. Its deployment at Amazon in the cloud, and its applications at Yahoo! for large-scale distributed document indexing and database building, among other tasks, have thrust MapReduce to the forefront of the data processing application domain. The applicability of the paradigm, however, extends far beyond its use with data-intensive applications and disk-based systems; it can also be brought to bear on small but CPU-intensive distributed applications. MapReduce, however, carries its own burdens. Through experiments using Hadoop in the context of diverse applications, we uncovered latencies and delay conditions potentially inhibiting the expected performance of a parallel execution in CPU-intensive applications. Furthermore, as it currently stands, MapReduce is favored for data-centric applications and, as such, tends to be applied solely to disk-based applications. The paradigm falls short in bringing its novelty to diskless systems dedicated to in-memory applications and to compute-intensive programs that process much smaller data but require intensive computations. In this project, we focused both on the performance of processing large-scale hierarchical data in distributed scientific applications and on the processing of smaller but demanding input sizes primarily used in diskless, memory-resident I/O systems. We designed LEMO-MR [1], a low-overhead, elastic, on-demand fault-tolerant implementation of MapReduce that is configurable for in-memory applications and optimized for both on-disk and in-memory workloads (a schematic MapReduce skeleton follows this record). We conducted experiments to identify not only the necessary components of this model, but also the trade-offs and factors to be considered. We have initial results showing the efficacy of our implementation in terms of the potential speedup achievable for representative data sets used by cloud applications, and we have quantified the performance gains exhibited by our MapReduce implementation over Apache Hadoop in a compute-intensive environment.

    4. Cache performance optimization for processing XML- and HDF-based application data on multi-core processors: It is important to design and develop scientific middleware libraries that harness the opportunities presented by emerging multi-core processors. Implementations of scientific middleware and applications that do not adapt to the programming paradigm when executing on emerging processors can severely impact overall performance. In this project, we focused on the utilization of the L2 cache, a critical shared resource on chip multiprocessors (CMPs). The access pattern of the shared L2 cache, which depends on how the application schedules and assigns processing work to each thread, can either enhance or hurt the ability to hide memory latency on a multi-core processor. Therefore, while processing scientific datasets such as HDF5, it is essential to conduct fine-grained analysis of cache utilization to inform scheduling decisions in multi-threaded programming. Using the TAU toolkit for performance feedback from dual- and quad-core machines, we conducted performance analysis and made recommendations on how processing threads can be scheduled on multi-core nodes to enhance the performance of a class of scientific applications that requires processing of HDF5 data. In particular, we quantified the gains associated with our adaptations of the Cache-Affinity and Balanced-Set scheduling algorithms to improve L2 cache performance, and hence overall application execution time [2].

    References: 1. Zacharia Fadika, Madhusudhan Govindaraju, "MapReduce Implementation for Memory-Based and Processing Intensive Applications", 2nd IEEE International Conference on Cloud Computing Technology and Science, Indianapolis, USA, Nov 30 - Dec 3, 2010. 2. Rajdeep Bhowmik, Madhusudhan Govindaraju, "Cache Performance Optimization for Processing XML-based Application Data on Multi-core Processors", in Proceedings of the 10th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing, May 17-20, 2010, Melbourne, Victoria, Australia.

    Contact Information: Madhusudhan Govindaraju, Binghamton University, State University of New York (SUNY), mgovinda@cs.binghamton.edu, Phone: 607-777-4904
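
    As referenced above, the MapReduce pattern LEMO-MR targets can be shown with a minimal in-memory skeleton. The Python below is purely schematic, a word-count toy standing in for a CPU-intensive job, and is not the LEMO-MR implementation.

      from collections import defaultdict
      from multiprocessing import Pool

      def mapper(doc):
          # Emit (key, 1) pairs; stands in for a CPU-intensive map task.
          return [(word, 1) for word in doc.split()]

      def reducer(item):
          key, values = item
          return key, sum(values)

      def mapreduce(inputs, n_workers=4):
          with Pool(n_workers) as pool:
              groups = defaultdict(list)
              for pairs in pool.map(mapper, inputs):          # map phase
                  for key, value in pairs:                    # shuffle phase
                      groups[key].append(value)
              return dict(pool.map(reducer, groups.items()))  # reduce phase

      if __name__ == "__main__":
          print(mapreduce(["a b a", "b c", "a"]))   # {'a': 3, 'b': 2, 'c': 1}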

  6. BETA (Bitter Electromagnet Testing Apparatus) Design and Testing

    NASA Astrophysics Data System (ADS)

    Bates, Evan; Birmingham, William; Rivera, William; Romero-Talamas, Carlos

    2016-10-01

    BETA is a 1-T water-cooled Bitter-type magnet system that has been designed and constructed at the Dusty Plasma Laboratory of the University of Maryland, Baltimore County, to serve as a prototype of a scaled 10-T version. Currently the system is undergoing magnetic, thermal, and mechanical testing to ensure safe operating conditions and to validate analytical design optimizations. These magnets will function as experimental tools for future dusty plasma and collaborative experiments. An overview of the design methods used for building a custom-made Bitter magnet under user-defined experimental constraints is given. The three main design methods consist of minimizing the following: ohmic power, peak conductor temperatures, and stresses induced by Lorentz forces. We also discuss the design of BETA, which includes the magnet core, pressure vessel, cooling system, power storage bank, high-power switching system, diagnostics with safety cutoff feedback, and the data acquisition (DAQ)/magnet control MATLAB code. Furthermore, we present experimental data from diagnostics to validate our preliminary analytical design methodologies and finite element analysis calculations. BETA will contribute to the knowledge necessary to finalize the 10-T magnet design.

  7. Calculation of the Phenix end-of-life test 'Control Rod Withdrawal' with the ERANOS code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tiberi, V.

    2012-07-01

    The Institute for Radiological Protection and Nuclear Safety (IRSN) acts as technical support to the French public authorities. As such, IRSN is in charge of the safety assessment of operating and under-construction reactors, as well as future projects. In this framework, one current objective of IRSN is to evaluate the ability and accuracy of numerical tools to predict the consequences of accidents. Neutronics studies enter into the safety assessment from different points of view, including the core design and its protection system. They are necessary to evaluate the core behavior in case of accident, in order to assess the integrity of the first barrier and the absence of a prompt criticality risk. To reach this objective, one main physical quantity has to be evaluated accurately: the neutronic power distribution in the core during the whole reactor lifetime. The Phenix end-of-life tests, carried out in 2009, aimed at increasing the experience feedback on sodium-cooled fast reactors. These experiments were done in the framework of the development of the 4th generation of nuclear reactors. Ten tests were carried out: 6 on neutronic and fuel aspects, 2 on thermal hydraulics, and 2 on the emergency shutdown. Two of them were chosen for an international exercise on thermal hydraulics and neutronics in the frame of an IAEA Coordinated Research Project. Concerning neutronics, the Control Rod Withdrawal test is relevant for safety because it allows evaluating the capability of calculation tools to compute the radial power distribution in fast reactor core configurations in which the flux field is strongly deformed. IRSN participated in this benchmark with the ERANOS code, developed by CEA for fast reactor studies. This paper presents the results obtained in the framework of the benchmark activity. A relatively good agreement was found with the available measurements, considering the approximations made in the modeling. The work underlines the importance of burn-up calculations in order to have a fine mesh of core concentrations for the calculation of the power distribution. (authors)

  8. Fault Tolerance Middleware for a Multi-Core System

    NASA Technical Reports Server (NTRS)

    Some, Raphael R.; Springer, Paul L.; Zima, Hans P.; James, Mark; Wagner, David A.

    2012-01-01

    Fault Tolerance Middleware (FTM) provides a framework to run on a dedicated core of a multi-core system and handles detection of single-event upsets (SEUs), and the responses to those SEUs, occurring in an application running on multiple cores of the processor. This software was written expressly for a multi-core system and can support different kinds of fault strategies, such as introspection, algorithm-based fault tolerance (ABFT), and triple modular redundancy (TMR). It focuses on providing fault tolerance for the application code, and represents the first step in a plan to eventually include fault tolerance in message passing and the FTM itself. In the multi-core system, the FTM resides on a single, dedicated core, separate from the cores used by the application. This is done in order to isolate the FTM from application faults and to allow it to swap out any application core for a substitute. The structure of the FTM consists of an interface to a fault tolerant strategy module, a responder module, a fault manager module, an error factory, and an error mapper that determines the severity of the error. In the present reference implementation, the only fault tolerant strategy implemented is introspection. The introspection code waits for an application node to send an error notification to it. It then uses the error factory to create an error object, and at this time, a severity level is assigned to the error. The introspection code uses its built-in knowledge base to generate a recommended response to the error. Responses might include ignoring the error, logging it, rolling back the application to a previously saved checkpoint, swapping in a new node to replace a bad one, or restarting the application. The original error and recommended response are passed to the top-level fault manager module, which invokes the response. The responder module also notifies the introspection module of the generated response. This provides additional information to the introspection module that it can use in generating its next response. For example, if the responder triggers an application rollback and errors are still occurring, the introspection module may decide to recommend an application restart.
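
    The control flow described above can be condensed into a short sketch: a notification becomes an error object with a severity, the introspection module's knowledge base recommends a response, and the fault manager invokes it while feeding the outcome back for future recommendations. The Python below mirrors that structure; the class names, severity rule, and escalation policy are assumptions inferred from the description, not the actual FTM code.

      from enum import Enum

      class Severity(Enum):
          LOW, HIGH = 1, 2

      class ErrorFactory:
          def create(self, notification):
              severity = Severity.HIGH if "SEU" in notification else Severity.LOW
              return {"msg": notification, "severity": severity}

      class Introspection:
          def __init__(self):
              self.history = []                  # feedback from prior responses
          def recommend(self, error):
              if error["severity"] is Severity.HIGH:
                  # Escalate if a rollback was already tried for a prior error.
                  return "restart" if "rollback" in self.history else "rollback"
              return "log"
          def notify(self, response):
              self.history.append(response)

      class FaultManager:
          def __init__(self):
              self.factory, self.introspection = ErrorFactory(), Introspection()
          def handle(self, notification):
              error = self.factory.create(notification)
              response = self.introspection.recommend(error)
              print(f"{error['msg']} -> {response}")   # invoke the response here
              self.introspection.notify(response)

      ftm = FaultManager()
      for note in ["SEU in node 3", "SEU in node 3"]:
          ftm.handle(note)    # first a rollback, then a restart on recurrence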

  9. A COMPARISON OF EXPERIMENTS AND THREE-DIMENSIONAL ANALYSIS TECHNIQUES. PART II. UNPOISONED UNIFORM SLAB CORE WITH A PARTIALLY INSERTED HAFNIUM ROD AND A PARTIALLY INSERTED WATER GAP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roseberry, R.J.

    The experimental measurements and nuclear analysis of a uniformly loaded, unpoisoned slab core with a partially inserted hafnium rod and/or a partially inserted water gap are described. Comparisons of experimental data with calculated results of the UFO code and flux synthesis techniques are given. It is concluded that one of the flux synthesis techniques and the UFO code are able to predict flux distributions to within approximately -5% of experiment for most cases, with a maximum error of approximately -10% for a channel at the core-reflector boundary. The second synthesis technique failed to give comparable agreement with experiment even when various refinements were used, e.g., increasing the number of mesh points, iterating the flux synthesis technique, and spectrum-weighting the appropriate calculated fluxes through the use of the SWAKRAUM code. These results are comparable to those reported in Part I of this study. (auth)

  10. Optimizing the updated Goddard shortwave radiation Weather Research and Forecasting (WRF) scheme for Intel Many Integrated Core (MIC) architecture

    NASA Astrophysics Data System (ADS)

    Mielikainen, Jarno; Huang, Bormin; Huang, Allen H.-L.

    2015-05-01

    Intel Many Integrated Core (MIC) architecture ushers in a new era of supercomputing speed, performance, and compatibility. It allows developers to run code at trillions of calculations per second using a familiar programming model. In this paper, we present our results of optimizing the updated Goddard shortwave radiation Weather Research and Forecasting (WRF) scheme on Intel MIC hardware. The Intel Xeon Phi coprocessor is the first product based on the Intel MIC architecture, and it consists of up to 61 cores connected by a high-performance on-die bidirectional interconnect. The coprocessor supports all important Intel development tools, so the development environment is a familiar one to a vast number of CPU developers. However, getting maximum performance out of the Xeon Phi requires some novel optimization techniques, which are discussed in this paper. The results show that the optimizations improved the performance of the original code on a Xeon Phi 7120P by a factor of 1.3x.

  11. PLUMED 2: New feathers for an old bird

    NASA Astrophysics Data System (ADS)

    Tribello, Gareth A.; Bonomi, Massimiliano; Branduardi, Davide; Camilloni, Carlo; Bussi, Giovanni

    2014-02-01

    Enhancing sampling and analyzing simulations are central issues in molecular simulation. Recently, we introduced PLUMED, an open-source plug-in that provides some of the most popular molecular dynamics (MD) codes with implementations of a variety of different enhanced sampling algorithms and collective variables (CVs). The rapid changes in this field, in particular new directions in enhanced sampling and dimensionality reduction together with new hardware, require a code that is more flexible and more efficient. We therefore present PLUMED 2 here—a complete rewrite of the code in an object-oriented programming language (C++). This new version introduces greater flexibility and greater modularity, which both extends its core capabilities and makes it far easier to add new methods and CVs. It also has a simpler interface with the MD engines and provides a single software library containing both tools and core facilities. Ultimately, the new code better serves the ever-growing community of users and contributors in coping with the new challenges arising in the field.

  12. Synonymous Mutations in the Core Gene Are Linked to Unusual Serological Profile in Hepatitis C Virus Infection

    PubMed Central

    Budkowska, Agata; Kakkanas, Athanassios; Nerrienet, Eric; Kalinina, Olga; Maillard, Patrick; Horm, Srey Viseth; Dalagiorgou, Geena; Vassilaki, Niki; Georgopoulou, Urania; Martinot, Michelle; Sall, Amadou Alpha; Mavromara, Penelope

    2011-01-01

    The biological role of the protein encoded by the alternative open reading frame (core+1/ARF) of the Hepatitis C virus (HCV) genome remains elusive, as does the significance of the production of corresponding antibodies in HCV infection. We investigated the prevalence of anti-core and anti-core+1/ARFP antibodies in HCV-positive blood donors from Cambodia, using peptide and recombinant protein-based ELISAs. We detected unusual serological profiles in 3 out of 58 HCV-positive plasma samples of genotype 1a. These patients were negative for anti-core antibodies by commercial and peptide-based assays using C-terminal fragments of core but reacted by Western Blot with full-length core protein. All three patients had high levels of anti-core+1/ARFP antibodies. Cloning of the cDNA that corresponds to the core-coding region from these sera resulted in the expression of both core and core+1/ARFP in mammalian cells. The core protein exhibited high amino-acid homology with a consensus HCV1a sequence. However, 10 identical synonymous mutations were found, and 7 were located in the aa(99–124) region of core. All mutations concerned the third base of a codon, and 5/10 represented a T>C mutation. Prediction analyses of the RNA secondary structure revealed conformational changes within the stem-loop region that contains the core+1/ARFP internal AUG initiator at position 85/87. Using the luciferase tagging approach, we showed that core+1/ARFP expression is more efficient from such a sequence than from the prototype HCV1a RNA. We provide additional evidence of the existence of core+1/ARFP in vivo and new data concerning expression of HCV core protein. We show that HCV patients who do not produce normal anti-core antibodies have unusually high levels of anti-core+1/ARFP and harbour several identical synonymous mutations in the core and core+1/ARFP coding region that result in major changes in predicted RNA structure. Such HCV variants may favour core+1/ARFP production during HCV infection. PMID:21283512

  13. Experimental and numerical investigations of high temperature gas heat transfer and flow in a VHTR reactor core

    NASA Astrophysics Data System (ADS)

    Valentin Rodriguez, Francisco Ivan

    High-pressure/high-temperature forced and natural convection experiments have been conducted in support of the development of a Very High Temperature Reactor (VHTR) with a prismatic core. VHTRs are designed with the capability to withstand accidents by preventing nuclear fuel meltdown, using passive safety mechanisms, a product of advanced reactor designs that includes the implementation of inert gases like helium as coolants. The present experiments utilize a high-temperature/high-pressure gas flow test facility constructed for forced and natural circulation experiments. This work examines fundamental aspects of high temperature gas heat transfer applied to VHTR operational and accident scenarios. Two different types of experiments, forced convection and natural circulation, were conducted under high pressure and high temperature conditions using three different gases: air, nitrogen, and helium. The experimental data were analyzed to obtain heat transfer coefficient data in the form of Nusselt numbers as a function of Reynolds, Grashof, and Prandtl numbers. This work also examines in detail the flow laminarization phenomenon (turbulent flows displaying much lower heat transfer parameters than expected, due to intense heating conditions) for a full range of Reynolds numbers including laminar, transition, and turbulent flows under forced convection, and its impact on heat transfer. This phenomenon could give rise to deterioration in convection heat transfer and the occurrence of hot spots in the reactor core. The forced and mixed convection data analyzed indicated the occurrence of the flow laminarization phenomenon due to the buoyancy and acceleration effects induced by strong heating. Turbulence parameters were also measured using a hot-wire anemometer in forced convection experiments to confirm the existence of the flow laminarization phenomenon. In particular, these results demonstrated the influence of pressure on the delayed transition between laminar and turbulent flow. The heat-dissipating capabilities of helium flow due to natural circulation in the system, at both high and low pressure, were also examined. These experimental results are useful for the development and validation of VHTR design and safety analysis codes. Numerical simulations were performed using the COMSOL Multiphysics code, showing less than 5% error between the measured and simulated graphite temperatures in both the heated and cooled channels. Finally, new correlations have been proposed describing the thermal-hydraulic phenomena in buoyancy-driven flows in both heated and cooled channels.
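
    For context on how such data are reduced, a standard baseline for fully developed turbulent forced convection is the Dittus-Boelter correlation, Nu = 0.023 Re^0.8 Pr^0.4; flow laminarization manifests as measured Nusselt numbers falling well below such predictions. The snippet below evaluates this classic correlation (not the new correlations proposed in this work) at illustrative helium-like conditions.

      def nusselt_dittus_boelter(re: float, pr: float) -> float:
          """Dittus-Boelter correlation (heating case); valid roughly for
          Re > 1e4 and 0.7 < Pr < 160 in fully developed turbulent flow."""
          return 0.023 * re ** 0.8 * pr ** 0.4

      re, pr = 2.0e4, 0.66          # illustrative helium-like conditions
      print(f"Nu = {nusselt_dittus_boelter(re, pr):.1f}")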

  14. Cultural differences in the experience of everyday symptoms: a comparative study of South Asian and European American women.

    PubMed

    Karasz, Alison; Dempsey, Kara; Fallek, Ronit

    2007-12-01

    This paper describes a study of medically ambiguous symptoms in two contrasting cultural groups. The study combined a qualitative, meaning-centered approach with a structured coding system and comparative design. Thirty-six South Asian immigrants and thirty-seven European Americans participated in a semistructured health history interview designed to elicit conceptual models of medically unexplained illness. The groups reported similar symptoms, but the organization of illness episodes and explanatory models associated with these episodes differed sharply. A variety of cultural variables and processes is proposed to account for observed differences, including somatization, the role of local illness categories, and the divergent core conflicts and values associated with gender roles. It is argued that the comparative design of the study provided insights that could not have been achieved through the study of a single group.

  15. Computational Analysis of a Low-Boom Supersonic Inlet

    NASA Technical Reports Server (NTRS)

    Chima, Rodrick V.

    2011-01-01

    A low-boom supersonic inlet was designed for use on a conceptual small supersonic aircraft that would cruise with an over-wing Mach number of 1.7. The inlet was designed to minimize external overpressures, and used a novel bypass duct to divert the highest shock losses around the engine. The Wind-US CFD code was used to predict the effects of capture ratio, struts, bypass design, and angles of attack on inlet performance. The inlet was tested in the 8-ft by 6-ft Supersonic Wind Tunnel at NASA Glenn Research Center. Test results showed that the inlet had excellent performance, with capture ratios near one, a peak core total pressure recovery of 96 percent, and a stable operating range much larger than that of an engine. Predictions generally compared very well with the experimental data, and were used to help interpret some of the experimental results.

  16. A human performance evaluation of graphic symbol-design features.

    PubMed

    Samet, M G; Geiselman, R E; Landee, B M

    1982-06-01

    16 subjects learned each of two tactical display symbol sets (conventional symbols and iconic symbols) in turn and were then shown a series of graphic displays containing various symbol configurations. For each display, the subject was asked questions corresponding to different behavioral processes relating to symbol use (identification, search, comparison, pattern recognition). The results indicated that: (a) conventional symbols yielded faster pattern-recognition performance than iconic symbols, and iconic symbols did not yield faster identification than conventional symbols, and (b) the portrayal of additional feature information (through the use of perimeter density or vector projection coding) slowed processing of the core symbol information in four tasks, but certain symbol-design features created less perceptual interference and had greater correspondence with the portrayal of specific tactical concepts than others. The results were discussed in terms of the complexities involved in the selection of symbol design features for use in graphic tactical displays.

  17. Recent update of the RPLUS2D/3D codes

    NASA Technical Reports Server (NTRS)

    Tsai, Y.-L. Peter

    1991-01-01

    The development of the RPLUS2D/3D codes is summarized. These codes utilize LU algorithms to solve chemical non-equilibrium flows in a body-fitted coordinate system. The motivation behind the development of these codes is the need to numerically predict chemical non-equilibrium flows for the National AeroSpace Plane Program. Recent improvements include a vectorization method, blocking algorithms for geometric flexibility, out-of-core storage for large-size problems, and an LU-SW/UP combination for CPU-time efficiency and solution quality.

  18. CoCoNuT: General relativistic hydrodynamics code with dynamical space-time evolution

    NASA Astrophysics Data System (ADS)

    Dimmelmeier, Harald; Novak, Jérôme; Cerdá-Durán, Pablo

    2012-02-01

    CoCoNuT is a general relativistic hydrodynamics code with dynamical space-time evolution. The main aim of this numerical code is the study of several astrophysical scenarios in which general relativity can play an important role, namely the collapse of rapidly rotating stellar cores and the evolution of isolated neutron stars. The code has two flavors: CoCoA, the axisymmetric (2D) magnetized version, and CoCoNuT, the 3D non-magnetized version.

  19. Three-Dimensional Hydrodynamic Simulations of OMEGA Implosions

    NASA Astrophysics Data System (ADS)

    Igumenshchev, I. V.

    2016-10-01

    The effects of large-scale (Legendre modes below 30) asymmetries in OMEGA direct-drive implosions caused by laser illumination nonuniformities (beam-power imbalance and beam mispointing and mistiming) and target offset, mount, and layer nonuniformities were investigated using three-dimensional (3-D) hydrodynamic simulations. Simulations indicate that the performance degradation in cryogenic implosions is caused mainly by target offsets (10 to 20 μm), beam-power imbalance (σrms 10%), and initial target asymmetry (5% ρR variation), which distort implosion cores, resulting in reduced hot-spot confinement and increased residual kinetic energy of the stagnated target. The ion temperatures inferred from the widths of simulated neutron spectra are influenced by bulk fuel motion in the distorted hot spot and can show up to a 2-keV apparent temperature increase. Similar temperature variations along different lines of sight are observed. Simulated x-ray images of implosion cores in the 4- to 8-keV energy range show good agreement with experiments. Demonstrating hydrodynamic equivalence to ignition designs on OMEGA requires reducing large-scale target and laser-imposed nonuniformities, minimizing target offset, and employing highly efficient mid-adiabat (α = 4) implosion designs that mitigate cross-beam energy transfer (CBET) and suppress short-wavelength Rayleigh-Taylor growth. These simulations use a new low-noise 3-D Eulerian hydrodynamic code, ASTER. Existing 3-D hydrodynamic codes for direct-drive implosions currently lack CBET and noise-free ray-trace laser deposition algorithms. ASTER overcomes these limitations using a simplified 3-D laser-deposition model, which includes CBET and is capable of simulating the effects of beam-power imbalance, beam mispointing, mistiming, and target offset. This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0001944.

  20. Advanced Aerospace Materials by Design

    NASA Technical Reports Server (NTRS)

    Srivastava, Deepak; Djomehri, Jahed; Wei, Chen-Yu

    2004-01-01

    The advances in the emerging field of nanophase thermal and structural composite materials; materials with embedded sensors and actuators for morphing structures; light-weight composite materials for energy and power storage; and large-surface-area materials for in-situ resource generation and waste recycling are expected to revolutionize the capabilities of virtually every system comprising future robotic and human Moon and Mars exploration missions. A high-performance multiscale simulation platform, including the computational capabilities and resources of Columbia, the new supercomputer, is being developed to discover, validate, and prototype the next generation of such advanced materials. This exhibit will describe the porting and scaling of multiscale physics-based core computer simulation codes for discovering and designing carbon nanotube-polymer composite materials for light-weight load-bearing structural and thermal protection applications.

  1. Coupled thermo-chemical boundary conditions in double-diffusive geodynamo models at arbitrary Lewis numbers.

    NASA Astrophysics Data System (ADS)

    Bouffard, M.

    2016-12-01

    Convection in the Earth's outer core is driven by the combination of two buoyancy sources: a thermal source, directly related to the Earth's secular cooling, the release of latent heat, and possibly the heat generated by radioactive decay, and a compositional source, due to the crystallization of the growing inner core, which releases light elements into the liquid outer core. Because the dynamics of melting/crystallization depend on the heat flux distribution, the thermochemical boundary conditions are coupled at the inner core boundary, which may affect the dynamo in various ways, particularly if heterogeneous conditions are imposed at one boundary. In addition, the thermal and compositional molecular diffusivities differ by three orders of magnitude. This can produce significant differences in the convective dynamics compared to pure thermal or compositional convection, due to the potential occurrence of double-diffusive phenomena. Traditionally, temperature and composition have been combined into one single variable called codensity, under the assumption that turbulence mixes all physical properties at an "eddy-diffusion" rate. This description does not allow for a proper treatment of the thermochemical coupling and is certainly incorrect within stratified layers, in which double-diffusive phenomena can be expected. For a more general and rigorous approach, two distinct transport equations should therefore be solved for temperature and composition. However, the weak compositional diffusivity is technically difficult to handle in current geodynamo codes and requires the use of a semi-Lagrangian description to minimize numerical diffusion. We implemented a "particle-in-cell" method into a geodynamo code to properly describe the compositional field. The code is suitable for highly parallel computing architectures and was successfully tested on two benchmarks. Following the work of Aubert et al. (2008), we use this new tool to perform dynamo simulations including thermochemical coupling at the inner core boundary, as well as an exploration of the infinite-Lewis-number limit, to study the effect of a heterogeneous core-mantle boundary heat flow on the inner core growth.
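
    The advantage of a particle-based treatment of the nearly non-diffusive compositional field can be seen in a minimal sketch: particles carry composition and are advected with the flow, and the grid field is recovered by binning, so no grid-scale numerical diffusion acts on the field itself. The 1-D periodic example below is purely illustrative and is unrelated to the actual geodynamo implementation.

      import numpy as np

      n_cells, n_particles, dt, u = 64, 64 * 16, 0.01, 1.0   # uniform velocity u
      x = np.random.rand(n_particles)          # particle positions in [0, 1)
      c = (x < 0.5).astype(float)              # sharp compositional front

      for _ in range(50):
          x = (x + u * dt) % 1.0               # advect particles (periodic box)

      # Cell-averaged composition on the grid: the front remains sharp.
      idx = (x * n_cells).astype(int)
      counts = np.bincount(idx, minlength=n_cells)
      grid_c = np.bincount(idx, weights=c, minlength=n_cells) / np.maximum(counts, 1)
      print(grid_c.round(2))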

  2. GeauxDock: Accelerating Structure-Based Virtual Screening with Heterogeneous Computing

    PubMed Central

    Fang, Ye; Ding, Yun; Feinstein, Wei P.; Koppelman, David M.; Moreno, Juana; Jarrell, Mark; Ramanujam, J.; Brylinski, Michal

    2016-01-01

    Computational modeling of drug binding to proteins is an integral component of direct drug design. Particularly, structure-based virtual screening is often used to perform large-scale modeling of putative associations between small organic molecules and their pharmacologically relevant protein targets. Because of a large number of drug candidates to be evaluated, an accurate and fast docking engine is a critical element of virtual screening. Consequently, highly optimized docking codes are of paramount importance for the effectiveness of virtual screening methods. In this communication, we describe the implementation, tuning and performance characteristics of GeauxDock, a recently developed molecular docking program. GeauxDock is built upon the Monte Carlo algorithm and features a novel scoring function combining physics-based energy terms with statistical and knowledge-based potentials. Developed specifically for heterogeneous computing platforms, the current version of GeauxDock can be deployed on modern, multi-core Central Processing Units (CPUs) as well as massively parallel accelerators, Intel Xeon Phi and NVIDIA Graphics Processing Unit (GPU). First, we carried out a thorough performance tuning of the high-level framework and the docking kernel to produce a fast serial code, which was then ported to shared-memory multi-core CPUs yielding a near-ideal scaling. Further, using Xeon Phi gives 1.9× performance improvement over a dual 10-core Xeon CPU, whereas the best GPU accelerator, GeForce GTX 980, achieves a speedup as high as 3.5×. On that account, GeauxDock can take advantage of modern heterogeneous architectures to considerably accelerate structure-based virtual screening applications. GeauxDock is open-sourced and publicly available at www.brylinski.org/geauxdock and https://figshare.com/articles/geauxdock_tar_gz/3205249. PMID:27420300
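
    At the heart of such a docking engine is a Metropolis Monte Carlo loop: perturb the ligand pose, rescore it, and accept or reject the move. The sketch below shows that generic loop with a toy quadratic scoring function; it is not GeauxDock's kernel, and the energy terms, step size, and temperature are placeholder assumptions.

      import math, random

      def score(pose):
          # Stand-in for the physics- and knowledge-based energy terms.
          return sum((p - 1.0) ** 2 for p in pose)

      def metropolis_step(pose, energy, temperature=0.05, step=0.1):
          trial = [p + random.uniform(-step, step) for p in pose]
          e_trial = score(trial)
          # Accept downhill moves always, uphill moves with Boltzmann probability.
          if e_trial < energy or random.random() < math.exp((energy - e_trial) / temperature):
              return trial, e_trial
          return pose, energy

      pose = [0.0] * 6                       # six pose degrees of freedom
      energy = score(pose)
      for _ in range(5000):
          pose, energy = metropolis_step(pose, energy)
      print(round(energy, 2))                # settles near the toy minimum at 0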

  3. OEDGE modeling for the planned tungsten ring experiment on DIII-D

    DOE PAGES

    Elder, J. David; Stangeby, Peter C.; Abrams, Tyler W.; ...

    2017-04-19

    The OEDGE code is used to model tungsten erosion and transport for DIII-D experiments with toroidal rings of high-Z metal tiles. Such modeling is needed for both experimental and diagnostic design, to provide estimates of the expected core and edge tungsten density and to understand the various factors contributing to the uncertainties in these calculations. OEDGE simulations are performed using the planned experimental magnetic geometries and plasma conditions typical of both L-mode and inter-ELM H-mode discharges in DIII-D. OEDGE plasma reconstruction based on specific representative discharges for similar geometries is used to determine the plasma conditions applied to tungsten plasma impurity simulations. We developed a new model for tungsten erosion in OEDGE which imports charge-state-resolved carbon impurity fluxes and impact energies from a separate OEDGE run that models the carbon production, transport, and deposition for the same plasma conditions as the tungsten simulations. These values are then used to calculate the gross tungsten physical sputtering due to carbon plasma impurities, which is added to any sputtering by deuterium ions; tungsten self-sputtering is also included. The code results are found to depend on the following factors: divertor geometry and closure, the choice of cross-field anomalous transport coefficients, divertor plasma conditions (affecting both tungsten source strength and transport), the choice of tungsten atomic physics data used in the model (in particular sviz(Te) for W atoms), and the model of the carbon flux and energy used for calculating the tungsten source due to sputtering. The core tungsten density is found to be of order 10^15 m^-3 (excluding effects of any core transport barrier, and with significant variability depending on the other factors mentioned), with the density decaying into the scrape-off layer.

  4. Monte Carlo modelling of TRIGA research reactor

    NASA Astrophysics Data System (ADS)

    El Bakkari, B.; Nacir, B.; El Bardouni, T.; El Younoussi, C.; Merroun, O.; Htet, A.; Boulaich, Y.; Zoubair, M.; Boukhal, H.; Chakir, M.

    2010-10-01

    The Moroccan 2 MW TRIGA MARK II research reactor at Centre des Etudes Nucléaires de la Maâmora (CENM) achieved initial criticality on May 2, 2007. The reactor is designed to effectively support various fields of basic nuclear research, manpower training, and production of radioisotopes for use in agriculture, industry, and medicine. This study deals with the neutronic analysis of the 2-MW TRIGA MARK II research reactor at CENM and validation of the results by comparison with experimental, operational, and available final safety analysis report (FSAR) values. The study was prepared in collaboration between the Laboratory of Radiation and Nuclear Systems (ERSN-LMR) of the Faculty of Sciences of Tetuan (Morocco) and CENM. The 3-D continuous-energy Monte Carlo code MCNP (version 5) was used to develop a versatile and accurate full model of the TRIGA core. The model represents all components of the core in detail, with essentially no physical approximation. Continuous-energy cross-section data from the more recent nuclear data evaluations (ENDF/B-VI.8, ENDF/B-VII.0, JEFF-3.1, and JENDL-3.3), as well as the S(α,β) thermal neutron scattering functions distributed with the MCNP code, were used. The cross-section libraries were generated using the NJOY99 system updated to its most recent patch file, "up259". The consistency and accuracy of both the Monte Carlo simulation and the neutron transport physics were established by benchmarking against the TRIGA experiments. Core excess reactivity, total and integral control rod worths, as well as power peaking factors were used in the validation process. Results of the calculations are analysed and discussed.

  5. GeauxDock: Accelerating Structure-Based Virtual Screening with Heterogeneous Computing.

    PubMed

    Fang, Ye; Ding, Yun; Feinstein, Wei P; Koppelman, David M; Moreno, Juana; Jarrell, Mark; Ramanujam, J; Brylinski, Michal

    2016-01-01

    Computational modeling of drug binding to proteins is an integral component of direct drug design. Particularly, structure-based virtual screening is often used to perform large-scale modeling of putative associations between small organic molecules and their pharmacologically relevant protein targets. Because of a large number of drug candidates to be evaluated, an accurate and fast docking engine is a critical element of virtual screening. Consequently, highly optimized docking codes are of paramount importance for the effectiveness of virtual screening methods. In this communication, we describe the implementation, tuning and performance characteristics of GeauxDock, a recently developed molecular docking program. GeauxDock is built upon the Monte Carlo algorithm and features a novel scoring function combining physics-based energy terms with statistical and knowledge-based potentials. Developed specifically for heterogeneous computing platforms, the current version of GeauxDock can be deployed on modern, multi-core Central Processing Units (CPUs) as well as massively parallel accelerators, Intel Xeon Phi and NVIDIA Graphics Processing Unit (GPU). First, we carried out a thorough performance tuning of the high-level framework and the docking kernel to produce a fast serial code, which was then ported to shared-memory multi-core CPUs yielding a near-ideal scaling. Further, using Xeon Phi gives 1.9× performance improvement over a dual 10-core Xeon CPU, whereas the best GPU accelerator, GeForce GTX 980, achieves a speedup as high as 3.5×. On that account, GeauxDock can take advantage of modern heterogeneous architectures to considerably accelerate structure-based virtual screening applications. GeauxDock is open-sourced and publicly available at www.brylinski.org/geauxdock and https://figshare.com/articles/geauxdock_tar_gz/3205249.

  6. Using the computer in the clinical consultation; setting the stage, reviewing, recording, and taking actions: multi-channel video study.

    PubMed

    Kumarapeli, Pushpa; de Lusignan, Simon

    2013-06-01

    Electronic patient record (EPR) systems are widely used. This study explores the context and use of systems to provide insights into improving their use in clinical practice. We used video to observe 163 consultations by 16 clinicians using four EPR brands. We made a visual study of the consultation room and coded interactions between clinician, patient, and computer. Few patients (6.9%, n=12) declined to participate. Patients looked at the computer twice as much (47.6 s vs 20.6 s, p<0.001) when it was within their gaze. A quarter of consultations were interrupted (27.6%, n=45); and in half the clinician left the room (12.3%, n=20). The core consultation takes about 87% of the total session time; 5% of time is spent pre-consultation, reading the record and calling the patient in; and 8% of time is spent post-consultation, largely entering notes. Consultations with more than one person and where prescribing took place were longer (R(2) adj=22.5%, p<0.001). The core consultation can be divided into 61% of direct clinician-patient interaction, of which 15% is examination, 25% computer use with no patient involvement, and 14% simultaneous clinician-computer-patient interplay. The proportions of computer use are similar between consultations (mean=40.6%, SD=13.7%). There was more data coding in problem-orientated EPR systems, though clinicians often used vague codes. The EPR system is used for a consistent proportion of the consultation and should be designed to facilitate multi-tasking. Clinicians who want to promote screen sharing should change their consulting room layout.

  7. Using the computer in the clinical consultation; setting the stage, reviewing, recording, and taking actions: multi-channel video study

    PubMed Central

    Kumarapeli, Pushpa; de Lusignan, Simon

    2013-01-01

    Background and objective Electronic patient record (EPR) systems are widely used. This study explores the context and use of systems to provide insights into improving their use in clinical practice. Methods We used video to observe 163 consultations by 16 clinicians using four EPR brands. We made a visual study of the consultation room and coded interactions between clinician, patient, and computer. Few patients (6.9%, n=12) declined to participate. Results Patients looked at the computer twice as much (47.6 s vs 20.6 s, p<0.001) when it was within their gaze. A quarter of consultations were interrupted (27.6%, n=45); and in half the clinician left the room (12.3%, n=20). The core consultation takes about 87% of the total session time; 5% of time is spent pre-consultation, reading the record and calling the patient in; and 8% of time is spent post-consultation, largely entering notes. Consultations with more than one person and where prescribing took place were longer (R2 adj=22.5%, p<0.001). The core consultation can be divided into 61% of direct clinician–patient interaction, of which 15% is examination, 25% computer use with no patient involvement, and 14% simultaneous clinician–computer–patient interplay. The proportions of computer use are similar between consultations (mean=40.6%, SD=13.7%). There was more data coding in problem-orientated EPR systems, though clinicians often used vague codes. Conclusions The EPR system is used for a consistent proportion of the consultation and should be designed to facilitate multi-tasking. Clinicians who want to promote screen sharing should change their consulting room layout. PMID:23242763

  8. Dynamic Response of Functionally Graded Carbon Nanotube Reinforced Sandwich Plate

    NASA Astrophysics Data System (ADS)

    Mehar, Kulmani; Panda, Subrata Kumar

    2018-03-01

    In this article, the dynamic response of the carbon nanotube-reinforced functionally graded sandwich composite plate is studied numerically with the help of the finite element method. The face sheets of the sandwich composite plate are made of carbon nanotube-reinforced composite with two different grading patterns, whereas the core phase is taken as an isotropic material. The final properties of the structure are calculated using the rule of mixtures. The geometrical model of the sandwich plate is developed and discretized suitably with the help of an available shell element in the ANSYS library. Subsequently, the corresponding numerical dynamic responses are computed via the ANSYS batch input technique (parametric design language code), including Newmark's integration scheme. The stability of the sandwich structural numerical model is established through a proper convergence study. Further, the reliability of the sandwich model is checked by a comparison between the present results and those available in the references. Finally, several numerical problems are solved to examine the effect of different design constraints (carbon nanotube distribution pattern, core-to-face thickness ratio, volume fraction of the nanotubes, length-to-thickness ratio, aspect ratio, and edge constraints) on the time responses of the sandwich plate.
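
    Newmark's integration scheme, invoked above, advances a structural system through time implicitly. As a self-contained illustration, the sketch below applies the constant-average-acceleration variant (beta = 1/4, gamma = 1/2) to a single-degree-of-freedom system m*a + c*v + k*u = f(t) under a step load; the coefficients are toy values, not the sandwich-plate model.

      import numpy as np

      def newmark(m, c, k, f, dt, n, beta=0.25, gamma=0.5):
          """Implicit Newmark-beta integration of m*a + c*v + k*u = f(t)."""
          u = v = 0.0
          a = (f(0.0) - c * v - k * u) / m
          k_eff = m / (beta * dt ** 2) + gamma * c / (beta * dt) + k
          history = [u]
          for i in range(1, n + 1):
              rhs = (f(i * dt)
                     + m * (u / (beta * dt ** 2) + v / (beta * dt)
                            + (0.5 / beta - 1.0) * a)
                     + c * (gamma * u / (beta * dt) + (gamma / beta - 1.0) * v
                            + dt * (gamma / (2.0 * beta) - 1.0) * a))
              u_new = rhs / k_eff
              a_new = ((u_new - u) / (beta * dt ** 2) - v / (beta * dt)
                       - (0.5 / beta - 1.0) * a)
              v = v + dt * ((1.0 - gamma) * a + gamma * a_new)
              u, a = u_new, a_new
              history.append(u)
          return np.array(history)

      # Lightly damped oscillator under a unit step load.
      response = newmark(m=1.0, c=0.1, k=40.0, f=lambda t: 1.0, dt=0.01, n=500)
      print(response.max())   # overshoots the static deflection f/k = 0.025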

  9. Stationary Liquid Fuel Fast Reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Won Sik; Grandy, Andrew; Boroski, Andrew

    For effective burning of hazardous transuranic (TRU) elements of used nuclear fuel, a transformational advanced reactor concept named SLFFR (Stationary Liquid Fuel Fast Reactor) was proposed based on stationary molten metallic fuel. The fuel enters the reactor vessel in a solid form, and then it is heated to molten temperature in a small melting heater. The fuel is contained within a closed, thick container with penetrating coolant channels, and thus it is neither mixed with coolant nor does it flow through the primary heat transfer circuit. The makeup fuel is semi-continuously added to the system, and thus a very small excess reactivity is required. Gaseous fission products are also removed continuously, and a fraction of the fuel is periodically drawn off from the fuel container to a processing facility, where non-gaseous mixed fission products and other impurities are removed and the cleaned fuel is recycled into the fuel container. A reference core design and a preliminary plant system design of a 1000 MWt TRU-burning SLFFR concept were developed using TRU-Ce-Co fuel, a Ta-10W fuel container, and sodium coolant. Conservative design approaches were adopted to stay within the current material performance database. Detailed neutronics and thermal-fluidic analyses were performed to develop a reference core design. Region-dependent 33-group cross sections were generated based on the ENDF/B-VII.0 data using the MC2-3 code. Core and fuel cycle analyses were performed in theta-r-z geometries using the DIF3D and REBUS-3 codes. Reactivity coefficients and kinetics parameters were calculated using the VARI3D perturbation theory code. Thermo-fluidic analyses were performed using the ANSYS FLUENT computational fluid dynamics (CFD) code. Figure 0.1 shows a schematic radial layout of the reference 1000 MWt SLFFR core, and Table 0.1 summarizes the main design parameters of the SLFFR-1000 loop plant. The fuel container is a 2.5 cm thick cylinder with an inner radius of 87.5 cm. It is penetrated by twelve hexagonal control assembly (CA) guide tubes, each of which has 3.0 mm wall thickness and 69.4 mm flat-to-flat outer distance. The distance between two neighboring CA guide tubes is selected to be 26 cm to provide adequate space for the CA driving systems. The fuel container has 18181 penetrating coolant tubes of 6.0 mm inner diameter and 2.0 mm thickness, arranged in a triangular lattice with a lattice pitch of 1.21 cm. The fuel, structure, and coolant volume fractions inside the fuel container are 0.386, 0.383, and 0.231, respectively. Separate steel reflectors and B4C shields are used outside of the fuel container. Six gas expansion modules (GEMs) of 5.0 cm thickness are introduced in the radial reflector region. Between the radial reflector and the fuel container is a 2.5 cm sodium gap. The TRU inventory at the beginning of equilibrium cycle (BOEC) is 5081 kg, whereas the TRU inventory at the beginning of life (BOL) was 3541 kg; this is because the equilibrium cycle fuel contains a significantly smaller fissile fraction than the LWR TRU feed. The fuel inventory at BOEC is composed of 34.0 a/o TRU, 41.4 a/o Ce, 23.6 a/o Co, and 1.03 a/o solid fission products. Since uranium-free fuel is used, a theoretical maximum TRU consumption rate of 1.011 kg/day is achieved. The semi-continuous fuel cycle based on the 300-batch, 1-day cycle approximation yields a burnup reactivity loss of 26 pcm/day and requires daily reprocessing of 32.5 kg of SLFFR fuel. This yields a daily TRU charge rate of 17.45 kg, including a makeup TRU feed of 1.011 kg recovered from the LWR used fuel. The charged TRU-Ce-Co fuel is composed of 34.4 a/o TRU, 40.6 a/o Ce, and 25.0 a/o Co.

  10. Observations Regarding Use of Advanced CFD Analysis, Sensitivity Analysis, and Design Codes in MDO

    NASA Technical Reports Server (NTRS)

    Newman, Perry A.; Hou, Gene J. W.; Taylor, Arthur C., III

    1996-01-01

    Observations regarding the use of advanced computational fluid dynamics (CFD) analysis, sensitivity analysis (SA), and design codes in gradient-based multidisciplinary design optimization (MDO) reflect our perception of the interactions required of CFD and our experience in recent aerodynamic design optimization studies using CFD. Sample results from these latter studies are summarized for conventional optimization (analysis - SA codes) and simultaneous analysis and design optimization (design code) using both Euler and Navier-Stokes flow approximations. The amount of computational resources required for aerodynamic design using CFD via analysis - SA codes is greater than that required for design codes. Thus, an MDO formulation that utilizes the more efficient design codes where possible is desired. However, in the aerovehicle MDO problem, the various disciplines that are involved have different design points in the flight envelope; therefore, CFD analysis - SA codes are required at the aerodynamic 'off design' points. The suggested MDO formulation is a hybrid multilevel optimization procedure that consists of both multipoint CFD analysis - SA codes and multipoint CFD design codes that perform suboptimizations.

  11. Determining Coolant Flow Rate Distribution In The Fuel-Modified TRIGA Plate Reactor

    NASA Astrophysics Data System (ADS)

    Puji Hastuti, Endiah; Widodo, Surip; Darwis Isnaini, M.; Geni Rina, S.; Syaiful, B.

    2018-02-01

    The TRIGA 2000 reactor in Bandung is planned to have its fuel elements replaced, from cylindrical uranium and zirconium-hydride (U-ZrH) alloy fuel to U3Si2-Al plate-type fuel of low-enriched uranium (19.75%) with a uranium density of 2.96 gU/cm3, while the reactor power is maintained at 2 MW. This change is planned in anticipation of the discontinuation of TRIGA fuel element production. The selection of this plate-type fuel element is supported by the fact that such a fuel type has been produced in Indonesia and used safely in MPR-30 since 2000. The core configuration of the plate-type-fuelled TRIGA reactor requires an adequate coolant flow rate through each fuel element channel in order to meet its safety function. This paper describes the coolant flow rate distribution in the TRIGA core that meets the safety function at normal operation, physical test, and shutdown conditions, and at the initiating event of a loss of coolant flow due to power supply interruption. The design analysis to determine the coolant flow rate employs the CAUDVAP and COOLODN computation codes. The designed coolant flow rate that meets the safety criteria for departure from nucleate boiling ratio (DNBR), onset of flow instability ratio (OFIR), and ΔT to onset of nucleate boiling (ONB) indicates that the minimum flow rate required to cool the plate-type-fuelled TRIGA core at 2 MW is 80 kg/s. Therefore, the operating limitation condition (OLC) for the minimum flow rate is 80 kg/s, of which 72 kg/s cools the active core, while the minimum flow rate under a coolant flow drop is limited to 68 kg/s with a coolant inlet temperature of 35°C. This thermohydraulic design also provides cooling for four irradiation positions (IP) and one central irradiation position (CIP), with end-fitting inner diameters (ID) of 10 mm and 20 mm, respectively.

  12. Investigating the impact of the cielo cray XE6 architecture on scientific application codes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rajan, Mahesh; Barrett, Richard; Pedretti, Kevin Thomas Tauke

    2010-12-01

    Cielo, a Cray XE6, is the Department of Energy NNSA Advanced Simulation and Computing (ASC) campaign's newest capability machine. Rated at 1.37 PFLOPS, it consists of 8,944 dual-socket oct-core AMD Magny-Cours compute nodes, linked using Cray's Gemini interconnect. Its primary mission objective is to enable a suite of the ASC applications implemented using MPI to scale to tens of thousands of cores. Cielo is an evolutionary improvement to a successful architecture previously available to many of our codes, thus enabling a basis for understanding the capabilities of this new architecture. Using three codes strategically important to the ASC campaign, and supplemented with some micro-benchmarks that expose the fundamental capabilities of the XE6, we report on the performance characteristics and capabilities of Cielo.

  13. Ultra-miniature wireless temperature sensor for thermal medicine applications

    PubMed Central

    Khairi, Ahmad; Hung, Shih-Chang; Paramesh, Jeyanandh; Fedder, Gary; Rabin, Yoed

    2017-01-01

    This study presents a prototype design of an ultra-miniature, wireless, battery-less, implantable temperature sensor, with applications to thermal medicine such as cryosurgery, hyperthermia, and thermal ablation. The design aims at a sensory device smaller than 1.5 mm in diameter and 3 mm in length, to enable minimally invasive deployment through a hypodermic needle. While the new device may be used for local temperature monitoring, simultaneous data collection from an array of such sensors can be used to reconstruct the 3D temperature field in the treated area, offering a unique capability in thermal medicine. The new sensory device consists of three major subsystems: a temperature-sensing core, a wireless data-communication unit, and a wireless power reception and management unit. Power is delivered wirelessly to the implant from an external source using an inductive link. To meet size requirements while enhancing reliability and minimizing cost, the implant is fully integrated in a regular foundry CMOS technology (0.15 μm in the current study), including the implant-side inductor of the power link. A temperature-sensing core that consists of a proportional-to-absolute-temperature (PTAT) circuit has been designed and characterized. It employs a microwatt chopper-stabilized op-amp and dynamic element-matched current sources to achieve high absolute accuracy. A second-order sigma-delta (Σ-Δ) analog-to-digital converter (ADC) is designed to convert the temperature reading to a digital code, which is transmitted by backscatter through the same antenna used for receiving power. A high-efficiency multi-stage differential CMOS rectifier has been designed to provide a DC supply to the sensing and communication subsystems. This paper focuses on the development of the all-CMOS temperature-sensing core circuitry of the device, and briefly reviews the wireless power delivery and communication subsystems. PMID:28989222
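
    As a rough illustration of the PTAT principle described above, the sketch below converts a digitized PTAT reading to temperature using a two-point calibration: an ideal PTAT output is proportional to absolute temperature, so two known calibration points fix the transfer. All names and constants here are hypothetical, not taken from the paper's circuit.

        # Minimal PTAT readout model: V_ptat = k * T (T in kelvin), digitized by an
        # ideal N-bit ADC. Two-point calibration recovers temperature from raw codes.
        N_BITS = 12
        VREF = 1.2            # hypothetical ADC reference voltage [V]
        K_PTAT = 2.0e-3       # hypothetical PTAT slope [V/K]

        def adc_code(temp_k: float) -> int:
            """Ideal ADC reading of the PTAT voltage at a given temperature."""
            v = K_PTAT * temp_k
            return round(v / VREF * (2**N_BITS - 1))

        # Two-point calibration at known temperatures (e.g., ice bath and body temp).
        t1, t2 = 273.15, 310.15
        c1, c2 = adc_code(t1), adc_code(t2)
        slope = (t2 - t1) / (c2 - c1)          # kelvin per ADC count

        def code_to_temp(code: int) -> float:
            return t1 + (code - c1) * slope

        print(code_to_temp(adc_code(253.15)) - 273.15, "degC (cryo range)")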

  14. Implementing Molecular Dynamics for Hybrid High Performance Computers - 1. Short Range Forces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, W Michael; Wang, Peng; Plimpton, Steven J

    The use of accelerators such as general-purpose graphics processing units (GPGPUs) has become popular in scientific computing applications due to their low cost, impressive floating-point capabilities, high memory bandwidth, and low electrical power requirements. Hybrid high performance computers, machines with more than one type of floating-point processor, are now becoming more prevalent due to these advantages. In this work, we discuss several important issues in porting a large molecular dynamics code for use on parallel hybrid machines: 1) choosing a hybrid parallel decomposition that works on central processing units (CPUs) with distributed memory and accelerator cores with shared memory, 2) minimizing the amount of code that must be ported for efficient acceleration, 3) utilizing the available processing power from both many-core CPUs and accelerators, and 4) choosing a programming model for acceleration. We present our solution to each of these issues for short-range force calculation in the molecular dynamics package LAMMPS. We describe algorithms for efficient short range force calculation on hybrid high performance machines. We describe a new approach for dynamic load balancing of work between CPU and accelerator cores. We describe the Geryon library that allows a single code to compile with both CUDA and OpenCL for use on a variety of accelerators. Finally, we present results on a parallel test cluster containing 32 Fermi GPGPUs and 180 CPU cores.
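
    The dynamic load-balancing idea (splitting work between CPU and accelerator cores and adjusting the split from measured timings) can be sketched as follows; the timing model is a stand-in for real per-step measurements, and the damped update rule is one common choice, not necessarily the one implemented in LAMMPS.

        # Sketch: adapt the fraction f of particles offloaded to the GPU so that the
        # measured GPU and CPU times per step converge. Throughputs are hypothetical.
        GPU_RATE = 5.0e6   # particles/s the accelerator can process (assumed)
        CPU_RATE = 1.0e6   # particles/s the host cores can process (assumed)
        N = 1_000_000      # particles per timestep

        f = 0.5                                # initial split: half the work to the GPU
        for step in range(10):
            t_gpu = f * N / GPU_RATE           # stand-ins for real timer readings
            t_cpu = (1.0 - f) * N / CPU_RATE
            rate_gpu = f / t_gpu               # observed work per second on each side
            rate_cpu = (1.0 - f) / t_cpu
            target = rate_gpu / (rate_gpu + rate_cpu)
            f += 0.5 * (target - f)            # damped update to avoid oscillation
            print(f"step {step}: f={f:.3f}  t_gpu={t_gpu*1e3:.1f} ms  t_cpu={t_cpu*1e3:.1f} ms")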

  15. Development of a Stiffness-Based Chemistry Load Balancing Scheme, and Optimization of Input/Output and Communication, to Enable Massively Parallel High-Fidelity Internal Combustion Engine Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kodavasal, Janardhan; Harms, Kevin; Srivastava, Priyesh

    A closed-cycle gasoline compression ignition engine simulation near top dead center (TDC) was used to profile the performance of a parallel commercial engine computational fluid dynamics code, as it was scaled on up to 4096 cores of an IBM Blue Gene/Q supercomputer. The test case has 9 million cells near TDC, with a fixed mesh size of 0.15 mm, and was run on configurations ranging from 128 to 4096 cores. Profiling was done for a small duration of 0.11 crank angle degrees near TDC during ignition. Optimization of input/output performance resulted in a significant speedup in reading restart files, and in an over 100-times speedup in writing restart files and files for post-processing. Improvements to communication resulted in a 1400-times speedup in the mesh load balancing operation during initialization, on 4096 cores. An improved, “stiffness-based” algorithm for load balancing chemical kinetics calculations was developed, which results in an over 3-times faster run-time near ignition on 4096 cores relative to the original load balancing scheme. With this improvement to load balancing, the code achieves over 78% scaling efficiency on 2048 cores, and over 65% scaling efficiency on 4096 cores, relative to 256 cores.
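
    A stiffness-based chemistry load balancer in the spirit described above can be sketched with a greedy longest-processing-time assignment: each cell's chemistry cost is estimated from a stiffness proxy (for example, the ODE solver's substep count on the previous timestep) and cells are handed to the currently least-loaded rank. The cost model below is invented for illustration.

        import heapq
        import random

        def balance_cells(costs, nranks):
            """Greedy LPT assignment: sort cells by estimated cost, always give the
            next cell to the least-loaded rank. Returns per-rank cell lists."""
            heap = [(0.0, r) for r in range(nranks)]   # (accumulated cost, rank)
            heapq.heapify(heap)
            assignment = [[] for _ in range(nranks)]
            for cell, cost in sorted(enumerate(costs), key=lambda kv: -kv[1]):
                load, rank = heapq.heappop(heap)
                assignment[rank].append(cell)
                heapq.heappush(heap, (load + cost, rank))
            return assignment

        # Hypothetical stiffness proxy: cells near the ignition kernel are far more
        # expensive than the rest (heavy-tailed cost distribution).
        random.seed(0)
        costs = [random.expovariate(1.0) ** 3 for _ in range(10_000)]
        parts = balance_cells(costs, nranks=8)
        loads = [sum(costs[c] for c in p) for p in parts]
        print("load imbalance:", max(loads) / (sum(loads) / len(loads)))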

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrews, Nathan; Faucett, Christopher; Haskin, Troy Christopher

    Following the conclusion of the first phase of the crosswalk analysis, one of the key unanswered questions was whether or not the deviations found would persist during a partially recovered accident scenario, similar to the one that occurred in TMI-2. In particular, this analysis aims to compare the impact of core degradation morphology on the quenching models inherent within the two codes and on the coolability of debris during partially recovered accidents. A primary motivation for this study is the development of insights into how uncertainties in core damage progression models impact the ability to assess the potential for recovery of a degraded core. These quench and core recovery models are of the most interest when there is a significant amount of core damage but intact and degraded fuel still remain in the core region or the lower plenum. Accordingly, this analysis presents a spectrum of partially recovered accident scenarios, varying both water injection timing and rate to highlight the impact of core degradation phenomena on recovered accident scenarios. This analysis uses the newly released MELCOR 2.2 rev. 9665 and MAAP5 Version 5.04, code versions which incorporate a significant number of modifications driven by analyses and forensic evidence obtained from the Fukushima-Daiichi reactor site.

  17. Highly Efficient Parallel Multigrid Solver For Large-Scale Simulation of Grain Growth Using the Structural Phase Field Crystal Model

    NASA Astrophysics Data System (ADS)

    Guan, Zhen; Pekurovsky, Dmitry; Luce, Jason; Thornton, Katsuyo; Lowengrub, John

    The structural phase field crystal (XPFC) model can be used to model grain growth in polycrystalline materials on diffusive time scales while maintaining atomic-scale resolution. However, the governing equation of the XPFC model is an integro-partial-differential equation (IPDE), which poses challenges for implementation on high performance computing (HPC) platforms. In collaboration with the XSEDE Extended Collaborative Support Service, we developed a distributed-memory HPC solver for the XPFC model, which combines parallel multigrid and P3DFFT. Performance benchmarking on the Stampede supercomputer indicates near-linear strong and weak scaling, for both the multigrid solver and the transfer time between the multigrid and FFT modules, up to 1024 cores. Scalability of the FFT module begins to decline at 128 cores, but it is sufficient for the type of problem we will be examining. We have demonstrated simulations using 1024 cores, and we expect to reach 4096 cores and beyond. Ongoing work involves optimization of the MPI/OpenMP-based code for the Intel KNL many-core architecture. This prepares the code for coming pre-exascale systems, in particular many-core systems such as Stampede 2.0 and Cori 2 at NERSC, without sacrificing efficiency on other general HPC systems.
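
    A geometric multigrid V-cycle of the kind used in such solvers can be sketched in one dimension; the XPFC solver itself is far more involved (and couples multigrid with P3DFFT), so the sketch below only illustrates the smooth-restrict-correct recursion on a model Poisson problem.

        import numpy as np

        def smooth(u, f, h, sweeps=2, omega=2.0 / 3.0):
            """Weighted-Jacobi smoothing for -u'' = f on a uniform 1-D grid."""
            for _ in range(sweeps):
                u[1:-1] += omega * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1] - 2.0 * u[1:-1])
            return u

        def residual(u, f, h):
            r = np.zeros_like(u)
            r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
            return r

        def v_cycle(u, f, h):
            if len(u) <= 3:                       # coarsest grid: solve exactly
                u[1] = 0.5 * h * h * f[1]
                return u
            u = smooth(u, f, h)                   # pre-smoothing
            r = residual(u, f, h)
            rc = np.zeros((len(r) - 1) // 2 + 1)  # full-weighting restriction
            rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
            ec = v_cycle(np.zeros_like(rc), rc, 2.0 * h)
            e = np.zeros_like(u)                  # linear-interpolation prolongation
            e[::2] = ec
            e[1::2] = 0.5 * (ec[:-1] + ec[1:])
            return smooth(u + e, f, h)            # coarse-grid correction + post-smoothing

        n = 2**8 + 1                              # grid points, Dirichlet boundaries
        x = np.linspace(0.0, 1.0, n)
        f = np.pi**2 * np.sin(np.pi * x)          # -u'' = f  has solution  u = sin(pi x)
        u = np.zeros(n)
        for it in range(8):
            u = v_cycle(u, f, 1.0 / (n - 1))
            print(f"V-cycle {it}: residual {np.linalg.norm(residual(u, f, 1.0/(n-1))):.2e}")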

  18. Equilibrium cycle pin by pin transport depletion calculations with DeCART

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kochunas, B.; Downar, T.; Taiwo, T.

    As the Advanced Fuel Cycle Initiative (AFCI) program has matured, it has become more important to utilize more advanced simulation methods. The work reported here was performed as part of the AFCI fellowship program to develop and demonstrate the capability of performing high-fidelity equilibrium cycle calculations. As part of this work, a new multi-cycle analysis capability was implemented in the DeCART code, which included modifying the depletion modules to perform nuclide decay calculations, implementing an assembly shuffling pattern description, and modifying iteration schemes. During the work, stability issues were uncovered with respect to simultaneously converging the neutron flux, isotopics, and fluid density and temperature distributions in 3-D. Relaxation factors were implemented, which considerably improved the stability of the convergence. To demonstrate the capability, two core designs were utilized: a reference UOX core and a CORAIL core. Full core equilibrium cycle calculations were performed on both cores and the discharge isotopics were compared. From this comparison it was noted that the improved modeling capability was not drastically different in its prediction of the discharge isotopics when compared to 2-D single assembly or 2-D core models. For fissile isotopes such as U-235, Pu-239, and Pu-241 the relative differences were 1.91%, 1.88%, and 0.59%, respectively. While this difference may not seem large, it translates to mass differences on the order of tens of grams per assembly, which may be significant for the purposes of accounting of special nuclear material. (authors)

  19. Development of an object-oriented ORIGEN for advanced nuclear fuel modeling applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skutnik, S.; Havloej, F.; Lago, D.

    2013-07-01

    The ORIGEN package serves as the core depletion and decay calculation module within the SCALE code system. A recent major re-factoring of the ORIGEN code architecture, as part of an overall modernization of the SCALE code system, has both greatly enhanced its maintainability and afforded several new capabilities useful for incorporating depletion analysis into other code frameworks. This paper presents an overview of the improved ORIGEN code architecture (including the methods and data structures introduced) as well as current and potential future applications utilizing the new ORIGEN framework. (authors)

  20. Academic Interventions for Children with Dyslexia Who Have Phonological Core Deficits.

    ERIC Educational Resources Information Center

    Frost, Julie A.; Emery, Michael J.

    1996-01-01

    This article briefly defines phonological core deficits in cases of dyslexia; considers student classification based on federal and state learning disability placement guidelines; and suggests 10 interventions such as teaching metacognitive strategies, providing direct instruction in language analysis and the alphabetic code, and teaching reading…

  1. Joint design of QC-LDPC codes for coded cooperation system with joint iterative decoding

    NASA Astrophysics Data System (ADS)

    Zhang, Shunwai; Yang, Fengfan; Tang, Lei; Ejaz, Saqib; Luo, Lin; Maharaj, B. T.

    2016-03-01

    In this paper, we investigate the joint design of quasi-cyclic low-density parity-check (QC-LDPC) codes for a coded cooperation system with joint iterative decoding at the destination. First, QC-LDPC codes based on the base matrix and exponent matrix are introduced, and then we describe two types of girth-4 cycles in the QC-LDPC codes employed by the source and relay. In the equivalent parity-check matrix corresponding to the jointly designed QC-LDPC codes employed by the source and relay, all girth-4 cycles, including both type I and type II, are cancelled. Theoretical analysis and numerical simulations show that the jointly designed QC-LDPC coded cooperation effectively combines cooperation gain and channel coding gain, and outperforms coded non-cooperation under the same conditions. Furthermore, the bit error rate performance of the coded cooperation employing jointly designed QC-LDPC codes is better than those of random LDPC codes and separately designed QC-LDPC codes over AWGN channels.
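
    For QC-LDPC codes built from an exponent matrix, a girth-4 cycle exists exactly when some 2x2 submatrix of exponents satisfies a simple modular condition (Fossorier's criterion). The sketch below checks that condition; it illustrates the kind of constraint the joint design must cancel, not the paper's specific joint construction.

        from itertools import combinations

        def has_girth4(E, L):
            """Check a QC-LDPC exponent matrix E (entries are circulant shifts,
            -1 marks an all-zero block) for girth-4 cycles over circulant size L.
            A 4-cycle exists iff for some rows i<k and columns j<l:
            E[i][j] - E[i][l] + E[k][l] - E[k][j] == 0 (mod L)."""
            rows, cols = len(E), len(E[0])
            for i, k in combinations(range(rows), 2):
                for j, l in combinations(range(cols), 2):
                    a, b, c, d = E[i][j], E[i][l], E[k][l], E[k][j]
                    if -1 in (a, b, c, d):
                        continue                  # zero block: no cycle through it
                    if (a - b + c - d) % L == 0:
                        return True
            return False

        bad  = [[0, 1], [2, 3]]   # 0 - 1 + 3 - 2 = 0 (mod 31): contains a 4-cycle
        good = [[0, 1], [2, 4]]   # 0 - 1 + 4 - 2 = 1: free of 4-cycles
        print(has_girth4(bad, 31), has_girth4(good, 31))   # True False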

  2. RELAP5 Analyses of OECD/NEA ROSA-2 Project Experiments on Intermediate-Break LOCAs at Hot Leg or Cold Leg

    NASA Astrophysics Data System (ADS)

    Takeda, Takeshi; Maruyama, Yu; Watanabe, Tadashi; Nakamura, Hideo

    Experiments simulating PWR intermediate-break loss-of-coolant accidents (IBLOCAs) with a 17% break at the hot leg or cold leg were conducted in the OECD/NEA ROSA-2 Project using the Large Scale Test Facility (LSTF). In the hot leg IBLOCA test, core uncovery started simultaneously with the liquid level drop in the crossover leg downflow-side, before loop seal clearing (LSC) induced by steam condensation on accumulator coolant injected into the cold leg. Water remained on the upper core plate in the upper plenum due to counter-current flow limiting (CCFL) caused by significant upward steam flow from the core. In the cold leg IBLOCA test, core dryout took place due to a rapid liquid level drop in the core before LSC. Liquid accumulated in the upper plenum, the steam generator (SG) U-tube upflow-side and the SG inlet plenum before the LSC due to CCFL caused by high-velocity vapor flow, enhancing the decrease in the core liquid level. RELAP5/MOD3.2.1.2 post-test analyses of the two LSTF experiments were performed employing the critical flow model in the code with a discharge coefficient of 1.0. In the hot leg IBLOCA case, the cladding surface temperature of the simulated fuel rods was underpredicted due to overprediction of the core liquid level after core uncovery. In the cold leg IBLOCA case, the cladding surface temperature was also underpredicted, due to later core uncovery than in the experiment. These results suggest that the code has remaining problems in properly predicting the primary coolant distribution.

  3. A Two-Step Approach to Uncertainty Quantification of Core Simulators

    DOE PAGES

    Yankov, Artem; Collins, Benjamin; Klein, Markus; ...

    2012-01-01

    For the multiple sources of error introduced into the standard computational regime for simulating reactor cores, rigorous uncertainty analysis methods are available primarily to quantify the effects of cross section uncertainties. Two methods for propagating cross section uncertainties through core simulators are the XSUSA statistical approach and the “two-step” method. The XSUSA approach, which is based on the SUSA code package, is fundamentally a stochastic sampling method. Alternatively, the two-step method utilizes generalized perturbation theory in the first step and stochastic sampling in the second step. The consistency of these two methods in quantifying uncertainties in the multiplication factor and in the core power distribution was examined in the framework of phase I-3 of the OECD Uncertainty Analysis in Modeling benchmark. With the Three Mile Island Unit 1 core as a base model for analysis, the XSUSA and two-step methods were applied with certain limitations, and the results were compared to those produced by other stochastic sampling-based codes. Based on the uncertainty analysis results, conclusions were drawn as to the method that is currently more viable for computing uncertainties in burnup and transient calculations.
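
    The stochastic-sampling side of such an analysis can be illustrated with a toy one-group model: draw correlated cross-section perturbations from an assumed covariance, evaluate k-infinity for each sample, and read off the output uncertainty. The covariance values and the one-group formula k_inf = nu * sigma_f / sigma_a are purely illustrative stand-ins for a real lattice calculation.

        import numpy as np

        rng = np.random.default_rng(42)

        # Toy one-group data: k_inf = nu * sigma_f / sigma_a for an infinite medium.
        nominal = {"nu": 2.43, "sigma_f": 0.050, "sigma_a": 0.120}   # illustrative
        rel_cov = np.array([[1.0e-4, 2.0e-5, 1.0e-5],                # assumed relative
                            [2.0e-5, 4.0e-4, 5.0e-5],                # covariance matrix
                            [1.0e-5, 5.0e-5, 2.5e-4]])

        mean = np.array([nominal["nu"], nominal["sigma_f"], nominal["sigma_a"]])
        cov = rel_cov * np.outer(mean, mean)          # absolute covariance

        samples = rng.multivariate_normal(mean, cov, size=1000)
        k_inf = samples[:, 0] * samples[:, 1] / samples[:, 2]

        print(f"k_inf = {k_inf.mean():.5f} +/- {k_inf.std(ddof=1):.5f}")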

  4. Validation of updated neutronic calculation models proposed for Atucha-II PHWR. Part I: Benchmark comparisons of WIMS-D5 and DRAGON cell and control rod parameters with MCNP5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mollerach, R.; Leszczynski, F.; Fink, J.

    2006-07-01

    In 2005 the Argentine Government took the decision to complete the construction of the Atucha-II nuclear power plant, which had been progressing slowly during the previous ten years. Atucha-II is a 745 MWe nuclear station moderated and cooled with heavy water, of German (Siemens) design, located in Argentina. It has a pressure-vessel design with 451 vertical coolant channels, and the fuel assemblies (FA) are clusters of 37 natural UO2 rods with an active length of 530 cm. In the reactor physics area, a revision and update of the calculation methods and models was recently carried out, covering cell, supercell (control rod) and core calculations. As a validation of the new models, some benchmark comparisons were made against Monte Carlo calculations with MCNP5. This paper presents comparisons of cell and supercell benchmark problems, based on a slightly idealized model of the Atucha-I core, obtained with the WIMS-D5 and DRAGON codes against MCNP5 results. The Atucha-I core was selected because it is smaller, similar from a neutronic point of view, and more symmetric than Atucha-II. Cell parameters compared include cell k-infinity, relative power levels of the different rings of fuel rods, and some two-group macroscopic cross sections. Supercell comparisons include supercell k-infinity changes due to the control rods (tubes) of steel and hafnium. (authors)

  5. Power control of SAFE reactor using fuzzy logic

    NASA Astrophysics Data System (ADS)

    Irvine, Claude

    2002-01-01

    Controlling the 100 kW SAFE (Safe Affordable Fission Engine) reactor consists of the design and implementation of a fuzzy logic process control system to regulate dynamic variables related to nuclear system power. The first phase of development concentrates primarily on system power startup and regulation, maintaining core temperature equilibrium, and power profile matching. This paper discusses the experimental work performed in those areas. Nuclear core power from the fuel elements is simulated using resistive heating elements, while heat rejection is handled by a series of heat pipes. Both axial and radial nuclear power distributions are determined from neutronic modeling codes. The axial temperature profile of the simulated core is matched to the nuclear power profile by varying the resistance of the heating elements. The SAFE model establishes radial temperature profile equivalence by defining 32 control zones as the nodal coordinates. Control features also allow for slow warm-up, since complete shutoff can occur in the heat pipes if heat-source temperatures drop below a certain minimum value, depending on the specific fluid and gas combination in the heat pipe. The entire system is expected to be self-adaptive, i.e., capable of responding to long-range changes in the space environment. Particular attention in the development of the fuzzy logic algorithm ensures that the system process remains at set point, virtually eliminating overshoot on start-up and during in-process disturbances. The controller design will withstand harsh environments and applications where it might come in contact with water, corrosive chemicals, radiation fields, etc.
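
    A minimal fuzzy power controller of the general kind described above might look like the sketch below: triangular membership functions fuzzify the temperature error, a small rule base maps the fuzzified error to a heater-power correction, and a weighted average over rule outputs defuzzifies. The membership breakpoints and rules are invented for illustration, not taken from the SAFE controller.

        def tri(x, a, b, c):
            """Triangular membership function peaking at b, zero outside [a, c]."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x < b else (c - x) / (c - b)

        def fuzzy_power_correction(error_k):
            """Map temperature error (setpoint - measured, in kelvin) to a heater
            power correction in percent, via a 3-rule Sugeno-style controller.
            Errors beyond +/-50 K fall outside all sets and return 0 here; a real
            controller would saturate instead."""
            mu = {
                "cold": tri(error_k, 0.0, 25.0, 50.0),       # too cold -> add power
                "ok":   tri(error_k, -25.0, 0.0, 25.0),
                "hot":  tri(error_k, -50.0, -25.0, 0.0),     # too hot -> cut power
            }
            out = {"cold": +10.0, "ok": 0.0, "hot": -10.0}   # rule consequents [%]
            den = sum(mu.values())
            num = sum(mu[r] * out[r] for r in mu)
            return num / den if den > 0.0 else 0.0

        for e in (-30.0, -5.0, 0.0, 12.0, 40.0):
            print(f"error {e:+6.1f} K -> power correction {fuzzy_power_correction(e):+6.2f} %")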

  6. Extending Moore's Law via Computationally Error Tolerant Computing.

    DOE PAGES

    Deng, Bobin; Srikanth, Sriseshan; Hein, Eric R.; ...

    2018-03-01

    Dennard scaling has ended. Lowering the voltage supply (Vdd) to sub-volt levels causes intermittent losses in signal integrity, rendering further scaling down no longer acceptable as a means to lower the power required by a processor core. However, it is possible to correct the occasional errors caused by lower Vdd in an efficient manner and effectively lower power. By deploying the right amount and kind of redundancy, we can strike a balance between the overhead incurred in achieving reliability and the energy savings realized by permitting lower Vdd. One promising approach is the Redundant Residue Number System (RRNS) representation. Unlike other error-correcting codes, RRNS has the important property of being closed under addition, subtraction and multiplication, thus enabling computational error correction at a fraction of the overhead of conventional approaches. We use the RRNS scheme to design a Computationally-Redundant, Energy-Efficient core, including the microarchitecture, Instruction Set Architecture (ISA) and RRNS-centered algorithms. Finally, from the simulation results, this RRNS system can reduce the energy-delay product by about 3× for multiplication-intensive workloads and by about 2× in general, when compared to a non-error-correcting binary core.
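
    The closure property that makes RRNS attractive is easy to demonstrate: arithmetic is done independently on each residue channel, and redundant moduli let a corrupted channel be detected by a consistency check. The moduli below are small toy values; a real RRNS core uses carefully chosen, much larger moduli.

        from math import prod

        MODULI = [7, 11, 13]      # non-redundant moduli: legitimate range 0..1000
        REDUNDANT = [17, 19]      # redundant moduli used only for checking
        ALL = MODULI + REDUNDANT

        encode = lambda x: [x % m for m in ALL]

        def add(a, b):            # closure: add residue-wise, no carries between channels
            return [(x + y) % m for x, y, m in zip(a, b, ALL)]

        def mul(a, b):
            return [(x * y) % m for x, y, m in zip(a, b, ALL)]

        def crt(residues, moduli):
            """Chinese-remainder reconstruction."""
            M = prod(moduli)
            x = 0
            for r, m in zip(residues, moduli):
                Mi = M // m
                x += r * Mi * pow(Mi, -1, m)
            return x % M

        def check(word):
            """Reconstruct from the non-redundant channels, then verify that the
            redundant channels agree. Disagreement flags a computation error."""
            x = crt(word[:len(MODULI)], MODULI)
            ok = all(x % m == r for r, m in zip(word[len(MODULI):], REDUNDANT))
            return x, ok

        a, b = encode(25), encode(7)
        w = add(mul(a, a), b)                 # computes 25*25 + 7 = 632 channel-wise
        print(check(w))                       # (632, True)

        w[1] = (w[1] + 3) % ALL[1]            # inject a single-channel fault
        print(check(w))                       # reconstruction now flagged as bad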

  7. Extending Moore's Law via Computationally Error Tolerant Computing.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deng, Bobin; Srikanth, Sriseshan; Hein, Eric R.

    Dennard scaling has ended. Lowering the voltage supply (Vdd) to sub-volt levels causes intermittent losses in signal integrity, rendering further scaling down no longer acceptable as a means to lower the power required by a processor core. However, it is possible to correct the occasional errors caused by lower Vdd in an efficient manner and effectively lower power. By deploying the right amount and kind of redundancy, we can strike a balance between the overhead incurred in achieving reliability and the energy savings realized by permitting lower Vdd. One promising approach is the Redundant Residue Number System (RRNS) representation. Unlike other error-correcting codes, RRNS has the important property of being closed under addition, subtraction and multiplication, thus enabling computational error correction at a fraction of the overhead of conventional approaches. We use the RRNS scheme to design a Computationally-Redundant, Energy-Efficient core, including the microarchitecture, Instruction Set Architecture (ISA) and RRNS-centered algorithms. Finally, from the simulation results, this RRNS system can reduce the energy-delay product by about 3× for multiplication-intensive workloads and by about 2× in general, when compared to a non-error-correcting binary core.

  8. Determination of performance characteristics of scientific applications on IBM Blue Gene/Q

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evangelinos, C.; Walkup, R. E.; Sachdeva, V.

    The IBM Blue Gene®/Q platform presents scientists and engineers with a rich set of hardware features such as 16 cores per chip sharing a Level 2 cache, a wide SIMD (single-instruction, multiple-data) unit, a five-dimensional torus network, and hardware support for collective operations. Especially important is the feature related to cores that have four “hardware threads,” which makes it possible to hide latencies and obtain a high fraction of the peak issue rate from each core. All of these hardware resources present unique performance-tuning opportunities on Blue Gene/Q. We provide an overview of several important applications and solvers and study them on Blue Gene/Q using performance counters and Message Passing Interface profiles. We also discuss how Blue Gene/Q tools help us understand the interaction of the application with the hardware and software layers and provide guidance for optimization. Furthermore, on the basis of our analysis, we discuss code improvement strategies targeting Blue Gene/Q. Information about how these algorithms map to the Blue Gene® architecture is expected to have an impact on future system design as we move to the exascale era.

  9. Multichannel Baseband Processor for Wideband CDMA

    NASA Astrophysics Data System (ADS)

    Jalloul, Louay M. A.; Lin, Jim

    2005-12-01

    The system architecture of the cellular base station modem engine (CBME) is described. The CBME is a single-chip multichannel transceiver capable of processing and demodulating signals from multiple users simultaneously. It is optimized to process different classes of code-division multiple-access (CDMA) signals. The paper will show that through key functional system partitioning, tightly coupled small digital signal processing cores, and time-sliced reuse architecture, CBME is able to achieve a high degree of algorithmic flexibility while maintaining efficiency. The paper will also highlight the implementation and verification aspects of the CBME chip design. In this paper, wideband CDMA is used as an example to demonstrate the architecture concept.

  10. Core domains of shared decision-making during psychiatric visits: scientific and preference-based discussions.

    PubMed

    Fukui, Sadaaki; Matthias, Marianne S; Salyers, Michelle P

    2015-01-01

    Shared decision-making (SDM) is imperative to person-centered care, yet little is known about what aspects of SDM are targeted during psychiatric visits. This secondary data analysis (191 psychiatric visits with 11 providers, coded with a validated SDM coding system) revealed two factors (scientific and preference-based discussions) underlying SDM communication. Preference-based discussion occurred less. Both provider and consumer initiation of SDM elements and decision complexity were associated with greater discussions in both factors, but were more strongly associated with scientific discussion. Longer visit length correlated with only scientific discussion. Providers' understanding of core domains could facilitate engaging consumers in SDM.

  11. Cross-separatrix Coupling in Nonlinear Global Electrostatic Turbulent Transport in C-2U

    NASA Astrophysics Data System (ADS)

    Lau, Calvin; Fulton, Daniel; Bao, Jian; Lin, Zhihong; Binderbauer, Michl; Tajima, Toshiki; Schmitz, Lothar; TAE Team

    2017-10-01

    In recent years, the progress of the C-2/C-2U advanced beam-driven field-reversed configuration (FRC) experiments at Tri Alpha Energy, Inc. has pushed FRCs into transport-limited regimes. Understanding particle and energy transport is a vital step towards an FRC reactor, and two particle-in-cell microturbulence codes, the Gyrokinetic Toroidal Code (GTC) and A New Code (ANC), are being developed and applied toward this goal. Previous local electrostatic GTC simulations find the core to be robustly stable, with drift-wave instability only in the scrape-off layer (SOL) region. However, experimental measurements showed fluctuations in both regions; one possibility is that fluctuations in the core originate from the SOL, suggesting the need for non-local simulations with cross-separatrix coupling. Current global ANC simulations with gyrokinetic ions and adiabatic electrons find that non-local effects (1) modify linear growth rates and frequencies of instabilities and (2) allow instability to move from the unstable SOL to the linearly stable core. Nonlinear spreading is also seen prior to mode saturation. We also report on the progress of the first turbulence simulations in the SOL. This work is supported by the Norman Rostoker Fellowship.

  12. Parallelization of GeoClaw code for modeling geophysical flows with adaptive mesh refinement on many-core systems

    USGS Publications Warehouse

    Zhang, S.; Yuen, D.A.; Zhu, A.; Song, S.; George, D.L.

    2011-01-01

    We parallelized the GeoClaw code on a one-level grid using OpenMP in March 2011 to meet the urgent need of simulating near-shore tsunami waves from the 2011 Tohoku event, and achieved over 75% of the potential speed-up on an eight-core Dell Precision T7500 workstation [1]. After submitting that work to SC11, the International Conference for High Performance Computing, we obtained an unreleased OpenMP version of GeoClaw from David George, who developed the GeoClaw code as part of his Ph.D. thesis. In this paper, we show the complementary characteristics of the two approaches used in parallelizing GeoClaw and the speed-up obtained by combining the advantages of the two individual approaches with adaptive mesh refinement (AMR), demonstrating the capability of running GeoClaw efficiently on many-core systems. We also show a novel simulation of the 2011 Tohoku tsunami waves inundating the Sendai airport and the Fukushima nuclear power plants, in which the finest grid distance of 20 meters is achieved through a 4-level AMR. This simulation yields quite good predictions of the wave heights and travel time of the tsunami waves. © 2011 IEEE.

  13. Big Software for SmallSats: Adapting CFS to CubeSat Missions

    NASA Technical Reports Server (NTRS)

    Cudmore, Alan P.; Crum, Gary; Sheikh, Salman; Marshall, James

    2015-01-01

    Expanding capabilities and mission objectives for SmallSats and CubeSats is driving the need for reliable, reusable, and robust flight software. While missions are becoming more complicated and the scientific goals more ambitious, the level of acceptable risk has decreased. Design challenges are further compounded by budget and schedule constraints that have not kept pace. NASA's Core Flight Software System (cFS) is an open source solution which enables teams to build flagship satellite level flight software within a CubeSat schedule and budget. NASA originally developed cFS to reduce mission and schedule risk for flagship satellite missions by increasing code reuse and reliability. The Lunar Reconnaissance Orbiter, which launched in 2009, was the first of a growing list of Class B rated missions to use cFS. Large parts of cFS are now open source, which has spurred adoption outside of NASA. This paper reports on the experiences of two teams using cFS for current CubeSat missions. The performance overheads of cFS are quantified, and the reusability of code between missions is discussed. The analysis shows that cFS is well suited to use on CubeSats and demonstrates the portability and modularity of cFS code.

  14. The gene coding for small ribosomal subunit RNA in the basidiomycete Ustilago maydis contains a group I intron.

    PubMed Central

    De Wachter, R; Neefs, J M; Goris, A; Van de Peer, Y

    1992-01-01

    The nucleotide sequence of the gene coding for small ribosomal subunit RNA in the basidiomycete Ustilago maydis was determined. It revealed the presence of a group I intron with a length of 411 nucleotides. This is the third occurrence of such an intron discovered in a small subunit rRNA gene encoded by a eukaryotic nuclear genome. The other two occurrences are in Pneumocystis carinii, a fungus of uncertain taxonomic status, and Ankistrodesmus stipitatus, a green alga. The nucleotides of the conserved core structure of 101 group I intron sequences present in different genes and genome types were aligned and their evolutionary relatedness was examined. This revealed a cluster including all group I introns hitherto found in eukaryotic nuclear genes coding for small and large subunit rRNAs. A secondary structure model was designed for the area of the Ustilago maydis small ribosomal subunit RNA precursor where the intron is situated. It shows that the internal guide sequence pairing with the intron boundaries fits between two helices of the small subunit rRNA, and that minimal rearrangement of base pairs suffices to achieve the definitive secondary structure of the 18S rRNA upon splicing. PMID:1561081

  15. Improvement of Speckle Contrast Image Processing by an Efficient Algorithm.

    PubMed

    Steimers, A; Farnung, W; Kohl-Bareis, M

    2016-01-01

    We demonstrate an efficient algorithm for the temporally and spatially based calculation of speckle contrast for the imaging of blood flow by laser speckle contrast analysis (LASCA). It reduces the numerical complexity of the necessary calculations, facilitates multi-core and many-core implementations of the speckle analysis, and makes temporal or spatial resolution independent of SNR. The new algorithm was evaluated for both spatially and temporally based analysis of speckle patterns with different image sizes and numbers of recruited pixels, as sequential, multi-core and many-core code.
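
    The spatial variant of this computation can be made efficient with summed-area tables, which reduce each sliding-window mean and variance to a handful of array operations regardless of window size. The sketch below is a generic formulation of that idea (K = sigma/mu per window), not the authors' exact algorithm.

        import numpy as np

        def window_sums(a, w):
            """Sum of a over every w-by-w window, via a padded summed-area table."""
            s = np.cumsum(np.cumsum(a, axis=0), axis=1)
            s = np.pad(s, ((1, 0), (1, 0)))
            return s[w:, w:] - s[:-w, w:] - s[w:, :-w] + s[:-w, :-w]

        def speckle_contrast(img, w=7):
            """Spatial speckle contrast K = sigma/mu over each w-by-w window."""
            n = float(w * w)
            img = img.astype(np.float64)
            mean = window_sums(img, w) / n
            var = np.maximum(window_sums(img**2, w) / n - mean**2, 0.0)  # clamp rounding
            return np.sqrt(var) / mean

        # Synthetic test: fully developed speckle (exponential intensity) has K near 1.
        rng = np.random.default_rng(1)
        img = rng.exponential(scale=100.0, size=(512, 512))
        print("mean K:", speckle_contrast(img, w=7).mean())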

  16. TRAC-PD2 posttest analysis of CCTF Test C1-16 (Run 025). [Cylindrical Core Test Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sugimoto, J.

    The TRAC-PD2 code version was used to analyze CCTF Test C1-16 (Run 025). The results indicate that the core heater rod temperatures, the liquid mass in the vessel, and differential pressures in the primary loop are predicted well, but the void fraction distribution in the core and water accumulation in the upper plenum are not in good agreement with the data.

  17. Extended burnup core management for once-through uranium fuel cycles in LWRS. First annual report for the period 1 July 1979-30 June 1980

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sesonske, A.

    1980-08-01

    Detailed core management arrangements are developed requiring four operating cycles for the transition from present three-batch loading to an extended burnup four-batch plan for Zion-1. The ARMP code EPRI-NODE-P was used for core modeling. Although this work is preliminary, uranium and economic savings during the transition cycles appear of the order of 6 percent.

  18. A feasibility study of reactor-based deep-burn concepts.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, T. K.; Taiwo, T. A.; Hill, R. N.

    2005-09-16

    A systematic assessment of the General Atomics (GA) proposed Deep-Burn concept based on the Modular Helium-Cooled Reactor design (DB-MHR) has been performed. Preliminary benchmarking of deterministic physics codes was done by comparing code results to those from MONTEBURNS (MCNP-ORIGEN) calculations. Detailed fuel cycle analyses were performed in order to provide an independent evaluation of the physics and transmutation performance of the one-pass and two-pass concepts. Key performance parameters such as transuranic consumption, reactor performance, and spent fuel characteristics were analyzed. This effort has been undertaken in close collaboration with the General Atomics design team and the Brookhaven National Laboratory evaluation team. The study was performed primarily for a 600 MWt reference DB-MHR design having a power density of 4.7 MW/m3. Based on a parametric and sensitivity study, it was determined that the maximum burnup (TRU consumption) can be obtained using optimum values of 200 μm and 20% for the fuel kernel diameter and fuel packing fraction, respectively. These values were retained for most of the one-pass and two-pass design calculations; variation of the packing fraction was necessary for the second stage of the two-pass concept. Using a four-batch fuel management scheme for the one-pass DB-MHR core, it was possible to obtain a TRU consumption of 58% and a cycle length of 286 EFPD. By increasing the core power to 800 MWt and the power density to 6.2 MW/m3, it was possible to increase the TRU consumption to 60%, although the cycle length decreased by ~64 days. The higher TRU consumption (burnup) is due to the reduction of the in-core decay of fissile Pu-241 to Am-241 relative to fission, arising from the higher power density (specific power), which kept the fuel more reactive over time. It was also found that the TRU consumption can be improved by utilizing axial fuel shuffling or by operating with lower material temperatures (colder core). Results also showed that the transmutation performance of the one-pass deep-burn concept is sensitive to the initial TRU vector, primarily because longer cooling time reduces the fissile content (specifically Pu-241). With a cooling time of 5 years, the TRU consumption increases to 67%; conversely, with 20-year cooling the TRU consumption is about 58%. For the two-pass DB-MHR (TRU recycling option), a fuel packing fraction of about 30% is required in the second pass (the recycled TRU). It was found that, using a heterogeneous core (homogeneous fuel element) concept, the TRU consumption is dependent on the cooling interval before the second pass, again due to Pu-241 decay during the time lag between the first-pass fuel discharge and the second-pass fuel charge. With a cooling interval of 7 years (5 and 2 years before and after reprocessing) a TRU consumption of 55% is obtained. With an assumed "no cooling" interval, the TRU consumption is 63%. By using a cylindrical core to reduce neutron leakage, the TRU consumption of the case with a 7-year cooling interval increases to 58%. For a two-pass concept using a heterogeneous fuel element (and homogeneous core) with a first- to second-pass volume ratio of 2:1, the TRU consumption is 62.4%. Finally, the repository loading benefits arising from the deep-burn and Inert Matrix Fuel (IMF) concepts were estimated and compared for the same initial TRU vector. The DB-MHR concept resulted in slightly higher TRU consumption and repository loading benefit compared to the IMF concept (58.1% versus 55.1% for TRU consumption and 2.0 versus 1.6 for estimated repository loading benefit).

  19. Power Aware Signal Processing Environment (PASPE) for PAC/C

    DTIC Science & Technology

    2003-02-01

    [Figure: processing time vs. FFT size.] For our implementation, the Annapolis FFT core was radix-256, and therefore the smallest PN code length that could be processed was the … PN-64. A C-code version of correlate was compared to the FPGA implementation. The results in Figure 68 show that for a PN-1024, the … Approved for public release; distribution unlimited.

  20. Modelling the behaviour of oxide fuels containing minor actinides with urania, thoria and zirconia matrices in an accelerator-driven system

    NASA Astrophysics Data System (ADS)

    Sobolev, V.; Lemehov, S.; Messaoudi, N.; Van Uffelen, P.; Aït Abderrahim, H.

    2003-06-01

    The Belgian Nuclear Research Centre, SCK•CEN, is currently working on the pre-design of the multipurpose accelerator-driven system (ADS) MYRRHA. A demonstration of the possibility of transmutation of minor actinides and long-lived fission products, with a realistic design of experimental fuel targets and a prognosis of their behaviour under typical ADS conditions, is an important task in the MYRRHA project. In the present article, the irradiation behaviour of three different oxide fuel mixtures containing americium and plutonium - (Am,Pu,U)O2-x with a urania matrix, (Am,Pu,Th)O2-x with a thoria matrix and (Am,Y,Pu,Zr)O2-x with an inert zirconia matrix stabilised by yttria - was simulated with the new fuel performance code MACROS, which is under development and testing at SCK•CEN. All the fuel rods were considered to be of the same design and dimensions: annular fuel pellets, helium-bonded to the stainless steel cladding, and a large gas plenum. The liquid lead-bismuth eutectic was used as coolant. Typical irradiation conditions of the hottest fuel assembly of the MYRRHA subcritical core were pre-calculated with the MCNPX code and used in the subsequent calculations as input data. The predicted thermo-mechanical behaviour of the designed rods with the considered fuels during three irradiation cycles of 90 EFPD is presented and discussed.

  1. CESAR5.3: Isotopic depletion for Research and Testing Reactor decommissioning

    NASA Astrophysics Data System (ADS)

    Ritter, Guillaume; Eschbach, Romain; Girieud, Richard; Soulard, Maxime

    2018-05-01

    CESAR stands in French for "simplified depletion applied to reprocessing". The current version is number 5.3, as the code grew out of a cooperation started 30 years ago with ORANO, co-owner of the code with CEA. This computer code can characterize several types of nuclear fuel assemblies, from the most regular PWR power plants to the most unexpected gas-cooled, graphite-moderated old-timer research facilities. Each type of fuel can also cover numerous ranges of compositions, such as UOX, MOX, LEU or HEU. Such versatility comes from a broad catalog of cross section libraries, each corresponding to a specific reactor and fuel matrix design. CESAR goes beyond fuel characterization and can also provide an evaluation of structural material activation. The cross section libraries are generated using the most refined assembly- or core-level transport code calculation schemes (CEA APOLLO2 or ERANOS), based on the European JEFF3.1.1 nuclear data base. Each new CESAR self-shielded cross section library benefits from the most recent CEA recommendations for deterministic physics options. The resulting cross sections are organized as a function of burnup and initial fuel enrichment, which allows this costly process to be condensed into a series of Legendre polynomials. The final outcome is a fast, accurate and compact CESAR cross section library. Each library is fully validated, against a stochastic transport code (CEA TRIPOLI 4) if needed, and against a reference depletion code (CEA DARWIN). Using CESAR does not require any of the neutron physics expertise implemented in cross section library generation. It is based on top-quality nuclear data (JEFF3.1.1 for ~400 isotopes) and includes up-to-date Bateman equation solving algorithms. However, defining a CESAR computation case can be very straightforward. Most results are only three steps away from any beginner's ambition: initial composition, in-core depletion, and pool decay scenario. On top of a simple utilization architecture, CESAR includes a portable Graphical User Interface which can be broadly deployed in R&D or industrial facilities. Aging facilities currently face decommissioning and dismantling issues. This stage at the end of the nuclear fuel cycle requires a careful assessment of source terms in the fuel, core structures and all parts of a facility that must be disposed of under "industrial nuclear" constraints. In that perspective, several CESAR cross section libraries were constructed for early CEA Research and Testing Reactors (RTRs). The aim of this paper is to describe how CESAR operates and how it can be used to help these facilities handle waste disposal, nuclear materials transport or basic safety cases. The test case is based on the PHEBUS facility located at CEA Cadarache.
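
    At the heart of any depletion code of this kind is the Bateman system dN/dt = A N, where the matrix A collects decay constants and transmutation rates. A compact way to advance it, sketched below for a toy three-nuclide decay chain, is the matrix exponential N(t) = expm(A t) N(0); CESAR's actual solver and nuclear data are, of course, far richer.

        import numpy as np
        from scipy.linalg import expm

        # Toy decay chain A -> B -> C with decay constants lam_a, lam_b [1/s].
        lam_a, lam_b = 1.0e-3, 5.0e-4
        A = np.array([[-lam_a,  0.0,    0.0],
                      [ lam_a, -lam_b,  0.0],
                      [ 0.0,    lam_b,  0.0]])   # C is stable

        N0 = np.array([1.0e20, 0.0, 0.0])        # initial atom inventories
        for t in (0.0, 1.0e3, 1.0e4):            # seconds
            N = expm(A * t) @ N0
            print(f"t={t:8.0f} s  N={N}")
        print("atom conservation:", np.isclose(N.sum(), N0.sum()))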

  2. Method for detecting core malware sites related to biomedical information systems.

    PubMed

    Kim, Dohoon; Choi, Donghee; Jin, Jonghyun

    2015-01-01

    Most advanced persistent threat attacks target web users through malicious code within landing (exploit) or distribution sites. There is an urgent need to block the affected websites. Attacks on biomedical information systems are no exception to this issue. In this paper, we present a method for locating malicious websites that attempt to attack biomedical information systems. Our approach uses malicious code crawling to rearrange websites in the order of their risk index by analyzing the centrality between malware sites and proactively eliminates the root of these sites by finding the core-hub node, thereby reducing unnecessary security policies. In particular, we dynamically estimate the risk index of the affected websites by analyzing various centrality measures and converting them into a single quantified vector. On average, the proactive elimination of core malicious websites results in an average improvement in zero-day attack detection of more than 20%.
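
    The centrality-to-risk-index step can be sketched with networkx: compute several centrality measures over the link graph between suspected sites, z-score each, and combine them into one ranking from which the core-hub node is taken. The graph and the equal-weight combination below are illustrative; the paper's own weighting scheme may differ.

        import networkx as nx
        import statistics

        # Hypothetical redirection/link graph among suspected malware-related sites.
        edges = [("landing1", "hub"), ("landing2", "hub"), ("landing3", "hub"),
                 ("hub", "dropper1"), ("hub", "dropper2"), ("landing2", "dropper1"),
                 ("mirror", "landing1"), ("mirror", "landing2")]
        G = nx.DiGraph(edges)

        measures = {
            "in_degree": nx.in_degree_centrality(G),
            "betweenness": nx.betweenness_centrality(G),
            "pagerank": nx.pagerank(G),
        }

        def zscores(d):
            mu = statistics.mean(d.values())
            sd = statistics.pstdev(d.values()) or 1.0
            return {k: (v - mu) / sd for k, v in d.items()}

        # Equal-weight composite risk index (one quantified score per site).
        risk = {n: 0.0 for n in G}
        for d in measures.values():
            for n, z in zscores(d).items():
                risk[n] += z / len(measures)

        ranking = sorted(risk, key=risk.get, reverse=True)
        print("risk ranking:", ranking)
        print("core-hub candidate:", ranking[0])   # eliminate this node first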

  3. Method for Detecting Core Malware Sites Related to Biomedical Information Systems

    PubMed Central

    Kim, Dohoon; Choi, Donghee; Jin, Jonghyun

    2015-01-01

    Most advanced persistent threat attacks target web users through malicious code within landing (exploit) or distribution sites. There is an urgent need to block the affected websites. Attacks on biomedical information systems are no exception to this issue. In this paper, we present a method for locating malicious websites that attempt to attack biomedical information systems. Our approach uses malicious code crawling to rearrange websites in the order of their risk index by analyzing the centrality between malware sites and proactively eliminates the root of these sites by finding the core-hub node, thereby reducing unnecessary security policies. In particular, we dynamically estimate the risk index of the affected websites by analyzing various centrality measures and converting them into a single quantified vector. On average, the proactive elimination of core malicious websites results in an average improvement in zero-day attack detection of more than 20%. PMID:25821511

  4. Rapid algorithm prototyping and implementation for power quality measurement

    NASA Astrophysics Data System (ADS)

    Kołek, Krzysztof; Piątek, Krzysztof

    2015-12-01

    This article presents a Model-Based Design (MBD) approach to rapidly implementing power quality (PQ) metering algorithms. Power supply quality is a very important aspect of modern power systems and will become even more important in future smart grids, where maintaining PQ parameters at the desired level will require efficient implementation methods for the metering algorithms. Currently, the development of new, advanced PQ metering algorithms requires new hardware with adequate computational capability and time-intensive, cost-ineffective manual implementation. An alternative, considered here, is the MBD approach, which focuses on modelling and validation of the model by simulation, well supported by Computer-Aided Engineering (CAE) packages. This paper presents two algorithms utilized in modern PQ meters: a phase-locked loop based on an Enhanced Phase Locked Loop (EPLL), and flicker measurement according to the IEC 61000-4-15 standard. The algorithms were chosen because of their complexity and non-trivial development. They were first modelled in the MATLAB/Simulink package, then tested and validated in a simulation environment. The models, in the form of Simulink diagrams, were next used to automatically generate C code. The code was compiled and executed in real time on the Zynq Xilinx platform, which combines a reconfigurable Field Programmable Gate Array (FPGA) with a dual-core processor. The MBD development of PQ algorithms, automatic code generation, and compilation form a rapid algorithm prototyping and implementation path for PQ measurements. The main advantage of this approach is the ability to focus on the design, validation, and testing stages while skipping over implementation issues. The code generation process renders production-ready code that can be easily used on the target hardware. This is especially important as standards for PQ measurement are in constant development, and PQ issues in emerging smart grids will require tools for rapid development and implementation of such algorithms.
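
    An EPLL of the type mentioned above tracks the amplitude, phase, and frequency of a (possibly distorted) input with three coupled update laws. The forward-Euler sketch below follows the commonly published EPLL structure; the gains are illustrative and would need tuning (and, in the MBD flow, would come from the validated Simulink model rather than hand-written code).

        import math

        # Enhanced PLL (EPLL): estimates amplitude A, phase phi, and frequency w of
        # u(t) ~ A*sin(phi). Update laws (forward Euler, gains are assumptions):
        #   e    = u - A*sin(phi)
        #   dA   = mu1 * e * sin(phi)
        #   dw   = mu2 * e * cos(phi)
        #   dphi = w + mu3 * e * cos(phi)
        FS = 10_000.0                 # sample rate [Hz]
        DT = 1.0 / FS
        MU1, MU2, MU3 = 200.0, 20_000.0, 200.0   # illustrative loop gains

        A, w, phi = 0.5, 2.0 * math.pi * 50.0, 0.0   # start near the nominal 50 Hz
        for n in range(int(2.0 * FS)):               # two seconds of 49.5 Hz input
            t = n * DT
            u = 1.1 * math.sin(2.0 * math.pi * 49.5 * t + 0.3)
            e = u - A * math.sin(phi)
            A += DT * MU1 * e * math.sin(phi)
            w += DT * MU2 * e * math.cos(phi)
            phi += DT * (w + MU3 * e * math.cos(phi))

        print(f"estimated amplitude {A:.3f}, frequency {w / (2.0 * math.pi):.2f} Hz")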

  5. PlasmaPy: beginning a community developed Python package for plasma physics

    NASA Astrophysics Data System (ADS)

    Murphy, Nicholas A.; Huang, Yi-Min; PlasmaPy Collaboration

    2016-10-01

    In recent years, researchers in several disciplines have collaborated on community-developed open source Python packages such as Astropy, SunPy, and SpacePy. These packages provide core functionality, common frameworks for data analysis and visualization, and educational tools. We propose that our community begin the development of PlasmaPy: a new open source core Python package for plasma physics. PlasmaPy could include commonly used functions in plasma physics, easy-to-use plasma simulation codes, Grad-Shafranov solvers, eigenmode solvers, and tools to analyze both simulations and experiments. The development will follow modern programming practices such as version control, embedding documentation in the code, unit tests, and avoiding premature optimization. We will describe early code development on PlasmaPy and discuss plans moving forward. The success of PlasmaPy depends on active community involvement and a welcoming and inclusive environment, so anyone interested in joining this collaboration should contact the authors.

  6. MOOSE Implementation of MAMBA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Galloway, Jack; Matthews, Topher

    The development of MAMBA is targeted at capturing both core-wide CRUD induced power shifts (CIPS) and pin-level CRUD induced localized corrosion (CILC). Both CIPS and CILC require information from thermal-hydraulic, neutronics, and fuel performance codes, although the degree of coupling differs for the two effects. Since CIPS necessarily requires a core-wide power distribution solve, it requires tight coupling with a neutronics code. Conversely, CILC tends to be an individual-pin phenomenon, requiring tight coupling with a fuel performance code. As efforts are now focused on coupling MAMBA within the VERA suite, a natural separation has surfaced in which a FORTRAN rewrite of MAMBA is optimal for VERA integration to capture CIPS behavior, while a CILC-focused calculation would benefit from a tight coupling with BISON, motivating a MOOSE version of MAMBA.

  7. Reporting Guidelines: Optimal Use in Preventive Medicine and Public Health

    PubMed Central

    Popham, Karyn; Calo, William A.; Carpentier, Melissa Y.; Chen, Naomi E.; Kamrudin, Samira A.; Le, Yen-Chi L.; Skala, Katherine A.; Thornton, Logan R.; Mullen, Patricia Dolan

    2012-01-01

    Numerous reporting guidelines are available to help authors write higher quality manuscripts more efficiently. Almost 200 are listed on the EQUATOR (Enhancing the Quality and Transparency of Health Research) Network’s website and they vary in authority, usability, and breadth, making it difficult to decide which one(s) to use. This paper provides consistent information about guidelines for preventive medicine and public health and a framework and sequential approach for selecting them. EQUATOR guidelines were reviewed for relevance to target audiences; selected guidelines were classified as “core” (frequently recommended) or specialized, and the latter were grouped by their focus. Core and specialized guidelines were coded for indicators of authority (simultaneous publication in multiple journals, rationale, scientific background supporting each element, expertise of designers, permanent website/named group), usability (presence of checklists and examples of good reporting), and breadth (manuscript sections covered). Discrepancies were resolved by consensus. Selected guidelines are presented in four tables arranged to facilitate selection: core guidelines, all of which pertain to major research designs; guidelines for additional study designs, topical guidelines, and guidelines for particular manuscript sections. A flow diagram provides an overview. The framework and sequential approach will enable authors as well as editors, peer reviewers, researchers, and systematic reviewers to make optimal use of available guidelines to improve the transparency, clarity, and rigor of manuscripts and research protocols and the efficiency of conducting systematic reviews and meta-analyses. PMID:22992369

  8. Code manual for CONTAIN 2.0: A computer code for nuclear reactor containment analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murata, K.K.; Williams, D.C.; Griffith, R.O.

    1997-12-01

    The CONTAIN 2.0 computer code is an integrated analysis tool used for predicting the physical conditions, chemical compositions, and distributions of radiological materials inside a containment building following the release of material from the primary system in a light-water reactor accident. It can also predict the source term to the environment. CONTAIN 2.0 is intended to replace the earlier CONTAIN 1.12, which was released in 1991. The purpose of this Code Manual is to provide full documentation of the features and models in CONTAIN 2.0. Besides complete descriptions of the models, this Code Manual provides a complete description of the input and output from the code. CONTAIN 2.0 is a highly flexible and modular code that can run problems that are either quite simple or highly complex. An important aspect of CONTAIN is that the interactions among thermal-hydraulic phenomena, aerosol behavior, and fission product behavior are taken into account. The code includes atmospheric models for steam/air thermodynamics, intercell flows, condensation/evaporation on structures and aerosols, aerosol behavior, and gas combustion. It also includes models for reactor cavity phenomena such as core-concrete interactions and coolant pool boiling. Heat conduction in structures, fission product decay and transport, radioactive decay heating, and the thermal-hydraulic and fission product decontamination effects of engineered safety features are also modeled. To the extent possible, the best available models for severe accident phenomena have been incorporated into CONTAIN, but it is intrinsic to the nature of accident analysis that significant uncertainty exists regarding numerous phenomena. In those cases, sensitivity studies can be performed with CONTAIN by means of user-specified input parameters. Thus, the code can be viewed as a tool designed to assist the knowledgeable reactor safety analyst in evaluating the consequences of specific modeling assumptions.

  9. Numerical Prediction of Chevron Nozzle Noise Reduction using Wind-MGBK Methodology

    NASA Technical Reports Server (NTRS)

    Engblom, W.A.; Bridges, J.; Khavarant, A.

    2005-01-01

    Numerical predictions for single-stream chevron nozzle flow performance and farfield noise production are presented. Reynolds Averaged Navier Stokes (RANS) solutions, produced via the WIND flow solver, are provided as input to the MGBK code for prediction of farfield noise distributions. This methodology is applied to a set of sensitivity cases involving varying degrees of chevron inward bend angle relative to the core flow, for both cold and hot exhaust conditions. The sensitivity study results illustrate the effect of increased chevron bend angle and exhaust temperature on enhancement of fine-scale mixing, initiation of core breakdown, nozzle performance, and noise reduction. Direct comparisons with experimental data, including stagnation pressure and temperature rake data, PIV turbulent kinetic energy fields, and 90 degree observer farfield microphone data are provided. Although some deficiencies in the numerical predictions are evident, the correct farfield noise spectra trends are captured by the WIND-MGBK method, including the noise reduction benefit of chevrons. Implications of these results to future chevron design efforts are addressed.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huml, O.

    The objective of this work was to determine the neutron flux density distribution in various places of the training reactor VR-1 Sparrow. This experiment was performed on the new core design C1, composed of the new low-enriched uranium fuel cells IRT-4M (19.7%). This fuel replaced the old high-enriched uranium fuel IRT-3M (36%) within the framework of the RERTR Program in September 2005. The measurement used the neutron activation analysis method with gold wires. The principle of this method consists in neutron capture in a nucleus of the material forming the activation detector. This capture can change the nucleus into a radioisotope, whose activity can be measured. The absorption cross-section values were evaluated by the MCNP computer code. The gold wires were irradiated in seven different positions in the core C1. All irradiations were performed at reactor power level 1E8 (1 kW thermal). The activity of segments of irradiated wires was measured by a special automatic device called 'Drat' ('Wire' in English). (author)
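
    The activation principle described above can be illustrated with the standard saturation-activity relation A = φσN(1 − e^(−λt)). The sketch below uses published gold-197 constants, but the flux and wire mass are illustrative values, not the experiment's.

        # Back-of-envelope activation estimate; flux and wire mass are illustrative.
        import math

        def wire_activity(flux, sigma_cm2, n_atoms, half_life_s, t_irr_s):
            """Activity (Bq) induced in an activation detector after irradiation."""
            lam = math.log(2) / half_life_s      # decay constant of the product
            return flux * sigma_cm2 * n_atoms * (1 - math.exp(-lam * t_irr_s))

        # Au-197 thermal capture cross-section ~98.7 b; Au-198 half-life ~2.695 d.
        n_au = 1e-3 / 196.97 * 6.022e23          # atoms in a 1 mg wire segment
        activity = wire_activity(flux=1e10,      # illustrative flux, n/cm^2/s
                                 sigma_cm2=98.7e-24, n_atoms=n_au,
                                 half_life_s=2.695 * 86400, t_irr_s=3600.0)
        print(activity)  # inverting the measured activity yields the local flux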

  11. STRS SpaceWire FPGA Module

    NASA Technical Reports Server (NTRS)

    Lux, James P.; Taylor, Gregory H.; Lang, Minh; Stern, Ryan A.

    2011-01-01

    An FPGA module leverages the previous work from Goddard Space Flight Center (GSFC) relating to NASA's Space Telecommunications Radio System (STRS) project. The STRS SpaceWire FPGA Module is written in the Verilog Register Transfer Level (RTL) language, and it encapsulates an unmodified GSFC core (which is written in VHDL). The module has the necessary inputs/outputs (I/Os) and parameters to integrate seamlessly with the SPARC I/O FPGA Interface module (also developed for the STRS operating environment, OE). Software running on the SPARC processor can access the configuration and status registers within the SpaceWire module. This allows software to control and monitor the SpaceWire functions, but it is also used to give software direct access to what is transmitted and received through the link. SpaceWire data characters can be sent/received through the software interface, as well as through the dedicated interface on the GSFC core. Similarly, SpaceWire time codes can be sent/received through the software interface or through a dedicated interface on the core. This innovation is designed for plug-and-play integration in the STRS OE. The SpaceWire module simplifies the interfaces to the GSFC core, and synchronizes all I/O to a single clock. An interrupt output (with optional masking) identifies time-sensitive events within the module. Test modes were added to allow internal loopback of the SpaceWire link and internal loopback of the client-side data interface.

  12. LFRic: Building a new Unified Model

    NASA Astrophysics Data System (ADS)

    Melvin, Thomas; Mullerworth, Steve; Ford, Rupert; Maynard, Chris; Hobson, Mike

    2017-04-01

    The LFRic project, named for Lewis Fry Richardson, aims to develop a replacement for the Met Office Unified Model in order to meet the challenges which will be presented by the next generation of exascale supercomputers. This project, a collaboration between the Met Office, STFC Daresbury and the University of Manchester, builds on the earlier GungHo project to redesign the dynamical core, in partnership with NERC. The new atmospheric model aims to retain the performance of the current ENDGame dynamical core and associated subgrid physics, while also enabling far greater scalability and flexibility to accommodate future supercomputer architectures. Design of the model revolves around a principle of a 'separation of concerns', whereby the natural science aspects of the code can be developed without worrying about the underlying architecture, while machine dependent optimisations can be carried out at a high level. These principles are put into practice through the development of an autogenerated Parallel Systems software layer (known as the PSy layer) using a domain-specific compiler called PSyclone. The prototype model includes a re-write of the dynamical core using a mixed finite element method, in which different function spaces are used to represent the various fields. It is able to run in parallel with MPI and OpenMP and has been tested on over 200,000 cores. In this talk an overview of both the natural science and computational science implementations of the model will be presented.

  13. Well logs and core data from selected cored intervals, National Petroleum Reserve, Alaska

    USGS Publications Warehouse

    Nelson, Philip H.; Kibler, Joyce E.

    2001-01-01

    This report is preliminary and has not been reviewed for conformity with U.S. Geological Survey editorial standards or with the North American Stratigraphic Code. Any use of trade, firm, or product names is for descriptive purposes only and does not imply endorsement by the U.S. Government.

  14. Organization patterns of the AGFG genes: an evolutionary study.

    PubMed

    Panaro, Maria Antonietta; Acquafredda, Angela; Calvello, Rosa; Lisi, Sabrina; Dragone, Teresa; Cianciulli, Antonia

    2011-03-01

    A number of proteins which are needed for the building of new immunodeficiency virus type 1 virions can only be translated from unspliced virus-derived pre-mRNAs. These unspliced mRNAs are shuttled through the nuclear pores reaching the cytosol when bound to the viral protein Rev. However, as a cellular co-factor Rev requires a Rev-binding protein of the AGFG family (nucleoporin-related Arf-GAP domain and FG repeats-containing proteins). In this article we address the evolution of the AGFGs by analyzing the first section of the coding mRNAs. This contains a "core module" which can be traced from Drosophilae to fish, amphibia, birds, and mammals, including man. In the subfamily of AGFG1 molecules the estimated conservation from Drosophilae to primates is 67% (with limited gaps). In some Drosophilae the core module is preceded by a long stretch of more than 300 coding nucleotides, but this additional module is absent in other Drosophilae and in all AGFG1s of other species. The AGFG2 molecules emerged later in evolution, possibly deriving from a duplication of AGFG1s. AGFG2s, present in mammals only, exhibit an additional module of about 50 coding nucleotides ahead of the core module, which is significantly less conserved (54%, with more remarkable gaps). This additional module does not seem to have homologies with the additional module of Drosophilae nor with the precoding section of AGFG1s. Interestingly, in birds a highly re-edited form of the AGFG1 core module (Gallus gallus, Galliformes) coexists with a typical form of the AGFG1 core module (Taeniopygia guttata, Passeriformes).

  15. Bifrost: a Modular Python/C++ Framework for Development of High-Throughput Data Analysis Pipelines

    NASA Astrophysics Data System (ADS)

    Cranmer, Miles; Barsdell, Benjamin R.; Price, Danny C.; Garsden, Hugh; Taylor, Gregory B.; Dowell, Jayce; Schinzel, Frank; Costa, Timothy; Greenhill, Lincoln J.

    2017-01-01

    Large radio interferometers have data rates that render long-term storage of raw correlator data infeasible, thus motivating development of real-time processing software. For high-throughput applications, processing pipelines are challenging to design and implement. Motivated by science efforts with the Long Wavelength Array, we have developed Bifrost, a novel Python/C++ framework that eases the development of high-throughput data analysis software by packaging algorithms as black box processes in a directed graph. This strategy to modularize code allows astronomers to create parallelism without code adjustment. Bifrost uses CPU/GPU 'circular memory' data buffers that enable ready introduction of arbitrary functions into the processing path for 'streams' of data, and allow pipelines to automatically reconfigure in response to astrophysical transient detection or input of new observing settings. We have deployed and tested Bifrost at the latest Long Wavelength Array station, in Sevilleta National Wildlife Refuge, NM, where it handles throughput exceeding 10 Gbps per CPU core.
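
    A toy sketch of the "black box blocks connected by circular buffers" pattern the abstract describes is given below; the class names and two-stage pipeline are invented for illustration and do not reflect Bifrost's actual API.

        # Toy pipeline-of-blocks pattern; names are illustrative, not Bifrost's API.
        from collections import deque
        from typing import Callable, List

        class RingBuffer:
            """Bounded FIFO standing in for a circular memory data buffer."""
            def __init__(self, capacity: int = 16):
                self._q = deque(maxlen=capacity)
            def put(self, item): self._q.append(item)
            def get(self): return self._q.popleft() if self._q else None

        class Block:
            """A processing stage that reads one buffer and writes another."""
            def __init__(self, fn: Callable, src: RingBuffer, dst: RingBuffer):
                self.fn, self.src, self.dst = fn, src, dst
            def step(self):
                item = self.src.get()
                if item is not None:
                    self.dst.put(self.fn(item))

        # Two-stage pipeline: scale raw samples, then threshold for "transients".
        raw, scaled, events = RingBuffer(), RingBuffer(), RingBuffer()
        pipeline: List[Block] = [
            Block(lambda x: 2 * x, raw, scaled),
            Block(lambda x: x > 10, scaled, events),
        ]
        for sample in [1, 6, 9]:
            raw.put(sample)
        for _ in range(3):
            for block in pipeline:
                block.step()
        print([events.get() for _ in range(3)])  # [False, True, True]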

  16. Uranus: a rapid prototyping tool for FPGA embedded computer vision

    NASA Astrophysics Data System (ADS)

    Rosales-Hernández, Victor; Castillo-Jimenez, Liz; Viveros-Velez, Gilberto; Zuñiga-Grajeda, Virgilio; Treviño Torres, Abel; Arias-Estrada, M.

    2007-01-01

    The starting point for all successful system development is simulation. Performing high-level simulation of a system can help to identify, isolate, and fix design problems. This work presents Uranus, a software tool for the simulation and evaluation of image processing algorithms, with support to migrate them to an FPGA environment for algorithm acceleration and embedded processing purposes. The tool includes an integrated library of previously coded operators in software and provides the necessary support to read and display image sequences as well as video files. The user can use the previously compiled soft-operators in a high-level process chain and code his own operators. In addition to the prototyping tool, Uranus offers an FPGA-based hardware architecture with the same organization as the software prototyping part. The hardware architecture contains a library of FPGA IP cores for image processing that are connected with a PowerPC-based system. The Uranus environment is intended for rapid prototyping of machine vision and migration to an FPGA accelerator platform, and it is distributed for academic purposes.

  17. Preliminary Structural Sizing and Alternative Material Trade Study of CEV Crew Module

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Arnold, Steve M.; Collier, Craig S.; Yarrington, Phillip W.

    2007-01-01

    This paper presents the results of a preliminary structural sizing and alternate material trade study for NASA's Crew Exploration Vehicle (CEV) Crew Module (CM). This critical CEV component will house the astronauts during ascent, docking with the International Space Station, reentry, and landing. The alternate material design study considers three materials beyond the standard metallic (aluminum alloy) design that resulted from an earlier NASA Smart Buyer Team analysis. These materials are graphite/epoxy composite laminates, discontinuously reinforced SiC/Al (DRA) composites, and a novel integrated panel material/concept known as WebCore. Using the HyperSizer (Collier Research and Development Corporation) structural sizing software and the NASTRAN finite element analysis code, a comparison is made among these materials for the three composite CM concepts considered by the 2006 NASA Engineering and Safety Center Composite Crew Module project.

  18. Design of verification platform for wireless vision sensor networks

    NASA Astrophysics Data System (ADS)

    Ye, Juanjuan; Shang, Fei; Yu, Chuang

    2017-08-01

    At present, the majority of research on wireless vision sensor networks (WVSNs) still remains in the software simulation stage, and the verification platforms of WVSNs that are available for use are very few. This situation seriously restricts the transformation from theoretical research on WVSNs to practical application. Therefore, it is necessary to study the construction of a verification platform for WVSNs. This paper combines a wireless transceiver module, a visual information acquisition module, and a power acquisition module, designs a high-performance wireless vision sensor node whose core is an ARM11 microprocessor, and selects AODV as the routing protocol to set up a verification platform called AdvanWorks for WVSNs. Experiments show that AdvanWorks can successfully achieve the functions of image acquisition, coding, and wireless transmission, and can obtain the effective distance parameters between nodes, which lays a good foundation for the follow-up application of WVSNs.

  19. Three-dimensional Boltzmann-Hydro Code for Core-collapse in Massive Stars. II. The Implementation of Moving-mesh for Neutron Star Kicks

    NASA Astrophysics Data System (ADS)

    Nagakura, Hiroki; Iwakami, Wakana; Furusawa, Shun; Sumiyoshi, Kohsuke; Yamada, Shoichi; Matsufuru, Hideo; Imakura, Akira

    2017-04-01

    We present a newly developed moving-mesh technique for the multi-dimensional Boltzmann-Hydro code for the simulation of core-collapse supernovae (CCSNe). What makes this technique different from others is the fact that it treats not only hydrodynamics but also neutrino transfer in the language of the 3 + 1 formalism of general relativity (GR), making use of the shift vector to specify the time evolution of the coordinate system. This means that the transport part of our code is essentially general relativistic, although in this paper it is applied only to moving curvilinear coordinates in the flat Minkowski spacetime, since the gravity part is still Newtonian. The numerical aspects of the implementation are also described in detail. Employing the axisymmetric two-dimensional version of the code, we conduct two test computations: oscillations and runaways of a proto-neutron star (PNS). We show that our new method works well, tracking the motions of the PNS correctly. We believe that this is a major advancement toward the realistic simulation of CCSNe.
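
    For reference, the role of the shift vector can be written in the standard ADM form below (textbook 3 + 1 notation, not equations taken from the paper): the shift β^i encodes how spatial coordinates, and hence the moving mesh, drift between time slices.

        % Standard 3+1 (ADM) split of the line element; \alpha is the lapse,
        % \beta^i the shift, and \gamma_{ij} the spatial metric.
        ds^2 = -\alpha^2\,dt^2
             + \gamma_{ij}\left(dx^i + \beta^i\,dt\right)\left(dx^j + \beta^j\,dt\right)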

  20. How collaboration in therapy becomes therapeutic: the therapeutic collaboration coding system.

    PubMed

    Ribeiro, Eugénia; Ribeiro, António P; Gonçalves, Miguel M; Horvath, Adam O; Stiles, William B

    2013-09-01

    The quality and strength of the therapeutic collaboration, the core of the alliance, is reliably associated with positive therapy outcomes. The urgent challenge for clinicians and researchers is constructing a conceptual framework to integrate the dialectical work that fosters collaboration, with a model of how clients make progress in therapy. We propose a conceptual account of how collaboration in therapy becomes therapeutic. In addition, we report on the construction of a coding system - the therapeutic collaboration coding system (TCCS) - designed to analyse and track on a moment-by-moment basis the interaction between therapist and client. Preliminary evidence is presented regarding the coding system's psychometric properties. The TCCS evaluates each speaking turn and assesses whether and how therapists are working within the client's therapeutic zone of proximal development, defined as the space between the client's actual therapeutic developmental level and their potential developmental level that can be reached in collaboration with the therapist. We applied the TCCS to five cases: a good and a poor outcome case of narrative therapy, a good and a poor outcome case of cognitive-behavioural therapy, and a dropout case of narrative therapy. The TCCS offers markers that may help researchers better understand the therapeutic collaboration on a moment-to-moment basis and may help therapists better regulate the relationship. © 2012 The British Psychological Society.

  1. 13 CFR 121.1103 - What are the procedures for appealing a NAICS code designation?

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... appealing a NAICS code designation? 121.1103 Section 121.1103 Business Credit and Assistance SMALL BUSINESS... Determinations and Naics Code Designations § 121.1103 What are the procedures for appealing a NAICS code... code designation and applicable size standard must be served and filed within 10 calendar days after...

  2. BNL program in support of LWR degraded-core accident analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ginsberg, T.; Greene, G.A.

    1982-01-01

    Two major sources of loading on dry water reactor containments are steam generation from core debris-water thermal interactions and molten core-concrete interactions. Experiments are in progress at BNL in support of analytical model development related to aspects of the above containment loading mechanisms. The work supports the development and evaluation of the CORCON (Muir, 1981) and MARCH (Wooton, 1980) computer codes. Progress in the two programs is described in this paper. 8 figures.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Epiney, A.; Canepa, S.; Zerkak, O.

    The STARS project at the Paul Scherrer Institut (PSI) has adopted the TRACE thermal-hydraulic (T-H) code for best-estimate system transient simulations of the Swiss Light Water Reactors (LWRs). For analyses involving interactions between system and core, a coupling of TRACE with the SIMULATE-3K (S3K) LWR core simulator has also been developed. In this configuration, the TRACE code and associated nuclear power reactor simulation models play a central role in achieving a comprehensive safety analysis capability. Thus, efforts have now been undertaken to consolidate the validation strategy by implementing a more rigorous and structured assessment approach for TRACE applications involving either only system T-H evaluations or requiring interfaces to e.g. detailed core or fuel behavior models. The first part of this paper presents the preliminary concepts of this validation strategy. The principle is to systematically track the evolution of a given set of predicted physical Quantities of Interest (QoIs) over a multidimensional parametric space where each of the dimensions represents the evolution of specific analysis aspects, including e.g. code version, transient-specific simulation methodology and model "nodalisation". If properly set up, such an environment should provide code developers and code users with persistent (less affected by user effect) and quantified information (sensitivity of QoIs) on the applicability of a simulation scheme (codes, input models, methodology) for steady-state and transient analysis of full LWR systems. Through this, for each given transient/accident, critical paths of the validation process can be identified that could then translate into defining reference schemes to be applied for downstream predictive simulations. In order to illustrate this approach, the second part of this paper presents a first application of this validation strategy to an inadvertent blowdown event that occurred in a Swiss BWR/6. The transient was initiated by the spurious actuation of the Automatic Depressurization System (ADS). The validation approach progresses through a number of dimensions here: first, the same BWR system simulation model is assessed for different versions of the TRACE code, up to the most recent one. The second dimension is the "nodalisation" dimension, where changes to the input model are assessed. The third dimension is the "methodology" dimension; in this case imposed power and an updated TRACE core model are investigated. For each step in each validation dimension, a common set of QoIs is investigated. For the steady-state results, these include fuel temperature distributions. For the transient part of the present study, the evaluated QoIs include the system pressure evolution and the water carry-over into the steam line.

  4. Initial Comparison of Direct and Legacy Modeling Approaches for Radial Core Expansion Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shemon, Emily R.

    2016-10-10

    Radial core expansion in sodium-cooled fast reactors provides an important reactivity feedback effect. As the reactor power increases due to normal start-up conditions or accident scenarios, the core and surrounding materials heat up, causing both grid plate expansion and bowing of the assembly ducts. When the core restraint system is designed correctly, the resulting structural deformations introduce negative reactivity which decreases the reactor power. Historically, an indirect procedure has been used to estimate the reactivity feedback due to structural deformation, which relies upon perturbation theory and coupling legacy physics codes with limited geometry capabilities. With advancements in modeling and simulation, radial core expansion phenomena can now be modeled directly, providing an assessment of the accuracy of the reactivity feedback coefficients generated by indirect legacy methods. Recently a new capability was added to the PROTEUS-SN unstructured geometry neutron transport solver to analyze deformed meshes quickly and directly. By supplying the deformed mesh in addition to the base configuration input files, PROTEUS-SN automatically processes material adjustments, including calculation of region densities to conserve mass, calculation of isotopic densities according to material models (for example, sodium density as a function of temperature), and subsequent re-homogenization of materials. To verify the new capability of directly simulating deformed meshes, PROTEUS-SN was used to compute reactivity feedback for a series of contrived yet representative deformed configurations for the Advanced Burner Test Reactor design. The indirect legacy procedure was also performed to generate reactivity feedback coefficients for the same deformed configurations. Interestingly, the legacy procedure consistently overestimated reactivity feedbacks by 35% compared to direct simulations by PROTEUS-SN. This overestimation indicates that the legacy procedures are in fact not conservative and could be overestimating reactivity feedback effects that are closely tied to reactor safety. We conclude that there is indeed value in performing direct simulation of deformed meshes despite the increased computational expense. PROTEUS-SN is already part of the SHARP multi-physics toolkit, where both thermal-hydraulic and structural mechanical feedback modeling can be applied, but this is the first comparison of direct simulation to legacy techniques for radial core expansion.
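
    A hedged sketch of the mass-conserving material adjustment described above: when a region's volume changes under deformation, number densities are rescaled by the volume ratio, and coolant density follows a temperature model. The function names and the linear sodium fit are illustrative, not PROTEUS-SN internals.

        # Illustrative mass-conserving density adjustment for a deformed region.
        def rescale_densities(densities, v_old, v_new):
            """Rescale isotopic number densities so region mass is conserved."""
            factor = v_old / v_new
            return {iso: n * factor for iso, n in densities.items()}

        def sodium_density(temp_k):
            """Rough linear sodium density fit (kg/m^3); coefficients illustrative."""
            return 1014.0 - 0.235 * temp_k

        fuel = {"U235": 1.2e21, "U238": 2.1e22}   # atoms/cm^3, made-up values
        print(rescale_densities(fuel, v_old=1.00, v_new=1.02))  # expansion dilutes
        print(sodium_density(800.0))              # coolant density drops as it heats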

  5. Automated and Assistive Tools for Accelerated Code migration of Scientific Computing on to Heterogeneous MultiCore Systems

    DTIC Science & Technology

    2017-04-13

    modelling code, a parallel benchmark, and a communication avoiding version of the QR algorithm. Further, several improvements to the OmpSs model were... movement; and a port of the dynamic load balancing library to OmpSs. Finally, several updates to the tools infrastructure were accomplished, including: an... OmpSs: a basic algorithm on image processing applications, a mini application representative of an ocean modelling code, a parallel benchmark, and a

  6. Shaping electromagnetic waves using software-automatically-designed metasurfaces.

    PubMed

    Zhang, Qian; Wan, Xiang; Liu, Shuo; Yuan Yin, Jia; Zhang, Lei; Jun Cui, Tie

    2017-06-15

    We present a fully digital procedure for designing reflective coding metasurfaces to shape reflected electromagnetic waves. The design procedure is completely automatic, controlled by a personal computer. In detail, the macro coding units of the metasurface are automatically divided into several types (e.g. two types for 1-bit coding, four types for 2-bit coding, etc.), and each type of macro coding unit is formed by a discretely random arrangement of micro coding units. By combining an optimization algorithm and commercial electromagnetic software, the digital patterns of the macro coding units are optimized to possess constant phase differences for the reflected waves. The apertures of the designed reflective metasurfaces are formed by arranging the macro coding units in a certain coding sequence. To experimentally verify the performance, a coding metasurface is fabricated by automatically designing two digital 1-bit unit cells, which are arranged in an array to constitute a periodic coding metasurface that generates the required four-beam radiation in specific directions. Two complicated functional metasurfaces with circularly- and elliptically-shaped radiation beams are realized by automatically designing 4-bit macro coding units, showing the excellent performance of the automatic design by software. The proposed method provides a smart tool to realize various functional devices and systems automatically.
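
    The link between a coding sequence and the scattered beams can be illustrated with a simple array-factor estimate: a 1-bit checkerboard coding gives elements phase 0 or π, and a 2D FFT of the aperture approximates the far-field pattern. This is a generic textbook approximation, not the authors' optimization procedure.

        # Far-field estimate for a 1-bit coding metasurface via a padded 2D FFT.
        import numpy as np

        coding = np.indices((8, 8)).sum(axis=0) % 2     # 1-bit checkerboard sequence
        aperture = np.exp(1j * np.pi * coding)          # element phases: 0 or pi
        pattern = np.abs(np.fft.fftshift(np.fft.fft2(aperture, s=(64, 64))))
        print(pattern[32, 32])                          # specular (broadside) term ~0
        print(np.unravel_index(np.argmax(pattern), pattern.shape))  # off-axis beams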

  7. Parallel tiled Nussinov RNA folding loop nest generated using both dependence graph transitive closure and loop skewing.

    PubMed

    Palkowski, Marek; Bielecki, Wlodzimierz

    2017-06-02

    RNA secondary structure prediction is a compute-intensive task that lies at the core of several search algorithms in bioinformatics. Fortunately, RNA folding approaches, such as Nussinov base pair maximization, involve mathematical operations over affine control loops whose iteration space can be represented by the polyhedral model. Polyhedral compilation techniques have proven to be a powerful tool for the optimization of dense array codes. However, classical affine loop nest transformations used with these techniques do not effectively optimize dynamic programming codes for RNA structure prediction. The purpose of this paper is to present a novel approach allowing for the generation of a parallel tiled Nussinov RNA loop nest exposing significantly higher performance than that of known related code. This effect is achieved by improving code locality and calculation parallelization. In order to improve code locality, we apply our previously published technique of automatic loop nest tiling to all three loops of the Nussinov loop nest. This approach first forms original rectangular 3D tiles and then corrects them to establish their validity by means of applying the transitive closure of a dependence graph. To produce parallel code, we apply the loop skewing technique to the tiled Nussinov loop nest. The technique is implemented as a part of the publicly available polyhedral source-to-source TRACO compiler. Generated code was run on modern Intel multi-core processors and coprocessors. We present the speed-up factor of the generated Nussinov RNA parallel code and demonstrate that it is considerably faster than related codes in which only the two outer loops of the Nussinov loop nest are tiled.
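
    For context, a compact serial reference implementation of the Nussinov base-pair maximization recurrence that such tiled codes parallelize is sketched below (plain Python, written for clarity rather than performance).

        # Serial Nussinov base-pair maximization; dp[i][j] = max pairs in seq[i..j].
        def nussinov(seq, min_loop=1):
            pairs = {("A", "U"), ("U", "A"), ("G", "C"),
                     ("C", "G"), ("G", "U"), ("U", "G")}
            n = len(seq)
            dp = [[0] * n for _ in range(n)]
            for span in range(min_loop + 1, n):       # widen the subsequence
                for i in range(n - span):
                    j = i + span
                    best = dp[i + 1][j]               # base i left unpaired
                    if (seq[i], seq[j]) in pairs:     # base i pairs with base j
                        best = max(best, dp[i + 1][j - 1] + 1)
                    for k in range(i + 1, j):         # bifurcation at position k
                        best = max(best, dp[i][k] + dp[k + 1][j])
                    dp[i][j] = best
            return dp[0][n - 1]

        print(nussinov("GGGAAAUCC"))  # maximum number of base pairs, here 3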

  8. Three-dimensional modeling of direct-drive cryogenic implosions on OMEGA

    DOE PAGES

    Igumenshchev, I. V.; Goncharov, V. N.; Marshall, F. J.; ...

    2016-05-04

    The effects of large-scale (with Legendre modes ≲10) laser-imposed nonuniformities in direct-drive cryogenic implosions on the OMEGA laser system are investigated using three-dimensional hydrodynamic simulations performed with the newly developed code ASTER. Sources of these nonuniformities include the illumination pattern produced by the 60 OMEGA laser beams, capsule offsets (~10 to 20 μm), and imperfect pointing, energy balance, and timing of the beams (with typical σ rms ~10 μm, 10%, and 5 ps, respectively). Two implosion designs using 26-kJ triple-picket laser pulses were studied: a nominal design, in which an 880-μm-diameter capsule is illuminated by same-diameter beams, and an “R75” design using a capsule of 900 μm in diameter and beams of 75% of this diameter. Simulations found that nonuniformities due to capsule offsets and beam imbalance have the largest effect on implosion performance. These nonuniformities lead to significant distortions of implosion cores, resulting in an incomplete stagnation. The shape of distorted cores is well represented in neutron images, but only loosely in x-rays. Simulated neutron spectra from perturbed implosions show large directional variations and up to ~2 keV variation of the hot-spot temperature inferred from these spectra. The R75 design is more hydrodynamically efficient because of the mitigation of crossed-beam energy transfer, but it also suffers more from the nonuniformities. Furthermore, simulations predict a performance advantage of this design over the nominal design when the target offset and beam imbalance σ rms are reduced to less than 5 μm and 5%, respectively.

  9. Three-dimensional modeling of direct-drive cryogenic implosions on OMEGA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Igumenshchev, I. V.; Goncharov, V. N.; Marshall, F. J.

    The effects of large-scale (with Legendre modes ≲10) laser-imposed nonuniformities in direct-drive cryogenic implosions on the OMEGA laser system are investigated using three-dimensional hydrodynamic simulations performed with the newly developed code ASTER. Sources of these nonuniformities include the illumination pattern produced by the 60 OMEGA laser beams, capsule offsets (~10 to 20 μm), and imperfect pointing, energy balance, and timing of the beams (with typical σ rms ~10 μm, 10%, and 5 ps, respectively). Two implosion designs using 26-kJ triple-picket laser pulses were studied: a nominal design, in which an 880-μm-diameter capsule is illuminated by same-diameter beams, and an “R75” design using a capsule of 900 μm in diameter and beams of 75% of this diameter. Simulations found that nonuniformities due to capsule offsets and beam imbalance have the largest effect on implosion performance. These nonuniformities lead to significant distortions of implosion cores, resulting in an incomplete stagnation. The shape of distorted cores is well represented in neutron images, but only loosely in x-rays. Simulated neutron spectra from perturbed implosions show large directional variations and up to ~2 keV variation of the hot-spot temperature inferred from these spectra. The R75 design is more hydrodynamically efficient because of the mitigation of crossed-beam energy transfer, but it also suffers more from the nonuniformities. Furthermore, simulations predict a performance advantage of this design over the nominal design when the target offset and beam imbalance σ rms are reduced to less than 5 μm and 5%, respectively.

  10. Simulation numerique de l'effet du reflecteur radial sur les cellules rep en utilisant les codes DRAGON et DONJON

    NASA Astrophysics Data System (ADS)

    Bejaoui, Najoua

    Pressurized water reactors (PWRs) form the largest fleet of nuclear reactors in operation around the world. Although these reactors have been studied extensively by designers and operators using efficient numerical methods, some calculation weaknesses remain unresolved, given the geometric complexity of the core, such as the analysis of the neutron flux's behavior at the core-reflector interface. The standard calculation scheme is a two-step process. In the first step, a detailed calculation at the assembly level with reflective boundary conditions provides homogenized cross-sections for the assemblies, condensed to a reduced number of groups; this step is called the lattice calculation. The second step uses the homogenized properties of each assembly to calculate reactor properties at the core level. This step is called the full-core or whole-core calculation. This decoupling of the two calculation steps is the origin of methodological bias, particularly at the core-reflector interface: the periodicity hypothesis used to generate cross-section libraries becomes less pertinent for assemblies that are adjacent to the reflector, which is generally represented by one of two models, an equivalent reflector or albedo matrices. The reflector helps to slow down neutrons leaving the reactor and return them to the core. This effect leads to two fission peaks in fuel assemblies located at the core/reflector interface, the fission rate increasing due to the greater proportion of re-entrant neutrons. This change in the neutron spectrum arises deep inside the fuel located on the outskirts of the core. To remedy this, we simulated a peripheral assembly reflected with the TMI-PWR reflector and developed an advanced calculation scheme that takes into account the environment of the peripheral assemblies and generates equivalent neutronic properties for the reflector. This scheme was tested on a core without control mechanisms and charged with fresh fuel. The results of this study showed that the explicit representation of the reflector and the calculation of peripheral assemblies with our advanced scheme correct the energy spectrum at the core interface and increase the peripheral power by up to 12% compared with the reference scheme.

  11. Pebble Bed Reactors Design Optimization Methods and their Application to the Pebble Bed Fluoride Salt Cooled High Temperature Reactor (PB-FHR)

    NASA Astrophysics Data System (ADS)

    Cisneros, Anselmo Tomas, Jr.

    The Fluoride salt cooled High temperature Reactor (FHR) is a class of advanced nuclear reactors that combine the robust coated particle fuel form from high temperature gas cooled reactors, the direct reactor auxiliary cooling system (DRACS) passive decay heat removal of liquid metal fast reactors, and the transparent, high volumetric heat capacity liquid fluoride salt working fluid---flibe (67% ⁷LiF-33% BeF₂)---from molten salt reactors. This combination of fuel and coolant enables FHRs to operate in a high-temperature, low-pressure design space that has beneficial safety and economic implications. In 2012, UC Berkeley was charged with developing a pre-conceptual design of a commercial prototype FHR---the Pebble Bed Fluoride Salt Cooled High Temperature Reactor (PB-FHR)---as part of the Nuclear Energy University Programs' (NEUP) integrated research project. The Mark 1 design of the PB-FHR (Mk1 PB-FHR) is a 236 MWt flibe-cooled pebble bed nuclear heat source that drives an open-air Brayton combined-cycle power conversion system. The PB-FHR's pebble bed consists of a 19.8% enriched uranium fuel core surrounded by an inert graphite pebble reflector that shields the outer solid graphite reflector, core barrel and reactor vessel. The fuel reaches an average burnup of 178000 MWt-d/MT. The Mk1 PB-FHR exhibits strong negative temperature reactivity feedback from the fuel, graphite moderator and the flibe coolant, but a small positive temperature reactivity feedback from the inner reflector and from the outer graphite pebble reflector. A novel neutronics and depletion methodology---the multiple burnup state methodology---was developed for an accurate and efficient search for the equilibrium composition of an arbitrary continuously refueled pebble bed reactor core. The Burnup Equilibrium Analysis Utility (BEAU) computer program was developed to implement this methodology. BEAU was successfully benchmarked against published results generated with the existing equilibrium depletion codes VSOP and PEBBED for a high temperature gas cooled pebble bed reactor. Three parametric studies were performed to explore the design space of the PB-FHR---to select a fuel design for the PB-FHR; to select a core configuration; and to optimize the PB-FHR design. These parametric studies investigated trends in the dependence of important reactor performance parameters such as burnup, temperature reactivity feedback, and radiation damage on the reactor design variables, and attempted to understand the underlying reactor physics responsible for these trends. A pebble fuel parametric study determined that pebble fuel should be designed with a carbon to heavy metal ratio (C/HM) less than 400 to maintain negative coolant temperature reactivity coefficients. Seed and thorium blanket, seed and inert pebble reflector, and seed-only core configurations were investigated for annular FHR PBRs---the C/HM ratio of the blanket pebbles and the discharge burnup of the thorium blanket pebbles were additional design variables for core configurations with thorium blankets. Either a thorium blanket or a graphite pebble reflector is required to shield the outer graphite reflector enough to extend its service lifetime to 60 EFPY. The fuel fabrication costs and long cycle lengths of the thorium blanket fuel limit the potential economic advantages of using a thorium blanket. Therefore, the seed and pebble reflector core configuration was adopted as the baseline core configuration.
Multi-objective optimization with respect to economics was performed for the PB-FHR, accounting for safety and other physical design constraints derived from the high-level safety regulatory criteria. These physical constraints were applied in a design tool, the Nuclear Application Value Estimator, which evaluated a simplified cash-flow economics model to assess the economic implications of design decisions, based on estimates of reactor performance parameters calculated using correlations fitted to the results of the parametric design studies for a specific PB-FHR design, together with a set of economic assumptions about the electricity market. The optimal PB-FHR design---the Mark 1 PB-FHR---is described along with a detailed summary of its performance characteristics, including the burnup, the burnup evolution, temperature reactivity coefficients, the power distribution, radiation damage distributions, control element worths, decay heat curves and tritium production rates. The Mk1 PB-FHR satisfies the PB-FHR safety criteria. The fuel, moderator (pebble core, pebble shell, graphite matrix, TRISO layers) and coolant have global negative temperature reactivity coefficients, and the fuel temperatures are well within their limits.

  12. Salvus: A flexible open-source package for waveform modelling and inversion from laboratory to global scales

    NASA Astrophysics Data System (ADS)

    Afanasiev, M.; Boehm, C.; van Driel, M.; Krischer, L.; May, D.; Rietmann, M.; Fichtner, A.

    2016-12-01

    Recent years have been witness to the application of waveform inversion to new and exciting domains, ranging from non-destructive testing to global seismology. Often, each new application brings with it novel wave propagation physics, spatial and temporal discretizations, and models of variable complexity. Adapting existing software to these novel applications often requires a significant investment of time, and acts as a barrier to progress. To combat these problems we introduce Salvus, a software package designed to solve large-scale full-waveform inverse problems, with a focus on both flexibility and performance. Based on a high order finite (spectral) element discretization, we have built Salvus to work on unstructured quad/hex meshes in both 2 and 3 dimensions, with support for P1-P3 bases on triangles and tetrahedra. A diverse (and expanding) collection of wave propagation physics are supported (i.e. coupled solid-fluid). With a focus on the inverse problem, functionality is provided to ease integration with internal and external optimization libraries. Additionally, a python-based meshing package is included to simplify the generation and manipulation of regional to global scale Earth models (quad/hex), with interfaces available to external mesh generators for complex engineering-scale applications (quad/hex/tri/tet). Finally, to ensure that the code remains accurate and maintainable, we build upon software libraries such as PETSc and Eigen, and follow modern software design and testing protocols. Salvus bridges the gap between research and production codes with a design based on C++ mixins and Python wrappers that separates the physical equations from the numerical core. This allows domain scientists to add new equations using a high-level interface, without having to worry about optimized implementation details. Our goal in this presentation is to introduce the code, show several examples across the scales, and discuss some of the extensible design points.
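
    The "mixins separating physics from the numerical core" idea can be sketched in a few lines; the example below uses Python classes and a trivial oscillator standing in for a wave equation, and is a design illustration only, not Salvus's actual interface.

        # Design sketch: a numerical core composed with a physics mixin.
        class TimeStepper:
            """Numerical core: advances a state without knowing the physics."""
            def run(self, state, dt, n_steps):
                for _ in range(n_steps):
                    state = [s + dt * r for s, r in zip(state, self.rhs(state))]
                return state

        class OscillatorMixin:
            """Physics layer: a 1-DOF oscillator standing in for a wave equation."""
            omega = 2.0
            def rhs(self, state):
                u, v = state
                return [v, -self.omega**2 * u]   # du/dt = v, dv/dt = -w^2 u

        class OscillatorSolver(OscillatorMixin, TimeStepper):
            """Solvers are composed by mixing physics into the numerical core."""

        print(OscillatorSolver().run([1.0, 0.0], dt=1e-3, n_steps=1000))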

  13. Salvus: A flexible high-performance and open-source package for waveform modelling and inversion from laboratory to global scales

    NASA Astrophysics Data System (ADS)

    Afanasiev, Michael; Boehm, Christian; van Driel, Martin; Krischer, Lion; May, Dave; Rietmann, Max; Fichtner, Andreas

    2017-04-01

    Recent years have been witness to the application of waveform inversion to new and exciting domains, ranging from non-destructive testing to global seismology. Often, each new application brings with it novel wave propagation physics, spatial and temporal discretizations, and models of variable complexity. Adapting existing software to these novel applications often requires a significant investment of time, and acts as a barrier to progress. To combat these problems we introduce Salvus, a software package designed to solve large-scale full-waveform inverse problems, with a focus on both flexibility and performance. Currently based on an abstract implementation of high order finite (spectral) elements, we have built Salvus to work on unstructured quad/hex meshes in both 2 and 3 dimensions, with support for P1-P3 bases on triangles and tetrahedra. A diverse (and expanding) collection of wave propagation physics are supported (i.e. viscoelastic, coupled solid-fluid). With a focus on the inverse problem, functionality is provided to ease integration with internal and external optimization libraries. Additionally, a python-based meshing package is included to simplify the generation and manipulation of regional to global scale Earth models (quad/hex), with interfaces available to external mesh generators for complex engineering-scale applications (quad/hex/tri/tet). Finally, to ensure that the code remains accurate and maintainable, we build upon software libraries such as PETSc and Eigen, and follow modern software design and testing protocols. Salvus bridges the gap between research and production codes with a design based on C++ template mixins and Python wrappers that separates the physical equations from the numerical core. This allows domain scientists to add new equations using a high-level interface, without having to worry about optimized implementation details. Our goal in this presentation is to introduce the code, show several examples across the scales, and discuss some of the extensible design points.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bdzil, John Bohdan

    The full level-set function code, DSD3D, is fully described in LA-14336 (2007) [1]. This ASCI-supported DSD code project was the last such LANL DSD code project that I was involved with before my retirement in 2007. My part in the project was to design and build the core DSD3D solver, which was to include a robust DSD boundary condition treatment. A robust boundary condition treatment was required, since for an important local “customer,” the only description of the explosives’ boundary was through volume fraction data. Given this requirement, the accuracy issues I had encountered with our “fast-tube,” narrowband, DSD2D solver, and the difficulty we had building an efficient MPI-parallel version of the narrowband DSD2D, I decided DSD3D should be built as a full level-set function code, using a totally local DSD boundary condition algorithm for the level-set function, phi, which did not rely on the gradient of the level-set function being one, |grad(phi)| = 1. The narrowband DSD2D solver was built on the assumption that |grad(phi)| could be driven to one, and near the boundaries of the explosive this condition was not being satisfied. Since the narrowband is typically no more than 10*dx wide, narrowband methods are discrete methods with a fixed, non-resolvable error, where the error is related to the thickness of the band: the narrower the band, the larger the errors. Such a solution represents a discrete approximation to the true solution and does not limit to the solution of the underlying PDEs under grid resolution.
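
    As a generic illustration of what a full level-set function code advances, the sketch below takes explicit steps of the front-propagation equation φ_t + D_n|∇φ| = 0 on a 2D grid; the simple central differencing and parameters are illustrative and far simpler than DSD3D's upwinded solver and boundary treatment.

        # Toy full-level-set front propagation; not DSD3D's numerics.
        import numpy as np

        def advance_level_set(phi, d_n, dx, dt):
            """One explicit step of phi_t + d_n*|grad(phi)| = 0 (front speed d_n)."""
            d0, d1 = np.gradient(phi, dx)
            return phi - dt * d_n * np.sqrt(d0**2 + d1**2)

        x = np.linspace(-1.0, 1.0, 101)
        X, Y = np.meshgrid(x, x)
        phi = np.sqrt(X**2 + Y**2) - 0.2      # signed distance to a radius-0.2 front
        for _ in range(100):
            phi = advance_level_set(phi, d_n=1.0, dx=x[1] - x[0], dt=1e-3)
        print(phi[50, 65])                    # ~0: the front has expanded to r ~ 0.3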

  15. Utilization of thorium and U-ZrH1.6 fuels in various heterogeneous cores for TRIGA PUSPATI Reactor (RTP)

    NASA Astrophysics Data System (ADS)

    Damahuri, Abdul Hannan Bin; Mohamed, Hassan; Aziz Mohamed, Abdul; Idris, Faridah

    2018-01-01

    The use of thorium as nuclear fuel has been an appealing prospect for many years and will be of great significance to nuclear power generation. There is an increasing need for more research on thorium, as the Malaysian government is currently active in the national Thorium Flagship Project, which was launched in 2014. The thorium project, which is still in phase 1, focuses on the research and development of thorium extraction from mineral processing ore. Thus, the aim of this study is to investigate alternative TRIGA PUSPATI Reactor (RTP) core designs that can fully utilize thorium. Currently, the RTP reactor has an average neutron flux of 2.797 × 10^12 cm^-2 s^-1 and an effective multiplication factor, k_eff, of 1.001. The RTP core has a circular-array core configuration with six circular rings. Each ring consists of 6, 12, 18, 24, 30 or 36 U-ZrH1.6 fuel rods. There are three main uranium loadings, namely 8.5, 12 and 20 wt.%. For this research, uranium zirconium hydride (U-ZrH1.6) fuel rods in the RTP core were replaced by thorium (ThO2) fuel rods. Seven core configurations with different thorium fuel rod placements were modelled in a 2D structure and simulated using the Monte Carlo N-Particle (MCNPX) code. Results show that the highest initial criticality obtained is around 1.35101. Additionally, there is a significant discrepancy between the results of a previous study and this work, because of the large estimated leakage probability of approximately 21.7% and the 2D model simplification.

  16. With or without you: predictive coding and Bayesian inference in the brain

    PubMed Central

    Aitchison, Laurence; Lengyel, Máté

    2018-01-01

    Two theoretical ideas have emerged recently with the ambition to provide a unifying functional explanation of neural population coding and dynamics: predictive coding and Bayesian inference. Here, we describe the two theories and their combination into a single framework: Bayesian predictive coding. We clarify how the two theories can be distinguished, despite sharing core computational concepts and addressing an overlapping set of empirical phenomena. We argue that predictive coding is an algorithmic / representational motif that can serve several different computational goals of which Bayesian inference is but one. Conversely, while Bayesian inference can utilize predictive coding, it can also be realized by a variety of other representations. We critically evaluate the experimental evidence supporting Bayesian predictive coding and discuss how to test it more directly. PMID:28942084
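
    A toy numerical sketch of a predictive-coding inference step of the kind the review discusses: a latent estimate is refined by the prediction error it fails to explain, with a prior acting as a regularizer. The generative model and gains are invented for illustration.

        # Toy predictive-coding inference; model g and all gains are illustrative.
        def predictive_coding_step(v, x, g=lambda v: 2.0 * v,
                                   g_prime=lambda v: 2.0,
                                   lr=0.05, prior_mean=0.0, prior_weight=0.1):
            error = x - g(v)                          # prediction-error unit
            dv = g_prime(v) * error - prior_weight * (v - prior_mean)
            return v + lr * dv                        # gradient-style update

        v = 0.0
        for _ in range(200):                          # settle on an explanation of x
            v = predictive_coding_step(v, x=1.0)
        print(v)  # converges near 0.5, since g(0.5) = 1.0 matches the observation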

  17. Tuning iteration space slicing based tiled multi-core code implementing Nussinov's RNA folding.

    PubMed

    Palkowski, Marek; Bielecki, Wlodzimierz

    2018-01-15

    RNA folding is an ongoing compute-intensive task of bioinformatics. Parallelization and improving code locality for this kind of algorithm is one of the most relevant areas in computational biology. Fortunately, RNA secondary structure approaches, such as Nussinov's recurrence, involve mathematical operations over affine control loops whose iteration space can be represented by the polyhedral model. This allows us to apply powerful polyhedral compilation techniques based on the transitive closure of dependence graphs to generate parallel tiled code implementing Nussinov's RNA folding. Such techniques are within the iteration space slicing framework - the transitive dependences are applied to the statement instances of interest to produce valid tiles. The main problem in generating parallel tiled code is defining a proper tile size and tile dimension, which impact the degree of parallelism and code locality. To choose the best tile size and tile dimension, we first construct parallel parametric tiled code (the parameters are variables defining the tile size). For this purpose, we first generate two nonparametric tiled codes with different fixed tile sizes but with the same code structure, and then derive a general affine model which describes all integer factors available in the expressions of those codes. Using this model and the known integer factors present in the mentioned expressions (they define the left-hand side of the model), we find the unknown integers in this model for each integer factor available in the same fixed tiled code position, and replace in this code the expressions including integer factors with those including parameters. Then we use this parallel parametric tiled code to implement the well-known tile size selection (TSS) technique, which allows us to discover, in a given search space, the best tile size and tile dimension maximizing target code performance. For a given search space, the presented approach allows us to choose the best tile size and tile dimension in parallel tiled code implementing Nussinov's RNA folding. Experimental results, obtained on modern Intel multi-core processors, demonstrate that this code outperforms known closely related implementations when the length of RNA strands is bigger than 2500.
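
    The tile-size-selection (TSS) step described above can be sketched generically: time a parametric tiled kernel over a search space of tile sizes and keep the fastest. The blocked kernel below is a stand-in, not the authors' Nussinov code.

        # Generic TSS search: benchmark candidate tile sizes and keep the best.
        import time

        def tiled_kernel(n, tile):
            """Blocked double loop standing in for a parametric tiled kernel."""
            total = 0
            for ii in range(0, n, tile):
                for jj in range(0, n, tile):
                    for i in range(ii, min(ii + tile, n)):
                        for j in range(jj, min(jj + tile, n)):
                            total += i * j
            return total

        def select_tile_size(n=512, candidates=(8, 16, 32, 64, 128)):
            best, best_t = None, float("inf")
            for tile in candidates:
                t0 = time.perf_counter()
                tiled_kernel(n, tile)
                elapsed = time.perf_counter() - t0
                if elapsed < best_t:
                    best, best_t = tile, elapsed
            return best, best_t

        print(select_tile_size())  # fastest tile size found in the search space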

  18. Fast 2D FWI on a multi and many-cores workstation.

    NASA Astrophysics Data System (ADS)

    Thierry, Philippe; Donno, Daniela; Noble, Mark

    2014-05-01

    Following the introduction of x86 co-processors (Xeon Phi) and the performance increase of standard 2-socket workstations using the latest 12-core E5-v2 x86-64 CPUs, we present here an MPI + OpenMP implementation of an acoustic 2D FWI (full waveform inversion) code which runs simultaneously on the CPUs and on the co-processors installed in a workstation. The main advantage of running a 2D FWI on a workstation is the ability to quickly evaluate new features such as more complicated wave equations, new cost functions, finite-difference stencils or boundary conditions. Since the co-processor is made of 61 in-order x86 cores, each supporting up to 4 threads, this many-core device can be seen as a shared-memory SMP (symmetric multiprocessing) machine with its own IP address. Depending on the vendor, a single workstation can host several co-processors, turning the workstation into a personal cluster under the desk. The original Fortran 90 CPU version of the 2D FWI code is simply recompiled to obtain a Xeon Phi x86 binary. This multi- and many-core configuration uses standard compilers and the associated MPI and math libraries under Linux; the cost of code development therefore remains constant while computation time improves. We chose to implement the code in the so-called symmetric mode to use the full capacity of the workstation, but we also evaluate the scalability of the code in native mode (i.e., running only on the co-processor) thanks to the Linux ssh and NFS capabilities. The usual care is taken with optimization and SIMD vectorization to ensure optimal performance and to analyze the application's performance and bottlenecks on both platforms. The 2D FWI implementation uses finite-difference time-domain forward modeling and a quasi-Newton (L-BFGS) optimization scheme for the model parameter updates. Parallelization is achieved through standard MPI distribution of shot gathers and OpenMP domain decomposition within the co-processor. Taking advantage of the 16 GB of memory available on the co-processor, we are able to keep wavefields in memory and compute the gradient by cross-correlation of the forward and back-propagated wavefields required by our time-domain FWI scheme, without heavy traffic on the I/O subsystem and PCIe bus. In this presentation we also review simple methodologies for comparing expected with measured performance, so that the optimization effort can be estimated before any major modification or rewrite of a research code. The key message is the ease of use and development of this hybrid configuration, which reaches not the absolute peak performance but the optimum that best balances geophysical and computational development effort.
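
    As a sketch of the forward-modeling kernel that such a code parallelizes, the NumPy fragment below advances the 2D constant-density acoustic wave equation by one explicit time step; the second-order five-point Laplacian, the periodic boundaries implied by np.roll and the impulsive source are simplifications of the paper's Fortran 90 implementation, and all parameter values are illustrative.

        import numpy as np

        def step(p_prev, p_curr, c, dt, dx):
            # Explicit second-order update of the 2D acoustic wave equation;
            # stability requires dt <= dx / (c.max() * sqrt(2)).
            lap = (np.roll(p_curr, 1, 0) + np.roll(p_curr, -1, 0) +
                   np.roll(p_curr, 1, 1) + np.roll(p_curr, -1, 1) -
                   4.0 * p_curr) / dx**2
            return 2.0 * p_curr - p_prev + (c * dt)**2 * lap

        n, dx = 200, 10.0                  # grid size and spacing (m), assumed
        c = np.full((n, n), 2000.0)        # homogeneous velocity (m/s), assumed
        dt = 0.4 * dx / c.max()            # comfortably below the CFL limit
        p0 = np.zeros((n, n))
        p1 = np.zeros((n, n))
        p1[n // 2, n // 2] = 1.0           # impulsive point source
        for _ in range(500):
            p0, p1 = p1, step(p0, p1, c, dt, dx)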

  19. Finite Element Optimization for Nondestructive Evaluation on a Graphics Processing Unit for Ground Vehicle Hull Inspection

    DTIC Science & Technology

    2013-08-22

    4 cores, where the code may simultaneously run on the multiple cores or the graphics processing unit (or GPU – to be more specific on an NVIDIA ...allowed to get accurate crack shapes. DISCLAIMER Reference herein to any specific commercial company, product, process, or service by trade name

  20. A MATLAB based 3D modeling and inversion code for MT data

    NASA Astrophysics Data System (ADS)

    Singh, Arun; Dehiya, Rahul; Gupta, Pravin K.; Israil, M.

    2017-07-01

    The development of a MATLAB-based computer code, AP3DMT, for modeling and inversion of 3D magnetotelluric (MT) data is presented. The code comprises two independent components: a grid generator code and a modeling/inversion code. The grid generator performs model discretization and acts as an interface by generating various I/O files. The inversion code performs the core computations in modular form: forward modeling, data functionals, sensitivity computations and regularization. These modules can be readily extended to other similar inverse problems such as controlled-source EM (CSEM). The modular structure of the code provides a framework useful for implementing new applications and inversion algorithms. The use of MATLAB and its libraries makes the code compact and user-friendly. The code has been validated on several published models. To demonstrate its versatility and capabilities, inversion results for two complex models are presented.
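
    The modular split described above (forward modeling, data functionals, sensitivity computations, regularization) can be sketched on a toy linear problem; the Python fragment below is an illustrative stand-in for that structure, not AP3DMT's MATLAB implementation, and every name and value in it is hypothetical.

        import numpy as np

        def forward(G, m):
            return G @ m                    # linear stand-in for the MT forward solver

        def misfit(d_obs, d_pred):
            return 0.5 * np.sum((d_obs - d_pred) ** 2)   # data functional

        def gauss_newton_step(G, m, d_obs, lam):
            r = d_obs - forward(G, m)              # data residual
            H = G.T @ G + lam * np.eye(len(m))     # sensitivity + Tikhonov regularization
            return m + np.linalg.solve(H, G.T @ r - lam * m)

        rng = np.random.default_rng(0)
        G = rng.standard_normal((20, 10))          # toy sensitivity matrix
        m_true = rng.standard_normal(10)
        d_obs = forward(G, m_true) + 0.01 * rng.standard_normal(20)

        m = np.zeros(10)
        for _ in range(5):                         # regularized Gauss-Newton iterations
            m = gauss_newton_step(G, m, d_obs, lam=0.1)
        print(misfit(d_obs, forward(G, m)))        # small residual misfit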
