Heuristic rules embedded genetic algorithm for in-core fuel management optimization
NASA Astrophysics Data System (ADS)
Alim, Fatih
The objective of this study was to develop a unique methodology and a practical tool for designing the loading pattern (LP) and burnable poison (BP) pattern for a given Pressurized Water Reactor (PWR) core. Because of the large number of possible combinations for the fuel assembly (FA) loading in the core, the design of the core configuration is a complex optimization problem. It requires finding an optimal FA arrangement and BP placement that achieve maximum cycle length while satisfying the safety constraints. Genetic Algorithms (GA) have already been used to solve the LP optimization problem for both PWRs and Boiling Water Reactors (BWR). The GA, a stochastic method, works with a group of solutions and uses random variables to make decisions. Based on the theory of evolution, the GA involves natural selection and reproduction of the individuals in the population for the next generation. The GA works by creating an initial population, evaluating it, and then improving the population by applying evolutionary operators. To solve this optimization problem, an LP optimization package, the GARCO (Genetic Algorithm Reactor Code Optimization) code, was developed in the framework of this thesis. This code is applicable to all types of PWR cores having different geometries and structures, with an unlimited number of FA types in the inventory. To reach this goal, an innovative GA was developed by modifying the classical representation of the genotype. To obtain the best result in a shorter time, not only the representation but also the algorithm was changed, so as to use in-core fuel management heuristic rules. The improved GA code was tested to demonstrate and verify the advantages of the new enhancements. The developed methodology is explained in this thesis, and preliminary results are shown for the VVER-1000 hexagonal-geometry core and the TMI-1 PWR. The core physics code used for the VVER in this research is Moby-Dick, which was developed by SKODA Inc. to analyze the VVER. The SIMULATE-3 code, an advanced two-group nodal code, is used to analyze the TMI-1.
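As a sketch of the kind of permutation-based GA such LP optimizers build on (a minimal illustration, not GARCO's actual representation or operators; the toy fitness stands in for a call to a core physics code):

```python
# Minimal permutation GA for a toy 1-D "loading pattern" problem.
# All names and the surrogate fitness are illustrative assumptions.
import random

random.seed(0)
N_POSITIONS = 8                          # core positions (toy 1-D core)
ASSEMBLIES = list(range(N_POSITIONS))    # one assembly "type" per index


def fitness(pattern):
    # Toy surrogate: prefer high-reactivity assemblies (high index) near
    # the core centre; a real evaluation would run a core physics code.
    centre = (N_POSITIONS - 1) / 2.0
    return -sum(a * abs(i - centre) for i, a in enumerate(pattern))


def crossover(p1, p2):
    # Order crossover (OX) keeps each child a valid permutation, the key
    # constraint for loading-pattern genotypes.
    a, b = sorted(random.sample(range(N_POSITIONS), 2))
    hole = set(p1[a:b])
    rest = [g for g in p2 if g not in hole]
    return rest[:a] + p1[a:b] + rest[a:]


def mutate(pattern):
    # Swap mutation: exchange the assemblies in two positions.
    i, j = random.sample(range(N_POSITIONS), 2)
    pattern[i], pattern[j] = pattern[j], pattern[i]


pop = [random.sample(ASSEMBLIES, N_POSITIONS) for _ in range(30)]
for _ in range(50):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                   # truncation selection
    pop = parents + [crossover(*random.sample(parents, 2)) for _ in range(20)]
    for child in pop[10:]:
        if random.random() < 0.2:
            mutate(child)
print(max(pop, key=fitness))
```

Heuristic rules of the kind the thesis describes would typically be injected into the seeding of the initial population and into repair/mutation operators, so the search starts from and stays near physically sensible patterns.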
Recent improvements of reactor physics codes in MHI
NASA Astrophysics Data System (ADS)
Kosaka, Shinya; Yamaji, Kazuya; Kirimura, Kazuki; Kamiyama, Yohei; Matsumoto, Hideki
2015-12-01
This paper introduces recent improvements to the reactor physics codes of Mitsubishi Heavy Industries, Ltd. (MHI). MHI has developed a new neutronics design code system, Galaxy/Cosmo-S (GCS), for PWR core analysis. After TEPCO's Fukushima Daiichi accident, it became necessary to consider design extension conditions that had not been covered explicitly by the former safety licensing analyses. Under these circumstances, MHI made several improvements to the GCS code system. A new resonance calculation model for the lattice physics code and a homogeneous cross-section representation model for the core simulator have been developed to cover a much wider range of core conditions, corresponding to severe accident situations such as anticipated transient without scram (ATWS) analysis and criticality evaluation of a dried-up spent fuel pit. As a result of these improvements, the GCS code system is applicable to a very wide range of core conditions with good accuracy, as long as the fuel is not damaged. In this paper, the outline of the GCS code system is described briefly and recent relevant development activities are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uematsu, Hitoshi; Yamamoto, Toru; Izutsu, Sadayuki
1990-06-01
A reactivity-initiated event is a design-basis accident for the safety analysis of boiling water reactors. It is defined as a rapid transient of reactor power caused by a reactivity insertion of over $1.0 due to a postulated drop or abnormal withdrawal of a control rod from the core. Strong space-dependent feedback effects are associated with the local power increase due to control rod movement. A realistic treatment of the core status in a transient, by a code with a detailed core model, is recommended in evaluating this event. A three-dimensional transient code, ARIES, has been developed to meet this need. The code simulates the event with three-dimensional neutronics coupled with multichannel thermal hydraulics, based on a nonequilibrium separated-flow model. The experimental data obtained in reactivity accident tests performed with the SPERT III-E core are used to verify the entire code, including the thermal-hydraulic models.
Comparison of ENDF/B-VII.1 and JEFF-3.2 in VVER-1000 operational data calculation
NASA Astrophysics Data System (ADS)
Frybort, Jan
2017-09-01
Safe operation of a nuclear reactor requires extensive computational support. Operational data are determined by full-core calculations during the design phase of a fuel loading. The loading pattern and the design of the fuel assemblies are adjusted to meet safety requirements and optimize reactor operation. The nodal diffusion code ANDREA is used for this task in the case of the Czech VVER-1000 reactors. Nuclear data for this diffusion code are prepared regularly with the lattice code HELIOS; these calculations are conducted in 2D at the fuel assembly level. It is also possible to calculate these macroscopic data with the Monte Carlo code Serpent, which can make use of alternative evaluated libraries. All calculations are affected by inherent uncertainties in nuclear data. It is therefore useful to compare full-core calculations based on two sets of diffusion data obtained from Serpent calculations with the ENDF/B-VII.1 and JEFF-3.2 nuclear data, including the corresponding decay data and fission yield libraries. The comparison is based directly on the assembly-level macroscopic data and on the resulting operational data. This study illustrates the effect of the evaluated nuclear data library on full-core calculations of a large PWR core. The level of difference that results exclusively from the nuclear data selection can help in understanding the inherent uncertainties of such full-core calculations.
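For orientation, the assembly-level two-group data mentioned above enter the core solution most directly through the infinite multiplication factor; in standard two-group notation (an illustrative textbook formula, not taken from the paper),

$$ k_\infty = \frac{\nu\Sigma_{f,1} + \nu\Sigma_{f,2}\,\Sigma_{s,1\to 2}/\Sigma_{a,2}}{\Sigma_{a,1} + \Sigma_{s,1\to 2}}, $$

so even sub-percent library-to-library differences in the fission, absorption, and downscattering cross sections propagate directly into assembly k-inf and, from there, into the full-core nodal results.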
TRAC-PD2 posttest analysis of the CCTF Evaluation-Model Test C1-19 (Run 38)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Motley, F.
The results of a Transient Reactor Analysis Code (TRAC) posttest analysis of the Cylindrical Core Test Facility Evaluation-Model Test agree very well with the results of the experiment. The good agreement obtained verifies the multidimensional analysis capability of the TRAC code. Because of the steep radial power profile, the importance of using fine noding in the core region was demonstrated (as compared with the poorer results obtained from an earlier pretest prediction that used a coarsely noded model).
Ex-Vessel Core Melt Modeling Comparison between MELTSPREAD-CORQUENCH and MELCOR 2.1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robb, Kevin R.; Farmer, Mitchell; Francis, Matthew W.
System-level code analyses by both United States and international researchers predict major core melting, bottom head failure, and corium-concrete interaction for Fukushima Daiichi Unit 1 (1F1). Although system codes such as MELCOR and MAAP are capable of capturing a wide range of accident phenomena, they currently do not contain detailed models for evaluating some ex-vessel core melt behavior. However, specialized codes containing more detailed modeling are available for melt spreading, such as MELTSPREAD, as well as for long-term molten corium-concrete interaction (MCCI) and debris coolability, such as CORQUENCH. In a preceding study, Enhanced Ex-Vessel Analysis for Fukushima Daiichi Unit 1: Melt Spreading and Core-Concrete Interaction Analyses with MELTSPREAD and CORQUENCH, the MELTSPREAD-CORQUENCH codes predicted that the 1F1 core melt readily cooled, in contrast to predictions by MELCOR. The user community has taken notice and is in the process of updating its system codes, specifically MAAP and MELCOR, to improve and reduce conservatism in their ex-vessel core melt models. This report investigates why the MELCOR v2.1 code, compared to the MELTSPREAD and CORQUENCH 3.03 codes, yields differing predictions of ex-vessel melt progression. To accomplish this, the differences in the treatment of the ex-vessel melt with respect to melt spreading and long-term coolability are examined. The differences in modeling approaches are summarized, and a comparison of example code predictions is provided.
Improvements and applications of COBRA-TF for stand-alone and coupled LWR safety analyses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Avramova, M.; Cuervo, D.; Ivanov, K.
2006-07-01
The advanced thermal-hydraulic subchannel code COBRA-TF has recently been improved and applied to stand-alone and coupled LWR core calculations at the Pennsylvania State Univ. in cooperation with AREVA NP GmbH (Germany) and the Technical Univ. of Madrid. To enable COBRA-TF for academic and industrial applications, including safety margin evaluations and LWR core design analyses, the code programming, numerics, and basic models were revised and substantially improved. The code has undergone an extensive validation, verification, and qualification program. (authors)
NASA Astrophysics Data System (ADS)
Karriem, Veronica V.
Nuclear reactor design incorporates the study and application of nuclear physics, nuclear thermal hydraulics, and nuclear safety. Theoretical models and numerical methods implemented in computer programs are utilized to analyze and design nuclear reactors. The focus of this PhD study is the development of an advanced high-fidelity multi-physics code system to perform reactor core analysis for design and safety evaluations of research TRIGA-type reactors. The fuel management and design code system TRIGSIMS was further developed to fulfill the function of a reactor design and analysis code system for the Pennsylvania State Breazeale Reactor (PSBR). TRIGSIMS, which is currently in use at the PSBR, is a fuel management tool that incorporates the depletion code ORIGEN-S (part of the SCALE system) and the Monte Carlo neutronics solver MCNP. The diffusion theory code ADMARC-H is used within TRIGSIMS to accelerate the MCNP calculations. It manages the data and fuel isotopic content and stores it for future burnup calculations. The contribution of this work is the development of an improved version of TRIGSIMS, named TRIGSIMS-TH. TRIGSIMS-TH incorporates a thermal hydraulic module based on the advanced subchannel code COBRA-TF (CTF). CTF provides the temperature feedback needed in the multi-physics calculations as well as the thermal-hydraulic modeling capability for the reactor core. The temperature feedback model uses the CTF-provided local moderator and fuel temperatures in the cross-section modeling for the ADMARC-H and MCNP calculations. To perform efficient critical control rod calculations, a methodology for applying a control rod position was implemented in TRIGSIMS-TH, making this code system a modeling and design tool for future core loadings. The new TRIGSIMS-TH is a computer program that interlinks various other functional reactor analysis tools. It consists of MCNP5, ADMARC-H, ORIGEN-S, and CTF. CTF was coupled with both MCNP and ADMARC-H to provide the heterogeneous temperature distribution throughout the core. Each of these codes is written in its own programming language, performs its own function, and outputs its own set of data. TRIGSIMS-TH provides effective data manipulation and transfer between the different codes. With the implementation of the feedback and control-rod-position modeling methodologies, the TRIGSIMS-TH calculations are more accurate and in better agreement with measured data. The PSBR is unique in many ways, and there are no "off-the-shelf" codes that can model this design in its entirety. In particular, the PSBR has an open core design, which is cooled by natural convection. Combining several codes into a unique system brings many challenges. It also requires substantial knowledge of both the operation and the core design of the PSBR. This reactor has been in operation for decades, and there is a fair amount of prior study and development of both PSBR thermal hydraulics and neutronics. Measured data are also available for various core loadings and can be used for validation activities. The previous studies and developments in PSBR modeling also serve as a guide for assessing the findings of the work herein. In order to incorporate new methods and codes into the existing TRIGSIMS, a re-evaluation of various components of the code was performed to assure the accuracy and efficiency of the existing CTF/MCNP5/ADMARC-H multi-physics coupling. A new set of ADMARC-H diffusion coefficients and cross sections was generated using the SERPENT code.
This was needed because the previous data had not been generated with thermal hydraulic feedback, and the ARO position had been used as the critical rod position. The B4C was re-evaluated for this update. The data exchange between ADMARC-H and MCNP5 was modified. The basic core model was given the flexibility to allow for various changes within the core model, and this feature was implemented in TRIGSIMS-TH. The PSBR core in the new code model can be expanded and changed. This allows the new code to be used as a modeling tool for the design and analysis of future core loadings.
WWER-1000 core and reflector parameters investigation in the LR-0 reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zaritsky, S. M.; Alekseev, N. I.; Bolshagin, S. N.
2006-07-01
Measurements and calculations carried out in the core and reflector of the WWER-1000 mock-up are discussed: - the determination of the pin-to-pin power distribution in the core by means of gamma-scanning of fuel pins, and pin-to-pin calculations with the Monte Carlo code MCU-REA and the diffusion codes MOBY-DICK (with WIMS-D4 cell-constant preparation) and RADAR; - fast-neutron spectrum measurements by the proton-recoil method inside the experimental channel in the core and inside the channel in the baffle, and the corresponding calculations in the P3S8 approximation of the discrete ordinates method with the DORT code and the BUGLE-96 library; - neutron spectrum evaluations (adjustment) in the same channels in the energy region 0.5 eV-18 MeV, based on activation and solid-state track detector measurements. (authors)
Evaluating QR Code Case Studies Using a Mobile Learning Framework
ERIC Educational Resources Information Center
Rikala, Jenni
2014-01-01
The aim of this study was to evaluate the feasibility of Quick Response (QR) codes and mobile devices in the context of Finnish basic education. The feasibility was analyzed through a mobile learning framework, which includes the core characteristics of mobile learning. The study is part of a larger research project where the aim is to develop a…
Development of Safety Analysis Code System of Beam Transport and Core for Accelerator Driven System
NASA Astrophysics Data System (ADS)
Aizawa, Naoto; Iwasaki, Tomohiko
2014-06-01
A safety analysis code system for the beam transport and core of an accelerator-driven system (ADS) has been developed for analyses of beam transients, such as changes in the shape and position of the incident beam. The code system consists of a beam transport analysis part and a core analysis part. TRACE 3-D is employed in the beam transport analysis part, and the shape and incident position of the beam at the target are calculated. In the core analysis part, the neutronics, thermal-hydraulics, and cladding failure analyses are performed with the ADS dynamic calculation code ADSE, on the basis of the external source database calculated by PHITS and the cross-section database calculated by SRAC, together with programs for thermoelastic and creep cladding-failure analysis. Using this code system, beam transient analyses were performed for the ADS proposed by the Japan Atomic Energy Agency. As a result, the cladding temperature rises rapidly and plastic deformation occurs within several seconds. In addition, the cladding is evaluated to fail by creep within a hundred seconds. These results show that beam transients can cause cladding failure.
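As an illustration of the creep-failure evaluation described above, a minimal life-fraction (time-fraction) calculation might look like the following sketch; the rupture-time correlation and the temperature transient are invented for the example, not the paper's cladding model:

```python
# Creep life-fraction sketch: accumulate D = sum(dt / t_rupture(T)) over a
# cladding temperature transient; failure is flagged when D reaches 1.
# Both the correlation and the transient below are illustrative assumptions.
import math

def t_rupture(T_kelvin):
    # Assumed Arrhenius-type rupture-time correlation (seconds).
    return 1.0e-9 * math.exp(2.5e4 / T_kelvin)

def temperature(t):
    # Assumed transient: cladding ramps from 820 K toward 1050 K.
    return 820.0 + 230.0 * (1.0 - math.exp(-t / 5.0))

dt, t, damage = 0.1, 0.0, 0.0
while damage < 1.0 and t < 200.0:
    damage += dt / t_rupture(temperature(t))
    t += dt
print(f"cumulative creep damage {damage:.2f} at t = {t:.1f} s")
```

With these (assumed) numbers the damage fraction reaches unity within a few tens of seconds, mirroring the qualitative behavior the abstract reports.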
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salko, Robert K; Sung, Yixing; Kucukboyaci, Vefa
The Virtual Environment for Reactor Applications core simulator (VERA-CS) being developed by the Consortium for Advanced Simulation of Light Water Reactors (CASL) includes coupled neutronics, thermal-hydraulics, and fuel temperature components with an isotopic depletion capability. The neutronics capability employed is based on MPACT, a three-dimensional (3-D) whole-core transport code. The thermal-hydraulics and fuel temperature models are provided by the COBRA-TF (CTF) subchannel code. As part of the CASL development program, the VERA-CS (MPACT/CTF) code system was applied to model and simulate the reactor core response with respect to departure from nucleate boiling ratio (DNBR) at the limiting time step of a postulated pressurized water reactor (PWR) main steamline break (MSLB) event initiated at hot zero power (HZP), either with offsite power available and the reactor coolant pumps in operation (high-flow case) or without offsite power, where the reactor core is cooled through natural circulation (low-flow case). The VERA-CS simulation was based on core boundary conditions from RETRAN-02 system transient calculations and STAR-CCM+ computational fluid dynamics (CFD) core inlet distribution calculations. The evaluation indicated that the VERA-CS code system is capable of modeling and simulating the quasi-steady-state reactor core response under the steamline break (SLB) accident condition, that the results are insensitive to uncertainties in the inlet flow distributions from the CFD simulations, and that the high-flow case is more DNB limiting than the low-flow case.
Overview and Current Status of Analyses of Potential LEU Design Concepts for TREAT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Connaway, H. M.; Kontogeorgakos, D. C.; Papadias, D. D.
2015-10-01
Neutronic and thermal-hydraulic analyses have been performed to evaluate the performance of different low-enriched uranium (LEU) fuel design concepts for the conversion of the Transient Reactor Test Facility (TREAT) from its current high-enriched uranium (HEU) fuel. TREAT is an experimental reactor developed to generate high neutron flux transients for the testing of nuclear fuels. The goal of this work was to identify an LEU design which can maintain the performance of the existing HEU core while continuing to operate safely. A wide variety of design options were considered, with a focus on minimizing peak fuel temperatures and optimizing the power coupling between the TREAT core and test samples. Designs were also evaluated to ensure that they provide sufficient reactivity and shutdown margin for each control rod bank. Analyses were performed using the core loading and experiment configuration of the historic M8 Power Calibration experiments (M8CAL). The Monte Carlo code MCNP was utilized for steady-state analyses, and transient calculations were performed with the point kinetics code TREKIN. Thermal analyses were performed with the COMSOL multi-physics code. Using the results of this study, a new LEU baseline design concept is being established, which will be evaluated in detail in a future report.
Processor-in-memory-and-storage architecture
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeBenedictis, Erik
A method and apparatus for performing reliable general-purpose computing. Each sub-core of a plurality of sub-cores of a processor core processes a same instruction at a same time. A code analyzer receives a plurality of residues that represents a code word corresponding to the same instruction and an indication of whether the code word is a memory address code or a data code from the plurality of sub-cores. The code analyzer determines whether the plurality of residues are consistent or inconsistent. The code analyzer and the plurality of sub-cores perform a set of operations based on whether the code word is a memory address code or a data code and a determination of whether the plurality of residues are consistent or inconsistent.
NASA Astrophysics Data System (ADS)
Sboev, A. G.; Ilyashenko, A. S.; Vetrova, O. A.
1997-02-01
The method of buckling evaluation implemented in the Monte Carlo code MCS is described. This method was applied to a calculational analysis of the well-known light water experiments TRX-1 and TRX-2. The analysis shows that there is no agreement among Monte Carlo results obtained in different ways: the MCS calculations with the given experimental bucklings; the MCS calculations with bucklings evaluated on the basis of full-core MCS direct simulations; the full-core MCNP and MCS direct simulations; and the MCNP and MCS calculations in which the results of cell calculations are corrected by coefficients taking into account the leakage from the core. The buckling values evaluated by full-core MCS calculations also differed from the experimental ones, especially in the case of TRX-1, where the difference corresponded to a 0.5 percent increase in the k-eff value.
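For context, the buckling enters a cell-level estimate of the effective multiplication factor through the standard leakage correction (illustrative textbook form, notation assumed):

$$ k_{\text{eff}} \approx \frac{k_\infty}{1 + M^2 B^2}, $$

where M² is the migration area and B² the buckling. This is the kind of leakage-corrected cell calculation compared against the full-core simulations above, and it makes clear how a mis-evaluated B² shifts k-eff at the few-tenths-of-a-percent level quoted.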
Swinburn, Boyd; Vandevijvere, Stefanie; Woodward, Alistair; Hornblow, Andrew; Richardson, Ann; Burlingame, Barbara; Borman, Barry; Taylor, Barry; Breier, Bernhard; Arroll, Bruce; Drummond, Bernadette; Grant, Cameron; Bullen, Chris; Wall, Clare; Mhurchu, Cliona Ni; Cameron-Smith, David; Menkes, David; Murdoch, David; Mangin, Dee; Lennon, Diana; Sarfati, Diana; Sellman, Doug; Rush, Elaine; Sopoaga, Faafetai; Thomson, George; Devlin, Gerry; Abel, Gillian; White, Harvey; Coad, Jane; Hoek, Janet; Connor, Jennie; Krebs, Jeremy; Douwes, Jeroen; Mann, Jim; McCall, John; Broughton, John; Potter, John D; Toop, Les; McCowan, Lesley; Signal, Louise; Beckert, Lutz; Elwood, Mark; Kruger, Marlena; Farella, Mauro; Baker, Michael; Keall, Michael; Skeaff, Murray; Thomson, Murray; Wilson, Nick; Chandler, Nicholas; Reid, Papaarangi; Priest, Patricia; Brunton, Paul; Crampton, Peter; Davis, Peter; Gendall, Philip; Howden-Chapman, Philippa; Taylor, Rachael; Edwards, Richard; Beaglehole, Robert; Doughty, Robert; Scragg, Robert; Gauld, Robin; McGee, Robert; Jackson, Rod; Hughes, Roger; Mulder, Roger; Bonita, Ruth; Kruger, Rozanne; Casswell, Sally; Derrett, Sarah; Ameratunga, Shanthi; Denny, Simon; Hales, Simon; Pullon, Sue; Wells, Susan; Cundy, Tim; Blakely, Tony
2017-02-17
Reducing the exposure of children and young people to the marketing of unhealthy foods is a core strategy for reducing the high overweight and obesity prevalence in this population. The Advertising Standards Authority (ASA) has recently reviewed its self-regulatory codes and proposed a revised single code on advertising to children. This article evaluates the proposed code against eight criteria for an effective code, which were included in a submission to the ASA review process from over 70 New Zealand health professors. The evaluation found that the proposed code largely represents no change or uncertain change from the existing codes, and cannot be expected to provide substantial protection for children and young people from the marketing of unhealthy foods. Government regulations will be needed to achieve this important outcome.
Improvement of Speckle Contrast Image Processing by an Efficient Algorithm.
Steimers, A; Farnung, W; Kohl-Bareis, M
2016-01-01
We demonstrate an efficient algorithm for the temporal- and spatial-based calculation of speckle contrast for the imaging of blood flow by laser speckle contrast analysis (LASCA). It reduces the numerical complexity of the necessary calculations, facilitates multi-core and many-core implementations of the speckle analysis, and decouples the SNR from the temporal or spatial resolution. The new algorithm was evaluated for both spatial- and temporal-based analysis of speckle patterns with different image sizes and numbers of recruited pixels, as sequential, multi-core, and many-core code.
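For reference, the quantity being computed is the speckle contrast K = σ/⟨I⟩ over a sliding window; a minimal (unoptimized) spatial implementation, with an invented test image, is sketched below. The paper's contribution is reorganizing exactly these running sums for sequential, multi-core, and many-core execution.

```python
# Minimal spatial speckle-contrast map: K = std / mean over a sliding window.
# Window size and the synthetic test image are illustrative assumptions.
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(img, win=7):
    mean = uniform_filter(img, win)
    mean_sq = uniform_filter(img * img, win)
    var = np.maximum(mean_sq - mean**2, 0.0)   # running-sum variance
    return np.sqrt(var) / np.maximum(mean, 1e-12)

rng = np.random.default_rng(0)
img = rng.exponential(1.0, size=(256, 256))    # fully developed speckle
K = speckle_contrast(img)
print(K.mean())   # ~1 for fully developed, unblurred speckle
```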
BNL program in support of LWR degraded-core accident analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ginsberg, T.; Greene, G.A.
1982-01-01
Two major sources of loading on dry water reactor containments are steam generation from core debris-water thermal interactions and molten core-concrete interactions. Experiments are in progress at BNL in support of analytical model development related to aspects of the above containment loading mechanisms. The work supports the development and evaluation of the CORCON (Muir, 1981) and MARCH (Wooton, 1980) computer codes. Progress in the two programs is described in this paper. 8 figures.
Evaluation Criteria for Nursing Student Application of Evidence-Based Practice: A Delphi Study.
Bostwick, Lina; Linden, Lois
2016-06-01
Core clinical evaluation criteria do not exist for measuring prelicensure baccalaureate nursing students' application of evidence-based practice (EBP) during direct care assignments. The study objective was to achieve consensus among EBP nursing experts to create clinical criteria for faculty to use in evaluating students' application of EBP principles. A three-round Delphi method was used. Experts were invited to participate in Web-based surveys. Data were analyzed using qualitative coding and categorizing. Quantitative analyses were descriptive calculations for rating and ranking. Expert consensus occurred in the Delphi rounds. The study provides a set of 10 core clinical evaluation criteria for faculty evaluating students' progression toward competency in their application of EBP. A baccalaureate program curriculum requiring the use of Bostwick's EBP Core Clinical Evaluation Criteria will provide a clear definition for understanding basic core EBP competence as expected for the assessment of student learning. [J Nurs Educ. 2016;55(5):336-341.]. Copyright 2016, SLACK Incorporated.
León-Flández, K; Rico-Gómez, A; Moya-Geromin, M Á; Romero-Fernández, M; Bosqued-Estefania, M J; Damián, J; López-Jurado, L; Royo-Bordonada, M Á
2017-09-01
To evaluate compliance levels with the Spanish Code of self-regulation of food and drink advertising directed at children under the age of 12 years (Publicidad, Actividad, Obesidad, Salud [PAOS] Code) in 2012, and to compare these against the figures for 2008. Cross-sectional study. Television advertisements of food and drinks (AFD) were recorded over 7 days in 2012 (8am-midnight) on five Spanish channels popular with children. AFD were classified as core (nutrient-rich/low-calorie products), non-core (nutrient-poor/calorie-rich products) or miscellaneous. Compliance with each standard of the PAOS Code was evaluated; an AFD was deemed fully compliant when it met all the standards. Two thousand five hundred and eighty-two AFDs came within the purview of the PAOS Code. Among the standards registering the highest levels of non-compliance were those regulating the suitability of the information presented (79.4%) and those prohibiting the use of characters popular with children (25%). Overall non-compliance with the Code was greater in 2012 than in 2008 (88.3% vs 49.3%). Non-compliance was highest for advertisements screened on children's/youth channels (92.3% vs. 81.5%; P < 0.001) and for those aired outside the enhanced-protection time slot (89.3% vs. 86%; P = 0.015). Non-compliance with the PAOS Code is higher than in 2008. Given the lack of effectiveness of self-regulation, a statutory system should be adopted to ban AFD directed at minors, or at least restrict it to healthy products. Copyright © 2017 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Dunham, R. S.
1976-01-01
FORTRAN-coded out-of-core equation solvers that use direct methods to solve symmetric banded systems of simultaneous algebraic equations are presented. Banded, frontal, and column (skyline) solvers were studied, as well as solvers that can partition the working area and thus fit into whatever core memory is available. Comparison timings are presented for several typical two-dimensional and three-dimensional continuum-type grids of elements, with and without midside nodes. Extensive conclusions are also given.
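A minimal modern analogue of such a direct banded solve (using SciPy's symmetric banded Cholesky routine on an illustrative tridiagonal system; not the report's FORTRAN solvers) is sketched below:

```python
# Direct solve of a symmetric positive-definite banded system, the same
# operation class as the banded/skyline solvers above. Test system assumed.
import numpy as np
from scipy.linalg import solveh_banded

n = 10
# Upper banded storage: row 0 = superdiagonal, row 1 = main diagonal.
ab = np.zeros((2, n))
ab[0, 1:] = -1.0          # superdiagonal
ab[1, :] = 2.0            # main diagonal
b = np.ones(n)

x = solveh_banded(ab, b)  # banded Cholesky, O(n * bandwidth^2) work
A = (np.diag(2.0 * np.ones(n))
     + np.diag(-np.ones(n - 1), 1)
     + np.diag(-np.ones(n - 1), -1))
print(np.allclose(A @ x, b))   # True
```

The out-of-core variants in the report obtain the same factorization while streaming column (skyline) or frontal blocks through whatever core memory is available.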
BNL severe-accident sequence experiments and analysis program. [PWR; BWR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greene, G.A.; Ginsberg, T.; Tutu, N.K.
1983-01-01
In the analysis of degraded core accidents, the two major sources of pressure loading on light water reactor containments are: steam generation from core debris-water thermal interactions; and molten core-concrete interactions. Experiments are in progress at BNL in support of analytical model development related to aspects of the above containment loading mechanisms. The work supports development and evaluation of the CORCON (Muir, 1981) and MARCH (Wooton, 1980) computer codes. Progress in the two programs is described.
Design and optimization of a portable LQCD Monte Carlo code using OpenACC
NASA Astrophysics Data System (ADS)
Bonati, Claudio; Coscetti, Simone; D'Elia, Massimo; Mesiti, Michele; Negro, Francesco; Calore, Enrico; Schifano, Sebastiano Fabio; Silvi, Giorgio; Tripiccione, Raffaele
The present panorama of HPC architectures is extremely heterogeneous, ranging from traditional multi-core CPU processors, supporting a wide class of applications but delivering moderate computing performance, to many-core Graphics Processing Units (GPUs), exploiting aggressive data-parallelism and delivering higher performance for streaming computing applications. In this scenario, code portability (and performance portability) becomes necessary for easy maintainability of applications; this is very relevant in scientific computing, where code changes are very frequent, making it tedious and error-prone to keep different code versions aligned. In this work, we present the design and optimization of a state-of-the-art production-level LQCD Monte Carlo application, using the directive-based OpenACC programming model. OpenACC abstracts parallel programming to a descriptive level, relieving programmers from specifying how codes should be mapped onto the target architecture. We describe the implementation of a code fully written in OpenACC, and show that we are able to target several different architectures, including state-of-the-art traditional CPUs and GPUs, with the same code. We also measure performance, evaluating the computing efficiency of our OpenACC code on several architectures, comparing with GPU-specific implementations and showing that a good level of performance portability can be reached.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farmer, M. T.
MELTSPREAD3 is a transient one-dimensional computer code that has been developed to predict the gravity-driven flow and freezing behavior of molten reactor core materials (corium) in containment geometries. Predictions can be made for corium flowing across surfaces under either dry or wet cavity conditions. The spreading surfaces that can be selected are steel, concrete, a user-specified material (e.g., a ceramic), or an arbitrary combination thereof. The corium can have a wide range of compositions of reactor core materials that includes distinct oxide phases (predominantly Zr, and steel oxides) plus metallic phases (predominantly Zr and steel). The code requires input that describes the containment geometry, melt "pour" conditions, and cavity atmospheric conditions (i.e., pressure, temperature, and cavity flooding information). For cases in which the cavity contains a preexisting water layer at the time of RPV failure, melt jet breakup and particle bed formation can be calculated mechanistically given the time-dependent melt pour conditions (input data) as well as the heatup and boiloff of water in the melt impingement zone (calculated). For core debris impacting either the containment floor or previously spread material, the code calculates the transient hydrodynamics and heat transfer which determine the spreading and freezing behavior of the melt. The code predicts conditions at the end of the spreading stage, including melt relocation distance, depth and material composition profiles, substrate ablation profile, and wall heatup. Code output can be used as input to other models such as CORQUENCH that evaluate long term core-concrete interaction behavior following the transient spreading stage. MELTSPREAD3 was originally developed to investigate BWR Mark I liner vulnerability, but has been substantially upgraded and applied to other reactor designs (e.g., the EPR), and more recently to the plant accidents at Fukushima Daiichi. The most recent round of improvements that are documented in this report have been specifically implemented to support industry in developing Severe Accident Water Management (SAWM) strategies for Boiling Water Reactors.
NASA Astrophysics Data System (ADS)
Nishiura, Daisuke; Furuichi, Mikito; Sakaguchi, Hide
2015-09-01
The computational performance of a smoothed particle hydrodynamics (SPH) simulation is investigated for three types of current shared-memory parallel computer devices: many integrated core (MIC) processors, graphics processing units (GPUs), and multi-core CPUs. We are especially interested in efficient shared-memory allocation methods for each chipset, because the efficient data access patterns differ between compute unified device architecture (CUDA) programming for GPUs and OpenMP programming for MIC processors and multi-core CPUs. We first introduce several parallel implementation techniques for the SPH code, and then examine these on our target computer architectures to determine the most effective algorithms for each processor unit. In addition, we evaluate the effective computing performance and power efficiency of the SPH simulation on each architecture, as these are critical metrics for overall performance in a multi-device environment. In our benchmark test, the GPU is found to produce the best arithmetic performance as a standalone device unit, and gives the most efficient power consumption. The multi-core CPU obtains the most effective computing performance. The computational speed of the MIC processor on Xeon Phi approached that of two Xeon CPUs. This indicates that using MICs is an attractive choice for existing SPH codes on multi-core CPUs parallelized by OpenMP, as it gains computational acceleration without the need for significant changes to the source code.
Optimizing Tensor Contraction Expressions for Hybrid CPU-GPU Execution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, Wenjing; Krishnamoorthy, Sriram; Villa, Oreste
2013-03-01
Tensor contractions are generalized multidimensional matrix multiplication operations that widely occur in quantum chemistry. Efficient execution of tensor contractions on Graphics Processing Units (GPUs) requires several challenges to be addressed, including index permutation and small dimension-sizes reducing thread block utilization. Moreover, to apply the same optimizations to various expressions, we need a code generation tool. In this paper, we present our approach to automatically generate CUDA code to execute tensor contractions on GPUs, including management of data movement between CPU and GPU. To evaluate our tool, GPU-enabled code is generated for the most expensive contractions in CCSD(T), a key coupled cluster method, and incorporated into NWChem, a popular computational chemistry suite. For this method, we demonstrate speedup over a factor of 8.4 using one GPU (instead of one core per node) and over 2.6 when utilizing the entire system using a hybrid CPU+GPU solution with 2 GPUs and 5 cores (instead of 7 cores per node). Finally, we analyze the implementation behavior on future GPU systems.
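As a sketch of the operation class in question (indices and sizes are toy values, and NumPy's einsum stands in for the generated CUDA kernels), a CCSD(T)-like contraction with index permutation looks like this:

```python
# Tensor contraction with index permutation:
# C[i,j,a,b] = sum_{e,f} A[e,f,i,j] * B[a,b,e,f]
# Dimension sizes are illustrative, not CCSD(T) production sizes.
import numpy as np

h, p = 4, 6                        # occupied / virtual index sizes
rng = np.random.default_rng(1)
A = rng.standard_normal((p, p, h, h))
B = rng.standard_normal((p, p, p, p))

# Contract over e,f and permute the output index order in one step.
C = np.einsum('efij,abef->ijab', A, B)
print(C.shape)                     # (4, 4, 6, 6)
```

The GPU challenges the paper addresses are visible even here: the permutation 'efij,abef->ijab' implies strided memory access, and small h, p values leave thread blocks underutilized without the paper's code-generation optimizations.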
Nuclear fuel management optimization using genetic algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeChaine, M.D.; Feltus, M.A.
1995-07-01
The code-independent genetic algorithm reactor optimization (CIGARO) system has been developed to optimize nuclear reactor loading patterns. It uses genetic algorithms (GAs) and a code-independent interface, so any reactor physics code (e.g., CASMO-3/SIMULATE-3) can be used to evaluate the loading patterns. The system is compared to other GA-based loading pattern optimizers. Tests were carried out to maximize the beginning-of-cycle k-eff for a pressurized water reactor core loading, with a penalty function to limit power peaking. The CIGARO system performed well, increasing the k-eff after lowering the peak power. Tests of a prototype parallel evaluation method showed the potential for a significant speedup.
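A minimal sketch of the penalized objective described above (the numbers and the penalty weight are illustrative; in CIGARO the k-eff and peaking values would come from the external physics code):

```python
# Penalty-function fitness: maximize BOC k-eff, with a one-sided penalty
# for power peaking above a limit. All constants are assumed for the sketch.
def loading_pattern_fitness(keff, peak_power, peak_limit=1.50, weight=5.0):
    """k-eff minus a penalty that activates only when peaking exceeds the limit."""
    return keff - weight * max(0.0, peak_power - peak_limit)

print(loading_pattern_fitness(1.182, 1.46))  # feasible: fitness equals k-eff
print(loading_pattern_fitness(1.190, 1.58))  # peaked: penalized well below it
```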
Evaluating nuclear physics inputs in core-collapse supernova models
NASA Astrophysics Data System (ADS)
Lentz, E.; Hix, W. R.; Baird, M. L.; Messer, O. E. B.; Mezzacappa, A.
Core-collapse supernova models depend on the details of the nuclear and weak interaction physics inputs just as they depend on the details of the macroscopic physics (transport, hydrodynamics, etc.), numerical methods, and progenitors. We present preliminary results from our ongoing comparison studies of nuclear and weak interaction physics inputs to core-collapse supernova models using the spherically symmetric, general relativistic, neutrino radiation hydrodynamics code Agile-Boltztran. We focus on comparisons of the effects of the nuclear EoS and the effects of improving the opacities, particularly neutrino-nucleon interactions.
Core Physics and Kinetics Calculations for the Fissioning Plasma Core Reactor
NASA Technical Reports Server (NTRS)
Butler, C.; Albright, D.
2007-01-01
Highly efficient, compact nuclear reactors would provide high-specific-impulse spacecraft propulsion. This analysis and numerical simulation effort has focused on the technical feasibility issues related to the nuclear design characteristics of a novel reactor design. The Fissioning Plasma Core Reactor (FPCR) is a shockwave-driven gaseous-core nuclear reactor which uses magnetohydrodynamic effects to generate electric power for propulsion. The nuclear design of the system depends on two major calculations: core physics calculations and kinetics calculations. Presently, the core physics calculations have concentrated on the use of the MCNP4C code; however, initial results from other codes such as COMBINE/VENTURE and SCALE4a are also shown. Several significant modifications were made to the ISR-developed QCALC1 kinetics analysis code. These modifications include testing the state of the core materials, an improvement to the calculation of the material properties of the core, the addition of an adiabatic core temperature model, and an improvement of the first-order reactivity correction model. The accuracy of these modifications has been verified, and the accuracy of the point-core kinetics model used by the QCALC1 code has also been validated. Previously calculated kinetics results for the FPCR were described in the ISR report, "QCALC1: A Code for FPCR Kinetics Model Feasibility Analysis," dated June 1, 2002.
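For context, a point-core kinetics model of the kind QCALC1 uses integrates the point-reactor kinetics equations; a minimal one-delayed-group sketch with illustrative parameters (not FPCR data) is:

```python
# Point-reactor kinetics, one delayed-neutron group:
#   dn/dt = ((rho - beta)/Lambda) n + lambda c
#   dc/dt = (beta/Lambda) n - lambda c
# Parameters below are generic illustrative values.
import numpy as np
from scipy.integrate import solve_ivp

beta, lam, Lambda = 0.0065, 0.08, 1.0e-5   # delayed fraction, decay const, gen. time
rho = 0.5 * beta                            # step reactivity insertion (sub-prompt)

def pk(t, y):
    n, c = y
    return [((rho - beta) / Lambda) * n + lam * c,
            (beta / Lambda) * n - lam * c]

y0 = [1.0, beta / (lam * Lambda)]           # equilibrium precursor concentration
sol = solve_ivp(pk, (0.0, 10.0), y0, method="LSODA", rtol=1e-8)
print(f"relative power after 10 s: {sol.y[0, -1]:.2f}")
```

Reactivity feedback, such as the adiabatic core temperature model mentioned above, would enter by making rho a function of the integrated power history rather than a constant.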
Towards a Consolidated Approach for the Assessment of Evaluation Models of Nuclear Power Reactors
Epiney, A.; Canepa, S.; Zerkak, O.; ...
2016-11-02
The STARS project at the Paul Scherrer Institut (PSI) has adopted the TRACE thermal-hydraulic (T-H) code for best-estimate system transient simulations of the Swiss Light Water Reactors (LWRs). For analyses involving interactions between system and core, a coupling of TRACE with the SIMULATE-3K (S3K) LWR core simulator has also been developed. In this configuration, the TRACE code and associated nuclear power reactor simulation models play a central role in achieving a comprehensive safety analysis capability. Thus, efforts have now been undertaken to consolidate the validation strategy by implementing a more rigorous and structured assessment approach for TRACE applications involving either only system T-H evaluations or requiring interfaces to, e.g., detailed core or fuel behavior models. The first part of this paper presents the preliminary concepts of this validation strategy. The principle is to systematically track the evolution of a given set of predicted physical Quantities of Interest (QoIs) over a multidimensional parametric space where each of the dimensions represents the evolution of specific analysis aspects, including e.g. code version, transient-specific simulation methodology, and model "nodalisation". If properly set up, such an environment should provide code developers and code users with persistent (less affected by user effect) and quantified information (sensitivity of QoIs) on the applicability of a simulation scheme (codes, input models, methodology) for steady-state and transient analysis of full LWR systems. Through this, for each given transient/accident, critical paths of the validation process can be identified that could then translate into defining reference schemes to be applied for downstream predictive simulations. In order to illustrate this approach, the second part of this paper presents a first application of this validation strategy to an inadvertent blowdown event that occurred in a Swiss BWR/6. The transient was initiated by the spurious actuation of the Automatic Depressurization System (ADS). The validation approach progresses through a number of dimensions here: first, the same BWR system simulation model is assessed for different versions of the TRACE code, up to the most recent one. The second dimension is the "nodalisation" dimension, where changes to the input model are assessed. The third dimension is the "methodology" dimension; in this case, imposed power and an updated TRACE core model are investigated. For each step in each validation dimension, a common set of QoIs is investigated. For the steady-state results, these include fuel temperature distributions. For the transient part of the present study, the evaluated QoIs include the system pressure evolution and water carry-over into the steam line.
With or without you: predictive coding and Bayesian inference in the brain
Aitchison, Laurence; Lengyel, Máté
2018-01-01
Two theoretical ideas have emerged recently with the ambition to provide a unifying functional explanation of neural population coding and dynamics: predictive coding and Bayesian inference. Here, we describe the two theories and their combination into a single framework: Bayesian predictive coding. We clarify how the two theories can be distinguished, despite sharing core computational concepts and addressing an overlapping set of empirical phenomena. We argue that predictive coding is an algorithmic / representational motif that can serve several different computational goals of which Bayesian inference is but one. Conversely, while Bayesian inference can utilize predictive coding, it can also be realized by a variety of other representations. We critically evaluate the experimental evidence supporting Bayesian predictive coding and discuss how to test it more directly. PMID:28942084
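As a concrete toy example of the predictive-coding motif (a one-layer Gaussian generative model in which prediction errors drive gradient-based inference on the latents; all constants are invented, not from the paper):

```python
# Predictive-coding sketch: infer latents v for observation x under the
# generative model x ~ N(W v, sigma_x^2), prior v ~ N(0, sigma_v^2).
# Gradient ascent on the log posterior is driven by two prediction errors.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((10, 3)) * 0.5    # generative weights: x_hat = W v
x = W @ np.array([1.0, -0.5, 2.0])        # observation from known latents

v = np.zeros(3)                           # latent estimate (prior mean)
sigma_x, sigma_v, lr = 1.0, 1.0, 0.1
for _ in range(200):
    eps_x = (x - W @ v) / sigma_x**2      # bottom-up prediction error
    eps_v = -v / sigma_v**2               # top-down (prior) error
    v += lr * (W.T @ eps_x + eps_v)       # error-driven inference update

print(np.round(v, 2))  # near the true latents, shrunk toward the prior
```

Read through the Bayesian lens, this same update is MAP inference in a linear-Gaussian model, which is exactly the overlap between the two theories that the paper's "Bayesian predictive coding" framing makes explicit.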
Wong, Alex W K; Lau, Stephen C L; Fong, Mandy W M; Cella, David; Lai, Jin-Shei; Heinemann, Allen W
2018-04-03
To determine the extent to which the content of the Quality of Life in Neurological Disorders (Neuro-QoL) measure covers the International Classification of Functioning, Disability and Health (ICF) Core Sets for multiple sclerosis (MS), stroke, spinal cord injury (SCI), and traumatic brain injury (TBI), using summary linkage indicators. Content analysis by linking content of the Neuro-QoL to corresponding ICF codes of each Core Set for MS, stroke, SCI, and TBI. Three academic centers. None. None. Four summary linkage indicators proposed by MacDermid et al. were estimated to compare the content coverage between the Neuro-QoL and the ICF codes of the Core Sets for MS, stroke, SCI, and TBI. The Neuro-QoL represented 20% to 30% of the Core Set codes for the different conditions, with more codes covered in the Core Sets for MS (29%), stroke (28%), and TBI (28%) than in those for SCI in the long-term (20%) and early post-acute (19%) contexts. The Neuro-QoL represented nearly half of the unique Activity and Participation codes (43%-49%) and less than one third of the unique Body Function codes (12%-32%). It represented fewer Environmental Factors codes (2%-6%) and no Body Structures codes. Absolute linkage indicators found that at least 60% of Neuro-QoL items were linked to Core Set codes (63%-95%), but many items covered the same codes, as revealed by the unique linkage indicators (7%-13%), suggesting high concept redundancy among items. The Neuro-QoL links more closely to the ICF Core Sets for stroke, MS, and TBI than to those for SCI, and primarily covers the activity and participation ICF domains. Other instruments are needed to address concepts not measured by the Neuro-QoL when a comprehensive health assessment is needed. Copyright © 2018 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
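A summary linkage indicator of the kind reported above reduces to a set-overlap percentage; a toy sketch with invented code lists (not actual Core Set or Neuro-QoL content):

```python
# Coverage-style linkage indicator: percentage of ICF Core Set codes that
# an instrument's linked codes cover. Code lists below are invented.
core_set = {"b130", "b152", "b730", "d450", "d850", "e310"}
instrument_links = {"b152", "d450", "d850", "d920"}

covered = core_set & instrument_links
print(f"{100 * len(covered) / len(core_set):.0f}% of Core Set codes covered")
# -> 50% (3 of 6), analogous to the 20-30% coverage reported above
```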
Investigation on the Core Bypass Flow in a Very High Temperature Reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hassan, Yassin
2013-10-22
Uncertainties associated with the core bypass flow are some of the key issues that directly influence the coolant mass flow distribution and magnitude, and thus the operational core temperature profiles, in the very high-temperature reactor (VHTR). Designers will attempt to configure the core geometry so that the core cooling flow rate magnitude and distribution conform to the design values. The objective of this project is to study the bypass flow both experimentally and computationally. Researchers will develop experimental data using state-of-the-art particle image velocimetry in a small test facility, and will attempt to obtain full-field temperature distributions using racks of thermocouples. The experimental data are intended to benchmark computational fluid dynamics (CFD) codes by providing detailed information, and are urgently needed for validation of the CFD codes. The project tasks are the following: • Construct a small-scale bench-top experiment to resemble the bypass flow between the graphite blocks, varying parameters to address their impact on bypass flow; wall roughness of the graphite block walls, spacing between the blocks, and temperature of the blocks are some of the parameters to be tested. • Perform CFD to evaluate pre- and post-test calculations and turbulence models, including sensitivity studies to achieve high accuracy. • Develop state-of-the-art large eddy simulation (LES) using appropriate subgrid modeling. • Develop models to be used in systems thermal-hydraulics codes to account for and estimate the bypass flows; these computer programs include, among others, RELAP3D, MELCOR, GAMMA, and GAS-NET. The actual core bypass flow rate may vary considerably from the design value. Although the uncertainty of the bypass flow rate is not known, some sources have stated that the bypass flow rates in the Fort St. Vrain reactor were between 8 and 25 percent of the total reactor mass flow rate. If bypass flow rates are on the high side, the quantity of cooling flow through the core may be considerably less than the nominal design value, causing some regions of the core to operate at temperatures in excess of the design values. These effects are postulated to lead to localized hot regions in the core that must be considered when evaluating VHTR operational and accident scenarios.
Preliminary engineering design of sodium-cooled CANDLE core
NASA Astrophysics Data System (ADS)
Takaki, Naoyuki; Namekawa, Azuma; Yoda, Tomoyuki; Mizutani, Akihiko; Sekimoto, Hiroshi
2012-06-01
The CANDLE burning process is characterized by the autonomous shifting of the burning region with constant reactivity and constant spatial power distribution. Evaluations of such a critical burning process using widely used neutron diffusion and burnup codes, under some realistic engineering constraints, are valuable to confirm the technical feasibility of the CANDLE concept and to put the idea into a concrete core design. In the first part of this paper, it is discussed whether the sustainable and stable CANDLE burning process can be reproduced even with conventional core analysis tools such as SLAROM and CITATION-FBR. As a result, it is certainly possible to demonstrate it if the proper core configuration and initial fuel composition required for a CANDLE core are applied in the analysis. In the latter part, an example of a concrete image of a sodium-cooled, metal-fuel, 2000 MWt CANDLE core is presented, assuming the emerging and inevitable technology of recladding. The core satisfies engineering design criteria including cladding temperature, pressure drop, linear heat rate, cumulative damage fraction (CDF) of the cladding, fast neutron fluence, and sodium void reactivity, as defined in the Japanese FBR design project. It can be concluded that it is feasible to design a CANDLE core using conventional codes while satisfying some realistic engineering design constraints, assuming that recladding at a certain time interval is technically feasible.
Flow Analysis of a Gas Turbine Low-Pressure Subsystem
NASA Technical Reports Server (NTRS)
Veres, Joseph P.
1997-01-01
The NASA Lewis Research Center is coordinating a project to numerically simulate aerodynamic flow in the complete low-pressure subsystem (LPS) of a gas turbine engine. The numerical model solves the three-dimensional Navier-Stokes flow equations through all components within the low-pressure subsystem as well as the external flow around the engine nacelle. The Advanced Ducted Propfan Analysis Code (ADPAC), which is being developed jointly by Allison Engine Company and NASA, is the Navier-Stokes flow code being used for LPS simulation. The majority of the LPS project is being done under a NASA Lewis contract with Allison. Other contributors to the project are NYMA and the University of Toledo. For this project, the Energy Efficient Engine designed by GE Aircraft Engines is being modeled. This engine includes a low-pressure system and a high-pressure system. An inlet, a fan, a booster stage, a bypass duct, a lobed mixer, a low-pressure turbine, and a jet nozzle comprise the low-pressure subsystem within this engine. The tightly coupled flow analysis evaluates aerodynamic interactions between all components of the LPS. The high-pressure core engine of this engine is simulated with a one-dimensional thermodynamic cycle code in order to provide boundary conditions to the detailed LPS model. This core engine consists of a high-pressure compressor, a combustor, and a high-pressure turbine. The three-dimensional LPS flow model is coupled to the one-dimensional core engine model to provide a "hybrid" flow model of the complete gas turbine Energy Efficient Engine. The resulting hybrid engine model evaluates the detailed interaction between the LPS components at design and off-design engine operating conditions while considering the lumped-parameter performance of the core engine.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doerfler, Douglas; Austin, Brian; Cook, Brandon
There are many potential issues associated with deploying the Intel Xeon Phi™ (code-named Knights Landing [KNL]) manycore processor in a large-scale supercomputer. One in particular is the ability to fully utilize the high-speed communications network, given that the serial performance of a Xeon Phi™ core is a fraction of that of a Xeon® core. In this paper, we take a look at the trade-offs associated with allocating enough cores to fully utilize the Aries high-speed network versus cores dedicated to computation, e.g., the trade-off between MPI and OpenMP. In addition, we evaluate new features of Cray MPI in support of KNL, such as internode optimizations. We also evaluate one-sided programming models such as Unified Parallel C. We quantify the impact of the above trade-offs and features using a suite of National Energy Research Scientific Computing Center applications.
Study of the performance of deterministic solvers on a sodium-cooled fast reactor core
NASA Astrophysics Data System (ADS)
Bay, Charlotte
Next-generation reactors, in particular the SFR design, represent a true challenge for current codes and solvers, which are used mainly for thermal cores. There is no guarantee that their capabilities can be carried over directly to a fast neutron spectrum, or to major design differences. It is therefore necessary to assess the validity of these solvers and their potential shortfalls in the case of fast neutron reactors. As part of an internship with CEA (France), and at the instigation of the EPM Nuclear Institute, this study covers the following codes: DRAGON/DONJON, ERANOS, PARIS and APOLLO3. The precision assessment was performed against the Monte Carlo code TRIPOLI4. Only the core calculation was of interest, namely the precision and speed of the numerical methods. The lattice calculation was not part of the study, that is to say nuclear data, self-shielding, or isotopic compositions; nor were burnup or time-evolution effects tackled. The study consists of two main steps: first, evaluating the sensitivity of each solver to its calculation parameters and obtaining its optimal calculation set; then, comparing the solvers in terms of precision and speed, by collecting the usual quantities (effective multiplication factor, reaction rate maps) but also more specific quantities that are crucial to SFR design, namely control rod worth and sodium void effect. Calculation time is also a key factor. Whatever conclusions or recommendations may be drawn from this study, they must first of all be applied within similar frameworks, that is to say small fast-neutron cores with hexagonal geometry. Any extension to large cores will have to be demonstrated in follow-up studies.
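Two of the SFR-specific quantities named above, control rod worth and sodium void effect, are both static reactivity differences between two core states. A small sketch of that bookkeeping, using the standard reactivity definition; the k-eff values are made up for illustration.

```python
# Sketch: reactivity worth between two core states from their k-eff values.
# rho = (k - 1) / k; a worth is the difference, usually quoted in pcm.
def reactivity_pcm(k_eff: float) -> float:
    return (k_eff - 1.0) / k_eff * 1.0e5  # 1 pcm = 1e-5 dk/k

def worth_pcm(k_reference: float, k_perturbed: float) -> float:
    return reactivity_pcm(k_perturbed) - reactivity_pcm(k_reference)

# Illustrative numbers only:
print(worth_pcm(1.00250, 0.99310))  # rods inserted -> negative rod worth
print(worth_pcm(1.00250, 1.00480))  # sodium voided -> positive void effect
```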
Development and validation of a low-frequency modeling code for high-moment transmitter rod antennas
NASA Astrophysics Data System (ADS)
Jordan, Jared Williams; Sternberg, Ben K.; Dvorak, Steven L.
2009-12-01
The goal of this research is to develop and validate a low-frequency modeling code for high-moment transmitter rod antennas to aid in the design of future low-frequency TX antennas with high magnetic moments. To accomplish this goal, a quasi-static modeling algorithm was developed to simulate finite-length, permeable-core rod antennas. This quasi-static analysis is applicable at low frequencies where eddy currents are negligible, and it can handle solid or hollow cores with an insulation layer of finite thickness between the antenna's windings and its core. The theory was programmed in Matlab, and the modeling code can predict the TX antenna's gain, maximum magnetic moment, saturation current, series inductance, and core series loss resistance, provided the user enters the complex permeability corresponding to the desired core magnetic flux density. To utilize the linear modeling code for nonlinear core materials, it is necessary to use the correct complex permeability for a specific core magnetic flux density. To test the modeling code, we demonstrated that it can accurately predict changes in the electrical parameters associated with variations in the rod length and the core thickness for antennas made out of low-carbon steel wire. These tests demonstrate that the modeling code successfully predicted the changes in the rod antenna characteristics under high-current nonlinear conditions due to changes in the physical dimensions of the rod, provided that the flux density in the core was held constant to keep the complex permeability from changing.
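A first-order check on such a rod-antenna model is the classic apparent (rod) permeability of a finite permeable core, which is dominated by the demagnetizing factor once the material permeability is large. The sketch below uses the standard prolate-ellipsoid approximation; it is a textbook estimate, not the authors' Matlab code.

```python
# Sketch: apparent permeability of a finite permeable rod, using the
# prolate-ellipsoid approximation for the demagnetizing factor N:
#   mu_rod = mu_r / (1 + N * (mu_r - 1))
import math

def demagnetizing_factor(length_to_diameter: float) -> float:
    """Longitudinal demagnetizing factor, valid for length/diameter >> 1."""
    m = length_to_diameter
    return (math.log(2.0 * m) - 1.0) / m**2

def rod_permeability(mu_r: float, length_to_diameter: float) -> float:
    n = demagnetizing_factor(length_to_diameter)
    return mu_r / (1.0 + n * (mu_r - 1.0))

# A long rod saturates toward ~1/N no matter how large mu_r becomes:
print(rod_permeability(mu_r=5000.0, length_to_diameter=50.0))
```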
Update and evaluation of decay data for spent nuclear fuel analyses
NASA Astrophysics Data System (ADS)
Simeonov, Teodosi; Wemple, Charles
2017-09-01
Studsvik's approach to spent nuclear fuel analyses combines isotopic concentrations and multi-group cross-sections, calculated by the CASMO5 or HELIOS2 lattice transport codes, with core irradiation history data from the SIMULATE5 reactor core simulator and tabulated isotopic decay data. These data sources are used and processed by the code SNF to predict spent nuclear fuel characteristics. Recent advances in the generation procedure for the SNF decay data are presented. The SNF decay data includes basic data, such as decay constants, atomic masses and nuclide transmutation chains; radiation emission spectra for photons from radioactive decay, alpha-n reactions, bremsstrahlung, and spontaneous fission, electrons and alpha particles from radioactive decay, and neutrons from radioactive decay, spontaneous fission, and alpha-n reactions; decay heat production; and electro-atomic interaction data for bremsstrahlung production. These data are compiled from fundamental (ENDF, ENSDF, TENDL) and processed (ESTAR) sources for nearly 3700 nuclides. A rigorous evaluation procedure of internal consistency checks and comparisons to measurements and benchmarks, and code-to-code verifications is performed at the individual isotope level and using integral characteristics on a fuel assembly level (e.g., decay heat, radioactivity, neutron and gamma sources). Significant challenges are presented by the scope and complexity of the data processing, a dearth of relevant detailed measurements, and reliance on theoretical models for some data.
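At its core, the decay-heat part of such a spent-fuel calculation is a summation over nuclides: each isotope contributes its activity times its recoverable decay energy. A minimal sketch with a made-up two-nuclide inventory standing in for the ~3700-nuclide SNF library; parent-daughter transmutation chains, which the real data handle explicitly, are neglected here.

```python
# Sketch: decay-heat summation P(t) = sum_i N_i(0) exp(-lambda_i t) lambda_i Q_i
# Two invented nuclides stand in for the ~3700-nuclide library; chains ignored.
import math

# nuclide -> (decay constant [1/s], recoverable energy per decay [J], atoms)
inventory = {
    "nuclide-A": (2.9e-10, 1.1e-13, 1.0e24),
    "nuclide-B": (7.3e-9,  9.6e-14, 2.0e22),
}

def decay_heat_watts(t_seconds: float) -> float:
    total = 0.0
    for lam, q_joules, n0 in inventory.values():
        activity = n0 * math.exp(-lam * t_seconds) * lam  # decays per second
        total += activity * q_joules
    return total

print(decay_heat_watts(t_seconds=3600.0 * 24 * 365))  # after one year
```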
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simunovic, Srdjan
2015-02-16
CASL's modeling and simulation technology, the Virtual Environment for Reactor Applications (VERA), incorporates coupled physics and science-based models, state-of-the-art numerical methods, modern computational science, integrated uncertainty quantification (UQ) and validation against data from operating pressurized water reactors (PWRs), single-effect experiments, and integral tests. The computational simulation component of VERA is the VERA Core Simulator (VERA-CS). The core simulator is the specific collection of multi-physics computer codes used to model and deplete an LWR core over multiple cycles. The core simulator has a single common input file that drives all of the different physics codes. The parser code, VERAIn, converts VERA Input into an XML file that is used as input to the different VERA codes.
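The single-common-input idea is easy to picture: one user-facing description is parsed once and re-emitted in a form each physics code can consume. The sketch below shows that pattern with Python's standard xml.etree; the tag names and fields are invented and do not reflect the actual VERAIn schema.

```python
# Sketch: turn one common problem description into an XML file that
# downstream physics codes could consume. Tag and field names are invented.
import xml.etree.ElementTree as ET

common_input = {
    "case": "demo-assembly",
    "power_MW": 17.67,
    "inlet_temp_K": 565.0,
    "depletion_GWd_per_MTU": [0.0, 0.5, 1.0, 2.0],
}

root = ET.Element("ParameterList", name=common_input["case"])
for key, value in common_input.items():
    if key == "case":
        continue
    text = " ".join(map(str, value)) if isinstance(value, list) else str(value)
    ET.SubElement(root, "Parameter", name=key).text = text

ET.ElementTree(root).write("demo-assembly.xml")
```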
Fuel burnup analysis for IRIS reactor using MCNPX and WIMS-D5 codes
NASA Astrophysics Data System (ADS)
Amin, E. A.; Bashter, I. I.; Hassan, Nabil M.; Mustafa, S. S.
2017-02-01
The International Reactor Innovative and Secure (IRIS) reactor is a compact power reactor designed with special features. It contains an Integral Fuel Burnable Absorber (IFBA). The core is heterogeneous both axially and radially. This work provides a full-core burnup analysis for the IRIS reactor using the MCNPX and WIMS-D5 codes. Criticality calculations, radial and axial power distributions, and the nuclear peaking factor at different stages of burnup were studied. Effective multiplication factor values for the core were estimated by coupling the MCNPX code with the WIMS-D5 code and compared with SAS2H/KENO-V values at different stages of burnup. The two calculation codes show good agreement and correlation. The radial and axial power values for the full core were also compared with published results from the SAS2H/KENO-V code (at the beginning and end of reactor operation). The behavior of both the radial and axial power distributions is quite similar to the published SAS2H/KENO-V data. The peaking factor values estimated in the present work are close to those calculated by the SAS2H/KENO-V code.
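Code-to-code agreement on the effective multiplication factor at each burnup step is usually quoted as a reactivity difference in pcm rather than a raw delta-k. A small sketch of that comparison; the burnup points and k-eff values below are invented, not the paper's results.

```python
# Sketch: code-to-code k-eff comparison per burnup step, in pcm.
# Burnup points and k-eff values are invented for illustration.
burnup_GWd_per_MTU = [0.0, 5.0, 10.0, 15.0]
keff_code_a = [1.1602, 1.1121, 1.0689, 1.0273]  # e.g., MCNPX/WIMS-D5 style
keff_code_b = [1.1588, 1.1139, 1.0671, 1.0290]  # e.g., SAS2H/KENO-V style

for bu, ka, kb in zip(burnup_GWd_per_MTU, keff_code_a, keff_code_b):
    diff_pcm = (kb - ka) / (ka * kb) * 1.0e5  # rho_b - rho_a = 1/ka - 1/kb
    print(f"{bu:5.1f} GWd/MTU: {diff_pcm:+7.1f} pcm")
```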
Fast 2D FWI on a multi- and many-core workstation.
NASA Astrophysics Data System (ADS)
Thierry, Philippe; Donno, Daniela; Noble, Mark
2014-05-01
Following the introduction of x86 co-processors (Xeon Phi) and the performance increase of standard 2-socket workstations using the latest 12-core E5-v2 x86-64 CPUs, we present here an MPI + OpenMP implementation of an acoustic 2D FWI (full waveform inversion) code which runs simultaneously on the CPUs and on the co-processors installed in a workstation. The main advantage of running a 2D FWI on a workstation is being able to quickly evaluate new features such as more complicated wave equations, new cost functions, finite-difference stencils or boundary conditions. Since the co-processor is made of 61 in-order x86 cores, each of them supporting up to 4 threads, this many-core can be seen as a shared memory SMP (symmetric multiprocessing) machine with its own IP address. Depending on the vendor, a single workstation can host several co-processors, turning the workstation into a personal cluster under the desk. The original Fortran 90 CPU version of the 2D FWI code is simply recompiled to get a Xeon Phi x86 binary. This multi- and many-core configuration uses standard compilers and associated MPI as well as math libraries under Linux; therefore, the cost of code development remains constant while computation time improves. We chose to implement the code in the so-called symmetric mode to fully use the capacity of the workstation, but we also evaluate the scalability of the code in native mode (i.e., running only on the co-processor) thanks to the Linux ssh and NFS capabilities. The usual care in optimization and SIMD vectorization is taken to ensure optimal performance and to analyze the application performance and bottlenecks on both platforms. The 2D FWI implementation uses finite-difference time-domain forward modeling and a quasi-Newton (L-BFGS algorithm) optimization scheme for the model parameter updates. Parallelization is achieved through standard MPI distribution of shot gathers and OpenMP domain decomposition within the co-processor. Taking advantage of the 16 GB of memory available on the co-processor, we are able to keep wavefields in memory and compute the gradient by cross-correlation of the forward and back-propagated wavefields needed by our time-domain FWI scheme, without heavy traffic on the I/O subsystem and PCIe bus. In this presentation we also review some simple methodologies for comparing performance expectations against measured performance, in order to estimate the optimization effort before starting any major modification or rewriting of research codes. The key message is the ease of use and development of this hybrid configuration, reaching not the absolute peak performance but an optimum that ensures the best balance between geophysical and software development effort.
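The model-update part of this FWI scheme, a quasi-Newton L-BFGS step driven by a misfit value and its gradient, can be sketched independently of the wave-propagation kernels. Below, an invented toy misfit stands in for the finite-difference forward modeling and the cross-correlation gradient; only the optimizer usage pattern is the point.

```python
# Sketch: L-BFGS model update as in FWI, with a toy misfit standing in
# for finite-difference modeling plus adjoint cross-correlation gradient.
import numpy as np
from scipy.optimize import minimize

observed = np.array([2.0, 1.0, 0.5])  # invented "data"

def misfit_and_gradient(model):
    predicted = model**2                # toy forward model (stand-in)
    residual = predicted - observed
    misfit = 0.5 * float(residual @ residual)
    gradient = 2.0 * model * residual   # adjoint of the toy forward model
    return misfit, gradient

m0 = np.ones(3)  # initial model
result = minimize(misfit_and_gradient, m0, jac=True, method="L-BFGS-B")
print(result.x)  # recovered toy "model parameters"
```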
Fukushima Daiichi Unit 1 Ex-Vessel Prediction: Core Concrete Interaction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robb, Kevin R; Farmer, Mitchell; Francis, Matthew W
Lower head failure and corium concrete interaction were predicted to occur at Fukushima Daiichi Unit 1 (1F1) by several different system-level code analyses, including MELCOR v2.1 and MAAP5. Although these codes capture a wide range of accident phenomena, they do not contain detailed models for ex-vessel core melt behavior. However, specialized codes exist for analysis of ex-vessel melt spreading (e.g., MELTSPREAD) and long-term debris coolability (e.g., CORQUENCH). On this basis, an analysis was carried out to further evaluate ex-vessel behavior for 1F1 using MELTSPREAD and CORQUENCH. Best-estimate melt pour conditions predicted by MELCOR v2.1 and MAAP5 were used as input. MELTSPREAD was then used to predict the spatially dependent melt conditions and extent of spreading during relocation from the vessel. The results of the MELTSPREAD analysis are reported in a companion paper. This information was used as input for the long-term debris coolability analysis with CORQUENCH.
Fukushima Daiichi Unit 1 ex-vessel prediction: Core melt spreading
Farmer, M. T.; Robb, K. R.; Francis, M. W.
2016-10-31
Lower head failure and corium-concrete interaction were predicted to occur at Fukushima Daiichi Unit 1 (1F1) by several different system-level code analyses, including MELCOR v2.1 and MAAP5. Although these codes capture a wide range of accident phenomena, they do not contain detailed models for ex-vessel core melt behavior. However, specialized codes exist for analysis of ex-vessel melt spreading (e.g., MELTSPREAD) and long-term debris coolability (e.g., CORQUENCH). On this basis, an analysis has been carried out to further evaluate ex-vessel behavior for 1F1 using MELTSPREAD and CORQUENCH. Best-estimate melt pour conditions predicted by MELCOR v2.1 and MAAP5 were used as input. MELTSPREAD was then used to predict the spatially-dependent melt conditions and extent of spreading during relocation from the vessel. Lastly, this information was then used as input for the long-term debris coolability analysis with CORQUENCH that is reported in a companion paper.
Production Level CFD Code Acceleration for Hybrid Many-Core Architectures
NASA Technical Reports Server (NTRS)
Duffy, Austen C.; Hammond, Dana P.; Nielsen, Eric J.
2012-01-01
In this work, a novel graphics processing unit (GPU) distributed sharing model for hybrid many-core architectures is introduced and employed in the acceleration of a production-level computational fluid dynamics (CFD) code. The latest generation of graphics hardware allows multiple processor cores to simultaneously share a single GPU through concurrent kernel execution. This feature has allowed the NASA FUN3D code to be accelerated in parallel with up to four processor cores sharing a single GPU. For codes to scale and fully use resources on these and next-generation machines, they will need to employ some type of GPU sharing model, as presented in this work. Findings include the effects of GPU sharing on overall performance. A discussion of the inherent challenges that parallel unstructured CFD codes face in accelerator-based computing environments is included, with considerations for future generation architectures. This work was completed by the author in August 2010, and reflects the analysis and results of that time.
NASA Astrophysics Data System (ADS)
Arora, Vanita; Mulaveesala, Ravibabu
2017-06-01
In recent years, InfraRed Thermography (IRT) has become a widely accepted non-destructive testing technique for evaluating the structural integrity of composite sandwich structures due to its full-field, remote, fast and in-service inspection capabilities. This paper presents a novel infrared thermographic approach, Golay complementary coded thermal wave imaging, to detect disbonds in a sandwich structure having face sheets made from Glass/Carbon Fibre Reinforced (GFR/CFR) laminates and a wooden-block core.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Downar, Thomas
This report summarizes the current status of VERA-CS Verification and Validation for PWR Core Follow operation and proposes a multi-phase plan for continuing VERA-CS V&V in FY17 and FY18. The proposed plan recognizes the hierarchical nature of a multi-physics code system such as VERA-CS and the importance of first achieving an acceptable level of V&V on each of the single-physics codes before focusing on the V&V of the coupled-physics solution. The report summarizes the V&V of each of the single-physics code systems currently used for core follow analysis (i.e., MPACT, CTF, Multigroup Cross Section Generation, and BISON / Fuel Temperature Tables) and proposes specific actions to achieve a uniformly acceptable level of V&V in FY17. The report also recognizes the ongoing development of other codes important for PWR Core Follow (e.g., TIAMAT, MAMBA3D) and proposes Phase II (FY18) VERA-CS V&V activities in which those codes will also reach an acceptable level of V&V. The report then summarizes the current status of VERA-CS multi-physics V&V for PWR Core Follow and the ongoing PWR Core Follow V&V activities for FY17. An automated procedure and output data format are proposed for standardizing the output of core follow calculations and automatically generating tables and figures for the VERA-CS LaTeX file. A set of acceptance metrics is also proposed for the evaluation and assessment of core follow results; these would be used within the script to automatically flag any results which require further analysis or more detailed explanation prior to being added to the VERA-CS validation base. After the automation scripts have been completed and tested using BEAVRS, the VERA-CS plan proposes that the Watts Bar cycle depletion cases be performed with the new cross section library and included in the first draft of the new VERA-CS manual for release at the end of PoR15. Also, within the constraints imposed by the proprietary nature of plant data, as many as possible of the FY17 AMA Plant Core Follow cases should also be included in the VERA-CS manual at the end of PoR15. After completion of the ongoing development of TIAMAT for fully coupled, full-core calculations with VERA-CS / BISON 1.5D, and after the completion of the refactoring of MAMBA3D for CIPS analysis in FY17, selected cases from the VERA-CS validation base should be performed, beginning with the legacy cases of Watts Bar and BEAVRS in PoR16. Finally, as potential Phase III future work, some additional considerations are identified for extending the VERA-CS V&V to other reactor types such as the BWR.
Network Coding on Heterogeneous Multi-Core Processors for Wireless Sensor Networks
Kim, Deokho; Park, Karam; Ro, Won W.
2011-01-01
While network coding is well known for its efficiency and usefulness in wireless sensor networks, the excessive costs associated with decoding computation and complexity still hinder its adoption into practical use. On the other hand, high-performance microprocessors with heterogeneous multi-cores are expected to be used as processing nodes of wireless sensor networks in the near future. To this end, this paper introduces an efficient network coding algorithm developed for heterogeneous multi-core processors. The proposed idea is fully tested on one of the currently available heterogeneous multi-core processors, the Cell Broadband Engine. PMID:22164053
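The decoding cost mentioned above is essentially linear algebra over a finite field: a node collects coded packets (random linear combinations of the source packets) and inverts the coding matrix. A minimal sketch over GF(2) follows; practical systems typically use GF(2^8) for denser coding vectors, and all sizes here are invented.

```python
# Sketch: random linear network coding over GF(2). Packets are combined by
# random XORs; decoding is Gauss-Jordan elimination. Illustrative only.
import numpy as np

def gf2_decode(coeffs, coded):
    """Gauss-Jordan over GF(2); returns decoded payload, or None if singular."""
    k = coeffs.shape[0]
    aug = np.concatenate([coeffs, coded], axis=1).astype(np.uint8)
    for col in range(k):
        hits = np.flatnonzero(aug[col:, col])
        if hits.size == 0:
            return None                        # coding matrix not invertible
        pivot = col + hits[0]
        aug[[col, pivot]] = aug[[pivot, col]]  # move pivot row up
        for row in range(k):
            if row != col and aug[row, col]:
                aug[row] ^= aug[col]           # XOR-eliminate other rows
    return aug[:, k:]

rng = np.random.default_rng(1)
K, LEN = 4, 8                                  # generation size, payload bits
source = rng.integers(0, 2, size=(K, LEN), dtype=np.uint8)

decoded = None
while decoded is None:                         # retry until invertible over GF(2)
    coeffs = rng.integers(0, 2, size=(K, K), dtype=np.uint8)
    coded = coeffs @ source % 2                # the "received" coded packets
    decoded = gf2_decode(coeffs, coded)

assert np.array_equal(decoded, source)
```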
Convergence studies of deterministic methods for LWR explicit reflector methodology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Canepa, S.; Hursin, M.; Ferroukhi, H.
2013-07-01
The standard approach in modern 3-D core simulators, employed either for steady-state or transient simulations, is to use albedo coefficients or explicit reflectors at the core axial and radial boundaries. In the latter approach, few-group homogenized nuclear data are produced a priori with lattice transport codes using 2-D reflector models. Recently, the explicit reflector methodology of the deterministic CASMO-4/SIMULATE-3 code system was identified as potentially constituting one of the main sources of errors for core analyses of the Swiss operating LWRs, which all belong to the GII design. Considering that some of the new GIII designs will rely on very different reflector concepts, a review and assessment of the reflector methodology for various LWR designs appeared relevant. Therefore, the purpose of this paper is first to recall the concepts of the explicit reflector modelling approach as employed by CASMO/SIMULATE. Then, for selected reflector configurations representative of both GII and GIII designs, a benchmarking of the few-group nuclear data produced with the deterministic lattice code CASMO-4 and its successor CASMO-5 is conducted. On this basis, a convergence study with regard to geometrical requirements when using deterministic methods with 2-D homogeneous models is conducted, and the effect on the downstream 3-D core analysis accuracy is evaluated for a typical GII reflector design in order to assess the results against available plant measurements. (authors)
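The simpler alternative named at the start of this abstract, an albedo boundary condition, has a closed form in one-group diffusion theory: for a thick non-multiplying reflector it follows from the reflector's diffusion coefficient and absorption cross section. A small sketch of that textbook result, with invented material numbers; it is not the CASMO/SIMULATE methodology.

```python
# Sketch: one-group diffusion albedo of a thick non-multiplying reflector,
#   beta = (1 - 2*D*kappa) / (1 + 2*D*kappa),  kappa = sqrt(Sigma_a / D).
import math

def reflector_albedo(diffusion_cm: float, sigma_a_per_cm: float) -> float:
    kappa = math.sqrt(sigma_a_per_cm / diffusion_cm)  # inverse diffusion length
    x = 2.0 * diffusion_cm * kappa
    return (1.0 - x) / (1.0 + x)

# Illustrative water-reflector-like numbers (invented):
print(reflector_albedo(diffusion_cm=0.16, sigma_a_per_cm=0.02))  # ~0.8
```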
Bohlin, Jon; Eldholm, Vegard; Pettersson, John H O; Brynildsrud, Ola; Snipen, Lars
2017-02-10
The core genome consists of genes shared by the vast majority of a species and is therefore assumed to have been subjected to substantially stronger purifying selection than the more mobile elements of the genome, also known as the accessory genome. Here we examine intragenic base composition differences in core genomes and corresponding accessory genomes in 36 species, represented by the genomes of 731 bacterial strains, to assess the impact of selective forces on base composition in microbes. We also explore, in turn, how these results compare with findings for whole-genome intragenic regions. We found that GC content in coding regions is significantly higher in core genomes than in accessory genomes and whole genomes. Likewise, GC content variation within coding regions was significantly lower in core genomes than in accessory genomes and whole genomes. Relative entropy in coding regions, measured as the difference between observed and expected trinucleotide frequencies estimated from mononucleotide frequencies, was significantly higher in the core genomes than in accessory and whole genomes. Relative entropy was positively associated with coding region GC content within the accessory genomes, but not within the corresponding coding regions of core or whole genomes. The higher intragenic GC content and relative entropy, as well as the lower GC content variation, observed in the core genomes is most likely associated with selective constraints. It is unclear whether the positive association between GC content and relative entropy in the more mobile accessory genomes constitutes signatures of selection or selectively neutral processes.
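The relative-entropy measure used here, observed trinucleotide frequencies against expectations built from mononucleotide frequencies, amounts to a Kullback-Leibler divergence over the 64 trinucleotides. A minimal sketch under that reading (using overlapping trinucleotides; whether the study counts overlapping windows or codons is an assumption here):

```python
# Sketch: relative entropy of a DNA coding sequence, comparing observed
# trinucleotide frequencies with those expected from mononucleotide
# frequencies (zeroth-order model), as a Kullback-Leibler divergence.
import math
from collections import Counter
from itertools import product

def relative_entropy(seq: str) -> float:
    seq = seq.upper()
    mono = Counter(seq)
    tri = Counter(seq[i:i + 3] for i in range(len(seq) - 2))
    n_mono, n_tri = len(seq), len(seq) - 2
    rel_ent = 0.0
    for t in ("".join(p) for p in product("ACGT", repeat=3)):
        observed = tri[t] / n_tri
        expected = (mono[t[0]] / n_mono) * (mono[t[1]] / n_mono) \
                 * (mono[t[2]] / n_mono)
        if observed > 0 and expected > 0:
            rel_ent += observed * math.log2(observed / expected)
    return rel_ent  # bits; 0 means no structure beyond base composition

print(relative_entropy("ATGGCGGCTGCAGCGGCAGCTTAA" * 20))
```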
Development of the V4.2m5 and V5.0m0 Multigroup Cross Section Libraries for MPACT for PWR and BWR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Kang Seog; Clarno, Kevin T.; Gentry, Cole
2017-03-01
The MPACT neutronics module of the Consortium for Advanced Simulation of Light Water Reactors (CASL) core simulator is a 3-D whole-core transport code being developed for the CASL toolset, the Virtual Environment for Reactor Analysis (VERA). Key characteristics of the MPACT code include (1) a subgroup method for resonance self-shielding and (2) a whole-core transport solver with a 2-D/1-D synthesis method. The MPACT code requires a cross section library to support all the MPACT core simulation capabilities; this library is the component with the greatest influence on simulation accuracy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bess, John D.
2014-03-01
PROTEUS is a zero-power research reactor based on a cylindrical graphite annulus with a central cylindrical cavity. The graphite annulus remains basically the same for all experimental programs, but the contents of the central cavity are changed according to the type of reactor being investigated. Through most of its service history, PROTEUS has represented light-water reactors, but from 1992 to 1996 PROTEUS was configured as a pebble-bed reactor (PBR) critical facility and designated as HTR-PROTEUS. The nomenclature was used to indicate that this series consisted of High Temperature Reactor experiments performed in the PROTEUS assembly. During this period, seventeen critical configurations were assembled and various reactor physics experiments were conducted. These experiments included measurements of criticality, differential and integral control rod and safety rod worths, kinetics, reaction rates, water ingress effects, and small sample reactivity effects (Ref. 3). HTR-PROTEUS was constructed, and the experimental program was conducted, for the purpose of providing experimental benchmark data for assessment of reactor physics computer codes. Considerable effort was devoted to benchmark calculations as a part of the HTR-PROTEUS program. References 1 and 2 provide detailed data for use in constructing models for codes to be assessed. Reference 3 is a comprehensive summary of the HTR-PROTEUS experiments and the associated benchmark program. This document draws freely from these references. Only Cores 9 and 10 are evaluated in this benchmark report due to similarities in their construction. The other core configurations of the HTR-PROTEUS program are evaluated in their respective reports as outlined in Section 1.0. Cores 9 and 10 were evaluated and determined to be acceptable benchmark experiments.
An Architecture for Coexistence with Multiple Users in Frequency Hopping Cognitive Radio Networks
2013-03-01
the base WARP system, a custom IP core written in VHDL, and the Virtex IV's embedded PowerPC core with C code to implement the radio and hopset...shown in Appendix C as Figure C.2. All VHDL code necessary to implement this IP core is included in Appendix G. ...Figure 3.19: FPGA bus structure...subsystem functionality. A total of 1,430 lines of VHDL code were implemented for this research. ...library ieee; use ieee.std_logic_1164.all; use...
An approach for coupled-code multiphysics core simulations from a common input
Schmidt, Rodney; Belcourt, Kenneth; Hooper, Russell; ...
2014-12-10
This study describes an approach for coupled-code multiphysics reactor core simulations that is being developed by the Virtual Environment for Reactor Applications (VERA) project in the Consortium for Advanced Simulation of Light-Water Reactors (CASL). In this approach a user creates a single problem description, called the "VERAIn" common input file, to define and set up the desired coupled-code reactor core simulation. A preprocessing step accepts the VERAIn file and generates a set of fully consistent input files for the different physics codes being coupled. The problem is then solved using a single-executable coupled-code simulation tool applicable to the problem, which is built using VERA infrastructure software tools and the set of physics codes required for the problem of interest. The approach is demonstrated by performing an eigenvalue and power distribution calculation of a typical three-dimensional 17 × 17 assembly with thermal-hydraulic and fuel temperature feedback. All neutronics aspects of the problem (cross-section calculation, neutron transport, power release) are solved using the Insilico code suite and are fully coupled to a thermal-hydraulic analysis calculated by the Cobra-TF (CTF) code. The single-executable coupled-code (Insilico-CTF) simulation tool is created using several VERA tools, including LIME (Lightweight Integrating Multiphysics Environment for coupling codes), DTK (Data Transfer Kit), Trilinos, and TriBITS. Parallel calculations are performed on the Titan supercomputer at Oak Ridge National Laboratory using 1156 cores, and a synopsis of the solution results and code performance is presented. Finally, ongoing development of this approach is also briefly described.
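The eigenvalue-with-feedback calculation described above is, at heart, a fixed-point (Picard) iteration between the neutronics and thermal-hydraulics solutions. A stand-in sketch of that loop, with toy single-value models in place of Insilico and CTF; all numbers are invented.

```python
# Sketch: Picard iteration between neutronics and thermal-hydraulics,
# with toy single-value models standing in for Insilico and CTF.

def neutronics_power(fuel_temp_K: float) -> float:
    """Toy power level with Doppler-like negative feedback (invented)."""
    return 1000.0 * (1.0 - 2.0e-5 * (fuel_temp_K - 900.0))

def thermal_hydraulics_temp(power_MW: float) -> float:
    """Toy fuel temperature as a linear function of power (invented)."""
    return 600.0 + 0.3 * power_MW

temp = 900.0  # initial fuel temperature guess [K]
for iteration in range(50):
    power = neutronics_power(temp)        # "neutronics" solve
    new_temp = thermal_hydraulics_temp(power)  # "T-H" solve
    if abs(new_temp - temp) < 1e-6:       # converged feedback loop
        break
    temp = new_temp

print(f"converged in {iteration} iterations: {power:.2f} MW, {temp:.2f} K")
```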
Posttest analysis of the FFTF inherent safety tests
DOE Office of Scientific and Technical Information (OSTI.GOV)
Padilla, A. Jr.; Claybrook, S.W.
Inherent safety tests were performed during 1986 in the 400-MW (thermal) Fast Flux Test Facility (FFTF) reactor to demonstrate the effectiveness of an inherent shutdown device called the gas expansion module (GEM). The GEM device provided a strong negative reactivity feedback during loss-of-flow conditions by increasing the neutron leakage as a result of an expanding gas bubble. The best-estimate pretest calculations for these tests were performed using the IANUS plant analysis code (Westinghouse Electric Corporation proprietary code) and the MELT/SIEX3 core analysis code. These two codes were also used to perform the required operational safety analyses for the FFTF reactor and plant. Although it was intended to also use the SASSYS systems (core and plant) analysis code, the calibration of the SASSYS code for FFTF core and plant analysis was not completed in time to perform pretest analyses. The purpose of this paper is to present the results of the posttest analysis of the 1986 FFTF inherent safety tests using the SASSYS code.
Calculation of the Phenix end-of-life test 'Control Rod Withdrawal' with the ERANOS code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tiberi, V.
2012-07-01
The Institute of Radiological Protection and Nuclear Safety (IRSN) acts as technical support to the French public authorities. As such, IRSN is in charge of the safety assessment of operating and under-construction reactors, as well as future projects. In this framework, one current objective of IRSN is to evaluate the ability and accuracy of numerical tools to foresee the consequences of accidents. Neutronic studies enter the safety assessment from different points of view, among which are the core design and its protection system. They are necessary to evaluate the core behavior in case of accident, in order to assess the integrity of the first barrier and the absence of a prompt-criticality risk. To reach this objective, one main physical quantity has to be evaluated accurately: the neutronic power distribution in the core during the whole reactor lifetime. The Phenix end-of-life tests, carried out in 2009, aim at increasing the experience feedback on sodium-cooled fast reactors. These experiments were done in the framework of the development of the 4th generation of nuclear reactors. Ten tests were carried out: 6 on neutronic and fuel aspects, 2 on thermal hydraulics and 2 for the emergency shutdown. Two of them were chosen for an international exercise on thermal hydraulics and neutronics in the frame of an IAEA Coordinated Research Project. Concerning neutronics, the Control Rod Withdrawal test is relevant for safety because it allows evaluating the capability of calculation tools to compute the radial power distribution on fast reactor core configurations in which the flux field is strongly deformed. IRSN participated in this benchmark with the ERANOS code developed by CEA for fast reactor studies. This paper presents the results obtained in the framework of the benchmark activity. A relatively good agreement was found with the available measurements, considering the approximations made in the modeling. The work underlines the importance of burn-up calculations in order to have a fine mesh of core concentrations for the calculation of the power distribution. (authors)
A New Capability for Nuclear Thermal Propulsion Design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amiri, Benjamin W.; Nuclear and Radiological Engineering Department, University of Florida, Gainesville, FL 32611; Kapernick, Richard J.
2007-01-30
This paper describes a new capability for Nuclear Thermal Propulsion (NTP) design that has been developed, and presents the results of some analyses performed with this design tool. The purpose of the tool is to design to specified mission and material limits while maximizing system thrust-to-weight. The head end of the design tool utilizes the ROCket Engine Transient Simulation (ROCETS) code to generate a system design and system design requirements as inputs to the core analysis. ROCETS is a modular system-level code which has been used extensively in the liquid rocket engine industry for many years. The core design tool performs high-fidelity reactor core nuclear and thermal-hydraulic design analysis. At the heart of this process are two codes, TMSS-NTP and NTPgen, which together greatly automate the analysis, providing the capability to rapidly produce designs that meet all specified requirements while minimizing mass. A Perl-based command script, called CORE DESIGNER, controls the execution of these two codes and checks for convergence throughout the process. TMSS-NTP is executed first, to produce a suite of core designs that meet the specified reactor core mechanical, thermal-hydraulic and structural requirements. The suite of designs consists of a set of core layouts and, for each core layout, specific designs that span a range of core fuel volumes. NTPgen generates MCNPX models for each of the core designs from TMSS-NTP. Iterative analyses are performed in NTPgen until a reactor design (fuel volume) is identified for each core layout that meets cold and hot operation reactivity requirements and that is zoned to meet a radial core power distribution requirement.
Analysis of Phenix end-of-life natural convection test with the MARS-LMR code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jeong, H. Y.; Ha, K. S.; Lee, K. L.
The end-of-life test of the Phenix reactor performed by the CEA provided an opportunity to obtain reliable and valuable test data for the validation and verification of an SFR system analysis code. KAERI joined this international program for the analysis of the Phenix end-of-life natural circulation test, coordinated by the IAEA, in 2008. The main objectives of this study were to evaluate the capability of the existing SFR system analysis code MARS-LMR and to identify any limitations of the code. The analysis was performed in three stages: pre-test analysis, blind post-test analysis, and final post-test analysis. In the pre-test analysis, the design conditions provided by the CEA were used to obtain a prediction of the test. The blind post-test analysis was based on the test conditions measured during the tests, but the test results were not provided by the CEA. The final post-test analysis was performed to predict the test results as accurately as possible by improving the previous modeling of the test. Based on the pre-test analysis and the blind test analysis, the modeling of the heat structures in the hot pool and cold pool, the steel structures in the core, the heat loss from roof and vessel, and the flow path at the core outlet was reinforced in the final analysis. The results of the final post-test analysis can be characterized in three phases. In the early phase, MARS-LMR simulated the heat-up process correctly thanks to the enhanced heat structure modeling. In the mid phase, before the opening of the SG casing, the code successfully reproduced the decrease of the core outlet temperature. Finally, in the later phase, the increase in heat removal due to the opening of the SG casing was well predicted by the MARS-LMR code. (authors)
Adaptive Core Simulation Employing Discrete Inverse Theory - Part II: Numerical Experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abdel-Khalik, Hany S.; Turinsky, Paul J.
2005-07-15
Use of adaptive simulation is intended to improve the fidelity and robustness of important core attribute predictions such as core power distribution, thermal margins, and core reactivity. Adaptive simulation utilizes a selected set of past and current reactor measurements of reactor observables, i.e., in-core instrumentation readings, to adapt the simulation in a meaningful way. The companion paper, "Adaptive Core Simulation Employing Discrete Inverse Theory - Part I: Theory," describes in detail the theoretical background of the proposed adaptive techniques. This paper, Part II, demonstrates several computational experiments conducted to assess the fidelity and robustness of the proposed techniques. The intent is to check the ability of the adapted core simulator model to predict future core observables that are not included in the adaption, or core observables that are recorded at core conditions that differ from those at which the adaption is completed. Also, this paper demonstrates successful utilization of an efficient sensitivity analysis approach to calculate the sensitivity information required to perform the adaption for millions of input core parameters. Finally, this paper illustrates a useful application of adaptive simulation: reducing the inconsistencies between two different core simulator code systems, where the multitude of input data to one code is adjusted to enhance the agreement between both codes for important core attributes, i.e., core reactivity and power distribution. The robustness of such an application is also demonstrated.
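The adaption step itself can be written down compactly: given a sensitivity matrix of observables with respect to input parameters, a regularized least-squares update pulls the inputs toward agreement with the measurements. A sketch of that generic inverse-theory step, with invented dimensions and random data (the real problem adapts millions of parameters); this is a stand-in, not the paper's specific formulation.

```python
# Sketch: regularized minimum-norm least-squares adaption of simulator inputs.
# S holds d(observable)/d(parameter); all sizes and values are invented.
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_par = 20, 200                      # real problems: millions of inputs
S = rng.normal(size=(n_obs, n_par)) * 0.01  # sensitivity matrix
residual = rng.normal(size=n_obs) * 0.5     # measured minus predicted

lam = 1.0e-3  # Tikhonov regularization weight
# Minimum-norm update: dx = S^T (S S^T + lam I)^-1 r
dx = S.T @ np.linalg.solve(S @ S.T + lam * np.eye(n_obs), residual)

print(np.linalg.norm(S @ dx - residual))  # remaining observable mismatch
```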
Industry self regulation of television food advertising: responsible or responsive?
King, Lesley; Hebden, Lana; Grunseit, Anne; Kelly, Bridget; Chapman, Kathy; Venugopal, Kamalesh
2011-06-01
This study evaluated the impact of the Australian Food and Grocery Council (AFGC) self-regulatory initiative on unhealthy food marketing to children, introduced in January 2009. The study compared patterns of food advertising by AFGC and non-AFGC signatory companies in 2009, 2007 and 2006 on three Sydney commercial free-to-air television channels. Data were collected across seven days in May 2006 and 2007, and four days in May 2009. Advertised foods were coded as core, non-core and miscellaneous. Regression analyses for counts were used to examine the change in rates of advertisements across the sampled periods and the differential change between AFGC signatories and non-signatory companies between 2007 and 2009. Of the 36 food companies that advertised during the 2009 sample period, 14 were AFGC signatories. The average number of food advertisements decreased significantly from 7.0 per hour in 2007 to 5.9 in 2009. There was a significant reduction in non-core food advertising from 2007 to 2009 by AFGC signatories compared with non-signatory companies, overall and during peak times when the largest numbers of children were viewing. There was no reduction in the rate of non-core food advertisements across all companies, and these advertisements continue to comprise the majority during peak viewing times. While some companies have responded to pressure to reduce unhealthy food advertising on television, the impact of the self-regulatory code is limited by the extent of uptake by food companies. The continued advertising of unhealthy foods indicates that this self-regulatory code does not adequately protect children.
Neutronics Analysis of SMART Small Modular Reactor using SRAC 2006 Code
NASA Astrophysics Data System (ADS)
Ramdhani, Rahmi N.; Prastyo, Puguh A.; Waris, Abdul; Widayani; Kurniadi, Rizal
2017-07-01
Small modular reactors (SMRs) are part of a new generation of nuclear reactors being developed worldwide. One of the advantages of SMRs is the flexibility to adopt advanced design concepts and technology. SMART (System-integrated Modular Advanced ReacTor) is a small, integral-type PWR with a thermal power of 330 MW that has been developed by KAERI (Korea Atomic Energy Research Institute). The SMART core consists of 57 fuel assemblies based on the well-proven 17×17 array that has been used in Korean commercial PWRs. SMART is soluble-boron free, and the high initial reactivity is mainly controlled by burnable absorbers. The goal of this study is to perform a neutronics evaluation of the SMART core with UO2 as the main fuel. The neutronics calculation was performed using the PIJ and CITATION modules of the SRAC 2006 code with JENDL 3.3 as the nuclear data library.
Byrd, Gary D; Winkelstein, Peter
2014-10-01
Based on the authors' shared interest in the interprofessional challenges surrounding health information management, this study explores the degree to which librarians, informatics professionals, and core health professionals in medicine, nursing, and public health share common ethical behavior norms grounded in moral principles. Using the "Principlism" framework from a widely cited textbook of biomedical ethics, the authors analyze the statements in the ethical codes for associations of librarians (Medical Library Association [MLA], American Library Association, and Special Libraries Association), informatics professionals (American Medical Informatics Association [AMIA] and American Health Information Management Association), and core health professionals (American Medical Association, American Nurses Association, and American Public Health Association). This analysis focuses on whether and how the statements in these eight codes specify core moral norms (Autonomy, Beneficence, Non-Maleficence, and Justice), core behavioral norms (Veracity, Privacy, Confidentiality, and Fidelity), and other norms that are empirically derived from the code statements. These eight ethical codes share a large number of common behavioral norms based most frequently on the principle of Beneficence, then on Autonomy and Justice, but rarely on Non-Maleficence. The MLA and AMIA codes share the largest number of common behavioral norms, and these two associations also share many norms with the other six associations. The shared core of behavioral norms among these professions, all grounded in core moral principles, points to many opportunities for building effective interprofessional communication and collaboration regarding the development, management, and use of health information resources and technologies.
TREAT Transient Analysis Benchmarking for the HEU Core
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kontogeorgakos, D. C.; Connaway, H. M.; Wright, A. E.
2014-05-01
This work was performed to support the feasibility study on the potential conversion of the Transient Reactor Test Facility (TREAT) at Idaho National Laboratory from the use of high enriched uranium (HEU) fuel to the use of low enriched uranium (LEU) fuel. The analyses were performed by the GTRI Reactor Conversion staff at Argonne National Laboratory (ANL). The objective of this study was to benchmark the transient calculations against temperature-limited transients performed in the final operating HEU TREAT core configuration. The MCNP code was used to evaluate steady-state neutronics behavior, and the point kinetics code TREKIN was used to determine core power and energy during transients. The first part of the benchmarking process was to calculate with MCNP all the neutronic parameters required by TREKIN to simulate the transients: the transient rod-bank worth, the prompt neutron generation lifetime, the temperature reactivity feedback as a function of total core energy, and the core-average temperature and peak temperature as functions of total core energy. The results of these calculations were compared against measurements or against reported values as documented in the available TREAT reports. The heating of the fuel was simulated as an adiabatic process. The reported values were extracted from ANL reports, intra-laboratory memos and experiment logsheets, and in some cases it was not clear if the values were based on measurements, on calculations, or a combination of both. Therefore, it was decided to use the term "reported" values when referring to such data. The methods and results from the HEU core transient analyses will be used for the potential LEU core configurations to predict the converted (LEU) core's performance.
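The TREKIN side of this workflow, point kinetics driven by an inserted rod-bank reactivity with an energy-dependent temperature feedback and adiabatic fuel heating, can be sketched with a single delayed-neutron group. All parameter values below are illustrative placeholders, not TREAT data.

```python
# Sketch: one-delayed-group point kinetics with adiabatic energy feedback.
# All parameter values are illustrative; they are not TREAT data.
beta, lam, Lambda = 0.0073, 0.08, 9.0e-4  # delayed fraction, decay [1/s], gen. time [s]
alpha = -1.0e-3      # feedback reactivity per MJ of deposited energy (invented)
rho_insert = 0.0110  # step reactivity from the transient rod bank (invented)

p = 1.0                        # power [MW]
c = beta * p / (lam * Lambda)  # equilibrium delayed-neutron precursor level
energy = 0.0                   # adiabatically deposited core energy [MJ]
dt = 1.0e-5                    # explicit time step [s]

for _ in range(int(3.0 / dt)):             # a few seconds of the power burst
    rho = rho_insert + alpha * energy      # reactivity with adiabatic feedback
    dp = ((rho - beta) / Lambda) * p + lam * c
    dc = (beta / Lambda) * p - lam * c
    p += dp * dt
    c += dc * dt
    energy += p * dt                       # power integrates to core energy

print(f"final power {p:.3f} MW, deposited energy {energy:.2f} MJ")
```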
NASA Astrophysics Data System (ADS)
Olson, Richard F.
2013-05-01
Rendering of point-scatterer-based radar scenes for millimeter wave (mmW) seeker tests in real-time hardware-in-the-loop (HWIL) scene generation requires efficient algorithms and vector-friendly computer architectures for complex signal synthesis. New processor technology from Intel implements an extended 256-bit vector SIMD instruction set (AVX, AVX2) in a multi-core CPU design providing peak execution rates of hundreds of GigaFLOPS (GFLOPS) on one chip. Real-world mmW scene generation code can approach peak SIMD execution rates only after careful algorithm and source code design. An effective software design will maintain high computing intensity, emphasizing register-to-register SIMD arithmetic operations over data movement between CPU caches or off-chip memories. Engineers at the U.S. Army Aviation and Missile Research, Development and Engineering Center (AMRDEC) applied two basic parallel coding methods to assess new 256-bit SIMD multi-core architectures for mmW scene generation in HWIL. These include use of POSIX threads built on vector library functions and more portable, high-level parallel code based on compiler technology (e.g., OpenMP pragmas and SIMD autovectorization). Since CPU technology is rapidly advancing toward high processor core counts and TeraFLOPS peak SIMD execution rates, it is imperative that coding methods be identified which produce efficient and maintainable parallel code. This paper describes the algorithms used in point scatterer target model rendering, the parallelization of those algorithms, and the execution performance achieved on an AVX multi-core machine using the two basic parallel coding methods. The paper concludes with estimates for scale-up performance on upcoming multi-core technology.
Moats and Drawbridges: An Isolation Primitive for Reconfigurable Hardware Based Systems
2007-05-01
these systems, and after being run through an optimizing CAD tool the resulting circuit is a single entangled mess of gates and wires. To prevent the...translates MATLAB [48] algorithms into HDL, logic synthesis translates this HDL into a netlist, a synthesis tool uses a place-and-route algorithm to... [Figure: FPGA design toolflow from MATLAB/C code through HDL, logic synthesis, and place-and-route to a bitstream, spanning hard and soft µP cores and soft IP cores.]
Elaborate SMART MCNP Modelling Using ANSYS and Its Applications
NASA Astrophysics Data System (ADS)
Song, Jaehoon; Surh, Han-bum; Kim, Seung-jin; Koo, Bonsueng
2017-09-01
An MCNP 3-dimensional model can be widely used to evaluate various design parameters such as a core design or shielding design. Conventionally, a simplified 3-dimensional MCNP model is applied to calculate these parameters because of the cumbersomeness of modelling by hand. ANSYS has a function for converting the CAD 'stp' format into an MCNP input for the geometry part. Using ANSYS and a 3-dimensional CAD file, a very detailed and sophisticated MCNP 3-dimensional model can be generated. The MCNP model is applied to evaluate the assembly weighting factor at the ex-core detector of SMART, and the result is compared with a simplified MCNP SMART model and with the assembly weighting factor calculated by DORT, a deterministic Sn code.
Core belief content examined in a large sample of patients using online cognitive behaviour therapy.
Millings, Abigail; Carnelley, Katherine B
2015-11-01
Computerised cognitive behavioural therapy provides a unique opportunity to collect and analyse data regarding the idiosyncratic content of people's core beliefs about the self, others and the world. 'Beating the Blues' users recorded a core belief derived through the downward arrow technique. Core beliefs from 1813 mental health patients were coded into 10 categories. The most common were global self-evaluation, attachment, and competence. Women were more likely, and men less likely (than chance), to provide an attachment-related core belief; and men were more likely, and women less likely, to provide a self-competence-related core belief. This may be linked to gender differences in sources of self-esteem. Those who were suffering from anxiety were more likely than chance to provide power- and control-themed core beliefs and less likely to provide attachment core beliefs. Finally, those who had thoughts of suicide in the preceding week reported fewer competence-themed core beliefs and more global self-evaluation (e.g., 'I am useless') core beliefs than chance. Concurrent symptom level was not available. The sample was not nationally representative, and featured programme completers only. Men and women may focus on different core beliefs in the context of CBT. Those suffering anxiety may need a therapeutic focus on power and control. A complete rejection of the self (not just within one domain, such as competence) may be linked to thoughts of suicide. Future research should examine how individual differences and symptom severity influence core beliefs. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
Conceptual Core Analysis of Long Life PWR Utilizing Thorium-Uranium Fuel Cycle
NASA Astrophysics Data System (ADS)
Rouf; Su'ud, Zaki
2016-08-01
Conceptual core analysis of a long-life PWR utilizing thorium-uranium based fuel has been conducted. The purpose of this study is to evaluate the neutronic behavior of a reactor core using combined thorium and enriched uranium fuel. With this fuel composition, the core has a higher conversion ratio than one with conventional fuel, which allows a longer operation length. The simulation was performed using the SRAC Code System with the SRACLIB-JDL32 library. The calculations were carried out for (Th-U)O2 and (Th-U)C fuel with a uranium fraction of 30-40% and gadolinium (Gd2O3) as burnable poison at 0.0125%. The fuel composition was adjusted to obtain a burnup length of 10-15 years at a thermal power of 600-1000 MWt. Key properties such as uranium enrichment, fuel volume fraction, and percentage of uranium were evaluated. The core calculation in this study adopted an R-Z geometry divided into three regions, each with a different uranium enrichment. The results show the multiplication factor at every burnup step over the 15-year operation length, the power distribution behavior, the power peaking factor, and the conversion ratio. The optimum core design is achieved at a thermal power of 600 MWt, a uranium percentage of 35%, and a U-235 enrichment of 11-13%, with a 14-year operation length and axial and radial power peaking factors of about 1.5 and 1.2, respectively.
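For readers less familiar with the figures of merit quoted above, the power peaking factor is simply the ratio of peak to core-average power density. A minimal Python sketch (the profiles below are made-up illustrations, not the study's data):

    # Illustrative computation of axial and radial power peaking factors.
    # The power values below are invented for demonstration only.
    import numpy as np

    # hypothetical axial power-density profile (arbitrary units) for one channel
    axial_power = np.array([0.55, 0.80, 1.10, 1.35, 1.45, 1.35, 1.10, 0.80, 0.55])
    axial_ppf = axial_power.max() / axial_power.mean()

    # hypothetical assembly-averaged radial power map (quarter core)
    radial_power = np.array([[1.18, 1.10, 0.95],
                             [1.10, 1.02, 0.80],
                             [0.95, 0.80, 0.60]])
    radial_ppf = radial_power.max() / radial_power.mean()

    print(f"axial PPF ~ {axial_ppf:.2f}, radial PPF ~ {radial_ppf:.2f}")

With the study's quoted peaking factors of about 1.5 (axial) and 1.2 (radial), the hottest spot runs at roughly 1.5 x 1.2 = 1.8 times the core-average power density.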
NASA Astrophysics Data System (ADS)
Susilo, J.; Suparlina, L.; Deswandri; Sunaryo, G. R.
2018-02-01
The use of computer programs for the analysis of PWR-type core neutronic design parameters has been carried out in several previous studies. These studies included validation of the computer codes against neutronic parameter values obtained from measurements and benchmark calculations. In this study, validation and analysis of the AP1000 first-cycle core radial power peaking factor were performed using the CITATION module of the SRAC2006 computer code. The code had also been validated, with good results, against the criticality values of the VERA benchmark core. The AP1000 core power distribution calculation was done in two-dimensional X-Y geometry through quarter-core modeling. The purpose of this research is to determine the accuracy of the SRAC2006 code and the safety performance of the AP1000 core in its first operating cycle. The core calculations were carried out for several conditions: without Rod Cluster Control Assemblies (RCCA), with a single inserted RCCA (AO, M1, M2, MA, MB, MC, MD), and with multiple inserted RCCAs (MA + MB, MA + MB + MC, MA + MB + MC + MD, and MA + MB + MC + MD + M1). The maximum fuel rod power factor in a fuel assembly was assumed to be approximately 1.406. The analysis showed that the 2-dimensional CITATION module of the SRAC2006 code is accurate in AP1000 power distribution calculations without RCCA and with MA + MB RCCA insertion. The power peaking factors in the first operating cycle of the AP1000 core without RCCA, as well as with single and multiple RCCA insertions, are still below the safety limit value (less than about 1.798). In terms of the thermal power generated by the fuel assemblies, the AP1000 core in its first operating cycle can therefore be considered safe.
Interface requirements to couple thermal-hydraulic codes to severe accident codes: ATHLET-CD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trambauer, K.
1997-07-01
The system code ATHLET-CD is being developed by GRS in cooperation with IKE and IPSN. Its field of application comprises the whole spectrum of leaks and large breaks, as well as operational and abnormal transients for LWRs and VVERs. At present the analyses cover the in-vessel thermal-hydraulics, the early phases of core degradation, and fission product and aerosol release from the core and their transport in the reactor coolant system. The aim of the code development is to extend the simulation of core degradation up to failure of the reactor pressure vessel and to cover all physically reasonable accident sequences for western and eastern LWRs, including RBMKs. The ATHLET-CD structure is highly modular in order to include a manifold spectrum of models and to offer an optimum basis for further development. The code consists of four general modules to describe the reactor coolant system thermal-hydraulics, the core degradation, the fission product core release, and fission product and aerosol transport. Each general module consists of basic modules which correspond to the process to be simulated or to its specific purpose. Besides the code structure based on the physical modelling, the code follows four strictly separated steps during the course of a calculation: (1) input of structure, geometrical data, and initial and boundary conditions, (2) initialization of derived quantities, (3) steady-state calculation or input of restart data, and (4) transient calculation. In this paper, the transient solution method is briefly presented and the coupling methods are discussed. Three aspects have to be considered for the coupling of different modules in one code system. First is the conservation of mass and energy in the different subsystems: fluid, structures, and fission products and aerosols. Second is the convergence of the numerical solution and the stability of the calculation. The third aspect is related to code performance and running time.
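The step sequence and the conservation bookkeeping described above can be pictured with a toy operator-split transient: two subsystems (fluid and structure) are advanced sequentially each time step, and an energy balance is checked after every advancement. This is a generic Python sketch with made-up physics, not ATHLET-CD's actual structure:

    # Toy coupled transient: two modules exchange an interface energy flux each
    # time step, and total energy conservation is checked after every step.
    def transient(t_end=100.0, dt=0.05):
        c_f, c_s = 4.0, 2.0        # heat capacities of fluid and structure (kJ/K)
        h = 0.3                    # fluid-structure heat transfer coefficient (kW/K)
        T_f, T_s = 560.0, 900.0    # initial temperatures (K)
        e0 = c_f * T_f + c_s * T_s # total stored energy (kJ); conserved in this toy
        t = 0.0
        while t < t_end:
            q = h * (T_s - T_f) * dt   # energy moved structure -> fluid this step
            T_s -= q / c_s             # structure module advances first
            T_f += q / c_f             # fluid module sees the same interface flux
            # conservation check across the coupled subsystems
            assert abs(c_f * T_f + c_s * T_s - e0) < 1e-6
            t += dt
        return T_f, T_s

    print(transient())  # both temperatures relax toward a common equilibrium

Because both modules consume exactly the same interface energy q, the balance holds to round-off; in a full code system this bookkeeping is what the first coupling aspect above enforces.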
Mr.CAS - A minimalistic (pure) Ruby CAS for fast prototyping and code generation
NASA Astrophysics Data System (ADS)
Ragni, Matteo
There are several Computer Algebra Systems (CAS) on the market with complete solutions for the manipulation of analytical models. But exporting a model that implements specific algorithms on specific platforms, for target languages or for a particular numerical library, is often a rigid procedure that requires manual post-processing. This work presents a Ruby library that exposes core CAS capabilities, i.e. simplification, substitution, evaluation, etc. The library aims at programmers who need to rapidly prototype and generate numerical code for different target languages, while keeping the mathematical expressions separate from the code generation rules, where best practices for numerical conditioning are implemented. The library is written in pure Ruby and is compatible with most Ruby interpreters.
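To make the capability list concrete, here is a minimal expression-tree sketch, in Python, of the three core CAS operations named above (substitution, evaluation via constant folding, and simplification). It illustrates the idea only and is unrelated to Mr.CAS's actual Ruby API:

    # Minimal expression-tree CAS: substitute, evaluate, simplify.
    # A node is a Var, a plain number, or a tuple (op, left, right).
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Var:
        name: str

    def substitute(e, bindings):
        """Replace variables by numbers (or by other expressions)."""
        if isinstance(e, Var):
            return bindings.get(e.name, e)
        if isinstance(e, tuple):
            op, a, b = e
            return (op, substitute(a, bindings), substitute(b, bindings))
        return e  # plain number

    def simplify(e):
        """Constant folding plus a few identity rules."""
        if not isinstance(e, tuple):
            return e
        op, a, b = e[0], simplify(e[1]), simplify(e[2])
        if isinstance(a, (int, float)) and isinstance(b, (int, float)):
            return a + b if op == '+' else a * b   # evaluate numeric subtrees
        if op == '*' and (a == 0 or b == 0):
            return 0                               # x*0 -> 0
        if op == '*' and a == 1:
            return b                               # 1*x -> x
        if op == '+' and a == 0:
            return b                               # 0+x -> x
        if op == '+' and b == 0:
            return a                               # x+0 -> x
        return (op, a, b)

    x, y = Var('x'), Var('y')
    f = ('+', ('*', 2, x), ('*', x, y))                # the expression 2*x + x*y
    print(simplify(substitute(f, {'x': 3, 'y': 4})))   # full evaluation: 18
    print(simplify(substitute(f, {'y': 0})))           # partial: ('*', 2, Var(name='x'))

A code-generation pass in the spirit of Mr.CAS would then walk the same tree and emit, say, C source instead of evaluating it.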
Preliminary Analysis of the Transient Reactor Test Facility (TREAT) with PROTEUS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Connaway, H. M.; Lee, C. H.
The neutron transport code PROTEUS has been used to perform preliminary simulations of the Transient Reactor Test Facility (TREAT). TREAT is an experimental reactor designed for the testing of nuclear fuels and other materials under transient conditions. It operated from 1959 to 1994, when it was placed on non-operational standby. The restart of TREAT to support the U.S. Department of Energy’s resumption of transient testing is currently underway. Both single assembly and assembly-homogenized full core models have been evaluated. Simulations were performed using a historic set of WIMS-ANL-generated cross-sections as well as a new set of Serpent-generated cross-sections. To support this work, further analyses were also performed using additional codes in order to investigate particular aspects of TREAT modeling. DIF3D and the Monte-Carlo codes MCNP and Serpent were utilized in these studies. MCNP and Serpent were used to evaluate the effect of geometry homogenization on the simulation results and to support code-to-code comparisons. New meshes for the PROTEUS simulations were created using the CUBIT toolkit, with additional meshes generated via conversion of selected DIF3D models to support code-to-code verifications. All current analyses have focused on code-to-code verifications, with additional verification and validation studies planned. The analysis of TREAT with PROTEUS-SN is an ongoing project. This report documents the studies that have been performed thus far, and highlights key challenges to address in future work.
Wilkinson, Karl A; Hine, Nicholas D M; Skylaris, Chris-Kriton
2014-11-11
We present a hybrid MPI-OpenMP implementation of Linear-Scaling Density Functional Theory within the ONETEP code. We illustrate its performance on a range of high performance computing (HPC) platforms comprising shared-memory nodes with fast interconnect. Our work has focused on applying OpenMP parallelism to the routines which dominate the computational load, attempting where possible to parallelize different loops from those already parallelized within MPI. This includes 3D FFT box operations, sparse matrix algebra operations, calculation of integrals, and Ewald summation. While the underlying numerical methods are unchanged, these developments represent significant changes to the algorithms used within ONETEP to distribute the workload across CPU cores. The new hybrid code exhibits much-improved strong scaling relative to the MPI-only code and permits calculations with a much higher ratio of cores to atoms. These developments result in a significantly shorter time to solution than was possible using MPI alone and facilitate the application of the ONETEP code to systems larger than previously feasible. We illustrate this with benchmark calculations on an amyloid fibril trimer containing 41,907 atoms. We use the code to study the mechanism of delamination of cellulose nanofibrils undergoing sonication, a process controlled by a large number of interactions that collectively determine the structural properties of the fibrils. Many energy evaluations were needed for these simulations, and as these systems comprise up to 21,276 atoms this would not have been feasible without the developments described here.
Giles, Tracey M; de Lacey, Sheryl; Muir-Cochrane, Eimear
2016-01-01
Grounded theory method has been described extensively in the literature. Yet, the varying processes portrayed can be confusing for novice grounded theorists. This article provides a worked example of the data analysis phase of a constructivist grounded theory study that examined family presence during resuscitation in acute health care settings. Core grounded theory methods are exemplified, including initial and focused coding, constant comparative analysis, memo writing, theoretical sampling, and theoretical saturation. The article traces the construction of the core category "Conditional Permission" from initial and focused codes, subcategories, and properties, through to its position in the final substantive grounded theory.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurosu, K; Department of Medical Physics & Engineering, Osaka University Graduate School of Medicine, Osaka; Takashina, M
Purpose: Monte Carlo codes are becoming important tools for proton beam dosimetry. However, the relationships between the customizing parameters and the percentage depth dose (PDD) have not been reported for the GATE and PHITS codes; they are studied here, for PDD and proton range, by comparison with the FLUKA code and with experimental data. Methods: The beam delivery system of the Indiana University Health Proton Therapy Center was modeled for the uniform scanning beam in FLUKA and transferred identically into GATE and PHITS. This computational model was built from the blueprint and validated with the commissioning data. The three parameters evaluated are the maximum step size, the cut-off energy, and the physics and transport models. The dependence of the PDDs on the customizing parameters was compared with the published results of previous studies. Results: The optimal parameters for the simulation of the whole beam delivery system were defined by referring to the calculation results obtained with each parameter. Although the PDDs from FLUKA and the experimental data show good agreement, those of GATE and PHITS obtained with our optimal parameters show a minor discrepancy. The measured proton range R90 was 269.37 mm, compared to calculated ranges of 269.63 mm, 268.96 mm, and 270.85 mm with FLUKA, GATE and PHITS, respectively. Conclusion: We evaluated the dependence of the PDD results obtained with the GATE and PHITS general-purpose Monte Carlo codes on the customizing parameters, using the whole computational model of the treatment nozzle. The optimal parameters for the simulation were then defined by referring to the calculation results. The physics model, particle transport mechanics and the different geometry-based descriptions need accurate customization in the three simulation codes to agree with experimental data for artifact-free Monte Carlo simulation. This study was supported by Grants-in-Aid for Cancer Research (H22-3rd Term Cancer Control-General-043) from the Ministry of Health, Labor and Welfare of Japan, Grants-in-Aid for Scientific Research (No. 23791419), and the JSPS Core-to-Core program (No. 23003). The authors have no conflict of interest.
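The distal range metric R90 quoted above is the depth, beyond the Bragg peak, at which the dose falls to 90% of its maximum. A small Python sketch of one way to extract it from a sampled PDD curve (the depth-dose data below are synthetic, not the study's measurements):

    # Extract R90 (distal depth at 90% of maximum dose) from a sampled PDD.
    import numpy as np

    depth = np.linspace(0.0, 300.0, 601)               # mm, 0.5 mm grid
    # crude Bragg-like shape: slow rise plus a Gaussian peak near 268 mm
    dose = 0.3 * depth / 268.0 + np.exp(-0.5 * ((depth - 268.0) / 6.0) ** 2)
    dose /= dose.max()

    i_peak = int(np.argmax(dose))
    distal = dose[i_peak:]                             # falling edge beyond the peak
    j = int(np.argmax(distal <= 0.90))                 # first sample at or below 90%
    d1, d2 = depth[i_peak + j - 1], depth[i_peak + j]
    y1, y2 = distal[j - 1], distal[j]
    r90 = d1 + (0.90 - y1) * (d2 - d1) / (y2 - y1)     # linear interpolation
    print(f"R90 ~ {r90:.2f} mm")

Applied to measured and simulated curves, sub-millimetre differences in the extracted R90 correspond to the range spread quoted in the abstract.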
NASA Astrophysics Data System (ADS)
Lasbleis, M.; Day, E. A.; Waszek, L.
2017-12-01
The complex nature of inner core structure has been well established from seismic studies, with heterogeneities at various length scales, both radially and laterally. Despite this, no geodynamic model has successfully explained all of the observed seismic features. To facilitate comparisons between seismic observations and geodynamic models of inner core growth we have developed a new, open-access Python tool, GrowYourIC, that allows users to compare models of inner core structure. The code allows users to simulate different evolution models of the inner core, with user-defined rates of inner core growth, translation and rotation. Once the user has "grown" an inner core with their preferred parameters they can then explore the effect of "their" inner core's evolution on the relative age and growth rate in different regions of the inner core. The code will convert these parameters into seismic properties using either built-in mineral physics models, or user-supplied ones that calculate these seismic properties with the users' own preferred mineralogical models. The 3D model of isotropic inner core properties can then be used to calculate the predicted seismic travel time anomalies for a random, or user-specified, set of seismic ray paths through the inner core. A real dataset of inner core body-wave differential travel times is included for the purpose of comparing user-generated models of inner core growth to actual observed travel time anomalies in the top 100 km of the inner core. Here, we explore some of the possibilities of our code. We investigate the effect of the limited illumination of the inner core by seismic waves on the robustness of kinematic model interpretation. We test the impact on seismic differential travel time observations of several kinematic models of inner core growth: fast lateral translation; slow differential growth; and inner core super-rotation. We find that a model of inner core evolution incorporating both differential growth and slow super-rotation is able to recreate some of the more intricate details of the seismic observations. Specifically, we are able to "grow" an inner core that has an asymmetric shift in isotropic hemisphere boundaries with increasing depth in the inner core.
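The workflow described above can be caricatured in a few lines: grow the inner core kinematically, map material age to a velocity perturbation, and integrate a travel-time anomaly along a ray. The Python sketch below is a deliberately crude stand-in with invented numbers; it is not the GrowYourIC API:

    # Toy inner-core kinematics: age field for constant-rate growth, a
    # hemispherical velocity perturbation, and a straight-ray time anomaly.
    import numpy as np

    R_IC = 1221.0          # inner-core radius, km
    TAU = 1.0e9            # assumed inner-core age, years (illustrative)
    V0 = 11.0              # reference inner-core P-wave speed, km/s

    def age(r):
        """Age of material at radius r for growth r(t) = R_IC * sqrt(t / TAU)."""
        return TAU * (1.0 - (r / R_IC) ** 2)

    def dv_over_v(r, lon):
        """Toy hemispherical model: older material slightly fast in the east."""
        sign = 1.0 if np.sin(lon) > 0 else -1.0
        return 1.0e-2 * (age(r) / TAU) * sign

    def travel_time_anomaly(turning_radius, lon, half_chord=200.0, n=400):
        """Straight-ray segment near the turning depth: dt = -int (dv/v)/V0 ds."""
        s = np.linspace(-half_chord, half_chord, n)
        r = np.minimum(np.sqrt(turning_radius**2 + s**2), R_IC)   # km, clipped
        return -np.sum(dv_over_v(r, lon) / V0) * (s[1] - s[0])    # seconds

    print(travel_time_anomaly(1121.0, lon=np.pi / 2))   # eastern ray, ~100 km deep
    print(travel_time_anomaly(1121.0, lon=-np.pi / 2))  # western ray

The resulting anomalies are on the order of a few hundredths of a second, the right ballpark for differential travel-time residuals in the top 100 km of the inner core.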
SOPHAEROS code development and its application to Falcon tests
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lajtha, G.; Missirlian, M.; Kissane, M.
1996-12-31
One of the key issues in source-term evaluation for nuclear reactor severe accidents is the determination of the transport behavior of fission products released from the degrading core. The SOPHAEROS computer code is being developed to predict fission product transport in a mechanistic way in light water reactor circuits. These applications of the SOPHAEROS code to the Falcon experiments, among others not presented here, indicate that the numerical scheme of the code is robust, and no convergence problems are encountered. The calculation is also very fast, running only about three times longer than real time on a Sun SPARC 5 workstation, and typically about 10 times faster than an identical calculation with the VICTORIA code. The study demonstrates that the SOPHAEROS 1.3 code is a suitable tool for the prediction of vapor chemistry and fission product transport with a reasonable level of accuracy. Furthermore, the flexibility of the code's material data bank allows improved understanding of fission product transport and deposition in the circuit. Performing sensitivity studies with different chemical species or with different properties (saturation pressures, chemical equilibrium constants) is very straightforward.
Cscibox: A Software System for Age-Model Construction and Evaluation
NASA Astrophysics Data System (ADS)
Bradley, E.; Anderson, K. A.; Marchitto, T. M., Jr.; de Vesine, L. R.; White, J. W. C.; Anderson, D. M.
2014-12-01
CSciBox is an integrated software system for the construction and evaluation of age models of paleo-environmental archives, both directly dated and cross-dated. The time has come to encourage cross-pollination between earth science and computer science in dating paleorecords, and this project addresses that need. The CSciBox code, which is being developed by a team of computer scientists and geoscientists, is open source and freely available on github. The system employs modern database technology to store paleoclimate proxy data and analysis results in an easily accessible and searchable form. This makes it possible to do analysis on the whole core at once, in an interactive fashion, or to tailor the analysis to a subset of the core without loading the entire data file. CSciBox provides a number of 'components' that perform the common steps in age-model construction and evaluation: calibrations, reservoir-age correction, interpolations, statistics, and so on. The user employs these components via a graphical user interface (GUI) to go from raw data to finished age model in a single tool: e.g., an IntCal09 calibration of 14C data from a marine sediment core, followed by a piecewise-linear interpolation. CSciBox's GUI supports plotting of any measurement in the core against any other measurement, or against any of the variables in the calculation of the age model, with or without explicit error representations. Using the GUI, CSciBox's user can import a new calibration curve or other background data set and define a new module that employs that information. Users can also incorporate other software (e.g., Calib, BACON) as 'plug-ins.' In the case of truly large data or significant computational effort, CSciBox is parallelizable across modern multicore processors, or clusters, or even the cloud. The next generation of the CSciBox code, currently in the testing stages, includes an automated reasoning engine that supports a more thorough exploration of plausible age models and cross-dating scenarios.
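The piecewise-linear interpolation step named above is the simplest of the age-model components: between dated control points, age is linear in depth. A Python sketch (control points invented for illustration; this is not CSciBox code):

    # Piecewise-linear age model from dated control points.
    import numpy as np

    # (depth in core [cm], calibrated age [years BP]) control points
    control_depth = np.array([0.0, 55.0, 120.0, 240.0, 310.0])
    control_age = np.array([150.0, 2300.0, 5100.0, 9800.0, 12600.0])

    # ages of sampled horizons by linear interpolation between control points
    sample_depth = np.array([10.0, 75.0, 200.0, 305.0])
    sample_age = np.interp(sample_depth, control_depth, control_age)

    for d, a in zip(sample_depth, sample_age):
        print(f"depth {d:6.1f} cm -> age {a:8.1f} yr BP")

In the real tool the control-point ages would first pass through a calibration component (e.g., IntCal09 for 14C) before interpolation.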
Development of an extensible dual-core wireless sensing node for cyber-physical systems
NASA Astrophysics Data System (ADS)
Kane, Michael; Zhu, Dapeng; Hirose, Mitsuhito; Dong, Xinjun; Winter, Benjamin; Häckell, Moritz; Lynch, Jerome P.; Wang, Yang; Swartz, A.
2014-04-01
The introduction of wireless telemetry into the design of monitoring and control systems has been shown to reduce system costs while simplifying installations. To date, wireless nodes proposed for sensing and actuation in cyber-physical systems have been designed using microcontrollers with one computational pipeline (i.e., single-core microcontrollers). While concurrent code execution can be implemented on single-core microcontrollers, concurrency is emulated by splitting the pipeline's resources to support multiple threads of code execution. For many applications, this approach to multi-threading is acceptable in terms of speed and function. However, some applications such as feedback controls demand deterministic timing of code execution and maximum computational throughput. For these applications, the adoption of multi-core processor architectures represents one effective solution. Multi-core microcontrollers have multiple computational pipelines that can execute embedded code in parallel and can be interrupted independently of one another. In this study, a new wireless platform named Martlet is introduced with a dual-core microcontroller adopted in its design. The dual-core microcontroller design allows Martlet to dedicate one core to standard wireless sensor operations while the other core is reserved for embedded data processing and real-time feedback control law execution. Another distinct feature of Martlet is a standardized hardware interface that allows specialized daughter boards (termed wing boards) to be interfaced to the Martlet baseboard. This extensibility opens the opportunity to encapsulate specialized sensing and actuation functions in a wing board without altering the design of Martlet. In addition to describing the design of Martlet, a few example wings are detailed, along with experiments showing Martlet's ability to monitor and control physical systems such as wind turbines and buildings.
NASA Astrophysics Data System (ADS)
Ivanov, V.; Samokhin, A.; Danicheva, I.; Khrennikov, N.; Bouscuet, J.; Velkov, K.; Pasichnyk, I.
2017-01-01
In this paper the approaches used to develop the BN-800 reactor test model and to validate coupled neutron-physics and thermal-hydraulic calculations are described. The coupled codes ATHLET 3.0 (a code for thermal-hydraulic calculations of reactor transients) and DYN3D (a 3-dimensional neutron kinetics code) are used for the calculations. The main calculation results for the reactor steady-state condition are provided. The 3-D model used for the neutron calculations was developed for the initial BN-800 core load. A homogeneous approach is used to describe the reactor assemblies. Along with the main simplifications, the main BN-800 core zones are described (LEZ, MEZ, HEZ, MOX, blankets). The 3-D neutron physics calculations were performed with a 28-group library based on the evaluated nuclear data file ENDF/B-VII.0. The SCALE code was used for the preparation of group constants. The nodalization hydraulic model has boundary conditions on coolant mass flow rate at the core inlet and on pressure and enthalpy at the core outlet, which can be chosen depending on the reactor state. Core inlet and outlet temperatures were chosen according to the reactor nominal state. The profiling of the coolant mass flow rate through the core is based on the reactor power distribution. Test thermal-hydraulic calculations made with the developed model showed acceptable results for the coolant mass flow rate distribution through the reactor core and for the axial temperature and pressure distributions. The developed model will be upgraded in the future for various transient analyses in liquid-metal-cooled fast reactors of the BN type, including reactivity transients (control rod withdrawal, stop of the main circulation pump, etc.).
NATCRCTR: One-dimensional thermal-hydraulics analysis code for natural-circulation TRIGA reactors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feltus, M.A.; Rubinaccio, G.
1996-12-31
The Pennsylvania State University nuclear engineering department is evaluating the upgrade of the Reed College (Portland, Oregon) TRIGA reactor from 250 kW to 1 MW in two areas: thermal-hydraulics and steady-state neutronics analysis. This analysis was initiated as a cooperative effort between Penn State and Reed College as a training project for two International Atomic Energy Agency (IAEA) fellows from Ghana. The two Ghanaian IAEA fellows were assisted by G. Rubinaccio, an undergraduate, who undertook the task of writing the new computer programs for the thermal-hydraulic and physics evaluation as a three-credit special design project course. The Reed College TRIGA, which has a fixed graphite radial reflector, is cooled by natural circulation without external cross-flow, whereas the Penn State Breazeale Reactor has significant cross-flow into its sides. To model the Reed TRIGA, the NATCRCTR program has been developed from first principles using the following assumptions: 1. The core is surrounded by the fixed reflector structure, which acts as a one-dimensional channel. 2. The core inlet temperature distribution is constant at the core bottom. 3. The axial heat flux distribution is a chopped cosine shape. 4. The heat transfer in the fuel is primarily in the radial direction. 5. A small gap exists between the fuel and cladding. The NATCRCTR code is used to find the peak centerline fuel, gap, and cladding surface temperatures, based on assumed flux and engineering peaking factors.
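Assumptions 2 and 3 above fix the boundary condition and the heat-source shape, so the coolant temperature rise along a channel follows from a one-dimensional energy balance. A Python sketch with invented numbers, not Reed TRIGA design data:

    # Coolant heat-up along one channel with a chopped-cosine axial heat flux.
    import numpy as np

    H = 0.38          # active fuel height, m (illustrative)
    He = 0.50         # extrapolated height of the chopped cosine, m (illustrative)
    q_peak = 2.0e5    # peak surface heat flux, W/m^2 (illustrative)
    P_heated = 0.11   # heated perimeter of the rod, m (illustrative)
    m_dot = 0.05      # natural-circulation channel mass flow, kg/s (illustrative)
    cp = 4180.0       # coolant specific heat, J/(kg K)
    T_in = 30.0       # uniform core-inlet temperature, deg C (assumption 2)

    z = np.linspace(-H / 2, H / 2, 201)
    q = q_peak * np.cos(np.pi * z / He)           # chopped cosine (assumption 3)

    # energy balance: T(z) = T_in + P/(m_dot*cp) * integral of q from -H/2 to z
    dz = z[1] - z[0]
    T = T_in + P_heated * np.cumsum(q) * dz / (m_dot * cp)
    print(f"coolant outlet temperature ~ {T[-1]:.1f} deg C")

The peak fuel centerline temperature then follows by stacking the radial resistances of fuel, gap, and cladding (assumptions 4 and 5) on top of the local coolant temperature.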
Evaluation of HFIR LEU Fuel Using the COMSOL Multiphysics Platform
DOE Office of Scientific and Technical Information (OSTI.GOV)
Primm, Trent; Ruggles, Arthur; Freels, James D
2009-03-01
A finite element computational approach to simulation of the High Flux Isotope Reactor (HFIR) core thermal-fluid behavior is developed. These models were developed to facilitate design of a low-enriched core for the HFIR, which will have different axial and radial flux profiles from the current HEU core and thus will require fuel and poison load optimization. This report outlines a stepwise implementation of this modeling approach using the commercial finite element code COMSOL, with an initial assessment of fuel, poison and clad conduction modeling capability, followed by assessment of mating of the fuel conduction models to a one-dimensional fluid model typical of legacy simulation techniques for the HFIR core. The model is then extended to fully couple 2-dimensional conduction in the fuel to a 2-dimensional thermo-fluid model of the coolant for a HFIR core cooling sub-channel, with additional assessment of simulation outcomes. Finally, 3-dimensional simulations of a fuel plate and cooling channel are presented.
HowTo - Easy use of global unique identifier
NASA Astrophysics Data System (ADS)
Czerniak, A.; Fleischer, D.; Schirnick, C.
2013-12-01
The GEOMAR sample and core repository holds several thousand samples and cores collected over recent decades. In the current project, we are bringing this collection up to date by tagging every sample and core with a unique identifier, in our case the International Geo Sample Number (IGSN). This work is done with our digital-ink and handwriting-recognition implementation. The smart pen technology saves time and resources when recording the information on every sample or core. The recording procedure consists of several systematic steps: 1. Gather all information about the core or sample, such as the cruise number, responsible person, and so on. 2. Tag it with a unique identifier, in our case a QR code. 3. Write down the location of the sample or core. After the information is transmitted from the smart pen (currently via USB, though wireless is an option too) into our server infrastructure, the linking to other information begins. Since each sample or core is linked in our Virtual Research Environment (VRE) through its unique identifier (IGSN), it can be located, and the QR code simply links back from the core or sample to the IGSN and additional scientific information.
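Step 2 above, tagging with a QR code that encodes the identifier, can be scripted in a few lines. A sketch using the third-party Python 'qrcode' package; the IGSN value and resolver URL are made-up examples:

    # Generate a QR-code label that resolves a sample's IGSN.
    # Requires the third-party 'qrcode' package (pip install qrcode[pil]).
    import qrcode

    igsn = "GEO0012345"                       # hypothetical sample IGSN
    url = f"https://igsn.org/{igsn}"          # resolver link encoded in the tag

    img = qrcode.make(url)                    # build the QR-code image
    img.save(f"{igsn}.png")                   # print and attach to the core
    print(f"wrote {igsn}.png encoding {url}")

Scanning the printed tag then takes a user straight from the physical core to its record in the VRE.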
DOE Office of Scientific and Technical Information (OSTI.GOV)
Earl, Christopher; Might, Matthew; Bagusetty, Abhishek
2016-01-26
This study presents Nebo, a declarative domain-specific language embedded in C++ for discretizing partial differential equations for transport phenomena on multiple architectures. Application programmers use Nebo to write code that appears sequential but can be run in parallel, without editing the code. Currently Nebo supports single-thread execution, multi-thread execution, and many-core (GPU-based) execution. With single-thread execution, Nebo performs on par with code written by domain experts. With multi-thread execution, Nebo can scale linearly (with roughly 90% efficiency) up to 12 cores, compared to its single-thread execution. Moreover, Nebo's many-core execution can be over 140x faster than its single-thread execution.
SOFIP: A Short Orbital Flux Integration Program
NASA Technical Reports Server (NTRS)
Stassinopoulos, E. G.; Hebert, J. J.; Butler, E. L.; Barth, J. L.
1979-01-01
A computer code was developed to evaluate the space radiation environment encountered by geocentric satellites. The Short Orbital Flux Integration Program (SOFIP) is a compact routine of modular composition, designed mostly with structured programming techniques in order to provide core and time economy and ease of use. The program in its simplest form produces, for a given input trajectory, a composite integral orbital spectrum of either protons or electrons. Additional features are available separately or in combination with the inclusion of the corresponding (optional) modules. The code is described in detail, and the function and usage of the various modules are explained. A program listing and sample outputs are attached.
Self-Shielded Flux Cored Wire Evaluation
1980-12-01
Performing organization: Naval Surface Warfare Center CD, Code 2230 - Design Integration Tools. Distribution/availability: approved for public release. Tensile and yield strength, percent elongation, and percent reduction of area were reported; this testing was performed with a Satec 400 WHVP tensile
DOE Office of Scientific and Technical Information (OSTI.GOV)
Setiani, Tia Dwi, E-mail: tiadwisetiani@gmail.com; Suprijadi; Nuclear Physics and Biophysics Research Division, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung, Jalan Ganesha 10 Bandung, 40132
Monte Carlo (MC) is one of the most powerful techniques for simulation in x-ray imaging. The MC method can simulate radiation transport within matter with high accuracy and provides a natural way to simulate radiation transport in complex systems. One of the codes based on the MC algorithm that is widely used for radiographic image simulation is MC-GPU, a code developed by Andreu Badal. This study was aimed at investigating the computation time of x-ray imaging simulation on a GPU (Graphics Processing Unit) compared to a standard CPU (Central Processing Unit). Furthermore, the effect of physical parameters on the quality of radiographic images and a comparison of the image quality resulting from simulation on the GPU and CPU are evaluated in this paper. The simulations were run on a CPU in serial, and on two GPUs with 384 cores and 2304 cores. In the GPU simulations, each core calculates one photon, so a large number of photons are calculated simultaneously. Results show that the simulations on the GPU were significantly accelerated compared to the CPU. The simulations on the 2304-core GPU were performed about 64-114 times faster than on the CPU, while the simulations on the 384-core GPU were performed about 20-31 times faster than on a single CPU core. Another result shows that optimum image quality from the simulation was obtained with the number of histories starting from 10^8 and energies from 60 keV to 90 keV. Analyzed by a statistical approach, the quality of the GPU and CPU images is essentially the same.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walter, Matthew; Yin, Shengjun; Stevens, Gary
2012-01-01
In past years, the authors have undertaken various studies of nozzles in both boiling water reactors (BWRs) and pressurized water reactors (PWRs) located in the reactor pressure vessel (RPV) adjacent to the core beltline region. Those studies described stress and fracture mechanics analyses performed to assess various RPV nozzle geometries, which were selected based on their proximity to the core beltline region, i.e., those nozzle configurations that are located close enough to the core region such that they may receive sufficient fluence prior to end-of-life (EOL) to require evaluation of embrittlement as part of the RPV analyses associated with pressure-temperature (P-T) limits. In this paper, additional stress and fracture analyses are summarized that were performed for additional PWR nozzles with the following objectives: To expand the population of PWR nozzle configurations evaluated, which was limited in the previous work to just two nozzles (one inlet and one outlet nozzle). To model and understand differences in stress results obtained for an internal pressure load case using a two-dimensional (2-D) axi-symmetric finite element model (FEM) vs. a three-dimensional (3-D) FEM for these PWR nozzles. In particular, the ovalization (stress concentration) effect of two intersecting cylinders, which is typical of RPV nozzle configurations, was investigated. To investigate the applicability of previously recommended linear elastic fracture mechanics (LEFM) hand solutions for calculating the Mode I stress intensity factor for a postulated nozzle corner crack for pressure loading for these PWR nozzles. These analyses were performed to further expand earlier work completed to support potential revision and refinement of Title 10 to the U.S. Code of Federal Regulations (CFR), Part 50, Appendix G, Fracture Toughness Requirements, and are intended to supplement similar evaluation of nozzles presented at the 2008, 2009, and 2011 Pressure Vessels and Piping (PVP) Conferences. This work is also relevant to the ongoing efforts of the American Society of Mechanical Engineers (ASME) Boiler and Pressure Vessel (B&PV) Code, Section XI, Working Group on Operating Plant Criteria (WGOPC) to incorporate nozzle fracture mechanics solutions into a revision to ASME B&PV Code, Section XI, Nonmandatory Appendix G.
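For orientation, the LEFM hand solutions mentioned in the third objective take the generic form K_I = F * sigma * sqrt(pi * a), where F is a geometry (boundary-correction) factor for the nozzle corner crack. A Python sketch with purely illustrative numbers, not the factors recommended in the paper:

    # Mode I stress intensity for a postulated nozzle corner crack, generic
    # LEFM hand-solution form. F, sigma, and a below are illustrative only.
    import math

    F = 0.9         # geometry (boundary-correction) factor for the corner crack
    sigma = 150.0   # membrane stress from internal pressure, MPa
    a = 0.025       # postulated crack depth, m

    K_I = F * sigma * math.sqrt(math.pi * a)   # MPa*sqrt(m)
    print(f"K_I ~ {K_I:.1f} MPa*sqrt(m)")

Comparing such hand-solution values against the 3-D FEM results is what the applicability study in the third objective amounts to.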
Thermal hydraulic-severe accident code interfaces for SCDAP/RELAP5/MOD3.2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coryell, E.W.; Siefken, L.J.; Harvego, E.A.
1997-07-01
The SCDAP/RELAP5 computer code is designed to describe the overall reactor coolant system thermal-hydraulic response, core damage progression, and fission product release during severe accidents. The code is being developed at the Idaho National Engineering Laboratory under the primary sponsorship of the Office of Nuclear Regulatory Research of the U.S. Nuclear Regulatory Commission. The code is the result of merging the RELAP5, SCDAP, and COUPLE codes. The RELAP5 portion of the code calculates the overall reactor coolant system thermal-hydraulics and associated reactor system responses. The SCDAP portion describes the response of the core and associated vessel structures. The COUPLE portion describes the response of lower plenum structures and debris and the failure of the lower head. The code uses a modular approach, with the overall structure, input/output processing, and data structures following the pattern established for RELAP5. The code uses a building-block approach to allow the user to easily represent a wide variety of systems and conditions through a powerful input processor. The user can represent a wide variety of experiments or reactor designs by selecting fuel rods and other assembly structures from a range of representative core component models and arranging them in a variety of patterns within the thermal-hydraulic network. The COUPLE portion of the code uses two-dimensional representations of the lower plenum structures and debris beds. The flow of information between the different portions of the code occurs at each system-level time step advancement. The RELAP5 portion of the code describes the fluid transport around the system; these fluid conditions are used as thermal and mass transport boundary conditions for the SCDAP and COUPLE structures and debris beds.
Rubus: A compiler for seamless and extensible parallelism.
Adnan, Muhammad; Aslam, Faisal; Nawaz, Zubair; Sarwar, Syed Mansoor
2017-01-01
Nowadays, a typical processor may have multiple processing cores on a single chip. Furthermore, a special-purpose processing unit called the Graphics Processing Unit (GPU), originally designed for 2D/3D games, is now available for general-purpose use in computers and mobile devices. However, the traditional programming languages, which were designed to work with machines having single-core CPUs, cannot efficiently utilize the parallelism available on multi-core processors. Therefore, to exploit the extraordinary processing power of multi-core processors, researchers are working on new tools and techniques to facilitate parallel programming. To this end, languages like CUDA and OpenCL have been introduced, which can be used to write code with parallelism. The main shortcoming of these languages is that the programmer needs to specify all the complex details manually in order to parallelize the code across multiple cores. Therefore, code written in these languages is difficult to understand, debug and maintain. Furthermore, parallelizing legacy code can require rewriting a significant portion of it in CUDA or OpenCL, which can consume significant time and resources. Thus, the amount of parallelism achieved is proportional to the skills of the programmer and the time spent on code optimizations. This paper proposes a new open-source compiler, Rubus, to achieve seamless parallelism. The Rubus compiler relieves the programmer from manually specifying the low-level details. It analyses and transforms a sequential program into a parallel program automatically, without any user intervention. This achieves massive speedup and better utilization of the underlying hardware without a programmer's expertise in parallel programming. For five different benchmarks, an average speedup of 34.54 times has been achieved by Rubus as compared to Java on a basic GPU having only 96 cores, while for a matrix multiplication benchmark an average execution speedup of 84 times has been achieved by Rubus on the same GPU. Moreover, Rubus achieves this performance without drastically increasing the memory footprint of a program.
Automating the generation of finite element dynamical cores with Firedrake
NASA Astrophysics Data System (ADS)
Ham, David; Mitchell, Lawrence; Homolya, Miklós; Luporini, Fabio; Gibson, Thomas; Kelly, Paul; Cotter, Colin; Lange, Michael; Kramer, Stephan; Shipton, Jemma; Yamazaki, Hiroe; Paganini, Alberto; Kärnä, Tuomas
2017-04-01
The development of a dynamical core is an increasingly complex software engineering undertaking. As the equations become more complete, the discretisations more sophisticated and the hardware acquires ever more fine-grained parallelism and deeper memory hierarchies, the problem of building, testing and modifying dynamical cores becomes increasingly complex. Here we present Firedrake, a code generation system for the finite element method with specialist features designed to support the creation of geoscientific models. Using Firedrake, the dynamical core developer writes the partial differential equations in weak form in a high level mathematical notation. Appropriate function spaces are chosen and time stepping loops written at the same high level. When the programme is run, Firedrake generates high performance C code for the resulting numerics which are executed in parallel. Models in Firedrake typically take a tiny fraction of the lines of code required by traditional hand-coding techniques. They support more sophisticated numerics than are easily achieved by hand, and the resulting code is frequently higher performance. Critically, debugging, modifying and extending a model written in Firedrake is vastly easier than by traditional methods due to the small, highly mathematical code base. Firedrake supports a wide range of key features for dynamical core creation: A vast range of discretisations, including both continuous and discontinuous spaces and mimetic (C-grid-like) elements which optimally represent force balances in geophysical flows. High aspect ratio layered meshes suitable for ocean and atmosphere domains. Curved elements for high accuracy representations of the sphere. Support for non-finite element operators, such as parametrisations. Access to PETSc, a world-leading library of programmable linear and nonlinear solvers. High performance adjoint models generated automatically by symbolically reasoning about the forward model. This poster will present the key features of the Firedrake system, as well as those of Gusto, an atmospheric dynamical core, and Thetis, a coastal ocean model, both of which are written in Firedrake.
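As a flavour of the high-level notation described above, a Poisson problem in Firedrake can be stated in a few lines of weak form. This sketch follows the publicly documented Firedrake API (for UnitSquareMesh, the boundary ids 1-4 mark the four sides); treat it as an illustration rather than code taken from the poster:

    # Poisson problem stated in weak form; Firedrake generates and runs the
    # parallel C numerics when solve() is called.
    from math import pi
    from firedrake import (UnitSquareMesh, FunctionSpace, TrialFunction,
                           TestFunction, Function, SpatialCoordinate,
                           DirichletBC, dot, grad, dx, sin, solve)

    mesh = UnitSquareMesh(32, 32)                 # triangulated unit square
    V = FunctionSpace(mesh, "CG", 1)              # continuous piecewise-linear space

    u, v = TrialFunction(V), TestFunction(V)
    x, y = SpatialCoordinate(mesh)
    f = sin(pi * x) * sin(pi * y)                 # manufactured source term

    a = dot(grad(u), grad(v)) * dx                # weak Laplacian (bilinear form)
    L = f * v * dx                                # right-hand side (linear form)
    bc = DirichletBC(V, 0.0, (1, 2, 3, 4))        # zero on all four boundary sides

    uh = Function(V, name="solution")
    solve(a == L, uh, bcs=bc)                     # code generation + PETSc solve

A dynamical core replaces the Poisson operator with the relevant force-balance and transport terms and wraps the solve in a time-stepping loop, still at this level of notation.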
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curca-Tivig, Florin; Merk, Stephan; Pautz, Andreas
2007-07-01
Anticipating future needs of our customers and willing to concentrate synergies and competences existing in the company for the benefit of our customers, AREVA NP decided in 2002 to develop the next generation of coupled neutronics/core thermal-hydraulic (TH) code systems for fuel assembly and core design calculations for both PWR and BWR applications. The global CONVERGENCE project was born: after a feasibility study of one year (2002) and a conceptual phase of another year (2003), development was started at the beginning of 2004. The present paper introduces the CONVERGENCE project, presents the main features of the new code system ARCADIA®, and concludes on customer benefits. ARCADIA® is designed to meet AREVA NP market and customers' requirements worldwide. Besides state-of-the-art physical modeling, numerical performance and industrial functionality, the ARCADIA® system features state-of-the-art software engineering. The new code system will bring a series of benefits for our customers: e.g. improved accuracy for heterogeneous cores (MOX/UOX, Gd...), better description of nuclide chains, and access to local neutronics/thermal-hydraulics and possibly thermal-mechanical information (3D pin-by-pin full core modeling). ARCADIA is a registered trademark of AREVA NP. (authors)
The extent of food advertising to children on UK television in 2008.
Boyland, Emma J; Harrold, Joanne A; Kirkham, Tim C; Halford, Jason C G
2011-10-01
To provide the most comprehensive analysis to date of the extent of food advertising on UK television channels popular with young people following regulatory reform of this type of marketing activity. UK television was recorded 06:00-22:00 h for a weekday and a weekend day every month between January and December 2008 for 14 of the most popular commercial channels broadcasting children's/family viewing. Recordings were screened for advertisements, which were coded according to predefined categories including whether they were broadcast in peak/non-peak children's viewing time. Food advertisements were coded as core (healthy)/non-core (unhealthy)/miscellaneous foods. Food and drinks were the third most heavily advertised product category, and there were a significantly greater proportion of advertisements for food/drinks during peak compared to non-peak children's viewing times. A significantly greater proportion of the advertisements broadcast around soap operas than around children's programmes were for food/drinks. Children's channels broadcast a significantly greater proportion of non-core food advertisements than the family channels. There were significant differences between recording months for the proportion of core/non-core/miscellaneous food advertisements. Despite regulation, children in the UK are exposed to more TV advertising for unhealthy than healthy food items, even at peak children's viewing times. There remains scope to strengthen the rules regarding advertising of HFSS foods around programming popular with children and adults alike, where current regulations do not apply. Ongoing, systematic monitoring is essential for evaluation of the effectiveness of regulations designed to reduce children's exposure to HFSS food advertising on television in the UK.
Results of the Simulation of the HTR-Proteus Core 4.2 Using PEBBED-COMBINE: FY10 Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hans Gougar
2010-07-01
The Idaho National Laboratory's deterministic neutronics analysis codes and methods were applied to the computation of the core multiplication factor of the HTR-Proteus pebble bed reactor critical facility. This report is a follow-on to INL/EXT-09-16620, in which the same calculation was performed using earlier versions of the codes and less developed methods. In that report, results indicated that the cross sections generated using COMBINE-7.0 did not yield satisfactory estimates of keff, and it was concluded that the modeling of control rods was not satisfactory. In the past year, improvements to the homogenization capability in COMBINE have enabled the explicit modeling of TRISO particles, pebbles, and heterogeneous core zones including control rod regions, using a new multi-scale version of COMBINE into which the 1-dimensional discrete ordinates transport code ANISN has been integrated. The new COMBINE is shown to yield benchmark-quality results for pebble unit cell models, the first step in preparing few-group diffusion parameters for core simulations. In this report, the full critical core is modeled once again, but with cross sections generated using the capabilities and physics of the improved COMBINE code. The new PEBBED-COMBINE model enables the exact modeling of the pebbles and control rod region, along with a better approximation of structures in the reflector. Initial results for the core multiplication factor indicate significant improvement in the INL's tools for modeling the neutronic properties of a pebble bed reactor. Errors on the order of 1.6-2.5% in keff are obtained, a significant improvement over the 5-6% errors observed in the earlier analysis. This is acceptable for a code system and model in the early stages of development, but still too high for a production code. Analysis of a simpler core model indicates an over-prediction of the flux at the low end of the thermal spectrum. Causes of this discrepancy are under investigation. New homogenization techniques and assumptions were used in this analysis and, as such, they require further confirmation and validation. Further refinement and review of the complex Proteus core model are likely to reduce the errors even further.
NASA Astrophysics Data System (ADS)
Featherstone, N. A.; Aurnou, J. M.; Yadav, R. K.; Heimpel, M. H.; Soderlund, K. M.; Matsui, H.; Stanley, S.; Brown, B. P.; Glatzmaier, G.; Olson, P.; Buffett, B. A.; Hwang, L.; Kellogg, L. H.
2017-12-01
In the past three years, CIG's Dynamo Working Group has successfully ported the Rayleigh code to the Argonne Leadership Computing Facility's Mira BG/Q system. In this poster, we present some of our first results, showing simulations of 1) convection in the solar convection zone, 2) dynamo action in Earth's core, and 3) convection in the jovian deep atmosphere. These simulations have made efficient use of 131 thousand cores, 131 thousand cores and 232 thousand cores, respectively, on Mira. In addition to our novel results, the joys and logistical challenges of carrying out such large runs will also be discussed.
Porting plasma physics simulation codes to modern computing architectures using the
NASA Astrophysics Data System (ADS)
Germaschewski, Kai; Abbott, Stephen
2015-11-01
Available computing power has continued to grow exponentially even after single-core performance saturated in the last decade. The increase has since been driven by more parallelism, both using more cores and having more parallelism in each core, e.g. in GPUs and Intel Xeon Phi. Adapting existing plasma physics codes is challenging, in particular as there is no single programming model that covers current and future architectures. We will introduce the open-source
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bai, D.; Levine, S.L.; Luoma, J.
1992-01-01
The Three Mile Island unit 1 core reloads have been designed using fast but accurate scoping codes, PSUI-LEOPARD and ADMARC. PSUI-LEOPARD has been normalized to EPRI-CPM2 results and used to calculate the two-group constants, whereas ADMARC is a modern two-dimensional, two-group diffusion theory nodal code. Problems in accuracy were encountered for cycles 8 and higher as the core lifetime was increased beyond 500 effective full-power days. This is because the cores more heavily loaded in both 235U and 10B have harder neutron spectra, which produces a change in the transport effect in the baffle reflector region, and the burnable poison (BP) simulations were not accurate enough for the cores containing the increased amount of 10B required in the BP rods. In the authors' study, a technique has been developed to take into account the change in the transport effect in the baffle region by modifying the fast neutron diffusion coefficient as a function of cycle length and core exposure or burnup. A more accurate BP simulation method is also developed, using integral transport theory and CPM2 data, to calculate the BP contribution to the equivalent fuel assembly (supercell) two-group constants. The net result is that the accuracy of the scoping codes is as good as that produced by CASMO/SIMULATE or CPM2/SIMULATE when compared with measured data.
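For context, the two-group constants mentioned above determine the infinite-medium multiplication factor through a simple balance: all fission neutrons are born fast, and the thermal flux follows from downscatter and thermal absorption. A Python sketch with illustrative PWR-like numbers (not CPM2 output):

    # Two-group, infinite-medium multiplication factor from homogenized
    # two-group constants: k_inf = (nu_sf1 + nu_sf2*s12/sa2) / (sa1 + s12).
    nu_sf1 = 0.005   # nu * Sigma_f, fast group (1/cm), illustrative
    nu_sf2 = 0.120   # nu * Sigma_f, thermal group (1/cm), illustrative
    sa1 = 0.010      # fast absorption (1/cm)
    sa2 = 0.080      # thermal absorption (1/cm); grows with the 10B in BP rods
    s12 = 0.018      # fast-to-thermal downscatter (1/cm)

    phi2_per_phi1 = s12 / sa2                          # thermal/fast flux ratio
    k_inf = (nu_sf1 + nu_sf2 * phi2_per_phi1) / (sa1 + s12)
    print(f"thermal/fast flux ratio ~ {phi2_per_phi1:.3f}, k_inf ~ {k_inf:.3f}")

Raising sa2 (more 10B in the BP rods) hardens the spectrum by depressing the thermal flux, which is exactly why the BP treatment and the baffle transport correction above matter for heavily loaded cores.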
Portable LQCD Monte Carlo code using OpenACC
NASA Astrophysics Data System (ADS)
Bonati, Claudio; Calore, Enrico; Coscetti, Simone; D'Elia, Massimo; Mesiti, Michele; Negro, Francesco; Fabio Schifano, Sebastiano; Silvi, Giorgio; Tripiccione, Raffaele
2018-03-01
Varying from multi-core CPU processors to many-core GPUs, the present scenario of HPC architectures is extremely heterogeneous. In this context, code portability is increasingly important for easy maintainability of applications; this is relevant in scientific computing where code changes are numerous and frequent. In this talk we present the design and optimization of a state-of-the-art production level LQCD Monte Carlo application, using the OpenACC directives model. OpenACC aims to abstract parallel programming to a descriptive level, where programmers do not need to specify the mapping of the code on the target machine. We describe the OpenACC implementation and show that the same code is able to target different architectures, including state-of-the-art CPUs and GPUs.
TORT/MCNP coupling method for the calculation of neutron flux around a core of BWR.
Kurosawa, Masahiko
2005-01-01
For the analysis of BWR neutronics performance, accurate data are required for the neutron flux distribution over the in-reactor-pressure-vessel equipment, taking into account the detailed geometrical arrangement. The TORT code can calculate the neutron flux around a BWR core in a three-dimensional geometry model, but it has difficulties with fine geometrical modelling and demands huge computer resources. On the other hand, the MCNP code enables calculation of the neutron flux with a detailed geometry model, but requires a very long sampling time to obtain a sufficient number of particles. Therefore, a TORT/MCNP coupling method has been developed to eliminate these two problems. In this method, the TORT code calculates the angular flux distribution on the core surface and the MCNP code calculates the neutron spectrum at the points of interest using that flux distribution. The coupling method will be used as the DOT-DOMINO-MORSE code system. This TORT/MCNP coupling method was applied to calculate the neutron flux at points where induced radioactivity data were measured for 54Mn and 60Co, and the radioactivity calculations based on the neutron flux obtained from the above method were compared with the measured data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fischer, G.A.
2011-07-01
Document available in abstract form only, full text of document follows: The dosimetry from the H. B. Robinson Unit 2 Pressure Vessel Benchmark is analyzed with a suite of Westinghouse-developed codes and data libraries. The radiation transport from the reactor core to the surveillance capsule and ex-vessel locations is performed by RAPTOR-M3G, a parallel deterministic radiation transport code that calculates high-resolution neutron flux information in three dimensions. The cross-section library used in this analysis is the ALPAN library, an Evaluated Nuclear Data File (ENDF)/B-VII.0-based library designed for reactor dosimetry and fluence analysis applications. Dosimetry is evaluated with the industry-standard SNLRML reactor dosimetry cross-section data library. (authors)
Using Coding Apps to Support Literacy Instruction and Develop Coding Literacy
ERIC Educational Resources Information Center
Hutchison, Amy; Nadolny, Larysa; Estapa, Anne
2016-01-01
In this article the authors present the concept of Coding Literacy and describe the ways in which coding apps can support the development of Coding Literacy and disciplinary and digital literacy skills. Through detailed examples, we describe how coding apps can be integrated into literacy instruction to support learning of the Common Core English…
INL Results for Phases I and III of the OECD/NEA MHTGR-350 Benchmark
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerhard Strydom; Javier Ortensi; Sonat Sen
2013-09-01
The Idaho National Laboratory (INL) Very High Temperature Reactor (VHTR) Technology Development Office (TDO) Methods Core Simulation group led the construction of the Organization for Economic Cooperation and Development (OECD) Modular High Temperature Reactor (MHTGR) 350 MW benchmark for comparing and evaluating prismatic VHTR analysis codes. The benchmark is sponsored by the OECD's Nuclear Energy Agency (NEA), and the project will yield a set of reference steady-state, transient, and lattice depletion problems that can be used by the Department of Energy (DOE), the Nuclear Regulatory Commission (NRC), and vendors to assess their code suites. The Methods group is responsible for defining the benchmark specifications, leading the data collection and comparison activities, and chairing the annual technical workshops. This report summarizes the latest INL results for Phase I (steady state) and Phase III (lattice depletion) of the benchmark. The INSTANT, Pronghorn and RattleSnake codes were used for the standalone core neutronics modeling of Exercise 1, and the results obtained from these codes are compared in Section 4. Exercise 2 of Phase I requires the standalone steady-state thermal fluids modeling of the MHTGR-350 design, and the results for the systems code RELAP5-3D are discussed in Section 5. The coupled neutronics and thermal fluids steady-state solution for Exercise 3 is reported in Section 6, utilizing the newly developed Parallel and Highly Innovative Simulation for INL Code System (PHISICS)/RELAP5-3D code suite. Finally, the lattice depletion models and results obtained for Phase III are compared in Section 7. The MHTGR-350 benchmark proved to be a challenging set of simulation problems to model accurately, and even with the simplifications introduced in the benchmark specification this activity is an important step in the code-to-code verification of modern prismatic VHTR codes. A final OECD/NEA comparison report will compare the Phase I and III results of all other international participants in 2014, while the remaining Phase II transient case results will be reported in 2015.
Benchmark of Atucha-2 PHWR RELAP5-3D control rod model by Monte Carlo MCNP5 core calculation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pecchia, M.; D'Auria, F.; Mazzantini, O.
2012-07-01
Atucha-2 is a Siemens-designed PHWR reactor under construction in the Republic of Argentina. Its geometrical complexity and peculiarities require the adoption of advanced Monte Carlo codes for performing realistic neutronic simulations. Therefore, core models of the Atucha-2 PHWR were developed using MCNP5. In this work a methodology was set up to collect the flux in the hexagonal mesh by which the Atucha-2 core is represented. The scope of this activity is to evaluate the effect of an obliquely inserted control rod on the neutron flux, in order to validate the RELAP5-3D©/NESTLE three-dimensional neutron kinetic coupled thermal-hydraulic model applied by GRNSPG/UNIPI for performing selected transients of Chapter 15 of the FSAR of Atucha-2. (authors)
CFD and Neutron codes coupling on a computational platform
NASA Astrophysics Data System (ADS)
Cerroni, D.; Da Vià, R.; Manservisi, S.; Menghini, F.; Scardovelli, R.
2017-01-01
In this work we investigate the thermal-hydraulic behavior of a PWR nuclear reactor core, evaluating the power generation distribution while taking into account the local temperature field. The temperature field, evaluated using a self-developed CFD module, is exchanged with a neutron code, DONJON-DRAGON, which updates the macroscopic cross sections and evaluates the new neutron flux. From the updated neutron flux the new peak factor is evaluated and the new temperature field is computed. The exchange of data between the two codes is achieved through their inclusion in the computational platform SALOME, an open-source tool developed by the collaborative project NURESAFE. The numerical MEDmem libraries, included in the SALOME platform, are used in this work for the projection of computational fields from one problem to the other. The two problems are driven by a common supervisor that can access the computational fields of both systems: in every time step, the temperature field is extracted from the CFD problem and set into the neutron problem; the new power peak factor is then projected back into the CFD problem, and the new time step can be computed. Several computational examples, where both neutronic and thermal-hydraulic quantities are parametrized, are reported at the end of this work.
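To make the exchange scheme concrete, the following is a minimal Python sketch of such a supervisor loop; `solve_cfd`, `solve_neutronics`, and the toy physics inside them are hypothetical stand-ins for the SALOME/MEDmem field exchange described above, not the actual NURESAFE interfaces.

```python
import numpy as np

def solve_neutronics(temperature):
    # Hypothetical stand-in for DONJON-DRAGON: cross sections (and hence
    # the power shape) depend weakly on the local temperature.
    power = 1.0 / (1.0 + 1e-3 * (temperature - 300.0))
    return power / power.mean()            # normalized power distribution

def solve_cfd(power, t_old, dt):
    # Hypothetical stand-in for the CFD module: relax the temperature
    # field toward a value proportional to the local power density.
    return t_old + dt * (300.0 + 50.0 * power - t_old)

# Supervisor loop: exchange fields between the two problems each time step.
temperature = np.full(100, 300.0)          # 1-D mesh, initial guess [K]
for step in range(10):
    power = solve_neutronics(temperature)  # update flux/power from T
    temperature = solve_cfd(power, temperature, dt=0.5)
print("power peak factor:", power.max())
```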
MPACT Standard Input User's Manual, Version 2.2.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collins, Benjamin S.; Downar, Thomas; Fitzgerald, Andrew
The MPACT (Michigan PArallel Characteristics based Transport) code is designed to perform high-fidelity light water reactor (LWR) analysis using whole-core pin-resolved neutron transport calculations on modern parallel-computing hardware. The code consists of several libraries which provide the functionality necessary to solve steady-state eigenvalue problems. Several transport capabilities are available within MPACT, including both 2-D and 3-D Method of Characteristics (MOC). A three-dimensional whole-core solution based on the 2D-1D solution method provides the capability for full-core depletion calculations.
Three-dimensional pin-to-pin analyses of VVER-440 cores by the MOBY-DICK code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lehmann, M.; Mikolas, P.
1994-12-31
Nuclear design for the Dukovany (EDU) VVER-440s nuclear power plant is routinely performed with the MOBY-DICK system. After its implementation on Hewlett Packard series 700 workstations, the system is able to routinely perform three-dimensional pin-to-pin core analyses. For purposes of code validation, a benchmark prepared from EDU operational data was solved.
THR-TH: a high-temperature gas-cooled nuclear reactor core thermal hydraulics code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vondy, D.R.
1984-07-01
The ORNL version of PEBBLE, the (RZ) pebble bed thermal hydraulics code, has been extended for application to a prismatic gas-cooled reactor core. The supplemental treatment handles one-dimensional coolant flow in up to a three-dimensional core description. Power density data from a neutronics and exposure calculation are used as the basic information for the thermal hydraulics calculation of heat removal. Two-dimensional neutronics results may be expanded for a three-dimensional hydraulics calculation. The geometric description for the hydraulics problem is the same as used by the neutronics code. A two-dimensional thermal cell model is used to predict temperatures in the fuel channel. The capability is available in the local BOLD VENTURE computation system for reactor core analysis, with the capability to account for the effect of temperature feedback by nuclear cross section correlation. Some enhancements have also been added to the original code to add pebble bed modeling flexibility and to generate useful auxiliary results. For example, an estimate is made of the distribution of fuel temperatures based on average and extreme conditions regularly calculated at a number of locations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gordienko, P. V., E-mail: gorpavel@vver.kiae.ru; Kotsarev, A. V.; Lizorkin, M. P.
2014-12-15
The procedure for recovering pin-by-pin energy-release fields for the BIPR-8 code is briefly described, along with the BIPR-8 nodal core-calculation algorithm on which the recovery of the pin-by-pin fields is based. The verification of the pin-by-pin energy-release recovery module against the TVS-M program is described and its results are given.
NASA Astrophysics Data System (ADS)
Liu, Tianyu; Wolfe, Noah; Lin, Hui; Zieb, Kris; Ji, Wei; Caracappa, Peter; Carothers, Christopher; Xu, X. George
2017-09-01
This paper contains two parts revolving around Monte Carlo transport simulation on Intel Many Integrated Core coprocessors (MIC, also known as Xeon Phi). (1) MCNP 6.1 was recompiled into multithreading (OpenMP) and multiprocessing (MPI) forms, respectively, without modification to the source code. The new codes were tested on a 60-core 5110P MIC. The test case was FS7ONNi, a radiation shielding problem used in MCNP's verification and validation suite. It was observed that both codes became slower on the MIC than on a 6-core X5650 CPU, by a factor of 4 for the MPI code and, abnormally, 20 for the OpenMP code, and both exhibited limited strong-scaling capability. (2) We have recently added a Constructive Solid Geometry (CSG) module to our ARCHER code to provide better support for geometry modelling in radiation shielding simulation. The functions of this module are frequently called in the particle random walk process. To identify the performance bottleneck we developed a CSG proxy application and profiled the code using the geometry data from FS7ONNi. The profiling data showed that the code was primarily memory latency bound on the MIC. This study suggests that despite the low initial porting effort, Monte Carlo codes do not naturally lend themselves to the MIC platform — just as with GPUs — and that the memory latency problem needs to be addressed in order to achieve a decent performance gain.
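To illustrate the kind of logic such a CSG proxy application exercises, here is a small Python sketch of point-membership tests for cells built from Boolean combinations of primitives; the primitives and the test geometry are invented for illustration and are unrelated to the actual ARCHER module.

```python
import numpy as np

# Cells are predicates built from geometric primitives and Boolean
# operators, queried repeatedly along each particle's random-walk track.
def sphere(center, radius):
    return lambda p: np.linalg.norm(p - center) <= radius

def box(lo, hi):
    return lambda p: np.all(p >= lo) and np.all(p <= hi)

def union(a, b):
    return lambda p: a(p) or b(p)

def subtract(a, b):
    return lambda p: a(p) and not b(p)

# A shield block with a spherical void, loosely shielding-problem flavored.
cell = subtract(box(np.zeros(3), np.full(3, 10.0)),
                sphere(np.full(3, 5.0), 2.0))

rng = np.random.default_rng(0)
points = rng.uniform(0.0, 10.0, size=(20000, 3))
inside = sum(cell(p) for p in points)
print("fraction of sampled points inside cell:", inside / len(points))
```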
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mandelli, Diego; Prescott, Steven R; Smith, Curtis L
2011-07-01
In the Risk Informed Safety Margin Characterization (RISMC) approach we want to understand not just the frequency of an event like core damage, but how close we are (or are not) to key safety-related events and how we might increase our safety margins. The RISMC Pathway uses the probabilistic margin approach to quantify impacts to reliability and safety by coupling both probabilistic (via stochastic simulation) and mechanistic (via physics models) approaches. This coupling takes place through the interchange of physical parameters and operational or accident scenarios. In this paper we apply the RISMC approach to evaluate the impact of a power uprate on a pressurized water reactor (PWR) for a tsunami-induced flooding test case. This analysis is performed using the RISMC toolkit: the RELAP-7 and RAVEN codes. RELAP-7 is the new generation of system analysis codes that is responsible for simulating the thermal-hydraulic dynamics of PWR and boiling water reactor systems. RAVEN has two capabilities: to act as a controller of the RELAP-7 simulation (e.g., system activation) and to perform statistical analyses (e.g., run multiple RELAP-7 simulations where the sequencing/timing of events has been changed according to a set of stochastic distributions). By using the RISMC toolkit, we can evaluate how the power uprate affects the system recovery measures needed to avoid core damage after the PWR has lost all available AC power due to tsunami-induced flooding. The simulation of the actual flooding is performed by using a smoothed-particle hydrodynamics code: NEUTRINO.
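The probabilistic side of this coupling can be conveyed with a toy Monte Carlo sketch in Python; the distributions, grace time, and damage criterion below are illustrative assumptions, with a trivial surrogate standing in for the RELAP-7 physics.

```python
import numpy as np

rng = np.random.default_rng(42)
n_samples = 100_000

# Sample the timing of the flooding and of AC power recovery (assumed
# distributions, in hours), as the controller would by re-running the
# mechanistic simulation with perturbed event timings.
flood_arrival = rng.normal(loc=1.0, scale=0.2, size=n_samples)
power_recovery = flood_arrival + rng.lognormal(mean=1.5, sigma=0.5,
                                               size=n_samples)

# Assumed grace time before core damage once all AC power is lost [h].
time_to_damage = 8.0

core_damage = power_recovery > time_to_damage
print("core damage probability: %.2e" % core_damage.mean())
print("mean safety margin: %.2f h" % (time_to_damage - power_recovery).mean())
```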
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liang, Thomas K.S.; Ko, F.-K
Although only a few percent of residual power remains during plant outages, the associated risk of core uncovery and corresponding fuel overheating has been identified to be relatively high, particularly under midloop operation (MLO) in pressurized water reactors. However, to analyze the system behavior during outages, the tools currently available, such as RELAP5, RETRAN, etc., cannot easily perform the task. Therefore, a medium-sized program aiming at reactor outage simulation and evaluation, such as MLO with the loss of residual heat removal (RHR), was developed. All important thermal-hydraulic processes involved during MLO with the loss of RHR will be properly simulated by the newly developed reactor outage simulation and evaluation (ROSE) code. Important processes during MLO with loss of RHR involve a pressurizer insurge caused by the hot-leg flooding, reflux condensation, liquid holdup inside the steam generator, loop-seal clearance, core-level depression, etc. Since the accuracy of the pressure distribution from the classical nodal momentum approach will be degraded when the system is stratified and under atmospheric pressure, the two-region approach with a modified two-fluid model will be the theoretical basis of the new program to analyze the nuclear steam supply system during plant outages. To verify the analytical model in the first step, posttest calculations against the closed integral midloop experiments with loss of RHR were performed. The excellent simulation capacity of the ROSE code against the Institute of Nuclear Energy Research Integral System Test Facility (IIST) test data is demonstrated.
Finite element simulation of core inspection in helicopter rotor blades using guided waves.
Chakrapani, Sunil Kishore; Barnard, Daniel; Dayal, Vinay
2015-09-01
This paper extends the work presented earlier on inspection of helicopter rotor blades using guided Lamb modes by focusing on inspecting the spar-core bond. In particular, this research focuses on structures which employ high-stiffness, high-density core materials. Wave propagation in such structures deviates from the generic Lamb wave propagation in sandwich panels. To understand the various mode conversions, finite element models of a generalized helicopter rotor blade were created and subjected to transient analysis using a commercial finite element code, ANSYS. Numerical simulations showed that a Lamb wave excited in the spar section of the blade gets converted into a Rayleigh wave, which travels across the spar-core section and mode-converts back into a Lamb wave. Dispersion of Rayleigh waves in a multi-layered half-space was also explored. Damage was modeled in the form of a notch in the core section to simulate a cracked core, and delamination was modeled between the spar and core material to simulate spar-core disbond. Mode conversions under these damaged conditions were examined numerically. The numerical models help in assessing the difficulty of using nondestructive evaluation for complex structures and also highlight the physics behind the mode conversions which occur at various discontinuities.
Multi-core processing and scheduling performance in CMS
NASA Astrophysics Data System (ADS)
Hernández, J. M.; Evans, D.; Foulkes, S.
2012-12-01
Commodity hardware is going many-core. We might soon not be able to satisfy the job memory needs per core in the current single-core processing model in High Energy Physics. In addition, an ever increasing number of independent and incoherent jobs running on the same physical hardware without sharing resources might significantly affect processing performance. It will be essential to effectively utilize the multi-core architecture. CMS has incorporated support for multi-core processing in the event processing framework and the workload management system. Multi-core processing jobs share common data in memory, such as the code libraries, detector geometry and conditions data, resulting in a much lower memory usage than standard single-core independent jobs. Exploiting this new processing model requires a new model of computing resource allocation, departing from the standard single-core allocation for a job. The experiment job management system needs to have control over a larger quantum of resource, since multi-core aware jobs require the scheduling of multiple cores simultaneously. CMS is exploring the approach of using whole nodes as the unit in the workload management system, where all cores of a node are allocated to a multi-core job. Whole-node scheduling allows for optimization of the data/workflow management (e.g. I/O caching, local merging), but efficient utilization of all scheduled cores is challenging. Dedicated whole-node queues have been set up at all Tier-1 centers for exploring multi-core processing workflows in CMS. We present an evaluation of the performance of scheduling and executing multi-core workflows in whole-node queues, compared to the standard single-core processing workflows.
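A back-of-the-envelope sketch of the memory argument for whole-node scheduling; the per-job figures below are illustrative assumptions, not CMS measurements.

```python
# Memory footprint: N independent single-core jobs versus one N-core job
# that shares read-only data (code libraries, geometry, conditions).
shared = 1.5    # GB of shareable read-only data per job (assumed)
private = 0.5   # GB of per-core event data (assumed)
cores = 32

single_core_jobs = cores * (shared + private)   # nothing is shared
whole_node_job = shared + cores * private       # read-only data shared once
print("32 single-core jobs: %.1f GB" % single_core_jobs)
print("one 32-core job:     %.1f GB" % whole_node_job)
```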
EBT reactor systems analysis and cost code: description and users guide (Version 1)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santoro, R.T.; Uckan, N.A.; Barnes, J.M.
1984-06-01
An ELMO Bumpy Torus (EBT) reactor systems analysis and cost code that incorporates the most recent advances in EBT physics has been written. The code determines a set of reactors that fall within an allowed operating window determined from the coupling of ring and core plasma properties and the self-consistent treatment of the coupled ring-core stability and power balance requirements. The essential elements of the systems analysis and cost code are described, along with the calculational sequences leading to the specification of the reactor options and their associated costs. The input parameters, the constraints imposed upon them, and the operating range over which the code provides valid results are discussed. A sample problem and the interpretation of the results are also presented.
Spectroscopic properties of 130Sb, 132Te and 134I nuclei in 100-132Sn magic cores
NASA Astrophysics Data System (ADS)
Benrachi, Fatima; Khiter, Meriem; Laouet, Nadjet
2017-09-01
We have performed shell model calculations by means of the Oxbash nuclear structure code, using recent experimental single-particle (spes) and single-hole (shes) energies with valence space models above the 100Sn and 132Sn doubly magic cores. The two-body matrix elements (tbme) of the original CD-Bonn realistic interaction are introduced after being modified to take into account three-body forces. We have focused our study on the evaluation of the spectroscopic properties of the 130Sb, 132Te and 134I nuclei; in particular, their energy spectra, transition probabilities and moments have been determined. The resulting spectra are in reasonable agreement with the experimental data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romander, C M; Cagliostro, D J
Five experiments were performed to help evaluate the structural integrity of the reactor vessel and head design and to verify code predictions. In the first experiment (SM 1), a detailed model of the head was loaded statically to determine its stiffness. In the remaining four experiments (SM 2 to SM 5), models of the vessel and head were loaded dynamically under a simulated 661 MW-s hypothetical core disruptive accident (HCDA). Models SM 2 to SM 4, each of increasing complexity, systematically showed the effects of upper internals structures, a thermal liner, core support platform, and torospherical bottom on vessel response. Model SM 5, identical to SM 4 but more heavily instrumented, demonstrated experimental reproducibility and provided more comprehensive data. The models consisted of a Ni 200 vessel and core barrel, a head with shielding and simulated component masses, and an upper internals structure (UIS).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rouxelin, Pascal Nicolas; Strydom, Gerhard
Best-estimate plus uncertainty analysis of reactors is replacing the traditional conservative (stacked uncertainty) method for safety and licensing analysis. To facilitate uncertainty analysis applications, a comprehensive approach and methodology must be developed and applied. High temperature gas cooled reactors (HTGRs) have several features that require techniques not used in light-water reactor analysis (e.g., coated-particle design and large graphite quantities at high temperatures). The International Atomic Energy Agency has therefore launched the Coordinated Research Project on HTGR Uncertainty Analysis in Modeling to study uncertainty propagation in the HTGR analysis chain. The benchmark problem defined for the prismatic design is represented by the General Atomics Modular HTGR 350. The main focus of this report is the compilation and discussion of the results obtained for various permutations of Exercise I 2c and the use of the cross section data in Exercise II 1a of the prismatic benchmark, which are defined as the last and first steps of the lattice and core simulation phases, respectively. The report summarizes the Idaho National Laboratory (INL) best estimate results obtained for Exercise I 2a (fresh single-fuel block), Exercise I 2b (depleted single-fuel block), and Exercise I 2c (super cell), in addition to the first results of an investigation into the cross section generation effects for the super-cell problem. The two-dimensional deterministic code known as the New ESC based Weighting Transport (NEWT), included in the Standardized Computer Analyses for Licensing Evaluation (SCALE) 6.1.2 package, was used for the cross section evaluation, and the results obtained were compared to the three-dimensional stochastic SCALE module KENO VI. The NEWT cross section libraries were generated for several permutations of the current benchmark super-cell geometry and were then provided as input to the Phase II core calculation of the stand-alone neutronics Exercise II 1a. The steady state core calculations were simulated with the INL coupled-code system known as the Parallel and Highly Innovative Simulation for INL Code System (PHISICS) and the system thermal-hydraulics code known as the Reactor Excursion and Leak Analysis Program (RELAP) 5 3D, using the nuclear data libraries previously generated with NEWT. It was observed that significant differences in terms of multiplication factor and neutron flux exist between the various permutations of the Phase I super-cell lattice calculations. The use of these cross section libraries only leads to minor changes in the Phase II core simulation results for fresh fuel, but shows significantly larger discrepancies for spent fuel cores. Furthermore, large incongruities were found between the SCALE NEWT and KENO VI results for the super cells, and while some trends could be identified, a final conclusion on this issue could not yet be reached. This report will be revised in mid 2016 with more detailed analyses of the super-cell problems and their effects on the core models, using the latest version of SCALE (6.2). The super-cell models seem to show substantial improvements in terms of neutron flux as compared to single-block models, particularly at thermal energies.
Evaluation of a Variable-Impedance Ceramic Matrix Composite Acoustic Liner
NASA Technical Reports Server (NTRS)
Jones, M. G.; Watson, W. R.; Nark, D. M.; Howerton, B. M.
2014-01-01
As a result of significant progress in the reduction of fan and jet noise, there is growing concern regarding core noise. One method for achieving core noise reduction is via the use of acoustic liners. However, these liners must be constructed with materials suitable for high temperature environments and should be designed for optimum absorption of the broadband core noise spectrum. This paper presents results of tests conducted in the NASA Langley Liner Technology Facility to evaluate a variable-impedance ceramic matrix composite acoustic liner that offers the potential to achieve each of these goals. One concern is the porosity of the ceramic matrix composite material, and whether this might affect the predictability of liners constructed with this material. Comparisons between two variable-depth liners, one constructed with ceramic matrix composite material and the other constructed via stereolithography, are used to demonstrate this material porosity is not a concern. Also, some interesting observations are noted regarding the orientation of variable-depth liners. Finally, two propagation codes are validated via comparisons of predicted and measured acoustic pressure profiles for a variable-depth liner.
NASA Astrophysics Data System (ADS)
Homma, Yuto; Moriwaki, Hiroyuki; Ohki, Shigeo; Ikeda, Kazumi
2014-06-01
This paper deals with the verification of the three-dimensional triangular-prismatic discrete ordinates transport calculation code ENSEMBLE-TRIZ by comparison with the multi-group Monte Carlo calculation code GMVP in a large fast breeder reactor. The reactor is a 750 MWe sodium-cooled reactor. Nuclear characteristics are calculated at the beginning of cycle of the initial core and at the beginning and end of cycle of the equilibrium core. According to the calculations, the differences between the two methodologies are smaller than 0.0002 Δk in the multiplication factor, about 1% relative in the control rod reactivity, and 1% in the sodium void reactivity.
MILC Code Performance on High End CPU and GPU Supercomputer Clusters
NASA Astrophysics Data System (ADS)
DeTar, Carleton; Gottlieb, Steven; Li, Ruizi; Toussaint, Doug
2018-03-01
With recent developments in parallel supercomputing architecture, many core, multi-core, and GPU processors are now commonplace, resulting in more levels of parallelism, memory hierarchy, and programming complexity. It has been necessary to adapt the MILC code to these new processors starting with NVIDIA GPUs, and more recently, the Intel Xeon Phi processors. We report on our efforts to port and optimize our code for the Intel Knights Landing architecture. We consider performance of the MILC code with MPI and OpenMP, and optimizations with QOPQDP and QPhiX. For the latter approach, we concentrate on the staggered conjugate gradient and gauge force. We also consider performance on recent NVIDIA GPUs using the QUDA library.
Impact of Americium-241 (n,γ) Branching Ratio on SFR Core Reactivity and Spent Fuel Characteristics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hiruta, Hikaru; Youinou, Gilles J.; Dixon, Brent W.
An accurate prediction of core physics and fuel cycle parameters largely depends on the level of detail and accuracy of the nuclear data taken into account in actual calculations. 241Am is a major gateway nuclide for most of the minor actinides and is thus an important nuclide for core physics and fuel-cycle calculations. The 241Am(n,γ) branching ratio (BR) is in fact energy dependent (see Fig. 1); therefore, it is necessary to take into account the spectrum effect when calculating the average BR for full-core depletion calculations. Moreover, the accuracy of the BR used in the depletion calculations could significantly influence the core physics performance and post-irradiation fuel compositions. The 241Am(n,γ) BR in the ENDF/B-VII.0 library is relatively small and flat in the thermal energy range, gradually increases within the intermediate energy range, and becomes larger still in the fast energy range. This indicates that the properly collapsed BR for fast reactors could be significantly different from that of thermal reactors. The evaluated BRs also differ from one evaluation to another. As seen in Table I, average BRs for several evaluated libraries collapsed with a fast spectrum are similar but show some differences. Most currently available depletion codes use a pre-determined single-value BR for each library. Ideally, however, it should be determined on the fly, like the one-group cross sections. These issues provide a strong incentive to investigate the effect of different 241Am(n,γ) BRs on core and spent fuel parameters. This paper investigates the impact of the 241Am(n,γ) BR on the results of SFR full-core-based fuel-cycle calculations. The analysis is performed by gradually increasing the value of the BR from 0.15 to 0.25 and studying its impact on the core reactivity and the characteristics of SFR spent fuels over extended storage times (~10,000 years).
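The spectrum effect on the average BR amounts to reaction-rate weighting over energy groups, BR_eff = Σ_g φ_g σ_g BR_g / Σ_g φ_g σ_g. A minimal Python sketch, with all group values invented for illustration:

```python
import numpy as np

# Illustrative 3-group structure (thermal, intermediate, fast); all values
# are assumptions, not evaluated nuclear data.
phi   = np.array([0.01, 0.29, 0.70])   # normalized group fluxes (SFR-like)
sigma = np.array([600.0, 50.0, 2.0])   # 241Am(n,gamma) cross sections [b]
br    = np.array([0.10, 0.17, 0.24])   # group-wise branching ratios

# Reaction-rate weighting collapses the energy-dependent BR to one group.
br_eff = (phi * sigma * br).sum() / (phi * sigma).sum()
print("spectrum-averaged BR: %.3f" % br_eff)
```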
Deploying electromagnetic particle-in-cell (EM-PIC) codes on Xeon Phi accelerators boards
NASA Astrophysics Data System (ADS)
Fonseca, Ricardo
2014-10-01
The complexity of the phenomena involved in several relevant plasma physics scenarios, where highly nonlinear and kinetic processes dominate, makes purely theoretical descriptions impossible. Further understanding of these scenarios requires detailed numerical modeling, but fully relativistic particle-in-cell codes such as OSIRIS are computationally intensive. The quest for Exaflop computer systems has led to the development of HPC systems based on add-on accelerator cards, such as GPGPUs and, more recently, the Xeon Phi accelerators that power the current number 1 system in the world. These cards, also referred to as the Intel Many Integrated Core (MIC) architecture, offer peak theoretical performances of >1 TFlop/s for general-purpose calculations on a single board, and are receiving significant attention as an attractive alternative to CPUs for plasma modeling. In this work we report on our efforts towards the deployment of an EM-PIC code on a Xeon Phi architecture system. We focus on the parallelization and vectorization strategies followed, and present a detailed evaluation of code performance in comparison with the CPU code.
Unstructured Grids for Sonic Boom Analysis and Design
NASA Technical Reports Server (NTRS)
Campbell, Richard L.; Nayani, Sudheer N.
2015-01-01
An evaluation of two methods for improving the process for generating unstructured CFD grids for sonic boom analysis and design has been conducted. The process involves two steps: the generation of an inner core grid using a conventional unstructured grid generator such as VGRID, followed by the extrusion of a sheared and stretched collar grid through the outer boundary of the core grid. The first method evaluated, known as COB, automatically creates a cylindrical outer boundary definition for use in VGRID that makes the extrusion process more robust. The second method, BG, generates the collar grid by extrusion in a very efficient manner. Parametric studies have been carried out and new options evaluated for each of these codes with the goal of establishing guidelines for best practices for maintaining boom signature accuracy with as small a grid as possible. In addition, a preliminary investigation examining the use of the CDISC design method for reducing sonic boom utilizing these grids was conducted, with initial results confirming the feasibility of a new remote design approach.
Wallace, Sarah J; Worrall, Linda; Rose, Tanya; Le Dorze, Guylaine
2017-11-12
This study synthesised the findings of three separate consensus processes exploring the perspectives of key stakeholder groups about important aphasia treatment outcomes. This process was conducted to generate recommendations for outcome domains to be included in a core outcome set for aphasia treatment trials. International Classification of Functioning, Disability, and Health codes were examined to identify where the groups of: (1) people with aphasia, (2) family members, (3) aphasia researchers, and (4) aphasia clinicians/managers, demonstrated congruence in their perspectives regarding important treatment outcomes. Codes were contextualized using qualitative data. Congruence across three or more stakeholder groups was evident for the ICF chapters: Mental functions; Communication; and Services, systems, and policies. Quality of life was explicitly identified by clinicians/managers and researchers, while people with aphasia and their families identified outcomes known to be determinants of quality of life. Core aphasia outcomes include: language, emotional wellbeing, communication, patient-reported satisfaction with treatment and impact of treatment, and quality of life. International Classification of Functioning, Disability, and Health coding can be used to compare stakeholder perspectives and identify domains for core outcome sets. Pairing coding with qualitative data may ensure important nuances of meaning are retained. Implications for rehabilitation: The outcomes measured in treatment research should be relevant to stakeholders and support health care decision making. Core outcome sets (an agreed, minimum set of outcomes and outcome measures) are increasingly being used to ensure the relevancy and consistency of the outcomes measured in treatment studies. Important aphasia treatment outcomes span all components of the International Classification of Functioning, Disability, and Health. Stakeholders demonstrated congruence in the identification of important outcomes which related to Mental functions; Communication; Services, systems, and policies; and Quality of life. A core outcome set for aphasia treatment research should include measures relating to: language, emotional wellbeing, communication, patient-reported satisfaction with treatment and impact of treatment, and quality of life. Coding using the International Classification of Functioning, Disability, and Health presents a novel methodology for the comparison of stakeholder perspectives to inform recommendations for outcome constructs to be included in a core outcome set. Coding can be paired with qualitative data to ensure nuances of meaning are retained.
Variation of SNOMED CT coding of clinical research concepts among coding experts.
Andrews, James E; Richesson, Rachel L; Krischer, Jeffrey
2007-01-01
To compare consistency of coding among professional SNOMED CT coders representing three commercial providers of coding services when coding clinical research concepts with SNOMED CT. A sample of clinical research questions from case report forms (CRFs) generated by the NIH-funded Rare Disease Clinical Research Network (RDCRN) were sent to three coding companies with instructions to code the core concepts using SNOMED CT. The sample consisted of 319 question/answer pairs from 15 separate studies. The companies were asked to select SNOMED CT concepts (in any form, including post-coordinated) that capture the core concept(s) reflected in the question. Also, they were asked to state their level of certainty, as well as how precise they felt their coding was. Basic frequencies were calculated to determine raw level agreement among the companies and other descriptive information. Krippendorff's alpha was used to determine a statistical measure of agreement among the coding companies for several measures (semantic, certainty, and precision). No significant level of agreement among the experts was found. There is little semantic agreement in coding of clinical research data items across coders from 3 professional coding services, even using a very liberal definition of agreement.
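A minimal Python sketch of the raw-agreement part of such a comparison; the concept identifiers and coder outputs below are toy data, not the RDCRN sample.

```python
from itertools import combinations

# One selected SNOMED CT concept per data item, per coding company (toy data).
codings = {
    "company_A": ["22298006", "73211009", "38341003", "44054006"],
    "company_B": ["22298006", "73211009", "59621000", "44054006"],
    "company_C": ["57054005", "73211009", "38341003", "73211009"],
}

# Raw pairwise agreement: fraction of items with an identical concept choice.
for a, b in combinations(codings, 2):
    matches = sum(x == y for x, y in zip(codings[a], codings[b]))
    print("%s vs %s: %.0f%%" % (a, b, 100.0 * matches / len(codings[a])))
```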
Computational Performance of a Parallelized Three-Dimensional High-Order Spectral Element Toolbox
NASA Astrophysics Data System (ADS)
Bosshard, Christoph; Bouffanais, Roland; Clémençon, Christian; Deville, Michel O.; Fiétier, Nicolas; Gruber, Ralf; Kehtari, Sohrab; Keller, Vincent; Latt, Jonas
In this paper, a comprehensive performance review of an MPI-based high-order three-dimensional spectral element method C++ toolbox is presented. The focus is put on the performance evaluation of several aspects, with a particular emphasis on parallel efficiency. The performance evaluation is analyzed with the help of a time prediction model based on a parameterization of the application and the hardware resources. A tailor-made CFD computation benchmark case is introduced and used to carry out this review, stressing the particular interest for clusters with up to 8192 cores. Some problems in the parallel implementation have been detected and corrected. The theoretical complexities with respect to the number of elements, to the polynomial degree, and to communication needs are correctly reproduced. It is concluded that this type of code has a nearly perfect speed-up on machines with thousands of cores, and is ready to make the step to next-generation petaflop machines.
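Strong-scaling speed-up and parallel efficiency follow directly from measured timings; a small Python sketch with illustrative numbers (not timings from the paper):

```python
# speedup(p) = T(1) / T(p);  efficiency(p) = speedup(p) / p
timings = {1: 1000.0, 1024: 1.05, 4096: 0.27, 8192: 0.14}  # seconds (assumed)
t1 = timings[1]
for p in sorted(timings):
    speedup = t1 / timings[p]
    print("p=%5d  speedup=%8.1f  efficiency=%5.1f%%"
          % (p, speedup, 100.0 * speedup / p))
```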
Neutron Radiation Damage Estimation in the Core Structure Base Metal of RSG GAS
NASA Astrophysics Data System (ADS)
Santa, S. A.; Suwoto
2018-02-01
Radiation damage in the core structure of the Indonesian RSG GAS multipurpose reactor, resulting from the reaction of fast and thermal neutrons with the core structural material, was investigated for the first time after almost 30 years of operation. The aim is to analyze the degradation level of the critical components of the RSG GAS reactor so that the remaining life of its components can be estimated. The evaluation results for the remaining life of critical components will be used as supporting data for the submission of a reactor operating permit extension. Material damage analysis due to neutron radiation is performed for the core structure components made of AlMg3 and for the core structure reinforcement bolts made of SUS304. The material damage evaluation was done on Al and Fe as the base metals of AlMg3 and SUS304, respectively. Neutron fluences are evaluated from neutron flux calculations for the U3Si8-Al equilibrium core operated at a rated power of 15 MW. Calculations using the CITATION module of the SRAC2006 code show that the maximum total neutron flux and the flux >0.1 MeV are 2.537E+14 n/cm2/s and 3.376E+13 n/cm2/s, respectively, located at the CIP core center close to the fuel element. After operating up to the end of core configuration #89, the total neutron fluence and the fluence >0.1 MeV reached 9.063E+22 and 1.269E+22 n/cm2, respectively. These correspond to material damage of Al and Fe of 17.91 and 10.06 dpa, respectively. Referring to the lifetime of Al-1100 material irradiated in a neutron field with thermal flux/total flux = 1.7, which is capable of accepting material damage up to 250 dpa, it was concluded that the RSG GAS reactor core structure has undergone 7.16% of its operating life span. This means that the core structure of the RSG GAS reactor is still capable of receiving a total neutron fluence of 9.637E+22 n/cm2 or a fluence >0.1 MeV of 5.672E+22 n/cm2.
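To first order, the step from fluence to displacement damage is a multiplication by a spectrum-averaged displacement cross section, dpa ≈ Φ·σ_dpa. A Python sketch; the σ_dpa value is an assumption chosen only to be consistent with the figures quoted above, not a recommended datum.

```python
# dpa estimate for Al from total fluence and an assumed spectrum-averaged
# displacement cross section (illustrative only).
fluence = 9.063e22        # total neutron fluence [n/cm^2]
sigma_dpa = 198.0e-24     # assumed spectrum-averaged dpa cross section [cm^2]
print("dpa estimate for Al: %.2f" % (fluence * sigma_dpa))
```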
Systematic void fraction studies with RELAP5, FRANCESCA and HECHAN
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stosic, Z.; Preusche, G.
1996-08-01
In enhancing the scope of standard thermal-hydraulic code applications beyond their capabilities, i.e., coupling with a one- and/or three-dimensional kinetics core model, the void fraction transferred from the thermal-hydraulics to the core model plays a determining role in the normal operating range and at high core flow, as the generated heat and axial power profiles are direct functions of the void distribution in the core. Hence, it is very important to know whether the void-quality models in the programs to be coupled are compatible, so as to allow the interactive exchange of data based on these constitutive void-quality relations. The presented void fraction study is performed in order to give the basis for concluding whether a transient core simulation using the RELAP5 void fractions can calculate the axial power shapes adequately. To this end, the void fractions calculated with RELAP5 are compared with those calculated by the BWR safety code for licensing, FRANCESCA, and by the best-estimate model for pre- and post-dryout calculation in a BWR heated channel, HECHAN. In addition, a comparison with standard experimental void-quality benchmark tube data is performed for the HECHAN code.
Upgrade of Irradiation Test Capability of the Experimental Fast Reactor Joyo
NASA Astrophysics Data System (ADS)
Sekine, Takashi; Aoyama, Takafumi; Suzuki, Soju; Yamashita, Yoshioki
2003-06-01
The JOYO MK-II core was operated from 1983 to 2000 as a fast neutron irradiation bed. In order to meet various requirements for irradiation tests for the development of FBRs, the JOYO upgrading project, named the MK-III program, was initiated. The irradiation capability of the MK-III core will be about four times larger than that of the MK-II core. Advanced irradiation test subassemblies, such as a capsule-type subassembly and an on-line instrumentation rig, are planned. As an innovative reactor safety system, the irradiation test of the Self-Actuated Shutdown System (SASS) will be conducted. In order to improve the accuracy of the neutron fluence, the core management code system was upgraded, and a Monte Carlo code and Helium Accumulation Fluence Monitors (HAFM) were applied. The MK-III core is planned to achieve initial criticality in July 2003.
Development and preliminary verification of the 3D core neutronic code: COCO
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, H.; Mo, K.; Li, W.
With its recent booming economic growth and the environmental concerns that follow, China is proactively pushing forward nuclear power development and encouraging the tapping of clean energy. Under this situation, CGNPC, as one of the largest energy enterprises in China, is planning to develop its own nuclear technology in order to support the growing number of nuclear plants either under construction or in operation. This paper introduces the recent progress in software development at CGNPC. The focus is placed on the physical models and preliminary verification results from the recent development of the 3D core neutronic code COCO. In the COCO code, the non-linear Green's function method is employed to calculate the neutron flux. In order to use the discontinuity factor, the Neumann (second kind) boundary condition is utilized in the Green's function nodal method. Additionally, the COCO code includes the necessary physical models, e.g., a single-channel thermal-hydraulic module, a burnup module, a pin power reconstruction module and a cross-section interpolation module. The preliminary verification results show that the COCO code is sufficient for reactor core design and analysis for pressurized water reactors (PWR). (authors)
The CRONOS Code for Astrophysical Magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Kissmann, R.; Kleimann, J.; Krebl, B.; Wiengarten, T.
2018-06-01
We describe the magnetohydrodynamics (MHD) code CRONOS, which has been used in astrophysics and space-physics studies in recent years. CRONOS has been designed to be easily adaptable to the problem in hand, where the user can expand or exchange core modules or add new functionality to the code. This modularity comes about through its implementation using a C++ class structure. The core components of the code include solvers for both hydrodynamical (HD) and MHD problems. These problems are solved on different rectangular grids, which currently support Cartesian, spherical, and cylindrical coordinates. CRONOS uses a finite-volume description with different approximate Riemann solvers that can be chosen at runtime. Here, we describe the implementation of the code with a view toward its ongoing development. We illustrate the code’s potential through several (M)HD test problems and some astrophysical applications.
A flooding induced station blackout analysis for a pressurized water reactor using the RISMC toolkit
Mandelli, Diego; Prescott, Steven; Smith, Curtis; ...
2015-05-17
In this paper we evaluate the impact of a power uprate on a pressurized water reactor (PWR) for a tsunami-induced flooding test case. This analysis is performed using the RISMC toolkit: the RELAP-7 and RAVEN codes. RELAP-7 is the new generation of system analysis codes that is responsible for simulating the thermal-hydraulic dynamics of PWR and boiling water reactor systems. RAVEN has two capabilities: to act as a controller of the RELAP-7 simulation (e.g., component/system activation) and to perform statistical analyses. In our case, the simulation of the flooding is performed by using an advanced smoothed-particle hydrodynamics code called NEUTRINO. The obtained results allow the user to investigate and quantify the impact of the timing and sequencing of events on system safety. The impact of the power uprate is determined in terms of both core damage probability and safety margins.
PRIZMA predictions of in-core detection indications in the VVER-1000 reactor
NASA Astrophysics Data System (ADS)
Kandiev, Yadgar Z.; Kashayeva, Elena A.; Malyshin, Gennady N.; Modestov, Dmitry G.; Khatuntsev, Kirill E.
2014-06-01
The paper describes calculations performed with the PRIZMA code (1) to predict the indications of in-core rhodium detectors in the VVER-1000 reactor for several core fragments, with allowance for fuel and rhodium burnout.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jeong, Hae-Yong; Ha, Kwi-Seok; Chang, Won-Pyo
The local blockage in a subassembly of a liquid metal-cooled reactor (LMR) is of importance to plant safety because of the compact design and the high power density of the core. To analyze the thermal-hydraulic parameters in a subassembly of a liquid metal-cooled reactor with a flow blockage, the Korea Atomic Energy Research Institute has developed the MATRA-LMR-FB code. This code uses the distributed resistance model to describe the sweeping flow formed by the wire wrap around the fuel rods and to model the recirculation flow after a blockage. The hybrid difference scheme is also adopted for the description of the convective terms in the recirculating wake region of low velocity. Some state-of-the-art turbulent mixing models were implemented in the code, and the models suggested by Rehme and by Zhukov are analyzed and found to be appropriate for the description of the flow blockage in an LMR subassembly. The MATRA-LMR-FB code accurately predicts the experimental data of the Oak Ridge National Laboratory 19-pin bundle with a blockage for both high-flow and low-flow conditions. The influences of the distributed resistance model, the hybrid difference method, and the turbulent mixing models are evaluated step by step against the experimental data. The appropriateness of the models has also been evaluated through a comparison with the results of a COMMIX code calculation. The flow blockage for the KALIMER design has been analyzed with the MATRA-LMR-FB code and compared with the SABRE code to confirm the design safety with respect to flow blockage.
CoreTSAR: Core Task-Size Adapting Runtime
Scogland, Thomas R. W.; Feng, Wu-chun; Rountree, Barry; ...
2014-10-27
Heterogeneity continues to increase at all levels of computing, with the rise of accelerators such as GPUs, FPGAs, and other co-processors into everything from desktops to supercomputers. As a consequence, efficiently managing such disparate resources has become increasingly complex. CoreTSAR seeks to reduce this complexity by adaptively worksharing parallel-loop regions across compute resources without requiring any transformation of the code within the loop. Our results show performance improvements of up to three-fold over a current state-of-the-art heterogeneous task scheduler, as well as linear performance scaling from a single GPU to four GPUs for many codes. In addition, CoreTSAR demonstrates a robust ability to adapt to both a variety of workloads and underlying system configurations.
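The core idea of rate-based adaptive worksharing can be sketched in a few lines of Python; the device names and rates are hypothetical, and the real runtime schedules across accelerators rather than through this toy splitter.

```python
# Split the next parallel-loop region across devices in proportion to the
# throughput each device achieved on the previous pass.
def split_iterations(n_iters, rates):
    total = sum(rates.values())
    shares = {dev: int(n_iters * r / total) for dev, r in rates.items()}
    # Hand any remainder from integer truncation to the fastest device.
    fastest = max(rates, key=rates.get)
    shares[fastest] += n_iters - sum(shares.values())
    return shares

rates = {"cpu": 1.2e6, "gpu0": 7.8e6, "gpu1": 7.5e6}  # iterations/s (assumed)
print(split_iterations(1_000_000, rates))
```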
Fenton, Susan H; Benigni, Mary Sue
2014-01-01
The transition from ICD-9-CM to ICD-10-CM/PCS is expected to result in longitudinal data discontinuities, as occurred with cause-of-death coding in 1999. The General Equivalence Maps (GEMs), while useful for suggesting potential maps, do not provide guidance regarding the frequency of any matches. Longitudinal data comparisons can only be reliable if they use comparability ratios or factors calculated from records coded in both classification systems. This study utilized 3,969 de-identified dually coded records to examine raw comparability ratios, as well as the comparability ratios for the Joint Commission Core Measures. The raw comparability factor results range from 16.216 for Nicotine dependence, unspecified, uncomplicated to 118.009 for Chronic obstructive pulmonary disease, unspecified. The Joint Commission Core Measure comparability factor results range from 27.15 for Acute Respiratory Failure to 130.16 for Acute Myocardial Infarction. These results indicate significant differences in comparability between ICD-9-CM and ICD-10-CM code assignment, including when the codes are used for external reporting such as the Joint Commission Core Measures. To prevent errors in decision-making and reporting, all stakeholders relying on longitudinal data for measure reporting and other purposes should investigate the impact of the conversion on their data.
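A comparability factor in this sense is the ratio of counts for the same records coded in both systems, scaled to 100. A Python sketch with illustrative counts (chosen only to reproduce the factors quoted above, not taken from the study's data):

```python
# condition: (n records assigned the code in ICD-9-CM, in ICD-10-CM),
# counted on the same dually coded records (illustrative counts).
dually_coded = {
    "Nicotine dependence, unspecified, uncomplicated": (370, 60),
    "Chronic obstructive pulmonary disease, unspecified": (427, 504),
}

for condition, (n_icd9, n_icd10) in dually_coded.items():
    factor = 100.0 * n_icd10 / n_icd9   # comparability factor, per 100
    print("%s: %.3f" % (condition, factor))
```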
NASA Astrophysics Data System (ADS)
Mielikainen, Jarno; Huang, Bormin; Huang, Allen H.
2014-10-01
The Purdue-Lin scheme is a relatively sophisticated microphysics scheme in the Weather Research and Forecasting (WRF) model. The scheme includes six classes of hydrometeors: water vapor, cloud water, rain, cloud ice, snow and graupel. The scheme is very suitable for massively parallel computation, as there are no interactions among horizontal grid points. In this paper, we accelerate the Purdue-Lin scheme using Intel Many Integrated Core Architecture (MIC) hardware. The Intel Xeon Phi is a high-performance coprocessor consisting of up to 61 cores, connected to a CPU via the PCI Express (PCIe) bus. In this paper, we discuss in detail the code optimization issues encountered while tuning the Purdue-Lin microphysics Fortran code for the Xeon Phi. In particular, getting good performance required utilizing multiple cores and the wide vector operations, and making efficient use of memory. The results show that the optimizations improved the performance of the original code on a Xeon Phi 5110P by a factor of 4.2x. Furthermore, the same optimizations improved performance on an Intel Xeon E5-2603 CPU by a factor of 1.2x compared to the original code.
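The vectorization part of such tuning boils down to replacing per-point scalar loops with whole-array operations over contiguous memory. A Python/NumPy analogue of that restructuring, using a generic saturation-adjustment toy rather than the actual Purdue-Lin code:

```python
import numpy as np

# Scalar, per-grid-point update: poorly suited to wide vector units.
def saturation_adjust_loop(qv, qc, qsat):
    for k in range(qv.shape[0]):
        for i in range(qv.shape[1]):
            excess = qv[k, i] - qsat[k, i]
            if excess > 0.0:
                qv[k, i] -= excess
                qc[k, i] += excess

# The same update as whole-array operations, which map onto contiguous
# loads and SIMD lanes.
def saturation_adjust_vec(qv, qc, qsat):
    excess = np.maximum(qv - qsat, 0.0)
    qv -= excess
    qc += excess

qv = np.array([[1.2, 0.8], [1.5, 0.9]])
qc = np.zeros_like(qv)
saturation_adjust_vec(qv, qc, np.ones_like(qv))
print(qv, qc)
```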
Progress Towards a Rad-Hydro Code for Modern Computing Architectures LA-UR-10-02825
NASA Astrophysics Data System (ADS)
Wohlbier, J. G.; Lowrie, R. B.; Bergen, B.; Calef, M.
2010-11-01
We are entering an era of high performance computing where data movement is the overwhelming bottleneck to scalable performance, as opposed to the speed of floating-point operations per processor. All multi-core hardware paradigms, whether heterogeneous or homogeneous, be it the Cell processor, GPGPU, or multi-core x86, share this common trait. In multi-physics applications such as inertial confinement fusion or astrophysics, one may be solving multi-material hydrodynamics with tabular equation of state data lookups, radiation transport, nuclear reactions, and charged particle transport in a single time cycle. The algorithms are intensely data dependent, e.g., EOS, opacity, nuclear data, and multi-core hardware memory restrictions are forcing code developers to rethink code and algorithm design. For the past two years LANL has been funding a small effort referred to as Multi-Physics on Multi-Core to explore ideas for code design as pertaining to inertial confinement fusion and astrophysics applications. The near term goals of this project are to have a multi-material radiation hydrodynamics capability, with tabular equation of state lookups, on cartesian and curvilinear block structured meshes. In the longer term we plan to add fully implicit multi-group radiation diffusion and material heat conduction, and block structured AMR. We will report on our progress to date.
WOMBAT: A Scalable and High-performance Astrophysical Magnetohydrodynamics Code
NASA Astrophysics Data System (ADS)
Mendygral, P. J.; Radcliffe, N.; Kandalla, K.; Porter, D.; O'Neill, B. J.; Nolting, C.; Edmon, P.; Donnert, J. M. F.; Jones, T. W.
2017-02-01
We present a new code for astrophysical magnetohydrodynamics specifically designed and optimized for high performance and scaling on modern and future supercomputers. We describe a novel hybrid OpenMP/MPI programming model that emerged from a collaboration between Cray, Inc. and the University of Minnesota. This design utilizes MPI-RMA optimized for thread scaling, which allows the code to run extremely efficiently at very high thread counts ideal for the latest generation of multi-core and many-core architectures. Such performance characteristics are needed in the era of “exascale” computing. We describe and demonstrate our high-performance design in detail with the intent that it may be used as a model for other, future astrophysical codes intended for applications demanding exceptional performance.
NASA Astrophysics Data System (ADS)
Aufiero, M.; Cammi, A.; Fiorina, C.; Leppänen, J.; Luzzi, L.; Ricotti, M. E.
2013-10-01
In this work, the Monte Carlo burn-up code SERPENT-2 has been extended and employed to study the material isotopic evolution of the Molten Salt Fast Reactor (MSFR). This promising GEN-IV nuclear reactor concept features peculiar characteristics such as the on-line fuel reprocessing, which prevents the use of commonly available burn-up codes. Besides, the presence of circulating nuclear fuel and radioactive streams from the core to the reprocessing plant requires a precise knowledge of the fuel isotopic composition during the plant operation. The developed extension of SERPENT-2 directly takes into account the effects of on-line fuel reprocessing on burn-up calculations and features a reactivity control algorithm. It is here assessed against a dedicated version of the deterministic ERANOS-based EQL3D procedure (PSI-Switzerland) and adopted to analyze the MSFR fuel salt isotopic evolution. Particular attention is devoted to study the effects of reprocessing time constants and efficiencies on the conversion ratio and the molar concentration of elements relevant for solubility issues (e.g., trivalent actinides and lanthanides). Quantities of interest for fuel handling and safety issues are investigated, including decay heat and activities of hazardous isotopes (neutron and high energy gamma emitters) in the core and in the reprocessing stream. The radiotoxicity generation is also analyzed for the MSFR nominal conditions. The production of helium and the depletion in tungsten content due to nuclear reactions are calculated for the nickel-based alloy selected as reactor structural material of the MSFR. These preliminary evaluations can be helpful in studying the radiation damage of both the primary salt container and the axial reflectors.
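On-line reprocessing enters the burn-up equations as a continuous removal term alongside decay. A minimal Python sketch of a single-nuclide balance under that assumption, with illustrative rate constants:

```python
from scipy.integrate import solve_ivp

# dN/dt = P - lambda*N - N/tau_rep: production, radioactive decay, and
# continuous removal with reprocessing time constant tau_rep (all values
# below are assumptions for illustration).
P, lam, tau_rep = 1.0e15, 1.0e-9, 3.0e5   # production rate, decay const, [s]

def rhs(t, N):
    return P - lam * N - N / tau_rep

sol = solve_ivp(rhs, (0.0, 3.0e6), [0.0], rtol=1e-8)
print("analytic equilibrium:", P / (lam + 1.0 / tau_rep))
print("N at t = 3e6 s:      ", sol.y[0, -1])
```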
NASA Astrophysics Data System (ADS)
Nijssen, B.; Hamman, J.; Bohn, T. J.
2015-12-01
The Variable Infiltration Capacity (VIC) model is a macro-scale semi-distributed hydrologic model. VIC development began in the early 1990s and it has been used extensively, applied from basin to global scales. VIC has been used in many applications, including the construction of hydrologic data sets, trend analysis, data evaluation and assimilation, forecasting, coupled climate modeling, and climate change impact analysis. Ongoing applications of the VIC model include the University of Washington's drought monitor and forecast systems, and NASA's land data assimilation systems. The development of VIC version 5.0 focused on reconfiguring the legacy VIC source code to support a wider range of modern modeling applications. The VIC source code has been moved to a public GitHub repository to encourage participation by the model development community at large. The reconfiguration has separated the physical core of the model from the driver, which is responsible for memory allocation, pre- and post-processing, and I/O. VIC 5.0 includes four drivers that use the same physical model core: classic, image, CESM, and Python. The classic driver supports legacy VIC configurations and runs in the traditional time-before-space configuration. The image driver includes a space-before-time configuration, netCDF I/O, and uses MPI for parallel processing. This configuration facilitates the direct coupling of streamflow routing, reservoir, and irrigation processes within VIC. The image driver is the foundation of the CESM driver, which couples VIC to CESM's CPL7 and a prognostic atmosphere. Finally, we have added a Python driver that provides access to the functions and datatypes of VIC's physical core from a Python interface. This presentation demonstrates how reconfiguring legacy source code extends the life and applicability of a research model.
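As an illustration of what driving a shared physical core from Python could look like, here is a hypothetical sketch; the function name and the toy bucket model are inventions for illustration, not the actual VIC 5.0 Python API.

```python
import numpy as np

def run_water_balance(forcing, params):
    # Stand-in for a wrapped physical-core routine: a trivial bucket model
    # in place of VIC's actual infiltration/evapotranspiration physics.
    storage, runoff = params["init_storage"], []
    for precip in forcing["precip"]:
        storage += precip
        excess = max(storage - params["capacity"], 0.0)
        storage -= excess + params["et_rate"] * storage
        runoff.append(excess)
    return np.array(runoff)

forcing = {"precip": np.random.default_rng(1).gamma(0.5, 4.0, size=365)}
params = {"init_storage": 10.0, "capacity": 150.0, "et_rate": 0.01}
print("annual runoff:", run_water_balance(forcing, params).sum())
```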
NASA Astrophysics Data System (ADS)
Al Zain, Jamal; El Hajjaji, O.; El Bardouni, T.; Boukhal, H.; Jaï, Otman
2018-06-01
The MNSR is a pool-type research reactor, which is difficult to model because of the importance of neutron leakage. The aim of this study is to evaluate a 2-D transport model of the reactor compatible with the latest release of the DRAGON code, together with a 3-D diffusion model for the DONJON code. The DRAGON code is used to generate the group macroscopic cross sections needed for full-core diffusion calculations. The DONJON diffusion code is then used to compute the effective multiplication factor (keff), the neutron flux, and the feedback reactivity coefficients, which account for variations in fuel and moderator temperatures as well as the void coefficient, for the MNSR research reactor. The cross sections of all the reactor components at different temperatures were generated using the DRAGON code. These group constants were then used in the DONJON code to calculate the multiplication factor and the neutron spectrum at different water and fuel temperatures using 69 energy groups. Only one parameter was changed at a time, while all other parameters were kept constant. Finally, good agreement between the calculated and measured values has been obtained for each of the feedback reactivity coefficients and for the neutron flux.
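The reported feedback coefficients reduce to finite differences of reactivity with respect to the single perturbed parameter. A short Python sketch with illustrative keff values (not results from this study):

```python
# Fuel temperature feedback coefficient from two core calculations that
# differ only in fuel temperature (keff values are assumed).
k_cold, k_hot = 1.00350, 1.00295   # keff at T1 and T2
T1, T2 = 300.0, 600.0              # fuel temperatures [K]

rho = lambda k: (k - 1.0) / k      # reactivity from keff
alpha = (rho(k_hot) - rho(k_cold)) / (T2 - T1)
print("fuel temperature coefficient: %.3e /K (%.3f pcm/K)"
      % (alpha, alpha * 1e5))
```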
NASA Astrophysics Data System (ADS)
Serpa-Imbett, C. M.; Marín-Alfonso, J.; Gómez-Santamaría, C.; Betancur-Agudelo, L.; Amaya-Fernández, F.
2013-12-01
Space division multiplexing in multicore fibers is one of the most promising technologies for supporting the transmissions of next-generation peta-to-exaflop-scale supercomputers and mega data centers, owing to the cost and space savings offered by the new optical fibers with multiple cores. Additionally, multicore fibers allow photonic signal processing in optical communication systems, taking advantage of mode coupling phenomena. In this work, we have numerically simulated an optical MIMO-OFDM (multiple-input multiple-output orthogonal frequency division multiplexing) transmission using Alamouti coding over a twin-core fiber with low coupling. Furthermore, an optical OFDM signal is transmitted through one core of a singlemode fiber, using pilot-aided channel estimation. We compare the transmission performance in the twin-core fiber and in the singlemode fiber using numerical results for the bit-error rate, considering linear propagation and Gaussian noise through an optical fiber link. We carry out an optical fiber transmission of OFDM frames using 8 PSK and 16 QAM, with bit rates of 130 Gb/s and 170 Gb/s, respectively. We obtain a penalty of around 4 dB for the 8 PSK transmissions after 100 km of linear fiber-optic propagation, for both the singlemode and the twin-core fiber, and a penalty of around 6 dB for the 16 QAM transmissions under the same conditions. The transmission in a two-core fiber using Alamouti-coded OFDM-MIMO exhibits better performance, offering a good alternative for mitigating fiber impairments and suggesting that Alamouti coding can be extended to spatially multiplexed multichannel systems in multicore fibers.
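As background for readers unfamiliar with the scheme, the following toy Python sketch shows the Alamouti 2x1 space-time block code over two flat, uncoupled subchannels (a deliberate simplification standing in for the two fiber cores; it is not the paper's MIMO-OFDM simulation).

```python
# Toy Alamouti code over two ideal, uncoupled subchannels (e.g., two fiber
# cores); channel gains are assumed known, as if from pilot-aided estimation.
import numpy as np

rng = np.random.default_rng(0)
h = rng.normal(size=2) + 1j * rng.normal(size=2)   # per-core channel gains
s = np.array([1 + 1j, -1 + 1j]) / np.sqrt(2)       # one QPSK symbol pair

# Encode: slot 1 sends (s1, s2); slot 2 sends (-s2*, s1*).
noise = 0.01 * (rng.normal(size=2) + 1j * rng.normal(size=2))
y1 = h[0] * s[0] + h[1] * s[1] + noise[0]
y2 = -h[0] * np.conj(s[1]) + h[1] * np.conj(s[0]) + noise[1]

# Linear combining recovers each symbol with diversity gain |h1|^2 + |h2|^2.
g = np.abs(h[0])**2 + np.abs(h[1])**2
s1_hat = (np.conj(h[0]) * y1 + h[1] * np.conj(y2)) / g
s2_hat = (np.conj(h[1]) * y1 - h[0] * np.conj(y2)) / g
print(np.round([s1_hat, s2_hat], 3), "vs", s)
```

Two symbols are sent over two time slots, and the combiner recovers each with the full two-branch diversity gain, which is what makes the scheme attractive for mitigating per-core impairments.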
From core to coax: extending core RF modelling to include SOL, Antenna, and PFC
NASA Astrophysics Data System (ADS)
Shiraiwa, Syun'ichi
2017-10-01
A new technique for the calculation of RF waves in toroidal geometry enables the simultaneous incorporation of antenna geometry, plasma facing components (PFCs), the scrape-off layer (SOL), and core propagation. Traditionally, core RF wave propagation and antenna coupling have been calculated separately, both using rather simplified SOL plasmas. The new approach instead captures wave propagation in the SOL and its interaction with non-conforming PFCs, permitting self-consistent calculation of core absorption and edge power loss, as well as investigation of far- and near-field impurity generation from RF sheaths and of breakdown issues arising from antenna electric fields. Our approach combines the field solutions obtained from a core spectral code with a hot plasma dielectric and an edge FEM code using a cold plasma approximation, via a surface admittance-like matrix. The approach was verified using the TORIC core ICRF spectral code and the commercial COMSOL FEM package, and was extended to a 3D torus using the open-source scalable MFEM library. The simulation results revealed that as the core wave damping gets weaker, the wave absorption in the edge can become non-negligible. The three-dimensional capability with a non-axisymmetric edge is being applied to study the difference in antenna characteristics between the field-aligned and toroidally aligned antennas on Alcator C-Mod, as well as surface wave excitation on NSTX-U. Work supported by the U.S. DoE, OFES, using User Facility Alcator C-Mod, DE-FC02-99ER54512 and Contract No. DE-FC02-01ER54648.
Are Military and Medical Ethics Necessarily Incompatible? A Canadian Case Study.
Rochon, Christiane; Williams-Jones, Bryn
2016-12-01
Military physicians are often perceived to be in a position of 'dual loyalty' because they have responsibilities towards their patients but also towards their employer, the military institution. Further, they have to subscribe to and are bound by two distinct codes of ethics (i.e., medical and military), each with its own set of values and duties, that could at first glance be considered very different or even incompatible. How, then, can military physicians reconcile these two codes of ethics and their distinct professional/institutional values, and assume their responsibilities towards both their patients and the military institution? To clarify this situation, and to show how such a reconciliation might be possible, we compared the history and content of two national professional codes of ethics: the Defence Ethics of the Canadian Armed Forces and the Code of Ethics of the Canadian Medical Association. Interestingly, although the medical code focuses more on duties and responsibilities while the military code focuses more on core values and is supported by a comprehensive ethics training program, the two have many elements in common. Further, both are based on the same core values of loyalty and integrity, and both are broad in scope but relatively flexible in application. While there are still important sources of tension between, and limits within, these two codes of ethics, there are fewer differences than may appear at first glance because the core values and principles of military and medical ethics are not so different.
Low-power lead-cooled fast reactor loaded with MOX-fuel
NASA Astrophysics Data System (ADS)
Sitdikov, E. R.; Terekhova, A. M.
2017-01-01
A fast reactor intended for research, for the education of undergraduate and doctoral students in handling innovative fast reactors, and for training specialists for atomic research centers and nuclear power plants (BRUTs) was considered. A hard neutron spectrum is achieved in the fast reactor with a compact core and lead coolant. Prompt-neutron runaway of the reactor is excluded by the low reactivity margin, which is less than the effective fraction of delayed neutrons. The possibility of using MOX fuel in the BRUTs reactor was examined, and the growth in Keff resulting from replacing the natural lead coolant with 208Pb coolant was evaluated. The calculations and the reactor core model were performed using the Serpent Monte Carlo code.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kontogeorgakos, D.; Derstine, K.; Wright, A.
2013-06-01
The purpose of the TREAT reactor is to generate large transient neutron pulses in test samples without overheating the core, to simulate fuel assembly accident conditions. The power transients in the present HEU core are inherently self-limiting, such that the core prevents itself from overheating even in the event of a reactivity insertion accident. The objective of this study was to support the assessment of the feasibility of the TREAT core conversion, based on the present reactor performance metrics and the technical specifications of the HEU core. The LEU fuel assembly studied had the same overall design, materials (UO2 particles finely dispersed in graphite), and impurities content as the HEU fuel assembly. The Monte Carlo N-Particle code (MCNP) and the point kinetics code TREKIN were used in the analyses.
NASA Astrophysics Data System (ADS)
Darmawan, R.
2018-01-01
The nuclear power industry has been facing uncertainties since the occurrence of the unfortunate accident at the Fukushima Daiichi Nuclear Power Plant. The issue of nuclear power plant safety has become the major hindrance in the planning of nuclear power programs for new-build countries. Thus, understanding the behaviour of reactor systems is very important to ensure the continuous development and improvement of reactor safety. Throughout the development of nuclear reactor technology, investigation and analysis of reactor safety have gone through several phases. In the early days, analytical and experimental methods were employed. For the last four decades, 1D system-level codes have been widely used. The continuous development of nuclear reactor technology has brought about more complex systems and processes in nuclear reactor operation. More detailed multidimensional simulation codes are needed to assess these new reactors. Recently, 2D and 3D approaches such as CFD are being explored. This paper discusses a comparative study of two different approaches to CFD modelling of reactor core cooling behaviour.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feltus, M.A.
1987-01-01
Analysis results for a multiple steam generator blowdown caused by an auxiliary feedwater steam-line break, performed with the RETRAN-02 MOD 003 computer code, are presented to demonstrate the capabilities of the RETRAN code to predict system transient response for verifying changes in operational procedures and supporting plant equipment modifications. A typical four-loop Westinghouse pressurized water reactor was modeled using best-estimate versus worst-case licensing assumptions. This paper presents analyses performed to evaluate the necessity of implementing an auxiliary feedwater steam-line isolation modification. RETRAN transient analysis can be used to determine core cooling capability response, departure from nucleate boiling ratio (DNBR) status, and reactor trip signal actuation times.
Preliminary weight and costs of sandwich panels to distribute concentrated loads
NASA Technical Reports Server (NTRS)
Belleman, G.; Mccarty, J. E.
1976-01-01
Minimum-mass honeycomb sandwich panels were sized for transmitting a concentrated load to a uniform reaction through various distances. The face skin gages were fully stressed with a finite element computer code. The general stability of the panels was evaluated with a buckling computer code labeled STAGS-B. Two skin materials were considered: aluminum and graphite-epoxy. The core was constant-thickness aluminum honeycomb. Various panel sizes and load levels were considered. The computer-generated data were generalized to allow preliminary least-mass panel designs for a wide range of panel sizes and load intensities. An assessment of panel fabrication cost was also conducted. Various comparisons between panel mass, panel size, panel loading, and panel cost are presented in both tabular and graphical form.
Toward performance portability of the Albany finite element analysis code using the Kokkos library
DOE Office of Scientific and Technical Information (OSTI.GOV)
Demeshko, Irina; Watkins, Jerry; Tezaur, Irina K.
Performance portability on heterogeneous high-performance computing (HPC) systems is a major challenge faced today by code developers: parallel code needs to be executed correctly as well as with high performance on machines with different architectures, operating systems, and software libraries. The finite element method (FEM) is a popular and flexible method for discretizing partial differential equations arising in a wide variety of scientific, engineering, and industrial applications that require HPC. This paper presents some preliminary results pertaining to our development of a performance portable implementation of the FEM-based Albany code. Performance portability is achieved using the Kokkos library. We present performance results for the Aeras global atmosphere dynamical core module in Albany. Finally, numerical experiments show that our single code implementation gives reasonable performance across three multicore/many-core architectures: NVIDIA Graphics Processing Units (GPUs), Intel Xeon Phis, and multicore CPUs.
XGC developments for a more efficient XGC-GENE code coupling
NASA Astrophysics Data System (ADS)
Dominski, Julien; Hager, Robert; Ku, Seung-Hoe; Chang, Cs
2017-10-01
In the Exascale Computing Program, the High-Fidelity Whole Device Modeling project initially aims at delivering a tightly coupled simulation of plasma neoclassical and turbulence dynamics from the core to the edge of the tokamak. To enable such simulations, the gyrokinetic codes GENE and XGC will be coupled together. Numerical efforts are being made to improve the agreement of the numerical schemes in the coupling region. One of the difficulties of coupling these codes is the incompatibility of their grids: GENE is a continuum grid-based code, while XGC is a Particle-In-Cell code using an unstructured triangular mesh. A field-aligned filter has therefore been implemented in XGC. Even though XGC originally had an approximately field-following mesh, this field-aligned filter yields a perturbation discretization closer to the one solved in the field-aligned code GENE. Additionally, new XGC gyro-averaging matrices are implemented on a velocity grid adapted to the plasma properties, thus ensuring the same accuracy from the core to the edge regions.
Multi-Core Processor Memory Contention Benchmark Analysis Case Study
NASA Technical Reports Server (NTRS)
Simon, Tyler; McGalliard, James
2009-01-01
Multi-core processors dominate current mainframe, server, and high performance computing (HPC) systems. This paper provides synthetic kernel and natural benchmark results from an HPC system at the NASA Goddard Space Flight Center that illustrate the performance impacts of multi-core (dual- and quad-core) vs. single-core processor systems. Analysis of processor design, application source code, and synthetic and natural test results all indicate that multi-core processors can suffer from significant memory subsystem contention compared to similar single-core processors.
Measurement of the complete core plasma flow across the LOC-SOC transition at ASDEX Upgrade
NASA Astrophysics Data System (ADS)
Lebschy, A.; McDermott, R. M.; Angioni, C.; Geiger, B.; Prisiazhniuk, D.; Cavedon, M.; Conway, G. D.; Dux, R.; Dunne, M. G.; Kappatou, A.; Pütterich, T.; Stroth, U.; Viezzer, E.; the ASDEX Upgrade Team
2018-02-01
A newly installed core charge exchange recombination spectroscopy (CXRS) diagnostic at ASDEX Upgrade (AUG) enables the evaluation of the core poloidal rotation (u_pol) through the inboard-outboard asymmetry of the toroidal rotation, with an accuracy of 0.5 to 1 km/s. Using this technique, the total plasma flow has been measured in Ohmic L-mode plasmas across the transition from the linear to saturated ohmic confinement (LOC-SOC) regimes. The core poloidal rotation of the plasma around mid-radius is found to be always in the ion diamagnetic direction, in disagreement with neoclassical (NC) predictions. The edge rotation is found to be electron-directed and consistent with NC codes. This measurement also provides the missing ingredient needed to evaluate the core E×B velocity (u_E×B) from data alone, which can then be compared to measurements of the perpendicular velocity of the turbulent fluctuations (u_⊥) to gain information on the turbulent phase velocity (v_ph). The non-neoclassical u_pol from CXRS leads to good agreement between u_E×B and u_⊥, indicating that v_ph is small and similar in value to that found with gyrokinetic simulations. Moreover, the data show a shift of v_ph in the ion-diamagnetic direction at the edge after the transition from LOC to SOC, consistent with a change in the dominant turbulence regime. The upgrade of the core CXRS system also provides deeper insight into the intrinsic rotation: this paper shows that the reversal of the core toroidal rotation occurs clearly after the LOC-SOC transition and concomitant with the peaking of the electron density.
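For context, the step from a measured u_pol to u_E×B goes through the radial force balance of the measured ion species. In one common convention (sign conventions differ between devices and papers, so this generic form is an illustration rather than the exact expression used at AUG):

$$E_r \;=\; \frac{1}{Z_i e\, n_i}\,\frac{\partial p_i}{\partial r} \;-\; u_{\theta,i}\, B_\phi \;+\; u_{\phi,i}\, B_\theta, \qquad u_{E\times B} \;=\; \frac{E_r}{B},$$

so once the ion pressure profile, the toroidal rotation, and the poloidal rotation are measured, E_r and hence the E×B velocity follow from data alone.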
A Dosimetry Assessment for the Core Restraint of an Advanced Gas Cooled Reactor
NASA Astrophysics Data System (ADS)
Thornton, D. A.; Allen, D. A.; Tyrrell, R. J.; Meese, T. C.; Huggon, A. P.; Whiley, G. S.; Mossop, J. R.
2009-08-01
This paper describes calculations of neutron damage rates within the core restraint structures of Advanced Gas Cooled Reactors (AGRs). Using advanced features of the Monte Carlo radiation transport code MCBEND, and neutron source data from core follow calculations performed with the reactor physics code PANTHER, a detailed model of the reactor cores of two of British Energy's AGR power plants has been developed for this purpose. Because there are no relevant neutron fluence measurements directly supporting this assessment, results of benchmark comparisons and successful validation of MCBEND for Magnox reactors have been used to estimate systematic and random uncertainties on the predictions. In particular, it has been necessary to address the known under-prediction of lower energy fast neutron responses associated with the penetration of large thicknesses of graphite.
WOMBAT: A Scalable and High-performance Astrophysical Magnetohydrodynamics Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendygral, P. J.; Radcliffe, N.; Kandalla, K.
2017-02-01
We present a new code for astrophysical magnetohydrodynamics specifically designed and optimized for high performance and scaling on modern and future supercomputers. We describe a novel hybrid OpenMP/MPI programming model that emerged from a collaboration between Cray, Inc. and the University of Minnesota. This design utilizes MPI-RMA optimized for thread scaling, which allows the code to run extremely efficiently at very high thread counts, ideal for the latest generation of multi-core and many-core architectures. Such performance characteristics are needed in the era of “exascale” computing. We describe and demonstrate our high-performance design in detail, with the intent that it may be used as a model for other, future astrophysical codes intended for applications demanding exceptional performance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
D'Azevedo, Ed F; Nintcheu Fata, Sylvain
2012-01-01
A collocation boundary element code for solving the three-dimensional Laplace equation, publicly available from http://www.intetec.org, has been adapted to run on an Nvidia Tesla general purpose graphics processing unit (GPU). Global matrix assembly and LU factorization of the resulting dense matrix were performed on the GPU. Out-of-core techniques were used to solve problems larger than the available GPU memory. The code achieved over eight times speedup in matrix assembly and about 56 Gflops/sec in the LU factorization using only 512 Mbytes of GPU memory. Details of the GPU implementation and comparisons with the standard sequential algorithm are included to illustrate the performance of the GPU code.
Present Status and Extensions of the Monte Carlo Performance Benchmark
NASA Astrophysics Data System (ADS)
Hoogenboom, J. Eduard; Petrovic, Bojan; Martin, William R.
2014-06-01
The NEA Monte Carlo Performance benchmark started in 2011, aiming to monitor over the years the ability to perform a full-size Monte Carlo reactor core calculation with a detailed power production for each fuel pin with axial distribution. This paper gives an overview of the results contributed thus far. It shows that reaching a statistical accuracy of 1% for most of the small fuel zones requires about 100 billion neutron histories. The efficiency of parallel execution of Monte Carlo codes on large numbers of processor cores shows clear limitations for computer clusters with commodity compute nodes; on true supercomputers, however, the speedup of parallel calculations continues to increase up to very large numbers of processor cores. More experience is needed with calculations on true supercomputers using large numbers of processors in order to predict whether the requested calculations can be done in a short time. As the specification of the reactor geometry for this benchmark is well suited to further investigations of full-core Monte Carlo calculations, and a need is felt to test issues other than computational performance, proposals are presented for extending the benchmark to a suite of benchmark problems: evaluating fission source convergence for a system with a high dominance ratio, coupling with thermal-hydraulics calculations to evaluate the use of different temperatures and coolant densities, and studying the correctness and effectiveness of burnup calculations. Moreover, other contemporary proposals for a full-core calculation with realistic geometry and material composition will be discussed.
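The 100-billion-history figure is consistent with simple counting statistics. As a rough order-of-magnitude reading (our illustration, not the benchmark's own derivation), a tally that receives $n$ scores has a relative standard deviation of about

$$\sigma_{\mathrm{rel}} \approx \frac{1}{\sqrt{n}},$$

so 1% accuracy needs roughly $n \approx 10^4$ scores per zone; with on the order of $10^6$ to $10^7$ pin-axial fuel zones sharing the fission events of a full core, total history counts of order $10^{10}$ to $10^{11}$ follow.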
Efficient implementation of core-excitation Bethe-Salpeter equation calculations
NASA Astrophysics Data System (ADS)
Gilmore, K.; Vinson, John; Shirley, E. L.; Prendergast, D.; Pemmaraju, C. D.; Kas, J. J.; Vila, F. D.; Rehr, J. J.
2015-12-01
We present an efficient implementation of the Bethe-Salpeter equation (BSE) method for obtaining core-level spectra including X-ray absorption (XAS), X-ray emission (XES), and both resonant and non-resonant inelastic X-ray scattering spectra (N/RIXS). Calculations are based on density functional theory (DFT) electronic structures generated either by ABINIT or QuantumESPRESSO, both plane-wave basis, pseudopotential codes. This electronic structure is improved through the inclusion of a GW self energy. The projector augmented wave technique is used to evaluate transition matrix elements between core-level and band states. Final two-particle scattering states are obtained with the NIST core-level BSE solver (NBSE). We have previously reported this implementation, which we refer to as OCEAN (Obtaining Core Excitations from Ab initio electronic structure and NBSE) (Vinson et al., 2011). Here, we present additional efficiencies that enable us to evaluate spectra for systems ten times larger than previously possible; containing up to a few thousand electrons. These improvements include the implementation of optimal basis functions that reduce the cost of the initial DFT calculations, more complete parallelization of the screening calculation and of the action of the BSE Hamiltonian, and various memory reductions. Scaling is demonstrated on supercells of SrTiO3 and example spectra for the organic light emitting molecule Tris-(8-hydroxyquinoline)aluminum (Alq3) are presented. The ability to perform large-scale spectral calculations is particularly advantageous for investigating dilute or non-periodic systems such as doped materials, amorphous systems, or complex nano-structures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romander, C. M.; Cagliostro, D. J.
Five experiments were performed to help evaluate the structural integrity of the reactor vessel and head design and to verify code predictions. In the first experiment (SM 1), a detailed model of the head was loaded statically to determine its stiffness. In the remaining four experiments (SM 2 to SM 5), models of the vessel and head were loaded dynamically under a simulated 661 MW-sec hypothetical core disruptive accident (HCDA). Models SM 2 to SM 4, each of increasing complexity, systematically showed the effects of upper internals structures, a thermal liner, core support platform, and torospherical bottom on vessel response. Model SM 5, identical to SM 4 but more heavily instrumented, demonstrated experimental reproducibility and provided more comprehensive data. The models consisted of a Ni 200 vessel and core barrel, a head with shielding and simulated component masses, an upper internals structure (UIS), and, in the more complex models SM 4 and SM 5, a Ni 200 thermal liner and core support structure. Water simulated the liquid sodium coolant and a low-density explosive simulated the HCDA loads.
Dependence of core heating properties on heating pulse duration and intensity
NASA Astrophysics Data System (ADS)
Johzaki, Tomoyuki; Nagatomo, Hideo; Sunahara, Atsushi; Cai, Hongbo; Sakagami, Hitoshi; Mima, Kunioki
2009-11-01
In cone-guided fast ignition, an imploded core is heated by the energy transport of fast electrons generated by an ultra-intense short-pulse laser at the cone inner surface. Fast core heating (~800 eV) has been demonstrated in integrated experiments with the GEKKO-XII + PW laser systems. As the next step, experiments using a more powerful heating laser, FIREX, have started at ILE, Osaka University. In FIREX-I (phase I of FIREX), the goal is the demonstration of efficient core heating (Ti ~ 5 keV) using the newly developed 10 kJ LFEX laser. In the first integrated experiments, the LFEX laser was operated in a low-energy mode (~0.5 kJ/4 ps) to validate the previous GEKKO + PW experiments. Between the two experiments, although the laser energy is similar (~0.5 kJ), the pulse duration is different: ~0.5 ps for the PW laser and ~4 ps for the LFEX laser. In this paper, we evaluate the dependence of the core heating properties on the heating pulse duration on the basis of integrated simulations with the FI^3 (Fast Ignition Integrated Interconnecting) code system.
Santa Barbara Cluster Comparison Test with DISPH
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saitoh, Takayuki R.; Makino, Junichiro, E-mail: saitoh@elsi.jp
2016-06-01
The Santa Barbara cluster comparison project revealed that there is a systematic difference between entropy profiles of clusters of galaxies obtained by Eulerian mesh and Lagrangian smoothed particle hydrodynamics (SPH) codes: mesh codes gave a core with a constant entropy, whereas SPH codes did not. One possible reason for this difference is that mesh codes are not Galilean invariant. Another possible reason is the problem of the SPH method, which might give too much “protection” to cold clumps because of the unphysical surface tension induced at contact discontinuities. In this paper, we apply the density-independent formulation of SPH (DISPH), which can handle contact discontinuities accurately, to simulations of a cluster of galaxies and compare the results with those with the standard SPH. We obtained the entropy core when we adopt DISPH. The size of the core is, however, significantly smaller than those obtained with mesh simulations and is comparable to those obtained with quasi-Lagrangian schemes such as “moving mesh” and “mesh free” schemes. We conclude that both the standard SPH without artificial conductivity and Eulerian mesh codes have serious problems even with such an idealized simulation, while DISPH, SPH with artificial conductivity, and quasi-Lagrangian schemes have sufficient capability to deal with it.
ERIC Educational Resources Information Center
Yamamoto, Kentaro; He, Qiwei; Shin, Hyo Jeong; von Davier, Mattias
2017-01-01
Approximately a third of the Programme for International Student Assessment (PISA) items in the core domains (math, reading, and science) are constructed-response items and require human coding (scoring). This process is time-consuming, expensive, and prone to error as often (a) humans code inconsistently, and (b) coding reliability in…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moridis, George J.
TOUGH+ v1.5 is a numerical code for the simulation of multi-phase, multi-component flow and transport of mass and heat through porous and fractured media, and represents the third update of the code since its first release [Moridis et al., 2008]. TOUGH+ is a successor to the TOUGH2 [Pruess et al., 1991; 2012] family of codes for multi-component, multiphase fluid and heat flow developed at the Lawrence Berkeley National Laboratory. It is written in standard FORTRAN 95/2003, and can be run on any computational platform (workstations, PC, Macintosh). TOUGH+ v1.5 employs dynamic memory allocation, thus minimizing storage requirements. It has a completely modular structure, follows the tenets of Object-Oriented Programming (OOP), and involves the advanced features of FORTRAN 95/2003, i.e., modules, derived data types, the use of pointers, lists and trees, data encapsulation, defined operators and assignments, operator extension and overloading, use of generic procedures, and maximum use of the powerful intrinsic vector and matrix processing operations. TOUGH+ v1.5 is the core code for its family of applications, i.e., the part of the code that is common to all its applications. It provides a description of the underlying physics and thermodynamics of non-isothermal flow, of the mathematical and numerical approaches, as well as a detailed explanation of the general (common to all applications) input requirements, options, capabilities and output specifications. The core code cannot run by itself: it needs to be coupled with the code for the specific TOUGH+ application option that describes a particular type of problem. The additional input requirements specific to particular TOUGH+ application options and related illustrative examples can be found in the corresponding User's Manual.
OCTGRAV: Sparse Octree Gravitational N-body Code on Graphics Processing Units
NASA Astrophysics Data System (ADS)
Gaburov, Evghenii; Bédorf, Jeroen; Portegies Zwart, Simon
2010-10-01
Octgrav is a very fast tree-code which runs on massively parallel Graphical Processing Units (GPU) with NVIDIA CUDA architecture. The algorithms are based on parallel-scan and sort methods. The tree construction and calculation of multipole moments is carried out on the host CPU, while the force calculation, which consists of tree walks and evaluation of interaction lists, is carried out on the GPU. In this way, a sustained performance of about 100 GFLOP/s and data transfer rates of about 50 GB/s are achieved. It takes about a second to compute forces on a million particles with an opening angle of θ ≈ 0.5. To test the performance and feasibility, we implemented the algorithms in CUDA in the form of a gravitational tree-code which completely runs on the GPU. The tree construction and traverse algorithms are portable to many-core devices which have support for CUDA or OpenCL programming languages. The gravitational tree-code outperforms tuned CPU code during the tree construction and shows a performance improvement of more than a factor of 20 overall, resulting in a processing rate of more than 2.8 million particles per second. The code has a convenient user interface and is freely available for use.
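The opening-angle test mentioned above is the heart of any Barnes-Hut-style tree walk; a minimal Python sketch follows (an illustration of the general technique, not Octgrav's CUDA implementation).

```python
# Barnes-Hut opening-angle criterion: a distant tree node is treated as a
# single multipole when it subtends a small enough angle from the target point.
import numpy as np

def accept_node(node_size, node_com, point, theta=0.5):
    """Accept the node's multipole if size/distance < theta, else open it."""
    d = np.linalg.norm(node_com - point)
    return node_size / d < theta

# A node of side 1.0 whose centre of mass is 3 units away passes at theta=0.5,
# so all of its particles are replaced by one multipole interaction.
print(accept_node(1.0, np.array([3.0, 0.0, 0.0]), np.zeros(3)))
```

Smaller theta opens more nodes, trading speed for accuracy; theta around 0.5, as quoted above, is a common compromise.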
Application of Advanced Multi-Core Processor Technologies to Oceanographic Research
2014-09-30
Jordan Stanway are taking on the work of analyzing their code, and we are working on the Robot Operating System (ROS) and MOOS-DB systems to evaluate … Linux/GNU operating system that should reduce the time required to build the kernel and userspace significantly. This part of the work is vital to … the platform to be used not only as a service, but also as a private deployable package. As much as possible, this system was built using operating
Impact of thorium based molten salt reactor on the closure of the nuclear fuel cycle
NASA Astrophysics Data System (ADS)
Jaradat, Safwan Qasim Mohammad
The molten salt reactor (MSR) is one of the six reactor concepts selected by the Generation IV International Forum (GIF). The liquid fluoride thorium reactor (LFTR) is an MSR concept based on the thorium fuel cycle. The LFTR uses liquid fluoride salts as the nuclear fuel, with 232Th and 233U as the fertile and fissile materials, respectively. Fluoride salts of these nuclides are dissolved in a mixed carrier salt of lithium and beryllium (FLiBe). The objective of this research was to complete feasibility studies of a small commercial thermal LFTR. The focus was on neutronic calculations in order to prescribe core design parameters such as core size, fuel block pitch (p), fuel channel radius, fuel path, reflector thickness, fuel salt composition, and power. To achieve this objective, the applicability of the Monte Carlo N-Particle Transport Code (MCNP) to MSR modeling was first verified. A conceptual small thermal LFTR was then prescribed, and the relevant calculations were performed using MCNP to determine the main neutronic parameters of the reactor core. The MCNP code was used to study the reactor physics characteristics of the FUJI-U3 reactor, and the results were compared with those obtained for the original FUJI-U3 using the reactor physics code SRAC95 and the burnup analysis code ORIPHY2; the two sets of results were in good agreement. Based on these results, MCNP was found to be a reliable code for modeling a small thermal LFTR and studying all the related reactor physics characteristics. The results of this study were promising and successful in demonstrating a preliminary small commercial LFTR design. The outcome of using a small reactor core with a diameter/height of 280/260 cm that would operate for more than five years at a power level of 150 MWth was studied. The fuel system 7LiF - BeF2 - ThF4 - UF4 with (233U/232Th) = 2.01% was the candidate fuel for this reactor core.
Royo-Bordonada, M Á; León-Flández, K; Damián, J; Bosqued-Estefanía, M J; Moya-Geromini, M Á; López-Jurado, L
2016-08-01
To examine the extent and nature of food television advertising directed at children in Spain using an international food-based system and the United Kingdom nutrient profile model (UKNPM). A cross-sectional study of advertisements for food and drinks shown on five television channels over 7 days in 2012 (8am-midnight) was conducted. The showing time and duration of each advertisement were recorded. Advertisements were classified as core (nutrient-rich/calorie-low products), non-core, or miscellaneous based on the international system, and as either healthy or less healthy, i.e., high in saturated fats, trans-fatty acids, salt, or free sugars (HFSS), according to the UKNPM. The food industry accounted for 23.7% of the advertisements (4212 out of 17,722), with 7.5 advertisements per hour of broadcasting. The international food-based coding system classified 60.2% of adverts as non-core, and the UKNPM classified 64.0% as HFSS. Overall, 31.5% of core, 86.8% of non-core, and 8.3% of miscellaneous advertisements were for HFSS products. The percentage of advertisements for HFSS products was higher during reinforced protected viewing times (69.0%), on weekends (71.1%), on channels of particular appeal to children and teenagers (67.8%), and on broadcasts regulated by the Spanish Code of self-regulation of the advertising of food products directed at children (70.7%). Both schemes identified that a majority of foods advertised were unhealthy, although some classification differences between the two systems are important to consider. The food advertising Code is not limiting Spanish children's exposure to advertisements for HFSS products, which were more frequent on Code-regulated broadcasts and during reinforced protected viewing time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burns, T.D. Jr.
1996-05-01
The Monte Carlo Model System (MCMS) for the Washington State University (WSU) Radiation Center provides a means through which core criticality and power distributions can be calculated, as well as a method for the neutron and photon transport necessary for BNCT epithermal neutron beam design. The computational code used in this Model System is MCNP4A. The geometric capability of this Monte Carlo code allows the WSU system to be modeled very accurately. A working knowledge of the MCNP4A neutron transport code increases the flexibility of the Model System and is recommended; however, the eigenvalue/power density problems can be run with little direct knowledge of MCNP4A. Neutron and photon particle transport require more experience with the MCNP4A code. The Model System consists of two coupled subsystems: the Core Analysis and Source Plane Generator Model (CASP), and the BeamPort Shell Particle Transport Model (BSPT). The CASP Model incorporates the S(α,β) thermal treatment, and is run as a criticality problem yielding the system eigenvalue (keff), the core power distribution, and an implicit surface source for subsequent particle transport in the BSPT Model. The BSPT Model uses the source plane generated by a CASP run to transport particles through the thermal column beamport. The user can create filter arrangements in the beamport and then calculate the characteristics necessary for assessing the BNCT potential of a given filter arrangement. Examples of the characteristics to be calculated are: neutron fluxes, neutron currents, fast neutron KERMAs, and gamma KERMAs. The MCMS is a useful tool for the WSU system. Those unfamiliar with the MCNP4A code can use the MCMS transparently for core analysis, while more experienced users will find the particle transport capabilities very powerful for BNCT filter design.
MCNP-model for the OAEP Thai Research Reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gallmeier, F.X.; Tang, J.S.; Primm, R.T. III
An MCNP input was prepared for the Thai Research Reactor, making extensive use of the MCNP geometry's lattice feature, which allows a flexible and easy rearrangement of the core components and the adjustment of the control elements. The geometry was checked for overdefined or undefined zones by two-dimensional plots of cuts through the core configuration with the MCNP geometry plotting capabilities, and by a three-dimensional view of the core configuration with the SABRINA code. Cross sections were defined for a hypothetical core of 67 standard fuel elements and 38 low-enriched uranium fuel elements, all filled with fresh fuel. Three test calculations were performed with the MCNP4B code to obtain the multiplication factor for the cases with control elements fully inserted, fully withdrawn, and at a working position.
NASA Technical Reports Server (NTRS)
Pao, J. L.; Mehrotra, S. C.; Lan, C. E.
1982-01-01
A computer code based on an improved vortex filament/vortex core method for predicting the aerodynamic characteristics of slender wings with edge vortex separations is developed. The code is applicable to cambered wings, straked wings, or wings with leading-edge vortex flaps at subsonic speeds. The prediction of the lifting pressure distribution and the computation time are improved by using a pair of concentrated vortex cores above the wing surface. The main features of this computer program are: (1) an arbitrary camber shape may be defined, and an option for exactly defining the leading-edge flap geometry is also provided; (2) the side-edge vortex system is incorporated.
VENTURE/PC manual: A multidimensional multigroup neutron diffusion code system. Version 3
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shapiro, A.; Huria, H.C.; Cho, K.W.
1991-12-01
VENTURE/PC is a recompilation of part of the Oak Ridge BOLD VENTURE code system, which will operate on an IBM PC or compatible computer. Neutron diffusion theory solutions are obtained for multidimensional, multigroup problems. This manual contains information associated with operating the code system. The purpose of the various modules used in the code system, and the input for these modules, are discussed. The PC code structure is also given. Version 2 included several enhancements not given in the original version of the code. In particular, flux iterations can be done in core rather than by reading and writing to disk, for problems which allow sufficient memory for such in-core iterations. This speeds up the iteration process. Version 3 does not include any of the special processors used in the previous versions. These special processors utilized formatted input for various elements of the code system. All such input data is now entered through the Input Processor, which produces standard interface files for the various modules in the code system. In addition, a Standard Interface File Handbook is included in the documentation which is distributed with the code, to assist in developing the input for the Input Processor.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rose, S.D.; Dearing, J.F.
An understanding of the conditions that may cause sodium boiling and boiling propagation, which may lead to dryout and fuel failure, is crucial to liquid-metal fast-breeder reactor safety. In this study, the SABRE-2P subchannel analysis code has been used to analyze the ultimate transient of the in-core W-1 Sodium Loop Safety Facility experiment. The code has a simple 3-D nondynamic boiling model which is able to predict the flow instability that caused dryout. In other analyses, dryout has been predicted for out-of-core test bundles, so this study provides additional confirmation of the model.
Responsive Image Inline Filter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freeman, Ian
2016-10-20
RIIF is a contributed module for the Drupal PHP web application framework (drupal.org). It is written as a helper or sub-module of other code which is part of version 8 "core Drupal" and is intended to extend its functionality. It allows Drupal to resize images uploaded through the user-facing text editor within the Drupal GUI (a.k.a. "inline images") for various browser widths. This resizing is already done for other images through the parent "Responsive Image" core module. This code extends that functionality to inline images.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anthony, Stephen
The Sandia hyperspectral upper-bound spectrum algorithm (hyper-UBS) is a cosmic ray despiking algorithm for hyperspectral data sets. When naturally occurring, high-energy (gigaelectronvolt) cosmic rays impact the earth’s atmosphere, they create an avalanche of secondary particles which will register as a large, positive spike on any spectroscopic detector they hit. Cosmic ray spikes are therefore an unavoidable spectroscopic contaminant which can interfere with subsequent analysis. A variety of cosmic ray despiking algorithms already exist and can potentially be applied to hyperspectral data matrices, most notably the upper-bound spectrum data matrices (UBS-DM) algorithm by Dongmao Zhang and Dor Ben-Amotz which served as the basis for the hyper-UBS algorithm. However, the existing algorithms either cannot be applied to hyperspectral data, require information that is not always available, introduce undesired spectral bias, or have otherwise limited effectiveness for some experimentally relevant conditions. Hyper-UBS is more effective at removing a wider variety of cosmic ray spikes from hyperspectral data without introducing undesired spectral bias. In addition to the core algorithm, the Sandia hyper-UBS software package includes additional source code useful in evaluating the effectiveness of the hyper-UBS algorithm. The accompanying source code includes code to generate simulated hyperspectral data contaminated by cosmic ray spikes, several existing despiking algorithms, and code to evaluate the performance of the despiking algorithms on simulated data.
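To illustrate the class of problem (and only that; the sketch below is a simple median/MAD despiker, not the Sandia hyper-UBS algorithm), cosmic-ray spikes can be flagged as large positive outliers across repeated acquisitions of the same spectrum:

```python
# Simple positive-outlier despiker for repeated spectra; an illustrative
# stand-in for the general task, NOT the Sandia hyper-UBS algorithm.
import numpy as np

def despike(spectra, k=5.0):
    """spectra: (n_acquisitions, n_pixels) array. Returns a despiked copy."""
    med = np.median(spectra, axis=0)
    mad = np.median(np.abs(spectra - med), axis=0) + 1e-12
    spikes = (spectra - med) > k * 1.4826 * mad    # positive-only, like cosmic rays
    out = spectra.copy()
    out[spikes] = np.broadcast_to(med, spectra.shape)[spikes]
    return out

rng = np.random.default_rng(1)
data = rng.normal(100, 1, size=(8, 512))
data[3, 200] += 500.0                              # inject one cosmic-ray spike
print(despike(data)[3, 200].round(1))              # back near the 100-count baseline
```

Real despikers (hyper-UBS included) must do better than this in the cases the abstract lists, e.g., when few acquisitions are available or when the signal itself varies between them.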
NASA Astrophysics Data System (ADS)
Schreier, Franz; Gimeno García, Sebastián; Hedelt, Pascal; Hess, Michael; Mendrok, Jana; Vasquez, Mayte; Xu, Jian
2014-04-01
A suite of programs for high resolution infrared-microwave atmospheric radiative transfer modeling has been developed with emphasis on efficient and reliable numerical algorithms and a modular approach appropriate for simulation and/or retrieval in a variety of applications. The Generic Atmospheric Radiation Line-by-line Infrared Code - GARLIC - is suitable for arbitrary observation geometry, instrumental field-of-view, and line shape. The core of GARLIC's subroutines constitutes the basis of forward models used to implement inversion codes to retrieve atmospheric state parameters from limb and nadir sounding instruments. This paper briefly introduces the physical and mathematical basics of GARLIC and its descendants and continues with an in-depth presentation of various implementation aspects: An optimized Voigt function algorithm combined with a two-grid approach is used to accelerate the line-by-line modeling of molecular cross sections; various quadrature methods are implemented to evaluate the Schwarzschild and Beer integrals; and Jacobians, i.e. derivatives with respect to the unknowns of the atmospheric inverse problem, are implemented by means of automatic differentiation. For an assessment of GARLIC's performance, a comparison of the quadrature methods for solution of the path integral is provided. Verification and validation are demonstrated using intercomparisons with other line-by-line codes and comparisons of synthetic spectra with spectra observed on Earth and from Venus.
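As a concrete picture of the Beer integral mentioned above, here is a minimal sketch of line-of-sight transmission computed by trapezoid quadrature (a toy absorption profile; GARLIC itself offers several quadrature methods and builds the absorption coefficient line-by-line from molecular cross sections):

```python
# Beer-Lambert transmission along a path: T = exp(-tau), tau = integral of the
# absorption coefficient over the line of sight (toy profile, trapezoid rule).
import numpy as np

s = np.linspace(0.0, 20e3, 201)                     # path coordinate [m]
k = 1e-4 * np.exp(-s / 8e3)                         # absorption coeff [1/m], toy
tau = np.sum(0.5 * (k[1:] + k[:-1]) * np.diff(s))   # trapezoid quadrature
print("optical depth:", round(float(tau), 3), " transmission:", round(float(np.exp(-tau)), 4))
```

The choice of quadrature matters because the integrand can vary by orders of magnitude along the path, which is why the paper compares several methods.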
Audigé, Laurent; Cornelius, Carl-Peter; Di Ieva, Antonio; Prein, Joachim
2014-12-01
Validated trauma classification systems are the sole means to provide the basis for reliable documentation and evaluation of patient care, which will open the gateway to evidence-based procedures and healthcare in the coming years. With the support of AO Investigation and Documentation, a classification group was established to develop and evaluate a comprehensive classification system for craniomaxillofacial (CMF) fractures. Blueprints for fracture classification in the major constituents of the human skull were drafted and then evaluated by a multispecialty group of experienced CMF surgeons and a radiologist in a structured process during iterative agreement sessions. At each session, surgeons independently classified the radiological imaging of up to 150 consecutive cases with CMF fractures. During subsequent review meetings, all discrepancies in the classification outcome were critically appraised for clarification and improvement until consensus was reached. The resulting CMF classification system is structured in a hierarchical fashion with three levels of increasing complexity. The most elementary level 1 simply distinguishes four fracture locations within the skull: mandible (code 91), midface (code 92), skull base (code 93), and cranial vault (code 94). Levels 2 and 3 focus on further defining the fracture locations and for fracture morphology, achieving an almost individual mapping of the fracture pattern. This introductory article describes the rationale for the comprehensive AO CMF classification system, discusses the methodological framework, and provides insight into the experiences and interactions during the evaluation process within the core groups. The details of this system in terms of anatomy and levels are presented in a series of focused tutorials illustrated with case examples in this special issue of the Journal.
Evaluation of RAPID for a UNF cask benchmark problem
NASA Astrophysics Data System (ADS)
Mascolino, Valerio; Haghighat, Alireza; Roskoff, Nathan J.
2017-09-01
This paper examines the accuracy and performance of the RAPID (Real-time Analysis for Particle transport and In-situ Detection) code system for the simulation of a used nuclear fuel (UNF) cask. RAPID is capable of determining the eigenvalue, the subcritical multiplication, and the pin-wise, axially dependent fission density throughout a UNF cask. We study the source convergence through an analysis of the different parameters used in an eigenvalue calculation in the MCNP Monte Carlo code. For this study, we consider a single assembly surrounded by absorbing plates with reflective boundary conditions. Based on the best combination of eigenvalue parameters, a reference MCNP solution for the single assembly is obtained. RAPID results are in excellent agreement with the reference MCNP solutions, while requiring significantly less computation time (i.e., minutes vs. days). A similar set of eigenvalue parameters is used to obtain a reference MCNP solution for the whole UNF cask. Because of time limitations, the MCNP results near the cask boundaries have significant uncertainties. Except for these, the RAPID results are in excellent agreement with the MCNP predictions, and its computation time is significantly lower: 35 seconds on 1 core versus 9.5 days on 16 cores.
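The speedup reported above is characteristic of fission-matrix-style methods, in which transport-derived coefficients are precomputed once and the eigenvalue problem is then solved algebraically. The toy Python power iteration below illustrates the idea (a generic sketch with made-up coefficients, not RAPID's actual data or algorithm):

```python
# Power iteration on a tiny 3-region fission matrix: A[i, j] is the expected
# number of fission neutrons born in region i per fission neutron born in
# region j. The dominant eigenvalue approximates k-eff; the eigenvector is
# the converged fission-source distribution.
import numpy as np

A = np.array([[0.90, 0.30, 0.05],
              [0.30, 0.80, 0.30],
              [0.05, 0.30, 0.90]])
s = np.ones(3) / 3                   # initial fission-source guess
for _ in range(200):
    s_new = A @ s
    k = s_new.sum() / s.sum()        # eigenvalue estimate for this iteration
    s = s_new / s_new.sum()          # renormalize the source shape
print("k-eff ≈", round(float(k), 4), " source shape:", np.round(s, 3))
```

Once the matrix is in hand, the solve takes milliseconds; the expensive transport work is amortized across every configuration the precomputed coefficients cover.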
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, X. G.; Kim, Y. S.; Choi, K. Y.
2012-07-01
A station blackout (SBO) experiment named SBO-01 was performed at the full-pressure integral effect test (IET) facility ATLAS (Advanced Test Loop for Accident Simulation), which is scaled down from the APR1400 (Advanced Power Reactor 1400 MWe). In this study, the transient of SBO-01 is discussed and subdivided into three phases: the SG fluid loss phase, the RCS fluid loss phase, and the core coolant depletion and core heat-up phase. In addition, the typical phenomena in the SBO-01 test (SG dryout, natural circulation, core coolant boiling, the PRZ becoming full, core heat-up) are identified. Furthermore, the SBO-01 test is reproduced by a MARS code calculation with an ATLAS model which represents the ATLAS test facility. The experimental and calculated transients are then compared and discussed. The comparison reveals malfunctions of equipment: SG leakage through an SG MSSV and a measurement error of the loop flow meter. As the ATLAS model is validated against the experimental results, it can be further employed to investigate other possible SBO scenarios and to study the scaling distortions in the ATLAS. (authors)
Optimizing zonal advection of the Advanced Research WRF (ARW) dynamics for Intel MIC
NASA Astrophysics Data System (ADS)
Mielikainen, Jarno; Huang, Bormin; Huang, Allen H.
2014-10-01
The Weather Research and Forecasting (WRF) model is the most widely used community weather forecast and research model in the world. There are two distinct varieties of WRF. The Advanced Research WRF (ARW) is an experimental, advanced research version featuring very high resolution. The WRF Nonhydrostatic Mesoscale Model (WRF-NMM) has been designed for forecasting operations. WRF consists of dynamics code and several physics modules. The WRF-ARW core is based on an Eulerian solver for the fully compressible nonhydrostatic equations. In this paper, we use the Intel Many Integrated Core (MIC) architecture to substantially increase the performance of a zonal advection subroutine, one of the most time-consuming routines in the ARW dynamics core. Advection advances the explicit perturbation horizontal momentum equations by adding in the large-timestep tendency along with the small-timestep pressure gradient tendency. We describe the challenges we met during the development of a high-speed dynamics subroutine for the MIC architecture, and we discuss lessons learned from the code optimization process. The results show that the optimizations improved the performance of the original code on a Xeon Phi 5110P by a factor of 2.4x.
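For readers outside the field, the kernel being optimized is a finite-difference advection update; the toy 1-D upwind step below shows the kind of contiguous, vectorizable array operation that wide-SIMD hardware such as the Xeon Phi rewards (an illustration only, not the WRF-ARW code):

```python
# Toy 1-D zonal (x-direction) upwind advection step, written as whole-array
# operations the way one restructures loops for wide-SIMD hardware.
import numpy as np

def upwind_step(q, u, dx, dt):
    """Advance q by one first-order upwind step with constant speed u > 0."""
    qm = np.roll(q, 1)                 # q[i-1] with periodic wrap, contiguous op
    return q - u * dt / dx * (q - qm)

q = np.zeros(64)
q[10:20] = 1.0                         # square pulse
for _ in range(100):
    q = upwind_step(q, u=1.0, dx=1.0, dt=0.5)   # CFL number 0.5, stable
print(round(float(q.sum()), 6))        # total mass preserved on the periodic domain
```

The production code is far more involved (3-D, higher-order, with tendencies combined across time levels), but the optimization story is the same: keep the innermost index contiguous and let the compiler vectorize it.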
Fundamental approaches for analysis thermal hydraulic parameter for Puspati Research Reactor
NASA Astrophysics Data System (ADS)
Hashim, Zaredah; Lanyau, Tonny Anak; Farid, Mohamad Fairus Abdul; Kassim, Mohammad Suhaimi; Azhar, Noraishah Syahirah
2016-01-01
The 1-MW PUSPATI Research Reactor (RTP) is the only nuclear pool-type research reactor in Malaysia; it was developed by General Atomics (GA), installed at the Malaysian Nuclear Agency, and reached first criticality on 8 June 1982. Based on the initial core, which comprised 80 standard TRIGA fuel elements, a fundamental thermal-hydraulic model was investigated for steady-state operation using the PARET code. The main objective of this paper is to determine the variation of the temperature profiles and the departure from nucleate boiling ratio (DNBR) of the RTP at full power operation. The second objective is to confirm that the values obtained from the PARET code are in agreement with the Safety Analysis Report (SAR) for the RTP. The code was employed for the hot and average channels in the core in order to calculate the fuel center and surface, cladding, and coolant temperatures, as well as the DNBR values. In this study, it was found that the PARET code results showed that the safety-related thermal-hydraulic parameters of the initial core, which was cooled by natural convection, were in agreement with the design values and safety limits in the SAR.
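The safety figure of merit here is simple to state: DNBR is the ratio of the local critical heat flux to the local heat flux, and its minimum along the hot channel is what must stay above the safety limit. A toy Python illustration with made-up profiles follows (hypothetical numbers; the correlations are stand-ins, not PARET's models or RTP data):

```python
# DNBR = q_CHF / q_local along the channel; the minimum is the safety margin.
import numpy as np

z = np.linspace(0.0, 0.38, 20)                # axial coordinate [m], assumed height
q_local = 4e5 * np.sin(np.pi * z / 0.38)      # chopped-cosine heat flux [W/m^2], toy
q_chf = 1.5e6 * (1.0 - 0.8 * z / 0.38)        # toy critical-heat-flux profile [W/m^2]
dnbr = q_chf / np.maximum(q_local, 1.0)       # guard against division by zero at z=0
print("minimum DNBR =", round(float(dnbr.min()), 2))
```

Note that the minimum DNBR need not sit at the peak heat flux: the critical heat flux also falls along the channel as the coolant heats up, which is why codes like PARET evaluate the full axial profile.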
Fuel Performance Calculations for FeCrAl Cladding in BWRs
DOE Office of Scientific and Technical Information (OSTI.GOV)
George, Nathan; Sweet, Ryan; Maldonado, G. Ivan
2015-01-01
This study expands upon previous neutronics analyses of the reactivity impact of alternate cladding concepts in boiling water reactor (BWR) cores and directs focus toward contrasting fuel performance characteristics of FeCrAl cladding against those of traditional Zircaloy. Using neutronics results from a modern version of the 3D nodal simulator NESTLE, linear power histories were generated and supplied to the BISON-CASL code for fuel performance evaluations. BISON-CASL (formerly Peregrine) expands on material libraries implemented in the BISON fuel performance code and the MOOSE framework by providing proprietary material data. By creating material libraries for Zircaloy and FeCrAl cladding, the thermomechanical behavior of the fuel rod (e.g., strains, centerline fuel temperature, and time to gap closure) was investigated and contrasted.
Efficient molecular dynamics simulations with many-body potentials on graphics processing units
NASA Astrophysics Data System (ADS)
Fan, Zheyong; Chen, Wei; Vierimaa, Ville; Harju, Ari
2017-09-01
Graphics processing units have been extensively used to accelerate classical molecular dynamics simulations. However, there is much less progress on the acceleration of force evaluations for many-body potentials compared to pairwise ones. In the conventional force evaluation algorithm for many-body potentials, the force, virial stress, and heat current for a given atom are accumulated within different loops, which could result in write conflicts between different threads in a CUDA kernel. In this work, we provide a new force evaluation algorithm, which is based on an explicit pairwise force expression for many-body potentials derived recently (Fan et al., 2015). In our algorithm, the force, virial stress, and heat current for a given atom can be accumulated within a single thread, free of write conflicts. We discuss the formulations and algorithms and evaluate their performance. A new open-source code, GPUMD, is developed based on the proposed formulations. For the Tersoff many-body potential, the double precision performance of GPUMD using a Tesla K40 card is equivalent to that of the LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) molecular dynamics code running with about 100 CPU cores (Intel Xeon CPU X5670 @ 2.93 GHz).
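The essence of the conflict-free scheme fits in a few lines. In the sketch below (plain C standing in for the CUDA kernel; the toy exponential force stands in for the Tersoff-specific partial forces), each outer iteration corresponds to one GPU thread and writes only to its own atom's slot, so no atomics are needed:

```c
/* Gather-style force evaluation in the spirit of GPUMD. With an explicit
 * pairwise expression F_i = sum_j F_ij (Fan et al., 2015), atom i writes
 * only to force[i], avoiding the scatter updates (and write conflicts)
 * of the conventional many-body force loop. */
#include <math.h>

typedef struct { double x, y, z; } Vec3;

void forces_gather(int n, const Vec3 *r, const int *nn, const int *list,
                   int max_nn, Vec3 *force)
{
    for (int i = 0; i < n; ++i) {            /* one thread per atom on a GPU */
        Vec3 f = {0.0, 0.0, 0.0};
        for (int k = 0; k < nn[i]; ++k) {
            int j = list[i * max_nn + k];    /* neighbor list, row-major */
            double dx = r[j].x - r[i].x, dy = r[j].y - r[i].y,
                   dz = r[j].z - r[i].z;
            double r2 = dx * dx + dy * dy + dz * dz;
            double s  = exp(-r2);            /* placeholder for |F_ij| */
            f.x += s * dx; f.y += s * dy; f.z += s * dz;
        }
        force[i] = f;                        /* single writer per atom */
    }
}
```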
SASS-1--SUBASSEMBLY STRESS SURVEY CODE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Friedrich, C.M.
1960-01-01
SASS-1, an IBM-704 FORTRAN code, calculates pressure, thermal, and combined stresses in a nuclear reactor core subassembly. In addition to cross- section stresses, the code calculates axial shear stresses needed to keep plane cross sections plane under axial variations of temperature. The input and output nomenclature, arrangement, and formats are described. (B.O.G.)
Coupling of TRAC-PF1/MOD2, Version 5.4.25, with NESTLE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Knepper, P.L.; Hochreiter, L.E.; Ivanov, K.N.
1999-09-01
A three-dimensional (3-D) spatial kinetics capability within a thermal-hydraulics system code provides a more correct description of the core physics during reactor transients that involve significant variations in the neutron flux distribution. Coupled codes provide the ability to forecast safety margins in a best-estimate manner. The behavior of a reactor core and the feedback to the plant dynamics can be accurately simulated. For each time step, coupled codes are capable of resolving system interaction effects on neutronics feedback and are capable of describing local neutronics effects caused by the thermal hydraulics and neutronics coupling. With the improvements in computational technology, modeling complex reactor behaviors with coupled thermal hydraulics and spatial kinetics is feasible. Previously, reactor analysis codes were limited to either a detailed thermal-hydraulics model with simplified kinetics or multidimensional neutron kinetics with a simplified thermal-hydraulics model. The authors discuss the coupling of the Transient Reactor Analysis Code (TRAC)-PF1/MOD2, Version 5.4.25, with the NESTLE code.
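Schematically, such a coupling runs as an alternating update each time step. The sketch below uses hypothetical function names and one-node dummy physics, not the actual TRAC-PF1/MOD2 or NESTLE interfaces; it only illustrates the data flow (power out of the kinetics solver, feedback back into it):

```c
/* Schematic explicit coupling of thermal-hydraulics and 3-D kinetics.
 * All names and the one-node physics are illustrative placeholders. */
typedef struct { double t_fuel, rho_cool; } THState;  /* per-node feedback */
typedef struct { double q; } PowerMap;                /* per-node power    */

static THState th_advance(const PowerMap *p, double dt) {
    THState s = { 600.0 + 0.01 * p->q * dt, 740.0 };  /* dummy heat-up */
    return s;
}
static PowerMap neutronics_solve(const THState *th) {
    /* dummy Doppler-like feedback: power falls as fuel temperature rises */
    PowerMap p = { th ? 100.0 * (900.0 / th->t_fuel) : 100.0 };
    return p;
}

void coupled_transient(double t_end, double dt) {
    PowerMap power = neutronics_solve(0);         /* initial estimate */
    for (double t = 0.0; t < t_end; t += dt) {
        THState th = th_advance(&power, dt);      /* system TH step   */
        power = neutronics_solve(&th);            /* kinetics step    */
    }
}
```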
The Modeling of Advanced BWR Fuel Designs with the NRC Fuel Depletion Codes PARCS/PATHS
Ward, Andrew; Downar, Thomas J.; Xu, Y.; ...
2015-04-22
The PATHS (PARCS Advanced Thermal Hydraulic Solver) code was developed at the University of Michigan in support of U.S. Nuclear Regulatory Commission research to solve the steady-state, two-phase, thermal-hydraulic equations for a boiling water reactor (BWR) and to provide thermal-hydraulic feedback for BWR depletion calculations with the neutronics code PARCS (Purdue Advanced Reactor Core Simulator). The simplified solution methodology, including a three-equation drift flux formulation and an optimized iteration scheme, yields very fast run times in comparison to conventional thermal-hydraulic systems codes used in the industry, while still retaining sufficient accuracy for applications such as BWR depletion calculations. Lastly, the capability to model advanced BWR fuel designs with part-length fuel rods and heterogeneous axial channel flow geometry has been implemented in PATHS, and the code has been validated against previously benchmarked advanced core simulators as well as BWR plant and experimental data. We describe the modifications to the codes and the results of the validation in this paper.
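As a reading aid, drift-flux closures of this kind typically take the Zuber-Findlay form relating void fraction to the superficial velocities (a generic statement of the model class, not necessarily the exact PATHS correlation):

```latex
\langle \alpha \rangle \;=\; \frac{\langle j_g \rangle}{C_0\,\langle j \rangle + V_{gj}}
```

where ⟨j_g⟩ is the gas superficial velocity, ⟨j⟩ the total superficial velocity, C_0 the distribution parameter, and V_gj the drift velocity.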
NASA Astrophysics Data System (ADS)
Lohn, Stefan B.; Dong, Xin; Carminati, Federico
2012-12-01
Chip multiprocessors are going to support massive parallelism through many additional physical and logical cores. Performance can no longer be improved by increasing clock frequency, because the technical limits are almost reached; instead, parallel execution must be used. Resources such as main memory, the cache hierarchy, the bandwidth of the memory bus, and the links between cores and sockets are not improving as fast. Hence, parallelism can only yield performance gains if memory usage is optimized and communication between threads is minimized. Moreover, concurrent programming has become a domain for experts: implementing multi-threading is error-prone and labor-intensive, and a full reimplementation of the whole AliRoot source code is unaffordable. This paper describes the effort to evaluate the adaptation of AliRoot to the needs of multi-threading and to provide parallel processing capability by using a semi-automatic source-to-source transformation, which addresses the problems described above and provides a straightforward way of parallelization with almost no interference between threads. This keeps the approach simple and reduces the required manual changes in the code. In a first step, unconditional thread-safety is introduced so that the originally sequential, thread-unaware source code can utilize multi-threading. Further investigations then identify candidate classes that are useful to share amongst threads. In a second step, the transformation changes the code to share these classes, and finally it is verified that no invalid interference between threads remains.
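The first step can be illustrated with the common thread-local-storage rewrite (simplified to C11 here; AliRoot itself is C++ and the real transformation is automated source-to-source): a mutable global becomes per-thread, which buys unconditional thread-safety without touching any call site.

```c
/* Illustration of the "unconditional thread-safety" rewrite: a shared,
 * racy global buffer is given thread-local storage, so each thread
 * operates on a private copy and interference vanishes. (_Thread_local
 * is a C11 keyword; no extra header is required.) */

/* before: static double g_workspace[4096];        -- shared across threads */
static _Thread_local double g_workspace[4096];     /* after: one per thread */

double accumulate(int n) {
    double s = 0.0;
    for (int i = 0; i < n && i < 4096; ++i) {
        g_workspace[i] = (double)i;   /* no race: storage is per-thread */
        s += g_workspace[i];
    }
    return s;
}
```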
Development of a Next Generation Concurrent Framework for the ATLAS Experiment
NASA Astrophysics Data System (ADS)
Calafiura, P.; Lampl, W.; Leggett, C.; Malon, D.; Stewart, G.; Wynne, B.
2015-12-01
The ATLAS experiment has successfully used its Gaudi/Athena software framework for data taking and analysis during the first LHC run, with billions of events successfully processed. However, the design of Gaudi/Athena dates from the early 2000s, and the software and physics code were written using a single-threaded, serial design. This programming model has increasing difficulty in exploiting the potential of current CPUs, which offer their best performance only when full advantage is taken of multiple cores and wide vector registers. Future CPU evolution will intensify this trend, with core counts increasing and memory per core falling. With current memory consumption for 64-bit ATLAS reconstruction in a high-luminosity environment approaching 4 GB, it will become impossible to fully occupy all cores in a machine without exhausting the available memory. Since maximizing performance per watt will be a key metric, however, a mechanism must be found to use all cores as efficiently as possible. In this paper we report on our progress with a practical demonstration of the use of multithreading in the ATLAS reconstruction software, using the GaudiHive framework. We have expanded support to Calorimeter, Inner Detector, and Tracking code, discussing the changes that were necessary, both to the framework and to the tools and algorithms used, in order to allow the serially designed ATLAS code to run. We report on the performance gains and on the general lessons learned about the code patterns that had been employed in the software, identifying the patterns that are particularly problematic for multi-threading. We also present our findings on implementing a hybrid multi-threaded/multi-process framework, to take advantage of the strengths of each type of concurrency while avoiding some of their corresponding limitations.
NASA Astrophysics Data System (ADS)
Tanikawa, Ataru; Yoshikawa, Kohji; Okamoto, Takashi; Nitadori, Keigo
2012-02-01
We present a high-performance N-body code for self-gravitating collisional systems accelerated with the aid of a new SIMD instruction set extension of the x86 architecture: Advanced Vector eXtensions (AVX), an enhanced version of the Streaming SIMD Extensions (SSE). With one processor core of an Intel Core i7-2600 processor (8 MB cache and 3.40 GHz) based on the Sandy Bridge micro-architecture, we implemented a fourth-order Hermite scheme with an individual timestep scheme (Makino and Aarseth, 1992), and achieved a performance of ~20 giga floating point number operations per second (GFLOPS) for double-precision accuracy, which is two times and five times higher than that of the previously developed code implemented with the SSE instructions (Nitadori et al., 2006b), and that of a code implemented without any explicit use of SIMD instructions on the same processor core, respectively. We have parallelized the code by using the so-called NINJA scheme (Nitadori et al., 2006a), and achieved ~90 GFLOPS for a system containing more than N = 8192 particles with 8 MPI processes on four cores. We expect to achieve about 10 tera FLOPS (TFLOPS) for a self-gravitating collisional system with N ~ 10^5 on massively parallel systems with at most 800 cores with the Sandy Bridge micro-architecture. This performance will be comparable to that of Graphics Processing Unit (GPU) cluster systems, such as the one with about 200 Tesla C1070 GPUs (Spurzem et al., 2010). This paper offers an alternative to collisional N-body simulations with GRAPEs and GPUs.
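As an illustration of the instruction set involved (not the authors' Hermite kernel), 256-bit AVX processes four double-precision interactions per instruction; the fragment below accumulates softened inverse-square contributions and compiles with -mavx:

```c
/* Minimal AVX sketch: ax[k] += dx[k] / (r2[k] + eps2)^(3/2) for four
 * pairs at once, using 256-bit double-precision vectors. */
#include <immintrin.h>

void accel_avx(const double *dx, const double *r2, double eps2, double *ax)
{
    __m256d vdx  = _mm256_loadu_pd(dx);
    __m256d vr2  = _mm256_add_pd(_mm256_loadu_pd(r2), _mm256_set1_pd(eps2));
    __m256d vr   = _mm256_sqrt_pd(vr2);            /* r          */
    __m256d vr3  = _mm256_mul_pd(vr2, vr);         /* r^3        */
    __m256d vacc = _mm256_div_pd(vdx, vr3);        /* dx / r^3   */
    _mm256_storeu_pd(ax, _mm256_add_pd(_mm256_loadu_pd(ax), vacc));
}
```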
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liao, J.; Kucukboyaci, V. N.; Nguyen, L.
2012-07-01
The Westinghouse Small Modular Reactor (SMR) is an 800 MWt (> 225 MWe) integral pressurized water reactor (iPWR) with all primary components, including the steam generator and the pressurizer, located inside the reactor vessel. The reactor core is based on a partial-height 17x17 fuel assembly design used in the AP1000® reactor core. The Westinghouse SMR utilizes passive safety systems and proven components from the AP1000 plant design with a compact containment that houses the integral reactor vessel and the passive safety systems. A preliminary loss of coolant accident (LOCA) analysis of the Westinghouse SMR has been performed using the WCOBRA/TRAC-TF2 code, simulating a transient caused by a double ended guillotine (DEG) break in the direct vessel injection (DVI) line. WCOBRA/TRAC-TF2 is a new generation Westinghouse LOCA thermal-hydraulics code evolving from the US NRC licensed WCOBRA/TRAC code. It is designed to simulate PWR LOCA events from the smallest break size to the largest break size (DEG cold leg). A significant number of fluid dynamics models and heat transfer models were developed or improved in WCOBRA/TRAC-TF2. A large number of separate effects and integral effects tests were performed for a rigorous code assessment and validation. WCOBRA/TRAC-TF2 was introduced into the Westinghouse SMR design phase to assist a quick and robust passive cooling system design and to identify thermal-hydraulic phenomena for the development of the SMR Phenomena Identification Ranking Table (PIRT). The LOCA analysis of the Westinghouse SMR demonstrates that the DEG DVI break LOCA is mitigated by the injection and venting from the Westinghouse SMR passive safety systems without core heat up, achieving long term core cooling. (authors)
Revisiting Molecular Dynamics on a CPU/GPU system: Water Kernel and SHAKE Parallelization.
Ruymgaart, A Peter; Elber, Ron
2012-11-13
We report Graphics Processing Unit (GPU) and OpenMP parallel implementations of water-specific force calculations and of bond constraints for use in Molecular Dynamics simulations. We focus on a typical laboratory computing environment in which a CPU with a few cores is attached to a GPU. We discuss the design of the code in detail and we illustrate performance comparable to highly optimized codes such as GROMACS. Besides speed, our code shows excellent energy conservation. Utilization of water-specific lists allows the efficient calculation of non-bonded interactions that include water molecules and results in a speed-up factor of more than 40 on the GPU compared to code optimized on a single CPU core for systems larger than 20,000 atoms. This is up fourfold from the factor of 10 reported in our initial GPU implementation that did not include a water-specific code. Another optimization is the implementation of constrained dynamics entirely on the GPU. The routine, which enforces constraints on all bonds, runs in parallel on multiple OpenMP cores or entirely on the GPU. It is based on a Conjugate Gradient solution of the Lagrange multipliers (CG SHAKE). The GPU implementation is partially in double precision and requires no communication with the CPU during the execution of the SHAKE algorithm. The (parallel) implementation of SHAKE allows an increase of the time step to 2.0 fs while maintaining excellent energy conservation. Interestingly, CG SHAKE is faster than the usual bond relaxation algorithm even on a single core if high accuracy is expected. The significant speedup of the optimized components transfers the computational bottleneck of the MD calculation to the reciprocal part of Particle Mesh Ewald (PME).
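For reference, the constraint problem being solved is the one handled by the classic SHAKE relaxation sketched below (the textbook form; the paper's contribution is to replace this sweep with a Conjugate Gradient solve for the Lagrange multipliers and to run it on the GPU):

```c
/* Classic iterative SHAKE for distance constraints |r_i - r_j|^2 = d2,
 * sweeping over bonds until all are satisfied to tolerance `tol`.
 * r: post-step positions (corrected in place); r_old: pre-step positions;
 * invm: inverse masses. Textbook reference version, not the CG variant. */
#include <math.h>

typedef struct { int i, j; double d2; } Bond;

void shake(double (*r)[3], const double (*r_old)[3], const double *invm,
           const Bond *b, int nb, double tol, int max_iter)
{
    for (int it = 0; it < max_iter; ++it) {
        int converged = 1;
        for (int k = 0; k < nb; ++k) {
            int i = b[k].i, j = b[k].j;
            double d[3], d0[3], r2 = 0.0, dot = 0.0;
            for (int a = 0; a < 3; ++a) {
                d[a]  = r[i][a] - r[j][a];          /* current separation  */
                d0[a] = r_old[i][a] - r_old[j][a];  /* pre-step separation */
                r2   += d[a] * d[a];
                dot  += d[a] * d0[a];
            }
            double diff = r2 - b[k].d2;
            if (fabs(diff) > tol * b[k].d2) {
                converged = 0;
                double g = diff / (2.0 * dot * (invm[i] + invm[j]));
                for (int a = 0; a < 3; ++a) {       /* move along old bond */
                    r[i][a] -= g * invm[i] * d0[a];
                    r[j][a] += g * invm[j] * d0[a];
                }
            }
        }
        if (converged) break;
    }
}
```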
Low-Power Embedded DSP Core for Communication Systems
NASA Astrophysics Data System (ADS)
Tsao, Ya-Lan; Chen, Wei-Hao; Tan, Ming Hsuan; Lin, Maw-Ching; Jou, Shyh-Jye
2003-12-01
This paper proposes a parameterized digital signal processor (DSP) core for an embedded digital signal processing system designed to achieve demodulation/synchronization with better performance and flexibility. The features of this DSP core include a parameterized data path, a dual MAC unit, subword MAC, and optional function-specific blocks for accelerating communication system modulation operations. This DSP core also has a low-power structure, which includes a gray-code addressing mode, pipeline sharing, and advanced hardware looping. Users can select the parameters and special functional blocks based on the character of their applications and then generate a DSP core. The DSP core has been implemented via a cell-based design method using synthesizable Verilog code with TSMC 0.35 μm SPQM and 0.25 μm 1P5M libraries. The equivalent gate count of the core area without memory is approximately 50 k. Moreover, the maximum operating frequency of one implemented version is 100 MHz (0.35 μm) and 140 MHz (0.25 μm).
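Gray-code addressing relies on the fact that consecutive codewords differ in exactly one bit, which minimizes toggling (and hence dynamic power) on sequentially accessed address buses. The conversion is a standard bit trick, independent of this particular core:

```c
/* Binary <-> Gray code: consecutive Gray codewords differ in one bit. */
#include <stdint.h>

static inline uint32_t binary_to_gray(uint32_t b) { return b ^ (b >> 1); }

static inline uint32_t gray_to_binary(uint32_t g) {
    for (uint32_t mask = g >> 1; mask != 0; mask >>= 1)
        g ^= mask;   /* fold the higher bits back down */
    return g;
}
```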
Brown, Cameron S.; Zhang, Hongbin; Kucukboyaci, Vefa; ...
2016-09-07
VERA-CS (Virtual Environment for Reactor Applications, Core Simulator) is a coupled neutron transport and thermal-hydraulics subchannel code under development by the Consortium for Advanced Simulation of Light Water Reactors (CASL). VERA-CS was used to simulate a typical pressurized water reactor (PWR) full core response with 17x17 fuel assemblies for a main steam line break (MSLB) accident scenario with the most reactive rod cluster control assembly stuck out of the core. The accident scenario was initiated at the hot zero power (HZP) at the end of the first fuel cycle with return to power state points that were determined by a system analysis code and the most limiting state point was chosen for core analysis. The best estimate plus uncertainty (BEPU) analysis method was applied using Wilks' nonparametric statistical approach. In this way, 59 full core simulations were performed to provide the minimum departure from nucleate boiling ratio (MDNBR) at the 95/95 (95% probability with 95% confidence level) tolerance limit. The results show that this typical PWR core remains within MDNBR safety limits for the MSLB accident.
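The 59-run figure follows from Wilks' formula for a first-order one-sided nonparametric tolerance limit: with coverage γ = 0.95 and confidence β = 0.95, the smallest N satisfying

```latex
1 - \gamma^{N} \;\ge\; \beta
\quad\Longrightarrow\quad
1 - 0.95^{59} \approx 0.952 \;\ge\; 0.95
```

is N = 59, so the lowest MDNBR among the 59 runs bounds the true 95th percentile with 95% confidence.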
Numerical optimization of three-dimensional coils for NSTX-U
Lazerson, S. A.; Park, J. -K.; Logan, N.; ...
2015-09-03
A tool for the calculation of optimal three-dimensional (3D) perturbative magnetic fields in tokamaks has been developed. The IPECOPT code builds upon the stellarator optimization code STELLOPT to allow for optimization of the linear ideal magnetohydrodynamic perturbed equilibrium (IPEC). This tool has been applied to NSTX-U equilibria, addressing which fields are the most effective at driving NTV torques. The NTV torque calculation is performed by the PENT code. Optimization of the normal field spectrum shows that fields with n = 1 character can drive a large core torque. It is also shown that fields with n = 3 features are capable of driving edge torque and some core torque. Coil current optimization (using the planned in-vessel and existing RWM coils) on NSTX-U suggests that the planned coil set is adequate for core and edge torque control. In conclusion, comparison between error field correction experiments on DIII-D and the optimizer shows good agreement.
Yeh, Yuan-Chieh; Chen, Hsing-Yu; Yang, Sien-Hung; Lin, Yi-Hsien; Chiu, Jen-Hwey; Lin, Yi-Hsuan; Chen, Jiun-Liang
2014-01-01
Traditional Chinese medicine (TCM), which is the most common type of complementary and alternative medicine (CAM) used in Taiwan, is increasingly used to treat patients with breast cancer. However, large-scale studies on the patterns of TCM prescriptions for breast cancer are still lacking. The aim of this study was to determine the core treatment of TCM prescriptions used for breast cancer recorded in the Taiwan National Health Insurance Research Database. TCM visits made for breast cancer in 2008 were identified using ICD-9 codes. The prescriptions obtained at these TCM visits were evaluated using association rule mining to evaluate the combinations of Chinese herbal medicine (CHM) used to treat breast cancer patients. A total of 37,176 prescriptions were made for 4,436 outpatients with breast cancer. Association rule mining and network analysis identified Hedyotis diffusa plus Scutellaria barbata as the most common duplex medicinal (10.9%) used for the core treatment of breast cancer. Jia-Wei-Xiao-Yao-San (19.6%) and Hedyotis diffusa (41.9%) were the most commonly prescribed herbal formula (HF) and single herb (SH), respectively. Only 35% of the commonly used CHM had been studied for efficacy. More clinical trials are needed to evaluate the efficacy and safety of these CHM used to treat breast cancer. PMID:24734104
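Association rule mining scores herb combinations by their co-occurrence across prescriptions. The two standard measures, stated here as a reading aid (generic definitions, with our notation rather than the paper's), are the support and confidence of a rule A ⇒ B over the prescription set 𝒫:

```latex
\mathrm{supp}(A \Rightarrow B) \;=\; \frac{\lvert\{P \in \mathcal{P} : A \cup B \subseteq P\}\rvert}{\lvert\mathcal{P}\rvert},
\qquad
\mathrm{conf}(A \Rightarrow B) \;=\; \frac{\mathrm{supp}(A \Rightarrow B)}{\mathrm{supp}(A)}
```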
NASA Astrophysics Data System (ADS)
Morant, Maria; Llorente, Roberto
2017-01-01
In this work we propose and evaluate experimentally the performance of IEEE 802.11ac WLAN standard signals in radio-over-fiber (RoF) distributed-antenna systems based on multicore fiber (MCF) for in-building WLAN connectivity. The RoF performance of WLAN signals with different bandwidths is investigated, up to the IEEE 802.11ac maximum of 160 MHz per user. We evaluate experimentally the performance of WLAN signals employing different modulation and coding schemes, achieving bitrates from 78 Mbps to 1404 Mbps per user over distances up to 300 m in a 4-core MCF. The performance of the wireless standard multiple-input multiple-output (MIMO) processing algorithms included in WLAN signals applied to the RoF transmission in MCF optical systems is also evaluated. The impact on the quality of the signal from one of the cores in the MIMO processing is investigated and compared with the results achieved with single-input single-output (SISO) transmission in each core. We measured the error vector magnitude (EVM) and the OFDM data burst information of the received WLAN signals after RoF transmission for different distributed-antenna systems with uni- and bi-directional MCF communication. Finally, we compare the received EVM of a single-antenna system (SISO arrangement) with WLAN systems using two antennas (2x2 MIMO) and four antennas (4x4 MIMO).
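EVM, the figure of merit measured here, is the rms error between the received constellation points S_k and the ideal reference points S_0,k, normalized to the reference constellation power (standard definition, not specific to this paper):

```latex
\mathrm{EVM}_{\mathrm{rms}} \;=\;
\sqrt{\frac{\tfrac{1}{N}\sum_{k=1}^{N}\lvert S_k - S_{0,k}\rvert^2}
           {\tfrac{1}{N}\sum_{k=1}^{N}\lvert S_{0,k}\rvert^2}} \times 100\%
```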
Assessment of the prevailing physics codes: LEOPARD, LASER, and EPRI-CELL
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lan, J.S.
1981-01-01
In order to analyze core performance and fuel management, it is necessary to verify reactor physics codes in great detail. This kind of work not only serves to understand and control the characteristics of each code, but also ensures reliability as codes continually change due to constant modifications and machine transfers. This paper presents the results of a comprehensive verification of three code packages: LEOPARD, LASER, and EPRI-CELL.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grebennikov, A.N.; Zhitnik, A.K.; Zvenigorodskaya, O.A.
1995-12-31
In conformity with the protocol of the Workshop under Contract "Assessment of RBMK reactor safety using modern Western codes", VNIIEF performed a series of neutronics computations to compare Western and VNIIEF codes and to assess whether the VNIIEF codes are suitable for safety assessment computations of RBMK-type reactors. The work was carried out in close collaboration with M.I. Rozhdestvensky and L.M. Podlazov, NIKIET employees. The effort involved: (1) cell computations with the WIMS and EKRAN codes (an improved modification of the LOMA code) and the S-90 code (VNIIEF Monte Carlo), including cell, polycell, and burnup computations; (2) 3D computations of static states with the KORAT-3D and NEU codes and comparison with results of computations with the NESTLE code (USA); these computations were performed in the geometry and with the neutron constants provided by the American party; (3) 3D computations of neutron kinetics with the KORAT-3D and NEU codes. These computations were performed in two formulations, both developed in collaboration with NIKIET. The formulation of the first problem agrees as closely as possible with one of the NESTLE problems and simulates gas bubble travel through a core. The second problem is a model of the RBMK as a whole, with simulation of control and protection system (CPS) rod movement in the core.
Time-dependent simulations of disk-embedded planetary atmospheres
NASA Astrophysics Data System (ADS)
Stökl, A.; Dorfi, E. A.
2014-03-01
At the early stages of the evolution of planetary systems, young Earth-like planets still embedded in the protoplanetary disk accumulate disk gas gravitationally into planetary atmospheres. The established way to study such atmospheres is hydrostatic models, even though in many cases the assumption of stationarity is unlikely to be fulfilled. Furthermore, such models rely on the specification of a planetary luminosity, attributed to a continuous, highly uncertain accretion of planetesimals onto the surface of the solid core. We present, for the first time, time-dependent, dynamic simulations of the accretion of nebula gas into an atmosphere around a proto-planet and the evolution of such embedded atmospheres, while integrating the thermal energy budget of the solid core. The spherically symmetric models computed with the TAPIR-Code (short for The adaptive, implicit RHD-Code) range from the surface of the rocky core up to the Hill radius, where the surrounding protoplanetary disk provides the boundary conditions. The TAPIR-Code includes the hydrodynamics equations, gray radiative transport and convective energy transport. The results indicate that disk-embedded planetary atmospheres evolve along comparatively simple outlines and in particular settle, depending on the mass of the solid core, at characteristic surface temperatures and planetary luminosities, quite independent of numerical parameters and initial conditions. For sufficiently massive cores, this evolution ultimately also leads to runaway accretion and the formation of a gas planet.
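The Hill radius used as the outer boundary of the models is the standard one, with a the orbital distance, M_p the planet mass and M_* the stellar mass:

```latex
r_{\mathrm{H}} \;=\; a \left(\frac{M_p}{3\,M_\ast}\right)^{1/3}
```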
Bumper 3 Update for IADC Protection Manual
NASA Technical Reports Server (NTRS)
Christiansen, Eric L.; Nagy, Kornel; Hyde, Jim
2016-01-01
The Bumper code has been the standard in use by NASA and contractors to perform meteoroid/debris risk assessments since 1990. It has undergone extensive revisions and updates [NASA JSC HITF website; Christiansen et al., 1992, 1997]. NASA Johnson Space Center (JSC) has applied Bumper to risk assessments for Space Station, Shuttle, Mir, Extravehicular Mobility Unit (EMU) space suits, and other spacecraft (e.g., LDEF, Iridium, TDRS, and the Hubble Space Telescope). Bumper continues to be updated with changes in the ballistic limit equations describing the failure threshold of various spacecraft components, as well as changes in the meteoroid and debris environment models. Significant efforts are expended to validate Bumper and benchmark it against other meteoroid/debris risk assessment codes. Bumper 3 is a refactored version of Bumper II. The structure of the code was extensively modified to improve maintainability, performance and flexibility. The architecture was changed to separate the frequently updated ballistic limit equations from the relatively stable common core functions of the program. These updates allow NASA to produce specific editions of Bumper 3 that are tailored to specific customer requirements. The core consists of common code necessary to process the Micrometeoroid and Orbital Debris (MMOD) environment models, assess shadowing, and calculate MMOD risk. The library of target response subroutines includes a broad range of different types of MMOD shield ballistic limit equations, as well as equations describing damage to various spacecraft subsystems or hardware (thermal protection materials, windows, radiators, solar arrays, cables, etc.). The core and the library of ballistic response subroutines are maintained under configuration control. A change in the core will affect all editions of the code, whereas a change in one or more of the response subroutines will affect only those editions that contain the modified subroutines. Note that the Bumper II program is no longer maintained or distributed by NASA.
Fukushima Daiichi Unit 1 Ex-Vessel Prediction: Core-Concrete Interaction
Robb, Kevin R.; Farmer, Mitchell T.; Francis, Matthew W.
2016-10-31
Lower head failure and corium-concrete interaction were predicted to occur at Fukushima Daiichi Unit 1 (1F1) by several different system-level code analyses, including MELCOR v2.1 and MAAP5. Although these codes capture a wide range of accident phenomena, they do not contain detailed models for ex-vessel core melt behavior. However, specialized codes exist for the analysis of ex-vessel melt spreading (e.g., MELTSPREAD) and long-term debris coolability (e.g., CORQUENCH). On this basis, in this paper an analysis was carried out to further evaluate ex-vessel behavior for 1F1 using MELTSPREAD and CORQUENCH. Best-estimate melt pour conditions predicted by MELCOR v2.1 and MAAP5 were used as input. MELTSPREAD was then used to predict the spatially dependent melt conditions and extent of spreading during relocation from the vessel. The results of the MELTSPREAD analysis are reported in a companion paper. This information was used as input for the long-term debris coolability analysis with CORQUENCH. For the MELCOR-based melt pour scenario, CORQUENCH predicted the melt would readily cool within 2.5 h after the pour, and the sumps would experience limited ablation (approximately 18 cm) under water-flooded conditions. Finally, for the MAAP-based melt pour scenarios, CORQUENCH predicted that the melt would cool in approximately 22.5 h, and the sumps would experience approximately 65 cm of concrete ablation under water-flooded conditions.
NASA Astrophysics Data System (ADS)
Hadade, Ioan; di Mare, Luca
2016-08-01
Modern multicore and manycore processors exhibit multiple levels of parallelism through a wide range of architectural features such as SIMD for data-parallel execution or threads for core parallelism. The exploitation of multi-level parallelism is therefore crucial for achieving superior performance on current and future processors. This paper presents the performance tuning of a multiblock CFD solver on Intel SandyBridge and Haswell multicore CPUs and the Intel Xeon Phi Knights Corner coprocessor. Code optimisations have been applied to two computational kernels exhibiting different computational patterns: the update of flow variables and the evaluation of the Roe numerical fluxes. We discuss at length the code transformations required for achieving efficient SIMD computations for both kernels across the selected devices, including SIMD shuffles and transpositions for flux stencil computations and global memory transformations. Core parallelism is expressed through threading based on a number of domain decomposition techniques, together with optimisations aimed at alleviating the NUMA effects found in multi-socket compute nodes. Results are correlated with the Roofline performance model in order to assess their efficiency for each distinct architecture. We report significant speedups for single-thread execution across both kernels: 2-5X on the multicore CPUs and 14-23X on the Xeon Phi coprocessor. Computations at full node and chip concurrency deliver a factor of three speedup on the multicore processors and up to 24X on the Xeon Phi manycore coprocessor.
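The Roofline model referred to above bounds attainable throughput by the lower of the peak compute rate and the memory-bandwidth ceiling at the kernel's arithmetic intensity I (flop per byte of memory traffic):

```latex
P_{\mathrm{attainable}}(I) \;=\; \min\!\left(P_{\mathrm{peak}},\; I \cdot B_{\mathrm{mem}}\right)
```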
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dokhane, A.; Canepa, S.; Ferroukhi, H.
For stability analyses of the Swiss operating Boiling Water Reactors (BWRs), the methodology employed and validated so far at the Paul Scherrer Institute (PSI) was based on the RAMONA-3 code with a hybrid upstream static lattice/core analysis approach using CASMO-4 and PRESTO-2. More recently, steps were undertaken towards a new methodology based on the SIMULATE-3K (S3K) code for the dynamical analyses, combined with the CMSYS system, which relies on the CASMO/SIMULATE-3 suite of codes and was established at PSI to serve as a framework for the development and validation of reference core models of all the Swiss reactors and operated cycles. This paper presents a first validation of the new methodology on the basis of a benchmark recently organised by a Swiss utility, with the participation of several international organisations using various codes and methods. In parallel, a transition from CASMO-4E (C4E) to CASMO-5M (C5M) as the basis for the CMSYS core models was also recently initiated at PSI. Consequently, it was considered appropriate to address the impact of this transition both for the steady-state core analyses and for the stability calculations, and to achieve thereby an integral approach for the validation of the new S3K methodology. Therefore, a comparative assessment of C4E versus C5M is also presented in this paper, with particular emphasis on the void coefficients and their impact on the downstream stability analysis results. (authors)
Development of a New 47-Group Library for the CASL Neutronics Simulators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Kang Seog; Williams, Mark L; Wiarda, Dorothea
The CASL core simulator MPACT is under development for coupled neutronics and thermal-hydraulics simulation of pressurized light water reactors. The key characteristics of the MPACT code include a subgroup method for resonance self-shielding and a whole-core solver with a 1D/2D synthesis method. The ORNL AMPX/SCALE code packages have been significantly improved to support various intermediate resonance self-shielding approximations such as the subgroup and embedded self-shielding methods. New 47-group AMPX and MPACT libraries based on ENDF/B-VII.0, whose group structure comes from the HELIOS library, have been generated for the CASL core simulator MPACT. The new 47-group MPACT library includes all nuclear data required for static and transient core simulations. This study discusses the detailed procedure used to generate the 47-group AMPX and MPACT libraries, and presents benchmark results for the VERA progression problems.
Development and Implementation of CFD-Informed Models for the Advanced Subchannel Code CTF
NASA Astrophysics Data System (ADS)
Blyth, Taylor S.
The research described in this PhD thesis contributes to the development of efficient methods for the utilization of high-fidelity models and codes to inform low-fidelity models and codes in the area of nuclear reactor core thermal-hydraulics. The objective is to increase the accuracy of predictions of the quantities of interest using high-fidelity CFD models while preserving the efficiency of low-fidelity subchannel core calculations. An original methodology named Physics-based Approach for High-to-Low Model Information has been further developed and tested. The overall physical phenomena and corresponding localized effects, which are introduced by the presence of spacer grids in light water reactor (LWR) cores, are dissected into four basic building processes, and corresponding models are informed using high-fidelity CFD codes. These models are a spacer grid-directed cross-flow model, a grid-enhanced turbulent mixing model, a heat transfer enhancement model, and a spacer grid pressure loss model. The localized CFD models are developed and tested using the CFD code STAR-CCM+, and the corresponding global model development and testing in subchannel formulation is performed in the thermal-hydraulic subchannel code CTF. The improved CTF simulations utilize data files derived from CFD STAR-CCM+ simulation results covering the spacer grid design desired for inclusion in the CTF calculation. The current implementation of these models is examined and possibilities for improvement and further development are suggested. The validation experimental database is extended by including the OECD/NRC PSBT benchmark data. The outcome is an enhanced accuracy of CTF predictions while preserving the computational efficiency of a low-fidelity subchannel code.
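Of the four informed models, the spacer grid pressure loss model is the simplest to state: in subchannel form it typically enters as a local form-loss term, with the loss coefficient being the quantity extracted from the CFD results (a generic statement of the model class, not the thesis' exact correlation):

```latex
\Delta p_{\mathrm{grid}} \;=\; K_{\mathrm{grid}}\,\frac{G^2}{2\rho}
```

where G is the mass flux, ρ the coolant density, and K_grid the CFD-informed loss coefficient.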
Jones, S.; Hirschi, R.; Pignatari, M.; ...
2015-01-15
We present a comparison of 15 M⊙, 20 M⊙ and 25 M⊙ stellar models from three different codes (GENEC, KEPLER and MESA) and their nucleosynthetic yields. The models are calculated from the main sequence up to the pre-supernova (pre-SN) stage and do not include rotation. The GENEC and KEPLER models hold physics assumptions that are characteristic of the two codes. The MESA code is generally more flexible; overshooting of the convective core during the hydrogen and helium burning phases in MESA is chosen such that the CO core masses are consistent with those in the GENEC models. Full nucleosynthesis calculations are performed for all models using the NuGrid post-processing tool MPPNP, and the key energy-generating nuclear reaction rates are the same for all codes. We are thus able to highlight the key differences between the models that are caused by the contrasting physics assumptions and numerical implementations of the three codes. A reasonable agreement is found between the surface abundances predicted by the models computed using the different codes, with GENEC exhibiting the strongest enrichment in H-burning products and KEPLER the weakest. There are large variations in both the structure and the composition of the models at the pre-SN stage from code to code, in the 15 M⊙ and 20 M⊙ cases in particular, caused primarily by convective shell merging during the advanced stages. For example, the C-shell abundances of O, Ne and Mg predicted by the three codes span one order of magnitude in the 15 M⊙ models. For the alpha elements between Si and Fe the differences are even larger. The s-process abundances in the C shell are modified by the merging of convective shells; the modification is strongest in the 15 M⊙ model, in which the C-shell material is exposed to O-burning temperatures and the γ-process is activated. The variation in the s-process abundances across the codes is smallest in the 25 M⊙ models, where it is comparable to the impact of nuclear reaction rate uncertainties. In general, the differences in the results from the three codes are due to their contrasting physics assumptions (e.g., prescriptions for mass loss and convection). The broadly similar evolution of the 25 M⊙ models gives us reassurance that different stellar evolution codes do produce similar results. For the 15 M⊙ and 20 M⊙ models, however, the different input physics and the interplay between the various convective zones lead to important differences in both the pre-supernova structure and the nucleosynthesis predicted by the three codes. For the KEPLER models the core masses are different, and therefore an exact match could not be expected.
A Common Set of Core Values - The Foundation for a More Effective Joint Force
2015-05-18
these codes stopped short of codifying a set of core values and instead focused on right and wrong behaviors. This adherence to sets of rules and...Armed Forces independently recognized the limitations of compliance-based rules and the criticality of establishing a strong foundation with core...institutional values vice core values? The knee-jerk reaction of the 1990s and a subsequent lack of a formal effort to institute a single set of core
Coupled Monte Carlo neutronics and thermal hydraulics for power reactors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernnat, W.; Buck, M.; Mattes, M.
The availability of high performance computing resources enables more and more the use of detailed Monte Carlo models even for full-core power reactors. The detailed structure of the core can be described by lattices, modeled by so-called repeated structures, e.g., in Monte Carlo codes such as MCNP5 or MCNPX. For cores with mainly uniform material compositions and fuel and moderator temperatures, there is no problem in constructing core models. However, when the material composition and the temperatures vary strongly, a huge number of different material cells must be described, which complicates the input and in many cases exceeds code or memory limits. A second problem arises with the preparation of the corresponding temperature-dependent cross sections and thermal scattering laws. Only if these problems can be solved is a realistic coupling of Monte Carlo neutronics with an appropriate thermal-hydraulics model possible. In this paper a method for the treatment of detailed material and temperature distributions in MCNP5 is described, based on user-specified internal functions which assign distinct elements of the core cells to material specifications (e.g., water density) and temperatures from a thermal-hydraulics code. The core grid itself can be described with a uniform material specification. The temperature dependency of cross sections and thermal neutron scattering laws is taken into account by interpolation, requiring only a limited number of data sets generated for different temperatures. Applications are shown for the stationary part of the Purdue PWR benchmark using ATHLET for thermal-hydraulics and for a generic modular high-temperature reactor using THERMIX for thermal-hydraulics. (authors)
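One common recipe for such temperature interpolation, sketched below, is linear interpolation in sqrt(T) between cross-section sets tabulated at bracketing temperatures, which tracks the Doppler-broadening trend reasonably well. This is an assumption for illustration; the abstract does not spell out the paper's exact interpolation rule.

```c
/* Interpolate a cross section tabulated at n temperatures T_grid[]
 * (ascending) to an arbitrary temperature T, linearly in sqrt(T). */
#include <math.h>

double xs_at_temperature(const double *T_grid, const double *xs_grid,
                         int n, double T)
{
    if (T <= T_grid[0])     return xs_grid[0];
    if (T >= T_grid[n - 1]) return xs_grid[n - 1];
    int k = 1;
    while (T_grid[k] < T) ++k;                 /* bracketing interval */
    double s  = sqrt(T);
    double s0 = sqrt(T_grid[k - 1]), s1 = sqrt(T_grid[k]);
    double w  = (s - s0) / (s1 - s0);          /* weight in sqrt(T)   */
    return (1.0 - w) * xs_grid[k - 1] + w * xs_grid[k];
}
```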
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harrison, Cyrus; Larsen, Matt; Brugger, Eric
Strawman is a system designed to explore the in situ visualization and analysis needs of simulation code teams running multi-physics calculations on many-core HPC architectures. It provides rendering pipelines that can leverage both many-core CPUs and GPUs to render images of simulation meshes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, W.W.; Layton, J.P.
1976-09-13
The three-volume report describes a dual-mode nuclear space power and propulsion system concept that employs an advanced solid-core nuclear fission reactor coupled via heat pipes to one of several electric power conversion systems. The NUROC3A systems analysis code was designed to provide the user with performance characteristics of the dual-mode system. Volume 3 describes utilization of the NUROC3A code to produce a detailed parameter study of the system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carbajo, J.J.
1995-12-31
This study compares results obtained with two U.S. Nuclear Regulatory Commission (NRC)-sponsored codes, MELCOR version 1.8.3 (1.8PQ) and SCDAP/RELAP5 Mod3.1 release C, for the same transient: a low-pressure, short-term station blackout accident at the Browns Ferry nuclear plant. This work is part of the MELCOR assessment activities comparing core damage progression calculations of MELCOR against SCDAP/RELAP5, since the two codes model core damage progression very differently.
Deterministic Modeling of the High Temperature Test Reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ortensi, J.; Cogliati, J. J.; Pope, M. A.
2010-06-01
Idaho National Laboratory (INL) is tasked with the development of reactor physics analysis capability for the Next Generation Nuclear Plant (NGNP) project. In order to examine INL's current prismatic reactor deterministic analysis tools, the project is conducting a benchmark exercise based on modeling the High Temperature Test Reactor (HTTR). This exercise entails the development of a model for the initial criticality, a 19-column thin annular core, and the fully loaded core critical condition with 30 columns. Special emphasis is devoted to the annular core modeling, which shares more characteristics with the NGNP base design. The DRAGON code is used in this study because it offers significant ease and versatility in modeling prismatic designs. Despite some geometric limitations, the code performs quite well compared to other lattice physics codes. DRAGON can generate transport solutions via collision probability (CP), method of characteristics (MOC), and discrete ordinates (Sn) methods. A fine-group cross-section library based on the SHEM 281-group energy structure is used in the DRAGON calculations. HEXPEDITE is the hexagonal-z full-core solver used in this study and is based on the Green's function solution of the transverse integrated equations. In addition, two Monte Carlo (MC) based codes, MCNP5 and PSG2/SERPENT, provide benchmarking capability for the DRAGON and nodal diffusion solver codes. The results from this study show a consistent bias of 2-3% for the core multiplication factor. This systematic error has also been observed in other HTTR benchmark efforts and is well documented in the literature. The ENDF/B-VII graphite and U235 cross sections appear to be the main source of the error. The isothermal temperature coefficients calculated with the fully loaded core configuration agree well with other benchmark participants but are 40% higher than the experimental values. This discrepancy with the measurement stems from the fact that during the experiments the control rods were adjusted to maintain criticality, whereas in the model the rod positions were fixed. In addition, this work includes a brief study of a cross-section generation approach that seeks to decouple the domain in order to account for neighbor effects. This spectral interpenetration is a dominant effect in annular HTR physics. This analysis methodology should be further explored in order to reduce the error that is systematically propagated in the traditional generation of cross sections.
The Quality of Written Feedback by Attendings of Internal Medicine Residents.
Jackson, Jeffrey L; Kay, Cynthia; Jackson, Wilkins C; Frank, Michael
2015-07-01
Attending evaluations are commonly used to evaluate residents. The objective of this study was to evaluate the quality of attendings' written feedback on internal medicine residents, in a retrospective study of internal medicine residents and faculty at the Medical College of Wisconsin from 2004 to 2012. From monthly evaluations of residents by attendings, a randomly selected sample of 500 written comments was qualitatively coded and rated as high-, moderate-, or low-quality feedback by two independent coders with good inter-rater reliability (kappa: 0.94). Small group exercises with residents and attendings also coded the utterances as high, moderate, or low quality and developed criteria for this categorization. In-service examination scores were correlated with written feedback. There were 228 internal medicine residents who had 6,603 evaluations by 334 attendings. Among the 500 randomly selected written comments there were 2,056 unique utterances: 29% were coded as nonspecific statements, 20% were comments about resident personality, 16% about patient care, 14% about interpersonal communication, 7% about medical knowledge, 6% about professionalism, and 4% each on practice-based learning and systems-based practice. Based on criteria developed in the group exercises, the majority of written comments were rated as moderate quality (65%); 22% were rated as high quality and 13% as low quality. Attendings who provided high-quality feedback rated residents significantly lower in all six of the Accreditation Council for Graduate Medical Education (ACGME) competencies (p < 0.0005 for all) and used a greater range of scores. Negative comments on medical knowledge were associated with lower in-service examination scores. Most written evaluations by attendings were of moderate or low quality. Attendings who provided high-quality feedback appeared to be more discriminating, providing significantly lower ratings of residents in all six ACGME core competencies and across a greater range. Attendings' negative written comments on medical knowledge correlated with lower in-service training scores.
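The inter-rater statistic quoted (kappa: 0.94) is Cohen's kappa, which discounts the observed agreement p_o by the agreement p_e expected by chance:

```latex
\kappa \;=\; \frac{p_o - p_e}{1 - p_e}
```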
The numerical simulation of a high-speed axial flow compressor
NASA Technical Reports Server (NTRS)
Mulac, Richard A.; Adamczyk, John J.
1991-01-01
The advancement of high-speed axial-flow multistage compressors is impeded by a lack of detailed flow-field information. Recent developments in compressor flow modeling and numerical simulation have the potential to provide the needed information in a timely manner. The development of a computer program to solve the viscous form of the average-passage equation system for multistage turbomachinery is described. Programming issues such as in-core versus out-of-core data storage and CPU utilization (parallelization, vectorization, and chaining) are addressed. Code performance is evaluated through the simulation of the first four stages of a five-stage, high-speed, axial-flow compressor. The second part addresses the flow physics which can be obtained from the numerical simulation. In particular, an examination of the endwall flow structure is made, and its impact on blockage distribution assessed.
Development of 3D pseudo pin-by-pin calculation methodology in ANC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, B.; Mayhue, L.; Huria, H.
2012-07-01
Advanced core and fuel assembly designs have been developed to improve operational flexibility and economic performance and to further enhance the safety features of nuclear power plants. The simulation of these new designs, along with strongly heterogeneous fuel loading, has brought new challenges to the reactor physics methodologies currently employed in the industrial codes for core analyses. Control rod insertion during normal operation is one operational feature of the AP1000® plant, Westinghouse's next-generation Pressurized Water Reactor (PWR) design. This design improves operational flexibility and efficiency but significantly challenges the conventional reactor physics methods, especially in pin power calculations. The mixed loading of fuel assemblies with significantly different neutron spectra causes a strong interaction between different fuel assembly types that is not fully captured by the current core design codes. To overcome the weaknesses of the conventional methods, Westinghouse has developed a state-of-the-art 3D Pin-by-Pin Calculation Methodology (P3C) and successfully implemented it in the Westinghouse core design code ANC. The new methodology has been qualified and licensed for pin power prediction. The 3D P3C methodology, along with its application and validation, is discussed in the paper. (authors)
2D and 3D Models of Convective Turbulence and Oscillations in Intermediate-Mass Main-Sequence Stars
NASA Astrophysics Data System (ADS)
Guzik, Joyce Ann; Morgan, Taylor H.; Nelson, Nicholas J.; Lovekin, Catherine; Kitiashvili, Irina N.; Mansour, Nagi N.; Kosovichev, Alexander
2015-08-01
We present multidimensional modeling of convection and oscillations in main-sequence stars somewhat more massive than the sun, using three separate approaches: 1) Applying the spherical 3D MHD ASH (Anelastic Spherical Harmonics) code to simulate the core convection and radiative zone. Our goal is to determine whether core convection can excite low-frequency gravity modes, and thereby explain the presence of low frequencies for some hybrid gamma Dor/delta Sct variables for which the envelope convection zone is too shallow for the convective blocking mechanism to drive g modes; 2) Using the 3D planar ‘StellarBox’ radiation hydrodynamics code to model the envelope convection zone and part of the radiative zone. Our goals are to examine the interaction of stellar pulsations with turbulent convection in the envelope, excitation of acoustic modes, and the role of convective overshooting; 3) Applying the ROTORC 2D stellar evolution and dynamics code to calculate evolution with a variety of initial rotation rates and extents of core convective overshooting. The nonradial adiabatic pulsation frequencies of these nonspherical models will be calculated using the 2D pulsation code NRO of Clement. We will present new insights into gamma Dor and delta Sct pulsations gained by multidimensional modeling compared to 1D model expectations.
Hybrid parallel code acceleration methods in full-core reactor physics calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Courau, T.; Plagne, L.; Ponicot, A.
2012-07-01
When dealing with nuclear reactor calculation schemes, the need for three dimensional (3D) transport-based reference solutions is essential for both validation and optimization purposes. Considering a benchmark problem, this work investigates the potential of discrete ordinates (Sn) transport methods applied to 3D pressurized water reactor (PWR) full-core calculations. First, the benchmark problem is described. It involves a pin-by-pin description of a 3D PWR first core and uses an 8-group cross-section library prepared with the DRAGON cell code. Then, a convergence analysis is performed using the PENTRAN parallel Sn Cartesian code. It discusses the spatial refinement and the associated angular quadrature required to properly describe the problem physics. It also shows that initializing the Sn solution with the EDF SPN solver COCAGNE reduces the number of iterations required to converge by nearly a factor of 6. Using a best-estimate model, PENTRAN results are then compared to multigroup Monte Carlo results obtained with the MCNP5 code. Good consistency is observed between the two methods (Sn and Monte Carlo), with discrepancies that are less than 25 pcm for k-eff, and less than 2.1% and 1.6% for the flux at the pin-cell level and for the pin-power distribution, respectively. (authors)
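For scale, pcm (per cent mille) is the reactivity unit used above:

```latex
1\ \mathrm{pcm} \;=\; 10^{-5}\ \Delta k/k
```

so a 25 pcm discrepancy corresponds to 2.5 x 10^-4 in k-eff.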
3D Field Modifications of Core Neutral Fueling In the EMC3-EIRENE Code
NASA Astrophysics Data System (ADS)
Waters, Ian; Frerichs, Heinke; Schmitz, Oliver; Ahn, Joon-Wook; Canal, Gustavo; Evans, Todd; Feng, Yuehe; Kaye, Stanley; Maingi, Rajesh; Soukhanovskii, Vsevolod
2017-10-01
The application of 3-D magnetic field perturbations to the edge plasmas of tokamaks has long been seen as a viable way to control damaging Edge Localized Modes (ELMs). These 3-D fields have also been correlated with a density drop in the core plasmas of tokamaks, known as `pump-out'. While pump-out is typically explained as the result of enhanced outward transport, degraded fueling of the core may also play a role. By altering the temperature and density of the plasma edge, 3-D fields will impact the distribution function of high energy neutral particles produced through ion-neutral energy exchange processes. Starved of the deeply penetrating neutral source, the core density will decrease. Numerical studies carried out with the EMC3-EIRENE code on National Spherical Torus eXperiment-Upgrade (NSTX-U) equilibria show that this change to core fueling by high energy neutrals may be a significant contributor to the overall particle balance in the NSTX-U tokamak: deep core (Ψ < 0.5) fueling from neutral ionization sources is decreased by 40-60% with RMPs. This work was funded by the US Department of Energy under Grant DE-SC0012315.
Preliminary Analysis of the BASALA-H Experimental Programme
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blaise, Patrick; Fougeras, Philippe; Philibert, Herve
2002-07-01
This paper focuses on the preliminary analysis of results obtained on the first cores of the first phase of the BASALA (Boiling water reactor Advanced core physics Study Aimed at mox fuel Lattice) programme, aimed at studying the neutronic parameters of an ABWR core in hot conditions, currently under investigation in the French EOLE critical facility, within the framework of a cooperation between NUPEC, CEA and Cogema. The first 'on-line' analysis of the results has been made, using a new preliminary design and safety scheme based on the French APOLLO-2 code in its 2.4 qualified version and the associated CEA-93 V4 (JEF-2.2) library, which will enable the Experimental Physics Division (SPEx) to perform future core designs. It describes the scheme adopted and the results obtained in various cases, going from the critical size determination to the reactivity worth of the perturbed configurations (voided, over-moderated, and poisoned with Gd{sub 2}O{sub 3}-UO{sub 2} pins). A preliminary study of the experimental results on MISTRAL-4 is summarized, and a comparison of APOLLO-2 versus MCNP-4C calculations on these cores is made. The results obtained show very good agreement between the two codes, and versus the experiment. This work opens the way to the future full analysis of the experimental results by the qualification teams with completely validated schemes, based on the new 2.5 version of the APOLLO-2 code. (authors)
ERIC Educational Resources Information Center
Foster, Catherine; McMenemy, David
2012-01-01
Thirty-six ethical codes from national professional associations were studied, with the aim of testing whether librarians have global shared values or whether political and cultural contexts have significantly influenced the codes' content. Gorman's eight core values of stewardship, service, intellectual freedom, rationalism, literacy and learning, equity of…
Validation of the WIMSD4M cross-section generation code with benchmark results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deen, J.R.; Woodruff, W.L.; Leal, L.E.
1995-01-01
The WIMSD4 code has been adopted for cross-section generation in support of the Reduced Enrichment Research and Test Reactor (RERTR) program at Argonne National Laboratory (ANL). Subsequently, the code has undergone several updates, and significant improvements have been achieved. The capability of generating group-collapsed micro- or macroscopic cross sections from the ENDF/B-V library and the more recent evaluation, ENDF/B-VI, in the ISOTXS format makes the modified version of the WIMSD4 code, WIMSD4M, very attractive, not only for the RERTR program, but also for the reactor physics community. The intent of the present paper is to validate the WIMSD4M cross-section libraries for reactor modeling of fresh water moderated cores. The results of calculations performed with multigroup cross-section data generated with the WIMSD4M code will be compared against experimental results. These results correspond to calculations carried out with thermal reactor benchmarks of the Oak Ridge National Laboratory (ORNL) unreflected HEU critical spheres, the TRX LEU critical experiments, and calculations of a modified Los Alamos HEU D{sub 2}O moderated benchmark critical system. The benchmark calculations were performed with the discrete-ordinates transport code, TWODANT, using WIMSD4M cross-section data. Transport calculations using the XSDRNPM module of the SCALE code system are also included. In addition to transport calculations, diffusion calculations with the DIF3D code were also carried out, since the DIF3D code is used in the RERTR program for reactor analysis and design. For completeness, Monte Carlo results of calculations performed with the VIM and MCNP codes are also presented.
NASA Astrophysics Data System (ADS)
Wang, Rongjiang; Heimann, Sebastian; Zhang, Yong; Wang, Hansheng; Dahm, Torsten
2017-04-01
A hybrid method is proposed to calculate complete synthetic seismograms based on a spherically symmetric and self-gravitating Earth with a multi-layered structure of atmosphere, ocean, mantle, liquid core and solid core. For large wavelengths, a numerical scheme is used to solve the geodynamic boundary-value problem without any approximation on the deformation and gravity coupling. With decreasing wavelength, the gravity effect on the deformation becomes negligible and the analytical propagator scheme can be used. Many useful approaches are used to overcome the numerical problems that may arise in both analytical and numerical schemes. Some of these approaches have been established in the seismological community and the others are developed for the first time. Based on the stable and efficient hybrid algorithm, an all-in-one code QSSP is implemented to cover the complete spectrum of seismological interests. The performance of the code is demonstrated by various tests including the curvature effect on teleseismic body and surface waves, the appearance of multiple reflected, teleseismic core phases, the gravity effect on long period surface waves and free oscillations, the simulation of near-field displacement seismograms with the static offset, the coupling of tsunami and infrasound waves, and free oscillations of the solid Earth, the atmosphere and the ocean. QSSP is open source software that can be used as a stand-alone FORTRAN code or may be applied in combination with a Python toolbox to calculate and handle Green's function databases for efficient coding of source inversion problems.
PanCoreGen - Profiling, detecting, annotating protein-coding genes in microbial genomes.
Paul, Sandip; Bhardwaj, Archana; Bag, Sumit K; Sokurenko, Evgeni V; Chattopadhyay, Sujay
2015-12-01
A large amount of genomic data, especially from multiple isolates of a single species, has opened new vistas for microbial genomics analysis. Analyzing the pan-genome (i.e. the sum of genetic repertoire) of microbial species is crucial in understanding the dynamics of molecular evolution, where virulence evolution is of major interest. Here we present PanCoreGen - a standalone application for pan- and core-genomic profiling of microbial protein-coding genes. PanCoreGen overcomes key limitations of the existing pan-genomic analysis tools, and develops an integrated annotation-structure for a species-specific pan-genomic profile. It provides important new features for annotating draft genomes/contigs and detecting unidentified genes in annotated genomes. It also generates user-defined group-specific datasets within the pan-genome. Interestingly, analyzing an example-set of Salmonella genomes, we detect potential footprints of adaptive convergence of horizontally transferred genes in two human-restricted pathogenic serovars - Typhi and Paratyphi A. Overall, PanCoreGen represents a state-of-the-art tool for microbial phylogenomics and pathogenomics study.
Neutron flux and power in RTP core-15
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rabir, Mohamad Hairie, E-mail: m-hairie@nuclearmalaysia.gov.my; Zin, Muhammad Rawi Md; Usang, Mark Dennis
PUSPATI TRIGA Reactor achieved initial criticality on June 28, 1982. The reactor is designed to effectively support various fields of basic nuclear research, manpower training, and production of radioisotopes. This paper describes the calculation of reactor parameters for the PUSPATI TRIGA REACTOR (RTP), focusing on the application of the developed 3D reactor model for criticality calculation and analysis of the power and neutron flux distribution of the TRIGA core. The 3D continuous-energy Monte Carlo code MCNP was used to develop a versatile and accurate full model of the TRIGA reactor. The model represents in detail all important components of the core with essentially no physical approximation. The consistency and accuracy of the developed RTP MCNP model were established by comparing calculations to the available experimental results and TRIGLAV code calculations.
Use of a Standardized Patient Exercise to Assess Core Competencies During Fellowship Training
Barry, Curtis T.; Avissar, Uri; Asebrook, Maureen; Sostok, Michael A.; Sherman, Kenneth E.; Zucker, Stephen D.
2010-01-01
Background The Accreditation Council for Graduate Medical Education requires fellows in many specialties to demonstrate attainment of 6 core competencies, yet relatively few validated assessment tools currently exist. We present our initial experience with the design and implementation of a standardized patient (SP) exercise during gastroenterology fellowship that facilitates appraisal of all core clinical competencies. Methods Fellows evaluated an SP trained to portray an individual referred for evaluation of abnormal liver tests. The encounters were independently graded by the SP and a faculty preceptor for patient care, professionalism, and interpersonal and communication skills using quantitative checklist tools. Trainees' consultation notes were scored using predefined key elements (medical knowledge) and subjected to a coding audit (systems-based practice). Practice-based learning and improvement was addressed via verbal feedback from the SP and self-assessment of the videotaped encounter. Results Six trainees completed the exercise. Second-year fellows received significantly higher scores in medical knowledge (55.0 ± 4.2 [standard deviation], P = .05) and patient care skills (19.5 ± 0.7, P = .04) by a faculty evaluator as compared with first-year trainees (46.2 ± 2.3 and 14.7 ± 1.5, respectively). Scores correlated by Spearman rank (0.82, P = .03) with the results of the Gastroenterology Training Examination. Ratings of the fellows by the SP did not differ by level of training, nor did they correlate with faculty scores. Fellows viewed the exercise favorably, with most indicating they would alter their practice based on the experience. Conclusions An SP exercise is an efficient and effective tool for assessing core clinical competencies during fellowship training. PMID:21975896
Extension of the XGC code for global gyrokinetic simulations in stellarator geometry
NASA Astrophysics Data System (ADS)
Cole, Michael; Moritaka, Toseo; White, Roscoe; Hager, Robert; Ku, Seung-Hoe; Chang, Choong-Seock
2017-10-01
In this work, the total-f, gyrokinetic particle-in-cell code XGC is extended to treat stellarator geometries. Improvements to meshing tools and the code itself have enabled the first physics studies, including single particle tracing and flux surface mapping in the magnetic geometry of the heliotron LHD and quasi-isodynamic stellarator Wendelstein 7-X. These have provided the first successful test cases for our approach. XGC is uniquely placed to model the complex edge physics of stellarators. A roadmap to such a global confinement modeling capability will be presented. Single particle studies will include the physics of energetic particles' global stochastic motions and their effect on confinement. Good confinement of energetic particles is vital for a successful stellarator reactor design. These results can be compared in the core region with those of other codes, such as ORBIT3d. In subsequent work, neoclassical transport and turbulence can then be considered and compared to results from codes such as EUTERPE and GENE. After sufficient verification in the core region, XGC will move into the stellarator edge region including the material wall and neutral particle recycling.
3 Lectures: "Lagrangian Models", "Numerical Transport Schemes", and "Chemical and Transport Models"
NASA Technical Reports Server (NTRS)
Douglass, A.
2005-01-01
The topics for the three lectures for the Canadian Summer School are Lagrangian Models, numerical transport schemes, and chemical and transport models. In the first lecture I will explain the basic components of the Lagrangian model (a trajectory code and a photochemical code), the difficulties in using such a model (initialization) and show some applications in interpretation of aircraft and satellite data. If time permits I will show some results concerning inverse modeling which is being used to evaluate sources of tropospheric pollutants. In the second lecture I will discuss one of the core components of any grid point model, the numerical transport scheme. I will explain the basics of shock capturing schemes, and performance criteria. I will include an example of the importance of horizontal resolution to polar processes. We have learned from NASA's global modeling initiative that horizontal resolution matters for predictions of the future evolution of the ozone hole. The numerical scheme will be evaluated using performance metrics based on satellite observations of long-lived tracers. The final lecture will discuss the evolution of chemical transport models over the last decade. Some of the problems with assimilated winds will be demonstrated, using satellite data to evaluate the simulations.
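The 'trajectory code' half of a Lagrangian model amounts to integrating parcel positions through a wind field. A minimal sketch under invented assumptions (an analytic rotating wind standing in for assimilated winds, RK4 time stepping):

```python
# Advect one air parcel through a prescribed 2-D wind field with RK4.
# The wind field is an analytic toy, not assimilated meteorology.
import numpy as np

def wind(x):                              # (u, v) of an idealized solid-body rotation
    return np.array([-x[1], x[0]])

def rk4_step(x, dt):
    k1 = wind(x)
    k2 = wind(x + 0.5 * dt * k1)
    k3 = wind(x + 0.5 * dt * k2)
    k4 = wind(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

x, dt = np.array([1.0, 0.0]), 0.1
for _ in range(63):                       # ~ one full revolution (2*pi / 0.1)
    x = rk4_step(x, dt)
print(x)                                  # returns close to the start point (1, 0)
```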
Syazwan, AI; Rafee, B Mohd; Hafizan, Juahir; Azman, AZF; Nizar, AM; Izwyn, Z; Muhaimin, AA; Yunos, MA Syafiq; Anita, AR; Hanafiah, J Muhamad; Shaharuddin, MS; Ibthisham, A Mohd; Ismail, Mohd Hasmadi; Azhar, MN Mohamad; Azizan, HS; Zulfadhli, I; Othman, J
2012-01-01
Background To meet the current diversified health needs in workplaces, especially in nonindustrial workplaces in developing countries, an indoor air quality (IAQ) component of a participatory occupational safety and health survey should be included. Objectives The purpose of this study was to evaluate and suggest a multidisciplinary, integrated IAQ checklist for evaluating the health risk of building occupants. The proposed IAQ checklist supports employers, workers, and assessors in understanding a wide range of important elements of the indoor air environment to promote awareness in nonindustrial workplaces. Methods The general structure of and specific items in the IAQ checklist were discussed in a focus group meeting with IAQ assessors based upon the results of a literature review, a previous industrial code of practice, and previous interviews with company employers and workers. Results For practicality and validity, several sessions were held to elicit the opinions of company members, and, as a result, modifications were made. The newly developed IAQ checklist was finally formulated, consisting of seven core areas, nine technical areas, and 71 essential items. Each item was linked to a suitable section in the Industry Code of Practice on Indoor Air Quality published by the Department of Occupational Safety and Health. Conclusion Combined usage of the IAQ checklist with the information from the Industry Code of Practice on Indoor Air Quality would provide easily comprehensible information and practical support. Intervention and evaluation studies using this newly developed IAQ checklist will clarify the effectiveness of a new approach to evaluating the risk of indoor air pollutants in the workplace. PMID:22570579
NASA Technical Reports Server (NTRS)
Lawton, Pat
2004-01-01
The objective of this work was to support the design of improved IUE NEWSIPS high dispersion extraction algorithms. The purpose of this work was to evaluate use of the Linearized Image (LIHI) file versus the Re-Sampled Image (SIHI) file, to evaluate various extraction methods, and to design algorithms for evaluation of IUE high dispersion spectra. It was concluded that use of the Re-Sampled Image (SIHI) file was acceptable. Since the Gaussian profile worked well for the core and the Lorentzian profile worked well for the wings, the Voigt profile was chosen for use in the extraction algorithm. It was found that the gamma and sigma parameters varied significantly across the detector, so gamma and sigma masks for the SWP detector were developed. Extraction code was written.
Accident Analysis for the NIST Research Reactor Before and After Fuel Conversion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baek J.; Diamond D.; Cuadra, A.
Postulated accidents have been analyzed for the 20 MW D2O-moderated research reactor (NBSR) at the National Institute of Standards and Technology (NIST). The analysis has been carried out for the present core, which contains high enriched uranium (HEU) fuel, and for a proposed equilibrium core with low enriched uranium (LEU) fuel. The analyses employ state-of-the-art calculational methods. Three-dimensional Monte Carlo neutron transport calculations were performed with the MCNPX code to determine homogenized fuel compositions in the lower and upper halves of each fuel element and to determine the resulting neutronic properties of the core. The accident analysis employed a model of the primary loop with the RELAP5 code. The model includes the primary pumps, shutdown pumps, outlet valves, heat exchanger, fuel elements, and flow channels for both the six inner and twenty-four outer fuel elements. Evaluations were performed for the following accidents: (1) control rod withdrawal startup accident, (2) maximum reactivity insertion accident, (3) loss-of-flow accident resulting from loss of electrical power with an assumption of failure of shutdown cooling pumps, (4) loss-of-flow accident resulting from a primary pump seizure, and (5) loss-of-flow accident resulting from inadvertent throttling of a flow control valve. In addition, natural circulation cooling at low power operation was analyzed. The analysis shows that the conversion will not lead to significant changes in the safety analysis, and the calculated minimum critical heat flux ratio and maximum clad temperature assure that there is adequate margin to fuel failure.
Fault Tolerance Middleware for a Multi-Core System
NASA Technical Reports Server (NTRS)
Some, Raphael R.; Springer, Paul L.; Zima, Hans P.; James, Mark; Wagner, David A.
2012-01-01
Fault Tolerance Middleware (FTM) provides a framework to run on a dedicated core of a multi-core system and handles detection of single-event upsets (SEUs), and the responses to those SEUs, occurring in an application running on multiple cores of the processor. This software was written expressly for a multi-core system and can support different kinds of fault strategies, such as introspection, algorithm-based fault tolerance (ABFT), and triple modular redundancy (TMR). It focuses on providing fault tolerance for the application code, and represents the first step in a plan to eventually include fault tolerance in message passing and the FTM itself. In the multi-core system, the FTM resides on a single, dedicated core, separate from the cores used by the application. This is done in order to isolate the FTM from application faults and to allow it to swap out any application core for a substitute. The structure of the FTM consists of an interface to a fault tolerant strategy module, a responder module, a fault manager module, an error factory, and an error mapper that determines the severity of the error. In the present reference implementation, the only fault tolerant strategy implemented is introspection. The introspection code waits for an application node to send an error notification to it. It then uses the error factory to create an error object, and at this time, a severity level is assigned to the error. The introspection code uses its built-in knowledge base to generate a recommended response to the error. Responses might include ignoring the error, logging it, rolling back the application to a previously saved checkpoint, swapping in a new node to replace a bad one, or restarting the application. The original error and recommended response are passed to the top-level fault manager module, which invokes the response. The responder module also notifies the introspection module of the generated response. This provides additional information to the introspection module that it can use in generating its next response. For example, if the responder triggers an application rollback and errors are still occurring, the introspection module may decide to recommend an application restart.
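A hedged sketch of the introspection flow just described: an error notification from an application core passes through an error factory that assigns severity, a knowledge base recommends a response, and the fault manager invokes it while feeding the outcome back for future decisions. All class names, severity levels, and rules below are hypothetical, not the actual FTM API.

```python
# Hypothetical miniature of the FTM introspection loop described above.
from dataclasses import dataclass

@dataclass
class Error:
    source_core: int
    kind: str            # e.g. "seu_register", "seu_memory" (invented kinds)
    severity: int        # assigned by the error factory

def error_factory(source_core, kind):
    severity = {"seu_register": 1, "seu_memory": 2, "core_hang": 3}.get(kind, 2)
    return Error(source_core, kind, severity)

KNOWLEDGE_BASE = {1: "log", 2: "rollback_to_checkpoint", 3: "swap_in_spare_core"}

class FaultManager:
    def __init__(self):
        self.history = []          # responder feedback informs later responses
    def handle(self, err):
        response = KNOWLEDGE_BASE[err.severity]
        # escalate if rollbacks keep failing to clear errors on this core
        if self.history.count((err.source_core, "rollback_to_checkpoint")) >= 2:
            response = "swap_in_spare_core"
        self.history.append((err.source_core, response))
        return response

fm = FaultManager()
for _ in range(3):                 # repeated SEUs on core 4 escalate the response
    print(fm.handle(error_factory(source_core=4, kind="seu_memory")))
```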
Gimbel, Sarah; Rustagi, Alison S; Robinson, Julia; Kouyate, Seydou; Coutinho, Joana; Nduati, Ruth; Pfeiffer, James; Gloyd, Stephen; Sherr, Kenneth; Granato, S Adam; Kone, Ahoua; Cruz, Emilia; Manuel, Joao Luis; Zucule, Justina; Napua, Manuel; Mbatia, Grace; Wariua, Grace; Maina, Martin
2016-08-01
Despite large investments to prevent mother-to-child transmission (PMTCT), pediatric HIV elimination goals are not on track in many countries. The Systems Analysis and Improvement Approach (SAIA) study was a cluster randomized trial to test whether a package of systems engineering tools could strengthen PMTCT programs. We sought to (1) define core and adaptable components of the SAIA intervention, and (2) explain the heterogeneity in SAIA's success between facilities. The Consolidated Framework for Implementation Research (CFIR) guided all data collection efforts. CFIR constructs were assessed in focus group discussions and interviews with study and facility staff in 6 health facilities (1 high-performing and 1 low-performing site per country, identified by study staff) in December 2014 at the end of the intervention period. SAIA staff identified the intervention's core and adaptable components at an end-of-study meeting in August 2015. Two independent analysts used CFIR constructs to code transcripts before reaching consensus. Flow mapping and continuous quality improvement were core to the SAIA in all settings, whereas the PMTCT cascade analysis tool was core in high HIV-prevalence settings. Five CFIR constructs distinguished strongly between high and low performers: 2 in the inner setting (networks and communication, available resources) and 3 in process (external change agents, executing, reflecting and evaluating). The CFIR is a valuable tool for categorizing elements of an intervention as core versus adaptable, and for understanding heterogeneity in study implementation. Future intervention studies should apply evidence-based implementation science frameworks, like the CFIR, to provide salient data to expand implementation to other settings.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roseberry, R.J.
The experimental measurements and nuclear analysis of a uniformly loaded, unpoisoned slab core with a partially inserted hafnium rod and/or a partially inserted water gap are described. Comparisons of experimental data with calculated results of the UFO code and flux synthesis techniques are given. It is concluded that one of the flux synthesis techniques and the UFO code are able to predict flux distributions to within approximately -5% of experiment for most cases, with a maximum error of approximately -10% for a channel at the core-reflector boundary. The second synthesis technique failed to give comparable agreement with experiment even when various refinements were used, e.g. increasing the number of mesh points, iterating the flux synthesis technique, and spectrum-weighting the appropriate calculated fluxes through the use of the SWAKRAUM code. These results are comparable to those reported in Part I of this study. (auth)
NASA Astrophysics Data System (ADS)
Mielikainen, Jarno; Huang, Bormin; Huang, Allen H.-L.
2015-05-01
Intel Many Integrated Core (MIC) ushers in a new era of supercomputing speed, performance, and compatibility. It allows developers to run code at trillions of calculations per second using a familiar programming model. In this paper, we present our results of optimizing the updated Goddard shortwave radiation Weather Research and Forecasting (WRF) scheme on Intel Many Integrated Core Architecture (MIC) hardware. The Intel Xeon Phi coprocessor is the first product based on the Intel MIC architecture, and it consists of up to 61 cores connected by a high performance on-die bidirectional interconnect. The coprocessor supports all important Intel development tools, so the development environment is a familiar one to a vast number of CPU developers. However, getting maximum performance out of the Xeon Phi requires some novel optimization techniques, which are discussed in this paper. The results show that the optimizations improved performance of the original code on the Xeon Phi 7120P by a factor of 1.3x.
PLUMED 2: New feathers for an old bird
NASA Astrophysics Data System (ADS)
Tribello, Gareth A.; Bonomi, Massimiliano; Branduardi, Davide; Camilloni, Carlo; Bussi, Giovanni
2014-02-01
Enhancing sampling and analyzing simulations are central issues in molecular simulation. Recently, we introduced PLUMED, an open-source plug-in that provides some of the most popular molecular dynamics (MD) codes with implementations of a variety of different enhanced sampling algorithms and collective variables (CVs). The rapid changes in this field, in particular new directions in enhanced sampling and dimensionality reduction together with new hardware, require a code that is more flexible and more efficient. We therefore present PLUMED 2 here—a complete rewrite of the code in an object-oriented programming language (C++). This new version introduces greater flexibility and greater modularity, which both extends its core capabilities and makes it far easier to add new methods and CVs. It also has a simpler interface with the MD engines and provides a single software library containing both tools and core facilities. Ultimately, the new code better serves the ever-growing community of users and contributors in coping with the new challenges arising in the field.
Budkowska, Agata; Kakkanas, Athanassios; Nerrienet, Eric; Kalinina, Olga; Maillard, Patrick; Horm, Srey Viseth; Dalagiorgou, Geena; Vassilaki, Niki; Georgopoulou, Urania; Martinot, Michelle; Sall, Amadou Alpha; Mavromara, Penelope
2011-01-01
The biological role of the protein encoded by the alternative open reading frame (core+1/ARF) of the Hepatitis C virus (HCV) genome remains elusive, as does the significance of the production of corresponding antibodies in HCV infection. We investigated the prevalence of anti-core and anti-core+1/ARFP antibodies in HCV-positive blood donors from Cambodia, using peptide and recombinant protein-based ELISAs. We detected unusual serological profiles in 3 out of 58 HCV positive plasma of genotype 1a. These patients were negative for anti-core antibodies by commercial and peptide-based assays using C-terminal fragments of core but reacted by Western Blot with full-length core protein. All three patients had high levels of anti-core+1/ARFP antibodies. Cloning of the cDNA that corresponds to the core-coding region from these sera resulted in the expression of both core and core+1/ARFP in mammalian cells. The core protein exhibited high amino-acid homology with a consensus HCV1a sequence. However, 10 identical synonymous mutations were found, and 7 were located in the aa(99–124) region of core. All mutations concerned the third base of a codon, and 5/10 represented a T>C mutation. Prediction analyses of the RNA secondary structure revealed conformational changes within the stem-loop region that contains the core+1/ARFP internal AUG initiator at position 85/87. Using the luciferase tagging approach, we showed that core+1/ARFP expression is more efficient from such a sequence than from the prototype HCV1a RNA. We provide additional evidence of the existence of core+1/ARFP in vivo and new data concerning expression of HCV core protein. We show that HCV patients who do not produce normal anti-core antibodies have unusually high levels of anti-core+1/ARFP antibodies and harbour several identical synonymous mutations in the core and core+1/ARFP coding region that result in major changes in predicted RNA structure. Such HCV variants may favour core+1/ARFP production during HCV infection. PMID:21283512
Neutron dose rate analysis on HTGR-10 reactor using Monte Carlo code
NASA Astrophysics Data System (ADS)
Suwoto; Adrial, H.; Hamzah, A.; Zuhair; Bakhri, S.; Sunaryo, G. R.
2018-02-01
The HTGR-10 reactor has a cylinder-shaped core fuelled with TRISO coated fuel particle kernels in spherical pebbles, with a helium cooling system. The helium coolant outlet temperature from the reactor core is designed to be 700 °C. One advantage of the HTGR-type reactor is its capability for co-generation: in addition to generating electricity, the reactor is designed to produce high-temperature heat that can be used for other processes. Each spherical fuel pebble contains 8335 TRISO UO2 kernel coated particles, with enrichments of 10% and 17%, dispersed in a graphite matrix. The main purpose of this study was to analyze the distribution of neutron dose rates generated by the HTGR-10 reactor. The calculation and analysis of the neutron dose rate in the HTGR-10 reactor core were performed using the Monte Carlo MCNP5v1.6 code. The double heterogeneity of the TRISO coated kernel fuel particles and the spherical fuel pebbles in the HTGR-10 core is modelled well with the MCNP5v1.6 code. The neutron flux-to-dose conversion factors taken from the International Commission on Radiological Protection (ICRP-74) were used to determine the dose rate passing through the active core, reflectors, core barrel, reactor pressure vessel (RPV) and biological shield. The neutron dose rates calculated with the MCNP5v1.6 code, using the ICRP-74 (2009) conversion factors for radiation workers, in the radial direction outside the RPV (radial position = 220 cm from the core center of HTGR-10) are 9.22E-4 μSv/h and 9.58E-4 μSv/h for enrichments of 10% and 17%, respectively. These values comply with BAPETEN Chairman's Regulation Number 4 Year 2013 on Radiation Protection and Safety in Nuclear Energy Utilization, which sets the limit for the average effective dose for radiation workers at 20 mSv/year or 10 μSv/h. Thus the requirements for protection and safety of radiation workers with respect to this radiation source are fulfilled. It can be concluded that the calculated neutron dose rates for the HTGR-10 core meet the required radiation safety standards.
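The dose evaluation step described above is, at its core, a folding of the group-wise neutron flux with flux-to-dose conversion coefficients. A minimal sketch with invented numbers (the real calculation uses MCNP tally output and the actual ICRP-74 coefficients):

```python
# Fold a 3-group neutron flux with flux-to-dose coefficients.
# Both arrays are illustrative placeholders, not ICRP-74 data.
import numpy as np

flux = np.array([1.2e2, 3.4e1, 8.0e0])     # group flux at detector, n/cm^2/s
h_coef = np.array([4.0, 100.0, 400.0])     # pSv*cm^2 per neutron, per group

dose_pSv_s = float(np.sum(flux * h_coef))  # fold spectrum with coefficients
dose_uSv_h = dose_pSv_s * 3600 * 1e-6      # pSv/s -> uSv/h
print(f"dose rate = {dose_uSv_h:.2e} uSv/h")
```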
Recent update of the RPLUS2D/3D codes
NASA Technical Reports Server (NTRS)
Tsai, Y.-L. Peter
1991-01-01
The development of the RPLUS2D/3D codes is summarized. These codes utilize LU algorithms to solve chemical non-equilibrium flows in a body-fitted coordinate system. The motivation behind the development of these codes is the need to numerically predict chemical non-equilibrium flows for the National AeroSpace Plane Program. Recent improvements include a vectorization method, blocking algorithms for geometric flexibility, out-of-core storage for large-size problems, and an LU-SW/UP combination for CPU-time efficiency and solution quality.
CoCoNuT: General relativistic hydrodynamics code with dynamical space-time evolution
NASA Astrophysics Data System (ADS)
Dimmelmeier, Harald; Novak, Jérôme; Cerdá-Durán, Pablo
2012-02-01
CoCoNuT is a general relativistic hydrodynamics code with dynamical space-time evolution. The main aim of this numerical code is the study of several astrophysical scenarios in which general relativity can play an important role, namely the collapse of rapidly rotating stellar cores and the evolution of isolated neutron stars. The code has two flavors: CoCoA, the axisymmetric (2D) magnetized version, and CoCoNuT, the 3D non-magnetized version.
Building Automatic Grading Tools for Basic of Programming Lab in an Academic Institution
NASA Astrophysics Data System (ADS)
Harimurti, Rina; Iwan Nurhidayat, Andi; Asmunin
2018-04-01
The skill of computer programming is a core competency that must be mastered by students majoring in computer science. The best way to improve this skill is through the practice of writing many programs to solve various problems, from simple to complex. It takes hard work and a long time to check and evaluate the results of student labs one by one, especially when the number of students is large. Based on these constraints, we propose Automatic Grading Tools (AGT), an application that can evaluate and deeply check source code in C and C++. The application architecture consists of students, a web-based application, compilers, and the operating system. AGT implements the MVC architecture and uses open-source software: the Laravel framework version 5.4, PostgreSQL 9.6, Bootstrap 3.3.7, and the jQuery library. AGT has also been tested on real problems by submitting source code in C/C++ and then compiling it. The test results show that the AGT application runs well.
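The compile-and-judge core of such a grader is straightforward to sketch. The snippet below is illustrative only (not the actual AGT code); it assumes gcc is on the PATH, and the file names and test cases are made up.

```python
# Compile a submitted C file and score it against expected outputs.
import os
import subprocess
import tempfile

def grade(source_path, tests, time_limit=2):
    exe = os.path.join(tempfile.mkdtemp(), "a.out")
    build = subprocess.run(["gcc", source_path, "-O2", "-o", exe],
                           capture_output=True, text=True)
    if build.returncode != 0:
        return {"verdict": "compile_error", "log": build.stderr}
    passed = 0
    for stdin_data, expected in tests:
        try:
            run = subprocess.run([exe], input=stdin_data, text=True,
                                 capture_output=True, timeout=time_limit)
        except subprocess.TimeoutExpired:
            continue                    # time-limit exceeded counts as a failure
        if run.stdout.strip() == expected.strip():
            passed += 1
    return {"verdict": "graded", "score": 100.0 * passed / len(tests)}

# e.g. grade("sum.c", [("1 2\n", "3"), ("5 7\n", "12")])
```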
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gauntt, Randall O.; Mattie, Patrick D.
Sandia National Laboratories (SNL) has conducted an uncertainty analysis (UA) on the Fukushima Daiichi unit 1 (1F1) accident progression with the MELCOR code. The model used was developed for a previous accident reconstruction investigation jointly sponsored by the US Department of Energy (DOE) and the Nuclear Regulatory Commission (NRC). That study focused on reconstructing the accident progression, as postulated by the limited plant data. This work focused on evaluating uncertainty in core damage progression behavior and its effect on key figures-of-merit (e.g., hydrogen production, reactor damage state, fraction of intact fuel, vessel lower head failure). The primary intent of this study was to characterize the range of predicted damage states in the 1F1 reactor considering state-of-knowledge uncertainties associated with MELCOR modeling of core damage progression, and to generate information that may be useful in informing the decommissioning activities that will be employed to defuel the damaged reactors at the Fukushima Daiichi Nuclear Power Plant. Additionally, core damage progression variability inherent in MELCOR modeling numerics is investigated.
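The UA workflow sketched above, sampling uncertain model parameters and examining the spread of a figure-of-merit, can be illustrated with a stand-in response function; the parameter names, distributions, and response below are invented and are not MELCOR models or Fukushima data.

```python
# Propagate invented parameter uncertainties through a toy surrogate
# and summarize a figure-of-merit (here, hydrogen production).
import numpy as np

rng = np.random.default_rng(42)
n = 200
t_fail = rng.triangular(2300.0, 2500.0, 2700.0, size=n)   # cladding failure T, K
ox_mult = rng.lognormal(mean=0.0, sigma=0.25, size=n)     # oxidation multiplier

def surrogate_h2_kg(t_fail, ox_mult):
    # toy response: more oxidation and earlier failure -> more hydrogen
    return 300.0 * ox_mult * (2700.0 / t_fail)

h2 = surrogate_h2_kg(t_fail, ox_mult)
print(f"H2: median {np.median(h2):.0f} kg, "
      f"5-95% range [{np.percentile(h2, 5):.0f}, {np.percentile(h2, 95):.0f}] kg")
```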
Neutron-gamma flux and dose calculations in a Pressurized Water Reactor (PWR)
NASA Astrophysics Data System (ADS)
Brovchenko, Mariya; Dechenaux, Benjamin; Burn, Kenneth W.; Console Camprini, Patrizio; Duhamel, Isabelle; Peron, Arthur
2017-09-01
The present work deals with Monte Carlo simulations aiming to determine the neutron and gamma responses outside the vessel and in the basemat of a Pressurized Water Reactor (PWR). The model is based on the Tihange-I Belgian nuclear reactor. With a large set of information and measurements available, this reactor has the advantage of being easily modelled and allows validation against experimental measurements. Power distribution calculations were therefore performed with the MCNP code at IRSN and compared to the available in-core measurements. Results showed good agreement between calculated and measured values over the whole core. In this paper, the methods and hypotheses used for the particle transport simulation from the fission distribution in the core to the detectors outside the vessel of the reactor are also summarized. The results of the simulations are presented, including the neutron and gamma doses and flux energy spectra. MCNP6 computational results comparing the JEFF3.1 and ENDF/B-VII.1 nuclear data evaluations, and the sensitivity of the results to some model parameters, are presented.
Liu, Chang; Deng, Lei; He, Jiale; Li, Di; Fu, Songnian; Tang, Ming; Cheng, Mengfan; Liu, Deming
2017-07-24
In this paper, a 4 × 4 multiple-input multiple-output (MIMO) radio over 7-core fiber system based on sparse code multiple access (SCMA) and OFDM/OQAM techniques is proposed. No cyclic prefix (CP) is required thanks to proper design of the prototype filters in the OFDM/OQAM modulator, and the non-orthogonally overlaid codewords of SCMA help to serve more users simultaneously with the same number of time and frequency resources as OFDMA, resulting in increased spectral efficiency (SE) and system capacity. In our experiment, an 11.04 Gb/s 4 × 4 MIMO SCMA-OFDM/OQAM signal is successfully transmitted over 20 km of 7-core fiber and a 0.4 m air distance in both uplink and downlink. As a comparison, a 6.681 Gb/s traditional MIMO-OFDM signal with the same occupied bandwidth has been evaluated for both uplink and downlink transmission. The experimental results show that the SE is increased by 65.2% with no bit error rate (BER) performance degradation compared with the traditional MIMO-OFDM technique.
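The quoted spectral-efficiency gain follows directly from the two line rates, since both signals occupy the same bandwidth:

```python
# Rate ratio at equal bandwidth gives the spectral-efficiency increase.
gain = (11.04 - 6.681) / 6.681
print(f"SE increase: {gain:.1%}")   # -> 65.2%
```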
NASA Astrophysics Data System (ADS)
Bouffard, M.
2016-12-01
Convection in the Earth's outer core is driven by the combination of two buoyancy sources: a thermal source directly related to the Earth's secular cooling, the release of latent heat and possibly the heat generated by radioactive decay, and a compositional source due to the crystallization of the growing inner core, which releases light elements into the liquid outer core. The dynamics of fusion/crystallization being dependent on the heat flux distribution, the thermochemical boundary conditions are coupled at the inner core boundary, which may affect the dynamo in various ways, particularly if heterogeneous conditions are imposed at one boundary. In addition, the thermal and compositional molecular diffusivities differ by three orders of magnitude. This can produce significant differences in the convective dynamics compared to pure thermal or compositional convection, due to the potential occurrence of double-diffusive phenomena. Traditionally, temperature and composition have been combined into one single variable called codensity under the assumption that turbulence mixes all physical properties at an "eddy-diffusion" rate. This description does not allow for a proper treatment of the thermochemical coupling and is certainly incorrect within stratified layers in which double-diffusive phenomena can be expected. For a more general and rigorous approach, two distinct transport equations should therefore be solved for temperature and composition. However, the weak compositional diffusivity is technically difficult to handle in current geodynamo codes and requires the use of a semi-Lagrangian description to minimize numerical diffusion. We implemented a "particle-in-cell" method into a geodynamo code to properly describe the compositional field. The code is suitable for highly parallel computing architectures and was successfully tested on two benchmarks. Following the work of Aubert et al. (2008), we use this new tool to perform dynamo simulations including thermochemical coupling at the inner core boundary, as well as an exploration of the infinite-Lewis-number limit, to study the effect of a heterogeneous core-mantle-boundary heat flow on inner core growth.
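The appeal of the particle-in-cell treatment is that composition is carried by Lagrangian particles, so the tracer suffers no grid-advection diffusion; a field is recovered only when needed by depositing particles onto cells. A deliberately simplified 1-D illustration (not the geodynamo code itself):

```python
# 1-D particle-in-cell tracer transport: particles carry composition,
# advection moves the particles, and a grid field is obtained by
# depositing particle values into cells. Purely illustrative.
import numpy as np

rng = np.random.default_rng(7)
n_cells, n_part, dt, u = 64, 64 * 16, 0.01, 1.0   # uniform velocity (made up)
x = rng.random(n_part)                            # particle positions in [0, 1)
c = (x < 0.5).astype(float)                       # sharp step in composition

for _ in range(130):                              # advect particles (shift of 0.3)
    x = (x + u * dt) % 1.0                        # periodic domain

cell = (x * n_cells).astype(int)                  # deposit: cell-average of c
counts = np.bincount(cell, minlength=n_cells)
grid_c = np.bincount(cell, weights=c, minlength=n_cells) / np.maximum(counts, 1)
print(grid_c.round(2))    # the step survives intact, merely shifted by 0.3
```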
Implicit time-integration method for simultaneous solution of a coupled non-linear system
NASA Astrophysics Data System (ADS)
Watson, Justin Kyle
Historically, large physical problems have been divided into smaller problems based on the physics involved. This is no different in reactor safety analysis. The problem of analyzing a nuclear reactor for design basis accidents is performed by a handful of computer codes, each solving a portion of the problem. The reactor thermal hydraulic response to an event is determined using a system code like the TRAC RELAP Advanced Computational Engine (TRACE). The core power response to the same accident scenario is determined using a core physics code like the Purdue Advanced Core Simulator (PARCS). Containment response to the reactor depressurization in a Loss Of Coolant Accident (LOCA) type event is calculated by a separate code. Sub-channel analysis is performed with yet another computer code. This is just a sample of the computer codes used to solve the overall problem of nuclear reactor design basis accidents. Traditionally each of these codes operates independently from the others, using only the global results from one calculation as boundary conditions to another. Industry's drive to uprate power for reactors has motivated analysts to move from a conservative approach to design basis accidents towards best estimate methods. To achieve a best estimate calculation, efforts have been aimed at coupling the individual physics models to improve the accuracy of the analysis and reduce margins. The current coupling techniques are sequential in nature. During a calculation time-step, data is passed between the two codes. The individual codes solve their portion of the calculation and converge to a solution before the calculation is allowed to proceed to the next time-step. This thesis presents a fully implicit method of simultaneously solving the neutron balance equations, heat conduction equations and the constitutive fluid dynamics equations. It discusses the problems involved in coupling different physics phenomena within multi-physics codes and presents a solution to these problems. The thesis also outlines the basic concepts behind the nodal balance equations, heat transfer equations and the thermal hydraulic equations, which will be coupled to form a fully implicit nonlinear system of equations. The coupling of separate physics models to solve a larger problem and improve the accuracy and efficiency of a calculation is not a new idea; however, implementing them in an implicit manner and solving the system simultaneously is. Also, the application to reactor safety codes is new and has not been done with thermal hydraulics and neutronics codes on realistic applications in the past. The coupling technique described in this thesis is applicable to other similar coupled thermal hydraulic and core physics reactor safety codes. This technique is demonstrated using coupled input decks to show that the system is solved correctly, and then verified using two derivative test problems based on international benchmark problems: the OECD/NRC Three Mile Island (TMI) Main Steam Line Break (MSLB) problem (representative of pressurized water reactor analysis) and the OECD/NRC Peach Bottom (PB) Turbine Trip (TT) benchmark (representative of boiling water reactor analysis).
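The contrast between sequential coupling and the fully implicit approach can be seen on a zero-dimensional caricature: one 'neutronics' unknown (power) and one 'thermal' unknown (temperature) with feedback between them. In the implicit approach, both residuals are assembled into a single vector and handed to one Newton-type solver; the model equations below are invented solely for illustration.

```python
# Simultaneous (fully implicit) solution of a toy coupled system.
import numpy as np
from scipy.optimize import fsolve

alpha, k, T0 = 1e-3, 0.05, 300.0        # made-up feedback and heat-up constants

def residual(z):
    P, T = z
    r1 = P - 100.0 * (1.0 - alpha * (T - T0))   # power with temperature feedback
    r2 = T - (T0 + k * P)                       # temperature from deposited power
    return [r1, r2]

P, T = fsolve(residual, x0=[100.0, 300.0])      # both fields converge together
print(f"P = {P:.4f}, T = {T:.4f}")
```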
Confinement properties of tokamak plasmas with extended regions of low magnetic shear
NASA Astrophysics Data System (ADS)
Graves, J. P.; Cooper, W. A.; Kleiner, A.; Raghunathan, M.; Neto, E.; Nicolas, T.; Lanthaler, S.; Patten, H.; Pfefferle, D.; Brunetti, D.; Lutjens, H.
2017-10-01
Extended regions of low magnetic shear can be advantageous to tokamak plasmas, but the core and edge can be susceptible to non-resonant ideal fluctuations due to the weakened restoring force associated with magnetic field line bending. This contribution shows how saturated non-linear phenomenology, such as 1/1 Long Lived Modes and Edge Harmonic Oscillations associated with QH-modes, can be modelled accurately using the non-linear stability code XTOR, the free boundary 3D equilibrium code VMEC, and non-linear analytic theory. That the equilibrium approach is valid is particularly valuable because it enables advanced particle confinement studies to be undertaken in the ordinarily difficult environment of strongly 3D magnetic fields. The VENUS-LEVIS code exploits the Fourier description of the VMEC equilibrium fields, such that full Lorentz and guiding-centre approximated differential operators in curvilinear angular coordinates can be evaluated analytically. Consequently, the confinement properties of minority ions such as energetic particles and high-Z impurities can be calculated accurately over slowing-down timescales in experimentally relevant 3D plasmas.
Investigating the impact of the cielo cray XE6 architecture on scientific application codes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rajan, Mahesh; Barrett, Richard; Pedretti, Kevin Thomas Tauke
2010-12-01
Cielo, a Cray XE6, is the Department of Energy NNSA Advanced Simulation and Computing (ASC) campaign's newest capability machine. Rated at 1.37 PFLOPS, it consists of 8,944 dual-socket oct-core AMD Magny-Cours compute nodes, linked using Cray's Gemini interconnect. Its primary mission objective is to enable a suite of the ASC applications implemented using MPI to scale to tens of thousands of cores. Cielo is an evolutionary improvement on a successful architecture previously available to many of our codes, thus providing a basis for understanding the capabilities of this new architecture. Using three codes strategically important to the ASC campaign, supplemented with some micro-benchmarks that expose the fundamental capabilities of the XE6, we report on the performance characteristics and capabilities of Cielo.
Implementing Molecular Dynamics for Hybrid High Performance Computers - 1. Short Range Forces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, W Michael; Wang, Peng; Plimpton, Steven J
The use of accelerators such as general-purpose graphics processing units (GPGPUs) has become popular in scientific computing applications due to their low cost, impressive floating-point capabilities, high memory bandwidth, and low electrical power requirements. Hybrid high performance computers, machines with more than one type of floating-point processor, are now becoming more prevalent due to these advantages. In this work, we discuss several important issues in porting a large molecular dynamics code for use on parallel hybrid machines: 1) choosing a hybrid parallel decomposition that works on central processing units (CPUs) with distributed memory and accelerator cores with shared memory, 2) minimizing the amount of code that must be ported for efficient acceleration, 3) utilizing the available processing power from both many-core CPUs and accelerators, and 4) choosing a programming model for acceleration. We present our solution to each of these issues for short-range force calculation in the molecular dynamics package LAMMPS. We describe algorithms for efficient short range force calculation on hybrid high performance machines. We describe a new approach for dynamic load balancing of work between CPU and accelerator cores. We describe the Geryon library that allows a single code to compile with both CUDA and OpenCL for use on a variety of accelerators. Finally, we present results on a parallel test cluster containing 32 Fermi GPGPUs and 180 CPU cores.
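The dynamic load balancing idea, shifting work between CPU and accelerator cores until their measured times equalize, can be sketched with a simple relaxation rule; the timings and rates below are simulated, and this is an illustration of the balancing concept, not the LAMMPS implementation.

```python
# Nudge the accelerator's share of work toward the split that makes
# CPU and GPU finish at the same time. Timings are simulated.
def rebalance(frac_gpu, t_gpu, t_cpu, relax=0.5):
    rate_gpu = frac_gpu / t_gpu              # implied work per unit time
    rate_cpu = (1.0 - frac_gpu) / t_cpu
    target = rate_gpu / (rate_gpu + rate_cpu)
    return frac_gpu + relax * (target - frac_gpu)

frac = 0.5
for step in range(6):
    t_gpu = frac / 3.0                       # pretend GPU is 3x faster per unit
    t_cpu = (1.0 - frac) / 1.0
    frac = rebalance(frac, t_gpu, t_cpu)
    print(f"step {step}: gpu fraction = {frac:.3f}")
# converges toward 0.75, where both processors finish together
```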
Hybrid Reduced Order Modeling Algorithms for Reactor Physics Calculations
NASA Astrophysics Data System (ADS)
Bang, Youngsuk
Reduced order modeling (ROM) has been recognized as an indispensable approach when the engineering analysis requires many executions of high fidelity simulation codes. Examples of such engineering analyses in nuclear reactor core calculations, representing the focus of this dissertation, include the functionalization of the homogenized few-group cross-sections in terms of the various core conditions, e.g. burn-up, fuel enrichment, temperature, etc. This is done via assembly calculations which are executed many times to generate the required functionalization for use in the downstream core calculations. Other examples are sensitivity analysis, used to determine important core attribute variations due to input parameter variations, and uncertainty quantification, employed to estimate core attribute uncertainties originating from input parameter uncertainties. ROM constructs a surrogate model with quantifiable accuracy which can replace the original code for subsequent engineering analysis calculations. This is achieved by reducing the effective dimensionality of the input parameter, the state variable, or the output response spaces, by projection onto the so-called active subspaces. Confining the variations to the active subspace allows one to construct an ROM model of reduced complexity which can be solved more efficiently. This dissertation introduces a new algorithm to render reduction with the reduction errors bounded by a user-defined error tolerance, which addresses the main challenge of existing ROM techniques. Bounding the error is the key to ensuring that the constructed ROM models are robust for all possible applications. Providing such error bounds represents one of the algorithmic contributions of this dissertation to the ROM state-of-the-art. Recognizing that ROM techniques have been developed to render reduction at different levels, e.g. the input parameter space, the state space, and the response space, this dissertation offers a set of novel hybrid ROM algorithms which can be readily integrated into existing methods and offer higher computational efficiency and defendable accuracy of the reduced models. For example, the snapshots ROM algorithm is hybridized with the range finding algorithm to render reduction in the state space, e.g. the flux in reactor calculations. In another implementation, the perturbation theory used to calculate first order derivatives of responses with respect to parameters is hybridized with a forward sensitivity analysis approach to render reduction in the parameter space. Reduction in the state and parameter spaces can be combined to render further reduction at the interface between different physics codes in a multi-physics model, with the accuracy quantified in a similar manner to the single physics case. Although the proposed algorithms are generic in nature, we focus here on radiation transport models used in support of the design and analysis of nuclear reactor cores. In particular, we focus on replacing the traditional assembly calculations by ROM models to facilitate the generation of homogenized cross-sections for downstream core calculations. The implication is that assembly calculations could be done instantaneously, thereby precluding the need for the expensive evaluation of the few-group cross-sections for all possible core conditions. Given the generic nature of the algorithms, we make an effort to introduce the material in a general form to allow non-nuclear engineers to benefit from this work.
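The snapshot/range-finding combination reduces, in its simplest form, to growing a reduced basis until the snapshot projection error meets a user-set tolerance, which is exactly the kind of quantifiable bound discussed above. A generic linear-algebra sketch (not the dissertation's reactor codes):

```python
# Grow a reduced basis from snapshots until a relative projection-error
# tolerance is met. Synthetic low-rank data stands in for flux snapshots.
import numpy as np

rng = np.random.default_rng(0)
snapshots = rng.standard_normal((1000, 5)) @ rng.standard_normal((5, 60))
snapshots += 1e-6 * rng.standard_normal(snapshots.shape)   # small noise floor

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
tol = 1e-4
for r in range(1, len(s) + 1):
    Ur = U[:, :r]                                          # candidate basis
    err = (np.linalg.norm(snapshots - Ur @ (Ur.T @ snapshots))
           / np.linalg.norm(snapshots))
    if err < tol:
        break
print(f"rank-{r} basis meets relative tolerance {tol} (err = {err:.2e})")
```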
Soft-core processor study for node-based architectures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Houten, Jonathan Roger; Jarosz, Jason P.; Welch, Benjamin James
2008-09-01
Node-based architecture (NBA) designs for future satellite projects hold the promise of decreasing system development time and costs, size, weight, and power, and positioning the laboratory to address other emerging mission opportunities quickly. Reconfigurable Field Programmable Gate Array (FPGA) based modules will comprise the core of several of the NBA nodes. Microprocessing capabilities will be necessary with varying degrees of mission-specific performance requirements on these nodes. To enable the flexibility of these reconfigurable nodes, it is advantageous to incorporate the microprocessor into the FPGA itself, either as a hard-core processor built into the FPGA or as a soft-core processor built out of FPGA elements. This document describes the evaluation of three reconfigurable FPGA-based processors for use in future NBA systems--two soft cores (MicroBlaze and non-fault-tolerant LEON) and one hard core (PowerPC 405). Two standard performance benchmark applications were developed for each processor. The first, Dhrystone, is a fixed-point operation metric. The second, Whetstone, is a floating-point operation metric. Several trials were run at varying code locations, loop counts, processor speeds, and cache configurations. FPGA resource utilization was recorded for each configuration. Cache configurations impacted the results greatly; for optimal processor efficiency it is necessary to enable caches on the processors. Processor caches carry a penalty; cache error mitigation is necessary when operating in a radiation environment.
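For readers unfamiliar with how such benchmarks are scored, a timed Dhrystone run is conventionally reduced to DMIPS by dividing the loop rate by 1757, the VAX 11/780 reference score. A small sketch of that arithmetic (all numbers are invented, not the report's results):

def dmips(iterations, elapsed_seconds, cpu_mhz=None):
    # Dhrystones per second, normalized to the VAX 11/780 reference (1757/s)
    score = (iterations / elapsed_seconds) / 1757.0
    if cpu_mhz is not None:
        return score / cpu_mhz          # DMIPS/MHz, useful for soft cores
    return score

print(dmips(5_000_000, 35.2))           # absolute DMIPS
print(dmips(5_000_000, 35.2, 100.0))    # DMIPS/MHz at a 100 MHz FPGA clock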
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kodavasal, Janardhan; Harms, Kevin; Srivastava, Priyesh
A closed-cycle gasoline compression ignition engine simulation near top dead center (TDC) was used to profile the performance of a parallel commercial engine computational fluid dynamics code as it was scaled on up to 4096 cores of an IBM Blue Gene/Q supercomputer. The test case has 9 million cells near TDC, with a fixed mesh size of 0.15 mm, and was run on configurations ranging from 128 to 4096 cores. Profiling was done for a small duration of 0.11 crank angle degrees near TDC during ignition. Optimization of input/output performance resulted in a significant speedup in reading restart files, and in an over 100-times speedup in writing restart files and files for post-processing. Improvements to communication resulted in a 1400-times speedup in the mesh load balancing operation during initialization, on 4096 cores. An improved, “stiffness-based” algorithm for load balancing chemical kinetics calculations was developed, which results in an over 3-times faster run-time near ignition on 4096 cores relative to the original load balancing scheme. With this improvement to load balancing, the code achieves over 78% scaling efficiency on 2048 cores, and over 65% scaling efficiency on 4096 cores, relative to 256 cores.
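The scaling-efficiency figures quoted here follow the standard strong-scaling definition; a small sketch of the arithmetic (the run times below are placeholders, not the paper's measurements):

def strong_scaling_efficiency(t_ref, n_ref, t_n, n):
    # Speedup relative to the n_ref-core reference, divided by the ideal n/n_ref
    return (t_ref * n_ref) / (t_n * n)

t_256 = 1000.0                    # hypothetical seconds on 256 cores
t_2048 = t_256 / (8 * 0.78)       # a run 6.24x faster instead of the ideal 8x
print(strong_scaling_efficiency(t_256, 256, t_2048, 2048))   # ~0.78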
DOE Office of Scientific and Technical Information (OSTI.GOV)
Denman, Matthew R.; Brooks, Dusty Marie
Sandia National Laboratories (SNL) has conducted an uncertainty analysis (UA) on the Fukushima Daiichi unit (1F1) accident progression with the MELCOR code. Volume I of the 1F1 UA discusses the physical modeling details and time history results of the UA. Volume II of the 1F1 UA discusses the statistical viewpoint. The model used was developed for a previous accident reconstruction investigation jointly sponsored by the US Department of Energy (DOE) and Nuclear Regulatory Commission (NRC). The goal of this work was to perform a focused evaluation of uncertainty in core damage progression behavior and its effect on key figures-of-merit (e.g., hydrogen production, fraction of intact fuel, vessel lower head failure) and in doing so assess the applicability of traditional sensitivity analysis techniques.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cardoni, Jeffrey N.; Kalinich, Donald A.
2014-02-01
Sandia National Laboratories (SNL) plans to conduct uncertainty analyses (UA) on the Fukushima Daiichi unit (1F1) plant with the MELCOR code. The model to be used was developed for a previous accident reconstruction investigation jointly sponsored by the US Department of Energy (DOE) and Nuclear Regulatory Commission (NRC). However, that study only examined a handful of various model inputs and boundary conditions, and the predictions yielded only fair agreement with plant data and current release estimates. The goal of this uncertainty study is to perform a focused evaluation of uncertainty in core melt progression behavior and its effect on key figures-of-merit (e.g., hydrogen production, vessel lower head failure, etc.). In preparation for the SNL Fukushima UA work, a scoping study has been completed to identify important core melt progression parameters for the uncertainty analysis. The study also lays out a preliminary UA methodology.
Programming for 1.6 Million cores: Early experiences with IBM's BG/Q SMP architecture
NASA Astrophysics Data System (ADS)
Glosli, James
2013-03-01
With the stall in clock cycle improvements a decade ago, the drive for computational performance has continued along a path of increasing core counts on a processor. The multi-core evolution has been expressed in both symmetric multiprocessor (SMP) architectures and CPU/GPU architectures. Debates rage in the high performance computing (HPC) community over which architecture best serves HPC. In this talk I will not attempt to resolve that debate but perhaps fuel it. I will discuss the experience of exploiting Sequoia, a 98,304-node IBM Blue Gene/Q SMP at Lawrence Livermore National Laboratory. The advantages and challenges of leveraging the computational power of BG/Q will be detailed through the discussion of two applications. The first application is a Molecular Dynamics code called ddcMD. This is a code developed over the last decade at LLNL and ported to BG/Q. The second application is a cardiac modeling code called Cardioid. This is a code that was recently designed and developed at LLNL to exploit the fine scale parallelism of BG/Q's SMP architecture. Through the lens of these efforts I'll illustrate the need to rethink how we express and implement our computational approaches. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrews, Nathan; Faucett, Christopher; Haskin, Troy Christopher
Following the conclusion of the first phase of the crosswalk analysis, one of the key unanswered questions was whether or not the deviations found would persist during a partially recovered accident scenario, similar to the one that occurred in TMI-2. In particular this analysis aims to compare the impact of core degradation morphology on quenching models inherent within the two codes and the coolability of debris during partially recovered accidents. A primary motivation for this study is the development of insights into how uncertainties in core damage progression models impact the ability to assess the potential for recovery of a degraded core. These quench and core recovery models are of the most interest when there is a significant amount of core damage, but intact and degraded fuel still remain in the core region or the lower plenum. Accordingly this analysis presents a spectrum of partially recovered accident scenarios by varying both water injection timing and rate to highlight the impact of core degradation phenomena on recovered accident scenarios. This analysis uses the newly released MELCOR 2.2 rev. 9665 and MAAP5, Version 5.04. These code versions incorporate a significant number of modifications that have been driven by analyses and forensic evidence obtained from the Fukushima Daiichi reactor site.
NASA Astrophysics Data System (ADS)
Guan, Zhen; Pekurovsky, Dmitry; Luce, Jason; Thornton, Katsuyo; Lowengrub, John
The structural phase field crystal (XPFC) model can be used to model grain growth in polycrystalline materials at diffusive time-scales while maintaining atomic scale resolution. However, the governing equation of the XPFC model is an integral-partial-differential-equation (IPDE), which poses challenges in implementation on high performance computing (HPC) platforms. In collaboration with the XSEDE Extended Collaborative Support Service, we developed a distributed memory HPC solver for the XPFC model, which combines parallel multigrid and P3DFFT. The performance benchmarking on the Stampede supercomputer indicates near linear strong and weak scaling for both multigrid and transfer time between multigrid and FFT modules up to 1024 cores. Scalability of the FFT module begins to decline at 128 cores, but it is sufficient for the type of problem we will be examining. We have demonstrated simulations using 1024 cores, and we expect to scale to 4096 cores and beyond. Ongoing work involves optimization of MPI/OpenMP-based codes for the Intel KNL Many-Core Architecture. This prepares the code for coming pre-exascale systems, in particular many-core systems such as Stampede 2.0 and Cori 2 at NERSC, without sacrificing efficiency on other general HPC systems.
Decay heat uncertainty quantification of MYRRHA
NASA Astrophysics Data System (ADS)
Fiorito, Luca; Buss, Oliver; Hoefer, Axel; Stankovskiy, Alexey; Eynde, Gert Van den
2017-09-01
MYRRHA is a lead-bismuth cooled MOX-fueled accelerator driven system (ADS) currently in the design phase at SCK·CEN in Belgium. The correct evaluation of the decay heat and of its uncertainty level is very important for the safety demonstration of the reactor. In the first part of this work we assessed the decay heat released by the MYRRHA core using the ALEPH-2 burnup code. The second part of the study focused on the nuclear data uncertainty and covariance propagation to the MYRRHA decay heat. Radioactive decay data, independent fission yield and cross section uncertainties/covariances were propagated using two nuclear data sampling codes, namely NUDUNA and SANDY. According to the results, 238U cross sections and fission yield data are the largest contributors to the MYRRHA decay heat uncertainty. The calculated uncertainty values are deemed acceptable from the safety point of view as they are well within the available regulatory limits.
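Schematically, the sampling-based propagation works by drawing nuclear-data vectors from their covariance and re-evaluating the decay heat for each draw. A toy Monte Carlo sketch (generic, not the NUDUNA or SANDY implementation; the model and covariance are placeholders):

import numpy as np

rng = np.random.default_rng(1)

mean = np.array([1.0, 0.5, 2.0])            # placeholder nuclear-data values
cov = np.diag([0.02, 0.01, 0.05]) ** 2      # placeholder covariance matrix

def decay_heat(params):
    # stand-in for a full burnup/decay-heat calculation
    return 10.0 * params[0] + 4.0 * params[1] * params[2]

samples = rng.multivariate_normal(mean, cov, size=1000)
heats = np.array([decay_heat(p) for p in samples])
print(heats.mean(), heats.std())            # propagated decay-heat uncertainty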
Coding and Billing in Surgical Education: A Systems-Based Practice Education Program.
Ghaderi, Kimeya F; Schmidt, Scott T; Drolet, Brian C
Despite increased emphasis on systems-based practice through the Accreditation Council for Graduate Medical Education core competencies, few studies have examined what surgical residents know about coding and billing. We sought to create and measure the effectiveness of a multifaceted approach to improving resident knowledge and performance of documenting and coding outpatient encounters. We identified knowledge gaps and barriers to documentation and coding in the outpatient setting. We implemented a series of educational and workflow interventions with a group of 12 residents in a surgical clinic at a tertiary care center. To measure the effect of this program, we compared billing codes for 1 year before intervention (FY2012) to prospectively collected data from the postintervention period (FY2013). All related documentation and coding were verified by study-blinded auditors. Interventions took place at the outpatient surgical clinic at Rhode Island Hospital, a tertiary-care center. A cohort of 12 plastic surgery residents ranging from postgraduate year 2 through postgraduate year 6 participated in the interventional sequence. A total of 1285 patient encounters in the preintervention group were compared with 1170 encounters in the postintervention group. Using evaluation and management codes (E&M) as a measure of documentation and coding, we demonstrated a significant and durable increase in billing with supporting clinical documentation after the intervention. For established patient visits, the monthly average E&M code level increased from 2.14 to 3.05 (p < 0.01); for new patients the monthly average E&M level increased from 2.61 to 3.19 (p < 0.01). This study describes a series of educational and workflow interventions, which improved resident coding and billing of outpatient clinic encounters. Using externally audited coding data, we demonstrate significantly increased rates of higher complexity E&M coding in a stable patient population based on improved documentation and billing awareness by the residents.
Development of an object-oriented ORIGEN for advanced nuclear fuel modeling applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Skutnik, S.; Havloej, F.; Lago, D.
2013-07-01
The ORIGEN package serves as the core depletion and decay calculation module within the SCALE code system. A recent major re-factor of the ORIGEN code architecture, part of an overall modernization of the SCALE code system, has both greatly enhanced its maintainability and afforded several new capabilities useful for incorporating depletion analysis into other code frameworks. This paper will present an overview of the improved ORIGEN code architecture (including the methods and data structures introduced) as well as current and potential future applications utilizing the new ORIGEN framework.
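The depletion/decay problem ORIGEN solves is the Bateman system dN/dt = AN; a toy two-nuclide sketch via the matrix exponential (illustrative only; ORIGEN's actual solver and data are far more elaborate):

import numpy as np
from scipy.linalg import expm

lam1, lam2 = 1e-3, 5e-4                 # decay constants, 1/s
A = np.array([[-lam1,  0.0],            # parent decays away...
              [ lam1, -lam2]])          # ...feeding the daughter

N0 = np.array([1e20, 0.0])              # initial atom densities
N = expm(A * 3600.0) @ N0               # advance one hour
print(N)                                # parent depleted, daughter grown in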
Academic Interventions for Children with Dyslexia Who Have Phonological Core Deficits.
ERIC Educational Resources Information Center
Frost, Julie A.; Emery, Michael J.
1996-01-01
This article briefly defines phonological core deficits in cases of dyslexia; considers student classification based on federal and state learning disability placement guidelines; and suggests 10 interventions such as teaching metacognitive strategies, providing direct instruction in language analysis and the alphabetic code, and teaching reading…
NASA Astrophysics Data System (ADS)
Takeda, Takeshi; Maruyama, Yu; Watanabe, Tadashi; Nakamura, Hideo
Experiments simulating PWR intermediate-break loss-of-coolant accidents (IBLOCAs) with a 17% break at the hot leg or cold leg were conducted in the OECD/NEA ROSA-2 Project using the Large Scale Test Facility (LSTF). In the hot leg IBLOCA test, core uncovery started simultaneously with the liquid level drop in the crossover leg downflow-side before loop seal clearing (LSC), induced by steam condensation on accumulator coolant injected into the cold leg. Water remained on the upper core plate in the upper plenum due to counter-current flow limiting (CCFL) because of significant upward steam flow from the core. In the cold leg IBLOCA test, core dryout took place due to rapid liquid level drop in the core before LSC. Liquid was accumulated in the upper plenum, steam generator (SG) U-tube upflow-side and SG inlet plenum before the LSC due to CCFL by high velocity vapor flow, causing an enhanced decrease in the core liquid level. The RELAP5/MOD3.2.1.2 post-test analyses of the two LSTF experiments were performed employing the critical flow model in the code with a discharge coefficient of 1.0. In the hot leg IBLOCA case, the cladding surface temperature of simulated fuel rods was underpredicted due to overprediction of the core liquid level after the core uncovery. In the cold leg IBLOCA case, the cladding surface temperature was also underpredicted, due to later core uncovery than in the experiment. These results suggest that the code has remaining problems in properly predicting the primary coolant distribution.
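To illustrate where the discharge coefficient enters, consider a simple single-phase orifice relation (a hedged stand-in; RELAP5's actual critical flow model is considerably more elaborate, and all numbers below are invented):

import math

def break_mass_flow(c_d, area_m2, rho_kg_m3, dp_pa):
    # mdot = Cd * A * sqrt(2 * rho * dP); Cd linearly scales the break flow
    return c_d * area_m2 * math.sqrt(2.0 * rho_kg_m3 * dp_pa)

# 17% of a hypothetical 0.05 m^2 leg flow area, subcooled water at ~7 MPa
print(break_mass_flow(1.0, 0.17 * 0.05, 740.0, 7.0e6))   # kg/s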
Demonstration of Efficient Core Heating of Magnetized Fast Ignition in FIREX project
NASA Astrophysics Data System (ADS)
Johzaki, Tomoyuki
2017-10-01
Extensive theoretical and experimental research in the FIREX-I project over the past decade revealed that the large angular divergence of the laser generated electron beam is one of the most critical problems inhibiting efficient core heating in electron-driven fast ignition. To solve this problem, beam guiding using an externally applied kilo-tesla class magnetic field was proposed, and its feasibility has recently been numerically demonstrated. In 2016, integrated experiments at ILE Osaka University demonstrated core heating efficiencies reaching >5% and heated core temperatures of 1.7 keV. In these experiments, a kilo-tesla class magnetic field was applied to a cone-attached Cu(II) oleate spherical solid target by using a laser-driven capacitor-coil. The target was then imploded by the G-XII laser and heated by the PW-class LFEX laser. The heating efficiency was evaluated by measuring the number of Cu-Kα photons emitted. The heated core temperature was estimated by the X-ray intensity ratio of Cu Li-like and He-like emission lines. To understand the detailed dynamics of the core heating process, we carried out integrated simulations using the FI3 code system. Effects of magnetic fields on the implosion and electron beam transport, detailed core heating dynamics, and the resultant heating efficiency and core temperature will be presented. I will also discuss the prospect for an ignition-scale design of magnetized fast ignition using a solid ball target. This work is partially supported by JSPS KAKENHI Grant Numbers JP16H02245, JP26400532, JP15K21767, JP26400532, JP16K05638 and is performed with the support and under the auspices of the NIFS Collaboration Research program (NIFS12KUGK057, NIFS15KUGK087).
A Two-Step Approach to Uncertainty Quantification of Core Simulators
Yankov, Artem; Collins, Benjamin; Klein, Markus; ...
2012-01-01
For the multiple sources of error introduced into the standard computational regime for simulating reactor cores, rigorous uncertainty analysis methods are available primarily to quantify the effects of cross section uncertainties. Two methods for propagating cross section uncertainties through core simulators are the XSUSA statistical approach and the “two-step” method. The XSUSA approach, which is based on the SUSA code package, is fundamentally a stochastic sampling method. Alternatively, the two-step method utilizes generalized perturbation theory in the first step and stochastic sampling in the second step. The consistency of these two methods in quantifying uncertainties in the multiplication factor and in the core power distribution was examined in the framework of phase I-3 of the OECD Uncertainty Analysis in Modeling benchmark. With the Three Mile Island Unit 1 core as a base model for analysis, the XSUSA and two-step methods were applied with certain limitations, and the results were compared to those produced by other stochastic sampling-based codes. Based on the uncertainty analysis results, conclusions were drawn as to the method that is currently more viable for computing uncertainties in burnup and transient calculations.
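The generalized-perturbation-theory step rests on the standard first-order "sandwich rule", var(R) = S^T C S, relating response uncertainty to sensitivities S and the cross-section covariance C. A numeric sketch with placeholder values:

import numpy as np

S = np.array([0.9, -0.3, 0.1])          # relative sensitivities (dR/R)/(ds/s)
C = np.array([[4e-4, 1e-4, 0.0],        # placeholder relative covariance
              [1e-4, 9e-4, 0.0],
              [0.0,  0.0,  1e-4]])

var_R = S @ C @ S                        # sandwich rule
print(np.sqrt(var_R))                    # relative 1-sigma uncertainty on R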
Nuclear modules for space electric propulsion
NASA Technical Reports Server (NTRS)
Difilippo, F. C.
1998-01-01
Analysis of interplanetary cargo and piloted missions requires calculations of the performances and masses of subsystems to be integrated in a final design. In a preliminary and scoping stage the designer needs to evaluate options iteratively by using fast computer simulations. The Oak Ridge National Laboratory (ORNL) has been involved in the development of models and calculational procedures for the analysis (neutronic and thermal hydraulic) of power sources for nuclear electric propulsion. The nuclear modules will be integrated into the whole simulation of the nuclear electric propulsion system. The vehicles use either a Brayton direct-conversion cycle, using the heated helium from a NERVA-type reactor, or a potassium Rankine cycle, with the working fluid heated on the secondary side of a heat exchanger and lithium on the primary side coming from a fast reactor. Given a set of input conditions, the codes calculate composition, dimensions, volumes, and masses of the core, reflector, control system, pressure vessel, neutron and gamma shields, as well as the thermal hydraulic conditions of the coolant, clad and fuel. Input conditions are power, core life, pressure and temperature of the coolant at the inlet of the core, either the temperature of the coolant at the outlet of the core or the coolant mass flow, and the fluences and integrated doses at the cargo area. Using state-of-the-art neutron cross sections and transport codes, a database was created for the neutronic performance of both reactor designs. The free parameters of the models are the moderator/fuel mass ratio for the NERVA reactor and the enrichment and the pitch of the lattice for the fast reactor. Reactivity and energy balance equations are simultaneously solved to find the reactor design. Thermal-hydraulic conditions are calculated by solving the one-dimensional versions of the equations of conservation of mass, energy, and momentum with compressible flow.
Nuclear thermal propulsion engine system design analysis code development
NASA Astrophysics Data System (ADS)
Pelaccio, Dennis G.; Scheil, Christine M.; Petrosky, Lyman J.; Ivanenok, Joseph F.
1992-01-01
A Nuclear Thermal Propulsion (NTP) Engine System Design Analysis Code has recently been developed to characterize key NTP engine system design features. Such a versatile, standalone NTP system performance and engine design code is required to support ongoing and future engine system and vehicle design efforts associated with proposed Space Exploration Initiative (SEI) missions of interest. Key areas of interest in the engine system modeling effort were the reactor, shielding, and inclusion of an engine multi-redundant propellant pump feed system design option. A solid-core nuclear thermal reactor and internal shielding code model was developed to estimate the reactor's thermal-hydraulic and physical parameters based on a prescribed thermal output, which was integrated into a state-of-the-art engine system design model. The reactor code module has the capability to model graphite, composite, or carbide fuels. Key output from the model consists of reactor parameters such as thermal power, pressure drop, thermal profile, and heat generation in cooled structures (reflector, shield, and core supports), as well as engine system parameters such as weight, dimensions, pressures, temperatures, mass flows, and performance. The model's overall analysis methodology and its key assumptions and capabilities are summarized in this paper.
Fukui, Sadaaki; Matthias, Marianne S; Salyers, Michelle P
2015-01-01
Shared decision-making (SDM) is imperative to person-centered care, yet little is known about what aspects of SDM are targeted during psychiatric visits. This secondary data analysis (191 psychiatric visits with 11 providers, coded with a validated SDM coding system) revealed two factors (scientific and preference-based discussions) underlying SDM communication. Preference-based discussion occurred less. Both provider and consumer initiation of SDM elements and decision complexity were associated with greater discussions in both factors, but were more strongly associated with scientific discussion. Longer visit length correlated with only scientific discussion. Providers' understanding of core domains could facilitate engaging consumers in SDM.
Cross-separatrix Coupling in Nonlinear Global Electrostatic Turbulent Transport in C-2U
NASA Astrophysics Data System (ADS)
Lau, Calvin; Fulton, Daniel; Bao, Jian; Lin, Zhihong; Binderbauer, Michl; Tajima, Toshiki; Schmitz, Lothar; TAE Team
2017-10-01
In recent years, the progress of the C-2/C-2U advanced beam-driven field-reversed configuration (FRC) experiments at Tri Alpha Energy, Inc. has pushed FRCs to transport limited regimes. Understanding particle and energy transport is a vital step towards an FRC reactor, and two particle-in-cell microturbulence codes, the Gyrokinetic Toroidal Code (GTC) and A New Code (ANC), are being developed and applied toward this goal. Previous local electrostatic GTC simulations find the core to be robustly stable with drift-wave instability only in the scrape-off layer (SOL) region. However, experimental measurements showed fluctuations in both regions; one possibility is that fluctuations in the core originate from the SOL, suggesting the need for non-local simulations with cross-separatrix coupling. Current global ANC simulations with gyrokinetic ions and adiabatic electrons find that non-local effects (1) modify linear growth-rates and frequencies of instabilities and (2) allow instability to move from the unstable SOL to the linearly stable core. Nonlinear spreading is also seen prior to mode saturation. We also report on the progress of the first turbulence simulations in the SOL. This work is supported by the Norman Rostoker Fellowship.
Zhang, S.; Yuen, D.A.; Zhu, A.; Song, S.; George, D.L.
2011-01-01
We parallelized the GeoClaw code on a one-level grid using OpenMP in March 2011 to meet the urgent need of simulating near-shore tsunami waves from the 2011 Tohoku event, and achieved over 75% of the potential speed-up on an eight core Dell Precision T7500 workstation [1]. After submitting that work to SC11 - the International Conference for High Performance Computing - we obtained an unreleased OpenMP version of GeoClaw from David George, who developed the GeoClaw code as part of his Ph.D. thesis. In this paper, we will show the complementary characteristics of the two approaches used in parallelizing GeoClaw and the speed-up obtained by combining the advantage of each of the two individual approaches with adaptive mesh refinement (AMR), demonstrating the capabilities of running GeoClaw efficiently on many-core systems. We will also show a novel simulation of the Tohoku 2011 tsunami waves inundating the Sendai airport and Fukushima Nuclear Power Plants, over which the finest grid distance of 20 meters is achieved through a 4-level AMR. This simulation yields quite good predictions about the wave-heights and travel time of the tsunami waves. © 2011 IEEE.
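Reading the quoted "75% of the potential speed-up" through Amdahl's law (assuming "potential" means the ideal 8x on eight cores) lets one back out the implied parallel fraction of the runtime; a sketch of that arithmetic:

def amdahl_speedup(parallel_fraction, cores):
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

target = 0.75 * 8                   # 75% of the ideal 8x speedup
lo, hi = 0.0, 1.0
for _ in range(60):                 # bisect for the matching parallel fraction
    f = 0.5 * (lo + hi)
    if amdahl_speedup(f, 8) < target:
        lo = f
    else:
        hi = f
print(f)                            # ~0.95: about 95% of the runtime parallelizes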
TRAC-PD2 posttest analysis of CCTF Test C1-16 (Run 025). [Cylindrical Core Test Facility]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sugimoto, J.
The TRAC-PD2 code version was used to analyze CCTF Test C1-16 (Run 025). The results indicate that the core heater rod temperatures, the liquid mass in the vessel, and differential pressures in the primary loop are predicted well, but the void fraction distribution in the core and water accumulation in the upper plenum are not in good agreement with the data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sesonske, A.
1980-08-01
Detailed core management arrangements are developed requiring four operating cycles for the transition from the present three-batch loading to an extended burnup four-batch plan for Zion-1. The ARMP code EPRI-NODE-P was used for core modeling. Although this work is preliminary, uranium and economic savings during the transition cycles appear to be of the order of 6 percent.
NASA Astrophysics Data System (ADS)
Rossi, Francesco; Londrillo, Pasquale; Sgattoni, Andrea; Sinigardi, Stefano; Turchetti, Giorgio
2012-12-01
We present `jasmine', an implementation of a fully relativistic, 3D, electromagnetic Particle-In-Cell (PIC) code, capable of running simulations in various laser plasma acceleration regimes on Graphics Processing Unit (GPU) HPC clusters. Standard energy/charge preserving FDTD-based algorithms have been implemented using double precision and quadratic (or arbitrary sized) shape functions for the particle weighting. When porting a PIC scheme to the GPU architecture (or, in general, a shared memory environment), the particle-to-grid operations (e.g. the evaluation of the current density) require special care to avoid memory inconsistencies and conflicts. Here we present a robust implementation of this operation that is efficient for any number of particles per cell and particle shape function order. Our algorithm exploits the exposed GPU memory hierarchy and avoids the use of atomic operations, which can hurt performance especially when many particles lie in the same cell. We show the code's multi-GPU scalability results and present a dynamic load-balancing algorithm. The code is written using a python-based C++ meta-programming technique which translates into a high level of modularity and allows for easy performance tuning and simple extension of the core algorithms to various simulation schemes.
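The flavor of conflict-free particle-to-grid deposition can be shown with a serial NumPy sketch: rather than accumulating each particle's contribution atomically, contributions are reduced histogram-style per cell (this illustrates only the reduction pattern, not jasmine's GPU algorithm):

import numpy as np

rng = np.random.default_rng(2)
n_cells, n_particles = 64, 10_000
x = rng.uniform(0.0, n_cells, n_particles)    # particle positions on a 1D grid
w = np.ones(n_particles)                       # particle charges/weights

# Cloud-in-cell (linear shape): each particle deposits to its two neighbors.
i = np.floor(x).astype(int)
frac = x - i
rho = (np.bincount(i, weights=w * (1 - frac), minlength=n_cells + 1)
       + np.bincount(i + 1, weights=w * frac, minlength=n_cells + 1))
print(rho.sum())                               # total charge is conserved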
Analysis of unmitigated large break loss of coolant accidents using MELCOR code
NASA Astrophysics Data System (ADS)
Pescarini, M.; Mascari, F.; Mostacci, D.; De Rosa, F.; Lombardo, C.; Giannetti, F.
2017-11-01
In the framework of the severe accident research activity developed by ENEA, a MELCOR nodalization of a generic Pressurized Water Reactor of 900 MWe has been developed. The aim of this paper is to present the analysis of MELCOR code calculations concerning two independent unmitigated large break loss of coolant accident transients occurring in the cited type of reactor. In particular, the analysis and comparison of the transients initiated by an unmitigated double-ended cold leg rupture and an unmitigated double-ended hot leg rupture in loop 1 of the primary cooling system is presented herein. This activity has been performed focusing specifically on the in-vessel phenomenology that characterizes this kind of accident. The analysis of the thermal-hydraulic transient phenomena and the core degradation phenomena is therefore presented here. The analysis of the calculated data shows the capability of the code to reproduce the phenomena typical of these transients and permits their phenomenological study. A first sequence of main events is presented here and shows that the cold leg break transient proceeds faster than the hot leg break transient because of the position of the break. Further analyses are in progress to quantitatively assess the results of the code nodalization for accident management strategy definition and fission product source term evaluation.
Power Aware Signal Processing Environment (PASPE) for PAC/C
2003-02-01
For our implementation, the Annapolis FFT core was radix-256, and therefore the smallest PN code length that could be processed was the… PN-64. A C-code version of correlate was compared to the FPGA implementation. The results in Figure 68 show that for a PN-1024, the…
Method for detecting core malware sites related to biomedical information systems.
Kim, Dohoon; Choi, Donghee; Jin, Jonghyun
2015-01-01
Most advanced persistent threat attacks target web users through malicious code within landing (exploit) or distribution sites. There is an urgent need to block the affected websites. Attacks on biomedical information systems are no exception to this issue. In this paper, we present a method for locating malicious websites that attempt to attack biomedical information systems. Our approach uses malicious code crawling to rearrange websites in the order of their risk index by analyzing the centrality between malware sites and proactively eliminates the root of these sites by finding the core-hub node, thereby reducing unnecessary security policies. In particular, we dynamically estimate the risk index of the affected websites by analyzing various centrality measures and converting them into a single quantified vector. On average, the proactive elimination of core malicious websites results in an average improvement in zero-day attack detection of more than 20%.
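The ranking step can be pictured with networkx (an illustrative re-creation, not the paper's code): several centrality measures over the site link graph are averaged into a single risk index, and the top-ranked node is treated as the core-hub candidate:

import networkx as nx
import numpy as np

# Toy link graph between landing/distribution sites (labels are placeholders)
G = nx.DiGraph([("a", "b"), ("a", "c"), ("b", "c"),
                ("d", "c"), ("c", "e"), ("c", "f")])

measures = [nx.in_degree_centrality(G),
            nx.betweenness_centrality(G),
            nx.pagerank(G)]

def risk_index(node):
    # collapse the centrality measures into one quantified value per site
    return float(np.mean([m[node] for m in measures]))

ranked = sorted(G.nodes, key=risk_index, reverse=True)
print(ranked[0])    # candidate core-hub node to eliminate first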
PlasmaPy: beginning a community developed Python package for plasma physics
NASA Astrophysics Data System (ADS)
Murphy, Nicholas A.; Huang, Yi-Min; PlasmaPy Collaboration
2016-10-01
In recent years, researchers in several disciplines have collaborated on community-developed open source Python packages such as Astropy, SunPy, and SpacePy. These packages provide core functionality, common frameworks for data analysis and visualization, and educational tools. We propose that our community begin the development of PlasmaPy: a new open source core Python package for plasma physics. PlasmaPy could include commonly used functions in plasma physics, easy-to-use plasma simulation codes, Grad-Shafranov solvers, eigenmode solvers, and tools to analyze both simulations and experiments. The development will include modern programming practices such as version control, embedding documentation in the code, unit tests, and avoiding premature optimization. We will describe early code development on PlasmaPy, and discuss plans moving forward. The success of PlasmaPy depends on active community involvement and a welcoming and inclusive environment, so anyone interested in joining this collaboration should contact the authors.
FPGA acceleration of rigid-molecule docking codes
Sukhwani, B.; Herbordt, M.C.
2011-01-01
Modelling the interactions of biological molecules, or docking, is critical both to understanding basic life processes and to designing new drugs. The field programmable gate array (FPGA) based acceleration of a recently developed, complex, production docking code is described. The authors found that it is necessary to extend their previous three-dimensional (3D) correlation structure in several ways, most significantly to support simultaneous computation of several correlation functions. The result for small-molecule docking is a 100-fold speed-up of a section of the code that represents over 95% of the original run-time. An additional 2% is accelerated through a previously described method, yielding a total acceleration of 36× over a single core and 10× over a quad-core. This approach is found to be an ideal complement to graphics processing unit (GPU) based docking, which excels in the protein–protein domain. PMID:21857870
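At its core, rigid docking scores a shape cross-correlation over all relative translations, which is what the 3D correlation hardware accelerates; a minimal FFT-based sketch in NumPy (periodic boundaries for brevity, and a single correlation function rather than the several computed simultaneously in the FPGA design):

import numpy as np

def correlation_scores(receptor, ligand):
    # Cross-correlation of the ligand grid against the receptor grid
    R = np.fft.fftn(receptor)
    L = np.fft.fftn(ligand, s=receptor.shape)     # zero-pad to receptor grid
    return np.fft.ifftn(R * np.conj(L)).real

rng = np.random.default_rng(3)
receptor = rng.random((32, 32, 32))
ligand = np.zeros((8, 8, 8))
ligand[2:6, 2:6, 2:6] = 1.0
scores = correlation_scores(receptor, ligand)
print(np.unravel_index(scores.argmax(), scores.shape))   # best translation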
DOE Office of Scientific and Technical Information (OSTI.GOV)
Galloway, Jack; Matthews, Topher
The development of MAMBA is targeted at capturing both core wide CRUD induced power shifts (CIPS) as well as pin-level CRUD induced localized corrosion (CILC). Both CIPS and CILC require some sort of information from thermal-hydraulic, neutronics, and fuel performance codes, although the degree of coupling is different for the two effects. Since CIPS necessarily requires a core-wide power distribution solve, it requires tight coupling with a neutronics code. Conversely, CILC tends to be an individual pin phenomenon, requiring tight coupling with a fuel performance code. As efforts are now focused on coupling MAMBA within the VERA suite, a natural separation has surfaced in which a FORTRAN rewrite of MAMBA is optimal for VERA integration to capture CIPS behavior, while a CILC-focused calculation would benefit from a tight coupling with BISON, motivating a MOOSE version of MAMBA.
GeauxDock: Accelerating Structure-Based Virtual Screening with Heterogeneous Computing
Fang, Ye; Ding, Yun; Feinstein, Wei P.; Koppelman, David M.; Moreno, Juana; Jarrell, Mark; Ramanujam, J.; Brylinski, Michal
2016-01-01
Computational modeling of drug binding to proteins is an integral component of direct drug design. Particularly, structure-based virtual screening is often used to perform large-scale modeling of putative associations between small organic molecules and their pharmacologically relevant protein targets. Because of a large number of drug candidates to be evaluated, an accurate and fast docking engine is a critical element of virtual screening. Consequently, highly optimized docking codes are of paramount importance for the effectiveness of virtual screening methods. In this communication, we describe the implementation, tuning and performance characteristics of GeauxDock, a recently developed molecular docking program. GeauxDock is built upon the Monte Carlo algorithm and features a novel scoring function combining physics-based energy terms with statistical and knowledge-based potentials. Developed specifically for heterogeneous computing platforms, the current version of GeauxDock can be deployed on modern, multi-core Central Processing Units (CPUs) as well as massively parallel accelerators, Intel Xeon Phi and NVIDIA Graphics Processing Unit (GPU). First, we carried out a thorough performance tuning of the high-level framework and the docking kernel to produce a fast serial code, which was then ported to shared-memory multi-core CPUs yielding a near-ideal scaling. Further, using Xeon Phi gives 1.9× performance improvement over a dual 10-core Xeon CPU, whereas the best GPU accelerator, GeForce GTX 980, achieves a speedup as high as 3.5×. On that account, GeauxDock can take advantage of modern heterogeneous architectures to considerably accelerate structure-based virtual screening applications. GeauxDock is open-sourced and publicly available at www.brylinski.org/geauxdock and https://figshare.com/articles/geauxdock_tar_gz/3205249. PMID:27420300
Monte Carlo modelling of TRIGA research reactor
NASA Astrophysics Data System (ADS)
El Bakkari, B.; Nacir, B.; El Bardouni, T.; El Younoussi, C.; Merroun, O.; Htet, A.; Boulaich, Y.; Zoubair, M.; Boukhal, H.; Chakir, M.
2010-10-01
The Moroccan 2 MW TRIGA MARK II research reactor at Centre des Etudes Nucléaires de la Maâmora (CENM) achieved initial criticality on May 2, 2007. The reactor is designed to effectively implement the various fields of basic nuclear research, manpower training, and production of radioisotopes for their use in agriculture, industry, and medicine. This study deals with the neutronic analysis of the 2-MW TRIGA MARK II research reactor at CENM and validation of the results by comparisons with the experimental, operational, and available final safety analysis report (FSAR) values. The study was prepared in collaboration between the Laboratory of Radiation and Nuclear Systems (ERSN-LMR) of the Faculty of Sciences of Tetuan (Morocco) and CENM. The 3-D continuous energy Monte Carlo code MCNP (version 5) was used to develop a versatile and accurate full model of the TRIGA core. The model represents in detail all components of the core with literally no physical approximation. Continuous energy cross-section data from the more recent nuclear data evaluations (ENDF/B-VI.8, ENDF/B-VII.0, JEFF-3.1, and JENDL-3.3) as well as S(α, β) thermal neutron scattering functions distributed with the MCNP code were used. The cross-section libraries were generated by using the NJOY99 system updated to its more recent patch file "up259". The consistency and accuracy of both the Monte Carlo simulation and neutron transport physics were established by benchmarking the TRIGA experiments. Core excess reactivity, total and integral control rod worths as well as power peaking factors were used in the validation process. Results of calculations are analysed and discussed.
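For reference, the excess-reactivity figure used in such validation follows directly from the computed multiplication factor via the standard definition rho = (k - 1)/k; a sketch of the conversion (the keff below is a placeholder, not the TRIGA result):

def reactivity_pcm(k_eff):
    # rho = (k - 1) / k, expressed in pcm (1e-5)
    return (k_eff - 1.0) / k_eff * 1e5

def reactivity_dollars(k_eff, beta_eff=0.007):
    # beta_eff ~ 0.007 is an assumed typical delayed-neutron fraction
    return (k_eff - 1.0) / k_eff / beta_eff

k = 1.05                        # hypothetical computed keff
print(reactivity_pcm(k))        # ~4762 pcm
print(reactivity_dollars(k))    # ~6.8 dollars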
Well logs and core data from selected cored intervals, National Petroleum Reserve, Alaska
Nelson, Philip H.; Kibler, Joyce E.
2001-01-01
This report is preliminary and has not been reviewed for conformity with U.S. Geological Survey editorial standards or with the North American Stratigraphic Code. Any use of trade, firm, or product names is for descriptive purposes only and does not imply endorsement by the U.S. Government.
Sensitivity analysis of Monju using ERANOS with JENDL-4.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tamagno, P.; Van Rooijen, W. F. G.; Takeda, T.
2012-07-01
This paper deals with sensitivity analysis using JENDL-4.0 nuclear data applied to the Monju reactor. In 2010 the Japan Atomic Energy Agency (JAEA) released a new set of nuclear data: JENDL-4.0. This new evaluation is expected to contain improved data on actinides and covariance matrices. Covariance matrices are a key point in quantification of uncertainties due to basic nuclear data. For sensitivity analysis, the well-established ERANOS [1] code was chosen because of its integrated modules that allow users to perform a sensitivity analysis of complex reactor geometries. A JENDL-4.0 cross-section library is not available for ERANOS. Therefore a cross-section library had to be made from the original nuclear data set, available as ENDF formatted files. This is achieved by using the following codes: NJOY, CALENDF, MERGE and GECCO in order to create a library for the ECCO cell code (part of ERANOS). In order to make sure of the accuracy of the new ECCO library, two benchmark experiments have been analyzed: the MZA and MZB cores of the MOZART program measured at the ZEBRA facility in the UK. These were chosen due to their similarity to the Monju core. Using the JENDL-4.0 ECCO library we have analyzed the criticality of Monju during the restart in 2010. We have obtained good agreement with the measured criticality. Perturbation calculations have been performed between JENDL-3.3 and JENDL-4.0 based models. The isotopes 239Pu, 238U, 241Am and 241Pu account for a major part of observed differences.
Preliminary Analysis of SiC BWR Channel Box Performance under Normal Operation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wirth, Brian; Singh, Gyanender P.; Gorton, Jacob
SiC-SiC composites are being considered for applications in the core components, including the BWR channel box and fuel rod cladding, of light water reactors to improve accident tolerance. In the extreme nuclear reactor environment, core components like the BWR channel box will be exposed to neutron damage and a corrosive environment. To ensure reliable and safe operation of a SiC channel box, it is important to assess its deformation behavior under in-reactor conditions including the expected neutron flux and temperature distributions. In particular, this work has evaluated the effect of non-uniform dimensional changes caused by spatially varying neutron flux and temperatures on the deformation behavior of the channel box over the course of one cycle of irradiation. These analyses have been performed using the fuel performance modeling code BISON and the commercial finite element analysis code Abaqus, based on fast flux and temperature boundary conditions calculated using the neutronics and thermal-hydraulics codes Serpent2 and COBRA-TF, respectively. The dependence of dimensions and thermophysical properties on fast flux and temperature has been incorporated into the material models. These initial results indicate significant bowing of the channel box, with a lateral displacement greater than 6.5 mm. The channel box bowing behavior is time dependent, and driven by the temperature dependence of the SiC irradiation-induced swelling and the neutron flux/fluence gradients. The bowing behavior gradually recovers during the course of the operating cycle as the swelling of the SiC-SiC material saturates. However, the bending relaxation due to temperature gradients does not fully recover and residual bending remains after the swelling saturates in the entire channel box.
Organization patterns of the AGFG genes: an evolutionary study.
Panaro, Maria Antonietta; Acquafredda, Angela; Calvello, Rosa; Lisi, Sabrina; Dragone, Teresa; Cianciulli, Antonia
2011-03-01
A number of proteins which are needed for the building of new immunodeficiency virus type 1 virions can only be translated from unspliced virus-derived pre-mRNAs. These unspliced mRNAs are shuttled through the nuclear pores reaching the cytosol when bound to the viral protein Rev. However, as a cellular co-factor Rev requires a Rev-binding protein of the AGFG family (nucleoporin-related Arf-GAP domain and FG repeats-containing proteins). In this article we address the evolution of the AGFGs by analyzing the first section of the coding mRNAs. This contains a "core module" which can be traced from Drosophilae to fish, amphibia, birds, and mammals, including man. In the subfamily of AGFG1 molecules the estimated conservation from Drosophilae to primates is 67% (with limited gaps). In some Drosophilae the core module is preceded by a long stretch of more than 300 coding nucleotides, but this additional module is absent in other Drosophilae and in all AGFG1s of other species. The AGFG2 molecules emerged later in evolution, possibly deriving from a duplication of AGFG1s. AGFG2s, present in mammals only, exhibit an additional module of about 50 coding nucleotides ahead of the core module, which is significantly less conserved (54%, with more remarkable gaps). This additional module does not seem to have homologies with the additional module of Drosophilae nor with the precoding section of AGFG1s. Interestingly, in birds a highly re-edited form of the AGFG1 core module (Gallus gallus, Galliformes) coexists with a typical form of the AGFG1 core module (Taeniopygia guttata, Passeriformes).
NASA Astrophysics Data System (ADS)
Nagakura, Hiroki; Iwakami, Wakana; Furusawa, Shun; Sumiyoshi, Kohsuke; Yamada, Shoichi; Matsufuru, Hideo; Imakura, Akira
2017-04-01
We present a newly developed moving-mesh technique for the multi-dimensional Boltzmann-Hydro code for the simulation of core-collapse supernovae (CCSNe). What makes this technique different from others is the fact that it treats not only hydrodynamics but also neutrino transfer in the language of the 3 + 1 formalism of general relativity (GR), making use of the shift vector to specify the time evolution of the coordinate system. This means that the transport part of our code is essentially general relativistic, although in this paper it is applied only to moving curvilinear coordinates in the flat Minkowski spacetime, since the gravity part is still Newtonian. The numerical aspect of the implementation is also described in detail. Employing the axisymmetric two-dimensional version of the code, we conduct two test computations: oscillations and runaways of a proto-neutron star (PNS). We show that our new method works well, tracking the motions of the PNS correctly. We believe that this is a major advancement toward the realistic simulation of CCSNe.
Flow Field and Acoustic Predictions for Three-Stream Jets
NASA Technical Reports Server (NTRS)
Simmons, Shaun Patrick; Henderson, Brenda S.; Khavaran, Abbas
2014-01-01
Computational fluid dynamics was used to analyze a three-stream nozzle parametric design space. The study varied bypass-to-core area ratio, tertiary-to-core area ratio, and jet operating conditions. The flowfield solutions from the Reynolds-Averaged Navier-Stokes (RANS) code Overflow 2.2e were used to pre-screen experimental models for a future test in the Aero-Acoustic Propulsion Laboratory (AAPL) at the NASA Glenn Research Center (GRC). Flowfield solutions were considered in conjunction with the jet-noise-prediction code JeNo to screen the design concepts. A two-stream versus three-stream computation based on equal mass flow rates showed a reduction in peak turbulent kinetic energy (TKE) for the three-stream jet relative to that for the two-stream jet, which resulted in reduced acoustic emission. Additional three-stream solutions were analyzed for salient flowfield features expected to impact farfield noise. As tertiary power settings were increased there was a corresponding near-nozzle increase in shear rate that resulted in an increase in high frequency noise and a reduction in peak TKE. As tertiary-to-core area ratio was increased, the tertiary potential core elongated and the peak TKE was reduced. The most noticeable change occurred as secondary-to-core area ratio was increased: the secondary potential core thickened, the primary potential core elongated, and peak TKE was reduced. As forward flight Mach number was increased, the jet plume region decreased and peak TKE was reduced.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daily, Charles R.
2015-10-01
An assessment of the impact on the High Flux Isotope Reactor (HFIR) reactor vessel (RV) displacements-per-atom (dpa) rates due to operations with the proposed low enriched uranium (LEU) core described by Ilas and Primm has been performed and is presented herein. The analyses documented herein support the conclusion that conversion of HFIR to low-enriched uranium (LEU) core operations using the LEU core design of Ilas and Primm will have no negative impact on HFIR RV dpa rates. Since its inception, HFIR has been operated with highly enriched uranium (HEU) cores. As part of an effort sponsored by the National Nuclear Security Administration (NNSA), conversion to LEU cores is being considered for future HFIR operations. The HFIR LEU configurations analyzed are consistent with the LEU core models used by Ilas and Primm and the HEU balance-of-plant models used by Risner and Blakeman in the latest analyses performed to support the HFIR materials surveillance program. The Risner and Blakeman analyses, as well as the studies documented herein, are the first to apply the hybrid transport methods available in the Automated Variance reduction Generator (ADVANTG) code to HFIR RV dpa rate calculations. These calculations have been performed on the Oak Ridge National Laboratory (ORNL) Institutional Cluster (OIC) with version 1.60 of the Monte Carlo N-Particle 5 (MCNP5) computer code.
NASA Astrophysics Data System (ADS)
Takamatsu, Kuniyoshi; Nakagawa, Shigeaki; Takeda, Tetsuaki
Safety demonstration tests using the High Temperature Engineering Test Reactor (HTTR) are in progress to verify its inherent safety features and improve the safety technology and design methodology for High-temperature Gas-cooled Reactors (HTGRs). The reactivity insertion test is one of the safety demonstration tests for the HTTR. This test simulates the rapid increase in the reactor power by withdrawing the control rod without operating the reactor power control system. In addition, loss of coolant flow tests have been conducted to simulate the rapid decrease in the reactor power by tripping one, two or all three gas circulators. The experimental results have revealed the inherent safety features of HTGRs, such as the negative reactivity feedback effect. The numerical analysis code ACCORD was developed to analyze the reactor dynamics including the flow behavior in the HTTR core. We have modified this code to use a model with four parallel channels and twenty temperature coefficients. Furthermore, we added another analytical model of the core for calculating the heat conduction between the fuel channels and the core in the case of the loss of coolant flow tests. This paper describes the validation results for the newly developed code using the experimental results. Moreover, the effect of the model is formulated quantitatively with our proposed equation. Finally, the pre-analytical result of the loss of coolant flow test by tripping all gas circulators is also discussed.
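The negative-feedback behavior these tests probe can be sketched with one-group point kinetics plus a simple temperature feedback (a toy model, not the ACCORD code; every coefficient below is illustrative):

import numpy as np
from scipy.integrate import solve_ivp

beta, Lam, lam = 0.0065, 1e-3, 0.08    # delayed fraction, generation time, decay const.
alpha = -5e-5                          # negative temperature coefficient (1/K)
h, C = 0.02, 50.0                      # cooling rate and heat capacity (arbitrary)
rho_ext = 0.002                        # reactivity inserted by rod withdrawal

def rhs(t, y):
    p, c, dT = y                       # power, precursors, temperature rise
    rho = rho_ext + alpha * dT         # feedback counteracts the insertion
    return [(rho - beta) / Lam * p + lam * c,
            beta / Lam * p - lam * c,
            p / C - h * dT]

y0 = [1.0, beta / (Lam * lam), 0.0]    # steady state at unit power
sol = solve_ivp(rhs, (0.0, 300.0), y0, max_step=0.1)
print(sol.y[2, -1])                    # dT settles near -rho_ext/alpha = 40 K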
Expert system for maintenance management of a boiling water reactor power plant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong Shen; Liou, L.W.; Levine, S.
1992-01-01
An expert system code has been developed for the maintenance of two boiling water reactor units in Berwick, Pennsylvania, that are operated by the Pennsylvania Power and Light Company (PP&L). The objective of this expert system code, in which the knowledge of experienced operators and engineers is captured and implemented, is to support the decisions regarding which components can be safely and reliably removed from service for maintenance. It can also serve as a query-answering facility for checking the plant system status and for training purposes. The operating and maintenance information of a large number of support systems, which must be available for emergencies and/or in the event of an accident, is stored in the data base of the code. It identifies the relevant technical specifications and management rules for shutting down any one of the systems or removing a component from service to support maintenance. Because of the complexity and time needed to incorporate a large number of systems and their components, the first phase of the expert system develops a prototype code, which includes only the reactor core isolation cooling system, the high-pressure core injection system, the instrument air system, the service water system, and the plant electrical system. The next phase is scheduled to expand the code to include all other systems. This paper summarizes the prototype code and the design concept of the complete expert system code for maintenance management of all plant systems and components.
FAST: framework for heterogeneous medical image computing and visualization.
Smistad, Erik; Bozorgi, Mohammadmehdi; Lindseth, Frank
2015-11-01
Computer systems are becoming increasingly heterogeneous in the sense that they consist of different processors, such as multi-core CPUs and graphic processing units. As the amount of medical image data increases, it is crucial to exploit the computational power of these processors. However, this is currently difficult due to several factors, such as driver errors, processor differences, and the need for low-level memory handling. This paper presents a novel FrAmework for heterogeneouS medical image compuTing and visualization (FAST). The framework aims to make it easier to simultaneously process and visualize medical images efficiently on heterogeneous systems. FAST uses common image processing programming paradigms and hides the details of memory handling from the user, while enabling the use of all processors and cores on a system. The framework is open-source, cross-platform and available online. Code examples and performance measurements are presented to show the simplicity and efficiency of FAST. The results are compared to the Insight Toolkit (ITK) and the Visualization Toolkit (VTK) and show that the presented framework is faster, with speedups of up to 20 times on several common medical imaging algorithms. FAST enables efficient medical image computing and visualization on heterogeneous systems. Code examples and performance evaluations have demonstrated that the toolkit is both easy to use and performs better than existing frameworks, such as ITK and VTK.
NASA Astrophysics Data System (ADS)
Stone, Christopher P.; Alferman, Andrew T.; Niemeyer, Kyle E.
2018-05-01
Accurate and efficient methods for solving stiff ordinary differential equations (ODEs) are a critical component of turbulent combustion simulations with finite-rate chemistry. The ODEs governing the chemical kinetics at each mesh point are decoupled by operator splitting, allowing each to be solved concurrently. An efficient ODE solver must then take into account the available thread- and instruction-level parallelism of the underlying hardware, especially on many-core coprocessors, as well as numerical efficiency. A stiff Rosenbrock and a nonstiff Runge-Kutta ODE solver are both implemented using the single instruction, multiple thread (SIMT) and single instruction, multiple data (SIMD) paradigms within OpenCL. Both methods solve multiple ODEs concurrently within the same instruction stream. The performance of these parallel implementations was measured on three chemical kinetic models of increasing size across several multicore and many-core platforms. Two separate benchmarks were conducted to clearly determine any performance advantage offered by either method. The first benchmark measured the run-time of evaluating the right-hand-side source terms in parallel, and the second benchmark integrated a series of constant-pressure, homogeneous reactors using the Rosenbrock and Runge-Kutta solvers. The right-hand-side evaluations with SIMD parallelism on the host multicore Xeon CPU and many-core Xeon Phi coprocessor performed approximately three times faster than the baseline multithreaded C++ code. The SIMT parallel model on the host and the Phi was 13%-35% slower than the baseline, while the SIMT model on the NVIDIA Kepler GPU provided approximately the same performance as the SIMD model on the Phi. The runtimes for both ODE solvers decreased significantly with the SIMD implementations on the host CPU (2.5-2.7x) and the Xeon Phi coprocessor (4.7-4.9x) compared to the baseline parallel code. The SIMT implementations on the GPU ran 1.5-1.6 times faster than the baseline multithreaded CPU code; however, this was significantly slower than the SIMD versions on the host CPU or the Xeon Phi. The performance difference between the three platforms was attributed to thread divergence caused by the adaptive step sizes within the ODE integrators. Analysis showed that the wider vector width of the GPU incurs a higher level of divergence than the narrower vector units of the Sandy Bridge CPU or the Xeon Phi. The significant performance improvement provided by the SIMD parallel strategy motivates further research into ODE solver methods that are both SIMD-friendly and computationally efficient.
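The data-parallel idea (one instruction stream advancing many independent ODE systems in lock-step) can be sketched without OpenCL. The toy below batches an explicit RK4 step across many systems, with NumPy array operations standing in for the SIMD lanes; the two-species right-hand side is an invented illustration, not one of the paper's chemical mechanisms. Note the fixed step size: it sidesteps exactly the adaptive-step thread divergence the paper identifies.

```python
import numpy as np

def rhs(y):
    # Toy 2-species kinetics evaluated for the whole batch at once.
    a, b = y[:, 0], y[:, 1]
    return np.stack([-10.0 * a * b, 10.0 * a * b - 0.5 * b], axis=1)

def rk4_batch(y, dt, nsteps):
    # Classical RK4; each array op advances all systems in lock-step.
    for _ in range(nsteps):
        k1 = rhs(y)
        k2 = rhs(y + 0.5 * dt * k1)
        k3 = rhs(y + 0.5 * dt * k2)
        k4 = rhs(y + dt * k3)
        y = y + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

y0 = np.random.rand(100_000, 2)   # 100k independent ODE systems
print(rk4_batch(y0, dt=1e-4, nsteps=100)[:3])
```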
Characterization of the core poloidal flow at ASDEX Upgrade
NASA Astrophysics Data System (ADS)
Lebschy, Alexander
2017-10-01
An essential result from neoclassical (NC) theory is that the fluid poloidal rotation (upol) of the main ions is strongly damped by magnetic pumping and, therefore, expected to be small (<2 km/s). Despite many previous investigations, the nature of the core upol remains an open question: studies at DIII-D show that at low collisionalities, upol is significantly higher in the plasma core than expected. At higher collisionalities, however, rather good agreement between experiment and theory has been found at both DIII-D and TCV. This is qualitatively consistent with the edge results from both Alcator C-Mod and ASDEX Upgrade (AUG). At AUG, thanks to an upgrade of the core charge exchange recombination spectroscopy (CXRS) diagnostics, the core upol can be evaluated through the inboard-outboard asymmetry of the toroidal rotation with an accuracy of 0.5-1 km/s. This measurement also provides the missing ingredient to evaluate the core E×B velocity (u_E×B) via the radial force balance equation. At AUG the core upol (0.35 < ρtor < 0.65) is found to be directed in the ion-diamagnetic direction, in contradiction to NC predictions. The edge rotation, however, is always found to be electron-directed and in good quantitative agreement with NC codes. Additionally, the intrinsic rotation has been measured in Ohmic L-mode plasmas. The observed data show that the gradient of the toroidal rotation is flat to slightly negative at the critical density defining the transition from the linear to the saturated Ohmic confinement regime. Furthermore, the non-neoclassical upol observed in these plasmas leads to good agreement between the u_E×B determined from CXRS and the perpendicular velocity measured from turbulence propagation; the difference between these two quantities is the turbulent phase velocity. The gathered dataset indicates that the transition in the turbulence regime occurs after the saturation of the energy confinement time. The author thankfully acknowledges the financial support of the Helmholtz Association of German Research Centers through the Helmholtz Young Investigators Group program.
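For reference, the radial force balance by which u_E×B is obtained from CXRS measurements of impurity density, temperature, and flow can be written in a standard form (a textbook expression; sign conventions differ between devices and may not match the AUG convention exactly):

```latex
E_r \;=\; \frac{1}{Z_i e\, n_i}\,\frac{\partial p_i}{\partial r}
      \;-\; u_{\theta,i} B_\phi \;+\; u_{\phi,i} B_\theta ,
\qquad
\mathbf{u}_{E\times B} \;=\; \frac{\mathbf{E}\times\mathbf{B}}{B^{2}} .
```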
Evaluation of isotopic composition of fast reactor core in closed nuclear fuel cycle
NASA Astrophysics Data System (ADS)
Tikhomirov, Georgy; Ternovykh, Mikhail; Saldikov, Ivan; Fomichenko, Peter; Gerasimov, Alexander
2017-09-01
The strategy for the development of nuclear power in Russia provides for the use of fast power reactors in a closed nuclear fuel cycle. The PRORYV (i.e., «Breakthrough» in Russian) project is currently under development. Within the framework of this project, the fast reactors BN-1200 and BREST-OD-300 should be built to, inter alia, demonstrate the feasibility of closed nuclear fuel cycle technologies with plutonium as the main source of energy. Russia has a large inventory of plutonium, accumulated as a result of reprocessing spent fuel from thermal power reactors and of the conversion of nuclear weapons. This plutonium will be used to fabricate the initial fuel assemblies for fast reactors. The closed nuclear fuel cycle concept of the PRORYV assumes a self-supplied mode of operation, with fuel regenerated by neutron capture in non-enriched uranium, which is used as the raw material. The operating modes of the reactors and their characteristics should be chosen so as to reach the self-sufficient mode in fissile isotopes while refueling with depleted uranium, and to maintain this state during the entire period of reactor operation. Modeling the fuel handling processes is therefore a key issue. To address it, the code REPRORYV (Recycle for PRORYV) has been developed; it simulates nuclide streams in the non-reactor stages of the closed fuel cycle, while various verified codes can be used to evaluate the in-core characteristics of the reactor. Using this approach, this study considers various options for the nuclide streams and assesses the impact of different plutonium contents in the fuel, fuel processing conditions, losses during fuel processing, and initial uncertainties on the neutron-physical characteristics of the reactor.
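A minimal sketch of the kind of out-of-core nuclide-stream bookkeeping REPRORYV performs: masses flow through reprocessing and refabrication, with a small loss stream at each separation. The stream names, nuclides, and loss fractions are illustrative assumptions, not the code's actual data model.

```python
# Illustrative closed-cycle bookkeeping; all values are assumptions.
spent_fuel = {"Pu239": 120.0, "Pu240": 55.0, "U238": 9500.0}  # kg

def reprocess(stream, loss_fraction=0.001):
    """Recover material for refabrication; a small fraction goes to waste."""
    recovered = {k: v * (1.0 - loss_fraction) for k, v in stream.items()}
    waste = {k: v * loss_fraction for k, v in stream.items()}
    return recovered, waste

def refabricate(recovered, makeup_u238):
    """Blend recovered material with a depleted-uranium makeup feed."""
    fresh = dict(recovered)
    fresh["U238"] = fresh.get("U238", 0.0) + makeup_u238
    return fresh

recovered, waste = reprocess(spent_fuel)
fresh_fuel = refabricate(recovered, makeup_u238=250.0)
print({k: round(v, 2) for k, v in fresh_fuel.items()})
print("to waste:", {k: round(v, 3) for k, v in waste.items()})
```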
2017-04-13
Several applications were ported to OmpSs: a basic image-processing algorithm, a mini application representative of an ocean modelling code, a parallel benchmark, and a communication-avoiding version of the QR algorithm. Further, several improvements to the OmpSs model were made (including changes related to data movement), and the dynamic load balancing library was ported to OmpSs. Finally, several updates to the tools infrastructure were accomplished.
Palkowski, Marek; Bielecki, Wlodzimierz
2017-06-02
RNA secondary structure prediction is a compute-intensive task that lies at the core of several search algorithms in bioinformatics. Fortunately, RNA folding approaches such as Nussinov base-pair maximization involve mathematical operations over affine control loops whose iteration space can be represented by the polyhedral model. Polyhedral compilation techniques have proven to be a powerful tool for optimizing dense array codes. However, the classical affine loop nest transformations used with these techniques do not effectively optimize the dynamic programming codes of RNA structure prediction. The purpose of this paper is to present a novel approach for generating a parallel tiled Nussinov RNA loop nest with significantly higher performance than known related codes. This effect is achieved by improving code locality and parallelizing the computation. To improve code locality, we apply our previously published technique of automatic loop nest tiling to all three loops of the Nussinov loop nest. This approach first forms original rectangular 3D tiles and then corrects them to establish their validity by applying the transitive closure of a dependence graph. To produce parallel code, we apply the loop skewing technique to the tiled Nussinov loop nest. The technique is implemented as part of the publicly available polyhedral source-to-source TRACO compiler. The generated code was run on modern Intel multi-core processors and coprocessors. We present the speed-up factor of the generated parallel Nussinov RNA code and demonstrate that it is considerably faster than related codes in which only the two outer loops of the Nussinov loop nest are tiled.
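For readers unfamiliar with the kernel being tiled, a minimal untiled Python version of Nussinov base-pair maximization is sketched below; the triply nested affine loop structure is what the polyhedral transformations in the paper operate on.

```python
def nussinov(seq, min_loop=1):
    """N[i][j] = maximum number of non-crossing complementary pairs in seq[i..j]."""
    pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    N = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):      # the three affine loops that
        for i in range(n - span):            # the polyhedral model tiles
            j = i + span
            best = N[i + 1][j]               # case: base i left unpaired
            for k in range(i + 1, j + 1):    # case: base i pairs with base k
                if k - i > min_loop and (seq[i], seq[k]) in pairs:
                    left = N[i + 1][k - 1] if k > i + 1 else 0
                    right = N[k + 1][j] if k < j else 0
                    best = max(best, 1 + left + right)
            N[i][j] = best
    return N[0][n - 1]

print(nussinov("GGGAAAUCC"))  # 3 nested pairs: G-C, G-C, G-U
```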
Experimental investigation and CFD analysis on cross flow in the core of PMR200
Lee, Jeong-Hun; Yoon, Su-Jong; Cho, Hyoung-Kyu; ...
2015-04-16
The Prismatic Modular Reactor (PMR) is one of the major Very High Temperature Reactor (VHTR) concepts; it consists of hexagonal prismatic fuel blocks and reflector blocks made of nuclear-grade graphite. However, the shape of the graphite blocks can be changed by neutron damage during reactor operation, and this shape change can create gaps between the blocks, inducing a bypass flow. In the VHTR core, two types of gaps can form: a vertical gap and a horizontal gap, called the bypass gap and the cross gap, respectively. The cross gap complicates the flow field in the reactor core by connecting the coolant channels to the bypass gap, and it can lead to a loss of effective coolant flow in the fuel blocks. Thus, a cross flow experimental facility was constructed to investigate the cross flow phenomena in the core of the VHTR, and a series of experiments was carried out under varying flow rates and gap sizes. The results of the experiments were compared with CFD (Computational Fluid Dynamics) analysis results in order to verify its prediction capability for the cross flow phenomena. Fairly good agreement was seen between the experimental results and the CFD predictions, and the local characteristics of the cross flow were discussed in detail. Based on the calculation results, the pressure loss coefficient across the cross gap was evaluated, which is necessary for the thermo-fluid analysis of the VHTR core using a lumped parameter code.
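Assuming the conventional definition (the paper may normalize with a specific reference velocity), the reported pressure loss coefficient relates the measured pressure drop across the cross gap to the dynamic pressure:

```latex
K \;=\; \frac{\Delta p}{\tfrac{1}{2}\,\rho\, u^{2}} ,
```

where Δp is the pressure drop across the gap, ρ the coolant density, and u the reference velocity.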
NASA Astrophysics Data System (ADS)
Clay, M. P.; Buaria, D.; Gotoh, T.; Yeung, P. K.
2017-10-01
A new dual-communicator algorithm with very favorable performance characteristics has been developed for direct numerical simulation (DNS) of turbulent mixing of a passive scalar governed by an advection-diffusion equation. We focus on the regime of high Schmidt number (Sc), where because of low molecular diffusivity the grid-resolution requirements for the scalar field are stricter than those for the velocity field by a factor √Sc. Computational throughput is improved by simulating the velocity field on a coarse grid of Nv^3 points with a Fourier pseudo-spectral (FPS) method, while the passive scalar is simulated on a fine grid of Nθ^3 points with a combined compact finite difference (CCD) scheme which computes first and second derivatives at eighth-order accuracy. A static three-dimensional domain decomposition and a parallel solution algorithm for the CCD scheme are used to avoid the heavy communication cost of memory transposes. A kernel is used to evaluate several approaches to optimize the performance of the CCD routines, which account for 60% of the overall simulation cost. On the petascale supercomputer Blue Waters at the University of Illinois, Urbana-Champaign, scalability is improved substantially with a hybrid MPI-OpenMP approach in which a dedicated thread per NUMA domain overlaps communication calls with computational tasks performed by a separate team of threads spawned using OpenMP nested parallelism. At a target production problem size of 8192^3 (0.5 trillion) grid points on 262,144 cores, CCD timings are reduced by 34% compared to a pure-MPI implementation. Timings for 16384^3 (4 trillion) grid points on 524,288 cores encouragingly maintain scalability greater than 90%, although the wall clock time is too high for production runs at this size. Performance monitoring with CrayPat for problem sizes up to 4096^3 shows that the CCD routines can achieve nearly 6% of the peak flop rate. The new DNS code is built upon two existing FPS and CCD codes. With the grid ratio Nθ/Nv = 8, the disparity in the computational requirements for the velocity and scalar problems is addressed by splitting the global communicator MPI_COMM_WORLD into disjoint communicators for the velocity and scalar fields, respectively. Inter-communicator transfer of the velocity field from the velocity communicator to the scalar communicator is handled with discrete send and non-blocking receive calls, which are overlapped with other operations on the scalar communicator. For production simulations at Nθ = 8192 and Nv = 1024 on 262,144 cores for the scalar field, the DNS code achieves 94% strong scaling relative to 65,536 cores and 92% weak scaling relative to Nθ = 1024 and Nv = 128 on 512 cores.
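A minimal mpi4py sketch of the dual-communicator pattern described above: MPI_COMM_WORLD is split into a small velocity group and a larger scalar group, and the velocity field is handed over with discrete sends and non-blocking receives that overlap scalar-side work. The group sizes, rank pairing, and message contents are illustrative, not the production layout.

```python
# Run with e.g.: mpiexec -n 9 python dual_comm.py
from mpi4py import MPI
import numpy as np

world = MPI.COMM_WORLD
rank, size = world.Get_rank(), world.Get_size()

n_vel = max(1, size // 9)            # e.g. 1 velocity rank per 8 scalar ranks
color = 0 if rank < n_vel else 1     # 0 = velocity group, 1 = scalar group
comm = world.Split(color, key=rank)  # disjoint communicator for each field

field = np.empty(4, dtype=np.float64)
if color == 0:
    field[:] = rank                               # stand-in for a velocity slab
    for dst in range(n_vel + rank, size, n_vel):  # scalar ranks this slab feeds
        world.Send(field, dest=dst, tag=7)        # discrete send
else:
    src = (rank - n_vel) % n_vel                  # paired velocity rank
    req = world.Irecv(field, source=src, tag=7)   # non-blocking receive ...
    local_work = float(np.sum(np.arange(1000.0))) # ... overlapped with scalar work
    req.Wait()                                    # velocity data now usable
    print(f"scalar rank {rank} received slab from velocity rank {src}")
```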
Tuning iteration space slicing based tiled multi-core code implementing Nussinov's RNA folding.
Palkowski, Marek; Bielecki, Wlodzimierz
2018-01-15
RNA folding is an ongoing compute-intensive task of bioinformatics. Parallelization and improved code locality for this class of algorithms are among the most relevant topics in computational biology. Fortunately, RNA secondary structure approaches, such as Nussinov's recurrence, involve mathematical operations over affine control loops whose iteration space can be represented by the polyhedral model. This allows us to apply powerful polyhedral compilation techniques based on the transitive closure of dependence graphs to generate parallel tiled code implementing Nussinov's RNA folding. Such techniques belong to the iteration space slicing framework: the transitive dependences are applied to the statement instances of interest to produce valid tiles. The main problem in generating parallel tiled code is choosing a proper tile size and tile dimension, which affect the degree of parallelism and code locality. To choose the best tile size and tile dimension, we first construct parallel parametric tiled code (the parameters are variables defining the tile size). For this purpose, we generate two non-parametric tiled codes with different fixed tile sizes but the same code structure, and then derive a general affine model describing all integer factors appearing in the expressions of those codes. Using this model and the known integer factors present in those expressions (which define the left-hand side of the model), we solve for the unknown integers in the model for each integer factor appearing at the same position in the fixed tiled code, and replace the expressions containing integer factors with expressions containing parameters. We then use this parallel parametric tiled code to implement the well-known tile size selection (TSS) technique, which discovers, within a given search space, the tile size and tile dimension that maximize target code performance. For a given search space, the presented approach allows us to choose the best tile size and tile dimension in parallel tiled code implementing Nussinov's RNA folding. Experimental results, obtained on modern Intel multi-core processors, demonstrate that this code outperforms known closely related implementations when the length of the RNA strands is greater than 2500.
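A toy illustration of the TSS step under stated assumptions: a parametric tiled kernel (a simple 2D sweep standing in for the Nussinov nest) is benchmarked over a small search space of tile sizes, and the fastest is kept.

```python
import itertools, timeit

def tiled_sweep(n, t1, t2):
    # Parametric tiled loop nest: t1, t2 are the tile-size parameters.
    a = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, t1):                     # tile loops
        for jj in range(0, n, t2):
            for i in range(ii, min(ii + t1, n)):   # intra-tile loops
                for j in range(jj, min(jj + t2, n)):
                    a[i][j] = a[i - 1][j] + 1.0 if i else 1.0
    return a

# Exhaustive search of a small tile-size space, keeping the fastest.
best = min(
    (timeit.timeit(lambda t1=t1, t2=t2: tiled_sweep(256, t1, t2), number=3), (t1, t2))
    for t1, t2 in itertools.product([16, 32, 64, 128], repeat=2)
)
print("fastest tile size:", best[1], f"({best[0]:.3f} s for 3 runs)")
```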
CESAR5.3: Isotopic depletion for Research and Testing Reactor decommissioning
NASA Astrophysics Data System (ADS)
Ritter, Guillaume; Eschbach, Romain; Girieud, Richard; Soulard, Maxime
2018-05-01
CESAR stands, in French, for "simplified depletion applied to reprocessing". The current version, 5.3, caps 30 years of development in a long-lasting cooperation with ORANO, co-owner of the code with CEA. This computer code can characterize several types of nuclear fuel assemblies, from the most standard PWR power plant fuels to the most unusual gas-cooled, graphite-moderated legacy research facilities. Each fuel type can also cover numerous ranges of compositions, such as UOX, MOX, LEU, or HEU. Such versatility comes from a broad catalog of cross-section libraries, each corresponding to a specific reactor and fuel matrix design. CESAR goes beyond fuel characterization and can also evaluate the activation of structural materials. The cross-section libraries are generated using the most refined assembly- or core-level transport code calculation schemes (CEA APOLLO2 or ERANOS), based on the European JEFF3.1.1 nuclear data base. Each new CESAR self-shielded cross-section library benefits from the most recent CEA recommendations on deterministic physics options. The resulting cross sections are organized as a function of burn-up and initial fuel enrichment, which allows this costly process to be condensed into a series of Legendre polynomials. The final outcome is a fast, accurate, and compact CESAR cross-section library. Each library is fully validated, against a stochastic transport code (CEA TRIPOLI 4) if needed, and against a reference depletion code (CEA DARWIN). Using CESAR does not require any of the neutron physics expertise embedded in cross-section library generation. It is based on top-quality nuclear data (JEFF3.1.1 for ~400 isotopes) and includes up-to-date Bateman equation solving algorithms. Defining a CESAR computation case can nevertheless be very straightforward: most results are only three steps away from any beginner's ambition, namely initial composition, in-core depletion, and pool decay scenario. On top of a simple utilization architecture, CESAR includes a portable Graphical User Interface which can be broadly deployed in R&D or industrial facilities. Aging facilities currently face decommissioning and dismantling issues. This path to the end of the nuclear fuel cycle requires a careful assessment of source terms in the fuel, core structures, and all parts of a facility that must be disposed of under "industrial nuclear" constraints. In that perspective, several CESAR cross-section libraries were constructed for early CEA Research and Testing Reactors (RTRs). The aim of this paper is to describe how CESAR operates and how it can be used to help these facilities handle waste disposal, nuclear materials transport, or basic safety cases. The test case is based on the PHEBUS facility located at CEA Cadarache.
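The depletion step itself reduces to the Bateman equations dN/dt = AN. A minimal sketch with an assumed three-nuclide chain and illustrative one-group rates (not CESAR's solver, data, or library format) is:

```python
import numpy as np
from scipy.linalg import expm

lam = {"A": 1e-6, "B": 5e-7}   # decay constants (1/s), assumed
cap_A = 2e-9                   # capture rate sigma*phi on A (1/s), assumed

# dN/dt = A_mat @ N; columns sum to zero, so atoms are conserved.
A_mat = np.array([
    [-(lam["A"] + cap_A), 0.0,       0.0],  # nuclide A: decay + capture losses
    [cap_A,               -lam["B"], 0.0],  # nuclide B: produced by capture on A
    [lam["A"],            lam["B"],  0.0],  # stable end point of both decays
])
N0 = np.array([1e24, 0.0, 0.0])  # initial atom densities
N = expm(A_mat * 3.15e7) @ N0    # deplete for one year
print(N)
```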
Managing Evaluation: A Community Arts Organisation Perspective.
Swan, Peter; Atkinson, Sarah
2012-09-01
Arts and health organisations must increasingly provide measurable evidence of impact to stakeholders, which can pose both logistical and ideological challenges. This paper examines the relationship between the ethos of an arts and health organisation and external demands for evaluation. Research involved an ethnographic engagement where the first author worked closely with the organisation for a year. In addition to informal discussions, twenty semi-structured interviews were conducted with core staff and participants. Transcribed interviews were coded and emerging themes were identified. Staff considered evaluation to be necessary and useful, yet also to be time consuming and a potential threat to their ethos. Nevertheless, they were able to negotiate the terms of evaluation to enable them to meet their own needs as well as those of funders and other stakeholders. While not completely resisting outside demands for evaluation, the organisation was seen to intentionally rework demands for evidence into processes they felt they could work with, thus enabling their ethos to be maintained.
A MATLAB based 3D modeling and inversion code for MT data
NASA Astrophysics Data System (ADS)
Singh, Arun; Dehiya, Rahul; Gupta, Pravin K.; Israil, M.
2017-07-01
The development of a MATLAB-based computer code, AP3DMT, for modeling and inversion of 3D magnetotelluric (MT) data is presented. The code comprises two independent components: a grid generator code and a modeling/inversion code. The grid generator code performs model discretization and acts as an interface by generating various I/O files. The inversion code performs the core computations in modular form: forward modeling, data functionals, sensitivity computations, and regularization. These modules can be readily extended to other similar inverse problems, such as Controlled-Source EM (CSEM). The modular structure of the code provides a framework useful for the implementation of new applications and inversion algorithms. The use of MATLAB and its libraries makes the code more compact and user-friendly. The code has been validated on several published models. To demonstrate its versatility and capabilities, the results of inversion for two complex models are presented.
Navier-Stokes Simulation of Homogeneous Turbulence on the CYBER 205
NASA Technical Reports Server (NTRS)
Wu, C. T.; Ferziger, J. H.; Chapman, D. R.; Rogallo, R. S.
1984-01-01
A computer code which solves the Navier-Stokes equations for three-dimensional, time-dependent, homogeneous turbulence has been written for the CYBER 205. The code has options for both 64-bit and 32-bit arithmetic. With 32-bit computation, mesh sizes up to 64^3 are contained within core of a 2-million-word (64-bit) memory. Computer speed timing runs were made for various vector lengths up to 6144. With this code, speeds a little over 100 Mflops have been achieved on a 2-pipe CYBER 205. Several problems encountered in the coding are discussed.
ERIC Educational Resources Information Center
Buckland, Roger
2004-01-01
The Lambert Model Code of Governance proposes to institutionalise the dominance of governors from commercial and industrial organisations as core members of compact and effective boards controlling UK universities. It is the latest expression of a fashion for viewing university governance as an overly-simple example of an obsolete system, where…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sattison, M.B.; Schroeder, J.A.; Russell, K.D.
The Idaho National Engineering Laboratory (INEL) over the past year has created 75 plant-specific Accident Sequence Precursor (ASP) models using the SAPHIRE suite of PRA codes. Along with the new models, the INEL has also developed a new module for SAPHIRE which is tailored specifically to the unique needs of conditional core damage probability (CCDP) evaluations. These models and software will be the next generation of risk tools for the evaluation of accident precursors by both NRR and AEOD. This paper presents an overview of the models and software. Key characteristics include: (1) classification of the plant models according to plant response, with a unique set of event trees for each plant class, (2) plant-specific fault trees using supercomponents, (3) generation and retention of all system and sequence cutsets, (4) full flexibility in modifying logic, regenerating cutsets, and requantifying results, and (5) a user interface for streamlined evaluation of ASP events.
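Characteristic (3) retains the sequence cut sets from which a CCDP can be quantified. A minimal sketch using the min-cut-set upper bound, with invented event names and placeholder probabilities (not SAPHIRE data), is:

```python
# Conditional probabilities of basic events given the precursor; invented values.
basic_events = {"DG-A-FAIL": 0.05, "DG-B-FAIL": 0.05, "AFW-FAIL": 0.01,
                "HPI-FAIL": 0.02, "OPER-ERR": 0.10}

cut_sets = [("DG-A-FAIL", "DG-B-FAIL"),   # e.g. station-blackout path
            ("AFW-FAIL", "HPI-FAIL"),
            ("AFW-FAIL", "OPER-ERR")]

def mcub(cut_sets, p):
    """Min-cut-set upper bound: 1 - prod(1 - P(cut set))."""
    prod = 1.0
    for cs in cut_sets:
        p_cs = 1.0
        for event in cs:
            p_cs *= p[event]
        prod *= 1.0 - p_cs
    return 1.0 - prod

print(f"CCDP ~ {mcub(cut_sets, basic_events):.2e}")
```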
Numerical study of core formation of asymmetrically driven cone-guided targets
Sawada, Hiroshi; Sakagami, Hitoshi
2017-09-22
Compression of a directly driven fast ignition cone-sphere target with a finite number of laser beams is numerically studied using the three-dimensional hydrodynamics code IMPACT-3D. The formation of a dense plasma core is simulated for 12-, 9-, 6-, and 4-beam configurations of the GEKKO XII laser. The complex 3D shapes of the cores are analyzed by examining synthetic 2D x-ray radiographic images in two orthogonal directions. Finally, the simulated x-ray images show significant differences in the core shape between the two viewing directions, and a rotation of the stagnating core axis in the top view for the axisymmetric 9- and 6-beam configurations.
Representing metabolic pathway information: an object-oriented approach.
Ellis, L B; Speedie, S M; McLeish, R
1998-01-01
The University of Minnesota Biocatalysis/Biodegradation Database (UM-BBD) is a website providing information and dynamic links for microbial metabolic pathways, enzyme reactions, and their substrates and products. The Compound, Organism, Reaction and Enzyme (CORE) object-oriented database management system was developed to contain and serve this information. CORE was developed using Java, an object-oriented programming language, and PSE persistent object classes from Object Design, Inc. CORE dynamically generates descriptive web pages for reactions, compounds and enzymes, and reconstructs ad hoc pathway maps starting from any UM-BBD reaction. CORE code is available from the authors upon request. CORE is accessible through the UM-BBD at: http://www.labmed.umn.edu/umbbd/index.html.
Hot zero power reactor calculations using the Insilico code
Hamilton, Steven P.; Evans, Thomas M.; Davidson, Gregory G.; ...
2016-03-18
In this paper we describe the reactor physics simulation capabilities of the Insilico code. A description of the various capabilities of the code is provided, including a detailed discussion of the geometry, meshing, cross-section processing, and neutron transport options. Numerical results demonstrate that the Insilico SPN solver with pin-homogenized cross-section generation is capable of delivering highly accurate full-core simulation of various PWR problems. Comparison to both Monte Carlo calculations and measured plant data is provided.
Lawrence, Renée H; Tomolo, Anne M
2011-03-01
Although practice-based learning and improvement (PBLI) is now recognized as a fundamental and necessary skill set, we are still in need of tools that yield specific information about gaps in knowledge and application to help nurture the development of quality improvement (QI) skills in physicians in a proficient and proactive manner. We developed a questionnaire and coding system as an assessment tool to evaluate and provide feedback regarding PBLI self-efficacy, knowledge, and application skills for residency programs and related professional requirements. Five nationally recognized QI experts/leaders reviewed and completed our questionnaire. Through an iterative process, a coding system based on identifying key variables needed for ideal responses was developed to score project proposals. The coding system comprised 14 variables related to the QI projects, and an additional 30 variables related to the core knowledge concepts related to PBLI. A total of 86 residents completed the questionnaire, and 2 raters coded their open-ended responses. Interrater reliability was assessed by percentage agreement and Cohen κ for individual variables and Lin concordance correlation for total scores for knowledge and application. Discriminative validity (t test to compare known groups) and coefficient of reproducibility as an indicator of construct validity (item difficulty hierarchy) were also assessed. Interrater reliability estimates were good (percentage of agreements, above 90%; κ, above 0.4 for most variables; concordances for total scores were R = .88 for knowledge and R = .98 for application). Despite the residents' limited range of experiences in the group with prior PBLI exposure, our tool met our goal of differentiating between the 2 groups in our preliminary analyses. Correcting for chance agreement identified some variables that are potentially problematic. Although additional evaluation is needed, our tool may prove helpful and provide detailed information about trainees' progress and the curriculum.
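For illustration, the two agreement statistics reported above can be computed for a single coded variable as follows (the ratings are invented 0/1 codes, not study data):

```python
import numpy as np

def agreement_and_kappa(r1, r2):
    """Percentage agreement and Cohen's kappa for two raters' codes."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    po = np.mean(r1 == r2)                    # observed agreement
    cats = np.union1d(r1, r2)
    pe = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in cats)  # chance agreement
    return po, (po - pe) / (1.0 - pe)

rater1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
rater2 = [1, 1, 0, 0, 0, 1, 1, 0, 1, 1]
po, kappa = agreement_and_kappa(rater1, rater2)
print(f"agreement = {po:.0%}, kappa = {kappa:.2f}")  # 90%, 0.78
```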
Code manual for CONTAIN 2.0: A computer code for nuclear reactor containment analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murata, K.K.; Williams, D.C.; Griffith, R.O.
1997-12-01
The CONTAIN 2.0 computer code is an integrated analysis tool used for predicting the physical conditions, chemical compositions, and distributions of radiological materials inside a containment building following the release of material from the primary system in a light-water reactor accident. It can also predict the source term to the environment. CONTAIN 2.0 is intended to replace the earlier CONTAIN 1.12, which was released in 1991. The purpose of this Code Manual is to provide full documentation of the features and models in CONTAIN 2.0. Besides complete descriptions of the models, this Code Manual provides a complete description of the input and output of the code. CONTAIN 2.0 is a highly flexible and modular code that can run problems that are either quite simple or highly complex. An important aspect of CONTAIN is that the interactions among thermal-hydraulic phenomena, aerosol behavior, and fission product behavior are taken into account. The code includes atmospheric models for steam/air thermodynamics, intercell flows, condensation/evaporation on structures and aerosols, aerosol behavior, and gas combustion. It also includes models for reactor cavity phenomena such as core-concrete interactions and coolant pool boiling. Heat conduction in structures, fission product decay and transport, radioactive decay heating, and the thermal-hydraulic and fission product decontamination effects of engineered safety features are also modeled. To the extent possible, the best available models for severe accident phenomena have been incorporated into CONTAIN, but it is intrinsic to the nature of accident analysis that significant uncertainty exists regarding numerous phenomena. In those cases, sensitivity studies can be performed with CONTAIN by means of user-specified input parameters. Thus, the code can be viewed as a tool designed to assist the knowledgeable reactor safety analyst in evaluating the consequences of specific modeling assumptions.
Evolution of the ATLAS Software Framework towards Concurrency
NASA Astrophysics Data System (ADS)
Jones, R. W. L.; Stewart, G. A.; Leggett, C.; Wynne, B. M.
2015-05-01
The ATLAS experiment has successfully used its Gaudi/Athena software framework for data taking and analysis during the first LHC run, with billions of events successfully processed. However, the design of Gaudi/Athena dates from the early 2000s, and the software and the physics code have been written using a single-threaded, serial design. This programming model has increasing difficulty in exploiting the potential of current CPUs, which offer their best performance only when taking full advantage of multiple cores and wide vector registers. Future CPU evolution will intensify this trend, with core counts increasing and memory per core falling. Maximising performance per watt will be a key metric, so all of these cores must be used as efficiently as possible. In order to address the deficiencies of the current framework, ATLAS has embarked upon two projects: first, a practical demonstration of the use of multi-threading in our reconstruction software, using the GaudiHive framework; second, an exercise to gather requirements for an updated framework, going back to first principles of how event processing occurs. In this paper we report on both aspects of this work. For the hive-based demonstrators, we discuss what changes were necessary to allow the serially designed ATLAS code to run, both in the framework and in the tools and algorithms used. We report on the general lessons learned about the code patterns that had been employed in the software and on which patterns were identified as particularly problematic for multi-threading. These lessons fed into our considerations of a new framework, and we present preliminary conclusions from this work. In particular, we identify areas where the framework can be simplified in order to aid the implementation of a concurrent event processing scheme. Finally, we discuss the practical difficulties involved in migrating a large established code base to a multi-threaded framework and how this can be achieved for LHC Run 3.
Moustafa, Savvina; Karakasiliotis, Ioannis; Mavromara, Penelope
2018-05-01
Viruses often encompass overlapping reading frames and unconventional translation mechanisms in order to maximize the output from a minimum genome and to orchestrate their timely gene expression. Hepatitis C virus (HCV) possesses such an unconventional open reading frame (ORF) within the core-coding region, encoding an additional protein, initially designated ARFP, F, or core+1. Two predominant isoforms of core+1/ARFP have been reported, core+1/L, initiating from codon 26, and core+1/S, initiating from codons 85/87 of the polyprotein coding region. The biological significance of core+1/ARFP expression remains elusive. The aim of the present study was to gain insight into the functional and pathological properties of core+1/ARFP through its interaction with the host cell, combining in vitro and in vivo approaches. Our data provide strong evidence that the core+1/ARFP of HCV-1a stimulates cell proliferation in Huh7-based cell lines expressing either core+1/S or core+1/L isoforms and in transgenic liver disease mouse models expressing core+1/S protein in a liver-specific manner. Both isoforms of core+1/ARFP increase the levels of cyclin D1 and phosphorylated Rb, thus promoting the cell cycle. In addition, core+1/S was found to enhance liver regeneration and oncogenesis in transgenic mice. The induction of the cell cycle together with increased mRNA levels of cell proliferation-related oncogenes in cells expressing the core+1/ARFP proteins argue for an oncogenic potential of these proteins and an important role in HCV-associated pathogenesis. IMPORTANCE This study sheds light on the biological importance of a unique HCV protein. We show here that core+1/ARFP of HCV-1a interacts with the host machinery, leading to acceleration of the cell cycle and enhancement of liver carcinogenesis. This pathological mechanism(s) may complement the action of other viral proteins with oncogenic properties, leading to the development of hepatocellular carcinoma. In addition, given that immunological responses to core+1/ARFP have been correlated with liver disease severity in chronic HCV patients, we expect that the present work will assist in clarifying the pathophysiological relevance of this protein as a biomarker of disease progression. Copyright © 2018 American Society for Microbiology.
NASA Astrophysics Data System (ADS)
Duan, Aiying; Jiang, Chaowei; Hu, Qiang; Zhang, Huai; Gary, G. Allen; Wu, S. T.; Cao, Jinbin
2017-06-01
Magnetic field extrapolation is an important tool to study the three-dimensional (3D) solar coronal magnetic field, which is difficult to measure directly. Various analytic models and numerical codes exist, but their results often differ drastically. Thus, a critical comparison of the modeled magnetic field lines with the observed coronal loops is strongly required to establish the credibility of a model. Here we compare two different non-potential extrapolation codes, a nonlinear force-free field code (CESE-MHD-NLFFF) and a non-force-free field (NFFF) code, in modeling a solar active region (AR) that has a sigmoidal configuration just before a major flare erupted from the region. A 2D coronal-loop tracing and fitting method is employed to study the 3D misalignment angles between the extrapolated magnetic field lines and the EUV loops as imaged by SDO/AIA. It is found that the CESE-MHD-NLFFF code with a preprocessed magnetogram performs best, outputting a field that matches the coronal loops in the AR core imaged in AIA 94 Å with a misalignment angle of ~10°. This suggests that the CESE-MHD-NLFFF code, even without using the information of the coronal loops to constrain the magnetic field, performs as well as some coronal-loop forward-fitting models. For the loops imaged by AIA 171 Å in the outskirts of the AR, all the codes, including the potential field, give comparable mean misalignment angles (~30°). Thus, further improvement of the codes is needed for a better reconstruction of the long loops enveloping the core region.
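A minimal sketch of the misalignment metric, assuming it is the angle between the modeled field direction and the observed loop tangent, averaged along the loop (the paper's 2D tracing-and-fitting procedure is more involved):

```python
import numpy as np

def misalignment_deg(field_tangents, loop_tangents):
    """Per-point angle (degrees) between field-line and loop tangent vectors."""
    tf = field_tangents / np.linalg.norm(field_tangents, axis=1, keepdims=True)
    tl = loop_tangents / np.linalg.norm(loop_tangents, axis=1, keepdims=True)
    cosang = np.abs(np.sum(tf * tl, axis=1))   # insensitive to tangent orientation
    return np.degrees(np.arccos(np.clip(cosang, 0.0, 1.0)))

# Synthetic stand-ins: loops roughly aligned with the field plus noise.
tf = np.random.randn(100, 3)
tl = tf + 0.15 * np.random.randn(100, 3)
print(f"mean misalignment ~ {misalignment_deg(tf, tl).mean():.1f} deg")
```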
NASA Astrophysics Data System (ADS)
Morris, Joseph W.; Lowry, Mac; Boren, Brett; Towers, James B.; Trimble, Darian E.; Bunfield, Dennis H.
2011-06-01
The US Army Aviation and Missile Research, Development and Engineering Center (AMRDEC) and the Redstone Test Center (RTC) has formed the Scene Generation Development Center (SGDC) to support the Department of Defense (DoD) open source EO/IR Scene Generation initiative for real-time hardware-in-the-loop and all-digital simulation. Various branches of the DoD have invested significant resources in the development of advanced scene and target signature generation codes. The SGDC goal is to maintain unlimited government rights and controlled access to government open source scene generation and signature codes. In addition, the SGDC provides development support to a multi-service community of test and evaluation (T&E) users, developers, and integrators in a collaborative environment. The SGDC has leveraged the DoD Defense Information Systems Agency (DISA) ProjectForge (https://Project.Forge.mil) which provides a collaborative development and distribution environment for the DoD community. The SGDC will develop and maintain several codes for tactical and strategic simulation, such as the Joint Signature Image Generator (JSIG), the Multi-spectral Advanced Volumetric Real-time Imaging Compositor (MAVRIC), and Office of the Secretary of Defense (OSD) Test and Evaluation Science and Technology (T&E/S&T) thermal modeling and atmospherics packages, such as EOView, CHARM, and STAR. Other utility packages included are the ContinuumCore for real-time messaging and data management and IGStudio for run-time visualization and scenario generation.
Aero-thermo-dynamic analysis of the Spaceliner-7.1 vehicle in high altitude flight
NASA Astrophysics Data System (ADS)
Zuppardi, Gennaro; Morsa, Luigi; Sippel, Martin; Schwanekamp, Tobias
2014-12-01
SpaceLiner, designed by DLR, is a visionary, extremely fast passenger transportation concept. It consists of two stages: a winged booster and a winged passenger vehicle. After separation of the two stages, the booster makes a controlled re-entry and returns to the launch site. According to the current version of the project, SpaceLiner 7-1 (SpaceLiner-7.1), the vehicle should be brought to an altitude of 75 km and then released to fly the descent path. In the perspective that the SpaceLiner-7.1 vehicle could be brought to higher altitudes, e.g. 100 km or above, and also for speculative purposes, in this paper the aerodynamic parameters of the SpaceLiner-7.1 vehicle are calculated over the whole transition regime, from continuum low-density to free-molecular flow. Computer simulations have been carried out with three codes: two DSMC codes, DS3V in the altitude interval 100-250 km for the evaluation of the global aerodynamic coefficients and DS2V at an altitude of 60 km for the evaluation of the heat flux and pressure distributions along the vehicle nose, and the DLR HOTSOSE code for the evaluation of the global aerodynamic coefficients in continuum, hypersonic flow at an altitude of 44.6 km. The effectiveness of the flaps at a deflection angle of -35 deg was evaluated over the above-mentioned altitude interval. The vehicle showed longitudinal stability over the whole altitude interval even with no flap deflection. The global bridging formulae proved suitable for the evaluation of the aerodynamic coefficients in the altitude interval 80-100 km, where the computations cannot be performed either by CFD, because the classical equations for the transport coefficients fail, or by DSMC, because of the very high computer resources required, both in core storage (a high number of simulated molecules is needed) and in processing time.
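A hedged sketch of a global bridging formula of this type: coefficients are blended between the continuum and free-molecular limits as a function of Knudsen number. The sine-squared form and the regime limits below are a common textbook choice, not necessarily the exact formula used in the paper.

```python
import numpy as np

def bridge(Kn, C_cont, C_fm, Kn_cont=1e-3, Kn_fm=10.0):
    """Blend an aerodynamic coefficient across the transitional regime."""
    s = (np.log10(Kn) - np.log10(Kn_cont)) / (np.log10(Kn_fm) - np.log10(Kn_cont))
    w = np.clip(np.sin(0.5 * np.pi * s) ** 2, 0.0, 1.0)   # 0 at Kn_cont, 1 at Kn_fm
    return np.where(Kn <= Kn_cont, C_cont,
                    np.where(Kn >= Kn_fm, C_fm, C_cont + (C_fm - C_cont) * w))

Kn = np.logspace(-4, 2, 7)          # continuum through free-molecular
print(bridge(Kn, C_cont=1.2, C_fm=2.1))
```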
NASA Astrophysics Data System (ADS)
Mielikainen, Jarno; Huang, Bormin; Huang, Allen
2015-10-01
The Thompson cloud microphysics scheme is a sophisticated cloud microphysics scheme in the Weather Research and Forecasting (WRF) model. The scheme is very suitable for massively parallel computation, as there are no interactions among horizontal grid points. Compared to the earlier microphysics schemes, the Thompson scheme incorporates a large number of improvements. Thus, we have optimized the speed of this important part of WRF. Intel Many Integrated Core (MIC) ushers in a new era of supercomputing speed, performance, and compatibility. It allows developers to run code at trillions of calculations per second using a familiar programming model. In this paper, we present our results of optimizing the Thompson microphysics scheme on Intel Many Integrated Core Architecture (MIC) hardware. The Intel Xeon Phi coprocessor is the first product based on the Intel MIC architecture, and it consists of up to 61 cores connected by a high-performance on-die bidirectional interconnect. The coprocessor supports all important Intel development tools, so the development environment is familiar to a vast number of CPU developers. Getting maximum performance out of MICs, however, requires novel optimization techniques. New optimizations for an updated Thompson scheme are discussed in this paper. The optimizations improved the performance of the original Thompson code on the Xeon Phi 7120P by a factor of 1.8x. Furthermore, the same optimizations improved the performance of the Thompson scheme on a dual-socket configuration of eight-core Intel Xeon E5-2670 CPUs by a factor of 1.8x compared to the original Thompson code.
How collaboration in therapy becomes therapeutic: the therapeutic collaboration coding system.
Ribeiro, Eugénia; Ribeiro, António P; Gonçalves, Miguel M; Horvath, Adam O; Stiles, William B
2013-09-01
The quality and strength of the therapeutic collaboration, the core of the alliance, is reliably associated with positive therapy outcomes. The urgent challenge for clinicians and researchers is constructing a conceptual framework to integrate the dialectical work that fosters collaboration, with a model of how clients make progress in therapy. We propose a conceptual account of how collaboration in therapy becomes therapeutic. In addition, we report on the construction of a coding system - the therapeutic collaboration coding system (TCCS) - designed to analyse and track on a moment-by-moment basis the interaction between therapist and client. Preliminary evidence is presented regarding the coding system's psychometric properties. The TCCS evaluates each speaking turn and assesses whether and how therapists are working within the client's therapeutic zone of proximal development, defined as the space between the client's actual therapeutic developmental level and their potential developmental level that can be reached in collaboration with the therapist. We applied the TCCS to five cases: a good and a poor outcome case of narrative therapy, a good and a poor outcome case of cognitive-behavioural therapy, and a dropout case of narrative therapy. The TCCS offers markers that may help researchers better understand the therapeutic collaboration on a moment-to-moment basis and may help therapists better regulate the relationship. © 2012 The British Psychological Society.
Block-Parallel Data Analysis with DIY2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morozov, Dmitriy; Peterka, Tom
DIY2 is a programming model and runtime for block-parallel analytics on distributed-memory machines. Its main abstraction is block-structured data parallelism: data are decomposed into blocks; blocks are assigned to processing elements (processes or threads); computation is described as iterations over these blocks, and communication between blocks is defined by reusable patterns. By expressing computation in this general form, the DIY2 runtime is free to optimize the movement of blocks between slow and fast memories (disk and flash vs. DRAM) and to concurrently execute blocks residing in memory with multiple threads. This enables the same program to execute in-core, out-of-core, serial, parallel, single-threaded, multithreaded, or combinations thereof. This paper describes the implementation of the main features of the DIY2 programming model and optimizations to improve performance. DIY2 is evaluated on benchmark test cases to establish baseline performance for several common patterns and on larger complete analysis codes running on large-scale HPC machines.
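The programming model can be caricatured in a few lines: decompose the data into blocks, assign blocks to processing elements, and iterate a compute callback over them. The Python stand-in below (DIY2 itself is C++) shows only the shape; the names are illustrative, and the block movement between memory hierarchy levels that the real runtime performs is omitted.

```python
from concurrent.futures import ThreadPoolExecutor

class Block:
    """One unit of decomposed data, identified by a global id (gid)."""
    def __init__(self, gid, data):
        self.gid, self.data = gid, data

def compute(block):
    block.data = [x * 2 for x in block.data]   # per-block local work
    return block.gid

# Decompose: 8 blocks of 4 values each, assigned to 4 worker threads.
blocks = [Block(gid, list(range(gid, gid + 4))) for gid in range(8)]
with ThreadPoolExecutor(max_workers=4) as pool:
    done = list(pool.map(compute, blocks))     # iterate computation over blocks
print("processed blocks:", done)
```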
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huml, O.
The objective of this work was to determine the neutron flux density distribution in various places of the training reactor VR-1 Sparrow. The experiment was performed on the new core design C1, composed of the new low-enriched uranium fuel cells IRT-4M (19.7%). This fuel replaced the old high-enriched uranium fuel IRT-3M (36%) within the framework of the RERTR Program in September 2005. The measurement used the neutron activation analysis method with gold wires. The principle of this method is neutron capture in a nucleus of the material forming the activation detector; this capture can transform the nucleus into a radioisotope whose activity can be measured. The absorption cross-section values were evaluated with the MCNP computer code. The gold wires were irradiated in seven different positions in the core C1. All irradiations were performed at reactor power level 1E8 (1 kW thermal). The activity of segments of the irradiated wires was measured by a special automatic device called 'Drat' ('Wire' in English). (author)
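The flux follows from the measured wire activity by inverting the standard activation relation phi = A / (N sigma (1 - exp(-lambda t_irr)) exp(-lambda t_cool)); the numerical values below are illustrative, not the VR-1 measurements.

```python
import numpy as np

N_A, M_AU = 6.022e23, 196.97              # Avogadro, Au molar mass (g/mol)
lam = np.log(2) / (2.694 * 24 * 3600)     # Au-198 decay constant (1/s)

def flux(A_meas, m_g, sigma_barn, t_irr, t_cool):
    """Thermal flux from the activity of a gold wire segment."""
    N = m_g / M_AU * N_A                  # Au-197 atoms in the segment
    sigma = sigma_barn * 1e-24            # barn -> cm^2
    return A_meas / (N * sigma * (1 - np.exp(-lam * t_irr)) * np.exp(-lam * t_cool))

# Illustrative: 10 mg segment, 1 h irradiation, 2 h cooling, 5 kBq measured.
print(f"phi ~ {flux(5e3, 0.01, 98.7, 3600, 7200):.2e} n/cm^2/s")
```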
Decay Heat Removal in GEN IV Gas-Cooled Fast Reactors
Cheng, Lap-Yan; Wei, Thomas Y. C.
2009-01-01
The safety goal of current designs of advanced high-temperature thermal gas-cooled reactors (HTRs) is that no core meltdown would occur in a depressurization event combined with concurrent safety system failures. This study focused on the analysis of passive decay heat removal (DHR) in a GEN IV direct-cycle gas-cooled fast reactor (GFR) that builds on the technology developments of the HTRs. Given the different criteria and design characteristics of the GFR, an approach different from that taken for the HTRs for passive DHR had to be explored. Different design options based on maintaining core flow were evaluated by performing transient analyses of a depressurization accident using the system code RELAP5-3D. The study also reviewed the conceptual design of autonomous systems for shutdown decay heat removal and recommends that future work in this area focus on the potential of Brayton cycle DHRs.
Performance of the MTR core with MOX fuel using the MCNP4C2 code.
Shaaban, Ismail; Albarhoum, Mohamad
2016-08-01
The MCNP4C2 code was used to simulate the MTR-22 MW research reactor and perform the neutronic analysis for a new fuel, namely a MOX (U3O8 & PuO2) fuel dispersed in an Al matrix, for One Neutronic Trap (ONT) and Three Neutronic Traps (TNTs) in its core. The new characteristics were compared to the original characteristics based on the U3O8-Al fuel. Experimental data for the neutronic parameters, including criticality, of the MTR-22 MW reactor with the original U3O8-Al fuel at nominal power were used to validate the calculated values and were found acceptable. The achieved results confirm that the use of MOX fuel in the MTR-22 MW will not degrade the safe operational conditions of the reactor. In addition, the use of MOX fuel in the MTR-22 MW core reduces the uranium fuel enrichment in (235)U and the amount of (235)U loaded in the core by about 34.84% and 15.21% for the ONT and TNTs cases, respectively. Copyright © 2016 Elsevier Ltd. All rights reserved.
Core signaling pathways in human pancreatic cancers revealed by global genomic analyses.
Jones, Siân; Zhang, Xiaosong; Parsons, D Williams; Lin, Jimmy Cheng-Ho; Leary, Rebecca J; Angenendt, Philipp; Mankoo, Parminder; Carter, Hannah; Kamiyama, Hirohiko; Jimeno, Antonio; Hong, Seung-Mo; Fu, Baojin; Lin, Ming-Tseh; Calhoun, Eric S; Kamiyama, Mihoko; Walter, Kimberly; Nikolskaya, Tatiana; Nikolsky, Yuri; Hartigan, James; Smith, Douglas R; Hidalgo, Manuel; Leach, Steven D; Klein, Alison P; Jaffee, Elizabeth M; Goggins, Michael; Maitra, Anirban; Iacobuzio-Donahue, Christine; Eshleman, James R; Kern, Scott E; Hruban, Ralph H; Karchin, Rachel; Papadopoulos, Nickolas; Parmigiani, Giovanni; Vogelstein, Bert; Velculescu, Victor E; Kinzler, Kenneth W
2008-09-26
There are currently few therapeutic options for patients with pancreatic cancer, and new insights into the pathogenesis of this lethal disease are urgently needed. Toward this end, we performed a comprehensive genetic analysis of 24 pancreatic cancers. We first determined the sequences of 23,219 transcripts, representing 20,661 protein-coding genes, in these samples. Then, we searched for homozygous deletions and amplifications in the tumor DNA by using microarrays containing probes for approximately 10^6 single-nucleotide polymorphisms. We found that pancreatic cancers contain an average of 63 genetic alterations, the majority of which are point mutations. These alterations defined a core set of 12 cellular signaling pathways and processes that were each genetically altered in 67 to 100% of the tumors. Analysis of these tumors' transcriptomes with next-generation sequencing-by-synthesis technologies provided independent evidence for the importance of these pathways and processes. Our data indicate that genetically altered core pathways and regulatory processes only become evident once the coding regions of the genome are analyzed in depth. Dysregulation of these core pathways and processes through mutation can explain the major features of pancreatic tumorigenesis.
Multi-dimensional Core-Collapse Supernova Simulations with Neutrino Transport
NASA Astrophysics Data System (ADS)
Pan, Kuo-Chuan; Liebendörfer, Matthias; Hempel, Matthias; Thielemann, Friedrich-Karl
We present multi-dimensional core-collapse supernova simulations using the Isotropic Diffusion Source Approximation (IDSA) for the neutrino transport and a modified potential for general relativity in two different supernova codes: FLASH and ELEPHANT. Due to the complexity of the core-collapse supernova explosion mechanism, simulations require not only high-performance computers and the exploitation of GPUs, but also sophisticated approximations to capture the essential microphysics. We demonstrate that the IDSA is an elegant and efficient neutrino radiation transfer scheme, which is portable to multiple hydrodynamics codes and fast enough to investigate long-term evolutions in two and three dimensions. Simulations with a 40 solar mass progenitor are presented in both FLASH (1D and 2D) and ELEPHANT (3D) as an extreme test condition. It is found that the black hole formation time is delayed in multiple dimensions and we argue that the strong standing accretion shock instability before black hole formation will lead to strong gravitational waves.
Kinetic turbulence simulations at extreme scale on leadership-class systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Bei; Ethier, Stephane; Tang, William
2013-01-01
Reliable predictive simulation capability addressing confinement properties in magnetically confined fusion plasmas is critically important for ITER, a 20-billion-dollar international burning plasma device under construction in France. The complex study of kinetic turbulence, which can severely limit the energy confinement and impact the economic viability of fusion systems, requires simulations at extreme scale for such an unprecedented device size. Our newly optimized, global, ab initio particle-in-cell code solving the nonlinear equations underlying gyrokinetic theory achieves excellent performance with respect to "time to solution" at the full capacity of the IBM Blue Gene/Q on the 786,432 cores of Mira at ALCF and, recently, on the 1,572,864 cores of Sequoia at LLNL. Recent multithreading and domain decomposition optimizations in the new GTC-P code represent critically important software advances for modern, low-memory-per-core systems by enabling routine simulations at unprecedented size (130 million grid points, ITER scale) and resolution (65 billion particles).
Simulation of drift wave instability in field-reversed configurations using global magnetic geometry
NASA Astrophysics Data System (ADS)
Fulton, D. P.; Lau, C. K.; Lin, Z.; Tajima, T.; Holod, I.; the TAE Team
2016-10-01
Minimizing transport in the field-reversed configuration (FRC) is essential to enable FRC-based fusion reactors. Recently, significant progress on advanced beam-driven FRCs in C-2 and C-2U (at Tri Alpha Energy) has provided opportunities to study transport properties using Doppler backscattering (DBS) measurements of turbulent fluctuations and kinetic particle-in-cell simulations of drift waves in realistic equilibria via the Gyrokinetic Toroidal Code (GTC). Both measurements and simulations indicate relatively small fluctuations in the scrape-off layer (SOL). In the FRC core, local, single flux surface simulations reveal strong stabilization, while experiments indicate quiescent but finite fluctuations. One possible explanation is that turbulence may originate in the SOL and propagate at very low levels across the separatrix into the core. To test this hypothesis, a significant effort has been made to develop A New Code (ANC) based on GTC physics formulations, but using cylindrical coordinates that span the magnetic separatrix, covering both the core and the SOL. Here, we present first results from global ANC simulations.
NIRP Core Software Suite v. 1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitener, Dustin Heath; Folz, Wesley; Vo, Duong
The NIRP Core Software Suite is a core set of code that supports multiple applications. It includes miscellaneous base code for data objects, mathematical equations, and user interface components, and the framework includes several fully developed software applications that exist as stand-alone tools to complement other applications. The stand-alone tools are described below. Analyst Manager: an application to manage contact information for people (analysts) who use the software products. This information is often included in generated reports and may be used to identify the owners of calculations. Radionuclide Viewer: an application for viewing the DCFPAK radiological data; complements the Mixture Manager tool. Mixture Manager: an application to create and manage radionuclide mixtures that are commonly used in other applications. High Explosive Manager: an application to manage explosives and their properties. Chart Viewer: an application to view charts of data (e.g., meteorology charts). Other applications may use this framework to create charts specific to their data needs.
Magnetic Reconnections in MAST
NASA Astrophysics Data System (ADS)
Turri, G.; Buttery, R. J.; Hastie, R. J.; Gimblett, C. G.; Cowley, S. C.; Lehane, I.
2004-11-01
In MAST the appearance of a spontaneous snake in the plasma core has many of the properties of a full reconnection. Analysis of SXR and TS data indicates a strongly radiating core with high impurity levels forming before the onset of the snake. Following the appearance of an x-point (island on the q=1 surface) the former core is hypothesised to move off axis and shrink, appearing as a radiative region with flux-tube-like rotating helical structure (the snake). A code has been developed to compare this with a slow full Kadomtsev type reconnection process including effects of impurities, density and temperature perturbations, current profile evolution and transport. The code reproduces many of the trends and effects seen in the data, confirming the event as consistent with full reconnection. The time-scale of the event is also consistent with estimates of hybrid growth times for such a reconnection process. Further analysis will be presented exploring the physics of this process in more detail.
High Performance Radiation Transport Simulations on TITAN
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, Christopher G; Davidson, Gregory G; Evans, Thomas M
2012-01-01
In this paper we describe the Denovo code system. Denovo solves the six-dimensional, steady-state, linear Boltzmann transport equation, of central importance to nuclear technology applications such as reactor core analysis (neutronics), radiation shielding, nuclear forensics and radiation detection. The code features multiple spatial differencing schemes, state-of-the-art linear solvers, the Koch-Baker-Alcouffe (KBA) parallel-wavefront sweep algorithm for inverting the transport operator, a new multilevel energy decomposition method scaling to hundreds of thousands of processing cores, and a modern, novel code architecture that supports straightforward integration of new features. In this paper we discuss the performance of Denovo on the 10-20 petaflop ORNL GPU-based system, Titan. We describe algorithms and techniques used to exploit the capabilities of Titan's heterogeneous compute node architecture and the challenges of obtaining good parallel performance for this sparse hyperbolic PDE solver containing inherently sequential computations. Numerical results demonstrating Denovo performance on early Titan hardware are presented.
Efficient Calculation of Exact Exchange Within the Quantum Espresso Software Package
NASA Astrophysics Data System (ADS)
Barnes, Taylor; Kurth, Thorsten; Carrier, Pierre; Wichmann, Nathan; Prendergast, David; Kent, Paul; Deslippe, Jack
Accurate simulation of condensed matter at the nanoscale requires careful treatment of the exchange interaction between electrons. In the context of plane-wave DFT, these interactions are typically represented through the use of approximate functionals. Greater accuracy can often be obtained through the use of functionals that incorporate some fraction of exact exchange; however, evaluation of the exact exchange potential is often prohibitively expensive. We present an improved algorithm for the parallel computation of exact exchange in Quantum Espresso, an open-source software package for plane-wave DFT simulation. Through the use of aggressive load balancing and on-the-fly transformation of internal data structures, our code exhibits speedups of approximately an order of magnitude for practical calculations. Additional optimizations are presented targeting the many-core Intel Xeon-Phi ``Knights Landing'' architecture, which largely powers NERSC's new Cori system. We demonstrate the successful application of the code to difficult problems, including simulation of water at a platinum interface and computation of the X-ray absorption spectra of transition metal oxides.
Uranus: a rapid prototyping tool for FPGA embedded computer vision
NASA Astrophysics Data System (ADS)
Rosales-Hernández, Victor; Castillo-Jimenez, Liz; Viveros-Velez, Gilberto; Zuñiga-Grajeda, Virgilio; Treviño Torres, Abel; Arias-Estrada, M.
2007-01-01
The starting point for all successful system development is simulation. Performing high-level simulation of a system can help to identify, isolate, and fix design problems. This work presents Uranus, a software tool for the simulation and evaluation of image processing algorithms with support for migrating them to an FPGA environment for algorithm acceleration and embedded processing purposes. The tool includes an integrated library of previously coded software operators and provides the necessary support to read and display image sequences as well as video files. The user can employ the previously compiled soft-operators in a high-level processing chain and code his or her own operators. In addition to the prototyping tool, Uranus offers an FPGA-based hardware architecture with the same organization as the software prototyping part. The hardware architecture contains a library of FPGA IP cores for image processing that are connected to a PowerPC-based system. The Uranus environment is intended for rapid prototyping of machine vision and migration to an FPGA accelerator platform, and it is distributed for academic purposes.
GPU-accelerated phase-field simulation of dendritic solidification in a binary alloy
NASA Astrophysics Data System (ADS)
Yamanaka, Akinori; Aoki, Takayuki; Ogawa, Satoi; Takaki, Tomohiro
2011-03-01
The phase-field simulation of dendritic solidification of a binary alloy has been accelerated by using a graphics processing unit (GPU). To perform the phase-field simulation of alloy solidification on the GPU, a program code was developed with the Compute Unified Device Architecture (CUDA). In this paper, the implementation technique of the phase-field model on the GPU is presented. We also evaluated the acceleration performance of the three-dimensional solidification simulation by using a single NVIDIA TESLA C1060 GPU and the developed program code. The results showed that the GPU calculation for 576^3 computational grid points achieved a performance of 170 GFLOPS by utilizing the shared memory as a software-managed cache. Furthermore, it is demonstrated that the computation with the GPU is 100 times faster than that with a single CPU core. From the obtained results, we confirmed the feasibility of a real-time, fully three-dimensional phase-field simulation of microstructure evolution on a personal desktop computer.
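The computational core of such a solver is an explicit finite-difference stencil update, which maps naturally onto one GPU thread per grid point with the stencil neighborhood staged in shared memory. As a hedged illustration only, here is a minimal serial C sketch of a simplified Allen-Cahn-type update; the grid size, coefficients, and double-well form are assumptions of the sketch, not the authors' model:

    #include <string.h>

    #define N 64   /* illustrative grid size, far smaller than the paper's 576^3 */

    static double phi[N][N][N], phin[N][N][N];

    /* One explicit Euler step: Laplacian plus double-well driving force.
       A CUDA port assigns one (i,j,k) per thread and stages the phi
       neighborhood in shared memory as a software-managed cache. */
    void step(double dt, double dx, double M, double W)
    {
        for (int i = 1; i < N - 1; i++)
            for (int j = 1; j < N - 1; j++)
                for (int k = 1; k < N - 1; k++) {
                    double p   = phi[i][j][k];
                    double lap = (phi[i+1][j][k] + phi[i-1][j][k]
                                + phi[i][j+1][k] + phi[i][j-1][k]
                                + phi[i][j][k+1] + phi[i][j][k-1]
                                - 6.0 * p) / (dx * dx);
                    /* derivative of the double-well free energy W*p^2*(1-p)^2 */
                    double dfdp = 2.0 * W * p * (1.0 - p) * (1.0 - 2.0 * p);
                    phin[i][j][k] = p + dt * M * (lap - dfdp);
                }
        memcpy(phi, phin, sizeof phi);   /* swap in the updated field */
    }

Because every grid point reads only its six neighbors, the update is embarrassingly parallel within a time step, which is what makes the shared-memory caching strategy reported above pay off.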
NASA Astrophysics Data System (ADS)
Mielikainen, Jarno; Huang, Bormin; Huang, Allen H.-L.
2015-05-01
The most widely used community weather forecast and research model in the world is the Weather Research and Forecast (WRF) model. Two distinct varieties of WRF exist. The one of interest here, the Advanced Research WRF (ARW), is an experimental, advanced research version featuring very high resolution. The WRF Nonhydrostatic Mesoscale Model (WRF-NMM) has been designed for forecasting operations. WRF consists of dynamics code and several physics modules. The WRF-ARW core is based on an Eulerian solver for the fully compressible nonhydrostatic equations. In this paper, we optimize a meridional (north-south direction) advection subroutine for the Intel Xeon Phi coprocessor. Advection is one of the most time-consuming routines in the ARW dynamics core. It advances the explicit perturbation horizontal momentum equations by adding in the large-time-step tendency along with the small-time-step pressure gradient tendency. We describe the challenges we met during the development of a high-speed dynamics code subroutine for the MIC architecture, and we discuss lessons learned from the code optimization process. The results show that the optimizations improved performance of the original code on the Xeon Phi 7120P by a factor of 1.2x.
A NEW HYBRID N-BODY-COAGULATION CODE FOR THE FORMATION OF GAS GIANT PLANETS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bromley, Benjamin C.; Kenyon, Scott J., E-mail: bromley@physics.utah.edu, E-mail: skenyon@cfa.harvard.edu
2011-04-20
We describe an updated version of our hybrid N-body-coagulation code for planet formation. In addition to the features of our 2006-2008 code, our treatment now includes algorithms for the one-dimensional evolution of the viscous disk, the accretion of small particles in planetary atmospheres, gas accretion onto massive cores, and the response of N-bodies to the gravitational potential of the gaseous disk and the swarm of planetesimals. To validate the N-body portion of the algorithm, we use a battery of tests in planetary dynamics. As a first application of the complete code, we consider the evolution of Pluto-mass planetesimals in a swarm of 0.1-1 cm pebbles. In a typical evolution time of 1-3 Myr, our calculations transform 0.01-0.1 M_sun disks of gas and dust into planetary systems containing super-Earths, Saturns, and Jupiters. Low-mass planets form more often than massive planets; disks with smaller α form more massive planets than disks with larger α. For Jupiter-mass planets, masses of solid cores are 10-100 M_⊕.
Framework GRASP: a routine library for optimized processing of aerosol remote sensing observations
NASA Astrophysics Data System (ADS)
Fuertes, David; Torres, Benjamin; Dubovik, Oleg; Litvinov, Pavel; Lapyonok, Tatyana; Ducos, Fabrice; Aspetsberger, Michael; Federspiel, Christian
We present the development of a framework for the Generalized Retrieval of Aerosol and Surface Properties (GRASP) developed by Dubovik et al. (2011). The framework is a source-code project that strengthens the value of the GRASP inversion algorithm by transforming it into a library that can be used by a group of customized application modules. The functions of the independent modules include managing the configuration of the code execution, as well as preparation of the input and output. The framework provides a number of advantages in the utilization of the code. First, it loads data into the core of the scientific code directly from memory, without passing through intermediate files on disk. Second, the framework allows consecutive use of the inversion code without re-initialization of the core routine when new input is received. These features are essential for optimizing the performance of data production when processing large observation sets, such as satellite images, with GRASP. Furthermore, the framework is a very convenient tool for further development, because this open-source platform is easily extended with new features; for example, it can accommodate loading raw data directly into the inversion code from a specific instrument not included in the default settings of the software. Finally, it is demonstrated that, from the user's point of view, the framework provides a flexible, powerful and informative configuration system.
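The two performance features just described, loading observations from memory instead of intermediate files and reusing the initialized core across consecutive inversions, amount to a library-style interface. A hedged sketch in C of what such an interface could look like (the grasp_* names and signatures are hypothetical illustrations, not the actual GRASP API):

    #include <stddef.h>

    typedef struct grasp_ctx grasp_ctx;   /* opaque handle to the initialized core */

    /* Read the run configuration and initialize the inversion core once. */
    grasp_ctx *grasp_init(const char *config_path);

    /* Invert one observation segment passed directly from memory,
       with no intermediate files on disk. */
    int grasp_invert(grasp_ctx *ctx, const double *measurements, size_t n_meas,
                     double *retrieved, size_t n_retr);

    void grasp_free(grasp_ctx *ctx);

    /* Driver for a stream of segments (e.g. tiles of a satellite image):
       the core is initialized once and reused for every inversion. */
    int process_stream(const char *cfg, const double *const *segments,
                       size_t nseg, size_t n_meas, double *out, size_t n_retr)
    {
        grasp_ctx *ctx = grasp_init(cfg);
        if (!ctx) return -1;
        for (size_t s = 0; s < nseg; s++)
            if (grasp_invert(ctx, segments[s], n_meas,
                             out + s * n_retr, n_retr) != 0) {
                grasp_free(ctx);
                return -1;
            }
        grasp_free(ctx);
        return 0;
    }

The point of the design is visible in the driver: initialization cost is paid once, and each segment flows through the core without touching the disk.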
Development of high-fidelity multiphysics system for light water reactor analysis
NASA Astrophysics Data System (ADS)
Magedanz, Jeffrey W.
There has been a tendency in recent years toward greater heterogeneity in reactor cores, due to the use of mixed-oxide (MOX) fuel, burnable absorbers, and longer cycles with consequently higher fuel burnup. The resulting asymmetry of the neutron flux and energy spectrum between regions with different compositions creates a need to account for the directional dependence of the neutron flux, instead of using the traditional diffusion approximation. Furthermore, the presence of both MOX and high-burnup fuel in the core increases the complexity of the heat conduction. The heat transfer properties of the fuel pellet change with irradiation, and the thermal and mechanical expansion of the pellet and cladding strongly affect the size of the gap between them and its consequent thermal resistance. These operational tendencies require higher-fidelity multi-physics modeling capabilities, and this need is addressed by the developments performed within this PhD research. The dissertation describes the development of a High-Fidelity Multi-Physics System for Light Water Reactor Analysis. It consists of three coupled codes: CTF for thermal hydraulics, TORT-TD for neutron kinetics, and FRAPTRAN for fuel performance. It is meant to address these modeling challenges in three ways: (1) by resolving the state of the system at the level of each fuel pin, rather than homogenizing entire fuel assemblies, (2) by using the multi-group discrete ordinates method to account for the directional dependence of the neutron flux, and (3) by using a fuel-performance code, rather than a thermal-hydraulics code's simplified fuel model, to account for the material behavior of the fuel and its feedback to the hydraulic and neutronic behavior of the system. While the first two are improvements, the third, the use of a fuel-performance code for feedback, is an innovation of this PhD project. Also important to this work is the manner in which such a coupling is written. While coupling involves combining codes into a single executable, the codes are usually still developed and maintained separately. It should thus be a design objective to minimize the changes to those codes and to keep the changes to each code free of dependence on the details of the other codes. This eases the incorporation of new versions of each code into the coupling, as well as the re-use of parts of the coupling with different codes. In order to fulfill this objective, an interface for each code was created in the form of an object-oriented abstract data type. Object-oriented programming is an effective method for enforcing a separation between different parts of a program and clarifying the communication between them. The interfaces enable the main program to control the codes in terms of high-level functionality. This differs from the established practice of a master/slave relationship, in which the slave code is incorporated into the master code as a set of subroutines. While this PhD research continues previous work on a coupling between CTF and TORT-TD, it makes two major original contributions: (1) using a fuel-performance code, instead of a thermal-hydraulics code's simplified built-in models, to model the feedback from the fuel rods, and (2) the design of an object-oriented interface as an innovative method to interact with a coupled code in a high-level, easily understandable manner.
The resulting code system will serve as a tool to study the question of under what conditions, and to what extent, these higher-fidelity methods will provide benefits to reactor core analysis. (Abstract shortened by UMI.)
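In concrete terms, the abstract data type described above can be pictured as a table of high-level operations behind which each physics code hides its internals. A minimal sketch of the idea in C (a function-pointer interface; the names and the field-exchange granularity are illustrative assumptions, not the thesis code):

    /* Abstract interface: the coupling driver sees only these operations,
       never the internals of the neutronics, thermal-hydraulics, or
       fuel-performance codes. */
    typedef struct {
        void *state;                              /* code-private data  */
        void (*advance)(void *state, double dt);  /* take one time step */
        void (*export_field)(void *state, const char *name,
                             double *buf, int n);
        void (*import_field)(void *state, const char *name,
                             const double *buf, int n);
    } physics_code;

    /* One coupled step at pin-level resolution: neutronics produces pin
       powers, the fuel-performance code returns fuel temperatures. */
    void coupled_step(physics_code *neutronics, physics_code *fuel,
                      double dt, double *buf, int npins)
    {
        neutronics->advance(neutronics->state, dt);
        neutronics->export_field(neutronics->state, "pin_power", buf, npins);
        fuel->import_field(fuel->state, "pin_power", buf, npins);
        fuel->advance(fuel->state, dt);
        fuel->export_field(fuel->state, "fuel_temperature", buf, npins);
        neutronics->import_field(neutronics->state, "fuel_temperature",
                                 buf, npins);
        /* the thermal-hydraulics exchange would follow the same pattern */
    }

Because the driver touches only the interface, a new version of any one code, or a substitute code, can slot in without edits elsewhere, which is exactly the design objective stated in the abstract.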
Statistical core design methodology using the VIPRE thermal-hydraulics code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lloyd, M.W.; Feltus, M.A.
1994-12-31
This Penn State Statistical Core Design Methodology (PSSCDM) is unique because it includes not only the EPRI correlation/test data standard deviation but also the computational uncertainty for the VIPRE code model and the new composite box design correlation. The resultant PSSCDM equation mimics the EPRI DNBR correlation results well, with an uncertainty of 0.0389. The combined uncertainty yields a new DNBR limit of 1.18 that will provide more plant operational flexibility. This methodology and its associated correlation and unique coefficients are for a very particular VIPRE model; thus, the correlation is specifically linked with the lumped channel and subchannel layout. The results of this research and methodology, however, can be applied to plant-specific VIPRE models.
A solid reactor core thermal model for nuclear thermal rockets
NASA Astrophysics Data System (ADS)
Rider, William J.; Cappiello, Michael W.; Liles, Dennis R.
1991-01-01
A Helium/Hydrogen Cooled Reactor Analysis (HERA) computer code has been developed. HERA has the ability to model arbitrary geometries in three dimensions, which allows the user to easily analyze reactor cores constructed of prismatic graphite elements. The code accounts for heat generation in the fuel, control rods, and other structures; conduction and radiation across gaps; convection to the coolant; and a variety of boundary conditions. The numerical solution scheme has been optimized for vector computers, making long transient analyses economical. Time integration is either explicit or implicit, which allows the use of the model to accurately calculate both short- or long-term transients with an efficient use of computer time. Both the basic spatial and temporal integration schemes have been benchmarked against analytical solutions.
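The explicit/implicit option mentioned for the time integration is the usual trade-off for conduction problems: explicit steps are cheap per step but stability-limited, while implicit steps permit long transients at the cost of a linear solve. A generic 1D sketch in C of both updates (illustrative only, not HERA's discretization; r = alpha*dt/dx^2 with fixed-temperature boundaries):

    /* Explicit (FTCS) step: stable only for r <= 0.5, so it suits
       short transients with small time steps. */
    void explicit_step(const double *T, double *Tn, int n, double r)
    {
        for (int i = 1; i < n - 1; i++)
            Tn[i] = T[i] + r * (T[i+1] - 2.0 * T[i] + T[i-1]);
        Tn[0] = T[0];
        Tn[n-1] = T[n-1];
    }

    /* Implicit (backward Euler) step: unconditionally stable, so
       long-term transients can take large dt; the price is one
       tridiagonal (Thomas) solve per step. Updates T in place. */
    void implicit_step(double *T, int n, double r)
    {
        double c[n], d[n];           /* forward-sweep coefficients */
        c[0] = 0.0;
        d[0] = T[0];                 /* fixed boundary value       */
        for (int i = 1; i < n - 1; i++) {
            double m = 1.0 + 2.0 * r - r * c[i-1];
            c[i] = r / m;
            d[i] = (T[i] + r * d[i-1]) / m;
        }
        for (int i = n - 2; i >= 1; i--)   /* back substitution    */
            T[i] = d[i] + c[i] * T[i+1];
    }

The choice the abstract describes is then a per-transient decision: explicit stepping for fast transients where small dt is needed anyway, implicit stepping when the transient is long compared with the stability limit.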
[Series: Medical Applications of the PHITS Code (2): Acceleration by Parallel Computing].
Furuta, Takuya; Sato, Tatsuhiko
2015-01-01
Time-consuming Monte Carlo dose calculation has become feasible owing to the development of computer technology. However, this recent development rests on the emergence of multi-core high-performance computers, so parallel computing has become a key to achieving good performance of software programs. The Monte Carlo simulation code PHITS contains two parallel computing functions: distributed-memory parallelization using protocols of the Message Passing Interface (MPI) and shared-memory parallelization using Open Multi-Processing (OpenMP) directives. Users can choose between the two functions according to their needs. This paper explains the two functions, with their advantages and disadvantages. Some test applications are also provided to show their performance using a typical multi-core high-performance workstation.
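The distributed-memory versus shared-memory split described here is the standard hybrid pattern for Monte Carlo transport: particle histories are independent, so ranks and threads each take a share of them and only the tallies are combined at the end. A generic hedged sketch in C of that pattern (illustrating the idea, not PHITS internals; run_history stands in for the physics):

    #include <mpi.h>

    /* Assumed user-supplied routine: transports one particle history and
       returns its scored quantity (e.g. deposited dose). */
    double run_history(long id);

    double run_batch(long n_total)
    {
        int rank, nranks;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);

        double local = 0.0;
        /* shared-memory layer: OpenMP threads within one rank's memory */
        #pragma omp parallel for reduction(+:local)
        for (long i = rank; i < n_total; i += nranks)  /* distributed layer */
            local += run_history(i);

        /* combine tallies across separate address spaces */
        double global = 0.0;
        MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM,
                      MPI_COMM_WORLD);
        return global / (double)n_total;
    }

The two layers compose cleanly precisely because histories never communicate, which is why a code like PHITS can offer either mechanism and let the user match it to the hardware.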
DOE Office of Scientific and Technical Information (OSTI.GOV)
Renzi, N.E.; Roseberry, R.J.
The experimental measurements and nuclear analysis of a uniformly loaded, unpoisoned slab core with a partially inserted hafnium rod are described. Comparisons of experimental data with calculated results of the UFO code and flux synthesis techniques are given. It was concluded that one of the flux synthesis techniques and the UFO code are able to predict flux distributions to within approximately 5% of experiment for most cases. An error of approximately 10% was found in the synthesis technique for a channel near the partially inserted rod. The various calculations were able to predict pulsed-neutron shutdown measurements to only approximately 30%.
Progenitors of Core-Collapse Supernovae
NASA Astrophysics Data System (ADS)
Hirschi, R.; Arnett, D.; Cristini, A.; Georgy, C.; Meakin, C.; Walkington, I.
2017-02-01
Massive stars have a strong impact on their surroundings, in particular when they produce a core-collapse supernova at the end of their evolution. In these proceedings, we review the general evolution of massive stars and their properties at collapse as well as the transition between massive and intermediate-mass stars. We also summarise the effects of metallicity and rotation. We then discuss some of the major uncertainties in the modelling of massive stars, with a particular emphasis on the treatment of convection in 1D stellar evolution codes. Finally, we present new 3D hydrodynamic simulations of convection in carbon burning and list key points to take from 3D hydrodynamic studies for the development of new prescriptions for convective boundary mixing in 1D stellar evolution codes.
Advanced Test Reactor Core Modeling Update Project Annual Report for Fiscal Year 2012
DOE Office of Scientific and Technical Information (OSTI.GOV)
David W. Nigg, Principal Investigator; Kevin A. Steuhm, Project Manager
Legacy computational reactor physics software tools and protocols currently used for support of Advanced Test Reactor (ATR) core fuel management and safety assurance, and to some extent, experiment management, are inconsistent with the state of modern nuclear engineering practice, and are difficult, if not impossible, to properly verify and validate (V&V) according to modern standards. Furthermore, the legacy staff knowledge required for application of these tools and protocols from the 1960s and 1970s is rapidly being lost due to staff turnover and retirements. In late 2009, the Idaho National Laboratory (INL) initiated a focused effort, the ATR Core Modeling Update Project, to address this situation through the introduction of modern high-fidelity computational software and protocols. This aggressive computational and experimental campaign will have a broad strategic impact on the operation of the ATR, both in terms of improved computational efficiency and accuracy for support of ongoing DOE programs as well as in terms of national and international recognition of the ATR National Scientific User Facility (NSUF). The ATR Core Modeling Update Project, targeted for full implementation in phase with the next anticipated ATR Core Internals Changeout (CIC) in the 2014-2015 time frame, began during the last quarter of Fiscal Year 2009, and has just completed its third full year. Key accomplishments so far have encompassed both computational as well as experimental work. A new suite of stochastic and deterministic transport theory based reactor physics codes and their supporting nuclear data libraries (HELIOS, KENO6/SCALE, NEWT/SCALE, ATTILA, and an extended implementation of MCNP5) has been installed at the INL under various licensing arrangements. Corresponding models of the ATR and ATRC are now operational with all five codes, demonstrating the basic feasibility of the new code packages for their intended purpose. Of particular importance, a set of as-run core depletion HELIOS calculations for all ATR cycles since August 2009, Cycle 145A through Cycle 151B, was successfully completed during 2012. This major effort supported a decision late in the year to proceed with the phased incorporation of the HELIOS methodology into the ATR Core Safety Analysis Package (CSAP) preparation process, in parallel with the established PDQ-based methodology, beginning late in Fiscal Year 2012. Acquisition of the advanced SERPENT (VTT-Finland) and MC21 (DOE-NR) Monte Carlo stochastic neutronics simulation codes was also initiated during the year and some initial applications of SERPENT to ATRC experiment analysis were demonstrated. These two new codes will offer significant additional capability, including the possibility of full-3D Monte Carlo fuel management support capabilities for the ATR at some point in the future. Finally, a capability for rigorous sensitivity analysis and uncertainty quantification based on the TSUNAMI system has been implemented and initial computational results have been obtained. This capability will have many applications as a tool for understanding the margins of uncertainty in the new models as well as for validation experiment design and interpretation.
Geoconservation and scientific rock sampling: Call for geoethical education strategies
NASA Astrophysics Data System (ADS)
Druguet, Elena; Passchier, Cees W.; Pennacchioni, Giorgio; Carreras, Jordi
2013-04-01
Some geological outcrops have a special scientific or educational value, represent a geological type locality, and/or have considerable aesthetic/photographic value. Such important outcrops require appropriate management to safeguard them from potentially damaging and destructive activities. Damage done to such rock exposures can include drill sampling by geologists undertaken in the name of scientific advancement. In order to illustrate the serious damage scientific sampling can do, we give some examples of outcrops from Western Europe, North America and South Africa, important to structural geology and petrology, where sampling was undertaken by means of drilling methods without any protective measures. After the rock coring, the aesthetic and photographic value of these delicate outcrops has decreased considerably. Unfortunately, regulation and protection mechanisms and codes of conduct can be ineffective. The many resources of geological information available to the geoscientist community (e.g. via the Internet, such as outcrops stored in websites like "Outcropedia") promote access to sites of geological interest, but can also have a negative effect on their conservation. Geoethical education on rock sampling is therefore critical for conservation of the geological heritage. Geoethical principles and educational actions should be promoted at different levels to improve the development of the geological sciences and to enhance conservation of important geological sites. Ethical protocols and codes of conduct should include geoconservation issues and be explicit about responsible sampling. Guided and inspired by the UK Geologists' Association "Code of Conduct for Rock Coring" (MacFadyen, 2010), we present a tentative outline requesting responsible behaviour: » Drill sampling is particularly threatening because it has a negative visual impact, whilst it is often unnecessary. Before sampling, geologists should ask themselves: "is drill sampling necessary for the study being carried out?" » Do not take samples from the centre of a geological type locality or a site of special scientific or didactic interest or aesthetic/photographic value. If an outcrop is spectacular enough to be photographed, then you should not core or sample the rock face that has been recorded. The same applies to outstanding outcrops stored in websites. » Sample other parts of the same or a neighbouring outcrop where there is less impact. Core samples must be discreet in location; take cores from the least exposed, least spectacular part of an outcrop and try to plug the holes using the outer end of the core, if possible. » Before sampling, ask experts and authorities (e.g. Natural Reserve or National Park managers if the area is protected) for advice and permission. References: MacFadyen, C.C.J., 2010. The vandalizing effects of irresponsible core sampling: a call for a new code of conduct. Geology Today 26, 146-151. Outcropedia: http://www.outcropedia.org/
NASA Astrophysics Data System (ADS)
Schultz, A.
2010-12-01
3D forward solvers lie at the core of inverse formulations used to image the variation of electrical conductivity within the Earth's interior. This property is associated with variations in temperature, composition, phase, presence of volatiles, and in specific settings, the presence of groundwater, geothermal resources, oil/gas or minerals. The high cost of 3D solutions has been a stumbling block to wider adoption of 3D methods. Parallel algorithms for modeling frequency domain 3D EM problems have not achieved wide scale adoption, with emphasis on fairly coarse grained parallelism using MPI and similar approaches. The communications bandwidth as well as the latency required to send and receive network communication packets is a limiting factor in implementing fine grained parallel strategies, inhibiting wide adoption of these algorithms. Leading Graphics Processing Unit (GPU) companies now produce GPUs with hundreds of GPU processor cores per die. The footprint, in silicon, of the GPU's restricted instruction set is much smaller than the general purpose instruction set required of a CPU. Consequently, the density of processor cores on a GPU can be much greater than on a CPU. GPUs also have local memory, registers and high speed communication with host CPUs, usually through PCIe type interconnects. The extremely low cost and high computational power of GPUs provides the EM geophysics community with an opportunity to achieve fine grained (i.e. massive) parallelization of codes on low cost hardware. The current generation of GPUs (e.g. NVidia Fermi) provides 3 billion transistors per chip die, with nearly 500 processor cores and up to 6 GB of fast (DDR5) GPU memory. This latest generation of GPU supports fast hardware double precision (64 bit) floating point operations of the type required for frequency domain EM forward solutions. Each Fermi GPU board can sustain nearly 1 TFLOP in double precision, and multiple boards can be installed in the host computer system. We describe our ongoing efforts to achieve massive parallelization on a novel hybrid GPU testbed machine currently configured with 12 Intel Westmere Xeon CPU cores (or 24 parallel computational threads) with 96 GB DDR3 system memory, 4 GPU subsystems which in aggregate contain 960 NVidia Tesla GPU cores with 16 GB dedicated DDR3 GPU memory, and a second interleaved bank of 4 GPU subsystems containing in aggregate 1792 NVidia Fermi GPU cores with 12 GB dedicated DDR5 GPU memory. We are applying domain decomposition methods to a modified version of Weiss' (2001) 3D frequency domain full physics EM finite difference code, an open source GPL licensed f90 code available for download from www.OpenEM.org. This will be the core of a new hybrid 3D inversion that parallelizes frequencies across CPUs and individual forward solutions across GPUs. We describe progress made in modifying the code to use direct solvers in GPU cores dedicated to each small subdomain, iteratively improving the solution by matching adjacent subdomain boundary solutions, rather than iterative Krylov space sparse solvers as currently applied to the whole domain.
Parallelization of a Monte Carlo particle transport simulation code
NASA Astrophysics Data System (ADS)
Hadjidoukas, P.; Bousis, C.; Emfietzoglou, D.
2010-05-01
We have developed a high-performance version of the Monte Carlo particle transport simulation code MC4. The original application code, developed in Visual Basic for Applications (VBA) for Microsoft Excel, was first rewritten in the C programming language to improve code portability. Several pseudo-random number generators have also been integrated and studied. The new MC4 version was then parallelized for shared- and distributed-memory multiprocessor systems using the Message Passing Interface. Two parallel pseudo-random number generator libraries (SPRNG and DCMT) have been seamlessly integrated. The performance speedup of parallel MC4 has been studied on a variety of parallel computing architectures, including an Intel Xeon server with 4 dual-core processors, a Sun cluster consisting of 16 nodes of 2 dual-core AMD Opteron processors, and a 200 dual-processor HP cluster. For large problem sizes, which are limited only by the physical memory of the multiprocessor server, the speedup results are almost linear on all systems. We have validated the parallel implementation against the serial VBA and C implementations using the same random number generator. Our experimental results on the transport and energy loss of electrons in a water medium show that the serial and parallel codes are equivalent in accuracy. The present improvements allow the study of higher particle energies with the use of more accurate physical models, and improve statistics, as more particle tracks can be simulated in a short response time.
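A key detail in such a parallelization is that each process must draw from a decorrelated random number stream, which is the service libraries like SPRNG and DCMT provide. A simplified stand-in in C showing the idea of per-rank stream seeding (the SplitMix64 mixer is a generic choice made for this sketch, not what MC4 uses):

    #include <stdint.h>

    /* Per-process generator state; streams are decorrelated by giving
       each rank its own well-mixed seed. */
    static uint64_t s;

    static uint64_t splitmix64(uint64_t *x)
    {
        uint64_t z = (*x += 0x9E3779B97F4A7C15ULL);
        z = (z ^ (z >> 30)) * 0xBF58476D1CE4E5B9ULL;
        z = (z ^ (z >> 27)) * 0x94D049BB133111EBULL;
        return z ^ (z >> 31);
    }

    /* Called once per MPI rank with the common base seed. */
    void rng_seed(uint64_t base_seed, int rank)
    {
        uint64_t x = base_seed + (uint64_t)rank * 0x9E3779B97F4A7C15ULL;
        s = splitmix64(&x);           /* distinct state per rank */
    }

    /* Uniform deviate in [0,1), built from the top 53 bits. */
    double rng_uniform(void)
    {
        return (double)(splitmix64(&s) >> 11) * (1.0 / 9007199254740992.0);
    }

Validating the parallel code against the serial one with the same generator, as the abstract describes, is the standard way to confirm that the stream splitting itself introduces no bias.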
Xia, Yidong; Lou, Jialin; Luo, Hong; ...
2015-02-09
Here, an OpenACC directive-based graphics processing unit (GPU) parallel scheme is presented for solving the compressible Navier–Stokes equations on 3D hybrid unstructured grids with a third-order reconstructed discontinuous Galerkin method. The developed scheme requires the minimum code intrusion and algorithm alteration for upgrading a legacy solver with the GPU computing capability at very little extra effort in programming, which leads to a unified and portable code development strategy. A face coloring algorithm is adopted to eliminate the memory contention because of the threading of internal and boundary face integrals. A number of flow problems are presented to verify the implementation of the developed scheme. Timing measurements were obtained by running the resulting GPU code on one Nvidia Tesla K20c GPU card (Nvidia Corporation, Santa Clara, CA, USA) and compared with those obtained by running the equivalent Message Passing Interface (MPI) parallel CPU code on a compute node (consisting of two AMD Opteron 6128 eight-core CPUs (Advanced Micro Devices, Inc., Sunnyvale, CA, USA)). Speedup factors of up to 24× and 1.6× for the GPU code were achieved with respect to one and 16 CPU cores, respectively. The numerical results indicate that this OpenACC-based parallel scheme is an effective and extensible approach to port unstructured high-order CFD solvers to GPU computing.
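The abstract's two key ingredients, minimal code intrusion via directives and face coloring to remove memory contention, can be shown schematically in C with OpenACC. This is a hedged sketch of the pattern, not the actual solver's loops; the arrays are assumed to be already resident on the device via an enclosing data region:

    /* Faces are pre-grouped into colors such that no two faces of the
       same color touch the same cell; each color can then thread safely. */
    void face_integrals(int ncolors, const int *color_start,
                        const int *face_left, const int *face_right,
                        const double *flux, double *residual)
    {
        for (int c = 0; c < ncolors; c++) {
            int lo = color_start[c], hi = color_start[c + 1];
            /* The directive is ignored by a plain C compiler, so the CPU
               build is unchanged -- the "minimum code intrusion" property. */
            #pragma acc parallel loop present(face_left, face_right, flux, residual)
            for (int f = lo; f < hi; f++) {
                residual[face_left[f]]  += flux[f];  /* no two faces of one */
                residual[face_right[f]] -= flux[f];  /* color share a cell  */
            }
        }
    }

Serializing over colors while threading within each color trades a little parallelism for race-free scatter updates, which is exactly the contention problem the paper's coloring algorithm addresses.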
Benchmarking GPU and CPU codes for Heisenberg spin glass over-relaxation
NASA Astrophysics Data System (ADS)
Bernaschi, M.; Parisi, G.; Parisi, L.
2011-06-01
We present a set of possible implementations for Graphics Processing Units (GPU) of the Over-relaxation technique applied to the 3D Heisenberg spin glass model. The results show that a carefully tuned code can achieve more than 100 GFlops/s of sustained performance and update a single spin in about 0.6 nanoseconds. A multi-hit technique that exploits the GPU shared memory further reduces this time. Such results are compared with those obtained by means of a highly-tuned vector-parallel code on latest generation multi-core CPUs.
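The over-relaxation move benchmarked here is the standard microcanonical update for Heisenberg spins: each spin is reflected about its local molecular field, which leaves the energy unchanged and is embarrassingly parallel over one sublattice at a time. A minimal C version of the single-spin move (the data layout is an assumption of the sketch):

    /* Reflect spin s about its local field h:  s' = 2 (s.h / h.h) h - s.
       The Heisenberg energy -s.h is invariant under this reflection, so
       the move decorrelates configurations at fixed energy, which is why
       it maps so well onto massive GPU/CPU parallelism. */
    void overrelax(double s[3], const double h[3])
    {
        double sh = s[0]*h[0] + s[1]*h[1] + s[2]*h[2];
        double hh = h[0]*h[0] + h[1]*h[1] + h[2]*h[2];
        double f  = 2.0 * sh / hh;
        for (int k = 0; k < 3; k++)
            s[k] = f * h[k] - s[k];
    }

Since the update of each spin depends only on its neighbors through h, a checkerboard (sublattice) sweep lets thousands of threads apply this move concurrently, which is the workload the GPU/CPU benchmark above measures.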
Analysis of JSI TRIGA MARK II reactor physical parameters calculated with TRIPOLI and MCNP.
Henry, R; Tiselj, I; Snoj, L
2015-03-01
A new computational model of the JSI TRIGA Mark II research reactor was built for the TRIPOLI computer code and compared with the existing MCNP code model. The same modelling assumptions were used in order to check the differences between the mathematical models of the two Monte Carlo codes. Differences between the TRIPOLI and MCNP predictions of keff were up to 100 pcm. Further validation was performed with analyses of the normalized reaction rates and computations of kinetic parameters for various core configurations.
It's Not Education by Zip Code Anymore--But What is It? Conceptions of Equity under the Common Core
ERIC Educational Resources Information Center
Kornhaber, Mindy L.; Griffith, Kelly; Tyler, Alison
2014-01-01
The Common Core State Standards Initiative is a standards-based reform in which 45 U.S. states and the District of Columbia have agreed to participate. The reform seeks to anchor primary and secondary education across these states in one set of demanding, internationally benchmarked standards. Thereby, all students will be prepared for further…
NASA Astrophysics Data System (ADS)
Faure, Bastien
The neutronic calculation of a reactor core is usually done in two steps. After solving the neutron transport equation over an elementary domain of the core, a set of parameters, namely macroscopic cross sections and possibly diffusion coefficients, is defined in order to perform a full-core calculation. In the first step, the cell or assembly is calculated using the "fundamental mode theory", the pattern being inserted in an infinite lattice of periodic structures. This simple representation allows precise modeling of the geometry and the energy variable and can be treated within transport theory with minimal approximations. However, it supposes that the reactor core can be treated as a periodic lattice of elementary domains, which is itself a strong hypothesis, and it cannot, at first sight, take into account neutron leakage between two different zones or out of the core. Leakage models propose to correct the transport equation with an additional leakage term in order to represent this phenomenon. For historical reasons, numerical methods for solving the transport equation being limited by computer features (processor speeds and memory sizes), the leakage term is in most cases modeled by a homogeneous and isotropic probability within a "homogeneous leakage model". Driven by technological innovation in computing, "heterogeneous leakage models" have been developed and implemented in several neutron transport calculation codes. This work focuses on a study of some of those models, including the TIBERE model of the DRAGON-3 code developed at Ecole Polytechnique de Montreal, as well as the heterogeneous model of the APOLLO-3 code developed at Commissariat a l'Energie Atomique et aux energies alternatives. Research on sodium-cooled fast reactors and light water reactors allowed us to demonstrate the value of those models compared to a homogeneous leakage model. In particular, it was shown that a heterogeneous model has a significant impact on the calculation of the out-of-core leakage rate, which permits a better estimation of the transport equation eigenvalue Keff. Neutron streaming between two zones of different composition was also shown to be calculated more accurately.
Coupled field effects in BWR stability simulations using SIMULATE-3K
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borkowski, J.; Smith, K.; Hagrman, D.
1996-12-31
The SIMULATE-3K code is the transient analysis version of the Studsvik advanced nodal reactor analysis code, SIMULATE-3. Recent developments have focused on further broadening the range of transient applications by refinement of core thermal-hydraulic models and on comparison with boiling water reactor (BWR) stability measurements performed at Ringhals unit 1, during the startups of cycles 14 through 17.
The effects of temperatures on the pebble flow in a pebble bed high temperature reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sen, R. S.; Cogliati, J. J.; Gougar, H. D.
2012-07-01
The core of a pebble bed high temperature reactor (PBHTR) moves during operation, a feature which leads to better fuel economy (online refueling with no burnable poisons) and lower fuel stress. The pebbles are loaded at the top and trickle to the bottom of the core, after which the burnup of each is measured. The pebbles that are not fully burned are recirculated through the core until the target burnup is achieved. The flow pattern of the pebbles through the core is of importance for core simulations because it couples the burnup distribution to the core temperature and power profiles, especially in cores with two or more radial burnup 'zones'. The pebble velocity profile is a strong function of the core geometry and the friction between the pebbles and the surrounding structures (other pebbles or graphite reflector blocks). The friction coefficient for graphite in a helium environment is inversely related to the temperature. The Thorium High Temperature Reactor (THTR) operated in Germany between 1983 and 1989. It featured a two-zone core, an inner core (IC) and outer core (OC), with different fuel mixtures loaded in each zone. The rate at which the IC was refueled relative to the OC in THTR was designed to be 0.56. During its operation, however, this ratio was measured to be 0.76, suggesting the pebbles in the inner core traveled faster than expected. It has been postulated that the positive feedback effect between inner core temperature, burnup, and pebble flow was underestimated in THTR. Because of the power shape, the center of the core in a typical cylindrical PBHTR operates at a higher temperature than the region next to the side reflector. The friction between pebbles in the IC is lower than that in the OC, perhaps causing a higher relative flow rate and lower average burnup, which in turn yield a higher local power density. Furthermore, the pebbles in the center region have higher velocities than the pebbles next to the side reflector due to the interaction between the pebbles and the immobile graphite reflector as well as the geometry of the discharge cone near the bottom of the core. In this paper, the coupling between the temperature profile and the pebble flow dynamics was analyzed with the PEBBED/THERMIX and PEBBLES codes by modeling the HTR-10 reactor in China. Two extreme and opposing velocity profiles are used as starting points for the iterations. The PEBBED/THERMIX code is used to calculate the burnup, power, and temperature profiles with one of the velocity profiles as input. The resulting temperature profile is then passed to the PEBBLES code to calculate the updated pebble velocity profile, taking the new temperature profile into account. If the aforementioned hypothesis were correct, the strong temperature effect upon the friction coefficients would cause the two cases to converge to different final velocity and temperature profiles. The results of this analysis indicate that a single-zone pebble bed core is self-stabilizing in terms of the pebble velocity profile and that the effect of the temperature profile on the pebble flow is insignificant.
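Procedurally, the analysis just described is a fixed-point (Picard) iteration between the two codes, repeated until the pebble velocity profile stops changing. A schematic C rendering of the loop (the solver calls are placeholders standing in for PEBBED/THERMIX and PEBBLES runs, and the convergence test is an assumption of the sketch):

    #include <math.h>

    /* Placeholders for the two physics solvers named in the abstract. */
    void thermal_solve(const double *velocity, double *temperature, int n);
    void flow_solve(const double *temperature, double *velocity, int n);

    /* Iterate T -> v -> T ... until the velocity profile converges.
       The paper's self-stabilization test amounts to checking whether
       two opposing initial velocity guesses converge to the same answer. */
    int couple(double *velocity, double *temperature, int n,
               double tol, int max_iter)
    {
        double prev[n];
        for (int it = 0; it < max_iter; it++) {
            for (int i = 0; i < n; i++) prev[i] = velocity[i];
            thermal_solve(velocity, temperature, n); /* PEBBED/THERMIX step */
            flow_solve(temperature, velocity, n);    /* PEBBLES step        */
            double diff = 0.0;
            for (int i = 0; i < n; i++)
                diff = fmax(diff, fabs(velocity[i] - prev[i]));
            if (diff < tol) return it + 1;           /* converged           */
        }
        return -1;                                   /* no convergence      */
    }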
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Beibei; Zhang, Xiaojia; Lin, Douglas N. C.
2015-01-01
Nearly 15%-20% of solar type stars contain one or more gas giant planets. According to the core-accretion scenario, the acquisition of their gaseous envelope must be preceded by the formation of super-critical cores with masses 10 times or larger than that of the Earth. It is natural to link the formation probability of gas giant planets with the supply of gases and solids in their natal disks. However, a much richer population of super Earths suggests that (1) there is no shortage of planetary building block material, (2) a gas giant's growth barrier is probably associated with whether it can merge into super-critical cores, and (3) super Earths are probably failed cores that did not attain sufficient mass to initiate efficient accretion of gas before it is severely depleted. Here we construct a model based on the hypothesis that protoplanetary embryos migrated extensively before they were assembled into bona fide planets. We construct a Hermite-Embryo code based on a unified viscous-irradiation disk model and a prescription for the embryo-disk tidal interaction. This code is used to simulate the convergent migration of embryos, and their close encounters and coagulation. Around the progenitors of solar-type stars, the progenitor super-critical-mass cores of gas giant planets primarily form in protostellar disks with relatively high (≳10^-7 M☉ yr^-1) mass accretion rates, whereas systems of super Earths (failed cores) are more likely to emerge out of natal disks with modest mass accretion rates, due to the mean motion resonance barrier and retention efficiency.
Numerical optimization of three-dimensional coils for NSTX-U
NASA Astrophysics Data System (ADS)
Lazerson, S. A.; Park, J.-K.; Logan, N.; Boozer, A.
2015-10-01
A tool for the calculation of optimal three-dimensional (3D) perturbative magnetic fields in tokamaks has been developed. The IPECOPT code builds upon the stellarator optimization code STELLOPT to allow for optimization of linear ideal magnetohydrodynamic perturbed equilibria (IPEC). This tool has been applied to NSTX-U equilibria, addressing which fields are most effective at driving NTV torques. The NTV torque calculation is performed by the PENT code. Optimization of the normal field spectrum shows that fields with n = 1 character can drive a large core torque. It is also shown that fields with n = 3 features are capable of driving edge torque and some core torque. Coil current optimization (using the planned in-vessel and existing RWM coils) on NSTX-U suggests the planned coil set is adequate for core and edge torque control. Comparison between error field correction experiments on DIII-D and the optimizer shows good agreement.
Program Instrumentation and Trace Analysis
NASA Technical Reports Server (NTRS)
Havelund, Klaus; Goldberg, Allen; Filman, Robert; Rosu, Grigore; Koga, Dennis (Technical Monitor)
2002-01-01
Several attempts have been made recently to apply techniques such as model checking and theorem proving to the analysis of programs. This reflects a current trend to analyze real software systems instead of just their designs, and it includes our own effort to develop a model checker for Java, the Java PathFinder 1, one of the very first of its kind in 1998. However, model checking cannot handle very large programs without some kind of abstraction of the program. This paper describes a complementary, scalable technique to handle such large programs. Our interest is in the observation part of the equation: how much information can be extracted about a program from observing a single execution trace? Our intention is to develop a technology that can be applied automatically to large, full-size applications, with minimal modification to the code. We present a tool, Java PathExplorer (JPaX), for exploring execution traces of Java programs. The tool prioritizes scalability over completeness and is directed towards detecting errors in programs, not towards proving correctness. One core element in JPaX is an instrumentation package that allows Java bytecode files to be instrumented to log various events when executed. The instrumentation is driven by a user-provided script that specifies what information to log. Examples of instructions that such a script can contain are: 'report name and arguments of all called methods defined in class C, together with a timestamp'; 'report all updates to all variables'; and 'report all acquisitions and releases of locks'. In more complex instructions one can specify that certain expressions should be evaluated and even that certain code should be executed under various conditions. The instrumentation package can hence be seen as implementing aspect-oriented programming for Java, in the sense that one can add functionality to a Java program without explicitly changing the code of the original program; one rather writes an aspect and compiles it into the original program using the instrumentation. Another core element of JPaX is an observation package that supports the analysis of the generated event stream. Two kinds of analysis are currently supported. In temporal analysis the execution trace is evaluated against formulae written in temporal logic. We have implemented a temporal logic evaluator on finite traces using the Maude rewriting system from SRI International, USA. Temporal logic is defined in Maude by giving its syntax as a signature and its semantics as rewrite equations. The resulting semantics is extremely efficient, able to handle event streams of hundreds of millions of events in a few minutes, and the implementation is very succinct. The second form of event stream analysis supported is error pattern analysis, where an execution trace is analyzed using various error detection algorithms that can identify error-prone programming practices that may potentially lead to errors in other executions. Two such algorithms focusing on concurrency errors have been implemented in JPaX, one for deadlocks and the other for data races. It is important to note that a deadlock or data race potential does not need to occur in order for its potential to be detected with these algorithms; this is what makes them very scalable in practice. The data race algorithm implemented is the Eraser algorithm from Compaq, adapted to Java.
The tool is currently being applied to a code base for controlling a spacecraft by the developers of that software in order to evaluate its applicability.
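The temporal analysis described, evaluating a formula over a finite execution trace, can be illustrated without the Maude machinery: on a finite trace, a property such as "every lock acquisition is eventually followed by a release" reduces to a single backward scan over the logged events. A toy C checker in that spirit (the event encoding is invented for this sketch and is not JPaX's format):

    /* Toy finite-trace check of  always(ACQ -> eventually REL)  for one
       lock. Scanning backward, rel_ahead records whether a REL occurs at
       any later position in the trace. */
    enum { EV_OTHER, EV_ACQ, EV_REL };

    int check_acq_rel(const int *trace, int n)
    {
        int rel_ahead = 0;
        for (int i = n - 1; i >= 0; i--) {
            if (trace[i] == EV_REL)
                rel_ahead = 1;
            else if (trace[i] == EV_ACQ && !rel_ahead)
                return i;          /* violation: report its position */
        }
        return -1;                 /* property holds on this trace   */
    }

The single linear pass, independent of program size, is what gives trace-based analysis the scalability the paper claims over whole-program model checking.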
The ADVANCE Code of Conduct for collaborative vaccine studies.
Kurz, Xavier; Bauchau, Vincent; Mahy, Patrick; Glismann, Steffen; van der Aa, Lieke Maria; Simondon, François
2017-04-04
Lessons learnt from the 2009 (H1N1) flu pandemic highlighted factors limiting the capacity to collect European data on vaccine exposure, safety and effectiveness, including lack of rapid access to available data sources or expertise, difficulties to establish efficient interactions between multiple parties, lack of confidence between private and public sectors, concerns about possible or actual conflicts of interest (or perceptions thereof) and inadequate funding mechanisms. The Innovative Medicines Initiative's Accelerated Development of VAccine benefit-risk Collaboration in Europe (ADVANCE) consortium was established to create an efficient and sustainable infrastructure for rapid and integrated monitoring of post-approval benefit-risk of vaccines, including a code of conduct and governance principles for collaborative studies. The development of the code of conduct was guided by three core and common values (best science, strengthening public health, transparency) and a review of existing guidance and relevant published articles. The ADVANCE Code of Conduct includes 45 recommendations in 10 topics (Scientific integrity, Scientific independence, Transparency, Conflicts of interest, Study protocol, Study report, Publication, Subject privacy, Sharing of study data, Research contract). Each topic includes a definition, a set of recommendations and a list of additional reading. The concept of the study team is introduced as a key component of the ADVANCE Code of Conduct, with a core set of roles and responsibilities. It is hoped that adoption of the ADVANCE Code of Conduct by all partners involved in a study will facilitate and speed up its initiation, design, conduct and reporting. Adoption of the ADVANCE Code of Conduct should be stated in the study protocol, study report and publications, and journal editors are encouraged to use it as an indication that good principles of public health, science and transparency were followed throughout the study.
NASA Technical Reports Server (NTRS)
Boyd, D. Douglas, Jr.; Brooks, Thomas F.; Burley, Casey L.; Jolly, J. Ralph, Jr.
1998-01-01
This document details the methodology and use of the CAMRAD.Mod1/HIRES codes, which were developed at NASA Langley Research Center for the prediction of helicopter harmonic and Blade-Vortex Interaction (BVI) noise. CAMRAD.Mod1 is a substantially modified version of the performance/trim/wake code CAMRAD. High-resolution blade loading is determined in post-processing by HIRES and an associated indicial aerodynamics code. Extensive capabilities of importance to noise prediction accuracy are documented, including a new multi-core tip vortex roll-up wake model, higher harmonic and individual blade control, tunnel and fuselage correction input, diagnostic blade motion input, and interfaces for acoustic and CFD aerodynamics codes. Modifications and new code capabilities are documented with examples. A users' job preparation guide and listings of variables and namelists are given.
Full 3D visualization tool-kit for Monte Carlo and deterministic transport codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frambati, S.; Frignani, M.
2012-07-01
We propose a package of tools capable of translating the geometric inputs and outputs of many Monte Carlo and deterministic radiation transport codes into open source file formats. These tools are aimed at bridging the gap between trusted, widely-used radiation analysis codes and very powerful, more recent and commonly used visualization software, thus supporting the design process and helping with shielding optimization. Three main lines of development were followed: mesh-based analysis of Monte Carlo codes, mesh-based analysis of deterministic codes and Monte Carlo surface meshing. The developed kit is considered a powerful and cost-effective tool in computer-aided design for radiation transport code users of the nuclear world, and in particular in the fields of core design and radiation analysis.
NASA Technical Reports Server (NTRS)
Long, M. S.; Yantosca, R.; Nielsen, J. E; Keller, C. A.; Da Silva, A.; Sulprizio, M. P.; Pawson, S.; Jacob, D. J.
2015-01-01
The GEOS-Chem global chemical transport model (CTM), used by a large atmospheric chemistry research community, has been re-engineered to also serve as an atmospheric chemistry module for Earth system models (ESMs). This was done using an Earth System Modeling Framework (ESMF) interface that operates independently of the GEOS-Chem scientific code, permitting the exact same GEOS-Chem code to be used as an ESM module or as a standalone CTM. In this manner, the continual stream of updates contributed by the CTM user community is automatically passed on to the ESM module, which remains state of science and referenced to the latest version of the standard GEOS-Chem CTM. A major step in this re-engineering was to make GEOS-Chem grid independent, i.e., capable of using any geophysical grid specified at run time. GEOS-Chem data sockets were also created for communication between modules and with external ESM code. The grid-independent, ESMF-compatible GEOS-Chem is now the standard version of the GEOS-Chem CTM. It has been implemented as an atmospheric chemistry module into the NASA GEOS-5 ESM. The coupled GEOS-5-GEOS-Chem system was tested for scalability and performance with a tropospheric oxidant-aerosol simulation (120 coupled species, 66 transported tracers) using 48-240 cores and message-passing interface (MPI) distributed-memory parallelization. Numerical experiments demonstrate that the GEOS-Chem chemistry module scales efficiently for the number of cores tested, with no degradation as the number of cores increases. Although inclusion of atmospheric chemistry in ESMs is computationally expensive, the excellent scalability of the chemistry module means that the relative cost goes down with increasing number of cores in a massively parallel environment.
NASA Astrophysics Data System (ADS)
Rodriguez, M.; Brualla, L.
2018-04-01
Monte Carlo simulation of radiation transport is computationally demanding when reasonably low statistical uncertainties of the estimated quantities are required; it can therefore benefit to a large extent from high-performance computing. This work is aimed at assessing the performance of the first generation of the Many Integrated Core (MIC) architecture Xeon Phi coprocessor with respect to that of a CPU consisting of a double 12-core Xeon processor in Monte Carlo simulation of coupled electron-photon showers. The comparison was made in two ways: first, through a suite of basic tests, including parallel versions of the Mersenne Twister random number generator and a modified implementation of RANECU, intended to establish a baseline comparison between the two devices; second, through the pDPM code developed in this work. pDPM is a parallel version of the Dose Planning Method (DPM) program for fast Monte Carlo simulation of radiation transport in voxelized geometries. A variety of techniques aimed at obtaining large scalability on the Xeon Phi were implemented in pDPM. Maximum scalabilities of 84.2× and 107.5× were obtained on the Xeon Phi for simulations of electron and photon beams, respectively. Nevertheless, in none of the tests involving radiation transport did the Xeon Phi perform better than the CPU. The disadvantage of the Xeon Phi with respect to the CPU owes to the low performance of its single core: a single core of the Xeon Phi was more than 10 times less efficient than a single core of the CPU for all radiation transport simulations.
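To give a flavor of the kind of baseline parallel-RNG test described above, the following minimal Python sketch (an illustration under assumed names, not the paper's Mersenne Twister/RANECU benchmark sources) spawns statistically independent per-core random streams and accumulates a simple Monte Carlo estimate:

    # Minimal sketch: independent per-worker RNG streams, as one would set up
    # when benchmarking parallel Monte Carlo kernels. Names are illustrative.
    import numpy as np

    def worker_estimate(seed_seq, n_samples):
        """Each worker gets its own statistically independent stream."""
        rng = np.random.default_rng(seed_seq)
        x = rng.random(n_samples)
        y = rng.random(n_samples)
        return np.count_nonzero(x * x + y * y < 1.0)  # quarter-circle hits

    root = np.random.SeedSequence(2018)
    streams = root.spawn(8)          # one child sequence per worker/core
    n = 100_000
    hits = sum(worker_estimate(s, n) for s in streams)
    print("pi estimate:", 4.0 * hits / (8 * n))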
Mean Flow and Noise Prediction for a Separate Flow Jet With Chevron Mixers
NASA Technical Reports Server (NTRS)
Koch, L. Danielle; Bridges, James; Khavaran, Abbas
2004-01-01
Experimental and numerical results are presented here for a separate flow nozzle employing chevrons arranged in an alternating pattern on the core nozzle. Comparisons of these results demonstrate that the WIND/MGBK suite of codes can predict the noise reduction trends measured between separate flow jets with and without chevrons on the core nozzle. Mean flow predictions were validated against Particle Image Velocimetry (PIV), pressure, and temperature data, and noise predictions were validated against acoustic measurements recorded in the NASA Glenn Aeroacoustic Propulsion Lab. Comparisons are also made to results from the CRAFT code. The work presented here is part of an ongoing assessment of the WIND/MGBK suite for use in designing the next generation of quiet nozzles for turbofan engines.
Multi-dimensional simulations of core-collapse supernova explosions with CHIMERA
NASA Astrophysics Data System (ADS)
Messer, O. E. B.; Harris, J. A.; Hix, W. R.; Lentz, E. J.; Bruenn, S. W.; Mezzacappa, A.
2018-04-01
Unraveling the core-collapse supernova (CCSN) mechanism is a problem that remains essentially unsolved despite more than four decades of effort. Spherically symmetric models with otherwise high physical fidelity generally fail to produce explosions, and it is widely accepted that CCSNe are inherently multi-dimensional. Progress in realistic modeling has occurred recently through the availability of petascale platforms and the increasing sophistication of supernova codes. We will discuss our most recent work on understanding neutrino-driven CCSN explosions employing multi-dimensional neutrino-radiation hydrodynamics simulations with the Chimera code. We discuss the inputs and resulting outputs from these simulations, the role of neutrino radiation transport, and the importance of multi-dimensional fluid flows in shaping the explosions. We also highlight the production of 48Ca in long-running Chimera simulations.
Odong, T L; Jansen, J; van Eeuwijk, F A; van Hintum, T J L
2013-02-01
Definition of clear criteria for evaluating the quality of core collections is a prerequisite for selecting high-quality cores. However, a critical examination of the different methods used in the literature for evaluating the quality of core collections shows that there are no clear guidelines on the choice of quality evaluation criteria; as a result, inappropriate analyses are sometimes made, leading to false conclusions about the quality of core collections and the methods used to select them. The choice of criteria for evaluating core collections appears to be based mainly on the fact that those criteria have been used in earlier publications, rather than on the actual objectives of the core collection. In this study, we provide insight into the different criteria used for evaluating core collections. We also discuss different types of core collections and relate each type to its respective evaluation criteria. Two new criteria based on genetic distance are introduced. The consequences of the different evaluation criteria are illustrated using simulated and experimental data. We strongly recommend the use of the distance-based criteria, since they not only allow the simultaneous evaluation of all variables describing the accessions, but also provide intuitive and interpretable criteria, as compared with the univariate criteria generally used for the evaluation of core collections. Our findings will provide genebank curators and researchers with possibilities to make informed choices when creating, comparing, and using core collections.
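The distance-based idea can be made concrete with a small sketch. The code below is an illustration under assumed definitions, not the authors' actual criteria: it scores a candidate core by the mean distance from each accession in the collection to its nearest core entry, so smaller values mean the core represents the collection more evenly.

    import numpy as np
    from scipy.spatial.distance import cdist

    def representativeness(collection, core_idx):
        """Mean distance from every accession to its nearest core accession.

        collection : (n_accessions, n_traits) array of standardized traits
        core_idx   : indices of the accessions chosen for the core
        """
        d = cdist(collection, collection[core_idx])  # all-to-core distances
        return d.min(axis=1).mean()

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))                 # simulated accessions
    random_core = rng.choice(200, 20, replace=False)
    print("random core score:", representativeness(X, random_core))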
Investigation of Abnormal Heat Transfer and Flow in a VHTR Reactor Core
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kawaji, Masahiro; Valentin, Francisco I.; Artoun, Narbeh
2015-12-21
The main objective of this project was to identify and characterize the conditions under which abnormal heat transfer phenomena would occur in a Very High Temperature Reactor (VHTR) with a prismatic core. High pressure/high temperature experiments have been conducted to obtain data that could be used for validation of VHTR design and safety analysis codes. The focus of these experiments was on the generation of benchmark data for design and off-design heat transfer for forced, mixed, and natural circulation in a VHTR core. In particular, a flow laminarization phenomenon was intensely investigated, since it could give rise to hot spots in the VHTR core.
Verifying Architectural Design Rules of the Flight Software Product Line
NASA Technical Reports Server (NTRS)
Ganesan, Dharmalingam; Lindvall, Mikael; Ackermann, Chris; McComas, David; Bartholomew, Maureen
2009-01-01
This paper presents experiences of verifying architectural design rules of the NASA Core Flight Software (CFS) product line implementation. The goal of the verification is to check whether the implementation is consistent with the CFS architectural rules derived from the developer's guide. The results indicate that consistency checking helps (a) identify architecturally significant deviations that eluded code reviews, (b) clarify the design rules to the team, and (c) assess the overall implementation quality. Furthermore, it helps connect business goals to architectural principles and to the implementation. This paper is the first step in the definition of a method for analyzing and evaluating product line implementations from an architecture-centric perspective.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bylaska, Eric J.; Jacquelin, Mathias; De Jong, Wibe A.
2017-10-20
Ab-initio Molecular Dynamics (AIMD) methods are an important class of algorithms, as they enable scientists to understand the chemistry and dynamics of molecular and condensed phase systems while retaining a first-principles-based description of their interactions. Many-core architectures such as the Intel® Xeon Phi™ processor are an interesting and promising target for these algorithms, as they can provide the computational power that is needed to solve interesting problems in chemistry. In this paper, we describe the effort of refactoring the existing AIMD plane-wave method of NWChem from an MPI-only implementation to a scalable, hybrid code that employs MPI and OpenMP to exploit the capabilities of current and future many-core architectures. We describe the optimizations required to get close to optimal performance for the multiplication of the tall-and-skinny matrices that form the core of the computational algorithm. We present strong-scaling results on the complete AIMD simulation for a test case that simulates 256 water molecules and strong-scales well on a cluster of 1024 nodes of Intel Xeon Phi processors. We compare the performance with that obtained on a cluster of dual-socket Intel® Xeon® E5-2698v3 processors.
ERIC Educational Resources Information Center
Geverdt, Douglas; Phan, Tai
2006-01-01
The Common Core of Data (CCD) Nonfiscal surveys consist of data submitted annually by state education agencies (SEAs) to the National Center for Education Statistics (NCES). School, local education agency, and state data are sent to NCES by SEA personnel who are designated CCD Coordinators. The data are edited and maintained in machine-readable…
ERIC Educational Resources Information Center
Hernandez, Pepe J.; Andrzejewski, Matthew E.; Sadeghian, Kenneth; Panksepp, Jules B.; Kelley, Ann E.
2005-01-01
Neural integration of glutamate- and dopamine-coded signals within the nucleus accumbens (NAc) is a fundamental process governing cellular plasticity underlying reward-related learning. Intra-NAc core blockade of NMDA or D1 receptors in rats impairs instrumental learning (lever-pressing for sugar pellets), but it is not known during which phase of…
Conserved Curvature of RNA Polymerase I Core Promoter Beyond rRNA Genes: The Case of the Tritryps
Smircich, Pablo; Duhagon, María Ana; Garat, Beatriz
2015-01-01
In trypanosomatids, the RNA polymerase I (RNAPI)-dependent promoters controlling the ribosomal RNA (rRNA) genes have been well identified. Although the RNAPI transcription machinery recognizes the DNA conformation instead of the DNA sequence of promoters, no conformational study has been reported for these promoters. Here we present the in silico analysis of the intrinsic DNA curvature of the rRNA gene core promoters in Trypanosoma brucei, Trypanosoma cruzi, and Leishmania major. We found that, in spite of the absence of sequence conservation, these promoters hold conformational properties similar to other eukaryotic rRNA promoters. Our results also indicated that the intrinsic DNA curvature pattern is conserved within the Leishmania genus and also among strains of T. cruzi and T. brucei. Furthermore, we analyzed the impact of point mutations on the intrinsic curvature and on promoter activity. We also found that the core promoters of protein-coding genes transcribed by RNAPI in T. brucei show the same conserved conformational characteristics. Overall, our results indicate that DNA intrinsic curvature of the rRNA gene core promoters is conserved in these ancient eukaryotes, and such conserved curvature might be a requirement of RNAPI machinery for transcription of not only rRNA genes but also protein-coding genes. PMID:26718450
Developing and validating advanced divertor solutions on DIII-D for next-step fusion devices
NASA Astrophysics Data System (ADS)
Guo, H. Y.; Hill, D. N.; Leonard, A. W.; Allen, S. L.; Stangeby, P. C.; Thomas, D.; Unterberg, E. A.; Abrams, T.; Boedo, J.; Briesemeister, A. R.; Buchenauer, D.; Bykov, I.; Canik, J. M.; Chrobak, C.; Covele, B.; Ding, R.; Doerner, R.; Donovan, D.; Du, H.; Elder, D.; Eldon, D.; Lasa, A.; Groth, M.; Guterl, J.; Jarvinen, A.; Hinson, E.; Kolemen, E.; Lasnier, C. J.; Lore, J.; Makowski, M. A.; McLean, A.; Meyer, B.; Moser, A. L.; Nygren, R.; Owen, L.; Petrie, T. W.; Porter, G. D.; Rognlien, T. D.; Rudakov, D.; Sang, C. F.; Samuell, C.; Si, H.; Schmitz, O.; Sontag, A.; Soukhanovskii, V.; Wampler, W.; Wang, H.; Watkins, J. G.
2016-12-01
A major challenge facing the design and operation of next-step high-power steady-state fusion devices is to develop a viable divertor solution with order-of-magnitude increases in power handling capability relative to present experience, while having acceptable divertor target plate erosion and being compatible with maintaining good core plasma confinement. A new initiative has been launched on DIII-D to develop the scientific basis for design, installation, and operation of an advanced divertor to evaluate boundary plasma solutions applicable to next step fusion experiments beyond ITER. Developing the scientific basis for fusion reactor divertor solutions must necessarily follow three lines of research, which we plan to pursue in DIII-D: (1) Advance scientific understanding and predictive capability through development and comparison between state-of-the-art computational models and enhanced measurements using targeted parametric scans; (2) Develop and validate key divertor design concepts and codes through innovative variations in physical structure and magnetic geometry; (3) Assess candidate materials, determining the implications for core plasma operation and control, and develop mitigation techniques for any deleterious effects, incorporating development of plasma-material interaction models. These efforts will lead to design, installation, and evaluation of an advanced divertor for DIII-D to enable highly dissipative divertor operation at core density (n_e/n_GW), neutral fueling and impurity influx most compatible with high performance plasma scenarios and reactor relevant plasma facing components (PFCs). This paper highlights the current progress and near-term strategies of boundary/PMI research on DIII-D.
Signature of chaos in the 4 f -core-excited states for highly-charged tungsten ions
NASA Astrophysics Data System (ADS)
Safronova, Ulyana; Safronova, Alla
2014-05-01
We evaluate radiative and autoionizing transition rates in highly charged W ions in search of a signature of chaos. In particular, previously published results for Ag-like W27+, Tm-like W5+, and Yb-like W4+ ions, as well as newly obtained results for I-like W21+, Xe-like W20+, Cs-like W19+, and La-like W17+ ions (with ground configurations [Kr]4d^10 4f^k, k = 7, 8, 9, and 11, respectively), are considered; these were calculated using the multiconfiguration relativistic Hebrew University Lawrence Livermore Atomic Code (HULLAC code) and the Hartree-Fock-Relativistic method (COWAN code). The main emphasis was on verification of Gaussian statistics of rates as a function of transition energy. There was no evidence of such statistics for the above-mentioned previously published results, nor for the transitions between the excited and autoionizing states in the newly calculated results. However, we did find a Gaussian profile for transitions between excited states, such as the [Kr]4d^10 4f^k - [Kr]4d^10 4f^(k-1) 5d transitions, for the newly calculated W ions. This work is supported in part by DOE under NNSA Cooperative Agreement DE-NA0001984.
Preparation of macroconstants to simulate the core of the VVER-1000 reactor
NASA Astrophysics Data System (ADS)
Seleznev, V. Y.
2017-01-01
A dynamic model is used in simulators of the VVER-1000 reactor for training operating staff and students. The DYNCO code is used to simulate the neutron-physical characteristics; it performs calculations of stationary, transient, and emergency processes in real time for different geometries of the reactor lattice [1]. To perform calculations with this code, macroconstants must be prepared for each FA. One way of obtaining macroconstants is to use the WIMS code, which is based on its own 69-group macroconstants library. This paper presents the results of FA calculations obtained with the WIMS code for the VVER-1000 reactor with different fuel and coolant parameters, as well as the method of selecting energy groups for the subsequent macroconstant calculation.
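The macroconstant preparation step amounts to collapsing a fine-group library onto a few broad groups with flux weighting. The sketch below is a generic illustration, not the WIMS or DYNCO implementation; the group boundaries and data are invented for the example.

    import numpy as np

    def collapse(sigma_fine, flux_fine, group_map, n_broad):
        """Flux-weighted condensation: Sigma_G = sum(phi_g*Sigma_g)/sum(phi_g)."""
        sigma_broad = np.zeros(n_broad)
        for G in range(n_broad):
            sel = group_map == G
            sigma_broad[G] = (flux_fine[sel] * sigma_fine[sel]).sum() / flux_fine[sel].sum()
        return sigma_broad

    # Illustrative 69-group-like data collapsed to 2 broad groups.
    rng = np.random.default_rng(1)
    sigma = rng.uniform(0.1, 2.0, 69)            # fine-group cross sections (1/cm)
    phi = rng.uniform(0.5, 1.5, 69)              # fine-group spectrum weights
    groups = np.where(np.arange(69) < 45, 0, 1)  # fast / thermal split
    print(collapse(sigma, phi, groups, 2))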
NASA Technical Reports Server (NTRS)
Bade, W. L.; Yos, J. M.
1975-01-01
The present, third volume of the final report is a programmer's manual for the code. It provides a listing of the FORTRAN 4 source program; a complete glossary of FORTRAN symbols; a discussion of the purpose and method of operation of each subroutine (including mathematical analyses of special algorithms); and a discussion of the operation of the code on IBM/360 and UNIVAC 1108 systems, including required control cards and the overlay structure used to accommodate the code to the limited core size of the 1108. In addition, similar information is provided to document the programming of the NOZFIT code, which is employed to set up nozzle profile curvefits for use in NATA.
Initial verification and validation of RAZORBACK - A research reactor transient analysis code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Talley, Darren G.
2015-09-01
This report describes the work and results of the initial verification and validation (V&V) of the beta release of the Razorback code. Razorback is a computer code designed to simulate the operation of a research reactor (such as the Annular Core Research Reactor (ACRR)) by a coupled numerical solution of the point reactor kinetics equations, the energy conservation equation for fuel element heat transfer, and the mass, momentum, and energy conservation equations for the water cooling of the fuel elements. This initial V&V effort was intended to confirm that the code work to date shows good agreement between simulation and actual ACRR operations, indicating that the subsequent V&V effort for the official release of the code will be successful.
Segers, Laurent; Van Bavegem, David; De Winne, Sam; Braeken, An; Touhafi, Abdellah; Steenhaut, Kris
2015-01-01
This paper describes a new approach and implementation methodology for indoor ranging based on the time difference of arrival, using code division multiple access with ultrasound signals. A novel implementation based on a field programmable gate array using finite impulse response filters and an optimized correlation demodulator for ultrasound orthogonal signals is developed. Orthogonal codes are modulated onto ultrasound signals using frequency shift keying with carrier frequencies of 24.5 kHz and 26 kHz. This implementation enhances the possibilities for real-time, embedded, and low-power tracking of several simultaneous transmitters. Owing to the high degree of parallelism offered by field programmable gate arrays, up to four transmitters can be tracked simultaneously. The implementation requires at most 30% of the available logic gates of a Spartan-6 XC6SLX45 device and is evaluated for accuracy and precision through several ranging topologies. In the first topology, the distance between one transmitter and one receiver is evaluated. Afterwards, ranging analyses are applied between two simultaneous transmitters and one receiver. Finally, positioning of the receiver relative to four transmitters using trilateration is also demonstrated. Results show enhanced distance measurements at distances ranging from a few centimeters up to 17 m, while keeping centimeter-level accuracy. PMID:26263986
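The core signal-processing step, locating a known code in a received ultrasound signal by correlation and taking the largest peak as the arrival time, can be sketched as follows (illustrative Python, not the FPGA implementation; the code, sample rate, and delay are invented for the example):

    import numpy as np

    fs = 250_000                       # sample rate (Hz), illustrative
    rng = np.random.default_rng(42)
    code = rng.choice([-1.0, 1.0], 127)     # pseudo-orthogonal chip code
    chip = np.repeat(code, 8)               # 8 samples per chip

    true_delay = 1234                       # propagation delay in samples
    rx = np.zeros(10_000)
    rx[true_delay:true_delay + chip.size] += chip
    rx += 0.5 * rng.normal(size=rx.size)    # measurement noise

    corr = np.correlate(rx, chip, mode="valid")  # correlation demodulator
    est = int(np.argmax(corr))                   # largest peak = arrival sample
    print("estimated delay:", est, "-> range:", est / fs * 343.0, "m")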
ERIC Educational Resources Information Center
Murray, Jeffrey W.
2014-01-01
This article seeks to provide some modest insights into the pedagogy of higher-order thinking and metacognition and to share the use of color-coded drafts as a best practice in service of both higher-order thinking and metacognition. This article will begin with a brief theoretical exploration of thinking and of thinking about thinking--the latter…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Summers, R.M.; Cole, R.K. Jr.; Smith, R.C.
1995-03-01
MELCOR is a fully integrated, engineering-level computer code that models the progression of severe accidents in light water reactor nuclear power plants. MELCOR is being developed at Sandia National Laboratories for the U.S. Nuclear Regulatory Commission as a second-generation plant risk assessment tool and the successor to the Source Term Code Package. A broad spectrum of severe accident phenomena in both boiling and pressurized water reactors is treated in MELCOR in a unified framework. These include: thermal-hydraulic response in the reactor coolant system, reactor cavity, containment, and confinement buildings; core heatup, degradation, and relocation; core-concrete attack; hydrogen production, transport, and combustion; fission product release and transport; and the impact of engineered safety features on thermal-hydraulic and radionuclide behavior. Current uses of MELCOR include estimation of severe accident source terms and their sensitivities and uncertainties in a variety of applications. This publication of the MELCOR computer code manuals corresponds to MELCOR 1.8.3, released to users in August 1994. Volume 1 contains a primer that describes MELCOR's phenomenological scope, organization (by package), and documentation. The remainder of Volume 1 contains the MELCOR Users Guides, which provide the input instructions and guidelines for each package. Volume 2 contains the MELCOR Reference Manuals, which describe the phenomenological models that have been implemented in each package.
What does music express? Basic emotions and beyond.
Juslin, Patrik N
2013-01-01
Numerous studies have investigated whether music can reliably convey emotions to listeners and, if so, what musical parameters might carry this information. Far less attention has been devoted to the actual contents of the communicative process. The goal of this article is thus to consider what types of emotional content are possible to convey in music. I will argue that the content is mainly constrained by the type of coding involved, and that distinct types of content are related to different types of coding. Based on these premises, I suggest a conceptualization in terms of "multiple layers" of musical expression of emotions. The "core" layer is constituted by iconically-coded basic emotions. I attempt to clarify the meaning of this concept, dispel the myths that surround it, and provide examples of how it can be heuristic in explaining findings in this domain. However, I also propose that this "core" layer may be extended, qualified, and even modified by additional layers of expression that involve intrinsic and associative coding. These layers enable listeners to perceive more complex emotions, though the expressions are less cross-culturally invariant and more dependent on the social context and/or the individual listener. This multiple-layer conceptualization of expression in music can help to explain both similarities and differences between vocal and musical expression of emotions.
The systems biology simulation core algorithm
2013-01-01
Background With the increasing availability of high dimensional time course data for metabolites, genes, and fluxes, the mathematical description of dynamical systems has become an essential aspect of research in systems biology. Models are often encoded in formats such as SBML, whose structure is very complex and difficult to evaluate due to many special cases. Results This article describes an efficient algorithm to solve SBML models that are interpreted in terms of ordinary differential equations. We begin our consideration with a formal representation of the mathematical form of the models and explain all parts of the algorithm in detail, including several preprocessing steps. We provide a flexible reference implementation as part of the Systems Biology Simulation Core Library, a community-driven project providing a large collection of numerical solvers and a sophisticated interface hierarchy for the definition of custom differential equation systems. To demonstrate the capabilities of the new algorithm, it has been tested with the entire SBML Test Suite and all models of BioModels Database. Conclusions The formal description of the mathematics behind the SBML format facilitates the implementation of the algorithm within specifically tailored programs. The reference implementation can be used as a simulation backend for Java™-based programs. Source code, binaries, and documentation can be freely obtained under the terms of the LGPL version 3 from http://simulation-core.sourceforge.net. Feature requests, bug reports, contributions, or any further discussion can be directed to the mailing list simulation-core-development@lists.sourceforge.net. PMID:23826941
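To give a flavor of what such a simulation backend does once a model has been interpreted as ordinary differential equations, the following minimal Python sketch (an illustration of the general approach, not the Java™ library's API) integrates a toy two-species reaction network with a standard solver:

    import numpy as np
    from scipy.integrate import solve_ivp

    # Toy network: S --k1--> P and P --k2--> S (reversible isomerization).
    k1, k2 = 0.8, 0.3

    def rhs(t, y):
        s, p = y
        v1 = k1 * s      # rate of S -> P
        v2 = k2 * p      # rate of P -> S
        return [v2 - v1, v1 - v2]

    sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0], dense_output=True)
    print("steady state ~", sol.y[:, -1])  # expect k2/(k1+k2), k1/(k1+k2)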
PARALLELISATION OF THE MODEL-BASED ITERATIVE RECONSTRUCTION ALGORITHM DIRA.
Örtenberg, A; Magnusson, M; Sandborg, M; Alm Carlsson, G; Malusek, A
2016-06-01
New paradigms for parallel programming have been devised to simplify software development on multi-core processors and many-core graphical processing units (GPU). Despite their obvious benefits, the parallelisation of existing computer programs is not an easy task. In this work, the use of the Open Multiprocessing (OpenMP) and Open Computing Language (OpenCL) frameworks is considered for the parallelisation of the model-based iterative reconstruction algorithm DIRA with the aim to significantly shorten the code's execution time. Selected routines were parallelised using OpenMP and OpenCL libraries; some routines were converted from MATLAB to C and optimised. Parallelisation of the code with the OpenMP was easy and resulted in an overall speedup of 15 on a 16-core computer. Parallelisation with OpenCL was more difficult owing to differences between the central processing unit and GPU architectures. The resulting speedup was substantially lower than the theoretical peak performance of the GPU; the cause was explained.
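A speedup of 15 on 16 cores implies an almost perfectly parallel workload. A quick check with Amdahl's law (a worked illustration using the reported numbers, not a calculation from the paper) makes this concrete:

    # Amdahl's law: S(n) = 1 / ((1 - p) + p / n), with p the parallel fraction.
    # Solving for p given the reported S = 15 on n = 16 cores:
    S, n = 15.0, 16
    p = (1.0 / S - 1.0) / (1.0 / n - 1.0)
    print(f"parallel fraction p = {p:.4f}")   # ~0.996
    # Even so, the residual serial work caps the asymptotic speedup:
    print(f"upper bound on speedup: {1.0 / (1.0 - p):.0f}x on unlimited cores")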
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chowdhury, J.; Wan, Weigang; Chen, Yang
2014-11-15
The δf particle-in-cell code GEM is used to study the transport “shortfall” problem of gyrokinetic simulations. In local simulations, the GEM results confirm the previously reported simulation results for the DIII-D [Holland et al., Phys. Plasmas 16, 052301 (2009)] and Alcator C-Mod [Howard et al., Nucl. Fusion 53, 123011 (2013)] tokamaks obtained with the continuum code GYRO. Namely, for DIII-D the simulations closely predict the ion heat flux at the core while substantially underpredicting transport toward the edge; for Alcator C-Mod, the simulations agree with the experimental values of ion heat flux, at least within the range of experimental error. Global simulations are carried out for DIII-D L-mode plasmas to study the effect of edge turbulence on the outer-core ion heat transport. The edge turbulence enhances the outer-core ion heat transport through turbulence spreading. However, this edge turbulence spreading effect is not enough to explain the transport underprediction.
Engelmann, J B; Berns, G S; Dunlop, B W
2017-12-01
Commonly observed distortions in decision-making among patients with major depressive disorder (MDD) may emerge from impaired reward processing and cognitive biases toward negative events. There is substantial theoretical support for the hypothesis that MDD patients overweight potential losses compared with gains, though the neurobiological underpinnings of this bias are uncertain. Twenty-one unmedicated patients with MDD were compared with 25 healthy controls (HC) using functional magnetic resonance imaging (fMRI) together with an economic decision-making task over mixed lotteries involving probabilistic gains and losses. Region-of-interest analyses evaluated neural signatures of gain and loss coding within a core network of brain areas known to be involved in valuation (anterior insula, caudate nucleus, ventromedial prefrontal cortex). Usable fMRI data were available for 19 MDD and 23 HC subjects. Anterior insula signal showed negative coding of losses (gain > loss) in HC subjects consistent with previous findings, whereas MDD subjects demonstrated significant reversals in these associations (loss > gain). Moreover, depression severity further enhanced the positive coding of losses in anterior insula, ventromedial prefrontal cortex, and caudate nucleus. The hyper-responsivity to losses displayed by the anterior insula of MDD patients was paralleled by a reduced influence of gain, but not loss, stake size on choice latencies. Patients with MDD demonstrate a significant shift from negative to positive coding of losses in the anterior insula, revealing the importance of this structure in value-based decision-making in the context of emotional disturbances.
Construction, classification and parametrization of complex Hadamard matrices
NASA Astrophysics Data System (ADS)
Szöllősi, Ferenc
To improve the design of nuclear systems, high-fidelity neutron fluxes are required. Leadership-class machines provide platforms on which very large problems can be solved. Computing such fluxes efficiently requires numerical methods with good convergence properties and algorithms that can scale to hundreds of thousands of cores. Many 3-D deterministic transport codes are decomposable in space and angle only, limiting them to tens of thousands of cores. Most codes rely on methods such as Gauss-Seidel for fixed source problems and power iteration for eigenvalue problems, which can be slow to converge for challenging problems like those with highly scattering materials or high dominance ratios. Three methods have been added to the 3-D SN transport code Denovo that are designed to improve convergence and enable the full use of cutting-edge computers. The first is a multigroup Krylov solver that converges more quickly than Gauss-Seidel and parallelizes the code in energy such that Denovo can use hundreds of thousands of cores effectively. The second is Rayleigh quotient iteration (RQI), an old method applied in a new context. This eigenvalue solver finds the dominant eigenvalue in a mathematically optimal way and should converge in fewer iterations than power iteration. RQI creates energy-block-dense equations that the new Krylov solver treats efficiently. However, RQI can have convergence problems because it creates poorly conditioned systems. This can be overcome with preconditioning. The third method is a multigrid-in-energy preconditioner. The preconditioner takes advantage of the new energy decomposition because the grids are in energy rather than space or angle. The preconditioner greatly reduces iteration count for many problem types and scales well in energy. It also allows RQI to be successful for problems it could not solve otherwise. The methods added to Denovo accomplish the goals of this work. They converge in fewer iterations than traditional methods and enable the use of hundreds of thousands of cores. Each method can be used individually, with the multigroup Krylov solver and multigrid-in-energy preconditioner being particularly successful on their own. The largest benefit, though, comes from using these methods in concert.
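Rayleigh quotient iteration is easy to state in a few lines. The sketch below is a generic dense-matrix illustration, not the Denovo implementation (which solves the shifted systems with the preconditioned Krylov machinery described above):

    import numpy as np

    def rayleigh_quotient_iteration(A, x0, iters=10):
        """Find an eigenpair of symmetric A; each step solves a shifted system."""
        x = x0 / np.linalg.norm(x0)
        mu = x @ A @ x                  # Rayleigh quotient estimate
        for _ in range(iters):
            # Shifted solve: (A - mu*I) y = x. This system becomes
            # ill-conditioned near convergence, which is why
            # preconditioning matters at scale.
            y = np.linalg.solve(A - mu * np.eye(A.shape[0]), x)
            x = y / np.linalg.norm(y)
            mu = x @ A @ x
        return mu, x

    rng = np.random.default_rng(3)
    B = rng.normal(size=(50, 50))
    A = (B + B.T) / 2                   # symmetric test matrix
    mu, x = rayleigh_quotient_iteration(A, rng.normal(size=50))
    print("converged eigenvalue:", mu)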
M3D-K Simulations of Beam-Driven Alfven Eigenmodes in ASDEX-U
NASA Astrophysics Data System (ADS)
Wang, Ge; Fu, Guoyong; Lauber, Philipp; Schneller, Mirjam
2013-10-01
Core-localized Alfven eigenmodes are often observed in neutral-beam-heated plasmas in the ASDEX-U tokamak. In this work, hybrid simulations with the global kinetic/MHD hybrid code M3D-K have been carried out to investigate the linear stability and nonlinear dynamics of beam-driven Alfven eigenmodes using experimental parameters and profiles of an ASDEX-U discharge. The safety factor q profile is weakly reversed with a minimum q value of about qmin = 3.0. The simulation results show that the n = 3 mode transitions from a reversed shear Alfven eigenmode (RSAE) to a core-localized toroidal Alfven eigenmode (TAE) as qmin drops from 3.0 to 2.79, consistent with results from the stability code NOVA as well as the experimental measurement. The M3D-K results are being compared with those of the linear gyrokinetic stability code LIGKA for benchmarking. The simulation results will also be compared with the measured mode frequency and mode structure. This work was funded by the Max-Planck/Princeton Center for Plasma Physics.
Integration of the SSPM and STAGE with the MPACT Virtual Facility Distributed Test Bed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cipiti, Benjamin B.; Shoman, Nathan
The Material Protection Accounting and Control Technologies (MPACT) program within DOE NE is working toward a 2020 milestone to demonstrate a Virtual Facility Distributed Test Bed. The goal of the Virtual Test Bed is to link all MPACT modeling tools, technology development, and experimental work to create a Safeguards and Security by Design capability for fuel cycle facilities. The Separation and Safeguards Performance Model (SSPM) forms the core safeguards analysis tool, and the Scenario Toolkit and Generation Environment (STAGE) code forms the core physical security tool. These models are used to design and analyze safeguards and security systems and to generate performance metrics. Work over the past year has focused on how these models will integrate with the other capabilities in the MPACT program and on specific model changes to enable more streamlined integration in the future. This report describes the model changes and plans for how the models will be used more collaboratively. The Virtual Facility is not designed to integrate all capabilities into one master code, but rather to maintain stand-alone capabilities that communicate results between codes more effectively.
NASA Astrophysics Data System (ADS)
Nagakura, Hiroki; Richers, Sherwood; Ott, Christian; Iwakami, Wakana; Furusawa, Shun; Sumiyoshi, Kohsuke; Yamada, Shoichi
2017-01-01
We have developed a multi-d radiation-hydrodynamic code which solves the first-principles Boltzmann equation for neutrino transport. It is currently applicable specifically to core-collapse supernovae (CCSNe), but we will extend its applicability to further extreme phenomena such as black hole formation and the coalescence of double neutron stars. In this meeting, I will discuss two things: (1) a detailed comparison with a Monte Carlo neutrino transport code, and (2) axisymmetric CCSNe simulations. Project (1) gives us confidence in our code. The Monte Carlo code has been developed by the Caltech group and is specialized to obtain a steady state. Within the CCSNe community, this is the first attempt to compare two different methods for multi-d neutrino transport. I will show the results of this comparison. For project (2), I particularly focus on the properties of the neutrino distribution function in the semi-transparent region, where only a first-principles Boltzmann solver can appropriately handle the neutrino transport. In addition to these analyses, I will also discuss the "explodability" by the neutrino heating mechanism.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jung, Y. S.; Joo, H. G.; Yoon, J. I.
The nTRACER direct whole-core transport code, employing a planar MOC solution based 3-D calculation method, the subgroup method for resonance treatment, the Krylov matrix exponential method for depletion, and a subchannel thermal/hydraulic calculation solver, was developed for practical high-fidelity simulation of power reactors. Its accuracy and performance are verified by comparing with the measurement data obtained for three pressurized water reactor cores. It is demonstrated that accurate and detailed multi-physics simulation of power reactors is practically realizable without any prior calculations or adjustments.
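The matrix exponential method mentioned above addresses the depletion equation dN/dt = A N, whose exact solution over a step h is N(h) = exp(Ah) N(0). A minimal dense illustration on a toy three-nuclide chain follows (not the nTRACER solver, which applies Krylov subspace approximations of the exponential to large sparse systems):

    import numpy as np
    from scipy.linalg import expm

    # Toy depletion/decay chain: N1 -> N2 -> N3 with rates l1, l2 (1/s).
    l1, l2 = 1e-4, 5e-5
    A = np.array([[-l1, 0.0, 0.0],
                  [ l1, -l2, 0.0],
                  [0.0,  l2, 0.0]])

    N0 = np.array([1.0e20, 0.0, 0.0])   # initial nuclide densities
    h = 3600.0                          # one-hour depletion step
    N = expm(A * h) @ N0                # exact step: N(h) = exp(A h) N(0)
    print("densities after one step:", N)
    print("conservation check:", N.sum())  # this chain conserves total atoms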
Can basal magma oceans generate magnetic fields?
NASA Astrophysics Data System (ADS)
Stegman, D. R.; Ziegler, L. B.; Davies, C.
2015-12-01
Earth's magnetic field is very old, with recent data now showing that the field possibly extended back to 4.1 billion years ago (Tarduno et al., Science, 2015). Yet, based upon our current knowledge, there are difficulties in sustaining a core dynamo over most of Earth's history. Moreover, recent estimates of the thermal and electrical conductivity of liquid iron at core conditions from mineral physics experiments indicate that the adiabatic heat flux is approximately 15 TW, nearly 3 times larger than previously thought, exacerbating the difficulty of driving a core dynamo by convective core cooling alone throughout Earth history. A long-lived basal magma ocean in the lowermost mantle has been proposed to exist in the early Earth, surviving perhaps into the Archean. While the modern, solid lower mantle is an electromagnetic insulator, electrical conductivities of silicate melts are known to be higher, though as yet they are unconstrained for lowermost-mantle conditions. Here we explore the geomagnetic consequences of a basal magma ocean layer for a range of possible electrical conductivities. For the highest electrical conductivities considered, we find a basal magma ocean could be a primary dynamo source region. This would suggest that the proposed three magnetic eras observed in paleomagnetic data originate from distinct sources of dynamo generation: from 4.5-2.45 Ga within a basal magma ocean, from 2.25-0.4 Ga within a superadiabatically cooled liquid core, and from 0.4 Ga to the present within a quasi-adiabatic core that includes a solidifying inner core. We have extended this work by developing a new code, Dynamantle, a model with an entropy-based approach similar to those commonly used in core dynamics models. We present new results using this code to assess the conditions under which basal magma oceans can generate positive ohmic dissipation. This is useful not only for the early Earth but also for many silicate exoplanets, in which basal magma oceans are even more likely to exist.
NASA Astrophysics Data System (ADS)
Stoekl, Alexander; Dorfi, Ernst
2014-05-01
In the early, embedded phase of evolution of terrestrial planets, the planetary core accumulates gas from the circumstellar disk into a planetary envelope. This atmosphere is very significant for the further thermal evolution of the planet, since it forms an insulating layer around the rocky core. The disk-captured envelope is also the starting point for the atmospheric evolution, where the atmosphere is modified by outgassing from the planetary core and by atmospheric mass loss once the planet is exposed to the radiation field of the host star. The final amount of persistent atmosphere around the evolved planet very much characterizes the planet and is a key criterion for habitability. The established way to study disk-accumulated atmospheres is hydrostatic models, even though in many cases the assumption of stationarity is unlikely to be fulfilled. We present, for the first time, time-dependent radiation hydrodynamics simulations of the accumulation process and the interaction between the disk-nebula gas and the planetary core. The calculations were performed with the TAPIR-Code (short for The adaptive, implicit RHD-Code) in spherical symmetry, solving the equations of hydrodynamics, gray radiative transport, and convective energy transport. The models range from the surface of the solid core up to the Hill radius, where the planetary envelope merges into the surrounding protoplanetary disk. Our results show that the time-scale of gas capture and atmospheric growth strongly depends on the mass of the solid core. The amount of atmosphere accumulated during the lifetime of the protoplanetary disk (typically a few Myr) varies accordingly with the mass of the planet. Thus, a Mars-mass core will end up with about 10 bar of atmosphere, while for an Earth-mass core the surface pressure reaches several thousand bar. Even larger planets of several Earth masses quickly capture massive envelopes, which in turn become gravitationally unstable, leading to runaway accretion and the eventual formation of a gas planet.
Guralnick, Robert; Conlin, Tom; Deck, John; Stucky, Brian J.; Cellinese, Nico
2014-01-01
The biodiversity informatics community has discussed aspirations and approaches for assigning globally unique identifiers (GUIDs) to biocollections for nearly a decade. During that time, and despite misgivings, the de facto standard identifier has become the “Darwin Core Triplet”, which is a concatenation of values for institution code, collection code, and catalog number associated with biocollections material. Our aim is not to rehash the challenging discussions regarding which GUID system in theory best supports the biodiversity informatics use case of discovering and linking digital data across the Internet, but to ask how well we can link those data together at this moment, utilizing the identifier schemes that have already been deployed. We gathered Darwin Core Triplets from a subset of VertNet records, along with vertebrate records from GenBank and the Barcode of Life Data System, in order to determine how Darwin Core Triplets are deployed “in the wild”. We asked whether those triplets follow the recommended structure and whether they provide an easy and unambiguous means to track from specimen records to genetic sequence records. We show that Darwin Core Triplets are often riddled with semantic and syntactic errors when deployed and curated in practice, despite specifications about how to construct them. Our results strongly suggest that Darwin Core Triplets that have not been carefully curated are not currently serving a useful role for relinking data. We briefly consider the next steps needed to overcome current limitations. PMID:25470125
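A Darwin Core Triplet is just three colon-delimited fields, but the paper's point is that real-world values violate even this simple structure. A small validator sketch (illustrative Python; the pattern and example values are assumptions, not the study's actual cleaning rules) shows the kind of checks involved:

    import re

    # institutionCode:collectionCode:catalogNumber, e.g. "MVZ:Mamm:12345".
    TRIPLET = re.compile(r"^([A-Za-z0-9]+):([A-Za-z0-9\- ]+):([A-Za-z0-9\-]+)$")

    def parse_triplet(value):
        """Return (institution, collection, catalog) or None if malformed."""
        m = TRIPLET.match(value.strip())
        return m.groups() if m else None

    samples = [
        "MVZ:Mamm:12345",    # well-formed (hypothetical example)
        "MVZ Mamm 12345",    # wrong delimiter
        "MVZ:Mamm:",         # missing catalog number
    ]
    for s in samples:
        print(repr(s), "->", parse_triplet(s))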
Analytical methods in the high conversion reactor core design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zeggel, W.; Oldekop, W.; Axmann, J.K.
High conversion reactor (HCR) design methods have been used at the Technical University of Braunschweig (TUBS) with the technological support of Kraftwerk Union (KWU). The present state and objectives of this cooperation between KWU and TUBS in the field of HCRs are described using existing design models and current activities aimed at further development and validation of the codes. The hard physical and thermal-hydraulic boundary conditions of pressurized water reactor (PWR) cores with a high degree of fuel utilization result from the tight packing of the HCR fuel rods and the high fissionable plutonium content of the fuel. In terms of design, the problem will be solved with rod bundles whose fuel rods are adjusted by helical spacers to the proposed small rod pitches. These HCR properties require novel computational models for neutron physics, thermal hydraulics, and fuel rod design. By means of a survey of the codes, the analytical procedure for present-day HCR core design is presented. The design programs are currently under intensive development, as design tools with a solid scientific foundation and with widely valid essential parameters, as required for a promising optimization of the HCR core. Design results and a survey of future HCR development are given. In this connection, the reoptimization of the PWR core in the direction of an HCR is considered a fascinating scientific task, with respect to both economic and safety aspects.
Enhancing Image Processing Performance for PCID in a Heterogeneous Network of Multi-core Processors
NASA Astrophysics Data System (ADS)
Linderman, R.; Spetka, S.; Fitzgerald, D.; Emeny, S.
The Physically-Constrained Iterative Deconvolution (PCID) image deblurring code is being ported to heterogeneous networks of multi-core systems, including Intel Xeons and IBM Cell Broadband Engines. This paper reports results from experiments using the JAWS supercomputer at MHPCC (60 TFLOPS of dual-dual Xeon nodes linked with Infiniband) and the Cell Cluster at AFRL in Rome, NY. The Cell Cluster has 52 TFLOPS of Playstation 3 (PS3) nodes with IBM Cell Broadband Engine multi-cores and 15 dual-quad Xeon head nodes. The interconnect fabric includes Infiniband, 10 Gigabit Ethernet, and 1 Gigabit Ethernet to each of the 336 PS3s. The results compare approaches to parallelizing FFT executions across the Xeons and the Cell's Synergistic Processing Elements (SPEs) for frame-level image processing. The experiments included Intel's Performance Primitives and Math Kernel Library, FFTW3.2, and Carnegie Mellon's SPIRAL. Optimization of FFTs in the PCID code led to a decrease in the relative processing time for FFTs. Profiling PCID version 6.2, about one year ago, showed that the 13 functions accounting for the highest percentage of processing were all FFT processing functions; they accounted for over 88% of processing time in one run on Xeons. FFT optimizations led to improvement in the current PCID version 8.0. A recent profile showed that only two of the 19 functions with the highest processing time were FFT processing functions. Timing measurements showed that FFT processing for PCID version 8.0 has been reduced to less than 19% of overall processing time. We are working toward a goal of scaling to 200-400 cores per job (1-2 imagery frames/core). Running a pair of cores on each set of frames reduces latency by implementing parallel FFT processing. Our current results show scaling well out to 100 pairs of cores. These results support the next higher level of parallelism in PCID, where groups of several hundred frames, each producing one resolved image, are sent to cliques of several hundred cores in round-robin fashion. Current efforts toward further performance enhancement for PCID are shifting toward using the Playstations in conjunction with the Xeons to take advantage of their outstanding price/performance and Flops/Watt advantages. We are fine-tuning the PCID parallelization strategy to balance processing over the Xeons and Cell BEs and find an optimal partitioning of PCID over the heterogeneous processors. A high-performance information management system that exploits native Infiniband multicast is used to improve latency among the head nodes. Using a publication/subscription-oriented information management system to implement a unified communications platform makes runs on large HPCs with thousands of intercommunicating cores more flexible and more fault tolerant; it features a loose coupling of publishers to subscribers through intervening brokers. We are also working on enhancing performance for both Xeons and Cell BEs by moving selected operations to single precision. Techniques for adapting the code to single precision and performance results are reported.
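The frame-level parallelism described here (independent frames dispatched to worker cores, each running FFT-heavy processing) can be sketched in a few lines. This is illustrative Python with a process pool, not the PCID/MPI implementation; the per-frame operation is a stand-in.

    import numpy as np
    from concurrent.futures import ProcessPoolExecutor

    def process_frame(frame):
        """FFT-dominated per-frame work, standing in for PCID's inner loops."""
        F = np.fft.fft2(frame)              # forward transform
        F *= np.conj(F)                     # toy spectral operation
        return np.fft.ifft2(F).real

    if __name__ == "__main__":
        rng = np.random.default_rng(7)
        frames = [rng.normal(size=(256, 256)) for _ in range(16)]
        with ProcessPoolExecutor(max_workers=4) as pool:
            results = list(pool.map(process_frame, frames))  # one frame per task
        print(len(results), "frames processed")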
Creep relaxation of fuel pin bending and ovalling stresses. [BEND code, OVAL code, MARC-CDC code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chan, D.P.; Jackson, R.J.
1981-10-01
Analytical methods for calculating fuel pin cladding bending and ovalling stresses due to pin bundle-duct mechanical interaction, taking into account nonlinear creep, are presented. Calculated results are in agreement with finite element results from the MARC-CDC program. The methods are used to investigate the effect of creep on the FTR fuel cladding bending and ovalling stresses. It is concluded that the 20 percent cold-worked 316 SS cladding of the reference design has creep rates in the FTR core region high enough to keep the bending and ovalling stresses at acceptable levels. 6 refs.
Salko, Robert K.; Schmidt, Rodney C.; Avramova, Maria N.
2014-11-23
This study describes major improvements to the computational infrastructure of the CTF subchannel code so that full-core, pincell-resolved (i.e., one computational subchannel per real bundle flow channel) simulations can now be performed in much shorter run-times, either in stand-alone mode or as part of coupled-code multi-physics calculations. These improvements support the goals of the Department Of Energy Consortium for Advanced Simulation of Light Water Reactors (CASL) Energy Innovation Hub to develop high fidelity multi-physics simulation tools for nuclear energy design and analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradley, Jon David; Oppel III, Fred J.; Hart, Brian E.
Umbra is a flexible simulation framework for complex systems that can be used by itself for modeling, simulation, and analysis, or to create specific applications. It has been applied to many operations, primarily dealing with robotics and system of system simulations. This version, from 4.8 to 4.8.3b, incorporates bug fixes, refactored code, and new managed C++ wrapper code that can be used to bridge new applications written in C# to the C++ libraries. The new managed C++ wrapper code includes (project/directories) BasicSimulation, CSharpUmbraInterpreter, LogFileView, UmbraAboutBox, UmbraControls, UmbraMonitor and UmbraWrapper.
Fostering Team Awareness in Earth System Modeling Communities
NASA Astrophysics Data System (ADS)
Easterbrook, S. M.; Lawson, A.; Strong, S.
2009-12-01
Existing Global Climate Models are typically managed and controlled at a single site, with varied levels of participation by scientists outside the core lab. As these models evolve to encompass a wider set of earth systems, this central control of the modeling effort becomes a bottleneck. But such models cannot evolve to become fully distributed open source projects unless they address the imbalance in the availability of communication channels: scientists at the core site have access to regular face-to-face communication with one another, while those at remote sites have access to only a subset of these conversations - e.g. formally scheduled teleconferences and user meetings. Because of this imbalance, critical decision making can be hidden from many participants, their code contributions can interact in unanticipated ways, and the community loses awareness of who knows what. We have documented some of these problems in a field study at one climate modeling centre, and started to develop tools to overcome these problems. We report on one such tool, TracSNAP, which analyzes the social network of the scientists contributing code to the model by extracting the data in an existing project code repository. The tool presents the results of this analysis to modelers and model users in a number of ways: recommendations for who has expertise on particular code modules, suggestions for code sections that are related to files being worked on, and visualizations of team communication patterns. The tool is currently available as a plugin for the Trac bug tracking system.
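The kind of analysis TracSNAP performs, linking contributors through the files they touch, can be sketched from version-control history alone. The snippet below is an illustrative Python reconstruction over made-up commit records, not TracSNAP's own code (which runs as a Trac plugin against the project repository):

    from collections import defaultdict
    from itertools import combinations

    # Made-up (author, files-touched) commit records, as mined from a repo log.
    commits = [
        ("alice", {"ocean/mixing.f90", "ocean/tracers.f90"}),
        ("bob",   {"ocean/mixing.f90", "ice/thermo.f90"}),
        ("carol", {"ice/thermo.f90"}),
        ("alice", {"ice/thermo.f90", "ocean/tracers.f90"}),
    ]

    # Expertise map: who has edited each file.
    experts = defaultdict(set)
    for author, files in commits:
        for f in files:
            experts[f].add(author)

    # Social network edges: two authors are linked if they share a file.
    edges = set()
    for authors in experts.values():
        edges.update(combinations(sorted(authors), 2))

    print("experts on ocean/mixing.f90:", experts["ocean/mixing.f90"])
    print("collaboration edges:", sorted(edges))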
Mastery of Content Representation (CoRes) Related TPACK High School Biology Teacher
NASA Astrophysics Data System (ADS)
Nasution, W. R.; Sriyati, S.; Riandi, R.; Safitri, M.
2017-09-01
The purpose of this study was to determine teachers' mastery of Content Representation (CoRes) related to the integration of technology and pedagogy in teaching Biology (TPACK). This research uses a descriptive method. The data were collected using the CoRes instrument as primary data and semi-structured interviews as supporting data. The subjects were biology teachers of class X MIA from four schools in Bandung. The CoRes produced by the teachers were analyzed using a CoRes scoring rubric with coding 1-3 and then categorized into upper, middle, or lower groups. The results showed that two teachers were in the lower category, meaning that their mastery in defining the essential concepts in the CoRes was not yet detailed and specific. Meanwhile, the two other teachers were in the middle category, meaning that their ability to determine the essential concepts in the CoRes is still inadequate and needs to be improved.
Evaluation of stator core loss of high speed motor by using thermography camera
NASA Astrophysics Data System (ADS)
Sato, Takeru; Enokizono, Masato
2018-04-01
In order to design a high-efficiency motor, the iron loss generated in the motor should be reduced. The iron loss of the motor is generated in the stator core, which is produced from electrical steel sheet. The iron loss characteristics of the stator core and of the electrical steel sheet differ because of the building factor. To evaluate the iron loss of the motor, the iron loss of the stator core should therefore be measured more accurately. Thus, we proposed a method for evaluating the iron loss of the stator core using a stator model core. This stator model core has been applied to surface-mounted permanent magnet (PM) motors without windings. By rotating the permanent magnet rotor, a rotating magnetic field is generated in the stator core as in a motor under drive. By evaluating the iron loss of the stator model core, the iron loss of the stator core can be evaluated. The iron loss can also be calculated from a temperature gradient: when the temperature gradient is measured with a thermography camera, the iron loss of the entire stator core can be evaluated as an iron loss distribution. In this paper, the usefulness of this iron loss evaluation method using the stator model core is shown by simulation with FEM and by heat measurement with a thermography camera.
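The temperature-gradient idea rests on the initial-rate relation p = rho * c_p * dT/dt: before heat conduction redistributes energy, the local loss density is proportional to the initial rate of temperature rise seen by the camera. A worked illustration with assumed material values (not the paper's measured data) follows:

    # Initial-rate estimate of local loss density from a thermal transient:
    # p [W/m^3] = rho * c_p * dT/dt, valid before conduction matters.
    rho = 7650.0     # density of electrical steel (kg/m^3), assumed
    c_p = 450.0      # specific heat (J/(kg K)), assumed
    dT_dt = 0.05     # initial temperature rise rate from thermography (K/s), assumed

    p_vol = rho * c_p * dT_dt        # W/m^3
    p_mass = p_vol / rho             # W/kg, the usual core-loss unit
    print(f"loss density: {p_vol:.3g} W/m^3 = {p_mass:.3g} W/kg")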
MATCHED-INDEX-OF-REFRACTION FLOW FACILITY FOR FUNDAMENTAL AND APPLIED RESEARCH
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piyush Sabharwall; Carl Stoots; Donald M. McEligot
2014-11-01
Significant challenges face reactor designers with regard to thermal hydraulic design and associated modeling for advanced reactor concepts. Computational thermal hydraulic codes solve only a piece of the core; there is a need for a whole-core dynamics system code with local resolution to investigate and understand flow behavior with all the relevant physics and thermo-mechanics. The matched index of refraction (MIR) flow facility at Idaho National Laboratory (INL) has a unique capability to contribute to the development of validated computational fluid dynamics (CFD) codes through the use of state-of-the-art optical measurement techniques, such as Laser Doppler Velocimetry (LDV) and Particle Image Velocimetry (PIV). PIV is a non-intrusive velocity measurement technique that tracks flow by imaging the movement of small tracer particles within a fluid. At the heart of a PIV calculation is the cross-correlation algorithm, which is used to estimate the displacement of particles in some small part of the image over the time span between two images. Generally, the displacement is indicated by the location of the largest correlation peak. To quantify these measurements accurately, sophisticated processing algorithms correlate the locations of particles within the image to estimate the velocity (Ref. 1). Before they can be used for reactor design, CFD codes have to be experimentally validated, which requires rigorous experimental measurements producing high-quality, multi-dimensional flow field data with error quantification methodologies. Computational techniques with supporting test data may also be needed to address the heat transfer from the fuel to the coolant during the transition from turbulent to laminar flow, including the possibility of an early laminarization of the flow (Refs. 2 and 3); laminarization occurs when the coolant velocity is nominally in the turbulent regime but the heat transfer properties are indicative of laminar flow. Such studies are complicated enough that CFD models may not converge to the same conclusion. Thus, experimentally scaled thermal hydraulic data with uncertainties should be developed to support modeling and simulation for verification and validation activities. The fluid/solid index-of-refraction matching technique allows optical access in and around geometries that would otherwise be impossible, while the large test section of the INL system provides better spatial and temporal resolution than comparable facilities. Benchmark data for assessing computational fluid dynamics can be acquired for external flows, internal flows, and coupled internal/external flows for better understanding of the physical phenomena of interest. The core objective of this study is to describe the MIR facility and its capabilities, and to outline current development areas for uncertainty quantification, mainly the uncertainty surface method and the cross-correlation method. Using these methods, it is anticipated that a suitable approach to quantify PIV uncertainty for experiments performed in the MIR facility will be established.
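The cross-correlation step at the heart of PIV can be sketched compactly: correlate an interrogation window from the first exposure with the corresponding window from the second, and take the location of the largest peak as the displacement. The Python below is an illustration with synthetic data, not the facility's processing chain:

    import numpy as np
    from scipy.signal import fftconvolve

    def piv_displacement(win_a, win_b):
        """Estimate integer pixel displacement between two interrogation windows."""
        a = win_a - win_a.mean()
        b = win_b - win_b.mean()
        corr = fftconvolve(b, a[::-1, ::-1], mode="full")   # cross-correlation map
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        return peak[0] - (win_a.shape[0] - 1), peak[1] - (win_a.shape[1] - 1)

    rng = np.random.default_rng(11)
    frame = rng.random((80, 80))     # synthetic tracer-particle image
    a = frame[20:52, 20:52]          # window from first exposure
    b = frame[17:49, 15:47]          # second exposure: same pattern moved by (3, 5)
    print("estimated displacement:", piv_displacement(a, b))   # -> (3, 5)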
Core/corona modeling of diode-imploded annular loads
NASA Astrophysics Data System (ADS)
Terry, R. E.; Guillory, J. U.
1980-11-01
The effects of a tenuous exterior plasma corona with anomalous resistivity on the compression and heating of a hollow, collisional aluminum z-pinch plasma are predicted by a one-dimensional code. As the interior ("core") plasma is imploded by its axial current, the energy exchange between core and corona determines the current partition. Under the conditions of rapid core heating and compression, the increase in coronal current provides a trade-off between radial acceleration and compression, which reduces the implosion forces and softens the pinch. Combined with a heuristic account of energy and momentum transport in the strongly coupled core plasma and an approximate radiative loss calculation including Al line, recombination, and bremsstrahlung emission, the current model can provide a reasonably accurate description of imploding annular plasma loads that remain azimuthally symmetric. The implications for optimization of generator-load coupling are examined.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Surkov, A. V., E-mail: surkov.andrew@gmail.com; Kochkin, V. N.; Pesnya, Yu. E.
2015-12-15
A comparison of measured and calculated neutronic characteristics (fast neutron flux and fission rate of {sup 235}U) in the core and reflector of the IR-8 reactor is presented. The irradiation devices equipped with neutron activation detectors were prepared. The determination of fast neutron flux was performed using the {sup 54}Fe (n, p) and {sup 58}Ni (n, p) reactions. The {sup 235}U fission rate was measured using uranium dioxide with 10% enrichment in {sup 235}U. The determination of specific activities of detectors was carried out by measuring the intensity of characteristic gamma peaks using the ORTEC gamma spectrometer. Neutron fields in the core and reflector of the IR-8 reactor were calculated using the MCU-PTR code.
Segmentation, dynamic storage, and variable loading on CDC equipment
NASA Technical Reports Server (NTRS)
Tiffany, S. H.
1980-01-01
Techniques for varying the segmented load structure of a program and for varying the dynamic storage allocation, depending upon whether a batch type or interactive type run is desired, are explained and demonstrated. All changes are based on a single data input to the program. The techniques involve: code within the program to suppress scratch pad input/output (I/O) for a batch run or translate the in-core data storage area from blank common to the end-of-code+1 address of a particular segment for an interactive run; automatic editing of the segload directives prior to loading, based upon data input to the program, to vary the structure of the load for interactive and batch runs; and automatic editing of the load map to determine the initial addresses for in-core data storage for an interactive run.
Multi-dimensional simulations of core-collapse supernova explosions with CHIMERA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Messer, Bronson; Harris, James Austin; Hix, William Raphael
Unraveling the core-collapse supernova (CCSN) mechanism is a problem that remains essentially unsolved despite more than four decades of effort. Spherically symmetric models with otherwise high physical fidelity generally fail to produce explosions, and it is widely accepted that CCSNe are inherently multi-dimensional. Progress in realistic modeling has occurred recently through the availability of petascale platforms and the increasing sophistication of supernova codes. We will discuss our most recent work on understanding neutrino-driven CCSN explosions employing multi-dimensional neutrino-radiation hydrodynamics simulations with the Chimera code. We discuss the inputs and resulting outputs from these simulations, the role of neutrino radiation transport, and the importance of multi-dimensional fluid flows in shaping the explosions. We also highlight the production of 48Ca in long-running Chimera simulations.
Unified transform architecture for AVC, AVS, VC-1 and HEVC high-performance codecs
NASA Astrophysics Data System (ADS)
Dias, Tiago; Roma, Nuno; Sousa, Leonel
2014-12-01
A unified architecture for fast and efficient computation of the set of two-dimensional (2-D) transforms adopted by the most recent state-of-the-art digital video standards is presented in this paper. In contrast to other designs with similar functionality, the presented architecture is supported on a scalable, modular and completely configurable processing structure. This flexible structure not only allows the architecture to be easily reconfigured to support different transform kernels, but it also permits resizing to efficiently support transforms of different orders (e.g. order-4, order-8, order-16 and order-32). Consequently, not only is it highly suitable to realize high-performance multi-standard transform cores, but it also offers highly efficient implementations of specialized processing structures addressing only a reduced subset of transforms that are used by a specific video standard. The experimental results that were obtained by prototyping several configurations of this processing structure in a Xilinx Virtex-7 FPGA show the superior performance and hardware efficiency levels provided by the proposed unified architecture for the implementation of transform cores for the Advanced Video Coding (AVC), Audio Video coding Standard (AVS), VC-1 and High Efficiency Video Coding (HEVC) standards. In addition, such results also demonstrate the ability of this processing structure to realize multi-standard transform cores supporting all the standards mentioned above that are capable of processing the 8k Ultra High Definition Television (UHDTV) video format (7,680 × 4,320 at 30 fps) in real time.
Zhou, Yangen; Lu, Jiamei; Liu, Xianmin; Zhang, Pengcheng; Chen, Wuying
2014-01-01
To explore the impact of Core self-evaluations on job burnout of nurses, and especially to test and verify the mediator role of organizational commitment between the two variables. Random cluster sampling was used to select the participant sample, which consisted of 445 nurses of a hospital in Shanghai. The Core self-evaluations questionnaire, job burnout scale and organizational commitment scale were administered to the study participants. There are significant relationships between Core self-evaluations and the dimensions of job burnout and organizational commitment. There is a significant mediation effect of organizational commitment between Core self-evaluations and job burnout. Enhancing nurses' Core self-evaluations can reduce the incidence of job burnout.
IceChrono v1: a probabilistic model to compute a common and optimal chronology for several ice cores
NASA Astrophysics Data System (ADS)
Parrenin, Frédéric
2015-04-01
Polar ice cores provide exceptional archives of past environmental conditions. The dating of ice cores is essential to interpret the paleo records that they contain, but it is a complicated problem since it involves different dating methods. Here I present IceChrono v1, a new probabilistic model to combine different kinds of chronological information to obtain a common and optimized chronology for several ice cores, as well as its uncertainty. It is based on the inversion of three quantities: the surface accumulation rate, the Lock-In Depth (LID) of air bubbles and the vertical thinning function. The chronological information used are: models of the sedimentation process (accumulation of snow, densification of snow into ice and air trapping, ice flow), ice and gas dated horizons, ice and gas dated depth intervals, Δdepth observations (depth shift between synchronous events recorded in the ice and in the air), and stratigraphic links between ice cores (ice-ice, air-air or mixed ice-air and air-ice links). The optimization problem is formulated as a least squares problem; that is, all probability densities are assumed Gaussian. It is numerically solved using the Levenberg-Marquardt algorithm and a numerical evaluation of the model's Jacobian. IceChrono is similar in scope to the Datice model, but differs from it from the mathematical, numerical and programming points of view. I apply IceChrono to an AICC2012-like experiment and find results similar to those of Datice within a few centuries, which is a confirmation of both the IceChrono and Datice codes. IceChrono v1 is freely available under the GPL v3 open source license.
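Both the formulation (Gaussian least squares) and the solver (Levenberg-Marquardt with a numerically evaluated Jacobian) are standard ingredients. The sketch below shows that recipe in SciPy with a toy linear forward model standing in for the sedimentation models; it is illustrative only, not IceChrono's actual code (which is available under the GPL).

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(x, obs, forward, sigma):
    """Stack (model - observation)/sigma over all chronological constraints.
    `forward` maps the inverted quantities x (accumulation, LID, thinning)
    to predicted ages/durations -- a stand-in for the real forward model."""
    return (forward(x) - obs) / sigma

# Toy linear forward model with synthetic observations (illustrative only)
rng = np.random.default_rng(1)
A = rng.normal(size=(20, 3))
x_true = np.array([0.5, 1.0, -0.3])
obs = A @ x_true + 0.01 * rng.normal(size=20)
sigma = np.full(20, 0.01)

fit = least_squares(residuals, x0=np.zeros(3),
                    args=(obs, lambda x: A @ x, sigma),
                    method="lm", jac="2-point")   # LM + finite-difference Jacobian
print(fit.x)   # ~ x_true
```

The Gaussian assumption is what makes the weighted residual vector the right objective: minimizing its squared norm is equivalent to maximizing the posterior probability.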
First experience with particle-in-cell plasma physics code on ARM-based HPC systems
NASA Astrophysics Data System (ADS)
Sáez, Xavier; Soba, Alejandro; Sánchez, Edilberto; Mantsinen, Mervi; Mateo, Sergi; Cela, José M.; Castejón, Francisco
2015-09-01
In this work, we explore the feasibility of porting a particle-in-cell code (EUTERPE) to an ARM multi-core platform from the Mont-Blanc project. The prototype used is based on a Samsung Exynos 5 system-on-chip with an integrated GPU. It is the first prototype that could be used for High-Performance Computing (HPC), since it supports double precision and parallel programming languages.
A Large number of fast cosmological simulations
NASA Astrophysics Data System (ADS)
Koda, Jun; Kazin, E.; Blake, C.
2014-01-01
Mock galaxy catalogs are essential tools for analyzing large-scale structure data. Many independent realizations of mock catalogs are necessary to evaluate the uncertainties in the measurements. We perform 3600 cosmological simulations for the WiggleZ Dark Energy Survey to obtain new improved Baryon Acoustic Oscillation (BAO) cosmic distance measurements using the density field "reconstruction" technique. We use 1296^3 particles in a periodic box of 600/h Mpc on a side, which is the minimum requirement given the survey volume and observed galaxies. In order to perform such a large number of simulations, we developed a parallel code using the COmoving Lagrangian Acceleration (COLA) method, which can simulate cosmological large-scale structure reasonably well with only 10 time steps. Our simulation is more than 100 times faster than conventional N-body simulations; one COLA simulation takes only 15 minutes on 216 computing cores. We have completed the 3600 simulations within a reasonable computation time of 200k core hours. We also present the results of the revised WiggleZ BAO distance measurement, which are significantly improved by the reconstruction technique.
Tungsten dust impact on ITER-like plasma edge
Smirnov, R. D.; Krasheninnikov, S. I.; Pigarov, A. Yu.; ...
2015-01-12
The impact of tungsten dust originating from divertor plates on the performance of edge plasma in an ITER-like discharge is evaluated using computer modeling with the coupled dust-plasma transport code DUSTT-UEDGE. Different dust injection parameters, including dust size and mass injection rates, are surveyed. It is found that tungsten dust injection with rates as low as a few mg/s can lead to dangerously high tungsten impurity concentrations in the plasma core. Dust injections with rates of a few tens of mg/s are shown to have a significant effect on edge plasma parameters and dynamics in ITER scale tokamaks. The large impact of certain phenomena, such as dust shielding by an ablation cloud and the thermal force on tungsten ions, on dust/impurity transport in edge plasma and consequently on the core tungsten contamination level is demonstrated. Lastly, it is also found that high-Z impurities provided by dust can induce macroscopic self-sustained plasma oscillations in the plasma edge, leading to large temporal variations of edge plasma parameters and heat load to the divertor target plates.
NASA Astrophysics Data System (ADS)
Matsumoto, K.; Hanano, T.; Ito, K.; Ishihara, M.; Higashi, T.; Kikuchi, Y.; Fukumoto, N.; Nagata, M.
2011-10-01
The current drive by Multi-pulsing Coaxial Helicity Injection (M-CHI) has been performed on HIST in a wide range of configurations, from high-q ST to low-q ST and spheromak, generated by the utilization of the toroidal field. It is a key issue to investigate the dynamo mechanism required to maintain each configuration. To identify the detailed mechanisms of helicity transport from the edge to the core region, we have investigated the characteristics of magnetic field fluctuations observed in M-CHI experiments. We have fitted internal magnetic field data to an ST configuration calculated by the equilibrium code with a hollow pressure profile in order to find the sustained configurations. The fluctuation frequency is identified as about 80 kHz, and the fluctuations have been found to propagate from the open flux column region toward the core region. The toroidal mode n=0 is dominant in the high TF coil current operation. Alfven wave generation has been identified by evaluating its velocity as a function of plasma density or magnetic field strength. We will discuss the relationship between the Alfven wave and helicity propagation.
Use of SUSA in Uncertainty and Sensitivity Analysis for INL VHTR Coupled Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerhard Strydom
2010-06-01
The need for a defendable and systematic Uncertainty and Sensitivity approach that conforms to the Code Scaling, Applicability, and Uncertainty (CSAU) process, and that could be used for a wide variety of software codes, was defined in 2008. The GRS (Gesellschaft für Anlagen- und Reaktorsicherheit) company of Germany has developed one type of CSAU approach that is particularly well suited for legacy coupled core analysis codes, and a trial version of their commercial software product SUSA (Software for Uncertainty and Sensitivity Analyses) was acquired on May 12, 2010. This interim milestone report provides an overview of the current status of the implementation and testing of SUSA at the INL VHTR Project Office.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Talley, Darren G.
2017-04-01
This report describes the work and results of the verification and validation (V&V) of the version 1.0 release of the Razorback code. Razorback is a computer code designed to simulate the operation of a research reactor (such as the Annular Core Research Reactor (ACRR)) by a coupled numerical solution of the point reactor kinetics equations, the energy conservation equation for fuel element heat transfer, the equation of motion for fuel element thermal expansion, and the mass, momentum, and energy conservation equations for the water cooling of the fuel elements. This V&V effort was intended to confirm that the code shows good agreement between simulation and actual ACRR operations.
openQ*D simulation code for QCD+QED
NASA Astrophysics Data System (ADS)
Campos, Isabel; Fritzsch, Patrick; Hansen, Martin; Krstić Marinković, Marina; Patella, Agostino; Ramos, Alberto; Tantalo, Nazario
2018-03-01
The openQ*D code for the simulation of QCD+QED with C* boundary conditions is presented. This code is based on openQCD-1.6, from which it inherits the core features that ensure its efficiency: the locally-deflated SAP-preconditioned GCR solver, the twisted-mass frequency splitting of the fermion action, the multilevel integrator, the 4th order OMF integrator, the SSE/AVX intrinsics, etc. The photon field is treated as fully dynamical and C* boundary conditions can be chosen in the spatial directions. We discuss the main features of openQ*D, and we show basic test results and performance analysis. An alpha version of this code is publicly available and can be downloaded from http://rcstar.web.cern.ch/.
Porting ONETEP to graphical processing unit-based coprocessors. 1. FFT box operations.
Wilkinson, Karl; Skylaris, Chris-Kriton
2013-10-30
We present the first graphical processing unit (GPU) coprocessor-enabled version of the Order-N Electronic Total Energy Package (ONETEP) code for linear-scaling first principles quantum mechanical calculations on materials. This work focuses on porting to the GPU the parts of the code that involve atom-localized fast Fourier transform (FFT) operations. These are among the most computationally intensive parts of the code and are used in core algorithms such as the calculation of the charge density, the local potential integrals, the kinetic energy integrals, and the nonorthogonal generalized Wannier function gradient. We have found that direct porting of the isolated FFT operations did not provide any benefit. Instead, it was necessary to tailor the port to each of the aforementioned algorithms to optimize data transfer to and from the GPU. A detailed discussion of the methods used and tests of the resulting performance are presented, which show that individual steps in the relevant algorithms are accelerated by a significant amount. However, the transfer of data between the GPU and host machine is a significant bottleneck in the reported version of the code. In addition, an initial investigation into a dynamic precision scheme for the ONETEP energy calculation has been performed to take advantage of the enhanced single precision capabilities of GPUs. The methods used here result in no disruption to the existing code base. Furthermore, as the developments reported here concern the core algorithms, they will benefit the full range of ONETEP functionality. Our use of a directive-based programming model ensures portability to other forms of coprocessors and will allow this work to form the basis of future developments to the code designed to support emerging high-performance computing platforms. Copyright © 2013 Wiley Periodicals, Inc.
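The central porting lesson of this abstract, that isolated GPU FFTs do not pay off until host-device transfers are amortized across a whole algorithm, can be illustrated with a toy batched-FFT sketch. The example below uses CuPy rather than ONETEP's actual port, and the array shapes are invented; it shows one bulk transfer each way instead of shipping FFT boxes one at a time.

```python
import numpy as np
import cupy as cp  # assumes a CUDA-capable GPU with CuPy installed

def batched_fft_boxes(boxes):
    """FFT many small 'FFT boxes' on the GPU with a single host-to-device
    transfer each way. boxes: (n_boxes, nx, ny, nz) real-space data."""
    d_boxes = cp.asarray(boxes)                    # one bulk transfer in
    d_out = cp.fft.fftn(d_boxes, axes=(1, 2, 3))   # batched 3-D FFTs on device
    return cp.asnumpy(d_out)                       # one bulk transfer out

boxes = np.random.rand(64, 24, 24, 24)
spectra = batched_fft_boxes(boxes)
```

Transferring each 24^3 box individually would pay the PCIe latency 64 times per pass, which is the kind of overhead the authors found erased the benefit of naive, isolated FFT porting.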
PIC codes for plasma accelerators on emerging computer architectures (GPUS, Multicore/Manycore CPUS)
NASA Astrophysics Data System (ADS)
Vincenti, Henri
2016-03-01
The advent of exascale computers will enable 3D simulations of new laser-plasma interaction regimes that were previously out of reach of current petascale computers. However, the paradigm used to write current PIC codes will have to change in order to fully exploit the potential of these new computing architectures. Indeed, achieving exascale computing facilities in the next decade will be a great challenge in terms of energy consumption and will imply hardware developments directly impacting our way of implementing PIC codes. As data movement (from die to network) is by far the most energy-consuming part of an algorithm, future computers will tend to increase memory locality at the hardware level and reduce energy consumption related to data movement by using more and more cores on each compute node ("fat nodes") with a reduced clock speed to allow for efficient cooling. To compensate for the frequency decrease, CPU vendors are making use of long SIMD instruction registers that are able to process multiple data with one arithmetic operator in one clock cycle. SIMD register length is expected to double every four years. GPUs also have a reduced clock speed per core and can process Multiple Instructions on Multiple Data (MIMD). At the software level, Particle-In-Cell (PIC) codes will thus have to achieve both good memory locality and vectorization (for multicore/manycore CPUs) to fully take advantage of these upcoming architectures. In this talk, we present the portable solutions we implemented in our high-performance skeleton PIC code PICSAR to achieve both good memory locality and cache reuse as well as good vectorization on SIMD architectures. We also present the portable solutions used to parallelize the pseudo-spectral quasi-cylindrical code FBPIC on GPUs using the Numba Python compiler.
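As a minimal illustration of the memory-locality and vectorization point, the sketch below pushes particles stored as contiguous structure-of-arrays data, the layout that lets SIMD units process many particles per instruction. It is a schematic stand-in, not PICSAR or FBPIC code; the field interpolation to particle positions is assumed already done.

```python
import numpy as np

def push_particles(x, v, E, dt, qm):
    """Simple kick-drift push over structure-of-arrays particle data.
    Each statement is one fused pass over a contiguous array, which maps
    directly onto wide SIMD loads/stores -- the locality argument above.
    E is the electric field interpolated to each particle."""
    v += qm * E * dt    # accelerate
    x += v * dt         # drift
    return x, v

n = 1_000_000
x = np.zeros(n)
v = np.random.normal(size=n)
E = np.random.normal(size=n)
x, v = push_particles(x, v, E, dt=1e-3, qm=-1.0)
```

An array-of-structures layout (one record per particle) would scatter each field across memory and defeat both caching and vectorization, which is why SoA is the standard choice in modern PIC kernels.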
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamanaka, M; Takashina, M; Kurosu, K
Purpose: In this study we present a Monte Carlo based evaluation of the shielding effect for secondary neutrons from the patient collimator, and for secondary photons emitted in the process of neutron shielding, by a combination of moderator and boron-10 placed around the patient collimator. Methods: The PHITS Monte Carlo radiation transport code was used to simulate the proton beam (Ep = 64 to 93 MeV) from a proton therapy facility. In this study, moderators (water, polyethylene and paraffin) and boron (pure {sup 10}B) were placed around the patient collimator in this order. The ratio of moderator and boron thicknesses was varied while fixing the total thickness at 3 cm. The secondary neutron and photon doses were evaluated as the ambient dose equivalent per absorbed dose [H*(10)/D]. Results: The secondary neutrons are shielded more effectively by the combination of moderator and boron. The most effective combination for shielding neutrons is 2.4 cm of polyethylene with 0.6 cm of boron, giving a maximum reduction rate of 47.3%. The H*(10)/D of secondary photons in the control case is less than that of neutrons by two orders of magnitude, and the maximum increase of secondary photons is 1.0 µSv/Gy with 2.8 cm of polyethylene and 0.2 cm of boron. Conclusion: The combination of moderator and boron is beneficial for shielding secondary neutrons. Both the secondary photons of the control case and those emitted during neutron shielding are much lower than the secondary neutrons, and photons have a low RBE in comparison with neutrons; therefore the secondary photons can be ignored when shielding neutrons. This work was supported by JSPS Core-to-Core Program (No. 23003).
Autism-like behavioral phenotypes in BTBR T+tf/J mice.
McFarlane, H G; Kusek, G K; Yang, M; Phoenix, J L; Bolivar, V J; Crawley, J N
2008-03-01
Autism is a behaviorally defined neurodevelopmental disorder of unknown etiology. Mouse models with face validity to the core symptoms offer an experimental approach to test hypotheses about the causes of autism and translational tools to evaluate potential treatments. We discovered that the inbred mouse strain BTBR T+tf/J (BTBR) incorporates multiple behavioral phenotypes relevant to all three diagnostic symptoms of autism. BTBR displayed selectively reduced social approach, low reciprocal social interactions and impaired juvenile play, as compared with C57BL/6J (B6) controls. Impaired social transmission of food preference in BTBR suggests communication deficits. Repetitive behaviors appeared as high levels of self-grooming by juvenile and adult BTBR mice. Comprehensive analyses of procedural abilities confirmed that social recognition and olfactory abilities were normal in BTBR, with no evidence for high anxiety-like traits or motor impairments, supporting an interpretation of highly specific social deficits. Database comparisons between BTBR and B6 on 124 putative autism candidate genes showed several interesting single nucleotide polymorphisms (SNPs) in the BTBR genetic background, including a nonsynonymous coding region polymorphism in Kmo. The Kmo gene encodes kynurenine 3-hydroxylase, an enzyme-regulating metabolism of kynurenic acid, a glutamate antagonist with neuroprotective actions. Sequencing confirmed this coding SNP in Kmo, supporting further investigation into the contribution of this polymorphism to autism-like behavioral phenotypes. Robust and selective social deficits, repetitive self-grooming, genetic stability and commercial availability of the BTBR inbred strain encourage its use as a research tool to search for background genes relevant to the etiology of autism, and to explore therapeutics to treat the core symptoms.
NASA Astrophysics Data System (ADS)
Hauth, T.; Innocente, V.; Piparo, D.
2012-12-01
The processing of data acquired by the CMS detector at the LHC is carried out with an object-oriented C++ software framework: CMSSW. With the increasing luminosity delivered by the LHC, the treatment of recorded data requires extraordinarily large computing resources, also in terms of CPU usage. A possible solution to cope with this task is the exploitation of the features offered by the latest microprocessor architectures. Modern CPUs present several vector units, the capacity of which is growing steadily with the introduction of new processor generations. Moreover, an increasing number of cores per die is offered by the main vendors, even on consumer hardware. Most recent C++ compilers provide facilities to take advantage of such innovations, either by explicit statements in the program sources or by automatically adapting the generated machine instructions to the available hardware, without the need to modify the existing code base. Programming techniques to implement reconstruction algorithms and optimised data structures are presented that aim at scalable vectorization and parallelization of the calculations. One of their features is the usage of new language features of the C++11 standard. Portions of the CMSSW framework are illustrated which have been found to be especially profitable for the application of vectorization and multi-threading techniques. Specific utility components have been developed to help vectorization and parallelization; they can easily become part of a larger common library. To conclude, careful measurements are described which show the execution speedups achieved via vectorised and multi-threaded code in the context of CMSSW.
NASA Astrophysics Data System (ADS)
Pattanayak, Sujata; Mohanty, U. C.
2018-06-01
This paper presents the development of an extended weather research and forecasting data assimilation (WRFDA) system in the framework of the non-hydrostatic mesoscale model core of the weather research and forecasting system (WRF-NMM), an imperative aspect of numerical modeling studies. Though WRFDA originally provides improved initial conditions for the Advanced Research WRF, we have successfully developed a unified WRFDA utility that can be used by the WRF-NMM core as well. After critical evaluation, code was developed to merge the WRFDA framework and WRF-NMM output. In this paper, we provide a few selected implementations and initial results through a single observation test, and background error statistics such as eigenvalues, eigenvectors and length scales, which showcase the successful development of the extended WRFDA code for the WRF-NMM model. Furthermore, the extended WRFDA system is applied to the forecast of three severe cyclonic storms formed over the Bay of Bengal: Nargis (27 April-3 May 2008), Aila (23-26 May 2009) and Jal (4-8 November 2010). Model results are compared and contrasted within the analysis fields and later with high-resolution model forecasts. The mean initial position error is reduced by 33% with WRFDA as compared to the GFS analysis. The vector displacement errors in track forecast are reduced by 33, 31, 30 and 20% for the 24, 48, 72 and 96 hr forecasts, respectively, in the data assimilation experiments as compared to the control run. The model diagnostics indicate successful implementation of WRFDA within the WRF-NMM system.
Joyce, Brendan; Lee, Danny; Rubio, Alex; Ogurtsov, Aleksey; Alves, Gelio; Yu, Yi-Kuo
2018-03-15
RAId is a software package that has been actively developed for the past 10 years for computationally and visually analyzing MS/MS data. Founded on rigorous statistical methods, RAId's core program computes accurate E-values for peptides and proteins identified during database searches. Making this robust tool readily accessible to the proteomics community by developing a graphical user interface (GUI) is our main goal here. We have constructed a graphical user interface to facilitate the use of RAId on users' local machines. Written in Java, RAId_GUI not only makes it easy to run RAId but also provides tools for data/spectra visualization, MS-product analysis, molecular isotopic distribution analysis, and graphing the retrieval versus the proportion of false discoveries. The results viewer displays the analysis results and allows users to download them. Both the knowledge-integrated organismal databases and the code package (containing the source code, the graphical user interface, and a user manual) are available for download at https://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads/raid.html.
The value of psychosocial group activity in nursing education: A qualitative analysis.
Choi, Yun-Jung
2018-05-01
Nursing faculty often struggle to find effective teaching strategies for nursing students that integrate group work into nursing students' learning activities. This study was conducted to evaluate students' experiences in a psychiatric and mental health nursing course using psychosocial group activities to develop therapeutic communication and interpersonal relationship skills, as well as to introduce psychosocial nursing interventions. A qualitative research design was used. The study explored nursing students' experiences of the course in accordance with the inductive, interpretative, and constructive approaches via focus group interviews. Participants were 17 undergraduate nursing students who registered for a psychiatric and mental health nursing course. The collected data were analyzed by qualitative content analysis. The analysis resulted in 28 codes, 14 interpretive codes, 4 themes (developing interpersonal relationships, learning problem-solving skills, practicing cooperation and altruism, and getting insight and healing), and a core theme (interdependent growth in self-confidence). The psychosocial group activity provided constructive opportunities for the students to work independently and interdependently as healthcare team members through reflective learning experiences. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
O'Connor, Evan Patrick
Core-Collapse Supernovae are one of the most complex astrophysical systems in the universe. They deeply entwine aspects of physics and astrophysics that are rarely side by side in nature. To accurately model core-collapse supernovae one must self-consistently combine general relativity, nuclear physics, neutrino physics, and magneto-hydrodynamics in a symmetry-free computational environment. This is a challenging task, as each one of these aspects on its own is an area of great study. We take an open approach in an effort to encourage collaboration in the core-collapse supernovae community. In this thesis, we develop a new open-source general-relativistic spherically-symmetric Eulerian hydrodynamics code for studying stellar collapse, protoneutron star formation, and evolution until black hole formation. GR1D includes support for finite temperature equations of state and an efficient and qualitatively accurate treatment of neutrino leakage. GR1D implements spherically-symmetric rotation, allowing for the study of slowly rotating stellar collapse. GR1D is available at http://www.stellarcollapse.org. We use GR1D to perform an extensive study of black hole formation in failing core-collapse supernovae. Over 100 presupernova models from various sources are used in over 700 total simulations. We systematically explore the dependence of black hole formation on the input physics: initial zero-age main sequence (ZAMS) mass and metallicity, nuclear equation of state, rotation, and stellar mass loss rates. Assuming the core-collapse supernova mechanism fails and a black hole forms, we find that the outcome, for a given equation of state, can be estimated, to first order, by a single parameter, the compactness of the stellar core at bounce. By comparing the protoneutron star structure at the onset of gravitational instability with solutions of the Tolman-Oppenheimer-Volkoff equations, we find that thermal pressure support in the outer protoneutron star core is responsible for raising the maximum protoneutron star mass by up to 25% above the cold neutron star value. By artificially increasing neutrino heating, we find the critical neutrino heating efficiency required for exploding a given progenitor structure and connect these findings with ZAMS conditions. This establishes, albeit approximately, for the first time based on actual collapse simulations, the mapping between ZAMS parameters and the outcome of core collapse. We also use GR1D to study proposed progenitors of long-duration gamma-ray bursts. We find that many of the proposed progenitors have core structures similar to garden-variety core-collapse supernovae. These are not expected to form black holes, a key ingredient of the collapsar model of long-duration gamma-ray bursts. The small fraction of proposed progenitors that are compact enough to form black holes have fast rotating iron cores, making them prone to a magneto-rotational explosion and the formation of a protomagnetar rather than a black hole. Finally, we present preliminary work on a fully general-relativistic neutrino transport code and neutrino-interaction library. Following along with the trends explored in our black hole formation study, we look at the dependence of the neutrino observables on the bounce compactness. We find clear relationships that will allow us to extract details of the core structure from the next galactic supernova. Following the open approach of GR1D, the neutrino transport code will be made open-source upon completion.
The open-source neutrino-interaction library, NuLib, is already available at http://www.nulib.org.
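The bounce compactness used above has the form xi_M = (M/M_sun)/(R(M)/1000 km), evaluated when the core bounces; a minimal sketch of its evaluation from a one-dimensional profile follows. The choice M = 2.5 M_sun is an assumption here (a commonly quoted value for this diagnostic), and the function names are illustrative.

```python
import numpy as np

M_SUN = 1.989e33   # g
KM = 1.0e5         # cm

def compactness(m_enc, r, M=2.5):
    """xi_M = (M/M_sun) / (R(M)/1000 km): the radius enclosing M solar
    masses of material, taken from a 1-D profile at core bounce.
    m_enc: enclosed mass [g], ascending; r: radius [cm], same ordering."""
    R = np.interp(M * M_SUN, m_enc, r)   # radius enclosing M solar masses
    return M / (R / (1000.0 * KM))

# Toy profile: constant-density sphere (illustrative only)
r = np.linspace(1e5, 5e8, 2000)
m_enc = 4.0 / 3.0 * np.pi * r**3 * 1e7   # rho = 1e7 g/cc
print(compactness(m_enc, r))
```

A larger xi_M means more mass packed inside a smaller radius, which is why it correlates with both the time to black hole formation and the neutrino observables discussed in the thesis.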
Summary of papers on current and anticipated uses of thermal-hydraulic codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Caruso, R.
1997-07-01
The author reviews a range of recent papers which discuss possible uses and future development needs for thermal/hydraulic codes in the nuclear industry. From this review, eight common recommendations are extracted. They are: improve the user interface so that more people can use the code, so that models are easier and less expensive to prepare and maintain, and so that the results are scrutable; design the code so that it can easily be coupled to other codes, such as core physics, containment, and fission product behaviour during severe accidents; improve the numerical methods to make the code more robust and especially faster running, particularly for low pressure transients; ensure that future code development includes assessment of code uncertainties as an integral part of code verification and validation; provide extensive user guidelines or structure the code so that the 'user effect' is minimized; include the capability to model multiple fluids (gas and liquid phase); design the code in a modular fashion so that new models can be added easily; provide the ability to include detailed or simplified component models; build on work previously done with other codes (RETRAN, RELAP, TRAC, CATHARE) and other code validation efforts (CSAU, CSNI SET and IET matrices).
NASA Astrophysics Data System (ADS)
Weng, Yi; He, Xuan; Wang, Junyi; Pan, Zhongqi
2017-01-01
Spatial-division multiplexing (SDM) techniques have been proposed to increase the capacity of optical fiber transmission links by utilizing multicore fibers or few-mode fibers (FMF). The most challenging impairments of SDM-based long-haul optical links are modal dispersion and mode-dependent loss (MDL); MDL arises from inline component imperfections and breaks modal orthogonality, thus degrading the capacity of multiple-input multiple-output (MIMO) receivers. To reduce MDL, optical approaches include mode scramblers and specialty fiber designs, yet these methods carry high cost and cannot completely remove the accumulated MDL in the link. Space-time trellis codes (STTC) have also been proposed to lessen MDL, but suffer from high complexity. In this work, we investigated the performance of a space-time block-coding (STBC) scheme to mitigate MDL in SDM-based optical communication by exploiting space and delay diversity, where the weight matrices of the frequency-domain equalization (FDE) were updated heuristically using a decision-directed recursive-least-squares (RLS) algorithm for convergence and channel estimation. The STBC was evaluated in a six-mode multiplexed system over 30-km FMF via 6×6 MIMO FDE, with modal gain offset 3 dB, core refractive index 1.49, and numerical aperture 0.5. Results show that the optical-signal-to-noise-ratio (OSNR) tolerance can be improved via STBC by approximately 3.1, 4.9, and 7.8 dB for QPSK, 16-QAM and 64-QAM, with the respective bit-error rates (BER) and minimum-mean-square-error (MMSE) criteria. We also evaluate the complexity optimization of the STBC decoding scheme with a zero-forcing decision feedback (ZFDF) equalizer by shortening the coding slot length, which is robust to frequency-selective fading channels and can be scaled up for SDM systems with more dynamic channels.
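For readers unfamiliar with STBC, the two-branch Alamouti code is the canonical example of the space and delay diversity being exploited. The sketch below encodes symbol pairs across two modes and two time slots; it is illustrative only, since the six-mode system in the paper uses larger code matrices together with the RLS-updated MIMO FDE described above.

```python
import numpy as np

def alamouti_encode(symbols):
    """Map pairs of complex symbols onto two modes over two time slots:
        slot 1: ( s1,    s2  )
        slot 2: (-s2*,   s1* )
    The columns are orthogonal, which gives transmit diversity without
    channel knowledge at the transmitter."""
    s = np.asarray(symbols, dtype=complex).reshape(-1, 2)
    out = np.empty((2 * s.shape[0], 2), dtype=complex)
    out[0::2, 0], out[0::2, 1] = s[:, 0], s[:, 1]
    out[1::2, 0], out[1::2, 1] = -np.conj(s[:, 1]), np.conj(s[:, 0])
    return out

print(alamouti_encode([1 + 1j, 1 - 1j]))   # rows = time slots, cols = modes
```

Because the code matrix is orthogonal, a linear combiner at the receiver recovers each symbol with full diversity order, which is the mechanism by which STBC averages out mode-dependent loss.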
Compression behavior of delaminated composite plates
NASA Technical Reports Server (NTRS)
Peck, Scott O.; Springer, George S.
1989-01-01
The response of delaminated composite plates to compressive in-plane loads was investigated. The delaminated region may be either circular or elliptical, and may be located between any two plies of the laminate. For elliptical delaminations, the axes of the ellipse may be arbitrarily oriented with respect to the applied loads. A model was developed that describes the stresses, strains, and deformation of the sublaminate created by the delamination. The mathematical model is based on a two-dimensional nonlinear plate theory that includes the effects of transverse shear deformation. The model takes into account thermal and moisture induced strains, transverse pressures acting on the sublaminate, and contact between the sublaminate and plate. The solution technique used is the Ritz method. A computationally efficient computer implementation of the model was developed. The code can be used to predict the nonlinear load-strain behavior of the sublaminate, including the buckling load, postbuckling behavior, and the onset of delamination growth. The accuracy of the code was evaluated by comparing the model results to benchmark analytical solutions. A series of experiments was conducted on Fiberite T300/976 graphite/epoxy laminates bonded to an aluminum honeycomb core forming a sandwich panel. Either circles or ellipses made from Teflon film were embedded in the laminates, simulating the presence of a delamination. Each specimen was loaded in compression and the strain history of the sublaminate was recorded far into the postbuckling regime. The extent of delamination growth was evaluated by C-scan examination of each specimen. The experimental data were compared to code predictions. The code was found to describe the data with reasonable accuracy. A sensitivity study examined the relative importance of various material properties, the delamination dimensions, the contact model, the transverse pressure differential, the critical strain energy release rate, and the relative growth direction on the buckling load, the postbuckling behavior, and the growth load of the sublaminate.
NASA Astrophysics Data System (ADS)
Benettin, Paolo; Bertuzzo, Enrico
2018-04-01
This paper presents the tran-SAS package, which includes a set of codes to model solute transport and water residence times through a hydrological system. The model is based on a catchment-scale approach that aims at reproducing the integrated response of the system at one of its outlets. The codes are implemented in MATLAB and are meant to be easy to edit, so that users with minimal programming knowledge can adapt them to the desired application. The problem of large-scale solute transport has both theoretical and practical implications. On the one hand, the ability to represent the ensemble of water flow trajectories through a heterogeneous system helps unravel streamflow generation processes and allows us to make inferences on plant-water interactions. On the other hand, transport models are a practical tool that can be used to estimate the persistence of solutes in the environment. The core of the package is based on the implementation of an age master equation (ME), which is solved using general StorAge Selection (SAS) functions. The age ME is first converted into a set of ordinary differential equations, each addressing the transport of an individual precipitation input through the catchment, and then discretized using an explicit numerical scheme. Results show that the implementation is efficient and allows the model to run in short times. The numerical accuracy is critically evaluated and shown to be satisfactory in most cases of hydrologic interest. Additionally, a higher-order implementation is provided within the package to evaluate and, if necessary, improve the numerical accuracy of the results. The codes can be used to model streamflow age and solute concentration, but a number of additional outputs can be obtained by editing the codes to further advance the ability to understand and model catchment transport processes.
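A minimal sketch of one explicit step of a discretized age master equation may help fix ideas (the tran-SAS package itself is in MATLAB; this is Python). The age-class bookkeeping and the SAS-function interface below are schematic assumptions, not the package's actual data structures.

```python
import numpy as np

def step_age_me(s, J, Q, dt, sas_cdf):
    """One explicit step of a discretized age master equation.

    s       : water volumes per age class, youngest first
    J, Q    : inflow (precipitation) and outflow (discharge) rates
    sas_cdf : SAS function expressed as a CDF over normalized rank storage,
              e.g. lambda P: P for uniform (age-neutral) sampling
    """
    P = np.cumsum(s) / s.sum()                        # rank storage per class
    omega = np.diff(np.concatenate(([0.0], sas_cdf(P))))
    s = np.maximum(s - Q * dt * omega, 0.0)           # remove outflow per class
    return np.concatenate(([J * dt], s))              # age all water; add input

# Uniform SAS: discharge samples storage regardless of age
s = np.array([2.0])
for _ in range(100):
    s = step_age_me(s, J=1.0, Q=1.0, dt=1.0, sas_cdf=lambda P: P)
```

Each call shifts every parcel one age class older and prepends the new precipitation input, which mirrors the package's strategy of tracking each input through the catchment as its own ordinary differential equation.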
A first step to compare geodynamical models and seismic observations of the inner core
NASA Astrophysics Data System (ADS)
Lasbleis, M.; Waszek, L.; Day, E. A.
2016-12-01
Seismic observations have revealed a complex inner core, with lateral and radial heterogeneities at all observable scales. The dominant feature is the east-west hemispherical dichotomy in seismic velocity and attenuation. Several geodynamical models have been proposed to explain the observed structure: convective instabilities, external forces, crystallisation processes or the influence of outer core convection. However, interpreting such geodynamical models in terms of the seismic observations is difficult, and has been performed only for very specific models (Geballe 2013, Lincot 2014, 2016). Here, we propose a common framework for making such comparisons. We have developed a Python code that propagates seismic ray paths through kinematic geodynamical models of the inner core, computing a synthetic seismic data set that can be compared to seismic observations. Following the method of Geballe 2013, we start with the simple model of translation. For this, the seismic velocity is proposed to be a function of the age or initial growth rate of the material (since there is no deformation included in our models); the assumption is reasonable when considering translation, growth and super-rotation of the inner core. Using both artificial (random) seismic ray data sets and a real inner core data set (from Waszek et al. 2011), we compare these different models. Our goal is to determine the model which best matches the seismic observations. Preliminary results show that super-rotation successfully creates an eastward shift in properties with depth, as has been observed seismically. Neither the growth rate of inner core material nor the relationship between crystal size and seismic velocity is well constrained; consequently our method does not directly compute the seismic travel times. Instead, we use age, growth rate and other parameters as proxies for the seismic properties, which represents a good first step in comparing geodynamical and seismic observations. Ultimately we aim to release our codes to the broader scientific community, allowing researchers from all disciplines to test their models of inner core growth against seismic observations or to create a kinematic model for the evolution of the inner core which matches new geophysical observations.
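The age-as-proxy idea is simple enough to sketch. The toy below samples a straight ray chord through a rigidly translating inner core and tags each sample with an age proxy; the coordinate convention, translation rate, and function names are invented for illustration and are not the authors' code.

```python
import numpy as np

R_IC = 1221.5e3           # inner-core radius [m]

def mean_age_proxy(entry, exit_pt, v_trans=1e-10, n=64):
    """Mean crystallization-age proxy along a straight ray chord through
    a rigidly translating inner core. Translation is taken along +x:
    material crystallizes on the -x side and melts on the +x side, so a
    parcel's age grows linearly with x. v_trans [m/s] is an assumed rate."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    pts = entry + t * (exit_pt - entry)    # sample points along the chord
    age = (pts[:, 0] + R_IC) / v_trans     # crude age proxy [s]
    return age.mean()

# Illustrative chord between two boundary points
a = np.array([-R_IC, 0.0, 0.0])
b = np.array([0.0, R_IC, 0.0])
print(mean_age_proxy(a, b) / 3.15e7 / 1e6)   # mean age proxy in Myr
```

Mapping such a per-ray proxy onto a real ray catalogue, then comparing the predicted east-west pattern with observed travel-time residuals, is the kind of model-versus-data comparison the framework is built to automate.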
Moving away from exhaustion: how core self-evaluations influence academic burnout.
Lian, Penghu; Sun, Yunfeng; Ji, Zhigang; Li, Hanzhong; Peng, Jiaxi
2014-01-01
Academic burnout refers to students who have low interest in, lack motivation for, and are tired of studying. Studies concerning how to prevent academic burnout are rare. The present study aimed to investigate the impact of core self-evaluations on the academic burnout of university students, focusing mainly on confirming the mediator role of life satisfaction. A total of 470 university students completed the core self-evaluations scale, the Satisfaction with Life Scale, and the academic burnout scale. Both core self-evaluations and life satisfaction were significantly correlated with academic burnout. Structural equation modeling indicated that life satisfaction partially mediated the relationship between core self-evaluations and academic burnout. Core self-evaluations significantly influence academic burnout, partially mediated by life satisfaction.
Multigrid based First-Principles Molecular Dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fattebert, Jean-Luc; Osei-Kuffuor, Daniel; Dunn, Ian
2017-06-01
MGmol is a First-Principles Molecular Dynamics code. It relies on the Born-Oppenheimer approximation and models the electronic structure using Density Functional Theory, with either the LDA or PBE functional. Norm-conserving pseudopotentials are used to model atomic cores.
IceChrono1: a probabilistic model to compute a common and optimal chronology for several ice cores
NASA Astrophysics Data System (ADS)
Parrenin, F.; Bazin, L.; Capron, E.; Landais, A.; Lemieux-Dudon, B.; Masson-Delmotte, V.
2015-05-01
Polar ice cores provide exceptional archives of past environmental conditions. The dating of ice cores and the estimation of the age-scale uncertainty are essential to interpret the climate and environmental records that they contain. It is, however, a complex problem which involves different methods. Here, we present IceChrono1, a new probabilistic model integrating various sources of chronological information to produce a common and optimized chronology for several ice cores, as well as its uncertainty. IceChrono1 is based on the inversion of three quantities: the surface accumulation rate, the lock-in depth (LID) of air bubbles and the thinning function. The chronological information integrated into the model are models of the sedimentation process (accumulation of snow, densification of snow into ice and air trapping, ice flow), ice- and air-dated horizons, ice and air depth intervals with known durations, Δdepth observations (depth shift between synchronous events recorded in the ice and in the air) and finally air and ice stratigraphic links between ice cores. The optimization is formulated as a least squares problem, implying that all probability densities are assumed to be Gaussian. It is numerically solved using the Levenberg-Marquardt algorithm and a numerical evaluation of the model's Jacobian. IceChrono1 follows an approach similar to that of the Datice model which was recently used to produce the AICC2012 (Antarctic ice core chronology) for four Antarctic ice cores and one Greenland ice core. IceChrono1 provides improvements and simplifications with respect to Datice from the mathematical, numerical and programming points of view. The capabilities of IceChrono1 are demonstrated on a case study similar to the AICC2012 dating experiment. We find results similar to those of Datice, within a few centuries, which is a confirmation of both the IceChrono1 and Datice codes. We also test new functionalities with respect to the original version of Datice: observations as ice intervals with known durations, correlated observations, observations as air intervals with known durations and observations as mixed ice-air stratigraphic links. IceChrono1 is freely available under the General Public License v3 open source license.
Current Status of Japan's Activity for GPM/DPR and Global Rainfall Map algorithm development
NASA Astrophysics Data System (ADS)
Kachi, M.; Kubota, T.; Yoshida, N.; Kida, S.; Oki, R.; Iguchi, T.; Nakamura, K.
2012-04-01
The Global Precipitation Measurement (GPM) mission is composed of two categories of satellites: 1) a Tropical Rainfall Measuring Mission (TRMM)-like non-sun-synchronous orbit satellite (GPM Core Observatory); and 2) a constellation of satellites carrying microwave radiometer instruments. The GPM Core Observatory carries the Dual-frequency Precipitation Radar (DPR), which is being developed by the Japan Aerospace Exploration Agency (JAXA) and the National Institute of Information and Communications Technology (NICT), and a microwave radiometer provided by the National Aeronautics and Space Administration (NASA). The GPM Core Observatory will be launched in February 2014, and development of algorithms is underway. The DPR Level 1 algorithm, which provides the DPR L1B product including received power, will be developed by JAXA. The first version was submitted in March 2011. Development of the second version of the DPR L1B algorithm (Version 2) will be completed in March 2012. The Version 2 algorithm includes all basic functions, a preliminary database, the HDF5 I/F, and minimum error handling. Pre-launch code will be developed by the end of October 2012. The DPR Level 2 algorithm has been developed by the DPR Algorithm Team led by Japan, which is under the NASA-JAXA Joint Algorithm Team. The first version of the GPM/DPR Level-2 Algorithm Theoretical Basis Document was completed in November 2010. The second version, the "baseline code", was completed in January 2012. The baseline code includes the main module and eight basic sub-modules (Preparation, Vertical Profile, Classification, SRT, DSD, Solver, Input, and Output modules). The Level-2 algorithms will provide KuPR-only products, KaPR-only products, and dual-frequency precipitation products, with estimated precipitation rate, radar reflectivity, and precipitation information such as drop size distribution and bright-band height. It is important to develop an algorithm applicable to both TRMM/PR and KuPR in order to produce a long-term continuous data set. Pre-launch code will be developed by autumn 2012. The Global Rainfall Map algorithm has been developed by the Global Rainfall Map Algorithm Development Team in Japan. The algorithm builds on the heritage of the Global Satellite Mapping of Precipitation (GSMaP) project between 2002 and 2007 and of the near-real-time version operating at JAXA since 2007. The baseline code used the current operational GSMaP code (V5.222), and its development was completed in January 2012. Pre-launch code will be developed by autumn 2012, including an update of the databases for rain type classification and rain/no-rain classification, and the introduction of rain-gauge correction.
2005-09-01
thermal expansion of these truss elements. One side of the structure is fully clamped, while the other is free to displace. As in prior assessments [6...levels, by using the finite element package ABAQUS. To simulate the complete system, the core and the Kagome face members are modeled using linear...code ABAQUS. To simulate the complete actuation system, the core and Kagome members are modeled using linear Timoshenko-type beams, while the solid
Theoretical Developments in Understanding Massive Star Formation
NASA Technical Reports Server (NTRS)
Yorke, Harold W.; Bodenheimer, Peter
2007-01-01
Except under special circumstances, massive stars in galactic disks will form through accretion. The gravitational collapse of a molecular cloud core will initially produce one or more low-mass quasi-hydrostatic objects of a few Jupiter masses. Through subsequent accretion the masses of these cores grow as they simultaneously evolve toward hydrogen-burning central densities and temperatures. We review the evolution of accreting (proto-)stars, including new results calculated with a publicly available stellar evolution code written by the authors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Savander, V. I.; Shumskiy, B. E., E-mail: borisshumskij@yandex.ru; Pinegin, A. A.
The possibility of decreasing the vapor fraction at the VVER-1200 fuel assembly outlet by shaping the axial power density field is considered. The power density field was shaped by axial redistribution of the concentration of the burnable gadolinium poison in the Gd-containing fuel rods. The mathematical modeling of the VVER-1200 core was performed using the NOSTRA computer code.
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
2008-07-15
The meeting papers discuss research and test reactor fuel performance, manufacturing and testing. Some of the main topics are: conversion from HEU to LEU in different reactors and the corresponding problems and activities; flux performance and core lifetime analysis with HEU and LEU fuels; physics and safety characteristics; measurement of gamma field parameters in cores with LEU fuel; nondestructive analysis of RERTR fuel; thermal hydraulic analysis; fuel interactions; transient analyses and thermal hydraulics for HEU and LEU cores; the microstructure of research reactor fuels; post-irradiation analysis and performance; computer codes and other related problems.
Method for depleting BWRs using optimal control rod patterns
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taner, M.S.; Levine, S.H.; Hsiao, M.Y.
1991-01-01
Control rod (CR) programming is an essential core management activity for boiling water reactors (BWRs). After establishing a core reload design for a BWR, CR programming is performed to develop a sequence of exposure-dependent CR patterns that assure the safe and effective depletion of the core through a reactor cycle. A time-variant target power distribution approach has been assumed in this study. The authors have developed OCTOPUS to implement a new two-step method for designing semioptimal CR programs for BWRs. The optimization procedure of OCTOPUS is based on the method of approximation programming and uses the SIMULATE-E code for nucleonics calculations.
NASA Technical Reports Server (NTRS)
Suarez, Max J. (Editor); Takacs, Lawrence L.
1995-01-01
A detailed description of the numerical formulation of Version 2 of the ARIES/GEOS 'dynamical core' is presented. This code is a nearly 'plug-compatible' dynamics for use in atmospheric general circulation models (GCMs). It is a finite difference model on a staggered latitude-longitude C-grid. It uses second-order differences for all terms except the advection of vorticity by the rotation part of the flow, which is done at fourth-order accuracy. This dynamical core is currently being used in the climate (ARIES) and data assimilation (GEOS) GCMs at Goddard.
Power Radiated from ITER and CIT by Impurities
DOE R&D Accomplishments Database
Cummings, J.; Cohen, S. A.; Hulse, R.; Post, D. E.; Redi, M. H.; Perkins, J.
1990-07-01
The MIST code has been used to model impurity radiation from the edge and core plasmas in ITER and CIT. A broad range of parameters has been varied, including Z{sub eff}, impurity species, impurity transport coefficients, and plasma temperature and density profiles, especially at the edge. For a set of these parameters representative of the baseline ITER ignition scenario, it is seen that impurity radiation, which is produced in roughly equal amounts by the edge and core regions, can make a major improvement in divertor operation without compromising core energy confinement. Scalings of impurity radiation with atomic number and machine size are also discussed.
Fungible weights in logistic regression.
Jones, Jeff A; Waller, Niels G
2016-06-01
In this article we develop methods for assessing parameter sensitivity in logistic regression models. To set the stage for this work, we first review Waller's (2008) equations for computing fungible weights in linear regression. Next, we describe 2 methods for computing fungible weights in logistic regression. To demonstrate the utility of these methods, we compute fungible logistic regression weights using data from the Centers for Disease Control and Prevention's (2010) Youth Risk Behavior Surveillance Survey, and we illustrate how these alternate weights can be used to evaluate parameter sensitivity. To make our work accessible to the research community, we provide R code (R Core Team, 2015) that will generate both kinds of fungible logistic regression weights. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
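The article's analytic equations are not reproduced here, but the underlying idea, alternate weight vectors whose fit is only negligibly worse than the MLE, can be conveyed with a brute-force numerical stand-in (the authors' actual R code computes fungible weights directly; the function names and the likelihood decrement below are illustrative assumptions).

```python
import numpy as np
from scipy.optimize import brentq

def fungible_logistic(X, y, b_mle, decrement=0.5, seed=0):
    """Return one alternate weight vector whose log-likelihood sits exactly
    `decrement` below the MLE value -- a numerical illustration of
    fungible weights, not the paper's analytic method."""
    def ll(b):
        eta = X @ b
        return float(np.sum(y * eta - np.logaddexp(0.0, eta)))
    target = ll(b_mle) - decrement
    rng = np.random.default_rng(seed)
    d = rng.normal(size=b_mle.size)
    d /= np.linalg.norm(d)                 # random unit perturbation direction
    f = lambda t: ll(b_mle + t * d) - target
    t_hi = 1.0
    for _ in range(60):                    # expand bracket until target crossed
        if f(t_hi) <= 0.0:
            break
        t_hi *= 2.0
    return b_mle + brentq(f, 0.0, t_hi) * d
```

Because the logistic log-likelihood is concave, it decreases monotonically along any ray from the MLE, so the bracketing search always finds the weight vector at the requested decrement; sampling many directions traces out the full set of equally-fitting weights whose spread indicates parameter sensitivity.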
MELCOR computer code manuals: Primer and user's guides, Version 1.8.3 September 1994. Volume 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Summers, R.M.; Cole, R.K. Jr.; Smith, R.C.
1995-03-01
MELCOR is a fully integrated, engineering-level computer code that models the progression of severe accidents in light water reactor nuclear power plants. MELCOR is being developed at Sandia National Laboratories for the US Nuclear Regulatory Commission as a second-generation plant risk assessment tool and the successor to the Source Term Code Package. A broad spectrum of severe accident phenomena in both boiling and pressurized water reactors is treated in MELCOR in a unified framework. These include: thermal-hydraulic response in the reactor coolant system, reactor cavity, containment, and confinement buildings; core heatup, degradation, and relocation; core-concrete attack; hydrogen production, transport, and combustion; fission product release and transport; and the impact of engineered safety features on thermal-hydraulic and radionuclide behavior. Current uses of MELCOR include estimation of severe accident source terms and their sensitivities and uncertainties in a variety of applications. This publication of the MELCOR computer code manuals corresponds to MELCOR 1.8.3, released to users in August, 1994. Volume 1 contains a primer that describes MELCOR's phenomenological scope, organization (by package), and documentation. The remainder of Volume 1 contains the MELCOR Users' Guides, which provide the input instructions and guidelines for each package. Volume 2 contains the MELCOR Reference Manuals, which describe the phenomenological models that have been implemented in each package.
Porting a Hall MHD Code to a Graphic Processing Unit
NASA Technical Reports Server (NTRS)
Dorelli, John C.
2011-01-01
We present our experience porting a Hall MHD code to a Graphics Processing Unit (GPU). The code is a 2nd order accurate MUSCL-Hancock scheme which makes use of an HLL Riemann solver to compute numerical fluxes and second-order finite differences to compute the Hall contribution to the electric field. The divergence of the magnetic field is controlled with Dedner's hyperbolic divergence cleaning method. Preliminary benchmark tests indicate a speedup (relative to a single Nehalem core) of 58x for a double precision calculation. We discuss scaling issues which arise when distributing work across multiple GPUs in a CPU-GPU cluster.
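The HLL numerical flux mentioned here has a standard closed form. A generic sketch follows, using the textbook formula rather than the specific Hall MHD implementation; the scalar-advection example at the end is purely illustrative.

```python
# Generic HLL numerical flux. ul/ur and fl/fr are conserved variables and
# physical fluxes on either side of a cell interface; sl/sr are the fastest
# left- and right-going signal-speed estimates. Textbook formula, not the
# specific Hall MHD implementation of the abstract.
import numpy as np

def hll_flux(ul, ur, fl, fr, sl, sr):
    if sl >= 0.0:                 # all waves move right: upwind on the left state
        return fl
    if sr <= 0.0:                 # all waves move left: upwind on the right state
        return fr
    # intermediate case: single-state average between the two waves
    return (sr * fl - sl * fr + sl * sr * (ur - ul)) / (sr - sl)

# Example with scalar advection u_t + a u_x = 0, a = 1: flux f(u) = a * u.
a = 1.0
ul, ur = np.array([1.0]), np.array([0.0])
print(hll_flux(ul, ur, a * ul, a * ur, min(a, 0.0), max(a, 0.0)))
```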
Study of no-man's land physics in the total-f gyrokinetic code XGC1
NASA Astrophysics Data System (ADS)
Ku, Seung Hoe; Chang, C. S.; Lang, J.
2014-10-01
While the "transport shortfall" in the "no-man's land" has often been observed in delta-f codes, it has not yet been observed in the global total-f gyrokinetic particle code XGC1. Since understanding the interaction between edge and core transport appears to be a critical element in predicting ITER performance, understanding the no-man's land issue is an important physics research topic. Simulation results using the Holland case will be presented and the physics causing the shortfall phenomenon will be discussed. Nonlinear nonlocal interaction of turbulence, secondary flows, and transport appears to be the key.
Coenen, Michaela; Rudolf, Klaus-Dieter; Kus, Sandra; Dereskewitz, Caroline
2018-05-24
The International Classification of Functioning, Disability and Health (ICF) provides a standardized language of almost 1500 ICF categories for coding information about functioning and contextual factors. Short lists (ICF Core Sets) are helpful tools to support the implementation of the ICF in clinical routine. In this paper we report on the implementation of ICF Core Sets in clinical routine, using the "ICF Core Sets for Hand Conditions" and the "Lighthouse Project Hand" as an example. Based on the ICF categories of the "Brief ICF Core Set for Hand Conditions", the ICF-based assessment tool (ICF HandA) was developed, aiming to guide the assessment and treatment of patients with injuries and diseases located at the hand. The ICF HandA facilitates the standardized assessment of functioning, taking into consideration a holistic view of the patient, along the continuum of care ranging from acute care to rehabilitation and return to work. Reference points for the assessment of the ICF HandA are determined in treatment guidelines for selected injuries and diseases of the hand, along with recommendations for acute treatment and care and for procedures and interventions of subsequent treatment and rehabilitation. The assessment of the ICF HandA according to the defined reference points can be done using electronic clinical assessment tools and allows for automatic generation of a timely medical report of a patient's functioning. In the future, the ICF HandA can be used to inform the coding of functioning in ICD-11.
Bain, Christine; Parroche, Peggy; Lavergne, Jean Pierre; Duverger, Blandine; Vieux, Claude; Dubois, Valérie; Komurian-Pradel, Florence; Trépo, Christian; Gebuhrer, Lucette; Paranhos-Baccala, Glaucia; Penin, François; Inchauspé, Geneviève
2004-01-01
In vitro studies have described the synthesis of an alternative reading frame form of the hepatitis C virus (HCV) core protein that was named F protein or ARFP (alternative reading frame protein) and includes a domain coded by the +1 open reading frame of the RNA core coding region. The expression of this protein in HCV-infected patients remains controversial. We have analyzed peripheral blood from 47 chronically or previously HCV-infected patients for the presence of T lymphocytes and antibodies specific to the ARFP. Anti-ARFP antibodies were detected in 41.6% of the patients infected with various HCV genotypes. Using a specific ARFP 99-amino-acid polypeptide as well as four ARFP predicted class I-restricted 9-mer peptides, we show that 20% of the patients display specific lymphocytes capable of producing gamma interferon, interleukin-10, or both cytokines. Patients harboring three different viral genotypes (1a, 1b, and 3) carried T lymphocytes reactive to genotype 1b-derived peptides. In longitudinal analysis of patients receiving therapy, both core and ARFP-specific T-cell- and B-cell-mediated responses were documented. The magnitude and kinetics of the HCV antigen-specific responses differed and were not linked with viremia or therapy outcome. These observations provide strong and new arguments in favor of the synthesis, during natural HCV infection, of an ARFP derived from the core sequence. Moreover, the present data provide the first demonstration of the presence of T-cell-mediated immune responses directed to this novel HCV antigen. PMID:15367612
Moving Away from Exhaustion: How Core Self-Evaluations Influence Academic Burnout
Lian, Penghu; Sun, Yunfeng; Ji, Zhigang; Li, Hanzhong; Peng, Jiaxi
2014-01-01
Background: Academic burnout refers to students who have low interest, lack of motivation, and tiredness in studying. Studies concerning how to prevent academic burnout are rare. Objective: The present study aimed to investigate the impact of core self-evaluations on the academic burnout of university students, focusing mainly on confirming the mediating role of life satisfaction. Methods: A total of 470 university students completed the core self-evaluations scale, the Satisfaction with Life scale, and an academic burnout scale. Results: Both core self-evaluations and life satisfaction were significantly correlated with academic burnout. Structural equation modeling indicated that life satisfaction partially mediated the relationship between core self-evaluations and academic burnout. Conclusions: Core self-evaluations significantly influence academic burnout, and the effect is partially mediated by life satisfaction. PMID:24489857
Posttest calculations of bundle quench test CORA-13 with ATHLET-CD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bestele, J.; Trambauer, K.; Schubert, J.D.
Gesellschaft fuer Anlagen- und Reaktorsicherheit is developing, in cooperation with the Institut fuer Kernenergetik und Energiesysteme, Stuttgart, the system code Analysis of Thermalhydraulics of Leaks and Transients with Core Degradation (ATHLET-CD). The code consists of detailed models of the thermal hydraulics of the reactor coolant system. This thermo-fluid dynamics module is coupled with modules describing the early phase of the core degradation, like cladding deformation, oxidation and melt relocation, and the release and transport of fission products. The assessment of the code is being done by the analysis of separate effect tests, integral tests, and plant events. The code will be applied to the verification of severe accident management procedures. The out-of-pile test CORA-13 was conducted by Forschungszentrum Karlsruhe in their CORA test facility. The test consisted of two phases, a heatup phase and a quench phase. At the beginning of the quench phase, a sharp peak in the hydrogen generation rate was observed. Both phases of the test have been calculated with the system code ATHLET-CD. Special efforts have been made to simulate the heat losses and the flow distribution in the test facility and the thermal hydraulics during the quench phase. In addition to previous calculations, the material relocation and the quench phase have been modeled. The temperature increase during the heatup phase, the starting time of the temperature escalation, and the maximum temperatures have been calculated correctly. At the beginning of the quench phase, an increased hydrogen generation rate has been calculated as measured in the experiment.
AN OPEN-SOURCE NEUTRINO RADIATION HYDRODYNAMICS CODE FOR CORE-COLLAPSE SUPERNOVAE
DOE Office of Scientific and Technical Information (OSTI.GOV)
O’Connor, Evan, E-mail: evanoconnor@ncsu.edu; CITA, Canadian Institute for Theoretical Astrophysics, Toronto, M5S 3H8
2015-08-15
We present an open-source update to the spherically symmetric, general-relativistic hydrodynamics, core-collapse supernova (CCSN) code GR1D. The source code is available at http://www.GR1Dcode.org. We extend its capabilities to include a general-relativistic treatment of neutrino transport based on the moment formalisms of Shibata et al. and Cardall et al. We pay special attention to implementing and testing numerical methods and approximations that lessen the computational demand of the transport scheme by removing the need to invert large matrices. This is especially important for the implementation and development of moment-like transport methods in two and three dimensions. A critical component of neutrino transport calculations is the neutrino–matter interaction coefficients that describe the production, absorption, scattering, and annihilation of neutrinos. In this article we also describe our open-source neutrino interaction library NuLib (available at http://www.nulib.org). We believe that an open-source approach to describing these interactions is one of the major steps needed to progress toward robust models of CCSNe and robust predictions of the neutrino signal. We show, via comparisons to full Boltzmann neutrino-transport simulations of CCSNe, that our neutrino transport code performs remarkably well. Furthermore, we show that the methods and approximations we employ to increase efficiency do not decrease the fidelity of our results. We also test the ability of our general-relativistic transport code to model failed CCSNe by evolving a 40-solar-mass progenitor to the onset of collapse to a black hole.
A future-proof architecture for telemedicine using loose-coupled modules and HL7 FHIR.
Gøeg, Kirstine Rosenbeck; Rasmussen, Rune Kongsgaard; Jensen, Lasse; Wollesen, Christian Møller; Larsen, Søren; Pape-Haugaard, Louise Bilenberg
2018-07-01
Most telemedicine solutions are proprietary and disease-specific, which causes a heterogeneous and silo-oriented system landscape with limited interoperability. Solving the interoperability problem would require a strong focus on data integration and standardization in telemedicine infrastructures. Our objective was to suggest a future-proof architecture that consists of small loosely coupled modules, to allow flexible integration with new and existing services, and that uses international standards, to allow high re-usability of modules and interoperability in the health IT landscape. We identified the core features of our future-proof architecture as the following: (1) To provide extended functionality, the system should be designed as a core with modules; database handling and implementation of security protocols are modules, to improve flexibility compared to other frameworks. (2) To ensure loosely coupled modules, the system should implement an inversion-of-control mechanism. (3) For ease of implementation, the system should use HL7 FHIR (Fast Healthcare Interoperability Resources) as the primary standard because it is based on web technologies. We evaluated the feasibility of our architecture by developing an open source implementation of the system called ORDS. ORDS is written in TypeScript, and makes use of the Express framework and HL7 FHIR DSTU2. The code is distributed on GitHub. All modules have been tested unit-wise, but end-to-end testing awaits our first clinical example implementations. Our study showed that highly adaptable and yet interoperable core frameworks for telemedicine can be designed and implemented. Future work includes implementation of a clinical use case and evaluation. Copyright © 2018 Elsevier B.V. All rights reserved.
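A minimal sketch of the inversion-of-control idea described in feature (2), written here in Python for brevity even though ORDS itself is TypeScript; the module names and the toy Observation resource are hypothetical.

```python
# Language-agnostic sketch of inversion of control: the core never
# instantiates its modules directly; implementations are registered against
# interface names and resolved at runtime. The "storage" module and the toy
# FHIR-style resource below are hypothetical, not the ORDS API.
class Container:
    def __init__(self):
        self._factories = {}

    def register(self, interface, factory):
        self._factories[interface] = factory

    def resolve(self, interface):
        return self._factories[interface]()   # construct on demand

class FhirStore:
    def save(self, resource):
        print("persisting", resource["resourceType"], resource["id"])

core = Container()
core.register("storage", FhirStore)           # swap implementations freely

# A minimal FHIR-style Observation resource as plain data:
observation = {"resourceType": "Observation", "id": "bp-1",
               "status": "final",
               "code": {"text": "blood pressure"}}
core.resolve("storage").save(observation)
```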
Mills, Jane; Yates, Karen; Harrison, Helena; Woods, Cindy; Chamberlain-Salaun, Jennifer; Trueman, Scott; Hitchins, Marnie
2016-08-01
Postgraduate nursing students' negative perceptions about a core research subject at an Australian university led to a revision and restructure of the subject using a Communities of Inquiry framework. Negative views are often expressed by nursing and midwifery students about the research process. The success of evidence-based practice is dependent on changing these views. A Community of Inquiry is an online teaching, learning, thinking, and sharing space created through the combination of three domains-teacher presence (related largely to pedagogy), social presence, and cognitive presence (critical thinking). Evaluate student satisfaction with a postgraduate core nursing and midwifery subject in research design, theory, and methodology, which was delivered using a Communities of Inquiry framework. This evaluative study incorporated a validated Communities of Inquiry survey (n=29) and interviews (n=10) and was conducted at an Australian university. Study participants were a convenience sample drawn from 56 postgraduate students enrolled in a core research subject. Survey data were analysed descriptively and interviews were coded thematically. Five main themes were identified: subject design and delivery; cultivating community through social interaction; application-knowledge, practice, research; student recommendations; and technology and technicalities. Student satisfaction was generally high, particularly in the areas of cognitive presence (critical thinking) and teacher presence (largely pedagogy related). Students' views about the creation of a "social presence" were varied but overall, the framework was effective in stimulating both inquiry and a sense of community. The process of research is, in itself, the creation of a "community of inquiry." This framework showed strong potential for use in the teaching of nurse research subjects; satisfaction was high as students reported learning, not simply the theory and the methods of research, but also how to engage in "doing" research by forging professional and intellectual communities. Copyright © 2016 Elsevier Ltd. All rights reserved.
Herbrecht, Evelyn; Kievit, Esther; Spiegel, René; Dima, Diana; Goth, Kirstin; Schmeck, Klaus
2015-01-01
In autism spectrum disorders (ASDs), impairments in fundamental social abilities and a lack of interest in social stimuli become apparent early in life. These impairments are thought to negatively affect further brain and behavioural development. Early intensive interventions can help to attenuate social-development and other risk factors and, thus, to ameliorate the deficits associated with ASDs. We present FIAS, an intensive early intervention approach for young children with ASD, which aims at developing children's social motivation. During 18 days, therapists work continuously for 6 h a day with the affected child, involving the whole family in a day care setting. Follow-up care at home over 1 year as well as fresh-up interventions and inclusion in kindergarten or a play group should stabilise the effects and help to respond to further challenges. Here, we present observations from the first 12 patients (25-48 months of age) treated according to the FIAS approach. We evaluated changes in core autistic symptoms and level of functioning after the 18 days of intensive intervention. Beyond standardised assessment, two innovative video-based instruments (Autism Behaviour Coding System and Evaluationsfragebogen) have been developed to assess autistic symptoms and interaction parameters during intervention. Improvements were noted in most core autistic symptom domains, with the highest effect sizes in domains like eye contact, communication, repetitive behaviour, imitation, motivation and reciprocity. In addition, the level of functioning significantly improved. The first evaluation of the FIAS approach shows promising results, as the FIAS intervention appears to improve core autistic symptom domains as well as the level of everyday functioning. Limitations of this study are the small sample size and the lack of a control group. A more comprehensive and longitudinal evaluation is in progress; this will focus on the stability of the observed effects and will attempt to identify potential predictors of treatment response. © 2015 S. Karger AG, Basel.
Measurement and simulation of thermal neutron flux distribution in the RTP core
NASA Astrophysics Data System (ADS)
Rabir, Mohamad Hairie B.; Jalal Bayar, Abi Muttaqin B.; Hamzah, Na'im Syauqi B.; Mustafa, Muhammad Khairul Ariff B.; Karim, Julia Bt. Abdul; Zin, Muhammad Rawi B. Mohamed; Ismail, Yahya B.; Hussain, Mohd Huzair B.; Mat Husin, Mat Zin B.; Dan, Roslan B. Md; Ismail, Ahmad Razali B.; Husain, Nurfazila Bt.; Jalil Khan, Zareen Khan B. Abdul; Yakin, Shaiful Rizaide B. Mohd; Saad, Mohamad Fauzi B.; Masood, Zarina Bt.
2018-01-01
The in-core thermal neutron flux distribution was determined using measurement and simulation methods for the Malaysian PUSPATI TRIGA Reactor (RTP). In this work, online thermal neutron flux measurement using a Self Powered Neutron Detector (SPND) was performed to verify and validate the computational methods for neutron flux calculation in RTP. The experimental results were used to validate the calculations performed with the Monte Carlo code MCNP. The detailed in-core neutron flux distributions were estimated using the MCNP mesh tally method. The neutron flux mapping obtained revealed the heterogeneous configuration of the core. Based on both measurement and simulation, the thermal flux profile peaked at the centre of the core and gradually decreased towards its outer side. The results show reasonably good agreement between calculation and measurement, with both yielding the same radial thermal flux profile inside the core; the MCNP model overestimates the flux, with a maximum discrepancy of around 20% relative to the SPND measurements. Since the model also predicts the in-core neutron flux distribution well, it can be used for characterization of the full core: neutron flux and spectrum calculations, dose rate calculations, reaction rate calculations, etc.
Foundational numerical capacities and the origins of dyscalculia.
Butterworth, Brian
2010-12-01
One important cause of very low attainment in arithmetic (dyscalculia) seems to be a core deficit in an inherited foundational capacity for numbers. According to one set of hypotheses, arithmetic ability is built on an inherited system responsible for representing approximate numerosity. One account holds that this is supported by a system for representing exactly a small number (less than or equal to four) of individual objects. In these approaches, the core deficit in dyscalculia lies in either of these systems. An alternative proposal holds that the deficit lies in an inherited system for sets of objects and operations on them (numerosity coding) on which arithmetic is built. I argue that a deficit in numerosity coding, not in the approximate number system or the small number system, is responsible for dyscalculia. Nevertheless, critical tests should involve both longitudinal studies and intervention, and these have yet to be carried out. Copyright © 2010 Elsevier Ltd. All rights reserved.
Helical vortices: viscous dynamics and instability
NASA Astrophysics Data System (ADS)
Rossi, Maurice; Selcuk, Can; Delbende, Ivan; IJLRA-UPMC Team; LIMSI-CNRS Team
2014-11-01
Understanding the dynamical properties of helical vortices is of great importance for numerous applications such as wind turbines, helicopter rotors, and ship propellers. Locally these flows often display a helical symmetry: fields are invariant through a combined axial translation of distance Δz and rotation of angle θ = Δz / L around the same z-axis, where 2πL denotes the helix pitch. A DNS code with built-in helical symmetry has been developed in order to compute viscous quasi-steady basic states with one or multiple vortices. These states will be characterized (core structure, ellipticity, ...) as a function of the pitch, with or without an axial flow component. The instability modes growing on the above base flows and their growth rates are investigated with a linearized version of the DNS code coupled to an Arnoldi procedure. This analysis is complemented by a helical thin-cored vortex filament model. ANR HELIX.
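The helical symmetry described above can be stated compactly: fields depend on θ and z only through a single reduced variable.

```latex
% In cylindrical coordinates (r, \theta, z), helical symmetry means fields
% depend on \theta and z only through the reduced variable \varphi:
\[
  u(r,\theta,z) = u(r,\varphi), \qquad \varphi = \theta - \frac{z}{L},
\]
% so that for any axial shift \Delta z,
\[
  u\!\left(r,\; \theta + \tfrac{\Delta z}{L},\; z + \Delta z\right) = u(r,\theta,z).
\]
```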
Gregarious Data Re-structuring in a Many Core Architecture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shrestha, Sunil; Manzano Franco, Joseph B.; Marquez, Andres
In this paper, we have developed a new methodology that takes in consideration the access patterns from a single parallel actor (e.g. a thread), as well as, the access patterns of “grouped” parallel actors that share a resource (e.g. a distributed Level 3 cache). We start with a hierarchical tile code for our target machine and apply a series of transformations at the tile level to improve data residence in a given memory hierarchy level. The contribution of this paper includes (a) collaborative data restructuring for group reuse and (b) a low overhead transformation technique to improve access patterns and bring closely connected data elements together. Preliminary results on a many core architecture, Tilera TileGX, show promising improvements over optimized OpenMP code (up to 31% increase in GFLOPS) and over our own previous work on fine grained runtimes (up to 16%) for selected kernels.
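A toy illustration of the kind of restructuring at stake: restricting work to a tile so the active working set stays resident in a given cache level. The array, tile size, and kernel are hypothetical; the paper's transformations operate on hierarchical tiles shared by grouped actors.

```python
# Toy tiling example: a naive traversal versus a tiled traversal that keeps a
# block of the array resident in cache while it is reused. Sizes are invented.
import numpy as np

n, tile = 1024, 64
a = np.arange(n * n, dtype=np.float64).reshape(n, n)

def column_sums_naive(a):
    s = np.zeros(a.shape[1])
    for i in range(a.shape[0]):          # strides across whole rows each pass
        s += a[i, :]
    return s

def column_sums_tiled(a, t):
    s = np.zeros(a.shape[1])
    for jj in range(0, a.shape[1], t):   # restrict work to a column tile so the
        for i in range(a.shape[0]):      # active working set fits in cache
            s[jj:jj + t] += a[i, jj:jj + t]
    return s

assert np.allclose(column_sums_naive(a), column_sums_tiled(a, tile))
```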
CHAP-2 heat-transfer analysis of the Fort St. Vrain reactor core
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kotas, J.F.; Stroh, K.R.
1983-01-01
The Los Alamos National Laboratory is developing the Composite High-Temperature Gas-Cooled Reactor Analysis Program (CHAP) to provide advanced best-estimate predictions of postulated accidents in gas-cooled reactor plants. The CHAP-2 reactor-core model uses the finite-element method to initialize a two-dimensional temperature map of the Fort St. Vrain (FSV) core and its top and bottom reflectors. The code generates a finite-element mesh, initializes noding and boundary conditions, and solves the nonlinear Laplace heat equation using temperature-dependent thermal conductivities, variable coolant-channel-convection heat-transfer coefficients, and specified internal fuel and moderator heat-generation rates. This paper discusses this method and analyzes an FSV reactor-core accident that simulates a control-rod withdrawal at full power.
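One common way to solve a nonlinear Laplace heat equation with temperature-dependent conductivity is to lag k(T) at the previous iterate so that each pass is a linear solve (Picard iteration). A 1-D finite-difference sketch under made-up k(T), source, and boundary values follows; the actual CHAP-2 model is a 2-D finite-element solver and may iterate differently.

```python
# Hedged 1-D sketch of a nonlinear conduction solve: the conductivity k(T) is
# lagged at the previous iterate (Picard iteration), so each pass solves a
# linear tridiagonal system -d/dx( k(T) dT/dx ) = q. All values are invented.
import numpy as np

n, dx, q = 51, 0.01, 2.0e5                 # nodes, spacing [m], source [W/m^3]
T = np.full(n, 600.0)                      # initial guess [K]

def k(T):                                  # hypothetical k(T) [W/m/K]
    return 20.0 + 0.01 * (T - 600.0)

for _ in range(50):
    kf = k(0.5 * (T[:-1] + T[1:]))         # face conductivities, lagged
    A = np.zeros((n, n)); b = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0              # fixed-temperature boundaries
    b[0] = b[-1] = 600.0
    for i in range(1, n - 1):
        A[i, i - 1] = -kf[i - 1]
        A[i, i]     = kf[i - 1] + kf[i]
        A[i, i + 1] = -kf[i]
        b[i] = q * dx * dx
    T_new = np.linalg.solve(A, b)
    if np.abs(T_new - T).max() < 1e-6:     # stop when the iterates agree
        break
    T = T_new
print("peak temperature [K]:", T.max().round(2))
```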
The Interplay of Opacities and Rotation in Promoting the Explosion of Core-Collapse Supernovae
NASA Astrophysics Data System (ADS)
Vartanyan, David; Burrows, Adam; Radice, David
2018-01-01
For over five decades, the mechanism of explosion in core-collapse supernovae has been a central unsolved problem in astrophysics, challenging both our computational capabilities and our understanding of relevant physics. Current simulations often produce explosions, but they are at times underenergetic. The neutrino mechanism, wherein a fraction of emitted neutrinos is absorbed in the mantle of the star to reignite the stalled shock, remains the dominant model for reviving explosions in massive stars undergoing core collapse. We present here a diverse suite of 2D axisymmetric simulations produced by FORNAX, a highly parallelizable multidimensional supernova simulation code. We explore the effects of various corrections, including the many-body correction, to neutrino-matter opacities and the possible role of rotation in promoting explosion amongst various core-collapse progenitors.
Fast multi-core based multimodal registration of 2D cross-sections and 3D datasets.
Scharfe, Michael; Pielot, Rainer; Schreiber, Falk
2010-01-11
Solving bioinformatics tasks often requires extensive computational power. Recent trends in processor architecture combine multiple cores into a single chip to improve overall performance. The Cell Broadband Engine (CBE), a heterogeneous multi-core processor, provides power-efficient and cost-effective high-performance computing. One application area is image analysis and visualisation, in particular registration of 2D cross-sections into 3D image datasets. Such techniques can be used to put different image modalities into spatial correspondence, for example, 2D images of histological cuts into morphological 3D frameworks. We evaluate the CBE-driven PlayStation 3 as a high performance, cost-effective computing platform by adapting a multimodal alignment procedure to several characteristic hardware properties. The optimisations are based on partitioning, vectorisation, branch reducing and loop unrolling techniques with special attention to 32-bit multiplies and limited local storage on the computing units. We show how a typical image analysis and visualisation problem, the multimodal registration of 2D cross-sections and 3D datasets, benefits from the multi-core based implementation of the alignment algorithm. We discuss several CBE-based optimisation methods and compare our results to standard solutions. More information and the source code are available from http://cbe.ipk-gatersleben.de. The results demonstrate that the CBE processor in a PlayStation 3 accelerates computational intensive multimodal registration, which is of great importance in biological/medical image processing. The PlayStation 3 as a low cost CBE-based platform offers an efficient option to conventional hardware to solve computational problems in image processing and bioinformatics.
Development of a CFD code for casting simulation
NASA Technical Reports Server (NTRS)
Murph, Jesse E.
1992-01-01
The task of developing a computational fluid dynamics (CFD) code to accurately model the mold filling phase of a casting operation was accomplished in a systematic manner. First the state-of-the-art was determined through a literature search, a code search, and participation with casting industry personnel involved in consortium startups. From this material and inputs from industry personnel, an evaluation of the currently available codes was made. It was determined that a few of the codes already contained sophisticated CFD algorithms and further validation of one of these codes could preclude the development of a new CFD code for this purpose. With industry concurrence, ProCAST was chosen for further evaluation. Two benchmark cases were used to evaluate the code's performance using a Silicon Graphics Personal Iris system. The results of these limited evaluations (because of machine and time constraints) are presented along with discussions of possible improvements and recommendations for further evaluation.
Analysis and Design of ITER 1 MV Core Snubber
NASA Astrophysics Data System (ADS)
Wang, Haitian; Li, Ge
2012-11-01
The core snubber, as a passive protection device, can suppress arc current and absorb the energy stored in stray capacitance during electrical breakdown in the accelerating electrodes of the ITER NBI. To design the ITER core snubber, the control parameters of the arc peak current were first analyzed with the Fink-Baker-Owren (FBO) method, which was used to design the DIII-D 100 kV snubber. The B-H curve can be derived from the measured voltage and current waveforms, and the hysteresis loss of the core snubber can be derived using the revised parallelogram method. The core snubber can be represented in simplified form as an equivalent parallel resistance and inductance, an effect neglected by the FBO method. A simulation code including the parallel equivalent resistance and inductance has been set up. Both simulation and experiment show dramatically larger arc shorting currents due to the parallel inductance effect. The case shows that a core snubber designed with the FBO method gives a more compact design.
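The effect described, larger arc currents once the equivalent parallel inductance is included, can be seen in a toy circuit model: the inductive branch integrates the applied voltage and is not limited by the resistance. All component values below are illustrative only.

```python
# Toy model of a snubber as a parallel R-L pair under a constant voltage
# pulse: the inductive branch current builds as (V/L)*t, so the total current
# grows well beyond what the resistance alone would pass. Values are invented.
R, L = 500.0, 2.0e-3          # hypothetical equivalent parallel R [ohm], L [H]
dt, n = 1e-7, 2000            # 0.2 ms of simulated time
V = 1.0e5                     # hypothetical constant voltage during breakdown [V]

i_L = 0.0
for _ in range(n):
    i_L += (V / L) * dt       # L di/dt = V  (inductive branch, forward Euler)
i_total = V / R + i_L         # resistive branch + inductive branch
print("resistive-only current [A]:", V / R)
print("with parallel inductance [A]:", round(i_total, 1))
```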
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evans, Thomas; Hamilton, Steven; Slattery, Stuart
Profugus is an open-source mini-application (mini-app) for radiation transport and reactor applications. It contains the fundamental computational kernels used in the Exnihilo code suite from Oak Ridge National Laboratory. However, Exnihilo is production code with a substantial user base. Furthermore, Exnihilo is export controlled. This makes collaboration with computer scientists and computer engineers difficult. Profugus is designed to bridge that gap. By encapsulating the core numerical algorithms in an abbreviated code base that is open-source, computer scientists can analyze the algorithms and easily make code-architectural changes to test performance without compromising the production code values of Exnihilo. Profugus is not meant to be production software with respect to problem analysis. The computational kernels in Profugus are designed to analyze performance, not correctness. Nonetheless, users of Profugus can set up and run problems with enough real-world features to be useful as proof-of-concept for actual production work.
NASA Astrophysics Data System (ADS)
Timm, S.; Cooper, G.; Fuess, S.; Garzoglio, G.; Holzman, B.; Kennedy, R.; Grassano, D.; Tiradani, A.; Krishnamurthy, R.; Vinayagam, S.; Raicu, I.; Wu, H.; Ren, S.; Noh, S.-Y.
2017-10-01
The Fermilab HEPCloud Facility Project has as its goal to extend the current Fermilab facility interface to provide transparent access to disparate resources including commercial and community clouds, grid federations, and HPC centers. This facility enables experiments to perform the full spectrum of computing tasks, including data-intensive simulation and reconstruction. We have evaluated the use of the commercial cloud to provide elasticity to respond to peaks of demand without overprovisioning local resources. Full scale data-intensive workflows have been successfully completed on Amazon Web Services for two High Energy Physics Experiments, CMS and NOνA, at the scale of 58000 simultaneous cores. This paper describes the significant improvements that were made to the virtual machine provisioning system, code caching system, and data movement system to accomplish this work. The virtual image provisioning and contextualization service was extended to multiple AWS regions, and to support experiment-specific data configurations. A prototype Decision Engine was written to determine the optimal availability zone and instance type to run on, minimizing cost and job interruptions. We have deployed a scalable on-demand caching service to deliver code and database information to jobs running on the commercial cloud. It uses the frontier-squid server and CERN VM File System (CVMFS) clients on EC2 instances and utilizes various services provided by AWS to build the infrastructure (stack). We discuss the architecture and load testing benchmarks on the squid servers. We also describe various approaches that were evaluated to transport experimental data to and from the cloud, and the optimal solutions that were used for the bulk of the data transport. Finally, we summarize lessons learned from this scale test, and our future plans to expand and improve the Fermilab HEPCloud Facility.
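The Decision Engine's task reduces to ranking candidate (zone, instance type) pairs by an expected effective cost that accounts for interruptions. A hedged sketch with made-up prices and preemption probabilities:

```python
# Hedged sketch of the decision-engine idea: rank spot-market candidates by an
# expected effective cost that penalizes likely job interruptions. Prices,
# preemption probabilities, and instance names are made-up placeholders.
candidates = [
    # (availability zone, instance type, $/core-hour, preemption probability)
    ("us-east-1a", "c4.8xlarge", 0.012, 0.15),
    ("us-east-1b", "c4.8xlarge", 0.011, 0.30),
    ("us-west-2a", "m4.10xlarge", 0.013, 0.05),
]

def effective_cost(price, p_preempt, retry_penalty=2.0):
    # an interrupted job is assumed to be re-run, costing retry_penalty extra
    return price * (1.0 + retry_penalty * p_preempt)

best = min(candidates, key=lambda c: effective_cost(c[2], c[3]))
print("provision in:", best[0], best[1])
```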
Static and Dynamic Frequency Scaling on Multicore CPUs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bao, Wenlei; Hong, Changwan; Chunduri, Sudheer
2016-12-28
Dynamic voltage and frequency scaling (DVFS) adapts CPU power consumption by modifying a processor’s operating frequency (and the associated voltage). Typical approaches employing DVFS involve default strategies such as running at the lowest or the highest frequency, or observing the CPU’s runtime behavior and dynamically adapting the voltage/frequency configuration based on CPU usage. In this paper, we argue that many previous approaches suffer from inherent limitations, such as not accounting for the processor-specific impact of frequency changes on energy for different workload types. We first propose a lightweight runtime-based approach to automatically adapt the frequency based on the CPU workload that is agnostic of the processor characteristics. We then show that further improvements can be achieved for affine kernels in the application, using a compile-time characterization instead of run-time monitoring to select the frequency and number of CPU cores to use. Our framework relies on a one-time energy characterization of CPU-specific DVFS profiles followed by a compile-time categorization of loop-based code segments in the application. These are combined to determine a priori the frequency and the number of cores to use to execute the application so as to optimize energy or energy-delay product, outperforming the runtime approach. Extensive evaluation on 60 benchmarks and five multi-core CPUs shows that our approach systematically outperforms the powersave Linux governor, while improving overall performance.
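The compile-time selection step can be pictured as a lookup over a one-time characterization table, choosing the (frequency, cores) pair that minimizes energy-delay product. The table entries below are invented for illustration:

```python
# Hedged sketch of the selection step: given a characterization table of
# (frequency, cores) -> (runtime, power) for a kernel class, pick the
# configuration minimizing energy-delay product (EDP). Entries are invented.
table = {
    # (GHz, cores): (runtime s, average power W)
    (1.2, 8): (4.0, 35.0),
    (2.0, 8): (2.6, 60.0),
    (2.9, 8): (2.1, 95.0),
    (2.0, 4): (4.4, 38.0),
}

def edp(runtime, power):
    return (power * runtime) * runtime      # energy x delay

best = min(table, key=lambda cfg: edp(*table[cfg]))
print("run at %.1f GHz on %d cores" % best)
```

A variant of the same lookup can minimize plain energy instead of EDP when throughput matters less than the power budget.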
Optimizing Irregular Applications for Energy and Performance on the Tilera Many-core Architecture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chavarría-Miranda, Daniel; Panyala, Ajay R.; Halappanavar, Mahantesh
Optimizing applications simultaneously for energy and performance is a complex problem. High performance, parallel, irregular applications are notoriously hard to optimize due to their data-dependent memory accesses, lack of structured locality and complex data structures and code patterns. Irregular kernels are growing in importance in applications such as machine learning, graph analytics and combinatorial scientific computing. Performance- and energy-efficient implementation of these kernels on modern, energy efficient, multicore and many-core platforms is therefore an important and challenging problem. We present results from optimizing two irregular applications, the Louvain method for community detection (Grappolo) and high-performance conjugate gradient (HPCCG), on the Tilera many-core system. We have significantly extended MIT's OpenTuner auto-tuning framework to conduct a detailed study of platform-independent and platform-specific optimizations to improve performance as well as reduce total energy consumption. We explore the optimization design space along three dimensions: memory layout schemes, compiler-based code transformations, and optimization of parallel loop schedules. Using auto-tuning, we demonstrate whole node energy savings of up to 41% relative to a baseline instantiation, and up to 31% relative to manually optimized variants.
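The study extends MIT's OpenTuner; the sketch below is not the OpenTuner API but shows the generic shape of such an auto-tuning loop, random search over a configuration space with a timed stand-in kernel.

```python
# Generic random-search auto-tuning loop. The search space and the
# benchmarked kernel are stand-ins; a real tuner would compile and run one
# code variant per configuration and could also measure energy.
import random, time

def kernel(layout, tile, schedule):
    # stand-in for compiling + running one variant and timing it
    t0 = time.perf_counter()
    acc = 0
    for i in range(50000 // tile * tile):
        acc += i % tile
    return time.perf_counter() - t0

space = {
    "layout":   ["row", "col", "blocked"],
    "tile":     [16, 32, 64, 128],
    "schedule": ["static", "dynamic", "guided"],
}

best, best_t = None, float("inf")
for _ in range(25):
    cfg = {k: random.choice(v) for k, v in space.items()}
    t = kernel(**cfg)
    if t < best_t:
        best, best_t = cfg, t
print("best configuration:", best, "runtime:", round(best_t, 4))
```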
Gyrokinetic simulation of driftwave instability in field-reversed configuration
NASA Astrophysics Data System (ADS)
Fulton, D. P.; Lau, C. K.; Schmitz, L.; Holod, I.; Lin, Z.; Tajima, T.; Binderbauer, M. W.
2016-05-01
Following the recent remarkable progress in magnetohydrodynamic (MHD) stability control in the C-2U advanced beam driven field-reversed configuration (FRC), turbulent transport has become one of the foremost obstacles on the path towards an FRC-based fusion reactor. Significant effort has been made to expand kinetic simulation capabilities in FRC magnetic geometry. The recently upgraded Gyrokinetic Toroidal Code (GTC) now accommodates realistic magnetic geometry from the C-2U experiment at Tri Alpha Energy, Inc. and is optimized to efficiently handle the FRC's magnetic field line orientation. Initial electrostatic GTC simulations find that ion-scale instabilities are linearly stable in the FRC core for realistic pressure gradient drives. Estimated instability thresholds from linear GTC simulations are qualitatively consistent with critical gradients determined from experimental Doppler backscattering fluctuation data, which also find ion scale modes to be depressed in the FRC core. Beyond GTC, A New Code (ANC) has been developed to accurately resolve the magnetic field separatrix and address the interaction between the core and scrape-off layer regions, which ultimately determines global plasma confinement in the FRC. The current status of ANC and future development targets are discussed.
Lee, Anthony; Yau, Christopher; Giles, Michael B.; Doucet, Arnaud; Holmes, Christopher C.
2011-01-01
We present a case-study on the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods. Graphics cards, containing multiple Graphics Processing Units (GPUs), are self-contained parallel computational devices that can be housed in conventional desktop and laptop computers and can be thought of as prototypes of the next generation of many-core processors. For certain classes of population-based Monte Carlo algorithms they offer massively parallel simulation, with the added advantage over conventional distributed multi-core processors that they are cheap, easily accessible, easy to maintain, easy to code, dedicated local devices with low power consumption. On a canonical set of stochastic simulation examples including population-based Markov chain Monte Carlo methods and Sequential Monte Carlo methods, we find speedups from 35- to 500-fold over conventional single-threaded computer code. Our findings suggest that GPUs have the potential to facilitate the growth of statistical modelling into complex data rich domains through the availability of cheap and accessible many-core computation. We believe the speedup we observe should motivate wider use of parallelizable simulation methods and greater methodological attention to their design. PMID:22003276
Concept Inventory Development Reveals Common Student Misconceptions about Microbiology †
Briggs, Amy G.; Hughes, Lee E.; Brennan, Robert E.; Buchner, John; Horak, Rachel E. A.; Amburn, D. Sue Katz; McDonald, Ann H.; Primm, Todd P.; Smith, Ann C.; Stevens, Ann M.; Yung, Sunny B.; Paustian, Timothy D.
2017-01-01
Misconceptions, or alternative conceptions, are incorrect understandings that students have incorporated into their prior knowledge. The goal of this study was the identification of misconceptions in microbiology held by undergraduate students upon entry into an introductory, general microbiology course. This work was the first step in developing a microbiology concept inventory based on the American Society for Microbiology’s Recommended Curriculum Guidelines for Undergraduate Microbiology. Responses to true/false (T/F) questions accompanied by written explanations by undergraduate students at a diverse set of institutions were used to reveal misconceptions for fundamental microbiology concepts. These data were analyzed to identify the most difficult core concepts, misalignment between explanations and answer choices, and the most common misconceptions for each core concept. From across the core concepts, nineteen misconception themes found in at least 5% of the coded answers for a given question were identified. The top five misconceptions, with coded responses ranging from 19% to 43% of the explanations, are described, along with suggested classroom interventions. Identification of student misconceptions in microbiology provides a foundation upon which to understand students’ prior knowledge and to design appropriate tools for improving instruction in microbiology. PMID:29854046
Initial Coupling of the RELAP-7 and PRONGHORN Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
J. Ortensi; D. Andrs; A.A. Bingham
2012-10-01
Modern nuclear reactor safety codes require the ability to solve detailed coupled neutronic-thermal fluids problems. For larger cores, this implies fully coupled higher dimensionality spatial dynamics with appropriate feedback models that can provide enough resolution to accurately compute core heat generation and removal during steady and unsteady conditions. The reactor analysis code PRONGHORN is being coupled to RELAP-7 as a first step to extend RELAP’s current capabilities. This report details the mathematical models, the type of coupling, and the testing results from the integrated system. RELAP-7 is a MOOSE-based application that solves the continuity, momentum, and energy equations in 1-D for a compressible fluid. The pipe and joint capabilities enable it to model parts of the power conversion unit. The PRONGHORN application, also developed on the MOOSE infrastructure, solves the coupled equations that define the neutron diffusion, fluid flow, and heat transfer in a full core model. The two systems are loosely coupled to simplify the transition towards a more complex infrastructure. The integration is tested on a simplified version of the OECD/NEA MHTGR-350 Coupled Neutronics-Thermal Fluids benchmark model.
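Loose coupling of the kind described usually means Picard iteration: each code advances with the other's latest field held fixed, and the exchange repeats until interface values stop changing. A sketch with algebraic stand-ins for the two solvers (not the RELAP-7 or PRONGHORN physics):

```python
# Hedged sketch of loose (Picard) coupling between two physics codes. The two
# "solvers" are algebraic stand-ins with invented coefficients, chosen only so
# the exchange has a fixed point; they are not the actual models.
def core_solver(coolant_temp):
    # stand-in neutronics/heat model: power rises as coolant cools
    return 3000.0 - 2.0 * (coolant_temp - 550.0)     # core power

def coolant_solver(core_power):
    # stand-in thermal-hydraulics model: coolant heats with power
    return 550.0 + 0.01 * (core_power - 3000.0)      # coolant temperature

T, P = 560.0, 3000.0
for it in range(100):
    P_new = core_solver(T)                           # advance code A
    T_new = coolant_solver(P_new)                    # advance code B
    if abs(T_new - T) < 1e-8 and abs(P_new - P) < 1e-8:
        break                                        # interface values settled
    T, P = T_new, P_new
print("converged after", it + 1, "exchanges: P =", round(P, 2), "T =", round(T, 4))
```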
Kalantzis, Georgios; Tachibana, Hidenobu
2014-01-01
For microdosimetric calculations, event-by-event Monte Carlo (MC) methods are considered the most accurate. The main shortcoming of those methods is the extensive requirement for computational time. In this work we present an event-by-event MC code of low projectile energy electron and proton tracks for accelerated microdosimetric MC simulations on a graphic processing unit (GPU). Additionally, a hybrid implementation scheme was realized by employing OpenMP and CUDA in such a way that both GPU and multi-core CPU were utilized simultaneously. The two implementation schemes have been tested and compared with the sequential single-threaded MC code on the CPU. Performance comparison was established on the speed-up for a set of benchmarking cases of electron and proton tracks. A maximum speedup of 67.2 was achieved for the GPU-based MC code, while a further improvement of up to 20% in speedup was achieved for the hybrid approach. The results indicate the capability of our CPU-GPU implementation for accelerated MC microdosimetric calculations of both electron and proton tracks without loss of accuracy. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Hur, Min Young; Verboncoeur, John; Lee, Hae June
2014-10-01
Particle-in-cell (PIC) simulation offers higher fidelity than fluid simulation for plasma devices that require transient kinetic modeling. It makes fewer approximations to the plasma kinetics, but requires many particles and grid cells to obtain meaningful results, which means the simulation time grows in proportion to the number of particles. Therefore, PIC simulation needs high performance computing. In this research, a graphic processing unit (GPU) is adopted for high performance computing of PIC simulations of low temperature discharge plasmas. GPUs have many-core processors and high memory bandwidth compared with a central processing unit (CPU). NVIDIA GeForce GPUs with hundreds of cores were used for the tests, offering cost-effective performance. The PIC algorithm is divided into two modules, a field solver and a particle mover. The particle mover is further divided into four routines, named move, boundary, Monte Carlo collision (MCC), and deposit. Overall, the GPU code solves particle motions as well as the electrostatic potential in two-dimensional geometry almost 30 times faster than a single CPU code. This work was supported by the Korea Institute of Science Technology Information.
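The module split described here (field solver plus a particle mover with move, boundary, MCC, and deposit routines) is visible even in a minimal 1-D electrostatic PIC cycle. The sketch below uses a periodic domain, normalized units, and nearest-grid-point weighting, and omits the MCC step; it is illustrative only, not the 2-D GPU code of the abstract.

```python
# Minimal 1-D electrostatic PIC cycle: deposit, field solve, move, boundary.
# Normalized units, periodic domain, NGP weighting; the MCC routine is omitted.
import numpy as np

ng, n_part, L, dt = 64, 10000, 2 * np.pi, 0.1
dx = L / ng
rng = np.random.default_rng(1)
x = rng.uniform(0, L, n_part)               # particle positions
v = rng.normal(0, 1, n_part)                # particle velocities

for _ in range(100):
    # deposit: net charge density on the grid (+1 ion background, -1 electrons)
    cells = (x / dx).astype(int) % ng
    rho = 1.0 - np.bincount(cells, minlength=ng) / (n_part / ng)
    # field solve: Poisson  d2(phi)/dx2 = -rho  via FFT, then E = -d(phi)/dx
    k = 2 * np.pi * np.fft.fftfreq(ng, d=dx)
    k[0] = 1.0                              # dummy value to avoid divide-by-zero
    phi_hat = np.fft.fft(rho) / k**2
    phi_hat[0] = 0.0                        # zero-mean potential
    E = -np.real(np.fft.ifft(1j * k * phi_hat))
    # move: push electrons (charge -1) with E read back at their cells
    v -= E[cells] * dt
    x += v * dt
    # boundary: periodic wrap
    x %= L

print("final field energy:", 0.5 * np.sum(E**2) * dx)
```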
Nonlinear dynamic simulation of single- and multi-spool core engines
NASA Technical Reports Server (NTRS)
Schobeiri, T.; Lippke, C.; Abouelkheir, M.
1993-01-01
In this paper a new computational method for accurate simulation of the nonlinear dynamic behavior of single- and multi-spool core engines, turbofan engines, and power generation gas turbine engines is presented. In order to perform the simulation, a modularly structured computer code has been developed which includes individual mathematical modules representing various engine components. The generic structure of the code enables the dynamic simulation of arbitrary engine configurations ranging from single-spool thrust generation to multi-spool thrust/power generation engines under adverse dynamic operating conditions. For precise simulation of turbine and compressor components, row-by-row calculation procedures were implemented that account for the specific turbine and compressor cascade and blade geometry and characteristics. The dynamic behavior of the subject engine is calculated by solving a number of systems of partial differential equations, which describe the unsteady behavior of the individual components. In order to ensure the capability, accuracy, robustness, and reliability of the code, comprehensive critical performance assessment and validation tests were performed. As representatives, three different transient cases with single- and multi-spool thrust and power generation engines were simulated. The transient cases range from operating with a prescribed fuel schedule, to extreme load changes, to generator and turbine shut down.
Pellet Injection in ITER with ∇B-induced Drift Effect using TASK/TR and HPI2 Codes
NASA Astrophysics Data System (ADS)
Kongkurd, R.; Wisitsorasak, A.
2017-09-01
The impact of pellet injection in the International Thermonuclear Experimental Reactor (ITER) is investigated using the integrated predictive modeling codes TASK/TR and HPI2. In the core, the plasma profiles are predicted by the TASK/TR code, in which the core transport models consist of a combination of the MMM95 anomalous transport model and NCLASS neoclassical transport. The pellet ablation in the plasma is described using the neutral gas shielding (NGS) model, with inclusion of the ∇B-induced E×B drift of the ionized ablated pellet particles. It is found that high-field-side injection can deposit the pellet mass deeper than injection from the low-field side due to the advantage of the ∇B-induced drift. When pellets with a deuterium-tritium mixing ratio of unity are launched with a speed of 200 m/s and a radius of 3 mm, injected at a frequency of 2 Hz, the line-average density and the plasma stored energy are increased by 80% and 25%, respectively.
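For reference, the NGS ablation-rate scaling commonly quoted from Parks and Turnbull (1978) for a hydrogenic pellet is reproduced below; exact coefficients vary between implementations, and HPI2 layers the ∇B-induced drift displacement on top of such a model.

```latex
% Commonly quoted NGS ablation-rate scaling (Parks & Turnbull, 1978) for a
% hydrogenic pellet; coefficients differ between implementations:
\[
  \frac{dN}{dt} \;\simeq\; 1.12\times10^{16}\,
  n_e^{1/3}\, T_e^{1.64}\, r_p^{4/3}\, M_i^{-1/3}
  \quad \text{atoms s}^{-1},
\]
% with n_e the local electron density in cm^{-3}, T_e the electron temperature
% in eV, r_p the pellet radius in cm, and M_i the pellet species mass in amu.
```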