Sample records for advanced simulation codes

  1. Benchmark Simulations of the Thermal-Hydraulic Responses during EBR-II Inherent Safety Tests using SAM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Rui; Sumner, Tyler S.

    2016-04-17

    An advanced system analysis tool, SAM, is being developed at Argonne National Laboratory under DOE-NE’s Nuclear Energy Advanced Modeling and Simulation (NEAMS) program for fast-running, improved-fidelity, whole-plant transient analyses. As an important part of code development, companion validation activities are being conducted to ensure the performance and validity of the SAM code. This paper presents benchmark simulations of two EBR-II tests, SHRT-45R and BOP-302R, whose data are available through the support of DOE-NE’s Advanced Reactor Technology (ART) program. The code predictions of major primary coolant system parameters are compared with the test results. Additionally, SAS4A/SASSYS-1 simulation results are included for a code-to-code comparison.

  2. Dakota Uncertainty Quantification Methods Applied to the CFD code Nek5000

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Delchini, Marc-Olivier; Popov, Emilian L.; Pointer, William David

    This report presents the state of advancement of a Nuclear Energy Advanced Modeling and Simulation (NEAMS) project to characterize the uncertainty of the computational fluid dynamics (CFD) code Nek5000 using the Dakota package for flows encountered in the nuclear engineering industry. Nek5000 is a high-order spectral element CFD code developed at Argonne National Laboratory for high-resolution spectral-filtered large eddy simulations (LESs) and unsteady Reynolds-averaged Navier-Stokes (URANS) simulations.

  3. The Modeling of Advanced BWR Fuel Designs with the NRC Fuel Depletion Codes PARCS/PATHS

    DOE PAGES

    Ward, Andrew; Downar, Thomas J.; Xu, Y.; ...

    2015-04-22

    The PATHS (PARCS Advanced Thermal Hydraulic Solver) code was developed at the University of Michigan in support of U.S. Nuclear Regulatory Commission research to solve the steady-state, two-phase, thermal-hydraulic equations for a boiling water reactor (BWR) and to provide thermal-hydraulic feedback for BWR depletion calculations with the neutronics code PARCS (Purdue Advanced Reactor Core Simulator). The simplified solution methodology, including a three-equation drift flux formulation and an optimized iteration scheme, yields very fast run times in comparison to conventional thermal-hydraulic systems codes used in the industry, while still retaining sufficient accuracy for applications such as BWR depletion calculations. Lastly, the capability to model advanced BWR fuel designs with part-length fuel rods and heterogeneous axial channel flow geometry has been implemented in PATHS, and the code has been validated against previously benchmarked advanced core simulators as well as BWR plant and experimental data. We describe the modifications to the codes and the results of the validation in this paper.
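The drift-flux closure at the heart of a formulation like the one described above relates void fraction to the phase volumetric fluxes. A minimal sketch follows; the distribution parameter and drift velocity are generic placeholder values, not PATHS's actual correlations:

```python
def void_fraction(j_g, j_l, c0=1.13, v_gj=0.2):
    """Drift-flux void fraction: alpha = j_g / (C0*j + Vgj).

    j_g, j_l : gas and liquid volumetric fluxes (m/s)
    c0       : distribution parameter (placeholder value)
    v_gj     : drift velocity, m/s (placeholder value)
    """
    j = j_g + j_l                 # total volumetric flux (m/s)
    return j_g / (c0 * j + v_gj)

# example: moderately voided upflow
alpha = void_fraction(j_g=0.5, j_l=1.0)
```

In a production code the parameters c0 and v_gj would themselves be flow-regime-dependent correlations rather than constants.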

  4. SAC: Sheffield Advanced Code

    NASA Astrophysics Data System (ADS)

    Griffiths, Mike; Fedun, Viktor; Mumford, Stuart; Gent, Frederick

    2013-06-01

    The Sheffield Advanced Code (SAC) is a fully non-linear MHD code designed for simulations of linear and non-linear wave propagation in strongly gravitationally stratified magnetized plasma. It was developed primarily for the forward modelling of helioseismological processes and for the coupling processes in the solar interior, photosphere, and corona; it is built on the well-known VAC platform that allows robust simulation of the macroscopic processes in gravitationally stratified (non-)magnetized plasmas. The code has no limitations of simulation length in time imposed by complications originating from the upper boundary, nor does it require implementation of special procedures to treat the upper boundaries. SAC inherits its modular structure from VAC, making it easy to add new physics.

  5. Development of the US3D Code for Advanced Compressible and Reacting Flow Simulations

    NASA Technical Reports Server (NTRS)

    Candler, Graham V.; Johnson, Heath B.; Nompelis, Ioannis; Subbareddy, Pramod K.; Drayna, Travis W.; Gidzak, Vladimyr; Barnhardt, Michael D.

    2015-01-01

    Aerothermodynamics and hypersonic flows involve complex multi-disciplinary physics, including finite-rate gas-phase kinetics, finite-rate internal energy relaxation, gas-surface interactions with finite-rate oxidation and sublimation, transition to turbulence, large-scale unsteadiness, shock-boundary layer interactions, fluid-structure interactions, and thermal protection system ablation and thermal response. Many of the flows have a large range of length and time scales, requiring large computational grids, implicit time integration, and large solution run times. The University of Minnesota NASA US3D code was designed for the simulation of these complex, highly-coupled flows. It has many of the features of the well-established DPLR code, but uses unstructured grids and has many advanced numerical capabilities and physical models for multi-physics problems. The main capabilities of the code are described, the physical modeling approaches are discussed, the different types of numerical flux functions and time integration approaches are outlined, and the parallelization strategy is summarized. Comparisons between US3D and the NASA DPLR code are presented, and several advanced simulations are presented to illustrate some of the novel features of the code.

  6. Micromagnetic Code Development of Advanced Magnetic Structures Final Report CRADA No. TC-1561-98

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cerjan, Charles J.; Shi, Xizeng

    The specific goals of this project were to: Further develop the previously written micromagnetic code DADIMAG (DOE code release number 980017); Validate the code. The resulting code was expected to be more realistic and useful for simulations of magnetic structures of specific interest to Read-Rite programs. We also planned to further develop the code for use in internal LLNL programs. This project complemented LLNL CRADA TC-840-94 between LLNL and Read-Rite, which allowed for simulations of the advanced magnetic head development completed under the CRADA. TC-1561-98 was effective concurrently with LLNL non-exclusive copyright license (TL-1552-98) to Read-Rite for DADIMAG Version 2 executable code.

  7. Overview of High-Fidelity Modeling Activities in the Numerical Propulsion System Simulations (NPSS) Project

    NASA Technical Reports Server (NTRS)

    Veres, Joseph P.

    2002-01-01

    A high-fidelity simulation of a commercial turbofan engine has been created as part of the Numerical Propulsion System Simulation Project. The high-fidelity computer simulation utilizes computer models that were developed at NASA Glenn Research Center in cooperation with turbofan engine manufacturers. The average-passage (APNASA) Navier-Stokes based viscous flow computer code is used to simulate the 3D flow in the compressors and turbines of the advanced commercial turbofan engine. The 3D National Combustion Code (NCC) is used to simulate the flow and chemistry in the advanced aircraft combustor. The APNASA turbomachinery code and the NCC combustor code exchange boundary conditions at the interface planes at the combustor inlet and exit. This computer simulation technique can evaluate engine performance at steady operating conditions. The 3D flow models provide detailed knowledge of the airflow within the fan and compressor, the high and low pressure turbines, and the flow and chemistry within the combustor. The models simulate the performance of the engine at operating conditions that include sea level takeoff and the altitude cruise condition.
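The boundary-condition exchange at the interface planes described above amounts to a fixed-point iteration between component solvers. A toy sketch follows; the two "models" are invented scalar relations standing in for the turbomachinery and combustor codes, not APNASA or NCC:

```python
def compressor_exit_pressure(p_combustor_in):
    # toy surrogate for the turbomachinery solver (invented relation)
    return 0.5 * p_combustor_in + 10.0

def combustor_inlet_pressure(p_compressor_exit):
    # toy surrogate for the combustor solver: small interface pressure loss
    return 0.95 * p_compressor_exit

def couple(p_guess=20.0, tol=1e-8, max_iter=100):
    """Exchange interface-plane boundary conditions until they stop changing."""
    p = p_guess
    for _ in range(max_iter):
        p_new = combustor_inlet_pressure(compressor_exit_pressure(p))
        if abs(p_new - p) < tol:
            return p_new
        p = p_new
    return p
```

Real coupled simulations exchange full flow profiles over the interface plane and may need under-relaxation for stability, but the convergence loop has this shape.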

  8. Turbulence dissipation challenge: particle-in-cell simulations

    NASA Astrophysics Data System (ADS)

    Roytershteyn, V.; Karimabadi, H.; Omelchenko, Y.; Germaschewski, K.

    2015-12-01

    We discuss the application of three particle-in-cell (PIC) codes to problems relevant to the turbulence dissipation challenge. VPIC is a fully kinetic code extensively used to study a variety of diverse problems ranging from laboratory plasmas to astrophysics. PSC is a flexible fully kinetic code offering a variety of algorithms that can be advantageous to turbulence simulations, including high order particle shapes, dynamic load balancing, and the ability to run efficiently on Graphics Processing Units (GPUs). Finally, HYPERS is a novel hybrid (kinetic ions+fluid electrons) code, which utilizes asynchronous time advance and a number of other advanced algorithms. We present examples drawn both from large-scale turbulence simulations and from the test problems outlined by the turbulence dissipation challenge. Special attention is paid to such issues as the small-scale intermittency of inertial range turbulence, mode content of the sub-proton range of scales, the formation of electron-scale current sheets and the role of magnetic reconnection, as well as numerical challenges of applying PIC codes to simulations of astrophysical turbulence.

  9. A real-time simulator of a turbofan engine

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan S.; Delaat, John C.; Merrill, Walter C.

    1989-01-01

    A real-time digital simulator of a Pratt and Whitney F100 engine has been developed for real-time code verification and for actuator diagnosis during full-scale engine testing. This self-contained unit can operate in an open-loop stand-alone mode or as part of a closed-loop control system. It can also be used for control system design and development. Tests conducted in conjunction with the NASA Advanced Detection, Isolation, and Accommodation program show that the simulator is a valuable tool for real-time code verification and as a real-time actuator simulator for actuator fault diagnosis. Although currently a small perturbation model, advances in microprocessor hardware should allow the simulator to evolve into a real-time, full-envelope, full engine simulation.

  10. Coupled field effects in BWR stability simulations using SIMULATE-3K

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borkowski, J.; Smith, K.; Hagrman, D.

    1996-12-31

    The SIMULATE-3K code is the transient analysis version of the Studsvik advanced nodal reactor analysis code, SIMULATE-3. Recent developments have focused on further broadening the range of transient applications by refinement of core thermal-hydraulic models and on comparison with boiling water reactor (BWR) stability measurements performed at Ringhals unit 1, during the startups of cycles 14 through 17.

  11. Science based integrated approach to advanced nuclear fuel development - integrated multi-scale multi-physics hierarchical modeling and simulation framework Part III: cladding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tome, Carlos N; Caro, J A; Lebensohn, R A

    2010-01-01

    Advancing the performance of Light Water Reactors, Advanced Nuclear Fuel Cycles, and Advanced Reactors, such as the Next Generation Nuclear Power Plants, requires enhancing our fundamental understanding of fuel and materials behavior under irradiation. The capability to accurately model the nuclear fuel systems to develop predictive tools is critical. Not only are fabrication and performance models needed to understand specific aspects of the nuclear fuel, fully coupled fuel simulation codes are required to achieve licensing of specific nuclear fuel designs for operation. The backbone of these codes, models, and simulations is a fundamental understanding and predictive capability for simulating the phase and microstructural behavior of the nuclear fuel system materials and matrices. In this paper we review the current status of the advanced modeling and simulation of nuclear reactor cladding, with emphasis on what is available and what is to be developed in each scale of the project, how we propose to pass information from one scale to the next, and what experimental information is required for benchmarking and advancing the modeling at each scale level.

  12. Advanced Power Electronic Interfaces for Distributed Energy Systems, Part 2: Modeling, Development, and Experimental Evaluation of Advanced Control Functions for Single-Phase Utility-Connected Inverter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chakraborty, S.; Kroposki, B.; Kramer, W.

    Integrating renewable energy and distributed generations into the Smart Grid architecture requires power electronics (PE) for energy conversion. The key to reaching successful Smart Grid implementation is to develop interoperable, intelligent, and advanced PE technology that improves and accelerates the use of distributed energy resource systems. This report describes the simulation, design, and testing of a single-phase DC-to-AC inverter developed to operate in both islanded and utility-connected mode. It provides results on both the simulations and the experiments conducted, demonstrating the ability of the inverter to provide advanced control functions such as power flow and VAR/voltage regulation. This report also analyzes two different techniques used for digital signal processor (DSP) code generation. Initially, the DSP code was written in C programming language using Texas Instruments' Code Composer Studio. In a later stage of the research, the Simulink DSP toolbox was used to self-generate code for the DSP. The successful tests using Simulink self-generated DSP codes show promise for fast prototyping of PE controls.

  13. MHD code using multi graphical processing units: SMAUG+

    NASA Astrophysics Data System (ADS)

    Gyenge, N.; Griffiths, M. K.; Erdélyi, R.

    2018-01-01

    This paper introduces the Sheffield Magnetohydrodynamics Algorithm Using GPUs (SMAUG+), an advanced numerical code for solving magnetohydrodynamic (MHD) problems, using multi-GPU systems. Multi-GPU systems facilitate the development of accelerated codes and enable us to investigate larger model sizes and/or more detailed computational domain resolutions. This is a significant advancement over the parent single-GPU MHD code, SMAUG (Griffiths et al., 2015). Here, we demonstrate the validity of the SMAUG+ code, describe the parallelisation techniques and investigate performance benchmarks. The initial configuration of the Orszag-Tang vortex simulations is distributed among 4, 16, 64 and 100 GPUs. Furthermore, different simulation box resolutions are applied: 1000 × 1000, 2044 × 2044, 4000 × 4000 and 8000 × 8000. We also tested the code with the Brio-Wu shock tube simulations with a model size of 800, employing up to 10 GPUs. Based on the test results, we observed speed-ups and slow-downs, depending on the granularity and the communication overhead of certain parallel tasks. The main aim of the code development is to provide a massively parallel code without the memory limitation of a single GPU. By using our code, the applied model size could be significantly increased. We demonstrate that we are able to successfully compute numerically valid and large 2D MHD problems.
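The granularity effect noted above (speed-ups or slow-downs as GPUs are added) can be estimated from the halo-to-interior cell ratio of each subdomain. A back-of-the-envelope sketch, assuming a square decomposition with a one-cell halo; this is purely illustrative, not SMAUG+'s actual exchange scheme:

```python
def halo_ratio(n, p_per_side, halo_width=1):
    """Ratio of halo (ghost) cells to owned cells when an n x n grid is
    split into p_per_side x p_per_side square subdomains (corners ignored)."""
    local = n // p_per_side            # local edge length per GPU
    owned = local * local
    ghost = 4 * local * halo_width     # one ghost strip per edge
    return ghost / owned

# more GPUs -> smaller subdomains -> relatively more communication
ratios = {g * g: halo_ratio(1000, g) for g in (2, 4, 8, 10)}
```

For a fixed 1000 × 1000 grid the ratio grows fivefold between 4 and 100 GPUs, which is one way communication overhead can erode strong-scaling speed-up.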

  14. NEAMS Update. Quarterly Report for October - December 2011.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bradley, K.

    2012-02-16

    The Advanced Modeling and Simulation Office within the DOE Office of Nuclear Energy (NE) has been charged with revolutionizing the design tools used to build nuclear power plants during the next 10 years. To accomplish this, the DOE has brought together the national laboratories, U.S. universities, and the nuclear energy industry to establish the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Program. The mission of NEAMS is to modernize computer modeling of nuclear energy systems and improve the fidelity and validity of modeling results using contemporary software environments and high-performance computers. NEAMS will create a set of engineering-level codes aimed at designing and analyzing the performance and safety of nuclear power plants and reactor fuels. The truly predictive nature of these codes will be achieved by modeling the governing phenomena at the spatial and temporal scales that dominate the behavior. These codes will be executed within a simulation environment that orchestrates code integration with respect to spatial meshing, computational resources, and execution to give the user a common 'look and feel' for setting up problems and displaying results. NEAMS is building upon a suite of existing simulation tools, including those developed by the federal Scientific Discovery through Advanced Computing and Advanced Simulation and Computing programs. NEAMS also draws upon existing simulation tools for materials and nuclear systems, although many of these are limited in terms of scale, applicability, and portability (their ability to be integrated into contemporary software and hardware architectures). NEAMS investments have directly and indirectly supported additional NE research and development programs, including those devoted to waste repositories, safeguarded separations systems, and long-term storage of used nuclear fuel. NEAMS is organized into two broad efforts, each comprising four elements. 
The quarterly highlights for October-December 2011 are: (1) Version 1.0 of AMP, the fuel assembly performance code, was tested on the JAGUAR supercomputer and released on November 1, 2011, a detailed discussion of this new simulation tool is given; (2) A coolant sub-channel model and a preliminary UO₂ smeared-cracking model were implemented in BISON, the single-pin fuel code, more information on how these models were developed and benchmarked is given; (3) The Object Kinetic Monte Carlo model was implemented to account for nucleation events in meso-scale simulations and a discussion of the significance of this advance is given; (4) The SHARP neutronics module, PROTEUS, was expanded to be applicable to all types of reactors, and a discussion of the importance of PROTEUS is given; (5) A plan has been finalized for integrating the high-fidelity, three-dimensional reactor code SHARP with both the systems-level code RELAP7 and the fuel assembly code AMP. This is a new initiative; (6) Work began to evaluate the applicability of AMP to the problem of dry storage of used fuel and to define a relevant problem to test the applicability; (7) A code to obtain phonon spectra from the force-constant matrix for a crystalline lattice has been completed. This important bridge between subcontinuum and continuum phenomena is discussed; (8) Benchmarking was begun on the meso-scale, finite-element fuels code MARMOT to validate its new variable splitting algorithm; (9) A very computationally demanding simulation of diffusion-driven nucleation of new microstructural features has been completed. 
An explanation of the difficulty of this simulation is given; (10) Experiments were conducted with deformed steel to validate a crystal plasticity finite-element code for body-centered cubic iron; (11) The Capability Transfer Roadmap was completed and published as an internal laboratory technical report; (12) The AMP fuel assembly code input generator was integrated into the NEAMS Integrated Computational Environment (NiCE). More details on the planned NEAMS computing environment are given; and (13) The NEAMS program website (neams.energy.gov) is nearly ready to launch.

  15. Numerical Propulsion System Simulation: A Common Tool for Aerospace Propulsion Being Developed

    NASA Technical Reports Server (NTRS)

    Follen, Gregory J.; Naiman, Cynthia G.

    2001-01-01

    The NASA Glenn Research Center is developing an advanced multidisciplinary analysis environment for aerospace propulsion systems called the Numerical Propulsion System Simulation (NPSS). This simulation is initially being used to support aeropropulsion in the analysis and design of aircraft engines. NPSS provides increased flexibility for the user, which reduces the total development time and cost. It is currently being extended to support the Aviation Safety Program and Advanced Space Transportation. NPSS focuses on the integration of multiple disciplines such as aerodynamics, structure, and heat transfer with numerical zooming on component codes. Zooming is the coupling of analyses at various levels of detail. NPSS development includes using the Common Object Request Broker Architecture (CORBA) in the NPSS Developer's Kit to facilitate collaborative engineering. The NPSS Developer's Kit will provide the tools to develop custom components and to use the CORBA capability for zooming to higher fidelity codes, coupling to multidiscipline codes, transmitting secure data, and distributing simulations across different platforms. These powerful capabilities will extend NPSS from a zero-dimensional simulation tool to a multifidelity, multidiscipline system-level simulation tool for the full life cycle of an engine.

  16. OSIRIS - an object-oriented parallel 3D PIC code for modeling laser and particle beam-plasma interaction

    NASA Astrophysics Data System (ADS)

    Hemker, Roy

    1999-11-01

    The advances in computational speed make it now possible to do full 3D PIC simulations of laser plasma and beam plasma interactions, but at the same time the increased complexity of these problems makes it necessary to apply modern approaches like object oriented programming to the development of simulation codes. We report here on our progress in developing an object oriented parallel 3D PIC code using Fortran 90. In its current state the code contains algorithms for 1D, 2D, and 3D simulations in Cartesian coordinates and for 2D cylindrically-symmetric geometry. For all of these algorithms the code allows for a moving simulation window and arbitrary domain decomposition for any number of dimensions. Recent 3D simulation results on the propagation of intense laser and electron beams through plasmas will be presented.
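An "arbitrary domain decomposition for any number of dimensions" is typically built by factoring the process count into a balanced Cartesian process grid, in the spirit of MPI_Dims_create. A hypothetical sketch (not OSIRIS's actual implementation):

```python
def balanced_dims(nprocs, ndims):
    """Factor nprocs into ndims dimensions as evenly as possible,
    in the spirit of MPI_Dims_create (illustrative only)."""
    # collect the prime factors of nprocs
    factors, n, f = [], nprocs, 2
    while f * f <= n:
        while n % f == 0:
            factors.append(f)
            n //= f
        f += 1
    if n > 1:
        factors.append(n)
    # assign the largest factors to the currently smallest dimension
    dims = [1] * ndims
    for f in sorted(factors, reverse=True):
        dims[dims.index(min(dims))] *= f
    return sorted(dims, reverse=True)

# e.g. 12 processes over a 3D Cartesian grid -> a 3 x 2 x 2 process grid
grid = balanced_dims(12, 3)
```

Each process then owns the slab of the simulation window indexed by its coordinates in this grid, and the moving window shifts those slabs in lockstep.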

  17. Multi-dimensional free-electron laser simulation codes : a comparison study.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biedron, S. G.; Chae, Y. C.; Dejus, R. J.

    A self-amplified spontaneous emission (SASE) free-electron laser (FEL) is under construction at the Advanced Photon Source (APS). Five FEL simulation codes were used in the design phase: GENESIS, GINGER, MEDUSA, RON, and TDA3D. Initial comparisons between each of these independent formulations show good agreement for the parameters of the APS SASE FEL.

  18. Multi-Dimensional Free-Electron Laser Simulation Codes: A Comparison Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nuhn, Heinz-Dieter

    A self-amplified spontaneous emission (SASE) free-electron laser (FEL) is under construction at the Advanced Photon Source (APS). Five FEL simulation codes were used in the design phase: GENESIS, GINGER, MEDUSA, RON, and TDA3D. Initial comparisons between each of these independent formulations show good agreement for the parameters of the APS SASE FEL.

  19. Development of the V4.2m5 and V5.0m0 Multigroup Cross Section Libraries for MPACT for PWR and BWR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Kang Seog; Clarno, Kevin T.; Gentry, Cole

    2017-03-01

    The MPACT neutronics module of the Consortium for Advanced Simulation of Light Water Reactors (CASL) core simulator is a 3-D whole core transport code being developed for the CASL toolset, Virtual Environment for Reactor Analysis (VERA). Key characteristics of the MPACT code include (1) a subgroup method for resonance self-shielding and (2) a whole-core transport solver with a 2-D/1-D synthesis method. The MPACT code requires a cross section library to support all of its core simulation capabilities; this library is the component with the greatest influence on simulation accuracy.

  20. Development and application of numerical techniques for general-relativistic magnetohydrodynamics simulations of black hole accretion

    NASA Astrophysics Data System (ADS)

    White, Christopher Joseph

    We describe the implementation of sophisticated numerical techniques for general-relativistic magnetohydrodynamics simulations in the Athena++ code framework. Improvements over many existing codes include the use of advanced Riemann solvers and of staggered-mesh constrained transport. Combined with considerations for computational performance and parallel scalability, these allow us to investigate black hole accretion flows with unprecedented accuracy. The capability of the code is demonstrated by exploring magnetically arrested disks.
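The "advanced Riemann solvers" mentioned above compute interface fluxes from left and right states. Even in the general-relativistic case they build on the same structure as the classic HLL flux, sketched here for the non-relativistic 1D Euler equations (illustrative only; the solvers in a GRMHD code are relativistic and more elaborate, e.g. HLLC/HLLD variants):

```python
import math

GAMMA = 1.4  # ratio of specific heats (ideal diatomic gas)

def euler_flux(u):
    """Physical flux of the 1D Euler equations; u = (rho, rho*v, E)."""
    rho, mom, ene = u
    v = mom / rho
    p = (GAMMA - 1.0) * (ene - 0.5 * rho * v * v)
    return (mom, mom * v + p, v * (ene + p))

def wave_speed_bounds(u):
    """Fastest left/right-going signal speeds: v -/+ sound speed."""
    rho, mom, ene = u
    v = mom / rho
    p = (GAMMA - 1.0) * (ene - 0.5 * rho * v * v)
    c = math.sqrt(GAMMA * p / rho)
    return v - c, v + c

def hll_flux(ul, ur):
    """HLL approximate Riemann flux between left/right states ul, ur."""
    sl = min(wave_speed_bounds(ul)[0], wave_speed_bounds(ur)[0])
    sr = max(wave_speed_bounds(ul)[1], wave_speed_bounds(ur)[1])
    fl, fr = euler_flux(ul), euler_flux(ur)
    if sl >= 0.0:          # all waves move right: upwind on the left state
        return fl
    if sr <= 0.0:          # all waves move left: upwind on the right state
        return fr
    return tuple((sr * a - sl * b + sl * sr * (d - c)) / (sr - sl)
                 for a, b, c, d in zip(fl, fr, ul, ur))
```

A basic consistency property is that identical left and right states recover the physical flux, which is a useful sanity check when implementing any Riemann solver.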

  1. SAM Theory Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Rui

    The System Analysis Module (SAM) is an advanced and modern system analysis tool being developed at Argonne National Laboratory under the U.S. DOE Office of Nuclear Energy’s Nuclear Energy Advanced Modeling and Simulation (NEAMS) program. SAM development aims for advances in physical modeling, numerical methods, and software engineering to enhance its user experience and usability for reactor transient analyses. To facilitate the code development, SAM utilizes an object-oriented application framework (MOOSE), and its underlying meshing and finite-element library (libMesh) and linear and non-linear solvers (PETSc), to leverage modern advanced software environments and numerical methods. SAM focuses on modeling advanced reactor concepts such as SFRs (sodium fast reactors), LFRs (lead-cooled fast reactors), and FHRs (fluoride-salt-cooled high temperature reactors) or MSRs (molten salt reactors). These advanced concepts are distinguished from light-water reactors in their use of single-phase, low-pressure, high-temperature, and low Prandtl number (sodium and lead) coolants. As the code is newly developed, the initial effort has focused on modeling and simulation capabilities of heat transfer and single-phase fluid dynamics responses in Sodium-cooled Fast Reactor (SFR) systems. The system-level simulation capabilities of fluid flow and heat transfer in general engineering systems and typical SFRs have been verified and validated. This document provides the theoretical and technical basis of the code to help users understand the underlying physical models (such as governing equations, closure models, and component models), system modeling approaches, numerical discretization and solution methods, and the overall capabilities in SAM. As the code is still under ongoing development, this SAM Theory Manual will be updated periodically to keep it consistent with the state of the development.

  2. Optimization and parallelization of the thermal–hydraulic subchannel code CTF for high-fidelity multi-physics applications

    DOE PAGES

    Salko, Robert K.; Schmidt, Rodney C.; Avramova, Maria N.

    2014-11-23

    This study describes major improvements to the computational infrastructure of the CTF subchannel code so that full-core, pincell-resolved (i.e., one computational subchannel per real bundle flow channel) simulations can now be performed in much shorter run-times, either in stand-alone mode or as part of coupled-code multi-physics calculations. These improvements support the goals of the Department of Energy Consortium for Advanced Simulation of Light Water Reactors (CASL) Energy Innovation Hub to develop high fidelity multi-physics simulation tools for nuclear energy design and analysis.

  3. Large Eddy Simulations and Turbulence Modeling for Film Cooling

    NASA Technical Reports Server (NTRS)

    Acharya, Sumanta

    1999-01-01

    The objective of the research is to perform Direct Numerical Simulations (DNS) and Large Eddy Simulations (LES) for the film cooling process, and to evaluate and improve advanced forms of the two-equation turbulence models for turbine blade surface flow analysis. The DNS/LES were used to resolve the large eddies within the flow field near the coolant jet location. The work involved code development and application of the codes developed to film cooling problems. Five different codes were developed and utilized to perform this research. This report presents a summary of the development of the codes and their applications to analyze the turbulence properties at locations near coolant injection holes.

  4. Main steam line break accident simulation of APR1400 using the model of ATLAS facility

    NASA Astrophysics Data System (ADS)

    Ekariansyah, A. S.; Deswandri; Sunaryo, Geni R.

    2018-02-01

    A main steam line break simulation for the APR1400, an advanced PWR design, has been performed using the RELAP5 code. The simulation was conducted in a model of a thermal-hydraulic test facility called ATLAS, which represents a scaled-down facility of the APR1400 design. The main steam line break event is described in an open-access safety report document, whose initial conditions and assumptions for the analysis were utilized in performing the simulation and analysis of the selected parameters. The objective of this work was to conduct benchmark activities by comparing the simulation results of the CESEC-III code, a conservative-approach code, with the results of RELAP5, a best-estimate code. Based on the simulation results, a general similarity in the behavior of selected parameters was observed between the two codes. However, the degree of accuracy still needs further research and analysis through comparison with other best-estimate codes. Uncertainties arising from the ATLAS model should be minimized by taking into account much more specific data in developing the APR1400 model.

  5. Recent advances in lossless coding techniques

    NASA Astrophysics Data System (ADS)

    Yovanof, Gregory S.

    Current lossless techniques are reviewed with reference to both sequential data files and still images. Two major groups of sequential algorithms, dictionary and statistical techniques, are discussed. In particular, attention is given to Lempel-Ziv coding, Huffman coding, and arithmetic coding. The subject of lossless compression of imagery is briefly discussed. Finally, examples of practical implementations of lossless algorithms and some simulation results are given.
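Of the statistical techniques mentioned, Huffman coding is the easiest to sketch: build a binary tree by repeatedly merging the two least-frequent symbols, then read each codeword off the root-to-leaf path. A compact illustration:

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Return a prefix-free code table {symbol: bitstring} for `text`."""
    freq = Counter(text)
    if len(freq) == 1:                     # degenerate single-symbol input
        return {next(iter(freq)): "0"}
    # heap entries: (frequency, tie-breaker, tree); a tree is a symbol or a pair
    heap = [(n, i, sym) for i, (sym, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:                   # merge the two least-frequent trees
        n1, _, t1 = heapq.heappop(heap)
        n2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (n1 + n2, count, (t1, t2)))
        count += 1
    codes = {}
    def walk(tree, prefix):                # read codewords off root-to-leaf paths
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            codes[tree] = prefix
    walk(heap[0][2], "")
    return codes

def encode(text, codes):
    return "".join(codes[c] for c in text)

def decode(bits, codes):
    inv = {v: k for k, v in codes.items()}
    out, cur = [], ""
    for b in bits:                         # prefix-freeness makes this unambiguous
        cur += b
        if cur in inv:
            out.append(inv[cur])
            cur = ""
    return "".join(out)
```

Frequent symbols receive short codewords, so skewed inputs compress well; arithmetic coding improves on this by allowing fractional bits per symbol.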

  6. Throughput Optimization Via Adaptive MIMO Communications

    DTIC Science & Technology

    2006-05-30

    End-to-end MATLAB packet simulation platform. * Low-density parity-check code (LDPCC). * Field trials with Silvus DSP MIMO testbed. * High mobility...incorporate advanced LDPC (low-density parity-check) codes. Realizing that the power of LDPC codes comes at the price of decoder complexity, we also...Channel Coding: Binary Convolutional Code or LDPC. Packet Length: 0 to 2^16-1 bytes. Coding Rate: 1/2, 2/3, 3/4, 5/6. MIMO Channel Training Length: 0 to 4 symbols.
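An LDPC code is defined by a sparse parity-check matrix H; a received word c is a valid codeword iff H·cᵀ = 0 over GF(2). A toy membership check follows; real LDPC matrices are large and sparse, and the small dense H below is actually the (7,4) Hamming code, used here only to show the mechanics:

```python
# rows of a small parity-check matrix over GF(2)
# (the (7,4) Hamming code, standing in for a real sparse LDPC matrix)
H = [
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
]

def syndrome(word):
    """H * word^T over GF(2); an all-zero syndrome means a valid codeword."""
    return [sum(h * c for h, c in zip(row, word)) % 2 for row in H]

def is_codeword(word):
    return not any(syndrome(word))
```

Decoding, where the complexity mentioned in the record lives, iteratively passes messages between bit and check nodes until the syndrome is all-zero.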

  7. Simulation of EAST vertical displacement events by tokamak simulation code

    NASA Astrophysics Data System (ADS)

    Qiu, Qinglai; Xiao, Bingjia; Guo, Yong; Liu, Lei; Xing, Zhe; Humphreys, D. A.

    2016-10-01

    Vertical instability is a potentially serious hazard for elongated plasmas. In this paper, the tokamak simulation code (TSC) is used to simulate vertical displacement events (VDE) on the experimental advanced superconducting tokamak (EAST). Key parameters from simulations, including plasma current, plasma shape and position, flux contours and magnetic measurements match experimental data well. The growth rates simulated by TSC are in good agreement with TokSys results. In addition to modeling the free drift, an EAST fast vertical control model enables TSC to simulate the course of VDE recovery. The trajectories of the plasma current center and control currents on internal coils (IC) fit experimental data well.

  8. Simulating Coupling Complexity in Space Plasmas: First Results from a new code

    NASA Astrophysics Data System (ADS)

    Kryukov, I.; Zank, G. P.; Pogorelov, N. V.; Raeder, J.; Ciardo, G.; Florinski, V. A.; Heerikhuisen, J.; Li, G.; Petrini, F.; Shematovich, V. I.; Winske, D.; Shaikh, D.; Webb, G. M.; Yee, H. M.

    2005-12-01

    The development of codes that embrace 'coupling complexity' via the self-consistent incorporation of multiple physical scales and multiple physical processes in models has been identified by the NRC Decadal Survey in Solar and Space Physics as a crucial and necessary development in simulation/modeling technology for the coming decade. The National Science Foundation, through its Information Technology Research (ITR) Program, is supporting our efforts to develop a new class of computational code for plasmas and neutral gases that integrates multiple scales and multiple physical processes and descriptions. We are developing a highly modular, parallelized, scalable code that incorporates multiple scales by synthesizing 3 simulation technologies: 1) computational fluid dynamics (hydrodynamics or magnetohydrodynamics, MHD) for the large-scale plasma; 2) direct Monte Carlo simulation of atoms/neutral gas; and 3) transport code solvers to model highly energetic particle distributions. We are constructing the code so that a fourth simulation technology, hybrid simulations for microscale structures and particle distributions, can be incorporated in future work, but for the present, this aspect will be addressed at a test-particle level. This synthesis will provide a computational tool that will greatly advance our understanding of the physics of neutral and charged gases. Besides making major advances in basic plasma physics and neutral gas problems, this project will address 3 Grand Challenge space physics problems that reflect our research interests: 1) To develop a temporal global heliospheric model which includes the interaction of solar and interstellar plasma with neutral populations (hydrogen, helium, etc., and dust), test-particle kinetic pickup ion acceleration at the termination shock, anomalous cosmic ray production, interaction with galactic cosmic rays, while incorporating the time variability of the solar wind and the solar cycle. 
2) To develop a coronal mass ejection and interplanetary shock propagation model for the inner and outer heliosphere, including, at a test-particle level, wave-particle interactions and particle acceleration at traveling shock waves and compression regions. 3) To develop an advanced Geospace General Circulation Model (GGCM) capable of realistically modeling space weather events, in particular the interaction with CMEs and geomagnetic storms. Furthermore, by implementing scalable run-time supports and sophisticated off- and on-line prediction algorithms, we anticipate important advances in the development of automatic and intelligent system software to optimize a wide variety of 'embedded' computations on parallel computers. Finally, public domain MHD and hydrodynamic codes had a transforming effect on space and astrophysics. We expect that our new generation, open source, public domain multi-scale code will have a similar transformational effect in a variety of disciplines, opening up new classes of problems to physicists and engineers alike.

  9. FANS Simulation of Propeller Wash at Navy Harbors (ESTEP Project ER-201031)

    DTIC Science & Technology

    2016-08-01

    this study, the Finite-Analytic Navier–Stokes code was employed to solve the Reynolds-Averaged Navier–Stokes equations in conjunction with advanced...site-specific harbor configurations, it is desirable to perform propeller wash study by solving the Navier–Stokes equations directly in conjunction ...Analytic Navier–Stokes code was employed to solve the Reynolds-Averaged Navier–Stokes equations in conjunction with advanced near-wall turbulence

  10. Wakefield Computations for the CLIC PETS using the Parallel Finite Element Time-Domain Code T3P

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Candel, A.; Kabel, A.; Lee, L.

    In recent years, SLAC's Advanced Computations Department (ACD) has developed the high-performance parallel 3D electromagnetic time-domain code, T3P, for simulations of wakefields and transients in complex accelerator structures. T3P is based on advanced higher-order Finite Element methods on unstructured grids with quadratic surface approximation. Optimized for large-scale parallel processing on leadership supercomputing facilities, T3P allows simulations of realistic 3D structures with unprecedented accuracy, aiding the design of the next generation of accelerator facilities. Applications to the Compact Linear Collider (CLIC) Power Extraction and Transfer Structure (PETS) are presented.

  11. Modeling of ion orbit loss and intrinsic toroidal rotation with the COGENT code

    NASA Astrophysics Data System (ADS)

    Dorf, M.; Dorr, M.; Cohen, R.; Rognlien, T.; Hittinger, J.

    2014-10-01

    We discuss recent advances in cross-separatrix neoclassical transport simulations with COGENT, a continuum gyro-kinetic code being developed by the Edge Simulation Laboratory (ESL) collaboration. The COGENT code models the axisymmetric transport properties of edge plasmas including the effects of nonlinear (Fokker-Planck) collisions and a self-consistent electrostatic potential. Our recent work has focused on studies of ion orbit loss and the associated toroidal rotation driven by this mechanism. The results of the COGENT simulations are discussed and analyzed for the parameters of the DIII-D experiment. Work performed for USDOE at LLNL under Contract DE-AC52-07NA27344.

  12. Advanced simulation of mixed-material erosion/evolution and application to low and high-Z containing plasma facing components

    NASA Astrophysics Data System (ADS)

    Brooks, J. N.; Hassanein, A.; Sizyuk, T.

    2013-07-01

    Plasma interactions with mixed-material surfaces are being analyzed using advanced modeling of time-dependent surface evolution/erosion. Simulations use the REDEP/WBC erosion/redeposition code package coupled to the ITMC-DYN mixed-material formation/response code of the HEIGHTS package, with plasma parameter input from codes and data. We report here on analysis for a DIII-D Mo/C containing tokamak divertor. A DIII-D/DiMES probe experiment simulation predicts that sputtered molybdenum from a 1 cm diameter central spot quickly saturates (˜4 s) in the 5 cm diameter surrounding carbon probe surface, with subsequent re-sputtering and transport to off-probe divertor regions, and with high (˜50%) redeposition on the Mo spot. Predicted Mo content in the carbon agrees well with post-exposure probe data. We discuss implications and mixed-material analysis issues for Be/W mixing at the ITER outer divertor, and Li, C, Mo mixing at an NSTX divertor.

  13. Simulator platform for fast reactor operation and safety technology demonstration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vilim, R. B.; Park, Y. S.; Grandy, C.

    2012-07-30

    A simulator platform for visualization and demonstration of innovative concepts in fast reactor technology is described. The objective is to make more accessible the workings of fast reactor technology innovations and to do so in a human factors environment that uses state-of-the-art visualization technologies. In this work the computer codes in use at Argonne National Laboratory (ANL) for the design of fast reactor systems are being integrated to run on this platform. This includes linking reactor systems codes with mechanical structures codes and using advanced graphics to depict the thermo-hydraulic-structure interactions that give rise to an inherently safe response to upsets. It also includes visualization of mechanical systems operation including advanced concepts that make use of robotics for operations, in-service inspection, and maintenance.

  14. Edge-relevant plasma simulations with the continuum code COGENT

    NASA Astrophysics Data System (ADS)

    Dorf, M.; Dorr, M.; Ghosh, D.; Hittinger, J.; Rognlien, T.; Cohen, R.; Lee, W.; Schwartz, P.

    2016-10-01

    We describe recent advances in cross-separatrix and other edge-relevant plasma simulations with COGENT, a continuum gyro-kinetic code being developed by the Edge Simulation Laboratory (ESL) collaboration. The distinguishing feature of the COGENT code is its high-order finite-volume discretization methods, which employ arbitrary mapped multiblock grid technology (nearly field-aligned on blocks) to handle the complexity of tokamak divertor geometry with high accuracy. This paper discusses the 4D (axisymmetric) electrostatic version of the code, and the presented topics include: (a) initial simulations with kinetic electrons and development of reduced fluid models; (b) development and application of implicit-explicit (IMEX) time integration schemes; and (c) conservative modeling of drift-waves and the universal instability. Work performed for USDOE, at LLNL under contract DE-AC52-07NA27344 and at LBNL under contract DE-AC02-05CH11231.

  15. Stroke code simulation benefits advanced practice providers similar to neurology residents.

    PubMed

    Khan, Muhib; Baird, Grayson L; Price, Theresa; Tubergen, Tricia; Kaskar, Omran; De Jesus, Michelle; Zachariah, Joseph; Oostema, Adam; Scurek, Raymond; Coleman, Robert R; Sherman, Wendy; Hingtgen, Cynthia; Abdelhak, Tamer; Smith, Brien; Silver, Brian

    2018-04-01

    Advanced practice providers (APPs) are important members of stroke teams. Stroke code simulations offer valuable experience in the evaluation and treatment of stroke patients without compromising patient care. We hypothesized that simulation training would increase APP confidence, comfort level, and preparedness in leading a stroke code similar to neurology residents. This is a prospective quasi-experimental, pretest/posttest study. Nine APPs and nine neurology residents participated in 3 standardized simulated cases to determine need for IV thrombolysis, thrombectomy, and blood pressure management for intracerebral hemorrhage. Emergency medicine physicians and neurologists were preceptors. APPs and residents completed a survey before and after the simulation. Generalized mixed modeling assuming a binomial distribution was used to evaluate change. On a 5-point Likert scale (1 = strongly disagree and 5 = strongly agree), confidence in leading a stroke code increased from 2.4 to 4.2 (p < 0.05) among APPs. APPs reported improved comfort level in rapidly assessing a stroke patient for thrombolytics (3.1-4.2; p < 0.05), making the decision to give thrombolytics (2.8-4.2; p < 0.05), and assessing a patient for embolectomy (2.4-4.0; p < 0.05). There was no difference in the improvement observed in all the survey questions as compared to neurology residents. Simulation training is a beneficial part of medical education for APPs and should be considered in addition to traditional didactics and clinical training. Further research is needed to determine whether simulation education of APPs results in improved treatment times and outcomes of acute stroke patients.

  16. Acceleration of Monte Carlo simulation of photon migration in complex heterogeneous media using Intel many-integrated core architecture.

    PubMed

    Gorshkov, Anton V; Kirillin, Mikhail Yu

    2015-08-01

    Over the past two decades, the Monte Carlo technique has become a gold standard in simulation of light propagation in turbid media, including biotissues. Technological solutions provide further advances of this technique. The Intel Xeon Phi coprocessor is a new type of accelerator for highly parallel general purpose computing, which allows execution of a wide range of applications without substantial code modification. We present a technical approach of porting our previously developed Monte Carlo (MC) code for simulation of light transport in tissues to the Intel Xeon Phi coprocessor. We show that employing the accelerator allows reducing the computational time of MC simulations, with speed-ups comparable to those obtained on GPUs. We demonstrate the performance of the developed code for simulation of light transport in the human head and determination of the measurement volume in near-infrared spectroscopy brain sensing.
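
    The per-photon kernel that makes this technique map so well onto accelerators can be sketched in a few lines. The code below is a schematic random-walk loop for an infinite homogeneous turbid medium, not the authors' code: free paths follow Beer-Lambert statistics and absorption is handled by attenuating the photon weight with the single-scattering albedo. The optical coefficients are invented, tissue-like values.

```python
import math
import random

def mean_weighted_path(n_photons, mu_a, mu_s, seed=1):
    """Schematic Monte Carlo photon-migration kernel for an infinite
    homogeneous turbid medium. Real codes additionally track 3-D positions,
    scattering directions, boundaries, and heterogeneous properties."""
    rng = random.Random(seed)
    mu_t = mu_a + mu_s                 # total interaction coefficient
    albedo = mu_s / mu_t               # fraction of weight surviving an event
    total = 0.0
    for _ in range(n_photons):
        weight = 1.0
        while weight > 1e-4:           # terminate negligible-weight photons
            step = -math.log(rng.random()) / mu_t   # sampled free path
            total += weight * step
            weight *= albedo           # deposit a fraction of the weight
    return total / n_photons

# Invented tissue-like coefficients in mm^-1:
path = mean_weighted_path(2000, mu_a=0.1, mu_s=10.0)
```

    For an infinite medium the mean weighted path length tends to 1/mu_a (here 10 mm), which makes the sketch easy to sanity-check. Each photon history is independent, so the loop is embarrassingly parallel, which is what both GPU and Xeon Phi ports exploit.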

  17. An approach for coupled-code multiphysics core simulations from a common input

    DOE PAGES

    Schmidt, Rodney; Belcourt, Kenneth; Hooper, Russell; ...

    2014-12-10

    This study describes an approach for coupled-code multiphysics reactor core simulations that is being developed by the Virtual Environment for Reactor Applications (VERA) project in the Consortium for Advanced Simulation of Light-Water Reactors (CASL). In this approach a user creates a single problem description, called the “VERAIn” common input file, to define and set up the desired coupled-code reactor core simulation. A preprocessing step accepts the VERAIn file and generates a set of fully consistent input files for the different physics codes being coupled. The problem is then solved using a single-executable coupled-code simulation tool applicable to the problem, which is built using VERA infrastructure software tools and the set of physics codes required for the problem of interest. The approach is demonstrated by performing an eigenvalue and power distribution calculation of a typical three-dimensional 17 × 17 assembly with thermal–hydraulic and fuel temperature feedback. All neutronics aspects of the problem (cross-section calculation, neutron transport, power release) are solved using the Insilico code suite and are fully coupled to a thermal–hydraulic analysis calculated by the Cobra-TF (CTF) code. The single-executable coupled-code (Insilico-CTF) simulation tool is created using several VERA tools, including LIME (Lightweight Integrating Multiphysics Environment for coupling codes), DTK (Data Transfer Kit), Trilinos, and TriBITS. Parallel calculations are performed on the Titan supercomputer at Oak Ridge National Laboratory using 1156 cores, and a synopsis of the solution results and code performance is presented. Finally, ongoing development of this approach is also briefly described.
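
    The single-common-input idea can be sketched in a few lines. The dictionary keys and derived-input layout below are invented for illustration (the actual VERAIn schema is not reproduced here); the point is that every physics code's input is generated from one source, so shared parameters cannot drift out of sync:

```python
# A single common problem description (keys are invented for illustration;
# this is not the actual VERAIn schema).
common = {
    "power_MW": 17.7,
    "inlet_temp_K": 565.0,
    "assembly": "17x17",
    "boron_ppm": 1300.0,
}

def make_neutronics_input(c):
    """Derive a neutronics (transport) input from the common description."""
    return {
        "lattice": c["assembly"],
        "boron_ppm": c["boron_ppm"],
        # Hypothetical initial guess for the coupled iteration:
        "fuel_temp_guess_K": c["inlet_temp_K"] + 300.0,
    }

def make_thermal_hydraulics_input(c):
    """Derive a thermal-hydraulics input from the same common description."""
    return {
        "power_MW": c["power_MW"],
        "inlet_temp_K": c["inlet_temp_K"],
        "geometry": c["assembly"],
    }

neutronics_input = make_neutronics_input(common)
th_input = make_thermal_hydraulics_input(common)
```

    Both generated inputs agree on the shared geometry by construction, which is the consistency guarantee the VERAIn preprocessing step provides for the real physics codes.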

  18. STAR-CCM+ Verification and Validation Plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pointer, William David

    2016-09-30

    The commercial Computational Fluid Dynamics (CFD) code STAR-CCM+ provides general purpose finite volume method solutions for fluid dynamics and energy transport. This document defines plans for verification and validation (V&V) of the base code and models implemented within the code by the Consortium for Advanced Simulation of Light Water Reactors (CASL). The software quality assurance activities described herein are part of the overall software life cycle defined in the CASL Software Quality Assurance (SQA) Plan [Sieger, 2015]. STAR-CCM+ serves as the principal foundation for development of an advanced predictive multi-phase boiling simulation capability within CASL. The CASL Thermal Hydraulics Methods (THM) team develops advanced closure models required to describe the subgrid-resolution behavior of secondary fluids or fluid phases in multiphase boiling flows within the Eulerian-Eulerian framework of the code. These include wall heat partitioning models that describe the formation of vapor on the surface and the forces that define bubble/droplet dynamic motion. The CASL models are implemented as user coding or field functions within the general framework of the code. This report defines procedures and requirements for V&V of the multi-phase CFD capability developed by CASL THM. Results of V&V evaluations will be documented in a separate STAR-CCM+ V&V assessment report. This report is expected to be a living document and will be updated as additional validation cases are identified and adopted as part of the CASL THM V&V suite.

  19. A numerical code for the simulation of non-equilibrium chemically reacting flows on hybrid CPU-GPU clusters

    NASA Astrophysics Data System (ADS)

    Kudryavtsev, Alexey N.; Kashkovsky, Alexander V.; Borisov, Semyon P.; Shershnev, Anton A.

    2017-10-01

    In the present work a computer code RCFS for the numerical simulation of chemically reacting compressible flows on hybrid CPU/GPU supercomputers is developed. It solves the 3D unsteady Euler equations for multispecies chemically reacting flows in general curvilinear coordinates using shock-capturing TVD schemes. Time advancement is carried out using explicit Runge-Kutta TVD schemes. The program implementation uses the CUDA application programming interface to perform GPU computations. Data between GPUs is distributed via a domain decomposition technique. The developed code is verified on a number of test cases including supersonic flow over a cylinder.
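
    The explicit Runge-Kutta TVD time advancement mentioned above can be sketched on a 1-D model problem. Below is the classical third-order Shu-Osher TVD scheme applied to linear advection with first-order upwind differencing, written in plain Python rather than CUDA; it illustrates the time integrator only, not the RCFS code itself:

```python
def upwind_rhs(u, c, dx):
    """Semi-discrete right-hand side for u_t + c u_x = 0 (c > 0) with
    first-order upwind differencing on a periodic domain."""
    n = len(u)
    return [-c * (u[i] - u[i - 1]) / dx for i in range(n)]

def tvd_rk3_step(u, dt, rhs):
    """Third-order TVD (Shu-Osher) Runge-Kutta step: a convex combination
    of forward-Euler substeps, which preserves the TVD property of the
    underlying spatial scheme under the usual CFL restriction."""
    n = len(u)
    r1 = rhs(u)
    u1 = [u[i] + dt * r1[i] for i in range(n)]
    r2 = rhs(u1)
    u2 = [0.75 * u[i] + 0.25 * (u1[i] + dt * r2[i]) for i in range(n)]
    r3 = rhs(u2)
    return [u[i] / 3.0 + 2.0 / 3.0 * (u2[i] + dt * r3[i]) for i in range(n)]
```

    Because every stage is a convex combination of forward-Euler substeps, the total variation of the solution cannot grow, which is the non-oscillatory property shock-capturing schemes rely on near discontinuities.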

  20. Open-source framework for documentation of scientific software written on MATLAB-compatible programming languages

    NASA Astrophysics Data System (ADS)

    Konnik, Mikhail V.; Welsh, James

    2012-09-01

    Numerical simulators for adaptive optics systems have become an essential tool for the research and development of future advanced astronomical instruments. However, the growing code base of a numerical simulator makes it difficult to continue to support the code itself. The problem of adequate documentation of astronomical software for adaptive optics simulators may complicate development, since the documentation must contain up-to-date schemes and mathematical descriptions implemented in the software code. Although most modern programming environments like MATLAB or Octave have built-in documentation abilities, they are often insufficient for the description of a typical adaptive optics simulator code. This paper describes a general cross-platform framework for the documentation of scientific software using open-source tools such as LATEX, Mercurial, Doxygen, and Perl. Using a Perl script that translates MATLAB M-file comments into C-like syntax, one can use Doxygen to generate and update the documentation for the scientific source code. The documentation generated by this framework contains the current code description with mathematical formulas, images, and bibliographical references. A detailed description of the framework components is presented as well as guidelines for the framework deployment. Examples of the code documentation for the scripts and functions of a MATLAB-based adaptive optics simulator are provided.
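
    The record describes a Perl filter that turns MATLAB comments into C-like comments so Doxygen can parse them. A simplified analogue of that idea, written in Python rather than Perl and making no claim about the actual script's behavior, is shown below:

```python
import re

def matlab_comments_to_doxygen(src):
    """Turn MATLAB '%' comment lines into C++-style '///' lines so that a
    C-like Doxygen parser can pick them up. A simplified illustration of
    the filtering idea, not the framework's actual Perl script."""
    out = []
    for line in src.splitlines():
        m = re.match(r"^(\s*)%+\s?(.*)$", line)
        if m:
            # Preserve indentation; replace the '%' marker with '///'
            out.append(f"{m.group(1)}/// {m.group(2)}")
        else:
            out.append(line)
    return "\n".join(out)
```

    Feeding the converted text to Doxygen as an input filter lets '%'-style MATLAB docstrings appear in the generated documentation; per the paper, the real framework additionally carries LATEX formulas, images, and bibliographical references through to the output.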

  1. Four-Dimensional Continuum Gyrokinetic Code: Neoclassical Simulation of Fusion Edge Plasmas

    NASA Astrophysics Data System (ADS)

    Xu, X. Q.

    2005-10-01

    We are developing a continuum gyrokinetic code, TEMPEST, to simulate edge plasmas. Our code represents velocity space via a grid in equilibrium energy and magnetic moment variables, and configuration space via poloidal magnetic flux and poloidal angle. The geometry is that of a fully diverted tokamak (single or double null) and so includes boundary conditions for both closed magnetic flux surfaces and open field lines. The 4-dimensional code includes kinetic electrons and ions, and electrostatic field-solver options, and simulates neoclassical transport. The present implementation is a Method of Lines approach where spatial finite-differences (higher order upwinding) and implicit time advancement are used. We present results of initial verification and validation studies: transition from collisional to collisionless limits of parallel end-loss in the scrape-off layer, self-consistent electric field, and the effect of the real X-point geometry and edge plasma conditions on the standard neoclassical theory, including a comparison of our 4D code with other kinetic neoclassical codes and experiments.
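
    The Method of Lines combination stated above, upwind spatial differencing plus implicit time advancement, can be illustrated on a 1-D advection model problem (not the TEMPEST equations). With first-order upwinding and backward Euler, the implicit system is lower bidiagonal and is solved by a single forward sweep:

```python
def implicit_upwind_step(u, c, dx, dt, inflow=0.0):
    """One backward-Euler step for u_t + c u_x = 0 (c > 0) with first-order
    upwind differencing and an inflow boundary. The implicit system
    (1 + r) u_i - r u_{i-1} = u_i^old, with r = c dt / dx, is lower
    bidiagonal, so a forward sweep solves it exactly. Illustrative 1-D
    model only, not the TEMPEST gyrokinetic equations."""
    r = c * dt / dx
    new = list(u)
    prev = inflow                       # boundary value feeding cell 0
    for i in range(len(u)):
        new[i] = (u[i] + r * prev) / (1.0 + r)
        prev = new[i]
    return new
```

    Backward Euler is unconditionally stable, so the step size is not limited by a CFL condition; that freedom to take large time steps is the usual motivation for implicit time advancement in stiff transport problems like collisional parallel end-loss.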

  2. Nuclear Energy Advanced Modeling and Simulation Waste Integrated Performance and Safety Codes (NEAMS Waste IPSC).

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schultz, Peter Andrew

    The objective of the U.S. Department of Energy Office of Nuclear Energy Advanced Modeling and Simulation Waste Integrated Performance and Safety Codes (NEAMS Waste IPSC) is to provide an integrated suite of computational modeling and simulation (M&S) capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive-waste storage facility or disposal repository. Achieving the objective of modeling the performance of a disposal scenario requires describing processes involved in waste form degradation and radionuclide release at the subcontinuum scale, beginning with mechanistic descriptions of chemical reactions and chemical kinetics at the atomic scale, and upscaling into effective, validated constitutive models for input to high-fidelity continuum scale codes for coupled multiphysics simulations of release and transport. Verification and validation (V&V) is required throughout the system to establish evidence-based metrics for the level of confidence in M&S codes and capabilities, including at the subcontinuum scale and the constitutive models they inform or generate. This report outlines the nature of the V&V challenge at the subcontinuum scale, an approach to incorporate V&V concepts into subcontinuum scale modeling and simulation (M&S), and a plan to incrementally incorporate effective V&V into subcontinuum scale M&S destined for use in the NEAMS Waste IPSC work flow to meet requirements of quantitative confidence in the constitutive models informed by subcontinuum scale phenomena.

  3. Summary Report of Working Group 2: Computation

    NASA Astrophysics Data System (ADS)

    Stoltz, P. H.; Tsung, R. S.

    2009-01-01

    The working group on computation addressed three physics areas: (i) plasma-based accelerators (laser-driven and beam-driven), (ii) high gradient structure-based accelerators, and (iii) electron beam sources and transport [1]. Highlights of the talks in these areas included new models of breakdown on the microscopic scale, new three-dimensional multipacting calculations with both finite difference and finite element codes, and detailed comparisons of new electron gun models with standard models such as PARMELA. The group also addressed two areas of advances in computation: (i) new algorithms, including simulation in a Lorentz-boosted frame that can reduce computation time orders of magnitude, and (ii) new hardware architectures, like graphics processing units and Cell processors that promise dramatic increases in computing power. Highlights of the talks in these areas included results from the first large-scale parallel finite element particle-in-cell (PIC) code, and a many-order-of-magnitude speedup of, and details of porting, the VPIC code to the Roadrunner supercomputer. The working group featured two plenary talks, one by Brian Albright of Los Alamos National Laboratory on the performance of the VPIC code on the Roadrunner supercomputer, and one by David Bruhwiler of Tech-X Corporation on recent advances in computation for advanced accelerators. Highlights of the talk by Albright included the first one-trillion-particle simulations, a sustained performance of 0.3 petaflops, and an eight times speedup of science calculations, including back-scatter in laser-plasma interaction. Highlights of the talk by Bruhwiler included simulations of 10 GeV laser wakefield accelerator stages including external injection, and new developments in electromagnetic simulations of electron guns using finite difference and finite element approaches.

  4. Summary Report of Working Group 2: Computation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stoltz, P. H.; Tsung, R. S.

    2009-01-22

    The working group on computation addressed three physics areas: (i) plasma-based accelerators (laser-driven and beam-driven), (ii) high gradient structure-based accelerators, and (iii) electron beam sources and transport [1]. Highlights of the talks in these areas included new models of breakdown on the microscopic scale, new three-dimensional multipacting calculations with both finite difference and finite element codes, and detailed comparisons of new electron gun models with standard models such as PARMELA. The group also addressed two areas of advances in computation: (i) new algorithms, including simulation in a Lorentz-boosted frame that can reduce computation time orders of magnitude, and (ii) new hardware architectures, like graphics processing units and Cell processors that promise dramatic increases in computing power. Highlights of the talks in these areas included results from the first large-scale parallel finite element particle-in-cell (PIC) code, and a many-order-of-magnitude speedup of, and details of porting, the VPIC code to the Roadrunner supercomputer. The working group featured two plenary talks, one by Brian Albright of Los Alamos National Laboratory on the performance of the VPIC code on the Roadrunner supercomputer, and one by David Bruhwiler of Tech-X Corporation on recent advances in computation for advanced accelerators. Highlights of the talk by Albright included the first one-trillion-particle simulations, a sustained performance of 0.3 petaflops, and an eight times speedup of science calculations, including back-scatter in laser-plasma interaction. Highlights of the talk by Bruhwiler included simulations of 10 GeV laser wakefield accelerator stages including external injection, and new developments in electromagnetic simulations of electron guns using finite difference and finite element approaches.

  5. Finite element methods in a simulation code for offshore wind turbines

    NASA Astrophysics Data System (ADS)

    Kurz, Wolfgang

    1994-06-01

    Offshore installation of wind turbines will become important for electricity supply in the future. Wind conditions above the sea are more favorable than on land, and appropriate locations on land are limited and restricted. The dynamic behavior of advanced wind turbines is investigated with digital simulations to reduce time and cost in the development and design phase. A wind turbine can be described and simulated as a multi-body system containing rigid and flexible bodies. Simulation of the non-linear motion of such a mechanical system using a multi-body system code is much faster than using a finite element code. However, a modal representation of the deformation field has to be incorporated in the multi-body system approach. The equations of motion of flexible bodies due to deformation are generated by finite element calculations. At Delft University of Technology the simulation code DUWECS has been developed, which simulates the non-linear behavior of wind turbines in the time domain. The wind turbine is divided into subcomponents which are represented by modules (e.g. rotor, tower, etc.).

  6. NEAMS FPL M2 Milestone Report: Development of a UO₂ Grain Size Model using Multiscale Modeling and Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tonks, Michael R; Zhang, Yongfeng; Bai, Xianming

    2014-06-01

    This report summarizes development work funded by the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program's Fuels Product Line (FPL) to develop a mechanistic model for the average grain size in UO₂ fuel. The model is developed using a multiscale modeling and simulation approach involving atomistic simulations, as well as mesoscale simulations using INL's MARMOT code.

  7. Implementation and Testing of Turbulence Models for the F18-HARV Simulation

    NASA Technical Reports Server (NTRS)

    Yeager, Jessie C.

    1998-01-01

    This report presents three methods of implementing the Dryden power spectral density model for atmospheric turbulence. Included are the equations which define the three methods and computer source code written in Advanced Continuous Simulation Language to implement the equations. Time-history plots and sample statistics of simulated turbulence results from executing the code in a test program are also presented. Power spectral densities were computed for sample sequences of turbulence and are plotted for comparison with the Dryden spectra. The three model implementations were installed in a nonlinear six-degree-of-freedom simulation of the High Alpha Research Vehicle airplane. Aircraft simulation responses to turbulence generated with the three implementations are presented as plots.
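
    As an illustration of the kind of implementation the report compares, the sketch below realizes the Dryden longitudinal gust component as a discrete first-order shaping filter driven by Gaussian white noise, written in Python rather than Advanced Continuous Simulation Language. The airspeed, scale length, and intensity values are invented, and this is only one of several possible discretizations, not one of the report's three methods:

```python
import math
import random

def dryden_longitudinal(sigma_u, L_u, V, dt, n_steps, seed=0):
    """One discrete-time realization of the Dryden longitudinal gust
    spectrum: a first-order (Ornstein-Uhlenbeck) shaping filter driven by
    unit-variance white noise. sigma_u is the gust intensity, L_u the
    scale length, V the airspeed; all values here are illustrative."""
    rng = random.Random(seed)
    a = math.exp(-V * dt / L_u)             # pole of the discretized filter
    b = sigma_u * math.sqrt(1.0 - a * a)    # gain giving variance sigma_u^2
    u, series = 0.0, []
    for _ in range(n_steps):
        u = a * u + b * rng.gauss(0.0, 1.0)
        series.append(u)
    return series

gust = dryden_longitudinal(sigma_u=1.5, L_u=533.0, V=100.0, dt=0.05, n_steps=20000)
```

    With the gain chosen as sigma_u*sqrt(1-a^2), the stationary variance of the output matches sigma_u^2, which is exactly the kind of sample statistic the report checks against the Dryden spectra.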

  8. Simulations of binary black hole mergers

    NASA Astrophysics Data System (ADS)

    Lovelace, Geoffrey

    2017-01-01

    Advanced LIGO's observations of merging binary black holes have inaugurated the era of gravitational wave astronomy. Accurate models of binary black holes and the gravitational waves they emit are helping Advanced LIGO to find as many gravitational waves as possible and to learn as much as possible about the waves' sources. These models require numerical-relativity simulations of binary black holes, because near the time when the black holes merge, all analytic approximations break down. Following breakthroughs in 2005, many research groups have built numerical-relativity codes capable of simulating binary black holes. In this talk, I will discuss current challenges in simulating binary black holes for gravitational-wave astronomy, and I will discuss the tremendous progress that has already enabled such simulations to become an essential tool for Advanced LIGO.

  9. An advanced constitutive model in the sheet metal forming simulation: the Teodosiu microstructural model and the Cazacu Barlat yield criterion

    NASA Astrophysics Data System (ADS)

    Alves, J. L.; Oliveira, M. C.; Menezes, L. F.

    2004-06-01

    Two constitutive models used to describe the plastic behavior of sheet metals in the numerical simulation of sheet metal forming processes are studied: a recently proposed advanced constitutive model based on the Teodosiu microstructural model and the Cazacu Barlat yield criterion is compared with a more classical one, based on the Swift law and the Hill 1948 yield criterion. These constitutive models are implemented into DD3IMP, a finite element home code specifically developed to simulate sheet metal forming processes: a 3-D elastoplastic finite element code with an updated Lagrangian formulation, following a fully implicit time integration scheme, large elastoplastic strains and rotations. Solid finite elements and parametric surfaces are used to model the blank sheet and tool surfaces, respectively. Some details of the numerical implementation of the constitutive models are given. Finally, the theory is illustrated with the numerical simulation of the deep drawing of a cylindrical cup. The results show that the proposed advanced constitutive model predicts the final shape (mean height and ears profile) of the formed part more accurately, as one can conclude from the comparison with the experimental results.
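
    The Swift hardening law used in the classical model is simple enough to state directly: the yield stress grows as a power law of the accumulated equivalent plastic strain, sigma_Y = K (eps0 + eps_p)^n. The sketch below evaluates it with invented, mild-steel-like parameters, not the values identified in the paper:

```python
def swift_flow_stress(eps_p, K=500.0, eps0=0.01, n=0.25):
    """Swift isotropic hardening law: flow (yield) stress in MPa as a
    function of equivalent plastic strain eps_p. K, eps0 and n are
    invented illustrative values, not the paper's identified parameters."""
    return K * (eps0 + eps_p) ** n

# Hardening curve sampled every 5% plastic strain up to 30%:
curve = [swift_flow_stress(e / 100.0) for e in range(0, 31, 5)]
```

    In the classical model this scalar curve scales the Hill 1948 yield surface uniformly (pure isotropic hardening), whereas the Teodosiu microstructural model evolves additional internal variables that capture strain-path effects, which is consistent with its better prediction of the drawn cup's final shape.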

  10. Evaluation of CFD Methods for Simulation of Two-Phase Boiling Flow Phenomena in a Helical Coil Steam Generator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pointer, William David; Shaver, Dillon; Liu, Yang

    The U.S. Department of Energy, Office of Nuclear Energy charges participants in the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program with the development of advanced modeling and simulation capabilities that can be used to address design, performance and safety challenges in the development and deployment of advanced reactor technology. NEAMS has established a high impact problem (HIP) team to demonstrate the applicability of these tools to identification and mitigation of sources of steam generator flow induced vibration (SGFIV). The SGFIV HIP team is working to evaluate vibration sources in an advanced helical coil steam generator using computational fluid dynamics (CFD) simulations of the turbulent primary coolant flow over the outside of the tubes and CFD simulations of the turbulent multiphase boiling secondary coolant flow inside the tubes, integrated with high resolution finite element method assessments of the tubes and their associated structural supports. This report summarizes the demonstration of a methodology for the multiphase boiling flow analysis inside the helical coil steam generator tube. A helical coil steam generator configuration has been defined based on the experiments completed by Polytecnico di Milano in the SIET helical coil steam generator tube facility. Simulations of the defined problem have been completed using the Eulerian-Eulerian multi-fluid modeling capabilities of the commercial CFD code STAR-CCM+. Simulations suggest that the two phases will quickly stratify in the slightly inclined pipe of the helical coil steam generator. These results have been successfully benchmarked against both empirical correlations for pressure drop and simulations using an alternate CFD methodology, the dispersed phase mixture modeling capabilities of the open source CFD code Nek5000.

  11. Chemical reactivity and spectroscopy explored from QM/MM molecular dynamics simulations using the LIO code

    NASA Astrophysics Data System (ADS)

    Marcolongo, Juan P.; Zeida, Ari; Semelak, Jonathan A.; Foglia, Nicolás O.; Morzan, Uriel N.; Estrin, Dario A.; González Lebrero, Mariano C.; Scherlis, Damián A.

    2018-03-01

    In this work we present the current advances in the development and applications of LIO, a lab-made code designed for density functional theory calculations on graphics processing units (GPUs) that can be coupled with different classical molecular dynamics engines. This code has been thoroughly optimized to perform efficient molecular dynamics simulations at the QM/MM DFT level, allowing for an exhaustive sampling of the configurational space. Selected examples are presented for the description of chemical reactivity in terms of free energy profiles, and also for the computation of optical properties, such as vibrational and electronic spectra in solvent and protein environments.

  12. Lawrence Livermore National Laboratories Perspective on Code Development and High Performance Computing Resources in Support of the National HED/ICF Effort

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clouse, C. J.; Edwards, M. J.; McCoy, M. G.

    2015-07-07

    Through its Advanced Scientific Computing (ASC) and Inertial Confinement Fusion (ICF) code development efforts, Lawrence Livermore National Laboratory (LLNL) provides a world-leading numerical simulation capability for the National HED/ICF program in support of the Stockpile Stewardship Program (SSP). In addition, the ASC effort provides the high performance computing platforms on which these codes are run. LLNL remains committed to, and will work with, the national HED/ICF program community to help ensure that numerical simulation needs are met and to make those capabilities available, consistent with programmatic priorities and available resources.

  13. Advances in modelling of condensation phenomena

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, W.S.; Zaltsgendler, E.; Hanna, B.

    1997-07-01

    The physical parameters in the modelling of condensation phenomena in the CANDU reactor system codes are discussed. The experimental programs used for thermal-hydraulic code validation in the Canadian nuclear industry are briefly described. The modelling of vapour generation, and in particular condensation, plays a key role in the modelling of postulated reactor transients. The condensation models adopted in the current state-of-the-art two-fluid CANDU reactor thermal-hydraulic system codes (CATHENA and TUF) are described. As examples of the modelling challenges faced, the simulation of a cold water injection experiment with CATHENA and the simulation of a condensation-induced water hammer experiment with TUF are described.

  14. Particle kinetic simulation of high altitude hypervelocity flight

    NASA Technical Reports Server (NTRS)

    Boyd, Iain; Haas, Brian L.

    1994-01-01

    Rarefied flows about hypersonic vehicles entering the upper atmosphere or through nozzles expanding into a near vacuum may only be simulated accurately with a direct simulation Monte Carlo (DSMC) method. Under this grant, researchers enhanced the models employed in the DSMC method and performed simulations in support of existing NASA projects or missions. DSMC models were developed and validated for simulating rotational, vibrational, and chemical relaxation in high-temperature flows, including effects of quantized anharmonic oscillators and temperature-dependent relaxation rates. State-of-the-art advancements were made in simulating coupled vibration-dissociation recombination for post-shock flows. Models were also developed to compute vehicle surface temperatures directly in the code rather than requiring isothermal estimates. These codes were instrumental in simulating aerobraking of NASA's Magellan spacecraft during orbital maneuvers to assess heat transfer and aerodynamic properties of the delicate satellite. NASA also depended upon simulations of entry of the Galileo probe into the atmosphere of Jupiter to provide drag and flow field information essential for accurate interpretation of an onboard experiment. Finally, the codes have been used extensively to simulate expanding nozzle flows in low-power thrusters in support of propulsion activities at NASA-Lewis. Detailed comparisons between continuum calculations and DSMC results helped to quantify the limitations of continuum CFD codes in rarefied applications.

  15. GLOBECOM '86 - Global Telecommunications Conference, Houston, TX, Dec. 1-4, 1986, Conference Record. Volumes 1, 2, & 3

    NASA Astrophysics Data System (ADS)

    Papers are presented on local area networks; formal methods for communication protocols; computer simulation of communication systems; spread spectrum and coded communications; tropical radio propagation; VLSI for communications; strategies for increasing software productivity; multiple access communications; advanced communication satellite technologies; and spread spectrum systems. Topics discussed include Space Station communication and tracking development and design; transmission networks; modulation; data communications; computer network protocols and performance; and coding and synchronization. Consideration is given to free space optical communications systems; VSAT communication networks; network topology design; advances in adaptive filtering echo cancellation and adaptive equalization; advanced signal processing for satellite communications; the elements, design, and analysis of fiber-optic networks; and advances in digital microwave systems.

  16. Introduction to study and simulation of low rate video coding schemes

    NASA Technical Reports Server (NTRS)

    1992-01-01

    During this period, simulators for the various HDTV systems proposed to the FCC were developed. These simulators will be tested using test sequences from the MPEG committee, and the results will be extrapolated to HDTV video sequences. The simulator for the compression aspects of the Advanced Digital Television (ADTV) proposal has been completed; simulators for the other HDTV proposals are at various stages of development. A brief overview of the ADTV system is given, and some coding results obtained using the simulator are discussed and compared to those obtained using the CCITT H.261 standard. The results are evaluated in the context of the CCSDS specifications, and suggestions are made as to how the ADTV system could be implemented in the NASA network.

  17. Comparison of CFD simulations with experimental data for a tanker model advancing in waves

    NASA Astrophysics Data System (ADS)

    Orihara, Hideo

    2011-03-01

    In this paper, CFD simulation results for a tanker model are compared with experimental data over a range of wave conditions to verify a capability to predict the sea-keeping performance of practical hull forms. CFD simulations are conducted using WISDAM-X code which is capable of unsteady RANS calculations in arbitrary wave conditions. Comparisons are made of unsteady surface pressures, added resistance and ship motions in regular waves for cases of fully-loaded and ballast conditions of a large tanker model. It is shown that the simulation results agree fairly well with the experimental data, and that WISDAM-X code can predict sea-keeping performance of practical hull forms.

  18. Numerical Propulsion System Simulation

    NASA Technical Reports Server (NTRS)

    Naiman, Cynthia

    2006-01-01

    The NASA Glenn Research Center, in partnership with the aerospace industry, other government agencies, and academia, is leading the effort to develop an advanced multidisciplinary analysis environment for aerospace propulsion systems called the Numerical Propulsion System Simulation (NPSS). NPSS is a framework for performing analysis of complex systems. The initial development of NPSS focused on the analysis and design of airbreathing aircraft engines, but the resulting NPSS framework may be applied to any system, for example: aerospace, rockets, hypersonics, power and propulsion, fuel cells, ground based power, and even human system modeling. NPSS provides increased flexibility for the user, which reduces the total development time and cost. It is currently being extended to support the NASA Aeronautics Research Mission Directorate Fundamental Aeronautics Program and the Advanced Virtual Engine Test Cell (AVETeC). NPSS focuses on the integration of multiple disciplines such as aerodynamics, structure, and heat transfer with numerical zooming on component codes. Zooming is the coupling of analyses at various levels of detail. NPSS development includes capabilities to facilitate collaborative engineering. The NPSS will provide improved tools to develop custom components and to use capability for zooming to higher fidelity codes, coupling to multidiscipline codes, transmitting secure data, and distributing simulations across different platforms. These powerful capabilities extend NPSS from a zero-dimensional simulation tool to a multi-fidelity, multidiscipline system-level simulation tool for the full development life cycle.

  19. Tensoral for post-processing users and simulation authors

    NASA Technical Reports Server (NTRS)

    Dresselhaus, Eliot

    1993-01-01

    The CTR post-processing effort aims to make turbulence simulations and data more readily and usefully available to the research and industrial communities. The Tensoral language, which provides the foundation for this effort, is introduced here in the form of a user's guide. The Tensoral user's guide is presented in two main sections. Section one acts as a general introduction and guides database users who wish to post-process simulation databases. Section two gives a brief description of how database authors and other advanced users can make simulation codes and/or the databases they generate available to the user community via Tensoral database back ends. The two-part structure of this document conforms to the two-level design structure of the Tensoral language. Tensoral has been designed to be a general computer language for performing tensor calculus and statistics on numerical data. Tensoral's generality allows it to be used for stand-alone native coding of high-level post-processing tasks (as described in section one of this guide). At the same time, Tensoral's specialization to a minute task (namely, to numerical tensor calculus and statistics) allows it to be easily embedded into applications written partly in Tensoral and partly in other computer languages (here, C and Vectoral). Embedded Tensoral, aimed at advanced users for more general coding (e.g. of efficient simulations, for interfacing with pre-existing software, for visualization, etc.), is described in section two of this guide.

  20. Progress on the Development of the hPIC Particle-in-Cell Code

    NASA Astrophysics Data System (ADS)

    Dart, Cameron; Hayes, Alyssa; Khaziev, Rinat; Marcinko, Stephen; Curreli, Davide; Laboratory of Computational Plasma Physics Team

    2017-10-01

    Advancements were made in the development of the kinetic-kinetic electrostatic Particle-in-Cell code, hPIC, designed for large-scale simulation of the Plasma-Material Interface. hPIC achieved a weak scaling efficiency of 87% using the Algebraic Multigrid Solver BoomerAMG from the PETSc library on more than 64,000 cores of the Blue Waters supercomputer at the University of Illinois at Urbana-Champaign. The code successfully simulates two-stream instability and a volume of plasma over several square centimeters of surface extending out to the presheath in kinetic-kinetic mode. Results from a parametric study of the plasma sheath in strongly magnetized conditions will be presented, as well as a detailed analysis of the plasma sheath structure at grazing magnetic angles. The distribution function and its moments will be reported for plasma species in the simulation domain and at the material surface for plasma sheath simulations.
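    The 87% figure above is a weak-scaling efficiency: the problem size grows in proportion to the core count, so perfect scaling keeps the runtime constant. A minimal sketch of the metric (the timings below are hypothetical, not from the hPIC benchmarks):

```python
def weak_scaling_efficiency(t_ref, t_n):
    """Weak-scaling efficiency: runtime on a small reference partition divided
    by runtime on N cores, with problem size grown proportionally to N.
    Perfect weak scaling keeps runtime constant, giving efficiency 1.0."""
    return t_ref / t_n

# Hypothetical timings: 100 s on the reference partition, 115 s at full scale
print(f"{weak_scaling_efficiency(100.0, 115.0):.0%}")  # 87%
```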

  1. System Simulation of Nuclear Power Plant by Coupling RELAP5 and Matlab/Simulink

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meng Lin; Dong Hou; Zhihong Xu

    2006-07-01

    Since the RELAP5 code has general and advanced features in thermal-hydraulic computation, it has been widely used in transient and accident safety analysis, experiment planning analysis, and system simulation. We therefore wish to design, analyze, and verify a new Instrumentation And Control (I and C) system of a Nuclear Power Plant (NPP) based on this best-estimate code, and eventually to develop our own engineering simulator. Because RELAP5's ability to simulate control and protection systems is limited, its functionality must be extended for efficient, accurate, and flexible design and simulation of I and C systems. Matlab/Simulink, a scientific computing package and a powerful tool for research on and simulation of plant process control, can compensate for this limitation. It was therefore selected as the I and C part to be coupled with the RELAP5 code to realize system simulation of NPPs. Two key technical problems had to be solved. One is dynamic data exchange, by which Matlab/Simulink receives plant parameters and returns control results. A database is used for communication between the two codes: a Dynamic Link Library (DLL) links the database to RELAP5, while a DLL and an S-Function are applied in Matlab/Simulink. The other problem is synchronization between the two codes, to ensure consistency in global simulation time. Because Matlab/Simulink always computes faster than RELAP5, the simulation time is sent by RELAP5 and received by Matlab/Simulink, and a time control subroutine added to the Matlab/Simulink simulation procedure controls its advancement. Through these mechanisms, Matlab/Simulink is dynamically coupled with RELAP5. Thus, in Matlab/Simulink, we can freely design the control and protection logic of NPPs and test it against best-estimate plant model feedback. A test case demonstrates that the results of the coupled calculation are nearly identical to those of a standalone RELAP5 run with built-in control logic.
    In practice, a real Pressurized Water Reactor (PWR) is modeled with the RELAP5 code, and its main control and protection system is duplicated in Matlab/Simulink. Several steady states and transients are calculated under the control of these I and C systems, and the results are compared with plant test curves. The application shows that accurate system simulation of NPPs can be achieved by coupling RELAP5 and Matlab/Simulink. This paper focuses on the coupling method, the plant thermal-hydraulic model, the main control logics, and test and application results. (authors)
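    The synchronization scheme described above (RELAP5 publishes its simulation time; a time control subroutine keeps Matlab/Simulink from running ahead) can be sketched in a few lines. Everything here is a hypothetical stand-in: a dict plays the role of the shared database, and trivial functions replace the RELAP5 and Simulink sides.

```python
def relap5_step(db, dt=0.1):
    """Thermal-hydraulic side: advance one step, publish time and plant data."""
    db["t_relap"] += dt
    db["plant_params"] = {"T_hot": 600.0 + db["t_relap"]}  # placeholder physics
    return db["t_relap"]

def simulink_step(db, dt=0.1):
    """Control side: advance only while not ahead of RELAP5 (the 'time
    control subroutine' of the abstract)."""
    if db["t_ctrl"] + dt <= db["t_relap"]:
        db["t_ctrl"] += dt
        db["control_out"] = -0.01 * (db["plant_params"]["T_hot"] - 600.0)
    return db["t_ctrl"]

db = {"t_relap": 0.0, "t_ctrl": 0.0, "plant_params": {}, "control_out": 0.0}
for _ in range(10):
    relap5_step(db)
    simulink_step(db)

# The controller never runs ahead of the plant model's simulation time
assert db["t_ctrl"] <= db["t_relap"]
```

In the real coupling the "database" is an external store accessed through DLLs on the RELAP5 side and a DLL plus S-Function on the Simulink side, but the invariant is the same: the faster code blocks until the slower one catches up.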

  2. CFD validation needs for advanced concepts at Northrop Corporation

    NASA Technical Reports Server (NTRS)

    George, Michael W.

    1987-01-01

    Information is given in viewgraph form on the Computational Fluid Dynamics (CFD) Workshop held July 14 - 16, 1987. Topics covered include the philosophy of CFD validation, current validation efforts, the wing-body-tail Euler code, F-20 Euler simulated oil flow, and Euler Navier-Stokes code validation for 2D and 3D nozzle afterbody applications.

  3. Advanced Spectral Modeling Development

    DTIC Science & Technology

    1992-09-14

    above, the AFGL line-by-line code already possesses many of the attributes desired of a generally applicable transmittance/radiance simulation code, it...transmittance calculations, (b) perform generalized multiple scattering calculations, (c) calculate both heating and dissociative fluxes, (d) provide...This report is subdivided into task specific subsections. The following section describes our general approach to address these technical issues (Section

  4. The APS SASE FEL : modeling and code comparison.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biedron, S. G.

    A self-amplified spontaneous emission (SASE) free-electron laser (FEL) is under construction at the Advanced Photon Source (APS). Five FEL simulation codes were used in the design phase: GENESIS, GINGER, MEDUSA, RON, and TDA3D. Initial comparisons between each of these independent formulations show good agreement for the parameters of the APS SASE FEL.

  5. Accelerating cardiac bidomain simulations using graphics processing units.

    PubMed

    Neic, A; Liebmann, M; Hoetzl, E; Mitchell, L; Vigmond, E J; Haase, G; Plank, G

    2012-08-01

    Anatomically realistic and biophysically detailed multiscale computer models of the heart are playing an increasingly important role in advancing our understanding of integrated cardiac function in health and disease. Such detailed simulations, however, are computationally vastly demanding, which is a limiting factor for a wider adoption of in-silico modeling. While current trends in high-performance computing (HPC) hardware promise to alleviate this problem, exploiting the potential of such architectures remains challenging since strongly scalable algorithms are necessitated to reduce execution times. Alternatively, acceleration technologies such as graphics processing units (GPUs) are being considered. While the potential of GPUs has been demonstrated in various applications, benefits in the context of bidomain simulations where large sparse linear systems have to be solved in parallel with advanced numerical techniques are less clear. In this study, the feasibility of multi-GPU bidomain simulations is demonstrated by running strong scalability benchmarks using a state-of-the-art model of rabbit ventricles. The model is spatially discretized using the finite element methods (FEM) on fully unstructured grids. The GPU code is directly derived from a large pre-existing code, the Cardiac Arrhythmia Research Package (CARP), with very minor perturbation of the code base. Overall, bidomain simulations were sped up by a factor of 11.8 to 16.3 in benchmarks running on 6-20 GPUs compared to the same number of CPU cores. To match the fastest GPU simulation which engaged 20 GPUs, 476 CPU cores were required on a national supercomputing facility.
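    The benchmark numbers above support a back-of-envelope check (the interpretation below is ours, not the paper's): 476 cores to match 20 GPUs implies about 24 cores per GPU, whereas the 16.3-fold speedup at equal counts would suggest only 326 cores under perfect CPU scaling; the gap is consistent with sub-linear CPU strong scaling at high core counts.

```python
# Figures quoted in the abstract
gpus = 20
speedup_vs_equal_cores = 16.3     # 20 GPUs vs 20 CPU cores
cores_to_match = 476              # CPU cores needed to match the 20-GPU run

cores_per_gpu = cores_to_match / gpus
print(cores_per_gpu)              # 23.8

# With perfect CPU scaling, matching would take gpus * speedup cores;
# the ratio to the actual count estimates CPU parallel efficiency at scale.
ideal_cores = gpus * speedup_vs_equal_cores
cpu_parallel_efficiency = ideal_cores / cores_to_match
print(round(cpu_parallel_efficiency, 2))   # 0.68
```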

  6. Accelerating Cardiac Bidomain Simulations Using Graphics Processing Units

    PubMed Central

    Neic, Aurel; Liebmann, Manfred; Hoetzl, Elena; Mitchell, Lawrence; Vigmond, Edward J.; Haase, Gundolf

    2013-01-01

    Anatomically realistic and biophysically detailed multiscale computer models of the heart are playing an increasingly important role in advancing our understanding of integrated cardiac function in health and disease. Such detailed simulations, however, are computationally vastly demanding, which is a limiting factor for a wider adoption of in-silico modeling. While current trends in high-performance computing (HPC) hardware promise to alleviate this problem, exploiting the potential of such architectures remains challenging since strongly scalable algorithms are necessitated to reduce execution times. Alternatively, acceleration technologies such as graphics processing units (GPUs) are being considered. While the potential of GPUs has been demonstrated in various applications, benefits in the context of bidomain simulations where large sparse linear systems have to be solved in parallel with advanced numerical techniques are less clear. In this study, the feasibility of multi-GPU bidomain simulations is demonstrated by running strong scalability benchmarks using a state-of-the-art model of rabbit ventricles. The model is spatially discretized using the finite element methods (FEM) on fully unstructured grids. The GPU code is directly derived from a large pre-existing code, the Cardiac Arrhythmia Research Package (CARP), with very minor perturbation of the code base. Overall, bidomain simulations were sped up by a factor of 11.8 to 16.3 in benchmarks running on 6–20 GPUs compared to the same number of CPU cores. To match the fastest GPU simulation which engaged 20 GPUs, 476 CPU cores were required on a national supercomputing facility. PMID:22692867

  7. Design Considerations of a Virtual Laboratory for Advanced X-ray Sources

    NASA Astrophysics Data System (ADS)

    Luginsland, J. W.; Frese, M. H.; Frese, S. D.; Watrous, J. J.; Heileman, G. L.

    2004-11-01

    The field of scientific computation has greatly advanced in the last few years, resulting in the ability to perform complex computer simulations that can predict the performance of real-world experiments in a number of fields of study. Among the forces driving this new computational capability is the advent of parallel algorithms, allowing calculations in three-dimensional space with realistic time scales. Electromagnetic radiation sources driven by high-voltage, high-current electron beams offer an area to further push the state-of-the-art in high fidelity, first-principles simulation tools. The physics of these x-ray sources combines kinetic plasma physics (electron beams) with dense fluid-like plasma physics (anode plasmas) and x-ray generation (bremsstrahlung). There are a number of mature techniques and software packages for dealing with the individual aspects of these sources, such as Particle-In-Cell (PIC), Magneto-Hydrodynamics (MHD), and radiation transport codes. The current effort is focused on developing an object-oriented software environment using the Rational Unified Process and the Unified Modeling Language (UML) to provide a framework where multiple 3D parallel physics packages, such as a PIC code (ICEPIC), an MHD code (MACH), and an x-ray transport code (ITS), can co-exist in a system-of-systems approach to modeling advanced x-ray sources. Initial software design and assessments of the fidelity of the various physics algorithms will be presented.

  8. Design of neurophysiologically motivated structures of time-pulse coded neurons

    NASA Astrophysics Data System (ADS)

    Krasilenko, Vladimir G.; Nikolsky, Alexander I.; Lazarev, Alexander A.; Lobodzinska, Raisa F.

    2009-04-01

    This paper describes a general methodology, based on a biologically motivated concept, for building sensor processing systems with parallel input, picture-operand processing, and time-pulse coding. The advantages of such coding for creating parallel programmable 2D-array structures for next-generation digital computers, which require nontraditional number systems for processing analog, digital, hybrid, and neuro-fuzzy operands, are shown. Simulation and implementation results for optoelectronic time-pulse coded intelligent neural elements (OETPCINE) across a wide set of neuro-fuzzy logic operations are considered. The simulation results confirm the engineering advantages, intelligence, and circuit flexibility of OETPCINE for creating advanced 2D structures. The developed equivalentor-nonequivalentor neural element has a power consumption of 10 mW and a processing time of about 10-100 us.

  9. Hypersonic simulations using open-source CFD and DSMC solvers

    NASA Astrophysics Data System (ADS)

    Casseau, V.; Scanlon, T. J.; John, B.; Emerson, D. R.; Brown, R. E.

    2016-11-01

    Hypersonic hybrid hydrodynamic-molecular gas flow solvers are required to satisfy the two essential requirements of any high-speed reacting code: physical accuracy and computational efficiency. The James Weir Fluids Laboratory at the University of Strathclyde is currently developing an open-source hybrid code which will eventually reconcile the direct simulation Monte-Carlo method, making use of the OpenFOAM application called dsmcFoam, with the newly coded open-source two-temperature computational fluid dynamics solver named hy2Foam. In conjunction with employing the CVDV chemistry-vibration model in hy2Foam, novel use is made of the QK rates in a CFD solver. In this paper, further testing is performed, in particular with the CFD solver, to ensure its efficacy before considering more advanced test cases. The hy2Foam and dsmcFoam codes have been shown to compare reasonably well, thus providing a useful basis for other codes to compare against.

  10. PRATHAM: Parallel Thermal Hydraulics Simulations using Advanced Mesoscopic Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joshi, Abhijit S; Jain, Prashant K; Mudrich, Jaime A

    2012-01-01

    At the Oak Ridge National Laboratory, efforts are under way to develop a 3D, parallel lattice Boltzmann method (LBM) code called PRATHAM (PaRAllel Thermal Hydraulic simulations using Advanced Mesoscopic Methods) to demonstrate the accuracy and scalability of LBM for turbulent flow simulations in nuclear applications. The code has been developed in FORTRAN-90 and parallelized using the Message Passing Interface (MPI) library. The Silo library is used to compact and write the data files, and the VisIt visualization software is used to post-process the simulation data in parallel. Both the single relaxation time (SRT) and multi relaxation time (MRT) LBM schemes have been implemented in PRATHAM. To capture turbulence without prohibitively increasing the grid resolution requirements, an LES approach [5] is adopted, allowing large scale eddies to be numerically resolved while modeling the smaller (subgrid) eddies. In this work, a Smagorinsky model has been used, which modifies the fluid viscosity by an additional eddy viscosity depending on the magnitude of the rate-of-strain tensor. In LBM, this is achieved by locally varying the relaxation time of the fluid.
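    The final sentence above describes a standard construction, which can be sketched compactly. In BGK-type LBM with lattice units (c_s^2 = 1/3, dx = dt = 1), the kinematic viscosity relates to the relaxation time as nu = (tau - 0.5)/3, and the Smagorinsky model adds an eddy viscosity nu_t = (C_s dx)^2 |S|. The numerical values below are illustrative, not from the report:

```python
def local_relaxation_time(nu0, strain_rate_mag, c_smag=0.1):
    """Smagorinsky-adjusted BGK relaxation time in lattice units
    (c_s^2 = 1/3, dx = dt = 1): the eddy viscosity nu_t = (C_s * dx)^2 * |S|
    is added to the molecular viscosity nu0, and the relaxation time
    follows from nu_total = (tau - 0.5) / 3."""
    nu_t = (c_smag ** 2) * strain_rate_mag   # subgrid eddy viscosity
    return 3.0 * (nu0 + nu_t) + 0.5

# Quiescent region: no strain, plain BGK relaxation time (about 0.53 here)
print(local_relaxation_time(0.01, 0.0))
# Strongly sheared region: larger |S| raises the effective viscosity locally
print(local_relaxation_time(0.01, 0.5))
```

In an actual LBM code |S| is typically recovered locally from the non-equilibrium part of the distribution functions, so no finite-difference stencil is needed; the sketch takes |S| as given.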

  11. Supernova Light Curves and Spectra from Two Different Codes: Supernu and Phoenix

    NASA Astrophysics Data System (ADS)

    Van Rossum, Daniel R; Wollaeger, Ryan T

    2014-08-01

    The observed similarities between light curve shapes from Type Ia supernovae, and in particular the correlation of light curve shape and brightness, have been actively studied for more than two decades. In recent years, hydrodynamic simulations of white dwarf explosions have advanced greatly, and multiple mechanisms that could potentially produce Type Ia supernovae have been explored in detail. The question of which of the proposed mechanisms is (or are) realized in nature remains challenging to answer, but detailed synthetic light curves and spectra from explosion simulations provide important guidance toward answering it. We present results from a newly developed radiation transport code, Supernu. Supernu solves the supernova radiation transfer problem using a novel technique based on a hybrid between Implicit Monte Carlo and Discrete Diffusion Monte Carlo. This technique enhances the efficiency with respect to traditional Implicit Monte Carlo codes and thus lends itself perfectly to multi-dimensional simulations. We show direct comparisons of light curves and spectra from Type Ia simulations with Supernu versus the legacy Phoenix code.

  12. RICH: OPEN-SOURCE HYDRODYNAMIC SIMULATION ON A MOVING VORONOI MESH

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yalinewich, Almog; Steinberg, Elad; Sari, Re’em

    2015-02-01

    We present here RICH, a state-of-the-art two-dimensional hydrodynamic code based on Godunov's method, on an unstructured moving mesh (the acronym stands for Racah Institute Computational Hydrodynamics). This code is largely based on the code AREPO. It differs from AREPO in its interpolation and time-advancement schemes, as well as in a novel parallelization scheme based on Voronoi tessellation. Using our code, we study the pros and cons of a moving mesh (in comparison to a static mesh). We also compare its accuracy to other codes. Specifically, we show that our implementation of external sources and our time-advancement scheme are more accurate and robust than AREPO's when the mesh is allowed to move. We performed a parameter study of the cell rounding mechanism (Lloyd iterations) and its effects. We find that in most cases a moving mesh gives better results than a static mesh, but this is not universally true. In the case where matter moves in one direction and a sound wave travels in the other (such that relative to the grid the wave is not moving), a static mesh gives better results than a moving mesh. We perform an analytic analysis for finite difference schemes that reveals that a Lagrangian simulation is better than an Eulerian simulation in the case of a highly supersonic flow. Moreover, we show that Voronoi-based moving mesh schemes suffer from an error, which is resolution independent, due to inconsistencies between the flux calculation and the change in the area of a cell. Our code is publicly available as open source and designed in an object-oriented, user-friendly way that facilitates incorporation of new algorithms and physical processes.

  13. On the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods

    PubMed Central

    Lee, Anthony; Yau, Christopher; Giles, Michael B.; Doucet, Arnaud; Holmes, Christopher C.

    2011-01-01

    We present a case-study on the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods. Graphics cards, containing multiple Graphics Processing Units (GPUs), are self-contained parallel computational devices that can be housed in conventional desktop and laptop computers and can be thought of as prototypes of the next generation of many-core processors. For certain classes of population-based Monte Carlo algorithms they offer massively parallel simulation, with the added advantage over conventional distributed multi-core processors that they are cheap, easily accessible, easy to maintain, easy to code, dedicated local devices with low power consumption. On a canonical set of stochastic simulation examples including population-based Markov chain Monte Carlo methods and Sequential Monte Carlo methods, we find speedups from 35- to 500-fold over conventional single-threaded computer code. Our findings suggest that GPUs have the potential to facilitate the growth of statistical modelling into complex data rich domains through the availability of cheap and accessible many-core computation. We believe the speedups we observe should motivate wider use of parallelizable simulation methods and greater methodological attention to their design. PMID:22003276
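    The reason population-based Monte Carlo maps so well onto GPUs is that each sample (or each chain in a population) is computed independently, so one thread can own one sample. A minimal sketch of that structure, with a plain sequential loop standing in for the parallel hardware (the pi-estimation task is illustrative only, not one of the paper's examples):

```python
import random

def monte_carlo_pi(n_samples, seed=0):
    """Embarrassingly parallel Monte Carlo estimate of pi: each dart thrown
    at the unit square is independent of every other, which is exactly the
    structure a GPU exploits by assigning one thread per sample or chain."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_samples)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / n_samples

est = monte_carlo_pi(100_000)
print(est)  # close to 3.1416 for this sample size
```

Population-based MCMC and SMC add interaction steps (resampling, exchange moves) between otherwise independent per-particle updates, which is why the paper's speedups vary with the algorithm's communication pattern.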

  14. Nuclear Energy Advanced Modeling and Simulation (NEAMS) Waste Integrated Performance and Safety Codes (IPSC) : FY10 development and integration.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Criscenti, Louise Jacqueline; Sassani, David Carl; Arguello, Jose Guadalupe, Jr.

    2011-02-01

    This report describes the progress in fiscal year 2010 in developing the Waste Integrated Performance and Safety Codes (IPSC) in support of the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The goal of the Waste IPSC is to develop an integrated suite of computational modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with robust verification, validation, and software quality requirements. Waste IPSC activities in fiscal year 2010 focused on specifying a challenge problem to demonstrate proof of concept, developing a verification and validation plan, and performing an initial gap analysis to identify candidate codes and tools to support the development and integration of the Waste IPSC. The current Waste IPSC strategy is to acquire and integrate the necessary Waste IPSC capabilities wherever feasible, and develop only those capabilities that cannot be acquired or suitably integrated, verified, or validated. This year-end progress report documents the FY10 status of acquisition, development, and integration of thermal-hydrologic-chemical-mechanical (THCM) code capabilities, frameworks, and enabling tools and infrastructure.

  15. User Guidelines and Best Practices for CASL VUQ Analysis Using Dakota

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Coleman, Kayla; Gilkey, Lindsay N.

    Sandia’s Dakota software (available at http://dakota.sandia.gov) supports science and engineering transformation through advanced exploration of simulations. Specifically, it manages and analyzes ensembles of simulations to provide broader and deeper perspective for analysts and decision makers, enabling them to enhance understanding of risk, improve products, and assess simulation credibility. In its simplest mode, Dakota can automate typical parameter variation studies through a generic interface to a physics-based computational model. This can lend efficiency and rigor to manual parameter perturbation studies already being conducted by analysts. However, Dakota also delivers advanced parametric analysis techniques enabling design exploration, optimization, model calibration, risk analysis, and quantification of margins and uncertainty with such models. It directly supports verification and validation activities. Dakota algorithms enrich complex science and engineering models, enabling an analyst to answer crucial questions of: Sensitivity - which are the most important input factors or parameters entering the simulation, and how do they influence key outputs?; Uncertainty - what is the uncertainty or variability in simulation output, given uncertainties in input parameters, and how safe, reliable, robust, or variable is my system (quantification of margins and uncertainty, QMU)?; Optimization - what parameter values yield the best-performing design or operating condition, given constraints?; Calibration - what models and/or parameters best match experimental data? In general, Dakota is the Consortium for Advanced Simulation of Light Water Reactors (CASL) delivery vehicle for verification, validation, and uncertainty quantification (VUQ) algorithms. It permits ready application of the VUQ methods described above to simulation codes by CASL researchers, code developers, and application engineers.

  16. Warthog: Progress on Coupling BISON and PROTEUS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, Shane W.D.

    The Nuclear Energy Advanced Modeling and Simulation (NEAMS) program from the Office of Nuclear Energy at the Department of Energy (DOE) provides a robust toolkit for modeling and simulation of current and future advanced nuclear reactor designs. This toolkit organizes its technologies across product lines, with two divisions targeted at fuels and end-to-end reactor modeling, and a third for integration, coupling, and high-level workflow management. The Fuels Product Line (FPL) and the Reactor Product Line (RPL) provide advanced computational technologies that serve each respective field effectively. There is currently a lack of integration between the product lines, impeding future improvements of simulation solution fidelity. In order to mix and match tools across the product lines, a new application called Warthog was produced. Warthog is built on the Multiphysics Object-Oriented Simulation Environment (MOOSE) framework developed at Idaho National Laboratory (INL). This report details the continuing efforts to provide the Integration Product Line (IPL) with interoperability using the Warthog code. Currently, this application strives to couple the BISON fuel performance application from the FPL with the PROTEUS core neutronics application from the RPL. Warthog leverages as much prior work from the NEAMS program as possible, enabling interoperability between the independently developed MOOSE and SHARP frameworks and the libMesh and MOAB mesh data formats. Previous work on Warthog allowed it to couple a pin cell between the two codes. However, as the temperature changed due to the BISON calculation, the cross sections were not recalculated, leading to growing errors as the temperature moved away from the initial conditions. XSProc from the SCALE code suite was therefore used to recalculate the cross sections as needed. The remainder of this report discusses the changes to Warthog that allow for the implementation of XSProc as an external code. It also discusses the changes made to Warthog to allow it to fit more cleanly into the MultiApp syntax of the MOOSE framework. The capabilities, design, and limitations of Warthog are described, in addition to some of the test cases that were used to demonstrate the code. Future plans for Warthog are discussed, including continuation of the modifications to the input and coupling to other SHARP codes such as Nek5000.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romanov, Gennady

    Typically the RFQs are designed using Parmteq, DesRFQ, and other similar specialized codes, which produce files containing the field and geometrical parameters for every cell. The beam dynamics simulations with these analytical fields are, of course, ideal realizations of the designed RFQs. New advanced computing capabilities have made it possible to simulate beam and even dark current in the realistic 3D electromagnetic fields in the RFQs, which may reflect cavity tuning, the presence of tuners and couplers, RFQ segmentation, etc. The paper describes the utilization of the full 3D field distribution obtained with CST Studio Suite for beam dynamics simulations using both the PIC solver of CST Particle Studio and the beam dynamics code TRACK.

  18. Integration of Dakota into the NEAMS Workbench

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swiler, Laura Painton; Lefebvre, Robert A.; Langley, Brandon R.

    2017-07-01

    This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on integrating Dakota into the NEAMS Workbench. The NEAMS Workbench, developed at Oak Ridge National Laboratory, is a new software framework that provides a graphical user interface, input file creation, parsing, validation, job execution, workflow management, and output processing for a variety of nuclear codes. Dakota is a tool developed at Sandia National Laboratories that provides a suite of uncertainty quantification and optimization algorithms. Providing Dakota within the NEAMS Workbench allows users of nuclear simulation codes to perform uncertainty and optimization studies on their nuclear codes from within a common, integrated environment. Details of the integration and parsing are provided, along with an example of Dakota running a sampling study on the fuels performance code, BISON, from within the NEAMS Workbench.
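
    As a hedged illustration of the kind of sampling study described above, the toy driver below draws random inputs, evaluates a black-box model, and summarizes the spread of the output. The model, parameter names, and ranges are invented for illustration; a real study would invoke an actual simulation code such as BISON through Dakota's generic interface rather than this stand-in function.

```python
import random
import statistics

# Toy stand-in for a fuel-performance response: a centerline-temperature
# proxy driven by two uncertain inputs. Entirely hypothetical, not BISON.
def toy_fuel_model(conductivity, power):
    return 600.0 + power / conductivity

random.seed(42)

# Draw 1000 input samples from assumed uniform ranges (illustrative units).
samples = [(random.uniform(3.0, 5.0),   # thermal conductivity
            random.uniform(15.0, 25.0)) # linear power
           for _ in range(1000)]

# Run the "simulation" for each sample and summarize output uncertainty,
# which is the core loop a sampling study automates.
outputs = [toy_fuel_model(k, q) for k, q in samples]
mean_T = statistics.mean(outputs)
std_T = statistics.pstdev(outputs)
```

In a real workflow the loop body would be a job submission and output parse for each simulation run, with the framework handling file creation and execution.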

  19. Advances in simulation of wave interactions with extended MHD phenomena

    NASA Astrophysics Data System (ADS)

    Batchelor, D.; Abla, G.; D'Azevedo, E.; Bateman, G.; Bernholdt, D. E.; Berry, L.; Bonoli, P.; Bramley, R.; Breslau, J.; Chance, M.; Chen, J.; Choi, M.; Elwasif, W.; Foley, S.; Fu, G.; Harvey, R.; Jaeger, E.; Jardin, S.; Jenkins, T.; Keyes, D.; Klasky, S.; Kruger, S.; Ku, L.; Lynch, V.; McCune, D.; Ramos, J.; Schissel, D.; Schnack, D.; Wright, J.

    2009-07-01

    The Integrated Plasma Simulator (IPS) provides a framework within which some of the most advanced, massively parallel fusion modeling codes can be interoperated to provide a detailed picture of the multi-physics processes involved in fusion experiments. The presentation will cover four topics: 1) recent improvements to the IPS, 2) application of the IPS for very high resolution simulations of ITER scenarios, 3) studies of resistive and ideal MHD stability in tokamak discharges using IPS facilities, and 4) the application of RF power in the electron cyclotron range of frequencies to control slowly growing MHD modes in tokamaks, with initial evaluations of optimized locations for RF power deposition.

  20. Advanced graphical user interface for multi-physics simulations using AMST

    NASA Astrophysics Data System (ADS)

    Hoffmann, Florian; Vogel, Frank

    2017-07-01

    Numerical modelling of particulate matter has gained much popularity in recent decades. Advanced Multi-physics Simulation Technology (AMST) is a state-of-the-art three-dimensional numerical modelling technique combining the eXtended Discrete Element Method (XDEM) with Computational Fluid Dynamics (CFD) and Finite Element Analysis (FEA) [1]. One major limitation of this code has been the lack of a graphical user interface (GUI), meaning that all pre-processing had to be done directly in an HDF5 file. This contribution presents the first graphical pre-processor developed for AMST.

  1. Modernizing the ATLAS simulation infrastructure

    NASA Astrophysics Data System (ADS)

    Di Simone, A. (ATLAS Collaboration; Physikalisches Institut, Albert-Ludwigs-Universität Freiburg, 79104 Freiburg i. Br., Germany)

    2017-10-01

    The ATLAS Simulation infrastructure has been used to produce upwards of 50 billion proton-proton collision events for analyses ranging from detailed Standard Model measurements to searches for exotic new phenomena. In the last several years, the infrastructure has been heavily revised to allow intuitive multithreading and significantly improved maintainability. Such a massive update of a legacy code base requires careful choices about what pieces of code to completely rewrite and what to wrap or revise. The initialization of the complex geometry was generalized to allow new tools and geometry description languages, popular in some detector groups. The addition of multithreading requires Geant4-MT and GaudiHive, two frameworks with fundamentally different approaches to multithreading, to work together. It also required enforcing thread safety throughout a large code base, which required the redesign of several aspects of the simulation, including truth, the record of particle interactions with the detector during the simulation. These advances were possible thanks to close interactions with the Geant4 developers.

  2. Nonlinear three-dimensional verification of the SPECYL and PIXIE3D magnetohydrodynamics codes for fusion plasmas

    NASA Astrophysics Data System (ADS)

    Bonfiglio, D.; Chacón, L.; Cappello, S.

    2010-08-01

    With the increasing impact of scientific discovery via advanced computation, there is presently a strong emphasis on ensuring the mathematical correctness of computational simulation tools. Such endeavor, termed verification, is now at the center of most serious code development efforts. In this study, we address a cross-benchmark nonlinear verification study between two three-dimensional magnetohydrodynamics (3D MHD) codes for fluid modeling of fusion plasmas, SPECYL [S. Cappello and D. Biskamp, Nucl. Fusion 36, 571 (1996)] and PIXIE3D [L. Chacón, Phys. Plasmas 15, 056103 (2008)], in their common limit of application: the simple viscoresistive cylindrical approximation. SPECYL is a serial code in cylindrical geometry that features a spectral formulation in space and a semi-implicit temporal advance, and has been used extensively to date for reversed-field pinch studies. PIXIE3D is a massively parallel code in arbitrary curvilinear geometry that features a conservative, solenoidal finite-volume discretization in space, and a fully implicit temporal advance. The present study is, in our view, a first mandatory step in assessing the potential of any numerical 3D MHD code for fluid modeling of fusion plasmas. Excellent agreement is demonstrated over a wide range of parameters for several fusion-relevant cases in both two- and three-dimensional geometries.

  3. Nonlinear three-dimensional verification of the SPECYL and PIXIE3D magnetohydrodynamics codes for fusion plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonfiglio, Daniele; Chacon, Luis; Cappello, Susanna

    2010-01-01

    With the increasing impact of scientific discovery via advanced computation, there is presently a strong emphasis on ensuring the mathematical correctness of computational simulation tools. Such endeavor, termed verification, is now at the center of most serious code development efforts. In this study, we address a cross-benchmark nonlinear verification study between two three-dimensional magnetohydrodynamics (3D MHD) codes for fluid modeling of fusion plasmas, SPECYL [S. Cappello and D. Biskamp, Nucl. Fusion 36, 571 (1996)] and PIXIE3D [L. Chacon, Phys. Plasmas 15, 056103 (2008)], in their common limit of application: the simple viscoresistive cylindrical approximation. SPECYL is a serial code in cylindrical geometry that features a spectral formulation in space and a semi-implicit temporal advance, and has been used extensively to date for reversed-field pinch studies. PIXIE3D is a massively parallel code in arbitrary curvilinear geometry that features a conservative, solenoidal finite-volume discretization in space, and a fully implicit temporal advance. The present study is, in our view, a first mandatory step in assessing the potential of any numerical 3D MHD code for fluid modeling of fusion plasmas. Excellent agreement is demonstrated over a wide range of parameters for several fusion-relevant cases in both two- and three-dimensional geometries.

  4. Status Report on NEAMS PROTEUS/ORIGEN Integration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wieselquist, William A

    2016-02-18

    The US Department of Energy’s Nuclear Energy Advanced Modeling and Simulation (NEAMS) Program has contributed significantly to the development of the PROTEUS neutron transport code at Argonne National Laboratory and to the Oak Ridge Isotope Generation and Depletion Code (ORIGEN) depletion/decay code at Oak Ridge National Laboratory. PROTEUS’s key capability is the efficient and scalable (up to hundreds of thousands of cores) neutron transport solver on general, unstructured, three-dimensional finite-element-type meshes. The scalability and mesh generality enable the transfer of neutron and power distributions to other codes in the NEAMS toolkit for advanced multiphysics analysis. Recently, ORIGEN has received considerable modernization to provide the high-performance depletion/decay capability within the NEAMS toolkit. This work presents a description of the initial integration of ORIGEN in PROTEUS, mainly performed during FY 2015, with minor updates in FY 2016.

  5. A flooding induced station blackout analysis for a pressurized water reactor using the RISMC toolkit

    DOE PAGES

    Mandelli, Diego; Prescott, Steven; Smith, Curtis; ...

    2015-05-17

    In this paper we evaluate the impact of a power uprate on a pressurized water reactor (PWR) for a tsunami-induced flooding test case. This analysis is performed using the RISMC toolkit: the RELAP-7 and RAVEN codes. RELAP-7 is the new generation of system analysis codes that is responsible for simulating the thermal-hydraulic dynamics of PWR and boiling water reactor systems. RAVEN has two capabilities: to act as a controller of the RELAP-7 simulation (e.g., component/system activation) and to perform statistical analyses. In our case, the simulation of the flooding is performed by using an advanced smooth particle hydrodynamics code called NEUTRINO. The obtained results allow the user to investigate and quantify the impact of timing and sequencing of events on system safety. The impact of power uprate is determined in terms of both core damage probability and safety margins.

  6. Simulations of toroidal Alfvén eigenmode excited by fast ions on the Experimental Advanced Superconducting Tokamak

    NASA Astrophysics Data System (ADS)

    Pei, Youbin; Xiang, Nong; Shen, Wei; Hu, Youjun; Todo, Y.; Zhou, Deng; Huang, Juan

    2018-05-01

    Kinetic-MagnetoHydroDynamic (MHD) hybrid simulations are carried out to study fast-ion-driven toroidal Alfvén eigenmodes (TAEs) on the Experimental Advanced Superconducting Tokamak (EAST). The first part of this article presents the linear benchmark between two kinetic-MHD codes, namely MEGA and M3D-K, based on a realistic EAST equilibrium. Parameter scans show that the frequency and the growth rate of the TAE given by the two codes agree with each other. The second part of this article discusses the resonance interaction between the TAE and fast ions simulated by the MEGA code. The results show that the TAE exchanges energy with the co-current passing particles with parallel velocity |v∥| ≈ VA0/3 or |v∥| ≈ VA0/5, where VA0 is the Alfvén speed on the magnetic axis. The TAE destabilized by the counter-current passing ions is also analyzed and found to have a much smaller growth rate than the co-current-ion-driven TAE. One reason is that the overlapping region of the TAE spatial location and the counter-current ion orbits is narrow, and thus the wave-particle energy exchange is not efficient.

  7. Scientific Discovery through Advanced Computing in Plasma Science

    NASA Astrophysics Data System (ADS)

    Tang, William

    2005-03-01

    Advanced computing is generally recognized to be an increasingly vital tool for accelerating progress in scientific research during the 21st Century. For example, the Department of Energy's ``Scientific Discovery through Advanced Computing'' (SciDAC) Program was motivated in large measure by the fact that formidable scientific challenges in its research portfolio could best be addressed by utilizing the combination of the rapid advances in super-computing technology together with the emergence of effective new algorithms and computational methodologies. The imperative is to translate such progress into corresponding increases in the performance of the scientific codes used to model complex physical systems such as those encountered in high temperature plasma research. If properly validated against experimental measurements and analytic benchmarks, these codes can provide reliable predictive capability for the behavior of a broad range of complex natural and engineered systems. This talk reviews recent progress and future directions for advanced simulations with some illustrative examples taken from the plasma science applications area. Significant recent progress has been made in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics, giving increasingly good agreement between experimental observations and computational modeling. This was made possible by the combination of access to powerful new computational resources together with innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning a huge range in time and space scales. In particular, the plasma science community has made excellent progress in developing advanced codes for which computer run-time and problem size scale well with the number of processors on massively parallel machines (MPP's). 
A good example is the effective usage of the full power of multi-teraflop (multi-trillion floating point computations per second) MPP's to produce three-dimensional, general geometry, nonlinear particle simulations which have accelerated progress in understanding the nature of plasma turbulence in magnetically-confined high temperature plasmas. These calculations, which typically utilized billions of particles for thousands of time-steps, would not have been possible without access to powerful present generation MPP computers and the associated diagnostic and visualization capabilities. In general, results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. The associated scientific excitement should serve to stimulate improved cross-cutting collaborations with other fields and also to help attract bright young talent to the computational science area.

  8. Detection of Explosive Devices using X-ray Backscatter Radiation

    NASA Astrophysics Data System (ADS)

    Faust, Anthony A.

    2002-09-01

    It is our goal to develop a coded-aperture-based X-ray backscatter imaging detector that will provide sufficient speed, contrast, and spatial resolution to detect antipersonnel landmines and Improvised Explosive Devices (IEDs). While our final objective is to field a hand-held detector, we have currently constrained ourselves to a design that can be fielded on a small robotic platform. Coded aperture imaging has been used by the observational gamma-ray astronomy community for a number of years. However, it is recent advances in the field of medical nuclear imaging that have allowed the technique to be applied to a backscatter scenario. In addition, driven by requirements in medical applications, advances in X-ray detection are continually being made, and detectors are now being produced that are faster, cheaper, and lighter than those of only a decade ago. With these advances, a coded aperture hand-held imaging system has only recently become a possibility. This paper will begin with an introduction to the technique, identify recent advances which have made this approach possible, present a simulated example case, and conclude with a discussion of future work.

  9. A large scale software system for simulation and design optimization of mechanical systems

    NASA Technical Reports Server (NTRS)

    Dopker, Bernhard; Haug, Edward J.

    1989-01-01

    The concept of an advanced integrated, networked simulation and design system is outlined. Such an advanced system can be developed utilizing existing codes without compromising the integrity and functionality of the system. An example has been used to demonstrate the applicability of the concept of the integrated system outlined here. The development of an integrated system can be done incrementally. Initial capabilities can be developed and implemented without having a detailed design of the global system. Only a conceptual global system must exist. For a fully integrated, user friendly design system, further research is needed in the areas of engineering data bases, distributed data bases, and advanced user interface design.

  10. Advanced Hybrid Modeling of Hall Thruster Plumes

    DTIC Science & Technology

    2010-06-16

    Hall thruster operated in the Large Vacuum Test Facility at the University of Michigan. The approach utilizes the direct simulation Monte Carlo method and the Particle-in-Cell method to simulate the collision and plasma dynamics of xenon neutrals and ions. The electrons are modeled as a fluid using conservation equations. A second code is employed to model discharge chamber behavior to provide improved input conditions at the thruster exit for the plume simulation. Simulation accuracy is assessed using experimental data previously

  11. Wakefield Simulation of CLIC PETS Structure Using Parallel 3D Finite Element Time-Domain Solver T3P

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Candel, A.; Kabel, A.; Lee, L.

    In recent years, SLAC's Advanced Computations Department (ACD) has developed the parallel 3D Finite Element electromagnetic time-domain code T3P. Higher-order Finite Element methods on conformal unstructured meshes and massively parallel processing allow unprecedented simulation accuracy for wakefield computations and simulations of transient effects in realistic accelerator structures. Applications include simulation of wakefield damping in the Compact Linear Collider (CLIC) power extraction and transfer structure (PETS).

  12. A 2D electrostatic PIC code for the Mark III Hypercube

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferraro, R.D.; Liewer, P.C.; Decyk, V.K.

    We have implemented a 2D electrostatic plasma particle-in-cell (PIC) simulation code on the Caltech/JPL Mark IIIfp Hypercube. The code simulates plasma effects by evolving in time the trajectories of thousands to millions of charged particles subject to their self-consistent fields. Each particle's position and velocity is advanced in time using a leap-frog method for integrating Newton's equations of motion in electric and magnetic fields. The electric field due to these moving charged particles is calculated on a spatial grid at each time step by solving Poisson's equation in Fourier space. These two tasks represent the largest part of the computation. To obtain efficient operation on a distributed-memory parallel computer, we are using the General Concurrent PIC (GCPIC) algorithm previously developed for a 1D parallel PIC code.
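
    The two core tasks named above, the leap-frog particle push and the spectral Poisson solve, can be sketched in a few lines. The following is a minimal serial 1D analogue for illustration only, not the GCPIC algorithm or the Mark III code; the grid size, nearest-grid-point weighting, and normalizations are our own assumptions.

```python
import numpy as np

# One step of a minimal 1D electrostatic PIC cycle:
# deposit charge on a grid, solve Poisson's equation via FFT,
# gather the field at particle positions, leap-frog advance.

def pic_step(x, v, q_m, L, ng, dt):
    dx = L / ng
    # nearest-grid-point charge deposition with a neutralizing background
    idx = np.round(x / dx).astype(int) % ng
    rho = np.bincount(idx, minlength=ng) / dx
    rho = rho - rho.mean()
    # Poisson in Fourier space: -k^2 phi_k = -rho_k  =>  phi_k = rho_k / k^2
    k = 2 * np.pi * np.fft.fftfreq(ng, d=dx)
    rho_k = np.fft.fft(rho)
    phi_k = np.zeros_like(rho_k)
    phi_k[1:] = rho_k[1:] / k[1:]**2        # k = 0 mode dropped (neutral plasma)
    E = np.real(np.fft.ifft(-1j * k * phi_k))  # E = -d(phi)/dx
    # gather field at particles, then leap-frog push with periodic wrap
    v = v + q_m * E[idx] * dt
    x = (x + v * dt) % L
    return x, v

rng = np.random.default_rng(0)
L, ng, n = 2 * np.pi, 64, 10000
x = rng.uniform(0, L, n)
v = rng.normal(0.0, 1.0, n)
for _ in range(10):
    x, v = pic_step(x, v, -1.0, L, ng, 0.05)
```

In a distributed-memory version, the particle arrays and grid would be partitioned across processors, which is the essence of the GCPIC decomposition the abstract refers to.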

  13. High-performance computational fluid dynamics: a custom-code approach

    NASA Astrophysics Data System (ADS)

    Fannon, James; Loiseau, Jean-Christophe; Valluri, Prashant; Bethune, Iain; Náraigh, Lennon Ó.

    2016-07-01

    We introduce a modified and simplified version of the pre-existing fully parallelized three-dimensional Navier-Stokes flow solver known as TPLS. We demonstrate how the simplified version can be used as a pedagogical tool for the study of computational fluid dynamics (CFD) and parallel computing. TPLS is at its heart a two-phase flow solver, and uses calls to a range of external libraries to accelerate its performance. However, in the present context we narrow the focus of the study to basic hydrodynamics and parallel computing techniques, and the code is therefore simplified and modified to simulate pressure-driven single-phase flow in a channel, using only relatively simple Fortran 90 code with MPI parallelization, but no calls to any other external libraries. The modified code is analysed in order to both validate its accuracy and investigate its scalability up to 1000 CPU cores. Simulations are performed for several benchmark cases in pressure-driven channel flow, including a turbulent simulation, wherein the turbulence is incorporated via the large-eddy simulation technique. The work may be of use to advanced undergraduate and graduate students as an introductory study in CFD, while also providing insight for those interested in more general aspects of high-performance computing.

  14. Advanced Pellet-Cladding Interaction Modeling using the US DOE CASL Fuel Performance Code: Peregrine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Montgomery, Robert O.; Capps, Nathan A.; Sunderland, Dion J.

    The US DOE’s Consortium for Advanced Simulation of LWRs (CASL) program has undertaken an effort to enhance and develop modeling and simulation tools for a virtual reactor application, including high fidelity neutronics, fluid flow/thermal hydraulics, and fuel and material behavior. The fuel performance analysis efforts aim to provide 3-dimensional capabilities for single and multiple rods to assess safety margins and the impact of plant operation and fuel rod design on the fuel thermo-mechanical-chemical behavior, including Pellet-Cladding Interaction (PCI) failures and CRUD-Induced Localized Corrosion (CILC) failures in PWRs. [1-3] The CASL fuel performance code, Peregrine, is an engineering scale code that is built upon the MOOSE/ELK/FOX computational FEM framework, which is also common to the fuel modeling framework, BISON [4,5]. Peregrine uses both 2-D and 3-D geometric fuel rod representations and contains a materials properties and fuel behavior model library for the UO2 and Zircaloy system common to PWR fuel derived from both open literature sources and the FALCON code [6]. The primary purpose of Peregrine is to accurately calculate the thermal, mechanical, and chemical processes active throughout a single fuel rod during operation in a reactor, for both steady state and off-normal conditions.

  15. NPSS Multidisciplinary Integration and Analysis

    NASA Technical Reports Server (NTRS)

    Hall, Edward J.; Rasche, Joseph; Simons, Todd A.; Hoyniak, Daniel

    2006-01-01

    The objective of this task was to enhance the capability of the Numerical Propulsion System Simulation (NPSS) by expanding its reach into the high-fidelity multidisciplinary analysis area. This task investigated numerical techniques to convert between cold static to hot running geometry of compressor blades. Numerical calculations of blade deformations were iteratively done with high fidelity flow simulations together with high fidelity structural analysis of the compressor blade. The flow simulations were performed with the Advanced Ducted Propfan Analysis (ADPAC) code, while structural analyses were performed with the ANSYS code. High fidelity analyses were used to evaluate the effects on performance of: variations in tip clearance, uncertainty in manufacturing tolerance, variable inlet guide vane scheduling, and the effects of rotational speed on the hot running geometry of the compressor blades.

  16. Soul on Silicon.

    ERIC Educational Resources Information Center

    Kurzweil, Raymond C.

    1994-01-01

    Summarizes recent advances in computer simulation and "reverse engineering" technologies, highlighting the Human Genome Project to scan the human genetic code; artificial retina chips to copy the human retina's neural organization; high-speed, high-resolution Magnetic Resonance Imaging scanners; and the virtual book. Discusses…

  17. Development of the FHR advanced natural circulation analysis code and application to FHR safety analysis

    DOE PAGES

    Guo, Z.; Zweibaum, N.; Shao, M.; ...

    2016-04-19

    The University of California, Berkeley (UCB) is performing thermal hydraulics safety analysis to develop the technical basis for design and licensing of fluoride-salt-cooled, high-temperature reactors (FHRs). FHR designs investigated by UCB use natural circulation for emergency, passive decay heat removal when normal decay heat removal systems fail. The FHR advanced natural circulation analysis (FANCY) code has been developed for assessment of passive decay heat removal capability and safety analysis of these innovative system designs. The FANCY code uses a one-dimensional, semi-implicit scheme to solve for pressure-linked mass, momentum and energy conservation equations. Graph theory is used to automatically generate a staggered mesh for complicated pipe network systems. Heat structure models have been implemented for three types of boundary conditions (Dirichlet, Neumann and Robin boundary conditions). Heat structures can be composed of several layers of different materials, and are used for simulation of heat structure temperature distribution and heat transfer rate. Control models are used to simulate sequences of events or trips of safety systems. A proportional-integral controller is also used to automatically make thermal hydraulic systems reach desired steady state conditions. A point kinetics model is used to model reactor kinetics behavior with temperature reactivity feedback. The underlying large sparse linear systems in these models are efficiently solved by using direct and iterative solvers provided by the SuperLU code on high performance machines. Input interfaces are designed to increase the flexibility of simulation for complicated thermal hydraulic systems. In conclusion, this paper mainly focuses on the methodology used to develop the FANCY code, and safety analysis of the Mark 1 pebble-bed FHR under development at UCB is performed.
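
    The point kinetics model with temperature reactivity feedback mentioned above can be sketched as a small ODE system. The one-delayed-group form below, with invented coefficients and a crude lumped fuel temperature, is a hedged illustration of the structure only; FANCY's actual solver, nuclear data, and feedback models are not reproduced here.

```python
import numpy as np

# One-delayed-group point kinetics with a simple linear temperature
# reactivity feedback. All coefficients are illustrative placeholders.

beta, lam, LAMBDA = 0.0065, 0.08, 1e-4   # delayed fraction, precursor decay (1/s), generation time (s)
alpha_T = -1e-5                          # fuel temperature coefficient (1/K), negative feedback

def derivs(state, rho_ext, T_ref):
    n, c, T = state
    rho = rho_ext + alpha_T * (T - T_ref)       # net reactivity including feedback
    dn = (rho - beta) / LAMBDA * n + lam * c    # neutron density
    dc = beta / LAMBDA * n - lam * c            # delayed-neutron precursor
    dT = 1.0 * n - 0.1 * (T - T_ref)            # crude lumped fuel temperature balance
    return np.array([dn, dc, dT])

# forward-Euler march of a small external step-reactivity insertion,
# starting from steady state (n = 1, precursors in equilibrium)
state = np.array([1.0, beta / (lam * LAMBDA), 300.0])
dt = 1e-4
for _ in range(20000):                          # 2 s of transient
    state = state + dt * derivs(state, 1e-4, 300.0)
```

A production solver would use a stiff implicit integrator rather than forward Euler, since the prompt time scale (of order Λ/β) is far shorter than the delayed and thermal time scales.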

  18. Perspectives on the Future of CFD

    NASA Technical Reports Server (NTRS)

    Kwak, Dochan

    2000-01-01

    This viewgraph presentation gives an overview of the future of computational fluid dynamics (CFD), which in the past has pioneered the field of flow simulation. Over time, CFD has progressed along with computing power: numerical methods have advanced as CPU and memory capacity have increased. Complex configurations are routinely computed now, and direct numerical simulation (DNS) and large eddy simulation (LES) are used to study turbulence. As computing resources shifted to parallel and distributed platforms, computer science aspects such as scalability (algorithmic and implementation), portability, and transparent coding have advanced. Examples of potential future (or current) challenges include risk assessment, limitations of heuristic models, and the development of CFD and information technology (IT) tools.

  19. Forward and adjoint spectral-element simulations of seismic wave propagation using hardware accelerators

    NASA Astrophysics Data System (ADS)

    Peter, Daniel; Videau, Brice; Pouget, Kevin; Komatitsch, Dimitri

    2015-04-01

    Improving the resolution of tomographic images is crucial to answering important questions on the nature of Earth's subsurface structure and internal processes. Seismic tomography is the most prominent approach, in which seismic signals from ground-motion records are used to infer physical properties of internal structures such as compressional- and shear-wave speeds, anisotropy and attenuation. Recent advances in regional- and global-scale seismic inversions move towards full-waveform inversions, which require accurate simulations of seismic wave propagation in complex 3D media, providing access to the full 3D seismic wavefields. However, these numerical simulations are computationally very expensive and need high-performance computing (HPC) facilities for further improving the current state of knowledge. In recent years, many-core architectures such as graphics processing units (GPUs) have been added to available large HPC systems. Such GPU-accelerated computing, together with advances in multi-core central processing units (CPUs), can greatly accelerate scientific applications. There are mainly two choices of language support for GPU cards: the CUDA programming environment and the OpenCL language standard. CUDA software development targets NVIDIA graphics cards, while OpenCL was adopted mainly by AMD graphics cards. In order to employ such hardware accelerators for seismic wave propagation simulations, we incorporated the code generation tool BOAST into the existing spectral-element code package SPECFEM3D_GLOBE. This allows us to use meta-programming of computational kernels and to generate optimized source code for both the CUDA and OpenCL languages, running simulations on either CUDA or OpenCL hardware accelerators. We show here applications of forward and adjoint seismic wave propagation on CUDA/OpenCL GPUs, validating results and comparing performance for different simulations and hardware usages.
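The single-source kernel idea behind BOAST can be sketched in miniature: one abstract kernel description is rendered to both CUDA and OpenCL source. The templates and function below are a hypothetical illustration of the approach, not BOAST's actual API.

```python
# Toy single-source kernel generator: one abstract body, two backends.
# Simplified sketch only; BOAST's real meta-programming is far richer.
HEADERS = {
    "cuda":   "__global__ void {name}(float *out, const float *a, const float *b, int n)",
    "opencl": "__kernel void {name}(__global float *out, __global const float *a, __global const float *b, int n)",
}
INDEX = {
    "cuda":   "int i = blockIdx.x * blockDim.x + threadIdx.x;",
    "opencl": "int i = get_global_id(0);",
}

def generate_kernel(name, body, backend):
    """Render one abstract kernel description to CUDA or OpenCL C source."""
    return "\n".join([
        HEADERS[backend].format(name=name) + " {",
        "    " + INDEX[backend],
        "    if (i < n) { " + body + " }",
        "}",
    ])

cuda_src = generate_kernel("vec_add", "out[i] = a[i] + b[i];", "cuda")
ocl_src = generate_kernel("vec_add", "out[i] = a[i] + b[i];", "opencl")
```

Only the launch-index and qualifier boilerplate differs between the two targets; the numerical body is written once, which is the maintenance advantage the abstract cites.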

  20. Can an Atmospherically Forced Ocean Model Accurately Simulate Sea Surface Temperature During ENSO Events?

    DTIC Science & Technology

    2010-01-01

    Ruth H. Preller, 7300 Security, Code 1226 Office of Counsel.Code 1008.3 ADOR/Director NCST E. R. Franchi , 7000 Public Affairs (Unclassified...over the global ocean. Similarly, the monthly mean MODAS SST climatology is based on Advanced Very-High Resolution Radiometer (AVHRR) Multi

  1. Methods for simulation-based analysis of fluid-structure interaction.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barone, Matthew Franklin; Payne, Jeffrey L.

    2005-10-01

    Methods for analysis of fluid-structure interaction using high-fidelity simulations are critically reviewed. First, a literature review of modern numerical techniques for simulation of aeroelastic phenomena is presented. The review focuses on methods contained within the arbitrary Lagrangian-Eulerian (ALE) framework for coupling computational fluid dynamics codes to computational structural mechanics codes. The review treats mesh movement algorithms, the role of the geometric conservation law, time advancement schemes, wetted surface interface strategies, and some representative applications. The complexity and computational expense of coupled Navier-Stokes/structural dynamics simulations point to the need for reduced order modeling to facilitate parametric analysis. The proper orthogonal decomposition (POD)/Galerkin projection approach for building a reduced order model (ROM) is presented, along with ideas for extending the methodology to allow construction of ROMs based on data generated from ALE simulations.
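The POD step of the POD/Galerkin approach described above can be sketched with a snapshot matrix and a singular value decomposition; the synthetic flow field and mode count here are illustrative assumptions, not data from the reviewed simulations.

```python
import numpy as np

def pod_basis(snapshots, r):
    """snapshots: (n_dof, n_snap) matrix of state samples;
    returns the first r POD modes and all singular values."""
    u, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return u[:, :r], s

# Synthetic snapshots of a field that is exactly a mix of two spatial modes.
x = np.linspace(0.0, np.pi, 200)
snaps = np.column_stack([
    np.sin(x) * np.cos(0.1 * k) + 0.3 * np.sin(2 * x) * np.sin(0.1 * k)
    for k in range(50)
])

modes, sigma = pod_basis(snaps, 2)
# Galerkin-style projection onto the 2-mode basis and reconstruction.
recon = modes @ (modes.T @ snaps)
err = np.linalg.norm(snaps - recon) / np.linalg.norm(snaps)
```

Because the synthetic data lie exactly in the span of two modes, the two-mode ROM reconstruction is accurate to machine precision; for real ALE data the truncated singular values quantify the energy discarded.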

  2. Advanced radiometric and interferometric millimeter-wave scene simulations

    NASA Technical Reports Server (NTRS)

    Hauss, B. I.; Moffa, P. J.; Steele, W. G.; Agravante, H.; Davidheiser, R.; Samec, T.; Young, S. K.

    1993-01-01

    Smart munitions and weapons utilize various imaging sensors (including passive IR, active and passive millimeter-wave, and visible wavebands) to detect/identify targets at short standoff ranges and in varied terrain backgrounds. In order to design and evaluate these sensors under a variety of conditions, a high-fidelity scene simulation capability is necessary. Such a capability for passive millimeter-wave scene simulation exists at TRW. TRW's Advanced Radiometric Millimeter-Wave Scene Simulation (ARMSS) code is a rigorous, benchmarked, end-to-end passive millimeter-wave scene simulation code for interpreting millimeter-wave data, establishing scene signatures and evaluating sensor performance. In passive millimeter-wave imaging, resolution is limited due to wavelength and aperture size. Where high resolution is required, the utility of passive millimeter-wave imaging is confined to short ranges. Recent developments in interferometry have made possible high resolution applications on military platforms. Interferometry or synthetic aperture radiometry allows the creation of a high resolution image with a sparsely filled aperture. Borrowing from research work in radio astronomy, we have developed and tested at TRW scene reconstruction algorithms that allow the recovery of the scene from a relatively small number of spatial frequency components. In this paper, the TRW modeling capability is described and numerical results are presented.
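The sparse-aperture reconstruction idea can be illustrated in miniature: sample the scene's spatial-frequency (visibility) plane at a small subset of points, then inverse transform the zero-filled samples to form a "dirty image." This toy sketch only stands in for the TRW algorithms described above; the scene, sampling fraction, and sizes are arbitrary assumptions.

```python
import numpy as np

# A single bright point source in an otherwise empty 64x64 scene.
rng = np.random.default_rng(0)
scene = np.zeros((64, 64))
scene[20, 30] = 1.0

# Full visibility plane = 2D Fourier transform of the scene; keep ~25%
# of the spatial-frequency components, as a sparse aperture would.
vis = np.fft.fft2(scene)
mask = rng.random(vis.shape) < 0.25

# "Dirty image": inverse transform of the zero-filled sparse samples.
dirty = np.fft.ifft2(np.where(mask, vis, 0)).real
peak = np.unravel_index(np.argmax(dirty), dirty.shape)
```

Even with three-quarters of the frequency components missing, the point source still dominates the dirty image at its true location; practical reconstruction algorithms then deconvolve the sampling pattern's sidelobes.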

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krasheninnikov, Sergei I.; Angus, Justin; Lee, Wonjae

    The goal of the Edge Simulation Laboratory (ESL) multi-institutional project is to advance scientific understanding of the edge plasma region of magnetic fusion devices via a coordinated effort utilizing modern computing resources, advanced algorithms, and ongoing theoretical development. The UCSD team was involved in the development of the COGENT code for kinetic studies across a magnetic separatrix. This work included a kinetic treatment of electrons and multiple ion species (impurities) and accurate collision operators.

  4. Potential Vorticity Analysis of Low Level Thunderstorm Dynamics in an Idealized Supercell Simulation

    DTIC Science & Technology

    2009-03-01

    Severe Weather, Supercell, Weather Research and Forecasting Model, Advanced WRF...A. ADVANCED RESEARCH WRF MODEL...1. Data, Model Setup, and Methodology...03/11/2006 GFS model run. Top row: 11/12Z initialization. Middle row: 12 hour forecast valid at 12/00Z. Bottom row: 24 hour forecast valid at

  5. GLOBECOM '89 - IEEE Global Telecommunications Conference and Exhibition, Dallas, TX, Nov. 27-30, 1989, Conference Record. Volumes 1, 2, & 3

    NASA Astrophysics Data System (ADS)

    The present conference discusses topics in multiwavelength network technology and its applications, advanced digital radio systems in their propagation environment, mobile radio communications, switching programmability, advancements in computer communications, integrated-network management and security, HDTV and image processing in communications, basic exchange communications radio, advancements in digital switching, intelligent network evolution, speech coding for telecommunications, and multiple access communications. Also discussed are network designs for quality assurance, recent progress in coherent optical systems, digital radio applications, advanced communications technologies for mobile users, communication software for switching systems, AI and expert systems in network management, intelligent multiplexing nodes, video and image coding, network protocols and performance, system methods in quality and reliability, the design and simulation of lightwave systems, local radio networks, mobile satellite communications systems, fiber networks restoration, packet video networks, human interfaces for future networks, and lightwave networking.

  6. EnergyPlus Run Time Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, Tianzhen; Buhl, Fred; Haves, Philip

    2008-09-20

    EnergyPlus is a new-generation building performance simulation program offering many new modeling capabilities and more accurate performance calculations that integrate building components at sub-hourly time steps. However, EnergyPlus runs much slower than current-generation simulation programs, which has become a major barrier to its widespread adoption by the industry. This paper analyzes EnergyPlus run time from several perspectives to identify the key issues and challenges in speeding up EnergyPlus: studying the historical trends of EnergyPlus run time based on the advancement of computers and code improvements to EnergyPlus, comparing EnergyPlus with DOE-2 to understand and quantify the run time differences, identifying key simulation settings and model features that have significant impacts on run time, and performing code profiling to identify which EnergyPlus subroutines consume the most run time. The paper provides recommendations to improve EnergyPlus run time from the modeler's perspective, along with recommendations on adequate computing platforms. Suggestions of software code and architecture changes to improve EnergyPlus run time based on the code profiling results are also discussed.
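The code-profiling step described above can be reproduced in miniature with Python's built-in profiler: run a toy "simulation" under cProfile and rank functions by cumulative time to see which routines dominate. The function names here are invented stand-ins, not actual EnergyPlus subroutines.

```python
import cProfile
import io
import pstats

# Toy stand-ins for nested simulation subroutines (names are illustrative).
def surface_heat_balance(n):
    return sum(i * 0.001 for i in range(n))

def zone_air_update(n):
    return sum(surface_heat_balance(50) for _ in range(n))

def run_simulation():
    for _ in range(200):          # 200 "time steps"
        zone_air_update(20)       # 20 "zones" per step

# Profile the run and print the top entries sorted by cumulative time,
# which is how hot subroutines are identified.
pr = cProfile.Profile()
pr.enable()
run_simulation()
pr.disable()

buf = io.StringIO()
pstats.Stats(pr, stream=buf).sort_stats("cumulative").print_stats(10)
report = buf.getvalue()
```

The report places the outer driver at the top by cumulative time, with the inner heat-balance routine accounting for most of the calls, mirroring the kind of hotspot ranking the paper performed on EnergyPlus.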

  7. Composite Load Spectra for Select Space Propulsion Structural Components

    NASA Technical Reports Server (NTRS)

    Ho, Hing W.; Newell, James F.

    1994-01-01

    Generic load models are described with multiple levels of progressive sophistication to simulate the composite (combined) load spectra (CLS) that are induced in space propulsion system components representative of Space Shuttle Main Engines (SSME), such as transfer ducts, turbine blades and liquid oxygen (LOX) posts. These generic (coupled) models combine deterministic models for the simulation of dynamic, acoustic, high-pressure, and high-rotational-speed loads with statistically varying coefficients. These coefficients are then determined using advanced probabilistic simulation methods, with and without strategically selected experimental data. The entire simulation process is included in a CLS computer code. Applications of the computer code to various components, in conjunction with the PSAM (Probabilistic Structural Analysis Method) code, to perform probabilistic load evaluations and life prediction evaluations are also described to illustrate the effectiveness of the coupled model approach.
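The "deterministic model with statistically varying coefficients" idea can be sketched as a small Monte Carlo study: a fixed load expression whose coefficients are drawn from assumed distributions yields a load spectrum. The functional form and all numbers below are illustrative assumptions, not CLS model data.

```python
import random
import statistics

random.seed(2)

def composite_load(p_chamber, rpm, c_p, c_r):
    """Deterministic load model: a pressure term plus a rotational term.
    The coefficients c_p and c_r carry the statistical variation."""
    return c_p * p_chamber + c_r * rpm ** 2

# Sample the coefficients from assumed distributions to build the spectrum.
samples = [
    composite_load(3000.0, 500.0,
                   random.gauss(1.0, 0.05),     # pressure coefficient
                   random.gauss(2e-4, 1e-5))    # rotational-speed coefficient
    for _ in range(5000)
]
mean_load = statistics.fmean(samples)
p99 = sorted(samples)[int(0.99 * len(samples))]  # upper-tail design load
```

The mean and upper-percentile loads extracted this way are the kind of inputs a probabilistic structural analysis (such as PSAM) would consume for life prediction.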

  8. GPU accelerated population annealing algorithm

    NASA Astrophysics Data System (ADS)

    Barash, Lev Yu.; Weigel, Martin; Borovský, Michal; Janke, Wolfhard; Shchur, Lev N.

    2017-11-01

    Population annealing is a promising recent approach for Monte Carlo simulations in statistical physics, in particular for the simulation of systems with complex free-energy landscapes. It is a hybrid method, combining importance sampling through Markov chains with elements of sequential Monte Carlo in the form of population control. While it appears to provide algorithmic capabilities for the simulation of such systems that are roughly comparable to those of more established approaches such as parallel tempering, it is intrinsically much more suitable for massively parallel computing. Here, we tap into this structural advantage and present a highly optimized implementation of the population annealing algorithm on GPUs that promises speed-ups of several orders of magnitude as compared to a serial implementation on CPUs. While the sample code is for simulations of the 2D ferromagnetic Ising model, it should be easily adapted for simulations of other spin models, including disordered systems. Our code includes implementations of some advanced algorithmic features that have only recently been suggested, namely the automatic adaptation of temperature steps and a multi-histogram analysis of the data at different temperatures. Program Files doi:http://dx.doi.org/10.17632/sgzt4b7b3m.1 Licensing provisions: Creative Commons Attribution license (CC BY 4.0) Programming language: C, CUDA External routines/libraries: NVIDIA CUDA Toolkit 6.5 or newer Nature of problem: The program calculates the internal energy, specific heat, several magnetization moments, entropy and free energy of the 2D Ising model on square lattices of edge length L with periodic boundary conditions as a function of inverse temperature β. Solution method: The code uses population annealing, a hybrid method combining Markov chain updates with population control. 
The code is implemented for NVIDIA GPUs using the CUDA language and employs advanced techniques such as multi-spin coding, adaptive temperature steps and multi-histogram reweighting. Additional comments: Code repository at https://github.com/LevBarash/PAising. The system size and the size of the population of replicas are limited by the memory of the GPU device used. For the default parameter values used in the sample programs, L = 64, θ = 100, β0 = 0, βf = 1, Δβ = 0.005, R = 20 000, a typical run time on an NVIDIA Tesla K80 GPU is 151 seconds for the single-spin-coded (SSC) and 17 seconds for the multi-spin-coded (MSC) program (see Section 2 for a description of these parameters).
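The algorithmic skeleton of population annealing (anneal, reweight, resample, equilibrate) can be shown on a toy 1D Ising chain. The published code treats the 2D model with CUDA multi-spin coding; this single-threaded Python sketch only mirrors the method's structure, with made-up sizes and an arbitrary schedule.

```python
import math
import random

L, R = 16, 200          # chain length and population size (illustrative)
random.seed(1)

def energy(s):
    # Periodic 1D Ising chain: E = -sum_i s_i * s_{i+1}
    return -sum(s[i] * s[(i + 1) % L] for i in range(L))

def sweep(s, beta):
    # One Metropolis sweep at inverse temperature beta.
    for i in range(L):
        dE = 2 * s[i] * (s[i - 1] + s[(i + 1) % L])
        if dE <= 0 or random.random() < math.exp(-beta * dE):
            s[i] = -s[i]

pop = [[random.choice([-1, 1]) for _ in range(L)] for _ in range(R)]
beta, dbeta = 0.0, 0.05
for _ in range(40):                       # anneal from beta = 0 to 2.0
    # Reweight each replica by exp(-dbeta * E), then resample the population.
    w = [math.exp(-dbeta * energy(s)) for s in pop]
    pop = [list(s) for s in random.choices(pop, weights=w, k=R)]
    beta += dbeta
    for s in pop:                         # short equilibration at new beta
        for _ in range(2):
            sweep(s, beta)

e_mean = sum(energy(s) for s in pop) / (R * L)
```

At the final inverse temperature the per-spin energy approaches the 1D chain's low-temperature value near -1; the resampling step is what keeps the population representative as the temperature drops, and is the part that parallelizes so well on GPUs.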

  9. Use of the MATRIXx Integrated Toolkit on the Microwave Anisotropy Probe Attitude Control System

    NASA Technical Reports Server (NTRS)

    Ward, David K.; Andrews, Stephen F.; McComas, David C.; ODonnell, James R., Jr.

    1999-01-01

    Recent advances in analytical software tools allow the analysis, simulation, flight code, and documentation of an algorithm to be generated from a single source, all within one integrated analytical design package. NASA's Microwave Anisotropy Probe project has used one such package, Integrated Systems' MATRIXx suite, in the design of the spacecraft's Attitude Control System. The project's experience with the linear analysis, simulation, code generation, and documentation tools will be presented and compared with more traditional development tools. In particular, the quality of the flight software generated will be examined in detail. Finally, lessons learned on each of the tools will be shared.

  10. ARC integration into the NEAMS Workbench

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stauff, N.; Gaughan, N.; Kim, T.

    2017-01-01

    One of the objectives of the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Integration Product Line (IPL) is to facilitate the deployment of the high-fidelity codes developed within the program. The Workbench initiative was launched in FY-2017 by the IPL to facilitate the transition from conventional tools to high fidelity tools. The Workbench provides a common user interface for model creation, real-time validation, execution, output processing, and visualization for integrated codes.

  11. Application of Advanced Concepts and Techniques in Electromagnetic Topology Based Simulations: CRIPTE and Related Codes

    DTIC Science & Technology

    2008-12-01

    multiconductor transmission line theory. The per-unit-length capacitance, inductance, and characteristic impedance matrices generated from the companion LAPLACE...code based on the Method of Moments application, by meshing different sections of the multiconductor cable for capacitance and inductance matrices [21...conductors held together in four pairs and residing in the cable jacket. Each of the eight conductors was also designed with the per-unit-length resistance

  12. Low-temperature plasma simulations with the LSP PIC code

    NASA Astrophysics Data System (ADS)

    Carlsson, Johan; Khrabrov, Alex; Kaganovich, Igor; Keating, David; Selezneva, Svetlana; Sommerer, Timothy

    2014-10-01

    The LSP (Large-Scale Plasma) PIC-MCC code has been used to simulate several low-temperature plasma configurations, including a gas switch for high-power AC/DC conversion, a glow discharge and a Hall thruster. Simulation results will be presented with an emphasis on code comparison and validation against experiment. High-voltage, direct-current (HVDC) power transmission is becoming more common as it can reduce construction costs and power losses. Solid-state power-electronics devices are presently used, but it has been proposed that gas switches could become a compact, less costly, alternative. A gas-switch conversion device would be based on a glow discharge, with a magnetically insulated cold cathode. Its operation is similar to that of a sputtering magnetron, but with much higher pressure (0.1 to 0.3 Torr) in order to achieve high current density. We have performed 1D (axial) and 2D (axial/radial) simulations of such a gas switch using LSP. The 1D results were compared with results from the EDIPIC code. To test and compare the collision models used by the LSP and EDIPIC codes in more detail, a validation exercise was performed for the cathode fall of a glow discharge. We will also present some 2D (radial/azimuthal) LSP simulations of a Hall thruster. The information, data, or work presented herein was funded in part by the Advanced Research Projects Agency-Energy (ARPA-E), U.S. Department of Energy, under Award Number DE-AR0000298.

  13. Assessment of a hybrid finite element and finite volume code for turbulent incompressible flows

    DOE PAGES

    Xia, Yidong; Wang, Chuanjin; Luo, Hong; ...

    2015-12-15

    Hydra-TH is a hybrid finite-element/finite-volume incompressible/low-Mach flow simulation code based on the Hydra multiphysics toolkit being developed and used for thermal-hydraulics applications. In the present work, a suite of verification and validation (V&V) test problems for Hydra-TH was defined to meet the design requirements of the Consortium for Advanced Simulation of Light Water Reactors (CASL). The intent of this test problem suite is to provide baseline comparison data that demonstrate the performance of the Hydra-TH solution methods. The simulation problems vary in complexity from laminar to turbulent flows. A set of RANS and LES turbulence models were used in the simulation of four classical test problems. Numerical results obtained by Hydra-TH agreed well with either the available analytical solution or experimental data, indicating the verified and validated implementation of these turbulence models in Hydra-TH. Where possible, we have attempted some form of solution verification to identify sensitivities in the solution methods and to suggest best practices when using the Hydra-TH code.

  14. Assessment of a hybrid finite element and finite volume code for turbulent incompressible flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xia, Yidong; Wang, Chuanjin; Luo, Hong

    Hydra-TH is a hybrid finite-element/finite-volume incompressible/low-Mach flow simulation code based on the Hydra multiphysics toolkit being developed and used for thermal-hydraulics applications. In the present work, a suite of verification and validation (V&V) test problems for Hydra-TH was defined to meet the design requirements of the Consortium for Advanced Simulation of Light Water Reactors (CASL). The intent of this test problem suite is to provide baseline comparison data that demonstrate the performance of the Hydra-TH solution methods. The simulation problems vary in complexity from laminar to turbulent flows. A set of RANS and LES turbulence models were used in the simulation of four classical test problems. Numerical results obtained by Hydra-TH agreed well with either the available analytical solution or experimental data, indicating the verified and validated implementation of these turbulence models in Hydra-TH. Where possible, we have attempted some form of solution verification to identify sensitivities in the solution methods and to suggest best practices when using the Hydra-TH code.

  15. Direct Large-Scale N-Body Simulations of Planetesimal Dynamics

    NASA Astrophysics Data System (ADS)

    Richardson, Derek C.; Quinn, Thomas; Stadel, Joachim; Lake, George

    2000-01-01

    We describe a new direct numerical method for simulating planetesimal dynamics in which N ~ 10^6 or more bodies can be evolved simultaneously in three spatial dimensions over hundreds of dynamical times. This represents several orders of magnitude improvement in resolution over previous studies. The advance is made possible through modification of a stable and tested cosmological code optimized for massively parallel computers. However, owing to the excellent scalability and portability of the code, modest clusters of workstations can treat problems with N ~ 10^5 particles in a practical fashion. The code features algorithms for detection and resolution of collisions and takes into account the strong central force field and flattened Keplerian disk geometry of planetesimal systems. We demonstrate the range of problems that can be addressed by presenting simulations that illustrate oligarchic growth of protoplanets, planet formation in the presence of giant planet perturbations, the formation of the jovian moons, and orbital migration via planetesimal scattering. We also describe methods under development for increasing the timescale of the simulations by several orders of magnitude.
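A flavor of the dynamics such codes integrate can be given by a kick-drift-kick (leapfrog) step for a single body in a Keplerian central field, the kind of symplectic scheme suited to long planetesimal-disk integrations. This sketch is illustrative and unrelated to the paper's parallel tree code; units with GM = 1 are an assumption.

```python
import math

GM = 1.0  # gravitational parameter of the central body (assumed units)

def accel(x, y):
    """Acceleration from the central inverse-square force field."""
    r3 = (x * x + y * y) ** 1.5
    return -GM * x / r3, -GM * y / r3

def leapfrog(x, y, vx, vy, dt, steps):
    """Kick-drift-kick leapfrog: symplectic, so energy error stays bounded."""
    ax, ay = accel(x, y)
    for _ in range(steps):
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay   # half kick
        x += dt * vx; y += dt * vy                 # full drift
        ax, ay = accel(x, y)
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay   # half kick
    return x, y, vx, vy

# Circular orbit at r = 1 (v = 1, period 2*pi), integrated for ~one period.
x, y, vx, vy = leapfrog(1.0, 0.0, 0.0, 1.0, 0.001, 6283)
r = math.hypot(x, y)
e_total = 0.5 * (vx * vx + vy * vy) - GM / r   # should stay near -0.5
```

After a full period the radius and total energy return to their initial values to within the scheme's bounded O(dt^2) error, which is why symplectic integrators are preferred for evolving disks over hundreds of dynamical times.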

  16. Experimental and code simulation of a station blackout scenario for APR1400 with test facility ATLAS and MARS code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, X. G.; Kim, Y. S.; Choi, K. Y.

    2012-07-01

    An SBO (station blackout) experiment named SBO-01 was performed at the full-pressure IET (Integral Effect Test) facility ATLAS (Advanced Test Loop for Accident Simulation), which is scaled down from the APR1400 (Advanced Power Reactor 1400 MWe). In this study, the transient of SBO-01 is discussed and subdivided into three phases: the SG fluid loss phase, the RCS fluid loss phase, and the core coolant depletion and core heatup phase. In addition, the typical phenomena in the SBO-01 test - SG dryout, natural circulation, core coolant boiling, PRZ full, core heat-up - are identified. Furthermore, the SBO-01 test is reproduced by a MARS code calculation with the ATLAS model, which represents the ATLAS test facility. The experimental and calculated transients are then compared and discussed. The comparison reveals equipment malfunctions: SG leakage through the SG MSSV and a measurement error of the loop flow meter. As the ATLAS model is validated against the experimental results, it can be further employed to investigate other possible SBO scenarios and to study the scaling distortions in the ATLAS. (authors)

  17. (U) Ristra Next Generation Code Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hungerford, Aimee L.; Daniel, David John

    LANL’s Weapons Physics management (ADX) and ASC program office have defined a strategy for exascale-class application codes that follows two supportive, and mutually risk-mitigating, paths: evolution for established codes (with a strong pedigree within the user community) based upon existing programming paradigms (MPI+X); and Ristra (formerly known as NGC), a high-risk/high-reward push for a next-generation multi-physics, multi-scale simulation toolkit based on emerging advanced programming systems (with an initial focus on data-flow task-based models exemplified by Legion [5]). Development along these paths is supported by the ATDM, IC, and CSSE elements of the ASC program, with the resulting codes forming a common ecosystem, and with algorithm and code exchange between them anticipated. Furthermore, solution of some of the more challenging problems of the future will require a federation of codes working together, using established-pedigree codes in partnership with new capabilities as they come on line. The role of Ristra as the high-risk/high-reward path for LANL’s codes is fully consistent with its role in the Advanced Technology Development and Mitigation (ATDM) sub-program of ASC (see Appendix C), in particular its emphasis on evolving ASC capabilities through novel programming models and data management technologies.

  18. Solar dynamic power for the Space Station

    NASA Technical Reports Server (NTRS)

    Archer, J. S.; Diamant, E. S.

    1986-01-01

    This paper describes a computer code that provides a significant advance in the systems analysis capabilities for solar dynamic power modules. While the code can be used to advantage in the preliminary analysis of terrestrial solar dynamic modules, its real value lies in the adaptations that make it particularly useful for the conceptualization of optimized power modules for space applications. In particular, as illustrated in the paper, the code can be used to establish optimum values of concentrator diameter, concentrator surface roughness, concentrator rim angle and receiver aperture corresponding to the main heat cycle options - Organic Rankine and Brayton - and for certain receiver design options. The code can also be used to establish system sizing margins to account for the loss of reflectivity in orbit or the seasonal variation of insolation. By simulating the interactions among the major components of a solar dynamic module, and through simplified formulations of the major thermal-optic-thermodynamic interactions, the code adds a powerful, efficient and economical analytical tool to the repertory of techniques available for the design of advanced space power systems.

  19. Crashworthiness: Planes, trains, and automobiles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Logan, R.W.; Tokarz, F.J.; Whirley, R.G.

    The powerful DYNA3D computer code simulates the dynamic effects of stress waves traveling through structures. It is the most advanced modeling tool available to study crashworthiness problems and to analyze impacts. Now used by some 1000 companies, government research laboratories, and universities in the U.S. and abroad, DYNA3D is also a preeminent example of successful technology transfer. The initial interest in such a code was to simulate the structural response of weapons systems. The need was to model not the explosive or nuclear events themselves but rather the impacts of weapons systems with the ground, tracking the stress waves as they move through the object. This type of computer simulation augmented or, in certain cases, reduced the need for expensive and time-consuming crash testing.

  20. High-Energy Activation Simulation Coupling TENDL and SPACS with FISPACT-II

    NASA Astrophysics Data System (ADS)

    Fleming, Michael; Sublet, Jean-Christophe; Gilbert, Mark

    2018-06-01

    To address the needs of activation-transmutation simulation in incident-particle fields with energies above a few hundred MeV, the FISPACT-II code has been extended to splice TENDL standard ENDF-6 nuclear data with extended nuclear data forms. The JENDL-2007/HE and HEAD-2009 libraries were processed for FISPACT-II and used to demonstrate the capabilities of the new code version. Tests of the libraries and comparisons against both experimental yield data and the most recent intra-nuclear cascade model results demonstrate that there is need for improved nuclear data libraries up to and above 1 GeV. Simulations on lead targets show that important radionuclides, such as 148Gd, can vary by more than an order of magnitude where more advanced models find agreement within the experimental uncertainties.

  1. Qualification of Simulation Software for Safety Assessment of Sodium Cooled Fast Reactors. Requirements and Recommendations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Nicholas R.; Pointer, William David; Sieger, Matt

    2016-04-01

    The goal of this review is to enable the application of codes or software packages for safety assessment of advanced sodium-cooled fast reactor (SFR) designs. To address near-term programmatic needs, the authors focus on two objectives: first, identifying the software QA requirements that must be satisfied to enable the application of software to future safety analyses; and second, collecting best practices applied by other code development teams to minimize the cost and time of initial code qualification activities and to recommend a path to the stated goal.

  2. Computational Fluid Dynamics Technology for Hypersonic Applications

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.

    2003-01-01

    Several current challenges in computational fluid dynamics and aerothermodynamics for hypersonic vehicle applications are discussed. Example simulations are presented from code validation and code benchmarking efforts to illustrate capabilities and limitations. Opportunities to advance the state of the art in algorithms, grid generation and adaptation, and code validation are identified. Highlights of diverse efforts to address these challenges are then discussed. One such effort to re-engineer and synthesize the existing analysis capability in LAURA, VULCAN, and FUN3D provides context for these discussions. The critical (and evolving) role of agile software engineering practice in the capability enhancement process is also noted.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salko, Robert K; Sung, Yixing; Kucukboyaci, Vefa

    The Virtual Environment for Reactor Applications core simulator (VERA-CS) being developed by the Consortium for the Advanced Simulation of Light Water Reactors (CASL) includes coupled neutronics, thermal-hydraulics, and fuel temperature components with an isotopic depletion capability. The neutronics capability employed is based on MPACT, a three-dimensional (3-D) whole-core transport code. The thermal-hydraulics and fuel temperature models are provided by the COBRA-TF (CTF) subchannel code. As part of the CASL development program, the VERA-CS (MPACT/CTF) code system was applied to model and simulate the reactor core response with respect to the departure from nucleate boiling ratio (DNBR) at the limiting time step of a postulated pressurized water reactor (PWR) main steamline break (MSLB) event initiated at hot zero power (HZP), either with offsite power available and the reactor coolant pumps in operation (high-flow case) or without offsite power, where the reactor core is cooled through natural circulation (low-flow case). The VERA-CS simulation was based on core boundary conditions from RETRAN-02 system transient calculations and STAR-CCM+ computational fluid dynamics (CFD) core inlet distribution calculations. The evaluation indicated that the VERA-CS code system is capable of modeling and simulating quasi-steady-state reactor core response under the steamline break (SLB) accident condition, that the results are insensitive to uncertainties in the inlet flow distributions from the CFD simulations, and that the high-flow case is more DNB limiting than the low-flow case.

  4. Numerical Simulation of Flow in a Whirling Annular Seal and Comparison with Experiments

    NASA Technical Reports Server (NTRS)

    Athavale, M. M.; Hendricks, R. C.; Steinetz, B. M.

    1995-01-01

    The turbulent flow field in a simulated annular seal with a large clearance/radius ratio (0.015) and a whirling rotor was simulated using the advanced 3D CFD code SCISEAL. A circular whirl orbit with synchronous whirl was imposed on the rotor center. The flow field was rendered quasi-steady by a transformation to a rotating frame. The standard k-epsilon model with wall functions was used to treat the turbulence. Experimentally measured values of flow parameters were used to specify the seal inlet and exit boundary conditions. The computed flow field, in terms of velocity and pressure, is compared with the experimental measurements inside the seal. The agreement between the numerical results and the corrected experimental data is fair to good. The capability of current advanced CFD methodology to analyze this complex flow field is demonstrated. The methodology can also be extended to other whirl frequencies. Half- (or sub-) synchronous (fluid-film unstable motion) and synchronous (rotor centrifugal force unbalance) whirls are the most unstable whirl modes in turbomachinery seals, and the code's capability of simulating flows in steady as well as whirling seals will prove extremely useful in the design, analysis, and performance prediction of annular and other types of seals.

  5. Advanced functionality for radio analysis in the Offline software framework of the Pierre Auger Observatory

    NASA Astrophysics Data System (ADS)

    Abreu, P.; Aglietta, M.; Ahn, E. J.; Albuquerque, I. F. M.; Allard, D.; Allekotte, I.; Allen, J.; Allison, P.; Alvarez Castillo, J.; Alvarez-Muñiz, J.; Ambrosio, M.; Aminaei, A.; Anchordoqui, L.; Andringa, S.; Antičić, T.; Aramo, C.; Arganda, E.; Arqueros, F.; Asorey, H.; Assis, P.; Aublin, J.; Ave, M.; Avenier, M.; Avila, G.; Bäcker, T.; Balzer, M.; Barber, K. B.; Barbosa, A. F.; Bardenet, R.; Barroso, S. L. C.; Baughman, B.; Beatty, J. J.; Becker, B. R.; Becker, K. H.; Bellido, J. A.; Benzvi, S.; Berat, C.; Bertou, X.; Biermann, P. L.; Billoir, P.; Blanco, F.; Blanco, M.; Bleve, C.; Blümer, H.; Boháčová, M.; Boncioli, D.; Bonifazi, C.; Bonino, R.; Borodai, N.; Brack, J.; Brogueira, P.; Brown, W. C.; Bruijn, R.; Buchholz, P.; Bueno, A.; Burton, R. E.; Caballero-Mora, K. S.; Caramete, L.; Caruso, R.; Castellina, A.; Cataldi, G.; Cazon, L.; Cester, R.; Chauvin, J.; Chiavassa, A.; Chinellato, J. A.; Chou, A.; Chudoba, J.; Clay, R. W.; Coluccia, M. R.; Conceição, R.; Contreras, F.; Cook, H.; Cooper, M. J.; Coppens, J.; Cordier, A.; Cotti, U.; Coutu, S.; Covault, C. E.; Creusot, A.; Criss, A.; Cronin, J.; Curutiu, A.; Dagoret-Campagne, S.; Dallier, R.; Dasso, S.; Daumiller, K.; Dawson, B. R.; de Almeida, R. M.; de Domenico, M.; de Donato, C.; de Jong, S. J.; de La Vega, G.; de Mello Junior, W. J. M.; de Mello Neto, J. R. T.; de Mitri, I.; de Souza, V.; de Vries, K. D.; Decerprit, G.; Del Peral, L.; Deligny, O.; Dembinski, H.; Denkiewicz, A.; di Giulio, C.; Diaz, J. C.; Díaz Castro, M. L.; Diep, P. N.; Dobrigkeit, C.; D'Olivo, J. C.; Dong, P. N.; Dorofeev, A.; Dos Anjos, J. C.; Dova, M. T.; D'Urso, D.; Dutan, I.; Ebr, J.; Engel, R.; Erdmann, M.; Escobar, C. O.; Etchegoyen, A.; Facal San Luis, P.; Falcke, H.; Farrar, G.; Fauth, A. C.; Fazzini, N.; Ferguson, A. P.; Ferrero, A.; Fick, B.; Filevich, A.; Filipčič, A.; Fliescher, S.; Fracchiolla, C. E.; Fraenkel, E. D.; Fröhlich, U.; Fuchs, B.; Gamarra, R. 
F.; Gambetta, S.; García, B.; García Gámez, D.; Garcia-Pinto, D.; Gascon, A.; Gemmeke, H.; Gesterling, K.; Ghia, P. L.; Giaccari, U.; Giller, M.; Glass, H.; Gold, M. S.; Golup, G.; Gomez Albarracin, F.; Gómez Berisso, M.; Gonçalves, P.; Gonzalez, D.; Gonzalez, J. G.; Gookin, B.; Góra, D.; Gorgi, A.; Gouffon, P.; Gozzini, S. R.; Grashorn, E.; Grebe, S.; Griffith, N.; Grigat, M.; Grillo, A. F.; Guardincerri, Y.; Guarino, F.; Guedes, G. P.; Hague, J. D.; Hansen, P.; Harari, D.; Harmsma, S.; Harton, J. L.; Haungs, A.; Hebbeker, T.; Heck, D.; Herve, A. E.; Hojvat, C.; Holmes, V. C.; Homola, P.; Hörandel, J. R.; Horneffer, A.; Hrabovský, M.; Huege, T.; Insolia, A.; Ionita, F.; Italiano, A.; Jiraskova, S.; Kadija, K.; Kampert, K. H.; Karhan, P.; Karova, T.; Kasper, P.; Kégl, B.; Keilhauer, B.; Keivani, A.; Kelley, J. L.; Kemp, E.; Kieckhafer, R. M.; Klages, H. O.; Kleifges, M.; Kleinfeller, J.; Knapp, J.; Koang, D.-H.; Kotera, K.; Krohm, N.; Krömer, O.; Kruppke-Hansen, D.; Kuehn, F.; Kuempel, D.; Kulbartz, J. K.; Kunka, N.; La Rosa, G.; Lachaud, C.; Lautridou, P.; Leão, M. S. A. B.; Lebrun, D.; Lebrun, P.; Leigui de Oliveira, M. A.; Lemiere, A.; Letessier-Selvon, A.; Lhenry-Yvon, I.; Link, K.; López, R.; Lopez Agüera, A.; Louedec, K.; Lozano Bahilo, J.; Lucero, A.; Ludwig, M.; Lyberis, H.; Macolino, C.; Maldera, S.; Mandat, D.; Mantsch, P.; Mariazzi, A. G.; Marin, V.; Maris, I. C.; Marquez Falcon, H. R.; Marsella, G.; Martello, D.; Martin, L.; Martínez Bravo, O.; Mathes, H. J.; Matthews, J.; Matthews, J. A. J.; Matthiae, G.; Maurizio, D.; Mazur, P. O.; Medina-Tanco, G.; Melissas, M.; Melo, D.; Menichetti, E.; Menshikov, A.; Mertsch, P.; Meurer, C.; Mićanović, S.; Micheletti, M. I.; Miller, W.; Miramonti, L.; Mollerach, S.; Monasor, M.; Monnier Ragaigne, D.; Montanet, F.; Morales, B.; Morello, C.; Moreno, E.; Moreno, J. C.; Morris, C.; Mostafá, M.; Moura, C. A.; Mueller, S.; Muller, M. A.; Müller, G.; Münchmeyer, M.; Mussa, R.; Navarra, G.; Navarro, J. 
L.; Navas, S.; Necesal, P.; Nellen, L.; Nelles, A.; Nhung, P. T.; Nierstenhoefer, N.; Nitz, D.; Nosek, D.; Nožka, L.; Nyklicek, M.; Oehlschläger, J.; Olinto, A.; Oliva, P.; Olmos-Gilbaja, V. M.; Ortiz, M.; Pacheco, N.; Pakk Selmi-Dei, D.; Palatka, M.; Pallotta, J.; Palmieri, N.; Parente, G.; Parizot, E.; Parra, A.; Parrisius, J.; Parsons, R. D.; Pastor, S.; Paul, T.; Pech, M.; PeĶala, J.; Pelayo, R.; Pepe, I. M.; Perrone, L.; Pesce, R.; Petermann, E.; Petrera, S.; Petrinca, P.; Petrolini, A.; Petrov, Y.; Petrovic, J.; Pfendner, C.; Phan, N.; Piegaia, R.; Pierog, T.; Pieroni, P.; Pimenta, M.; Pirronello, V.; Platino, M.; Ponce, V. H.; Pontz, M.; Privitera, P.; Prouza, M.; Quel, E. J.; Rautenberg, J.; Ravel, O.; Ravignani, D.; Revenu, B.; Ridky, J.; Risse, M.; Ristori, P.; Rivera, H.; Riviére, C.; Rizi, V.; Robledo, C.; Rodrigues de Carvalho, W.; Rodriguez, G.; Rodriguez Martino, J.; Rodriguez Rojo, J.; Rodriguez-Cabo, I.; Rodríguez-Frías, M. D.; Ros, G.; Rosado, J.; Rossler, T.; Roth, M.; Rouillé-D'Orfeuil, B.; Roulet, E.; Rovero, A. C.; Rühle, C.; Salamida, F.; Salazar, H.; Salina, G.; Sánchez, F.; Santander, M.; Santo, C. E.; Santos, E.; Santos, E. M.; Sarazin, F.; Sarkar, S.; Sato, R.; Scharf, N.; Scherini, V.; Schieler, H.; Schiffer, P.; Schmidt, A.; Schmidt, F.; Schmidt, T.; Scholten, O.; Schoorlemmer, H.; Schovancova, J.; Schovánek, P.; Schroeder, F.; Schulte, S.; Schuster, D.; Sciutto, S. J.; Scuderi, M.; Segreto, A.; Semikoz, D.; Settimo, M.; Shadkam, A.; Shellard, R. C.; Sidelnik, I.; Sigl, G.; Śmiałkowski, A.; Šmída, R.; Snow, G. R.; Sommers, P.; Sorokin, J.; Spinka, H.; Squartini, R.; Stapleton, J.; Stasielak, J.; Stephan, M.; Stutz, A.; Suarez, F.; Suomijärvi, T.; Supanitsky, A. D.; Šuša, T.; Sutherland, M. S.; Swain, J.; Szadkowski, Z.; Szuba, M.; Tamashiro, A.; Tapia, A.; Taşcău, O.; Tcaciuc, R.; Tegolo, D.; Thao, N. T.; Thomas, D.; Tiffenberg, J.; Timmermans, C.; Tiwari, D. K.; Tkaczyk, W.; Todero Peixoto, C. 
J.; Tomé, B.; Tonachini, A.; Travnicek, P.; Tridapalli, D. B.; Tristram, G.; Trovato, E.; Tueros, M.; Ulrich, R.; Unger, M.; Urban, M.; Valdés Galicia, J. F.; Valiño, I.; Valore, L.; van den Berg, A. M.; Vargas Cárdenas, B.; Vázquez, J. R.; Vázquez, R. A.; Veberič, D.; Verzi, V.; Videla, M.; Villaseñor, L.; Wahlberg, H.; Wahrlich, P.; Wainberg, O.; Warner, D.; Watson, A. A.; Weber, M.; Weidenhaupt, K.; Weindl, A.; Westerhoff, S.; Whelan, B. J.; Wieczorek, G.; Wiencke, L.; Wilczyńska, B.; Wilczyński, H.; Will, M.; Williams, C.; Winchen, T.; Winders, L.; Winnick, M. G.; Wommer, M.; Wundheiler, B.; Yamamoto, T.; Younk, P.; Yuan, G.; Zamorano, B.; Zas, E.; Zavrtanik, D.; Zavrtanik, M.; Zaw, I.; Zepeda, A.; Ziolkowski, M.

    2011-04-01

    The advent of the Auger Engineering Radio Array (AERA) necessitates the development of a powerful framework for the analysis of radio measurements of cosmic ray air showers. As AERA performs “radio-hybrid” measurements of air shower radio emission in coincidence with the surface particle detectors and fluorescence telescopes of the Pierre Auger Observatory, the radio analysis functionality had to be incorporated in the existing hybrid analysis solutions for fluorescence and surface detector data. This goal has been achieved in a natural way by extending the existing Auger Offline software framework with radio functionality. In this article, we lay out the design, highlights and features of the radio extension implemented in the Auger Offline framework. Its functionality has achieved a high degree of sophistication and offers advanced features such as vectorial reconstruction of the electric field, advanced signal processing algorithms, a transparent and efficient handling of FFTs, a very detailed simulation of detector effects, and the read-in of multiple data formats including data from various radio simulation codes. The source code of this radio functionality can be made available to interested parties on request.
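
    As a generic illustration of the kind of frequency-domain processing such a framework performs (this is not the Offline API; only the 30-80 MHz band edges are taken from AERA's operating band), a sampled voltage trace can be band-pass filtered via an FFT:

    ```python
    import numpy as np

    # Hedged illustration, not Offline's actual interface: band-pass
    # filtering of a radio trace in the frequency domain, the kind of
    # operation a framework with transparent FFT handling provides.

    def bandpass(trace, f_sample, f_lo, f_hi):
        """Zero all Fourier components outside [f_lo, f_hi] (in Hz)."""
        spec = np.fft.rfft(trace)
        freqs = np.fft.rfftfreq(len(trace), d=1.0 / f_sample)
        spec[(freqs < f_lo) | (freqs > f_hi)] = 0.0
        return np.fft.irfft(spec, n=len(trace))

    # A 55 MHz tone sampled at 400 MHz survives a 30-80 MHz band-pass:
    fs = 400e6
    t = np.arange(1024) / fs
    sig = np.sin(2 * np.pi * 55e6 * t)
    out = bandpass(sig, fs, 30e6, 80e6)
    ```

    An out-of-band tone (e.g. 10 MHz) would be suppressed by the same call, which is the basic operation behind rejecting narrow-band radio interference.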

  6. Advanced ballistic range technology

    NASA Technical Reports Server (NTRS)

    Yates, Leslie A.

    1993-01-01

    Optical images, such as experimental interferograms, schlieren, and shadowgraphs, are routinely used to identify and locate features in experimental flow fields and for validating computational fluid dynamics (CFD) codes. Interferograms can also be used for comparing experimental and computed integrated densities. By constructing these optical images from flow-field simulations, one-to-one comparisons of computation and experiment are possible. During the period from February 1, 1992, to November 30, 1992, work has continued on the development of CISS (Constructed Interferograms, Schlieren, and Shadowgraphs), a code that constructs images from ideal- and real-gas flow-field simulations. In addition, research connected with the automated film-reading system and the proposed reactivation of the radiation facility has continued.
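
    The principle behind constructing an interferogram from a flow-field simulation can be sketched with a Gladstone-Dale line integral: the fringe phase shift of a ray is proportional to the integrated density difference along its path. The grid, constants, and wavelength below are illustrative assumptions, not CISS internals.

    ```python
    import math

    # Hedged sketch: fringe phase from a computed density field via the
    # Gladstone-Dale relation. All parameter values are illustrative.

    def fringe_phase(densities, dl, k_gd=2.3e-4, rho_ref=1.2, wavelength=633e-9):
        """Phase shift (radians) for a ray crossing cells of given density.

        densities: gas density [kg/m^3] in each cell along the ray
        dl:        path length per cell [m]
        k_gd:      Gladstone-Dale constant [m^3/kg] (illustrative)
        rho_ref:   reference (undisturbed) density [kg/m^3]
        """
        opd = sum(k_gd * (rho - rho_ref) * dl for rho in densities)
        return 2.0 * math.pi * opd / wavelength

    # Two-beam interference intensity for one ray through a density bump:
    phase = fringe_phase([1.2, 1.5, 1.8, 1.5, 1.2], dl=0.01)
    intensity = 0.5 * (1.0 + math.cos(phase))
    ```

    Repeating this for every ray through the computational domain yields a synthetic fringe pattern that can be compared one-to-one with an experimental interferogram.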

  7. Center for Extended Magnetohydrodynamics Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramos, Jesus

    This researcher participated in the DOE-funded Center for Extended Magnetohydrodynamics Modeling (CEMM), a multi-institutional collaboration led by the Princeton Plasma Physics Laboratory with Dr. Stephen Jardin as the overall Principal Investigator. The project developed advanced simulation tools to study the nonlinear macroscopic dynamics of magnetically confined plasmas. The collaborative effort focused on the development of two large numerical simulation codes, M3D-C1 and NIMROD, and their application to a wide variety of problems. Dr. Ramos was responsible for theoretical aspects of the project, deriving consistent sets of model equations applicable to weakly collisional plasmas and devising test problems for verification of the numerical codes. This activity was funded for twelve years.

  8. Overview of the relevant CFD work at Thiokol Corporation

    NASA Technical Reports Server (NTRS)

    Chwalowski, Pawel; Loh, Hai-Tien

    1992-01-01

    An in-house-developed proprietary advanced computational fluid dynamics code called SHARP (Trademark) is the primary tool for many flow simulations and design analyses. The SHARP code is a time-dependent, two-dimensional (2-D) axisymmetric numerical solution technique for the compressible Navier-Stokes equations. The solution technique in SHARP uses a vectorizable, implicit, finite volume scheme, second-order accurate in time and space, based on upwind flux-difference splitting of a Roe-type approximate Riemann solver, Van Leer's flux-vector splitting, and a fourth-order artificial dissipation scheme with preconditioning to accelerate the flow solution. Turbulence is simulated by an algebraic model and, ultimately, by a kappa-epsilon model. Other capabilities of the code include 2-D two-phase Lagrangian particle tracking and cell blockages. Extensive development and testing have been conducted on the 3-D version of the code with flow, combustion, and turbulence interactions. The emphasis here is on specific applications of SHARP in solid rocket motor design. Information is given in viewgraph form.

  9. Optical fiber evanescent absorption sensors for high-temperature gas sensing in advanced coal-fired power plants

    NASA Astrophysics Data System (ADS)

    Buric, Michael P.; Ohodnicky, Paul R.; Duy, Janice

    2012-10-01

    Modern advanced energy systems such as coal-fired power plants, gasifiers, and similar infrastructure present some of the most challenging harsh environments for sensors. The power industry would benefit from new ultra-high-temperature devices capable of surviving in hot, corrosive environments for embedded sensing at the highest-value locations. For these applications, we are currently exploring optical fiber evanescent wave absorption spectroscopy (EWAS) based sensors consisting of high-temperature core materials integrated with novel high-temperature gas-sensitive cladding materials. Mathematical simulations can assist in sensor development, and we describe a simulation code that assumes a single thick cladding layer with gas-sensitive optical constants. Recent work has demonstrated that Au-nanoparticle-incorporated metal oxides show a potentially useful response for high-temperature optical gas sensing through the sensitivity of the localized surface plasmon resonance absorption peak to ambient atmospheric conditions. Hence, the simulation code has been applied to understand how such a response can be exploited in an optical-fiber-based EWAS sensor configuration, and we demonstrate how the interrogation conditions can be chosen to optimize the sensing response in such materials.
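
    A minimal form of the single-cladding-layer response described above is a Beer-Lambert law in which only the evanescent fraction of the guided power sees the gas-dependent cladding absorption. This is a sketch under that assumption; every numerical value below is invented for illustration and is not from the paper's simulation code.

    ```python
    import math

    # Hedged sketch: evanescent-wave absorption sensing. Transmission
    # follows exp(-eta * alpha * L), where eta is the (assumed) fraction
    # of guided power in the cladding and alpha is the gas-dependent
    # cladding absorption coefficient. All values are placeholders.

    def ewas_transmission(alpha_cm, eta, length_cm):
        """Transmitted power fraction through a sensing fiber section."""
        return math.exp(-eta * alpha_cm * length_cm)

    # Toy comparison of two ambient gas states that shift the cladding's
    # plasmon absorption peak (both alpha values are made up):
    t_air = ewas_transmission(alpha_cm=0.02, eta=0.05, length_cm=10.0)
    t_gas = ewas_transmission(alpha_cm=0.08, eta=0.05, length_cm=10.0)
    signal = t_air / t_gas   # ratio > 1 indicates a detectable change
    ```

    In this picture, optimizing the interrogation conditions amounts to maximizing the contrast in eta-weighted absorption between the two ambient states.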

  10. Modeling rapidly spinning, merging black holes with numerical relativity for the era of first gravitational-wave observations

    NASA Astrophysics Data System (ADS)

    Lovelace, Geoffrey; Simulating eXtreme Collaboration; LIGO Scientific Collaboration

    2016-03-01

    The Advanced Laser Interferometer Gravitational-Wave Observatory (Advanced LIGO) began searching for gravitational waves in September 2015, with three times the sensitivity of the initial LIGO experiment. Merging black holes are among the most promising sources of gravitational waves for Advanced LIGO, but near the time of merger, the emitted waves can only be computed using numerical relativity. In this talk, I will present new numerical-relativity simulations of merging black holes, made using the Spectral Einstein Code [black-holes.org/SpEC.html], including cases with black-hole spins that are nearly as fast as possible. I will discuss how such simulations will be able to rapidly follow up gravitational-wave observations, improving our understanding of the waves' sources.

  11. Advanced Aerospace Materials by Design

    NASA Technical Reports Server (NTRS)

    Srivastava, Deepak; Djomehri, Jahed; Wei, Chen-Yu

    2004-01-01

    Advances in the emerging fields of nanophase thermal and structural composite materials, materials with embedded sensors and actuators for morphing structures, lightweight composite materials for energy and power storage, and large-surface-area materials for in-situ resource generation and waste recycling are expected to revolutionize the capabilities of virtually every system in future robotic and human Moon and Mars exploration missions. A high-performance multiscale simulation platform, drawing on the computational capabilities and resources of Columbia, the new supercomputer, is being developed to discover, validate, and prototype the next generation of such advanced materials. This exhibit describes the porting and scaling of multiscale physics-based core computer simulation codes for discovering and designing carbon nanotube-polymer composite materials for lightweight load-bearing structural and thermal-protection applications.

  12. Simulation of drift wave instability in field-reversed configurations using global magnetic geometry

    NASA Astrophysics Data System (ADS)

    Fulton, D. P.; Lau, C. K.; Lin, Z.; Tajima, T.; Holod, I.; the TAE Team

    2016-10-01

    Minimizing transport in the field-reversed configuration (FRC) is essential to enable FRC-based fusion reactors. Recently, significant progress on advanced beam-driven FRCs in C-2 and C-2U (at Tri Alpha Energy) has provided opportunities to study transport properties using Doppler backscattering (DBS) measurements of turbulent fluctuations and kinetic particle-in-cell simulations of drift waves in realistic equilibria via the Gyrokinetic Toroidal Code (GTC). Both measurements and simulations indicate relatively small fluctuations in the scrape-off layer (SOL). In the FRC core, local single-flux-surface simulations reveal strong stabilization, while experiments indicate quiescent but finite fluctuations. One possible explanation is that turbulence originates in the SOL and propagates at very low levels across the separatrix into the core. To test this hypothesis, a significant effort has been made to develop A New Code (ANC) based on the GTC physics formulation but using cylindrical coordinates that span the magnetic separatrix, including both the core and the SOL. Here, we present first results from global ANC simulations.

  13. Optimization of automotive Rankine cycle waste heat recovery under various engine operating condition

    NASA Astrophysics Data System (ADS)

    Punov, Plamen; Milkov, Nikolay; Danel, Quentin; Perilhon, Christelle; Podevin, Pierre; Evtimov, Teodossi

    2017-02-01

    An optimization study of the Rankine cycle as a function of diesel engine operating mode is presented. The Rankine cycle here is studied as a waste heat recovery system that uses the engine exhaust gases as its heat source. The exhaust gas parameters (temperature, mass flow, and composition) were defined by means of numerical simulation in the advanced simulation software AVL Boost; the engine model was previously validated, and the Vibe function parameters were defined as a function of engine load. The Rankine cycle output power and efficiency were estimated numerically with a simulation code written in Python(x,y). This code includes a discretized heat exchanger model and simplified models of the pump and the expander based on their isentropic efficiencies. The Rankine cycle simulation revealed the optimum values of working-fluid mass flow and evaporation pressure for each heat-source condition, yielding the optimal Rankine cycle performance over the engine operating map.
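
    The simplified pump/expander treatment the abstract describes, in which actual work is scaled from the isentropic enthalpy change by an efficiency, can be sketched as follows. The enthalpy values, mass flow, and efficiencies below are hypothetical placeholders, not data from the study.

    ```python
    # Hedged sketch of a Rankine-cycle net-power estimate using
    # isentropic efficiencies for the expander and pump. All numerical
    # values are illustrative, not from the cited work.

    def rankine_net_power(m_dot, h_in, h_out_s, h_pump_in, h_pump_out_s,
                          eta_exp=0.70, eta_pump=0.80):
        """Net cycle power [kW] from ideal (isentropic) enthalpy changes.

        Expander: actual work = eta_exp * isentropic enthalpy drop.
        Pump:     actual work = isentropic enthalpy rise / eta_pump.
        Enthalpies in kJ/kg, mass flow in kg/s.
        """
        w_expander = eta_exp * (h_in - h_out_s)          # kJ/kg delivered
        w_pump = (h_pump_out_s - h_pump_in) / eta_pump   # kJ/kg consumed
        return m_dot * (w_expander - w_pump)             # kW

    # Example with made-up working-fluid states:
    p_net = rankine_net_power(m_dot=0.05, h_in=2800.0, h_out_s=2500.0,
                              h_pump_in=400.0, h_pump_out_s=405.0)
    ```

    An optimizer then sweeps mass flow and evaporation pressure, re-evaluating the isentropic states from fluid property tables at each candidate point.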

  14. Analysis of PANDA Passive Containment Cooling Steady-State Tests with the Spectra Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stempniewicz, Marek M

    2000-07-15

    Results of a post-test simulation of the PANDA passive containment cooling (PCC) steady-state tests (S-series tests), performed at the PANDA facility at the Paul Scherrer Institute, Switzerland, are presented. The simulation was performed using the computer code SPECTRA, a thermal-hydraulic code designed specifically for analyzing the containment behavior of nuclear power plants. Results of the present calculations are compared to the measurement data as well as to results obtained earlier with the codes MELCOR, TRAC-BF1, and TRACG. The calculated PCC efficiencies are somewhat lower than the measured values; a similar underestimation had been obtained in the past with the other computer codes. To explain this difference, it is postulated that condensate entering the tubes forms a stream of liquid in one or two tubes, leaving most of the tubes unaffected, and that the condensate entering the water box falls in the form of droplets. With these assumptions, the results calculated with SPECTRA are close to the experimental data. It is concluded that the SPECTRA code is a suitable tool for analyzing the containments of advanced reactors equipped with passive containment cooling systems.

  15. RELAP-7 Closure Correlations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zou, Ling; Berry, R. A.; Martineau, R. C.

    The RELAP-7 code is the next-generation nuclear reactor system safety analysis code being developed at the Idaho National Laboratory (INL). The code is based on INL's modern scientific software development framework, MOOSE (Multi-Physics Object Oriented Simulation Environment). The overall design goal of RELAP-7 is to take advantage of the previous thirty years of advancements in computer architecture, software design, numerical integration methods, and physical models. The end result will be a reactor systems analysis capability that retains and improves upon the capabilities of RELAP5 and TRACE and extends their analysis capabilities to all reactor system simulation scenarios. The RELAP-7 code utilizes the well-posed seven-equation two-phase flow model for compressible two-phase flow. Closure models used in the TRACE code have been reviewed and selected to reflect the progress made during the past decades and to provide a basis for the closure correlations implemented in the RELAP-7 code. This document summarizes the closure correlations currently implemented in RELAP-7, including sub-grid models that describe interactions between the fluids and the flow channel, and interactions between the two phases.

  16. The SCEC/USGS dynamic earthquake rupture code verification exercise

    USGS Publications Warehouse

    Harris, R.A.; Barall, M.; Archuleta, R.; Dunham, E.; Aagaard, Brad T.; Ampuero, J.-P.; Bhat, H.; Cruz-Atienza, Victor M.; Dalguer, L.; Dawson, P.; Day, S.; Duan, B.; Ely, G.; Kaneko, Y.; Kase, Y.; Lapusta, N.; Liu, Yajing; Ma, S.; Oglesby, D.; Olsen, K.; Pitarka, A.; Song, S.; Templeton, E.

    2009-01-01

    Numerical simulations of earthquake rupture dynamics are now common, yet it has been difficult to test the validity of these simulations because there have been few field observations and no analytic solutions with which to compare the results. This paper describes the Southern California Earthquake Center/U.S. Geological Survey (SCEC/USGS) Dynamic Earthquake Rupture Code Verification Exercise, in which codes that simulate spontaneous rupture dynamics in three dimensions are evaluated and their results compared using Web-based tools. This is the first time that a broad and rigorous examination of numerous spontaneous rupture codes has been performed, a significant advance in this science. The automated process developed to attain this achievement provides for a future in which testing of codes is easily accomplished.

    Scientists who use computer simulations to understand earthquakes utilize a range of techniques. Most of these assume that earthquakes are caused by slip at depth on faults in the Earth, but thereafter the strategies vary. Among the methods used in earthquake mechanics studies are kinematic approaches and dynamic approaches. The kinematic approach uses a computer code that prescribes the spatial and temporal evolution of slip on the causative fault (or faults). These simulations are very helpful, especially since they can be used in seismic data inversions to relate the ground motions recorded in the field to slip on the fault(s) at depth. However, kinematic solutions generally provide no insight into the physics driving the fault slip or into why the involved fault(s) slipped that much (or that little). In other words, kinematic solutions may lack the information about the physical dynamics of earthquake rupture that will be most helpful in forecasting future events.

    To help address this issue, some researchers use computer codes to numerically simulate earthquakes and construct dynamic, spontaneous rupture (hereafter called "spontaneous rupture") solutions. For these numerical simulations, rather than prescribing the slip function at each location on the fault(s), only the friction constitutive properties and initial stress conditions are prescribed. The subsequent stresses and fault slip spontaneously evolve over time as part of the elastodynamic solution. Therefore, spontaneous rupture computer simulations of earthquakes allow us to include everything that we know, or think we know, about earthquake dynamics and to test these ideas against earthquake observations.
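
    The distinction between prescribing slip and prescribing only friction and initial stress can be illustrated with a toy single-degree-of-freedom slider: only a linear slip-weakening friction law and an initial stress are specified, and the slip history emerges from the equations of motion. This is a sketch, not any exercise benchmark; all parameter values are illustrative.

    ```python
    # Hedged sketch: "spontaneous rupture" reduced to one spring-block
    # slider with linear slip-weakening friction. Slip is not prescribed;
    # it evolves from the prescribed friction law and initial stress.
    # All parameters are non-dimensional placeholders.

    def slide(tau0=0.6, mu_s=0.7, mu_d=0.4, dc=0.1, k=1.0, m=1.0,
              sigma_n=1.0, dt=1e-3, steps=20000):
        """Integrate block motion (semi-implicit Euler); returns final slip.

        Frictional strength = sigma_n * mu(slip), with mu dropping
        linearly from mu_s to mu_d over the slip-weakening distance dc.
        A small stress bump (0.15) nucleates rupture, as in dynamic models.
        """
        slip, v = 0.0, 0.0
        for _ in range(steps):
            mu = mu_s - (mu_s - mu_d) * min(slip / dc, 1.0)
            strength = sigma_n * mu
            stress = tau0 + 0.15 - k * slip   # elastic unloading with slip
            a = (stress - strength) / m if (v > 0 or stress > strength) else 0.0
            v = max(v + a * dt, 0.0)          # no back-slip allowed
            slip += v * dt
        return slip
    ```

    With the default parameters the block weakens, runs away, overshoots its static equilibrium, and arrests; lowering the initial stress below the static strength produces no slip at all, which is exactly the nucleation sensitivity spontaneous-rupture codes must capture.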

  17. Simulation of beam-induced plasma in gas-filled rf cavities

    DOE PAGES

    Yu, Kwangmin; Samulyak, Roman; Yonehara, Katsuya; ...

    2017-03-07

    Processes occurring in a radio-frequency (rf) cavity filled with high-pressure gas and interacting with proton beams have been studied via advanced numerical simulations. The simulations support the experimental program on the hydrogen gas-filled rf cavity in the MuCool Test Area (MTA) at Fermilab, and broader research on the design of muon cooling devices. SPACE, a 3D electromagnetic particle-in-cell (EM-PIC) code with atomic physics support, was used in the simulation studies. Plasma dynamics in the rf cavity have been studied, including the ionization of neutral gas by proton beams, plasma loading of the rf cavity, and atomic processes in the plasma such as electron-ion and ion-ion recombination and electron attachment to dopant molecules. Through comparison with experiments in the MTA, the simulations quantified several uncertain plasma properties, such as effective recombination rates and the attachment time of electrons to dopant molecules, and achieved very good agreement with experiments on plasma loading and related processes. The experimentally validated SPACE code is thus capable of predictive simulations of muon cooling devices.
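
    The competition between beam ionization and the loss channels named above can be sketched as a zero-dimensional rate balance. This is an illustrative toy, not the SPACE model: the source strength, recombination coefficient, and attachment time below are arbitrary placeholders, not measured MTA values.

    ```python
    # Hedged sketch: 0-D electron-density balance in a beam-ionized,
    # dopant-loaded gas. dn/dt = source - beta*n^2 - n/tau_att, i.e.
    # ionization minus recombination minus attachment. All rate
    # constants are placeholders.

    def electron_density(source, beta, tau_att, t_end, dt=1e-6, n0=0.0):
        """Explicitly integrate the electron-density rate equation."""
        n = n0
        t = 0.0
        while t < t_end:
            n += dt * (source - beta * n * n - n / tau_att)
            t += dt
        return n

    # With the beam on, the density relaxes to the equilibrium where the
    # ionization source balances the two loss channels:
    n_eq = electron_density(source=1e12, beta=1e-7, tau_att=1e-4, t_end=0.01)
    ```

    Fitting the measured loading curves with such a balance is, schematically, how effective recombination rates and attachment times can be extracted from experiment.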

  18. Assessment of the National Combustion Code

    NASA Technical Reports Server (NTRS)

    Liu, Nan-Suey; Iannetti, Anthony; Shih, Tsan-Hsing

    2007-01-01

    The advancements made during the last decade in combustion modeling, numerical simulation, and computing platforms have greatly facilitated the use of CFD-based tools in the development of combustion technology. Further development of verification, validation, and uncertainty quantification will have a profound impact on the reliability and utility of these CFD-based tools. The objectives of the present effort are to establish a baseline for the National Combustion Code (NCC) against experimental data, to document current capabilities, and to identify gaps for further improvement.

  19. Features of MCNP6 Relevant to Medical Radiation Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hughes, H. Grady III; Goorley, John T.

    2012-08-29

    MCNP (Monte Carlo N-Particle) is a general-purpose Monte Carlo code for simulating the transport of neutrons, photons, electrons, positrons, and, more recently, other fundamental particles and heavy ions. Over many years MCNP has found a wide range of applications in many different fields, including medical radiation physics. In this presentation we describe and illustrate a number of significant recently developed features in the current version of the code, MCNP6, having particular utility for medical physics. Among these are major extensions of the ability to simulate large, complex geometries, improvements in memory requirements and speed for large lattices, introduction of mesh-based isotopic reaction tallies, advances in radiography simulation, expanded variance-reduction capabilities (especially for pulse-height tallies), and a large number of enhancements in photon/electron transport.

  20. RELAP-7 Software Verification and Validation Plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Curtis L.; Choi, Yong-Joon; Zou, Ling

    This INL plan comprehensively describes the RELAP-7 software and documents the software, interface, and software design requirements for the application. The plan also describes the testing-based software verification and validation (SV&V) process: a set of specially designed software models used to test RELAP-7. The RELAP-7 (Reactor Excursion and Leak Analysis Program) code is a nuclear reactor system safety analysis code being developed at Idaho National Laboratory (INL). The code is based on INL's modern scientific software development framework, MOOSE (Multi-Physics Object-Oriented Simulation Environment). The overall design goal of RELAP-7 is to take advantage of the previous thirty years of advancements in computer architecture, software design, numerical integration methods, and physical models. The end result will be a reactor systems analysis capability that retains and improves upon RELAP5's capability and extends the analysis capability to all reactor system simulation scenarios.

  1. EO/IR scene generation open source initiative for real-time hardware-in-the-loop and all-digital simulation

    NASA Astrophysics Data System (ADS)

    Morris, Joseph W.; Lowry, Mac; Boren, Brett; Towers, James B.; Trimble, Darian E.; Bunfield, Dennis H.

    2011-06-01

    The US Army Aviation and Missile Research, Development and Engineering Center (AMRDEC) and the Redstone Test Center (RTC) have formed the Scene Generation Development Center (SGDC) to support the Department of Defense (DoD) open-source EO/IR scene generation initiative for real-time hardware-in-the-loop and all-digital simulation. Various branches of the DoD have invested significant resources in the development of advanced scene and target signature generation codes. The SGDC goal is to maintain unlimited government rights and controlled access to government open-source scene generation and signature codes. In addition, the SGDC provides development support to a multi-service community of test and evaluation (T&E) users, developers, and integrators in a collaborative environment. The SGDC has leveraged the DoD Defense Information Systems Agency (DISA) ProjectForge (https://Project.Forge.mil), which provides a collaborative development and distribution environment for the DoD community. The SGDC will develop and maintain several codes for tactical and strategic simulation, such as the Joint Signature Image Generator (JSIG), the Multi-spectral Advanced Volumetric Real-time Imaging Compositor (MAVRIC), and Office of the Secretary of Defense (OSD) Test and Evaluation Science and Technology (T&E/S&T) thermal modeling and atmospherics packages such as EOView, CHARM, and STAR. Other utility packages include ContinuumCore for real-time messaging and data management and IGStudio for run-time visualization and scenario generation.

  2. Improved Simulations of Astrophysical Plasmas: Computation of New Atomic Data

    NASA Technical Reports Server (NTRS)

    Gorczyca, Thomas W.; Korista, Kirk T.

    2005-01-01

    Our research program carries out state-of-the-art atomic physics calculations crucial to advancing our understanding of fundamental astrophysical problems. We redress present inadequacies in the atomic database in two important areas: dielectronic recombination, and inner-shell photoionization together with the multiple electron ejection and Auger/fluorescence decay that follow from it. All of these data are disseminated to the astrophysical community in the proper format for implementation in spectral simulation codes.

  3. Modeling of bubble dynamics in relation to medical applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amendt, P.A.; London, R.A.; Strauss, M.

    1997-03-12

    In various pulsed-laser medical applications, strong stress transients can be generated in advance of vapor bubble formation. To better understand the evolution of stress transients and subsequent formation of vapor bubbles, two-dimensional simulations are presented in channel or cylindrical geometry with the LATIS (LAser TISsue) computer code. Differences with one-dimensional modeling are explored, and simulated experimental conditions for vapor bubble generation are presented and compared with data. 22 refs., 8 figs.

  4. Nuclear Energy Advanced Modeling and Simulation (NEAMS) waste Integrated Performance and Safety Codes (IPSC) : gap analysis for high fidelity and performance assessment code development.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Joon H.; Siegel, Malcolm Dean; Arguello, Jose Guadalupe, Jr.

    2011-03-01

    This report describes a gap analysis performed in the process of developing the Waste Integrated Performance and Safety Codes (IPSC) in support of the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The goal of the Waste IPSC is to develop an integrated suite of computational modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings; (2) for a range of time scales and distances; (3) with appropriate consideration of the inherent uncertainties; and (4) in accordance with rigorous verification, validation, and software quality requirements. The gap analyses documented in this report were performed in two stages: an initial analysis to identify candidate codes and tools to support the development and integration of the Waste IPSC, and follow-on activities that delved into more detailed assessments of the various codes that were acquired, studied, and tested. The current Waste IPSC strategy is to acquire and integrate the necessary capabilities wherever feasible, and to develop only those capabilities that cannot be acquired or suitably integrated, verified, or validated. The gap analysis indicates that significant capabilities may already exist in the existing THC codes, although no single code is able to fully account for all physical and chemical processes involved in a waste disposal system. Large gaps exist in modeling chemical processes and their couplings with other processes; in particular, the coupling of chemical processes with flow transport and mechanical deformation remains challenging.
The data for extreme environments (e.g., for elevated temperature and high ionic strength media) that are needed for repository modeling are severely lacking. In addition, most existing reactive transport codes were developed for non-radioactive contaminants, and they need to be adapted to account for radionuclide decay and in-growth. Access to the source codes is generally limited. Because the problems of interest for the Waste IPSC are likely to result in relatively large computational models, a compact memory-usage footprint and a fast, robust solution procedure will be needed. A robust massively parallel processing (MPP) capability will also be required to provide reasonable turnaround times on the analyses that will be performed with the code. A performance assessment (PA) calculation for a waste disposal system generally requires a large number (hundreds to thousands) of model simulations to quantify the effect of model parameter uncertainties on the predicted repository performance. A set of codes for a PA calculation must therefore be sufficiently robust and fast in terms of code execution. A PA system as a whole must be able to provide multiple alternative models for a specific set of physical/chemical processes, so that users can choose various levels of modeling complexity based on their modeling needs; this requires the PA codes to be highly modularized, and most of the existing codes have difficulties meeting these requirements. Based on the gap analysis results, we have made the following recommendations for code selection and code development for the NEAMS Waste IPSC: (1) build fully coupled, high-fidelity THCMBR codes using the existing SIERRA codes (e.g., ARIA and ADAGIO) and platform, (2) use DAKOTA to build an enhanced performance assessment system (EPAS), and (3) build a modular code architecture and key code modules for performance assessments.
The key chemical calculation modules will be built by expanding the existing CANTERA capabilities as well as by extracting useful components from other existing codes.
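The "hundreds to thousands" of model runs that a PA calculation requires can be sketched with simple stratified (Latin-hypercube-style) sampling. The surrogate model and parameter ranges below are hypothetical stand-ins for illustration only, not the Waste IPSC codes or DAKOTA:

```python
import random
import statistics

def repository_release(permeability, solubility):
    # Hypothetical surrogate standing in for one full PA simulation:
    # predicted release scales with permeability and solubility.
    return 1e-6 * permeability * solubility

def sample_uniform(low, high, n, rng):
    # Stratified 1-D sampling: exactly one draw per equal-width stratum,
    # then shuffled so strata pair up randomly across parameters.
    width = (high - low) / n
    samples = [low + (i + rng.random()) * width for i in range(n)]
    rng.shuffle(samples)
    return samples

rng = random.Random(42)
n = 1000  # "hundreds to thousands" of model evaluations
perm = sample_uniform(1e-18, 1e-15, n, rng)  # assumed permeability range (m^2)
sol = sample_uniform(1e-9, 1e-6, n, rng)     # assumed solubility range (mol/L)

releases = [repository_release(k, s) for k, s in zip(perm, sol)]
print("mean release:", statistics.mean(releases))
print("95th percentile:", sorted(releases)[int(0.95 * n)])
```

The percentile of the output ensemble, not any single run, is what feeds the uncertainty statement in a PA.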

  5. Grid Standards and Codes | Grid Modernization | NREL

    Science.gov Websites

    … simulations that take advantage of advanced concepts such as hardware-in-the-loop testing … Projects: Accelerating Systems Integration Standards … The goal of this project is to develop streamlined and accurate methods for New York utilities to determine …

  6. Simulation of in-hospital pediatric medical emergencies and cardiopulmonary arrests: highlighting the importance of the first 5 minutes.

    PubMed

    Hunt, Elizabeth A; Walker, Allen R; Shaffner, Donald H; Miller, Marlene R; Pronovost, Peter J

    2008-01-01

    Outcomes of in-hospital pediatric cardiopulmonary arrest are dismal. Recent data suggest that the quality of basic and advanced life support delivered to adults is low and contributes to poor outcomes, but few data regarding pediatric events have been reported. The objectives of this study were to (1) measure the median elapsed time to initiate important resuscitation maneuvers in simulated pediatric medical emergencies (i.e., "mock codes") and (2) identify the types and frequency of errors committed during pediatric mock codes. A prospective, observational study was conducted of 34 consecutive hospital-based mock codes. A mannequin or computerized simulator was used to enact unannounced, simulated crisis situations involving children with respiratory distress or insufficiency, respiratory arrest, hemodynamic instability, and/or cardiopulmonary arrest. Assessment included the time elapsed to initiation of specific resuscitation maneuvers and deviation from American Heart Association guidelines. Among the 34 mock codes, the median time to assessment of airway and breathing was 1.3 minutes, to administration of oxygen was 2.0 minutes, to assessment of circulation was 4.0 minutes, to arrival of any physician was 3.0 minutes, and to arrival of the first member of the code team was 6.0 minutes. Among cardiopulmonary arrest scenarios, the elapsed time to initiation of compressions was 1.5 minutes and to request for a defibrillator was 4.3 minutes. In 75% of mock codes, the team deviated from American Heart Association pediatric basic life support protocols, and in 100% of mock codes there was a communication error. Alarming delays and deviations occur in the major components of pediatric resuscitation. Future educational and organizational interventions should focus on improving the quality of care delivered during the first 5 minutes of resuscitation.
Simulation of pediatric crises can identify targets for educational intervention to improve pediatric cardiopulmonary resuscitation and, ideally, outcomes.
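Summary statistics of the kind this study reports (median elapsed times, deviation rates) reduce to a few lines once events are logged. The event log below is hypothetical, invented for illustration; it is not the study data:

```python
from statistics import median

# Hypothetical log: elapsed minutes to each maneuver per mock code,
# plus whether the team deviated from the resuscitation protocol.
mock_codes = [
    {"airway": 1.0, "oxygen": 1.5, "compressions": 1.2, "deviation": True},
    {"airway": 1.3, "oxygen": 2.0, "compressions": 1.5, "deviation": True},
    {"airway": 2.1, "oxygen": 2.4, "compressions": 1.8, "deviation": False},
    {"airway": 0.9, "oxygen": 1.1, "compressions": 1.4, "deviation": True},
]

for maneuver in ("airway", "oxygen", "compressions"):
    times = [c[maneuver] for c in mock_codes]
    print(f"median time to {maneuver}: {median(times):.2f} min")

deviation_rate = sum(c["deviation"] for c in mock_codes) / len(mock_codes)
print(f"deviation rate: {deviation_rate:.0%}")
```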

  7. Probabilistic structural analysis methods and applications

    NASA Technical Reports Server (NTRS)

    Cruse, T. A.; Wu, Y.-T.; Dias, B.; Rajagopal, K. R.

    1988-01-01

    An advanced algorithm for simulating the probabilistic distribution of structural responses due to statistical uncertainties in loads, geometry, material properties, and boundary conditions is reported. The method effectively combines an advanced algorithm for calculating probability levels for multivariate problems (fast probability integration) together with a general-purpose finite-element code for stress, vibration, and buckling analysis. Application is made to a space propulsion system turbine blade for which the geometry and material properties are treated as random variables.
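A crude Monte Carlo stand-in for the probability calculation: sample the random loads, geometry, and material properties, evaluate the response, and count exceedances of a limit. The closed-form response, distributions, and stress limit below are invented for illustration; the actual method couples fast probability integration with a finite-element code:

```python
import math
import random

def blade_stress(load, thickness, modulus):
    # Hypothetical closed-form response standing in for a finite-element solve.
    return load / thickness * math.sqrt(modulus / 2.0e11)

rng = random.Random(0)
n = 20000
limit = 3.0e8  # assumed allowable stress (Pa)

exceed = 0
for _ in range(n):
    load = rng.gauss(1.0e6, 1.0e5)      # random load (N/m)
    thickness = rng.gauss(0.004, 2e-4)  # random geometry (m)
    modulus = rng.gauss(2.0e11, 1e10)   # random material property (Pa)
    if blade_stress(load, thickness, modulus) > limit:
        exceed += 1

print("P(stress > limit) ~", exceed / n)
```

Fast probability integration exists precisely because this brute-force estimate needs very many samples to resolve small failure probabilities.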

  8. Cross-separatrix Coupling in Nonlinear Global Electrostatic Turbulent Transport in C-2U

    NASA Astrophysics Data System (ADS)

    Lau, Calvin; Fulton, Daniel; Bao, Jian; Lin, Zhihong; Binderbauer, Michl; Tajima, Toshiki; Schmitz, Lothar; TAE Team

    2017-10-01

    In recent years, the progress of the C-2/C-2U advanced beam-driven field-reversed configuration (FRC) experiments at Tri Alpha Energy, Inc. has pushed FRCs to transport limited regimes. Understanding particle and energy transport is a vital step towards an FRC reactor, and two particle-in-cell microturbulence codes, the Gyrokinetic Toroidal Code (GTC) and A New Code (ANC), are being developed and applied toward this goal. Previous local electrostatic GTC simulations find the core to be robustly stable with drift-wave instability only in the scrape-off layer (SOL) region. However, experimental measurements showed fluctuations in both regions; one possibility is that fluctuations in the core originate from the SOL, suggesting the need for non-local simulations with cross-separatrix coupling. Current global ANC simulations with gyrokinetic ions and adiabatic electrons find that non-local effects (1) modify linear growth-rates and frequencies of instabilities and (2) allow instability to move from the unstable SOL to the linearly stable core. Nonlinear spreading is also seen prior to mode saturation. We also report on the progress of the first turbulence simulations in the SOL. This work is supported by the Norman Rostoker Fellowship.

  9. Advances in stellarator gyrokinetics

    NASA Astrophysics Data System (ADS)

    Helander, P.; Bird, T.; Jenko, F.; Kleiber, R.; Plunk, G. G.; Proll, J. H. E.; Riemann, J.; Xanthopoulos, P.

    2015-05-01

    Recent progress in the gyrokinetic theory of stellarator microinstabilities and turbulence simulations is summarized. The simulations have been carried out using two different gyrokinetic codes, the global particle-in-cell code EUTERPE and the continuum code GENE, which operates in the geometry of a flux tube or a flux surface but is local in the radial direction. Ion-temperature-gradient (ITG) and trapped-electron modes are studied and compared with their counterparts in axisymmetric tokamak geometry. Several interesting differences emerge. Because of the more complicated structure of the magnetic field, the fluctuations are much less evenly distributed over each flux surface in stellarators than in tokamaks. Instead of covering the entire outboard side of the torus, ITG turbulence is localized to narrow bands along the magnetic field in regions of unfavourable curvature, and the resulting transport depends on the normalized gyroradius ρ* even in radially local simulations. Trapped-electron modes can be significantly more stable than in typical tokamaks, because of the spatial separation of regions with trapped particles from those with bad magnetic curvature. Preliminary non-linear simulations in flux-tube geometry suggest differences in the turbulence levels in Wendelstein 7-X and a typical tokamak.

  10. Some Aspects of Advanced Tokamak Modeling in DIII-D

    NASA Astrophysics Data System (ADS)

    St John, H. E.; Petty, C. C.; Murakami, M.; Kinsey, J. E.

    2000-10-01

    We extend previous work(M. Murakami, et al., General Atomics Report GA-A23310 (1999).) done on time-dependent DIII-D advanced tokamak simulations by introducing theoretical confinement models rather than relying on power-balance-derived transport coefficients. We explore using NBCD and off-axis ECCD, together with a self-consistently aligned bootstrap current driven by the internal transport barrier dynamics generated with the GLF23 confinement model, to shape the hollow current profile and to maintain MHD-stable conditions. Our theoretical modeling approach uses measured DIII-D initial conditions to start the simulations in a smooth, consistent manner. This mitigates the troublesome long-lived perturbations in the ohmic current profile that are normally caused by inconsistent initial data. To achieve this goal, our simulation uses a sequence of time-dependent eqdsks, generated autonomously by the EFIT MHD equilibrium code in analyzing experimental data, to supply the history for the simulation.

  11. Advanced Pellet Cladding Interaction Modeling Using the US DOE CASL Fuel Performance Code: Peregrine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jason Hales; Various

    The US DOE’s Consortium for Advanced Simulation of LWRs (CASL) program has undertaken an effort to enhance and develop modeling and simulation tools for a virtual reactor application, including high-fidelity neutronics, fluid flow/thermal hydraulics, and fuel and material behavior. The fuel performance analysis efforts aim to provide 3-dimensional capabilities for single and multiple rods to assess safety margins and the impact of plant operation and fuel rod design on the fuel thermomechanical-chemical behavior, including Pellet-Cladding Interaction (PCI) failures and CRUD-Induced Localized Corrosion (CILC) failures in PWRs [1-3]. The CASL fuel performance code, Peregrine, is an engineering-scale code built upon the MOOSE/ELK/FOX computational FEM framework, which is also common to the fuel modeling framework BISON [4,5]. Peregrine uses both 2-D and 3-D geometric fuel rod representations and contains a materials properties and fuel behavior model library for the UO2 and Zircaloy system common to PWR fuel, derived from both open literature sources and the FALCON code [6]. The primary purpose of Peregrine is to accurately calculate the thermal, mechanical, and chemical processes active throughout a single fuel rod during operation in a reactor, for both steady-state and off-normal conditions.

  12. High Speed Civil Transport Aircraft Simulation: Reference-H Cycle 1, MATLAB Implementation

    NASA Technical Reports Server (NTRS)

    Sotack, Robert A.; Chowdhry, Rajiv S.; Buttrill, Carey S.

    1999-01-01

    The mathematical model and associated code to simulate a high speed civil transport aircraft - the Boeing Reference H configuration - are described. The simulation was constructed in support of advanced control law research. In addition to providing time histories of the dynamic response, the code includes the capabilities for calculating trim solutions and for generating linear models. The simulation relies on the nonlinear, six-degree-of-freedom equations which govern the motion of a rigid aircraft in atmospheric flight. The 1962 Standard Atmosphere Tables are used along with a turbulence model to simulate the Earth atmosphere. The aircraft model has three parts - an aerodynamic model, an engine model, and a mass model. These models use the data from the Boeing Reference H cycle 1 simulation data base. Models for the actuator dynamics, landing gear, and flight control system are not included in this aircraft model. Dynamic responses generated by the nonlinear simulation are presented and compared with results generated from alternate simulations at Boeing Commercial Aircraft Company and NASA Langley Research Center. Also, dynamic responses generated using linear models are presented and compared with dynamic responses generated using the nonlinear simulation.
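The trim-and-linearize capability described above can be sketched for a toy nonlinear model: Newton iteration drives the state derivatives to zero, and central differences supply the Jacobian that doubles as the linear model. The two-state short-period-like model below is entirely hypothetical, not the Reference-H cycle 1 data base:

```python
def f(x, u):
    # Hypothetical nonlinear two-state model (angle of attack, pitch rate).
    alpha, q = x
    return [-1.2 * alpha + q + 0.3 * u,
            -5.0 * alpha - 0.9 * q - 8.0 * u + 0.5 * alpha ** 2]

def jacobian(x, u, eps=1e-6):
    # Central-difference linearization about (x, u): A = df/dx.
    n = len(x)
    A = [[0.0] * n for _ in range(n)]
    for j in range(n):
        xp, xm = list(x), list(x)
        xp[j] += eps
        xm[j] -= eps
        fp, fm = f(xp, u), f(xm, u)
        for i in range(n):
            A[i][j] = (fp[i] - fm[i]) / (2 * eps)
    return A

def trim(u, x0=(0.0, 0.0), iters=20):
    # Newton iteration: drive the state derivatives to zero at fixed control u.
    x = list(x0)
    for _ in range(iters):
        F = f(x, u)
        A = jacobian(x, u)
        # Solve the 2x2 Newton system A*dx = -F by Cramer's rule.
        det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
        x[0] += (-F[0] * A[1][1] + F[1] * A[0][1]) / det
        x[1] += (-F[1] * A[0][0] + F[0] * A[1][0]) / det
    return x

x_trim = trim(u=0.1)
print("trim state:", x_trim, "residual:", f(x_trim, 0.1))
```

Evaluating `jacobian` at the trim point yields exactly the linear model a control-law designer would use.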

  13. The Simpsons program 6-D phase space tracking with acceleration

    NASA Astrophysics Data System (ADS)

    Machida, S.

    1993-12-01

    A particle tracking code, Simpsons, in 6-D phase space including energy ramping has been developed to model proton synchrotrons and storage rings. We take time as the independent variable to change machine parameters and diagnose beam quality in much the same way as real machines, unlike existing synchrotron tracking codes, which advance a particle element by element. Arbitrary energy ramping and rf voltage curves as a function of time are read from an input file defining the machine cycle. The code is used to study beam dynamics with time-dependent parameters. Some examples from simulations of the Superconducting Super Collider (SSC) boosters are shown.
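The time-as-independent-variable approach with a ramped rf voltage can be sketched as a minimal longitudinal tracking loop, stepping turn by turn and evaluating the ramp at the current time. All parameters below are illustrative, not the SSC boosters:

```python
import math

# Minimal longitudinal tracking sketch with time as the independent variable.
h = 1            # harmonic number
eta = -0.02      # slip factor (below transition, so the motion is stable)
beta2E = 1.0e9   # beta^2 * E (eV)
q = 1.0          # charge state
phi_s = 0.0      # synchronous phase (stationary bucket for simplicity)

def rf_voltage(t):
    # Assumed rf ramp, read "as a function of time" like the code's input file.
    return 1.0e5 * (1.0 + t / 1.0e-3)  # volts, linear ramp over 1 ms

t, dt = 0.0, 1.0e-6   # revolution period used as the time step
phi, dE = 0.5, 0.0    # initial phase offset (rad) and energy error (eV)
for _ in range(1000):
    dE += q * rf_voltage(t) * (math.sin(phi) - math.sin(phi_s))
    phi += 2 * math.pi * h * eta * dE / beta2E
    t += dt

print("final (phi, dE):", phi, dE)
```

The particle executes a synchrotron oscillation whose amplitude adiabatically shrinks as the voltage ramps, the behavior such a code is built to diagnose.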

  14. Simulated single molecule microscopy with SMeagol.

    PubMed

    Lindén, Martin; Ćurić, Vladimir; Boucharin, Alexis; Fange, David; Elf, Johan

    2016-08-01

    SMeagol is a software tool to simulate highly realistic microscopy data based on spatial systems biology models, in order to facilitate development, validation and optimization of advanced analysis methods for live cell single molecule microscopy data. SMeagol runs on Matlab R2014 and later, and uses compiled binaries in C for reaction-diffusion simulations. Documentation, source code and binaries for Mac OS, Windows and Ubuntu Linux can be downloaded from http://smeagol.sourceforge.net. Contact: johan.elf@icm.uu.se. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  15. Steady state operation simulation of the Francis-99 turbine by means of advanced turbulence models

    NASA Astrophysics Data System (ADS)

    Gavrilov, A.; Dekterev, A.; Minakov, A.; Platonov, D.; Sentyabov, A.

    2017-01-01

    The paper presents a numerical simulation of the flow in a hydraulic turbine based on the experimental data of the second Francis-99 workshop. The calculation domain includes the wicket gate, runner and draft tube, with a rotating reference frame for the runner zone. Different turbulence models, such as k-ω SST, ζ-f and RSM, were considered. The calculations were performed with the in-house CFD code SigmaFlow. Simulations were carried out for the part load, high load and best efficiency operating points.

  16. Assessment of the TRACE Reactor Analysis Code Against Selected PANDA Transient Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zavisca, M.; Ghaderi, M.; Khatib-Rahbar, M.

    2006-07-01

    The TRACE (TRAC/RELAP Advanced Computational Engine) code is an advanced, best-estimate thermal-hydraulic program intended to simulate the transient behavior of light-water reactor systems, using a two-fluid (steam and water, with non-condensable gas), seven-equation representation of the conservation equations and flow-regime-dependent constitutive relations in a component-based model with one-, two-, or three-dimensional elements, as well as solid heat structures and logical elements for the control system. The U.S. Nuclear Regulatory Commission is currently supporting the development of the TRACE code and its assessment against a variety of experimental data pertinent to existing and evolutionary reactor designs. This paper presents the results of TRACE post-test predictions of the P-series experiments (i.e., tests comprising the ISP-42 blind and open phases) conducted at the PANDA large-scale test facility in the 1990s. These results show reasonable agreement with the reported test results, indicating good performance of the code and of the relevant underlying thermal-hydraulic and heat transfer models.

  17. Development of the functional simulator for the Galileo attitude and articulation control system

    NASA Technical Reports Server (NTRS)

    Namiri, M. K.

    1983-01-01

    A simulation program for verifying and checking the performance of the Galileo spacecraft's Attitude and Articulation Control Subsystem (AACS) flight software is discussed. The program, called the Functional Simulator (FUNSIM), provides a simple method of interfacing user-supplied mathematical models, coded in FORTRAN, that describe spacecraft dynamics, sensors, and actuators with the AACS flight software, coded in HAL/S (High-level Advanced Language/Shuttle). It is thus able to simulate the AACS flight software accurately to the HAL/S statement level in the environment of a mainframe computer system. FUNSIM also has a command and data subsystem (CDS) simulator. The input/output data and timing are simulated with the same precision as the flight microprocessor. FUNSIM uses a variable-stepsize numerical integration algorithm, with individual error-bound control on each state variable, to solve the equations of motion. The program provides both line-printer and matrix dot plotting of the variables requested in the run section, as well as error diagnostics.
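Variable-stepsize integration with an individual error bound on each state variable can be sketched with RK4 step doubling: accept a step only when every component meets its own tolerance. This is a generic illustration of the technique; FUNSIM's actual algorithm is not specified in the abstract:

```python
def rk4_step(f, t, y, h):
    # One classical fourth-order Runge-Kutta step.
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def integrate(f, t0, y0, t_end, tol, h=1e-2):
    # Step doubling: compare one full step against two half steps, and
    # shrink or grow h so EVERY state meets its own error bound tol[i].
    t, y = t0, list(y0)
    while t < t_end:
        h = min(h, t_end - t)
        y_full = rk4_step(f, t, y, h)
        y_half = rk4_step(f, t + h / 2, rk4_step(f, t, y, h / 2), h / 2)
        errs = [abs(a - b) for a, b in zip(y_full, y_half)]
        if all(e <= ti for e, ti in zip(errs, tol)):
            t, y = t + h, y_half
            h *= 1.5   # accept and cautiously grow the step
        else:
            h *= 0.5   # reject and retry with a smaller step
    return y

# Damped oscillator as a stand-in for the spacecraft equations of motion.
f = lambda t, y: [y[1], -y[0] - 0.1 * y[1]]
y_end = integrate(f, 0.0, [1.0, 0.0], 10.0, tol=[1e-6, 1e-6])
print("state at t=10:", y_end)
```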

  18. Application of Jacobian-free Newton–Krylov method in implicitly solving two-fluid six-equation two-phase flow problems: Implementation, validation and benchmark

    DOE PAGES

    Zou, Ling; Zhao, Haihua; Zhang, Hongbin

    2016-03-09

    This work represents a first-of-its-kind successful application of advanced numerical methods to solving realistic two-phase flow problems with the two-fluid six-equation two-phase flow model. These advanced numerical methods include a high-resolution spatial discretization scheme on staggered grids, high-order fully implicit time integration schemes, and the Jacobian-free Newton–Krylov (JFNK) method as the nonlinear solver. The computer code developed in this work has been extensively validated against existing experimental flow boiling data in vertical pipes and rod bundles, which cover wide ranges of experimental conditions, such as pressure, inlet mass flux, wall heat flux and exit void fraction. An additional code-to-code benchmark with the RELAP5-3D code further verifies the correct code implementation. The combined methods employed in this work exhibit strong robustness in solving two-phase flow problems even when phase appearance (boiling) and realistic discrete flow regimes are considered. Transitional flow regimes used in existing system analysis codes, normally introduced to overcome numerical difficulty, were completely removed in this work. This in turn opens the possibility of utilizing more sophisticated flow regime maps in the future to further improve simulation accuracy.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Preece, D.S.; Knudsen, S.D.

    The spherical element computer code DMC (Distinct Motion Code) used to model rock motion resulting from blasting has been enhanced to allow routine computer simulations of bench blasting. The enhancements required for bench blast simulation include: (1) modifying the gas flow portion of DMC, (2) adding a new explosive gas equation of state capability, (3) modifying the porosity calculation, and (4) accounting for blastwell spacing parallel to the face. A parametric study performed with DMC shows logical variation of the face velocity as burden, spacing, blastwell diameter and explosive type are varied. These additions represent a significant advance in the capability of DMC, which will not only aid in understanding the physics involved in blasting but will also become a blast design tool. 8 refs., 7 figs., 1 tab.

  20. Design of the radiation shielding for the time of flight enhanced diagnostics neutron spectrometer at Experimental Advanced Superconducting Tokamak

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Du, T. F.; Chen, Z. J.; Peng, X. Y.

    Radiation shielding has been designed to reduce scattered neutrons and background gamma-rays for the new double-ring Time Of Flight Enhanced Diagnostics (TOFED) neutron spectrometer. The shielding was designed based on simulations with the Monte Carlo code MCNP5. A dedicated model of the EAST tokamak has been developed, together with the emitted neutron source profile and spectrum; the latter were simulated with the Nubeam and GENESIS codes. A significant reduction of the background radiation at the detector can be achieved, which satisfies the requirements of TOFED. The intensities of the scattered and direct neutrons in the line of sight of the TOFED neutron spectrometer at EAST are studied for future data interpretation.

  1. Maestro and Castro: Simulation Codes for Astrophysical Flows

    NASA Astrophysics Data System (ADS)

    Zingale, Michael; Almgren, Ann; Beckner, Vince; Bell, John; Friesen, Brian; Jacobs, Adam; Katz, Maximilian P.; Malone, Christopher; Nonaka, Andrew; Zhang, Weiqun

    2017-01-01

    Stellar explosions are multiphysics problems—modeling them requires the coordinated input of gravity solvers, reaction networks, radiation transport, and hydrodynamics together with microphysics recipes to describe the physics of matter under extreme conditions. Furthermore, these models involve following a wide range of spatial and temporal scales, which puts tough demands on simulation codes. We developed the codes Maestro and Castro to meet the computational challenges of these problems. Maestro uses a low Mach number formulation of the hydrodynamics to efficiently model convection. Castro solves the fully compressible radiation hydrodynamics equations to capture the explosive phases of stellar phenomena. Both codes are built upon the BoxLib adaptive mesh refinement library, which prepares them for next-generation exascale computers. Common microphysics shared between the codes allows us to transfer a problem from the low Mach number regime in Maestro to the explosive regime in Castro. Importantly, both codes are freely available (https://github.com/BoxLib-Codes). We will describe the design of the codes and some of their science applications, as well as future development directions. Support for development was provided by NSF award AST-1211563 and DOE/Office of Nuclear Physics grant DE-FG02-87ER40317 to Stony Brook and by the Applied Mathematics Program of the DOE Office of Advanced Scientific Computing Research under US DOE contract DE-AC02-05CH11231 to LBNL.
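The payoff of the low Mach number formulation is the time step: an explicit compressible solver is limited by the sound speed, while a low Mach solver is limited only by the fluid speed. The numbers below are illustrative of a slow convective flow, not taken from Maestro or Castro:

```python
def cfl_dt(dx, u, c=0.0, cfl=0.5):
    # Advective CFL time-step limit; include the sound speed c
    # when the solver must resolve acoustic waves.
    return cfl * dx / (abs(u) + c)

dx = 1.0e5            # cm, illustrative grid spacing
u, c = 1.0e5, 1.0e8   # cm/s: slow convection vs. fast sound (Mach ~ 1e-3)
print("compressible dt:", cfl_dt(dx, u, c))  # sound-speed limited
print("low Mach dt:    ", cfl_dt(dx, u))     # ~1000x larger at Mach 1e-3
```

At Mach number M, the gain is roughly a factor of 1/M, which is why convection-dominated phases belong in Maestro and explosive phases in Castro.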

  2. FY17 Status Report on NEAMS Neutronics Activities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, C. H.; Jung, Y. S.; Smith, M. A.

    2017-09-30

    Under the U.S. DOE NEAMS program, a high-fidelity neutronics code system has been developed to support the multiphysics modeling and simulation capability named SHARP. The neutronics code system includes the high-fidelity neutronics code PROTEUS, the cross section library and preprocessing tools, the multigroup cross section generation code MC2-3, an in-house mesh generation tool, the perturbation and sensitivity analysis code PERSENT, and post-processing tools. The main objectives of the NEAMS neutronics activities in FY17 are to continue development of an advanced nodal solver in PROTEUS for use in nuclear reactor design and analysis projects, implement a simplified subchannel-based thermal-hydraulic (T/H) capability in PROTEUS to efficiently compute thermal feedback, improve the performance of PROTEUS-MOCEX through numerical acceleration and code optimization, improve the cross section generation tools including MC2-3, and continue to perform verification and validation tests for PROTEUS.

  3. GPU Particle Tracking and MHD Simulations with Greatly Enhanced Computational Speed

    NASA Astrophysics Data System (ADS)

    Ziemba, T.; O'Donnell, D.; Carscadden, J.; Cash, M.; Winglee, R.; Harnett, E.

    2008-12-01

    GPUs are intrinsically highly parallelized systems that provide more than an order of magnitude more computing speed than CPU-based systems, at less cost than a high-end workstation. Recent advancements in GPU technologies allow for full IEEE floating-point specifications with performance up to several hundred GFLOPs per GPU, and new software architectures have recently become available to ease the transition from graphics-based to scientific applications. This provides a cheap alternative to standard supercomputing methods and should shorten the time to discovery. 3-D particle tracking and MHD codes have been developed using NVIDIA's CUDA and have demonstrated speed-ups of nearly a factor of 20 over equivalent CPU versions of the codes. Such speed-ups enable new applications, including real-time running of radiation belt simulations and real-time running of global magnetospheric simulations, both of which could provide important space weather prediction tools.
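An overall factor-of-20 speed-up is consistent with an Amdahl's-law estimate: only the GPU-ported fraction of the runtime accelerates. The 97%/50x split below is an assumed illustration, not a measurement from these codes:

```python
def overall_speedup(parallel_fraction, kernel_speedup):
    # Amdahl-style estimate: the un-ported fraction runs at CPU speed.
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / kernel_speedup)

# If 97% of runtime sits in kernels that run 50x faster on the GPU,
# the whole code speeds up by about a factor of 20.
print(f"{overall_speedup(0.97, 50.0):.1f}x")
```

The estimate also shows why the remaining serial 3% dominates: even an infinitely fast kernel could not push this code past about 33x.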

  4. Final Technical Report - Center for Technology for Advanced Scientific Component Software (TASCS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sussman, Alan

    2014-10-21

    This is a final technical report for the University of Maryland work in the SciDAC Center for Technology for Advanced Scientific Component Software (TASCS). The Maryland work focused on software tools for coupling parallel software components built using the Common Component Architecture (CCA) APIs. Those tools are based on the Maryland InterComm software framework that has been used in multiple computational science applications to build large-scale simulations of complex physical systems that employ multiple separately developed codes.

  5. Air breathing engine/rocket trajectory optimization

    NASA Technical Reports Server (NTRS)

    Smith, V. K., III

    1979-01-01

    This research has focused on improving the mathematical models of the air-breathing propulsion systems, which can be mated with the rocket engine model and incorporated in trajectory optimization codes. Improved engine simulations provided accurate representation of the complex cycles proposed for advanced launch vehicles, thereby increasing the confidence in propellant use and payload calculations. The versatile QNEP (Quick Navy Engine Program) was modified to allow treatment of advanced turboaccelerator cycles using hydrogen or hydrocarbon fuels and operating in the vehicle flow field.

  6. Kinetic turbulence simulations at extreme scale on leadership-class systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Bei; Ethier, Stephane; Tang, William

    2013-01-01

    Reliable predictive simulation capability addressing confinement properties in magnetically confined fusion plasmas is critically important for ITER, a 20 billion dollar international burning plasma device under construction in France. The complex study of kinetic turbulence, which can severely limit the energy confinement and impact the economic viability of fusion systems, requires simulations at extreme scale for such an unprecedented device size. Our newly optimized, global, ab initio particle-in-cell code solving the nonlinear equations underlying gyrokinetic theory achieves excellent performance with respect to "time to solution" at the full capacity of the IBM Blue Gene/Q, on the 786,432 cores of Mira at the ALCF and recently on the 1,572,864 cores of Sequoia at LLNL. Recent multithreading and domain decomposition optimizations in the new GTC-P code represent critically important software advances for modern, low-memory-per-core systems by enabling routine simulations at unprecedented size (130 million grid points, ITER scale) and resolution (65 billion particles).

  7. Enhancement and Extension of Porosity Model in the FDNS-500 Code to Provide Enhanced Simulations of Rocket Engine Components

    NASA Technical Reports Server (NTRS)

    Cheng, Gary

    2003-01-01

    In the past, the design of rocket engines has primarily relied on cold-flow/hot-fire tests and on empirical correlations developed from the database of previous designs. However, it is very costly to fabricate and test various hardware designs during the design cycle, and the empirical models become unreliable in designing advanced rocket engines whose operating conditions exceed the range of the database. The main goal of the 2nd Generation Reusable Launch Vehicle (GEN-II RLV) is to reduce the cost per payload and to extend the life of the hardware, which poses a great challenge to rocket engine design. Understanding the flow characteristics in each engine component is thus critical to the engine design. In the last few decades, the methodology of computational fluid dynamics (CFD) has matured into a reliable tool for analyzing various engine components. It is therefore important for the CFD design tool to be able to properly simulate the hot flow environment near the liquid injector, and thus to accurately predict the heat load to the injector faceplate. However, to date it is still not feasible to conduct CFD simulations of the detailed flowfield in very complicated geometries, such as fluid flow and heat transfer in an injector assembly and through a porous plate, because resolving the detailed geometry would require enormous computer memory and power. The rigimesh (a sintered metal material), utilized to reduce the heat load to the faceplate, is one of the design concepts for the injector faceplate of the GEN-II RLV. In addition, the injector assembly is designed to distribute propellants into the combustion chamber of the liquid rocket engine. A porosity model thus becomes a necessity for the CFD code in order to efficiently simulate the flow and heat transfer in these porous media while maintaining good accuracy in describing the flow fields.
Currently, the FDNS (Finite Difference Navier-Stokes) code is one of the CFD codes most widely used by research engineers at NASA Marshall Space Flight Center (MSFC) to simulate various flow problems related to rocket engines. The objectives of this research work during the 10-week summer faculty fellowship program were to (1) debug the framework of the porosity model in the current FDNS code, and (2) validate the porosity model by simulating flows through various porous media such as tube banks and porous plates.
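Porosity models of the kind being validated typically represent a porous layer as a momentum sink rather than resolving its geometry; a common choice is a Darcy-Forchheimer term. The functional form and every coefficient below are assumptions for illustration, not the FDNS implementation:

```python
def darcy_forchheimer_dp(velocity, length, mu, rho, K, C2):
    """Pressure drop across a porous layer from an assumed Darcy-Forchheimer
    source term: dp/dx = (mu/K)*v + C2*0.5*rho*v**2, where K is permeability
    and C2 an inertial resistance coefficient."""
    return length * ((mu / K) * velocity + C2 * 0.5 * rho * velocity ** 2)

# Illustrative numbers for gas flow through a thin sintered (rigimesh-like) plate.
dp = darcy_forchheimer_dp(velocity=2.0, length=0.003,
                          mu=1.8e-5, rho=1.2, K=1e-11, C2=1e4)
print(f"pressure drop: {dp:.1f} Pa")
```

In a CFD code this expression would be added per cell as a momentum source in the porous zone, leaving the rest of the domain untouched.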

  8. Outdoor Test Facility and Related Facilities | Photovoltaic Research | NREL

    Science.gov Websites

    … advanced or emerging photovoltaic (PV) technologies under simulated, accelerated indoor and outdoor conditions … evaluate prototype, pre-commercial, and commercial PV modules. One of the major roles of researchers at the OTF is to work with industry to develop uniform and consensus standards and codes for testing PV …

  9. Synchrotron characterization of nanograined UO2 grain growth

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mo, Kun; Miao, Yinbin; Yun, Di

    2015-09-30

    This activity is supported by the US Nuclear Energy Advanced Modeling and Simulation (NEAMS) Fuels Product Line (FPL) and aims at providing experimental data for the validation of the mesoscale simulation code MARMOT. MARMOT is a mesoscale multiphysics code that predicts the coevolution of microstructure and properties within reactor fuel during its lifetime in the reactor. It is an important component of the Moose-Bison-Marmot (MBM) code suite that has been developed by Idaho National Laboratory (INL) to enable next-generation fuel performance modeling capability as part of the NEAMS Program FPL. In order to ensure the accuracy of the microstructure-based materials models being developed within the MARMOT code, extensive validation efforts must be carried out. In this report, we summarize our preliminary synchrotron radiation experiments at the APS to determine the grain size of nanograined UO2. The methodology and experimental setup developed in this experiment can be applied directly to the proposed in-situ grain growth measurements. The investigation of the grain growth kinetics was conducted based on isothermal annealing and grain growth characterization as functions of duration and temperature. Kinetic parameters, such as the activation energy for grain growth in UO2 with different stoichiometry, are obtained and compared with molecular dynamics (MD) simulations.
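Extracting an activation energy from isothermal anneals can be sketched with a parabolic grain-growth law and an Arrhenius fit across two temperatures. The grain sizes, times, and temperatures below are hypothetical, not the APS data, and the growth exponent n = 2 is an assumption:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def growth_rate_constant(D0, D, t, n=2):
    # Assumed parabolic grain-growth law: D^n - D0^n = k * t.
    return (D ** n - D0 ** n) / t

# Hypothetical isothermal annealing data (grain sizes in nm, time in s).
k1 = growth_rate_constant(D0=100.0, D=180.0, t=3600.0)  # at T1 = 1400 K
k2 = growth_rate_constant(D0=100.0, D=320.0, t=3600.0)  # at T2 = 1600 K
T1, T2 = 1400.0, 1600.0

# Arrhenius form k = k0 * exp(-Q / (R T)) gives Q from two temperatures.
Q = R * math.log(k2 / k1) / (1.0 / T1 - 1.0 / T2)
print(f"apparent activation energy: {Q / 1000:.0f} kJ/mol")
```

With anneals at more than two temperatures, the same quantity would come from the slope of ln(k) versus 1/T.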

  10. Supplying materials needed for grain growth characterizations of nano-grained UO2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mo, Kun; Miao, Yinbin; Yun, Di

    2015-09-30

    This activity is supported by the US Nuclear Energy Advanced Modeling and Simulation (NEAMS) Fuels Product Line (FPL) and aims at providing experimental data for the validation of the mesoscale simulation code MARMOT. MARMOT is a mesoscale multiphysics code that predicts the coevolution of microstructure and properties within reactor fuel during its lifetime in the reactor. It is an important component of the Moose-Bison-Marmot (MBM) code suite that has been developed by Idaho National Laboratory (INL) to enable next-generation fuel performance modeling capability as part of the NEAMS Program FPL. In order to ensure the accuracy of the microstructure-based materials models being developed within the MARMOT code, extensive validation efforts must be carried out. In this report, we summarize our preliminary synchrotron radiation experiments at APS to determine the grain size of nanograined UO2. The methodology and experimental setup developed in this experiment can be applied directly to the proposed in-situ grain growth measurements. The investigation of the grain growth kinetics was conducted based on isothermal annealing and grain growth characterization as functions of duration and temperature. Kinetic parameters, such as the activation energy for grain growth, for UO2 with different stoichiometries are obtained and compared with molecular dynamics (MD) simulations.

  11. Analysis of JT-60SA operational scenarios

    NASA Astrophysics Data System (ADS)

    Garzotti, L.; Barbato, E.; Garcia, J.; Hayashi, N.; Voitsekhovitch, I.; Giruzzi, G.; Maget, P.; Romanelli, M.; Saarelma, S.; Stankiewitz, R.; Yoshida, M.; Zagórski, R.

    2018-02-01

    Reference scenarios for the JT-60SA tokamak have been simulated with one-dimensional transport codes to assess the stationary state of the flat-top phase and provide a profile database for further physics studies (e.g. MHD stability, gyrokinetic analysis) and diagnostics design. The scenarios considered range from pulsed standard H-mode to advanced non-inductive steady-state plasmas. In this paper we present the results obtained with the ASTRA, CRONOS, JINTRAC and TOPICS codes equipped with the Bohm/gyro-Bohm, CDBM and GLF23 transport models. The scenarios analysed here are: a standard ELMy H-mode, a hybrid scenario and a non-inductive steady-state plasma, with operational parameters from the JT-60SA research plan. Several simulations of the scenarios under consideration have been performed with the above-mentioned codes and transport models. The results from the different codes are in broad agreement, and the main plasma parameters generally agree well with the zero-dimensional estimates reported previously. The sensitivity of the results to different transport models and, in some cases, to the ELM/pedestal model has been investigated.

  12. Three-dimensional Boltzmann-Hydro Code for Core-collapse in Massive Stars. II. The Implementation of Moving-mesh for Neutron Star Kicks

    NASA Astrophysics Data System (ADS)

    Nagakura, Hiroki; Iwakami, Wakana; Furusawa, Shun; Sumiyoshi, Kohsuke; Yamada, Shoichi; Matsufuru, Hideo; Imakura, Akira

    2017-04-01

    We present a newly developed moving-mesh technique for the multi-dimensional Boltzmann-Hydro code for the simulation of core-collapse supernovae (CCSNe). What makes this technique different from others is the fact that it treats not only hydrodynamics but also neutrino transfer in the language of the 3 + 1 formalism of general relativity (GR), making use of the shift vector to specify the time evolution of the coordinate system. This means that the transport part of our code is essentially general relativistic, although in this paper it is applied only to moving curvilinear coordinates in flat Minkowski spacetime, since the gravity part is still Newtonian. The numerical aspects of the implementation are also described in detail. Employing the axisymmetric two-dimensional version of the code, we conduct two test computations: oscillations and runaways of a proto-neutron star (PNS). We show that our new method works well, tracking the motion of the PNS correctly. We believe that this is a major advancement toward the realistic simulation of CCSNe.
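
    The role of the shift vector can be made concrete with the standard 3+1 line element, in which the lapse α sets proper time between slices and the shift β^i fixes how spatial coordinates drift from one slice to the next; choosing β^i to track the grid velocity is what lets the coordinates co-move with the PNS (the sign convention below is the conventional one, not quoted from the paper):

```latex
ds^2 = -\alpha^2\,dt^2
     + \gamma_{ij}\left(dx^i + \beta^i\,dt\right)\left(dx^j + \beta^j\,dt\right)
```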

  13. Assessment of a hybrid finite element and finite volume code for turbulent incompressible flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xia, Yidong, E-mail: yidong.xia@inl.gov; Wang, Chuanjin; Luo, Hong

    Hydra-TH is a hybrid finite-element/finite-volume incompressible/low-Mach flow simulation code based on the Hydra multiphysics toolkit being developed and used for thermal-hydraulics applications. In the present work, a suite of verification and validation (V&V) test problems for Hydra-TH was defined to meet the design requirements of the Consortium for Advanced Simulation of Light Water Reactors (CASL). The intent of this test problem suite is to provide baseline comparison data that demonstrates the performance of the Hydra-TH solution methods. The simulation problems vary in complexity from laminar to turbulent flows. A set of RANS and LES turbulence models were used in the simulation of four classical test problems. Numerical results obtained by Hydra-TH agreed well with either the available analytical solution or experimental data, indicating the verified and validated implementation of these turbulence models in Hydra-TH. Where possible, some form of solution verification has been attempted to identify sensitivities in the solution methods and suggest best practices when using the Hydra-TH code. -- Highlights: •We performed a comprehensive study to verify and validate the turbulence models in Hydra-TH. •Hydra-TH delivers 2nd-order grid convergence for the incompressible Navier–Stokes equations. •Hydra-TH can accurately simulate the laminar boundary layers. •Hydra-TH can accurately simulate the turbulent boundary layers with RANS turbulence models. •Hydra-TH delivers high-fidelity LES capability for simulating turbulent flows in confined space.
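
    The 2nd-order grid-convergence claim in the highlights is usually verified with the observed order of accuracy, p = ln(e_coarse/e_fine)/ln(r) for a refinement ratio r. A minimal sketch (the error values below are illustrative, not Hydra-TH results):

```python
import math

def observed_order(err_coarse, err_fine, refinement_ratio=2.0):
    """Observed order of accuracy from discretization errors on two grids."""
    return math.log(err_coarse / err_fine) / math.log(refinement_ratio)

# Halving the grid spacing should quarter the error for a 2nd-order scheme.
p = observed_order(4.0e-3, 1.0e-3)
```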

  14. University of Washington/ Northwest National Marine Renewable Energy Center Tidal Current Technology Test Protocol, Instrumentation, Design Code, and Oceanographic Modeling Collaboration: Cooperative Research and Development Final Report, CRADA Number CRD-11-452

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Driscoll, Frederick R.

    The University of Washington (UW) - Northwest National Marine Renewable Energy Center (UW-NNMREC) and the National Renewable Energy Laboratory (NREL) will collaborate to advance research and development (R&D) of Marine Hydrokinetic (MHK) renewable energy technology, specifically renewable energy captured from ocean tidal currents. UW-NNMREC is endeavoring to establish infrastructure, capabilities, and tools to support in-water testing of marine energy technology. NREL is leveraging its experience and capabilities in field testing of wind systems to develop protocols and instrumentation to advance field testing of MHK systems. Under this work, UW-NNMREC and NREL will work together to develop a common instrumentation system and testing methodologies, standards, and protocols. UW-NNMREC is also establishing simulation capabilities for MHK turbines and turbine arrays. NREL has extensive experience in wind turbine array modeling and is developing several computer-based numerical simulation capabilities for MHK systems. Under this CRADA, UW-NNMREC and NREL will work together to augment single-device and array modeling codes. As part of this effort, UW-NNMREC will also work with NREL to run simulations on NREL's high-performance computer system.

  15. MHD Simulation of Magnetic Nozzle Plasma with the NIMROD Code: Applications to the VASIMR Advanced Space Propulsion Concept

    NASA Astrophysics Data System (ADS)

    Tarditi, Alfonso G.; Shebalin, John V.

    2002-11-01

    A simulation study with the NIMROD code [1] is being carried out to investigate the efficiency of the thrust generation process and the properties of the plasma detachment in a magnetic nozzle. In the simulation, hot plasma is injected into the magnetic nozzle, modeled as a 2D, axisymmetric domain. NIMROD has two-fluid, 3D capabilities, but the present runs are being conducted within the MHD, 2D approximation. As the plasma travels through the magnetic field, part of its thermal energy is converted into longitudinal kinetic energy along the axis of the nozzle. The plasma eventually detaches from the magnetic field at a certain distance from the nozzle throat, where the kinetic energy becomes larger than the magnetic energy. Preliminary NIMROD 2D runs have been benchmarked with a particle trajectory code, showing satisfactory results [2]. Further testing is reported here, with emphasis on the analysis of the diffusion rate across the field lines and of the overall nozzle efficiency. These simulation runs are specifically designed for obtaining comparisons with laboratory measurements of the VASIMR experiment, by looking at the evolution of the radial plasma density and temperature profiles in the nozzle. VASIMR (Variable Specific Impulse Magnetoplasma Rocket, [3]) is an advanced space propulsion concept currently under experimental development at the Advanced Space Propulsion Laboratory, NASA Johnson Space Center. A plasma (typically ionized Hydrogen or Helium) is generated by a RF (Helicon) discharge and heated by an Ion Cyclotron Resonance Heating antenna. The heated plasma is then guided into a magnetic nozzle to convert the thermal plasma energy into effective thrust. The VASIMR system has no electrodes, and a solenoidal magnetic field produced by an asymmetric mirror configuration ensures magnetic insulation of the plasma from the material surfaces.
By powering the plasma source and the heating antenna at different levels it is possible to vary the thrust-to-specific-impulse ratio smoothly while maintaining maximum power utilization. [1] http://www.nimrodteam.org [2] A. V. Ilin et al., Proc. 40th AIAA Aerospace Sciences Meeting, Reno, NV, Jan. 2002 [3] F. R. Chang-Diaz, Scientific American, p. 90, Nov. 2000
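
    The detachment condition described above, directed kinetic energy overtaking magnetic energy, can be written as a local ratio of energy densities. A hedged sketch of that simple energy-balance reading of the abstract (the function name, the threshold of one, and the example numbers are illustrative, not the full detachment physics):

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, H/m

def kinetic_beta(rho, v, B):
    """Ratio of directed kinetic energy density (0.5*rho*v^2) to magnetic
    energy density (B^2 / 2*mu0); detachment is argued to occur where
    this ratio exceeds ~1."""
    return (0.5 * rho * v**2) / (B**2 / (2.0 * MU0))

# Illustrative numbers: a tenuous hydrogen plasma in a weakening nozzle field.
beta_k = kinetic_beta(rho=1.67e-9, v=3.0e4, B=1.0e-3)
```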

  16. Overview of the Tusas Code for Simulation of Dendritic Solidification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trainer, Amelia J.; Newman, Christopher Kyle; Francois, Marianne M.

    2016-01-07

    The aim of this project is to conduct a parametric investigation into the modeling of two-dimensional dendritic solidification, using the phase-field model. Specifically, we use the Tusas code, a code for coupled heat and phase-field simulation of dendritic solidification. Dendritic solidification, which may occur in the presence of an unstable solidification interface, results in treelike microstructures that often grow perpendicular to the rest of the growth front. The interface may become unstable if the enthalpy of the solid material is less than that of the liquid material, or if the solute is less soluble in solid than it is in liquid, potentially causing a partition [1]. A key motivation behind this research is that a broadened understanding of phase-field formulation and microstructural developments can be utilized for macroscopic simulations of phase change. This may be directly implemented as a part of the Telluride project at Los Alamos National Laboratory (LANL), through which a computational additive manufacturing simulation tool is being developed, ultimately to become part of the Advanced Simulation and Computing Program within the U.S. Department of Energy [2].
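
    As a concrete (much simplified) relative of the phase-field formulation discussed above, a 1-D Allen–Cahn update shows the basic structure: diffusion of an order parameter plus a double-well reaction term that drives it toward solid (1) or liquid (0). This is a generic textbook sketch, not the Tusas model and not coupled to heat:

```python
import numpy as np

def allen_cahn_step(phi, dt=1e-4, dx=0.01, eps=0.01, m=0.0):
    """One explicit-Euler step of a 1-D Allen-Cahn phase-field equation
    with periodic boundaries: d(phi)/dt = eps^2 * lap(phi)
    + phi*(1-phi)*(phi - 0.5 + m), where m biases solid vs. liquid."""
    lap = (np.roll(phi, 1) - 2.0 * phi + np.roll(phi, -1)) / dx**2
    return phi + dt * (eps**2 * lap + phi * (1.0 - phi) * (phi - 0.5 + m))
```

    With m = 0, the pure phases phi = 0 and phi = 1 are fixed points of the update, as expected for an unbiased double well.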

  17. A Short Review of Ablative-Material Response Models and Simulation Tools

    NASA Technical Reports Server (NTRS)

    Lachaud, Jean; Magin, Thierry E.; Cozmuta, Ioana; Mansour, Nagi N.

    2011-01-01

    A review of the governing equations and boundary conditions used to model the response of ablative materials subjected to a high-enthalpy flow is presented. The heritage of model-development efforts undertaken in the 1960s is extremely clear: the bases of the models used in the community are mathematically equivalent. Most material-response codes implement a single model in which the equation parameters may be modified to model different materials or conditions. The level of fidelity of the models implemented in design tools varies only slightly. Research and development codes are generally more advanced but often not as robust. The capabilities of each of these codes are summarized in a color-coded table along with research and development efforts currently in progress.

  18. Overview of Edge Simulation Laboratory (ESL)

    NASA Astrophysics Data System (ADS)

    Cohen, R. H.; Dorr, M.; Hittinger, J.; Rognlien, T.; Umansky, M.; Xiong, A.; Xu, X.; Belli, E.; Candy, J.; Snyder, P.; Colella, P.; Martin, D.; Sternberg, T.; van Straalen, B.; Bodi, K.; Krasheninnikov, S.

    2006-10-01

    The ESL is a new collaboration to build a full-f electromagnetic gyrokinetic code for tokamak edge plasmas using continuum methods. Target applications are edge turbulence and transport (neoclassical and anomalous), and edge-localized modes. Initially the project has three major threads: (i) verification and validation of TEMPEST, the project's initial (electrostatic) edge code, which can be run in 4D (neoclassical and transport-timescale applications) or 5D (turbulence); (ii) design of the next-generation code, which will include more complete physics (electromagnetics, fluid equation option, improved collisions) and advanced numerics (fully conservative, high-order discretization, mapped multiblock grids, adaptivity); and (iii) rapid-prototype codes to explore the issues involved in solving fully nonlinear gyrokinetics with steep radial gradients. We present a brief summary of the status of each of these activities.

  19. Efficient depth intraprediction method for H.264/AVC-based three-dimensional video coding

    NASA Astrophysics Data System (ADS)

    Oh, Kwan-Jung; Oh, Byung Tae

    2015-04-01

    We present an intracoding method that is applicable to depth map coding in multiview-plus-depth systems. Our approach combines skip prediction and plane-segmentation-based prediction. The proposed depth intraskip prediction uses the estimated direction at both the encoder and decoder, and does not need to encode residual data. Our plane-segmentation-based intraprediction divides the current block into two regions and applies a different prediction scheme to each segmented region. This method avoids incorrect estimations across different regions, resulting in higher prediction accuracy. Simulation results demonstrate that the proposed scheme is superior to H.264/advanced video coding intraprediction and improves the subjective rendering quality.
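
    The two-region idea above exploits the fact that depth maps are piecewise smooth: split the block at a depth discontinuity, then predict each side with a single representative value. The sketch below is an illustrative simplification (mean thresholding, region means), not the paper's exact segmentation or prediction rule:

```python
import numpy as np

def biregion_predict(ref_block):
    """Illustrative bi-region depth prediction: threshold the block at its
    mean, then predict each region with that region's mean depth value."""
    thresh = ref_block.mean()
    mask = ref_block >= thresh
    pred = np.empty_like(ref_block, dtype=float)
    pred[mask] = ref_block[mask].mean()
    pred[~mask] = ref_block[~mask].mean() if (~mask).any() else thresh
    return pred
```

    For a block containing a sharp foreground/background edge, this reproduces both depth levels exactly, which is why per-region prediction beats a single directional predictor across a discontinuity.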

  20. Recent advances in multiview distributed video coding

    NASA Astrophysics Data System (ADS)

    Dufaux, Frederic; Ouaret, Mourad; Ebrahimi, Touradj

    2007-04-01

    We consider dense networks of surveillance cameras capturing overlapped images of the same scene from different viewing directions, a scenario referred to as multi-view. Data compression is paramount in such a system due to the large amount of captured data. In this paper, we propose a Multi-view Distributed Video Coding approach. It allows for low complexity and low power consumption at the encoder side, and the exploitation of inter-view correlation without communication among the cameras. We introduce a combination of temporal intra-view side information and homography inter-view side information. Simulation results show both improved side information and a significant gain in coding efficiency.
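
    Homography-based inter-view side information rests on mapping pixel coordinates between camera views with a 3×3 projective transform; the core operation is homogeneous-coordinate multiplication followed by perspective division. A minimal sketch (function name is illustrative):

```python
import numpy as np

def warp_points(H, pts):
    """Map 2-D points through a 3x3 homography H: lift to homogeneous
    coordinates, multiply, then divide by the third component."""
    pts = np.asarray(pts, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]
```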

  1. WholeCellSimDB: a hybrid relational/HDF database for whole-cell model predictions

    PubMed Central

    Karr, Jonathan R.; Phillips, Nolan C.; Covert, Markus W.

    2014-01-01

    Mechanistic ‘whole-cell’ models are needed to develop a complete understanding of cell physiology. However, extracting biological insights from whole-cell models requires running and analyzing large numbers of simulations. We developed WholeCellSimDB, a database for organizing whole-cell simulations. WholeCellSimDB was designed to enable researchers to search simulation metadata to identify simulations for further analysis, and quickly slice and aggregate simulation results data. In addition, WholeCellSimDB enables users to share simulations with the broader research community. The database uses a hybrid relational/hierarchical data format architecture to efficiently store and retrieve both simulation setup metadata and results data. WholeCellSimDB provides a graphical Web-based interface to search, browse, plot and export simulations; a JavaScript Object Notation (JSON) Web service to retrieve data for Web-based visualizations; a command-line interface to deposit simulations; and a Python API to retrieve data for advanced analysis. Overall, we believe WholeCellSimDB will help researchers use whole-cell models to advance basic biological science and bioengineering. Database URL: http://www.wholecellsimdb.org Source code repository URL: http://github.com/CovertLab/WholeCellSimDB PMID:25231498

  2. Parameters that affect parallel processing for computational electromagnetic simulation codes on high performance computing clusters

    NASA Astrophysics Data System (ADS)

    Moon, Hongsik

    What is the impact of multicore and associated advanced technologies on computational software for science? Most researchers and students have multicore laptops or desktops for their research, and they need computing power to run computational software packages. Computing power was initially derived from Central Processing Unit (CPU) clock speed. That changed when increases in clock speed became constrained by power requirements. Chip manufacturers turned to multicore CPU architectures and associated technological advancements to create the CPUs of the future. Most software applications benefited from the increased computing power the same way that increases in clock speed helped applications run faster. However, for Computational ElectroMagnetics (CEM) software developers, this change was not an obvious benefit - it appeared to be a detriment. Developers were challenged to find a way to correctly utilize the advancements in hardware so that their codes could benefit. The solution was parallelization, and this dissertation details the investigation to address these challenges. Prior to multicore CPUs, advanced computer technologies were compared using benchmark software, with the metric being FLoating-point Operations Per Second (FLOPS), which indicates system performance for scientific applications that make heavy use of floating-point calculations. Is FLOPS an effective metric for parallelized CEM simulation tools on new multicore systems? Parallel CEM software needs to be benchmarked not only by FLOPS but also by the performance of other parameters related to the type and utilization of the hardware, such as CPU, Random Access Memory (RAM), hard disk, network, etc. The codes need to be optimized for more than just FLOPS, and new parameters must be included in benchmarking. In this dissertation, the parallel CEM software named High Order Basis Based Integral Equation Solver (HOBBIES) is introduced.
This code was developed to address the needs of the changing computer hardware platforms in order to provide fast, accurate and efficient solutions to large, complex electromagnetic problems. The research in this dissertation proves that the performance of parallel code is intimately related to the configuration of the computer hardware and can be maximized for different hardware platforms. To benchmark and optimize the performance of parallel CEM software, a variety of large, complex projects are created and executed on a variety of computer platforms. The computer platforms used in this research are detailed in this dissertation. The projects run as benchmarks are also described in detail and results are presented. The parameters that affect parallel CEM software on High Performance Computing Clusters (HPCC) are investigated. This research demonstrates methods to maximize the performance of parallel CEM software code.
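
    The dissertation's point that FLOPS alone is an incomplete metric is easiest to see by measuring it: a dense matrix product gives a sustained floating-point rate, but says nothing about RAM bandwidth, disk, or network behavior. A minimal sketch using the usual 2·n³ operation count for an n×n product (function name is illustrative):

```python
import time
import numpy as np

def measure_gflops(n=512, repeats=3):
    """Rough sustained-GFLOPS estimate from a dense n x n matrix product,
    costing about 2*n**3 floating-point operations; best of several runs."""
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        a @ b
        best = min(best, time.perf_counter() - t0)
    return 2.0 * n**3 / best / 1e9
```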

  3. Advanced capabilities for materials modelling with Quantum ESPRESSO

    NASA Astrophysics Data System (ADS)

    Giannozzi, P.; Andreussi, O.; Brumme, T.; Bunau, O.; Buongiorno Nardelli, M.; Calandra, M.; Car, R.; Cavazzoni, C.; Ceresoli, D.; Cococcioni, M.; Colonna, N.; Carnimeo, I.; Dal Corso, A.; de Gironcoli, S.; Delugas, P.; DiStasio, R. A., Jr.; Ferretti, A.; Floris, A.; Fratesi, G.; Fugallo, G.; Gebauer, R.; Gerstmann, U.; Giustino, F.; Gorni, T.; Jia, J.; Kawamura, M.; Ko, H.-Y.; Kokalj, A.; Küçükbenli, E.; Lazzeri, M.; Marsili, M.; Marzari, N.; Mauri, F.; Nguyen, N. L.; Nguyen, H.-V.; Otero-de-la-Roza, A.; Paulatto, L.; Poncé, S.; Rocca, D.; Sabatini, R.; Santra, B.; Schlipf, M.; Seitsonen, A. P.; Smogunov, A.; Timrov, I.; Thonhauser, T.; Umari, P.; Vast, N.; Wu, X.; Baroni, S.

    2017-11-01

    Quantum ESPRESSO is an integrated suite of open-source computer codes for quantum simulations of materials using state-of-the-art electronic-structure techniques, based on density-functional theory, density-functional perturbation theory, and many-body perturbation theory, within the plane-wave pseudopotential and projector-augmented-wave approaches. Quantum ESPRESSO owes its popularity to the wide variety of properties and processes it allows users to simulate, to its performance on an increasingly broad array of hardware architectures, and to a community of researchers that rely on its capabilities as a core open-source development platform to implement their ideas. In this paper we describe recent extensions and improvements, covering new methodologies and property calculators, improved parallelization, code modularization, and extended interoperability both within the distribution and with external software.

  4. Advanced capabilities for materials modelling with Quantum ESPRESSO.

    PubMed

    Giannozzi, P; Andreussi, O; Brumme, T; Bunau, O; Buongiorno Nardelli, M; Calandra, M; Car, R; Cavazzoni, C; Ceresoli, D; Cococcioni, M; Colonna, N; Carnimeo, I; Dal Corso, A; de Gironcoli, S; Delugas, P; DiStasio, R A; Ferretti, A; Floris, A; Fratesi, G; Fugallo, G; Gebauer, R; Gerstmann, U; Giustino, F; Gorni, T; Jia, J; Kawamura, M; Ko, H-Y; Kokalj, A; Küçükbenli, E; Lazzeri, M; Marsili, M; Marzari, N; Mauri, F; Nguyen, N L; Nguyen, H-V; Otero-de-la-Roza, A; Paulatto, L; Poncé, S; Rocca, D; Sabatini, R; Santra, B; Schlipf, M; Seitsonen, A P; Smogunov, A; Timrov, I; Thonhauser, T; Umari, P; Vast, N; Wu, X; Baroni, S

    2017-10-24

    Quantum ESPRESSO is an integrated suite of open-source computer codes for quantum simulations of materials using state-of-the-art electronic-structure techniques, based on density-functional theory, density-functional perturbation theory, and many-body perturbation theory, within the plane-wave pseudopotential and projector-augmented-wave approaches. Quantum ESPRESSO owes its popularity to the wide variety of properties and processes it allows users to simulate, to its performance on an increasingly broad array of hardware architectures, and to a community of researchers that rely on its capabilities as a core open-source development platform to implement their ideas. In this paper we describe recent extensions and improvements, covering new methodologies and property calculators, improved parallelization, code modularization, and extended interoperability both within the distribution and with external software.

  5. Advanced capabilities for materials modelling with Quantum ESPRESSO.

    PubMed

    Andreussi, Oliviero; Brumme, Thomas; Bunau, Oana; Buongiorno Nardelli, Marco; Calandra, Matteo; Car, Roberto; Cavazzoni, Carlo; Ceresoli, Davide; Cococcioni, Matteo; Colonna, Nicola; Carnimeo, Ivan; Dal Corso, Andrea; de Gironcoli, Stefano; Delugas, Pietro; DiStasio, Robert; Ferretti, Andrea; Floris, Andrea; Fratesi, Guido; Fugallo, Giorgia; Gebauer, Ralph; Gerstmann, Uwe; Giustino, Feliciano; Gorni, Tommaso; Jia, Junteng; Kawamura, Mitsuaki; Ko, Hsin-Yu; Kokalj, Anton; Küçükbenli, Emine; Lazzeri, Michele; Marsili, Margherita; Marzari, Nicola; Mauri, Francesco; Nguyen, Ngoc Linh; Nguyen, Huy-Viet; Otero-de-la-Roza, Alberto; Paulatto, Lorenzo; Poncé, Samuel; Giannozzi, Paolo; Rocca, Dario; Sabatini, Riccardo; Santra, Biswajit; Schlipf, Martin; Seitsonen, Ari Paavo; Smogunov, Alexander; Timrov, Iurii; Thonhauser, Timo; Umari, Paolo; Vast, Nathalie; Wu, Xifan; Baroni, Stefano

    2017-09-27

    Quantum ESPRESSO is an integrated suite of open-source computer codes for quantum simulations of materials using state-of-the-art electronic-structure techniques, based on density-functional theory, density-functional perturbation theory, and many-body perturbation theory, within the plane-wave pseudo-potential and projector-augmented-wave approaches. Quantum ESPRESSO owes its popularity to the wide variety of properties and processes it allows users to simulate, to its performance on an increasingly broad array of hardware architectures, and to a community of researchers that rely on its capabilities as a core open-source development platform to implement their ideas. In this paper we describe recent extensions and improvements, covering new methodologies and property calculators, improved parallelization, code modularization, and extended interoperability both within the distribution and with external software. © 2017 IOP Publishing Ltd.

  6. Science & Technology Review November 2007

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chinn, D J

    2007-10-16

    This month's issue has the following articles: (1) Simulating the Electromagnetic World--Commentary by Steven R. Patterson; (2) A Code to Model Electromagnetic Phenomena--EMSolve, a Livermore supercomputer code that simulates electromagnetic fields, is helping advance a wide range of research efforts; (3) Characterizing Virulent Pathogens--Livermore researchers are developing multiplexed assays for rapid detection of pathogens; (4) Imaging at the Atomic Level--A powerful new electron microscope at the Laboratory is resolving materials at the atomic level for the first time; (5) Scientists without Borders--Livermore scientists lend their expertise on peaceful nuclear applications to their counterparts in other countries; and (6) Probing Deep into the Nucleus--Edward Teller's contributions to the fast-growing fields of nuclear and particle physics were part of a physics golden age.

  7. Code Team Training: Demonstrating Adherence to AHA Guidelines During Pediatric Code Blue Activations.

    PubMed

    Stewart, Claire; Shoemaker, Jamie; Keller-Smith, Rachel; Edmunds, Katherine; Davis, Andrew; Tegtmeyer, Ken

    2017-10-16

    Pediatric code blue activations are infrequent events with a high mortality rate despite the best efforts of code teams. The best method for training these code teams is debatable; however, it is clear that training is needed to assure adherence to American Heart Association (AHA) Resuscitation Guidelines and to prevent the decay that invariably occurs after Pediatric Advanced Life Support training. The objectives of this project were to train a multidisciplinary, multidepartmental code team and to measure this team's adherence to AHA guidelines during code simulation. Multidisciplinary code team training sessions were held using high-fidelity, in situ simulation. Sessions were held several times per month. Each session was filmed and reviewed for adherence to 5 AHA guidelines: chest compression rate, ventilation rate, chest compression fraction, use of a backboard, and use of a team leader. After the first study period, modifications were made to the code team including implementation of just-in-time training and alteration of the compression team. Thirty-eight sessions were completed, with 31 eligible for video analysis. During the first study period, 1 session adhered to all AHA guidelines. During the second study period, after alteration of the code team and implementation of just-in-time training, no sessions adhered to all AHA guidelines; however, there was an improvement in percentage of sessions adhering to ventilation rate and chest compression rate and an improvement in median ventilation rate. We present a method for training a large code team drawn from multiple hospital departments and a method of assessing code team performance. Despite subjective improvement in code team positioning, communication, and role completion and some improvement in ventilation rate and chest compression rate, we failed to consistently demonstrate improvement in adherence to all guidelines.
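
    A session-scoring helper makes the five adherence checks above concrete. The 100-120/min compression band is the widely published AHA figure; the ventilation-rate and compression-fraction thresholds below are illustrative placeholders, not values taken from this study:

```python
def session_adherence(comp_rate, vent_rate, ccf, backboard, team_leader):
    """Score one simulated code session against simple pass/fail checks.
    Only the 100-120/min compression band is a published AHA figure;
    the other thresholds are illustrative placeholders for this sketch."""
    checks = {
        "compression_rate": 100 <= comp_rate <= 120,
        "ventilation_rate": vent_rate <= 10,   # placeholder threshold
        "compression_fraction": ccf >= 0.8,    # placeholder threshold
        "backboard": backboard,
        "team_leader": team_leader,
    }
    checks["all_guidelines"] = all(checks.values())
    return checks
```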

  8. The Nuclear Energy Advanced Modeling and Simulation Enabling Computational Technologies FY09 Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Diachin, L F; Garaizar, F X; Henson, V E

    2009-10-12

    In this document we report on the status of the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Enabling Computational Technologies (ECT) effort. In particular, we provide the context for ECT in the broader NEAMS program and describe the three pillars of the ECT effort, namely, (1) tools and libraries, (2) software quality assurance, and (3) computational facility (computers, storage, etc.) needs. We report on our FY09 deliverables to determine the needs of the integrated performance and safety codes (IPSCs) in these three areas and lay out the general plan for software quality assurance to meet the requirements of DOE and the DOE Advanced Fuel Cycle Initiative (AFCI). We conclude with a brief description of our interactions with the Idaho National Laboratory computer center to determine what is needed to expand their role as a NEAMS user facility.

  9. Experimental Validation of Numerical Simulations for an Acoustic Liner in Grazing Flow

    NASA Technical Reports Server (NTRS)

    Tam, Christopher K. W.; Pastouchenko, Nikolai N.; Jones, Michael G.; Watson, Willie R.

    2013-01-01

    A coordinated experimental and numerical simulation effort is carried out to improve our understanding of the physics of acoustic liners in a grazing flow as well as our computational aeroacoustics (CAA) prediction capability. A numerical simulation code based on advanced CAA methods is developed. In a parallel effort, experiments are performed using the Grazing Flow Impedance Tube at the NASA Langley Research Center. In the experiment, a liner is installed in the upper wall of a rectangular flow duct with a 2 inch by 2.5 inch cross section. Spatial distribution of sound pressure levels and relative phases are measured on the wall opposite the liner in the presence of a Mach 0.3 grazing flow. The computer code is validated by comparing computed results with experimental measurements. Good agreement is found. The numerical simulation code is then used to investigate the physical properties of the acoustic liner. It is shown that an acoustic liner can produce self-noise in the presence of a grazing flow and that a feedback acoustic resonance mechanism is responsible for the generation of this liner self-noise. In addition, the same mechanism also creates additional liner drag. An estimate, based on numerical simulation data, indicates that for a resonant liner with a 10% open area ratio, the drag increase would be about 4% of the turbulent boundary layer drag over a flat wall.

  10. Coupling the System Analysis Module with SAS4A/SASSYS-1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fanning, T. H.; Hu, R.

    2016-09-30

    SAS4A/SASSYS-1 is a simulation tool used to perform deterministic analysis of anticipated events as well as design basis and beyond design basis accidents for advanced reactors, with an emphasis on sodium fast reactors. SAS4A/SASSYS-1 has been under development and in active use for nearly forty-five years and is currently maintained by the U.S. Department of Energy under the Office of Advanced Reactor Technology. Although SAS4A/SASSYS-1 contains a very capable primary and intermediate system modeling component, PRIMAR-4, it also has some shortcomings: its outdated data management and code structure make extension of the PRIMAR-4 module somewhat difficult, and the user input format for PRIMAR-4 limits the number of volumes and segments that can be used to describe a given system. The System Analysis Module (SAM) is a fairly new code development effort being carried out under the U.S. DOE Nuclear Energy Advanced Modeling and Simulation (NEAMS) program. SAM is being developed with advanced physical models, numerical methods, and software engineering practices; however, it is currently somewhat limited in the system components and phenomena that can be represented. For example, component models for electromagnetic pumps and multi-layer stratified volumes have not yet been developed, nor is there support for a balance-of-plant model. Similarly, system-level phenomena such as control-rod driveline expansion and vessel elongation are not represented. This report documents fiscal year 2016 work that was carried out to couple the transient safety analysis capabilities of SAS4A/SASSYS-1 with the system modeling capabilities of SAM under the joint support of the ART and NEAMS programs. The coupling effort was successful and is demonstrated by evaluating an unprotected loss-of-flow transient for the Advanced Burner Test Reactor (ABTR) design. There are differences between the stand-alone SAS4A/SASSYS-1 simulations and the coupled SAS/SAM simulations, but these are mainly attributed to the limited maturity of the SAM development effort. The severe accident modeling capabilities in SAS4A/SASSYS-1 (sodium boiling, fuel melting and relocation) will continue to play a vital role for a long time. Therefore, the SAS4A/SASSYS-1 modernization effort should remain a high-priority task under the ART program to ensure continued participation in domestic and international SFR safety collaborations and design optimizations. On the other hand, SAM provides an advanced system analysis tool with improved numerical solution schemes, data management, code flexibility, and accuracy. SAM is still in the early stages of development and will require continued support from NEAMS to fulfill its potential and mature into a production tool for advanced reactor safety analysis. The effort to couple SAS4A/SASSYS-1 and SAM is the first step toward integrating these modeling capabilities.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stimpson, Shane G; Powers, Jeffrey J; Clarno, Kevin T

    The Consortium for Advanced Simulation of Light Water Reactors (CASL) aims to provide high-fidelity, multiphysics simulations of light water reactors (LWRs) by coupling a variety of codes within the Virtual Environment for Reactor Analysis (VERA). One of the primary goals of CASL is to predict local cladding failure through pellet-clad interaction (PCI). This capability is currently being pursued through several different approaches, such as with Tiamat, a simulation tool within VERA that more tightly couples the MPACT neutron transport solver, the CTF thermal hydraulics solver, and the MOOSE-based Bison-CASL fuel performance code. However, the process in this paper focuses on running fuel performance calculations with Bison-CASL to predict PCI using the multicycle output data from coupled neutron transport/thermal hydraulics simulations. In recent work within CASL, Watts Bar Unit 1 has been simulated over 12 cycles using the VERA core simulator capability based on MPACT and CTF. Using the output from these simulations, Bison-CASL results can be obtained without rerunning all 12 cycles, while providing some insight into PCI indicators. Multi-cycle Bison-CASL results are presented and compared against results from the FRAPCON fuel performance code. There are several quantities of interest in considering PCI and subsequent fuel rod failures, such as the clad hoop stress and maximum centerline fuel temperature, particularly as a function of time. Bison-CASL performs single-rod simulations using representative power and temperature distributions, providing high-resolution results for these and a number of other quantities. This will assist in identifying fuel rods as potential failure locations for use in further analyses.

  12. Effects of various electrode configurations on music perception, intonation and speaker gender identification.

    PubMed

    Landwehr, Markus; Fürstenberg, Dirk; Walger, Martin; von Wedel, Hasso; Meister, Hartmut

    2014-01-01

    Advances in speech coding strategies and electrode array designs for cochlear implants (CIs) predominantly aim at improving speech perception. Current efforts are also directed at transmitting appropriate cues of the fundamental frequency (F0) to the auditory nerve with respect to speech quality, prosody, and music perception. The aim of this study was to examine the effects of various electrode configurations and coding strategies on speech intonation identification, speaker gender identification, and music quality rating. In six MED-EL CI users, electrodes were selectively deactivated in order to simulate different insertion depths and inter-electrode distances when using the high definition continuous interleaved sampling (HDCIS) and fine structure processing (FSP) speech coding strategies. Identification of intonation and speaker gender was determined and music quality rating was assessed. For intonation identification, HDCIS was robust against the different electrode configurations, whereas FSP showed significantly worse results when a short electrode insertion depth was simulated. In contrast, speaker gender recognition was not affected by electrode configuration or speech coding strategy. Music quality rating was sensitive to electrode configuration. In conclusion, the three experiments revealed different outcomes, even though they all addressed the reception of F0 cues. Rapid changes in F0, as seen with intonation, were the most sensitive to electrode configurations and coding strategies. In contrast, electrode configurations and coding strategies did not show large effects when F0 information was available over a longer time period, as seen with speaker gender. Music quality relies on additional spectral cues beyond F0, and was poorest when a shallow insertion was simulated.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nagakura, Hiroki; Iwakami, Wakana; Furusawa, Shun

    We present a newly developed moving-mesh technique for the multi-dimensional Boltzmann-Hydro code for the simulation of core-collapse supernovae (CCSNe). What makes this technique different from others is that it treats not only hydrodynamics but also neutrino transfer in the language of the 3 + 1 formalism of general relativity (GR), making use of the shift vector to specify the time evolution of the coordinate system. This means that the transport part of our code is essentially general relativistic, although in this paper it is applied only to moving curvilinear coordinates in flat Minkowski spacetime, since the gravity part is still Newtonian. The numerical aspects of the implementation are also described in detail. Employing the axisymmetric two-dimensional version of the code, we conduct two test computations: oscillations and runaways of a proto-neutron star (PNS). We show that our new method works well, tracking the motions of the PNS correctly. We believe that this is a major advancement toward realistic simulation of CCSNe.

  14. DSMC Simulations of Hypersonic Flows With Shock Interactions and Validation With Experiments

    NASA Technical Reports Server (NTRS)

    Moss, James N.; Bird, Graeme A.

    2004-01-01

    The capabilities of a relatively new direct simulation Monte Carlo (DSMC) code are examined for the problem of hypersonic laminar shock/shock and shock/boundary layer interactions, where boundary layer separation is an important feature of the flow. Flow about two model configurations is considered, where both configurations (a biconic and a hollow cylinder-flare) have recent published experimental measurements. The computations are made by using the DS2V code of Bird, a general two-dimensional/axisymmetric time accurate code that incorporates many of the advances in DSMC over the past decade. The current focus is on flows produced in ground-based facilities at Mach 12 and 16 test conditions with nitrogen as the test gas and the test models at zero incidence. Results presented highlight the sensitivity of the calculations to grid resolutions, sensitivity to physical modeling parameters, and comparison with experimental measurements. Information is provided concerning the flow structure and surface results for the extent of separation, heating, pressure, and skin friction.

  16. Developing a workstation-based, real-time simulation for rapid handling qualities evaluations during design

    NASA Technical Reports Server (NTRS)

    Anderson, Frederick; Biezad, Daniel J.

    1994-01-01

    This paper describes the Rapid Aircraft DynamIcs AssessmeNt (RADIAN) project - an integration of the Aircraft SYNThesis (ACSYNT) design code with the USAF DATCOM code that estimates stability derivatives. Both of these codes are available to universities. These programs are linked to flight simulation and flight controller synthesis tools, and the resulting design is evaluated on a graphics workstation. The entire process reduces the preliminary design time by an order of magnitude and provides an initial handling qualities evaluation of the design coupled to a control law. The integrated design process is applicable both to conventional aircraft taken from current textbooks and to unconventional designs emphasizing agility and propulsive control of attitude. The interactive and concurrent nature of the design process has been well received by industry and by design engineers at NASA. The process is being implemented into the design curriculum and is being used by students, who view it as a significant advance over prior methods.

  17. Extreme Scale Plasma Turbulence Simulations on Top Supercomputers Worldwide

    DOE PAGES

    Tang, William; Wang, Bei; Ethier, Stephane; ...

    2016-11-01

    The goal of the extreme scale plasma turbulence studies described in this paper is to expedite the delivery of reliable predictions on confinement physics in large magnetic fusion systems by using world-class supercomputers to carry out simulations with unprecedented resolution and temporal duration. This has involved architecture-dependent optimizations of performance scaling and addressing code portability and energy issues, with the metrics for multi-platform comparisons being 'time-to-solution' and 'energy-to-solution'. Realistic results addressing how confinement losses caused by plasma turbulence scale from present-day devices to the much larger $25 billion international ITER fusion facility have been enabled by innovative advances in the GTC-P code, including (i) implementation of one-sided communication from the MPI 3.0 standard; (ii) creative optimization techniques on Xeon Phi processors; and (iii) development of a novel performance model for the key kernels of the PIC code. Our results show that modeling data movement is sufficient to predict performance on modern supercomputer platforms.
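
    The data-movement performance model mentioned above can be illustrated with a minimal sketch: if a PIC kernel is memory-bandwidth-bound, its predicted runtime is just the bytes it moves divided by the sustained memory bandwidth. This is a generic illustration of the idea, not the actual GTC-P performance model; the function name and parameter values are hypothetical.

```python
def predicted_kernel_time(n_particles, bytes_per_particle, bandwidth_gbs):
    """Predict runtime of a memory-bound particle kernel.

    Assumes the kernel is limited purely by data movement, so time is
    total bytes moved divided by sustained memory bandwidth (GB/s).
    """
    total_bytes = n_particles * bytes_per_particle
    return total_bytes / (bandwidth_gbs * 1e9)  # seconds
```

    For example, a kernel touching 72 bytes per particle for 10^8 particles on a node sustaining 100 GB/s would be predicted to take about 72 ms.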

  18. Neoclassical simulation of tokamak plasmas using the continuum gyrokinetic code TEMPEST.

    PubMed

    Xu, X Q

    2008-07-01

    We present gyrokinetic neoclassical simulations of tokamak plasmas with a self-consistent electric field using a fully nonlinear (full- f ) continuum code TEMPEST in a circular geometry. A set of gyrokinetic equations are discretized on a five-dimensional computational grid in phase space. The present implementation is a method of lines approach where the phase-space derivatives are discretized with finite differences, and implicit backward differencing formulas are used to advance the system in time. The fully nonlinear Boltzmann model is used for electrons. The neoclassical electric field is obtained by solving the gyrokinetic Poisson equation with self-consistent poloidal variation. With a four-dimensional (ψ,θ,γ,μ) version of the TEMPEST code, we compute the radial particle and heat fluxes, the geodesic-acoustic mode, and the development of the neoclassical electric field, which we compare with neoclassical theory using a Lorentz collision model. The present work provides a numerical scheme for self-consistently studying important dynamical aspects of neoclassical transport and electric field in toroidal magnetic fusion devices.
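
    The "method of lines with implicit backward differencing" approach described above can be sketched on a toy problem: discretize a 1-D diffusion equation in space with finite differences, then advance the resulting ODE system with backward Euler (the first-order backward differencing formula). This is a generic numerical illustration, not the TEMPEST gyrokinetic system.

```python
import numpy as np

def backward_euler_diffusion(u0, D, dx, dt, steps):
    """Method-of-lines sketch: u_t = D u_xx with central differences in x,
    advanced implicitly with backward Euler (zero Dirichlet boundaries)."""
    n = len(u0)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = -2.0
        if i > 0:
            A[i, i - 1] = 1.0
        if i < n - 1:
            A[i, i + 1] = 1.0
    A *= D / dx**2
    M = np.eye(n) - dt * A  # backward Euler: (I - dt*A) u^{k+1} = u^k
    u = np.asarray(u0, dtype=float).copy()
    for _ in range(steps):
        u = np.linalg.solve(M, u)
    return u
```

    Because the time step is implicit, the scheme remains stable even for steps far above the explicit diffusion limit, which is the usual motivation for backward differencing formulas in stiff kinetic problems.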

  19. Neoclassical simulation of tokamak plasmas using the continuum gyrokinetic code TEMPEST

    NASA Astrophysics Data System (ADS)

    Xu, X. Q.

    2008-07-01

    We present gyrokinetic neoclassical simulations of tokamak plasmas with a self-consistent electric field using a fully nonlinear (full- f ) continuum code TEMPEST in a circular geometry. A set of gyrokinetic equations are discretized on a five-dimensional computational grid in phase space. The present implementation is a method of lines approach where the phase-space derivatives are discretized with finite differences, and implicit backward differencing formulas are used to advance the system in time. The fully nonlinear Boltzmann model is used for electrons. The neoclassical electric field is obtained by solving the gyrokinetic Poisson equation with self-consistent poloidal variation. With a four-dimensional (ψ,θ,γ,μ) version of the TEMPEST code, we compute the radial particle and heat fluxes, the geodesic-acoustic mode, and the development of the neoclassical electric field, which we compare with neoclassical theory using a Lorentz collision model. The present work provides a numerical scheme for self-consistently studying important dynamical aspects of neoclassical transport and electric field in toroidal magnetic fusion devices.

  20. Advanced Imaging Optics Utilizing Wavefront Coding.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scrymgeour, David; Boye, Robert; Adelsberger, Kathleen

    2015-06-01

    Image processing offers the potential to simplify an optical system by shifting some of the imaging burden from lenses to the more cost-effective electronics. Wavefront coding using a cubic phase plate combined with image processing can extend the system's depth of focus, reducing many of the focus-related aberrations as well as material-related chromatic aberrations. However, the optimal design process and physical limitations of wavefront coding systems with respect to first-order optical parameters and noise are not well documented. We examined image quality of simulated and experimental wavefront coded images before and after reconstruction in the presence of noise. Challenges in the implementation of cubic phase in an optical system are discussed. In particular, we found that limitations must be placed on system noise, aperture, field of view and bandwidth to develop a robust wavefront coded system.
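
    The cubic phase plate at the heart of wavefront coding is commonly modeled as a pupil-plane phase of the form φ(x, y) = α(x³ + y³), and the extended depth of focus follows from the resulting point-spread function (PSF) being nearly invariant to defocus. The sketch below computes such a PSF by Fourier optics under simplifying assumptions (unit-amplitude square aperture, scalar diffraction); it is illustrative, not the paper's design procedure.

```python
import numpy as np

def cubic_phase_psf(n=256, alpha=20.0):
    """PSF of a square pupil carrying a cubic phase alpha*(x^3 + y^3),
    computed as |FFT of the pupil field|^2 and normalized to unit energy."""
    x = np.linspace(-1.0, 1.0, n)
    X, Y = np.meshgrid(x, x)
    phase = alpha * (X**3 + Y**3)      # cubic phase profile
    field = np.exp(1j * phase)         # unit-amplitude square pupil
    psf = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2
    return psf / psf.sum()
```

    Increasing alpha spreads the PSF asymmetrically, trading raw sharpness for defocus invariance that the reconstruction step later recovers.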

  1. An Update on Improvements to NiCE Support for PROTEUS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bennett, Andrew; McCaskey, Alexander J.; Billings, Jay Jay

    2015-09-01

    The Department of Energy Office of Nuclear Energy's Nuclear Energy Advanced Modeling and Simulation (NEAMS) program has supported the development of the NEAMS Integrated Computational Environment (NiCE), a modeling and simulation workflow environment that provides services and plugins to facilitate tasks such as code execution, model input construction, visualization, and data analysis. This report details the development of workflows for the reactor core neutronics application PROTEUS. This advanced neutronics application (primarily developed at Argonne National Laboratory) aims to improve nuclear reactor design and analysis by providing an extensible and massively parallel finite-element solver for current and advanced reactor fuel neutronics modeling. The integration of PROTEUS-specific tools into NiCE is intended to make the advanced capabilities that PROTEUS provides more accessible to the nuclear energy research and development community. This report details the work done to improve existing PROTEUS workflow support in NiCE. We demonstrate and discuss these improvements, including the development of flexible IO services, an improved interface for input generation, and the addition of advanced Fortran development tools natively in the platform.

  2. Advanced Field Artillery System (AFAS) Future Armored Resupply Vehicle (FARV) Simulation Feasibility Analysis Study (FAS). Appendix C-F. Revision 1.0

    DTIC Science & Technology

    1994-07-18

    Work breakdown structure excerpts: .09 Software Product Training; 3.4.11 Physical Cues Segment Development (.01 Technical Management, .02 SW Requirements Analysis, .03 Preliminary Design, ...); Mission Planning Subsystem Development (.01 Technical Management, .02 SW Requirements Analysis, .03 Preliminary Design, .04 Detailed Design, .05 Code & CSU)

  3. Collaborative Research Program on Advanced Metals and Ceramics for Armor and Anti-Armor Applications Dynamic Behavior of Non-Crystalline and Crystalline Metallic Systems

    DTIC Science & Technology

    2006-09-01

    compression, including real-time cinematography of failure under dynamic compression, was evaluated. The results (figure 10) clearly show that the failure... art of simulations of dynamic failure and damage mechanisms. An explicit dynamic parallel code has been developed to track damage mechanisms in the

  4. spMC: an R-package for 3D lithological reconstructions based on spatial Markov chains

    NASA Astrophysics Data System (ADS)

    Sartore, Luca; Fabbri, Paolo; Gaetan, Carlo

    2016-09-01

    The paper presents the spatial Markov chains (spMC) R-package and a case study of subsoil simulation/prediction at a plain site in Northeastern Italy. spMC is a fairly complete collection of advanced methods for data inspection; in addition, it implements Markov chain models to estimate experimental transition probabilities of categorical lithological data. Simulation methods based on well-known prediction methods (such as indicator kriging and co-kriging) are also implemented in the spMC package, and more advanced simulation methods are available, e.g. path methods and Bayesian procedures that exploit maximum entropy. Since the spMC package was developed for intensive geostatistical computations, part of the code is implemented for parallel computation via OpenMP constructs. A final analysis of computational efficiency compares the simulation/prediction algorithms using different numbers of CPU cores, considering the example data set of the case study included in the package.
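
    The experimental transition probabilities at the core of such Markov chain models can be illustrated with a minimal sketch (in Python rather than R, and for a 1-D sequence rather than spMC's multidirectional spatial case): count one-step transitions between categories and normalize each row.

```python
from collections import Counter

def transition_matrix(sequence):
    """Estimate one-step transition probabilities from a categorical
    sequence, returned as {from_state: {to_state: probability}}."""
    states = sorted(set(sequence))
    counts = Counter(zip(sequence, sequence[1:]))
    matrix = {}
    for s in states:
        total = sum(counts[(s, t)] for t in states)
        matrix[s] = {t: (counts[(s, t)] / total if total else 0.0)
                     for t in states}
    return matrix
```

    For example, for the sequence ["sand", "sand", "clay", "sand"] the estimated probability of sand following sand is 0.5. spMC generalizes this idea to transition rates along arbitrary spatial directions.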

  5. SimVascular: An Open Source Pipeline for Cardiovascular Simulation.

    PubMed

    Updegrove, Adam; Wilson, Nathan M; Merkow, Jameson; Lan, Hongzhi; Marsden, Alison L; Shadden, Shawn C

    2017-03-01

    Patient-specific cardiovascular simulation has become a paradigm in cardiovascular research and is emerging as a powerful tool in basic, translational and clinical research. In this paper we discuss the recent development of a fully open-source SimVascular software package, which provides a complete pipeline from medical image data segmentation to patient-specific blood flow simulation and analysis. This package serves as a research tool for cardiovascular modeling and simulation, and has contributed to numerous advances in personalized medicine, surgical planning and medical device design. The SimVascular software has recently been refactored and expanded to enhance functionality, usability, efficiency and accuracy of image-based patient-specific modeling tools. Moreover, SimVascular previously required several licensed components that hindered new user adoption and code management and our recent developments have replaced these commercial components to create a fully open source pipeline. These developments foster advances in cardiovascular modeling research, increased collaboration, standardization of methods, and a growing developer community.

  6. Analysis of Intelligent Transportation Systems Using Model-Driven Simulations.

    PubMed

    Fernández-Isabel, Alberto; Fuentes-Fernández, Rubén

    2015-06-15

    Intelligent Transportation Systems (ITSs) integrate information, sensor, control, and communication technologies to provide transport related services. Their users range from everyday commuters to policy makers and urban planners. Given the complexity of these systems and their environment, their study in real settings is frequently unfeasible. Simulations help to address this problem, but present their own issues: there can be unintended mistakes in the transition from models to code; their platforms frequently bias modeling; and it is difficult to compare works that use different models and tools. In order to overcome these problems, this paper proposes a framework for a model-driven development of these simulations. It is based on a specific modeling language that supports the integrated specification of the multiple facets of an ITS: people, their vehicles, and the external environment; and a network of sensors and actuators conveniently arranged and distributed that operates over them. The framework works with a model editor to generate specifications compliant with that language, and a code generator to produce code from them using platform specifications. There are also guidelines to help researchers in the application of this infrastructure. A case study on advanced management of traffic lights with cameras illustrates its use.
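
    The model-to-code step of such a framework can be sketched in miniature: a declarative model of the system is rendered into platform code through templates, so the transition from model to code is mechanical rather than hand-written. The metamodel below (a dict of sensors with names and rates) and the target API are invented for illustration and are not the paper's actual modeling language.

```python
# Hypothetical one-line template for registering a sensor on some platform.
SENSOR_TEMPLATE = "register_sensor(name={name!r}, poll_rate_hz={rate})"

def generate_code(model):
    """Render each sensor in the model into one line of platform code."""
    return "\n".join(
        SENSOR_TEMPLATE.format(name=s["name"], rate=s["rate"])
        for s in model["sensors"]
    )
```

    Because the generator, not a person, writes the target code, the "unintended mistakes in the transition from models to code" mentioned above are confined to the (testable) templates.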

  7. Analysis of Intelligent Transportation Systems Using Model-Driven Simulations

    PubMed Central

    Fernández-Isabel, Alberto; Fuentes-Fernández, Rubén

    2015-01-01

    Intelligent Transportation Systems (ITSs) integrate information, sensor, control, and communication technologies to provide transport related services. Their users range from everyday commuters to policy makers and urban planners. Given the complexity of these systems and their environment, their study in real settings is frequently unfeasible. Simulations help to address this problem, but present their own issues: there can be unintended mistakes in the transition from models to code; their platforms frequently bias modeling; and it is difficult to compare works that use different models and tools. In order to overcome these problems, this paper proposes a framework for a model-driven development of these simulations. It is based on a specific modeling language that supports the integrated specification of the multiple facets of an ITS: people, their vehicles, and the external environment; and a network of sensors and actuators conveniently arranged and distributed that operates over them. The framework works with a model editor to generate specifications compliant with that language, and a code generator to produce code from them using platform specifications. There are also guidelines to help researchers in the application of this infrastructure. A case study on advanced management of traffic lights with cameras illustrates its use. PMID:26083232

  8. Investigation of roughing machining simulation by using visual basic programming in NX CAM system

    NASA Astrophysics Data System (ADS)

    Hafiz Mohamad, Mohamad; Nafis Osman Zahid, Muhammed

    2018-03-01

    This paper outlines a simulation study to investigate the characteristics of roughing machining simulation in 4th-axis milling processes by utilizing Visual Basic programming in the NX CAM system. The selection and optimization of cutting orientation in rough milling operations is critical in 4th-axis machining. The main purpose of the roughing operation is to approximately shape the machined parts into finished form by removing the bulk of material from workpieces. In this paper, the simulations are executed by manipulating a set of different cutting orientations to generate the estimated volume removed from the machined parts. The cutting orientation with the highest volume removal is taken as the optimum and chosen to execute the roughing operation. In order to run the simulations, customized software was developed to assist the routines. Operation build-up instructions in the NX CAM interface are translated into programming code via advanced tools available in Visual Basic Studio. The code is customized and equipped with decision-making tools to run and control the simulations, and it permits integration with any independent program files to execute specific operations. This paper discusses the simulation program and identifies optimum cutting orientations for roughing processes. The output of this study will broaden the simulation routines performed in NX CAM systems.
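
    The selection rule described above, choosing the cutting orientation whose simulated material removal is largest, reduces to a one-line maximization once the simulator has produced a removed-volume estimate per candidate orientation. The angle/volume values below are illustrative placeholders for simulator output, not data from the study.

```python
def best_orientation(removed_volume_by_angle):
    """Return the candidate orientation (e.g. rotary-axis angle in degrees)
    with the largest estimated removed volume."""
    return max(removed_volume_by_angle, key=removed_volume_by_angle.get)
```

    For instance, best_orientation({0: 120.5, 90: 143.2, 180: 131.0}) selects 90.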

  9. SciDAC GSEP: Gyrokinetic Simulation of Energetic Particle Turbulence and Transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Zhihong

    Energetic particle (EP) confinement is a key physics issue for the burning plasma experiment ITER, the crucial next step in the quest for clean and abundant energy, since ignition relies on self-heating by energetic fusion products (α-particles). Due to the strong coupling of EPs with burning thermal plasmas, plasma confinement in the ignition regime is one of the most uncertain factors when extrapolating from existing fusion devices to the ITER tokamak. The EP population in current tokamaks is mostly produced by auxiliary heating such as neutral beam injection (NBI) and radio frequency (RF) heating. Remarkable progress in developing comprehensive EP simulation codes and understanding basic EP physics has been made by two concurrent SciDAC EP projects (GSEP) funded by the Department of Energy (DOE) Office of Fusion Energy Science (OFES), which have successfully established gyrokinetic turbulence simulation as a necessary paradigm shift for studying EP confinement in burning plasmas. Verification and validation have rapidly advanced through close collaborations between simulation, theory, and experiment. Furthermore, productive collaborations with computational scientists have enabled EP simulation codes to effectively utilize current petascale computers and emerging exascale computers. We review here key physics progress in the GSEP projects regarding verification and validation of gyrokinetic simulations, nonlinear EP physics, EP coupling with thermal plasmas, and reduced EP transport models. Advances in high performance computing through collaborations with computational scientists that enable these large-scale electromagnetic simulations are also highlighted. These results have been widely disseminated in numerous peer-reviewed publications, including many Phys. Rev. Lett. papers, and in many invited presentations at prominent fusion conferences such as the biennial International Atomic Energy Agency (IAEA) Fusion Energy Conference and the annual meeting of the American Physical Society, Division of Plasma Physics (APS-DPP).

  10. VIRTEX-5 Fpga Implementation of Advanced Encryption Standard Algorithm

    NASA Astrophysics Data System (ADS)

    Rais, Muhammad H.; Qasim, Syed M.

    2010-06-01

    In this paper, we present an implementation of Advanced Encryption Standard (AES) cryptographic algorithm using state-of-the-art Virtex-5 Field Programmable Gate Array (FPGA). The design is coded in Very High Speed Integrated Circuit Hardware Description Language (VHDL). Timing simulation is performed to verify the functionality of the designed circuit. Performance evaluation is also done in terms of throughput and area. The design implemented on Virtex-5 (XC5VLX50FFG676-3) FPGA achieves a maximum throughput of 4.34 Gbps utilizing a total of 399 slices.
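
    Throughput figures like the one above follow from simple arithmetic: throughput = block width × clock frequency / cycles per block. The helper below is generic; the 339 MHz / 10-cycle combination in the usage note is one hypothetical operating point that yields roughly 4.34 Gbps for 128-bit AES blocks, not figures taken from the paper.

```python
def throughput_gbps(f_max_mhz, cycles_per_block, block_bits=128):
    """Block-cipher throughput in Gbps from clock rate (MHz) and the
    number of clock cycles needed to process one block."""
    return block_bits * f_max_mhz * 1e6 / cycles_per_block / 1e9
```

    For example, an iterative core processing one 128-bit block every 10 cycles at 339 MHz gives throughput_gbps(339, 10) ≈ 4.34.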

  11. ALEGRA -- A massively parallel h-adaptive code for solid dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Summers, R.M.; Wong, M.K.; Boucheron, E.A.

    1997-12-31

    ALEGRA is a multi-material, arbitrary-Lagrangian-Eulerian (ALE) code for solid dynamics designed to run on massively parallel (MP) computers. It combines the features of modern Eulerian shock codes, such as CTH, with modern Lagrangian structural analysis codes using an unstructured grid. ALEGRA is being developed for use on teraflop supercomputers to conduct advanced three-dimensional (3D) simulations of shock phenomena important to a variety of systems. ALEGRA was designed with the Single Program Multiple Data (SPMD) paradigm, in which the mesh is decomposed into sub-meshes so that each processor gets a single sub-mesh with approximately the same number of elements. Using this approach the authors have been able to produce a single code that can scale from one processor to thousands of processors. A current major effort is to develop efficient, high-precision simulation capabilities for ALEGRA, without the computational cost of using a global highly resolved mesh, through flexible, robust h-adaptivity of finite elements. H-adaptivity is the dynamic refinement of the mesh by subdividing elements, thus changing the characteristic element size and reducing numerical error. The authors are working on several major technical challenges that must be met to make effective use of HAMMER on MP computers.
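
    The h-adaptivity idea, subdividing elements flagged by an error indicator to locally reduce element size, can be sketched in one dimension. The error indicator here is an arbitrary per-element number standing in for whatever estimator the code actually uses.

```python
def refine(elements, error, threshold):
    """One pass of 1-D h-refinement: split each interval element (a, b)
    whose error indicator exceeds the threshold into two halves."""
    out = []
    for (a, b), e in zip(elements, error):
        if e > threshold:
            mid = 0.5 * (a + b)
            out.extend([(a, mid), (mid, b)])  # subdivide: element size halves
        else:
            out.append((a, b))  # keep element unchanged
    return out
```

    Repeated passes concentrate small elements where the error indicator is large, which is exactly the payoff over a globally refined mesh.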

  12. Modeling and Analysis of Actinide Diffusion Behavior in Irradiated Metal Fuel

    NASA Astrophysics Data System (ADS)

    Edelmann, Paul G.

    There have been numerous attempts to model fast reactor fuel behavior in the last 40 years. The US currently does not have a fully reliable tool to simulate the behavior of metal fuels in fast reactors. The experimental database necessary to validate the codes is also very limited. The DOE-sponsored Advanced Fuels Campaign (AFC) has performed various experiments that are ready for analysis. Current metal fuel performance codes are either not available to the AFC or have limitations and deficiencies in predicting AFC fuel performance. A modified version of a new fuel performance code, FEAST-Metal, was employed in this investigation with useful results. This work explores the modeling and analysis of AFC metallic fuels using FEAST-Metal, particularly in the area of constituent actinide diffusion behavior. The FEAST-Metal code calculations for this work were conducted at Los Alamos National Laboratory (LANL) in support of on-going activities related to sensitivity analysis of fuel performance codes. A sensitivity analysis of FEAST-Metal was completed to identify important macroscopic parameters of interest to modeling and simulation of metallic fuel performance. A modification was made to the FEAST-Metal constituent redistribution model to enable accommodation of newer AFC metal fuel compositions, with verified results. Applicability of this modified model for sodium fast reactor metal fuel design is demonstrated.

  13. Reliability assessment of MVP-BURN and JENDL-4.0 related to nuclear transmutation of light platinum group elements

    NASA Astrophysics Data System (ADS)

    Terashima, Atsunori; Nilsson, Mikael; Ozawa, Masaki; Chiba, Satoshi

    2017-09-01

    The Après ORIENT research program, a concept for an advanced nuclear fuel cycle, was initiated in FY2011, aiming at creating stable, highly valuable elements by nuclear transmutation from fission products. In order to simulate creation of such elements by (n, γ) reaction followed by β- decay in reactors, the continuous-energy Monte Carlo burnup calculation code MVP-BURN was employed. It is therefore an important task to confirm the reliability of the MVP-BURN code and the evaluated neutron cross section library. In this study, both an experiment of neutron activation analysis in the TRIGA Mark I reactor at the University of California, Irvine and the corresponding burnup calculation using the MVP-BURN code were performed to validate the simulation of transmutation of light platinum group elements. In particular, neutron capture reactions such as 102Ru(n, γ)103Ru, 104Ru(n, γ)105Ru, and 108Pd(n, γ)109Pd were considered. From a comparison between the calculation (C) and the experiment (E) for 102Ru(n, γ)103Ru, the deviation (C/E-1) was significantly large, exceeding 20%. It is therefore strongly suspected that the neutron capture cross section of 102Ru in JENDL-4.0, rather than the MVP-BURN code itself, is responsible for this large difference.
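    The (C/E - 1) metric used above is simple to compute; a minimal sketch with made-up numbers, not the paper's data:

    ```python
    def c_over_e_deviation(calculated, experimental):
        """Relative deviation (C/E - 1) used to judge agreement between
        a burnup calculation C and an activation measurement E."""
        return calculated / experimental - 1.0

    # illustrative numbers only: a calculated 103Ru activity 25% above
    # the measured value
    dev = c_over_e_deviation(1.25, 1.0)
    flagged = abs(dev) > 0.20  # the study treats |C/E - 1| > 20% as significant
    ```

    A deviation well beyond the combined calculational and experimental uncertainty is what points to the evaluated cross section, rather than the transport/burnup solver, as the likely culprit.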

  14. Virtual Observation System for Earth System Model: An Application to ACME Land Model Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Dali; Yuan, Fengming; Hernandez, Benjamin

    Investigating and evaluating physical-chemical-biological processes within an Earth system model (ESM) can be very challenging due to the complexity of both model design and software implementation. A virtual observation system (VOS) is presented to enable interactive observation of these processes during system simulation. Based on advanced computing technologies, such as compiler-based software analysis, automatic code instrumentation, and high-performance data transport, the VOS provides run-time observation capability, in-situ data analytics for Earth system model simulation, and model behavior adjustment opportunities through simulation steering. A VOS for a terrestrial land model simulation within the Accelerated Climate Modeling for Energy model is also presented to demonstrate the implementation details and system innovations.
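    The run-time observation idea can be illustrated with a minimal instrumentation hook; everything here (the decorator, the toy land-model step, the variable names) is hypothetical and merely stands in for the compiler-based instrumentation the VOS actually performs:

    ```python
    def observe(observers):
        """Decorator sketch: after each model step, push the returned state
        to every registered observer (the run-time observation idea)."""
        def wrap(step):
            def instrumented(state):
                new_state = step(state)
                for obs in observers:
                    obs(new_state)
                return new_state
            return instrumented
        return wrap

    history = []

    @observe([history.append])
    def land_model_step(state):
        # stand-in for one time step of a land-model process
        return {"soil_carbon": state["soil_carbon"] * 0.99}

    s = {"soil_carbon": 100.0}
    for _ in range(3):
        s = land_model_step(s)
    ```

    A real VOS streams the observed state to a separate analytics process rather than an in-memory list, and can also feed adjusted values back in (simulation steering).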

  15. Virtual Observation System for Earth System Model: An Application to ACME Land Model Simulations

    DOE PAGES

    Wang, Dali; Yuan, Fengming; Hernandez, Benjamin; ...

    2017-01-01

    Investigating and evaluating physical-chemical-biological processes within an Earth system model (ESM) can be very challenging due to the complexity of both model design and software implementation. A virtual observation system (VOS) is presented to enable interactive observation of these processes during system simulation. Based on advanced computing technologies, such as compiler-based software analysis, automatic code instrumentation, and high-performance data transport, the VOS provides run-time observation capability, in-situ data analytics for Earth system model simulation, and model behavior adjustment opportunities through simulation steering. A VOS for a terrestrial land model simulation within the Accelerated Climate Modeling for Energy model is also presented to demonstrate the implementation details and system innovations.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Platania, P., E-mail: platania@ifp.cnr.it; Figini, L.; Farina, D.

    The purpose of this work is the optical modeling and physical performance evaluation of the JT-60SA ECRF launcher system. The beams have been simulated with the electromagnetic code GRASP® and used as input for ECCD calculations performed with the beam tracing code GRAY, which is capable of modeling propagation, absorption, and current drive of an EC Gaussian beam with general astigmatism. Full details of the optical analysis have been taken into account to model the launched beams. Inductive and advanced reference scenarios have been analysed for the physical evaluations in the full poloidal and toroidal steering ranges for two slightly different layouts of the launcher system.
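    For intuition about the beam modeling, here is a minimal sketch of stigmatic fundamental Gaussian beam spreading (GRAY itself handles the far more general astigmatic case); the 170 GHz frequency and 25 mm waist below are illustrative values, not JT-60SA design numbers:

    ```python
    import math

    def beam_radius(z, w0, wavelength):
        """1/e^2 radius of a stigmatic fundamental Gaussian beam a distance z
        from its waist w0 (free-space propagation; codes like GRAY generalize
        this to beams with general astigmatism in a plasma)."""
        z_r = math.pi * w0 ** 2 / wavelength  # Rayleigh range
        return w0 * math.sqrt(1.0 + (z / z_r) ** 2)

    # illustrative ECRF-like parameters: 170 GHz (~1.76 mm) and a 25 mm waist
    wl = 3e8 / 170e9
    w0 = 0.025
    z_r = math.pi * w0 ** 2 / wl
    w_at_waist = beam_radius(0.0, w0, wl)
    w_at_zr = beam_radius(z_r, w0, wl)  # sqrt(2) * w0 at one Rayleigh range
    ```

    The spot size at the plasma absorption layer, computed from relations like this, is what sets the localization of the driven current.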

  17. Advanced GF(32) nonbinary LDPC coded modulation with non-uniform 9-QAM outperforming star 8-QAM.

    PubMed

    Liu, Tao; Lin, Changyu; Djordjevic, Ivan B

    2016-06-27

    In this paper, we first describe a 9-symbol non-uniform signaling scheme based on Huffman code, in which different symbols are transmitted with different probabilities. By using the Huffman procedure, a prefix code is designed to approach the optimal performance. Then, we introduce an algorithm to determine the optimal signal constellation sets for our proposed non-uniform scheme with the criterion of maximizing the constellation figure of merit (CFM). The proposed non-uniform polarization-multiplexed 9-QAM signaling scheme has the same spectral efficiency as conventional 8-QAM. Additionally, we propose a specially designed GF(32) nonbinary quasi-cyclic LDPC code for the coded modulation system based on the 9-QAM non-uniform scheme. Further, we study the efficiency of our proposed non-uniform 9-QAM combined with nonbinary LDPC coding, and demonstrate by Monte Carlo simulation that the proposed GF(32) nonbinary LDPC coded 9-QAM scheme outperforms nonbinary LDPC coded uniform 8-QAM by at least 0.8 dB.
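    The Huffman construction behind the non-uniform signaling can be sketched directly. The dyadic symbol probabilities below are illustrative (not necessarily the paper's exact distribution), chosen so the average code length works out to 3 bits/symbol, matching the spectral efficiency of uniform 8-QAM:

    ```python
    import heapq
    import itertools

    def huffman_lengths(probs):
        """Code-word lengths of a binary Huffman code for the given symbol
        probabilities (the construction behind the non-uniform signaling)."""
        counter = itertools.count()  # unique tie-breaker for the heap
        heap = [(p, next(counter), [i]) for i, p in enumerate(probs)]
        heapq.heapify(heap)
        lengths = [0] * len(probs)
        while len(heap) > 1:
            p1, _, s1 = heapq.heappop(heap)
            p2, _, s2 = heapq.heappop(heap)
            for i in s1 + s2:  # every symbol under the merged node gains a bit
                lengths[i] += 1
            heapq.heappush(heap, (p1 + p2, next(counter), s1 + s2))
        return lengths

    # illustrative dyadic distribution over 9 symbols
    probs = [1 / 4] + [1 / 8] * 4 + [1 / 16] * 4
    lengths = huffman_lengths(probs)
    avg_bits = sum(p * l for p, l in zip(probs, lengths))
    # avg_bits = 3.0 bits/symbol: same spectral efficiency as uniform 8-QAM,
    # but with one extra, rarely sent constellation point
    ```

    Because the probabilities are dyadic, the Huffman code is exactly optimal here (average length equals the entropy), which is what lets a 9-point constellation match 8-QAM's rate while reshaping the transmitted symbol distribution.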

  18. Developing and Implementing the Data Mining Algorithms in RAVEN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sen, Ramazan Sonat; Maljovec, Daniel Patrick; Alfonsi, Andrea

    The RAVEN code is becoming a comprehensive tool to perform probabilistic risk assessment, uncertainty quantification, and verification and validation. The RAVEN code is being developed to support many programs and to provide a set of methodologies and algorithms for advanced analysis. Scientific computer codes can generate enormous amounts of data. To post-process and analyze such data might, in some cases, take longer than the initial software runtime. Data mining algorithms/methods help in recognizing and understanding patterns in the data, and thus discover knowledge in databases. The methodologies used in dynamic probabilistic risk assessment or in uncertainty and error quantification analysis couple system/physics codes with simulation controller codes, such as RAVEN. RAVEN introduces both deterministic and stochastic elements into the simulation, while the system/physics codes model the dynamics deterministically. A typical analysis is performed by sampling values of a set of parameters. A major challenge in using dynamic probabilistic risk assessment or uncertainty and error quantification analysis for a complex system is analyzing the large number of scenarios generated. Data mining techniques are typically used to better organize and understand data, i.e., to recognize patterns in the data. This report focuses on the development and implementation of Application Programming Interfaces (APIs) for different data mining algorithms, and the application of these algorithms to different databases.
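    As a sketch of the kind of algorithm such an API might expose, here is a minimal k-means clustering of hypothetical scenario outcomes. RAVEN's actual APIs wrap established data-mining libraries; the data and names below are invented for illustration:

    ```python
    def kmeans(points, centers, iters=10):
        """Minimal 1D k-means sketch: the kind of clustering a data-mining
        API could apply to group sampled scenarios by their outcomes."""
        for _ in range(iters):
            groups = [[] for _ in centers]
            for p in points:
                i = min(range(len(centers)), key=lambda j: abs(p - centers[j]))
                groups[i].append(p)
            centers = [sum(g) / len(g) if g else c
                       for g, c in zip(groups, centers)]
        return centers, groups

    # hypothetical peak clad temperatures (K) from sampled scenarios:
    # a benign cluster and a challenging cluster
    temps = [610.0, 615.0, 620.0, 900.0, 905.0, 910.0]
    centers, groups = kmeans(temps, centers=[600.0, 1000.0])
    ```

    Grouping thousands of sampled scenarios this way lets an analyst inspect a handful of representative clusters instead of every individual run.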

  19. SSAGES: Software Suite for Advanced General Ensemble Simulations.

    PubMed

    Sidky, Hythem; Colón, Yamil J; Helfferich, Julian; Sikora, Benjamin J; Bezik, Cody; Chu, Weiwei; Giberti, Federico; Guo, Ashley Z; Jiang, Xikai; Lequieu, Joshua; Li, Jiyuan; Moller, Joshua; Quevillon, Michael J; Rahimi, Mohammad; Ramezani-Dakhel, Hadi; Rathee, Vikramjit S; Reid, Daniel R; Sevgen, Emre; Thapar, Vikram; Webb, Michael A; Whitmer, Jonathan K; de Pablo, Juan J

    2018-01-28

    Molecular simulation has emerged as an essential tool for modern-day research, but obtaining proper results and making reliable conclusions from simulations requires adequate sampling of the system under consideration. To this end, a variety of methods exist in the literature that can enhance sampling considerably, and increasingly sophisticated, effective algorithms continue to be developed at a rapid pace. Implementation of these techniques, however, can be challenging for experts and non-experts alike. There is a clear need for software that provides rapid, reliable, and easy access to a wide range of advanced sampling methods and that facilitates implementation of new techniques as they emerge. Here we present SSAGES, a publicly available Software Suite for Advanced General Ensemble Simulations designed to interface with multiple widely used molecular dynamics simulations packages. SSAGES allows facile application of a variety of enhanced sampling techniques-including adaptive biasing force, string methods, and forward flux sampling-that extract meaningful free energy and transition path data from all-atom and coarse-grained simulations. A noteworthy feature of SSAGES is a user-friendly framework that facilitates further development and implementation of new methods and collective variables. In this work, the use of SSAGES is illustrated in the context of simple representative applications involving distinct methods and different collective variables that are available in the current release of the suite. The code may be found at: https://github.com/MICCoM/SSAGES-public.

  20. SSAGES: Software Suite for Advanced General Ensemble Simulations

    NASA Astrophysics Data System (ADS)

    Sidky, Hythem; Colón, Yamil J.; Helfferich, Julian; Sikora, Benjamin J.; Bezik, Cody; Chu, Weiwei; Giberti, Federico; Guo, Ashley Z.; Jiang, Xikai; Lequieu, Joshua; Li, Jiyuan; Moller, Joshua; Quevillon, Michael J.; Rahimi, Mohammad; Ramezani-Dakhel, Hadi; Rathee, Vikramjit S.; Reid, Daniel R.; Sevgen, Emre; Thapar, Vikram; Webb, Michael A.; Whitmer, Jonathan K.; de Pablo, Juan J.

    2018-01-01

    Molecular simulation has emerged as an essential tool for modern-day research, but obtaining proper results and making reliable conclusions from simulations requires adequate sampling of the system under consideration. To this end, a variety of methods exist in the literature that can enhance sampling considerably, and increasingly sophisticated, effective algorithms continue to be developed at a rapid pace. Implementation of these techniques, however, can be challenging for experts and non-experts alike. There is a clear need for software that provides rapid, reliable, and easy access to a wide range of advanced sampling methods and that facilitates implementation of new techniques as they emerge. Here we present SSAGES, a publicly available Software Suite for Advanced General Ensemble Simulations designed to interface with multiple widely used molecular dynamics simulations packages. SSAGES allows facile application of a variety of enhanced sampling techniques—including adaptive biasing force, string methods, and forward flux sampling—that extract meaningful free energy and transition path data from all-atom and coarse-grained simulations. A noteworthy feature of SSAGES is a user-friendly framework that facilitates further development and implementation of new methods and collective variables. In this work, the use of SSAGES is illustrated in the context of simple representative applications involving distinct methods and different collective variables that are available in the current release of the suite. The code may be found at: https://github.com/MICCoM/SSAGES-public.
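    A flavor of what an enhanced-sampling framework computes: a collective variable and a harmonic restraint on it, the building block of umbrella-sampling-style biasing. This is a hedged sketch, not SSAGES's actual API; all names and numbers are illustrative:

    ```python
    def distance_cv(p1, p2):
        """A simple collective variable (CV): Euclidean distance
        between two atom positions."""
        return sum((a - b) ** 2 for a, b in zip(p1, p2)) ** 0.5

    def harmonic_bias(cv_value, center, k):
        """Harmonic restraint on a CV: bias energy and the force along the
        CV that pulls the system toward `center` (umbrella-sampling style)."""
        energy = 0.5 * k * (cv_value - center) ** 2
        force = -k * (cv_value - center)  # -dE/d(cv)
        return energy, force

    # restrain an illustrative pair distance toward 1.5 (arbitrary units)
    d = distance_cv((0.0, 0.0, 0.0), (2.0, 0.0, 0.0))
    e, f = harmonic_bias(d, center=1.5, k=10.0)
    ```

    A framework like SSAGES evaluates CVs and bias forces each MD step and hands the forces back to the underlying engine; chaining many such windows (or adapting the bias on the fly) is what yields free-energy profiles.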

  1. OVERFLOW-Interaction with Industry

    NASA Technical Reports Server (NTRS)

    Buning, Pieter G.; George, Michael W. (Technical Monitor)

    1996-01-01

    A Navier-Stokes flow solver, OVERFLOW, has been developed by researchers at NASA Ames Research Center to use overset (Chimera) grids to simulate the flow about complex aerodynamic shapes. Primary customers of the OVERFLOW flow solver and related software include McDonnell Douglas and Boeing, as well as the NASA Focused Programs for Advanced Subsonic Technology (AST) and High Speed Research (HSR). Code development has focused on customer issues, including improving code performance, ability to run on workstation clusters and the NAS SP2, and direct interaction with industry on accuracy assessment and validation. Significant interaction with NAS has produced a capability tailored to the Ames computing environment, and code contributions have come from a wide range of sources, both within and outside Ames.

  2. Simulation of Guided Wave Interaction with In-Plane Fiber Waviness

    NASA Technical Reports Server (NTRS)

    Leckey, Cara A. C.; Juarez, Peter D.

    2016-01-01

    Reducing the timeline for certification of composite materials and enabling the expanded use of advanced composite materials for aerospace applications are two primary goals of NASA's Advanced Composites Project (ACP). A key technical challenge area for accomplishing these goals is the development of rapid composite inspection methods with improved defect characterization capabilities. Ongoing work at NASA Langley is focused on expanding ultrasonic simulation capabilities for composite materials. Simulation tools can be used to guide the development of optimal inspection methods. Custom code based on the elastodynamic finite integration technique is currently being developed and implemented to study ultrasonic wave interaction with manufacturing defects, such as in-plane fiber waviness (marcelling). This paper describes details of validation comparisons performed to enable simulation of guided wave propagation in composites containing fiber waviness. Simulation results for guided wave interaction with in-plane fiber waviness are also discussed. The results show that the wavefield is affected by the presence of waviness on both the surface containing fiber waviness, as well as the opposite surface to the location of waviness.

  3. NIMROD: A computational laboratory for studying nonlinear fusion magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Sovinec, C. R.; Gianakon, T. A.; Held, E. D.; Kruger, S. E.; Schnack, D. D.

    2003-05-01

    Nonlinear numerical studies of macroscopic modes in a variety of magnetic fusion experiments are made possible by the flexible high-order accurate spatial representation and semi-implicit time advance in the NIMROD simulation code [A. H. Glasser et al., Plasma Phys. Controlled Fusion 41, A747 (1999)]. Simulation of a resistive magnetohydrodynamics mode in a shaped toroidal tokamak equilibrium demonstrates computation with disparate time scales, simulations of discharge 87009 in the DIII-D tokamak [J. L. Luxon et al., Plasma Physics and Controlled Nuclear Fusion Research 1986 (International Atomic Energy Agency, Vienna, 1987), Vol. I, p. 159] confirm an analytic scaling for the temporal evolution of an ideal mode subject to plasma-β increasing beyond marginality, and a spherical torus simulation demonstrates nonlinear free-boundary capabilities. A comparison of numerical results on magnetic relaxation finds the n=1 mode and flux amplification in spheromaks to be very closely related to the m=1 dynamo modes and magnetic reversal in reversed-field pinch configurations. Advances in local and nonlocal closure relations developed for modeling kinetic effects in fluid simulation are also described.

  4. Simulation of guided wave interaction with in-plane fiber waviness

    NASA Astrophysics Data System (ADS)

    Leckey, Cara A. C.; Juarez, Peter D.

    2017-02-01

    Reducing the timeline for certification of composite materials and enabling the expanded use of advanced composite materials for aerospace applications are two primary goals of NASA's Advanced Composites Project (ACP). A key technical challenge area for accomplishing these goals is the development of rapid composite inspection methods with improved defect characterization capabilities. Ongoing work at NASA Langley is focused on expanding ultrasonic simulation capabilities for composite materials. Simulation tools can be used to guide the development of optimal inspection methods. Custom code based on the elastodynamic finite integration technique is currently being developed and implemented to study ultrasonic wave interaction with manufacturing defects, such as in-plane fiber waviness (marcelling). This paper describes details of validation comparisons performed to enable simulation of guided wave propagation in composites containing fiber waviness. Simulation results for guided wave interaction with in-plane fiber waviness are also discussed. The results show that the wavefield is affected by the presence of waviness on both the surface containing fiber waviness, as well as the opposite surface to the location of waviness.
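    The elastodynamic finite integration technique belongs to the family of staggered-grid velocity-stress schemes. Below is a hedged 1D sketch of one leapfrog update of such a scheme; the actual code is 3D and handles anisotropic composite material properties, and all values here are illustrative:

    ```python
    def velocity_stress_step(v, s, rho, mu, dt, dx):
        """One leapfrog update of a 1D staggered-grid velocity-stress
        elastic wave scheme (the idea underlying EFIT-type solvers).
        v lives on nodes (len N+1), s on cells (len N)."""
        # update particle velocity from the stress gradient (interior nodes)
        for i in range(1, len(v) - 1):
            v[i] += dt / (rho * dx) * (s[i] - s[i - 1])
        # update stress from the velocity gradient
        for i in range(len(s)):
            s[i] += dt * mu / dx * (v[i + 1] - v[i])
        return v, s

    v = [0.0] * 5                 # nodal velocities
    s = [0.0, 1.0, 0.0, 0.0]      # cell stresses, one excited cell
    v, s = velocity_stress_step(v, s, rho=1.0, mu=1.0, dt=0.5, dx=1.0)
    ```

    Marching many such steps propagates the disturbance outward; inserting a region of rotated stiffness (the wavy fibers) is what produces the wavefield perturbations the paper studies.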

  5. RELAP-7 Level 2 Milestone Report: Demonstration of a Steady State Single Phase PWR Simulation with RELAP-7

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    David Andrs; Ray Berry; Derek Gaston

    The document contains the simulation results of a steady state model PWR problem with the RELAP-7 code. The RELAP-7 code is the next-generation nuclear reactor system safety analysis code being developed at Idaho National Laboratory (INL). The code is based on INL's modern scientific software development framework, MOOSE (Multi-Physics Object-Oriented Simulation Environment). This report summarizes the initial results of simulating a model steady-state single phase PWR problem using the current version of the RELAP-7 code. The major purpose of this demonstration simulation is to show that the RELAP-7 code can be rapidly developed to simulate single-phase reactor problems. RELAP-7 is a new project started on October 1st, 2011. It will become the main reactor systems simulation toolkit for RISMC (Risk Informed Safety Margin Characterization) and the next-generation tool in the RELAP reactor safety/systems analysis application series (the replacement for RELAP5). The key to the success of RELAP-7 is the simultaneous advancement of physical models, numerical methods, and software design while maintaining a solid user perspective. Physical models include both PDEs (partial differential equations) and ODEs (ordinary differential equations) and experiment-based closure models. RELAP-7 will eventually utilize well-posed governing equations for multiphase flow, which can be strictly verified. Closure models used in RELAP5 and newly developed models will be reviewed and selected to reflect the progress made during the past three decades. RELAP-7 uses modern numerical methods, which allow implicit time integration, higher-order schemes in both time and space, and strongly coupled multi-physics simulations. RELAP-7 is written in the object-oriented programming language C++. Its development follows modern software design paradigms. The code is easy to read, develop, maintain, and couple with other codes.
Most importantly, the modern software design allows the RELAP-7 code to evolve with time. RELAP-7 is a MOOSE-based application. MOOSE is a framework for solving computational engineering problems in a well-planned, managed, and coordinated way. By leveraging millions of lines of open-source software packages, such as PETSc (a nonlinear solver developed at Argonne National Laboratory) and libMesh (a finite element analysis package developed at the University of Texas), MOOSE significantly reduces the expense and time required to develop new applications. Numerical integration methods and mesh management for parallel computation are provided by MOOSE, so RELAP-7 code developers need only focus on physics and user experience. By using the MOOSE development environment, RELAP-7 follows the same modern software design paradigms used for other MOOSE development efforts. There are currently over 20 different MOOSE-based applications, ranging from 3-D transient neutron transport and detailed 3-D transient fuel performance analysis to long-term material aging. Multi-physics and multi-dimensional analysis capabilities can be obtained by coupling RELAP-7 with other MOOSE-based applications and by leveraging capabilities developed by other DOE programs. This allows the focus of RELAP-7 to be restricted to systems analysis-type simulations and gives priority to retaining and significantly extending RELAP5's capabilities.

  6. Modelling of LOCA Tests with the BISON Fuel Performance Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williamson, Richard L; Pastore, Giovanni; Novascone, Stephen Rhead

    2016-05-01

    BISON is a modern finite-element based, multidimensional nuclear fuel performance code that is under development at Idaho National Laboratory (USA). Recent advances of BISON include the extension of the code to the analysis of LWR fuel rod behaviour during loss-of-coolant accidents (LOCAs). In this work, BISON models for the phenomena relevant to LWR cladding behaviour during LOCAs are described, followed by presentation of code results for the simulation of LOCA tests. Analysed experiments include separate effects tests of cladding ballooning and burst, as well as the Halden IFA-650.2 fuel rod test. Two-dimensional modelling of the experiments is performed, and calculations are compared to available experimental data. Comparisons include cladding burst pressure and temperature in separate effects tests, as well as the evolution of fuel rod inner pressure during ballooning and time to cladding burst. Furthermore, BISON three-dimensional simulations of separate effects tests are performed, which demonstrate the capability to reproduce the effect of azimuthal temperature variations in the cladding. The work has been carried out in the frame of the collaboration between Idaho National Laboratory and Halden Reactor Project, and the IAEA Coordinated Research Project FUMAC.

  7. Parallel Higher-order Finite Element Method for Accurate Field Computations in Wakefield and PIC Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Candel, A.; Kabel, A.; Lee, L.

    Over the past years, SLAC's Advanced Computations Department (ACD), under SciDAC sponsorship, has developed a suite of 3D (2D) parallel higher-order finite element (FE) codes, T3P (T2P) and Pic3P (Pic2P), aimed at accurate, large-scale simulation of wakefields and particle-field interactions in radio-frequency (RF) cavities of complex shape. The codes are built on the FE infrastructure that supports SLAC's frequency domain codes, Omega3P and S3P, to utilize conformal tetrahedral (triangular) meshes, higher-order basis functions and quadratic geometry approximation. For time integration, they adopt an unconditionally stable implicit scheme. Pic3P (Pic2P) extends T3P (T2P) to treat charged-particle dynamics self-consistently using the PIC (particle-in-cell) approach, the first such implementation on a conformal, unstructured grid using Whitney basis functions. Examples from applications to the International Linear Collider (ILC), Positron Electron Project-II (PEP-II), Linac Coherent Light Source (LCLS) and other accelerators will be presented to compare the accuracy and computational efficiency of these codes versus their counterparts using structured grids.

  8. Multi-scale approach to the modeling of fission gas discharge during hypothetical loss-of-flow accident in gen-IV sodium fast reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Behafarid, F.; Shaver, D. R.; Bolotnov, I. A.

    The required technological and safety standards for future Gen IV reactors can only be achieved if advanced simulation capabilities become available, which combine high performance computing with the necessary level of modeling detail and high accuracy of predictions. The purpose of this paper is to present new results of multi-scale three-dimensional (3D) simulations of the inter-related phenomena which occur as a result of fuel element heat-up and cladding failure, including the injection of a jet of gaseous fission products into a partially blocked Sodium Fast Reactor (SFR) coolant channel, and gas/molten sodium transport along the coolant channels. The computational approach to the analysis of the overall accident scenario is based on using two different inter-communicating computational multiphase fluid dynamics (CMFD) codes: a CFD code, PHASTA, and a RANS code, NPHASE-CMFD. Using the geometry and time history of cladding failure and the gas injection rate, direct numerical simulations (DNS), combined with the Level Set method, of two-phase turbulent flow have been performed by the PHASTA code. The model allows one to track the evolution of gas/liquid interfaces at a centimeter scale. The simulated phenomena include the formation and breakup of the jet of fission products injected into the liquid sodium coolant. The PHASTA outflow has been averaged over time to obtain mean phasic velocities and volumetric concentrations, as well as the liquid turbulent kinetic energy and turbulence dissipation rate, all of which have served as the input to the core-scale simulations using the NPHASE-CMFD code. A sliding window time averaging has been used to capture mean flow parameters for transient cases. The results presented in the paper include testing and validation of the proposed models, as well as the predictions of fission-gas/liquid-sodium transport along a multi-rod fuel assembly of an SFR during a partial loss-of-flow accident. (authors)
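    The sliding-window time averaging mentioned above is straightforward to sketch (illustrative signal, not PHASTA output):

    ```python
    def sliding_window_mean(samples, window):
        """Sliding-window time average: the kind of filtering used to
        extract mean flow parameters from a transient DNS signal before
        passing them to a core-scale RANS calculation."""
        return [sum(samples[i:i + window]) / window
                for i in range(len(samples) - window + 1)]

    # toy transient 'velocity' signal: fluctuations superposed on a ramp
    signal = [1.0, 3.0, 2.0, 4.0, 3.0, 5.0]
    means = sliding_window_mean(signal, window=3)
    ```

    The window length trades off smoothing of turbulent fluctuations against the ability to follow the genuine transient, which is why a sliding (rather than whole-record) average is needed for loss-of-flow cases.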

  9. Application of Probabilistic Analysis to Aircraft Impact Dynamics

    NASA Technical Reports Server (NTRS)

    Lyle, Karen H.; Padula, Sharon L.; Stockwell, Alan E.

    2003-01-01

    Full-scale aircraft crash simulations performed with nonlinear, transient dynamic, finite element codes can incorporate structural complexities such as: geometrically accurate models; human occupant models; and advanced material models to include nonlinear stress-strain behaviors, laminated composites, and material failure. Validation of these crash simulations is difficult due to a lack of sufficient information to adequately determine the uncertainty in the experimental data and the appropriateness of modeling assumptions. This paper evaluates probabilistic approaches to quantify the uncertainty in the simulated responses. Several criteria are used to determine that a response surface method is the most appropriate probabilistic approach. The work is extended to compare optimization results with and without probabilistic constraints.
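    A minimal sketch of the response surface idea: fit a cheap surrogate through a few expensive simulation results, then run Monte Carlo on the surrogate to estimate an exceedance probability. All numbers below are invented for illustration:

    ```python
    import random

    def fit_quadratic(y_m, y_0, y_p):
        """Fit y = a + b*x + c*x^2 through responses at x = -1, 0, +1:
        a minimal one-variable response surface."""
        a = y_0
        b = (y_p - y_m) / 2.0
        c = (y_p + y_m) / 2.0 - y_0
        return a, b, c

    def exceedance_probability(coeffs, limit, n=100_000, seed=0):
        """Monte Carlo on the cheap surrogate instead of the expensive
        crash simulation: P(response > limit) for x ~ U(-1, 1)."""
        a, b, c = coeffs
        rng = random.Random(seed)
        hits = sum(1 for _ in range(n)
                   if a + (x := rng.uniform(-1.0, 1.0)) * 0 + b * x + c * x * x > limit)
        return hits / n

    # illustrative acceleration responses (g) at three design points
    coeffs = fit_quadratic(8.0, 5.0, 9.0)   # gives y = 5 + 0.5x + 3.5x^2
    p_exceed = exceedance_probability(coeffs, limit=7.0)  # ~0.24 analytically
    ```

    Each surrogate evaluation costs microseconds, so the many-thousand-sample Monte Carlo that would be impossible with full crash runs becomes trivial; the same surrogate can then serve as a probabilistic constraint in optimization.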

  10. Reduction of Military Vehicle Acquisition Time and Cost through Advanced Modelling and Virtual Simulation (La reduction des couts et des delais d’acquisition des vehicules militaires par la modelisation avancee et la simulation de produit virtuel)

    DTIC Science & Technology

    2003-03-01

    nations, a very thorough examination of current practices. Introduction The Applied Vehicle Technology Panel (AVT) of the Research and Technology...the introduction of new information generated by computer codes required it to be timely and presented in appropriate fashion so that it could...military competition between the NATO allies and the Soviet Union. The second was the introduction of commercial, high capacity transonic aircraft and

  11. NAS (Numerical Aerodynamic Simulation Program) technical summaries, March 1989 - February 1990

    NASA Technical Reports Server (NTRS)

    1990-01-01

    Given here are selected scientific results from the Numerical Aerodynamic Simulation (NAS) Program's third year of operation. During this year, the scientific community was given access to a Cray-2 and a Cray Y-MP supercomputer. Topics covered include flow field analysis of fighter wing configurations, large-scale ocean modeling, the Space Shuttle flow field, advanced computational fluid dynamics (CFD) codes for rotary-wing airloads and performance prediction, turbulence modeling of separated flows, airloads and acoustics of rotorcraft, vortex-induced nonlinearities on submarines, and standing oblique detonation waves.

  12. WholeCellSimDB: a hybrid relational/HDF database for whole-cell model predictions.

    PubMed

    Karr, Jonathan R; Phillips, Nolan C; Covert, Markus W

    2014-01-01

    Mechanistic 'whole-cell' models are needed to develop a complete understanding of cell physiology. However, extracting biological insights from whole-cell models requires running and analyzing large numbers of simulations. We developed WholeCellSimDB, a database for organizing whole-cell simulations. WholeCellSimDB was designed to enable researchers to search simulation metadata to identify simulations for further analysis, and quickly slice and aggregate simulation results data. In addition, WholeCellSimDB enables users to share simulations with the broader research community. The database uses a hybrid relational/hierarchical data format architecture to efficiently store and retrieve both simulation setup metadata and results data. WholeCellSimDB provides a graphical Web-based interface to search, browse, plot and export simulations; a JavaScript Object Notation (JSON) Web service to retrieve data for Web-based visualizations; a command-line interface to deposit simulations; and a Python API to retrieve data for advanced analysis. Overall, we believe WholeCellSimDB will help researchers use whole-cell models to advance basic biological science and bioengineering. Availability: http://www.wholecellsimdb.org. Source code repository: http://github.com/CovertLab/WholeCellSimDB. © The Author(s) 2014. Published by Oxford University Press.
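    The hybrid relational/hierarchical design can be sketched with stand-ins: searchable metadata in a relational store, bulk time series in a flat binary file. Plain packed doubles substitute here for HDF so the sketch has no external dependencies, and the schema and names are hypothetical, not WholeCellSimDB's:

    ```python
    import array
    import sqlite3
    import tempfile

    # relational side: small, searchable simulation metadata
    meta = sqlite3.connect(":memory:")
    meta.execute("CREATE TABLE simulation "
                 "(id INTEGER PRIMARY KEY, organism TEXT, length_s REAL)")

    # 'HDF' side stand-in: one flat binary file of packed doubles
    results_file = tempfile.NamedTemporaryFile(delete=False)

    def deposit(sim_id, organism, length_s, trace):
        """Store searchable metadata relationally and the bulk time series
        as packed doubles, returning its offset for later slicing."""
        offset = results_file.tell()
        array.array("d", trace).tofile(results_file)
        meta.execute("INSERT INTO simulation VALUES (?, ?, ?)",
                     (sim_id, organism, length_s))
        return offset, len(trace)

    off, n = deposit(1, "M. genitalium", 30000.0, [0.1, 0.2, 0.4, 0.8])
    results_file.flush()

    # a metadata query finds the simulation; the offset slices the raw results
    row = meta.execute("SELECT organism FROM simulation WHERE id=1").fetchone()
    with open(results_file.name, "rb") as f:
        f.seek(off)
        trace = array.array("d")
        trace.fromfile(f, n)
    ```

    The split matters because the two workloads differ: metadata queries are small and selective (a relational engine's strength), while results retrieval is large sequential reads of numeric arrays (a hierarchical binary format's strength).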

  13. Rolling-refresher simulation improves performance and retention of paediatric intensive care unit nurse code cart management.

    PubMed

    Singleton, Marcy N; Allen, Kimberly F; Li, Zhongze; McNerney, Kevin; Naber, Urs H; Braga, Matthew S

    2018-04-01

    Paediatric Intensive Care Unit Nurses (PICU RNs) manage the code cart during paediatric emergencies at the Children's Hospital at Dartmouth-Hitchcock. These are low-frequency, high-stakes events. An uncontrolled intervention study with 6-month follow-up. A collaboration of physician and nursing experts developed a rolling-refresher training programme consisting of five simulated scenarios, including 22 code cart skills, to establish nursing code cart competency. The cohort of PICU RNs underwent a competency assessment in training 1. To achieve competence, the participating RN received immediate feedback and instruction and repeated each task until mastery during training 1. The competencies were repeated 6 months later, designated training 2. Thirty-two RNs participated in training 1. Sixteen RNs (50%) completed the second training. Our rolling-refresher training programme resulted in a 43% reduction in the odds of first attempt failures between training 1 and training 2 (p=0.01). Multivariate linear regression evaluating the difference in first attempt failure between training 1 and training 2 revealed that the following covariates were not significantly associated with this improvement: interval Paediatric Advanced Life Support training, interval use of the code cart or defibrillator (either real or simulated) and time between training sessions. Univariate analysis between the two trainings revealed a statistically significant reduction in first attempt failures for: preparing an epinephrine infusion (72% vs 41%, p=0.04) and providing bag-mask ventilation (28% vs 0%, p=0.02). Our rolling-refresher training programme demonstrated significant improvement in performance for low-frequency, high-risk skills required to manage a paediatric code cart with retention after initial training.
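    Note that the reported 43% reduction is in the odds of first-attempt failure, not in the raw failure rate. A small sketch of the arithmetic with generic rates (not the study's data) shows how such a figure arises:

    ```python
    def odds(p):
        """Convert a failure probability to odds."""
        return p / (1.0 - p)

    def odds_reduction(p_before, p_after):
        """Fractional reduction in the odds of failure between two
        training sessions: 1 - OR, where OR is the odds ratio."""
        return 1.0 - odds(p_after) / odds(p_before)

    # illustrative rates only: failure falling from 50% to about 36%
    # corresponds to roughly a 43% reduction in the odds of failure
    red = odds_reduction(0.50, 0.363)
    ```

    Because odds exaggerate changes relative to probabilities when rates are high, an odds reduction should not be read as the same-size drop in the failure rate itself.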

  14. Gyrokinetic Particle Simulation of Turbulent Transport in Burning Plasmas (GPS - TTBP) Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chame, Jacqueline

    2011-05-27

    The goal of this project is the development of the Gyrokinetic Toroidal Code (GTC) Framework and its applications to problems related to the physics of turbulence and turbulent transport in tokamaks. The project involves physics studies, code development, noise effect mitigation, supporting computer science efforts, diagnostics and advanced visualizations, and verification and validation. Its main scientific themes are mesoscale dynamics and non-locality effects on transport, the physics of secondary structures such as zonal flows, and strongly coherent wave-particle interaction phenomena at magnetic precession resonances. Special emphasis is placed on the implications of these themes for rho-star and current scalings and for the turbulent transport of momentum. GTC-TTBP also explores applications to electron thermal transport, particle transport, ITB formation, and cross-cuts such as edge-core coupling, the interaction of energetic particles with turbulence, and neoclassical tearing mode trigger dynamics. Code development focuses on major initiatives in the development of full-f formulations and the capacity to simulate flux-driven transport. In addition to the full-f formulation, the project includes the development of numerical collision models and methods for coarse graining in phase space. Verification is pursued by linear stability study comparisons with the FULL and HD7 codes and by benchmarking with the GKV, GYSELA and other gyrokinetic simulation codes. Validation of gyrokinetic models of ion and electron thermal transport is pursued by systematic stressing comparisons with fluctuation and transport data from the DIII-D and NSTX tokamaks. The physics and code development research programs are supported by complementary efforts in computer sciences, high performance computing, and data management.

  15. Existing Fortran interfaces to Trilinos in preparation for exascale ForTrilinos development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evans, Katherine J.; Young, Mitchell T.; Collins, Benjamin S.

    This report summarizes the current state of Fortran interfaces to the Trilinos library within several key applications of the Exascale Computing Program (ECP), with the aim of informing developers about strategies to develop ForTrilinos, an exascale-ready, Fortran interface software package within Trilinos. The two software projects assessed within are the DOE Office of Science's Accelerated Climate Model for Energy (ACME) atmosphere component, CAM, and the DOE Office of Nuclear Energy's core-simulator portion of VERA, a nuclear reactor simulation code. Trilinos is an object-oriented, C++ based software project, and spans a collection of algorithms and other enabling technologies such as uncertainty quantification and mesh generation. To date, Trilinos has enabled these codes to achieve large-scale simulation results; however, the simulation needs of CAM and VERA-CS will approach exascale over the next five years. A Fortran interface to Trilinos that enables efficient use of programming models and more advanced algorithms is necessary. Where appropriate, the needs of the CAM and VERA-CS software to achieve their simulation goals are called out specifically. With this report, a design document and execution plan for ForTrilinos development can proceed.

  16. Best estimate plus uncertainty analysis of departure from nucleate boiling limiting case with CASL core simulator VERA-CS in response to PWR main steam line break event

    DOE PAGES

    Brown, Cameron S.; Zhang, Hongbin; Kucukboyaci, Vefa; ...

    2016-09-07

    VERA-CS (Virtual Environment for Reactor Applications, Core Simulator) is a coupled neutron transport and thermal-hydraulics subchannel code under development by the Consortium for Advanced Simulation of Light Water Reactors (CASL). VERA-CS was used to simulate a typical pressurized water reactor (PWR) full core response with 17x17 fuel assemblies for a main steam line break (MSLB) accident scenario with the most reactive rod cluster control assembly stuck out of the core. The accident scenario was initiated at the hot zero power (HZP) at the end of the first fuel cycle with return to power state points that were determined by a system analysis code, and the most limiting state point was chosen for core analysis. The best estimate plus uncertainty (BEPU) analysis method was applied using Wilks' nonparametric statistical approach. In this way, 59 full core simulations were performed to provide the minimum departure from nucleate boiling ratio (MDNBR) at the 95/95 (95% probability with 95% confidence level) tolerance limit. The results show that this typical PWR core remains within MDNBR safety limits for the MSLB accident.
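    The 59-run count follows directly from Wilks' first-order, one-sided tolerance-limit formula: the smallest n with 1 - 0.95^n >= 0.95. A minimal sketch (the function name and structure are illustrative, not taken from VERA-CS):

    ```python
    def wilks_sample_size(prob=0.95, conf=0.95):
        # Smallest n such that the most extreme of n runs bounds the
        # `prob` quantile with `conf` confidence (first-order, one-sided):
        # find the least n with 1 - prob**n >= conf.
        n = 1
        while 1.0 - prob**n < conf:
            n += 1
        return n

    print(wilks_sample_size())  # -> 59 runs for the 95/95 tolerance limit
    ```

    With 59 runs, the single smallest MDNBR observed serves as the 95/95 lower tolerance bound, which is why exactly 59 full-core simulations were performed.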

  17. Best estimate plus uncertainty analysis of departure from nucleate boiling limiting case with CASL core simulator VERA-CS in response to PWR main steam line break event

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Cameron S.; Zhang, Hongbin; Kucukboyaci, Vefa

    VERA-CS (Virtual Environment for Reactor Applications, Core Simulator) is a coupled neutron transport and thermal-hydraulics subchannel code under development by the Consortium for Advanced Simulation of Light Water Reactors (CASL). VERA-CS was used to simulate a typical pressurized water reactor (PWR) full core response with 17x17 fuel assemblies for a main steam line break (MSLB) accident scenario with the most reactive rod cluster control assembly stuck out of the core. The accident scenario was initiated at the hot zero power (HZP) at the end of the first fuel cycle with return to power state points that were determined by a system analysis code, and the most limiting state point was chosen for core analysis. The best estimate plus uncertainty (BEPU) analysis method was applied using Wilks' nonparametric statistical approach. In this way, 59 full core simulations were performed to provide the minimum departure from nucleate boiling ratio (MDNBR) at the 95/95 (95% probability with 95% confidence level) tolerance limit. The results show that this typical PWR core remains within MDNBR safety limits for the MSLB accident.

  18. N-body simulation for self-gravitating collisional systems with a new SIMD instruction set extension to the x86 architecture, Advanced Vector eXtensions

    NASA Astrophysics Data System (ADS)

    Tanikawa, Ataru; Yoshikawa, Kohji; Okamoto, Takashi; Nitadori, Keigo

    2012-02-01

    We present a high-performance N-body code for self-gravitating collisional systems accelerated with the aid of a new SIMD instruction set extension of the x86 architecture: Advanced Vector eXtensions (AVX), an enhanced version of the Streaming SIMD Extensions (SSE). Using one core of an Intel Core i7-2600 processor (8 MB cache, 3.40 GHz) based on the Sandy Bridge micro-architecture, we implemented a fourth-order Hermite scheme with an individual timestep scheme (Makino and Aarseth, 1992), and achieved a performance of ~20 giga floating point operations per second (GFLOPS) in double precision, which is two times and five times higher, respectively, than that of a previously developed code implemented with the SSE instructions (Nitadori et al., 2006b) and that of a code implemented without any explicit use of SIMD instructions on the same processor core. We have parallelized the code using the so-called NINJA scheme (Nitadori et al., 2006a), and achieved ~90 GFLOPS for a system containing more than N = 8192 particles with 8 MPI processes on four cores. We expect to achieve about 10 tera FLOPS (TFLOPS) for a self-gravitating collisional system with N ~ 10^5 on massively parallel systems with at most 800 cores with the Sandy Bridge micro-architecture. This performance will be comparable to that of Graphics Processing Unit (GPU) cluster systems, such as one with about 200 Tesla C1070 GPUs (Spurzem et al., 2010). This paper offers an alternative to collisional N-body simulations with GRAPEs and GPUs.
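    The compute-bound core of such codes is the O(N^2) direct-summation force kernel, which is what the AVX instructions vectorize. A generic NumPy sketch of that kernel (G = 1; the small softening `eps` is illustrative, as collisional codes typically resolve close encounters rather than soften them), not the authors' implementation:

    ```python
    import numpy as np

    def accelerations(pos, mass, eps=1e-4):
        # Direct-summation Newtonian accelerations a_i = sum_j m_j (r_j - r_i) / |r_j - r_i|^3
        d = pos[np.newaxis, :, :] - pos[:, np.newaxis, :]   # pairwise separations r_j - r_i
        r2 = (d**2).sum(axis=2) + eps**2                    # squared distances, softened
        inv_r3 = r2**-1.5
        np.fill_diagonal(inv_r3, 0.0)                       # exclude self-interaction
        return (d * (mass * inv_r3)[:, :, np.newaxis]).sum(axis=1)

    # Two unit masses at +/-0.5 on the x-axis attract each other with |a| ~ 1
    pos = np.array([[-0.5, 0.0, 0.0], [0.5, 0.0, 0.0]])
    a = accelerations(pos, np.array([1.0, 1.0]))
    ```

    A fourth-order Hermite integrator evaluates both this acceleration and its time derivative (the "jerk") per step, which roughly doubles the arithmetic per pair and makes SIMD vectorization of the kernel especially profitable.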

  19. Advanced computations in plasma physics

    NASA Astrophysics Data System (ADS)

    Tang, W. M.

    2002-05-01

    Scientific simulation in tandem with theory and experiment is an essential tool for understanding complex plasma behavior. In this paper we review recent progress and future directions for advanced simulations in magnetically confined plasmas with illustrative examples chosen from magnetic confinement research areas such as microturbulence, magnetohydrodynamics, magnetic reconnection, and others. Significant recent progress has been made in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics, giving increasingly good agreement between experimental observations and computational modeling. This was made possible by innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning widely disparate temporal and spatial scales together with access to powerful new computational resources. In particular, the fusion energy science community has made excellent progress in developing advanced codes for which computer run-time and problem size scale well with the number of processors on massively parallel machines (MPP's). A good example is the effective usage of the full power of multi-teraflop (multi-trillion floating point computations per second) MPP's to produce three-dimensional, general geometry, nonlinear particle simulations which have accelerated progress in understanding the nature of turbulence self-regulation by zonal flows. It should be emphasized that these calculations, which typically utilized billions of particles for thousands of time-steps, would not have been possible without access to powerful present generation MPP computers and the associated diagnostic and visualization capabilities. In general, results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. 
The associated scientific excitement should serve to stimulate improved cross-cutting collaborations with other fields and also to help attract bright young talent to plasma science.

  20. Advanced Computation in Plasma Physics

    NASA Astrophysics Data System (ADS)

    Tang, William

    2001-10-01

    Scientific simulation in tandem with theory and experiment is an essential tool for understanding complex plasma behavior. This talk will review recent progress and future directions for advanced simulations in magnetically-confined plasmas with illustrative examples chosen from areas such as microturbulence, magnetohydrodynamics, magnetic reconnection, and others. Significant recent progress has been made in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics, giving increasingly good agreement between experimental observations and computational modeling. This was made possible by innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning widely disparate temporal and spatial scales together with access to powerful new computational resources. In particular, the fusion energy science community has made excellent progress in developing advanced codes for which computer run-time and problem size scale well with the number of processors on massively parallel machines (MPP's). A good example is the effective usage of the full power of multi-teraflop MPP's to produce 3-dimensional, general geometry, nonlinear particle simulations which have accelerated progress in understanding the nature of turbulence self-regulation by zonal flows. It should be emphasized that these calculations, which typically utilized billions of particles for tens of thousands of time-steps, would not have been possible without access to powerful present generation MPP computers and the associated diagnostic and visualization capabilities. In general, results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments.
The associated scientific excitement should serve to stimulate improved cross-cutting collaborations with other fields and also to help attract bright young talent to plasma science.

  1. Surface Modeling and Grid Generation of Orbital Sciences X34 Vehicle. Phase 1

    NASA Technical Reports Server (NTRS)

    Alter, Stephen J.

    1997-01-01

    The surface modeling and grid generation requirements, motivations, and methods used to develop Computational Fluid Dynamic volume grids for the X34-Phase 1 are presented. The requirements set forth by the Aerothermodynamics Branch at the NASA Langley Research Center serve as the basis for the final techniques used in the construction of all volume grids, including grids for parametric studies of the X34. The Integrated Computer Engineering and Manufacturing code for Computational Fluid Dynamics (ICEM/CFD), the Grid Generation code (GRIDGEN), the Three-Dimensional Multi-block Advanced Grid Generation System (3DMAGGS) code, and Volume Grid Manipulator (VGM) code are used to enable the necessary surface modeling, surface grid generation, volume grid generation, and grid alterations, respectively. All volume grids generated for the X34, as outlined in this paper, were used for CFD simulations within the Aerothermodynamics Branch.

  2. Application of the DART Code for the Assessment of Advanced Fuel Behavior

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rest, J.; Totev, T.

    2007-07-01

    The Dispersion Analysis Research Tool (DART) code is a dispersion fuel analysis code that contains mechanistically-based fuel and reaction-product swelling models, a one-dimensional heat transfer analysis, and mechanical deformation models. DART has been used to simulate the irradiation behavior of uranium oxide, uranium silicide, and uranium molybdenum aluminum dispersion fuels, as well as their monolithic counterparts. The thermal-mechanical DART code has been validated against RERTR tests performed in the ATR for irradiation data on interaction thickness; fuel, matrix, and reaction product volume fractions; and plate thickness changes. The DART fission gas behavior model has been validated against UO2 fission gas release data as well as measured fission gas-bubble size distributions. Here DART is utilized to analyze various aspects of the observed bubble growth in the U-Mo/Al interaction product. (authors)

  3. Neoclassical Simulation of Tokamak Plasmas using Continuum Gyrokinetic Code TEMPEST

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, X Q

    We present gyrokinetic neoclassical simulations of tokamak plasmas with a self-consistent electric field for the first time using a fully nonlinear (full-f) continuum code, TEMPEST, in circular geometry. A set of gyrokinetic equations is discretized on a five-dimensional computational grid in phase space. The present implementation is a method-of-lines approach in which the phase-space derivatives are discretized with finite differences and implicit backwards differencing formulas are used to advance the system in time. The fully nonlinear Boltzmann model is used for electrons. The neoclassical electric field is obtained by solving the gyrokinetic Poisson equation with self-consistent poloidal variation. With our 4D (ψ, θ, ε, μ) version of the TEMPEST code we compute the radial particle and heat fluxes, the geodesic-acoustic mode (GAM), and the development of the neoclassical electric field, which we compare with neoclassical theory using a Lorentz collision model. The present work provides a numerical scheme and a new capability for self-consistently studying important aspects of neoclassical transport and rotation in toroidal magnetic fusion devices.

  4. A study of workstation computational performance for real-time flight simulation

    NASA Technical Reports Server (NTRS)

    Maddalon, Jeffrey M.; Cleveland, Jeff I., II

    1995-01-01

    With recent advances in microprocessor technology, some have suggested that modern workstations provide enough computational power to properly operate a real-time simulation. This paper presents the results of a computational benchmark, based on actual real-time flight simulation code used at Langley Research Center, which was executed on various workstation-class machines. The benchmark was executed on different machines from several companies including: CONVEX Computer Corporation, Cray Research, Digital Equipment Corporation, Hewlett-Packard, Intel, International Business Machines, Silicon Graphics, and Sun Microsystems. The machines are compared by their execution speed, computational accuracy, and porting effort. The results of this study show that the raw computational power needed for real-time simulation is now offered by workstations.

  5. Combining Advanced Turbulent Mixing and Combustion Models with Advanced Multi-Phase CFD Code to Simulate Detonation and Post-Detonation Bio-Agent Mixing and Destruction

    DTIC Science & Technology

    2017-10-01

    perturbations in the energetic material to study their effects on the blast wave formation. The last case also makes use of the same PBX, however, the...configuration, Case A: Spore cloud located on the top of the charge at an angle 45 degree, Case B: Spore cloud located at an angle 45 degree from the charge...theoretical validation. The first is the Sedov case where the pressure decay and blast wave front are validated based on analytical solutions. In this test

  6. Development of comprehensive numerical schemes for predicting evaporating gas-droplets flow processes of a liquid-fueled combustor

    NASA Technical Reports Server (NTRS)

    Chen, C. P.

    1990-01-01

    An existing Computational Fluid Dynamics code for simulating complex turbulent flows inside a liquid rocket combustion chamber was validated and further developed. The Advanced Rocket Injector/Combustor Code (ARICC) is simplified and validated against benchmark flow situations for laminar and turbulent flows. The numerical method used in the ARICC Code is re-examined for incompressible flow calculations. For turbulent flows, both the subgrid and the two-equation k-epsilon turbulence models are studied. Cases tested include the idealized Burgers equation in complex geometries and boundaries, a laminar pipe flow, a high Reynolds number turbulent flow, and a confined coaxial jet with recirculations. The accuracy of the algorithm is examined by comparing the numerical results with analytical solutions as well as experimental data at different grid sizes.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dumitrescu, Eugene; Humble, Travis S.

    The accurate and reliable characterization of quantum dynamical processes underlies efforts to validate quantum technologies, where discrimination between competing models of observed behaviors inform efforts to fabricate and operate qubit devices. We present a protocol for quantum channel discrimination that leverages advances in direct characterization of quantum dynamics (DCQD) codes. We demonstrate that DCQD codes enable selective process tomography to improve discrimination between entangling and correlated quantum dynamics. Numerical simulations show selective process tomography requires only a few measurement configurations to achieve a low false alarm rate and that the DCQD encoding improves the resilience of the protocol to hidden sources of noise. Lastly, our results show that selective process tomography with DCQD codes is useful for efficiently distinguishing sources of correlated crosstalk from uncorrelated noise in current and future experimental platforms.

  8. Investigation of the current yaw engineering models for simulation of wind turbines in BEM and comparison with CFD and experiment

    NASA Astrophysics Data System (ADS)

    Rahimi, H.; Hartvelt, M.; Peinke, J.; Schepers, J. G.

    2016-09-01

    The aim of this work is to investigate the capabilities of current engineering tools based on Blade Element Momentum (BEM) and free vortex wake codes for the prediction of key aerodynamic parameters of wind turbines in yawed flow. The axial induction factor and aerodynamic loads of three wind turbines (NREL VI, AVATAR and INNWIND.EU) were investigated using wind tunnel measurements and numerical simulations for 0 and 30 degrees of yaw. Results indicated that for axial conditions there is good agreement between all codes in terms of mean values of the aerodynamic parameters; in yawed flow, however, significant deviations were observed. This was due to unsteady phenomena such as the advancing and retreating blade effect and the skewed wake effect. These deviations were most visible in the variation of the aerodynamic parameters with rotor azimuth angle for sections at the root and tip, where the skewed wake effect plays a major role.

  9. X-ray metrology and performance of a 45-cm long x-ray deformable mirror

    DOE PAGES

    Poyneer, Lisa A.; Brejnholt, Nicolai F.; Hill, Randall; ...

    2016-05-20

    We describe experiments with a 45-cm long x-ray deformable mirror (XDM) that have been conducted in End Station 2, Beamline 5.3.1 at the Advanced Light Source. A detailed description of the hardware implementation is provided. We explain our one-dimensional Fresnel propagation code that correctly handles grazing incidence and includes a model of the XDM. This code is used to simulate and verify experimental results. Initial long trace profiler metrology of the XDM at 7.5 keV is presented. The ability to measure a large (150-nm amplitude) height change on the XDM is demonstrated. The results agree well with the simulated experiment at an error level of 1 μrad RMS. Lastly, direct imaging of the x-ray beam also shows the expected change in intensity profile at the detector.
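    The core of a one-dimensional Fresnel propagation code is a transfer-function (angular-spectrum) step in Fourier space. The sketch below is a generic normal-incidence version under assumed parameters; it omits the grazing-incidence handling and the XDM model that the abstract describes, and all names are illustrative:

    ```python
    import numpy as np

    def fresnel_propagate(field, dx, wavelength, z):
        # Propagate a 1D complex field a distance z in free space using the
        # Fresnel transfer function H(fx) = exp(-i pi lambda z fx^2).
        fx = np.fft.fftfreq(field.size, d=dx)            # spatial frequencies
        H = np.exp(-1j * np.pi * wavelength * z * fx**2)  # Fresnel kernel, |H| = 1
        return np.fft.ifft(np.fft.fft(field) * H)

    # Illustrative numbers: ~7.5 keV photons (lambda ~ 1.65 Angstrom),
    # 1-micron sampling, 0.1 m propagation of a Gaussian beam profile.
    x = np.linspace(-5, 5, 256)
    f = np.exp(-x**2).astype(complex)
    g = fresnel_propagate(f, dx=1e-6, wavelength=1.65e-10, z=0.1)
    ```

    Because the kernel is a pure phase factor, total intensity is conserved by the step, which is a convenient sanity check on any implementation of this kind.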

  10. X-ray metrology and performance of a 45-cm long x-ray deformable mirror

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poyneer, Lisa A., E-mail: poyneer1@llnl.gov; Brejnholt, Nicolai F.; Hill, Randall

    2016-05-15

    We describe experiments with a 45-cm long x-ray deformable mirror (XDM) that have been conducted in End Station 2, Beamline 5.3.1 at the Advanced Light Source. A detailed description of the hardware implementation is provided. We explain our one-dimensional Fresnel propagation code that correctly handles grazing incidence and includes a model of the XDM. This code is used to simulate and verify experimental results. Initial long trace profiler metrology of the XDM at 7.5 keV is presented. The ability to measure a large (150-nm amplitude) height change on the XDM is demonstrated. The results agree well with the simulated experiment at an error level of 1 μrad RMS. Direct imaging of the x-ray beam also shows the expected change in intensity profile at the detector.

  11. Simulation Studies of Mechanical Properties of Novel Silica Nano-structures

    NASA Astrophysics Data System (ADS)

    Muralidharan, Krishna; Torras Costa, Joan; Trickey, Samuel B.

    2006-03-01

    Advances in nanotechnology and the importance of silica as a technological material continue to stimulate computational study of the properties of possible novel silica nanostructures. Thus we have done classical molecular dynamics (MD) and multi-scale quantum mechanical (QM/MD) simulation studies of the mechanical properties of single-wall and multi-wall silica nano-rods of varying dimensions. Such nano-rods have been predicted by Mallik et al. to be unusually strong in tensile failure. Here we compare failure mechanisms of such nano-rods under tension, compression, and bending. The concurrent multi-scale QM/MD studies use the general PUPIL system (Torras et al.). In this case, PUPIL provides automated interoperation of the MNDO Transfer Hamiltonian QM code (Taylor et al.) and a locally written MD code. Embedding of the QM-forces domain is via the scheme of Mallik et al. Work supported by NSF ITR award DMR-0325553.

  12. A charging study of ACTS using NASCAP

    NASA Technical Reports Server (NTRS)

    Herr, Joel L.

    1991-01-01

    The NASA Charging Analyzer Program (NASCAP) computer code is a three-dimensional finite element charging code designed to analyze spacecraft charging in the magnetosphere. Because of the characteristics of this problem, NASCAP can use a quasi-static approach to provide a spacecraft designer with an understanding of how a specific spacecraft will interact with a geomagnetic substorm. The results of the simulation can help designers evaluate the probability and location of arc discharges of charged surfaces on the spacecraft. A charging study of NASA's Advanced Communication Technology Satellite (ACTS) using NASCAP is reported. The results show that the ACTS metalized multilayer insulating blanket design should provide good electrostatic discharge control.

  13. Computational fluid dynamics applications at McDonnell Douglas

    NASA Technical Reports Server (NTRS)

    Hakkinen, R. J.

    1987-01-01

    Representative examples are presented of applications and development of advanced Computational Fluid Dynamics (CFD) codes for aerodynamic design at the McDonnell Douglas Corporation (MDC). Transonic potential and Euler codes, interactively coupled with boundary layer computation, and solutions of slender-layer Navier-Stokes approximation are applied to aircraft wing/body calculations. An optimization procedure using evolution theory is described in the context of transonic wing design. Euler methods are presented for analysis of hypersonic configurations, and helicopter rotors in hover and forward flight. Several of these projects were accepted for access to the Numerical Aerodynamic Simulation (NAS) facility at the NASA-Ames Research Center.

  14. Numerical investigation of tip clearance effects on the performance of ducted propeller

    NASA Astrophysics Data System (ADS)

    Ding, Yongle; Song, Baowei; Wang, Peng

    2015-09-01

    Tip clearance loss limits the improvement of turbomachine performance. Previous studies show that tip clearance loss is generated by the leakage flow through the tip clearance and is roughly linearly proportional to the gap size. This study investigates tip clearance effects on the performance of a ducted propeller. The investigation was carried out by solving the Navier-Stokes equations with the commercial Computational Fluid Dynamics (CFD) code CFX 14.5. These simulations were carried out to determine the underlying mechanisms of the tip clearance effects. The calculations were performed at three chosen advance ratios. Simulation results showed that the tip loss slope was not linear at high advance ratios due to the reversed pressure at the leading edge. Three types of vortical structures were observed in the tip clearance at different clearance sizes.

  15. Advanced Grid Simulator for Multi-Megawatt Power Converter Testing and Certification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koralewicz, Przemyslaw; Gevorgian, Vahan; Wallen, Robb

    2017-02-16

    Grid integration testing of inverter-coupled renewable energy technologies is an essential step in the qualification of renewable energy and energy storage systems to ensure the stability of the power system. New types of devices must be thoroughly tested and validated for compliance with relevant grid codes and interconnection requirements. For this purpose, highly specialized custom-made testing equipment is needed to emulate various types of realistic grid conditions that are required by certification bodies or for research purposes. For testing multi-megawatt converters, a high power grid simulator capable of creating controlled grid conditions and meeting both power quality and dynamic characteristics is needed. This paper describes a new grid simulator concept based on ABB's medium voltage ACS6000 drive technology that utilizes advanced modulation and control techniques to create a unique testing platform for various multi-megawatt power converter systems. Its performance is demonstrated utilizing the test results obtained during commissioning activities at the National Renewable Energy Laboratory in Colorado, USA.

  16. Advanced ST plasma scenario simulations for NSTX

    NASA Astrophysics Data System (ADS)

    Kessel, C. E.; Synakowski, E. J.; Bell, M. E.; Gates, D. A.; Harvey, R. W.; Kaye, S. M.; Mau, T. K.; Menard, J.; Phillips, C. K.; Taylor, G.; Wilson, R.; NSTX Research Team

    2005-08-01

    Integrated scenario simulations are done for NSTX that address four primary objectives for developing advanced spherical torus (ST) configurations: high β and high βN inductive discharges to study all aspects of ST physics in the high β regime; non-inductively sustained discharges for flattop times greater than the skin time to study the various current drive techniques; non-inductively sustained discharges at high β for flattop times much greater than a skin time, which provide the integrated advanced ST target for NSTX; and non-solenoidal startup and plasma current rampup. The simulations done here use the Tokamak Simulation Code and are based on discharge 109070. TRANSP analysis of the discharge provided the thermal diffusivities for electrons and ions, the neutral beam deposition profile and other characteristics. CURRAY is used to calculate the high harmonic fast wave (HHFW) heating depositions and current drive. GENRAY/CQL3D is used to establish the heating and CD deposition profiles for electron Bernstein waves (EBW). Analysis of the ideal MHD stability is done with JSOLVER, BALMSC and PEST2. The simulations indicate that the integrated advanced ST plasma is reachable, obtaining stable plasmas with βT ≈ 40% at βN values of 7.7-9, IP = 1.0 MA and BT = 0.35 T. The plasma is 100% non-inductive and has a flattop of four skin times. The resulting global energy confinement corresponds to a multiplier of H98(y,2) = 1.5. The simulations have demonstrated the importance of HHFW heating and CD, EBW off-axis CD, strong plasma shaping, density control and early heating/H-mode transition for producing and optimizing these plasma configurations.

  17. Methodology, status and plans for development and assessment of the code ATHLET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teschendorff, V.; Austregesilo, H.; Lerchl, G.

    1997-07-01

    The thermal-hydraulic computer code ATHLET (Analysis of THermal-hydraulics of LEaks and Transients) is being developed by the Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) for the analysis of anticipated and abnormal plant transients, small and intermediate leaks as well as large breaks in light water reactors. The aim of the code development is to cover the whole spectrum of design basis and beyond design basis accidents (without core degradation) for PWRs and BWRs with only one code. The main code features are: advanced thermal-hydraulics; modular code architecture; separation between physical models and numerical methods; pre- and post-processing tools; portability. The code has features that are of special interest for applications to small leaks and transients with accident management, e.g. initialization by a steady-state calculation, a full-range drift-flux model, and dynamic mixture level tracking. The General Control Simulation Module of ATHLET is a flexible tool for the simulation of the balance-of-plant and control systems including the various operator actions in the course of accident sequences with AM measures. The code development is accompanied by a systematic and comprehensive validation program. A large number of integral experiments and separate effect tests, including the major International Standard Problems, have been calculated by GRS and by independent organizations. The ATHLET validation matrix is a well balanced set of integral and separate effects tests derived from the CSNI proposal emphasizing, however, the German combined ECC injection system which was investigated in the UPTF, PKL and LOBI test facilities.

  18. TOPAS Tool for Particle Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perl, Joseph

    2013-05-30

    TOPAS lets users simulate the passage of subatomic particles moving through any kind of radiation therapy treatment system, can import a patient geometry, can record dose and other quantities, has advanced graphics, and is fully four-dimensional (3D plus time) to handle the most challenging time-dependent aspects of modern cancer treatments. TOPAS unlocks the power of the most accurate particle transport simulation technique, the Monte Carlo (MC) method, while removing the painstaking coding work such methods used to require. Research physicists can use TOPAS to improve delivery systems towards safer and more effective radiation therapy treatments, easily setting up and running complex simulations that previously took months of preparation. Clinical physicists can use TOPAS to increase accuracy while reducing side effects, simulating patient-specific treatment plans at the touch of a button. TOPAS is designed as a “user code” layered on top of the Geant4 Simulation Toolkit. TOPAS includes the standard Geant4 toolkit, plus additional code to make Geant4 easier to control and to extend Geant4 functionality. TOPAS aims to make proton simulation both “reliable” and “repeatable.” “Reliable” means both accurate physics and a high likelihood of simulating precisely what the user intended to simulate, reducing issues of wrong units, wrong materials, wrong scoring locations, etc. “Repeatable” means not just getting the same result from one simulation to another, but being able to easily restore a previously used setup and reducing sources of error when a setup is passed from one user to another. The TOPAS control system incorporates key lessons from safety management, proactively removing possible sources of user error such as line-ordering mistakes in control files.
TOPAS has been used to model proton therapy treatment examples including the UCSF eye treatment head, the MGH stereotactic alignment in radiosurgery treatment head and the MGH gantry treatment heads in passive scattering and scanning modes, and has demonstrated dose calculation based on patient-specific CT data.
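
    The idea of a control system that removes line-ordering mistakes can be sketched as an order-independent parameter parser: all lines are read into a table first, and references between parameters are resolved only afterwards, so the same resolved values result regardless of line order. The syntax and parameter names below are hypothetical, not TOPAS's actual parameter format:

```python
def parse_parameters(lines):
    """Parse 'name = value' control lines into a dict, ignoring order.

    Values may reference other parameters by name; references are
    resolved after all lines are read, so line ordering cannot change
    the result (hypothetical syntax for illustration only).
    """
    raw = {}
    for line in lines:
        line = line.split("#")[0].strip()   # drop comments and blanks
        if not line:
            continue
        name, value = (s.strip() for s in line.split("=", 1))
        raw[name] = value

    def resolve(name, seen=()):
        if name in seen:
            raise ValueError(f"circular reference: {name}")
        value = raw[name]
        return resolve(value, seen + (name,)) if value in raw else value

    return {name: resolve(name) for name in raw}

# The two orderings below yield identical resolved parameters.
a = parse_parameters(["beam_energy = 150.0", "scorer_bins = nbins", "nbins = 64"])
b = parse_parameters(["nbins = 64", "scorer_bins = nbins", "beam_energy = 150.0"])
```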

  19. A direct-execution parallel architecture for the Advanced Continuous Simulation Language (ACSL)

    NASA Technical Reports Server (NTRS)

    Carroll, Chester C.; Owen, Jeffrey E.

    1988-01-01

    A direct-execution parallel architecture for the Advanced Continuous Simulation Language (ACSL) is presented which overcomes the traditional disadvantages of simulations executed on a digital computer. The incorporation of parallel processing allows the mapping of simulations onto a digital computer to be done in the same inherently parallel manner as they are currently mapped onto an analog computer. The direct-execution format maximizes the efficiency of the executed code since the need for a high-level language compiler is eliminated. Resolution is greatly increased over that which is available with an analog computer, without the sacrifice in execution speed normally expected with digital computer simulations. Although this report covers all aspects of the new architecture, key emphasis is placed on the processing element configuration and the microprogramming of the ACSL constructs. The execution times for all ACSL constructs are computed using a model of a processing element based on the AMD 29000 CPU and the AMD 29027 FPU. The increase in execution speed provided by parallel processing is exemplified by comparing the derived execution times of two ACSL programs with the execution times for the same programs executed on a similar sequential architecture.

  20. Development and Assessment of CTF for Pin-resolved BWR Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salko, Robert K; Wysocki, Aaron J; Collins, Benjamin S

    2017-01-01

    CTF is the modernized and improved version of the subchannel code COBRA-TF. It has been adopted by the Consortium for Advanced Simulation of Light Water Reactors (CASL) for subchannel analysis applications and thermal-hydraulic feedback calculations in the Virtual Environment for Reactor Applications Core Simulator (VERA-CS). CTF is now jointly developed by Oak Ridge National Laboratory and North Carolina State University. Until now, CTF has been used for pressurized water reactor modeling and simulation in CASL, but in the future it will be extended to boiling water reactor designs. This required development activities to integrate the code into the VERA-CS workflow and to make it more efficient for full-core, pin-resolved simulations. Additionally, there is a significant emphasis in CASL on producing high-quality tools that follow a regimented software quality assurance plan. Part of this plan involves performing validation and verification assessments on the code that are easily repeatable and tied to specific code versions. This work has resulted in the CTF validation and verification matrix being expanded to include several two-phase flow experiments, including the General Electric 3×3 facility and the BWR Full-Size Fine-Mesh Bundle Tests (BFBT). Comparisons with both experimental databases are reasonable, but the BFBT analysis reveals a tendency of CTF to overpredict void, especially in the slug flow regime. The execution of these tests is fully automated, the analysis is documented in the CTF Validation and Verification manual, and the tests have become part of the CASL continuous regression testing system. This paper summarizes these recent developments and some of the two-phase assessments that have been performed on CTF.
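
    A repeatable validation assessment of the kind described above typically reduces code-to-data comparisons to summary statistics, where a positive mean residual flags the systematic void overprediction noted for BFBT. A minimal sketch with made-up numbers, illustrative rather than CASL's actual acceptance metric:

```python
def void_comparison_stats(predicted, measured):
    """Bias and RMS error between code-predicted and measured void
    fractions -- the kind of summary a repeatable validation suite
    might report for each test case."""
    n = len(predicted)
    residuals = [p - m for p, m in zip(predicted, measured)]
    bias = sum(residuals) / n                      # >0 means overprediction
    rms = (sum(r * r for r in residuals) / n) ** 0.5
    return bias, rms

# Positive bias flags a systematic overprediction of void fraction.
bias, rms = void_comparison_stats([0.45, 0.62, 0.78], [0.40, 0.58, 0.71])
```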

  1. Resonant scattering experiments with radioactive nuclear beams - Recent results and future plans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teranishi, T.; Sakaguchi, S.; Uesaka, T.

    2013-04-19

    Resonant scattering with low-energy radioactive nuclear beams of E < 5 MeV/u has been studied at CRIB of CNS and at RIPS of RIKEN. As an extension of the present experimental technique, we will install an advanced polarized proton target for resonant scattering experiments. A Monte Carlo simulation was performed to study the feasibility of future experiments with the polarized target. In the Monte Carlo simulation, excitation functions and analyzing powers were calculated using a newly developed R-matrix calculation code. A project for a small-scale radioactive beam facility at Kyushu University is also briefly described.
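
    The excitation functions computed by such an R-matrix code reduce, in the single-level limit, to the familiar Breit-Wigner resonance shape. A minimal sketch with illustrative resonance parameters, not the newly developed code itself:

```python
def breit_wigner(E, E_r, Gamma, sigma_0=1.0):
    """Single-level Breit-Wigner resonance shape, the simplest limit of
    an R-matrix excitation function.

    E       : center-of-mass energy [MeV]
    E_r     : resonance energy [MeV] (illustrative value below)
    Gamma   : total width [MeV]
    sigma_0 : peak cross section (normalization)
    """
    return sigma_0 * (Gamma / 2) ** 2 / ((E - E_r) ** 2 + (Gamma / 2) ** 2)

# The excitation function peaks at the resonance energy and falls to
# half maximum one half-width (Gamma/2) away.
peak = breit_wigner(2.0, E_r=2.0, Gamma=0.3)
half = breit_wigner(2.15, E_r=2.0, Gamma=0.3)
```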

  2. Advanced Computational Techniques for Hypersonic Propulsion

    NASA Technical Reports Server (NTRS)

    Povinelli, Louis A.

    1996-01-01

    CFD has played a major role in the resurgence of hypersonic flight, on the premise that numerical methods will allow us to perform simulations at conditions for which no ground test capability exists. Validation of CFD methods is being established using the experimental data base available, which is below Mach 8. It is important, however, to realize the limitations involved in the extrapolation process as well as the deficiencies that exist in numerical methods at the present time. Current features of CFD codes are examined for application to propulsion system components. The shortcomings in simulation and modeling are identified and discussed.

  3. Concurrent design of an RTP chamber and advanced control system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spence, P.; Schaper, C.; Kermani, A.

    1995-12-31

    A concurrent-engineering approach is applied to the development of an axisymmetric rapid-thermal-processing (RTP) reactor and its associated temperature controller. Using a detailed finite-element thermal model as a surrogate for actual hardware, the authors have developed and tested a multi-input multi-output (MIMO) controller. Closed-loop simulations are performed by linking the control algorithm with the finite-element code. Simulations show that good temperature uniformity is maintained on the wafer during both steady and transient conditions. A numerical study shows the effect of ramp rate, feedback gain, sensor placement, and wafer-emissivity patterns on system performance.

  4. TOPICAL REVIEW: Advances and challenges in computational plasma science

    NASA Astrophysics Data System (ADS)

    Tang, W. M.; Chan, V. S.

    2005-02-01

    Scientific simulation, which provides a natural bridge between theory and experiment, is an essential tool for understanding complex plasma behaviour. Recent advances in simulations of magnetically confined plasmas are reviewed in this paper, with illustrative examples, chosen from associated research areas such as microturbulence, magnetohydrodynamics and other topics. Progress has been stimulated, in particular, by the exponential growth of computer speed along with significant improvements in computer technology. The advances in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics have produced increasingly good agreement between experimental observations and computational modelling. This was enabled by two key factors: (a) innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning widely disparate temporal and spatial scales and (b) access to powerful new computational resources. Excellent progress has been made in developing codes for which computer run-time and problem-size scale well with the number of processors on massively parallel processors (MPPs). Examples include the effective usage of the full power of multi-teraflop (multi-trillion floating point computations per second) MPPs to produce three-dimensional, general geometry, nonlinear particle simulations that have accelerated advances in understanding the nature of turbulence self-regulation by zonal flows. These calculations, which typically utilized billions of particles for thousands of time-steps, would not have been possible without access to powerful present generation MPP computers and the associated diagnostic and visualization capabilities. In looking towards the future, the current results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. 
This should produce the scientific excitement which will help to (a) stimulate enhanced cross-cutting collaborations with other fields and (b) attract the bright young talent needed for the future health of the field of plasma science.

  5. Advances and challenges in computational plasma science

    NASA Astrophysics Data System (ADS)

    Tang, W. M.

    2005-02-01

    Scientific simulation, which provides a natural bridge between theory and experiment, is an essential tool for understanding complex plasma behaviour. Recent advances in simulations of magnetically confined plasmas are reviewed in this paper, with illustrative examples, chosen from associated research areas such as microturbulence, magnetohydrodynamics and other topics. Progress has been stimulated, in particular, by the exponential growth of computer speed along with significant improvements in computer technology. The advances in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics have produced increasingly good agreement between experimental observations and computational modelling. This was enabled by two key factors: (a) innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning widely disparate temporal and spatial scales and (b) access to powerful new computational resources. Excellent progress has been made in developing codes for which computer run-time and problem-size scale well with the number of processors on massively parallel processors (MPPs). Examples include the effective usage of the full power of multi-teraflop (multi-trillion floating point computations per second) MPPs to produce three-dimensional, general geometry, nonlinear particle simulations that have accelerated advances in understanding the nature of turbulence self-regulation by zonal flows. These calculations, which typically utilized billions of particles for thousands of time-steps, would not have been possible without access to powerful present generation MPP computers and the associated diagnostic and visualization capabilities. In looking towards the future, the current results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. 
This should produce the scientific excitement which will help to (a) stimulate enhanced cross-cutting collaborations with other fields and (b) attract the bright young talent needed for the future health of the field of plasma science.

  6. Gyrokinetic simulation of driftwave instability in field-reversed configuration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fulton, D. P., E-mail: dfulton@trialphaenergy.com; University of California, Irvine, California 92697; Lau, C. K.

    2016-05-15

    Following the recent remarkable progress in magnetohydrodynamic (MHD) stability control in the C-2U advanced beam driven field-reversed configuration (FRC), turbulent transport has become one of the foremost obstacles on the path towards an FRC-based fusion reactor. Significant effort has been made to expand kinetic simulation capabilities in FRC magnetic geometry. The recently upgraded Gyrokinetic Toroidal Code (GTC) now accommodates realistic magnetic geometry from the C-2U experiment at Tri Alpha Energy, Inc. and is optimized to efficiently handle the FRC's magnetic field line orientation. Initial electrostatic GTC simulations find that ion-scale instabilities are linearly stable in the FRC core for realistic pressure gradient drives. Estimated instability thresholds from linear GTC simulations are qualitatively consistent with critical gradients determined from experimental Doppler backscattering fluctuation data, which also find ion scale modes to be depressed in the FRC core. Beyond GTC, A New Code (ANC) has been developed to accurately resolve the magnetic field separatrix and address the interaction between the core and scrape-off layer regions, which ultimately determines global plasma confinement in the FRC. The current status of ANC and future development targets are discussed.

  7. Gyrokinetic simulation of driftwave instability in field-reversed configuration

    NASA Astrophysics Data System (ADS)

    Fulton, D. P.; Lau, C. K.; Schmitz, L.; Holod, I.; Lin, Z.; Tajima, T.; Binderbauer, M. W.

    2016-05-01

    Following the recent remarkable progress in magnetohydrodynamic (MHD) stability control in the C-2U advanced beam driven field-reversed configuration (FRC), turbulent transport has become one of the foremost obstacles on the path towards an FRC-based fusion reactor. Significant effort has been made to expand kinetic simulation capabilities in FRC magnetic geometry. The recently upgraded Gyrokinetic Toroidal Code (GTC) now accommodates realistic magnetic geometry from the C-2U experiment at Tri Alpha Energy, Inc. and is optimized to efficiently handle the FRC's magnetic field line orientation. Initial electrostatic GTC simulations find that ion-scale instabilities are linearly stable in the FRC core for realistic pressure gradient drives. Estimated instability thresholds from linear GTC simulations are qualitatively consistent with critical gradients determined from experimental Doppler backscattering fluctuation data, which also find ion scale modes to be depressed in the FRC core. Beyond GTC, A New Code (ANC) has been developed to accurately resolve the magnetic field separatrix and address the interaction between the core and scrape-off layer regions, which ultimately determines global plasma confinement in the FRC. The current status of ANC and future development targets are discussed.

  8. MO-E-18C-04: Advanced Computer Simulation and Visualization Tools for Enhanced Understanding of Core Medical Physics Concepts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naqvi, S

    2014-06-15

    Purpose: Most medical physics programs emphasize proficiency in routine clinical calculations and QA. The formulaic aspect of these calculations and the prescriptive nature of measurement protocols obviate the need to frequently apply basic physical principles, which therefore gradually decay away from memory. For example, few students appreciate the role of electron transport in photon dose, making it difficult to understand key concepts such as dose buildup, electronic disequilibrium effects and Bragg-Gray theory. These conceptual deficiencies manifest when the physicist encounters a new system requiring knowledge beyond routine activities. Methods: Two interactive computer simulation tools are developed to facilitate deeper learning of physical principles. One is a Monte Carlo code written with a strong educational aspect. The code can “label” regions and interactions to highlight specific aspects of the physics, e.g., certain regions can be designated as “starters” or “crossers,” and any interaction type can be turned on and off. Full 3D tracks with specific portions highlighted further enhance the visualization of radiation transport problems. The second code calculates and displays trajectories of a collection of electrons under an arbitrary space- and time-dependent Lorentz force using relativistic kinematics. Results: Using the Monte Carlo code, the student can interactively study photon and electron transport through visualization of dose components, particle tracks, and interaction types. The code can, for instance, be used to study the kerma-dose relationship, explore electronic disequilibrium near interfaces, or visualize kernels by using interaction forcing. The electromagnetic simulator enables the student to explore accelerating mechanisms and particle optics in devices such as cyclotrons and linacs. Conclusion: The proposed tools are designed to enhance understanding of abstract concepts by highlighting various aspects of the physics.
The simulations serve as virtual experiments that give deeper and longer-lasting understanding of core principles. The student can then make sound judgements in novel situations encountered beyond routine clinical activities.
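
    A trajectory code of the kind described, which advances electrons under a Lorentz force with relativistic kinematics, is commonly built around the Boris push. The sketch below shows one relativistic Boris step in normalized units; it illustrates the standard textbook scheme, not necessarily the authors' implementation:

```python
import math

def cross(a, b):
    """3-vector cross product."""
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def boris_push(p, E, B, q=1.0, m=1.0, dt=1e-3, c=1.0):
    """One relativistic Boris step for momentum p in fields E, B
    (normalized units). Half electric kick, magnetic rotation,
    half electric kick."""
    p_minus = [pi + q * dt / 2 * Ei for pi, Ei in zip(p, E)]
    gamma = math.sqrt(1 + sum(pi * pi for pi in p_minus) / (m * c) ** 2)
    t = [q * dt / (2 * m * gamma) * Bi for Bi in B]
    t2 = sum(ti * ti for ti in t)
    s = [2 * ti / (1 + t2) for ti in t]
    p_prime = [pi + ci for pi, ci in zip(p_minus, cross(p_minus, t))]
    p_plus = [pi + ci for pi, ci in zip(p_minus, cross(p_prime, s))]
    return [pi + q * dt / 2 * Ei for pi, ci, Ei in zip(p_plus, p_plus, E)]

# With E = 0 the magnetic rotation preserves |p| (and hence energy)
# to machine precision -- the property that makes Boris robust.
p0 = [1.0, 0.0, 0.5]
p1 = boris_push(p0, E=[0.0, 0.0, 0.0], B=[0.0, 0.0, 2.0])
```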

  9. Cogeneration computer model assessment: Advanced cogeneration research study

    NASA Technical Reports Server (NTRS)

    Rosenberg, L.

    1983-01-01

    Cogeneration computer simulation models were assessed to recommend the most desirable models or their components for use by the Southern California Edison Company (SCE) in evaluating potential cogeneration projects. Existing cogeneration modeling capabilities are described, preferred models are identified, and an approach to the development of a code which will best satisfy SCE requirements is recommended. Five models (CELCAP, COGEN 2, CPA, DEUS, and OASIS) are recommended for further consideration.

  10. A Cockpit Display Designed to Enable Limited Flight Deck Separation Responsibility

    NASA Technical Reports Server (NTRS)

    Johnson, Walter W.; Battiste, Vernol; Bochow, Sheila Holland

    2003-01-01

    Cockpit displays need to be substantially improved to serve the goals of situational awareness, conflict detection, and path replanning in Free Flight. This paper describes the design of such an advanced cockpit display, along with an initial simulation-based usability evaluation. Flight crews were particularly enthusiastic about color coding for relative altitude, dynamically pulsing predictors, and the use of 3-D flight plans for alerting and situational awareness.

  11. Influence of impact conditions on plasma generation during hypervelocity impact by aluminum projectile

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Weidong, E-mail: swdgh@bit.edu.cn; Lv, Yangtao; Li, Jianqiao

    2016-07-15

    For describing the hypervelocity impact phenomenon associated with plasma generation (at speeds that are low relative to meteoroid travel speeds but relevant to space debris), a self-developed 3D code was used to numerically simulate projectiles impacting a rigid wall. The numerical results were combined with a new ionization model, developed in an earlier study, to calculate the ionized materials during the impact. The calculated ionization results were compared with empirical formulas derived from experiments in the references, and good agreement was obtained. Then, based on the reliable 3D numerical code, a series of impacts with different projectile configurations were simulated to investigate the influence of impact conditions on hypervelocity-impact-generated plasma. It was found that the form of the empirical formula needed to be modified. A new empirical formula with a critical impact velocity was proposed to describe the velocity dependence of plasma generation, and the parameters of the modified formula were determined by comparison between the numerical predictions and the empirical formulas. For different projectile configurations, the time histories of the plasma charge differ, but the time integrals of the charge remained at nearly the same level.

  12. A CFD analysis of blade row interactions within a high-speed axial compressor

    NASA Astrophysics Data System (ADS)

    Richman, Michael Scott

    Aircraft engine design provides many technical and financial hurdles. In an effort to streamline the design process, save money, and improve reliability and performance, many manufacturers are relying on computational fluid dynamic simulations. An overarching goal of the design process for military aircraft engines is to reduce size and weight while maintaining (or improving) reliability. Designers often turn to the compression system to accomplish this goal. As pressure ratios increase and the number of compression stages decreases, many problems arise; for example, stability and high-cycle fatigue (HCF) become significant as individual stage loading is increased. CFD simulations have recently been employed to assist in the understanding of the aeroelastic problems. For accurate multistage blade row HCF prediction, it is imperative that advanced three-dimensional blade row unsteady aerodynamic interaction codes be validated with appropriate benchmark data. This research addresses this required validation process for TURBO, an advanced three-dimensional multi-blade-row turbomachinery CFD code. The solution/prediction accuracy is characterized, identifying key flow field parameters driving the inlet guide vane (IGV) and stator response to the rotor-generated forcing functions. The result is a quantified evaluation of the ability of TURBO to predict not only the fundamental flow field characteristics but also the three-dimensional blade loading.

  13. AN ADVANCED LEAKAGE SCHEME FOR NEUTRINO TREATMENT IN ASTROPHYSICAL SIMULATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perego, A.; Cabezón, R. M.; Käppeli, R., E-mail: albino.perego@physik.tu-darmstadt.de

    We present an Advanced Spectral Leakage (ASL) scheme to model neutrinos in the context of core-collapse supernovae (CCSNe) and compact binary mergers. Based on previous gray leakage schemes, the ASL scheme computes the neutrino cooling rates by interpolating local production and diffusion rates (relevant in optically thin and thick regimes, respectively) separately for discretized values of the neutrino energy. Neutrino trapped components are also modeled, based on equilibrium and timescale arguments. The better accuracy achieved by the spectral treatment allows a more reliable computation of neutrino heating rates in optically thin conditions. The scheme has been calibrated and tested against Boltzmann transport in the context of Newtonian spherically symmetric models of CCSNe. ASL shows very good qualitative and partial quantitative agreement for key quantities from collapse to a few hundred milliseconds after core bounce. We have demonstrated the adaptability and flexibility of our ASL scheme by coupling it to an axisymmetric Eulerian code and to a three-dimensional smoothed particle hydrodynamics code to simulate core collapse. Therefore, the neutrino treatment presented here is ideal for large parameter-space explorations, parametric studies, high-resolution tests, code developments, and long-term modeling of asymmetric configurations, where more detailed neutrino treatments are not available or are currently computationally too expensive.
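
    The core of a leakage scheme, interpolating between production-limited (optically thin) and diffusion-limited (optically thick) loss rates separately per energy bin, can be sketched with a smooth harmonic combination of the two rates. This illustrates the general approach; the actual ASL interpolation formula may differ:

```python
def effective_loss_rate(R_prod, R_diff):
    """Per-energy-bin interpolation between production rates (which
    limit losses in optically thin regions) and diffusion rates (which
    limit losses in optically thick regions) via a harmonic combination.
    A sketch of the leakage idea, not the published ASL formula."""
    return [p * d / (p + d) for p, d in zip(R_prod, R_diff)]

# Bin 0: fast diffusion (thin) -> loss is production-limited (~1.0).
# Bin 1: slow diffusion (thick) -> loss is diffusion-limited (~0.01).
rates = effective_loss_rate(R_prod=[1.0, 1.0], R_diff=[100.0, 0.01])
```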

  14. Coded-aperture Compton camera for gamma-ray imaging

    NASA Astrophysics Data System (ADS)

    Farber, Aaron M.

    This dissertation describes the development of a novel gamma-ray imaging system concept and presents results from Monte Carlo simulations of the new design. Current designs for large field-of-view gamma cameras suitable for homeland security applications implement either a coded aperture or a Compton scattering geometry to image a gamma-ray source. Both of these systems require large, expensive position-sensitive detectors in order to work effectively. By combining characteristics of both of these systems, a new design can be implemented that does not require such expensive detectors and that can be scaled down to a portable size. This new system has significant promise in homeland security, astronomy, botany and other fields, while future iterations may prove useful in medical imaging, other biological sciences and other areas, such as non-destructive testing. A proof-of-principle study of the new gamma-ray imaging system has been performed by Monte Carlo simulation. Various reconstruction methods have been explored and compared. General-Purpose Graphics-Processor-Unit (GPGPU) computation has also been incorporated. The resulting code is a primary design tool for exploring variables such as detector spacing, material selection and thickness, and pixel geometry. The advancement of the system from a simple 1-dimensional simulation to a full 3-dimensional model is described. Methods of image reconstruction are discussed, and results of simulations with both 4 × 4 and 16 × 16 object-space meshes are presented. A discussion of the limitations and potential areas of further study is also presented.
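
    The coded-aperture half of such a system can be illustrated in one dimension: a point source casts a shifted copy of the mask pattern onto the detector, and cross-correlating the detector counts with the mask recovers the source position. A toy sketch (a real instrument uses a 2-D URA mask and more careful decoding):

```python
def correlate(detector, mask):
    """Circular cross-correlation used to decode a 1-D coded-aperture
    image: the shift that best matches the mask marks the source."""
    n = len(mask)
    return [sum(detector[(i + s) % n] * mask[i] for i in range(n))
            for s in range(n)]

mask = [1, 0, 1, 1, 0, 1, 0]     # illustrative open/closed pattern
shift = 3                         # true source position (pixels)

# A point source shifts the mask's shadow onto the detector...
detector = [mask[(i - shift) % len(mask)] for i in range(len(mask))]

# ...and the correlation peak recovers the source position.
decoded = correlate(detector, mask)
best = decoded.index(max(decoded))
```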

  15. New methods to benchmark simulations of accreting black holes systems against observations

    NASA Astrophysics Data System (ADS)

    Markoff, Sera; Chatterjee, Koushik; Liska, Matthew; Tchekhovskoy, Alexander; Hesp, Casper; Ceccobello, Chiara; Russell, Thomas

    2017-08-01

    The field of black hole accretion has been significantly advanced by the use of complex ideal general relativistic magnetohydrodynamics (GRMHD) codes, now capable of simulating scales from the event horizon out to ~10^5 gravitational radii at high resolution. The challenge remains how to test these simulations against data, because the self-consistent treatment of radiation is still in its early days and is complicated by dependence on non-ideal/microphysical processes not yet included in the codes. At the other extreme, a variety of phenomenological models (disk, corona, jet, wind) can describe spectra or variability signatures well in a particular waveband, although often not both. To bring these two methodologies together, we need robust observational “benchmarks” that can be identified and studied in simulations. I will focus on one example of such a benchmark, from recent observational campaigns on black holes across the mass scale: the jet break. I will describe new work attempting to understand what drives this feature by searching for regions that share similar trends in terms of dependence on accretion power or magnetisation. Such methods can allow early tests of simulation assumptions and help pinpoint which regions will dominate the light production, well before full radiative processes are incorporated, and will help guide the interpretation of, e.g., Event Horizon Telescope data.

  16. Development of Spectral and Atomic Models for Diagnosing Energetic Particle Characteristics in Fast Ignition Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MacFarlane, Joseph J.; Golovkin, I. E.; Woodruff, P. R.

    2009-08-07

    This Final Report summarizes work performed under DOE STTR Phase II Grant No. DE-FG02-05ER86258 during the project period from August 2006 to August 2009. The project, “Development of Spectral and Atomic Models for Diagnosing Energetic Particle Characteristics in Fast Ignition Experiments,” was led by Prism Computational Sciences (Madison, WI), and involved collaboration with subcontractors University of Nevada-Reno and Voss Scientific (Albuquerque, NM). In this project, we have: Developed and implemented a multi-dimensional, multi-frequency radiation transport model in the LSP hybrid fluid-PIC (particle-in-cell) code [1,2]. Updated the LSP code to support the use of accurate equation-of-state (EOS) tables generated by Prism’s PROPACEOS [3] code to compute more accurate temperatures in high energy density physics (HEDP) plasmas. Updated LSP to support the use of Prism’s multi-frequency opacity tables. Generated equation of state and opacity data for LSP simulations for several materials being used in plasma jet experimental studies. Developed and implemented parallel processing techniques for the radiation physics algorithms in LSP. Benchmarked the new radiation transport and radiation physics algorithms in LSP and compared simulation results with analytic solutions and results from numerical radiation-hydrodynamics calculations. Performed simulations using Prism radiation physics codes to address issues related to radiative cooling and ionization dynamics in plasma jet experiments. Performed simulations to study the effects of radiation transport and radiation losses due to electrode contaminants in plasma jet experiments. Updated the LSP code to generate output using NetCDF to provide a better, more flexible interface to SPECT3D [4] in order to post-process LSP output. Updated the SPECT3D code to better support the post-processing of large-scale 2-D and 3-D datasets generated by simulation codes such as LSP.
Updated atomic physics modeling to provide for more comprehensive and accurate atomic databases that feed into the radiation physics modeling (spectral simulations and opacity tables). Developed polarization spectroscopy modeling techniques suitable for diagnosing energetic particle characteristics in HEDP experiments. A description of these items is provided in this report. The above efforts lay the groundwork for utilizing the LSP and SPECT3D codes in providing simulation support for DOE-sponsored HEDP experiments, such as plasma jet and fast ignition physics experiments. We believe that, taken together, the LSP and SPECT3D codes have unique capabilities for advancing our understanding of the physics of these HEDP plasmas. Based on conversations early in this project with our DOE program manager, Dr. Francis Thio, our efforts emphasized developing radiation physics and atomic modeling capabilities that can be utilized in the LSP PIC code, and performing radiation physics studies for plasma jets. A relatively minor component focused on the development of methods to diagnose energetic particle characteristics in short-pulse laser experiments related to fast ignition physics. The period of performance for the grant was extended by one year to August 2009 with a one-year no-cost extension, at the request of subcontractor University of Nevada-Reno.

  17. Dynamics of Magnetopause Reconnection in Response to Variable Solar Wind Conditions

    NASA Astrophysics Data System (ADS)

    Berchem, J.; Richard, R. L.; Escoubet, C. P.; Pitout, F.

    2017-12-01

    Quantifying the dynamics of magnetopause reconnection in response to variable solar wind driving is essential to advancing our predictive understanding of the interaction of the solar wind/IMF with the magnetosphere. To this end we have carried out numerical studies that combine global magnetohydrodynamic (MHD) and Large-Scale Kinetic (LSK) simulations to identify and understand the effects of solar wind/IMF variations. The low-dissipation, high-resolution UCLA MHD code, which incorporates a non-linear local resistivity, represents the global configuration of the dayside magnetosphere, while LSK ion test-particle codes with distributed particle detectors allow us to compare the simulation results with spacecraft observations, such as the ion dispersion signatures observed by the Cluster spacecraft. We present the results of simulations that focus on the impacts of relatively simple solar wind discontinuities on the magnetopause and examine how the recent history of the interaction of the magnetospheric boundary with solar wind discontinuities can modify the dynamics of magnetopause reconnection in response to the solar wind input.

  18. The accurate particle tracer code

    NASA Astrophysics Data System (ADS)

    Wang, Yulei; Liu, Jian; Qin, Hong; Yu, Zhi; Yao, Yicun

    2017-11-01

    The Accurate Particle Tracer (APT) code is designed for systematic large-scale applications of geometric algorithms to particle dynamical simulations. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and nonlinear problems. To provide a flexible and convenient I/O interface, the Lua and HDF5 libraries are used. Following a three-step procedure, users can efficiently extend the libraries of electromagnetic configurations, external non-electromagnetic forces, particle pushers, and initialization approaches through the extensible module. APT has been used in simulations of key physical problems, such as runaway electrons in tokamaks and energetic particles in the Van Allen belts. As an important realization, the APT-SW version has been successfully deployed on the world's fastest computer, the Sunway TaihuLight supercomputer, by supporting the master-slave architecture of the Sunway many-core processors. Large-scale simulations of a runaway beam under ITER tokamak parameters reveal that the magnetic ripple field can significantly disperse the pitch-angle distribution while at the same time improving the confinement of the energetic runaway beam.

  19. Capturing atmospheric effects on 3D millimeter wave radar propagation patterns

    NASA Astrophysics Data System (ADS)

    Cook, Richard D.; Fiorino, Steven T.; Keefer, Kevin J.; Stringer, Jeremy

    2016-05-01

    Traditional radar propagation modeling is done using a path transmittance with little to no input for weather and atmospheric conditions. As radar advances into the millimeter wave (MMW) regime, atmospheric effects such as attenuation and refraction become more pronounced than at traditional radar wavelengths. The DoD High Energy Laser Joint Technology Office's High Energy Laser End-to-End Operational Simulation (HELEEOS), in combination with the Laser Environmental Effects Definition and Reference (LEEDR) code, has shown great promise in simulating atmospheric effects on laser propagation. Indeed, the LEEDR radiative transfer code has been validated from the UV through the RF. Our research applies these models to characterize the far-field radar pattern in three dimensions as a signal propagates from an antenna toward a point in space, using realistic three-dimensional atmospheric profiles. The results from these simulations are compared with those from traditional radar propagation software packages. In summary, a fast-running method has been investigated that can be incorporated into computational models to enhance understanding and prediction of MMW propagation through various atmospheric and weather conditions.
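    The "path transmittance" treatment the authors contrast with full atmospheric modeling amounts to Beer-Lambert attenuation accumulated along the path. A minimal sketch (the layer lengths and extinction coefficients are hypothetical inputs, not values from HELEEOS or LEEDR):

```python
def path_transmittance(layer_lengths_km, extinction_db_per_km):
    """One-way transmittance of a path through atmospheric layers,
    accumulating specific attenuation (dB/km) layer by layer and
    converting the total loss in dB to a linear transmittance."""
    total_db = sum(length * alpha
                   for length, alpha in zip(layer_lengths_km,
                                            extinction_db_per_km))
    return 10.0 ** (-total_db / 10.0)
```

    A weather-aware model replaces the fixed per-layer coefficients with values computed from the local temperature, humidity, and hydrometeor content along a realistic 3D profile, which is the step the abstract describes.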

  20. Modeling Plasma Turbulence and Flows in LAPD using BOUT++

    NASA Astrophysics Data System (ADS)

    Friedman, B.; Carter, T. A.; Schaffner, D.; Popovich, P.; Umansky, M. V.; Dudson, B.

    2010-11-01

    A Braginskii fluid model of plasma turbulence in the BOUT code has recently been applied to LAPD at UCLA [1]. While these initial simulations with a reduced model and periodic axial boundary conditions have shown good agreement with measurements (e.g. power spectrum, correlation lengths), they have lacked physics essential for modeling self-consistent, quantitatively correct flows. In particular, the model did not contain parallel plasma flow induced by sheath boundary conditions, and the axisymmetric radial electric field was not consistent with experiment. This work addresses these issues by extending the simulation model in the BOUT++ code [2], a more advanced version of BOUT. Specifically, end-plate sheath boundary conditions are added, as well as equations to evolve the electron temperature and parallel ion velocity. Finally, various techniques are used to attempt to match the experimental electric potential profile, including fixing an equilibrium profile, fixing the radial boundaries, and adding an angular momentum source. [1] Popovich et al., http://arxiv.org/abs/1005.2418 (2010). [2] Dudson et al., Computer Physics Communications 180 (2009).

  1. High-resolution multi-code implementation of unsteady Navier-Stokes flow solver based on paralleled overset adaptive mesh refinement and high-order low-dissipation hybrid schemes

    NASA Astrophysics Data System (ADS)

    Li, Gaohua; Fu, Xiang; Wang, Fuxin

    2017-10-01

    The low-dissipation, high-order-accurate hybrid upwind/central scheme based on fifth-order weighted essentially non-oscillatory (WENO) and sixth-order central schemes, together with the Spalart-Allmaras (SA)-based delayed detached eddy simulation (DDES) turbulence model and flow-feature-based adaptive mesh refinement (AMR), is implemented in a dual-mesh overset grid infrastructure with parallel computing capabilities, for the purpose of simulating vortex-dominated unsteady detached wake flows at high spatial resolution. The overset grid assembly (OGA) process, based on collision detection theory and an implicit hole-cutting algorithm, automatically couples the near-body and off-body solvers, and a trial-and-error method is used to obtain a globally balanced load distribution among the constituent codes. Results for flow over a cylinder at high Reynolds number and over a two-bladed helicopter rotor show that the combination of the high-order hybrid scheme, the advanced turbulence model, and overset adaptive mesh refinement effectively enhances the spatial resolution of simulations of turbulent wake eddies.

  2. Characterization of Proxy Application Performance on Advanced Architectures. UMT2013, MCB, AMG2013

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Howell, Louis H.; Gunney, Brian T.; Bhatele, Abhinav

    2015-10-09

    Three codes were tested at LLNL as part of a Tri-Lab effort to make detailed assessments of several proxy applications on various advanced architectures, with the eventual goal of extending these assessments to codes of programmatic interest running more realistic simulations. Teams from Sandia and Los Alamos tested proxy apps of their own. The focus in this report is on the LLNL codes UMT2013, MCB, and AMG2013. We present weak and strong MPI scaling results and studies of OpenMP efficiency on a large BG/Q system at LLNL, with comparison against similar tests on an Intel Sandy Bridge TLCC2 system. The hardware counters on BG/Q provide detailed information on many aspects of on-node performance, while information from the mpiP tool gives insight into the reasons for the differing scaling behavior on these two different architectures. Results from three more speculative tests are also included: one that exploits NVRAM as extended memory, one that studies performance under a power bound, and one that illustrates the effects of changing the torus network mapping on BG/Q.
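    The weak and strong scaling results referenced above rest on the standard parallel-efficiency definitions; as a quick reference (the function names are ours, not the report's):

```python
def strong_scaling_efficiency(t1, tp, p):
    """Strong scaling: fixed total problem size.
    Efficiency = speedup / p = (t1 / tp) / p, where t1 is the
    single-process runtime and tp the runtime on p processes."""
    return (t1 / tp) / p

def weak_scaling_efficiency(t1, tp):
    """Weak scaling: fixed problem size per process, so the ideal
    runtime is constant as p grows.  Efficiency = t1 / tp."""
    return t1 / tp
```

    An efficiency of 1.0 is ideal in both cases; the gap between the two machines' curves is what the hardware-counter and mpiP data in the report are used to explain.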

  3. Constructing Neuronal Network Models in Massively Parallel Environments.

    PubMed

    Ippen, Tammo; Eppler, Jochen M; Plesser, Hans E; Diesmann, Markus

    2017-01-01

    Recent advances in the development of data structures to represent spiking neuron network models enable us to exploit the complete memory of petascale computers for a single brain-scale network simulation. In this work, we investigate how well we can exploit the computing power of such supercomputers for the creation of neuronal networks. Using an established benchmark, we divide the runtime of simulation code into the phase of network construction and the phase during which the dynamical state is advanced in time. We find that on multi-core compute nodes network creation scales well with process-parallel code but exhibits a prohibitively large memory consumption. Thread-parallel network creation, in contrast, exhibits speedup only up to a small number of threads but has little overhead in terms of memory. We further observe that the algorithms creating instances of model neurons and their connections scale well for networks of ten thousand neurons, but do not show the same speedup for networks of millions of neurons. Our work uncovers that the lack of scaling of thread-parallel network creation is due to inadequate memory allocation strategies and demonstrates that thread-optimized memory allocators recover excellent scaling. An analysis of the loop order used for network construction reveals that more complex tests on the locality of operations significantly improve scaling and reduce runtime by allowing construction algorithms to step through large networks more efficiently than in existing code. The combination of these techniques increases performance by an order of magnitude and harnesses the increasingly parallel compute power of the compute nodes in high-performance clusters and supercomputers.
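    The process-parallel construction phase discussed above can be illustrated with the usual round-robin ownership rule, where each rank instantiates only the neurons it owns and only the connections targeting them, so construction needs no communication. A simplified sketch (not the code used in the study; the function and its data layout are hypothetical):

```python
def build_local_network(rank, n_ranks, n_neurons, connections):
    """Each process owns neurons round-robin (gid % n_ranks == rank)
    and stores only connections whose *target* neuron is local, so the
    full network is partitioned across ranks without overlap."""
    local_neurons = [gid for gid in range(n_neurons)
                     if gid % n_ranks == rank]
    local_connections = [(src, tgt) for src, tgt in connections
                         if tgt % n_ranks == rank]
    return local_neurons, local_connections
```

    Iterating the connection list once per rank is exactly the kind of loop whose order and locality tests the abstract reports optimizing for large networks.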

  5. Standalone BISON Fuel Performance Results for Watts Bar Unit 1, Cycles 1-3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clarno, Kevin T.; Pawlowski, Roger; Stimpson, Shane

    2016-03-07

    The Consortium for Advanced Simulation of Light Water Reactors (CASL) is moving forward with more complex multiphysics simulations and an increased focus on incorporating fuel performance analysis methods. The coupled neutronics/thermal-hydraulics capabilities within the Virtual Environment for Reactor Applications Core Simulator (VERA-CS) have become relatively stable, and major advances have been made in analysis efforts, including the simulation of twelve cycles of Watts Bar Nuclear Unit 1 (WBN1) operation. While this is a major achievement, the VERA-CS approaches for treating fuel pin heat transfer have well-known limitations that could be eliminated through better integration with the BISON fuel performance code. Several approaches are being implemented to consider fuel performance, including more direct multiway coupling with Tiamat, as well as a more loosely coupled one-way approach with standalone BISON cases. Fuel performance typically undergoes an independent analysis using a standalone fuel performance code with manually specified input defined from an independent core simulator solution or set of assumptions. This report summarizes the improvements made since the initial milestone to execute BISON from VERA-CS output. Many of these improvements were prompted through tighter collaboration with the BISON development team at Idaho National Laboratory (INL). A brief description of WBN1 and some of the VERA-CS data used to simulate it are presented. Data from a small mesh sensitivity study are shown, which helps justify the mesh parameters used in this work. The multi-cycle results are presented, followed by the results for the first three cycles of WBN1 operation, particularly the parameters of interest for pellet-clad interaction (PCI) screening (fuel-clad gap closure, maximum centerline fuel temperature, maximum/minimum clad hoop stress, and cumulative damage index). Once the mechanics of this capability are functioning, future work will target cycles with known or suspected PCI failures to determine how well they can be estimated.

  6. Tinker-HP: a massively parallel molecular dynamics package for multiscale simulations of large complex systems with advanced point dipole polarizable force fields.

    PubMed

    Lagardère, Louis; Jolly, Luc-Henri; Lipparini, Filippo; Aviat, Félix; Stamm, Benjamin; Jing, Zhifeng F; Harger, Matthew; Torabifard, Hedieh; Cisneros, G Andrés; Schnieders, Michael J; Gresh, Nohad; Maday, Yvon; Ren, Pengyu Y; Ponder, Jay W; Piquemal, Jean-Philip

    2018-01-28

    We present Tinker-HP, a massively MPI-parallel package dedicated to classical molecular dynamics (MD) and to multiscale simulations using advanced polarizable force fields (PFF) encompassing distributed multipole electrostatics. Tinker-HP is an evolution of the popular Tinker package that conserves its simplicity of use and its reference double-precision CPU implementation. Grounded in interdisciplinary efforts with applied mathematics, Tinker-HP allows for long polarizable MD simulations on large systems of up to millions of atoms. We detail the newly developed extension of massively parallel 3D spatial decomposition to point dipole polarizable models, as well as its coupling to efficient Krylov iterative and non-iterative polarization solvers. The design of the code allows the use of computer systems ranging from laboratory workstations to modern petascale supercomputers with thousands of cores. Tinker-HP therefore provides the first high-performance, scalable CPU computing environment for the development of next-generation point dipole PFFs and for production simulations. Strategies linking Tinker-HP to quantum mechanics (QM) in the framework of multiscale polarizable self-consistent QM/MD simulations are also provided. The possibilities, performance and scalability of the software are demonstrated via benchmark calculations using the polarizable AMOEBA force field on systems ranging from large water boxes of increasing size and ionic liquids to (very) large biosystems encompassing several proteins as well as the complete satellite tobacco mosaic virus and ribosome structures. For small systems, Tinker-HP is competitive with the Tinker-OpenMM GPU implementation of Tinker. As the system size grows, Tinker-HP remains operational thanks to its access to distributed memory and takes advantage of its new algorithms, enabling stable long-timescale polarizable simulations. Overall, a several-thousand-fold acceleration over a single-core computation is observed for the largest systems. The extension of the present CPU implementation of Tinker-HP to other computational platforms is discussed.
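    The 3D spatial decomposition discussed above assigns each atom to the MPI domain owning its region of the periodic box. A minimal sketch of that ownership map (an illustration under our own naming, not Tinker-HP's actual data structures):

```python
def owner_domain(pos, box, dims):
    """Map a position in a periodic box to the (ix, iy, iz) index of the
    MPI domain that owns it, for a uniform 3D spatial decomposition of
    the box into dims[0] x dims[1] x dims[2] domains."""
    idx = []
    for k in range(3):
        frac = (pos[k] % box[k]) / box[k]          # wrap into [0, 1)
        idx.append(min(int(frac * dims[k]), dims[k] - 1))
    return tuple(idx)
```

    In a production code this map also drives halo exchanges: each domain additionally communicates with neighbors within the force cutoff, which is where most of the parallel engineering effort goes.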

  7. Simulation of the space station information system in Ada

    NASA Technical Reports Server (NTRS)

    Spiegel, James R.

    1986-01-01

    The Flexible Ada Simulation Tool (FAST) is a discrete-event simulation language written in Ada. FAST has been used to simulate a number of options for ground data distribution of Space Station payload data. Because FAST is implemented in Ada, a number of useful interactive features could be built in, and its capabilities have been quickly enhanced to support new modeling requirements. General simulation concepts are discussed, along with how these concepts are implemented in FAST. The FAST design is described, highlighting how the use of the Ada language yields significant advantages over classical FORTRAN-based simulation languages in efficiency, ease of debugging, and ease of integrating user code. The specific Ada language features that enable these advantages are discussed.
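    The mechanism underlying a discrete-event simulation language like FAST is a time-ordered event queue: the clock jumps from one scheduled event to the next rather than advancing in fixed steps. A minimal sketch of that core idea (in Python rather than Ada, and with hypothetical class and method names, not FAST's actual API):

```python
import heapq

class EventList:
    """Minimal discrete-event simulation core: a priority queue of
    (time, action) pairs processed in chronological order."""

    def __init__(self):
        self.now = 0.0
        self._queue = []
        self._seq = 0          # tie-breaker keeps same-time events FIFO

    def schedule(self, delay, action):
        """Schedule `action` to fire `delay` time units from now."""
        heapq.heappush(self._queue, (self.now + delay, self._seq, action))
        self._seq += 1

    def run(self):
        """Pop events in time order, advancing the clock to each."""
        while self._queue:
            self.now, _, action = heapq.heappop(self._queue)
            action()
```

    Event handlers can schedule further events during `run`, which is how entities such as queues and servers interact in a full simulation.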

  8. High fidelity studies of exploding foil initiator bridges, Part 3: ALEGRA MHD simulations

    NASA Astrophysics Data System (ADS)

    Neal, William; Garasi, Christopher

    2017-01-01

    Simulations of high-voltage detonators, such as Exploding Bridgewire (EBW) detonators and Exploding Foil Initiators (EFIs), have historically been simple, often empirical, one-dimensional models capable of predicting parameters such as current, voltage, and, in the case of EFIs, flyer velocity. Experimental methods have generally been limited to the same parameters. With the advent of complex, first-principles magnetohydrodynamic codes such as ALEGRA and ALE-MHD, it is now possible to simulate these components in three dimensions and to predict a much greater range of parameters than before. A significant improvement in experimental capability was therefore required to ensure these simulations could be adequately verified. In this third paper of a three-part study, the experimental results presented in Part 2 are compared against three-dimensional MHD simulations. This improved experimental capability, along with advanced simulations, offers an opportunity to gain a greater understanding of the processes behind the functioning of EBW and EFI detonators.

  9. Discrimination of correlated and entangling quantum channels with selective process tomography

    DOE PAGES

    Dumitrescu, Eugene; Humble, Travis S.

    2016-10-10

    The accurate and reliable characterization of quantum dynamical processes underlies efforts to validate quantum technologies, where discrimination between competing models of observed behaviors informs efforts to fabricate and operate qubit devices. We present a protocol for quantum channel discrimination that leverages advances in direct characterization of quantum dynamics (DCQD) codes. We demonstrate that DCQD codes enable selective process tomography to improve discrimination between entangling and correlated quantum dynamics. Numerical simulations show that selective process tomography requires only a few measurement configurations to achieve a low false-alarm rate and that the DCQD encoding improves the resilience of the protocol to hidden sources of noise. Lastly, our results show that selective process tomography with DCQD codes is useful for efficiently distinguishing sources of correlated crosstalk from uncorrelated noise in current and future experimental platforms.

  10. Investigating the impact of the cielo cray XE6 architecture on scientific application codes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rajan, Mahesh; Barrett, Richard; Pedretti, Kevin Thomas Tauke

    2010-12-01

    Cielo, a Cray XE6, is the newest capability machine of the Department of Energy NNSA Advanced Simulation and Computing (ASC) campaign. Rated at 1.37 PFLOPS, it consists of 8,944 dual-socket oct-core AMD Magny-Cours compute nodes linked by Cray's Gemini interconnect. Its primary mission objective is to enable a suite of ASC applications implemented using MPI to scale to tens of thousands of cores. Cielo is an evolutionary improvement on a successful architecture previously available to many of our codes, providing a basis for understanding the capabilities of this new architecture. Using three codes strategically important to the ASC campaign, supplemented with micro-benchmarks that expose the fundamental capabilities of the XE6, we report on the performance characteristics and capabilities of Cielo.

  11. Analysis of activation and shutdown contact dose rate for EAST neutral beam port

    NASA Astrophysics Data System (ADS)

    Chen, Yuqing; Wang, Ji; Zhong, Guoqiang; Li, Jun; Wang, Jinfang; Xie, Yahong; Wu, Bin; Hu, Chundong

    2017-12-01

    For the safe operation and maintenance of the neutral beam injector (NBI), the specific activity and shutdown contact dose rate of the sample material SS316 are estimated around the Experimental Advanced Superconducting Tokamak (EAST) neutral beam port. First, the neutron emission intensity is calculated with the TRANSP code while the neutral beam is co-injected into EAST. Second, the neutron activation and shutdown contact dose rates for the SS316 samples are derived with the Monte Carlo code MCNP and the inventory code FISPACT-2007. The simulations indicate that the primary radioactive nuclides in SS316 are 58Co and 54Mn. The peak contact dose rate is 8.52 × 10^-6 Sv/h one second after EAST shutdown, which is below the International Thermonuclear Experimental Reactor (ITER) design value of 1 × 10^-5 Sv/h.

  12. Understanding the detector behavior through Montecarlo and calibration studies in view of the SOX measurement

    NASA Astrophysics Data System (ADS)

    Caminata, A.; Agostini, M.; Altenmüller, K.; Appel, S.; Bellini, G.; Benziger, J.; Berton, N.; Bick, D.; Bonfini, G.; Bravo, D.; Caccianiga, B.; Calaprice, F.; Cavalcante, P.; Chepurnov, A.; Choi, K.; Cribier, M.; D'Angelo, D.; Davini, S.; Derbin, A.; Di Noto, L.; Drachnev, I.; Durero, M.; Empl, A.; Etenko, A.; Farinon, S.; Fischer, V.; Fomenko, K.; Franco, D.; Gabriele, F.; Gaffiot, J.; Galbiati, C.; Ghiano, C.; Giammarchi, M.; Goeger-Neff, M.; Goretti, A.; Gromov, M.; Hagner, C.; Houdy, T.; Hungerford, E.; Ianni, Aldo; Ianni, Andrea; Jonquères, N.; Jedrzejczak, K.; Kaiser, M.; Kobychev, V.; Korablev, D.; Korga, G.; Kornoukhov, V.; Kryn, D.; Lachenmaier, T.; Lasserre, T.; Laubenstein, M.; Lehnert, B.; Link, J.; Litvinovich, E.; Lombardi, F.; Lombardi, P.; Ludhova, L.; Lukyanchenko, G.; Machulin, I.; Manecki, S.; Maneschg, W.; Marcocci, S.; Maricic, J.; Mention, G.; Meroni, E.; Meyer, M.; Miramonti, L.; Misiaszek, M.; Montuschi, M.; Mosteiro, P.; Muratova, V.; Musenich, R.; Neumair, B.; Oberauer, L.; Obolensky, M.; Ortica, F.; Pallavicini, M.; Papp, L.; Perasso, L.; Pocar, A.; Ranucci, G.; Razeto, A.; Re, A.; Romani, A.; Roncin, R.; Rossi, N.; Schönert, S.; Scola, L.; Semenov, D.; Simgen, H.; Skorokhvatov, M.; Smirnov, O.; Sotnikov, A.; Sukhotin, S.; Suvorov, Y.; Tartaglia, R.; Testera, G.; Thurn, J.; Toropova, M.; Unzhakov, E.; Veyssiere, C.; Vishneva, A.; Vivier, M.; Vogelaar, R. B.; von Feilitzsch, F.; Wang, H.; Weinz, S.; Winter, J.; Wojcik, M.; Wurm, M.; Yokley, Z.; Zaimidoroga, O.; Zavatarelli, S.; Zuber, K.; Zuzel, G.

    2016-02-01

    Borexino is an unsegmented neutrino detector operating at LNGS in central Italy. The experiment has demonstrated its performance through unprecedented accomplishments in solar neutrino and geoneutrino detection. This performance makes it an ideal tool for a state-of-the-art experiment to test the existence of sterile neutrinos (the SOX experiment). For both the solar and the SOX analyses, a good understanding of the detector response is fundamental. Consequently, calibration campaigns with radioactive sources have been performed over the years. The calibration data are of extreme importance for developing an accurate Monte Carlo code, which is used in all the neutrino analyses. The Borexino-SOX calibration techniques and program, and the advances in the detector simulation code in view of the start of SOX data taking, are presented.

  13. Kinetic Monte Carlo simulation of dopant-defect systems under submicrosecond laser thermal processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fisicaro, G.; Pelaz, Lourdes; Lopez, P.

    2012-11-06

    An innovative kinetic Monte Carlo (KMC) code has been developed that governs the post-implant kinetics of the defect system under the extremely far-from-equilibrium conditions caused by laser irradiation close to the liquid-solid interface. It considers defect diffusion, annihilation, and clustering. The code properly implements, consistently with the stochastic formalism, the rapidly varying local event rates related to the evolution of the thermal field T(r,t). This feature of our numerical method represents an important advancement with respect to current state-of-the-art KMC codes. The reduction of the implantation damage and its reorganization into defect aggregates are studied as a function of the process conditions. Phosphorus activation efficiency, experimentally determined in similar conditions, has been related to the emerging damage scenario.
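    The core of a KMC step with rates of the kind described above is the residence-time (BKL) algorithm: choose an event with probability proportional to its rate, then advance the clock by an exponentially distributed increment. A sketch of one such step (not the authors' code; in their setting the rate list would be refreshed from the local temperature field T(r,t) before each call):

```python
import math
import random

def kmc_step(rates, rng=random):
    """One residence-time KMC step.  `rates` are the current event
    rates; returns (chosen event index, time increment dt)."""
    total = sum(rates)
    # Event selection: first index whose cumulative rate exceeds u.
    u = rng.random() * total
    acc = 0.0
    event = len(rates) - 1          # fallback guards against round-off
    for i, r in enumerate(rates):
        acc += r
        if u < acc:
            event = i
            break
    # Time advance: exponential waiting time with mean 1 / total.
    dt = -math.log(rng.random()) / total
    return event, dt
```

    Because `dt` shrinks as the total rate grows, the simulated clock automatically resolves the sub-microsecond transients of a laser anneal while taking large steps once the system relaxes.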

  14. Simulation of the hybrid and steady state advanced operating modes in ITER

    NASA Astrophysics Data System (ADS)

    Kessel, C. E.; Giruzzi, G.; Sips, A. C. C.; Budny, R. V.; Artaud, J. F.; Basiuk, V.; Imbeaux, F.; Joffrin, E.; Schneider, M.; Murakami, M.; Luce, T.; St. John, Holger; Oikawa, T.; Hayashi, N.; Takizuka, T.; Ozeki, T.; Na, Y.-S.; Park, J. M.; Garcia, J.; Tucillo, A. A.

    2007-09-01

    Integrated simulations are performed to establish a physics basis, in conjunction with present tokamak experiments, for the operating modes in the International Thermonuclear Experimental Reactor (ITER). Simulations of the hybrid mode are done using both fixed- and free-boundary 1.5D transport evolution codes, including CRONOS, ONETWO, TSC/TRANSP, TOPICS and ASTRA. The hybrid operating mode is simulated using the GLF23 and CDBM05 energy transport models. The injected powers are limited to the negative-ion neutral beam, ion cyclotron and electron cyclotron heating systems. Several plasma parameters and source parameters are specified for the hybrid cases to provide a comparison of 1.5D core transport modelling assumptions, source physics modelling assumptions, and numerous peripheral physics models. Initial results indicate that very strict guidelines will need to be imposed on the application of GLF23, for example, to make useful comparisons. Some of the variations among the simulations are due to source models, which vary widely among the codes used. In addition, a number of peripheral physics models should be examined, including fusion power production, bootstrap current, the treatment of fast particles and the treatment of impurities. The hybrid simulations project to fusion gains of 5.6-8.3, βN values of 2.1-2.6 and fusion powers ranging from 350 to 500 MW, under the assumptions outlined in section 3. Simulations of the steady-state operating mode are done with the same 1.5D transport evolution codes cited above, except the ASTRA code. In these cases the energy transport model is more difficult to prescribe, so the energy confinement models range from theory-based to empirically based. The injected powers include the same sources as used for the hybrid, with the possible addition of lower hybrid. The simulations of the steady-state mode project to fusion gains of 3.5-7, βN values of 2.3-3.0 and fusion powers of 290 to 415 MW, under the assumptions described in section 4. These simulations are presented and compared with particular focus on the resulting temperature profiles, source profiles and peripheral physics profiles. The steady-state simulations are at an early stage and are focused on developing a range of safety factor profiles with 100% non-inductive current.

  15. Advances and Challenges In Uncertainty Quantification with Application to Climate Prediction, ICF design and Science Stockpile Stewardship

    NASA Astrophysics Data System (ADS)

    Klein, R.; Woodward, C. S.; Johannesson, G.; Domyancic, D.; Covey, C. C.; Lucas, D. D.

    2012-12-01

    Uncertainty Quantification (UQ) is a critical field within 21st century simulation science that resides at the very center of the web of emerging predictive capabilities. The science of UQ holds the promise of giving much greater meaning to the results of complex large-scale simulations, allowing for quantifying and bounding uncertainties. This powerful capability will yield new insights into scientific predictions (e.g. Climate) of great impact on both national and international arenas, allow informed decisions on the design of critical experiments (e.g. ICF capsule design, MFE, NE) in many scientific fields, and assign confidence bounds to scientifically predictable outcomes (e.g. nuclear weapons design). In this talk I will discuss a major new strategic initiative (SI) we have developed at Lawrence Livermore National Laboratory to advance the science of Uncertainty Quantification at LLNL focusing in particular on (a) the research and development of new algorithms and methodologies of UQ as applied to multi-physics multi-scale codes, (b) incorporation of these advancements into a global UQ Pipeline (i.e. a computational superstructure) that will simplify user access to sophisticated tools for UQ studies as well as act as a self-guided, self-adapting UQ engine for UQ studies on extreme computing platforms and (c) use laboratory applications as a test bed for new algorithms and methodologies. The initial SI focus has been on applications for the quantification of uncertainty associated with Climate prediction, but the validated UQ methodologies we have developed are now being fed back into Science Based Stockpile Stewardship (SSS) and ICF UQ efforts. 
    To make advances on several of these UQ grand challenges, I will focus in this talk on three research areas of our Strategic Initiative: error estimation in multi-physics and multi-scale codes; tackling the "Curse of High Dimensionality"; and the development of an advanced UQ computational pipeline to enable a complete UQ workflow and analysis for ensemble runs at the extreme scale (e.g. exascale), with self-guiding adaptation in the UQ Pipeline engine. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and was funded by the Uncertainty Quantification Strategic Initiative Laboratory Directed Research and Development Project at LLNL under project tracking code 10-SI-013 (UCRL LLNL-ABS-569112).
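    Ensemble studies run through a UQ pipeline of this kind commonly use stratified sampling designs to mitigate the cost of high dimensionality; Latin hypercube sampling is one standard choice. An illustrative sketch (our own minimal implementation, not the LLNL UQ Pipeline's API):

```python
import random

def latin_hypercube(n_samples, n_dims, rng=random):
    """Latin hypercube sample on [0, 1)^d: each dimension is split into
    n_samples equal strata and every stratum is used exactly once, so
    each 1D marginal is evenly covered even for small ensembles."""
    columns = []
    for _ in range(n_dims):
        strata = list(range(n_samples))
        rng.shuffle(strata)                       # random stratum order
        columns.append([(s + rng.random()) / n_samples for s in strata])
    # Transpose columns into a list of d-dimensional sample points.
    return [tuple(col[i] for col in columns) for i in range(n_samples)]
```

    Each returned point would then be mapped through the input distributions of the physics code's uncertain parameters before an ensemble member is launched.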

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Merzari, E.; Yuan, Haomin; Kraus, A.

The NEAMS program aims to develop an integrated multi-physics simulation capability “pellet-to-plant” for the design and analysis of future generations of nuclear power plants. In particular, the Reactor Product Line code suite's multi-resolution hierarchy is being designed to ultimately span the full range of length and time scales present in relevant reactor design and safety analyses, as well as scale from desktop to petaflop computing platforms. Flow-induced vibration (FIV) is a widespread problem in energy systems because they rely on fluid movement for energy conversion. Vibrating structures may be damaged by fatigue or wear. Given the importance of reliable components in the nuclear industry, flow-induced vibration has long been a major concern in the safety and operation of nuclear reactors. In particular, nuclear fuel rods and steam generators have been known to suffer from flow-induced vibration and related failures. Advanced reactors, such as integral Pressurized Water Reactors (PWRs) considered for Small Modular Reactors (SMRs), often rely on innovative component designs to meet cost and safety targets. One component that is the subject of advanced designs is the steam generator, some designs of which forego the usual shell-and-tube architecture in order to fit within the primary vessel. In addition to being more cost- and space-efficient, such steam generators need to be more reliable, since failure of the primary vessel represents a potential loss of coolant and a safety concern. A significant amount of data exists on flow-induced vibration in shell-and-tube heat exchangers, and heuristic methods are available to predict its occurrence based on a set of given assumptions. In contrast, advanced designs have far less data available.
Advanced modeling and simulation based on coupled structural and fluid simulations has the potential to predict flow-induced vibration in a variety of designs, reducing the need for expensive experimental programs, especially at the design stage. Over the past five years, the Reactor Product Line has developed the integrated multi-physics code suite SHARP. The goal of developing such a tool is to perform multi-physics neutronics, thermal/fluid, and structural mechanics modeling of the components inside the full reactor core, or portions of it, with a user-specified fidelity. In particular, SHARP contains the high-fidelity single-physics codes Diablo for structural mechanics and Nek5000 for fluid mechanics calculations. Both codes are state-of-the-art, highly scalable tools that have been extensively validated. These tools form a strong basis on which to build a flow-induced vibration modeling capability. In this report we discuss one-way coupled calculations performed with Nek5000 and Diablo aimed at simulating available FIV experiments in helical steam generators in the turbulent buffeting regime. In this regime one-way coupling is judged sufficient because the pressure loads do not cause substantial displacements. It is also the most common source of vibration in helical steam generators at the low flows expected in integral PWRs. The legacy data is obtained from two datasets developed at Argonne and B&W.
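In one-way coupling of the kind described, pressure loads sampled from the fluid solver drive the structural solver with no feedback of the motion on the flow. A toy sketch, with a single-degree-of-freedom oscillator standing in for a structural model and a synthetic random load standing in for sampled fluid pressures (all parameters are illustrative, not the Nek5000/Diablo setup), captures that data flow:

```python
import numpy as np

def one_way_response(load, dt, mass=1.0, freq_hz=5.0, zeta=0.01):
    """Displacement history of a damped oscillator driven by a
    precomputed fluid load history (one-way coupling: the motion is
    never fed back to the fluid). Semi-implicit Euler stepping."""
    wn = 2.0 * np.pi * freq_hz
    k = mass * wn**2             # stiffness from the natural frequency
    c = 2.0 * zeta * mass * wn   # viscous damping
    x, v = 0.0, 0.0
    xs = np.empty(len(load))
    for i, f in enumerate(load):
        a = (f - c * v - k * x) / mass
        v += dt * a
        x += dt * v
        xs[i] = x
    return xs

# Synthetic broadband "buffeting" load standing in for sampled pressures.
rng = np.random.default_rng(1)
dt = 1e-3
load = rng.normal(size=20000)
disp = one_way_response(load, dt)
```

The fluid time series is computed once and replayed; a two-way scheme would instead re-solve the fluid at each step with the deformed geometry.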

  17. Enhanced Representation of Turbulent Flow Phenomena in Large-Eddy Simulations of the Atmospheric Boundary Layer using Grid Refinement with Pseudo-Spectral Numerics

    NASA Astrophysics Data System (ADS)

    Torkelson, G. Q.; Stoll, R., II

    2017-12-01

    Large Eddy Simulation (LES) is a tool commonly used to study the turbulent transport of momentum, heat, and moisture in the Atmospheric Boundary Layer (ABL). For a wide range of ABL LES applications, representing the full range of turbulent length scales in the flow field is a challenge. This is an acute problem in regions of the ABL with strong velocity or scalar gradients, which are typically poorly resolved by standard computational grids (e.g., near the ground surface, in the entrainment zone). Most efforts to address this problem have focused on advanced sub-grid scale (SGS) turbulence model development, or on the use of massive computational resources. While some work exists using embedded meshes, very little has been done on the use of grid refinement. Here, we explore the benefits of grid refinement in a pseudo-spectral LES numerical code. The code utilizes both uniform refinement of the grid in horizontal directions, and stretching of the grid in the vertical direction. Combining the two techniques allows us to refine areas of the flow while maintaining an acceptable grid aspect ratio. In tests that used only refinement of the vertical grid spacing, large grid aspect ratios were found to cause a significant unphysical spike in the stream-wise velocity variance near the ground surface. This was especially problematic in simulations of stably-stratified ABL flows. The use of advanced SGS models was not sufficient to alleviate this issue. The new refinement technique is evaluated using a series of idealized simulation test cases of neutrally and stably stratified ABLs. These test cases illustrate the ability of grid refinement to increase computational efficiency without loss in the representation of statistical features of the flow field.
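The interplay between horizontal refinement and vertical stretching can be sketched numerically. The snippet below (assumed geometric stretching and illustrative domain dimensions, not the authors' actual grid) computes the grid aspect ratio dx/dz at each level, the quantity the abstract identifies as problematic near the surface:

```python
import numpy as np

def stretched_z(nz, lz, ratio=1.05):
    """Vertical grid spacings, geometrically stretched away from the
    surface; the first spacing is chosen so the spacings sum to lz."""
    dz0 = lz * (ratio - 1.0) / (ratio**nz - 1.0)
    return dz0 * ratio ** np.arange(nz)

# Horizontal spacing after one 2x refinement of a 64-point grid over 1 km.
dx = 1000.0 / (64 * 2)
dz = stretched_z(nz=48, lz=500.0)
aspect = dx / dz            # grid aspect ratio at each vertical level
max_aspect = aspect.max()   # largest near the surface, where dz is smallest
```

Refining dx together with stretching dz keeps `max_aspect` bounded, which is the point of combining the two techniques.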

  18. Report of experiments and evidence for ASC L2 milestone 4467 : demonstration of a legacy application's path to exascale.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Curry, Matthew L.; Ferreira, Kurt Brian; Pedretti, Kevin Thomas Tauke

    2012-03-01

This report documents thirteen of Sandia's contributions to the Computational Systems and Software Environment (CSSE) within the Advanced Simulation and Computing (ASC) program between fiscal years 2009 and 2012. It describes their impact on ASC applications. Most contributions are implemented in lower software levels, allowing for application improvement without source code changes. Improvements are identified in such areas as reduced run time, characterizing power usage, and Input/Output (I/O). Other experiments are more forward looking, demonstrating potential bottlenecks using mini-application versions of the legacy codes and simulating their network activity on Exascale-class hardware. The purpose of this report is to prove that the team has completed milestone 4467, Demonstration of a Legacy Application's Path to Exascale. Cielo is expected to be the last capability system on which existing ASC codes can run without significant modifications. This assertion will be tested to determine where the breaking point is for an existing highly scalable application. The goal is to stretch the performance boundaries of the application by applying recent CSSE R&D in areas such as resilience, power, I/O, visualization services, SMARTMAP, lightweight kernels (LWKs), virtualization, simulation, and feedback loops. Dedicated system time reservations and/or CCC allocations will be used to quantify the impact of system-level changes to extend the life and performance of the ASC code base. Finally, a simulation of anticipated exascale-class hardware will be performed using SST to supplement the calculations. Determine where the breaking point is for an existing highly scalable application: Chapter 15 presented the CSSE work that sought to identify the breaking point in two ASC legacy applications, Charon and CTH. Their mini-app versions were also employed to complete the task. There is no single breaking point, as more than one issue was found with the two codes.
The results were that applications can expect to encounter performance issues related to the computing environment, system software, and algorithms. Careful profiling of runtime performance will be needed to identify the source of an issue, combined with knowledge of system software and application source code.
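The notion of a scaling "breaking point" can be illustrated with a toy analytic performance model (entirely invented coefficients, not the Charon/CTH measurements): a fixed serial cost, perfectly divisible parallel work, and a logarithmic communication term, scanned for the process count where parallel efficiency drops below 50%.

```python
import math

def run_time(p, t_serial=1.0, t_parallel=1e4, t_comm=0.05):
    """Toy scaling model: a fixed serial cost, perfectly divisible
    parallel work, and a log-tree communication term."""
    return t_serial + t_parallel / p + t_comm * math.log2(p)

def efficiency(p):
    """Parallel efficiency relative to a single-process run."""
    return run_time(1) / (p * run_time(p))

# Scan powers of two for the first process count where efficiency
# falls below 50% -- a crude "breaking point" for this model.
breaking_point = next(2**k for k in range(1, 30) if efficiency(2**k) < 0.5)
```

Real applications exhibit several such thresholds from different bottlenecks, which is consistent with the report's finding that there is no single breaking point.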

  19. A PICKSC Science Gateway for enabling the common plasma physicist to run kinetic software

    NASA Astrophysics Data System (ADS)

    Hu, Q.; Winjum, B. J.; Zonca, A.; Youn, C.; Tsung, F. S.; Mori, W. B.

    2017-10-01

    Computer simulations offer tremendous opportunities for studying plasmas, ranging from simulations for students that illuminate fundamental educational concepts to research-level simulations that advance scientific knowledge. Nevertheless, there is a significant hurdle to using simulation tools. Users must navigate codes and software libraries, determine how to wrangle output into meaningful plots, and oftentimes confront a significant cyberinfrastructure with powerful computational resources. Science gateways offer a Web-based environment to run simulations without needing to learn or manage the underlying software and computing cyberinfrastructure. We discuss our progress on creating a Science Gateway for the Particle-in-Cell and Kinetic Simulation Software Center that enables users to easily run and analyze kinetic simulations with our software. We envision that this technology could benefit a wide range of plasma physicists, both in the use of our simulation tools as well as in its adaptation for running other plasma simulation software. Supported by NSF under Grant ACI-1339893 and by the UCLA Institute for Digital Research and Education.

  20. (U) Status of Trinity and Crossroads Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Archer, Billy Joe; Lujan, James Westley; Hemmert, K. S.

    2017-01-10

(U) This paper provides a general overview of current and future plans for the Advanced Simulation and Computing (ASC) Advanced Technology (AT) systems fielded by the New Mexico Alliance for Computing at Extreme Scale (ACES), a collaboration between Los Alamos National Laboratory and Sandia National Laboratories. Additionally, this paper touches on research into technology beyond traditional CMOS. The status of Trinity, ASC's first AT system, and Crossroads, anticipated to succeed Trinity as the third AT system in 2020, will be presented, along with initial performance studies of the Intel Knights Landing Xeon Phi processors introduced on Trinity. The challenges and opportunities for our production simulation codes on AT systems will also be discussed. Trinity and Crossroads are a joint procurement by ACES and Lawrence Berkeley National Laboratory as part of the Alliance for application Performance at EXtreme scale (APEX) http://apex.lanl.gov.

  1. New virtual laboratories presenting advanced motion control concepts

    NASA Astrophysics Data System (ADS)

    Goubej, Martin; Krejčí, Alois; Reitinger, Jan

    2015-11-01

The paper deals with the development of a software framework for rapid generation of remote virtual laboratories. A client-server architecture is chosen in order to employ a real-time simulation core running on a dedicated server. An ordinary web browser is used as the final renderer to achieve a hardware-independent solution that can be run on different target platforms, including laptops, tablets, and mobile phones. The provided toolchain allows automatic generation of the virtual laboratory source code from a configuration file created in the open-source Inkscape graphic editor. Three virtual laboratories presenting advanced motion control algorithms have been developed, showing the applicability of the proposed approach.

  2. Synergia: an accelerator modeling tool with 3-D space charge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amundson, James F.; Spentzouris, P.

    2004-07-01

    High precision modeling of space-charge effects, together with accurate treatment of single-particle dynamics, is essential for designing future accelerators as well as optimizing the performance of existing machines. We describe Synergia, a high-fidelity parallel beam dynamics simulation package with fully three dimensional space-charge capabilities and a higher order optics implementation. We describe the computational techniques, the advanced human interface, and the parallel performance obtained using large numbers of macroparticles. We also perform code benchmarks comparing to semi-analytic results and other codes. Finally, we present initial results on particle tune spread, beam halo creation, and emittance growth in the Fermilab booster accelerator.

  3. Low-Level Space Optimization of an AES Implementation for a Bit-Serial Fully Pipelined Architecture

    NASA Astrophysics Data System (ADS)

    Weber, Raphael; Rettberg, Achim

    A previously developed AES (Advanced Encryption Standard) implementation is optimized and described in this paper. The special architecture for which this implementation is targeted comprises synchronous and systematic bit-serial processing without a central controlling instance. In order to shrink the design in terms of logic utilization, we analyzed the architecture and the AES implementation in depth to identify the most costly logic elements. We propose to merge certain parts of the logic to achieve better area efficiency. The approach was integrated into an existing synthesis tool, which we used to produce synthesizable VHDL code. For testing purposes, we simulated the generated VHDL code and ran tests on an FPGA board.

  4. Advancements in Afterbody Radiative Heating Simulations for Earth Entry

    NASA Technical Reports Server (NTRS)

    Johnston, Christopher O.; Panesi, Marco; Brandis, Aaron M.

    2016-01-01

    Four advancements to the simulation of backshell radiative heating for Earth entry are presented. The first of these is the development of a flow field model that treats electronic levels of the dominant backshell radiator, N, as individual species. This is shown to allow improvements in the modeling of electron-ion recombination and two-temperature modeling, which are shown to increase backshell radiative heating by 10 to 40%. By computing the electronic state populations of N within the flow field solver, instead of through the quasi-steady state approximation in the radiation code, the coupling of radiative transition rates to the species continuity equations for the levels of N, including the impact of non-local absorption, becomes feasible. Implementation of this additional level of coupling between the flow field and radiation codes represents the second advancement presented in this work, which is shown to increase the backshell radiation by another 10 to 50%. The impact of radiative transition rates due to non-local absorption indicates the importance of accurate radiation transport in the relatively complex flow geometry of the backshell. This motivates the third advancement, which is the development of a ray-tracing radiation transport approach to compute the radiative transition rates and divergence of the radiative flux at every point for coupling to the flow field, therefore allowing the accuracy of the commonly applied tangent-slab approximation to be assessed for radiative source terms. For the sphere considered at lunar-return conditions, the tangent-slab approximation is shown to provide a sufficient level of accuracy for the radiative source terms, even for backshell cases. This is in contrast to the radiative flux to the surface, for which the two approaches differ by up to 40%.
The final advancement presented is the development of a nonequilibrium model for NO radiation, which provides significant backshell radiation at velocities below 10 km/s. The developed model reduces the nonequilibrium NO radiation by 50% relative to the previous model.
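The difference between tangent-slab and ray-tracing transport comes down to how the attenuated emission integral I = ∫ ε e^(−τ) ds is evaluated along each line of sight. A minimal one-ray march, checked against the analytic uniform-slab result S(1 − e^(−τ)), sketches the ray-tracing side (illustrative values only, unrelated to the flight cases in the paper):

```python
import numpy as np

def ray_intensity(emissivity, opacity, ds):
    """March along one ray: sum the emission of each cell, attenuated by
    the optical depth between that cell and the ray's exit point."""
    tau_ahead = np.cumsum((opacity * ds)[::-1])[::-1]  # optical depth to exit
    return np.sum(emissivity * np.exp(-tau_ahead) * ds)

# Uniform-slab sanity check: for constant source S and opacity kappa, the
# emergent intensity tends to S * (1 - exp(-kappa * L)) as ds -> 0.
n, L, kappa, S = 4000, 1.0, 3.0, 2.0
ds = L / n
I = ray_intensity(np.full(n, S * kappa), np.full(n, kappa), ds)
```

A tangent-slab solver performs an equivalent integral but assumes the medium varies only along the surface normal, which is the assumption the ray-tracing approach lets the authors test.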

  5. An Advanced, Three-Dimensional Plotting Library for Astronomy

    NASA Astrophysics Data System (ADS)

    Barnes, David G.; Fluke, Christopher J.; Bourke, Paul D.; Parry, Owen T.

    2006-07-01

    We present a new, three-dimensional (3D) plotting library with advanced features, and support for standard and enhanced display devices. The library - s2plot - is written in C and can be used by C, C++, and Fortran programs on GNU/Linux and Apple/OSX systems. s2plot draws objects in a 3D (x,y,z) Cartesian space and the user interactively controls how this space is rendered at run time. With a PGPLOT-inspired interface, s2plot provides astronomers with elegant techniques for displaying and exploring 3D data sets directly from their program code, and the potential to use stereoscopic and dome display devices. The s2plot architecture supports dynamic geometry and can be used to plot time-evolving data sets, such as might be produced by simulation codes. In this paper, we introduce s2plot to the astronomical community, describe its potential applications, and present some example uses of the library.

  6. OpenGeoSys: Performance-Oriented Computational Methods for Numerical Modeling of Flow in Large Hydrogeological Systems

    NASA Astrophysics Data System (ADS)

    Naumov, D.; Fischer, T.; Böttcher, N.; Watanabe, N.; Walther, M.; Rink, K.; Bilke, L.; Shao, H.; Kolditz, O.

    2014-12-01

    OpenGeoSys (OGS) is a scientific open-source code for numerical simulation of thermo-hydro-mechanical-chemical processes in porous and fractured media. Its basic concept is to provide a flexible numerical framework for solving multi-field problems for applications in geoscience and hydrology, e.g. CO2 storage, geothermal power plant forecast simulation, salt water intrusion, and water resources management. Advances in computational mathematics have revolutionized the variety and nature of the problems that environmental scientists and engineers can address, and intensive code development in recent years now enables the solution of much larger numerical problems and applications. However, solving environmental processes along the water cycle at large scales, such as for complete catchments or reservoirs, remains a computationally challenging task. Therefore, we started a new OGS code development with a focus on execution speed and parallelization. In the new version, a local data structure concept improves the instruction and data cache performance by tightly bundling data with an element-wise numerical integration loop. Dedicated analysis methods enable the investigation of memory-access patterns in the local and global assembler routines, which leads to further data structure optimization for an additional performance gain. The concept is presented together with a technical code analysis of the recent development and a large case study including transient flow simulation in the unsaturated/saturated zone of the Thuringian Syncline, Germany. The analysis is performed on a high-resolution mesh (up to 50M elements) with embedded fault structures.
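The element-wise integration loop with tightly bundled local data, the core of the performance concept described, can be sketched in miniature with a 1D finite element assembly (a generic textbook example, not OGS source code):

```python
import numpy as np

def assemble_laplace_1d(n_elems, length=1.0):
    """Element-wise assembly of the 1D Laplace stiffness matrix: each
    element contributes a 2x2 local block, scattered into the global
    matrix by its node ids. All data for one element is used together,
    which is what cache-friendly local data layouts exploit."""
    h = length / n_elems
    k_local = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
    K = np.zeros((n_elems + 1, n_elems + 1))
    for e in range(n_elems):
        nodes = [e, e + 1]
        K[np.ix_(nodes, nodes)] += k_local
    return K

K = assemble_laplace_1d(4)
```

Because each iteration touches only the element's own small block of data, the loop body has a compact, predictable memory footprint, the property the OGS redesign optimizes for.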

  7. Approaching the investigation of plasma turbulence through a rigorous verification and validation procedure: A practical example

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ricci, P., E-mail: paolo.ricci@epfl.ch; Riva, F.; Theiler, C.

    In the present work, a Verification and Validation procedure is presented and applied showing, through a practical example, how it can contribute to advancing our physics understanding of plasma turbulence. Bridging the gap between plasma physics and other scientific domains, in particular the computational fluid dynamics community, a rigorous methodology for the verification of a plasma simulation code is presented, based on the method of manufactured solutions. This methodology verifies that the model equations are correctly solved, within the order of accuracy of the numerical scheme. The technique to carry out a solution verification is described to provide a rigorous estimate of the uncertainty affecting the numerical results. A methodology for plasma turbulence code validation is also discussed, focusing on quantitative assessment of the agreement between experiments and simulations. The Verification and Validation methodology is then applied to the study of plasma turbulence in the basic plasma physics experiment TORPEX [Fasoli et al., Phys. Plasmas 13, 055902 (2006)], considering both two-dimensional and three-dimensional simulations carried out with the GBS code [Ricci et al., Plasma Phys. Controlled Fusion 54, 124047 (2012)]. The validation procedure allows progress in the understanding of the turbulent dynamics in TORPEX, by pinpointing the presence of a turbulent regime transition, due to the competition between the resistive and ideal interchange instabilities.
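The method of manufactured solutions works by choosing a solution, deriving the source term that makes it exact, and checking that the numerical error decays at the scheme's formal order. A self-contained sketch for a second-order Poisson solver (a generic example, not the GBS verification itself):

```python
import numpy as np

def solve_poisson(n, source, u_exact):
    """Second-order finite differences for -u'' = f on (0, 1), with
    boundary values taken from the manufactured solution; returns the
    max-norm error at the interior nodes."""
    x = np.linspace(0.0, 1.0, n + 1)
    h = x[1] - x[0]
    A = (np.diag(np.full(n - 1, 2.0))
         - np.diag(np.ones(n - 2), 1)
         - np.diag(np.ones(n - 2), -1)) / h**2
    b = source(x[1:-1])
    b[0] += u_exact(x[0]) / h**2
    b[-1] += u_exact(x[-1]) / h**2
    u = np.linalg.solve(A, b)
    return np.max(np.abs(u - u_exact(x[1:-1])))

# Manufacture u(x) = sin(pi x); the source that makes it exact is
# f(x) = pi^2 sin(pi x). Errors should drop by ~4x per mesh halving.
u_m = lambda x: np.sin(np.pi * x)
f_m = lambda x: np.pi**2 * np.sin(np.pi * x)
errs = [solve_poisson(n, f_m, u_m) for n in (16, 32, 64)]
orders = [np.log2(errs[i] / errs[i + 1]) for i in range(2)]
```

An observed order matching the scheme's formal order (here 2) is the verification evidence; any coding error in the operator typically destroys the convergence rate.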

  8. Modeling interfacial fracture in Sierra.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Arthur A.; Ohashi, Yuki; Lu, Wei-Yang

    2013-09-01

    This report summarizes computational efforts to model interfacial fracture using cohesive zone models in the SIERRA/SolidMechanics (SIERRA/SM) finite element code. Cohesive surface elements were used to model crack initiation and propagation along predefined paths. Mesh convergence was observed with SIERRA/SM for numerous geometries. As the funding for this project came from the Advanced Simulation and Computing Verification and Validation (ASC V&V) focus area, considerable effort was spent performing verification and validation. Code verification was performed to compare code predictions to analytical solutions for simple three-element simulations as well as a higher-fidelity simulation of a double-cantilever beam. Parameter identification was conducted with Dakota using experimental results on asymmetric double-cantilever beam (ADCB) and end-notched-flexure (ENF) experiments conducted under Campaign-6 funding. Discretization convergence studies were also performed with respect to mesh size and time step, and an optimization study was completed for mode II delamination using the ENF geometry. Throughout this verification process, numerous SIERRA/SM bugs were found and reported, all of which have been fixed, leading to over a 10-fold increase in convergence rates. Finally, mixed-mode flexure experiments were performed for validation. One of the unexplained issues encountered was material property variability for ostensibly the same composite material. Since the variability is not fully understood, it is difficult to accurately assess uncertainty when performing predictions.
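Cohesive zone models like those used here replace a sharp crack with a traction-separation law on the interface. A common bilinear form is sketched below; the peak traction and opening displacements are illustrative, not the SIERRA/SM calibration:

```python
def bilinear_traction(delta, t_max=30.0, delta0=1e-3, delta_f=1e-2):
    """Bilinear cohesive law: linear loading to peak traction t_max at
    opening delta0, then linear softening to zero traction at delta_f."""
    if delta <= 0.0:
        return 0.0
    if delta < delta0:
        return t_max * delta / delta0           # elastic loading branch
    if delta < delta_f:
        return t_max * (delta_f - delta) / (delta_f - delta0)  # softening
    return 0.0                                  # fully debonded

# The fracture energy is the area under the curve: G_c = t_max * delta_f / 2.
g_c = 0.5 * 30.0 * 1e-2
```

Parameter identification of the kind done with Dakota amounts to tuning t_max, delta0, and delta_f (or equivalently stiffness, strength, and G_c) until simulated load-displacement curves match the ADCB/ENF experiments.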

  9. Computational fluid dynamics assessment: Volume 1, Computer simulations of the METC (Morgantown Energy Technology Center) entrained-flow gasifier: Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Celik, I.; Chattree, M.

    1988-07-01

    An assessment of the theoretical and numerical aspects of the computer code PCGC-2 is made, and the results of the application of this code to the Morgantown Energy Technology Center (METC) advanced gasification facility entrained-flow reactor, ''the gasifier,'' are presented. PCGC-2 is a code suitable for simulating pulverized coal combustion or gasification under axisymmetric (two-dimensional) flow conditions. The governing equations for the gas and particulate phase have been reviewed. The numerical procedure and the related programming difficulties have been elucidated. A single-particle model similar to the one used in PCGC-2 has been developed, programmed, and applied to some simple situations in order to gain insight into the physics of coal particle heat-up, devolatilization, and char oxidation processes. PCGC-2 was applied to the METC entrained-flow gasifier to study numerically the flash pyrolysis of coal, and gasification of coal with steam or carbon dioxide. The results from the simulations are compared with measurements. The gas and particle residence times, particle temperature, and mass component history were also calculated and the results were analyzed. The results provide useful information for understanding the fundamentals of coal gasification and for assessment of experimental results performed using the reactor considered. 69 refs., 35 figs., 23 tabs.
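A single-particle heat-up model of the kind described reduces, in its simplest lumped-capacitance form, to m·cp·dT/dt = h·A·(T_gas − T). The sketch below integrates this with explicit Euler; all property values are generic placeholders, not the PCGC-2 implementation:

```python
import math

def particle_heatup(t_end, dt=1e-4, d=1e-4, rho=1300.0, cp=1300.0,
                    h=2000.0, t_gas=1500.0, t0=300.0):
    """Lumped-capacitance heat-up of a spherical particle,
    m * cp * dT/dt = h * A * (T_gas - T), by explicit Euler."""
    area = math.pi * d**2
    mass = rho * math.pi * d**3 / 6.0
    tau = mass * cp / (h * area)    # thermal time constant, s
    T = t0
    for _ in range(int(round(t_end / dt))):
        T += dt * (t_gas - T) / tau
    return T, tau

T, tau = particle_heatup(0.05)  # temperature after 50 ms of exposure
```

The resulting time constant tau = rho·cp·d/(6h) makes explicit why small entrained particles equilibrate with the gas within tens of milliseconds; devolatilization and char oxidation add source terms to this balance.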

  10. Development and Implementation of CFD-Informed Models for the Advanced Subchannel Code CTF

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blyth, Taylor S.; Avramova, Maria

    The research described in this PhD thesis contributes to the development of efficient methods for utilization of high-fidelity models and codes to inform low-fidelity models and codes in the area of nuclear reactor core thermal-hydraulics. The objective is to increase the accuracy of predictions of quantities of interest using high-fidelity CFD models while preserving the efficiency of low-fidelity subchannel core calculations. An original methodology named Physics-based Approach for High-to-Low Model Information has been further developed and tested. The overall physical phenomena and corresponding localized effects, which are introduced by the presence of spacer grids in light water reactor (LWR) cores, are dissected into four corresponding basic building processes, and the corresponding models are informed using high-fidelity CFD codes. These models are a spacer grid-directed cross-flow model, a grid-enhanced turbulent mixing model, a heat transfer enhancement model, and a spacer grid pressure loss model. The localized CFD models are developed and tested using the CFD code STAR-CCM+, and the corresponding global model development and testing in subchannel formulation is performed in the thermal-hydraulic subchannel code CTF. The improved CTF simulations utilize data files derived from CFD STAR-CCM+ simulation results covering the spacer grid design desired for inclusion in the CTF calculation. The current implementation of these models is examined and possibilities for improvement and further development are suggested. The validation experimental database is extended by including the OECD/NRC PSBT benchmark data. The outcome is an enhanced accuracy of CTF predictions while preserving the computational efficiency of a low-fidelity subchannel code.
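Of the four informed models, the spacer grid pressure loss model has the simplest closed form: a local form loss dp = K·rho·v²/2, with the coefficient K calibrated (e.g., from CFD) for a specific grid design. A sketch with illustrative numbers, not the CTF/STAR-CCM+ values:

```python
def grid_pressure_loss(v, rho=720.0, k_grid=1.1):
    """Local form loss across a spacer grid: dp = K * rho * v^2 / 2.
    The loss coefficient K would be calibrated from CFD for the grid."""
    return 0.5 * k_grid * rho * v**2

dp = grid_pressure_loss(5.0)   # Pa, for a 5 m/s axial coolant velocity
```

The high-to-low information flow consists of fitting constants like `k_grid` (and the analogous mixing and heat-transfer enhancement factors) from resolved CFD, then applying them in the fast subchannel solve.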

  11. Development and Implementation of CFD-Informed Models for the Advanced Subchannel Code CTF

    NASA Astrophysics Data System (ADS)

    Blyth, Taylor S.

    The research described in this PhD thesis contributes to the development of efficient methods for utilization of high-fidelity models and codes to inform low-fidelity models and codes in the area of nuclear reactor core thermal-hydraulics. The objective is to increase the accuracy of predictions of quantities of interest using high-fidelity CFD models while preserving the efficiency of low-fidelity subchannel core calculations. An original methodology named Physics-based Approach for High-to-Low Model Information has been further developed and tested. The overall physical phenomena and corresponding localized effects, which are introduced by the presence of spacer grids in light water reactor (LWR) cores, are dissected into four corresponding basic building processes, and the corresponding models are informed using high-fidelity CFD codes. These models are a spacer grid-directed cross-flow model, a grid-enhanced turbulent mixing model, a heat transfer enhancement model, and a spacer grid pressure loss model. The localized CFD models are developed and tested using the CFD code STAR-CCM+, and the corresponding global model development and testing in subchannel formulation is performed in the thermal-hydraulic subchannel code CTF. The improved CTF simulations utilize data files derived from CFD STAR-CCM+ simulation results covering the spacer grid design desired for inclusion in the CTF calculation. The current implementation of these models is examined and possibilities for improvement and further development are suggested. The validation experimental database is extended by including the OECD/NRC PSBT benchmark data. The outcome is an enhanced accuracy of CTF predictions while preserving the computational efficiency of a low-fidelity subchannel code.

  12. Application of Probability Methods to Assess Crash Modeling Uncertainty

    NASA Technical Reports Server (NTRS)

    Lyle, Karen H.; Stockwell, Alan E.; Hardy, Robin C.

    2003-01-01

    Full-scale aircraft crash simulations performed with nonlinear, transient dynamic, finite element codes can incorporate structural complexities such as geometrically accurate models, human occupant models, and advanced material models that include nonlinear stress-strain behaviors and material failure. Validation of these crash simulations is difficult due to a lack of sufficient information to adequately determine the uncertainty in the experimental data and the appropriateness of modeling assumptions. This paper evaluates probabilistic approaches to quantify the effects of finite element modeling assumptions on the predicted responses. The vertical drop test of a Fokker F28 fuselage section will be the focus of this paper. The results of a probabilistic analysis using finite element simulations will be compared with experimental data.
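The probabilistic approach evaluated here amounts to propagating input uncertainty through the model and reading off statistics of the predicted response. A minimal Monte Carlo sketch, with an invented algebraic stand-in for the finite element model and assumed input distributions (all names and numbers are hypothetical), illustrates the workflow:

```python
import numpy as np

def peak_acceleration(thickness, yield_stress):
    """Invented algebraic stand-in for an expensive FE crash response."""
    return 120.0 / (thickness * np.sqrt(yield_stress))

rng = np.random.default_rng(42)
n = 100_000
thickness = rng.normal(2.0, 0.1, n)        # skin thickness, mm (assumed spread)
yield_stress = rng.normal(300.0, 20.0, n)  # yield stress, MPa (assumed spread)
g = peak_acceleration(thickness, yield_stress)
lo, hi = np.percentile(g, [2.5, 97.5])     # 95% interval on the response
```

In practice each sample would be a full finite element run (or a surrogate fitted to a few runs), and the experimental measurement is then compared against the resulting interval rather than a single deterministic prediction.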

  13. Application of Probability Methods to Assess Crash Modeling Uncertainty

    NASA Technical Reports Server (NTRS)

    Lyle, Karen H.; Stockwell, Alan E.; Hardy, Robin C.

    2007-01-01

    Full-scale aircraft crash simulations performed with nonlinear, transient dynamic, finite element codes can incorporate structural complexities such as geometrically accurate models, human occupant models, and advanced material models that include nonlinear stress-strain behaviors and material failure. Validation of these crash simulations is difficult due to a lack of sufficient information to adequately determine the uncertainty in the experimental data and the appropriateness of modeling assumptions. This paper evaluates probabilistic approaches to quantify the effects of finite element modeling assumptions on the predicted responses. The vertical drop test of a Fokker F28 fuselage section will be the focus of this paper. The results of a probabilistic analysis using finite element simulations will be compared with experimental data.

  14. The NASA aircraft icing research program

    NASA Technical Reports Server (NTRS)

    Shaw, Robert J.; Reinmann, John J.

    1990-01-01

    The objective of the NASA aircraft icing research program is to develop and make available to industry icing technology to support the needs and requirements for all-weather aircraft designs. Research is being done for both fixed wing and rotary wing applications. The NASA program emphasizes technology development in two areas, advanced ice protection concepts and icing simulation. Reviewed here are the computer code development/validation, icing wind tunnel testing, and icing flight testing efforts.

  15. Main functions, recent updates, and applications of Synchrotron Radiation Workshop code

    NASA Astrophysics Data System (ADS)

    Chubar, Oleg; Rakitin, Maksim; Chen-Wiegart, Yu-Chen Karen; Chu, Yong S.; Fluerasu, Andrei; Hidas, Dean; Wiegart, Lutz

    2017-08-01

    The paper presents an overview of the main functions and new application examples of the "Synchrotron Radiation Workshop" (SRW) code. SRW supports high-accuracy calculations of different types of synchrotron radiation, and simulations of propagation of fully-coherent radiation wavefronts, partially-coherent radiation from a finite-emittance electron beam of a storage ring source, and time-/frequency-dependent radiation pulses of a free-electron laser, through X-ray optical elements of a beamline. An extended library of physical-optics "propagators" for different types of reflective, refractive and diffractive X-ray optics with their typical imperfections, implemented in SRW, enables simulation of practically any X-ray beamline in a modern light source facility. The high accuracy of the calculation methods used in SRW allows for multiple applications of this code, not only in the area of development of instruments and beamlines for new light source facilities, but also in areas such as electron beam diagnostics, commissioning and performance benchmarking of insertion devices and individual X-ray optical elements of beamlines. Applications of SRW in these areas, facilitating development and advanced commissioning of beamlines at the National Synchrotron Light Source II (NSLS-II), are described.
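Physical-optics wavefront propagation of the kind SRW performs is, in its simplest free-space form, a Fresnel transfer function applied in the Fourier domain. A generic sketch (not SRW's API; the beam and sampling parameters are illustrative):

```python
import numpy as np

def fresnel_propagate(field, dx, wavelength, z):
    """Propagate a sampled complex wavefront a distance z in free space
    by applying the Fresnel transfer function in the Fourier domain."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    H = np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Gaussian beam on a 256x256 grid; free-space propagation conserves power.
n, dx = 256, 1e-6
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x, indexing="ij")
beam = np.exp(-(X**2 + Y**2) / (2.0 * (20e-6) ** 2)).astype(complex)
out = fresnel_propagate(beam, dx, wavelength=1e-10, z=10.0)
```

A full beamline simulation chains such drift propagators with optical-element "propagators" (apertures, mirrors, lenses with figure errors) applied as complex transmission functions between drifts.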

  16. Comprehensive Micromechanics-Analysis Code - Version 4.0

    NASA Technical Reports Server (NTRS)

    Arnold, S. M.; Bednarcyk, B. A.

    2005-01-01

    Version 4.0 of the Micromechanics Analysis Code With Generalized Method of Cells (MAC/GMC) has been developed as an improved means of computational simulation of advanced composite materials. The previous version of MAC/GMC was described in "Comprehensive Micromechanics-Analysis Code" (LEW-16870), NASA Tech Briefs, Vol. 24, No. 6 (June 2000), page 38. To recapitulate: MAC/GMC is a computer program that predicts the elastic and inelastic thermomechanical responses of continuous and discontinuous composite materials with arbitrary internal microstructures and reinforcement shapes. The predictive capability of MAC/GMC rests on a model known as the generalized method of cells (GMC) - a continuum-based model of micromechanics that provides closed-form expressions for the macroscopic response of a composite material in terms of the properties, sizes, shapes, and responses of the individual constituents or phases that make up the material. Enhancements in version 4.0 include a capability for modeling thermomechanically and electromagnetically coupled ("smart") materials; a more-accurate (high-fidelity) version of the GMC; a capability to simulate discontinuous plies within a laminate; additional constitutive models of materials; expanded yield-surface-analysis capabilities; and expanded failure-analysis and life-prediction capabilities on both the microscopic and macroscopic scales.
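    GMC provides closed-form expressions for the macroscopic response in terms of the constituent properties. As a far simpler stand-in for that idea (not the GMC equations themselves), the sketch below computes the classical Voigt and Reuss mixing rules, bounds often used as sanity checks on micromechanics predictions; the material numbers are purely illustrative:

```python
def voigt_reuss_bounds(E_f, E_m, v_f):
    """Upper (Voigt, isostrain) and lower (Reuss, isostress) bounds on the
    effective Young's modulus of a two-phase composite with fibre volume
    fraction v_f."""
    v_m = 1.0 - v_f
    E_voigt = v_f * E_f + v_m * E_m            # rule of mixtures
    E_reuss = 1.0 / (v_f / E_f + v_m / E_m)    # inverse rule of mixtures
    return E_voigt, E_reuss

# Illustrative numbers: a ~400 GPa fibre in a ~110 GPa matrix at 35 % fibre.
upper, lower = voigt_reuss_bounds(400.0, 110.0, 0.35)
```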

  17. Modeling Laboratory Astrophysics Experiments using the CRASH code

    NASA Astrophysics Data System (ADS)

    Trantham, Matthew; Drake, R. P.; Grosskopf, Michael; Bauerle, Matthew; Kuranz, Carolyn; Keiter, Paul; Malamud, Guy; Crash Team

    2013-10-01

    The understanding of high-energy-density systems can be advanced by laboratory astrophysics experiments. Computer simulations can assist in the design and analysis of these experiments. The Center for Radiative Shock Hydrodynamics (CRASH) at the University of Michigan developed a code that has been used to design and analyze high-energy-density experiments on OMEGA, NIF, and other large laser facilities. This Eulerian code uses block-adaptive mesh refinement (AMR) with implicit multigroup radiation transport and electron heat conduction. This poster/talk will demonstrate some of the experiments the CRASH code has helped design or analyze, including radiative-shock, Kelvin-Helmholtz, Rayleigh-Taylor, plasma-sheet, and interacting-jet experiments. This work is funded by the Predictive Science Academic Alliance Program in NNSA-ASC via grant DE-FC52-08NA28616, by the NNSA-DS and SC-OFES Joint Program in High-Energy-Density Laboratory Plasmas, grant number DE-FG52-09NA29548, and by the National Laser User Facility Program, grant number DE-NA0000850.

  18. High-Fidelity Coupled Monte-Carlo/Thermal-Hydraulics Calculations

    NASA Astrophysics Data System (ADS)

    Ivanov, Aleksandar; Sanchez, Victor; Ivanov, Kostadin

    2014-06-01

    Monte Carlo methods have been used worldwide as reference reactor-physics calculation tools. Advances in computer technology allow the calculation of detailed flux distributions in both space and energy. In most cases, however, those calculations are done under the assumption of homogeneous material density and temperature distributions. The aim of this work is to develop a consistent methodology for providing realistic three-dimensional thermal-hydraulic distributions by coupling the in-house sub-channel code SUBCHANFLOW with the standard Monte Carlo transport code MCNP. In addition to the innovative technique of on-the-fly material definition, a flux-based weight-window technique has been introduced to improve both the magnitude and the distribution of the relative errors. Finally, a coupled code system for the simulation of steady-state reactor-physics problems has been developed. Besides the problem of effective feedback-data interchange between the codes, the treatment of the temperature dependence of the continuous-energy nuclear data has been investigated.
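    External coupling of this kind is commonly organised as a fixed-point (Picard) iteration: the transport solve produces a power shape, the thermal-hydraulics solve returns temperatures, and the exchange repeats with under-relaxation until the fields stop changing. The sketch below mimics that outer loop with mock one-line "solvers"; the feedback coefficients and function names are invented for illustration and are not from SUBCHANFLOW or MCNP:

```python
import numpy as np

def neutronics(T):
    """Mock power shape: an axial sine suppressed by a Doppler-like
    feedback where the fuel is hot; normalised to unit mean power."""
    z = np.linspace(0.05, 0.95, T.size)
    p = np.sin(np.pi * z) / (1.0 + 0.002 * (T - 300.0))
    return p / p.mean()

def thermal_hydraulics(P):
    """Mock sub-channel solve: local temperature rises with local power."""
    return 300.0 + 150.0 * P

def picard_coupling(n_cells=20, relax=0.5, tol=1e-8, max_iter=200):
    """Under-relaxed fixed-point iteration between the two mock solvers."""
    T = np.full(n_cells, 300.0)
    for it in range(max_iter):
        P = neutronics(T)
        T_new = thermal_hydraulics(P)
        if np.max(np.abs(T_new - T)) < tol:
            return T, P, it
        T = (1.0 - relax) * T + relax * T_new   # damped update
    raise RuntimeError("coupling iteration did not converge")

T, P, iters = picard_coupling()
```

    In practice the Monte Carlo statistical noise complicates the convergence check, which is one reason relaxation of the exchanged fields matters.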

  19. Overview of codes and tools for nuclear engineering education

    NASA Astrophysics Data System (ADS)

    Yakovlev, D.; Pryakhin, A.; Medvedeva, L.

    2017-01-01

    Recent world trends in nuclear education have developed in the direction of social education, networking, virtual tools, and codes. MEPhI, a global leader in the world education market, implements new advanced technologies for distance and online learning and for student research work. MEPhI has produced special codes, tools, and web resources based on an internet platform to support education in the field of nuclear technology. At the same time, MEPhI actively uses codes and tools from third parties. Several types of tools are considered: calculation codes, nuclear-data visualization tools, virtual labs, PC-based educational simulators for nuclear power plants (NPPs), CLP4NET, educational web platforms, and distance courses (MOOCs and controlled and managed content systems). The university pays special attention to integrated products such as CLP4NET, which is not itself a learning course but serves to automate the learning process through distance technologies. CLP4NET organizes all tools in a single information space. To date, MEPhI has achieved significant results in the field of distance education and online system implementation.

  20. The ZPIC educational code suite

    NASA Astrophysics Data System (ADS)

    Calado, R.; Pardal, M.; Ninhos, P.; Helm, A.; Mori, W. B.; Decyk, V. K.; Vieira, J.; Silva, L. O.; Fonseca, R. A.

    2017-10-01

    Particle-in-cell (PIC) codes are used in almost all areas of plasma physics, including fusion energy research, plasma accelerators, space physics, ion propulsion, and plasma processing. In this work, we present the ZPIC educational code suite, a new initiative to foster training in plasma physics using computer simulations. Leveraging our expertise and experience from the development and use of the OSIRIS PIC code, we have developed a suite of 1D/2D fully relativistic electromagnetic PIC codes, as well as a 1D electrostatic code. These codes are self-contained and require only a standard laptop/desktop computer with a C compiler. The output files are written in a new file format called ZDF that can be easily read using the supplied routines in a number of languages, such as Python and IDL. The code suite also includes a number of example problems that can be used to illustrate several textbook and advanced plasma mechanisms, including instructions for parameter-space exploration. We also invite contributions to this repository of test problems, which will be made freely available to the community provided the input files comply with the format defined by the ZPIC team. The code suite is freely available and hosted on GitHub at https://github.com/zambzamb/zpic. Work partially supported by PICKSC.
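    The PIC cycle that codes like ZPIC implement is short enough to sketch: deposit particle charge to a grid, solve for the field, gather the field back to the particles, and push. A minimal periodic 1-D electrostatic version (an assumed structure for illustration, not ZPIC's actual C source) might look like:

```python
import numpy as np

def pic_step(x, v, q, m, grid_n, L, dt):
    """One cycle of a periodic 1-D electrostatic PIC step:
    CIC charge deposit -> spectral Poisson solve -> CIC gather -> push."""
    dx = L / grid_n
    g = x / dx
    i = np.floor(g).astype(int) % grid_n
    frac = g - np.floor(g)
    # cloud-in-cell (linear-weighting) charge deposit
    rho = np.zeros(grid_n)
    np.add.at(rho, i, q * (1.0 - frac) / dx)
    np.add.at(rho, (i + 1) % grid_n, q * frac / dx)
    # Poisson solve in Fourier space: phi_k = rho_k / k^2 (eps0 = 1),
    # with the k = 0 mode removed (neutralising background)
    k = 2.0 * np.pi * np.fft.fftfreq(grid_n, d=dx)
    rho_k = np.fft.fft(rho - rho.mean())
    phi_k = np.zeros_like(rho_k)
    phi_k[1:] = rho_k[1:] / k[1:] ** 2
    E_grid = np.real(np.fft.ifft(-1j * k * phi_k))   # E = -dphi/dx
    # gather the field at particle positions and push (leapfrog)
    E_part = E_grid[i] * (1.0 - frac) + E_grid[(i + 1) % grid_n] * frac
    v = v + (q / m) * E_part * dt
    x = (x + v * dt) % L
    return x, v, E_grid

# Electrons with a small sinusoidal density perturbation (plasma oscillation).
L, n_part, grid_n = 2.0 * np.pi, 10000, 64
x = np.linspace(0.0, L, n_part, endpoint=False)
x = (x + 1e-3 * np.sin(2.0 * np.pi * x / L)) % L
v = np.zeros(n_part)
for _ in range(10):
    x, v, E = pic_step(x, v, q=-1.0 / n_part, m=1.0 / n_part,
                       grid_n=grid_n, L=L, dt=0.1)
```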

  1. Object Based Numerical Zooming Between the NPSS Version 1 and a 1-Dimensional Meanline High Pressure Compressor Design Analysis Code

    NASA Technical Reports Server (NTRS)

    Follen, G.; Naiman, C.; auBuchon, M.

    2000-01-01

    Within NASA's High Performance Computing and Communication (HPCC) program, NASA Glenn Research Center is developing an environment for the analysis/design of propulsion systems for aircraft and space vehicles called the Numerical Propulsion System Simulation (NPSS). The NPSS focuses on the integration of multiple disciplines, such as aerodynamics, structures, and heat transfer, along with the concept of numerical zooming between 0-dimensional and 1-, 2-, and 3-dimensional component engine codes. The vision for NPSS is to create a "numerical test cell" enabling full engine simulations overnight on cost-effective computing platforms. Current state-of-the-art engine simulations are 0-dimensional in that there is no axial, radial, or circumferential resolution within a given component (e.g., a compressor or turbine has no internal station designations). In these 0-dimensional cycle simulations, the individual component performance characteristics typically come from a table look-up (map) with adjustments for off-design effects such as variable geometry, Reynolds effects, and clearances. Zooming one or more of the engine components to a higher-order, physics-based analysis means a higher-order code is executed and the results from this analysis are used to adjust the 0-dimensional component performance characteristics within the system simulation. By drawing on the results from more predictive, physics-based, higher-order analysis codes, cycle simulations are refined to closely model and predict the complex physical processes inherent to engines. As part of the overall development of the NPSS, NASA and industry began the process of defining and implementing an object class structure that enables numerical zooming between the NPSS Version 1 (0-dimensional) and higher-order 1-, 2-, and 3-dimensional analysis codes.
The NPSS Version 1 preserves historical cycle engineering practices but also extends these classical practices into the area of numerical zooming for use within a company's design system. What follows here is a description of successfully zooming 1-dimensional (row-by-row) high-pressure-compressor results back to an NPSS 0-dimensional engine simulation, and a discussion of the results illustrated using an advanced data visualization tool. This type of high-fidelity system-level analysis, made possible by the zooming capability of the NPSS, will greatly improve the fidelity of the engine system simulation and enable the engine system to be "pre-validated" prior to commitment to engine hardware.
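    Mechanically, zooming often amounts to computing multiplicative scalars that force the 0-D map point to reproduce the higher-order result, then applying those scalars across the map. A hedged sketch of that bookkeeping (the scalar definitions and all numbers are hypothetical, not NPSS's actual scheme):

```python
def zoom_scalars(map_pr, map_eff, hifi_pr, hifi_eff):
    """Multiplicative scalars that make a 0-D map point reproduce the
    pressure ratio and efficiency from a higher-order analysis.
    PR is scaled on (PR - 1), the work-like part of the map value."""
    return (hifi_pr - 1.0) / (map_pr - 1.0), hifi_eff / map_eff

def apply_zoom(map_pr, map_eff, s_pr, s_eff):
    """Apply the zoom scalars to any point read from the 0-D map."""
    return 1.0 + s_pr * (map_pr - 1.0), s_eff * map_eff

# Hypothetical numbers: the 1-D meanline analysis returns PR 11.4 / eff 0.845
# where the uncorrected 0-D map reads PR 12.0 / eff 0.860.
s_pr, s_eff = zoom_scalars(12.0, 0.86, 11.4, 0.845)
pr, eff = apply_zoom(12.0, 0.86, s_pr, s_eff)
```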

  2. Comparison of the LLNL ALE3D and AKTS Thermal Safety Computer Codes for Calculating Times to Explosion in ODTX and STEX Thermal Cookoff Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wemhoff, A P; Burnham, A K

    2006-04-05

    Cross-comparison of the results of two computer codes for the same problem provides a mutual validation of their computational methods. This cross-validation exercise was performed for LLNL's ALE3D code and AKTS's Thermal Safety code, using the thermal ignition of HMX in two standard LLNL cookoff experiments: the One-Dimensional Time to Explosion (ODTX) test and the Scaled Thermal Explosion (STEX) test. The chemical kinetics model used in both codes was the extended Prout-Tompkins model, a relatively new addition to ALE3D. This model was applied using ALE3D's new pseudospecies feature. In addition, an advanced isoconversional kinetic approach was used in the AKTS code. The mathematical constants in the Prout-Tompkins model were calibrated using DSC data from hermetically sealed vessels and the LLNL optimization code Kinetics05. The isoconversional kinetic parameters were optimized using the AKTS Thermokinetics code. We found that the Prout-Tompkins model calculations agree fairly well between the two codes, and that the isoconversional kinetic model gives results very similar to those of the Prout-Tompkins model. We also found that an autocatalytic approach in the beta-delta phase-transition model does affect the times to explosion for some conditions, especially STEX-like simulations at ramp rates above 100 C/hr, and further exploration of that effect is warranted.
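    One published form of the extended Prout-Tompkins rate law is dα/dt = k(T)(1-α)^n(1-q(1-α))^m, where the q parameter lets the autocatalytic reaction initiate from α = 0. The sketch below integrates that form isothermally; the form itself is an assumption here, and the kinetic parameters are illustrative, deliberately not HMX calibration values:

```python
import numpy as np

R_GAS = 8.314  # J/(mol K)

def ept_rate(alpha, T, A, E, n, m, q):
    """Extended Prout-Tompkins rate (one published form, assumed here):
    dalpha/dt = k(T) * (1-alpha)**n * (1 - q*(1-alpha))**m."""
    k = A * np.exp(-E / (R_GAS * T))
    a = min(max(alpha, 0.0), 1.0)          # keep conversion in [0, 1]
    return k * (1.0 - a) ** n * (1.0 - q * (1.0 - a)) ** m

def integrate_isothermal(T, A, E, n, m, q, dt, t_end):
    """RK4 time integration of the conversion alpha(t) at constant T."""
    alpha, history = 0.0, [0.0]
    for _ in range(int(t_end / dt)):
        k1 = ept_rate(alpha, T, A, E, n, m, q)
        k2 = ept_rate(alpha + 0.5 * dt * k1, T, A, E, n, m, q)
        k3 = ept_rate(alpha + 0.5 * dt * k2, T, A, E, n, m, q)
        k4 = ept_rate(alpha + dt * k3, T, A, E, n, m, q)
        alpha = min(alpha + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0, 1.0)
        history.append(alpha)
    return np.array(history)

# Illustrative (NOT HMX-calibrated) parameters: an S-shaped conversion curve.
hist = integrate_isothermal(T=550.0, A=1e12, E=1.5e5, n=1.0, m=1.0,
                            q=0.99, dt=1.0, t_end=4000.0)
```

    The characteristic sigmoid shape, slow initiation followed by autocatalytic acceleration, is what makes cookoff times so sensitive to the q and m values fitted from DSC data.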

  3. Advanced Computational Methods for Thermal Radiative Heat Transfer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tencer, John; Carlberg, Kevin Thomas; Larsen, Marvin E.

    2016-10-01

    Participating-media radiation (PMR) in weapon-safety calculations for abnormal thermal environments is too costly to compute routinely. This cost may be substantially reduced by applying reduced-order modeling (ROM) techniques. The application of ROM to PMR is a new and unique approach for this class of problems. The approach was investigated by the authors and shown to provide significant reductions in the computational expense associated with typical PMR simulations. Once this technology is migrated into production heat-transfer analysis codes, this capability will enable the routine use of PMR heat transfer in higher-fidelity simulations of weapon response in fire environments.
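    ROM workflows of this kind typically start from proper orthogonal decomposition (POD): collect full-order snapshots, take an SVD, and project onto the leading modes. A minimal sketch with synthetic snapshot data (not the PMR solver itself; the snapshot fields are invented smooth profiles):

```python
import numpy as np

# Snapshot matrix: each column is a full-order "field" (smooth 1-D profiles here).
x = np.linspace(0.0, 1.0, 200)
snapshots = np.column_stack(
    [np.exp(-((x - mu) / 0.2) ** 2) for mu in np.linspace(0.2, 0.8, 40)]
)

# POD basis = left singular vectors of the snapshot matrix.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)

def rom_error(k):
    """Relative Frobenius error of the rank-k POD reconstruction."""
    basis = U[:, :k]
    recon = basis @ (basis.T @ snapshots)
    return np.linalg.norm(snapshots - recon) / np.linalg.norm(snapshots)

errors = [rom_error(k) for k in (1, 3, 5, 10)]
```

    The fast decay of the reconstruction error with mode count is exactly what makes a handful of modes sufficient to replace an expensive full-order radiation solve.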

  4. Thermal hydraulic simulations, error estimation and parameter sensitivity studies in Drekar::CFD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Thomas Michael; Shadid, John N.; Pawlowski, Roger P.

    2014-01-01

    This report describes work directed toward completion of the Thermal Hydraulics Methods (THM) CFD Level 3 Milestone THM.CFD.P7.05 for the Consortium for Advanced Simulation of Light Water Reactors (CASL) Nuclear Hub effort. The focus of this milestone was to demonstrate the thermal-hydraulics and adjoint-based error estimation and parameter sensitivity capabilities in the CFD code Drekar::CFD. This milestone builds upon the capabilities demonstrated in three earlier milestones: THM.CFD.P4.02 [12], completed March 31, 2012; THM.CFD.P5.01 [15], completed June 30, 2012; and THM.CFD.P5.01 [11], completed October 31, 2012.
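    Adjoint-based error estimation for an output functional J(u) = gᵀu of a linear problem Au = f needs only one extra solve: with Aᵀψ = g, the residual-weighted estimate ψᵀ(f - Au_h) recovers J(u) - J(u_h) exactly in the linear case. A small self-contained check of that identity (toy matrices, not Drekar::CFD):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 30
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # well-conditioned forward operator
f = rng.standard_normal(n)                          # source term
g = rng.standard_normal(n)                          # defines the output J(u) = g @ u

u_exact = np.linalg.solve(A, f)
u_h = u_exact + 1e-3 * rng.standard_normal(n)       # stand-in "discrete" solution

psi = np.linalg.solve(A.T, g)                       # one adjoint solve
eta = psi @ (f - A @ u_h)                           # residual-weighted error estimate

true_err = g @ u_exact - g @ u_h                    # exact output error
```

    For nonlinear flow equations the same structure survives, but the estimate becomes approximate and linearisation choices matter.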

  5. Investigation of the Flow Physics Driving Stall-Side Flutter in Advanced Forward Swept Fan Designs

    NASA Technical Reports Server (NTRS)

    Sanders, Albert J.; Liu, Jong S.; Panovsky, Josef; Bakhle, Milind A.; Stefko, George; Srivastava, Rakesh

    2003-01-01

    Flutter-free operation of advanced transonic fan designs continues to be a challenging task for the designers of aircraft engines. In order to meet the demands of increased performance and lighter weight, these modern fan designs usually feature low-aspect ratio shroudless rotor blade designs that make the task of achieving adequate flutter margin even more challenging for the aeroelastician. This is especially true for advanced forward swept designs that encompass an entirely new design space compared to previous experience. Fortunately, advances in unsteady computational fluid dynamic (CFD) techniques over the past decade now provide an analysis capability that can be used to quantitatively assess the aeroelastic characteristics of these next generation fans during the design cycle. For aeroelastic applications, Mississippi State University and NASA Glenn Research Center have developed the CFD code TURBO-AE. This code is a time-accurate three-dimensional Euler/Navier-Stokes unsteady flow solver developed for axial-flow turbomachinery that can model multiple blade rows undergoing harmonic oscillations with arbitrary interblade phase angles, i.e., nodal diameter patterns. Details of the code can be found in Chen et al. (1993, 1994), Bakhle et al. (1997, 1998), and Srivastava et al. (1999). To assess aeroelastic stability, the work-per-cycle from TURBO-AE is converted to the critical damping ratio since this value is more physically meaningful, with both the unsteady normal pressure and viscous shear forces included in the work-per-cycle calculation. If the total damping (aerodynamic plus mechanical) is negative, then the blade is unstable since it extracts energy from the flow field over the vibration cycle. 
TURBO-AE is an integral part of an aeroelastic design system being developed at Honeywell Engines, Systems & Services for flutter and forced-response predictions, with test cases from development rig and engine tests being used to validate its predictive capability. A recent experimental program (Sanders et al., 2002) was aimed at providing the unsteady aerodynamic and vibratory response data needed to validate TURBO-AE for fan flutter predictions. A comparison of numerical TURBO-AE simulations with the benchmark flutter data is given in Sanders et al. (2003), with the data used to guide the validation of the code and define best practices for performing accurate unsteady simulations. The agreement between the analyses and the measured data was quite remarkable, demonstrating the ability of the analysis to accurately model the unsteady flow processes driving stall-side flutter.
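    The work-per-cycle to critical-damping-ratio conversion mentioned above can be made concrete with the standard single-degree-of-freedom analogy: a viscous damper c dissipates πcωA² per cycle and ζ = c/(2mω), which gives ζ = -W/(4πE_max) with E_max the peak modal kinetic energy. The sketch below checks that identity; the 1-DOF form is an assumption here, not necessarily TURBO-AE's exact bookkeeping:

```python
import numpy as np

def critical_damping_ratio(work_per_cycle, modal_mass, omega, amplitude):
    """Convert work per vibration cycle into an equivalent critical damping
    ratio via the 1-DOF analogy: zeta = -W / (4*pi*E_max), where
    E_max = 0.5*m*(omega*A)**2 is the peak modal kinetic energy.  Negative
    aerodynamic work (energy removed from the blade) gives positive damping."""
    e_max = 0.5 * modal_mass * (omega * amplitude) ** 2
    return -work_per_cycle / (4.0 * np.pi * e_max)

# Consistency check against a pure viscous damper with a known damping ratio:
m, omega, amp, zeta = 1.0, 100.0, 1e-3, 0.02
c = 2.0 * zeta * m * omega            # damper that realises zeta = c/(2*m*omega)
W = -np.pi * c * omega * amp ** 2     # energy it removes per cycle at amplitude amp
zeta_est = critical_damping_ratio(W, m, omega, amp)
```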

  6. Proceedings of the 14th International Conference on the Numerical Simulation of Plasmas

    NASA Astrophysics Data System (ADS)

    Partial Contents are as follows: Numerical Simulations of the Vlasov-Maxwell Equations by Coupled Particle-Finite Element Methods on Unstructured Meshes; Electromagnetic PIC Simulations Using Finite Elements on Unstructured Grids; Modelling Travelling Wave Output Structures with the Particle-in-Cell Code CONDOR; SST--A Single-Slice Particle Simulation Code; Graphical Display and Animation of Data Produced by Electromagnetic, Particle-in-Cell Codes; A Post-Processor for the PEST Code; Gray Scale Rendering of Beam Profile Data; A 2D Electromagnetic PIC Code for Distributed Memory Parallel Computers; 3-D Electromagnetic PIC Simulation on the NRL Connection Machine; Plasma PIC Simulations on MIMD Computers; Vlasov-Maxwell Algorithm for Electromagnetic Plasma Simulation on Distributed Architectures; MHD Boundary Layer Calculation Using the Vortex Method; and Eulerian Codes for Plasma Simulations.

  7. Impact of Nuclear Data Uncertainties on Calculated Spent Fuel Nuclide Inventories and Advanced NDA Instrument Response

    DOE PAGES

    Hu, Jianwei; Gauld, Ian C.

    2014-12-01

    The U.S. Department of Energy’s Next Generation Safeguards Initiative Spent Fuel (NGSI-SF) project is nearing the final phase of developing several advanced nondestructive assay (NDA) instruments designed to measure spent nuclear fuel assemblies for the purpose of improving nuclear safeguards. Current efforts are focused on calibrating several of these instruments with spent fuel assemblies at two international spent fuel facilities. Modeling and simulation are expected to play an important role in predicting nuclide compositions, neutron and gamma source terms, and instrument responses in order to inform the instrument calibration procedures. As part of the NGSI-SF project, this work was carried out to assess the impacts of uncertainties in the nuclear data used in the calculations of spent fuel content, radiation emissions, and instrument responses. Nuclear data are an essential part of nuclear fuel burnup and decay codes and nuclear transport codes. Such codes are routinely used for analysis of spent fuel and NDA safeguards instruments. Hence, the uncertainties in the nuclear data used in these codes affect the accuracy of such analyses. In addition, nuclear data uncertainties represent the limiting (smallest) uncertainties that can be expected from nuclear code predictions, and therefore define the highest attainable accuracy of an NDA instrument. This work studies the impacts of nuclear data uncertainties on calculated spent fuel nuclide inventories and the associated NDA instrument response. Recently developed methods within the SCALE code system are applied in this study. The Californium Interrogation with Prompt Neutron instrument was selected to illustrate the impact of these uncertainties on NDA instrument response.
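    A common way to propagate nuclear-data uncertainty, and the spirit of sampling-based methods, is to perturb the data many times and watch the spread of the response. The toy below does this for slab transmission and compares the sampled spread against the first-order sensitivity estimate; the cross section, thickness, and 1% uncertainty are invented numbers, not SCALE inputs:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy response: neutron transmission through a slab, R = exp(-sigma * t).
sigma0, thickness = 0.5, 4.0      # nominal cross section (1/cm) and slab (cm)
rel_unc = 0.01                    # assumed 1 % relative nuclear-data uncertainty

sigma = sigma0 * (1.0 + rel_unc * rng.standard_normal(100_000))
R = np.exp(-sigma * thickness)

sampled_rel_std = R.std() / R.mean()        # sampled response uncertainty
first_order = sigma0 * thickness * rel_unc  # |(sigma/R) dR/dsigma| * rel_unc
```

    For small perturbations the two estimates agree; sampling becomes the more trustworthy route when responses are strongly nonlinear in the data.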

  9. Collaborative Research: Simulation of Beam-Electron Cloud Interactions in Circular Accelerators Using Plasma Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katsouleas, Thomas; Decyk, Viktor

    Final Report for grant DE-FG02-06ER54888, "Simulation of Beam-Electron Cloud Interactions in Circular Accelerators Using Plasma Models," Viktor K. Decyk, University of California, Los Angeles, Los Angeles, CA 90095-1547. The primary goal of this collaborative proposal was to modify the code QuickPIC and apply it to study the long-time stability of beam propagation in the low-density electron clouds present in circular accelerators. The UCLA contribution to this collaborative proposal was in supporting the development of the pipelining scheme for the QuickPIC code, which extended the parallel scaling of this code by two orders of magnitude. The USC work described here was the PhD research of Ms. Bing Feng, lead author of reference [2] below, who performed the research at USC under the guidance of the PI, Tom Katsouleas, and in collaboration with Dr. Decyk. The QuickPIC code [1] is a multi-scale particle-in-cell (PIC) code. The outer 3D code contains a beam which propagates through a long region of plasma and evolves slowly. The plasma response to this beam is modeled by slices of a 2D plasma code. This plasma response is then fed back to the beam code, and the process repeats. The pipelining is based on the observation that once the beam has passed a 2D slice, the slice's response can be fed back to the beam immediately, without waiting for the beam to pass all the other slices. Thus, independent blocks of 2D slices from different time steps can be running simultaneously. The major difficulty arose when particles at the block edges needed to communicate with other blocks. Two versions of the pipelining scheme were developed: one for the full quasi-static code and the other for the basic quasi-static code used by this e-cloud proposal. Details of the pipelining scheme were published in [2]. The new version of QuickPIC was able to run with more than 1,000 processors, and was successfully applied in modeling e-clouds by our collaborators in this proposal [3-8]. 
Jean-Luc Vay at Lawrence Berkeley National Lab later implemented a similar basic quasistatic scheme including pipelining in the code WARP [9] and found good to very good quantitative agreement between the two codes in modeling e-clouds. References [1] C. Huang, V. K. Decyk, C. Ren, M. Zhou, W. Lu, W. B. Mori, J. H. Cooley, T. M. Antonsen, Jr., and T. Katsouleas, "QUICKPIC: A highly efficient particle-in-cell code for modeling wakefield acceleration in plasmas," J. Computational Phys. 217, 658 (2006). [2] B. Feng, C. Huang, V. K. Decyk, W. B. Mori, P. Muggli, and T. Katsouleas, "Enhancing parallel quasi-static particle-in-cell simulations with a pipelining algorithm," J. Computational Phys, 228, 5430 (2009). [3] C. Huang, V. K. Decyk, M. Zhou, W. Lu, W. B. Mori, J. H. Cooley, T. M. Antonsen, Jr., and B. Feng, T. Katsouleas, J. Vieira, and L. O. Silva, "QUICKPIC: A highly efficient fully parallelized PIC code for plasma-based acceleration," Proc. of the SciDAC 2006 Conf., Denver, Colorado, June, 2006 [Journal of Physics: Conference Series, W. M. Tang, Editor, vol. 46, Institute of Physics, Bristol and Philadelphia, 2006], p. 190. [4] B. Feng, C. Huang, V. Decyk, W. B. Mori, T. Katsouleas, P. Muggli, "Enhancing Plasma Wakefield and E-cloud Simulation Performance Using a Pipelining Algorithm," Proc. 12th Workshop on Advanced Accelerator Concepts, Lake Geneva, WI, July, 2006, p. 201 [AIP Conf. Proceedings, vol. 877, Melville, NY, 2006]. [5] B. Feng, P. Muggli, T. Katsouleas, V. Decyk, C. Huang, and W. Mori, "Long Time Electron Cloud Instability Simulation Using QuickPIC with Pipelining Algorithm," Proc. of the 2007 Particle Accelerator Conference, Albuquerque, NM, June, 2007, p. 3615. [6] B. Feng, C. Huang, V. Decyk, W. B. Mori, G. H. Hoffstaetter, P. Muggli, T. Katsouleas, "Simulation of Electron Cloud Effects on Electron Beam at ERL with Pipelined QuickPIC," Proc. 13th Workshop on Advanced Accelerator Concepts, Santa Cruz, CA, July-August, 2008, p. 340 [AIP Conf. 
Proceedings, vol. 1086, Melville, NY, 2008]. [7] B. Feng, C. Huang, V. K. Decyk, W. B. Mori, P. Muggli, and T. Katsouleas, "Enhancing parallel quasi-static particle-in-cell simulations with a pipelining algorithm," J. Computational Phys. 228, 5430 (2009). [8] C. Huang, W. An, V. K. Decyk, W. Lu, W. B. Mori, F. S. Tsung, M. Tzoufras, S. Morshed, T. Antonsen, B. Feng, T. Katsouleas, R. A. Fonseca, S. F. Martins, J. Vieira, L. O. Silva, E. Esarey, C. G. R. Geddes, W. P. Leemans, E. Cormier-Michel, J.-L. Vay, D. L. Bruhwiler, B. Cowan, J. R. Cary, and K. Paul, "Recent results and future challenges for large scale particle-in-cell simulations of plasma-based accelerator concepts," Proc. of the SciDAC 2009 Conf., San Diego, CA, June, 2009 [Journal of Physics: Conference Series, vol. 180, Institute of Physics, Bristol and Philadelphia, 2009], p. 012005. [9] J.-L. Vay, C. M. Celata, M. A. Furman, G. Penn, M. Venturini, D. P. Grote, and K. G. Sonnad, "Update on Electron-Cloud Simulations Using the Package WARP-POSINST," Proc. of the 2009 Particle Accelerator Conference PAC09, Vancouver, Canada, June, 2009, paper FR5RFP078.
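    The fill-and-drain arithmetic behind the pipelining speedup can be sketched with an idealised timing model (this is not QuickPIC's actual scheduler; uniform block times and no communication cost are assumed):

```python
def makespans(n_steps, n_blocks, slice_time):
    """Idealised timing: each beam step must process n_blocks blocks of 2-D
    slices in order.  Serial: a step starts only after the previous step has
    finished every block.  Pipelined: a block starts the next step as soon
    as its upstream neighbour has finished that step (pipeline fill + drain)."""
    serial = n_steps * n_blocks * slice_time
    pipelined = (n_blocks + n_steps - 1) * slice_time
    return serial, pipelined

serial, pipelined = makespans(n_steps=100, n_blocks=16, slice_time=1.0)
speedup = serial / pipelined   # bounded above by n_blocks
```

    With many more beam steps than blocks, the speedup approaches the block count, which is consistent with the two-orders-of-magnitude scaling gain reported above when the block count itself is large.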

  10. A CFD Case Study of a Fan Stage with Split Flow Path Subject to Total Pressure Distortion Inflow

    NASA Technical Reports Server (NTRS)

    To, Wai-Ming

    2017-01-01

    This report is the documentation of the work performed under the Hypersonic Project of NASA's Fundamental Aeronautics Program. It was funded through Task Number NNC10E444T under GESS-2 Contract NNC06BA07B. The objective of the task is to develop advanced computational tools for the simulation of multi-stage turbomachinery in support of aeropropulsion. This includes work elements in extending the TURBO code and validating the multi-stage URANS (Unsteady Reynolds-Averaged Navier-Stokes) simulation results with the experimental data. The unsteady CFD (Computational Fluid Dynamics) calculations were performed in full-wheel mode, with and without screen-generated total pressure distortion at the computational inflow boundary, as well as in single-passage phase-lag mode for uniform inflow. The experimental data were provided by NASA from the single-stage RTA (Revolutionary Turbine Accelerator) fan test program. Significant non-uniform flow conditions at the fan face of the aeropropulsion system are frequently encountered in many advanced aerospace vehicles. These propulsion systems can be either a podded or an embedded design employed in the HWB (Hybrid Wing Body) airframe concept. It is also a topic of interest in military applications, in which advanced air vehicles have already deployed some form of embedded propulsion system in their design because of the requirements of compact and low-observable inlets.
Even in a conventional airframe/engine design, the fan could operate under such conditions when the air vehicle is undergoing rapid maneuvering action. It is believed that a better understanding of the fan's aerodynamic and aeromechanical response to this type of operating condition, or off-design operation, would be beneficial to designing distortion-tolerant blades for improved engine operability. The objective of this research is to assess the capability of a turbomachinery code as an analysis tool for understanding the effects and evaluating the impact of flow distortion on the aerodynamic and aeromechanical performance of the fan in advanced propulsion systems. Results from the testing of an advanced fan stage released by NASA are available and are used here for CFD code validation. The experiment was performed at NASA's high-speed compressor facility as part of the RTA (Revolutionary Turbine Accelerator) demonstration project, a joint effort of NASA Glenn Research Center and GE Aircraft Engines to develop an advanced Mach 4 TBCC (Turbine Based Combined Cycle) turbofan/ramjet engine for access to space. Part of the test was to assess the aerodynamic performance and operability of the fan stage under non-uniform inflow conditions. Various flow distortion patterns were created at the fan face by manipulating sets of screens placed upstream in the wind tunnel. Measurements at the fan face provide the necessary distortion flow information as the inflow boundary condition for the CFD in a full-wheel simulation.
Therefore, the purpose of this work is to demonstrate the NASA-supported multi-stage turbomachinery code TURBO [1-5] in the aerodynamic performance analysis of a modern fan design operating under off-design conditions, and in particular to validate the CFD results with the RTA fan test data. A brief description of the RTA fan rig configuration is given in the next section, explaining how the flow distortion was measured in the test and constructed for the CFD at the fan face. It is followed by a section summarizing previous CFD work performed at NASA relevant to the current fan configuration. A short description of the TURBO code is given next, followed by details of the computational model of the fan rig, the required computing resources, and the numerical procedure for the simulations. The CFD results are presented in the discussion section, and finally concluding remarks are summarized.

  11. Multi-d CFD Modeling of a Free-piston Stirling Convertor at NASA Glenn

    NASA Technical Reports Server (NTRS)

    Wilson, Scott D.; Dyson, Rodger W.; Tew, Roy C.; Ibrahim, Mounir B.

    2004-01-01

    A high-efficiency Stirling Radioisotope Generator (SRG) is being developed for possible use in long-duration space science missions. NASA's advanced technology goals for next-generation Stirling convertors include increasing convertor efficiency and the percentage of Carnot efficiency achieved. To help achieve these goals, a multidimensional Computational Fluid Dynamics (CFD) code is being developed to numerically model the unsteady fluid flow and heat transfer phenomena of the oscillating working gas inside Stirling convertors. Simulations of the Stirling convertors for the SRG will help characterize the thermodynamic losses resulting from fluid flow and heat transfer between the working gas and solid walls. The current CFD simulation represents an approximate two-dimensional convertor geometry. The simulation solves the Navier-Stokes equations for an ideal helium gas oscillating at low speeds. The current simulation results are discussed.

  12. The numerical simulation of a high-speed axial flow compressor

    NASA Technical Reports Server (NTRS)

    Mulac, Richard A.; Adamczyk, John J.

    1991-01-01

    The advancement of high-speed axial-flow multistage compressors is impeded by a lack of detailed flow-field information. Recent developments in compressor flow modeling and numerical simulation have the potential to provide the needed information in a timely manner. The development of a computer program is described to solve the viscous form of the average-passage equation system for multistage turbomachinery. Programming issues such as in-core versus out-of-core data storage and CPU utilization (parallelization, vectorization, and chaining) are addressed. Code performance is evaluated through the simulation of the first four stages of a five-stage, high-speed, axial-flow compressor. The second part addresses the flow physics that can be obtained from the numerical simulation. In particular, an examination of the endwall flow structure is made, and its impact on blockage distribution is assessed.

  13. Geometry Modeling and Grid Generation for Computational Aerodynamic Simulations Around Iced Airfoils and Wings

    NASA Technical Reports Server (NTRS)

    Choo, Yung K.; Slater, John W.; Vickerman, Mary B.; VanZante, Judith F.; Wadel, Mary F. (Technical Monitor)

    2002-01-01

    Issues associated with analysis of 'icing effects' on airfoil and wing performances are discussed, along with accomplishments and efforts to overcome difficulties with ice. Because of infinite variations of ice shapes and their high degree of complexity, computational 'icing effects' studies using available software tools must address many difficulties in geometry acquisition and modeling, grid generation, and flow simulation. The value of each technology component needs to be weighed from the perspective of the entire analysis process, from geometry to flow simulation. Even though CFD codes are yet to be validated for flows over iced airfoils and wings, numerical simulation, when considered together with wind tunnel tests, can provide valuable insights into 'icing effects' and advance our understanding of the relationship between ice characteristics and their effects on performance degradation.

  14. Simulation of Laser Cooling and Trapping in Engineering Applications

    NASA Technical Reports Server (NTRS)

    Ramirez-Serrano, Jaime; Kohel, James; Thompson, Robert; Yu, Nan; Lunblad, Nathan

    2005-01-01

    An advanced computer code is undergoing development for numerically simulating laser cooling and trapping of large numbers of atoms. The code is expected to be useful in practical engineering applications and to contribute to understanding of the roles that light, atomic collisions, background pressure, and numbers of particles play in experiments using laser-cooled and -trapped atoms. The code is based on semiclassical theories of the forces exerted on atoms by magnetic and optical fields. Whereas computer codes developed previously for the same purpose account for only a few physical mechanisms, this code incorporates many more physical mechanisms (including atomic collisions, sub-Doppler cooling mechanisms, Stark and Zeeman energy shifts, gravitation, and evanescent-wave phenomena) that affect laser-matter interactions and the cooling of atoms to submillikelvin temperatures. Moreover, whereas the prior codes can simulate the interactions of at most a few atoms with a resonant light field, the number of atoms that can be included in a simulation by the present code is limited only by computer memory. Hence, the present code represents more nearly completely the complex physics involved when using laser-cooled and -trapped atoms in engineering applications. Another advantage of the code is its ability to analyze interactions between cold atoms of different atomic species. Some properties of cold atoms of different species, such as their cross sections and the particular excited states they can occupy when interacting with each other and with light fields, play important but not yet completely understood roles in the experiments under way in laboratories worldwide to form ultracold molecules. Other research efforts use cold atoms as holders of quantum information, and more recent developments in cavity quantum electrodynamics also use ultracold atoms to explore and expand new information-technology ideas.
These experiments hint at the wide range of applications and technology developments that can be tackled using cold atoms and light fields. From more precise atomic clocks and gravity sensors to the development of quantum computers, there will be a need to fully understand the whole ensemble of physical mechanisms at play in such technologies. The code also permits the study of the dynamic and steady-state operation of technologies that use cold atoms. The physical characteristics of lasers and fields can be time-controlled to give a realistic simulation of the processes involved, such that the design process can determine the best control features to use. With these features, the code is expected to become a useful tool for applying ultracold atoms in engineering. Currently, the software is being used for the analysis and understanding of simple experiments using cold atoms, and for the design of a modular compact source of cold atoms to be used in future research and development projects. The results so far indicate that the code is a useful design instrument that shows good agreement with experimental measurements (see figure); a Windows-based user-friendly interface is also under development.
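The semiclassical light forces that such codes integrate can be illustrated with a minimal one-dimensional optical-molasses sketch. Everything below (a two-level atom, Rb-87-like numbers, the chosen saturation parameter) is an illustrative assumption, not the actual model in the code described above:

```python
import math

# Illustrative sketch: 1D optical-molasses scattering force on a two-level atom.
# Each of two counter-propagating beams contributes a photon scattering rate
# R = (Gamma/2) * s / (1 + s + (2*delta_eff/Gamma)^2), with the detuning
# Doppler-shifted by the atom's motion, and each scattered photon delivers
# a momentum kick hbar*k along the beam direction.

HBAR = 1.054571817e-34  # J*s

def molasses_force(v, k, gamma, delta, s):
    """Net scattering force (N) on an atom moving at velocity v (m/s)."""
    f = 0.0
    for sign in (+1.0, -1.0):                  # beam along +x and beam along -x
        delta_eff = delta - sign * k * v       # Doppler shift seen by the atom
        rate = 0.5 * gamma * s / (1.0 + s + (2.0 * delta_eff / gamma) ** 2)
        f += sign * HBAR * k * rate            # photon momentum kicks
    return f

# Rough Rb-87 D2-line numbers (assumed for illustration).
k = 2.0 * math.pi / 780e-9       # wavenumber, 1/m
gamma = 2.0 * math.pi * 6.07e6   # natural linewidth, rad/s
delta = -0.5 * gamma             # red detuning
s = 1.0                          # saturation parameter per beam
```

For red detuning (delta < 0) the net force opposes the atom's velocity, the viscous damping at the heart of Doppler cooling.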

  15. Update and evaluation of decay data for spent nuclear fuel analyses

    NASA Astrophysics Data System (ADS)

    Simeonov, Teodosi; Wemple, Charles

    2017-09-01

    Studsvik's approach to spent nuclear fuel analyses combines isotopic concentrations and multi-group cross-sections, calculated by the CASMO5 or HELIOS2 lattice transport codes, with core irradiation history data from the SIMULATE5 reactor core simulator and tabulated isotopic decay data. These data sources are used and processed by the code SNF to predict spent nuclear fuel characteristics. Recent advances in the generation procedure for the SNF decay data are presented. The SNF decay data includes basic data, such as decay constants, atomic masses and nuclide transmutation chains; radiation emission spectra for photons from radioactive decay, alpha-n reactions, bremsstrahlung, and spontaneous fission, electrons and alpha particles from radioactive decay, and neutrons from radioactive decay, spontaneous fission, and alpha-n reactions; decay heat production; and electro-atomic interaction data for bremsstrahlung production. These data are compiled from fundamental (ENDF, ENSDF, TENDL) and processed (ESTAR) sources for nearly 3700 nuclides. A rigorous evaluation procedure of internal consistency checks and comparisons to measurements and benchmarks, and code-to-code verifications is performed at the individual isotope level and using integral characteristics on a fuel assembly level (e.g., decay heat, radioactivity, neutron and gamma sources). Significant challenges are presented by the scope and complexity of the data processing, a dearth of relevant detailed measurements, and reliance on theoretical models for some data.
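Tabulated decay constants and decay energies of the kind described above feed integral quantities such as radioactivity and decay heat. A minimal sketch of that step follows; the two-nuclide inventory and all numbers are illustrative assumptions, not data from the SNF library, and the single-exponential model ignores the transmutation chains a real calculation must solve:

```python
import math

# Minimal sketch: total activity and decay heat of a nuclide inventory
# from per-nuclide decay constants and mean decay energies.

def decay_constant(half_life_s):
    return math.log(2.0) / half_life_s

def activity_and_heat(inventory, t):
    """inventory: list of (N0 atoms, half-life in s, mean decay energy in J).
    Returns (total activity in Bq, decay heat in W) at cooling time t."""
    activity = 0.0
    heat = 0.0
    for n0, t_half, e_decay in inventory:
        lam = decay_constant(t_half)
        n_t = n0 * math.exp(-lam * t)   # simple exponential decay, no chains
        a = lam * n_t                   # decays per second
        activity += a
        heat += a * e_decay             # energy release rate
    return activity, heat

# Hypothetical two-nuclide inventory (half-lives roughly Cs-137- and Co-60-like).
inv = [(1.0e20, 30.1 * 3.156e7, 1.0e-13),
       (1.0e18, 5.27 * 3.156e7, 4.0e-13)]
a0, q0 = activity_and_heat(inv, 0.0)
```

Production tools like SNF instead solve the full Bateman equations over thousands of nuclides and fold in the emission spectra listed above.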

  18. Experimental studies of Micro- and Nano-grained UO2: Grain Growth Behavior, Surface Morphology, and Fracture Toughness

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miao, Yinbin; Mo, Kun; Jamison, Laura M.

    This activity is supported by the US Nuclear Energy Advanced Modeling and Simulation (NEAMS) Fuels Product Line (FPL) and aims at providing experimental data for the validation of the mesoscale simulation code MARMOT. MARMOT is a mesoscale multiphysics code that predicts the coevolution of microstructure and properties within reactor fuel during its lifetime in the reactor. It is an important component of the Moose-Bison-Marmot (MBM) code suite that has been developed by Idaho National Laboratory (INL) to enable next-generation fuel performance modeling capability as part of the NEAMS Program FPL. In order to ensure the accuracy of the microstructure-based materials models being developed within the MARMOT code, extensive validation efforts must be carried out. In this report, we summarize the experimental efforts in FY16, including the following important experiments: (1) in-situ grain growth measurement of nano-grained UO2; (2) investigation of surface morphology in micro-grained UO2; (3) nano-indentation experiments on nano- and micro-grained UO2. The highlight of this year is that we have successfully demonstrated our capability to measure grain size development in situ while maintaining the stoichiometry of nano-grained UO2; the experiment is the first to use synchrotron X-ray diffraction to measure the grain growth behavior of UO2 in situ.

  17. Compression of computer generated phase-shifting hologram sequence using AVC and HEVC

    NASA Astrophysics Data System (ADS)

    Xing, Yafei; Pesquet-Popescu, Béatrice; Dufaux, Frederic

    2013-09-01

    With the capability of achieving twice the compression ratio of Advanced Video Coding (AVC) with similar reconstruction quality, High Efficiency Video Coding (HEVC) is expected to become the new leading technique of video coding. In order to reduce the storage and transmission burden of digital holograms, in this paper we propose to use HEVC for compressing phase-shifting digital hologram sequences (PSDHS). By simulating phase-shifting digital holography (PSDH) interferometry, interference patterns between illuminated three-dimensional (3D) virtual objects and the stepwise phase-shifted reference wave are generated as digital holograms. The hologram sequences are obtained by the movement of the virtual objects and compressed by AVC and HEVC. The experimental results show that AVC and HEVC can efficiently compress PSDHS, with HEVC giving better performance. Good compression rates and reconstruction quality can be obtained with bitrates above 15,000 kbps.
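The four-step phase-shifting recording and reconstruction underlying PSDHS generation can be sketched numerically. This is a toy scalar-wave example with assumed parameters, not the paper's actual rendering pipeline:

```python
import numpy as np

# Toy sketch of four-step phase-shifting digital holography (PSDH).
# Record four interference patterns I_k = |O + A*exp(i*phi_k)|^2 with
# reference phase steps phi_k = k*pi/2, then recover the complex object
# wave O from the four real-valued intensity frames.

rng = np.random.default_rng(0)
n = 64
obj = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))  # object wave at sensor
amp = 2.0                                                             # reference amplitude

frames = []
for k in range(4):
    ref = amp * np.exp(1j * k * np.pi / 2.0)   # stepwise phase-shifted reference
    frames.append(np.abs(obj + ref) ** 2)      # recorded intensity

i0, i1, i2, i3 = frames
# Standard four-step reconstruction:
# I0 - I2 = 4*A*Re(O) and I1 - I3 = 4*A*Im(O), hence:
recovered = ((i0 - i2) + 1j * (i1 - i3)) / (4.0 * amp)
```

It is these real-valued intensity frames, not the complex field, that form the video sequence handed to AVC or HEVC.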

  18. CASL Verification and Validation Plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mousseau, Vincent Andrew; Dinh, Nam

    2016-06-30

    This report documents the Consortium for Advanced Simulation of LWRs (CASL) verification and validation plan. The document builds upon input from CASL subject matter experts, most notably the CASL Challenge Problem Product Integrators, CASL Focus Area leaders, and CASL code development and assessment teams. This will be a living document that tracks CASL progress on verification and validation for both the CASL codes (including MPACT, CTF, BISON, MAMBA) and the CASL challenge problems (CIPS, PCI, DNB). The CASL codes and the CASL challenge problems are at differing levels of maturity with respect to validation and verification. The gap analysis will summarize additional work that needs to be done. Additional VVUQ work will be done as resources permit. This report is prepared for the Department of Energy’s (DOE’s) CASL program in support of milestone CASL.P13.02.

  19. Assessment of chemistry models for compressible reacting flows

    NASA Astrophysics Data System (ADS)

    Lapointe, Simon; Blanquart, Guillaume

    2014-11-01

    Recent technological advances in propulsion and power devices and renewed interest in the development of next generation supersonic and hypersonic vehicles have increased the need for detailed understanding of turbulence-combustion interactions in compressible reacting flows. In numerical simulations of such flows, accurate modeling of the fuel chemistry is a critical component of capturing the relevant physics. Various chemical models are currently being used in reacting flow simulations. However, the differences between these models and their impacts on the fluid dynamics in the context of compressible flows are not well understood. In the present work, a numerical code is developed to solve the fully coupled compressible conservation equations for reacting flows. The finite volume code is based on the theoretical and numerical framework developed by Oefelein (Prog. Aero. Sci. 42 (2006) 2-37) and employs an all-Mach-number formulation with dual time-stepping and preconditioning. The numerical approach is tested on turbulent premixed flames at high Karlovitz numbers. Different chemical models of varying complexity and computational cost are used and their effects are compared.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stimpson, Shane G.

    Activities to incorporate fuel performance capabilities into the Virtual Environment for Reactor Applications (VERA) are receiving increasing attention. The multiphysics emphasis is expanding as the neutronics (MPACT) and thermal-hydraulics (CTF) packages become more mature. Capturing the finer details of fuel phenomena (swelling, densification, relocation, gap closure, etc.) is the natural next step in the VERA Core Simulator (VERA-CS) development process, since these phenomena are currently not directly taken into account. While several codes could be used to accomplish this, the BISON fuel performance code being developed by Idaho National Laboratory (INL) is the focus of ongoing work in the Consortium for Advanced Simulation of Light Water Reactors (CASL). Built on INL’s MOOSE framework, BISON uses the finite element method for geometric representation and a Jacobian-free Newton-Krylov (JFNK) scheme to solve systems of partial differential equations for various fuel characteristic relationships. BISON has several modes of operation; for this work, it uses a 2D azimuthally symmetric (R-Z) smeared-pellet model.
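The JFNK scheme mentioned above can be sketched in a few lines: a Newton iteration in which the Krylov solver sees the Jacobian only through finite-difference Jacobian-vector products, never an assembled matrix. The toy residual below is a hypothetical stand-in, not BISON's fuel physics:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def residual(u):
    """Toy nonlinear residual F(u) = 0 with a root at u = (1, 2)."""
    return np.array([u[0] ** 2 + u[1] - 3.0,
                     u[0] + u[1] ** 2 - 5.0])

def jfnk_solve(u0, tol=1e-10, max_newton=20):
    u = np.asarray(u0, dtype=float)
    for _ in range(max_newton):
        f = residual(u)
        if np.linalg.norm(f) < tol:
            break
        eps = 1e-7  # production codes scale eps with |v| and |u|
        # Matrix-free Jacobian-vector product: J v ~ (F(u + eps*v) - F(u)) / eps
        def jv(v):
            return (residual(u + eps * v) - f) / eps
        jop = LinearOperator((u.size, u.size), matvec=jv, dtype=float)
        du, info = gmres(jop, -f)   # Krylov solve of J du = -F
        u = u + du
    return u

u = jfnk_solve([1.5, 1.5])
```

The appeal for multiphysics codes is that only residual evaluations are needed, so tightly coupled equation sets never require an explicit coupled Jacobian.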

  1. Assessment of MARMOT. A Mesoscale Fuel Performance Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tonks, M. R.; Schwen, D.; Zhang, Y.

    2015-04-01

    MARMOT is the mesoscale fuel performance code under development as part of the US DOE Nuclear Energy Advanced Modeling and Simulation Program. In this report, we provide a high-level summary of MARMOT, its capabilities, and its current state of validation. The purpose of MARMOT is to predict the coevolution of microstructure and material properties of nuclear fuel and cladding. It accomplishes this using the phase field method coupled to solid mechanics and heat conduction. MARMOT is based on the Multiphysics Object-Oriented Simulation Environment (MOOSE), and much of its basic capability in the areas of the phase field method, mechanics, and heat conduction comes directly from MOOSE modules. However, additional capability specific to fuel and cladding is available in MARMOT. While some validation of MARMOT has been completed in the areas of fission gas behavior and grain growth, much more validation needs to be conducted, and new mesoscale data need to be obtained in order to complete it.

  2. GYROKINETIC PARTICLE SIMULATION OF TURBULENT TRANSPORT IN BURNING PLASMAS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Horton, Claude Wendell

    2014-06-10

    The SciDAC project at the IFS advanced the state of high-performance computing for turbulent structures and turbulent transport. The team project with Prof. Zhihong Lin [PI] at the University of California, Irvine produced new understanding of turbulent electron transport. The simulations were performed at the Texas Advanced Computing Center (TACC) and the NERSC facility by Wendell Horton, Lee Leonard, and the IFS graduate students working in that group. The research included a validation of the electron turbulent transport code using data from a steady-state experiment at Columbia University, in which detailed probe measurements of the turbulence in steady state were taken over a wide range of temperature gradients for comparison with the simulation data. These results were published in a joint paper with Texas graduate student Dr. Xiangrong Fu using the work in his PhD dissertation: X.R. Fu, W. Horton, Y. Xiao, Z. Lin, A.K. Sen, and V. Sokolov, “Validation of electron temperature gradient turbulence in the Columbia Linear Machine,” Phys. Plasmas 19, 032303 (2012).

  3. Modeling NIF experimental designs with adaptive mesh refinement and Lagrangian hydrodynamics

    NASA Astrophysics Data System (ADS)

    Koniges, A. E.; Anderson, R. W.; Wang, P.; Gunney, B. T. N.; Becker, R.; Eder, D. C.; MacGowan, B. J.; Schneider, M. B.

    2006-06-01

    Incorporation of adaptive mesh refinement (AMR) into Lagrangian hydrodynamics algorithms allows for the creation of a highly powerful simulation tool effective for complex target designs with three-dimensional structure. We are developing an advanced modeling tool that includes AMR and traditional arbitrary Lagrangian-Eulerian (ALE) techniques. Our goal is the accurate prediction of vaporization, disintegration and fragmentation in National Ignition Facility (NIF) experimental target elements. Although our focus is on minimizing the generation of shrapnel in target designs and protecting the optics, the general techniques are applicable to modern advanced targets that include three-dimensional effects such as those associated with capsule fill tubes. Several essential computations in ordinary radiation hydrodynamics need to be redesigned in order to allow for AMR to work well with ALE, including algorithms associated with radiation transport. Additionally, for our goal of predicting fragmentation, we include elastic/plastic flow into our computations. We discuss the integration of these effects into a new ALE-AMR simulation code. Applications of this newly developed modeling tool as well as traditional ALE simulations in two and three dimensions are applied to NIF early-light target designs.

  4. Development and application of the dynamic system doctor to nuclear reactor probabilistic risk assessments.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kunsman, David Marvin; Aldemir, Tunc; Rutt, Benjamin

    2008-05-01

    This LDRD project has produced a tool that makes probabilistic risk assessments (PRAs) of nuclear reactors - analyses which are very resource intensive - more efficient. PRAs of nuclear reactors are being increasingly relied on by the United States Nuclear Regulatory Commission (U.S.N.R.C.) for licensing decisions for current and advanced reactors. Yet, PRAs are produced much as they were 20 years ago. The work here applied a modern systems analysis technique to the accident progression analysis portion of the PRA; the technique was a system-independent multi-task computer driver routine. Initially, the objective of the work was to fuse the accident progression event tree (APET) portion of a PRA to the dynamic system doctor (DSD) created by Ohio State University. Instead, during the initial efforts, it was found that the DSD could be linked directly to a detailed accident progression phenomenological simulation code - the type on which APET construction and analysis relies, albeit indirectly - and thereby directly create and analyze the APET. The expanded DSD computational architecture and infrastructure that was created during this effort is called ADAPT (Analysis of Dynamic Accident Progression Trees). ADAPT is a system software infrastructure that supports execution and analysis of multiple dynamic event-tree simulations on distributed environments. A simulator abstraction layer was developed, and a generic driver was implemented for executing simulators on a distributed environment. As a demonstration of the use of the methodological tool, ADAPT was applied to quantify the likelihood of competing accident progression pathways occurring for a particular accident scenario in a particular reactor type using MELCOR, an integrated severe accident analysis code developed at Sandia. (ADAPT was intentionally created with flexibility, however, and is not limited to interacting with only one code.
With minor coding changes to input files, ADAPT can be linked to other such codes.) The results of this demonstration indicate that the approach can significantly reduce the resources required for Level 2 PRAs. From the phenomenological viewpoint, ADAPT can also treat the associated epistemic and aleatory uncertainties. This methodology can also be used for analyses of other complex systems. Any complex system can be analyzed using ADAPT if the workings of that system can be displayed as an event tree, there is a computer code that simulates how those events could progress, and that simulator code has switches to turn on and off system events, phenomena, etc. Applying ADAPT to particular problems is not human-independent: while the human resources required for the creation and analysis of the accident progression are significantly decreased, knowledgeable analysts are still necessary to apply ADAPT successfully to a given project. This research and development effort has met its original goals and then exceeded them.

  5. Validation of a Detailed Scoring Checklist for Use During Advanced Cardiac Life Support Certification

    PubMed Central

    McEvoy, Matthew D.; Smalley, Jeremy C.; Nietert, Paul J.; Field, Larry C.; Furse, Cory M.; Blenko, John W.; Cobb, Benjamin G.; Walters, Jenna L.; Pendarvis, Allen; Dalal, Nishita S.; Schaefer, John J.

    2012-01-01

    Introduction Defining valid, reliable, defensible, and generalizable standards for the evaluation of learner performance is a key issue in assessing both baseline competence and mastery in medical education. However, prior to setting these standards of performance, the reliability of the scores yielded by a grading tool must be assessed. Accordingly, the purpose of this study was to assess the reliability of scores generated from a set of grading checklists used by non-expert raters during simulations of American Heart Association (AHA) MegaCodes. Methods The reliability of scores generated from a detailed set of checklists, when used by four non-expert raters, was tested by grading team leader performance in eight MegaCode scenarios. Videos of the scenarios were reviewed and rated by trained faculty facilitators and by a group of non-expert raters. The videos were reviewed “continuously” and “with pauses.” Two content experts served as the reference standard for grading, and four non-expert raters were used to test the reliability of the checklists. Results Our results demonstrate that non-expert raters are able to produce reliable grades when using the checklists under consideration, demonstrating excellent intra-rater reliability and agreement with a reference standard. The results also demonstrate that non-expert raters can be trained in the proper use of the checklist in a short amount of time, with no discernible learning curve thereafter. Finally, our results show that a single trained rater can achieve reliable scores of team leader performance during AHA MegaCodes when using our checklist in continuous mode, as measures of agreement in total scoring were very strong (Lin’s Concordance Correlation Coefficient = 0.96; Intraclass Correlation Coefficient = 0.97).
Discussion We have shown that our checklists can yield reliable scores, are appropriate for use by non-expert raters, and are able to be employed during continuous assessment of team leader performance during the review of a simulated MegaCode. This checklist may be more appropriate for use by Advanced Cardiac Life Support (ACLS) instructors during MegaCode assessments than current tools provided by the AHA. PMID:22863996
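The concordance statistic cited above can be computed directly from two raters' totals. A minimal sketch of Lin's concordance correlation coefficient follows; the rating vectors are made-up illustrative values, not data from the study:

```python
import numpy as np

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient between two raters' scores.
    CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2),
    using population (1/n) variances as in Lin (1989)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                # ddof=0 (population) variances
    cov = ((x - mx) * (y - my)).mean()
    return 2.0 * cov / (vx + vy + (mx - my) ** 2)

# Hypothetical checklist totals from a reference rater and a trained rater.
ref = [38, 41, 35, 44, 40, 37, 43, 39]
rater = [37, 42, 35, 43, 41, 36, 43, 38]
ccc = lins_ccc(ref, rater)
```

Unlike Pearson's r, the CCC penalizes systematic shifts between raters as well as poor correlation, which is why it suits agreement studies like this one.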

  6. Planning for Pre-Exascale Platform Environment (Fiscal Year 2015 Level 2 Milestone 5216)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Springmeyer, R.; Lang, M.; Noe, J.

    This Plan for ASC Pre-Exascale Platform Environments document constitutes the deliverable for the fiscal year 2015 (FY15) Advanced Simulation and Computing (ASC) Program Level 2 milestone Planning for Pre-Exascale Platform Environment. It acknowledges and quantifies challenges and recognized gaps for moving the ASC Program towards effective use of exascale platforms and recommends strategies to address these gaps. This document also presents an update to the concerns, strategies, and plans presented in the FY08 predecessor document that dealt with the upcoming (at the time) petascale high performance computing (HPC) platforms. With the looming push towards exascale systems, a review of the earlier document was appropriate in light of the myriad architectural choices currently under consideration. The ASC Program believes the platforms to be fielded in the 2020s will be fundamentally different systems that stress ASC’s ability to modify codes to take full advantage of new or unique features. In addition, the scale of components will increase the difficulty of maintaining an error-free system, thus driving new approaches to resilience and error detection/correction. The code revamps of the past, from serial- to vector-centric code to distributed memory to threaded implementations, will be revisited as codes adapt to a new message passing interface (MPI) plus “x” or more advanced and dynamic programming models based on architectural specifics. Development efforts are already underway in some cases, and more difficult or uncertain aspects of the new architectures will require research and analysis that may inform future directions for program choices. In addition, the potential diversity of system architectures may require parallel if not duplicative efforts to analyze and modify environments, codes, subsystems, libraries, debugging tools, and performance analysis techniques as well as exploring new monitoring methodologies.
It is difficult if not impossible to selectively eliminate some of these activities until more information is available through simulations of potential architectures, analysis of systems designs, and informed study of commodity technologies that will be the constituent parts of future platforms.

  7. Interrelating meteorite and asteroid spectra at UV-Vis-NIR wavelengths using novel multiple-scattering methods

    NASA Astrophysics Data System (ADS)

    Martikainen, Julia; Penttilä, Antti; Gritsevich, Maria; Muinonen, Karri

    2017-10-01

    Asteroids have remained mostly the same for the past 4.5 billion years, and provide us with information on the origin, evolution, and current state of the Solar System. Asteroids and meteorites can be linked by matching their respective reflectance spectra. This is difficult, because spectral features depend strongly on the surface properties, and meteorite surfaces are free of the regolith dust present on asteroids. Furthermore, asteroid surfaces experience space weathering, which affects their spectral features. We present a novel simulation framework for assessing the spectral properties of meteorites and asteroids and matching their reflectance spectra. The simulations are carried out by utilizing a light-scattering code that takes inhomogeneous waves into account and simulates light scattering by Gaussian-random-sphere particles large compared to the wavelength of the incident light. The code uses incoherent input and computes phase matrices by utilizing incoherent scattering matrices. Reflectance spectra are modeled by combining olivine, pyroxene, and iron, the most common materials that dominate the spectral features of asteroids and meteorites. Space weathering is taken into account by adding nanoiron into the modeled asteroid spectrum. The complex refractive indices needed for the simulations are obtained from existing databases, or derived using an optimization that utilizes our ray-optics code and the measured spectrum of the material. We demonstrate our approach by applying it to the reflectance spectrum of (4) Vesta and the reflectance spectrum of the Johnstown meteorite measured with the University of Helsinki integrating-sphere UV-Vis-NIR spectrometer. Acknowledgments: The research is funded by the ERC Advanced Grant No. 320773 (SAEMPL).
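The idea of composing a model spectrum from olivine, pyroxene, and iron endmembers can be illustrated with the simplest possible baseline, linear (areal) mixing of reflectance. Note that the study above uses a far more sophisticated multiple-scattering treatment; this sketch only shows the composition step, and all spectra and weights are made-up illustrative values:

```python
import numpy as np

# Simplest areal-mixing baseline: model reflectance as a weighted average
# of endmember spectra. Fake smooth spectra stand in for laboratory data.

wavelengths = np.linspace(0.4, 2.5, 8)            # micrometres, UV-Vis-NIR span
r_olivine = 0.35 + 0.05 * np.sin(wavelengths)     # fake endmember spectra
r_pyroxene = 0.30 + 0.04 * np.cos(wavelengths)
r_iron = np.full_like(wavelengths, 0.10)

weights = np.array([0.6, 0.3, 0.1])               # areal fractions, sum to 1
endmembers = np.vstack([r_olivine, r_pyroxene, r_iron])
model = weights @ endmembers                      # mixed reflectance spectrum
```

Because the weights form a convex combination, the mixed spectrum lies between the endmember spectra at every wavelength; intimate (grain-level) mixing and space weathering break this simple linearity, which is why multiple-scattering codes are needed.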

  8. The Advanced Software Development and Commercialization Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gallopoulos, E.; Canfield, T.R.; Minkoff, M.

    1990-09-01

    This is the first of a series of reports pertaining to progress in the Advanced Software Development and Commercialization Project, a joint collaborative effort between the Center for Supercomputing Research and Development of the University of Illinois and the Computing and Telecommunications Division of Argonne National Laboratory. The purpose of this work is to apply techniques of parallel computing that were pioneered by University of Illinois researchers to mature computational fluid dynamics (CFD) and structural dynamics (SD) computer codes developed at Argonne. The collaboration in this project will bring this unique combination of expertise to bear, for the first time, on industrially important problems. By so doing, it will expose the strengths and weaknesses of existing techniques for parallelizing programs and will identify those problems that need to be solved in order to enable widespread production use of parallel computers. Secondly, the increased efficiency of the CFD and SD codes themselves will enable the simulation of larger, more accurate engineering models that involve fluid and structural dynamics. In order to realize the above two goals, we are considering two production codes that have been developed at ANL and are widely used by both industry and universities. These are COMMIX and WHAMS-3D. The first is a computational fluid dynamics code that is used for both nuclear reactor design and safety and as a design tool for the casting industry. The second is a three-dimensional structural dynamics code used in nuclear reactor safety as well as crashworthiness studies. These codes are currently available for both sequential and vector computers only. Our main goal is to port and optimize these two codes on shared memory multiprocessors. In so doing, we shall establish a process that can be followed in optimizing other sequential or vector engineering codes for parallel processors.

  9. NASA. Lewis Research Center Advanced Modulation and Coding Project: Introduction and overview

    NASA Technical Reports Server (NTRS)

    Budinger, James M.

    1992-01-01

    The Advanced Modulation and Coding Project at LeRC is sponsored by the Office of Space Science and Applications, Communications Division, Code EC, at NASA Headquarters and conducted by the Digital Systems Technology Branch of the Space Electronics Division. Advanced Modulation and Coding is one of three focused technology development projects within the branch's overall Processing and Switching Program. The program consists of industry contracts for developing proof-of-concept (POC) and demonstration model hardware, university grants for analyzing advanced techniques, and in-house integration and testing of performance verification and systems evaluation. The Advanced Modulation and Coding Project is broken into five elements: (1) bandwidth- and power-efficient modems; (2) high-speed codecs; (3) digital modems; (4) multichannel demodulators; and (5) very high-data-rate modems. At least one contract and one grant were awarded for each element.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gehin, Jess C; Godfrey, Andrew T; Evans, Thomas M

    The Consortium for Advanced Simulation of Light Water Reactors (CASL) is developing a collection of methods and software products known as VERA, the Virtual Environment for Reactor Applications, including a core simulation capability called VERA-CS. A key milestone for this endeavor is to validate VERA against measurements from operating nuclear power reactors. The first step in validation against plant data is to determine the ability of VERA to accurately simulate the initial startup physics tests for Watts Bar Nuclear Power Station, Unit 1 (WBN1) cycle 1. VERA-CS calculations were performed with the Insilico code developed at ORNL using cross-section processing from the SCALE system and the transport capabilities within the Denovo transport code using the SPN method. The calculations were performed with ENDF/B-VII.0 cross sections in 252 groups (collapsed to 23 groups for the 3D transport solution). The key results of the comparison of calculations with measurements include initial criticality, critical configurations, control rod worth, differential boron worth, and the isothermal temperature reactivity coefficient (ITC). The VERA results for these parameters show good agreement with measurements, with the exception of the ITC, which requires additional investigation. Results are also compared to those obtained with Monte Carlo methods and a current industry core simulator.

  11. ICME — A Mere Coupling of Models or a Discipline of Its Own?

    NASA Astrophysics Data System (ADS)

    Bambach, Markus; Schmitz, Georg J.; Prahl, Ulrich

    Technically, ICME (integrated computational materials engineering) is an approach for solving advanced engineering problems related to the design of new materials and processes by combining individual materials and process models. To date, such models are mostly combined by manually transforming the output of one simulation into the input of a subsequent one, which is either performed at a different length scale or constitutes a subsequent step along the process chain. Is ICME thus just a synonym for the coupling of simulations? In fact, most ICME publications to date are examples of the joint application of selected models and software codes to a specific problem. However, from a systems point of view, the coupling of individual models and/or software codes across length scales and along material processing chains leads to highly complex meta-models, whose viability has to be ensured by joint efforts from science, industry, software developers and independent organizations. This paper identifies some developments that seem necessary to make future ICME simulations viable, sustainable and broadly accessible and accepted. The main conclusion is that ICME is not merely a multi-disciplinary subject but a discipline of its own, for which a generic structural framework has to be elaborated and established.

  12. Implementation of extended Lagrangian dynamics in GROMACS for polarizable simulations using the classical Drude oscillator model.

    PubMed

    Lemkul, Justin A; Roux, Benoît; van der Spoel, David; MacKerell, Alexander D

    2015-07-15

    Explicit treatment of electronic polarization in empirical force fields used for molecular dynamics simulations represents an important advancement in simulation methodology. A straightforward means of treating electronic polarization in these simulations is the inclusion of Drude oscillators, which are auxiliary, charge-carrying particles bonded to the cores of atoms in the system. The additional degrees of freedom make these simulations more computationally expensive relative to simulations using traditional fixed-charge (additive) force fields. Thus, efficient tools are needed for conducting these simulations. Here, we present the implementation of highly scalable algorithms in the GROMACS simulation package that allow for the simulation of polarizable systems using extended Lagrangian dynamics with a dual Nosé-Hoover thermostat as well as simulations using a full self-consistent field treatment of polarization. The performance of systems of varying size is evaluated, showing that the present code parallelizes efficiently and is the fastest implementation of the extended Lagrangian methods currently available for simulations using the Drude polarizable force field. © 2015 Wiley Periodicals, Inc.
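
    The GROMACS implementation itself uses extended Lagrangian dynamics with a dual thermostat; as a much simpler illustration of the underlying Drude picture, the sketch below (hypothetical parameter values, not GROMACS code) relaxes a single Drude particle to its self-consistent-field position, recovering the textbook polarizability alpha = q_d^2 / k.

```python
# Minimal sketch of the classical Drude-oscillator picture (not GROMACS code):
# an auxiliary charge q_d is tethered to an atomic core by a spring of
# stiffness k; in the self-consistent-field (SCF) limit its displacement
# minimizes the energy, giving an induced dipole mu = (q_d**2 / k) * E,
# i.e. a polarizability alpha = q_d**2 / k.

def drude_scf_displacement(q_d, k, e_field, n_iter=100, damping=0.5):
    """Relax one Drude particle in a uniform field by damped iteration.

    The fixed point of  d <- d + damping * (q_d*E - k*d) / k  is d = q_d*E/k.
    """
    d = 0.0
    for _ in range(n_iter):
        force = q_d * e_field - k * d      # field force minus spring restoring force
        d += damping * force / k           # damped relaxation step
    return d

def induced_dipole(q_d, k, e_field):
    d = drude_scf_displacement(q_d, k, e_field)
    return q_d * d                         # mu = q_d * d

# Illustrative values: q_d = -1.2, k = 500, E = 1.0 (arbitrary consistent units)
mu = induced_dipole(-1.2, 500.0, 1.0)
alpha = (-1.2) ** 2 / 500.0               # analytic polarizability q_d^2 / k
```

    In production codes the SCF iteration shown here is usually replaced by extended Lagrangian propagation of the Drude degrees of freedom, which avoids the per-step minimization entirely.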

  13. Approach to Integrate Global-Sun Models of Magnetic Flux Emergence and Transport for Space Weather Studies

    NASA Technical Reports Server (NTRS)

    Mansour, Nagi N.; Wray, Alan A.; Mehrotra, Piyush; Henney, Carl; Arge, Nick; Godinez, H.; Manchester, Ward; Koller, J.; Kosovichev, A.; Scherrer, P.

    2013-01-01

    The Sun lies at the center of space weather and is the source of its variability. The primary input to coronal and solar wind models is the activity of the magnetic field in the solar photosphere. Recent advancements in solar observations and numerical simulations provide a basis for developing physics-based models for the dynamics of the magnetic field from the deep convection zone of the Sun to the corona with the goal of providing robust near real-time boundary conditions at the base of space weather forecast models. The goal is to develop new strategic capabilities that enable characterization and prediction of the magnetic field structure and flow dynamics of the Sun by assimilating data from helioseismology and magnetic field observations into physics-based realistic magnetohydrodynamics (MHD) simulations. The integration of first-principle modeling of solar magnetism and flow dynamics with real-time observational data via advanced data assimilation methods is a new, transformative step in space weather research and prediction. This approach will substantially enhance an existing model of magnetic flux distribution and transport developed by the Air Force Research Lab. The development plan is to use the Space Weather Modeling Framework (SWMF) to develop Coupled Models for Emerging flux Simulations (CMES) that couples three existing models: (1) an MHD formulation with the anelastic approximation to simulate the deep convection zone (FSAM code), (2) an MHD formulation with full compressible Navier-Stokes equations and a detailed description of radiative transfer and thermodynamics to simulate near-surface convection and the photosphere (Stagger code), and (3) an MHD formulation with full, compressible Navier-Stokes equations and an approximate description of radiative transfer and heating to simulate the corona (Module in BATS-R-US). CMES will enable simulations of the emergence of magnetic structures from the deep convection zone to the corona. 
Finally, a plan will be summarized on the development of a Flux Emergence Prediction Tool (FEPT) in which helioseismology-derived data and vector magnetic maps are assimilated into CMES that couples the dynamics of magnetic flux from the deep interior to the corona.

  14. Introducing DeBRa: a detailed breast model for radiological studies

    NASA Astrophysics Data System (ADS)

    Ma, Andy K. W.; Gunn, Spencer; Darambara, Dimitra G.

    2009-07-01

    Currently, x-ray mammography is the method of choice in breast cancer screening programmes. As the mammography technology moves from 2D imaging modalities to 3D, conventional computational phantoms do not have sufficient detail to support the studies of these advanced imaging systems. Studies of these 3D imaging systems call for a realistic and sophisticated computational model of the breast. DeBRa (Detailed Breast model for Radiological studies) is the most advanced, detailed, 3D computational model of the breast developed recently for breast imaging studies. A DeBRa phantom can be constructed to model a compressed breast, as in film/screen, digital mammography and digital breast tomosynthesis studies, or a non-compressed breast as in positron emission mammography and breast CT studies. Both the cranial-caudal and mediolateral oblique views can be modelled. The anatomical details inside the phantom include the lactiferous duct system, the Cooper ligaments and the pectoral muscle. The fibroglandular tissues are also modelled realistically. In addition, abnormalities such as microcalcifications, irregular tumours and spiculated tumours are inserted into the phantom. Existing sophisticated breast models require specialized simulation codes. Unlike its predecessors, DeBRa has elemental compositions and densities incorporated into its voxels including those of the explicitly modelled anatomical structures and the noise-like fibroglandular tissues. The voxel dimensions are specified as needed by any study and the microcalcifications are embedded into the voxels so that the microcalcification sizes are not limited by the voxel dimensions. Therefore, DeBRa works with general-purpose Monte Carlo codes. Furthermore, general-purpose Monte Carlo codes allow different types of imaging modalities and detector characteristics to be simulated with ease. DeBRa is a versatile and multipurpose model specifically designed for both x-ray and γ-ray imaging studies.

  15. Coding tools investigation for next generation video coding based on HEVC

    NASA Astrophysics Data System (ADS)

    Chen, Jianle; Chen, Ying; Karczewicz, Marta; Li, Xiang; Liu, Hongbin; Zhang, Li; Zhao, Xin

    2015-09-01

    The new state-of-the-art video coding standard, H.265/HEVC, was finalized in 2013 and achieves roughly 50% bit rate savings compared to its predecessor, H.264/MPEG-4 AVC. This paper provides evidence that there is still potential for further coding efficiency improvements. A brief overview of HEVC is given first. Then, our improvements to each main module of HEVC are presented. For instance, the recursive quadtree block structure is extended to support larger coding units and transform units. The motion information prediction scheme is improved by advanced temporal motion vector prediction, which inherits the motion information of each small block within a large block from a temporal reference picture. Cross-component prediction with a linear prediction model improves intra prediction, and overlapped block motion compensation improves the efficiency of inter prediction. Furthermore, coding of both intra and inter prediction residuals is improved by an adaptive multiple transform technique. Finally, in addition to the deblocking filter and SAO, an adaptive loop filter is applied to further enhance the reconstructed picture quality. This paper describes the above-mentioned techniques in detail and evaluates their coding performance based on the common test conditions used during HEVC development. The simulation results show that significant performance improvement over the HEVC standard can be achieved, especially for high-resolution video material.
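
    The recursive quadtree partitioning mentioned above can be sketched as follows. This is an illustration only: a real HEVC encoder chooses splits by rate-distortion cost, whereas this toy version splits a block whenever its pixel variance exceeds a threshold.

```python
# Illustrative recursive quadtree partitioning in the spirit of HEVC coding
# units (the real encoder decides splits by rate-distortion optimization,
# not the simple variance threshold used here).

def variance(block):
    vals = [v for row in block for v in row]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def split_quadtree(block, x, y, size, min_size, threshold, leaves):
    """Recursively split a size x size block into four quadrants while its
    variance exceeds `threshold`; record leaves as (x, y, size) tuples."""
    if size <= min_size or variance(block) <= threshold:
        leaves.append((x, y, size))
        return
    half = size // 2
    for dy in (0, half):
        for dx in (0, half):
            sub = [row[dx:dx + half] for row in block[dy:dy + half]]
            split_quadtree(sub, x + dx, y + dy, half, min_size, threshold, leaves)

# A flat 8x8 block stays a single leaf; a block with a bright quadrant splits.
flat = [[10] * 8 for _ in range(8)]
leaves = []
split_quadtree(flat, 0, 0, 8, 4, 1.0, leaves)
```

    The same recursion, with rate-distortion cost in place of variance, is what lets HEVC adapt its coding-unit sizes to local picture content.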

  16. An assessment of multibody simulation tools for articulated spacecraft

    NASA Technical Reports Server (NTRS)

    Man, Guy K.; Sirlin, Samuel W.

    1989-01-01

    A survey of multibody simulation codes was conducted in the spring of 1988, to obtain an assessment of the state of the art in multibody simulation codes from the users of the codes. This survey covers the most often used articulated multibody simulation codes in the spacecraft and robotics community. There was no attempt to perform a complete survey of all available multibody codes in all disciplines. Furthermore, this is not an exhaustive evaluation of even robotics and spacecraft multibody simulation codes, as the survey was designed to capture feedback on issues most important to the users of simulation codes. We must keep in mind that the information received was limited and the technical background of the respondents varied greatly. Therefore, only the most often cited observations from the questionnaire are reported here. In this survey, it was found that no one code had both many users (reports) and no limitations. The first section is a report on multibody code applications. Following applications is a discussion of execution time, which is the most troublesome issue for flexible multibody codes. The representation of component flexible bodies, which affects both simulation setup time as well as execution time, is presented next. Following component data preparation, two sections address the accessibility or usability of a code, evaluated by considering its user interface design and examining the overall simulation integrated environment. A summary of user efforts at code verification is reported, before a tabular summary of the questionnaire responses. Finally, some conclusions are drawn.

  17. Advancing Underwater Acoustic Communication for Autonomous Distributed Networks via Sparse Channel Sensing, Coding, and Navigation Support

    DTIC Science & Technology

    2014-09-30

    Advancing underwater acoustic communication technologies for autonomous distributed underwater networks through innovative signal processing, coding, and navigation support. Reported topics include OFDM-modulated dynamic coded cooperation in underwater acoustic channels, as well as on-demand localization, networking, and testbed development.

  18. Computational chemistry

    NASA Technical Reports Server (NTRS)

    Arnold, J. O.

    1987-01-01

    With the advent of supercomputers and modern computational chemistry algorithms and codes, a powerful tool was created to help fill NASA's continuing need for information on the properties of matter in hostile or unusual environments. Computational resources provided under the National Aerodynamics Simulator (NAS) program were a cornerstone for recent advancements in this field. Properties of gases, materials, and their interactions can be determined from solutions of the governing equations. In the case of gases, for example, radiative transition probabilities per particle, bond-dissociation energies, and rates of simple chemical reactions can be determined computationally as reliably as from experiment. The data are proving to be quite valuable in providing inputs to real-gas flow simulation codes used to compute aerothermodynamic loads on NASA's aeroassist orbital transfer vehicles and a host of problems related to the National Aerospace Plane Program. Although more approximate, similar solutions can be obtained for ensembles of atoms simulating small particles of materials, with and without the presence of gases. Computational chemistry has applications in studying catalysis and the properties of polymers, all of interest to various NASA missions, including those previously mentioned. In addition to discussing these applications of computational chemistry within NASA, the governing equations and the need for supercomputers for their solution are outlined.

  19. The accurate particle tracer code

    DOE PAGES

    Wang, Yulei; Liu, Jian; Qin, Hong; ...

    2017-07-20

    The Accurate Particle Tracer (APT) code is designed for systematic large-scale applications of geometric algorithms for particle dynamical simulations. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and nonlinear problems. To provide a flexible and convenient I/O interface, the libraries of Lua and Hdf5 are used. Following a three-step procedure, users can efficiently extend the libraries of electromagnetic configurations, external non-electromagnetic forces, particle pushers, and initialization approaches by use of the extendible module. APT has been used in simulations of key physical problems, such as runaway electrons in tokamaks and energetic particles in the Van Allen belts. As an important realization, the APT-SW version has been successfully deployed on the world's fastest computer, the Sunway TaihuLight supercomputer, by supporting the master-slave architecture of Sunway many-core processors. Here, based on large-scale simulations of a runaway beam under ITER tokamak parameters, it is revealed that the magnetic ripple field can significantly disperse the pitch-angle distribution and at the same time improve the confinement of the energetic runaway beam.
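
    APT's own pusher library is not reproduced in the abstract; as a concrete example of the geometric-algorithm family it refers to, the Boris rotation below is the standard volume-preserving pusher for charged-particle motion (sketch only, with units normalized so that q/m = 1).

```python
# Sketch of the Boris particle pusher, a standard volume-preserving geometric
# integrator for charged-particle dynamics (illustrative, not APT's code;
# units are normalized so q/m = 1).

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def boris_step(x, v, e_field, b_field, dt):
    # Half electric kick
    v_minus = tuple(vi + 0.5 * dt * ei for vi, ei in zip(v, e_field))
    # Magnetic rotation (exactly norm-preserving)
    t = tuple(0.5 * dt * bi for bi in b_field)
    t2 = sum(ti * ti for ti in t)
    s = tuple(2.0 * ti / (1.0 + t2) for ti in t)
    v_prime = tuple(vm + c for vm, c in zip(v_minus, cross(v_minus, t)))
    v_plus = tuple(vm + c for vm, c in zip(v_minus, cross(v_prime, s)))
    # Second half electric kick, then position drift
    v_new = tuple(vp + 0.5 * dt * ei for vp, ei in zip(v_plus, e_field))
    x_new = tuple(xi + dt * vi for xi, vi in zip(x, v_new))
    return x_new, v_new

# Pure magnetic field: the rotation conserves kinetic energy step by step.
x, v = (0.0, 0.0, 0.0), (1.0, 0.0, 0.0)
for _ in range(1000):
    x, v = boris_step(x, v, (0.0, 0.0, 0.0), (0.0, 0.0, 1.0), 0.1)
speed2 = sum(vi * vi for vi in v)   # stays at 1.0 to machine precision
```

    This exact energy conservation in a magnetic field, with no secular drift over arbitrarily many steps, is the kind of long-term structural property that motivates geometric integrators for multi-scale problems like runaway-electron tracing.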

  20. User Guidelines and Best Practices for CASL VUQ Analysis Using Dakota

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Coleman, Kayla; Hooper, Russell W.

    2016-10-04

    In general, Dakota is the Consortium for Advanced Simulation of Light Water Reactors (CASL) delivery vehicle for verification, validation, and uncertainty quantification (VUQ) algorithms. It permits ready application of the VUQ methods described above to simulation codes by CASL researchers, code developers, and application engineers. More specifically, the CASL VUQ Strategy [33] prescribes the use of Predictive Capability Maturity Model (PCMM) assessments [37]. PCMM is an expert elicitation tool designed to characterize and communicate the completeness of the approaches used for computational model definition, verification, validation, and uncertainty quantification associated with an intended application. Exercising a computational model with the methods in Dakota will yield, in part, evidence for a PCMM assessment. Table 1.1 summarizes some key predictive-maturity-related activities (see details in [33]), with examples of how Dakota fits in. This manual offers CASL partners a guide to conducting Dakota-based VUQ studies for CASL problems. It motivates various classes of Dakota methods and includes examples of their use on representative application problems. On reading, a CASL analyst should understand why and how to apply Dakota to a simulation problem.
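
    The core workflow such tools automate is forward uncertainty propagation: sample the uncertain inputs, run the simulation as a black box, and compute output statistics. The sketch below illustrates only that loop; the model, distributions, and numbers are hypothetical, and Dakota itself offers far richer methods (LHS, polynomial chaos, reliability analysis, and more).

```python
# Generic sampling-based uncertainty propagation of the kind Dakota automates
# (illustrative only; the "simulation" here is a hypothetical closed form).
import random
import statistics

def simulation(k, q):
    """Stand-in black-box model: peak temperature rise for conductivity k
    and heat source q (made-up relationship for illustration)."""
    return q / (8.0 * k)

def propagate(n_samples, seed=0):
    rng = random.Random(seed)
    outputs = []
    for _ in range(n_samples):
        k = rng.gauss(10.0, 0.5)     # uncertain conductivity
        q = rng.gauss(100.0, 5.0)    # uncertain source strength
        outputs.append(simulation(k, q))
    return statistics.mean(outputs), statistics.stdev(outputs)

mean, stdev = propagate(2000)
# mean lands near 100 / (8 * 10) = 1.25 for these input distributions
```

    In a real study the `simulation` call would be replaced by a driver that writes an input deck, runs the code, and parses the output, which is exactly the black-box interface Dakota standardizes.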

  1. The accurate particle tracer code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yulei; Liu, Jian; Qin, Hong

    The Accurate Particle Tracer (APT) code is designed for systematic large-scale applications of geometric algorithms for particle dynamical simulations. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and nonlinear problems. To provide a flexible and convenient I/O interface, the libraries of Lua and Hdf5 are used. Following a three-step procedure, users can efficiently extend the libraries of electromagnetic configurations, external non-electromagnetic forces, particle pushers, and initialization approaches by use of the extendible module. APT has been used in simulations of key physical problems, such as runaway electrons in tokamaks and energetic particles in the Van Allen belts. As an important realization, the APT-SW version has been successfully deployed on the world's fastest computer, the Sunway TaihuLight supercomputer, by supporting the master-slave architecture of Sunway many-core processors. Here, based on large-scale simulations of a runaway beam under ITER tokamak parameters, it is revealed that the magnetic ripple field can significantly disperse the pitch-angle distribution and at the same time improve the confinement of the energetic runaway beam.

  2. Edge Simulation Laboratory Progress and Plans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cohen, R

    The Edge Simulation Laboratory (ESL) is a project to develop a gyrokinetic code for MFE edge plasmas based on continuum (Eulerian) techniques. ESL is a base-program activity of OFES, with an allied algorithm research activity funded by the OASCR base math program. ESL OFES funds directly support about 0.8 FTE of career staff at LLNL, a postdoc and a small fraction of an FTE at GA, and a graduate student at UCSD. In addition, the allied OASCR program funds about 1/2 FTE each in the computations directorates at LBNL and LLNL. OFES ESL funding for LLNL and UCSD began in fall 2005, while funding for GA and the math team began about a year ago. ESL's continuum approach is a complement to the PIC-based methods of the CPES Project, and was selected (1) because of concerns about noise issues associated with PIC in the high-density-contrast environment of the edge pedestal, (2) to be able to exploit advanced numerical methods developed for fluid codes, and (3) to build upon the successes of core continuum gyrokinetic codes such as GYRO, GS2 and GENE. The ESL project presently has three components: TEMPEST, a full-f, full-geometry (single-null divertor, or arbitrary-shape closed flux surfaces) code in E, {mu} (energy, magnetic-moment) coordinates; EGK, a simple-geometry rapid-prototype code; and the math component, which is developing and implementing algorithms for a next-generation code. Progress would be accelerated if we could find funding for a fourth, computer science, component, which would develop software infrastructure, provide user support, and address needs for data handling and analysis. We summarize the status and plans for the three funded activities.

  3. Numerical Zooming Between a NPSS Engine System Simulation and a One-Dimensional High Compressor Analysis Code

    NASA Technical Reports Server (NTRS)

    Follen, Gregory; auBuchon, M.

    2000-01-01

    Within NASA's High Performance Computing and Communication (HPCC) program, NASA Glenn Research Center is developing an environment for the analysis/design of aircraft engines called the Numerical Propulsion System Simulation (NPSS). NPSS focuses on the integration of multiple disciplines such as aerodynamics, structures, and heat transfer, along with the concept of numerical zooming between zero-dimensional and one-, two-, and three-dimensional component engine codes. In addition, the NPSS is refining the computing and communication technologies necessary to capture complex physical processes in a timely and cost-effective manner. The vision for NPSS is to create a "numerical test cell" enabling full engine simulations overnight on cost-effective computing platforms. Of the different technology areas that contribute to the development of the NPSS Environment, the subject of this paper is numerical zooming between a NPSS engine simulation and higher fidelity representations of the engine components (fan, compressor, burner, turbines, etc.). What follows is a description of successfully zooming one-dimensional (row-by-row) high-pressure compressor analysis results back to a zero-dimensional NPSS engine simulation and a discussion of the results illustrated using an advanced data visualization tool. This type of high fidelity system-level analysis, made possible by the zooming capability of the NPSS, will greatly improve the capability of the engine system simulation and increase the level of virtual testing conducted prior to committing the design to hardware.
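
    The zooming idea can be sketched in a few lines: a low-fidelity cycle model carries a component's overall figure (here a compressor pressure ratio), and a higher-fidelity row-by-row calculation recomputes that figure, which is then fed back into the cycle. Everything below is hypothetical (toy models and numbers), not the NPSS API.

```python
# Illustrative sketch of "numerical zooming": a 0-D engine cycle assumes an
# overall compressor pressure ratio; a (fake) 1-D row-by-row model recomputes
# it, and the higher-fidelity value is zoomed back into the cycle calculation.
# All functions and numbers are hypothetical stand-ins.

def zero_d_cycle(compressor_pr):
    """Toy 0-D cycle: a thrust-like figure of merit from compressor PR."""
    return 1000.0 * compressor_pr ** 0.3

def one_d_compressor(row_pressure_ratios):
    """Row-by-row model: overall PR is the product of the per-row ratios."""
    pr = 1.0
    for r in row_pressure_ratios:
        pr *= r
    return pr

# Baseline 0-D assumption vs. the zoomed value from the 1-D analysis
baseline_pr = 20.0
zoomed_pr = one_d_compressor([1.35] * 10)     # ten rows at PR 1.35 each
thrust_baseline = zero_d_cycle(baseline_pr)
thrust_zoomed = zero_d_cycle(zoomed_pr)       # higher-fidelity feedback in the loop
```

    The value of the approach is that the system-level answer now reflects component physics the 0-D map alone cannot capture, without running the whole engine at 1-D fidelity.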

  4. ORAC: a molecular dynamics simulation program to explore free energy surfaces in biomolecular systems at the atomistic level.

    PubMed

    Marsili, Simone; Signorini, Giorgio Federico; Chelli, Riccardo; Marchi, Massimo; Procacci, Piero

    2010-04-15

    We present the new release of the ORAC engine (Procacci et al., Comput Chem 1997, 18, 1834), a FORTRAN suite to simulate complex biosystems at the atomistic level. The previous release of the ORAC code included multiple time-step integration, the smooth particle mesh Ewald method, and constant-pressure and constant-temperature simulations. The present release has been supplemented with the most advanced techniques for enhanced sampling in atomistic systems, including replica exchange with solute tempering, metadynamics and steered molecular dynamics. All these computational technologies have been implemented for parallel architectures using the standard MPI communication protocol. ORAC is an open-source program distributed free of charge under the GNU General Public License (GPL) at http://www.chim.unifi.it/orac. © 2009 Wiley Periodicals, Inc.
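
    The heart of the replica-exchange family of methods mentioned above is the Metropolis criterion for swapping configurations between neighboring temperatures, which preserves detailed balance across the temperature ladder. The sketch below shows just that acceptance rule (illustrative, not ORAC's implementation; energies and temperatures are made-up, with k_B = 1).

```python
# Replica-exchange swap criterion (sketch):
#   p = min(1, exp[(beta_i - beta_j) * (E_i - E_j)]),  beta = 1 / (k_B T),
# which leaves the product of Boltzmann distributions invariant.
import math
import random

def swap_probability(e_i, e_j, t_i, t_j, k_b=1.0):
    beta_i, beta_j = 1.0 / (k_b * t_i), 1.0 / (k_b * t_j)
    return min(1.0, math.exp((beta_i - beta_j) * (e_i - e_j)))

def attempt_swap(replicas, i, j, rng):
    """replicas: list of (energy, temperature) pairs; temperatures stay with
    their slots while configurations (represented by energies) may swap."""
    p = swap_probability(replicas[i][0], replicas[j][0],
                         replicas[i][1], replicas[j][1])
    if rng.random() < p:
        (e_i, t_i), (e_j, t_j) = replicas[i], replicas[j]
        replicas[i], replicas[j] = (e_j, t_i), (e_i, t_j)
        return True
    return False

# A high-energy configuration at low T next to a low-energy one at high T
# is always swapped (exponent > 0, so p = 1):
p = swap_probability(e_i=-50.0, e_j=-120.0, t_i=300.0, t_j=400.0)
```

    Such swaps let configurations trapped in local minima at low temperature escape by diffusing up the ladder, which is why replica exchange accelerates sampling of rugged free energy surfaces.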

  5. Code Samples Used for Complexity and Control

    NASA Astrophysics Data System (ADS)

    Ivancevic, Vladimir G.; Reid, Darryn J.

    2015-11-01

    The following sections are included: * MathematicaⓇ Code * Generic Chaotic Simulator * Vector Differential Operators * NLS Explorer * 2C++ Code * C++ Lambda Functions for Real Calculus * Accelerometer Data Processor * Simple Predictor-Corrector Integrator * Solving the BVP with the Shooting Method * Linear Hyperbolic PDE Solver * Linear Elliptic PDE Solver * Method of Lines for a Set of the NLS Equations * C# Code * Iterative Equation Solver * Simulated Annealing: A Function Minimum * Simple Nonlinear Dynamics * Nonlinear Pendulum Simulator * Lagrangian Dynamics Simulator * Complex-Valued Crowd Attractor Dynamics * Freeform Fortran Code * Lorenz Attractor Simulator * Complex Lorenz Attractor * Simple SGE Soliton * Complex Signal Presentation * Gaussian Wave Packet * Hermitian Matrices * Euclidean L2-Norm * Vector/Matrix Operations * Plain C-Code: Levenberg-Marquardt Optimizer * Free Basic Code: 2D Crowd Dynamics with 3000 Agents

  6. A pedagogical walkthrough of computational modeling and simulation of Wnt signaling pathway using static causal models in MATLAB.

    PubMed

    Sinha, Shriprakash

    2016-12-01

    Simulation study in systems biology involving computational experiments dealing with Wnt signaling pathways abound in literature but often lack a pedagogical perspective that might ease the understanding of beginner students and researchers in transition, who intend to work on the modeling of the pathway. This paucity might happen due to restrictive business policies which enforce an unwanted embargo on the sharing of important scientific knowledge. A tutorial introduction to computational modeling of Wnt signaling pathway in a human colorectal cancer dataset using static Bayesian network models is provided. The walkthrough might aid biologists/informaticians in understanding the design of computational experiments that is interleaved with exposition of the Matlab code and causal models from Bayesian network toolbox. The manuscript elucidates the coding contents of the advance article by Sinha (Integr. Biol. 6:1034-1048, 2014) and takes the reader in a step-by-step process of how (a) the collection and the transformation of the available biological information from literature is done, (b) the integration of the heterogeneous data and prior biological knowledge in the network is achieved, (c) the simulation study is designed, (d) the hypothesis regarding a biological phenomena is transformed into computational framework, and (e) results and inferences drawn using d-connectivity/separability are reported. The manuscript finally ends with a programming assignment to help the readers get hands-on experience of a perturbation project. Description of Matlab files is made available under GNU GPL v3 license at the Google code project on https://code.google.com/p/static-bn-for-wnt-signaling-pathway and https://sites.google.com/site/shriprakashsinha/shriprakashsinha/projects/static-bn-for-wnt-signaling-pathway. Latest updates can be found in the latter website.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Z.; Zweibaum, N.; Shao, M.

    The University of California, Berkeley (UCB) is performing thermal hydraulics safety analysis to develop the technical basis for design and licensing of fluoride-salt-cooled, high-temperature reactors (FHRs). FHR designs investigated by UCB use natural circulation for emergency, passive decay heat removal when normal decay heat removal systems fail. The FHR advanced natural circulation analysis (FANCY) code has been developed for assessment of passive decay heat removal capability and safety analysis of these innovative system designs. The FANCY code uses a one-dimensional, semi-implicit scheme to solve for pressure-linked mass, momentum and energy conservation equations. Graph theory is used to automatically generate a staggered mesh for complicated pipe network systems. Heat structure models have been implemented for three types of boundary conditions (Dirichlet, Neumann and Robin boundary conditions). Heat structures can be composed of several layers of different materials, and are used for simulation of heat structure temperature distribution and heat transfer rate. Control models are used to simulate sequences of events or trips of safety systems. A proportional-integral controller is also used to automatically make thermal hydraulic systems reach desired steady state conditions. A point kinetics model is used to model reactor kinetics behavior with temperature reactivity feedback. The underlying large sparse linear systems in these models are efficiently solved by using direct and iterative solvers provided by the SuperLU code on high performance machines. Input interfaces are designed to increase the flexibility of simulation for complicated thermal hydraulic systems. In conclusion, this paper mainly focuses on the methodology used to develop the FANCY code, and safety analysis of the Mark 1 pebble-bed FHR under development at UCB is performed.
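
    The point kinetics model with temperature reactivity feedback mentioned above can be sketched with one delayed-neutron group. The sketch below is illustrative only (parameter values and the explicit Euler scheme are assumptions, not taken from FANCY); at zero reactivity and frozen temperature the steady state is preserved exactly.

```python
# Sketch of one-delayed-group point kinetics with linear temperature
# reactivity feedback (illustrative parameters, not FANCY's):
#   dn/dt = (rho - beta)/Lambda * n + lambda * c
#   dc/dt = beta/Lambda * n - lambda * c
#   rho   = rho_ext + alpha_T * (T - T0)

def point_kinetics(n0, steps, dt, rho_ext=0.0,
                   beta=0.0065, lam=0.08, gen_time=1e-4,
                   alpha_t=-1e-5, t0=600.0, heat_coeff=0.0):
    """Explicit-Euler integration of power n and precursor concentration c.

    With heat_coeff = 0 the temperature is frozen at t0, so zero external
    reactivity gives an exact steady state.
    """
    n = n0
    c = beta * n0 / (gen_time * lam)       # steady-state precursor level
    temp = t0
    for _ in range(steps):
        rho = rho_ext + alpha_t * (temp - t0)
        dn = ((rho - beta) / gen_time * n + lam * c) * dt
        dc = (beta / gen_time * n - lam * c) * dt
        temp += heat_coeff * n * dt        # crude lumped heating
        n += dn
        c += dc
    return n

# Zero reactivity, frozen temperature: power holds at its initial value.
n_final = point_kinetics(1.0, steps=10000, dt=1e-5)
```

    A negative alpha_t, as in the sketch, is what makes the temperature feedback self-stabilizing: a power excursion heats the fuel, inserts negative reactivity, and turns the transient over.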

  8. Benchmark Simulation of Natural Circulation Cooling System with Salt Working Fluid Using SAM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahmed, K. K.; Scarlat, R. O.; Hu, R.

    Liquid salt-cooled reactors, such as the Fluoride Salt-Cooled High-Temperature Reactor (FHR), offer passive decay heat removal through natural circulation using Direct Reactor Auxiliary Cooling System (DRACS) loops. The behavior of such systems should be well understood through performance analysis. The advanced system thermal-hydraulics tool System Analysis Module (SAM) from Argonne National Laboratory has been selected for this purpose. The work presented here is part of a larger study in which SAM modeling capabilities are being enhanced for the system analyses of FHRs or Molten Salt Reactors (MSRs). Liquid salt thermophysical properties have been implemented in SAM, as well as properties of Dowtherm A, which is used as a simulant fluid for scaled experiments, for future code validation studies. Additional physics modules to represent phenomena specific to salt-cooled reactors, such as freezing of coolant, are being implemented in SAM. This study presents a useful first benchmark for the applicability of SAM to liquid salt-cooled reactors: it provides steady-state and transient comparisons for a salt reactor system. A RELAP5-3D model of the Mark-1 Pebble-Bed FHR (Mk1 PB-FHR), and in particular its DRACS loop for emergency heat removal, provides steady state and transient results for flow rates and temperatures in the system that are used here for code-to-code comparison with SAM. The transient studied is a loss of forced circulation with SCRAM event. To the knowledge of the authors, this is the first application of SAM to the FHR or any other molten salt reactor. While building these models in SAM, any gaps in the code's capability to simulate such systems are identified and addressed immediately, or listed as future improvements to the code.

  9. Development of an object-oriented finite element program: application to metal-forming and impact simulations

    NASA Astrophysics Data System (ADS)

    Pantale, O.; Caperaa, S.; Rakotomalala, R.

    2004-07-01

    During the last 50 years, the development of better numerical methods and more powerful computers has been a major enterprise for the scientific community. Over the same period, the finite element method has become a widely used tool for researchers and engineers. Recent advances in computational software have made it possible to solve more physical and complex problems such as coupled problems, nonlinearities, and high-strain and high-strain-rate problems. In this field, accurate analysis of large-deformation inelastic problems occurring in metal-forming or impact simulations is extremely important because of the large amount of plastic flow involved. This paper presents the object-oriented implementation, in C++, of an explicit finite element code called DynELA. Object-oriented programming (OOP) leads to better-structured codes for the finite element method and facilitates their development, maintainability and expandability. The most significant advantage of OOP is in the modeling of complex physical systems such as deformation processing, where the overall complex problem is partitioned into individual sub-problems based on physical, mathematical or geometric reasoning. We first focus on the advantages of OOP for the development of scientific programs. Specific aspects of OOP, such as the inheritance mechanism, operator overloading and the use of template classes, are detailed. We then present the approach used for the development of our finite element code through the presentation of the kinematics, conservation and constitutive laws and their respective implementation in C++. Finally, the efficiency and accuracy of our finite element program are investigated using a number of benchmark tests relative to metal forming and impact simulations.
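
    The inheritance pattern the abstract describes, with interchangeable constitutive laws behind a common interface, can be sketched briefly. The sketch is in Python rather than the paper's C++, and the class names, the 1-D laws, and all numbers are hypothetical; the point is only that the "element loop" is written once against the base class.

```python
# Minimal sketch of the OOP pattern described in the abstract (not DynELA's
# code): constitutive laws behind a common interface, so the assembly loop
# never needs to know which material law it is calling.

class ConstitutiveLaw:
    """Base class: maps a strain increment to a stress increment (1-D)."""
    def stress_increment(self, d_strain):
        raise NotImplementedError

class LinearElastic(ConstitutiveLaw):
    def __init__(self, youngs_modulus):
        self.e = youngs_modulus
    def stress_increment(self, d_strain):
        return self.e * d_strain

class PerfectlyPlastic(ConstitutiveLaw):
    """Elastic up to a yield stress, then no further stress increase (1-D)."""
    def __init__(self, youngs_modulus, yield_stress):
        self.e, self.sigma_y, self.sigma = youngs_modulus, yield_stress, 0.0
    def stress_increment(self, d_strain):
        trial = self.sigma + self.e * d_strain
        new = min(trial, self.sigma_y)      # return-map to the yield surface
        d_sigma, self.sigma = new - self.sigma, new
        return d_sigma

def integrate(law, strain_steps):
    """'Element loop' written once against the base-class interface."""
    return sum(law.stress_increment(ds) for ds in strain_steps)

stress_elastic = integrate(LinearElastic(200.0), [0.001] * 5)
stress_plastic = integrate(PerfectlyPlastic(200.0, 0.5), [0.001] * 5)
```

    In C++ the same design would use a virtual `stress_increment` method, with templates for, e.g., dimension-generic tensor storage, which is the combination of inheritance and template classes the paper discusses.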

  10. Center for Technology for Advanced Scientific Component Software (TASCS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kostadin, Damevski

    A resounding success of the Scientific Discovery through Advanced Computing (SciDAC) program is that high-performance computational science is now universally recognized as a critical aspect of scientific discovery [71], complementing both theoretical and experimental research. As scientific communities prepare to exploit the unprecedented computing capabilities of emerging leadership-class machines for multi-model simulations at the extreme scale [72], it is more important than ever to address the technical and social challenges of geographically distributed teams that combine expertise in domain science, applied mathematics, and computer science to build robust and flexible codes that can incorporate changes over time. The Center for Technology for Advanced Scientific Component Software (TASCS) tackles these issues by exploiting component-based software development to facilitate collaborative high-performance scientific computing.

  11. Development and Validation of a Fast, Accurate and Cost-Effective Aeroservoelastic Method on Advanced Parallel Computing Systems

    NASA Technical Reports Server (NTRS)

    Goodwin, Sabine A.; Raj, P.

    1999-01-01

    Progress to date towards the development and validation of a fast, accurate and cost-effective aeroelastic method for advanced parallel computing platforms such as the IBM SP2 and the SGI Origin 2000 is presented in this paper. The ENSAERO code, developed at the NASA-Ames Research Center has been selected for this effort. The code allows for the computation of aeroelastic responses by simultaneously integrating the Euler or Navier-Stokes equations and the modal structural equations of motion. To assess the computational performance and accuracy of the ENSAERO code, this paper reports the results of the Navier-Stokes simulations of the transonic flow over a flexible aeroelastic wing body configuration. In addition, a forced harmonic oscillation analysis in the frequency domain and an analysis in the time domain are done on a wing undergoing a rigid pitch and plunge motion. Finally, to demonstrate the ENSAERO flutter-analysis capability, aeroelastic Euler and Navier-Stokes computations on an L-1011 wind tunnel model including pylon, nacelle and empennage are underway. All computational solutions are compared with experimental data to assess the level of accuracy of ENSAERO. As the computations described above are performed, a meticulous log of computational performance in terms of wall clock time, execution speed, memory and disk storage is kept. Code scalability is also demonstrated by studying the impact of varying the number of processors on computational performance on the IBM SP2 and the Origin 2000 systems.

  12. A new way of setting the phases for cosmological multiscale Gaussian initial conditions

    NASA Astrophysics Data System (ADS)

    Jenkins, Adrian

    2013-09-01

    We describe how to define an extremely large discrete realization of a Gaussian white noise field that has a hierarchical structure and the property that the value of any part of the field can be computed quickly. Tiny subregions of such a field can be used to set the phase information for Gaussian initial conditions for individual cosmological simulations of structure formation. This approach has several attractive features: (i) the hierarchical structure based on an octree is particularly well suited for generating follow-up resimulation or zoom initial conditions; (ii) the phases are defined for all relevant physical scales in advance so that resimulation initial conditions are, by construction, consistent both with their parent simulation and with each other; (iii) the field can easily be made public by releasing a code to compute it - once public, phase information can be shared or published by specifying a spatial location within the realization. In this paper, we describe the principles behind creating such realizations. We define an example called Panphasia and in a companion paper by Jenkins and Booth (2013) make public a code to compute it. With 50 octree levels Panphasia spans a factor of more than 10^15 in linear scale - a range that significantly exceeds the ratio of the current Hubble radius to the putative cold dark matter free-streaming scale. We show how to modify a code used for making cosmological and resimulation initial conditions so that it can take the phase information from Panphasia and, using this code, we demonstrate that it is possible to make good quality resimulation initial conditions. We define a convention for publishing phase information from Panphasia and publish the initial phases for several of the Virgo Consortium's most recent cosmological simulations including the 303 billion particle MXXL simulation.
Finally, for reference, we give the locations and properties of several dark matter haloes that can be resimulated within these volumes.
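
    A toy analogue of the hierarchical, compute-on-demand property described above (not the actual Panphasia algorithm) can be sketched by deriving each octree cell's value deterministically from its coordinates:

```python
# Minimal sketch (not the Panphasia construction itself): any octree
# cell's pseudo-Gaussian value is computable on demand from a
# deterministic hash of its (level, i, j, k) coordinates, so phases can
# be shared by publishing coordinates alone. All choices here are
# illustrative.
import hashlib
import struct

def cell_value(level, i, j, k, seed=0):
    """Deterministic pseudo-Gaussian draw for one octree cell."""
    key = struct.pack("<5q", seed, level, i, j, k)
    digest = hashlib.sha256(key).digest()
    # Sum of 12 uniforms minus 6 approximates a unit Gaussian draw
    u = [int.from_bytes(digest[n:n + 2], "little") / 65535.0
         for n in range(0, 24, 2)]
    return sum(u) - 6.0

# The same coordinates always reproduce the same value, so a
# sub-volume's phase information is fully specified by its location.
v1 = cell_value(5, 3, 1, 4)
v2 = cell_value(5, 3, 1, 4)
```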

  13. Modeling the Blast Load Simulator Airblast Environment using First Principles Codes. Report 1, Blast Load Simulator Environment

    DTIC Science & Technology

    2016-11-01

    ERDC/GSL TR-16-31. Modeling the Blast Load Simulator Airblast Environment Using First Principles Codes. Report 1, Blast Load Simulator Environment. Gregory C. Bessette, James L. O’Daniel... evaluate several first principles codes (FPCs) for modeling airblast environments typical of those encountered in the BLS. The FPCs considered were

  14. NPSS Space Team

    NASA Technical Reports Server (NTRS)

    Lavelle, Tom

    2003-01-01

    The objective is to increase the usability of the current NPSS code/architecture by incorporating an advanced space transportation propulsion system capability into the existing NPSS code, to begin defining advanced capabilities for NPSS, and to provide an enhancement for the NPSS code/architecture.

  15. Integrated Predictive Tools for Customizing Microstructure and Material Properties of Additively Manufactured Aerospace Components

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Radhakrishnan, Balasubramaniam; Fattebert, Jean-Luc; Gorti, Sarma B.

    Additive Manufacturing (AM) refers to a process by which digital three-dimensional (3-D) design data is used to build up a component by depositing material layer by layer. United Technologies Corporation (UTC) is currently involved in the fabrication and certification of several AM aerospace structural components made from aerospace materials. This is accomplished by using optimized process parameters determined through numerous design-of-experiments (DOE)-based studies. Certification of these components is broadly recognized as a significant challenge, with long lead times, very expensive new product development cycles and very high energy consumption. Because of these challenges, United Technologies Research Center (UTRC), together with UTC business units, has been developing and validating an advanced physics-based process model. The specific goal is to develop a physics-based framework of an AM process and reliably predict fatigue properties of built-up structures based on detailed solidification microstructures. Microstructures are predicted using process control parameters including energy source power, scan velocity, deposition pattern, and powder properties. The multi-scale multi-physics model requires solution and coupling of the governing physics to allow prediction of the thermal field and enable solution at the microstructural scale. The state-of-the-art approach to solving these problems requires a huge computational framework, and this kind of resource is only available within academia and national laboratories. The project utilized the parallel phase-field codes at Oak Ridge National Laboratory (ORNL) and Lawrence Livermore National Laboratory (LLNL), along with the high-performance computing (HPC) capabilities existing at the two labs, to demonstrate the simulation of multiple dendrite growth in three dimensions (3-D). 
The LLNL code AMPE was used to implement the UTRC phase field model that was previously developed for a model binary alloy, and the simulation results were compared against the UTRC simulation results, followed by extension of the UTRC model to simulate multiple dendrite growth in 3-D. The ORNL MEUMAPPS code was used to simulate dendritic growth in a model ternary alloy with the same equilibrium solidification range as the Ni-base alloy 718, using realistic model parameters including thermodynamic integration with a Calphad-based model for the ternary alloy. Implementation of the UTRC model in AMPE encountered several numerical and parametric issues; once these were resolved, good agreement between the simulation results of the two codes was demonstrated for two-dimensional (2-D) dendrites. 3-D dendrite growth was then demonstrated with the AMPE code using nondimensional parameters obtained in 2-D simulations. Multiple dendrite growth in 2-D and 3-D was demonstrated using ORNL's MEUMAPPS code with simple thermal boundary conditions. MEUMAPPS was then modified to incorporate the complex, time-dependent thermal boundary conditions obtained from UTRC's thermal modeling of single-track AM experiments to drive the phase field simulations. The results were in good agreement with UTRC's experimental measurements.
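
    As a minimal illustration of the explicit phase-field time stepping such codes perform, here is a toy 1D Allen-Cahn relaxation; this is not the UTRC or MEUMAPPS model, and all parameters are illustrative:

```python
# Toy 1D explicit phase-field (Allen-Cahn type) step: phi = 0 marks
# liquid, phi = 1 marks solid, and the double-well potential keeps the
# field near those two phases. Parameters are illustrative only.
def allen_cahn_step(phi, dx, dt, eps=1.0, W=1.0):
    """One explicit Euler step of d(phi)/dt = eps^2*lap(phi) - f'(phi)."""
    n = len(phi)
    new = phi[:]
    for i in range(n):
        # Periodic second-difference Laplacian
        lap = (phi[(i - 1) % n] - 2 * phi[i] + phi[(i + 1) % n]) / dx**2
        # f = W*phi^2*(1-phi)^2  =>  f' = 2*W*phi*(1-phi)*(1-2*phi)
        dfdphi = 2 * W * phi[i] * (1 - phi[i]) * (1 - 2 * phi[i])
        new[i] = phi[i] + dt * (eps**2 * lap - dfdphi)
    return new

# A sharp step profile relaxes toward a smooth solid-liquid interface
phi = [0.0] * 8 + [1.0] * 8
for _ in range(50):
    phi = allen_cahn_step(phi, dx=1.0, dt=0.1)
```

    Production solidification codes add temperature coupling, solute transport and anisotropy on top of this basic relaxation step.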

  16. A 3D-CFD code for accurate prediction of fluid flows and fluid forces in seals

    NASA Technical Reports Server (NTRS)

    Athavale, M. M.; Przekwas, A. J.; Hendricks, R. C.

    1994-01-01

    Current and future turbomachinery requires advanced seal configurations to control leakage, inhibit mixing of incompatible fluids and control the rotordynamic response. In recognition of a deficiency in the existing predictive methodology for seals, a seven-year effort was established in 1990 by NASA's Office of Aeronautics, Exploration and Technology, under the Earth-to-Orbit Propulsion program, to develop validated Computational Fluid Dynamics (CFD) concepts, codes and analyses for seals. The effort will provide NASA and the U.S. aerospace industry with advanced CFD scientific codes and industrial codes for analyzing and designing turbomachinery seals. An advanced 3D CFD cylindrical seal code has been developed, incorporating state-of-the-art computational methodology for flow analysis in straight, tapered and stepped seals. Relevant computational features of the code include: stationary/rotating coordinates, cylindrical and general Body Fitted Coordinate (BFC) systems, high-order differencing schemes, colocated variable arrangement, advanced turbulence models, incompressible/compressible flows, and moving grids. This paper presents the current status of code development, code demonstration for predicting rotordynamic coefficients, a numerical parametric study of entrance loss coefficients for generic annular seals, and plans for code extensions to labyrinth, damping, and other seal configurations.

  17. Kinetic Simulation and Energetic Neutral Atom Imaging of the Magnetosphere

    NASA Technical Reports Server (NTRS)

    Fok, Mei-Ching H.

    2011-01-01

    Advanced simulation tools and measurement techniques have been developed to study the dynamic magnetosphere and its response to drivers in the solar wind. The Comprehensive Ring Current Model (CRCM) is a kinetic code that solves for the 3D distribution in space, energy and pitch angle of energetic ions and electrons. Energetic Neutral Atom (ENA) imagers have flown on past and current satellite missions, and the global morphology of energetic ions has been revealed by the observed ENA images. We have combined simulation and ENA analysis techniques to study the development of ring current ions during magnetic storms and substorms. We identify the timing and location of particle injection and loss. We examine the evolution of ion energy and pitch-angle distribution during different phases of a storm. In this talk we will discuss the findings from our ring current studies and how our simulation and ENA analysis tools can be applied to the upcoming TRIO-CINEMA mission.

  18. State of the art in electromagnetic modeling for the Compact Linear Collider

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Candel, Arno; Kabel, Andreas; Lee, Lie-Quan

    SLAC's Advanced Computations Department (ACD) has developed the parallel 3D electromagnetic time-domain code T3P for simulations of wakefields and transients in complex accelerator structures. T3P is based on state-of-the-art finite element methods on unstructured grids and features unconditional stability, quadratic surface approximation and up to 6th-order vector basis functions for unprecedented simulation accuracy. Optimized for large-scale parallel processing on leadership supercomputing facilities, T3P allows simulations of realistic 3D structures with fast turn-around times, aiding the design of the next generation of accelerator facilities. Applications include simulations of the proposed two-beam accelerator structures for the Compact Linear Collider (CLIC) - wakefield damping in the Power Extraction and Transfer Structure (PETS) and power transfer to the main beam accelerating structures are investigated.

  19. SHARP User Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Y. Q.; Shemon, E. R.; Thomas, J. W.

    SHARP is an advanced modeling and simulation toolkit for the analysis of nuclear reactors. It is comprised of several components, including physical modeling tools, tools to integrate the physics codes for multi-physics analyses, and a set of tools to couple the codes within the MOAB framework. Physics modules currently include the neutronics code PROTEUS, the thermal-hydraulics code Nek5000, and the structural mechanics code Diablo. This manual focuses on performing multi-physics calculations with the SHARP toolkit. Manuals for the three individual physics modules are available with the SHARP distribution to help the user either carry out the primary multi-physics calculation with basic knowledge or perform further advanced development with in-depth knowledge of these codes. This manual provides step-by-step instructions on employing SHARP, including how to download and install the code, how to build the drivers for a test case, how to perform a calculation and how to visualize the results. Since SHARP has some specific library and environment dependencies, it is highly recommended that the user read this manual prior to installing SHARP. Verification test cases are included to check proper installation of each module. New users should first follow the step-by-step instructions provided for a test problem in this manual to understand the basic procedure of using SHARP before applying it to their own analyses. Both reference output and scripts are provided along with the test cases in order to verify correct installation and execution of the SHARP package. At the end of this manual, detailed instructions are provided on how to create a new test case so that users can perform novel multi-physics calculations with SHARP. Frequently asked questions are listed at the end of this manual to help the user troubleshoot issues.

  20. General purpose graphics-processing-unit implementation of cosmological domain wall network evolution.

    PubMed

    Correia, J R C C C; Martins, C J A P

    2017-10-01

    Topological defects unavoidably form at symmetry breaking phase transitions in the early universe. To probe the parameter space of theoretical models and set tighter experimental constraints (exploiting the recent advances in astrophysical observations), one requires more and more demanding simulations, and therefore more hardware resources and computation time. Improving the speed and efficiency of existing codes is essential. Here we present a general purpose graphics-processing-unit implementation of the canonical Press-Ryden-Spergel algorithm for the evolution of cosmological domain wall networks. This is ported to the Open Computing Language standard, and as a consequence significant speedups are achieved both in two-dimensional (2D) and 3D simulations.
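
    The core of the Press-Ryden-Spergel (PRS) update can be sketched as follows. This is a serial 1D toy version in Python, whereas the paper's code evolves 2D/3D lattices in OpenCL; the damping coefficient alpha = 3 and quartic potential follow the standard PRS choice, but the grid size, time step and normalization are illustrative:

```python
# Toy 1D leapfrog step of the PRS field equation for domain walls:
#   phi'' + alpha*(a'/a)*phi' = lap(phi) - dV/dphi,
# with a ~ eta (radiation era) so a'/a = 1/eta. Illustrative only.
def prs_step(phi, dphi, dx, dt, eta, alpha=3.0, lam=1.0, phi0=1.0):
    """Advance field phi and velocity dphi by one conformal-time step."""
    n = len(phi)
    damping = 0.5 * alpha * dt / eta       # 0.5*alpha*(a'/a)*dt
    new_phi, new_dphi = [0.0] * n, [0.0] * n
    for i in range(n):
        # Periodic second-difference Laplacian
        lap = (phi[(i - 1) % n] - 2 * phi[i] + phi[(i + 1) % n]) / dx**2
        # V = lam*(phi^2 - phi0^2)^2  =>  dV = 4*lam*phi*(phi^2 - phi0^2)
        dV = 4 * lam * phi[i] * (phi[i]**2 - phi0**2)
        # Semi-implicit treatment of the Hubble damping term
        new_dphi[i] = ((1 - damping) * dphi[i] + dt * (lap - dV)) / (1 + damping)
        new_phi[i] = phi[i] + dt * new_dphi[i]
    return new_phi, new_dphi

# Evolve a wall (kink) profile between the two vacua phi = -1 and +1
phi = [-1.0] * 16 + [1.0] * 16
dphi = [0.0] * 32
eta = 1.0
for _ in range(20):
    phi, dphi = prs_step(phi, dphi, dx=1.0, dt=0.1, eta=eta)
    eta += 0.1
```

    Each lattice site updates independently from its neighbors' previous values, which is exactly the data-parallel structure that maps well onto a GPU work-item per site.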

  1. The updated algorithm of the Energy Consumption Program (ECP): A computer model simulating heating and cooling energy loads in buildings

    NASA Technical Reports Server (NTRS)

    Lansing, F. L.; Strain, D. M.; Chai, V. W.; Higgins, S.

    1979-01-01

    The Energy Consumption Program was developed to simulate building heating and cooling loads and to compute thermal and electric energy consumption and cost. This article reports on the new algorithms and modifications made in an effort to widen the areas of application. The program structure was rewritten accordingly to refine and advance the building model and to further reduce processing time and cost. The program is noted for its very low cost and ease of use compared to other available codes. The accuracy of computations is not sacrificed, however, since the results are expected to lie within ±10% of actual energy meter readings.
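
    The kind of hourly load computation such a program performs can be sketched as follows; the UA value, setpoint and temperatures below are invented for illustration, and a real building model would add solar gains, internal loads, ventilation and thermal mass:

```python
# Toy hourly heating/cooling load from a lumped building conductance
# (UA) and outdoor temperatures. All values are illustrative.
def hourly_loads(outdoor_temps_c, ua_w_per_k=250.0, setpoint_c=21.0):
    """Return (heating_kwh, cooling_kwh) summed over the hours given."""
    heating = cooling = 0.0
    for t in outdoor_temps_c:
        q_kw = ua_w_per_k * (setpoint_c - t) / 1000.0  # conduction load, kW
        if q_kw > 0:
            heating += q_kw        # 1-hour step: kW * 1 h = kWh
        else:
            cooling += -q_kw
    return heating, cooling

heat, cool = hourly_loads([0.0, 10.0, 25.0, 30.0])
```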

  2. GROMACS: High performance molecular simulations through multi-level parallelism from laptops to supercomputers

    DOE PAGES

    Abraham, Mark James; Murtola, Teemu; Schulz, Roland; ...

    2015-07-15

    GROMACS is one of the most widely used open-source and free software codes in chemistry, used primarily for dynamical simulations of biomolecules. It provides a rich set of calculation types, preparation and analysis tools. Several advanced techniques for free-energy calculations are supported. In version 5, it reaches new performance heights through several new and enhanced parallelization algorithms. These work on every level: SIMD registers inside cores, multithreading, heterogeneous CPU–GPU acceleration, state-of-the-art 3D domain decomposition, and ensemble-level parallelization through built-in replica exchange and the separate Copernicus framework. Finally, the latest best-in-class compressed trajectory storage format is supported.

  3. GROMACS: High performance molecular simulations through multi-level parallelism from laptops to supercomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abraham, Mark James; Murtola, Teemu; Schulz, Roland

    GROMACS is one of the most widely used open-source and free software codes in chemistry, used primarily for dynamical simulations of biomolecules. It provides a rich set of calculation types, preparation and analysis tools. Several advanced techniques for free-energy calculations are supported. In version 5, it reaches new performance heights through several new and enhanced parallelization algorithms. These work on every level: SIMD registers inside cores, multithreading, heterogeneous CPU–GPU acceleration, state-of-the-art 3D domain decomposition, and ensemble-level parallelization through built-in replica exchange and the separate Copernicus framework. Finally, the latest best-in-class compressed trajectory storage format is supported.

  4. Advanced LIGO constraints on neutron star mergers and r-process sites

    DOE PAGES

    Côté, Benoit; Belczynski, Krzysztof; Fryer, Chris L.; ...

    2017-02-20

    The role of compact binary mergers as the main production site of r-process elements is investigated by combining stellar abundances of Eu observed in the Milky Way, galactic chemical evolution (GCE) simulations, binary population synthesis models, and gravitational wave measurements from Advanced LIGO. We compiled and reviewed seven recent GCE studies to extract the frequency of neutron star–neutron star (NS–NS) mergers that is needed in order to reproduce the observed [Eu/Fe] versus [Fe/H] relationship. We used our simple chemical evolution code to explore the impact of different analytical delay-time distribution functions for NS–NS mergers. We then combined our metallicity-dependent population synthesis models with our chemical evolution code to bring their predictions, for both NS–NS mergers and black hole–neutron star mergers, into a GCE context. Finally, we convolved our results with the cosmic star formation history to provide a direct comparison with current and upcoming Advanced LIGO measurements. When assuming that NS–NS mergers are the exclusive r-process sites, and that the ejected r-process mass per merger event is 0.01 M☉, the number of NS–NS mergers needed in GCE studies is about 10 times larger than what is predicted by standard population synthesis models. These two distinct fields can only be consistent with each other when assuming optimistic rates, massive NS–NS merger ejecta, and low Fe yields for massive stars. For now, population synthesis models and GCE simulations are in agreement with the current upper limit (O1) established by Advanced LIGO during their first run of observations. Upcoming measurements will provide an important constraint on the actual local NS–NS merger rate, will provide valuable insights on the plausibility of the GCE requirement, and will help to define whether or not compact binary mergers can be the dominant source of r-process elements in the universe.
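
    The convolution of a delay-time distribution (DTD) with a star formation history, as used in the comparison above, can be sketched as follows; the t^-1 DTD form is a standard assumption for compact binaries, and the normalization and time grid here are illustrative:

```python
# Discrete convolution of a star formation history (SFH) with a
# delay-time distribution to obtain a merger rate versus time.
# Units and normalization are illustrative, not the paper's values.
def merger_rate(sfh, dtd, dt):
    """R(t_i) = sum_j SFR(t_j) * DTD(t_i - t_j) * dt."""
    n = len(sfh)
    rate = [0.0] * n
    for i in range(n):
        for j in range(i + 1):
            rate[i] += sfh[j] * dtd[i - j] * dt
    return rate

dt = 0.1                       # time step, Gyr
tmin = 0.03                    # minimum delay before first mergers, Gyr
# t^-1 delay-time distribution, zero below the minimum delay
dtd = [0.0] + [1.0 / (k * dt) if k * dt >= tmin else 0.0
               for k in range(1, 100)]
sfh = [1.0] * 100              # constant star formation rate
rates = merger_rate(sfh, dtd, dt)
```

    With a constant SFH and a t^-1 DTD, the rate grows logarithmically, which is why late-time merger rates remain sensitive to star formation from many Gyr earlier.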

  5. Optimization of monitoring networks based on uncertainty quantification of model predictions of contaminant transport

    NASA Astrophysics Data System (ADS)

    Vesselinov, V. V.; Harp, D.

    2010-12-01

    The process of decision making to protect groundwater resources requires a detailed estimation of uncertainties in model predictions. Various uncertainties associated with modeling a natural system, such as: (1) measurement and computational errors; (2) uncertainties in the conceptual model and model-parameter estimates; (3) simplifications in model setup and numerical representation of governing processes, contribute to the uncertainties in the model predictions. Due to this combination of factors, the sources of predictive uncertainties are generally difficult to quantify individually. Decision support related to optimal design of monitoring networks requires (1) detailed analyses of existing uncertainties related to model predictions of groundwater flow and contaminant transport, (2) optimization of the proposed monitoring network locations in terms of their efficiency to detect contaminants and provide early warning. We apply existing and newly-proposed methods to quantify predictive uncertainties and to optimize well locations. An important aspect of the analysis is the application of newly-developed optimization technique based on coupling of Particle Swarm and Levenberg-Marquardt optimization methods which proved to be robust and computationally efficient. These techniques and algorithms are bundled in a software package called MADS. MADS (Model Analyses for Decision Support) is an object-oriented code that is capable of performing various types of model analyses and supporting model-based decision making. The code can be executed under different computational modes, which include (1) sensitivity analyses (global and local), (2) Monte Carlo analysis, (3) model calibration, (4) parameter estimation, (5) uncertainty quantification, and (6) model selection. 
The code can be externally coupled with any existing model simulator through integrated modules that read/write input and output files using a set of template and instruction files (consistent with the PEST I/O protocol). MADS can also be internally coupled with a series of built-in analytical simulators. MADS provides functionality to work directly with existing control files developed for the code PEST (Doherty 2009). To perform the computational modes mentioned above, the code utilizes (1) advanced Latin-Hypercube sampling techniques (including Improved Distributed Sampling), (2) various gradient-based Levenberg-Marquardt optimization methods, (3) advanced global optimization methods (including Particle Swarm Optimization), and (4) a selection of alternative objective functions. The code has been successfully applied to perform various model analyses related to environmental management of real contamination sites. Examples include source identification problems, quantification of uncertainty, model calibration, and optimization of monitoring networks. The methodology and software codes are demonstrated using synthetic and real case studies where monitoring networks are optimized taking into account the uncertainty in model predictions of contaminant transport.
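
    The basic Latin hypercube stratification mentioned above can be sketched as follows; this is the textbook scheme rather than the Improved Distributed Sampling variant, and the function and parameter names are invented:

```python
# Minimal Latin hypercube sampler: each dimension is split into n
# equal-probability strata, one point is drawn per stratum, and the
# per-dimension draws are randomly paired. Illustrative sketch only.
import random

def latin_hypercube(n_samples, bounds, rng=None):
    """One sample per stratum in each dimension, randomly paired."""
    rng = rng or random.Random(0)
    dims = len(bounds)
    samples = [[0.0] * dims for _ in range(n_samples)]
    for d, (lo, hi) in enumerate(bounds):
        # One uniform point inside each of n equal-width strata
        points = [lo + (hi - lo) * (k + rng.random()) / n_samples
                  for k in range(n_samples)]
        rng.shuffle(points)    # decorrelate pairings across dimensions
        for s in range(n_samples):
            samples[s][d] = points[s]
    return samples

pts = latin_hypercube(5, [(0.0, 1.0), (10.0, 20.0)])
```

    Compared with plain Monte Carlo, this guarantees coverage of every marginal stratum, which reduces the number of model runs needed for a given accuracy in the uncertainty analysis.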

  6. Heat-transfer optimization of a high-spin thermal battery

    NASA Astrophysics Data System (ADS)

    Krieger, Frank C.

    Recent advancements in thermal battery technology have produced batteries incorporating a fusible-material heat reservoir for operating-temperature control that operate reliably under the high spin rates often encountered in ordnance applications. Attention is presently given to the heat-transfer optimization of a high-spin thermal battery employing a nonfusible steel heat reservoir, on the basis of a computer code that simulated the effect of an actual fusible-material heat reservoir on battery performance. Both heat-paper and heat-pellet thermal battery configurations were considered.

  7. Laser program annual report, 1980

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coleman, L.W.; Krupke, W.F.; Strack, J.R.

    1981-06-01

    Volume 2 contains five sections that cover the areas of target design, target fabrication, diagnostics, and fusion experiments. Section 3 reports on target design activities, plasma theory and simulation, code development, and atomic theory. Section 4 presents the accomplishments of the Target Fabrication Group, Section 5 contains the results of our diagnostics development, and Section 6 describes advances made in the management and analysis of experimental data. Finally, Section 7 in Volume 2 reports the results of laser target experiments conducted during the year.

  8. Engine dynamic analysis with general nonlinear finite element codes

    NASA Technical Reports Server (NTRS)

    Adams, M. L.; Padovan, J.; Fertis, D. G.

    1991-01-01

    A general engine dynamic analysis, intended as a standard computational tool for design studies, is described for the prediction and understanding of complex engine dynamic behavior. Improved definition of engine dynamic response provides valuable information and insights leading to reduced maintenance and overhaul costs for existing engine configurations. Application of advanced engine dynamic simulation methods provides a considerable cost reduction in the development of new engine designs by eliminating some of the trial-and-error process done with engine hardware development.

  9. Research in Computational Astrobiology

    NASA Technical Reports Server (NTRS)

    Chaban, Galina; Colombano, Silvano; Scargle, Jeff; New, Michael H.; Pohorille, Andrew; Wilson, Michael A.

    2003-01-01

    We report on several projects in the field of computational astrobiology, which is devoted to advancing our understanding of the origin, evolution and distribution of life in the Universe using theoretical and computational tools. Research projects included modifying existing computer simulation codes to use efficient, multiple time step algorithms, statistical methods for analysis of astrophysical data via optimal partitioning methods, electronic structure calculations on water-nucleic acid complexes, incorporation of structural information into genomic sequence analysis methods, and calculations of shock-induced formation of polycyclic aromatic hydrocarbon compounds.

  10. Experimental and Computational Analysis of Unidirectional Flow Through Stirling Engine Heater Head

    NASA Technical Reports Server (NTRS)

    Wilson, Scott D.; Dyson, Rodger W.; Tew, Roy C.; Demko, Rikako

    2006-01-01

    A high-efficiency Stirling Radioisotope Generator (SRG) is being developed for possible use in long-duration space science missions. NASA's advanced technology goals for next generation Stirling convertors include increasing the Carnot efficiency and percent of Carnot efficiency. To help achieve these goals, a multi-dimensional Computational Fluid Dynamics (CFD) code is being developed to numerically model unsteady fluid flow and heat transfer phenomena of the oscillating working gas inside Stirling convertors. In the absence of transient pressure drop data for the zero-mean oscillating multi-dimensional flows present in the Technology Demonstration Convertors on test at NASA Glenn Research Center, unidirectional flow pressure drop test data is used to compare against 2D and 3D computational solutions. This study focuses on tracking pressure drop and mass flow rate data for unidirectional flow through a Stirling heater head using a commercial CFD code (CFD-ACE). The commercial CFD code uses a porous-media model which depends on the permeability and inertial coefficient present in the linear and nonlinear terms of the Darcy-Forchheimer equation. Permeability and the inertial coefficient were calculated from unidirectional flow test data. CFD simulations of the unidirectional flow test were validated using the porous-media model input parameters, which increased simulation accuracy by 14 percent on average.
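
    The extraction of permeability and the inertial coefficient from unidirectional pressure-drop data can be sketched as a least-squares fit to the Darcy-Forchheimer relation; the data below are synthetic, not the test data from the paper:

```python
# Fit dP/L = (mu/K)*v + rho*C*v^2 to pressure-drop-versus-velocity
# data, recovering permeability K and inertial coefficient C.
# All data values here are synthetic, for illustration only.
def fit_darcy_forchheimer(v, dp_per_l, mu, rho):
    """Least-squares fit of y = a*v + b*v^2; returns (K, C)."""
    # Normal equations for a = mu/K and b = rho*C
    s2 = sum(x * x for x in v)
    s3 = sum(x**3 for x in v)
    s4 = sum(x**4 for x in v)
    t1 = sum(x * y for x, y in zip(v, dp_per_l))
    t2 = sum(x * x * y for x, y in zip(v, dp_per_l))
    det = s2 * s4 - s3 * s3
    a = (t1 * s4 - t2 * s3) / det
    b = (t2 * s2 - t1 * s3) / det
    return mu / a, b / rho

mu, rho = 1.8e-5, 1.2                       # air-like fluid, SI units
K_true, C_true = 1.0e-9, 1.0e4              # assumed "true" values
v = [0.5, 1.0, 1.5, 2.0]                    # superficial velocities, m/s
dp = [(mu / K_true) * x + rho * C_true * x**2 for x in v]
K_fit, C_fit = fit_darcy_forchheimer(v, dp, mu, rho)
```

    The fitted K and C are the two inputs a porous-media CFD model of this type needs for its linear (Darcy) and nonlinear (Forchheimer) loss terms.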

  11. MCNP6 Simulation of Light and Medium Nuclei Fragmentation at Intermediate Energies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mashnik, Stepan Georgievich; Kerby, Leslie Marie

    2015-05-22

    MCNP6, the latest and most advanced LANL Monte Carlo transport code, representing a merger of MCNP5 and MCNPX, is actually much more than the sum of those two computer codes; MCNP6 is available to the public via RSICC at Oak Ridge, TN, USA. In the present work, MCNP6 was validated and verified (V&V) against different experimental data on intermediate-energy fragmentation reactions, and against results by several other codes, using mainly the latest modifications of the Cascade-Exciton Model (CEM) and of the Los Alamos version of the Quark-Gluon String Model (LAQGSM) event generators, CEM03.03 and LAQGSM03.03. It was found that MCNP6 using CEM03.03 and LAQGSM03.03 describes well fragmentation reactions induced on light and medium target nuclei by protons and light nuclei of energies around 1 GeV/nucleon and below, and can serve as a reliable simulation tool for different applications, such as cosmic-ray-induced single event upsets (SEUs), radiation protection, and cancer therapy with proton and ion beams, to name just a few. Future improvements of the predictive capabilities of MCNP6 for such reactions are possible, and are discussed in this work.

  12. Development of a Multifidelity Approach to Acoustic Liner Impedance Eduction

    NASA Technical Reports Server (NTRS)

    Nark, Douglas M.; Jones, Michael G.

    2017-01-01

    The use of acoustic liners has proven to be extremely effective in reducing aircraft engine fan noise transmission/radiation. However, the introduction of advanced fan designs and shorter engine nacelles has highlighted a need for novel acoustic liner designs that provide increased fan noise reduction over a broader frequency range. To achieve aggressive noise reduction goals, advanced broadband liner designs, such as zone liners and variable impedance liners, will likely depart from conventional uniform impedance configurations. Therefore, educing the impedance of these axial- and/or spanwise-variable impedance liners will require models that account for three-dimensional effects, thereby increasing computational expense. Thus, it would seem advantageous to investigate the use of multifidelity modeling approaches to impedance eduction for these advanced designs. This paper describes an extension of the use of the CDUCT-LaRC code to acoustic liner impedance eduction. The proposed approach is applied to a hardwall insert and conventional liner using simulated data. Educed values compare well with those educed using two extensively tested and validated approaches. The results are very promising and provide justification to further pursue the complementary use of CDUCT-LaRC with the currently used finite element codes to increase the efficiency of the eduction process for configurations involving three-dimensional effects.

  13. Tinker-HP: a massively parallel molecular dynamics package for multiscale simulations of large complex systems with advanced point dipole polarizable force fields

    PubMed Central

    Lagardère, Louis; Jolly, Luc-Henri; Lipparini, Filippo; Aviat, Félix; Stamm, Benjamin; Jing, Zhifeng F.; Harger, Matthew; Torabifard, Hedieh; Cisneros, G. Andrés; Schnieders, Michael J.; Gresh, Nohad; Maday, Yvon; Ren, Pengyu Y.; Ponder, Jay W.

    2017-01-01

    We present Tinker-HP, a massively parallel MPI package dedicated to classical molecular dynamics (MD) and to multiscale simulations using advanced polarizable force fields (PFF) encompassing distributed multipole electrostatics. Tinker-HP is an evolution of the popular Tinker package that conserves its simplicity of use and its reference double-precision implementation for CPUs. Grounded in interdisciplinary efforts with applied mathematics, Tinker-HP allows long polarizable MD simulations on large systems of up to millions of atoms. We detail in the paper the newly developed extension of massively parallel 3D spatial decomposition to point dipole polarizable models, as well as their coupling to efficient Krylov iterative and non-iterative polarization solvers. The design of the code allows the use of various computer systems ranging from laboratory workstations to modern petascale supercomputers with thousands of cores. Tinker-HP therefore offers the first high-performance, scalable CPU computing environment for the development of next-generation point dipole PFFs and for production simulations. Strategies linking Tinker-HP to quantum mechanics (QM) in the framework of multiscale polarizable self-consistent QM/MD simulations are also provided. The possibilities, performance, and scalability of the software are demonstrated via benchmark calculations using the polarizable AMOEBA force field on systems ranging from large water boxes of increasing size and ionic liquids to (very) large biosystems encompassing several proteins as well as the complete satellite tobacco mosaic virus and ribosome structures. For small systems, Tinker-HP appears competitive with the Tinker-OpenMM GPU implementation of Tinker. As the system size grows, Tinker-HP remains operational thanks to its access to distributed memory, and takes advantage of new algorithms enabling stable, long-timescale polarizable simulations.
Overall, a several thousand-fold acceleration over a single-core computation is observed for the largest systems. The extension of the present CPU implementation of Tinker-HP to other computational platforms is discussed. PMID:29732110
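
The Krylov iterative polarization solve mentioned above amounts, at each time step, to solving a linear system T·μ = E for the induced dipoles. A minimal conjugate-gradient sketch is shown below; this is not Tinker-HP's actual solver, and the 3×3 matrix T is a hypothetical stand-in for the dipole interaction tensor.

```python
# Minimal conjugate-gradient (CG) sketch of a Krylov-type iterative solve,
# as used for induced dipoles: solve T*mu = E with T symmetric positive
# definite. The matrix T below is a hypothetical stand-in for the dipole
# interaction tensor; this is NOT Tinker-HP's actual solver.

def cg_solve(A, b, tol=1e-12, max_iter=100):
    n = len(b)
    x = [0.0] * n
    r = b[:]                       # residual r = b - A*x with x = 0
    p = r[:]
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [r[i] + (rs_new / rs_old) * p[i] for i in range(n)]
        rs_old = rs_new
    return x

T = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]              # symmetric positive definite stand-in
E = [1.0, 2.0, 3.0]
mu = cg_solve(T, E)
residual = max(abs(sum(T[i][j] * mu[j] for j in range(3)) - E[i])
               for i in range(3))
print(residual < 1e-5)  # → True
```

For n atoms the real matrix is 3n×3n and is never formed explicitly; matrix-vector products are evaluated from pairwise interactions, which is what makes Krylov methods attractive here.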

  14. FASTGRASS implementation in BISON and fission gas behavior characterization in UO2 and connection to validating MARMOT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yun, Di; Mo, Kun; Ye, Bei

    2015-09-30

    This activity is supported by the US Nuclear Energy Advanced Modeling and Simulation (NEAMS) Fuels Product Line (FPL). Two major accomplishments in FY15 are summarized in this report: (1) implementation of the FASTGRASS module in the BISON code; and (2) a Xe implantation experiment for large-grained UO2. Both the BISON and MARMOT codes have been developed by Idaho National Laboratory (INL) to enable next-generation fuel performance modeling capability as part of the NEAMS FPL. To contribute to the development of the MOOSE-BISON-MARMOT (MBM) code suite, we have implemented the FASTGRASS fission gas model as a module in the BISON code. Based on rate-theory formulations, the coupled FASTGRASS module in BISON is capable of modeling LWR oxide fuel fission gas behavior and fission gas release. In addition, we conducted a Xe implantation experiment at the Argonne Tandem Linac Accelerator System (ATLAS) in order to produce UO2 samples with the desired bubble morphology. With these samples, further experiments to study fission gas diffusivity are planned to provide validation data for the fission gas release model in MARMOT.
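
As a much simpler illustration of the diffusion-controlled physics that rate-theory fission gas models describe, the classical Booth equivalent-sphere approximation gives a closed-form short-time release fraction. The sketch below is illustration only, not the FASTGRASS formulation, and the diffusivity and grain radius are assumed values.

```python
import math

# Illustration only: the classical Booth equivalent-sphere approximation
# for diffusion-controlled fission gas release (NOT the FASTGRASS rate-
# theory formulation). Short-time fractional release from a grain of
# radius a with gas diffusivity D:
#   f(t) ~ 6*sqrt(D*t / (pi*a^2)) - 3*D*t/a^2
# The values of D and a below are assumed for illustration.

def booth_release(D, a, t):
    tau = D * t / (a * a)            # dimensionless diffusion time
    return 6.0 * math.sqrt(tau / math.pi) - 3.0 * tau

D = 1.0e-19   # m^2/s, assumed gas diffusivity
a = 5.0e-6    # m, assumed equivalent sphere (grain) radius
f1 = booth_release(D, a, 1.0e6)      # ~12 days
f2 = booth_release(D, a, 1.0e7)      # ~4 months
print(0.0 < f1 < f2 < 1.0)  # → True
```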

  15. Advanced adaptive computational methods for Navier-Stokes simulations in rotorcraft aerodynamics

    NASA Technical Reports Server (NTRS)

    Stowers, S. T.; Bass, J. M.; Oden, J. T.

    1993-01-01

    A phase 2 research and development effort was conducted in the area of transonic, compressible, inviscid flows, with the ultimate goal of numerically modeling the complex flows inherent in advanced helicopter blade designs. The algorithms and methodologies are classified as adaptive methods: they employ error estimation techniques to approximate the local numerical error and automatically refine or unrefine the mesh so as to deliver a given level of accuracy. The result is a scheme that attempts to produce the best possible results with the least number of grid points, degrees of freedom, and operations. These schemes automatically locate and resolve shocks, shear layers, and other flow details to an accuracy level specified by the user of the code. The phase 1 work involved a feasibility study of h-adaptive methods for steady viscous flows, with emphasis on accurate simulation of vortex initiation, migration, and interaction. The phase 2 effort focused on extending these algorithms and methodologies to a three-dimensional topology.

  16. Design of the Experimental Exposure Conditions to Simulate Ionizing Radiation Effects on Candidate Replacement Materials for the Hubble Space Telescope

    NASA Technical Reports Server (NTRS)

    Smith, L. Montgomery

    1998-01-01

    In this effort, experimental exposure times for monoenergetic electrons and protons were determined to simulate the effects of the space radiation environment on Teflon components of the Hubble Space Telescope. Although the energy range of the available laboratory particle accelerators was limited, optimal exposure times for 50 keV, 220 keV, 350 keV, and 500 keV electrons were calculated that produced a dose-versus-depth profile approximating the full-spectrum profile, and that were realizable with existing equipment. For proton exposure, the limited energy range of the laboratory accelerator restricted simulation of the dose to a depth of 0.5 mil. While optimal exposure times were found for 200 keV, 500 keV, and 700 keV protons that simulated the full-spectrum dose-versus-depth profile to this depth, they were of such short duration that the existing laboratory equipment could not be controlled to within the required accuracy. In addition to these experimental issues, other areas exist in which the analytical work could be advanced. Improved computer codes for dose prediction, along with improved methodology for data input and output, would accelerate the calculations and make them more accurate. This is particularly true for proton fluxes, where a paucity of predictive software appears to exist. The dated nature of many of the existing Monte Carlo particle/radiation transport codes raises the question of whether they are sufficient for this type of analysis. Other improvements that would yield greater fidelity of laboratory exposure effects to the space environment are the use of a larger number of monoenergetic particle fluxes and improved optimization algorithms to determine the weighting values.
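
The exposure-time calculation described above is, at its core, a linear least-squares fit: choose beam-on times (weights) so the summed monoenergetic dose-depth profiles approximate the target full-spectrum profile. The two-beam, three-depth sketch below uses made-up numbers, and the target is chosen to be exactly 1.0·beam_a + 1.0·beam_b so the fit recovers w = (1, 1); the real problem adds nonnegativity and equipment-duration constraints.

```python
# Least-squares sketch of the exposure-time weighting problem: solve the
# 2x2 normal equations for w = argmin || w1*beam_a + w2*beam_b - target ||^2.
# All profile numbers are made up for illustration.

beam_a = [1.0, 0.6, 0.2]             # dose rate vs depth, beam A (made up)
beam_b = [0.3, 0.7, 0.9]             # dose rate vs depth, beam B (made up)
target = [1.3, 1.3, 1.1]             # exactly 1.0*beam_a + 1.0*beam_b

aa = sum(x * x for x in beam_a)
ab = sum(x * y for x, y in zip(beam_a, beam_b))
bb = sum(y * y for y in beam_b)
ta = sum(t * x for t, x in zip(target, beam_a))
tb = sum(t * y for t, y in zip(target, beam_b))
det = aa * bb - ab * ab
w1 = (ta * bb - tb * ab) / det       # optimal exposure time, beam A
w2 = (tb * aa - ta * ab) / det       # optimal exposure time, beam B

fit = [w1 * x + w2 * y for x, y in zip(beam_a, beam_b)]
err = max(abs(f - t) for f, t in zip(fit, target))
print(abs(w1 - 1.0) < 1e-9 and abs(w2 - 1.0) < 1e-9 and err < 1e-9)  # → True
```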

  17. Investigation of advanced counterrotation blade configuration concepts for high speed turboprop systems. Task 4: Advanced fan section aerodynamic analysis computer program user's manual

    NASA Technical Reports Server (NTRS)

    Crook, Andrew J.; Delaney, Robert A.

    1992-01-01

    The computer program user's manual for the ADPACAPES (Advanced Ducted Propfan Analysis Code-Average Passage Engine Simulation) program is included. The objective of the computer program is the development of a three-dimensional Euler/Navier-Stokes flow analysis for fan section/engine geometries containing multiple blade rows and multiple spanwise flow splitters. An existing procedure developed by Dr. J. J. Adamczyk and associates at the NASA Lewis Research Center was modified to accept multiple spanwise splitter geometries and simulate engine core conditions. The numerical solution is based upon a finite volume technique with a four-stage Runge-Kutta time marching procedure. Multiple blade row solutions are based upon the average-passage system of equations. The numerical solutions are performed on an H-type grid system, with meshes meeting the requirement of maintaining a common axisymmetric mesh for each blade row grid. The analysis was run on several geometry configurations ranging from one to five blade rows and from one to four radial flow splitters. The efficiency of the solution procedure was shown to be the same as that of the original analysis.
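
Four-stage Runge-Kutta time marching of this kind is commonly implemented in a low-storage form in which every stage restarts from the step's initial state. A minimal sketch on the model problem du/dt = -u (a stand-in for the flow residual) is shown below; the stage coefficients 1/4, 1/3, 1/2, 1 are a conventional choice assumed here, since the abstract does not list them.

```python
import math

# Sketch of four-stage Runge-Kutta time marching in the low-storage form
# common in finite-volume flow solvers: every stage restarts from the
# step's initial state u0. Stage coefficients (1/4, 1/3, 1/2, 1) are a
# conventional choice assumed for this illustration.

def rk4_march(u0, dt, residual):
    u = u0
    for alpha in (0.25, 1.0 / 3.0, 0.5, 1.0):
        u = u0 + alpha * dt * residual(u)
    return u

# model problem du/dt = -u standing in for the spatial residual
u, dt = 1.0, 0.01
for _ in range(100):                 # march to t = 1
    u = rk4_march(u, dt, lambda v: -v)
print(abs(u - math.exp(-1.0)) < 1e-8)  # → True
```

For linear problems this scheme reproduces the fourth-order Taylor expansion of the exact amplification factor, which is why the error above is so small.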

  18. A comparison between implicit and hybrid methods for the calculation of steady and unsteady inlet flows

    NASA Technical Reports Server (NTRS)

    Coakley, T. J.; Hsieh, T.

    1985-01-01

    Numerical simulations of steady and unsteady transonic diffuser flows using two different computer codes are discussed and compared with experimental data. The codes solve the Reynolds-averaged, compressible Navier-Stokes equations using various turbulence models. One of the codes has been applied extensively to diffuser flows and uses the hybrid method of MacCormack; this code is relatively inefficient numerically. The second code, developed more recently, is fully implicit and relatively efficient numerically. Simulations of steady flows using the implicit code are shown to be in good agreement with simulations using the hybrid code, and both are in good agreement with experimental results. Simulations of unsteady flows using the two codes are in good qualitative agreement with each other, although the quantitative agreement is not as good as in the steady-flow cases. The implicit code is shown to be eight times faster than the hybrid code for unsteady flow calculations and up to 32 times faster for steady flow calculations. Results of calculations using alternative turbulence models are also discussed.
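
The implicit code's speed advantage comes largely from stability: an implicit scheme tolerates time steps far beyond the explicit stability limit. A toy illustration on the stiff model equation du/dt = -50u follows; the numbers are chosen for illustration and are not taken from the diffuser calculations.

```python
# Why an implicit scheme pays off: it remains stable at time steps far
# beyond the explicit limit. Toy stiff model du/dt = lam*u with lam = -50
# (illustrative numbers only).

lam, dt, steps = -50.0, 0.1, 50      # dt >> explicit limit 2/|lam| = 0.04
u_exp = u_imp = 1.0
for _ in range(steps):
    u_exp = u_exp + dt * lam * u_exp     # explicit Euler: amplifies
    u_imp = u_imp / (1.0 - dt * lam)     # implicit Euler: decays

print(abs(u_exp) > 1e10)   # → True (explicit diverges at this dt)
print(abs(u_imp) < 1e-10)  # → True (implicit stays stable and decays)
```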

  19. Validation of the Electromagnetic Code FACETS for Numerical Simulation of Radar Target Images

    DTIC Science & Technology

    2009-12-01

    Validation of the electromagnetic code FACETS for numerical simulation of radar target images. S. Wong, DRDC Ottawa. Validation of the FACETS code for simulating radar images of a target is obtained through direct simulation-to-measurement comparisons, using a 3-dimensional computer-aided design (CAD) representation of the target.

  20. Combining high performance simulation, data acquisition, and graphics display computers

    NASA Technical Reports Server (NTRS)

    Hickman, Robert J.

    1989-01-01

    Issues involved in the continuing development of an advanced simulation complex are discussed. This approach provides the capability to perform the majority of tests on advanced systems non-destructively. The controlled test environments can be replicated to examine the response of the systems under test to alternative treatments of the system control design, or to test the function and qualification of specific hardware. Field tests verify that the elements simulated in the laboratories are sufficient. The digital computer complex is hosted by a Digital Equipment Corp. MicroVAX computer, with an Aptec Computer Systems Model 24 I/O computer performing the communication function. An Applied Dynamics International AD100 performs the high-speed simulation computing, and an Evans and Sutherland PS350 performs on-line graphics display. A Scientific Computer Systems SCS40 acts as a high-performance FORTRAN program processor to support the complex by generating the numerous large files, from programs coded in FORTRAN, that are required for real-time processing. Four programming languages are involved: FORTRAN, ADSIM, ADRIO, and STAPLE. FORTRAN is employed on the MicroVAX host to initialize and terminate the simulation runs on the system; the generation of the data files on the SCS40 is also performed with FORTRAN programs. ADSIM and ADRIO are used to program the processing elements of the AD100 and its IOCP processor. STAPLE is used to program the Aptec DIP and DIA processors.

  1. The Italian experience on T/H best estimate codes: Achievements and perspectives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alemberti, A.; D`Auria, F.; Fiorino, E.

    1997-07-01

    Thermal-hydraulic system codes are complex tools developed to simulate power plant behavior during off-normal conditions. Among the objectives of the code calculations, the evaluation of safety margins, operator training, and the optimization of the plant design and of the emergency operating procedures are those mostly considered in the field of nuclear safety. The first generation of codes was developed in the United States at the end of the 1960s. Since that time, different research groups all over the world have started the development of their own codes. At the beginning of the 1980s, second-generation codes were proposed; these differ from the first-generation codes in the number of balance equations solved (six instead of three), the sophistication of the constitutive models, and the adopted numerics. The capabilities of available computers have been fully exploited over the years. The authors then summarize some of the major steps in the process of developing, modifying, and advancing the capabilities of the codes. They touch on the fact that Italian, and for that matter non-American, researchers have not been intimately involved in much of this work. They then describe the application of these codes in Italy, even though there are no nuclear power plants operating or under construction there at this time. Much of this effort is directed at the general question of plant safety in the face of transient-type events.

  2. An Advanced simulation Code for Modeling Inductive Output Tubes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thuc Bui; R. Lawrence Ives

    2012-04-27

    During the Phase I program, CCR completed several major building blocks for a 3D large-signal inductive output tube (IOT) code using a modern computer language and programming techniques. These included a 3D, Helmholtz, time-harmonic field solver with a fully functional graphical user interface (GUI), automeshing, and adaptivity. Other building blocks included an improved electrostatic Poisson solver with temporal boundary conditions, providing temporal fields for the time-stepping particle pusher as well as the self electric field caused by time-varying space charge. The magnetostatic field solver was also updated to solve for the self magnetic field caused by the time-changing current density in the output cavity gap. The goal function to optimize an IOT cavity was also formulated, and optimization methodologies were investigated.

  3. Recent developments of the NESSUS probabilistic structural analysis computer program

    NASA Technical Reports Server (NTRS)

    Millwater, H.; Wu, Y.-T.; Torng, T.; Thacker, B.; Riha, D.; Leung, C. P.

    1992-01-01

    The NESSUS probabilistic structural analysis computer program combines state-of-the-art probabilistic algorithms with general purpose structural analysis methods to compute the probabilistic response and the reliability of engineering structures. Uncertainty in loading, material properties, geometry, boundary conditions and initial conditions can be simulated. The structural analysis methods include nonlinear finite element and boundary element methods. Several probabilistic algorithms are available such as the advanced mean value method and the adaptive importance sampling method. The scope of the code has recently been expanded to include probabilistic life and fatigue prediction of structures in terms of component and system reliability and risk analysis of structures considering cost of failure. The code is currently being extended to structural reliability considering progressive crack propagation. Several examples are presented to demonstrate the new capabilities.
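
Importance sampling, one of the probabilistic algorithms named above, concentrates samples near the failure region and reweights them by the density ratio. A minimal non-adaptive sketch is shown below; the numbers are illustrative and this is not the NESSUS algorithm itself.

```python
import math
import random

# Importance-sampling sketch of a small failure probability: estimate
# P(X > 3) for X ~ N(0, 1) by sampling from N(3, 1), centered on the
# failure region, and reweighting each hit by p(x)/q(x).
# Illustrative only; not the NESSUS adaptive algorithm.

random.seed(1)
N, shift = 20000, 3.0
acc = 0.0
for _ in range(N):
    x = random.gauss(shift, 1.0)
    if x > 3.0:                                        # "failure" indicator
        acc += math.exp(-shift * x + 0.5 * shift ** 2)  # weight p(x)/q(x)
p_fail = acc / N
exact = 0.5 * math.erfc(3.0 / math.sqrt(2.0))          # ~1.35e-3
print(abs(p_fail - exact) < 2e-4)  # → True
```

The payoff is variance reduction: plain Monte Carlo would need millions of samples to resolve a probability this small, while the shifted density puts roughly half the samples in the failure region.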

  4. Scoping analysis of the Advanced Test Reactor using SN2ND

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolters, E.; Smith, M.

    2012-07-26

    A detailed set of calculations was carried out for the Advanced Test Reactor (ATR) using the SN2ND solver of the UNIC code, which is part of the SHARP multi-physics code being developed under the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program in DOE-NE. The primary motivation of this work is to assess whether high-fidelity deterministic transport codes can tackle coupled dynamics simulations of the ATR. The successful use of such codes in a coupled dynamics simulation can impact what experiments are performed and what power levels are permitted during those experiments at the ATR. The advantages of the SN2ND solver over comparable neutronics tools are its superior parallel performance and demonstrated accuracy on large-scale homogeneous and heterogeneous reactor geometries. However, it should be noted that virtually no effort in this project was spent constructing a proper cross section generation methodology for the ATR usable in the SN2ND solver. While attempts were made to use cross section data derived from SCALE, only a minimal number of compositional cross section sets was generated, to be consistent with the reference Monte Carlo input specification. The accuracy of any deterministic transport solver is impacted by such an approach, and it clearly causes substantial errors in this work. The reasoning behind this decision is justified given the overall funding dedicated to the task (two months) and the real focus of the work: can modern deterministic tools actually treat complex facilities like the ATR with heterogeneous geometry modeling? SN2ND has been demonstrated to solve problems with upwards of one trillion degrees of freedom, which translates to tens of millions of finite elements, hundreds of angles, and hundreds of energy groups, resulting in a very high-fidelity model of the system unachievable by most deterministic transport codes today.
A space-angle convergence study was conducted to determine the meshing and angular cubature requirements for the ATR, and also to demonstrate the feasibility of performing this analysis with a deterministic transport code capable of modeling heterogeneous geometries. The work performed indicates that a minimum of 260,000 linear finite elements combined with an L3T11 cubature (96 angles on the sphere) is required for both eigenvalue and flux convergence of the ATR. A critical finding was that the fuel meat and water channels must each be meshed with at least 3 'radial zones' for accurate flux convergence. A small number of 3D calculations was also performed to show axial mesh and eigenvalue convergence for a full-core problem. Finally, a brief analysis was performed with different cross section sets generated from DRAGON and SCALE, and the findings show that more effort will be required to improve the multigroup cross section generation process. The total number of degrees of freedom for a converged 27-group, 2D ATR problem is ~340 million; this number increases to ~25 billion for a 3D ATR problem. This scoping study shows that both 2D and 3D calculations are well within the capabilities of the current SN2ND solver, given the availability of a large-scale computing center such as BlueGene/P. However, dynamics calculations are not realistic without the implementation of improvements in the solver.

  5. Enhanced Thermal Diffusion of Li in Graphite by Alternating Vertical Electric Field: A Hybrid Quantum-Classical Simulation Study

    NASA Astrophysics Data System (ADS)

    Ohba, Nobuko; Ogata, Shuji; Tamura, Tomoyuki; Kobayashi, Ryo; Yamakawa, Shunsuke; Asahi, Ryoji

    2012-02-01

    Enhancing the diffusivity of the Li ion in the Li-graphite intercalation compound used as the negative electrode in Li-ion rechargeable batteries is important for improving both the recharging speed and the power of the battery. In the compound, the Li ion creates a long-range stress field around itself by expanding the interlayer spacing of graphite. We extend the hybrid quantum-classical simulation code to include an external electric field in addition to the long-range stress field obtained by first-principles simulation. In the hybrid code, the quantum region, selected adaptively around the Li ion, is treated using real-space density-functional theory for electrons. The rest of the system is described with an empirical interatomic potential that includes a term for the dispersion force between C atoms in different layers. Hybrid simulation runs for Li dynamics in graphite are performed at 423 K under various settings of the amplitude and frequency of alternating electric fields applied perpendicular to the C layers. We find that the in-plane diffusivity of the Li ion is enhanced significantly by the electric field when the amplitude is on the order of 0.2 V/Å or larger and the frequency is as high as 1.7 THz. The microscopic mechanisms of the enhancement are explained.
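
In-plane diffusivity in such simulations is typically extracted from trajectories via the Einstein relation, MSD = 4Dt in two dimensions. The toy sketch below uses a 2D lattice random walk as a stand-in for the simulated Li trajectory; the walk's exact diffusivity is 1/4 in lattice units.

```python
import random

# Sketch of extracting an in-plane diffusivity from trajectories via the
# 2D Einstein relation D = MSD / (4*t). A lattice random walk (unit step,
# one step per unit time) stands in for the simulated Li trajectory; its
# exact diffusivity is D = 1/4 in these units.

random.seed(7)
n_walkers, n_steps = 4000, 200
msd = 0.0
for _ in range(n_walkers):
    x = y = 0
    for _ in range(n_steps):
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x += dx
        y += dy
    msd += x * x + y * y
msd /= n_walkers                  # mean-squared displacement at t = n_steps
D = msd / (4.0 * n_steps)
print(abs(D - 0.25) < 0.02)  # → True
```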

  6. A Simulation and Modeling Framework for Space Situational Awareness

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olivier, S S

    This paper describes the development and initial demonstration of a new, integrated modeling and simulation framework, encompassing the space situational awareness (SSA) enterprise, for quantitatively assessing the benefit of specific sensor systems, technologies, and data analysis techniques. The framework is based on a flexible, scalable architecture to enable efficient, physics-based simulation of the current SSA enterprise and to accommodate future advancements in SSA systems. In particular, the code is designed to take advantage of massively parallel computer systems available, for example, at Lawrence Livermore National Laboratory. The details of the modeling and simulation framework are described, including hydrodynamic models of satellite intercept and debris generation, orbital propagation algorithms, radar cross section calculations, optical brightness calculations, generic radar system models, generic optical system models, specific Space Surveillance Network models, object detection algorithms, orbit determination algorithms, and visualization tools. The use of this integrated simulation and modeling framework on a specific scenario involving space debris is demonstrated.

  7. An Object-Oriented Finite Element Framework for Multiphysics Phase Field Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michael R Tonks; Derek R Gaston; Paul C Millett

    2012-01-01

    The phase field approach is a powerful and popular method for modeling microstructure evolution. In this work, advanced numerical tools are used to create a phase field framework that facilitates rapid model development. This framework, called MARMOT, is based on Idaho National Laboratory's finite element Multiphysics Object-Oriented Simulation Environment (MOOSE). In MARMOT, the system of phase field partial differential equations (PDEs) is solved simultaneously with PDEs describing additional physics, such as solid mechanics and heat conduction, using the Jacobian-free Newton-Krylov method. An object-oriented architecture is created by taking advantage of commonalities among phase field models to facilitate the development of new models with very little written code. In addition, MARMOT provides access to mesh and time step adaptivity, reducing the cost of performing simulations with large disparities in both spatial and temporal scales. In this work, phase separation simulations are used to show the numerical performance of MARMOT. Deformation-induced grain growth and void growth simulations are included to demonstrate the multiphysics capability.
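
As a much simpler illustration of what a phase field PDE does, here is an explicit 1D Allen-Cahn step with a double-well potential. This is a toy sketch for intuition only, not MARMOT's formulation; MARMOT solves its PDEs implicitly with Jacobian-free Newton-Krylov.

```python
# Toy 1D Allen-Cahn phase field step (explicit Euler, periodic boundary).
# dphi/dt = eps * laplacian(phi) - f'(phi), double-well f'(phi) = phi^3 - phi.
# Illustration only; MARMOT uses implicit JFNK-coupled solves.

def allen_cahn_step(phi, dx, dt, eps=1.0):
    n = len(phi)
    new = [0.0] * n
    for i in range(n):
        lap = (phi[(i - 1) % n] - 2 * phi[i] + phi[(i + 1) % n]) / (dx * dx)
        new[i] = phi[i] + dt * (eps * lap - (phi[i] ** 3 - phi[i]))
    return new

# initial condition: small perturbation around the unstable state phi = 0
phi = [0.1 if i < 16 else -0.1 for i in range(32)]
for _ in range(2000):
    phi = allen_cahn_step(phi, dx=1.0, dt=0.05)
# the field separates toward the two wells phi = +1 and phi = -1
print(max(phi) > 0.9 and min(phi) < -0.9)  # → True
```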

  8. Hydrodynamic Simulations of Protoplanetary Disks with GIZMO

    NASA Astrophysics Data System (ADS)

    Rice, Malena; Laughlin, Greg

    2018-01-01

    Over the past several decades, the field of computational fluid dynamics has rapidly advanced as the range of available numerical algorithms and computationally feasible physical problems has expanded. The development of modern numerical solvers provides a compelling opportunity to revisit previously obtained results in search of yet-undiscovered effects that may be revealed through longer integration times and more precise numerical approaches. In this study, we compare the results of past hydrodynamic disk simulations with those obtained from modern numerical tools. We focus our study on the GIZMO code (Hopkins 2015), which uses meshless methods to solve the homogeneous Euler equations of hydrodynamics while eliminating problems arising from advection between grid cells. By comparing modern simulations with prior results, we hope to provide an improved understanding of the impact of fluid mechanics on the evolution of protoplanetary disks.

  9. Coupled Neutronics Thermal-Hydraulic Solution of a Full-Core PWR Using VERA-CS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clarno, Kevin T; Palmtag, Scott; Davidson, Gregory G

    2014-01-01

    The Consortium for Advanced Simulation of Light Water Reactors (CASL) is developing a core simulator called VERA-CS to model operating PWR reactors with high resolution. This paper describes how the development of VERA-CS is being driven by a set of progression benchmark problems that specify the delivery of useful capability in discrete steps. As part of this development, this paper describes the current capability of VERA-CS to perform a multiphysics simulation of an operating PWR at hot full power (HFP) conditions using a set of existing computer codes coupled together in a novel method. Results for several single-assembly cases are shown that demonstrate coupling for different boron concentrations and power levels. Finally, high-resolution results are shown for a full-core PWR reactor modeled in quarter-symmetry.

  10. Accuracy of Binary Black Hole waveforms for Advanced LIGO searches

    NASA Astrophysics Data System (ADS)

    Kumar, Prayush; Barkett, Kevin; Bhagwat, Swetha; Chu, Tony; Fong, Heather; Brown, Duncan; Pfeiffer, Harald; Scheel, Mark; Szilagyi, Bela

    2015-04-01

    Coalescing binaries of compact objects are flagship sources for the first direct detection of gravitational waves with the LIGO-Virgo observatories. Matched-filtering-based detection searches aimed at binaries of black holes will use aligned-spin waveforms as filters, and their efficiency hinges on the accuracy of the underlying waveform models. A number of gravitational waveform models are available in the literature, e.g., the effective-one-body, phenomenological, and traditional post-Newtonian ones. While numerical relativity (NR) simulations provide the most accurate modeling of gravitational radiation from compact binaries, their computational cost limits their application in large-scale searches. In this talk we assess the accuracy of waveform models in two regions of parameter space that have only been explored cursorily in the past: the high mass-ratio regime and the comparable mass-ratio, high-spin regime. Using the SpEC code, six q = 7 simulations with aligned spins lasting 60 orbits, and tens of q ∈ [1,3] simulations with high black hole spins, were performed. We use them to study the accuracy and intrinsic parameter biases of different waveform families, and assess their viability for Advanced LIGO searches.
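
Waveform accuracy comparisons of this kind are usually quantified by the normalized overlap, or "match", between two waveforms. The sketch below computes a plain inner-product match; detector-PSD weighting and maximization over time and phase shifts are omitted, and the sinusoids are toy stand-ins for model and NR waveforms.

```python
import math

# Sketch of the normalized overlap ("match") used to compare waveform
# models: match = <h1,h2> / sqrt(<h1,h1> * <h2,h2>). PSD weighting and
# time/phase maximization are omitted; sinusoids stand in for waveforms.

def match(h1, h2):
    def inner(a, b):
        return sum(x * y for x, y in zip(a, b))
    return inner(h1, h2) / math.sqrt(inner(h1, h1) * inner(h2, h2))

t = [0.01 * i for i in range(1000)]
h_model = [math.sin(2 * math.pi * 3.00 * ti) for ti in t]
h_bias = [math.sin(2 * math.pi * 3.03 * ti) for ti in t]   # 1% frequency bias

m_same = match(h_model, h_model)
m_bias = match(h_model, h_bias)
print(abs(m_same - 1.0) < 1e-12 and m_bias < 0.99)  # → True
```

Even a 1% frequency bias degrades the match substantially once the phase drift accumulates over many cycles, which is why long (60-orbit) NR simulations are so valuable for calibrating models.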

  11. Star and Planet Formation through Cosmic Time

    NASA Astrophysics Data System (ADS)

    Lee, Aaron Thomas

    The computational advances of the past several decades have allowed theoretical astrophysics to proceed at a dramatic pace. Numerical simulations can now model everything from the formation of individual molecules all the way up to the evolution of the entire universe. Observational astrophysics is producing data at a prodigious rate, and sophisticated analysis techniques for large data sets continue to be developed; it is now possible for terabytes of data to be effectively turned into stunning astrophysical results. This is especially true for the field of star and planet formation. Theorists are now simulating the formation of individual planets and stars, and observing facilities are finally capturing snapshots of these processes within the Milky Way galaxy and other galaxies. While a coherent theory remains incomplete, great strides have been made toward this goal. This dissertation discusses several projects that develop models of star and planet formation. This work spans large spatial and temporal scales: from the AU scale of protoplanetary disks up to the parsec scale of star-forming clouds, taking place both in contemporary environments like the Milky Way galaxy and in primordial environments at redshifts of z ≈ 20. In particular, I show that planet formation need not proceed in incremental stages, in which planets grow from millimeter-sized dust grains all the way up to planets, but can instead proceed directly from small dust grains to large kilometer-sized boulders. The requirements for this model to operate effectively are supported by observations. Additionally, I draw suspicion toward one model for the formation of high-mass stars (stars with masses exceeding 8 Msun), which postulates that high-mass stars are built up from the gradual accretion of mass from the cloud onto low-mass stars. 
I show that magnetic fields in star-forming clouds thwart this transfer of mass; instead, it is likely that high-mass stars are created from the gravitational collapse of large clouds. This work also provides a sub-grid model for computational codes that employ sink particles accreting from magnetized gas. Finally, I analyze the role that radiation plays in determining the final masses of the first stars to ever form in the universe. These stars formed in starkly different environments than stars do today, and the direct radiation from these stars turns out to be a crucial component of primordial star formation theory. These projects use a variety of computational tools, including spectral hydrodynamics codes, magnetohydrodynamics grid codes that employ adaptive mesh refinement techniques, and long-characteristic ray tracing methods. I develop and describe a long-characteristic ray tracing method for modeling hydrogen-ionizing radiation from stars. Additionally, I have developed Monte Carlo routines that convert hydrodynamic data used in smoothed particle hydrodynamics codes for use in grid-based codes. Both of these advances will find use beyond simulations of star and planet formation and will benefit the astronomical community at large.

  12. New coding advances for deep space communications

    NASA Technical Reports Server (NTRS)

    Yuen, Joseph H.

    1987-01-01

    Advances made in error-correction coding for deep space communications are described. The code believed to be the best is a (15, 1/6) convolutional code with maximum-likelihood decoding; when concatenated with a 10-bit Reed-Solomon code, it achieves a bit error rate of 10^-6 at a bit SNR of 0.42 dB. This code outperforms the Voyager code by 2.11 dB. The use of source statistics in decoding convolutionally encoded Voyager images from the Uranus encounter is investigated, and it is found that a 2 dB decoding gain can be achieved.
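
Maximum-likelihood decoding of a convolutional code is done with the Viterbi algorithm. The deep-space (15, 1/6) code works on the same principle but with 2^14 trellis states; the sketch below uses a toy rate-1/2, constraint-length-3 code with generators 7 and 5 (octal) so the entire encoder and decoder fit in a few lines.

```python
# Toy sketch of maximum-likelihood (Viterbi) decoding on a rate-1/2,
# constraint-length-3 convolutional code with generators 7 and 5 (octal).
# The deep-space (15, 1/6) code works the same way with far more states.

def conv_encode(bits):
    s1 = s2 = 0                      # the two most recent input bits
    out = []
    for u in bits:
        out.append(u ^ s1 ^ s2)      # generator 111 (octal 7)
        out.append(u ^ s2)           # generator 101 (octal 5)
        s2, s1 = s1, u
    return out

def viterbi_decode(received, nbits):
    INF = float('inf')
    metric = [0.0, INF, INF, INF]    # path metrics; start in state 0
    paths = [[], [], [], []]
    for t in range(nbits):
        r1, r2 = received[2 * t], received[2 * t + 1]
        new_metric = [INF] * 4
        new_paths = [None] * 4
        for s in range(4):
            if metric[s] == INF:
                continue
            s1, s2 = s >> 1, s & 1
            for u in (0, 1):
                o1, o2 = u ^ s1 ^ s2, u ^ s2
                m = metric[s] + (o1 != r1) + (o2 != r2)   # Hamming metric
                ns = (u << 1) | s1                        # next state
                if m < new_metric[ns]:
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [u]
        metric, paths = new_metric, new_paths
    best = min(range(4), key=lambda s: metric[s])
    return paths[best]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
coded = conv_encode(msg + [0, 0])    # two tail bits terminate the trellis
coded[5] ^= 1                        # inject a single channel bit error
decoded = viterbi_decode(coded, len(msg) + 2)[:len(msg)]
print(decoded == msg)  # → True (the error is corrected)
```

This toy code has free distance 5, so any single channel error (and most double errors) is corrected, which is the mechanism behind the coding gains quoted above.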

  13. Extending a CAD-Based Cartesian Mesh Generator for the Lattice Boltzmann Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cantrell, J Nathan; Inclan, Eric J; Joshi, Abhijit S

    2012-01-01

This paper describes the development of a custom preprocessor for the PaRAllel Thermal Hydraulics simulations using Advanced Mesoscopic methods (PRATHAM) code based on an open-source mesh generator, CartGen [1]. PRATHAM is a three-dimensional (3D) lattice Boltzmann method (LBM) based parallel flow simulation software currently under development at the Oak Ridge National Laboratory. The LBM algorithm in PRATHAM requires a uniform, coordinate system-aligned, non-body-fitted structured mesh for its computational domain. CartGen [1], which is a GNU-licensed open source code, already comes with some of the above needed functionalities. However, it needs to be further extended to fully support the LBM-specific preprocessing requirements. Therefore, CartGen is being modified to (i) be compiler independent while converting a neutral-format STL (Stereolithography) CAD geometry to a uniform structured Cartesian mesh, (ii) provide a mechanism for PRATHAM to import the mesh and identify the fluid/solid domains, and (iii) provide a mechanism to visually identify and tag the domain boundaries on which to apply different boundary conditions.
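
The fluid/solid tagging in step (ii) can be illustrated with a toy version that tests uniform-cell centers against an analytic surface; a real preprocessor would instead ray-cast against the STL triangles. Everything here, including the sphere standing in for the CAD geometry, is illustrative:

```python
import numpy as np

def flag_solid_cells(shape, cell_size, inside):
    """Tag each cell of a uniform Cartesian mesh as solid (True) or
    fluid (False) by testing its center against an `inside` predicate.
    An analytic sphere stands in for the STL geometry here."""
    # cell centers sit half a cell in from the grid lines
    centers = (np.indices(shape).reshape(3, -1).T + 0.5) * cell_size
    return inside(centers).reshape(shape)

# hypothetical geometry: a sphere of radius 3 centered at (5, 5, 5)
sphere = lambda p: np.linalg.norm(p - 5.0, axis=1) < 3.0
solid = flag_solid_cells((10, 10, 10), 1.0, sphere)
print(solid.sum(), solid.size)  # solid cells vs. total cells
```

With the tags in place, the LBM solver only needs the boolean field to apply bounce-back (solid) or fluid update rules per cell.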

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rey, D.; Ryan, W.; Ross, M.

A method for more efficiently utilizing the frequency bandwidth allocated for data transmission is presented. Current space and range communication systems use modulation and coding schemes that transmit 0.5 to 1.0 bits per second per Hertz of radio frequency bandwidth. The goal in this LDRD project is to increase the bandwidth utilization by employing advanced digital communications techniques. This is done with little or no increase in the transmit power, which is usually very limited on airborne systems. Teaming with New Mexico State University, an implementation of trellis coded modulation (TCM), a coding and modulation scheme pioneered by Ungerboeck, was developed for this application and simulated on a computer. TCM provides a means for reliably transmitting data while simultaneously increasing bandwidth efficiency. The penalty is increased receiver complexity. In particular, the trellis decoder requires high-speed, application-specific digital signal processing (DSP) chips. A system solution based on the QualComm Viterbi decoder and the Graychip DSP receiver chips is presented.

  15. CFD analyses for advanced pump design

    NASA Technical Reports Server (NTRS)

    Dejong, F. J.; Choi, S.-K.; Govindan, T. R.

    1994-01-01

As one of the activities of the NASA/MSFC Pump Stage Technology Team, the present effort was focused on using CFD in the design and analysis of high performance rocket engine pumps. Under this effort, a three-dimensional Navier-Stokes code was used for various inducer and impeller flow field calculations. An existing algebraic grid generation procedure was extended to allow for nonzero blade thickness, splitter blades, and hub/shroud cavities upstream or downstream of the (main) blades. This resulted in a fast, robust inducer/impeller geometry/grid generation package. Problems associated with running a compressible flow code to simulate an incompressible flow were resolved; related aspects of the numerical algorithm (viz., the matrix preconditioning, the artificial dissipation, and the treatment of low Mach number flows) were addressed. As shown by the calculations performed under the present effort, the resulting code, in conjunction with the grid generation package, is an effective tool for the rapid solution of three-dimensional viscous inducer and impeller flows.

  16. Verification of combined thermal-hydraulic and heat conduction analysis code FLOWNET/TRUMP

    NASA Astrophysics Data System (ADS)

    Maruyama, Soh; Fujimoto, Nozomu; Kiso, Yoshihiro; Murakami, Tomoyuki; Sudo, Yukio

    1988-09-01

This report presents the verification results of the combined thermal-hydraulic and heat conduction analysis code FLOWNET/TRUMP, which has been utilized for the core thermal-hydraulic design of the High Temperature Engineering Test Reactor (HTTR), especially for the analysis of flow distribution among fuel block coolant channels, the determination of thermal boundary conditions for fuel block stress analysis, and the estimation of fuel temperature in the case of a fuel block coolant channel blockage accident. The Japan Atomic Energy Research Institute has been planning to construct the HTTR in order to establish basic technologies for future advanced very high temperature gas-cooled reactors and to serve as an irradiation test reactor for promotion of innovative high-temperature new frontier technologies. The verification of the code was done through comparison between the analytical results and the experimental results from the Helium Engineering Demonstration Loop Multi-channel Test Section (HENDEL T(sub 1-M)) with simulated fuel rods and fuel blocks.
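
The flow-distribution problem a network code like FLOWNET solves, balancing parallel coolant channels so each sees the same pressure drop, can be shown in miniature. Assuming a simple quadratic channel law dP = k_i * Q_i^2 (a deliberate simplification of the real friction and form-loss models):

```python
from math import sqrt

def split_flow(q_total, k):
    """Distribute a total flow among parallel channels so each sees the
    same pressure drop, assuming dP = k_i * Q_i**2 per channel. This is
    a hand-rolled illustration of the flow-network balance, not the
    FLOWNET algorithm itself."""
    # equal dP  =>  Q_i proportional to 1 / sqrt(k_i)
    weights = [1.0 / sqrt(ki) for ki in k]
    total_w = sum(weights)
    return [q_total * w / total_w for w in weights]

flows = split_flow(10.0, [1.0, 4.0])  # second channel 4x as resistive
print(flows)  # flow splits 2:1, giving each channel the same dP
```

Checking the result: 1.0 * (20/3)^2 and 4.0 * (10/3)^2 both equal about 44.4, so the pressure drops match, which is the balance condition a real network solver iterates toward with more realistic channel laws.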

  17. ANNarchy: a code generation approach to neural simulations on parallel hardware

    PubMed Central

    Vitay, Julien; Dinkelbach, Helge Ü.; Hamker, Fred H.

    2015-01-01

Many modern neural simulators focus on the simulation of networks of spiking neurons on parallel hardware. Another important framework in computational neuroscience, rate-coded neural networks, is mostly difficult or impossible to implement using these simulators. We present here the ANNarchy (Artificial Neural Networks architect) neural simulator, which allows users to easily define and simulate rate-coded and spiking networks, as well as combinations of both. The interface in Python has been designed to be close to the PyNN interface, while the definition of neuron and synapse models can be specified using an equation-oriented mathematical description similar to the Brian neural simulator. This information is used to generate C++ code that will efficiently perform the simulation on the chosen parallel hardware (multi-core system or graphics processing unit). Several numerical methods are available to transform ordinary differential equations into efficient C++ code. We compare the parallel performance of the simulator to existing solutions. PMID:26283957
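
A rate-coded network of the kind ANNarchy targets reduces, in its simplest textbook form, to integrating a system of ODEs for the firing rates. The following sketch is a generic explicit-Euler rate model, not ANNarchy's generated C++ or its API:

```python
import numpy as np

def step_rates(r, W, inputs, tau=10.0, dt=1.0):
    """One explicit-Euler update of a rate-coded network,
    tau * dr/dt = -r + W @ r + inputs, with rates clipped at zero.
    A generic textbook rate model for illustration."""
    dr = (-r + W @ r + inputs) / tau
    return np.maximum(r + dt * dr, 0.0)

# two mutually exciting units; only the first receives external input
W = np.array([[0.0, 0.5], [0.5, 0.0]])
r = np.zeros(2)
for _ in range(500):
    r = step_rates(r, W, np.array([1.0, 0.0]))
print(r)  # converges toward the fixed point r = W r + I, i.e. (4/3, 2/3)
```

An equation-oriented simulator parses the same ODE written as text and emits a compiled loop equivalent to this update, which is where the code-generation speedup comes from.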

  18. Modeling Submarine Lava Flow with ASPECT

    NASA Astrophysics Data System (ADS)

    Storvick, E. R.; Lu, H.; Choi, E.

    2017-12-01

Submarine lava flow is not easily observed and experimented on due to limited accessibility and challenges posed by the fast solidification of lava and the associated drastic changes in rheology. However, recent advances in numerical modeling techniques might address some of these challenges and provide unprecedented insight into the mechanics of submarine lava flow and the conditions determining its wide-ranging morphologies. In this study, we explore the applicability of ASPECT, Advanced Solver for Problems in Earth's ConvecTion, to submarine lava flow. ASPECT is a parallel finite element code that solves problems of thermal convection in the Earth's mantle. We will assess ASPECT's capability to model submarine lava flow by comparing models of lava flow morphology simulated with GALE, a long-term tectonics finite element analysis code, with models created using comparable settings and parameters in ASPECT. From these observations we will contrast the differing models in order to identify the benefits of each code. While doing so, we anticipate learning about the conditions required for end-members of lava flow morphology, for example, pillows and sheet flows. With ASPECT specifically we focus on 1) whether the lava rheology can be implemented; 2) how effective the adaptive mesh refinement (AMR) is in resolving morphologies of the solidified crust; 3) whether and under what conditions the end-members of the lava flow morphologies, pillows and sheets, can be reproduced.

  19. Error coding simulations in C

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1994-01-01

When data is transmitted through a noisy channel, errors are produced within the data, rendering it indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation and error statistics. The simulator utilizes a concatenated coding scheme which includes the CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2(exp 8)) with interleave depth of 5 as the outermost code, a (7, 1/2) convolutional code as an inner code and the CCSDS recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The received signal-to-noise ratio required for a desired bit error rate is greatly reduced through the use of forward error correction techniques. Even greater coding gain is provided through the use of a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder. The burst correction capability length is increased in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.
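
The innermost (n, n-16) CRC layer is the easiest of the three codes to reproduce. A bitwise version of the 16-bit CCITT CRC, which to our understanding is the generator behind the CCSDS recommendation, can be written as follows (flight implementations would be table-driven and in C):

```python
def crc16_ccitt(data: bytes, poly=0x1021, init=0xFFFF):
    """Bitwise CRC-16 with the CCITT polynomial x^16 + x^12 + x^5 + 1
    and all-ones initial register, believed to match the CCSDS
    (n, n-16) CRC described above. Sketch for illustration only."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            # shift left; XOR in the polynomial when the top bit pops out
            crc = ((crc << 1) ^ poly if crc & 0x8000 else crc << 1) & 0xFFFF
    return crc

print(hex(crc16_ccitt(b"123456789")))  # 0x29b1, the standard check value
```

The 16 parity bits appended to each frame let the receiver detect residual errors that escape the RS and convolutional layers; detection, not correction, is the CRC's job in this stack.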

  20. Numerical Simulations of Dynamical Mass Transfer in Binaries

    NASA Astrophysics Data System (ADS)

    Motl, P. M.; Frank, J.; Tohline, J. E.

    1999-05-01

We will present results from our ongoing research project to simulate dynamically unstable mass transfer in near-contact binaries with mass ratios different from one. We employ a fully three-dimensional self-consistent field technique to generate synchronously rotating polytropic binaries. With our self-consistent field code we can create equilibrium binaries where one component is, by radius, within about 99% of filling its Roche lobe, for example. These initial configurations are evolved using a three-dimensional, Eulerian hydrodynamics code. We make no assumptions about the symmetry of the subsequent flow, and the entire binary system is evolved self-consistently under the influence of its own gravitational potential. For a given mass ratio and polytropic index for the binary components, mass transfer via Roche lobe overflow can be predicted to be stable or unstable through simple theoretical arguments. The validity of the approximations made in the stability calculations is tested against our numerical simulations. We acknowledge support from the U.S. National Science Foundation through grants AST-9720771, AST-9528424, and DGE-9355007. This research has been supported, in part, by grants of high-performance computing time on NPACI facilities at the San Diego Supercomputer Center, the Texas Advanced Computing Center and through the PET program of the NAVOCEANO DoD Major Shared Resource Center in Stennis, MS.
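
The "simple theoretical arguments" for stability rest on comparing how the donor's radius and its Roche-lobe radius respond to mass loss. The Roche-lobe radius itself is commonly approximated by the Eggleton (1983) fit, sketched here (our function name; the formula is standard):

```python
from math import log

def roche_lobe_radius(q):
    """Eggleton (1983) fit for the volume-equivalent Roche-lobe radius
    of the donor, in units of the orbital separation, where
    q = M_donor / M_accretor. Accurate to about 1% for all q."""
    q23 = q ** (2.0 / 3.0)
    return 0.49 * q23 / (0.6 * q23 + log(1.0 + q ** (1.0 / 3.0)))

print(roche_lobe_radius(1.0))  # ~0.38 for an equal-mass binary
```

If mass loss shrinks the lobe faster than the donor contracts, overflow runs away; that is the instability criterion the hydrodynamic simulations put to the test.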

  1. Flash Galaxy Cluster Merger, Simulated using the Flash Code, Mass Ratio 1:1

    ScienceCinema

    None

    2018-05-11

Since structure in the universe forms in a bottom-up fashion, with smaller structures merging to form larger ones, modeling the merging process in detail is crucial to our understanding of cosmology. At the current epoch, we observe clusters of galaxies undergoing mergers. It is seen that the two major components of galaxy clusters, the hot intracluster gas and the dark matter, behave very differently during the course of a merger. Using the N-body and hydrodynamics capabilities in the FLASH code, we have simulated a suite of representative galaxy cluster mergers, including the dynamics of both the dark matter, which is collisionless, and the gas, which has the properties of a fluid. 3-D visualizations such as these demonstrate clearly the different behavior of these two components over time. Credits: Science: John Zuhone (Harvard-Smithsonian Center for Astrophysics); Visualization: Jonathan Gallagher (Flash Center, University of Chicago)

 This research used resources of the Argonne Leadership Computing Facility at Argonne National Laboratory, which is supported by the Office of Science of the U.S. Dept. of Energy (DOE) under contract DE-AC02-06CH11357. This research was supported by the National Nuclear Security Administration's (NNSA) Advanced Simulation and Computing (ASC) Academic Strategic Alliance Program (ASAP).

  2. Flash Galaxy Cluster Merger, Simulated using the Flash Code, Mass Ratio 1:1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2010-08-09

Since structure in the universe forms in a bottom-up fashion, with smaller structures merging to form larger ones, modeling the merging process in detail is crucial to our understanding of cosmology. At the current epoch, we observe clusters of galaxies undergoing mergers. It is seen that the two major components of galaxy clusters, the hot intracluster gas and the dark matter, behave very differently during the course of a merger. Using the N-body and hydrodynamics capabilities in the FLASH code, we have simulated a suite of representative galaxy cluster mergers, including the dynamics of both the dark matter, which is collisionless, and the gas, which has the properties of a fluid. 3-D visualizations such as these demonstrate clearly the different behavior of these two components over time. Credits: Science: John Zuhone (Harvard-Smithsonian Center for Astrophysics); Visualization: Jonathan Gallagher (Flash Center, University of Chicago)

 This research used resources of the Argonne Leadership Computing Facility at Argonne National Laboratory, which is supported by the Office of Science of the U.S. Dept. of Energy (DOE) under contract DE-AC02-06CH11357. This research was supported by the National Nuclear Security Administration's (NNSA) Advanced Simulation and Computing (ASC) Academic Strategic Alliance Program (ASAP).

  3. Advanced optical simulation of scintillation detectors in GATE V8.0: first implementation of a reflectance model based on measured data

    NASA Astrophysics Data System (ADS)

    Stockhoff, Mariele; Jan, Sebastien; Dubois, Albertine; Cherry, Simon R.; Roncali, Emilie

    2017-06-01

Typical PET detectors are composed of a scintillator coupled to a photodetector that detects scintillation photons produced when high energy gamma photons interact with the crystal. A critical performance factor is the collection efficiency of these scintillation photons, which can be optimized through simulation. Accurate modelling of photon interactions with crystal surfaces is essential in optical simulations, but the existing UNIFIED model in GATE is often inaccurate, especially for rough surfaces. Previously, a new approach for modelling surface reflections based on measured surfaces was validated using custom Monte Carlo code. In this work, the LUT Davis model is implemented and validated in GATE and GEANT4, and is made accessible for all users in the nuclear imaging research community. Look-up tables (LUTs) from various crystal surfaces are calculated based on measured surfaces obtained by atomic force microscopy. The LUTs include photon reflection probabilities and directions depending on incidence angle. We provide LUTs for rough and polished surfaces with different reflectors and coupling media. Validation parameters include light output measured at different depths of interaction in the crystal and photon track lengths, as both parameters are strongly dependent on reflector characteristics and distinguish between models. Results from the GATE/GEANT4 beta version are compared to those from our custom code and experimental data, as well as the UNIFIED model. GATE simulations with the LUT Davis model show average variations in light output of <2% from the custom code and excellent agreement for track lengths with R^2 > 0.99. Experimental data agree within 9% for relative light output. The new model also simplifies surface definition, as no complex input parameters are needed. The LUT Davis model makes optical simulations for nuclear imaging detectors much more precise, especially for studies with rough crystal surfaces.
It will be available in GATE V8.0.

  4. Program Code Generator for Cardiac Electrophysiology Simulation with Automatic PDE Boundary Condition Handling

    PubMed Central

    Punzalan, Florencio Rusty; Kunieda, Yoshitoshi; Amano, Akira

    2015-01-01

Clinical and experimental studies involving human hearts can have certain limitations. Methods such as computer simulations can be an important alternative or supplemental tool. Physiological simulation at the tissue or organ level typically involves the handling of partial differential equations (PDEs). Boundary conditions and distributed parameters, such as those used in pharmacokinetics simulation, add to the complexity of the PDE solution. These factors can tailor PDE solutions and their corresponding program code to specific problems. Boundary condition and parameter changes in the customized code are usually error-prone and time-consuming. We propose a general approach for handling PDEs and boundary conditions in computational models using a replacement scheme for discretization. This study is an extension of a program generator that we introduced in a previous publication. The program generator can generate code for multi-cell simulations of cardiac electrophysiology. Improvements to the system allow it to handle simultaneous equations in the biological function model as well as implicit PDE numerical schemes. The replacement scheme involves substituting all partial differential terms with numerical solution equations. Once the model and boundary equations are discretized with the numerical solution scheme, instances of the equations are generated to undergo dependency analysis. The result of the dependency analysis is then used to generate the program code. The resulting program code is in the Java or C programming language. To validate the automatic handling of boundary conditions in the program code generator, we generated simulation code using the FHN, Luo-Rudy 1, and Hund-Rudy cell models and ran cell-to-cell coupling and action potential propagation simulations. One of the simulations is based on a published experiment and simulation results are compared with the experimental data.
We conclude that the proposed program code generator can be used to generate code for physiological simulations and provides a tool for studying cardiac electrophysiology. PMID:26356082
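
The "replacement scheme" substitutes each partial-derivative term with a discrete stencil. A hand-written example of what such a generator automates, here a conservative finite-difference update of a 1-D diffusion term with sealed (zero-flux) ends, might look like this; the equation and names are illustrative, not generator output:

```python
import numpy as np

def diffuse_step(v, d, dx, dt):
    """Explicit update of dv/dt = d * d2v/dx2 after replacing the
    partial-derivative term with internodal fluxes (flux form of the
    central-difference stencil). Boundary fluxes are zero, i.e.
    sealed-end boundary conditions, so the total is conserved."""
    f = d * np.diff(v) / dx          # flux between adjacent nodes
    dv = np.zeros_like(v)
    dv[:-1] += f / dx                # flux entering from the right
    dv[1:] -= f / dx                 # flux leaving to the left
    return v + dt * dv

v = np.zeros(11)
v[5] = 1.0                           # initial pulse at the center
for _ in range(1000):
    v = diffuse_step(v, d=1.0, dx=1.0, dt=0.2)
print(v.sum())  # sealed ends conserve the total: 1.0
```

In an automatic generator, the same substitution is applied symbolically to every spatial term of the cell model, and the boundary rows get their own replacement rule, which is exactly the step that is error-prone when done by hand.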

  5. Tristan code and its application

    NASA Astrophysics Data System (ADS)

    Nishikawa, K.-I.

Since TRISTAN: The 3-D Electromagnetic Particle Code was introduced in 1990, it has been used for many applications, including simulations of global solar wind-magnetosphere interaction. The most essential ingredients of this code have been published in the ISSS-4 book. In this abstract we describe some of the issues and an application of this code for the study of global solar wind-magnetosphere interaction, including a substorm study. The basic code (tristan.f) for the global simulation and a local simulation of reconnection with a Harris model (issrec2.f) are available at http:/www.physics.rutger.edu/˜kenichi. For beginners, the code (isssrc2.f) with simpler boundary conditions is a suitable starting point for running simulations. The future of global particle simulations for a geospace general circulation model (GGCM) with predictive capability (for the Space Weather Program) is discussed.

  6. SciDAC Center for Gyrokinetic Particle Simulation of Turbulent Transport in Burning Plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Zhihong

    2013-12-18

During the first year of the SciDAC gyrokinetic particle simulation (GPS) project, the GPS team (Zhihong Lin, Liu Chen, Yasutaro Nishimura, and Igor Holod) at the University of California, Irvine (UCI) studied tokamak electron transport driven by electron temperature gradient (ETG) turbulence, and by trapped electron mode (TEM) turbulence and ion temperature gradient (ITG) turbulence with kinetic electron effects, and extended our studies of ITG turbulence spreading to core-edge coupling. We have developed and optimized an elliptic solver using the finite element method (FEM), which enables the implementation of advanced kinetic electron models (split-weight scheme and hybrid model) in the SciDAC GPS production code GTC. The GTC code has been ported and optimized on both scalar and vector parallel computer architectures, and is being transformed into object-oriented style to facilitate collaborative code development. During this period, the UCI team members presented 11 invited talks at major national and international conferences, and published 22 papers in peer-reviewed journals and 10 papers in conference proceedings. UCI hosted the annual SciDAC Workshop on Plasma Turbulence sponsored by the GPS Center, 2005-2007. The workshop was attended by about fifty US and foreign researchers and financially supported several graduate students from MIT, Princeton University, Germany, Switzerland, and Finland. A new SciDAC postdoc, Igor Holod, has arrived at UCI to initiate global particle simulation of magnetohydrodynamic turbulence driven by energetic particle modes. The PI, Z. Lin, has been promoted to Associate Professor with tenure at UCI.

  7. NASA Glenn Research Center UEET (Ultra-Efficient Engine Technology) Program: Agenda and Abstracts

    NASA Technical Reports Server (NTRS)

    Manthey, Lri

    2001-01-01

    Topics discussed include: UEET Overview; Technology Benefits; Emissions Overview; P&W Low Emissions Combustor Development; GE Low Emissions Combustor Development; Rolls-Royce Low Emissions Combustor Development; Honeywell Low Emissions Combustor Development; NASA Multipoint LDI Development; Stanford Activities In Concepts for Advanced Gas Turbine Combustors; Large Eddy Simulation (LES) of Gas Turbine Combustion; NASA National Combustion Code Simulations; Materials Overview; Thermal Barrier Coatings for Airfoil Applications; Disk Alloy Development; Turbine Blade Alloy; Ceramic Matrix Composite (CMC) Materials Development; Ceramic Matrix Composite (CMC) Materials Characterization; Environmental Barrier Coatings (EBC) for Ceramic Matrix Composite (CMC) Materials; Ceramic Matrix Composite Vane Rig Testing and Design; Ultra-High Temperature Ceramic (UHTC) Development; Lightweight Structures; NPARC Alliance; Technology Transfer and Commercialization; and Turbomachinery Overview; etc.

  8. Supersonic Combustion Research at NASA

    NASA Technical Reports Server (NTRS)

    Drummond, J. P.; Danehy, Paul M.; Gaffney, Richard L., Jr.; Tedder, Sarah A.; Cutler, Andrew D.; Bivolaru, Daniel

    2007-01-01

    This paper discusses the progress of work to model high-speed supersonic reacting flow. The purpose of the work is to improve the state of the art of CFD capabilities for predicting the flow in high-speed propulsion systems, particularly combustor flowpaths. The program has several components including the development of advanced algorithms and models for simulating engine flowpaths as well as a fundamental experimental and diagnostic development effort to support the formulation and validation of the mathematical models. The paper will provide details of current work on experiments that will provide data for the modeling efforts along with the associated nonintrusive diagnostics used to collect the data from the experimental flowfield. Simulation of a recent experiment to partially validate the accuracy of a combustion code is also described.

  9. Shift Verification and Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pandya, Tara M.; Evans, Thomas M.; Davidson, Gregory G

    2016-09-07

This documentation outlines the verification and validation of Shift for the Consortium for Advanced Simulation of Light Water Reactors (CASL). Five main types of problems were used for validation: small criticality benchmark problems; full-core reactor benchmarks for light water reactors; fixed-source coupled neutron-photon dosimetry benchmarks; depletion/burnup benchmarks; and full-core reactor performance benchmarks. We compared Shift results to measured data and other simulated Monte Carlo radiation transport code results, and found very good agreement in a variety of comparison measures. These include prediction of critical eigenvalue, radial and axial pin power distributions, rod worth, leakage spectra, and nuclide inventories over a burn cycle. Based on this validation of Shift, we are confident in Shift to provide reference results for CASL benchmarking.

  10. Modulation and coding for satellite and space communications

    NASA Technical Reports Server (NTRS)

    Yuen, Joseph H.; Simon, Marvin K.; Pollara, Fabrizio; Divsalar, Dariush; Miller, Warner H.; Morakis, James C.; Ryan, Carl R.

    1990-01-01

Several modulation and coding advances supported by NASA are summarized. To support long-constraint-length convolutional codes, a VLSI maximum-likelihood decoder utilizing parallel processing techniques is being developed to decode convolutional codes of constraint length 15 and a code rate as low as 1/6. A VLSI high-speed 8-bit Reed-Solomon decoder is being developed for advanced tracking and data relay satellite (ATDRS) applications. A 300-Mb/s modem with continuous phase modulation (CPM) and coding, also being developed for ATDRS, is discussed. Trellis-coded modulation (TCM) techniques are discussed for satellite-based mobile communication applications.

  11. Advanced Chemical Propulsion Study

    NASA Technical Reports Server (NTRS)

    Woodcock, Gordon; Byers, Dave; Alexander, Leslie A.; Krebsbach, Al

    2004-01-01

A study was performed of advanced chemical propulsion technology application to space science (Code S) missions. The purpose was to begin the process of selecting chemical propulsion technology advancement activities that would provide the greatest benefits to Code S missions. Several missions were selected from Code S planning data, and a range of advanced chemical propulsion options was analyzed to assess capabilities and benefits for these missions. Selected beneficial applications were found for higher-performing bipropellants, gelled propellants, and cryogenic propellants. Technology advancement recommendations included cryocoolers and small turbopump engines for cryogenic propellants; space-storable propellants such as LOX-hydrazine; and advanced monopropellants. It was noted that fluorine-bearing oxidizers offer performance gains over more benign oxidizers. Potential benefits were observed for gelled propellants that could be allowed to freeze, then thawed for use.

  12. Auto Code Generation for Simulink-Based Attitude Determination Control System

    NASA Technical Reports Server (NTRS)

    MolinaFraticelli, Jose Carlos

    2012-01-01

This paper details the work done to auto-generate C code from a Simulink-based Attitude Determination Control System (ADCS) to be used in target platforms. NASA Marshall engineers have developed an ADCS Simulink simulation to be used as a component for the flight software of a satellite. This generated code can be used for carrying out hardware-in-the-loop testing of components for a satellite in a convenient manner with easily tunable parameters. Due to the nature of embedded hardware components such as microcontrollers, this simulation code cannot be used directly, as it is, on the target platform and must first be converted into C code; this process is known as auto code generation. In order to generate C code from this simulation, it must be modified to follow specific standards set in place by the auto code generation process. Some of these modifications include changing certain simulation models into their atomic representations, which can bring new complications into the simulation. The execution order of these models can change based on these modifications. Great care must be taken in order to maintain a working simulation that can also be used for auto code generation. After modifying the ADCS simulation for the auto code generation process, it is shown that the difference between the output data of the former and that of the latter is within acceptable bounds. Thus, it can be said that the process is a success since all the output requirements are met. Based on these results, it can be argued that this generated C code can be effectively used by any desired platform as long as it follows the specific memory requirements established in the Simulink model.

  13. Guided wave energy trapping to detect hidden multilayer delamination damage

    NASA Astrophysics Data System (ADS)

    Leckey, Cara A. C.; Seebo, Jeffrey P.

    2015-03-01

Nondestructive Evaluation (NDE) and Structural Health Monitoring (SHM) simulation tools capable of modeling three-dimensional (3D) realistic energy-damage interactions are needed for aerospace composites. Current practice in NDE/SHM simulation for composites commonly involves over-simplification of the material parameters and/or a simplified two-dimensional (2D) approach. The unique damage types that occur in composite materials (delamination, microcracking, etc.) develop as complex 3D geometry features. This paper discusses the application of 3D custom ultrasonic simulation tools to study wave interaction with multilayer delamination damage in carbon-fiber reinforced polymer (CFRP) composites. In particular, simulation based studies of ultrasonic guided wave energy trapping due to multilayer delamination damage were performed. The simulation results show changes in energy trapping at the composite surface as additional delaminations are added through the composite thickness. The results demonstrate a potential approach for identifying the presence of hidden multilayer delamination damage in applications where only single-sided access to a component is available. The paper also describes recent advancements in optimizing the custom ultrasonic simulation code for increases in computation speed.

  14. Assessment and Application of the ROSE Code for Reactor Outage Thermal-Hydraulic and Safety Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liang, Thomas K.S.; Ko, F.-K.; Dai, L.-C

The currently available tools, such as RELAP5, RETRAN, and others, cannot easily and correctly perform the task of analyzing the system behavior during plant outages. Therefore, a medium-sized program aiming at reactor outage simulation and evaluation, such as midloop operation (MLO) with loss of residual heat removal (RHR), has been developed. Important thermal-hydraulic processes involved during MLO with loss of RHR can be properly simulated by the newly developed reactor outage simulation and evaluation (ROSE) code. The two-region approach with a modified two-fluid model has been adopted as the theoretical basis of the ROSE code. To verify the analytical model in the first step, posttest calculations against the integral midloop experiments with loss of RHR have been performed. The excellent simulation capacity of the ROSE code against the Institute of Nuclear Energy Research Integral System Test Facility test data is demonstrated. To further mature the ROSE code in simulating a full-sized pressurized water reactor, assessment against the WGOTHIC code and the Maanshan momentary-loss-of-RHR event has been undertaken. The successfully assessed ROSE code is then applied to evaluate the abnormal operation procedure (AOP) with loss of RHR during MLO (AOP 537.4) for the Maanshan plant. The ROSE code also has been successfully transplanted into the Maanshan training simulator to support operator training. How the simulator was upgraded by the ROSE code for MLO will be presented in the future.

  15. Comparison of DAC and MONACO DSMC Codes with Flat Plate Simulation

    NASA Technical Reports Server (NTRS)

    Padilla, Jose F.

    2010-01-01

    Various implementations of the direct simulation Monte Carlo (DSMC) method exist in academia, government and industry. By comparing implementations, deficiencies and merits of each can be discovered. This document reports comparisons between DSMC Analysis Code (DAC) and MONACO. DAC is NASA's standard DSMC production code and MONACO is a research DSMC code developed in academia. These codes have various differences; in particular, they employ distinct computational grid definitions. In this study, DAC and MONACO are compared by having each simulate a blunted flat plate wind tunnel test, using an identical volume mesh. Simulation expense and DSMC metrics are compared. In addition, flow results are compared with available laboratory data. Overall, this study revealed that both codes, excluding grid adaptation, performed similarly. For parallel processing, DAC was generally more efficient. As expected, code accuracy was mainly dependent on physical models employed.
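    As a hedged illustration of the method both codes implement, the sketch below performs one no-time-counter (NTC) collision step in a single DSMC cell in Python. The hard-sphere cross-section, cell volume, and particle weighting are illustrative assumptions, not DAC or MONACO internals.

```python
import math
import random

def dsmc_cell_collisions(velocities, n_real_per_sim, cell_volume, dt,
                         sigma=1e-19, rng=random):
    """One no-time-counter (NTC) collision step in a single DSMC cell.

    velocities: list of 3-component velocity vectors of simulated particles.
    Pair count and acceptance follow the standard NTC scheme; the hard-sphere
    cross-section sigma and weighting n_real_per_sim are assumptions.
    """
    n = len(velocities)
    if n < 2:
        return 0
    # crude estimate of the maximum relative speed in the cell
    g_max = max(math.dist(velocities[i], velocities[j])
                for i in range(n) for j in range(i + 1, n)) or 1.0
    n_pairs = int(0.5 * n * (n - 1) * n_real_per_sim * sigma * g_max
                  * dt / cell_volume)
    collisions = 0
    for _ in range(n_pairs):
        i, j = rng.sample(range(n), 2)
        g = math.dist(velocities[i], velocities[j])
        if rng.random() < g / g_max:          # accept with probability g/g_max
            # isotropic post-collision scattering of the relative velocity
            cm = [(a + b) / 2 for a, b in zip(velocities[i], velocities[j])]
            theta = math.acos(2 * rng.random() - 1)
            phi = 2 * math.pi * rng.random()
            half_g = [g / 2 * math.sin(theta) * math.cos(phi),
                      g / 2 * math.sin(theta) * math.sin(phi),
                      g / 2 * math.cos(theta)]
            velocities[i] = [c + h for c, h in zip(cm, half_g)]
            velocities[j] = [c - h for c, h in zip(cm, half_g)]
            collisions += 1
    return collisions
```

    Momentum and kinetic energy are conserved per accepted pair, since only the direction of the relative velocity changes.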

  16. Advanced Discontinuous Galerkin Algorithms and First Open-Field Line Turbulence Simulations

    NASA Astrophysics Data System (ADS)

    Hammett, G. W.; Hakim, A.; Shi, E. L.

    2016-10-01

    New versions of Discontinuous Galerkin (DG) algorithms have interesting features that may help with challenging higher-dimensional kinetic problems. We are developing the gyrokinetic code Gkeyll based on DG. DG also has features that may help with the next generation of Exascale computers. Higher-order methods do more FLOPS to extract more information per byte, thus reducing memory and communications costs (which are a bottleneck at exascale). DG uses efficient Gaussian quadrature like finite elements, but keeps the calculation local for the kinetic solver, also reducing communication. Sparse grid methods might further reduce the cost significantly in higher dimensions. The inner product norm can be chosen to preserve energy conservation with non-polynomial basis functions (such as Maxwellian-weighted bases), which can be viewed as a Petrov-Galerkin method. This allows a full-F code to benefit from similar Gaussian quadrature as used in popular δf gyrokinetic codes. Consistent basis functions avoid high-frequency numerical modes from electromagnetic terms. We will show our first results of 3x+2v simulations of open-field line/SOL turbulence in a simple helical geometry (like Helimak/TORPEX), with parameters from LAPD, TORPEX, and NSTX. Supported by the Max-Planck/Princeton Center for Plasma Physics, the SciDAC Center for the Study of Plasma Microturbulence, and DOE Contract DE-AC02-09CH11466.
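    The local Gaussian-quadrature machinery mentioned above can be sketched briefly: the Python snippet below L2-projects a function onto the Legendre modal basis of one reference cell using a 3-point Gauss-Legendre rule. It is a 1D toy for illustration, not Gkeyll code.

```python
import math

# 3-point Gauss-Legendre rule on [-1, 1] (exact for polynomials up to degree 5)
GL_NODES = (-math.sqrt(3.0 / 5.0), 0.0, math.sqrt(3.0 / 5.0))
GL_WEIGHTS = (5.0 / 9.0, 8.0 / 9.0, 5.0 / 9.0)

def legendre(k, x):
    """Legendre polynomials P0..P2, the local modal basis of one DG cell."""
    return (1.0, x, 0.5 * (3.0 * x * x - 1.0))[k]

def dg_project(f, degree=2):
    """L2-project f onto the Legendre basis of a single reference cell.

    Returns modal coefficients c_k = (2k+1)/2 * int_{-1}^{1} f(x) P_k(x) dx,
    evaluated with Gaussian quadrature, as DG codes do cell-locally.
    """
    return [(2 * k + 1) / 2.0
            * sum(w * f(x) * legendre(k, x)
                  for x, w in zip(GL_NODES, GL_WEIGHTS))
            for k in range(degree + 1)]
```

    For f(x) = x^2 the exact modal expansion is (1/3)P0 + (2/3)P2, which the quadrature reproduces since the integrands are low-degree polynomials.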

  17. Automated target recognition using passive radar and coordinated flight models

    NASA Astrophysics Data System (ADS)

    Ehrman, Lisa M.; Lanterman, Aaron D.

    2003-09-01

    Rather than emitting pulses, passive radar systems rely on illuminators of opportunity, such as TV and FM radio, to illuminate potential targets. These systems are particularly attractive since they allow receivers to operate without emitting energy, rendering them covert. Many existing passive radar systems estimate the locations and velocities of targets. This paper focuses on adding an automatic target recognition (ATR) component to such systems. Our approach to ATR compares the Radar Cross Section (RCS) of targets detected by a passive radar system to the simulated RCS of known targets. To make the comparison as accurate as possible, the received signal model accounts for aircraft position and orientation, propagation losses, and antenna gain patterns. The estimated positions become inputs for an algorithm that uses a coordinated flight model to compute probable aircraft orientation angles. The Fast Illinois Solver Code (FISC) simulates the RCS of several potential target classes as they execute the estimated maneuvers. The RCS is then scaled by the Advanced Refractive Effects Prediction System (AREPS) code to account for propagation losses that occur as functions of altitude and range. The Numerical Electromagnetic Code (NEC2) computes the antenna gain pattern, so that the RCS can be further scaled. The Rician model compares the RCS of the illuminated aircraft with those of the potential targets. This comparison results in target identification.
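    The final classification step can be sketched as follows. The Python snippet below picks the template whose simulated RCS history best matches the observed one; a Gaussian log-likelihood is used here as a simplified stand-in for the paper's Rician model, and the template data in the usage example are hypothetical.

```python
def classify_rcs(observed, templates, noise_sigma=1.0):
    """Pick the target class whose simulated RCS history best matches the
    observed one. A Gaussian log-likelihood is a simplified stand-in for
    the Rician comparison; in the real pipeline the templates would come
    from a solver such as FISC, scaled by AREPS and NEC2.
    """
    def log_lik(template):
        return -sum((o - t) ** 2 for o, t in zip(observed, template)) \
               / (2.0 * noise_sigma ** 2)
    return max(templates, key=lambda name: log_lik(templates[name]))
```

    Usage: with hypothetical templates `{"classA": [...], "classB": [...]}`, the function returns the key with the highest likelihood given the observed RCS sequence.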

  18. TRIAD IV: Nationwide Survey of Medical Students' Understanding of Living Wills and DNR Orders.

    PubMed

    Mirarchi, Ferdinando L; Ray, Matthew; Cooney, Timothy

    2016-12-01

    Living wills are a form of advance directive that helps to protect patient autonomy. They are frequently encountered in the conduct of medicine. Because of their impact on care, it is important to understand the adequacy of current medical school training in preparing physicians to interpret these directives. Between April and August 2011, third- and fourth-year medical students participated in an internet survey involving the interpretation of living wills. The survey presented a standard living will as a "stand-alone," a standard living will with the addition of an emergent clinical scenario, and then variations of the standard living will that included a code status designation ("DNR," "Full Code," or "Comfort Care"). For each version/scenario, respondents were asked to assign a code status and choose interventions based on the cases presented. Four hundred twenty-five students from medical schools throughout the country responded. The majority indicated they had received some form of advance directive training and understood the concept of code status and the term "DNR." Based on the stand-alone document, 15% of respondents correctly denoted "full code" as the appropriate code status; adding a clinical scenario yielded negligible improvement. When a code designation was added to the living will, correct code status responses ranged from 68% to 93%, whereas correct treatment decisions ranged from 18% to 78%. Previous training in advance directives had no impact on these results. Our data indicate that the majority of students failed to understand the key elements of a living will; adding a code status designation improved correct responses, with the exception of the term DNR. Misunderstanding of advance directives is a nationwide problem and jeopardizes patient safety. Medical school ethics curricula need to be improved to ensure competency in understanding advance directives.

  19. Probabilistic Evaluation of Advanced Ceramic Matrix Composite Structures

    NASA Technical Reports Server (NTRS)

    Abumeri, Galib H.; Chamis, Christos C.

    2003-01-01

    The objective of this report is to summarize the deterministic and probabilistic structural evaluation results of two structures made with advanced ceramic matrix composites (CMC): an internally pressurized tube and a uniformly loaded flange. The deterministic structural evaluation includes stress, displacement, and buckling analyses. It is carried out using the finite element code MHOST, developed for the 3-D inelastic analysis of structures that are made with advanced materials. The probabilistic evaluation is performed using the integrated probabilistic assessment of composite structures computer code IPACS. The effects of uncertainties in primitive variables related to the material, fabrication process, and loadings on the material properties and structural response behavior are quantified. The primitive variables considered are: thermo-mechanical properties of fiber and matrix, fiber and void volume ratios, use temperature, and pressure. The probabilistic structural analysis and probabilistic strength results are used by IPACS to perform reliability and risk evaluation of the two structures. The results show that the sensitivity information obtained for the two composite structures from the computational simulation can be used to alter the design process to meet desired service requirements. In addition to detailed probabilistic analysis of the two structures, the following were performed specifically on the CMC tube: (1) prediction of the failure load and the buckling load, (2) coupled non-deterministic multi-disciplinary structural analysis, and (3) demonstration that probabilistic sensitivities can be used to select a reduced set of design variables for optimization.
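    A crude sketch of this kind of uncertainty propagation, assuming independent normally distributed primitive variables and a closed-form response model (neither of which matches IPACS's actual machinery; the thin-wall hoop-stress model and its parameters below are illustrative assumptions):

```python
import random
import statistics

def propagate(model, var_specs, n_samples=5000, rng=None):
    """Monte Carlo stand-in for probabilistic response evaluation: sample
    each primitive variable from an independent normal distribution
    (mean, stdev) and collect the structural response statistics."""
    rng = rng or random.Random(0)
    out = []
    for _ in range(n_samples):
        sample = {k: rng.gauss(mu, sd) for k, (mu, sd) in var_specs.items()}
        out.append(model(**sample))
    return statistics.mean(out), statistics.stdev(out)
```

    Usage: for a hypothetical pressurized tube with hoop stress p*r/t, `propagate(lambda p, r, t: p * r / t, {"p": (10.0, 1.0), "r": (5.0, 0.01), "t": (0.1, 0.0001)})` returns a mean near 500 with scatter dominated by the pressure uncertainty, which is the kind of sensitivity ranking the abstract describes.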

  20. Advanced Shock Position Control for Mode Transition in a Turbine Based Combined Cycle Engine Inlet Model

    NASA Technical Reports Server (NTRS)

    Csank, Jeffrey T.; Stueber, Thomas J.

    2013-01-01

    A dual flow-path inlet system is being tested to evaluate methodologies for a Turbine Based Combined Cycle (TBCC) propulsion system to perform a controlled inlet mode transition. Prior to experimental testing, simulation models are used to test, debug, and validate potential control algorithms. One simulation package being used for testing is the High Mach Transient Engine Cycle Code simulation, known as HiTECC. This paper discusses the closed-loop control system, which utilizes a shock location sensor to improve inlet performance and operability. Even though the shock location feedback has a coarse resolution, the feedback allows for a reduction in steady-state error and, in some cases, better performance than previously proposed pressure-ratio-based methods. This paper demonstrates the design and benefits of implementing a proportional-integral controller, an H-Infinity based controller, and a disturbance-observer-based controller.
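    A minimal discrete-time proportional-integral controller of the kind evaluated in the paper can be sketched as below; the gains, time step, and single-integrator plant in the usage example are illustrative assumptions, not HiTECC values.

```python
class PIController:
    """Discrete-time proportional-integral controller: a minimal sketch of
    a shock-position loop. Gains and limits are illustrative assumptions."""

    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt      # accumulate for the I term
        return self.kp * error + self.ki * self.integral
```

    Closing the loop around a toy integrator plant (shock position responding directly to the actuator command) drives the steady-state error to zero, which is the benefit the integral term provides over pure proportional control.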

  1. Performance of JT-60SA divertor Thomson scattering diagnostics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kajita, Shin, E-mail: kajita.shin@nagoya-u.jp; Hatae, Takaki; Tojo, Hiroshi

    2015-08-15

    For the satellite tokamak JT-60 Super Advanced (JT-60SA), a divertor Thomson scattering measurement system is planned to be installed. In this study, we improved the design of the collection optics relative to the previous one, in which the solid angle of the collection optics was found to be very small, mainly because of poor accessibility to the measurement region. With the improvement, the solid angle was increased by up to approximately five times. To accurately assess the measurement performance, background noise was estimated using the plasma parameters of two typical discharges in JT-60SA calculated with the SONIC code. Moreover, the influence of the reflection of bremsstrahlung radiation by the wall is simulated using a ray-tracing simulation. The errors in the temperature and the density are assessed based on the simulation results for three typical fields of view.

  2. Performance of JT-60SA divertor Thomson scattering diagnostics.

    PubMed

    Kajita, Shin; Hatae, Takaki; Tojo, Hiroshi; Enokuchi, Akito; Hamano, Takashi; Shimizu, Katsuhiro; Kawashima, Hisato

    2015-08-01

    For the satellite tokamak JT-60 Super Advanced (JT-60SA), a divertor Thomson scattering measurement system is planned to be installed. In this study, we improved the design of the collection optics relative to the previous one, in which the solid angle of the collection optics was found to be very small, mainly because of poor accessibility to the measurement region. With the improvement, the solid angle was increased by up to approximately five times. To accurately assess the measurement performance, background noise was estimated using the plasma parameters of two typical discharges in JT-60SA calculated with the SONIC code. Moreover, the influence of the reflection of bremsstrahlung radiation by the wall is simulated using a ray-tracing simulation. The errors in the temperature and the density are assessed based on the simulation results for three typical fields of view.

  3. Advances in Black-Hole Mergers: Spins and Unequal Masses

    NASA Technical Reports Server (NTRS)

    Kelly, Bernard

    2007-01-01

    The last two years have seen incredible development in numerical relativity: from fractions of an orbit, evolutions of an equal-mass binary have reached multiple orbits, and convergent gravitational waveforms have been produced by several research groups and numerical codes. We are now able to move our attention from pure numerics to astrophysics, and address scenarios relevant to current and future gravitational-wave detectors. Over the last 12 months at NASA Goddard, we have extended the accuracy of our Hahndol code, and used it to move toward these goals. We have achieved high-accuracy simulations of black-hole binaries of low initial eccentricity, with enough orbits of inspiral before merger to allow us to produce hybrid waveforms that accurately reflect the entire lifetime of the BH binary. We are extending this work, looking at the effects of unequal masses and spins.

  4. The Particle Accelerator Simulation Code PyORBIT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorlov, Timofey V; Holmes, Jeffrey A; Cousineau, Sarah M

    2015-01-01

    The particle accelerator simulation code PyORBIT is presented. The structure, implementation, history, parallel and simulation capabilities, and future development of the code are discussed. The PyORBIT code is a new implementation and extension of the algorithms of the original ORBIT code that was developed for the Spallation Neutron Source accelerator at Oak Ridge National Laboratory. The PyORBIT code has a two-level structure. The upper level uses the Python programming language to control the flow of intensive calculations performed by the lower-level code implemented in C++. The parallel capabilities are based on MPI communications. PyORBIT is an open-source code accessible to the public through the Google Open Source Projects Hosting service.
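    The two-level structure can be illustrated with a toy: a Python driver orchestrating "kernel" objects that, in PyORBIT itself, would be compiled C++. The Drift element and the two-coordinate particle representation below are simplified assumptions for illustration.

```python
class Drift:
    """Stand-in for a compiled lattice-element kernel; in a code like
    PyORBIT the equivalent tracking loop runs in C++ for speed."""

    def __init__(self, length):
        self.length = length

    def track(self, bunch):
        for p in bunch:                  # p = [x, x'] transverse coordinates
            p[0] += self.length * p[1]   # drift: position advances by L * x'

def track_lattice(bunch, lattice):
    """Upper-level driver: Python controls the flow, kernels do the work."""
    for element in lattice:
        element.track(bunch)
    return bunch
```

    The design point is that new element types plug into the Python layer without touching the driver, while the per-particle arithmetic stays in the fast lower level.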

  5. Combustor Simulation

    NASA Technical Reports Server (NTRS)

    Norris, Andrew

    2003-01-01

    The goal was to perform a 3D simulation of the GE90 combustor as part of a full turbofan engine simulation. The requirements of high fidelity and fast turnaround time call for a massively parallel code. The National Combustion Code (NCC) was chosen for this task, as it supports up to 999 processors and includes state-of-the-art combustion models. Also required is the ability to take inlet conditions from the compressor code and give exit conditions to the turbine code.

  6. Investigation on the Capability of a Non Linear CFD Code to Simulate Wave Propagation

    DTIC Science & Technology

    2003-02-01

    de la Calzada, Pedro; Quintana, Pablo; Burgos, Manuel Antonio (ITP, S.A.)

    [...] mechanisms above presented, simulation of unsteady aerodynamics with linear and nonlinear CFD codes is an ongoing activity within the turbomachinery industry

  7. Software quality and process improvement in scientific simulation codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ambrosiano, J.; Webster, R.

    1997-11-01

    This report contains viewgraphs on the quest to develop better simulation code quality through process modeling and improvement. The study is based on the experience of the authors and interviews with ten subjects chosen from simulation code development teams at LANL. The study is descriptive rather than scientific.

  8. Real-time global MHD simulation of the solar wind interaction with the earth’s magnetosphere

    NASA Astrophysics Data System (ADS)

    Shimazu, H.; Kitamura, K.; Tanaka, T.; Fujita, S.; Nakamura, M. S.; Obara, T.

    2008-11-01

    We have developed a real-time global MHD (magnetohydrodynamics) simulation of the solar wind interaction with the earth’s magnetosphere. By adopting the real-time solar wind parameters and interplanetary magnetic field (IMF) observed routinely by the ACE (Advanced Composition Explorer) spacecraft, responses of the magnetosphere are calculated with an MHD code. The simulation is carried out routinely on the supercomputer system at the National Institute of Information and Communications Technology (NICT), Japan. The visualized images of the magnetic field lines around the earth, the pressure distribution on the meridian plane, and the conductivity of the polar ionosphere can be viewed on the web site (http://www2.nict.go.jp/y/y223/simulation/realtime/). The results show that various magnetospheric activities are generally reproduced qualitatively. They also give us information on how geomagnetic disturbances develop in the magnetosphere in relation to the ionosphere. From the viewpoint of space weather, the real-time simulation helps us to understand the overall state of the magnetosphere. To evaluate the simulation results, we compare the AE indices derived from the simulation and from observations. The simulation and observations agree well for quiet days and isolated substorm cases in general.

  9. Implementation of the DPM Monte Carlo code on a parallel architecture for treatment planning applications.

    PubMed

    Tyagi, Neelam; Bose, Abhijit; Chetty, Indrin J

    2004-09-01

    We have parallelized the Dose Planning Method (DPM), a Monte Carlo code optimized for radiotherapy class problems, on distributed-memory processor architectures using the Message Passing Interface (MPI). Parallelization has been investigated on a variety of parallel computing architectures at the University of Michigan Center for Advanced Computing, with respect to efficiency and speedup as a function of the number of processors. We have integrated the parallel pseudo-random number generator from the Scalable Parallel Pseudo-Random Number Generator (SPRNG) library to run with the parallel DPM. The Intel cluster, consisting of 800 MHz Intel Pentium III processors, shows an almost linear speedup up to 32 processors for simulating 1×10^8 or more particles. The speedup results are nearly linear on an Athlon cluster (up to 24 processors based on availability), which consists of 1.8 GHz+ Advanced Micro Devices (AMD) Athlon processors, on increasing the problem size up to 8×10^8 histories. For a smaller number of histories (1×10^8), the reduction of efficiency with the Athlon cluster (down to 83.9% with 24 processors) occurs because the processing time required to simulate 1×10^8 histories is less than the time associated with interprocessor communication. A similar trend was seen with the Opteron cluster (consisting of 1400 MHz, 64-bit AMD Opteron processors) on increasing the problem size. Because of the 64-bit architecture, Opteron processors are capable of storing and processing instructions at a faster rate and hence are faster than the 32-bit Athlon processors. We have validated our implementation with an in-phantom dose calculation study using a parallel pencil monoenergetic electron beam of 20 MeV energy. The phantom consists of layers of water, lung, bone, aluminum, and titanium. The agreement in the central-axis depth dose curves and profiles at different depths shows that the serial and parallel codes are equivalent in accuracy.
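    The speedup and efficiency figures quoted above follow the standard definitions, sketched below together with a toy runtime model that reproduces the reported effect of a fixed communication cost on small problems; the numbers in the model are illustrative assumptions, not the paper's timings.

```python
def speedup(t_serial, t_parallel):
    """Parallel speedup S = T_1 / T_N."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_procs):
    """Parallel efficiency: fraction of ideal linear speedup achieved, S / N."""
    return speedup(t_serial, t_parallel) / n_procs

def model_runtime(work, n_procs, comm_overhead):
    """Toy model: perfectly divisible compute plus a fixed communication
    cost -- enough to show why small history counts lose efficiency."""
    return work / n_procs + comm_overhead
```

    With a fixed overhead, a larger problem (more histories) amortizes communication and recovers efficiency, matching the trend reported for the Athlon and Opteron clusters.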

  10. FISPACT-II: An Advanced Simulation System for Activation, Transmutation and Material Modelling

    NASA Astrophysics Data System (ADS)

    Sublet, J.-Ch.; Eastwood, J. W.; Morgan, J. G.; Gilbert, M. R.; Fleming, M.; Arter, W.

    2017-01-01

    Fispact-II is a code system and library database for modelling activation-transmutation processes, depletion-burn-up, time dependent inventory and radiation damage source terms caused by nuclear reactions and decays. The Fispact-II code, written in object-style Fortran, follows the evolution of material irradiated by neutrons, alphas, gammas, protons, or deuterons, and provides a wide range of derived radiological output quantities to satisfy most needs for nuclear applications. It can be used with any ENDF-compliant group library data for nuclear reactions, particle-induced and spontaneous fission yields, and radioactive decay (including but not limited to TENDL-2015, ENDF/B-VII.1, JEFF-3.2, JENDL-4.0u, CENDL-3.1 processed into fine-group-structure files, GEFY-5.2 and UKDD-16), as well as resolved and unresolved resonance range probability tables for self-shielding corrections and updated radiological hazard indices. The code has many novel features including: extension of the energy range up to 1 GeV; additional neutron physics including self-shielding effects, temperature dependence, thin and thick target yields; pathway analysis; and sensitivity and uncertainty quantification and propagation using full covariance data. The latest ENDF libraries such as TENDL encompass thousands of target isotopes. Nuclear data libraries for Fispact-II are prepared from these using processing codes PREPRO, NJOY and CALENDF. These data include resonance parameters, cross sections with covariances, probability tables in the resonance ranges, PKA spectra, kerma, dpa, gas and radionuclide production and energy-dependent fission yields, supplemented with all 27 decay types. All such data for the five most important incident particles are provided in evaluated data tables. The Fispact-II simulation software is described in detail in this paper, together with the nuclear data libraries. 
The Fispact-II system also includes several utility programs for code-use optimisation, visualisation and production of secondary radiological quantities. Included in the paper are summaries of results from the suite of verification and validation reports available with the code.
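    The inventory equations a code of this kind solves reduce, for a two-member chain, to the classic Bateman solution. The sketch below implements that analytic special case in Python; it is a pedagogical toy, not FISPACT-II's stiff multi-nuclide solver, and the decay constants in the usage example are arbitrary.

```python
import math

def bateman_two_step(n1_0, lam1, lam2, t):
    """Two-member decay chain A -> B -> (stable): analytic Bateman solution
    of dN1/dt = -lam1*N1 and dN2/dt = lam1*N1 - lam2*N2 with N2(0) = 0.
    A toy instance of the coupled inventory ODEs activation codes solve
    for thousands of nuclides at once."""
    n1 = n1_0 * math.exp(-lam1 * t)
    n2 = (n1_0 * lam1 / (lam2 - lam1)
          * (math.exp(-lam1 * t) - math.exp(-lam2 * t)))
    return n1, n2
```

    The analytic form is handy for verifying a numerical inventory solver: its time derivative must satisfy the governing ODE at every instant.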

  11. Advanced propeller noise prediction in the time domain

    NASA Technical Reports Server (NTRS)

    Farassat, F.; Dunn, M. H.; Spence, P. L.

    1992-01-01

    The time domain code ASSPIN gives acousticians a powerful technique for advanced propeller noise prediction. Except for nonlinear effects, the code uses exact solutions of the Ffowcs Williams-Hawkings equation with exact blade geometry and kinematics. With the inclusion of nonaxial inflow, periodic loading noise, and adaptive time steps to accelerate execution, the development of the code is complete.

  12. Simulations of Cavitating Cryogenic Inducers

    NASA Technical Reports Server (NTRS)

    Dorney, Dan (Technical Monitor); Hosangadi, Ashvin; Ahuja, Vineet; Ungewitter, Ronald J.

    2004-01-01

    Simulations of cavitating turbopump inducers at their design flow rate are presented. Results over a broad range of Nss numbers, extending from single-phase flow conditions through the critical head breakdown point, are discussed. The flow characteristics and performance of a subscale geometry designed for water testing are compared with the full-scale configuration that employs LOX. In particular, thermal depression effects arising from cavitation in cryogenic fluids are identified and their impact on the suction performance of the inducer quantified. The simulations have been performed using the CRUNCH CFD® code, which has a generalized multi-element unstructured framework suitable for turbomachinery applications. An advanced multi-phase formulation for cryogenic fluids that models temperature depression and real fluid property variations is employed. The formulation has been extensively validated for both liquid nitrogen and liquid hydrogen by simulating the experiments of Hord on hydrofoils; excellent estimates of the leading-edge temperature and pressure depression were obtained, while the comparisons in the cavity closure region were reasonable.

  13. Double-null divertor configuration discharge and disruptive heat flux simulation using TSC on EAST

    NASA Astrophysics Data System (ADS)

    Bo, SHI; Jinhong, YANG; Cheng, YANG; Desheng, CHENG; Hui, WANG; Hui, ZHANG; Haifei, DENG; Junli, QI; Xianzu, GONG; Weihua, WANG

    2018-07-01

    The tokamak simulation code (TSC) is employed to simulate the complete evolution of a disruptive discharge in the Experimental Advanced Superconducting Tokamak. The multiplication factor of the anomalous transport coefficient was adjusted to model the major disruptive discharge with double-null divertor configuration based on shot 61916. The real-time feedback control system for the plasma displacement was employed. Modeling results for the evolution of the poloidal field coil currents, the plasma current, the major radius, and the plasma configuration all show agreement with experimental measurements. Results from the simulation show that during the disruption, a heat flux of about 8 MW m^-2 flows to the upper divertor target plate and about 6 MW m^-2 flows to the lower divertor target plate. Computations predict that different amounts of heat flux on the divertor target plate can result from adjusting the multiplication factor of the anomalous transport coefficient. This shows that TSC has high flexibility and predictive capability.

  14. Tackling sampling challenges in biomolecular simulations.

    PubMed

    Barducci, Alessandro; Pfaendtner, Jim; Bonomi, Massimiliano

    2015-01-01

    Molecular dynamics (MD) simulations are a powerful tool to give an atomistic insight into the structure and dynamics of proteins. However, the time scales accessible in standard simulations, which often do not match those in which interesting biological processes occur, limit their predictive capabilities. Many advanced sampling techniques have been proposed over the years to overcome this limitation. This chapter focuses on metadynamics, a method based on the introduction of a time-dependent bias potential to accelerate sampling and recover equilibrium properties of a few descriptors that are able to capture the complexity of a process at a coarse-grained level. The theory of metadynamics and its combination with other popular sampling techniques such as the replica exchange method is briefly presented. Practical applications of these techniques to the study of the Trp-Cage miniprotein folding are also illustrated. The examples contain a guide for performing these calculations with PLUMED, a plugin to perform enhanced sampling simulations in combination with many popular MD codes.
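    The core of metadynamics, a history-dependent bias built from Gaussians deposited along a collective variable, can be sketched in a 1D deterministic toy; the hill height, width, deposition stride, and overdamped dynamics below are illustrative assumptions, not PLUMED defaults.

```python
import math

def bias(x, hills, width=0.2, height=0.5):
    """Metadynamics bias: a sum of Gaussian hills deposited at previously
    visited values of the collective variable x."""
    return sum(height * math.exp(-(x - c) ** 2 / (2 * width ** 2))
               for c in hills)

def metadynamics_walk(x0, steps, force, dt=0.01, stride=10):
    """Overdamped, noise-free dynamics on force(x) plus the history-dependent
    bias force -- a 1D toy of the method, not the PLUMED implementation."""
    x, hills = x0, []
    for step in range(steps):
        if step % stride == 0:
            hills.append(x)             # deposit a new hill at the walker
        h = 1e-4                        # numerical gradient of the bias
        fbias = -(bias(x + h, hills) - bias(x - h, hills)) / (2 * h)
        x += dt * (force(x) + fbias)
    return x, hills
```

    On a double-well potential V = (x^2 - 1)^2 (so force(x) = -4x(x^2 - 1)), the accumulating hills progressively flatten the well the walker occupies, which is how the method accelerates barrier crossing.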

  15. Advances in continuum kinetic and gyrokinetic simulations of turbulence on open-field line geometries

    NASA Astrophysics Data System (ADS)

    Hakim, Ammar; Shi, Eric; Juno, James; Bernard, Tess; Hammett, Greg

    2017-10-01

    For weakly collisional (or collisionless) plasmas, kinetic effects are required to capture the physics of micro-turbulence. We have implemented solvers for kinetic and gyrokinetic equations in the computational plasma physics framework, Gkeyll. We use a version of discontinuous Galerkin scheme that conserves energy exactly. Plasma sheaths are modeled with novel boundary conditions. Positivity of distribution functions is maintained via a reconstruction method, allowing robust simulations that continue to conserve energy even with positivity limiters. We have performed a large number of benchmarks, verifying the accuracy and robustness of our code. We demonstrate the application of our algorithm to two classes of problems (a) Vlasov-Maxwell simulations of turbulence in a magnetized plasma, applicable to space plasmas; (b) Gyrokinetic simulations of turbulence in open-field-line geometries, applicable to laboratory plasmas. Supported by the Max-Planck/Princeton Center for Plasma Physics, the SciDAC Center for the Study of Plasma Microturbulence, and DOE Contract DE-AC02-09CH11466.

  16. Parallel Discrete Molecular Dynamics Simulation With Speculation and In-Order Commitment*†

    PubMed Central

    Khan, Md. Ashfaquzzaman; Herbordt, Martin C.

    2011-01-01

    Discrete molecular dynamics simulation (DMD) uses simplified and discretized models enabling simulations to advance by event rather than by timestep. DMD is an instance of discrete event simulation and so is difficult to scale: even in this multi-core era, all reported DMD codes are serial. In this paper we discuss the inherent difficulties of scaling DMD and present our method of parallelizing DMD through event-based decomposition. Our method is microarchitecture inspired: speculative processing of events exposes parallelism, while in-order commitment ensures correctness. We analyze the potential of this parallelization method for shared-memory multiprocessors. Achieving scalability required extensive experimentation with scheduling and synchronization methods to mitigate serialization. The speed-up achieved for a variety of system sizes and complexities is nearly 6× on an 8-core and over 9× on a 12-core processor. We present and verify analytical models that account for the achieved performance as a function of available concurrency and architectural limitations. PMID:21822327
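    The serial event loop that the paper parallelizes can be sketched for equal-mass hard points on a line, where an elastic collision simply swaps velocities; the speculation and in-order-commit machinery would sit on top of exactly this kind of priority queue. The 1D geometry is an illustrative simplification, not the paper's model.

```python
import heapq

def simulate_dmd(positions, velocities, t_end):
    """Serial event-driven (discrete) molecular dynamics for equal-mass hard
    points on a line: advance collision-to-collision, not by timestep."""
    n = len(positions)
    t = 0.0
    events = []  # heap of (time, i): particle i collides with particle i+1

    def schedule(i, now):
        dv = velocities[i] - velocities[i + 1]
        if dv > 0:                             # neighbors are approaching
            gap = positions[i + 1] - positions[i]
            heapq.heappush(events, (now + gap / dv, i))

    for i in range(n - 1):
        schedule(i, 0.0)
    while events:
        t_ev, i = heapq.heappop(events)
        if t_ev > t_end:
            break
        # stale-event check: re-predict with current velocities
        dv = velocities[i] - velocities[i + 1]
        gap = positions[i + 1] - positions[i] - dv * (t_ev - t)
        if dv <= 0 or abs(gap) > 1e-9:
            continue
        for j in range(n):                     # drift everyone to event time
            positions[j] += velocities[j] * (t_ev - t)
        t = t_ev
        # equal-mass 1D elastic collision = swap velocities
        velocities[i], velocities[i + 1] = velocities[i + 1], velocities[i]
        for j in (i - 1, i, i + 1):            # re-predict affected pairs
            if 0 <= j < n - 1:
                schedule(j, t)
    for j in range(n):                         # drift to the final time
        positions[j] += velocities[j] * (t_end - t)
    return positions, velocities
```

    Scaling this is hard precisely because each processed event can invalidate previously scheduled ones; the paper's approach processes events speculatively in parallel and commits them in timestamp order to preserve correctness.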

  17. Parallel Discrete Molecular Dynamics Simulation With Speculation and In-Order Commitment.

    PubMed

    Khan, Md Ashfaquzzaman; Herbordt, Martin C

    2011-07-20

    Discrete molecular dynamics simulation (DMD) uses simplified and discretized models enabling simulations to advance by event rather than by timestep. DMD is an instance of discrete event simulation and so is difficult to scale: even in this multi-core era, all reported DMD codes are serial. In this paper we discuss the inherent difficulties of scaling DMD and present our method of parallelizing DMD through event-based decomposition. Our method is microarchitecture inspired: speculative processing of events exposes parallelism, while in-order commitment ensures correctness. We analyze the potential of this parallelization method for shared-memory multiprocessors. Achieving scalability required extensive experimentation with scheduling and synchronization methods to mitigate serialization. The speed-up achieved for a variety of system sizes and complexities is nearly 6× on an 8-core and over 9× on a 12-core processor. We present and verify analytical models that account for the achieved performance as a function of available concurrency and architectural limitations.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zakharov, Leonid E.; Li, Xujing

    This paper formulates Tokamak Magneto-Hydrodynamics (TMHD), initially outlined by X. Li and L.E. Zakharov [Plasma Science and Technology, accepted, ID:2013-257 (2013)], for proper simulations of macroscopic plasma dynamics. The simplest set of magneto-hydrodynamic equations, sufficient for disruption modeling and extendable to more refined physics, is explained in detail. First, TMHD introduces to 3-D simulations the Reference Magnetic Coordinates (RMC), which are aligned with the magnetic field in the best possible way. The numerical implementation of RMC is adaptive grids. Being consistent with the high anisotropy of the tokamak plasma, RMC allow simulations at realistic, very high plasma electric conductivity. Second, TMHD splits the equation of motion into an equilibrium equation and a plasma advancing equation. This resolves the four-decade-old problem of Courant limitations on the time step in existing, plasma-inertia-driven numerical codes. The splitting allows disruption simulations on a relatively slow time scale in comparison with the fast time of ideal MHD instabilities. A new, efficient numerical scheme is proposed for TMHD.

  19. Production code control system for hydrodynamics simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slone, D.M.

    1997-08-18

    We describe how the Production Code Control System (PCCS), written in Perl, has been used to control and monitor the execution of a large hydrodynamics simulation code in a production environment. We have been able to integrate new, disparate, and often independent applications into the PCCS framework without the need to modify any of our existing application codes. Both users and code developers see a consistent interface to the simulation code and associated applications regardless of the physical platform, whether an MPP, SMP, server, or desktop workstation. We also describe our use of Perl to develop a configuration management system for the simulation code, as well as a code usage database and report generator. We used Perl to write a backplane that allows us to plug in preprocessors, the hydrocode, postprocessors, visualization tools, persistent storage requests, and other codes. We need only teach PCCS a minimal amount about any new tool or code to essentially plug it in and make it usable to the hydrocode. PCCS has made it easier to link together disparate codes, since using Perl has removed the need to learn the idiosyncrasies of system or RPC programming. The text handling in Perl makes it easy to teach PCCS about new codes, or changes to existing codes.
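    The backplane idea can be sketched (here in Python rather than Perl): register independent tools behind one uniform interface and chain them around the simulation, so a new tool plugs in without touching existing codes. The stage names in the usage example are hypothetical.

```python
class Backplane:
    """Toy version of the PCCS backplane idea: independent tools registered
    behind a uniform callable interface and chained in order, so new
    pre/post-processors plug in without modifying existing ones."""

    def __init__(self):
        self.stages = []

    def register(self, name, tool):
        """tool: any callable(state) -> state."""
        self.stages.append((name, tool))

    def run(self, state):
        for name, tool in self.stages:
            state = tool(state)        # uniform hand-off between tools
        return state
```

    Usage: `bp.register("preprocess", ...)`, `bp.register("hydrocode", ...)`, then `bp.run(initial_state)`; the driver needs to know nothing about each tool beyond the shared interface, which is the property the abstract attributes to PCCS.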

  20. Simulation of spacecraft attitude dynamics using TREETOPS and model-specific computer codes

    NASA Technical Reports Server (NTRS)

    Cochran, John E.; No, T. S.; Fitz-Coy, Norman G.

    1989-01-01

    The simulation of spacecraft attitude dynamics and control using the generic, multi-body code called TREETOPS and other codes written especially to simulate particular systems is discussed. Differences in the methods used to derive the equations of motion (Kane's method for TREETOPS; the Lagrangian and Newton-Euler methods, respectively, for the other two codes) are considered. Simulation results from the TREETOPS code are compared with those from the other two codes for two example systems. One system is a chain of rigid bodies; the other consists of two rigid bodies attached to a flexible base body. Since the computer codes were developed independently, consistent results serve as a verification of the correctness of all the programs. Differences in the results are discussed. Results for the two-rigid-body, one-flexible-body system are also useful as information on multi-body, flexible, pointing payload dynamics.

  1. Proceedings of the OECD/CSNI workshop on transient thermal-hydraulic and neutronic codes requirements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ebert, D.

    1997-07-01

    This is a report on the CSNI Workshop on Transient Thermal-Hydraulic and Neutronic Codes Requirements held at Annapolis, Maryland, USA, November 5-8, 1996. This experts' meeting drew 140 participants from 21 countries; 65 invited papers were presented. The meeting was divided into five areas: (1) current and prospective plans for thermal-hydraulic code development; (2) current and anticipated uses of thermal-hydraulic codes; (3) advances in modeling of thermal-hydraulic phenomena and associated additional experimental needs; (4) numerical methods in multi-phase flows; and (5) programming languages, code architectures, and user interfaces. The workshop consensus identified the following important action items to be addressed by the international community in order to maintain and improve calculational capability: (a) preserve current code expertise and institutional memory; (b) preserve the ability to use the existing investment in plant transient analysis codes; (c) maintain essential experimental capabilities; (d) develop advanced measurement capabilities to support future code validation work; (e) integrate existing analytical capabilities so as to improve performance and reduce operating costs; (f) exploit proven advances in code architecture, numerics, graphical user interfaces, and modularization in order to improve code performance and scrutability; and (g) more effectively utilize user experience in modifying and improving the codes.

  2. Application of advanced computational codes in the design of an experiment for a supersonic throughflow fan rotor

    NASA Technical Reports Server (NTRS)

    Wood, Jerry R.; Schmidt, James F.; Steinke, Ronald J.; Chima, Rodrick V.; Kunik, William G.

    1987-01-01

    Increased emphasis on sustained supersonic or hypersonic cruise has revived interest in the supersonic throughflow fan as a possible component in advanced propulsion systems. Use of a fan that can operate with a supersonic inlet axial Mach number is attractive from the standpoint of reducing the inlet losses incurred in diffusing the flow from a supersonic flight Mach number to a subsonic one at the fan face. The design of the experiment using advanced computational codes to calculate the components required is described. The rotor was designed using existing turbomachinery design and analysis codes modified to handle fully supersonic axial flow through the rotor. A two-dimensional axisymmetric throughflow design code plus a blade element code were used to generate fan rotor velocity diagrams and blade shapes. A quasi-three-dimensional, thin shear layer Navier-Stokes code was used to assess the performance of the fan rotor blade shapes. The final design was stacked and checked for three-dimensional effects using a three-dimensional Euler code interactively coupled with a two-dimensional boundary layer code. The nozzle design in the expansion region was analyzed with a three-dimensional parabolized viscous code which corroborated the results from the Euler code. A translating supersonic diffuser was designed using these same codes.

  3. Zebra: An advanced PWR lattice code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao, L.; Wu, H.; Zheng, Y.

    2012-07-01

    This paper presents an overview of ZEBRA, an advanced PWR lattice code developed at the NECP laboratory at Xi'an Jiaotong University. The multi-group cross-section library is generated from the ENDF/B-VII library by NJOY, and the 361-group SHEM structure is employed. The resonance calculation module is based on the sub-group method. The transport solver is the Auto-MOC code, a self-developed code based on the Method of Characteristics and customization of the AutoCAD software. The whole code is organized in a modular software structure. Numerical results obtained during validation demonstrate that the code has good precision and high efficiency. (authors)

  4. On the effect of galactic outflows in cosmological simulations of disc galaxies

    NASA Astrophysics Data System (ADS)

    Valentini, Milena; Murante, Giuseppe; Borgani, Stefano; Monaco, Pierluigi; Bressan, Alessandro; Beck, Alexander M.

    2017-09-01

    We investigate the impact of galactic outflow modelling on the formation and evolution of a disc galaxy by performing a suite of cosmological simulations with zoomed-in initial conditions (ICs) of a Milky Way-sized halo. We examine how sensitive the general properties of the simulated galaxy are to the way in which stellar-feedback-triggered outflows are implemented, keeping the ICs, simulation code, and star formation (SF) model fixed. We present simulations based on a version of the gadget3 code in which our sub-resolution model is coupled with an advanced implementation of smoothed particle hydrodynamics that ensures more accurate fluid sampling and an improved description of gas mixing and hydrodynamical instabilities. We quantify the strong interplay between the adopted hydrodynamic scheme and the sub-resolution model describing SF and feedback. We consider four different galactic outflow models, including the one introduced by Dalla Vecchia & Schaye (2012) and a scheme inspired by the Springel & Hernquist (2003) model. We find that the sub-resolution prescriptions adopted to generate galactic outflows are the main factor shaping the stellar disc component at low redshift. The key requirement for a feedback model to produce a disc-dominated galaxy is the ability to regulate the high-redshift SF (responsible for the formation of the bulge component), the cosmological infall of gas from the large-scale environment, and gas fall-back within the galactic radius at low redshift, in order to avoid an excessively high SF rate at z = 0.

  5. An Advanced N-body Model for Interacting Multiple Stellar Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brož, Miroslav

    We construct an advanced model for interacting multiple stellar systems in which we compute all trajectories with a numerical N-body integrator, namely the Bulirsch–Stoer integrator from the SWIFT package. We can then derive various observables: astrometric positions, radial velocities, minima timings (TTVs), eclipse durations, interferometric visibilities, closure phases, synthetic spectra, spectral energy distributions, and even complete light curves. For the latter, we use a modified version of the Wilson–Devinney code, in which the instantaneous true phase and inclination of the eclipsing binary are governed by the N-body integration. If all of these types of observations are at one's disposal, a joint χ² metric and an optimization algorithm (a simplex or simulated annealing) allow one to search for a global minimum and construct very robust models of stellar systems. At the same time, our N-body model is free from artifacts that may arise if mutual gravitational interactions among all components are not self-consistently accounted for. Finally, we present a number of examples showing dynamical effects that can be studied with our code, and we discuss how systematic errors may affect the results (and how to prevent this from happening).
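
    The joint χ² idea above (one metric over heterogeneous observables) can be illustrated with a toy sketch. Everything below is invented for illustration: a one-parameter "system", two synthetic data types, and a coarse grid scan standing in for the simplex or simulated-annealing optimizers mentioned in the abstract.

```python
import numpy as np

# Toy sketch (not the authors' code): combine several observable types,
# each with its own uncertainty and forward model, into one joint chi^2.
def chi2(model, datasets):
    """Joint chi^2 over several (obs, sigma, predict) data sets."""
    total = 0.0
    for obs, sigma, predict in datasets:
        total += np.sum(((obs - predict(model)) / sigma) ** 2)
    return total

# One free parameter m; two fake observable types with small noise.
rng = np.random.default_rng(0)
true_m = 1.3
rv_obs = true_m * np.arange(5) + rng.normal(0, 0.01, 5)   # "radial velocities"
tt_obs = true_m**2 + rng.normal(0, 0.01, 3)               # "timing-like" data

datasets = [
    (rv_obs, 0.01, lambda m: m * np.arange(5)),
    (tt_obs, 0.01, lambda m: np.full(3, m**2)),
]

# Coarse scan in place of a simplex; the joint metric pins down m.
grid = np.linspace(0.5, 2.0, 1501)
best = grid[np.argmin([chi2(m, datasets) for m in grid])]
print(round(best, 2))  # ≈ 1.3
```

    The benefit of the joint metric is exactly the one the abstract claims: each data type constrains the parameters differently, and summing the per-dataset χ² terms lets all of them pull on the fit at once.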

  6. Comprehensive approach to fast ion measurements in the beam-driven FRC

    NASA Astrophysics Data System (ADS)

    Magee, Richard; Smirnov, Artem; Onofri, Marco; Dettrick, Sean; Korepanov, Sergey; Knapp, Kurt; the TAE Team

    2015-11-01

    The C-2U experiment combines tangential neutral beam injection, edge biasing, and advanced recycling control to explore the sustainment of field-reversed configuration (FRC) plasmas. To study fast ion confinement in such advanced, beam-driven FRCs, a synergistic technique was developed that relies on measurements of the DD fusion reaction products and the hybrid code Q2D, which treats the plasma as a fluid and the fast ions kinetically. Data from calibrated neutron and proton detectors are used in a complementary fashion to constrain the simulations: neutron detectors measure the volume-integrated fusion rate to constrain the total number of fast ions, while proton detectors with multiple lines of sight through the plasma constrain the axial profile of fast ions. One application of this technique is the diagnosis of fast ion energy transfer and pitch angle scattering. A parametric numerical study was conducted, in which additional ad hoc loss and scattering terms of varying strengths were introduced in the code and constrained with measurement. Initial results indicate that the energy transfer is predominantly classical, while, in some cases, non-classical pitch angle scattering can be observed.

  7. Warthog: Coupling Status Update

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, Shane W. D.; Reardon, Bradley T.

    The Warthog code was developed to couple codes built in both the Multi-Physics Object-Oriented Simulation Environment (MOOSE) from Idaho National Laboratory (INL) and SHARP from Argonne National Laboratory (ANL). The initial phase of this work focused on coupling the neutronics code PROTEUS with the fuel performance code BISON. The main technical challenge involves mapping the power density solution determined by PROTEUS to the fuel in BISON. This is a challenge because PROTEUS uses the MOAB mesh format, whereas BISON, like all other MOOSE codes, uses the libMesh format. When coupling the different codes, one must consider that Warthog is a lightweight, MOOSE-based program that uses the Data Transfer Kit (DTK) to transfer data between the various mesh types. Users set up inputs for the codes they want to run, and Warthog transfers the data between them. Currently Warthog supports XSProc from SCALE or the Sub-Group Application Programming Interface (SGAPI) in PROTEUS for generating cross sections, and it supports arbitrary geometries using PROTEUS and BISON. DTK transfers power densities and temperatures between the codes where the domains overlap. In the past fiscal year (FY), much work has gone into demonstrating two-way coupling for simple pin cells of various materials. XSProc was used to calculate the cross sections, which were then passed to PROTEUS in an external file. PROTEUS calculates the fission/power density, and Warthog uses DTK to pass this information to BISON, where it is used as the heat source. BISON then calculates the temperature profile of the pin cell and sends it back to XSProc to obtain temperature-corrected cross sections. This process is repeated until the convergence criterion (a tolerance on the BISON solve, or a maximum number of time steps) is reached. Models have been constructed and run for both uranium oxide and uranium silicide fuels. These models demonstrate a clear difference in power shape that is not accounted for in a stand-alone BISON run. Future work involves improving the user interface (UI), likely through integration with the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Workbench; automating input creation would also ease the user experience. The next priority is to continue coupling work with other codes in the SHARP package. Efforts on other projects include work to couple the Nek5000 thermal-hydraulics code to MOOSE, but this is in the preliminary stages.
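
    The two-way coupling loop described above is essentially a Picard (fixed-point) iteration between a neutronics solve and a fuel-performance solve. The sketch below shows only the control flow; the two single-line "physics" models are invented stand-ins for PROTEUS and BISON, not their actual behaviour.

```python
# Illustrative Picard iteration of the kind Warthog drives: a neutronics
# solve gives a power density from a temperature (Doppler-like feedback),
# a fuel-performance solve gives a temperature from that power, and the
# pair repeats until the fields stop changing. Toy models, invented here.
def neutronics(T):
    # power falls as the fuel heats up (toy feedback model)
    return 100.0 / (1.0 + 0.002 * (T - 300.0))

def fuel_performance(q):
    # temperature rises linearly with the heat source (toy model)
    return 300.0 + 2.0 * q

T, tol = 300.0, 1e-8
for it in range(100):
    q = neutronics(T)          # "PROTEUS": power from temperature
    T_new = fuel_performance(q)  # "BISON": temperature from power
    if abs(T_new - T) < tol:   # convergence criterion on the field
        break
    T = T_new
print(it < 99, round(T, 1))
```

    Because the feedback is negative (hotter fuel produces less power), the map is a contraction and the loop settles quickly; a stand-alone fuel-performance run with a fixed power would miss exactly this self-consistent adjustment, which is the difference the coupled pin-cell models above exhibit.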

  8. Results of Two-Stage Light-Gas Gun Development Efforts and Hypervelocity Impact Tests of Advanced Thermal Protection Materials

    NASA Technical Reports Server (NTRS)

    Cornelison, C. J.; Watts, Eric T.

    1998-01-01

    Gun development efforts to increase the launching capabilities of the NASA Ames 0.5-inch two-stage light-gas gun have been investigated. A gun performance simulation code was used to guide initial parametric variations and hardware modifications, in order to increase the projectile impact velocity capability to 8 km/s while maintaining acceptable levels of gun barrel erosion and gun component stresses. Concurrent with this facility development effort, a hypervelocity impact testing series in support of the X-33/RLV program was performed in collaboration with Rockwell International. Specifically, advanced thermal protection system materials were impacted with aluminum spheres to simulate impacts with on-orbit space debris. Materials tested included AETB-8, AETB-12, AETB-20, and SIRCA-25 tiles, tailorable advanced blanket insulation (TABI), and high temperature AFRSI (HTA). The ballistic limit for several Thermal Protection System (TPS) configurations was investigated to determine particle sizes which cause threshold TPS/structure penetration. Crater depth in tiles was measured as a function of impact particle size. The relationship between coating type and crater morphology was also explored. Data obtained during this test series were used to perform a preliminary analysis of the risks to a typical orbital vehicle from the meteoroid and space debris environment.

  9. HPC Annual Report 2017

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dennig, Yasmin

    Sandia National Laboratories has a long history of significant contributions to the high performance computing community and industry. Our innovative computer architectures allowed the United States to become the first to break the teraFLOP barrier, propelling us into the international spotlight. Our advanced simulation and modeling capabilities have been integral in high-consequence US operations such as Operation Burnt Frost. Strong partnerships with industry leaders, such as Cray, Inc. and Goodyear, have enabled them to leverage our high performance computing (HPC) capabilities to gain a tremendous competitive edge in the marketplace. As part of our continuing commitment to providing modern computing infrastructure and systems in support of Sandia missions, we made a major investment in expanding Building 725 to serve as the new home of HPC systems at Sandia. Work is expected to be completed in 2018 and will result in a modern facility of approximately 15,000 square feet of computer center space. The facility will be ready to house the newest National Nuclear Security Administration/Advanced Simulation and Computing (NNSA/ASC) prototype platform being acquired by Sandia, with delivery in late 2019 or early 2020. This new system will enable continuing advances by Sandia science and engineering staff in the areas of operating system R&D, operational cost effectiveness (power and innovative cooling technologies), user environment, and application code performance.

  10. TH-E-18A-01: Developments in Monte Carlo Methods for Medical Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Badal, A; Zbijewski, W; Bolch, W

    Monte Carlo simulation methods are widely used in medical physics research and are starting to be implemented in clinical applications such as radiation therapy planning systems. Monte Carlo simulations offer the capability to accurately estimate quantities of interest that are challenging to measure experimentally, while taking into account the realistic anatomy of an individual patient. Traditionally, practical application of Monte Carlo simulation codes in diagnostic imaging was limited by the need for large computational resources or long execution times. However, recent advancements in high-performance computing hardware, combined with a new generation of Monte Carlo simulation algorithms and novel postprocessing methods, are allowing for the computation of relevant imaging parameters of interest, such as patient organ doses and scatter-to-primary ratios in radiographic projections, in just a few seconds using affordable computational resources. Programmable Graphics Processing Units (GPUs), for example, provide a convenient, affordable platform for parallelized Monte Carlo executions that yield simulation times on the order of 10⁷ x-rays/s. Even with GPU acceleration, however, Monte Carlo simulation times can be prohibitive for routine clinical practice. To reduce simulation times further, variance reduction techniques can be used to alter the probabilistic models underlying the x-ray tracking process, resulting in lower variance in the results without biasing the estimates. Other complementary strategies for further reductions in computation time are denoising of the Monte Carlo estimates and estimating (scoring) the quantity of interest at a sparse set of sampling locations (e.g., at a small number of detector pixels in a scatter simulation) followed by interpolation.
Beyond reduction of the computational resources required for performing Monte Carlo simulations in medical imaging, the use of accurate representations of patient anatomy is crucial to the virtual generation of medical images and accurate estimation of radiation dose and other imaging parameters. For this, detailed computational phantoms of the patient anatomy must be utilized and implemented within the radiation transport code. Computational phantoms presently come in one of three format types, and in one of four morphometric categories. Format types include stylized (mathematical equation-based), voxel (segmented CT/MR images), and hybrid (NURBS and polygon mesh surfaces). Morphometric categories include reference (small library of phantoms by age at 50th height/weight percentile), patient-dependent (larger library of phantoms at various combinations of height/weight percentiles), patient-sculpted (phantoms altered to match the patient's unique outer body contour), and finally, patient-specific (an exact representation of the patient with respect to both body contour and internal anatomy). The existence and availability of these phantoms represents a very important advance for the simulation of realistic medical imaging applications using Monte Carlo methods. New Monte Carlo simulation codes need to be thoroughly validated before they can be used to perform novel research. Ideally, the validation process would involve comparison of results with those of an experimental measurement, but accurate replication of experimental conditions can be very challenging. It is very common to validate new Monte Carlo simulations by replicating previously published simulation results of similar experiments. This process, however, is commonly problematic due to the lack of sufficient information in the published reports of previous work so as to be able to replicate the simulation in detail. 
To aid in this process, the AAPM Task Group 195 prepared a report in which six different imaging research experiments commonly performed using Monte Carlo simulations are described and their results provided. The simulation conditions of all six cases are provided in full detail, with all necessary data on material composition, source, geometry, scoring, and other parameters provided. The results of these simulations when performed with the four most common publicly available Monte Carlo packages are also provided in tabular form. The Task Group 195 Report will be useful for researchers needing to validate their Monte Carlo work, and for trainees needing to learn Monte Carlo simulation methods. In this symposium we will review the recent advancements in high-performance computing hardware enabling the reduction in computational resources needed for Monte Carlo simulations in medical imaging. We will review variance reduction techniques commonly applied in Monte Carlo simulations of medical imaging systems and present implementation strategies for efficient combination of these techniques with GPU acceleration. Trade-offs involved in Monte Carlo acceleration by means of denoising and “sparse sampling” will be discussed. A method for rapid scatter correction in cone-beam CT (<5 min/scan) will be presented as an illustration of the simulation speeds achievable with optimized Monte Carlo simulations. We will also discuss the development, availability, and capability of the various combinations of computational phantoms for Monte Carlo simulation of medical imaging systems. Finally, we will review some examples of experimental validation of Monte Carlo simulations and will present the AAPM Task Group 195 Report. Learning Objectives: Describe the advances in hardware available for performing Monte Carlo simulations in high performance computing environments. 
Explain the variance reduction, denoising, and sparse sampling techniques available for reducing the computational time needed for Monte Carlo simulations of medical imaging. List and compare the computational anthropomorphic phantoms currently available for more accurate assessment of medical imaging parameters in Monte Carlo simulations. Describe experimental methods used for validation of Monte Carlo simulations in medical imaging. Describe the AAPM Task Group 195 Report and its use for validation and teaching of Monte Carlo simulations in medical imaging.
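
    The "sparse scoring plus interpolation" strategy mentioned above exploits the smoothness of quantities like scatter fluence: tally at a few detector pixels and interpolate to the rest. The sketch below is a purely invented one-dimensional toy, not a real scatter simulation.

```python
import numpy as np

# Toy sketch of sparse scoring + interpolation: tally a smooth profile
# at 1 pixel in 16, then interpolate to the full detector. The profile,
# noise level, and pixel counts are all invented for illustration.
rng = np.random.default_rng(1)
pixels = np.linspace(0.0, 1.0, 257)               # full detector grid
truth = np.exp(-((pixels - 0.5) ** 2) / 0.08)     # smooth "scatter" profile

sparse = pixels[::16]                              # sparse scoring locations
scored = truth[::16] + rng.normal(0, 1e-3, sparse.size)  # noisy MC tallies

full = np.interp(pixels, sparse, scored)           # fill in the detector
err = np.max(np.abs(full - truth))
print(err < 0.02)  # small interpolation error for a smooth field
```

    The 16x reduction in scored locations translates directly into a 16x reduction in the number of histories needed per image, which is why the approach pairs well with the GPU and variance-reduction speedups discussed above; it only works, of course, for fields smooth enough that interpolation is faithful.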

  11. The ADVANCE Code of Conduct for collaborative vaccine studies.

    PubMed

    Kurz, Xavier; Bauchau, Vincent; Mahy, Patrick; Glismann, Steffen; van der Aa, Lieke Maria; Simondon, François

    2017-04-04

    Lessons learnt from the 2009 (H1N1) flu pandemic highlighted factors limiting the capacity to collect European data on vaccine exposure, safety, and effectiveness, including lack of rapid access to available data sources or expertise, difficulties in establishing efficient interactions between multiple parties, lack of confidence between private and public sectors, concerns about possible or actual conflicts of interest (or perceptions thereof), and inadequate funding mechanisms. The Innovative Medicines Initiative's Accelerated Development of VAccine benefit-risk Collaboration in Europe (ADVANCE) consortium was established to create an efficient and sustainable infrastructure for rapid and integrated monitoring of the post-approval benefit-risk of vaccines, including a code of conduct and governance principles for collaborative studies. The development of the code of conduct was guided by three core and common values (best science, strengthening public health, transparency) and a review of existing guidance and relevant published articles. The ADVANCE Code of Conduct includes 45 recommendations in 10 topics (Scientific integrity, Scientific independence, Transparency, Conflicts of interest, Study protocol, Study report, Publication, Subject privacy, Sharing of study data, Research contract). Each topic includes a definition, a set of recommendations, and a list of additional reading. The concept of the study team is introduced as a key component of the ADVANCE Code of Conduct, with a core set of roles and responsibilities. It is hoped that adoption of the ADVANCE Code of Conduct by all partners involved in a study will facilitate and speed up its initiation, design, conduct, and reporting. Adoption of the ADVANCE Code of Conduct should be stated in the study protocol, study report, and publications, and journal editors are encouraged to use it as an indication that good principles of public health, science, and transparency were followed throughout the study.

  12. Impact of velocity space distribution on hybrid kinetic-magnetohydrodynamic simulation of the (1,1) mode

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Charlson C.

    2008-07-15

    Numerical studies of the impact of the velocity space distribution on the stabilization of the (1,1) internal kink mode and excitation of the fishbone mode are performed with a hybrid kinetic-magnetohydrodynamic model. These simulations demonstrate an extension of the physics capabilities of NIMROD [C. R. Sovinec et al., J. Comput. Phys. 195, 355 (2004)], a three-dimensional extended magnetohydrodynamic (MHD) code, to include the kinetic effects of an energetic minority ion species. Kinetic effects are captured by a modification of the usual MHD momentum equation to include a pressure tensor calculated from the δf particle-in-cell method [S. E. Parker and W. W. Lee, Phys. Fluids B 5, 77 (1993)]. The particles are advanced in the self-consistent NIMROD fields. We outline the implementation and present simulation results of energetic minority ion stabilization of the (1,1) internal kink mode and excitation of the fishbone mode. A benchmark of the linear growth rate and real frequency agrees well with another code. The impact of the details of the velocity space distribution is examined, in particular the effect of extending the velocity space cutoff of the simulation particles: modestly increasing the cutoff strongly affects the (1,1) mode. Numerical experiments are performed to study the impact of passing versus trapped particles. Observations from these experiments suggest that assumptions about energetic particle effects should be re-examined.

  13. Evaluating statistical consistency in the ocean model component of the Community Earth System Model (pyCECT v2.0)

    NASA Astrophysics Data System (ADS)

    Baker, Allison H.; Hu, Yong; Hammerling, Dorit M.; Tseng, Yu-heng; Xu, Haiying; Huang, Xiaomeng; Bryan, Frank O.; Yang, Guangwen

    2016-07-01

    The Parallel Ocean Program (POP), the ocean model component of the Community Earth System Model (CESM), is widely used in climate research. Most current work in CESM-POP focuses on improving the model's efficiency or accuracy, such as improving numerical methods, advancing parameterizations, porting to new architectures, or increasing parallelism. Since ocean dynamics are chaotic in nature, achieving bit-for-bit (BFB) identical results in ocean solutions cannot be guaranteed for even tiny code modifications, and determining whether modifications are admissible (i.e., statistically consistent with the original results) is non-trivial. In recent work, an ensemble-based statistical approach was shown to work well for software verification (i.e., quality assurance) on atmospheric model data. The general idea of ensemble-based statistical consistency testing is to use a measurement of the variability of an ensemble of simulations as a metric with which to compare future simulations and make a determination of statistical distinguishability. The capability to determine consistency without BFB results boosts model confidence and provides the flexibility needed, for example, for more aggressive code optimizations and the use of heterogeneous execution environments. Since ocean and atmosphere models have differing characteristics in terms of dynamics, spatial variability, and timescales, we present a new statistical method to evaluate ocean model simulation data that requires the evaluation of ensemble means and deviations in a spatial manner. In particular, the statistical distribution from an ensemble of CESM-POP simulations is used to determine the standard score of any new model solution at each grid point. The percentage of points with scores greater than a specified threshold then indicates whether the new model simulation is statistically distinguishable from the ensemble simulations. Both ensemble size and composition are important. 
Our experiments indicate that the new POP ensemble consistency test (POP-ECT) tool is capable of distinguishing cases that should be statistically consistent with the ensemble from those that should not, as well as providing a simple, objective, and systematic way to detect errors in CESM-POP due to the hardware or software stack, positively contributing to quality assurance for the CESM-POP code.
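
    The per-grid-point standard-score test described above can be sketched in a few lines. The ensemble, grid size, threshold, and pass/fail fraction below are invented for illustration and are not the POP-ECT defaults.

```python
import numpy as np

# Sketch of an ensemble consistency test: build per-grid-point means and
# standard deviations from an ensemble, z-score a new run, and flag it if
# too many points exceed a threshold. All numbers here are invented.
rng = np.random.default_rng(2)
ensemble = rng.normal(15.0, 0.5, size=(30, 64, 64))  # 30 runs, 64x64 grid

mu = ensemble.mean(axis=0)
sd = ensemble.std(axis=0, ddof=1)

def fails_consistency(run, z_thresh=3.0, max_frac=0.05):
    """True if the fraction of |z| > z_thresh grid points exceeds max_frac."""
    z = (run - mu) / sd
    return np.mean(np.abs(z) > z_thresh) > max_frac

consistent_run = rng.normal(15.0, 0.5, size=(64, 64))  # same distribution
biased_run = rng.normal(17.0, 0.5, size=(64, 64))      # shifted solution

print(fails_consistency(consistent_run), fails_consistency(biased_run))
```

    As the abstract notes, this determination needs no bit-for-bit reproducibility: a run on different hardware or with an optimized build passes as long as its fields stay inside the ensemble's natural variability.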

  14. Advances in Parallelization for Large Scale Oct-Tree Mesh Generation

    NASA Technical Reports Server (NTRS)

    O'Connell, Matthew; Karman, Steve L.

    2015-01-01

    Despite great advancements in the parallelization of numerical simulation codes over the last 20 years, it is still common to perform grid generation in serial. Generating large scale grids in serial often requires using special "grid generation" compute machines that can have more than ten times the memory of average machines. While some parallel mesh generation techniques have been proposed, generating very large meshes for LES or aeroacoustic simulations is still a challenging problem. An automated method for the parallel generation of very large scale off-body hierarchical meshes is presented here. This work enables large scale parallel generation of off-body meshes by using a novel combination of parallel grid generation techniques and a hybrid "top down" and "bottom up" oct-tree method. Meshes are generated using hardware commonly found in parallel compute clusters. The capability to generate very large meshes is demonstrated by the generation of off-body meshes surrounding complex aerospace geometries. Results are shown including a one billion cell mesh generated around a Predator Unmanned Aerial Vehicle geometry, which was generated on 64 processors in under 45 minutes.
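
    The "top down" refinement half of the approach described above can be sketched as a recursive subdivision that splits any cell near the body until a depth limit. The geometry test, depth limit, and cell representation below are invented for illustration; the actual method also involves a "bottom up" phase and parallel distribution not shown here.

```python
# Minimal top-down oct-tree refinement sketch (illustrative only):
# subdivide any cell that is "near" the body into 8 children, down to a
# maximum depth, leaving coarse cells in the far field.
def refine(cell, depth, max_depth, near):
    """cell = (x0, y0, z0, size); returns the list of leaf cells."""
    x0, y0, z0, s = cell
    if depth == max_depth or not near(cell):
        return [cell]
    h = s / 2.0
    leaves = []
    for dx in (0.0, h):          # 2 x 2 x 2 = 8 octants
        for dy in (0.0, h):
            for dz in (0.0, h):
                leaves += refine((x0 + dx, y0 + dy, z0 + dz, h),
                                 depth + 1, max_depth, near)
    return leaves

# Refine toward a point "body" at the origin corner of the unit cube.
near_origin = lambda c: c[0] <= 0.01 and c[1] <= 0.01 and c[2] <= 0.01
leaves = refine((0.0, 0.0, 0.0, 1.0), 0, 5, near_origin)
print(len(leaves))  # 36: 7 coarse siblings per level over 5 levels, + 1 deepest cell
```

    The cell count grows only with the refined region rather than the whole domain, which is what makes oct-tree off-body meshes tractable at the billion-cell scale quoted above once the recursion is distributed across processors.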

  15. A novel feedback algorithm for simulating controlled dynamics and confinement in the advanced reversed-field pinch

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dahlin, J.-E.; Scheffel, J.

    2005-06-15

    In the advanced reversed-field pinch (RFP), the current density profile is externally controlled to diminish tearing instabilities, substantially improving the scaling of energy confinement time with plasma current and density compared to the conventional RFP. This may be numerically simulated by introducing an ad hoc electric field, adjusted to generate a tearing-mode-stable parallel current density profile. In the present work a current profile control algorithm, based on feedback of the fluctuating electric field in Ohm's law, is introduced into the resistive magnetohydrodynamic code DEBSP [D. D. Schnack and D. C. Baxter, J. Comput. Phys. 55, 485 (1984); D. D. Schnack, D. C. Barnes, Z. Mikic, D. S. Marneal, E. J. Caramana, and R. A. Nebel, Comput. Phys. Commun. 43, 17 (1986)]. The resulting radial magnetic field is decreased considerably, causing an increase in energy confinement time and poloidal β. It is found that the parallel current density profile spontaneously becomes hollow, and that a structure related to persisting resistive g modes appears close to the reversal surface.

  16. Simulation of Quantum Many-Body Dynamics for Generic Strongly-Interacting Systems

    NASA Astrophysics Data System (ADS)

    Meyer, Gregory; Machado, Francisco; Yao, Norman

    2017-04-01

    Recent experimental advances have enabled the bottom-up assembly of complex, strongly interacting quantum many-body systems from individual atoms, ions, molecules, and photons. These advances open the door to studying dynamics in isolated quantum systems as well as the possibility of realizing novel out-of-equilibrium phases of matter. Numerical studies provide insight into these systems; however, computational time and memory usage limit common numerical methods such as exact diagonalization to relatively small Hilbert spaces, of dimension around 2¹⁵. Here we present progress toward a new software package for dynamical time evolution of large generic quantum systems on massively parallel computing architectures. By projecting large sparse Hamiltonians into a much smaller Krylov subspace, we are able to compute the evolution of strongly interacting systems with Hilbert space dimension nearing 2³⁰. We discuss and benchmark different design implementations, such as matrix-free methods and GPU-based calculations, using both pre-thermal time crystals and the Sachdev-Ye-Kitaev model as examples. We also include a simple symbolic language to describe generic Hamiltonians, allowing simulation of diverse quantum systems without any modification of the underlying C and Fortran code.
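
    The Krylov projection named above reduces the evolution exp(-iHt)|ψ⟩ to exponentiating a small projected matrix. The sketch below is a generic dense-matrix illustration of that idea (for Hermitian H), not the package's API; real codes use sparse, matrix-free products and much larger dimensions.

```python
import numpy as np

def krylov_propagate(H, psi, t, m=30):
    """Approximate exp(-i t H) @ psi in an m-dimensional Krylov space."""
    norm0 = np.linalg.norm(psi)
    n = len(psi)
    V = np.zeros((m, n), dtype=complex)   # orthonormal Krylov basis (rows)
    T = np.zeros((m, m), dtype=complex)   # H projected onto the subspace
    V[0] = psi / norm0
    for j in range(m - 1):
        w = H @ V[j]
        for i in range(j + 1):            # full reorthogonalization
            T[i, j] = np.vdot(V[i], w)
            w = w - T[i, j] * V[i]
        T[j + 1, j] = np.linalg.norm(w)
        V[j + 1] = w / T[j + 1, j]
    w = H @ V[m - 1]
    for i in range(m):
        T[i, m - 1] = np.vdot(V[i], w)
    T = (T + T.conj().T) / 2              # Hermitian H => T is Hermitian
    evals, evecs = np.linalg.eigh(T)      # exponentiate the small matrix
    coef = evecs @ (np.exp(-1j * t * evals) * np.conj(evecs[0]))
    return norm0 * (coef @ V)

# Toy Hermitian "Hamiltonian"; compare against exact diagonalization.
rng = np.random.default_rng(0)
n = 40
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (A + A.conj().T) / 2
psi = rng.normal(size=n) + 1j * rng.normal(size=n)

t = 0.3
approx = krylov_propagate(H, psi, t, m=30)
w_ex, U = np.linalg.eigh(H)
exact = U @ (np.exp(-1j * t * w_ex) * (U.conj().T @ psi))
print(np.linalg.norm(approx - exact) < 1e-8)
```

    Only matrix-vector products with H are needed to build the subspace, so H never has to be stored densely; this is what lets the projection scale to the Hilbert-space dimensions near 2³⁰ quoted in the abstract.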

  17. Studies of numerical algorithms for gyrokinetics and the effects of shaping on plasma turbulence

    NASA Astrophysics Data System (ADS)

    Belli, Emily Ann

    Advanced numerical algorithms for gyrokinetic simulations are explored for more effective studies of plasma turbulent transport. The gyrokinetic equations describe the dynamics of particles in 5-dimensional phase space, averaging over the fast gyromotion, and provide a foundation for studying plasma microturbulence in fusion devices and in astrophysical plasmas. Several algorithms for Eulerian/continuum gyrokinetic solvers are compared. An iterative implicit scheme based on numerical approximations of the plasma response is developed. This method reduces the long time needed to set up implicit arrays, yet retains large-time-step advantages similar to those of a fully implicit method. Various model preconditioners and iteration schemes, including Krylov-based solvers, are explored. An Alternating Direction Implicit algorithm is also studied and is surprisingly found to yield a severe stability restriction on the time step. Overall, an iterative Krylov algorithm might be the best approach for extensions of core tokamak gyrokinetic simulations to edge kinetic formulations and may be particularly useful for studies of large-scale ExB shear effects. The effects of flux surface shape on the gyrokinetic stability and transport of tokamak plasmas are studied using the nonlinear GS2 gyrokinetic code with analytic equilibria based on interpolations of representative JET-like shapes. High shaping is found to be a stabilizing influence on both the linear ITG instability and nonlinear ITG turbulence. A scaling of the heat flux with elongation of chi ~ kappa^-1.5 or kappa^-2 (depending on the triangularity) is observed, consistent with previous gyrofluid simulations. Thus, the GS2 turbulence simulations explain a significant fraction, but not all, of the empirical elongation scaling. The remainder of the scaling may come from (1) the edge boundary conditions for core turbulence, and (2) the larger Dimits nonlinear critical temperature gradient shift due to the enhancement of zonal flows with shaping, which is observed in the GS2 simulations. Finally, a local linear trial-function-based gyrokinetic code is developed to aid in fast scoping studies of gyrokinetic linear stability. This code is successfully benchmarked against the full GS2 code in the collisionless, electrostatic limit, as well as in the more general electromagnetic description with higher-order Hermite basis functions.
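As a generic sketch of the preconditioned Krylov iteration discussed in this record (not the actual gyrokinetic solver), an implicit time step (I - dt*L)x = b can be solved with GMRES and a model preconditioner; the diffusion-like operator, the incomplete-LU preconditioner, and the sizes below are all illustrative choices.

```python
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import gmres, spilu, LinearOperator

n, dt = 200, 0.1
# A 1-D diffusion-like operator stands in for the linearized plasma response
L = diags([np.ones(n - 1), -2.0 * np.ones(n), np.ones(n - 1)], [-1, 0, 1], format="csc")
A = (identity(n, format="csc") - dt * L).tocsc()
b = np.sin(np.linspace(0.0, np.pi, n))

ilu = spilu(A)                          # incomplete-LU model preconditioner
M = LinearOperator((n, n), ilu.solve)

x, info = gmres(A, b, M=M)              # Krylov iteration for the implicit step
```

The quality of the model preconditioner controls the iteration count, which is exactly the trade-off the iterative implicit scheme above exploits against a fully implicit setup.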

  18. Huffman coding in advanced audio coding standard

    NASA Astrophysics Data System (ADS)

    Brzuchalski, Grzegorz

    2012-05-01

    This article presents several hardware architectures of the Advanced Audio Coding (AAC) Huffman noiseless encoder, its optimisations and a working implementation. Much attention has been paid to optimising the demand on hardware resources, especially memory size. The design aim was to produce as short a binary stream as possible within this standard. The Huffman encoder, together with the whole audio-video system, has been implemented in FPGA devices.
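At the heart of any Huffman noiseless coder is the greedy merge of the two least-frequent nodes. The sketch below is a generic software version for illustration; note that the AAC standard actually selects among fixed, pre-defined Huffman codebooks rather than building a code from the data, and none of the names below come from the standard.

```python
import heapq
from collections import Counter

def huffman_code(data):
    """Build a Huffman code table {symbol: bitstring} for the given sequence."""
    freq = Counter(data)
    if len(freq) == 1:                       # degenerate one-symbol input
        return {next(iter(freq)): "0"}
    # Heap items: (frequency, tie-breaker, tree); a tree is a symbol or a (left, right) pair
    heap = [(f, i, s) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:                     # repeatedly merge the two rarest nodes
        f1, _, a = heapq.heappop(heap)
        f2, _, b = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, count, (a, b)))
        count += 1
    codes = {}
    def walk(node, prefix):                  # assign 0/1 along each tree branch
        if isinstance(node, tuple):
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:
            codes[node] = prefix
    walk(heap[0][2], "")
    return codes

def encode(data, codes):
    return "".join(codes[s] for s in data)
```

Shorter codewords go to the more frequent symbols, which is what minimizes the length of the output bitstream.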

  19. Processes of code status transitions in hospitalized patients with advanced cancer.

    PubMed

    El-Jawahri, Areej; Lau-Min, Kelsey; Nipp, Ryan D; Greer, Joseph A; Traeger, Lara N; Moran, Samantha M; D'Arpino, Sara M; Hochberg, Ephraim P; Jackson, Vicki A; Cashavelly, Barbara J; Martinson, Holly S; Ryan, David P; Temel, Jennifer S

    2017-12-15

    Although hospitalized patients with advanced cancer have a low chance of surviving cardiopulmonary resuscitation (CPR), the processes by which they change their code status from full code to do not resuscitate (DNR) are unknown. We conducted a mixed-methods study on a prospective cohort of hospitalized patients with advanced cancer. Two physicians used a consensus-driven medical record review to characterize processes that led to code status order transitions from full code to DNR. In total, 1047 hospitalizations were reviewed among 728 patients. Admitting clinicians did not address code status in 53% of hospitalizations, resulting in code status orders of "presumed full." In total, 275 patients (26.3%) transitioned from full code to DNR, and 48.7% (134 of 275 patients) of those had an order of "presumed full" at admission; however, upon further clarification, the patients expressed that they had wished to be DNR before the hospitalization. We identified 3 additional processes leading to order transition from full code to DNR: acute clinical deterioration (15.3%), discontinuation of cancer-directed therapy (17.1%), and education about the potential harms/futility of CPR (15.3%). Compared with discontinuing therapy and education, transitions because of acute clinical deterioration were associated with less patient involvement (P = .002), a shorter time to death (P < .001), and a greater likelihood of inpatient death (P = .005). One-half of code status order changes among hospitalized patients with advanced cancer were because of full code orders in patients who had a preference for DNR before hospitalization. Transitions due to acute clinical deterioration were associated with less patient engagement and a higher likelihood of inpatient death. Cancer 2017;123:4895-902. © 2017 American Cancer Society.

  20. Studies and simulations of the DigiCipher system

    NASA Technical Reports Server (NTRS)

    Sayood, K.; Chen, Y. C.; Kipp, G.

    1993-01-01

    During this period the development of simulators for the various high definition television (HDTV) systems proposed to the FCC was continued. The FCC has indicated that it wants the various proposers to collaborate on a single system. Based on all available information, this system will look very much like the advanced digital television (ADTV) system, with major contributions only from the DigiCipher system. The results of our simulations of the DigiCipher system are described. This simulator was tested using test sequences from the MPEG committee, and the results are extrapolated to HDTV video sequences. Once again, some caveats are in order. The sequences used for testing the simulator and generating the results are those used for testing the MPEG algorithm. These sequences are of much lower resolution than HDTV sequences would be, so the extrapolations are not totally accurate; one would expect significantly higher compression, in terms of bits per pixel, with sequences of higher resolution. However, the simulator itself is valid, and should HDTV sequences become available, they could be used directly with it. A brief overview of the DigiCipher system is given, and some coding results obtained using the simulator are examined. These results are compared to those obtained using the ADTV system and evaluated in the context of the CCSDS specifications, and we make some suggestions as to how the DigiCipher system could be implemented in the NASA network. Simulations such as the ones reported can be biased depending on the particular source sequence used. In order to get more complete information about the system, one needs a reasonable set of models which mirror the various kinds of sources encountered during video coding. A set of models which can be used to effectively model the various possible scenarios is provided. As this is somewhat tangential to the other work reported, the results are included as an appendix.

  1. Magnetohydrodynamic modes analysis and control of Fusion Advanced Studies Torus high-current scenarios

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Villone, F.; Mastrostefano, S.; Calabrò, G.

    2014-08-15

    One of the main FAST (Fusion Advanced Studies Torus) goals is to have a flexible experiment capable of testing tools and scenarios for safe and reliable tokamak operation, in order to support ITER and help the final DEMO design. In particular, in this paper, we focus on operation close to a possible border of stability related to low-q operation. To this purpose, a new FAST scenario has been designed at I_p = 10 MA, B_T = 8.5 T, q_95 ≈ 2.3. Transport simulations, carried out using the code JETTO and the first-principles transport model GLF23, indicate that, under these conditions, FAST could achieve an equivalent Q ≈ 3.5. FAST will be equipped with a set of internal active coils for feedback control, which will produce magnetic perturbations with toroidal mode number n = 1 or n = 2. Magnetohydrodynamic (MHD) mode analysis and feedback control simulations performed with the codes MARS, MARS-F, and CarMa (both assuming the presence of a perfectly conducting wall and using the exact 3D resistive wall structure) show that the FAST conducting structures can stabilize n = 1 ideal modes. This therefore leaves room for active mitigation of the resistive mode (down to a characteristic time of 1 ms) for safety purposes, i.e., to avoid dangerous MHD-driven plasma disruption when working close to the machine limits, with magnetic and kinetic energy densities not far from reactor values.

  2. Python Radiative Transfer Emission code (PyRaTE): non-LTE spectral lines simulations

    NASA Astrophysics Data System (ADS)

    Tritsis, A.; Yorke, H.; Tassis, K.

    2018-05-01

    We describe PyRaTE, a new, non-local thermodynamic equilibrium (non-LTE) line radiative transfer code developed specifically for post-processing astrochemical simulations. Population densities are estimated using the escape probability method. When computing the escape probability, the optical depth is calculated in all directions, with density, molecular abundance, temperature and velocity variations all taken into account. A very easy-to-use interface, capable of importing data from simulation outputs produced by all major astrophysical codes, is also developed. The code is written in PYTHON using an "embarrassingly parallel" strategy and can handle all geometries and projection angles. We benchmark the code by comparing our results with those from RADEX (van der Tak et al. 2007) and against analytical solutions, and we present case studies using hydrochemical simulations. The code will be released for public use.
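For illustration, the escape-probability method mentioned here rests on expressions like the Sobolev/large-velocity-gradient (LVG) form below, beta = (1 - e^-tau)/tau; this is one common choice, not necessarily the exact form PyRaTE applies in every geometry.

```python
import numpy as np

def beta_lvg(tau):
    """Sobolev/LVG escape probability beta = (1 - exp(-tau)) / tau.

    As tau -> 0, beta -> 1 (optically thin: every photon escapes);
    as tau -> inf, beta -> 1/tau (optically thick: photons are trapped)."""
    tau = np.asarray(tau, dtype=float)
    safe = np.where(tau == 0.0, 1.0, tau)    # avoid 0/0; value replaced below
    beta = -np.expm1(-safe) / safe
    return np.where(np.abs(tau) < 1e-10, 1.0, beta)
```

In statistical equilibrium the spontaneous emission rate A_ul is then effectively multiplied by beta, coupling the level populations to the optical depth computed along each direction.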

  3. The Use of a Code-generating System for the Derivation of the Equations for Wind Turbine Dynamics

    NASA Astrophysics Data System (ADS)

    Ganander, Hans

    2003-10-01

    For many reasons the size of wind turbines on the rapidly growing wind energy market is increasing. Relations between the aeroelastic properties of these new large turbines change, and modifications of turbine designs and control concepts are also influenced by growing size. All these trends require the development of computer codes for design and certification. Moreover, there is a strong desire for design optimization procedures, which require fast codes. General codes, e.g. finite element codes, normally allow such modifications and improvements of existing wind turbine models relatively easily. However, the calculation times of such codes are unfavourably long, certainly for optimization use. The use of an automatic code-generating system is an alternative that addresses both key issues: the code itself and design optimization. This technique can be used to rapidly generate codes for particular wind turbine simulation models, and these ideas have been followed in the development of new versions of the wind turbine simulation code VIDYN. The equations of the simulation model were derived according to the Lagrange equation using Mathematica®, which was directed to output the results in Fortran code format. In this way the simulation code is automatically adapted to an actual turbine model, in terms of subroutines containing the equations of motion, definitions of parameters and degrees of freedom. Since the start in 1997, these methods, constituting a systematic way of working, have been used to develop specific, efficient calculation codes. The experience with this technique has been very encouraging, inspiring the continued development of new versions of the simulation code as the need has arisen and as interest in design optimization grows.
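The workflow described here, deriving the equations of motion symbolically from the Lagrangian and emitting them as Fortran source, can be mimicked with open tools. The sketch below uses SymPy in place of Mathematica, with a simple pendulum standing in for a turbine degree of freedom; all symbol names are illustrative.

```python
import sympy as sp

t = sp.symbols("t")
m, l, g = sp.symbols("m l g", positive=True)
theta = sp.Function("theta")(t)

# Lagrangian L = T - V for a simple pendulum
T = sp.Rational(1, 2) * m * (l * theta.diff(t)) ** 2
V = -m * g * l * sp.cos(theta)
lagrangian = T - V

# Lagrange's equation: d/dt(dL/d(theta_dot)) - dL/d(theta) = 0
eom = sp.diff(lagrangian.diff(theta.diff(t)), t) - lagrangian.diff(theta)
theta_ddot = sp.solve(sp.simplify(eom), theta.diff(t, 2))[0]

# Emit the angular acceleration as Fortran source, as VIDYN's generator
# does for its equation-of-motion subroutines
expr = theta_ddot.subs(theta, sp.Symbol("theta"))
fortran = sp.fcode(expr, assign_to="thetaddot", source_format="free")
print(fortran)
```

The printed assignment can be dropped into a generated subroutine, so the simulation code stays specialized to the model that produced it.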

  4. MOCCA code for star cluster simulation: comparison with optical observations using COCOA

    NASA Astrophysics Data System (ADS)

    Askar, Abbas; Giersz, Mirek; Pych, Wojciech; Olech, Arkadiusz; Hypki, Arkadiusz

    2016-02-01

    We introduce and present preliminary results from the COCOA (Cluster simulatiOn Comparison with ObservAtions) code for a star cluster evolved for 12 Gyr with the MOCCA code. The COCOA code is being developed to quickly compare the results of numerical simulations of star clusters with observational data. We use COCOA to obtain parameters of the projected cluster model. For comparison, a FITS file of the projected cluster was provided to observers so that they could use their observational methods and techniques to obtain cluster parameters. The results show that the agreement between cluster parameters obtained from numerical simulations and from observations depends significantly on the quality of the observational data and photometric accuracy.

  5. VERAIn

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simunovic, Srdjan

    2015-02-16

    CASL's modeling and simulation technology, the Virtual Environment for Reactor Applications (VERA), incorporates coupled physics and science-based models, state-of-the-art numerical methods, modern computational science, integrated uncertainty quantification (UQ), and validation against data from operating pressurized water reactors (PWRs), single-effect experiments, and integral tests. The computational simulation component of VERA is the VERA Core Simulator (VERA-CS). The core simulator is the specific collection of multi-physics computer codes used to model and deplete an LWR core over multiple cycles. The core simulator has a single common input file that drives all of the different physics codes. The parser code, VERAIn, converts VERA input into an XML file that is used as input to the different VERA codes.

  6. Mean Line Pump Flow Model in Rocket Engine System Simulation

    NASA Technical Reports Server (NTRS)

    Veres, Joseph P.; Lavelle, Thomas M.

    2000-01-01

    A mean line pump flow modeling method has been developed to provide a fast capability for modeling turbopumps of rocket engines. Based on this method, a mean line pump flow code PUMPA has been written that can predict the performance of pumps at off-design operating conditions, given the loss of the diffusion system at the design point. The pump code can model axial flow inducers, mixed-flow and centrifugal pumps. The code can model multistage pumps in series. The code features rapid input setup and computer run time, and is an effective analysis and conceptual design tool. The map generation capability of the code provides the map information needed for interfacing with a rocket engine system modeling code. The off-design and multistage modeling capabilities of the code permit parametric design space exploration of candidate pump configurations and provide pump performance data for engine system evaluation. The PUMPA code has been integrated with the Numerical Propulsion System Simulation (NPSS) code and an expander rocket engine system has been simulated. The mean line pump flow code runs as an integral part of the NPSS rocket engine system simulation and provides key pump performance information directly to the system model at all operating conditions.
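A mean-line analysis of the kind PUMPA performs is built around the Euler turbomachinery equation for the stage head rise; the toy calculation below illustrates the idea only, and the slip factor, angle convention, and numbers are illustrative assumptions rather than PUMPA's model.

```python
import math

def euler_head(U2, Cm2, beta2b, slip=0.9, g=9.81):
    """Ideal head rise of a centrifugal pump stage, H = U2 * Ctheta2 / g,
    assuming no inlet swirl and a simple slip correction on the exit swirl.

    U2     : impeller exit blade speed [m/s]
    Cm2    : exit meridional velocity [m/s]
    beta2b : exit blade angle, measured from the tangential direction [rad]
    slip   : slip factor reducing the ideal exit swirl (illustrative value)"""
    Ctheta2 = slip * U2 - Cm2 / math.tan(beta2b)   # exit tangential (swirl) velocity
    return U2 * Ctheta2 / g

# Example stage: 50 m/s tip speed, 10 m/s meridional velocity, 60-degree blades
H = euler_head(50.0, 10.0, math.radians(60.0))
```

Sweeping Cm2 (i.e., flow rate) at fixed speed traces an ideal head-flow characteristic; a mean-line code like PUMPA then subtracts loss models anchored at the design point to obtain the off-design performance maps it passes to the engine system model.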

  7. Tokamak magneto-hydrodynamics and reference magnetic coordinates for simulations of plasma disruptions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zakharov, Leonid E.; Li, Xujing

    This paper formulates the Tokamak Magneto-Hydrodynamics (TMHD), initially outlined by X. Li and L. E. Zakharov [Plasma Science and Technology 17(2), 97–104 (2015)], for proper simulations of macroscopic plasma dynamics. The simplest set of magneto-hydrodynamic equations, sufficient for disruption modeling and extendable to more refined physics, is explained in detail. First, TMHD introduces to 3-D simulations the Reference Magnetic Coordinates (RMC), which are aligned with the magnetic field in the best possible way and are implemented numerically as adaptive grids. Being consistent with the high anisotropy of the tokamak plasma, RMC allow simulations at realistic, very high plasma electric conductivity. Second, TMHD splits the equation of motion into an equilibrium equation and a plasma-advancing equation. This resolves the four-decade-old problem of Courant limitations on the time step in existing, plasma-inertia-driven numerical codes. The splitting allows disruption simulations on a relatively slow time scale in comparison with the fast time scale of ideal MHD instabilities. A new, efficient numerical scheme is proposed for TMHD.

  8. Physics-based multiscale coupling for full core nuclear reactor simulation

    DOE PAGES

    Gaston, Derek R.; Permann, Cody J.; Peterson, John W.; ...

    2015-10-01

    Numerical simulation of nuclear reactors is a key technology in the quest for improvements in the efficiency, safety, and reliability of both existing and future reactor designs. Historically, simulation of an entire reactor was accomplished by linking together multiple existing codes that each simulated a subset of the relevant multiphysics phenomena. Recent advances in the MOOSE (Multiphysics Object Oriented Simulation Environment) framework have enabled a new approach: multiple domain-specific applications, all built on the same software framework, are efficiently linked to create a cohesive application. This is accomplished with a flexible coupling capability that allows a variety of different data exchanges to occur simultaneously on high-performance parallel computational hardware. Examples based on the KAIST-3A benchmark core, as well as a simplified Westinghouse AP-1000 configuration, demonstrate the power of this new framework for tackling, in a coupled, multiscale manner, crucial reactor phenomena such as CRUD-induced power shift and fuel shuffle. © 2014 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY-NC-SA license.

  9. Modeling Combustion in Supersonic Flows

    NASA Technical Reports Server (NTRS)

    Drummond, J. Philip; Danehy, Paul M.; Bivolaru, Daniel; Gaffney, Richard L.; Tedder, Sarah A.; Cutler, Andrew D.

    2007-01-01

    This paper discusses the progress of work to model high-speed supersonic reacting flow. The purpose of the work is to improve the state of the art of CFD capabilities for predicting the flow in high-speed propulsion systems, particularly combustor flowpaths. The program has several components, including the development of advanced algorithms and models for simulating engine flowpaths as well as a fundamental experimental and diagnostic development effort to support the formulation and validation of the mathematical models. The paper provides details of current work on experiments that will provide data for the modeling efforts, along with the associated nonintrusive diagnostics used to collect the data from the experimental flowfield. Simulation of a recent experiment to partially validate the accuracy of a combustion code is also described.

  10. Proceedings of the Numerical Modeling for Underground Nuclear Test Monitoring Symposium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor, S.R.; Kamm, J.R.

    1993-11-01

    The purpose of the meeting was to discuss the state of the art in numerical simulations of nuclear explosion phenomenology with applications to test ban monitoring. We focused on the uniqueness of model fits to data, the measurement and characterization of material response models, advanced modeling techniques, and applications of modeling to monitoring problems. The second goal of the symposium was to establish a dialogue between seismologists and explosion-source code calculators. The meeting was divided into five main sessions: explosion source phenomenology, material response modeling, numerical simulations, the seismic source, and phenomenology from near source to far field. We feel the symposium reached many of its goals. Individual papers submitted at the conference are indexed separately in the database.

  11. Intra-Beam and Touschek Scattering Computations for Beam with Non-Gaussian Longitudinal Distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiao, A.; Borland, M.

    Both intra-beam scattering (IBS) and the Touschek effect become prominent for multi-bend-achromat- (MBA-) based ultra-low-emittance storage rings. To mitigate the transverse emittance degradation and obtain a reasonably long beam lifetime, a higher-harmonic rf cavity (HHC) is often proposed to lengthen the bunch. The use of such a cavity results in a non-Gaussian longitudinal distribution. However, common methods for computing IBS and Touschek scattering assume Gaussian distributions. Modifications have been made to several simulation codes that are part of the elegant [1] toolkit to allow these computations for arbitrary longitudinal distributions. After describing these modifications, we review the results of detailed simulations for the proposed hybrid seven-bend-achromat (H7BA) upgrade lattice [2] for the Advanced Photon Source.

  12. Mechanical Analysis of W78/88-1 Life Extension Program Warhead Design Options

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spencer, Nathan

    2014-09-01

    The Life Extension Program (LEP) repairs or replaces components of nuclear weapons to ensure the ability to meet military requirements. The W78/88-1 LEP encompasses the modernization of two major nuclear weapon reentry systems into an interoperable warhead. Several design concepts exist to provide different options for robust safety and security themes, maximum non-nuclear commonality, and cost. Simulation is one capability used to evaluate the mechanical performance of the designs in various operational environments, plan for system and component qualification efforts, and provide insight into the survivability of the warhead in environments that are not currently testable. The simulation efforts use several Sandia-developed tools from the Advanced Simulation and Computing program, including Cubit for mesh generation, the DART Model Manager, SIERRA codes running on the HPC TLCC2 platforms, DAKOTA, and ParaView. Several programmatic objectives were met using the simulation capability, including: (1) providing early environmental specification estimates that may be used by component designers to understand the severity of the loads their components will need to survive, (2) providing guidance on load levels and configurations for subassembly tests intended to represent operational environments, and (3) recommending design options, including modified geometry and material properties. These objectives were accomplished through regular interactions with component, system, and test engineers while using the laboratory's computational infrastructure to effectively perform ensembles of simulations. Because NNSA has decided to defer the LEP, simulation results are being documented and models are being archived for future reference. However, some advanced and exploratory efforts will continue to mature key technologies, using the results from these and ongoing simulations for design insights, test planning, and model validation.

  13. ERP evidence for the recognition of emotional prosody through simulated cochlear implant strategies.

    PubMed

    Agrawal, Deepashri; Timm, Lydia; Viola, Filipa Campos; Debener, Stefan; Büchner, Andreas; Dengler, Reinhard; Wittfoth, Matthias

    2012-09-20

    Emotionally salient information in spoken language can be provided by variations in speech melody (prosody) or by emotional semantics. Emotional prosody is essential to convey feelings through speech. In sensorineural hearing loss, impaired speech perception can be improved by cochlear implants (CIs). The aim of this study was to investigate the performance of normal-hearing (NH) participants on the perception of emotional prosody with vocoded stimuli. Semantically neutral sentences with emotional (happy, angry and neutral) prosody were used. Sentences were manipulated to simulate two CI speech-coding strategies: the Advanced Combination Encoder (ACE) and the newly developed Psychoacoustic Advanced Combination Encoder (PACE). Twenty NH adults were asked to recognize emotional prosody from ACE and PACE simulations. Performance was assessed using behavioral tests and event-related potentials (ERPs). Behavioral data revealed superior performance with original stimuli compared to the simulations. For the simulations, better recognition was observed for happy and angry prosody than for neutral. Irrespective of simulated or unsimulated stimulus type, a significantly larger P200 event-related potential was observed after sentence onset for happy prosody than for the other two emotions. Further, the P200 amplitude was significantly more positive for the PACE strategy than for the ACE strategy. The results suggest the P200 peak as an indicator of active differentiation and recognition of emotional prosody. The larger P200 peak amplitude for happy prosody indicates the importance of fundamental frequency (F0) cues in prosody processing. The advantage of PACE over ACE highlights a privileged role of the psychoacoustic masking model in improving prosody perception. Taken together, the study emphasizes the importance of vocoded simulation to better understand the prosodic cues which CI users may be utilizing.

  14. Flow Analysis of a Gas Turbine Low- Pressure Subsystem

    NASA Technical Reports Server (NTRS)

    Veres, Joseph P.

    1997-01-01

    The NASA Lewis Research Center is coordinating a project to numerically simulate aerodynamic flow in the complete low-pressure subsystem (LPS) of a gas turbine engine. The numerical model solves the three-dimensional Navier-Stokes flow equations through all components within the low-pressure subsystem as well as the external flow around the engine nacelle. The Advanced Ducted Propfan Analysis Code (ADPAC), which is being developed jointly by Allison Engine Company and NASA, is the Navier-Stokes flow code being used for LPS simulation. The majority of the LPS project is being done under a NASA Lewis contract with Allison. Other contributors to the project are NYMA and the University of Toledo. For this project, the Energy Efficient Engine designed by GE Aircraft Engines is being modeled. This engine includes a low-pressure system and a high-pressure system. An inlet, a fan, a booster stage, a bypass duct, a lobed mixer, a low-pressure turbine, and a jet nozzle comprise the low-pressure subsystem within this engine. The tightly coupled flow analysis evaluates aerodynamic interactions between all components of the LPS. The high-pressure core of this engine is simulated with a one-dimensional thermodynamic cycle code in order to provide boundary conditions to the detailed LPS model. This core consists of a high-pressure compressor, a combustor, and a high-pressure turbine. The three-dimensional LPS flow model is coupled to the one-dimensional core engine model to provide a "hybrid" flow model of the complete gas turbine Energy Efficient Engine. The resulting hybrid engine model evaluates the detailed interaction between the LPS components at design and off-design engine operating conditions while considering the lumped-parameter performance of the core engine.

  15. Enhanced absorption cycle computer model

    NASA Astrophysics Data System (ADS)

    Grossman, G.; Wilk, M.

    1993-09-01

    Absorption heat pumps have received renewed and increasing attention in the past two decades. The rising cost of electricity has made the particular features of this heat-powered cycle attractive for both residential and industrial applications. Solar-powered absorption chillers, gas-fired domestic heat pumps, and waste-heat-powered industrial temperature boosters are a few of the applications recently subjected to intensive research and development. The absorption heat pump research community has begun to search for both advanced cycles in various multistage configurations and new working fluid combinations with potential for enhanced performance and reliability. The development of working absorption systems has created a need for reliable and effective system simulations. A computer code has been developed for simulation of absorption systems at steady state in a flexible and modular form, making it possible to investigate various cycle configurations with different working fluids. The code is based on unit subroutines containing the governing equations for the system's components and property subroutines containing thermodynamic properties of the working fluids. The user conveys to the computer an image of the cycle by specifying the different subunits and their interconnections. Based on this information, the program calculates the temperature, flow rate, concentration, pressure, and vapor fraction at each state point in the system, and the heat duty at each unit, from which the coefficient of performance (COP) may be determined. This report describes the code and its operation, including improvements introduced into the present version. Simulation results are described for LiBr-H2O triple-effect cycles, LiCl-H2O solar-powered open absorption cycles, and NH3-H2O single-effect and generator-absorber heat exchange cycles. An appendix contains the user's manual.

  16. Extraordinary Tools for Extraordinary Science: The Impact of SciDAC on Accelerator Science & Technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryne, Robert D.

    2006-08-10

    Particle accelerators are among the most complex and versatile instruments of scientific exploration. They have enabled remarkable scientific discoveries and important technological advances that span all programs within the DOE Office of Science (DOE/SC). The importance of accelerators to the DOE/SC mission is evident from an examination of the DOE document, ''Facilities for the Future of Science: A Twenty-Year Outlook''. Of the 28 facilities listed, 13 involve accelerators. Thanks to SciDAC, a powerful suite of parallel simulation tools has been developed that represent a paradigm shift in computational accelerator science. Simulations that used to take weeks or more now take hours, and simulations that were once thought impossible are now performed routinely. These codes have been applied to many important projects of DOE/SC including existing facilities (the Tevatron complex, the Relativistic Heavy Ion Collider), facilities under construction (the Large Hadron Collider, the Spallation Neutron Source, the Linac Coherent Light Source), and to future facilities (the International Linear Collider, the Rare Isotope Accelerator). The new codes have also been used to explore innovative approaches to charged particle acceleration. These approaches, based on the extremely intense fields that can be present in lasers and plasmas, may one day provide a path to the outermost reaches of the energy frontier. Furthermore, they could lead to compact, high-gradient accelerators that would have huge consequences for US science and technology, industry, and medicine. In this talk I will describe the new accelerator modeling capabilities developed under SciDAC, the essential role of multi-disciplinary collaboration with applied mathematicians, computer scientists, and other IT experts in developing these capabilities, and provide examples of how the codes have been used to support DOE/SC accelerator projects.

  17. Extraordinary tools for extraordinary science: the impact of SciDAC on accelerator science and technology

    NASA Astrophysics Data System (ADS)

    Ryne, Robert D.

    2006-09-01

    Particle accelerators are among the most complex and versatile instruments of scientific exploration. They have enabled remarkable scientific discoveries and important technological advances that span all programs within the DOE Office of Science (DOE/SC). The importance of accelerators to the DOE/SC mission is evident from an examination of the DOE document, ''Facilities for the Future of Science: A Twenty-Year Outlook.'' Of the 28 facilities listed, 13 involve accelerators. Thanks to SciDAC, a powerful suite of parallel simulation tools has been developed that represent a paradigm shift in computational accelerator science. Simulations that used to take weeks or more now take hours, and simulations that were once thought impossible are now performed routinely. These codes have been applied to many important projects of DOE/SC including existing facilities (the Tevatron complex, the Relativistic Heavy Ion Collider), facilities under construction (the Large Hadron Collider, the Spallation Neutron Source, the Linac Coherent Light Source), and to future facilities (the International Linear Collider, the Rare Isotope Accelerator). The new codes have also been used to explore innovative approaches to charged particle acceleration. These approaches, based on the extremely intense fields that can be present in lasers and plasmas, may one day provide a path to the outermost reaches of the energy frontier. Furthermore, they could lead to compact, high-gradient accelerators that would have huge consequences for US science and technology, industry, and medicine. In this talk I will describe the new accelerator modeling capabilities developed under SciDAC, the essential role of multi-disciplinary collaboration with applied mathematicians, computer scientists, and other IT experts in developing these capabilities, and provide examples of how the codes have been used to support DOE/SC accelerator projects.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Côté, Benoit; Belczynski, Krzysztof; Fryer, Chris L.

    The role of compact binary mergers as the main production site of r-process elements is investigated by combining stellar abundances of Eu observed in the Milky Way, galactic chemical evolution (GCE) simulations, binary population synthesis models, and gravitational wave measurements from Advanced LIGO. We compiled and reviewed seven recent GCE studies to extract the frequency of neutron star–neutron star (NS–NS) mergers needed to reproduce the observed [Eu/Fe] versus [Fe/H] relationship. We used our simple chemical evolution code to explore the impact of different analytical delay-time distribution functions for NS–NS mergers. We then combined our metallicity-dependent population synthesis models with our chemical evolution code to bring their predictions, for both NS–NS mergers and black hole–neutron star mergers, into a GCE context. Finally, we convolved our results with the cosmic star formation history to provide a direct comparison with current and upcoming Advanced LIGO measurements. When assuming that NS–NS mergers are the exclusive r-process sites, and that the ejected r-process mass per merger event is 0.01 M⊙, the number of NS–NS mergers needed in GCE studies is about 10 times larger than what is predicted by standard population synthesis models. These two distinct fields can only be consistent with each other when assuming optimistic rates, massive NS–NS merger ejecta, and low Fe yields for massive stars. For now, population synthesis models and GCE simulations are in agreement with the current upper limit (O1) established by Advanced LIGO during their first run of observations. Upcoming measurements will constrain the actual local NS–NS merger rate, provide valuable insight into the plausibility of the GCE requirement, and help define whether or not compact binary mergers can be the dominant source of r-process elements in the universe.
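The mass budget behind the "about 10 times larger" statement can be sketched with back-of-envelope arithmetic. In the snippet below, only the 0.01 M⊙ ejecta mass per event is quoted in the abstract; the total Galactic r-process inventory and the Galactic age are illustrative placeholder values, not figures from the study.

```python
# Back-of-envelope NS-NS merger budget. Only m_ej (0.01 M_sun per event) is
# quoted in the abstract; the Galactic r-process inventory and age below are
# illustrative assumptions used for the arithmetic, not the paper's values.
M_rproc = 1.0e4          # assumed total r-process mass in the Galaxy [M_sun]
m_ej = 0.01              # r-process ejecta per NS-NS merger [M_sun] (quoted)
t_gal = 1.0e10           # assumed Galactic star-forming age [yr]

n_events = M_rproc / m_ej                 # mergers needed over Galactic history
mean_rate = n_events / t_gal * 1.0e6      # mean Galactic rate [events per Myr]
```

Dividing such a GCE-required rate by the rate predicted from population synthesis is what yields the factor-of-ten tension the abstract describes.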

  19. Development and Benchmarking of a Hybrid PIC Code For Dense Plasmas and Fast Ignition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Witherspoon, F. Douglas; Welch, Dale R.; Thompson, John R.

    Radiation processes play an important role in the study of both fast ignition and other inertial confinement schemes, such as plasma jet driven magneto-inertial fusion, both in their effect on energy balance, and in generating diagnostic signals. In the latter case, warm and hot dense matter may be produced by the convergence of a plasma shell formed by the merging of an assembly of high Mach number plasma jets. This innovative approach has the potential advantage of creating matter of high energy densities in voluminous amounts compared with high power lasers or particle beams. An important application of this technology is as a plasma liner for the flux compression of magnetized plasma to create ultra-high magnetic fields and burning plasmas. HyperV Technologies Corp. has been developing plasma jet accelerator technology in both coaxial and linear railgun geometries to produce plasma jets of sufficient mass, density, and velocity to create such imploding plasma liners. An enabling tool for the development of this technology is the ability to model the plasma dynamics, not only in the accelerators themselves, but also in the resulting magnetized target plasma and within the merging/interacting plasma jets during transport to the target. Welch pioneered numerical modeling of such plasmas (including for fast ignition) using the LSP simulation code. LSP is an electromagnetic, parallelized plasma simulation code under development since 1995. It has a number of innovative features making it uniquely suitable for modeling high energy density plasmas, including a hybrid fluid model for electrons that allows electrons in dense plasmas to be modeled with a kinetic or fluid treatment as appropriate. In addition to in-house use at Voss Scientific, several groups carrying out research in Fast Ignition (LLNL, SNL, UCSD, AWE (UK), and Imperial College (UK)) also use LSP.
A collaborative team consisting of HyperV Technologies Corp., Voss Scientific LLC, FAR-TECH, Inc., Prism Computational Sciences, Inc. and Advanced Energy Systems Inc. joined efforts to develop new physics and numerical models for LSP in several key areas to enhance the ability of LSP to model high energy density plasmas (HEDP). This final report details those efforts. Areas addressed in this research effort include: adding radiation transport to LSP, first in 2-D and then fully 3-D; extending the EMHD model to 3-D; implementing more advanced radiation and electrode-plasma boundary conditions; and installing more efficient implicit numerical algorithms to speed complex 2-D and 3-D computations. The new capabilities allow modeling of the dominant processes in high energy density plasmas, and further assist the development and optimization of plasma jet accelerators, with particular attention to MHD instabilities and plasma/wall interaction (based on physical models for ion drag friction and ablation/erosion of the electrodes). In the first funding cycle we implemented a solver for the radiation diffusion equation. To solve this equation in 2-D, we used finite-differencing and applied the parallelized sparse-matrix solvers in the PETSc library (Argonne National Laboratory) to the resulting system of equations. A database of the necessary coefficients for materials of interest was assembled using the PROPACEOS and ATBASE codes from Prism. The model was benchmarked against Prism's 1-D radiation hydrodynamics code HELIOS, and against experimental data obtained from HyperV's separately funded plasma jet accelerator development program. Work in the second funding cycle focused on extending the radiation diffusion model to full 3-D, continuing development of the EMHD model, optimizing the direct-implicit model to speed up calculations, adding multiply ionized atoms, and improving the way boundary conditions are handled in LSP.
These new LSP capabilities were then used, along with analytic calculations and Mach2 runs, to investigate plasma jet merging, plasma detachment and transport, restrike and advanced jet accelerator design. In addition, a strong linkage to diagnostic measurements was made by modeling plasma jet experiments on PLX to support benchmarking of the code. A large number of upgrades and improvements advancing hybrid PIC algorithms were implemented in LSP during the second funding cycle. These include development of fully 3D radiation transport algorithms, new boundary conditions for plasma-electrode interactions, and a charge conserving equation of state that permits multiply ionized high-Z ions. The final funding cycle focused on 1) mitigating the effects of a slow-growing grid instability which is most pronounced in plasma jet frame expansion problems using the two-fluid Eulerian remap algorithm, 2) extension of the Eulerian Smoothing Algorithm to allow EOS/Radiation modeling, 3) simulations of collisionless shocks formed by jet merging, 4) simulations of merging jets using high-Z gases, 5) generation of PROPACEOS EOS/Opacity databases, 6) simulations of plasma jet transport experiments, 7) simulations of plasma jet penetration through transverse magnetic fields, and 8) GPU PIC code development. The tools developed during this project are applicable not only to the study of plasma jets, but also to a wide variety of HEDP plasmas of interest to DOE, including plasmas created in short-pulse laser experiments performed to study fast ignition concepts for inertial confinement fusion.
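The radiation diffusion solve described above (finite differencing plus a parallel sparse-matrix solve through PETSc) can be illustrated in miniature. The sketch below takes one 1-D backward-Euler diffusion step using SciPy's sparse solver as a stand-in for PETSc; the grid size, diffusion coefficient, time step, and boundary treatment are illustrative assumptions, not values from LSP.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

# One implicit (backward-Euler) step of a 1-D diffusion equation,
#   (I - dt * D * Laplacian) E_new = E_old,
# mirroring in miniature the finite-difference/sparse-solve approach above.
# Grid, D, dt, dx, and boundary handling are illustrative, not LSP's values.
n, dx, dt, D = 101, 1.0, 0.5, 1.0
E = np.zeros(n)
E[n // 2] = 1.0                       # initial pulse of radiation energy

r = D * dt / dx**2
A = diags([-r * np.ones(n - 1),       # sub-diagonal
           (1 + 2 * r) * np.ones(n),  # main diagonal
           -r * np.ones(n - 1)],      # super-diagonal
          offsets=[-1, 0, 1], format="csc")
E_new = spsolve(A, E)                 # the pulse spreads but stays non-negative
```

The implicit form is what makes the step unconditionally stable, which is why a sparse linear solve (PETSc in the work above) sits at the core of each time step.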

  20. Three-dimensional Monte-Carlo simulation of gamma-ray scattering and production in the atmosphere

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morris, D.J.

    1989-05-15

    Monte Carlo codes have been developed to simulate gamma-ray scattering and production in the atmosphere. The scattering code simulates interactions of low-energy gamma rays (20 to several hundred keV) from an astronomical point source in the atmosphere; a modified code also simulates scattering in a spacecraft. Four incident spectra, typical of gamma-ray bursts, solar flares, and the Crab pulsar, and 511 keV line radiation have been studied. These simulations are consistent with observations of solar flare radiation scattered from the atmosphere. The production code simulates the interactions of cosmic rays which produce high-energy (above 10 MeV) photons and electrons. It has been used to calculate gamma-ray and electron albedo intensities at Palestine, Texas and at the equator; the results agree with observations in most respects. With minor modifications this code can be used to calculate intensities of other high-energy particles. Both codes are fully three-dimensional, incorporating a curved atmosphere; the production code also incorporates the variation with both zenith and azimuth of the incident cosmic-ray intensity due to geomagnetic effects. These effects are clearly reflected in the calculated albedo by intensity contrasts between the horizon and nadir, and between the east and west horizons.
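The kind of transport calculation summarized above can be caricatured in a few lines. The sketch below is a toy plane-parallel Monte Carlo: photons enter a purely scattering slab of assumed optical depth 2, take exponentially distributed steps, and scatter isotropically until they escape out the top (contributing to the albedo) or the bottom. The slab geometry, isotropic phase function, and absence of absorption are simplifying assumptions; the codes described above use Compton kinematics, energy-dependent cross sections, and a curved atmosphere.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy plane-parallel Monte Carlo: pure isotropic scattering in a slab of
# optical depth tau_max. mu is the direction cosine measured downward, so
# tau < 0 means the photon escaped back out the top. Geometry and the
# scattering law are deliberate simplifications of the codes above.
tau_max, n_photons = 2.0, 20_000
reflected = 0
for _ in range(n_photons):
    tau, mu = 0.0, 1.0                          # enter at the top, heading down
    while 0.0 <= tau <= tau_max:
        tau += mu * -np.log(1.0 - rng.random()) # exponential optical-depth step
        mu = 2.0 * rng.random() - 1.0           # isotropic re-emission direction
    if tau < 0.0:
        reflected += 1

albedo = reflected / n_photons                  # fraction escaping out the top
```

Replacing the flat slab with spherical shells and the isotropic phase function with Klein-Nishina sampling is essentially what separates this toy from the fully three-dimensional codes the abstract describes.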
