Science.gov

Sample records for advanced petascale simulations

  1. Interoperable mesh and geometry tools for advanced petascale simulations

    SciTech Connect

    Diachin, L; Bauer, A; Fix, B; Kraftcheck, J; Jansen, K; Luo, X; Miller, M; Ollivier-Gooch, C; Shephard, M; Tautges, T; Trease, H

    2007-07-04

    SciDAC applications have a demonstrated need for advanced software tools to manage the complexities associated with sophisticated geometry, mesh, and field manipulation tasks, particularly as computer architectures move toward the petascale. The Center for Interoperable Technologies for Advanced Petascale Simulations (ITAPS) will deliver interoperable and interchangeable mesh, geometry, and field manipulation services that are of direct use to SciDAC applications. The premise of our technology development goal is to provide such services as libraries that can be used with minimal intrusion into application codes. To develop these technologies, we focus on defining a common data model and data-structure-neutral interfaces that unify a number of different services such as mesh generation and improvement, front tracking, adaptive mesh refinement, shape optimization, and solution transfer operations. We highlight the use of several ITAPS services in SciDAC applications.
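
    To make the idea of a data-structure-neutral interface concrete, the following hypothetical C++ sketch shows how a mesh service can be written once against an abstract query contract and reused with any backend that implements it. This is an illustration of the concept only; the actual ITAPS iMesh API is a C interface with its own entity, set, and tag model, and none of the names below are taken from it.

    ```cpp
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    // Hypothetical data-structure-neutral mesh interface (illustrative only).
    struct EntityHandle { std::size_t id; };

    class MeshInterface {
    public:
      virtual ~MeshInterface() = default;
      // All entities of a given topological dimension (0 = vertex, 3 = region).
      virtual std::vector<EntityHandle> getEntities(int dim) const = 0;
      // Entities of dimension `toDim` adjacent to `entity`.
      virtual std::vector<EntityHandle> getAdjacencies(EntityHandle entity,
                                                       int toDim) const = 0;
    };

    // A generic service, written once and usable with any conforming backend:
    // mean number of vertices per region.
    double meanVerticesPerRegion(const MeshInterface& mesh) {
      const auto regions = mesh.getEntities(3);
      if (regions.empty()) return 0.0;
      std::size_t nVerts = 0;
      for (EntityHandle r : regions) nVerts += mesh.getAdjacencies(r, 0).size();
      return static_cast<double>(nVerts) / regions.size();
    }

    // Minimal backend: one tetrahedral region with 4 vertices.
    class SingleTetMesh : public MeshInterface {
    public:
      std::vector<EntityHandle> getEntities(int dim) const override {
        if (dim == 3) return {EntityHandle{0}};
        if (dim == 0) return {EntityHandle{1}, EntityHandle{2},
                              EntityHandle{3}, EntityHandle{4}};
        return {};
      }
      std::vector<EntityHandle> getAdjacencies(EntityHandle,
                                               int toDim) const override {
        return toDim == 0 ? getEntities(0) : std::vector<EntityHandle>{};
      }
    };

    int main() {
      SingleTetMesh mesh;
      std::printf("vertices per region: %g\n", meanVerticesPerRegion(mesh));
    }
    ```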

  2. Interoperable Technologies for Advanced Petascale Simulations (ITAPS)

    SciTech Connect

    Shephard, Mark S

    2010-02-05

    Efforts during the past year have contributed to the continued development of the ITAPS interfaces and services as well as specific efforts to support ITAPS applications. The ITAPS interface efforts have two components. The first is working with the ITAPS team on improving the ITAPS software infrastructure and the level of compliance of our implementations of the ITAPS interfaces (iMesh, iMeshP, iRel and iGeom). The second is being involved with the discussions on the design of the iField fields interface. Efforts to move the ITAPS technologies to petascale computers have identified a number of key technical developments that are required to effectively execute the ITAPS interfaces and services. Research to address these parallel method developments has been a major emphasis of the RPI team's efforts over the past year. The development of parallel unstructured mesh methods has considered the need to scale unstructured mesh solves to massively parallel computers. These efforts, summarized in section 2.1, show that with the addition of the ITAPS procedures described in sections 2.2 and 2.3 we are able to obtain excellent strong scaling with our unstructured mesh CFD code on up to 294,912 cores of an IBM Blue Gene/P, the highest core count machine available. The ITAPS developments that have contributed to the scaling and performance of PHASTA include an iterative migration algorithm to improve the combined region and vertex balance of the mesh partition, which increases scalability, and mesh data reordering, which improves computational performance. The other developments are associated with the further development of the ITAPS parallel unstructured mesh
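
    One of the contributions named above is mesh data reordering to improve computational performance. The generic sketch below illustrates only the underlying idea: a breadth-first reordering of mesh vertices (in the spirit of Cuthill-McKee) so that entities adjacent in the mesh end up close together in memory and element loops stay cache-friendly. It is not the ITAPS or PHASTA implementation.

    ```cpp
    #include <cstddef>
    #include <queue>
    #include <vector>

    // Breadth-first renumbering of vertices from a seed, so that graph
    // neighbors receive nearby indices. Input: adjacency[v] = neighbors of v.
    // Output: newIndex[old] = new position of vertex `old`.
    std::vector<std::size_t> bfsReorder(
        const std::vector<std::vector<std::size_t>>& adjacency,
        std::size_t seed) {
      const std::size_t n = adjacency.size();
      std::vector<std::size_t> newIndex(n, n);  // n means "not yet visited"
      std::queue<std::size_t> frontier;
      std::size_t next = 0;

      frontier.push(seed);
      newIndex[seed] = next++;
      while (!frontier.empty()) {
        const std::size_t v = frontier.front();
        frontier.pop();
        for (std::size_t nb : adjacency[v]) {
          if (newIndex[nb] == n) {
            newIndex[nb] = next++;
            frontier.push(nb);
          }
        }
      }
      // Vertices unreachable from the seed keep their relative order at the end.
      for (std::size_t v = 0; v < n; ++v)
        if (newIndex[v] == n) newIndex[v] = next++;
      return newIndex;
    }
    ```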

  3. Interoperable Technologies for Advanced Petascale Simulations

    SciTech Connect

    Li, Xiaolin

    2013-01-14

    Our final report on the accomplishments of ITAPS at Stony Brook during the period covered by the research award includes component services, interface services, and applications. On the component service, we have designed and implemented robust functionality for the Lagrangian tracking of dynamic interfaces. We have migrated the hyperbolic, parabolic and elliptic solvers from stage-wise second order toward global second order schemes. We have implemented high order coupling between interface propagation and interior PDE solvers. On the interface service, we have constructed the FronTier application programmer's interface (API) and its manual page using doxygen. We installed the FronTier functional interface to conform with the ITAPS specifications, especially the iMesh and iMeshP interfaces. On applications, we have implemented deposition and dissolution models with flow, and implemented the two-reactant model for a more realistic precipitation at the pore level and its coupling with the Darcy-level model. We have continued our support to the study of fluid mixing problems in inertial confinement fusion. We have continued our support to the MHD model and its application to plasma liner implosion in fusion confinement. We have simulated a step in the reprocessing and separation of spent fuels from nuclear power plant fuel rods. We have implemented fluid-structure interaction for 3D windmill and parachute simulations. We have continued our collaboration with PNNL, BNL, LANL, ORNL, and other SciDAC institutions.

  4. Advances in petascale kinetic plasma simulation with VPIC and Roadrunner

    SciTech Connect

    Bowers, Kevin J; Albright, Brian J; Yin, Lin; Daughton, William S; Roytershteyn, Vadim; Kwan, Thomas J T

    2009-01-01

    VPIC, a first-principles 3D electromagnetic charge-conserving relativistic kinetic particle-in-cell (PIC) code, was recently adapted to run on Los Alamos's Roadrunner, the first supercomputer to break a petaflop (10^15 floating point operations per second) in the TOP500 supercomputer performance rankings. The authors give a brief overview of the modeling capabilities and optimization techniques used in VPIC and the computational characteristics of petascale supercomputers like Roadrunner. They then discuss three applications enabled by VPIC's unprecedented performance on Roadrunner: modeling laser-plasma interaction in upcoming inertial confinement fusion experiments at the National Ignition Facility (NIF), modeling short-pulse laser GeV ion acceleration, and modeling reconnection in magnetic confinement fusion experiments.
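
    For readers unfamiliar with the particle-in-cell method, the sketch below shows only the textbook non-relativistic Boris push for a single particle in uniform fields: half electric kick, magnetic rotation, half electric kick. It is a minimal illustration of the particle-advance step; VPIC's actual push is relativistic, vectorized, and coupled to a charge-conserving current deposit.

    ```cpp
    #include <array>
    #include <cstdio>

    using Vec3 = std::array<double, 3>;

    Vec3 cross(const Vec3& a, const Vec3& b) {
      return {a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]};
    }

    // Textbook non-relativistic Boris push for one particle (not VPIC's code).
    void borisPush(Vec3& x, Vec3& v, const Vec3& E, const Vec3& B,
                   double q, double m, double dt) {
      const double h = q * dt / (2.0 * m);
      Vec3 vMinus, t, s, vPrime, vPlus;
      double tSq = 0.0;
      for (int i = 0; i < 3; ++i) {
        vMinus[i] = v[i] + h * E[i];   // first half electric kick
        t[i] = h * B[i];
        tSq += t[i] * t[i];
      }
      for (int i = 0; i < 3; ++i) s[i] = 2.0 * t[i] / (1.0 + tSq);
      const Vec3 c1 = cross(vMinus, t);
      for (int i = 0; i < 3; ++i) vPrime[i] = vMinus[i] + c1[i];
      const Vec3 c2 = cross(vPrime, s);   // magnetic rotation
      for (int i = 0; i < 3; ++i) {
        vPlus[i] = vMinus[i] + c2[i];
        v[i] = vPlus[i] + h * E[i];      // second half electric kick
        x[i] += v[i] * dt;
      }
    }

    int main() {
      Vec3 x{0, 0, 0}, v{1.0e5, 0, 0};      // m, m/s
      const Vec3 E{0, 0, 0}, B{0, 0, 1.0};  // V/m, T: gyration about z
      const double q = -1.602e-19, m = 9.109e-31, dt = 1.0e-12;
      for (int step = 0; step < 5; ++step) borisPush(x, v, E, B, q, m, dt);
      std::printf("x = (%g, %g, %g)\n", x[0], x[1], x[2]);
    }
    ```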

  5. Collaboration Portal for Petascale Simulations

    SciTech Connect

    Klasky, Scott A; Podhorszki, Norbert; Mouallem, P. A.; Vouk, Mladen

    2009-01-01

    The emergence of leadership class computing is creating a tsunami of data from petascale simulations. Results are typically analyzed by dozens of scientists. In order for these scientists to digest the vast amount of data being produced by the simulations and auxiliary programs, it is critical to automate the effort to manage, analyze, visualize, and share this data. One aspect of this is leveraging their collective knowledge and experiences through a scientific social network. This can be achieved through a combination of parallel back-end services, provenance capturing, and an easy-to-use front-end tool. eSimMon is one such tool, which we developed as part of the Scientific Discovery through Advanced Computing (SciDAC) program. In this paper we describe eSimMon and discuss its ease of use, its efficiency, and its ability to accelerate scientific discovery through advanced computing.

  6. Enabling Technologies for Petascale Electromagnetic Accelerator Simulation

    SciTech Connect

    Lee, Lie-Quan; Akcelik, Volkan; Chen, Sheng; Ge, Li-Xin; Prudencio, Ernesto; Schussman, Greg; Uplenchwar, Ravi; Ng, Cho; Ko, Kwok; Luo, Xiaojun; Shephard, Mark; /Rensselaer Poly.

    2007-11-09

    The SciDAC-2 accelerator project at SLAC aims to simulate an entire three-cryomodule radio frequency (RF) unit of the International Linear Collider (ILC) main linac. Petascale computing resources, supported by advances in Applied Mathematics (AM) and Computer Science (CS) and by the INCITE Program, are essential to enable the very large-scale electromagnetic accelerator simulations required by the ILC Global Design Effort. This poster presents the recent advances and achievements in the areas of CS/AM through collaborations.

  7. Collaboration Portal for Petascale Simulations

    SciTech Connect

    Tchoua, Roselyne B; Klasky, Scott A; Podhorszki, Norbert; Mouallem, P. A.; Vouk, Mladen

    2009-01-01

    The emergence of petascale computing is creating a tsunami of data from petascale simulations. Typically, results are analyzed by dozens of scientists who often work as teams. Obviously, it is very important to help these teams by facilitating management, analysis, sharing, and visualization of the data produced by their simulations, and by the auxiliary programs and activities used in the scientific process. One aspect of this is leveraging their collective knowledge and experiences through a scientific social network. This can be achieved through a combination of back-end IT services, provenance capturing, and easy-to-use front-end tools. eSimMon is one such tool. In this paper we describe this analysis support system and discuss its ease of use, its efficiency, and its ability to accelerate scientific discovery.

  8. Community Petascale Project for Accelerator Science and Simulation: Advancing Computational Science for Future Accelerators and Accelerator Technologies

    SciTech Connect

    Spentzouris, P.; Cary, J.; McInnes, L.C.; Mori, W.; Ng, C.; Ng, E.; Ryne, R.; /LBL, Berkeley

    2011-11-14

    The design and performance optimization of particle accelerators are essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC-1 Accelerator Science and Technology project, the SciDAC-2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modelling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single physics process modeling (covered under SciDAC1) to high-fidelity multiphysics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors. ComPASS is in the first year of executing its plan to develop the next-generation HPC accelerator modeling tools. ComPASS aims to develop an integrated simulation environment that will utilize existing and new accelerator physics modules with petascale capabilities, by employing modern computing and solver technologies. The ComPASS vision is to deliver to accelerator scientists a virtual accelerator and virtual prototyping modeling environment, with the necessary multiphysics, multiscale capabilities. The plan for this development includes delivering accelerator modeling applications appropriate for each stage of the ComPASS software evolution. Such applications are already being used to address challenging problems in accelerator design and optimization. The ComPASS organization

  9. Petascale Core-Collapse Supernova Simulation

    NASA Astrophysics Data System (ADS)

    Messer, Bronson

    2009-11-01

    The advent of petascale computing brings with it the promise of substantial increases in physical fidelity for a host of scientific problems. However, the realities of computing on these resources are daunting, and the architectural features of petascale machines will require considerable innovation for effective use. Nevertheless, there exists a class of scientific problems whose ultimate answer requires the application of petascale (and beyond) computing. One example is ascertaining the core-collapse supernova mechanism and explaining the rich phenomenology associated with these events. These stellar explosions produce and disseminate a dominant fraction of the elements in the Universe; are prodigious sources of neutrinos, gravitational waves, and photons across the electromagnetic spectrum; and lead to the formation of neutron stars and black holes. I will describe our recent multidimensional supernova simulations performed on petascale platforms fielded by the DOE and NSF.

  10. Community Petascale Project for Accelerator Science and Simulation: Advancing Computational Science for Future Accelerators and Accelerator Technologies

    SciTech Connect

    Spentzouris, Panagiotis; Cary, John; Mcinnes, Lois Curfman; Mori, Warren; Ng, Cho; Ng, Esmond; Ryne, Robert; /LBL, Berkeley

    2008-07-01

    The design and performance optimization of particle accelerators are essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC1 Accelerator Science and Technology project, the SciDAC2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modeling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single physics process modeling (covered under SciDAC1) to high-fidelity multi-physics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors.

  11. Community petascale project for accelerator science and simulation : Advancing computational science for future accelerators and accelerator technologies.

    SciTech Connect

    Spentzouris, P.; Cary, J.; McInnes, L. C.; Mori, W.; Ng, C.; Ng, E.; Ryne, R.

    2008-01-01

    The design and performance optimization of particle accelerators are essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC-1 Accelerator Science and Technology project, the SciDAC-2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modelling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single physics process modeling (covered under SciDAC1) to high-fidelity multiphysics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R & D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors.

  12. Community Petascale Project for Accelerator Science and Simulation: Advancing Computational Science for Future Accelerators and Accelerator Technologies

    SciTech Connect

    Spentzouris, Panagiotis; Cary, John; Mcinnes, Lois Curfman; Mori, Warren; Ng, Cho; Ng, Esmond; Ryne, Robert; /LBL, Berkeley

    2011-10-21

    The design and performance optimization of particle accelerators are essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC-1 Accelerator Science and Technology project, the SciDAC-2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modelling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single physics process modeling (covered under SciDAC1) to high-fidelity multiphysics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors.

  13. Spiking network simulation code for petascale computers.

    PubMed

    Kunkel, Susanne; Schmidt, Maximilian; Eppler, Jochen M; Plesser, Hans E; Masumoto, Gen; Igarashi, Jun; Ishii, Shin; Fukai, Tomoki; Morrison, Abigail; Diesmann, Markus; Helias, Moritz

    2014-01-01

    Brain-scale networks exhibit a breathtaking heterogeneity in the dynamical properties and parameters of their constituents. At cellular resolution, the entities of theory are neurons and synapses and over the past decade researchers have learned to manage the heterogeneity of neurons and synapses with efficient data structures. Already early parallel simulation codes stored synapses in a distributed fashion such that a synapse solely consumes memory on the compute node harboring the target neuron. As petaflop computers with some 100,000 nodes become increasingly available for neuroscience, new challenges arise for neuronal network simulation software: Each neuron contacts on the order of 10,000 other neurons and thus has targets only on a fraction of all compute nodes; furthermore, for any given source neuron, at most a single synapse is typically created on any compute node. From the viewpoint of an individual compute node, the heterogeneity in the synaptic target lists thus collapses along two dimensions: the dimension of the types of synapses and the dimension of the number of synapses of a given type. Here we present a data structure taking advantage of this double collapse using metaprogramming techniques. After introducing the relevant scaling scenario for brain-scale simulations, we quantitatively discuss the performance on two supercomputers. We show that the novel architecture scales to the largest petascale supercomputers available today. PMID:25346682
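
    The "double collapse" described above can be made concrete with a small sketch: on each compute node, the targets of a given source neuron are held in small, type-homogeneous containers, so no per-synapse type tag or heavyweight container is paid for when there are only one or a few synapses of one or a few types. This is an illustrative simplification in C++, not the data structure of the published kernel.

    ```cpp
    #include <memory>
    #include <vector>

    struct ConnectorBase {
      virtual ~ConnectorBase() = default;
      virtual void deliverSpike(double weightScale) = 0;
    };

    // All synapses of one static type: no per-synapse type tag is stored.
    template <typename SynapseT>
    struct HomogeneousConnector : ConnectorBase {
      std::vector<SynapseT> synapses;
      void deliverSpike(double weightScale) override {
        for (auto& s : synapses) s.send(weightScale);
      }
    };

    // Per (source neuron, compute node) connector: typically 0 or 1 entries,
    // one per synapse type actually present on this node.
    struct HeterogeneousConnector : ConnectorBase {
      std::vector<std::unique_ptr<ConnectorBase>> perType;
      void deliverSpike(double weightScale) override {
        for (auto& c : perType) c->deliverSpike(weightScale);
      }
    };

    // Example synapse type; a real kernel generates such types generically.
    struct StaticSynapse {
      int targetNeuron = 0;
      double weight = 1.0;
      void send(double scale) { (void)(weight * scale); /* enqueue event here */ }
    };

    int main() {
      HeterogeneousConnector conn;
      auto homog = std::make_unique<HomogeneousConnector<StaticSynapse>>();
      StaticSynapse s;
      s.targetNeuron = 42;
      s.weight = 0.5;
      homog->synapses.push_back(s);
      conn.perType.push_back(std::move(homog));
      conn.deliverSpike(1.0);
    }
    ```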

  14. Spiking network simulation code for petascale computers

    PubMed Central

    Kunkel, Susanne; Schmidt, Maximilian; Eppler, Jochen M.; Plesser, Hans E.; Masumoto, Gen; Igarashi, Jun; Ishii, Shin; Fukai, Tomoki; Morrison, Abigail; Diesmann, Markus; Helias, Moritz

    2014-01-01

    Brain-scale networks exhibit a breathtaking heterogeneity in the dynamical properties and parameters of their constituents. At cellular resolution, the entities of theory are neurons and synapses and over the past decade researchers have learned to manage the heterogeneity of neurons and synapses with efficient data structures. Already early parallel simulation codes stored synapses in a distributed fashion such that a synapse solely consumes memory on the compute node harboring the target neuron. As petaflop computers with some 100,000 nodes become increasingly available for neuroscience, new challenges arise for neuronal network simulation software: Each neuron contacts on the order of 10,000 other neurons and thus has targets only on a fraction of all compute nodes; furthermore, for any given source neuron, at most a single synapse is typically created on any compute node. From the viewpoint of an individual compute node, the heterogeneity in the synaptic target lists thus collapses along two dimensions: the dimension of the types of synapses and the dimension of the number of synapses of a given type. Here we present a data structure taking advantage of this double collapse using metaprogramming techniques. After introducing the relevant scaling scenario for brain-scale simulations, we quantitatively discuss the performance on two supercomputers. We show that the novel architecture scales to the largest petascale supercomputers available today. PMID:25346682

  15. Community Petascale Project for Accelerator Science and Simulation

    SciTech Connect

    Warren B. Mori

    2013-02-01

    The UCLA Plasma Simulation Group is a major partner of the Community Petascale Project for Accelerator Science and Simulation. This is the final technical report. We include an overall summary, a list of publications, and individual progress reports for each year. During the past five years we have made tremendous progress in enhancing the capabilities of OSIRIS and QuickPIC, in developing new algorithms and data structures for PIC codes to run on GPUs and future many-core architectures, and in using these codes to model experiments and make new scientific discoveries. Here we summarize some highlights for which SciDAC was a major contributor.

  16. Scalable parallel programming for high performance seismic simulation on petascale heterogeneous supercomputers

    NASA Astrophysics Data System (ADS)

    Zhou, Jun

    The 1994 Northridge earthquake in Los Angeles, California, killed 57 people, injured over 8,700 and caused an estimated $20 billion in damage. Petascale simulations are needed in California and elsewhere to provide society with a better understanding of the rupture and wave dynamics of the largest earthquakes at shaking frequencies required to engineer safe structures. As heterogeneous supercomputing infrastructures become more common, numerical developments in earthquake system research are particularly challenged by the dependence on accelerator elements to enable "the Big One" simulations with higher frequency and finer resolution. Reducing time to solution and power consumption are two primary focus areas today for the enabling technology of fault rupture dynamics and seismic wave propagation in realistic 3D models of the crust's heterogeneous structure. This dissertation presents scalable parallel programming techniques for high performance seismic simulation running on petascale heterogeneous supercomputers. A real world earthquake simulation code, AWP-ODC, one of the most advanced earthquake codes to date, was chosen as the base code in this research, and the testbed is based on Titan at Oak Ridge National Laboratory, the world's largest heterogeneous supercomputer. The research work is primarily related to architecture study, computation performance tuning and software system scalability. An earthquake simulation workflow has also been developed to support efficient production sets of simulations. The highlights of the technical development are an aggressive performance optimization focusing on data locality and a notable data communication model that hides the data communication latency. This development results in optimal computation efficiency and throughput for the 13-point stencil code on heterogeneous systems, which can be extended to general high-order stencil codes. Started from scratch, the hybrid CPU/GPU version of AWP
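
    A standard way to hide communication latency in stencil codes of this kind is to post nonblocking halo exchanges, update the interior while the messages are in flight, and update the boundary only after the ghost values arrive. The 1-D MPI sketch below illustrates that pattern only; it is not the AWP-ODC communication model itself.

    ```cpp
    #include <mpi.h>
    #include <vector>

    // One time step on a 1-D domain with one ghost cell on each end (u[0] and
    // u[n-1]); halo exchange overlaps with the interior update. Requires n >= 4.
    void timestep(std::vector<double>& u, MPI_Comm comm) {
      int rank, size;
      MPI_Comm_rank(comm, &rank);
      MPI_Comm_size(comm, &size);
      const int left  = (rank > 0) ? rank - 1 : MPI_PROC_NULL;
      const int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;
      const int n = static_cast<int>(u.size());

      MPI_Request reqs[4];
      MPI_Irecv(&u[0],     1, MPI_DOUBLE, left,  0, comm, &reqs[0]);
      MPI_Irecv(&u[n - 1], 1, MPI_DOUBLE, right, 1, comm, &reqs[1]);
      MPI_Isend(&u[1],     1, MPI_DOUBLE, left,  1, comm, &reqs[2]);
      MPI_Isend(&u[n - 2], 1, MPI_DOUBLE, right, 0, comm, &reqs[3]);

      // Interior update needs no ghost values, so it overlaps communication.
      std::vector<double> next(u);
      for (int i = 2; i < n - 2; ++i)
        next[i] = 0.25 * u[i - 1] + 0.5 * u[i] + 0.25 * u[i + 1];

      MPI_Waitall(4, reqs, MPI_STATUSES_IGNORE);

      // Points adjacent to the ghosts are updated after the halos arrive.
      next[1]     = 0.25 * u[0]     + 0.5 * u[1]     + 0.25 * u[2];
      next[n - 2] = 0.25 * u[n - 3] + 0.5 * u[n - 2] + 0.25 * u[n - 1];
      u.swap(next);
    }
    ```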

  17. Final Report: Towards Optimal Petascale Simulations (TOPS), ER25785

    SciTech Connect

    Reynolds, Daniel R

    2011-04-15

    Multiscale, multirate scientific and engineering applications in the SciDAC portfolio possess resolution requirements that are practically inexhaustible and demand execution on the highest-capability computers available, which will soon reach the petascale. While the variety of applications is enormous, their needs for mathematical software infrastructure are surprisingly coincident; moreover the chief bottleneck is often the solver. At their current scalability limits, many applications spend a vast majority of their operations in solvers, due to solver algorithmic complexity that is superlinear in the problem size, whereas other phases scale linearly. Furthermore, the solver may be the phase of the simulation with the poorest parallel scalability, due to intrinsic global dependencies. This project brings together the providers of some of the world's most widely distributed, freely available, scalable solver software and focuses them on relieving this bottleneck for many specific applications within SciDAC, which are representative of many others outside. Solver software directly supported under TOPS includes: hypre, PETSc, SUNDIALS, SuperLU, TAO, and Trilinos. Transparent access is also provided to other solver software through the TOPS interface. The primary goals of TOPS are the development, testing, and dissemination of solver software, especially for systems governed by PDEs. Upon discretization, these systems possess mathematical structure that must be exploited for optimal scalability; therefore, application-targeted algorithmic research is included. TOPS software development includes attention to high performance as well as interoperability among the solver components. Integration of TOPS solvers into SciDAC applications is also directly supported by this proposal. The role of the UCSD PI in this overall CET is one of direct interaction between the TOPS software partners and various DOE applications scientists, specifically toward
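
    PETSc is one of the solver packages named above. As a rough illustration of the "solver as a library" idea, the sketch below assembles a 1-D Laplacian and hands it to a PETSc Krylov solver; the method and preconditioner can then be changed at run time via options rather than code changes. The calls follow the standard PETSc C API (error checking omitted), but exact signatures and defaults vary between releases, so treat this as a sketch rather than tested code.

    ```cpp
    #include <petscksp.h>

    int main(int argc, char **argv) {
      PetscInitialize(&argc, &argv, NULL, NULL);

      const PetscInt n = 100;
      Mat A;
      MatCreate(PETSC_COMM_WORLD, &A);
      MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);
      MatSetFromOptions(A);
      MatSetUp(A);

      // Assemble the locally owned rows of a 1-D Laplacian stencil.
      PetscInt rstart, rend;
      MatGetOwnershipRange(A, &rstart, &rend);
      for (PetscInt i = rstart; i < rend; ++i) {
        if (i > 0)     MatSetValue(A, i, i - 1, -1.0, INSERT_VALUES);
        if (i < n - 1) MatSetValue(A, i, i + 1, -1.0, INSERT_VALUES);
        MatSetValue(A, i, i, 2.0, INSERT_VALUES);
      }
      MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
      MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

      Vec x, b;
      MatCreateVecs(A, &x, &b);
      VecSet(b, 1.0);

      KSP ksp;
      KSPCreate(PETSC_COMM_WORLD, &ksp);
      KSPSetOperators(ksp, A, A);
      KSPSetFromOptions(ksp);  // e.g. -ksp_type cg -pc_type hypre at run time
      KSPSolve(ksp, b, x);

      KSPDestroy(&ksp);
      VecDestroy(&x);
      VecDestroy(&b);
      MatDestroy(&A);
      PetscFinalize();
      return 0;
    }
    ```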

  18. Petascale Simulation Initiative Tech Base: FY2007 Final Report

    SciTech Connect

    May, J; Chen, R; Jefferson, D; Leek, J; Kaplan, I; Tannahill, J

    2007-10-26

    The Petascale Simulation Initiative began as an LDRD project in the middle of Fiscal Year 2004. The goal of the project was to develop techniques to allow large-scale scientific simulation applications to better exploit the massive parallelism that will come with computers running at petaflop speeds. One of the major products of this work was the design and prototype implementation of a programming model and a runtime system that lets data-parallel applications be extended to use task parallelism. By adopting task parallelism, applications can use processing resources more flexibly, exploit multiple forms of parallelism, and support more sophisticated multiscale and multiphysics models. Our programming model was originally called the Symponents Architecture but is now known as Cooperative Parallelism, and the runtime software that supports it is called Coop. (However, we sometimes refer to the programming model as Coop for brevity.) We have documented the programming model and runtime system in a submitted conference paper [1]. This report focuses on the specific accomplishments of the Cooperative Parallelism project (as we now call it) under Tech Base funding in FY2007. Development and implementation of the model under LDRD funding alone proceeded to the point of demonstrating a large-scale materials modeling application using Coop on more than 1300 processors by the end of FY2006. Beginning in FY2007, the project received funding from both LDRD and the Computation Directorate Tech Base program. Later in the year, after the three-year term of the LDRD funding ended, the ASC program supported the project with additional funds. The goal of the Tech Base effort was to bring Coop from a prototype to a production-ready system that a variety of LLNL users could work with. Specifically, the major tasks that we planned for the project were: (1) Port SARS [former name of the Coop runtime system] to another LLNL platform, probably Thunder or Peloton (depending
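
    Coop itself is an LLNL runtime whose API is not reproduced here. The sketch below only illustrates the general idea of mixing data parallelism with task parallelism, using standard C++ futures as a stand-in: an irregular, expensive sub-problem is launched as an asynchronous task while the regular data-parallel loop proceeds. The function and variable names are invented for the example.

    ```cpp
    #include <future>
    #include <numeric>
    #include <vector>

    // Stand-in for a fine-scale model evaluated only for a few "hot" cells.
    double expensiveSubmodel(const std::vector<double>& cell) {
      return std::accumulate(cell.begin(), cell.end(), 0.0) / cell.size();
    }

    int main() {
      std::vector<std::vector<double>> cells(1000, std::vector<double>(64, 1.0));
      std::vector<std::future<double>> pending;
      double bulk = 0.0;

      for (std::size_t i = 0; i < cells.size(); ++i) {
        if (i % 100 == 0) {
          // Task parallelism: hand the irregular work to an asynchronous task.
          pending.push_back(std::async(std::launch::async, expensiveSubmodel,
                                       std::cref(cells[i])));
        } else {
          // Regular data-parallel work continues without waiting.
          bulk += cells[i][0];
        }
      }

      double fine = 0.0;
      for (auto& f : pending) fine += f.get();  // collect task results
      return (bulk + fine) > 0.0 ? 0 : 1;
    }
    ```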

  19. Performance Impact of I/O on QMCPack Simulations at the Petascale and Beyond

    SciTech Connect

    Herbein, Stephen N; Matheny, Michael E; Wezowicz, Matthew R; Krogel, Jaron T; Kim, Jeongnim; Klasky, Scott A; Taufer, Michela

    2014-01-01

    Traditional petascale applications, such as QMCPack, can scale their computations to completely utilize modern supercomputers like Titan, but they cannot scale their I/O. To preserve scalability, scientists cannot save data at the granularity needed to enable scientific discovery and are forced to use large intervals between two checkpoint calls. In this paper, we work to increase the granularity of the I/O in QMCPack simulations without increasing the associated I/O overhead or compromising the scalability of the simulations. Our solution redesigns the I/O algorithms used by QMCPack to gather finer-grained data at high frequencies and integrates the ADIOS API to select effective I/O methods without major code changes. The extension of a tool such as Skel to mimic the variable I/O in QMCPack allows us to predict the I/O performance of the code when using ADIOS methods at the petascale. We show how I/O libraries like ADIOS allow us to increase the amount of scientific data extracted from QMCPack simulations at the granularity desired by the scientists while keeping the I/O overhead below 10%. We also show how the impact of checkpoint I/O for the QMCPack code using ADIOS is below 5% when using preventive tactics for checkpointing at the petascale and beyond.
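
    The work above used the ADIOS 1.x API. Purely as an illustration of the write pattern it describes (periodic checkpoints of a distributed global array), the sketch below uses the newer ADIOS2 C++ bindings; the variable name, file name, and checkpoint interval are invented for the example, and the call sequence is a sketch rather than the paper's code.

    ```cpp
    #include <adios2.h>
    #include <mpi.h>
    #include <vector>

    int main(int argc, char** argv) {
      MPI_Init(&argc, &argv);
      int rank, size;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);

      const std::size_t nLocal = 1000;  // e.g. walker data owned by this rank
      std::vector<double> walkers(nLocal, static_cast<double>(rank));

      adios2::ADIOS adios(MPI_COMM_WORLD);
      adios2::IO io = adios.DeclareIO("checkpoint");
      // Global array: total shape, this rank's start offset, this rank's count.
      const std::size_t gN = static_cast<std::size_t>(size) * nLocal;
      const std::size_t offset = static_cast<std::size_t>(rank) * nLocal;
      auto var = io.DefineVariable<double>("walkers", {gN}, {offset}, {nLocal});

      adios2::Engine writer = io.Open("qmc_ckpt.bp", adios2::Mode::Write);
      for (int step = 0; step < 10; ++step) {
        // ... advance the simulation ...
        if (step % 2 == 0) {  // checkpoint interval chosen by the user
          writer.BeginStep();
          writer.Put(var, walkers.data());
          writer.EndStep();
        }
      }
      writer.Close();
      MPI_Finalize();
      return 0;
    }
    ```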

  20. Final Report for "Community Petascale Project for Accelerator Science and Simulations".

    SciTech Connect

    Cary, J. R.; Bruhwiler, D. L.; Stoltz, P. H.; Cormier-Michel, E.; Cowan, B.; Schwartz, B. T.; Bell, G.; Paul, K.; Veitzer, S.

    2013-04-19

    This final report describes the work that has been accomplished over the past 5 years under the Community Petascale Project for Accelerator Science and Simulation (ComPASS) at Tech-X Corporation. Tech-X has been involved in the full range of ComPASS activities, with simulation of laser plasma accelerator concepts, mainly in collaboration with the LOASIS program at LBNL; simulation of coherent electron cooling in collaboration with BNL; modeling of electron clouds in high intensity accelerators in collaboration with researchers at Fermilab; and accurate modeling of superconducting RF cavities in collaboration with Fermilab, JLab and the Cockcroft Institute in the UK.

  1. Real science at the petascale.

    PubMed

    Saksena, Radhika S; Boghosian, Bruce; Fazendeiro, Luis; Kenway, Owain A; Manos, Steven; Mazzeo, Marco D; Sadiq, S Kashif; Suter, James L; Wright, David; Coveney, Peter V

    2009-06-28

    We describe computational science research that uses petascale resources to achieve scientific results at unprecedented scales and resolution. The applications span a wide range of domains, from investigation of fundamental problems in turbulence through computational materials science research to biomedical applications at the forefront of HIV/AIDS research and cerebrovascular haemodynamics. This work was mainly performed on the US TeraGrid 'petascale' resource, Ranger, at Texas Advanced Computing Center, in the first half of 2008 when it was the largest computing system in the world available for open scientific research. We have sought to use this petascale supercomputer optimally across application domains and scales, exploiting the excellent parallel scaling performance found on up to at least 32,768 cores for certain of our codes in the so-called 'capability computing' category as well as high-throughput intermediate-scale jobs for ensemble simulations in the 32-512 core range. Furthermore, this activity provides evidence that conventional parallel programming with MPI should be successful at the petascale in the short to medium term. We also report on the parallel performance of some of our codes on up to 65,636 cores on the IBM Blue Gene/P system at the Argonne Leadership Computing Facility, which has recently been named the fastest supercomputer in the world for open science. PMID:19451110

  2. Enabling Breakthrough Kinetic Simulations of the Magnetosphere Using Petascale Computing

    NASA Astrophysics Data System (ADS)

    Vu, H. X.; Karimabadi, H.; Omelchenko, Y.; Tatineni, M.; Majumdar, A.; Krauss-Varban, D.; Dorelli, J.

    2009-12-01

    Currently, global magnetospheric simulations are predominantly based on single-fluid magnetohydrodynamics (MHD). MHD simulations have proven useful in studies of the global dynamics of the magnetosphere with the goal of predicting eminent features of substorms and other global events. But it is well known that the magnetosphere is dominated by ion kinetic effects, which are ignored in MHD simulations, and many key aspects of the magnetosphere relating to transport and structure of boundaries await global kinetic simulations. We are using our recent innovations in hybrid (electron fluid, kinetic ions) simulations, as being developed in our Hybrid3D (H3D) code, and the power of massively parallel machines to make breakthrough 3D global kinetic simulations of the magnetosphere. The innovations include (i) a multi-zone (asynchronous) algorithm, (ii) dynamic load balancing, and (iii) code adaptation and optimization for large numbers of processors. In this presentation we will show preliminary results of our progress to date using from 512 to over 8192 cores. In particular, we focus on what we believe to be the first demonstration of the formation of a flux rope in 3D global hybrid simulations. As in the MHD simulations, the resulting flux rope has a very complex structure, wrapping up field lines from different regions, and appears to be connected on at least one end to Earth. Magnetic topology of the FTE is examined to reveal the existence of several separators (3D X-lines). The formation and growth of this structure will be discussed and spatial profiles of the magnetic and plasma variables will be compared with those from MHD simulations.

  3. Topology for statistical modeling of petascale data.

    SciTech Connect

    Pascucci, Valerio; Mascarenhas, Ajith Arthur; Rusek, Korben; Bennett, Janine Camille; Levine, Joshua; Pebay, Philippe Pierre; Gyulassy, Attila; Thompson, David C.; Rojas, Joseph Maurice

    2011-07-01

    This document presents current technical progress and dissemination of results for the Mathematics for Analysis of Petascale Data (MAPD) project titled 'Topology for Statistical Modeling of Petascale Data', funded by the Office of Science Advanced Scientific Computing Research (ASCR) Applied Math program. Many commonly used algorithms for mathematical analysis do not scale well enough to accommodate the size or complexity of petascale data produced by computational simulations. The primary goal of this project is thus to develop new mathematical tools that address both the petascale size and uncertain nature of current data. At a high level, our approach is based on the complementary techniques of combinatorial topology and statistical modeling. In particular, we use combinatorial topology to filter out spurious data that would otherwise skew statistical modeling techniques, and we employ advanced algorithms from algebraic statistics to efficiently find globally optimal fits to statistical models. This document summarizes the technical advances we have made to date that were made possible in whole or in part by MAPD funding. These technical contributions can be divided loosely into three categories: (1) advances in the field of combinatorial topology, (2) advances in statistical modeling, and (3) new integrated topological and statistical methods.

  4. Hierarchical Petascale Simulation Framework for Stress Corrosion Cracking

    SciTech Connect

    Vashishta, Priya

    2014-12-01

    Reaction Dynamics in Energetic Materials: Detonation is a prototype of mechanochemistry, in which mechanically and thermally induced chemical reactions far from equilibrium exhibit vastly different behaviors. It is also one of the hardest multiscale physics problems, in which diverse length and time scales play important roles. The CACS group has performed multimillion-atom reactive MD simulations to reveal a novel two-stage reaction mechanism during the detonation of cyclotrimethylenetrinitramine (RDX) crystal. Rapid production of N2 and H2O within ~10 ps is followed by delayed production of CO molecules within ~ 1 ns. They found that further decomposition towards the final products is inhibited by the formation of large metastable C- and O-rich clusters with fractal geometry. The CACS group has also simulated the oxidation dynamics of close-packed aggregates of aluminum nanoparticles passivated by oxide shells. Their simulation results suggest an unexpectedly active role of the oxide shell as a nanoreactor.

  5. COMPASS, the COMmunity Petascale Project for Accelerator Science and Simulation, a broad computational accelerator physics initiative

    SciTech Connect

    J.R. Cary; P. Spentzouris; J. Amundson; L. McInnes; M. Borland; B. Mustapha; B. Norris; P. Ostroumov; Y. Wang; W. Fischer; A. Fedotov; I. Ben-Zvi; R. Ryne; E. Esarey; C. Geddes; J. Qiang; E. Ng; S. Li; C. Ng; R. Lee; L. Merminga; H. Wang; D.L. Bruhwiler; D. Dechow; P. Mullowney; P. Messmer; C. Nieter; S. Ovtchinnikov; K. Paul; P. Stoltz; D. Wade-Stein; W.B. Mori; V. Decyk; C.K. Huang; W. Lu; M. Tzoufras; F. Tsung; M. Zhou; G.R. Werner; T. Antonsen; T. Katsouleas

    2007-06-01

    Accelerators are the largest and most costly scientific instruments of the Department of Energy, with uses across a broad range of science, including colliders for particle physics and nuclear science and light sources and neutron sources for materials studies. COMPASS, the Community Petascale Project for Accelerator Science and Simulation, is a broad, four-office (HEP, NP, BES, ASCR) effort to develop computational tools for the prediction and performance enhancement of accelerators. The tools being developed can be used to predict the dynamics of beams in the presence of optical elements and space charge forces, the calculation of electromagnetic modes and wake fields of cavities, the cooling induced by comoving beams, and the acceleration of beams by intense fields in plasmas generated by beams or lasers. In SciDAC-1, the computational tools had multiple successes in predicting the dynamics of beams and beam generation. In SciDAC-2 these tools will be petascale enabled to allow the inclusion of an unprecedented level of physics for detailed prediction.

  6. COMPASS, the COMmunity Petascale Project for Accelerator Science And Simulation, a Broad Computational Accelerator Physics Initiative

    SciTech Connect

    Cary, J.R.; Spentzouris, P.; Amundson, J.; McInnes, L.; Borland, M.; Mustapha, B.; Norris, B.; Ostroumov, P.; Wang, Y.; Fischer, W.; Fedotov, A.; Ben-Zvi, I.; Ryne, R.; Esarey, E.; Geddes, C.; Qiang, J.; Ng, E.; Li, S.; Ng, C.; Lee, R.; Merminga, L.; /Jefferson Lab /Tech-X, Boulder /UCLA /Colorado U. /Maryland U. /Southern California U.

    2007-11-09

    Accelerators are the largest and most costly scientific instruments of the Department of Energy, with uses across a broad range of science, including colliders for particle physics and nuclear science and light sources and neutron sources for materials studies. COMPASS, the Community Petascale Project for Accelerator Science and Simulation, is a broad, four-office (HEP, NP, BES, ASCR) effort to develop computational tools for the prediction and performance enhancement of accelerators. The tools being developed can be used to predict the dynamics of beams in the presence of optical elements and space charge forces, the calculation of electromagnetic modes and wake fields of cavities, the cooling induced by comoving beams, and the acceleration of beams by intense fields in plasmas generated by beams or lasers. In SciDAC-1, the computational tools had multiple successes in predicting the dynamics of beams and beam generation. In SciDAC-2 these tools will be petascale enabled to allow the inclusion of an unprecedented level of physics for detailed prediction.

  7. COMPASS, the COMmunity Petascale project for Accelerator Science and Simulation, a broad computational accelerator physics initiative

    SciTech Connect

    Cary, J.R.; Spentzouris, P.; Amundson, J.; McInnes, L.; Borland, M.; Mustapha, B.; Ostroumov, P.; Wang, Y.; Fischer, W.; Fedotov, A.; Ben-Zvi, I.; Ryne, R.; Esarey, E.; Geddes, C.; Qiang, J.; Ng, E.; Li, S.; Ng, C.; Lee, R.; Merminga, L.; Wang, H.; Bruhwiler, D.L.; Dechow, D.; Mullowney, P.; Messmer, P.; Nieter, C.; Ovtchinnikov, S.; Paul, K.; Stoltz, P.; Wade-Stein, D.; Mori, W.B.; Decyk, V.; Huang, C.K.; Lu, W.; Tzoufras, M.; Tsung, F.; Zhou, M.; Werner, G.R.; Antonsen, T.; Katsouleas, T.; Morris, B.

    2007-07-16

    Accelerators are the largest and most costly scientific instruments of the Department of Energy, with uses across a broad range of science, including colliders for particle physics and nuclear science and light sources and neutron sources for materials studies. COMPASS, the Community Petascale Project for Accelerator Science and Simulation, is a broad, four-office (HEP, NP, BES, ASCR) effort to develop computational tools for the prediction and performance enhancement of accelerators. The tools being developed can be used to predict the dynamics of beams in the presence of optical elements and space charge forces, the calculation of electromagnetic modes and wake fields of cavities, the cooling induced by comoving beams, and the acceleration of beams by intense fields in plasmas generated by beams or lasers. In SciDAC-1, the computational tools had multiple successes in predicting the dynamics of beams and beam generation. In SciDAC-2 these tools will be petascale enabled to allow the inclusion of an unprecedented level of physics for detailed prediction.

  8. COMPASS, the COMmunity petascale project for accelerator science and simulation, a broad computational accelerator physics initiative

    NASA Astrophysics Data System (ADS)

    Cary, J. R.; Spentzouris, P.; Amundson, J.; McInnes, L.; Borland, M.; Mustapha, B.; Norris, B.; Ostroumov, P.; Wang, Y.; Fischer, W.; Fedotov, A.; Ben-Zvi, I.; Ryne, R.; Esarey, E.; Geddes, C.; Qiang, J.; Ng, E.; Li, S.; Ng, C.; Lee, R.; Merminga, L.; Wang, H.; Bruhwiler, D. L.; Dechow, D.; Mullowney, P.; Messmer, P.; Nieter, C.; Ovtchinnikov, S.; Paul, K.; Stoltz, P.; Wade-Stein, D.; Mori, W. B.; Decyk, V.; Huang, C. K.; Lu, W.; Tzoufras, M.; Tsung, F.; Zhou, M.; Werner, G. R.; Antonsen, T.; Katsouleas, T.

    2007-07-01

    Accelerators are the largest and most costly scientific instruments of the Department of Energy, with uses across a broad range of science, including colliders for particle physics and nuclear science and light sources and neutron sources for materials studies. COMPASS, the Community Petascale Project for Accelerator Science and Simulation, is a broad, four-office (HEP, NP, BES, ASCR) effort to develop computational tools for the prediction and performance enhancement of accelerators. The tools being developed can be used to predict the dynamics of beams in the presence of optical elements and space charge forces, the calculation of electromagnetic modes and wake fields of cavities, the cooling induced by comoving beams, and the acceleration of beams by intense fields in plasmas generated by beams or lasers. In SciDAC-1, the computational tools had multiple successes in predicting the dynamics of beams and beam generation. In SciDAC-2 these tools will be petascale enabled to allow the inclusion of an unprecedented level of physics for detailed prediction.

  9. Reactive Molecular Dynamics Simulations at the Petascale (Invited)

    NASA Astrophysics Data System (ADS)

    Nakano, A.

    2013-12-01

    We are developing a divide-conquer-recombine algorithmic framework into a metascalable (or 'design once, scale on new architectures') parallelization scheme to perform large spatiotemporal-scale reactive molecular dynamics simulations. The scheme has achieved parallel efficiency well over 0.9 on 786,432 IBM BlueGene/Q processors for 8.5 trillion-atom molecular dynamics and 1.9 trillion electronic degrees-of-freedom quantum molecular dynamics in the framework of density functional theory. Simulation results reveal intricate interplay between photoexcitation, mechanics, flow, and chemical reactions at the nanoscale. Specifically, we will discuss atomistic mechanisms of: (1) rapid hydrogen production from water using metallic alloy nanoparticles; (2) molecular control of charge transfer, charge recombination, and singlet fission for efficient solar cells; and (3) mechanically enhanced reaction kinetics in nanobubbles and nanojets.

  10. Mapping to Irregular Torus Topologies and Other Techniques for Petascale Biomolecular Simulation

    PubMed Central

    Phillips, James C.; Sun, Yanhua; Jain, Nikhil; Bohm, Eric J.; Kalé, Laxmikant V.

    2014-01-01

    Currently deployed petascale supercomputers typically use toroidal network topologies in three or more dimensions. While these networks perform well for topology-agnostic codes on a few thousand nodes, leadership machines with 20,000 nodes require topology awareness to avoid network contention for communication-intensive codes. Topology adaptation is complicated by irregular node allocation shapes and holes due to dedicated input/output nodes or hardware failure. In the context of the popular molecular dynamics program NAMD, we present methods for mapping a periodic 3-D grid of fixed-size spatial decomposition domains to 3-D Cray Gemini and 5-D IBM Blue Gene/Q toroidal networks to enable hundred-million atom full machine simulations, and to similarly partition node allocations into compact domains for smaller simulations using multiple-copy algorithms. Additional enabling techniques are discussed and performance is reported for NCSA Blue Waters, ORNL Titan, ANL Mira, TACC Stampede, and NERSC Edison. PMID:25594075
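
    As a toy illustration of topology-aware placement (not NAMD's actual mapping algorithm), the sketch below folds a 3-D grid of spatial-decomposition domains onto a hypothetical 3-D torus so that neighboring domains land on neighboring nodes and most halo traffic stays on single-hop links; the torus and grid dimensions are arbitrary example values.

    ```cpp
    #include <array>
    #include <cstdio>

    struct Torus { int X, Y, Z; };  // torus dimensions in nodes

    // Map domain (i, j, k) of an (Nx, Ny, Nz) domain grid to torus coordinates
    // by scaling each axis; neighboring domains then differ by at most one
    // torus hop per axis whenever the domain grid is at least as fine as the
    // torus in that direction.
    std::array<int, 3> mapDomainToNode(int i, int j, int k,
                                       int Nx, int Ny, int Nz, Torus t) {
      return { (i * t.X) / Nx, (j * t.Y) / Ny, (k * t.Z) / Nz };
    }

    int main() {
      const Torus torus{16, 8, 12};           // hypothetical torus shape
      const int Nx = 64, Ny = 32, Nz = 48;    // hypothetical domain grid
      const auto a = mapDomainToNode(10, 10, 10, Nx, Ny, Nz, torus);
      const auto b = mapDomainToNode(11, 10, 10, Nx, Ny, Nz, torus);  // +x neighbor
      std::printf("domain A -> node (%d,%d,%d), neighbor B -> node (%d,%d,%d)\n",
                  a[0], a[1], a[2], b[0], b[1], b[2]);
    }
    ```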

  11. First-Principles Petascale Simulations for Predicting Deflagration to Detonation Transition in Hydrogen-Oxygen Mixtures

    SciTech Connect

    Khokhlov, Alexei; Austin, Joanna

    2015-03-02

    Hydrogen has emerged as an important fuel across a range of industries as a means of achieving energy independence and reducing emissions. DDT and the resulting detonation waves in hydrogen-oxygen can have especially catastrophic consequences in a variety of industrial and energy-producing settings related to hydrogen. First-principles numerical simulations of flame acceleration and DDT are required for an in-depth understanding of the phenomena and for facilitating the design of safe hydrogen systems. The goals of this project were (1) to develop a first-principles petascale reactive-flow Navier-Stokes simulation code for predicting gaseous high-speed combustion and detonation (HSCD) phenomena and (2) to demonstrate the feasibility of first-principles simulations of rapid flame acceleration and deflagration-to-detonation transition (DDT) in a stoichiometric hydrogen-oxygen mixture (2H2 + O2). The goals of the project have been accomplished. We have developed a novel numerical simulation code, named HSCD, for performing first-principles direct numerical simulations of high-speed hydrogen combustion. We carried out a series of validating numerical simulations of inert and reactive shock reflection experiments in shock tubes. We then performed a pilot numerical simulation of flame acceleration in a long pipe. The simulation showed the transition of the rapidly accelerating flame into a detonation. The DDT simulations were performed using BG/Q Mira at the Argonne National Laboratory, currently the fourth fastest supercomputer in the world. HSCD is currently being actively used on BG/Q Mira for a systematic study of the DDT processes using computational resources provided through the 2014-2016 INCITE allocation "First-principles simulations of high-speed combustion and detonation." While the project was focused on hydrogen-oxygen and on DDT, with appropriate modifications of the input physics (reaction kinetics, transport coefficients, equation of state) the code has a much

  12. Computational Science Research in Support of Petascale Electromagnetic Modeling

    SciTech Connect

    Lee, L.-Q.; Akcelik, V; Ge, L; Chen, S; Schussman, G; Candel, A; Li, Z; Xiao, L; Kabel, A; Uplenchwar, R; Ng, C; Ko, K; /SLAC

    2008-06-20

    Computational science research components were vital parts of the SciDAC-1 accelerator project and are continuing to play a critical role in the newly funded SciDAC-2 accelerator project, the Community Petascale Project for Accelerator Science and Simulation (ComPASS). Recent advances and achievements in the area of computational science research in support of petascale electromagnetic modeling for accelerator design analysis are presented, which include shape determination of superconducting RF cavities, a mesh-based multilevel preconditioner for solving highly indefinite linear systems, a moving window using h- or p-refinement for time-domain short-range wakefield calculations, and improved scalable application I/O.

  13. Petascale Simulations of the Morphology and the Molecular Interface of Bulk Heterojunctions

    DOE PAGES

    Carrillo, Jan-Michael Y.; Seibers, Zach; Kumar, Rajeev; Matheson, Michael A.; Ankner, John F.; Goswami, Monojoy; Bhaskaran-Nair, Kiran; Shelton, William A.; Sumpter, Bobby G.; Kilbey, S. Michael

    2016-07-14

    Understanding how additives interact and segregate within bulk heterojunction (BHJ) thin films is critical for exercising control over structure at multiple length scales and delivering improvements in photovoltaic performance. The morphological evolution of poly(3-hexylthiophene) (P3HT) and phenyl-C61-butyric acid methyl ester (PCBM) blends that are commensurate with the size of a BHJ thin film is examined using petascale coarse-grained molecular dynamics simulations. When comparing two-component and three-component systems containing short P3HT chains as additives undergoing thermal annealing, we demonstrate that the short chains alter the morphology in apparently useful ways: they efficiently migrate to the P3HT/PCBM interface, increasing the P3HT domain size and interfacial area. Simulation results agree with depth profiles determined from neutron reflectometry measurements that reveal PCBM enrichment near substrate and air interfaces, but a decrease in that PCBM enrichment when a small amount of short P3HT chains are integrated into the BHJ blend. Atomistic simulations of the P3HT/PCBM blend interfaces show a non-monotonic dependence of the interfacial thickness as a function of number of repeat units in the oligomeric P3HT additive, and the thiophene rings orient parallel to the interfacial plane as they approach the PCBM domain. Using the nanoscale geometries of the P3HT oligomers, LUMO and HOMO energy levels calculated by density functional theory are found to be invariant across the donor/acceptor interface. Finally, these connections between additives, processing, and morphology at all length scales are generally useful for efforts to improve device performance.

  14. Petascale Simulations of the Morphology and the Molecular Interface of Bulk Heterojunctions.

    PubMed

    Carrillo, Jan-Michael Y; Seibers, Zach; Kumar, Rajeev; Matheson, Michael A; Ankner, John F; Goswami, Monojoy; Bhaskaran-Nair, Kiran; Shelton, William A; Sumpter, Bobby G; Kilbey, S Michael

    2016-07-26

    Understanding how additives interact and segregate within bulk heterojunction (BHJ) thin films is critical for exercising control over structure at multiple length scales and delivering improvements in photovoltaic performance. The morphological evolution of poly(3-hexylthiophene) (P3HT) and phenyl-C61-butyric acid methyl ester (PCBM) blends that are commensurate with the size of a BHJ thin film is examined using petascale coarse-grained molecular dynamics simulations. Comparisons between two-component and three-component systems containing short P3HT chains as additives undergoing thermal annealing demonstrate that the short chains alter the morphology in apparently useful ways: they efficiently migrate to the P3HT/PCBM interface, increasing the P3HT domain size and interfacial area. Simulation results agree with depth profiles determined from neutron reflectometry measurements that reveal PCBM enrichment near substrate and air interfaces but a decrease in that PCBM enrichment when a small amount of short P3HT chains are integrated into the BHJ blend. Atomistic simulations of the P3HT/PCBM blend interfaces show a nonmonotonic dependence of the interfacial thickness as a function of number of repeat units in the oligomeric P3HT additive, and the thiophene rings orient parallel to the interfacial plane as they approach the PCBM domain. Using the nanoscale geometries of the P3HT oligomers, LUMO and HOMO energy levels calculated by density functional theory are found to be invariant across the donor/acceptor interface. These connections between additives, processing, and morphology at all length scales are generally useful for efforts to improve device performance. PMID:27299676

  15. Topology for Statistical Modeling of Petascale Data.

    SciTech Connect

    Bennett, Janine Camille; Pebay, Philippe Pierre; Pascucci, Valerio; Levine, Joshua; Gyulassy, Attila; Rojas, Joseph Maurice

    2014-07-01

    This document presents current technical progress and dissemination of results for the Mathematics for Analysis of Petascale Data (MAPD) project titled "Topology for Statistical Modeling of Petascale Data", funded by the Office of Science Advanced Scientific Computing Research (ASCR) Applied Math program.

  16. Science and Engineering in the Petascale Era

    PubMed Central

    Dunning, Thom H.; Schulten, Klaus; Tromp, Jeroen; Ostriker, Jeremiah P.; Droegemeier, Kelvin; Xue, Ming; Fussell, Paul

    2011-01-01

    What breakthrough advances will petascale computing bring to various science and engineering fields? Experts in everything from astronomy to seismology envision the opportunities ahead and the impact they’ll have on advancing our understanding of the world. PMID:21998556

  17. Science and Engineering in the Petascale Era.

    PubMed

    Dunning, Thom H; Schulten, Klaus; Tromp, Jeroen; Ostriker, Jeremiah P; Droegemeier, Kelvin; Xue, Ming; Fussell, Paul

    2009-09-01

    What breakthrough advances will petascale computing bring to various science and engineering fields? Experts in everything from astronomy to seismology envision the opportunities ahead and the impact they'll have on advancing our understanding of the world. PMID:21998556

  18. Molecular dynamics simulation: at a crossroad between molecular biophysics and petascale computing

    NASA Astrophysics Data System (ADS)

    Cheng, Xiaolin

    2015-03-01

    High-performance computing (HPC) has become crucial for most advances made in chemistry and biology today. In particular, biophysical simulation is capable of helping generate critical new insights and drive the direction of experimentation. In this talk, I will discuss our work towards addressing some fundamental membrane biophysical questions using HPC capabilities at Oak Ridge National Laboratory. I will first provide a synopsis of our current progress in developing molecular dynamics (MD) techniques that make efficient use of massively parallel supercomputers. I will then discuss a few examples of large-scale MD simulations of biomembrane vesicles, an effort aimed at shedding light on the lateral organization and cross-layer coupling in biologically-relevant membranes. In conclusion, I will discuss a few scientific and technical challenges faced by MD simulation at the exascale. This research used resources of the Oak Ridge Leadership Computing Facility at the Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725.

  19. Foundational Tools for Petascale Computing

    SciTech Connect

    Miller, Barton

    2014-05-19

    The Paradyn project has a history of developing algorithms, techniques, and software that push the cutting edge of tool technology for high-end computing systems. Under this funding, we are working on a three-year agenda to make substantial new advances in support of new and emerging Petascale systems. The overall goal for this work is to address the steady increase in complexity of these petascale systems. Our work covers two key areas: (1) The analysis, instrumentation and control of binary programs. Work in this area falls under the general framework of the Dyninst API tool kits. (2) Infrastructure for building tools and applications at extreme scale. Work in this area falls under the general framework of the MRNet scalability framework. Note that work done under this funding is closely related to work done under a contemporaneous grant, “High-Performance Energy Applications and Systems”, SC0004061/FG02-10ER25972, UW PRJ36WV.

  20. Petascale direct numerical simulation of blood flow on 200K cores and heterogeneous architectures

    SciTech Connect

    Sampath, Rahul S; Veerapaneni, Shravan; Biros, George; Zorin, Denis; Vuduc, Richard; Vetter, Jeffrey S; Moon, Logan; Malhotra, Dhairya; Shringarpure, Aashay; Rahimian, Abtin; Lashuk, Ilya; Chandramowlishwaran, Aparna

    2010-01-01

    We present a fast, petaflop-scalable algorithm for Stokesian particulate flows. Our goal is the direct simulation of blood, which we model as a mixture of a Stokesian fluid (plasma) and red blood cells (RBCs). Directly simulating blood is a challenging multiscale, multiphysics problem. We report simulations with up to 260 million deformable RBCs. The largest simulation amounts to 90 billion unknowns in space. In terms of the number of cells, we improve the state-of-the art by several orders of magnitude: the previous largest simulation, at the same physical fidelity as ours, resolved the flow of O(1,000-10,000) RBCs. Our approach has three distinct characteristics: (1) we faithfully represent the physics of RBCs by using nonlinear solid mechanics to capture the deformations of each cell; (2) we accurately resolve the long-range, N-body, hydrodynamic interactions between RBCs (which are caused by the surrounding plasma); and (3) we allow for highly non-uniform spatial distributions of RBCs. The new method has been implemented in the software library MOBO (for 'Moving Boundaries'). We designed MOBO to support parallelism at all levels, including inter-node distributed memory parallelism, intra-node shared memory parallelism, data parallelism (vectorization), and fine-grained multithreading for GPUs. We have implemented and optimized the majority of the computation kernels on both Intel/AMD x86 and NVidia's Tesla/Fermi platforms for single and double floating point precision. Overall, the code has scaled on 256 CPU-GPUs on the Teragrid's Lincoln cluster and on 200,000 AMD cores of the Oak Ridge National Laboratory's Jaguar PF system. In our largest simulation, we have achieved 0.7 Petaflops/s of sustained performance on Jaguar.

  1. Performance of hybrid programming models for multiscale cardiac simulations: preparing for petascale computation.

    PubMed

    Pope, Bernard J; Fitch, Blake G; Pitman, Michael C; Rice, John J; Reumann, Matthias

    2011-10-01

    Future multiscale and multiphysics models that support research into human disease, translational medical science, and treatment can utilize the power of high-performance computing (HPC) systems. We anticipate that computationally efficient multiscale models will require the use of sophisticated hybrid programming models, mixing distributed message-passing processes [e.g., the message-passing interface (MPI)] with multithreading (e.g., OpenMP, Pthreads). The objective of this study is to compare the performance of such hybrid programming models when applied to the simulation of a realistic physiological multiscale model of the heart. Our results show that the hybrid models perform favorably when compared to an implementation using only the MPI and, furthermore, that OpenMP in combination with the MPI provides a satisfactory compromise between performance and code complexity. Having the ability to use threads within MPI processes enables the sophisticated use of all processor cores for both computation and communication phases. Considering that HPC systems in 2012 will have two orders of magnitude more cores than what was used in this study, we believe that faster than real-time multiscale cardiac simulations can be achieved on these systems. PMID:21768044
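
    To make the hybrid model concrete, a minimal sketch is given below: one MPI process per node, OpenMP threads inside each process for the computation phase, and only the master thread issuing MPI calls. The cell-update kernel, array names, and sizes are hypothetical stand-ins, not taken from the cardiac model itself.

      /* Minimal hybrid MPI+OpenMP sketch (illustrative only). */
      #include <mpi.h>
      #include <omp.h>
      #include <stdio.h>

      int main(int argc, char **argv) {
          int provided, rank, nranks;
          /* Funneled threading: only the master thread makes MPI calls. */
          MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &nranks);

          enum { NCELLS = 100000 };
          static double v[NCELLS];            /* per-cell state owned by this rank */

          for (int step = 0; step < 10; ++step) {
              /* Computation phase: all cores update local cells via OpenMP. */
              #pragma omp parallel for
              for (int i = 0; i < NCELLS; ++i)
                  v[i] += 1.0e-3 * (double)(i % 7);   /* stand-in for the cell model */

              /* Communication phase: the master thread exchanges data between ranks. */
              double local = v[0], global = 0.0;
              MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
          }
          if (rank == 0)
              printf("%d MPI ranks x %d OpenMP threads\n", nranks, omp_get_max_threads());
          MPI_Finalize();
          return 0;
      }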

  2. Scaling of Multimillion-Atom Biological Molecular Dynamics Simulation on a Petascale Supercomputer

    SciTech Connect

    Schulz, Roland; Lindner, Benjamin; Petridis, Loukas; Smith, Jeremy C

    2009-01-01

    A strategy is described for a fast all-atom molecular dynamics simulation of multimillion-atom biological systems on massively parallel supercomputers. The strategy is developed using benchmark systems of particular interest to bioenergy research, comprising models of cellulose and lignocellulosic biomass in an aqueous solution. The approach involves using the reaction field (RF) method for the computation of long-range electrostatic interactions, which permits efficient scaling on many thousands of cores. Although the range of applicability of the RF method for biomolecular systems remains to be demonstrated, for the benchmark systems the use of the RF produces molecular dipole moments, Kirkwood G factors, other structural properties, and mean-square fluctuations in excellent agreement with those obtained with the commonly used Particle Mesh Ewald method. With RF, three million- and five million atom biological systems scale well up to 30k cores, producing 30 ns/day. Atomistic simulations of very large systems for time scales approaching the microsecond would, therefore, appear now to be within reach.
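
    For orientation, the pairwise reaction-field interaction referred to above is usually written as follows; this is the standard textbook form for a dielectric continuum of permittivity \varepsilon_{rf} beyond the cutoff r_c, and the paper's exact parameterization may differ:

      V_{RF}(r_{ij}) = \frac{q_i q_j}{4\pi\varepsilon_0} \left( \frac{1}{r_{ij}} + k_{rf}\, r_{ij}^2 - c_{rf} \right), \quad r_{ij} \le r_c,
      \qquad k_{rf} = \frac{\varepsilon_{rf} - 1}{(2\varepsilon_{rf} + 1)\, r_c^3}, \qquad c_{rf} = \frac{1}{r_c} + k_{rf}\, r_c^2 .

    Because every pair interaction vanishes smoothly at r_c, no global (Ewald-type) communication is needed, which is what permits the scaling reported here.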

  3. Scaling of Multimillion-Atom Biological Molecular Dynamics Simulation on a Petascale Supercomputer.

    PubMed

    Schulz, Roland; Lindner, Benjamin; Petridis, Loukas; Smith, Jeremy C

    2009-10-13

    A strategy is described for a fast all-atom molecular dynamics simulation of multimillion-atom biological systems on massively parallel supercomputers. The strategy is developed using benchmark systems of particular interest to bioenergy research, comprising models of cellulose and lignocellulosic biomass in an aqueous solution. The approach involves using the reaction field (RF) method for the computation of long-range electrostatic interactions, which permits efficient scaling on many thousands of cores. Although the range of applicability of the RF method for biomolecular systems remains to be demonstrated, for the benchmark systems the use of the RF produces molecular dipole moments, Kirkwood G factors, other structural properties, and mean-square fluctuations in excellent agreement with those obtained with the commonly used Particle Mesh Ewald method. With RF, three million- and five million-atom biological systems scale well up to ∼30k cores, producing ∼30 ns/day. Atomistic simulations of very large systems for time scales approaching the microsecond would, therefore, appear now to be within reach. PMID:26631792

  4. Simulating Solidification in Metals at High Pressure: The Drive to Petascale Computing

    SciTech Connect

    Streitz, F; Glosli, J; Patel, M

    2006-07-26

    We investigate solidification in metal systems ranging in size from 64,000 to 524,288,000 atoms on the IBM BlueGene/L computer at LLNL. Using the newly developed ddcMD code, we achieve performance rates as high as 103 TFlops, with a performance of 101.7 TFlops sustained over a 7-hour run on 131,072 CPUs. We demonstrate superb strong and weak scaling. Our calculations are significant as they represent the first atomic-scale model of metal solidification to proceed, without finite size effects, from spontaneous nucleation and growth of solid out of the liquid, through the coalescence phase, and into the onset of coarsening. Thus, our simulations represent the first step towards an atomistic model of nucleation and growth that can directly link atomistic to mesoscopic length scales.

  5. The Petascale Data Storage Institute

    SciTech Connect

    Gibson, Garth; Long, Darrell; Honeyman, Peter; Grider, Gary; Kramer, William; Shalf, John; Roth, Philip; Felix, Evan; Ward, Lee

    2013-07-01

    Petascale computing infrastructures for scientific discovery make petascale demands on information storage capacity, performance, concurrency, reliability, availability, and manageability. The Petascale Data Storage Institute focuses on the data storage problems found in petascale scientific computing environments, with special attention to community issues such as interoperability, community buy-in, and shared tools. The Petascale Data Storage Institute is a collaboration between researchers at Carnegie Mellon University, National Energy Research Scientific Computing Center, Pacific Northwest National Laboratory, Oak Ridge National Laboratory, Sandia National Laboratory, Los Alamos National Laboratory, University of Michigan, and the University of California at Santa Cruz.

  6. Towards Optimal Petascale Simulations

    SciTech Connect

    Demmel, James W.

    2013-11-08

    Our goal in this project was to design scalable numerical algorithms needed by SciDAC applications that adapt to use evolving hardware resources as efficiently as possible. Our primary challenge is minimizing communication costs, where communication means moving data either between levels of a memory hierarchy (L1 cache to L2 cache to main memory etc.) or between processors over a network. Floating point rates are improving exponentially faster than bandwidth, which is improving exponentially faster than latency. So our goal is to minimize communication. We describe our progress in this area, both for direct and iterative linear algebra. In both areas we have (1) identified lower bounds on the amount of communication (measured both by the number of words moved and the number of messages) required to perform these algorithms, (2) analyzed existing algorithms, which by and large do not attain these lower bounds, and (3) identified or invented new algorithms that do attain them, and evaluated their speedups, which can be quite large.
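
    As one concrete instance of such bounds, the classical result for dense O(n^3) matrix multiplication is quoted below for illustration (it is drawn from the communication-avoiding literature, not from this report): on P processors, each with local memory M,

      W = \Omega\!\left( \frac{n^3}{P \sqrt{M}} \right) \ \text{words moved}, \qquad S = \Omega\!\left( \frac{n^3}{P\, M^{3/2}} \right) \ \text{messages sent}.

    Algorithms such as 2.5D matrix multiplication attain these bounds up to constant factors by replicating data, trading extra memory for reduced communication.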

  7. Enabling distributed petascale science.

    SciTech Connect

    Baranovski, A.; Bharathi, S.; Bresnahan, J.; Chervenak, A.; Foster, I.; Fraser, D.; Freeman, T.; Gunter, D.; Jackson, K.; Keahey, K.; Kesselman, C.; Konerding, D. E.; Leroy, N.; Link, M.; Livny, M.; Miller, N.; Miller, R.; Oleynik, G.; Pearlman, L.; Schopf, J. M.; Schuler, R.; Tierney, B.; Mathematics and Computer Science; FNL; Univ. of Southern California; Univ. of Chicago; LBNL; Univ. of Wisconsin

    2007-01-01

    Petascale science is an end-to-end endeavour, involving not only the creation of massive datasets at supercomputers or experimental facilities, but the subsequent analysis of that data by a user community that may be distributed across many laboratories and universities. The new SciDAC Center for Enabling Distributed Petascale Science (CEDPS) is developing tools to support this end-to-end process. These tools include data placement services for the reliable, high-performance, secure, and policy-driven placement of data within a distributed science environment; tools and techniques for the construction, operation, and provisioning of scalable science services; and tools for the detection and diagnosis of failures in end-to-end data placement and distributed application hosting configurations. In each area, we build on a strong base of existing technology and have made useful progress in the first year of the project. For example, we have recently achieved order-of-magnitude improvements in transfer times (for lots of small files) and implemented asynchronous data staging capabilities; demonstrated dynamic deployment of complex application stacks for the STAR experiment; and designed and deployed end-to-end troubleshooting services. We look forward to working with SciDAC application and technology projects to realize the promise of petascale science.

  8. The grand challenge of managing the petascale facility.

    SciTech Connect

    Aiken, R. J.; Mathematics and Computer Science

    2007-02-28

    This report is the result of a study of networks and how they may need to evolve to support petascale leadership computing and science. As Dr. Ray Orbach, director of the Department of Energy's Office of Science, says in the spring 2006 issue of SciDAC Review, 'One remarkable example of growth in unexpected directions has been in high-end computation'. In the same article Dr. Michael Strayer states, 'Moore's law suggests that before the end of the next cycle of SciDAC, we shall see petaflop computers'. Given the Office of Science's strong leadership and support for petascale computing and facilities, we should expect to see petaflop computers in operation in support of science before the end of the decade, and DOE/SC Advanced Scientific Computing Research programs are focused on making this a reality. This study took its lead from this strong focus on petascale computing and the networks required to support such facilities, but it grew to include almost all aspects of the DOE/SC petascale computational and experimental science facilities, all of which will face daunting challenges in managing and analyzing the voluminous amounts of data expected. In addition, trends indicate the increased coupling of unique experimental facilities with computational facilities, along with the integration of multidisciplinary datasets and high-end computing with data-intensive computing; and we can expect these trends to continue at the petascale level and beyond. Coupled with recent technology trends, they clearly indicate the need for including capability petascale storage, networks, and experiments, as well as collaboration tools and programming environments, as integral components of the Office of Science's petascale capability metafacility. The objective of this report is to recommend a new cross-cutting program to support the management of petascale science and infrastructure. The appendices of the report document current and projected DOE computation facilities, science

  9. Petascale Parallelization of the Gyrokinetic Toroidal Code

    SciTech Connect

    Ethier, Stephane; Adams, Mark; Carter, Jonathan; Oliker, Leonid

    2010-05-01

    The Gyrokinetic Toroidal Code (GTC) is a global, three-dimensional particle-in-cell application developed to study microturbulence in tokamak fusion devices. The global capability of GTC is unique, allowing researchers to systematically analyze important dynamics such as turbulence spreading. In this work we examine a new radial domain decomposition approach to allow scalability onto the latest generation of petascale systems. Extensive performance evaluation is conducted on three high performance computing systems: the IBM BG/P, the Cray XT4, and an Intel Xeon Cluster. Overall results show that the radial decomposition approach dramatically increases scalability, while reducing the memory footprint - allowing for fusion device simulations at an unprecedented scale. After a decade where high-end computing (HEC) was dominated by the rapid pace of improvements to processor frequencies, the performance of next-generation supercomputers is increasingly differentiated by varying interconnect designs and levels of integration. Understanding the tradeoffs of these system designs is a key step towards making effective petascale computing a reality. In this work, we examine a new parallelization scheme for the Gyrokinetic Toroidal Code (GTC) micro-turbulence fusion application. Extensive scalability results and analysis are presented on three HEC systems: the IBM BlueGene/P (BG/P) at Argonne National Laboratory, the Cray XT4 at Lawrence Berkeley National Laboratory, and an Intel Xeon cluster at Lawrence Livermore National Laboratory. Overall results indicate that the new radial decomposition approach successfully attains unprecedented scalability to 131,072 BG/P cores by overcoming the memory limitations of the previous approach. The new version is well suited to utilize emerging petascale resources to access new regimes of physical phenomena.
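
    A radial domain decomposition of the kind described can be pictured as assigning each particle to the MPI rank that owns its radial annulus. The sketch below uses uniform-width annuli and hypothetical names; GTC's actual decomposition is more elaborate (non-uniform bins, guard cells, particle migration).

      /* Illustrative 1-D radial decomposition: map a particle at minor radius r
         onto one of nranks equal-width annuli spanning [r_min, r_max). */
      #include <stdio.h>

      static int owner_rank(double r, double r_min, double r_max, int nranks) {
          int bin = (int)((r - r_min) / (r_max - r_min) * nranks);
          if (bin < 0) bin = 0;                   /* clamp particles at the domain edges */
          if (bin >= nranks) bin = nranks - 1;
          return bin;
      }

      int main(void) {
          const double radii[] = {0.05, 0.21, 0.48, 0.93};
          for (int i = 0; i < 4; ++i)
              printf("r = %.2f -> rank %d\n", radii[i], owner_rank(radii[i], 0.0, 1.0, 8));
          return 0;
      }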

  10. Advanced Wellbore Thermal Simulator

    1992-03-04

    GEOTEMP2, which is based on the earlier GEOTEMP program, is a wellbore thermal simulator designed for geothermal well drilling and production applications. The code treats natural and forced convection and conduction within the wellbore and heat conduction within the surrounding rock matrix. A variety of well operations can be modeled including injection, production, forward and reverse circulation with gas or liquid, gas or liquid drilling, and two-phase steam injection and production. Well completion with several different casing sizes and cement intervals can be modeled. The code allows variables, such as flow rate, to change with time enabling a realistic treatment of well operations. Provision is made in the flow equations to allow the flow areas of the tubing to vary with depth in the wellbore. Multiple liquids can exist in GEOTEMP2 simulations. Liquid interfaces are tracked through the tubing and annulus as one liquid displaces another. GEOTEMP2, however, does not attempt to simulate displacement of liquids with a gas or two-phase steam or vice versa. This means that it is not possible to simulate an operation where the type of drilling fluid changes, e.g. mud going to air. GEOTEMP2 was designed primarily for use in predicting the behavior of geothermal wells, but it is flexible enough to handle many typical drilling, production, and injection problems in the oil industry as well. However, GEOTEMP2 does not allow the modeling of gas-filled annuli in production or injection problems. In gas or mist drilling, no radiation losses are included in the energy balance. No attempt is made to model flow in the formation. Average execution time is 50 CP seconds on a CDC CYBER170. This edition of GEOTEMP2 is designated as Version 2.0 by the contributors.

  11. Petascale Supernova Simulation with CHIMERA

    SciTech Connect

    Messer, Bronson; Bruenn, S. W.; Blondin, J. M.; Mezzacappa, Anthony; Hix, William Raphael; Dirk, Charlotte

    2007-01-01

    CHIMERA is a multi-dimensional radiation hydrodynamics code designed to study core-collapse supernovae. The code is made up of three essentially independent parts: a hydrodynamics module, a nuclear burning module, and a neutrino transport solver combined within an operator-split approach. We describe some major algorithmic facets of the code and briefly discuss some recent results. The multi-physics nature of the problem, and the specific implementation of that physics in CHIMERA, provide a rather straightforward path to effective use of multi-core platforms in the near future.

  12. PETASCALE DATA STORAGE INSTITUTE (PDSI) Final Report

    SciTech Connect

    Gibson, Garth

    2012-11-26

    , and has a large team at EMC supporting and enhancing it. PLFS is open sourced with a BSD license on sourceforge. Post-PDSI funding comes from NNSA and industry sources. Moreover, PLFS has spun out half a dozen or more papers, partnered on research with multiple schools and vendors, and has projects to transparently 1) distribute metadata over independent metadata servers, 2) exploit drastically non-POSIX Hadoop storage for HPC POSIX applications, 3) compress checkpoints on the fly, 4) batch delayed writes for write speed, 5) compress read-back indexes and parallelize their redistribution, 6) double-buffer writes in NAND Flash storage to decouple host blocking during checkpoint from disk write time in the storage system, and 7) pack small files into a smaller number of bigger containers. There are two large-scale open source Linux software projects that PDSI significantly incubated, though neither was initiated in PDSI. These are 1) Ceph, a UCSC parallel object storage research project that has continued to be a vehicle for research and has become a released part of Linux, and 2) Parallel NFS (pNFS), a portion of the IETF's NFSv4.1 that brings the core data parallelism found in Lustre, PanFS, PVFS, and Ceph to the industry-standard NFS, with released code in Linux 3.0 and its vendor offerings, with products from NetApp, EMC, BlueArc and RedHat. Both are fundamentally supported and advanced by vendor companies now, but were critically transferred from research demonstration to viable product with funding from PDSI, in part. At this point Lustre remains the primary path to scalable IO in Exascale systems, but both Ceph and pNFS are viable alternatives with different fundamental advantages. Finally, research community building was a big success for PDSI. Through the HECFSIO workshops and the HECURA project with NSF, PDSI stimulated and helped to steer leveraged funding of over $25M. Through the Petascale (now Parallel) Data Storage Workshop series, www.pdsw.org, colocated with

  13. Advanced simulation of digital filters

    NASA Astrophysics Data System (ADS)

    Doyle, G. S.

    1980-09-01

    An Advanced Simulation of Digital Filters has been implemented on the IBM 360/67 computer utilizing Tektronix hardware and software. The program package is appropriate for use by persons beginning their study of digital signal processing or for filter analysis. The ASDF programs provide the user with an interactive method by which filter pole and zero locations can be manipulated. Graphical output on both the Tektronix graphics screen and the Versatec plotter are provided to observe the effects of pole-zero movement.

  14. Lightweight and Statistical Techniques for Petascale Debugging

    SciTech Connect

    Miller, Barton

    2014-06-30

    This project investigated novel techniques for debugging scientific applications on petascale architectures. In particular, we developed lightweight tools that narrow the problem space when bugs are encountered. We also developed techniques that either limit the number of tasks and the code regions to which a developer must apply a traditional debugger or that apply statistical techniques to provide direct suggestions of the location and type of error. We extended previous work on the Stack Trace Analysis Tool (STAT), which has already demonstrated scalability to over one hundred thousand MPI tasks. We also extended statistical techniques developed in the Cooperative Bug Isolation (CBI) project to isolate programming errors in widely used sequential or threaded applications, adapting them to large-scale parallel applications. Overall, our research substantially improved productivity on petascale platforms through a tool set for debugging that complements existing commercial tools. Previously, Office of Science application developers relied either on primitive manual debugging techniques based on printf or on tools, such as TotalView, that do not scale beyond a few thousand processors. However, bugs often arise at scale, and substantial effort and computation cycles are wasted either in reproducing the problem in a smaller run that can be analyzed with the traditional tools or in repeated runs at scale that use the primitive techniques. New techniques that work at scale and automate the process of identifying the root cause of errors were needed. These techniques significantly reduced the time spent debugging petascale applications, leaving application scientists more time to pursue the scientific objectives for which the systems are purchased. We developed a new paradigm for debugging at scale: techniques that reduce the debugging scenario to a scale suitable for traditional debuggers, e.g., by narrowing the search for the root-cause analysis

  15. SciDAC advances in beam dynamics simulation: from light sources to colliders

    SciTech Connect

    Qiang, Ji; Qiang, J.; Borland, M.; Kabel, A.; Li, R.; Ryne, R.; Stern, E.; Wang, Y.; Wasserman, H.; Zhang, Y.

    2008-06-16

    In this paper, we report on progress that has been made in beam dynamics simulation, from light sources to colliders, during the first year of the SciDAC-II accelerator project, "Community Petascale Project for Accelerator Science and Simulation (ComPASS)." Several parallel computational tools for beam dynamics simulation are described. A number of applications in current and future accelerator facilities, e.g., LCLS, RHIC, Tevatron, LHC, ELIC, are presented.

  16. SciDAC advances in beam dynamics simulation: from light sources to colliders

    SciTech Connect

    Qiang, J.; Borland, M.; Kabel, A.; Li, Rui; Ryne, Robert; Stern, E.; Wang, Y.; Wasserman, H.; Zhang, Y.

    2008-08-01

    In this paper, we report on progress that has been made in beam dynamics simulation, from light sources to colliders, during the first year of the SciDAC-2 accelerator project 'Community Petascale Project for Accelerator Science and Simulation (ComPASS).' Several parallel computational tools for beam dynamics simulation are described. Also presented are a number of applications in current and future accelerator facilities (e.g., LCLS, RHIC, Tevatron, LHC, and ELIC).

  17. Multi-petascale highly efficient parallel supercomputer

    SciTech Connect

    Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.; Blumrich, Matthias A.; Boyle, Peter; Brunheroto, Jose R.; Chen, Dong; Cher, Chen -Yong; Chiu, George L.; Christ, Norman; Coteus, Paul W.; Davis, Kristan D.; Dozsa, Gabor J.; Eichenberger, Alexandre E.; Eisley, Noel A.; Ellavsky, Matthew R.; Evans, Kahn C.; Fleischer, Bruce M.; Fox, Thomas W.; Gara, Alan; Giampapa, Mark E.; Gooding, Thomas M.; Gschwind, Michael K.; Gunnels, John A.; Hall, Shawn A.; Haring, Rudolf A.; Heidelberger, Philip; Inglett, Todd A.; Knudson, Brant L.; Kopcsay, Gerard V.; Kumar, Sameer; Mamidala, Amith R.; Marcella, James A.; Megerian, Mark G.; Miller, Douglas R.; Miller, Samuel J.; Muff, Adam J.; Mundy, Michael B.; O'Brien, John K.; O'Brien, Kathryn M.; Ohmacht, Martin; Parker, Jeffrey J.; Poole, Ruth J.; Ratterman, Joseph D.; Salapura, Valentina; Satterfield, David L.; Senger, Robert M.; Smith, Brian; Steinmacher-Burow, Burkhard; Stockdell, William M.; Stunkel, Craig B.; Sugavanam, Krishnan; Sugawara, Yutaka; Takken, Todd E.; Trager, Barry M.; Van Oosten, James L.; Wait, Charles D.; Walkup, Robert E.; Watson, Alfred T.; Wisniewski, Robert W.; Wu, Peng

    2015-07-14

    A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaOPS-scale computing, at decreased cost, power and footprint, that allows for a maximum packaging density of processing nodes from an interconnect point of view. The Supercomputer exploits technological advances in VLSI that enable a computing model where many processors can be integrated into a single Application Specific Integrated Circuit (ASIC). Each ASIC computing node comprises a system-on-chip ASIC utilizing four or more processors integrated into one die, with each having full access to all system resources, enabling adaptive partitioning of the processors to functions such as compute or messaging I/O on an application-by-application basis and, preferably, adaptive partitioning of functions in accordance with various algorithmic phases within an application; if I/O or other processors are underutilized, they can participate in computation or communication. Nodes are interconnected by a five-dimensional torus network with DMA that optimally maximizes the throughput of packet communications between nodes and minimizes latency.

  18. TECA: Extreme Climate Analytics on Petascale Platforms

    NASA Astrophysics Data System (ADS)

    Prabhat, M.; Byna, S.; Vishwanath, V.; Bethel, W.; Collins, W.; Wehner, M. F.

    2013-12-01

    We will cover recent developments under the TECA (Toolkit for Extreme Climate Analysis) project. We have developed capabilities to automatically detect and track Tropical Cyclones, Extra-Tropical Cyclones, Atmospheric Rivers, and Blocking events in large climate datasets. The TECA framework enables such feature-tracking codes to run at scale on modern petascale-class HPC platforms. We will review recent extreme-scale TECA runs: 150,000 cores on the NERSC Cray XE6 Hopper and 300,000 cores of the ALCF IBM BG/Q Mira. These runs were able to process TBs of simulation output and extract statistics of extreme weather phenomena in tens of minutes. This presentation will highlight Big Data management, parallel I/O, and optimization issues which need to be considered carefully when running jobs at these concurrencies. We will also present scientific results from running the TECA Tropical Cyclone detection code on a CAM5 multi-resolution dataset; these results have enabled us to characterize and assess the effect of resolution on reproducing extreme weather statistics. We will also present Extra-Tropical Cyclone detection results on the CAM5 CliVAR runs; these results indicate that the frequency of ETCs will decrease under future climate change scenarios. Time permitting, we will discuss novel feature detection capabilities (Blocking events) being incorporated into the TECA framework.

  19. Petascale Diagnostic Assessment of the Global Portfolio Rainfall Space Missions' Ability to Support Flood Forecasting

    NASA Astrophysics Data System (ADS)

    Reed, P. M.; Chaney, N.; Herman, J. D.; Wood, E. F.; Ferringer, M. P.

    2015-12-01

    This research represents a multi-institutional collaboration between Cornell University, The Aerospace Corporation, and Princeton University that has completed a Petascale diagnostic assessment of the current 10 satellite missions providing rainfall observations. Our diagnostic assessment has required four core tasks: (1) formally linking high-resolution astrodynamics design and coordination of space assets with their global hydrological impacts within a Petascale "many-objective" global optimization framework, (2) developing a baseline diagnostic evaluation of a 1-degree resolution global implementation of the Variable Infiltration Capacity (VIC) model to establish the required satellite observation frequencies and coverage to maintain acceptable global flood forecasts, (3) evaluating the limitations and vulnerabilities of the full suite of current satellite precipitation missions including the recently approved Global Precipitation Measurement (GPM) mission, and (4) conceptualizing the next generation spaced-based platforms for water cycle observation. Our team exploited over 100 Million hours of computing access on the 700,000+ core Blue Waters machine to radically advance our ability to discover and visualize key system tradeoffs and sensitivities. This project represents to our knowledge the first attempt to develop a 10,000 member Monte Carlo global hydrologic simulation at one degree resolution that characterizes the uncertain effects of changing the available frequencies of satellite precipitation on drought and flood forecasts. The simulation—optimization components of the work have set a theoretical baseline for the best possible frequencies and coverages for global precipitation given unlimited investment, broad international coordination in reconfiguring existing assets, and new satellite constellation design objectives informed directly by key global hydrologic forecasting requirements. Our research poses a step towards realizing the integrated

  20. Understanding failures in petascale computers

    NASA Astrophysics Data System (ADS)

    Schroeder, Bianca; Gibson, Garth A.

    2007-07-01

    With petascale computers only a year or two away there is a pressing need to anticipate and compensate for a probable increase in failure and application interruption rates. Researchers, designers and integrators have available to them far too little detailed information on the failures and interruptions that even smaller terascale computers experience. The information that is available suggests that application interruptions will become far more common in the coming decade, and the largest applications may surrender large fractions of the computer's resources to taking checkpoints and restarting from a checkpoint after an interruption. This paper reviews sources of failure information for compute clusters and storage systems, projects failure rates and the corresponding decrease in application effectiveness, and discusses coping strategies such as application-level checkpoint compression and system level process-pairs fault-tolerance for supercomputing. The need for a public repository for detailed failure and interruption records is particularly pressing, as projections from one architectural family of machines to another are widely disputed. To this end, this paper introduces the Computer Failure Data Repository and issues a call for failure history data to publish in it.
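
    The checkpoint/restart trade-off sketched above is often quantified with Young's first-order approximation for the optimal checkpoint interval; it is quoted here only as background, since the paper's own projections rest on its measured failure data. With checkpoint write time \delta and mean time between interruptions M,

      \tau_{opt} \approx \sqrt{2\,\delta\, M}.

    For example, assuming (hypothetically) \delta = 10 minutes and M = 8 hours, \tau_{opt} \approx \sqrt{2 \cdot 10 \cdot 480} \approx 98 minutes; as M shrinks at fixed \delta, the fraction of machine time surrendered to checkpointing grows quickly.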

  1. Advanced Space Shuttle simulation model

    NASA Technical Reports Server (NTRS)

    Tatom, F. B.; Smith, S. R.

    1982-01-01

    A non-recursive model (based on von Karman spectra) for atmospheric turbulence along the flight path of the shuttle orbiter was developed. It provides for simulation of instantaneous vertical and horizontal gusts at the vehicle center-of-gravity, and also for simulation of instantaneous gusts gradients. Based on this model the time series for both gusts and gust gradients were generated and stored on a series of magnetic tapes, entitled Shuttle Simulation Turbulence Tapes (SSTT). The time series are designed to represent atmospheric turbulence from ground level to an altitude of 120,000 meters. A description of the turbulence generation procedure is provided. The results of validating the simulated turbulence are described. Conclusions and recommendations are presented. One-dimensional von Karman spectra are tabulated, while a discussion of the minimum frequency simulated is provided. The results of spectral and statistical analyses of the SSTT are presented.
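
    For reference, the longitudinal one-dimensional von Kármán spectrum usually tabulated for this purpose has the form below, with gust variance \sigma_u^2, length scale L_u, and spatial frequency \Omega; it is stated here as an assumption about which variant the report tabulates:

      \Phi_u(\Omega) = \sigma_u^2 \, \frac{2 L_u}{\pi} \, \frac{1}{\left[ 1 + (1.339\, L_u \Omega)^2 \right]^{5/6}}.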

  2. DEVELOPMENT OF THE ADVANCED UTILITY SIMULATION MODEL

    EPA Science Inventory

    The paper discusses the development of the Advanced Utility Simulation Model (AUSM), developed for the National Acid Precipitation Assessment Program (NAPAP), to forecast air emissions of pollutants from electric utilities. USM integrates generating unit engineering detail with d...

  3. An advanced dispatch simulator with advanced dispatch algorithm

    SciTech Connect

    Kafka, R.J.; Fink, L.H.; Balu, N.J.; Crim, H.G.

    1989-01-01

    This paper reports on an interactive automatic generation control (AGC) simulator. Improved and timely information regarding fossil fired plant performance is potentially useful in the economic dispatch of system generating units. Commonly used economic dispatch algorithms are not able to take full advantage of this information. The dispatch simulator was developed to test and compare economic dispatch algorithms which might be able to show improvement over standard economic dispatch algorithms if accurate unit information were available. This dispatch simulator offers substantial improvements over previously available simulators. In addition, it contains an advanced dispatch algorithm which shows control and performance advantages over traditional dispatch algorithms for both plants and electric systems.
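
    The classical economic dispatch problem that such algorithms refine can be stated as a worked formulation (standard form, included for orientation; the advanced algorithm in the paper layers plant-performance information on top of it): choose unit outputs P_i to

      \min_{P_i} \sum_i C_i(P_i) \quad \text{subject to} \quad \sum_i P_i = P_D, \qquad P_i^{min} \le P_i \le P_i^{max},

    whose optimality condition for units away from their limits is the equal incremental cost rule dC_i/dP_i = \lambda for all i.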

  4. Advancing the LSST Operations Simulator

    NASA Astrophysics Data System (ADS)

    Saha, Abhijit; Ridgway, S. T.; Cook, K. H.; Delgado, F.; Chandrasekharan, S.; Petry, C. E.; Operations Simulator Group

    2013-01-01

    The Operations Simulator for the Large Synoptic Survey Telescope (LSST; http://lsst.org) allows the planning of LSST observations that obey explicit science driven observing specifications, patterns, schema, and priorities, while optimizing against the constraints placed by design-specific opto-mechanical system performance of the telescope facility, site specific conditions (including weather and seeing), as well as additional scheduled and unscheduled downtime. A simulation run records the characteristics of all observations (e.g., epoch, sky position, seeing, sky brightness) in a MySQL database, which can be queried for any desired purpose. Derivative information digests of the observing history database are made with an analysis package called Simulation Survey Tools for Analysis and Reporting (SSTAR). Merit functions and metrics have been designed to examine how suitable a specific simulation run is for several different science applications. This poster reports recent work which has focussed on an architectural restructuring of the code that will allow us to a) use "look-ahead" strategies that avoid cadence sequences that cannot be completed due to observing constraints; and b) examine alternate optimization strategies, so that the most efficient scheduling algorithm(s) can be identified and used: even few-percent efficiency gains will create substantive scientific opportunity. The enhanced simulator will be used to assess the feasibility of desired observing cadences, study the impact of changing science program priorities, and assist with performance margin investigations of the LSST system.

  5. Advanced Vadose Zone Simulations Using TOUGH

    SciTech Connect

    Finsterle, S.; Doughty, C.; Kowalsky, M.B.; Moridis, G.J.; Pan,L.; Xu, T.; Zhang, Y.; Pruess, K.

    2007-02-01

    The vadose zone can be characterized as a complex subsurface system in which intricate physical and biogeochemical processes occur in response to a variety of natural forcings and human activities. This makes it difficult to describe, understand, and predict the behavior of this specific subsurface system. The TOUGH nonisothermal multiphase flow simulators are well-suited to perform advanced vadose zone studies. The conceptual models underlying the TOUGH simulators are capable of representing features specific to the vadose zone, and of addressing a variety of coupled phenomena. Moreover, the simulators are integrated into software tools that enable advanced data analysis, optimization, and system-level modeling. We discuss fundamental and computational challenges in simulating vadose zone processes, review recent advances in modeling such systems, and demonstrate some capabilities of the TOUGH suite of codes using illustrative examples.
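
    For orientation, TOUGH-family codes discretize balance equations of the general integral form below (generic notation, not specific to any single application in the abstract): for each mass or energy component \kappa and each grid block of volume V_n with bounding surface \Gamma_n,

      \frac{d}{dt} \int_{V_n} M^{\kappa}\, dV = \int_{\Gamma_n} \mathbf{F}^{\kappa} \cdot \mathbf{n}\, d\Gamma + \int_{V_n} q^{\kappa}\, dV,

    where M^{\kappa} is the accumulation term, \mathbf{F}^{\kappa} the advective and diffusive flux, and q^{\kappa} the sources and sinks.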

  6. Advances in atomic oxygen simulation

    NASA Technical Reports Server (NTRS)

    Froechtenigt, Joseph F.; Bareiss, Lyle E.

    1990-01-01

    Atomic oxygen (AO) present in the atmosphere at orbital altitudes of 200 to 700 km has been shown to degrade various exposed materials on Shuttle flights. The relative velocity of the AO with the spacecraft, together with the AO density, combine to yield an environment consisting of a 5 eV beam energy with a flux of 10(exp 14) to 10(exp 15) oxygen atoms/sq cm/s. An AO ion beam apparatus that produces flux levels and energy similar to that encountered by spacecraft in low Earth orbit (LEO) has been in existence since 1987. Test data was obtained from the interaction of the AO ion beam with materials used in space applications (carbon, silver, kapton) and with several special coatings of interest deposited on various surfaces. The ultimate design goal of the AO beam simulation device is to produce neutral AO at sufficient flux levels to replicate on-orbit conditions. A newly acquired mass spectrometer with energy discrimination has allowed 5 eV neutral oxygen atoms to be separated and detected from the background of thermal oxygen atoms of approx 0.2 eV. Neutralization of the AO ion beam at 5 eV was shown at the Martin Marietta AO facility.
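
    The 5 eV beam energy quoted above corresponds directly to the orbital encounter velocity, as a short worked equation makes explicit: taking E = \tfrac{1}{2} m v^2 with m \approx 16 u for atomic oxygen,

      v = \sqrt{\frac{2E}{m}} = \sqrt{\frac{2 \times (5)(1.602\times10^{-19}\ \mathrm{J})}{2.66\times10^{-26}\ \mathrm{kg}}} \approx 7.8\ \mathrm{km/s},

    which matches the relative velocity between a spacecraft in LEO and the ambient atmosphere, so a 5 eV beam reproduces the on-orbit impact energy.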

  7. Final Project Report. Scalable fault tolerance runtime technology for petascale computers

    SciTech Connect

    Krishnamoorthy, Sriram; Sadayappan, P

    2015-06-16

    With the massive number of components comprising the forthcoming petascale computer systems, hardware failures will be routinely encountered during execution of large-scale applications. Due to the multidisciplinary, multiresolution, and multiscale nature of scientific problems that drive the demand for high-end systems, applications place increasingly differing demands on the system resources: disk, network, memory, and CPU. In addition to MPI, future applications are expected to use advanced programming models such as those developed under the DARPA HPCS program as well as existing global address space programming models such as Global Arrays, UPC, and Co-Array Fortran. While there has been a considerable amount of work in fault-tolerant MPI, with a number of strategies and extensions for fault tolerance proposed, virtually none of the advanced models proposed for emerging petascale systems is currently fault aware. To achieve fault tolerance, development of underlying runtime and OS technologies able to scale to the petascale level is needed. This project has evaluated a range of runtime techniques for fault tolerance for advanced programming models.

  8. Center for Advanced Modeling and Simulation Intern

    SciTech Connect

    Gertman, Vanessa

    2010-01-01

    Some interns just copy papers and seal envelopes. Not at INL! Check out how Vanessa Gertman, an INL intern working at the Center for Advanced Modeling and Simulation, spent her summer working with some intense visualization software. Lots more content like this is available at INL's facebook page http://www.facebook.com/idahonationallaboratory.

  9. Center for Advanced Modeling and Simulation Intern

    ScienceCinema

    Gertman, Vanessa

    2013-05-28

    Some interns just copy papers and seal envelopes. Not at INL! Check out how Vanessa Gertman, an INL intern working at the Center for Advanced Modeling and Simulation, spent her summer working with some intense visualization software. Lots more content like this is available at INL's facebook page http://www.facebook.com/idahonationallaboratory.

  10. PreDatA - Preparatory Data Analytics on Peta-Scale Machines

    SciTech Connect

    Zheng, Fang; Abbasi, H.; Docan, Ciprian; Lofstead, J.; Klasky, Scott A; Parashar, Manish; Podhorszki, Norbert; Schwan, Karsten; Wolf, Matthew D; Liu, Gary

    2010-01-01

    Peta-scale scientific applications running on High End Computing (HEC) platforms can generate large volumes of data. For high performance storage and in order to be useful to science end users, such data must be organized in its layout, indexed, sorted, and otherwise manipulated for subsequent data presentation, visualization, and detailed analysis. In addition, scientists desire to gain insights into selected data characteristics 'hidden' or 'latent' in the massive datasets while data is being produced by simulations. PreDatA, short for Preparatory Data Analytics, is an approach for preparing and characterizing data while it is being produced by the large scale simulations running on peta-scale machines. By dedicating additional compute nodes on the peta-scale machine as staging nodes and staging simulation's output data through these nodes, PreDatA can exploit their computational power to perform selected data manipulations with lower latency than attainable by first moving data into file systems and storage. Such in-transit manipulations are supported by the PreDatA middleware through RDMA-based data movement to reduce write latency, application-specific operations on streaming data that are able to discover latent data characteristics, and appropriate data reorganization and metadata annotation to speed up subsequent data access. As a result, PreDatA enhances the scalability and flexibility of current I/O stack on HEC platforms and is useful for data pre-processing, runtime data analysis and inspection, as well as for data exchange between concurrently running simulation models. Performance evaluations with several production peta-scale applications on Oak Ridge National Laboratory's Leadership Computing Facility demonstrate the feasibility and advantages of the PreDatA approach.

  11. Recent Advances in Simulation of Dendritic Polymers

    SciTech Connect

    Cagin, Tahir; Miklis, Paul J.; Wang, Guofeng; Zamanakos, Georgios; Martin, Ryan; Li, Hao; Mainz, Daniel T.; Nagarajan, V.; Goddard, William A.

    1999-05-11

    Dendrimers and hyperbranched polymers represent a revolution in methodology for directed synthesis of monodisperse polymers, with enormous possibilities for novel architectures. They demonstrate the ability to attain micelle-like structures with distinct internal and external character. Furthermore, the polyfunctional character of dendrimers allows varied response to environment and promise as selective sensors, carriers for drugs, and encapsulation of toxic chemicals and metals. One of the key problems is the characterization of the structures. Theory and simulation can be essential to provide and predict structure and properties. We present some recent advances in theory, modeling, and simulation of dendritic polymers.

  12. Lightweight and Statistical Techniques for Petascale Debugging: Correctness on Petascale Systems (CoPS) Preliminary Report

    SciTech Connect

    de Supinski, B R; Miller, B P; Liblit, B

    2011-09-13

    Petascale platforms with O(10^5) and O(10^6) processing cores are driving advancements in a wide range of scientific disciplines. These large systems create unprecedented application development challenges. Scalable correctness tools are critical to shorten the time-to-solution on these systems. Currently, many DOE application developers use primitive manual debugging based on printf or traditional debuggers such as TotalView or DDT. This paradigm breaks down beyond a few thousand cores, yet bugs often arise above that scale. Programmers must reproduce problems in smaller runs to analyze them with traditional tools, or else perform repeated runs at scale using only primitive techniques. Even when traditional tools run at scale, the approach wastes substantial effort and computation cycles. Continued scientific progress demands new paradigms for debugging large-scale applications. The Correctness on Petascale Systems (CoPS) project is developing a revolutionary debugging scheme that will reduce the debugging problem to a scale that human developers can comprehend. The scheme can provide precise diagnoses of the root causes of failure, including suggestions of the location and the type of errors down to the level of code regions or even a single execution point. Our fundamentally new strategy combines and expands three relatively new complementary debugging approaches. The Stack Trace Analysis Tool (STAT), a 2011 R&D 100 Award Winner, identifies behavior equivalence classes in MPI jobs and highlights behavior when elements of the class demonstrate divergent behavior, often the first indicator of an error. The Cooperative Bug Isolation (CBI) project has developed statistical techniques for isolating programming errors in widely deployed code that we will adapt to large-scale parallel applications. Finally, we are developing a new approach to parallelizing expensive correctness analyses, such as analysis of memory usage in the Memgrind tool. In the first two

  13. Advanced Civil Transport Simulator Cockpit View

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The Advanced Civil Transport Simulator (ACTS) is a futuristic aircraft cockpit simulator designed to provide full-mission capabilities for researching issues that will affect future transport aircraft flight stations and crews. The objective is to heighten the pilot's situation awareness through improved information availability and ease of interpretation in order to reduce the possibility of misinterpreted data. The simulator's five 13-inch Cathode Ray Tubes are designed to display flight information in a logical, easy-to-see format. Two color flat-panel Control Display Units with touch-sensitive screens provide monitoring and modification of aircraft parameters, flight plans, flight computers, and aircraft position. Three collimated visual display units have been installed to provide out-the-window scenes via the Computer Generated Image system. The major research objectives are to examine needs for transfer of information to and from the flight crew; study the use of advanced controls and displays for all-weather flying; explore ideas for using computers to help the crew in decision making; and study visual scanning and reach behavior under different conditions with various levels of automation and flight deck arrangements.

  14. Quantum Monte Carlo Endstation for Petascale Computing

    SciTech Connect

    Lubos Mitas

    2011-01-26

    The NCSU research group has been focused on accomplishing the key goals of this initiative: establishing a new generation of quantum Monte Carlo (QMC) computational tools as a part of the Endstation petaflop initiative for use at the DOE ORNL computational facilities and by the computational electronic structure community at large; carrying out high-accuracy quantum Monte Carlo demonstration projects applying these tools to forefront electronic structure problems in molecular and solid systems; expanding the impact of QMC methods and approaches; and explaining and enhancing the impact of these advanced computational approaches. In particular, we have developed the quantum Monte Carlo code QWalk (www.qwalk.org), which was significantly expanded and optimized using funds from this support and has at present become an actively used tool in the petascale regime by ORNL researchers and beyond. These developments have been built upon efforts undertaken by the PI's group and collaborators over the period of the last decade. The code was optimized and tested extensively on a number of parallel architectures including the petaflop ORNL Jaguar machine. We have developed and redesigned a number of code modules such as evaluation of wave functions and orbitals, calculation of pfaffians, and introduction of backflow coordinates, together with the overall organization of the code and random-walker distribution over multicore architectures. We have addressed several bottlenecks such as load balancing and verified the efficiency and accuracy of the calculations with the other groups of the Endstation team. The QWalk package contains about 50,000 lines of high-quality object-oriented C++ and also includes interfaces to data files from other conventional electronic structure codes such as Gamess, Gaussian, Crystal and others. This grant supported the PI for one month during summers, a full-time postdoc, and partially three graduate students over the period of the grant duration; it has resulted in 13

  15. Onyx-Advanced Aeropropulsion Simulation Framework Created

    NASA Technical Reports Server (NTRS)

    Reed, John A.

    2001-01-01

    The Numerical Propulsion System Simulation (NPSS) project at the NASA Glenn Research Center is developing a new software environment for analyzing and designing aircraft engines and, eventually, space transportation systems. Its purpose is to dramatically reduce the time, effort, and expense necessary to design and test jet engines by creating sophisticated computer simulations of an aerospace object or system (refs. 1 and 2). Through a university grant as part of that effort, researchers at the University of Toledo have developed Onyx, an extensible Java-based (Sun Microsystems, Inc.), object-oriented simulation framework, to investigate how advanced software design techniques can be successfully applied to aeropropulsion system simulation (refs. 3 and 4). The design of Onyx's architecture enables users to customize and extend the framework to add new functionality or adapt simulation behavior as required. It exploits object-oriented technologies, such as design patterns, domain frameworks, and software components, to develop a modular system in which users can dynamically replace components with others having different functionality.

  16. Software Framework for Advanced Power Plant Simulations

    SciTech Connect

    John Widmann; Sorin Munteanu; Aseem Jain; Pankaj Gupta; Mark Moales; Erik Ferguson; Lewis Collins; David Sloan; Woodrow Fiveland; Yi-dong Lang; Larry Biegler; Michael Locke; Simon Lingard; Jay Yun

    2010-08-01

    This report summarizes the work accomplished during the Phase II development effort of the Advanced Process Engineering Co-Simulator (APECS). The objective of the project is to develop the tools to efficiently combine high-fidelity computational fluid dynamics (CFD) models with process modeling software. During the course of the project, a robust integration controller was developed that can be used in any CAPE-OPEN compliant process modeling environment. The controller mediates the exchange of information between the process modeling software and the CFD software. Several approaches to reducing the time disparity between CFD simulations and process modeling have been investigated and implemented. These include enabling the CFD models to be run on a remote cluster and enabling multiple CFD models to be run simultaneously. Furthermore, computationally fast reduced-order models (ROMs) have been developed that can be 'trained' using the results from CFD simulations and then used directly within flowsheets. Unit operation models (both CFD and ROMs) can be uploaded to a model database and shared between multiple users.

  17. Quantum Monte Carlo Endstation for Petascale Computing

    SciTech Connect

    David Ceperley

    2011-03-02

    The major achievements enabled by the QMC Endstation grant include: performance improvement on clusters of x86 multi-core systems, especially on Cray XT systems; new and improved methods for wavefunction optimization; new forms of trial wavefunctions; and implementation of the full application on NVIDIA GPUs using CUDA. The scaling studies of QMCPACK on large-scale systems show excellent parallel efficiency up to 216K cores on Jaguarpf (Cray XT5). The GPU implementation shows speedups of 10-15x over the CPU implementation on an older generation of x86. We have implemented a hybrid OpenMP/MPI scheme in QMC to take advantage of the multi-core shared-memory processors of petascale systems. Our hybrid scheme has several advantages over the standard MPI-only scheme. * Memory optimized: large read-only data to store one-body orbitals and other shared properties to represent the trial wave function and many-body Hamiltonian can be shared among threads, which reduces the memory footprint of a large-scale problem. * Cache optimized: the data associated with an active Walker are in cache during the compute-intensive drift-diffusion process, and the operations on a Walker are optimized for cache reuse. Thread-local objects are used to ensure data affinity to a thread. * Load balanced: Walkers in an ensemble are evenly distributed among threads and MPI tasks. The two-level parallelism reduces the population imbalance among MPI tasks and reduces the number of point-to-point communications of large messages (serialized objects) for the Walker exchange. * Communication optimized: the communication overhead, especially for the collective operations necessary to determine E_T and measure the properties of an ensemble, is significantly lowered by using fewer MPI tasks. The multiple forms of parallelism afforded by QMC algorithms make them ideal candidates for acceleration in the many-core paradigm. We presented the results of our effort to port the QMCPACK simulation code to the NVIDIA

  18. Petascale Computing Enabling Technologies Project Final Report

    SciTech Connect

    de Supinski, B R

    2010-02-14

    The Petascale Computing Enabling Technologies (PCET) project addressed challenges arising from current trends in computer architecture that will lead to large-scale systems with many more nodes, each of which uses multicore chips. These factors will soon lead to systems that have over one million processors. Also, the use of multicore chips will lead to less memory and less memory bandwidth per core. We need fundamentally new algorithmic approaches to cope with these memory constraints and the huge number of processors. Further, correct, efficient code development is difficult even with the number of processors in current systems; more processors will only make it harder. The goal of PCET was to overcome these challenges by developing the computer science and mathematical underpinnings needed to realize the full potential of our future large-scale systems. Our research results will significantly increase the scientific output obtained from LLNL large-scale computing resources by improving application scientist productivity and system utilization. Our successes include scalable mathematical algorithms that adapt to these emerging architecture trends and through code correctness and performance methodologies that automate critical aspects of application development as well as the foundations for application-level fault tolerance techniques. PCET's scope encompassed several research thrusts in computer science and mathematics: code correctness and performance methodologies, scalable mathematics algorithms appropriate for multicore systems, and application-level fault tolerance techniques. Due to funding limitations, we focused primarily on the first three thrusts although our work also lays the foundation for the needed advances in fault tolerance. In the area of scalable mathematics algorithms, our preliminary work established that OpenMP performance of the AMG linear solver benchmark and important individual kernels on Atlas did not match the predictions of our

  19. Advanced Modeling, Simulation and Analysis (AMSA) Capability Roadmap Progress Review

    NASA Technical Reports Server (NTRS)

    Antonsson, Erik; Gombosi, Tamas

    2005-01-01

    Contents include the following: NASA capability roadmap activity. Advanced modeling, simulation, and analysis overview. Scientific modeling and simulation. Operations modeling. Multi-special sensing (UV-gamma). System integration. M and S Environments and Infrastructure.

  20. Precision Casting via Advanced Simulation and Manufacturing

    NASA Technical Reports Server (NTRS)

    1997-01-01

    A two-year program was conducted to develop and commercially implement selected casting manufacturing technologies to enable significant reductions in the costs of castings, increase the complexity and dimensional accuracy of castings, and reduce the development times for delivery of high quality castings. The industry-led R&D project was cost-shared with NASA's Aerospace Industry Technology Program (AITP). The Rocketdyne Division of Boeing North American, Inc. served as the team lead with participation from Lockheed Martin, Ford Motor Company, Howmet Corporation, PCC Airfoils, General Electric, UES, Inc., University of Alabama, Auburn University, Robinson, Inc., Aracor, and NASA-LeRC. The technical effort was organized into four distinct tasks. The accomplishments are reported herein. Task 1.0 developed advanced simulation technology for core molding. Ford headed up this task. On this program, a specialized core machine was designed and built. Task 2.0 focused on intelligent process control for precision core molding. Howmet led this effort. The primary focus of these experimental efforts was to characterize the process parameters that have a strong impact on dimensional control issues of injection molded cores during their fabrication. Task 3.0 developed and applied rapid prototyping to produce near net shape castings. Rocketdyne was responsible for this task. CAD files were generated using reverse engineering, rapid prototype patterns were fabricated using SLS and SLA, and castings produced and evaluated. Task 4.0 was aimed at developing technology transfer. Rocketdyne coordinated this task. Casting related technology, explored and evaluated in the first three tasks of this program, was implemented into manufacturing processes.

  1. Adjusting process count on demand for petascale global optimization

    SciTech Connect

    Sosonkina, Masha; Watson, Layne T.; Radcliffe, Nicholas R.; Haftka, Rafael T.; Trosset, Michael W.

    2012-11-23

    There are many challenges that need to be met before efficient and reliable computation at the petascale is possible. Many scientific and engineering codes running at the petascale are likely to be memory intensive, which makes thrashing a serious problem for many petascale applications. One way to overcome this challenge is to use a dynamic number of processes, so that the total amount of memory available for the computation can be increased on demand. This paper describes modifications made to the massively parallel global optimization code pVTdirect in order to allow for a dynamic number of processes. In particular, the modified version of the code monitors memory use and spawns new processes if the amount of available memory is determined to be insufficient. The primary design challenges are discussed, and performance results are presented and analyzed.
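    As a rough illustration of the dynamic-process idea (pVTdirect itself is implemented differently and is not reproduced here), the following hedged sketch checks available memory and, when it falls below a threshold, spawns extra worker processes through MPI's dynamic process management. The threshold, the worker executable name optimizer_worker, and the Linux-specific memory probe are all assumptions made for illustration only.

```cpp
// Illustrative sketch of adjusting the process count on demand:
// monitor available memory and spawn additional workers via MPI.
#include <mpi.h>
#include <unistd.h>
#include <cstdio>

// Approximate available physical memory in bytes (Linux-specific).
static long long available_memory_bytes() {
    return static_cast<long long>(sysconf(_SC_AVPHYS_PAGES)) * sysconf(_SC_PAGE_SIZE);
}

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // Rank 0 decides whether memory is running low; the decision is broadcast
    // so that the spawn call below is made collectively by all ranks.
    const long long threshold = 2LL * 1024 * 1024 * 1024;   // 2 GiB, illustrative
    int need_more = (rank == 0 && available_memory_bytes() < threshold) ? 1 : 0;
    MPI_Bcast(&need_more, 1, MPI_INT, 0, MPI_COMM_WORLD);

    if (need_more) {
        MPI_Comm workers;
        int errcodes[4];
        MPI_Comm_spawn("optimizer_worker",          // hypothetical executable name
                       MPI_ARGV_NULL, 4, MPI_INFO_NULL,
                       0, MPI_COMM_WORLD, &workers, errcodes);
        if (rank == 0) std::printf("low memory: spawned 4 additional worker processes\n");
        // ... redistribute the search data structures over the enlarged set ...
    }

    MPI_Finalize();
    return 0;
}
```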

  2. Advances in turbulence physics and modeling by direct numerical simulations

    NASA Technical Reports Server (NTRS)

    Reynolds, W. C.

    1987-01-01

    The advent of direct numerical simulations of turbulence has opened avenues for research on turbulence physics and turbulence modeling. Direct numerical simulation provides values for anything that the scientist or modeler would like to know about the flow. An overview of some recent advances in the physical understanding of turbulence and in turbulence modeling obtained through such simulations is presented.

  3. Computational Chemistry at the Petascale: Are We There Yet?

    SciTech Connect

    Apra, Edoardo; Harrison, Richard J; Tipparaju, Vinod; Vazquez-Mayagoitia, Alvaro

    2009-02-01

    We have run computational chemistry calculations approaching the Petascale level of performance (~0.5 PFlops). We used the Coupled Cluster CCSD(T) module of the computational chemistry code NWChem to evaluate accurate energetics of water clusters on a 1.4 PFlops Cray XT5 computer.

  4. Advanced photovoltaic system simulator to demonstrate the performance of advanced photovoltaic cells and devices

    SciTech Connect

    Mrig, L.; DeBlasio, R.; O'Sullivan, G.A.; Tomko, R.P.

    1983-05-01

    This paper describes a photovoltaic system simulator for characterizing and evaluating the performance of advanced photovoltaic cells, modules, and arrays as well as for simulating the operation of advanced conceptual photovoltaic systems. The system simulator is capable of extrapolating the performance from a single laboratory cell, or of a module to power levels up to 10 kW. The major subsystems comprising the system simulator are (1) Solar Array Simulator, (2) Power Conditioning Unit, (3) Load Controller and Resistive Load Unit, (4) Data Acquisition and Control Unit, and (5) Cell Test Bed.

  5. SiSAR: advanced SAR simulation

    NASA Astrophysics Data System (ADS)

    Klaus, Ferdinand

    1995-11-01

    SiSAR was planned as a modular, user-friendly, fast, and as realistic as possible SAR raw data simulator running on ordinary workstations. Interest in (interferometric) SAR products is growing on an international scale, and manpower and financial resources are being concentrated on it. Dead ends and failures have to be avoided during the design and mission of every SAR project by simulating the system thoroughly before the experiment. Another reason to make use of extensive, reproducible simulations during design and development is the reduction of time and manpower costs. When it comes to verifying and comparing different processing algorithms, (interferometric) SAR simulation is an indispensable tool for testing individual processing steps. SiSAR is a modular SAR raw data simulator for realistic description of the functions of a SAR system. It contains an implementation of diverse models to characterize radar targets, various approaches to describe the trajectory and the motion of the footprint on the target surface, and different raw data formation algorithms. Beyond that, there is a wide range of tools for manipulation, analysis, and user-friendly simulation handling. Results obtained by SiSAR and some first simulated interferometric SAR raw data are shown in the paper.

  6. Hybrid and Electric Advanced Vehicle Systems Simulation

    NASA Technical Reports Server (NTRS)

    Beach, R. F.; Hammond, R. A.; Mcgehee, R. K.

    1985-01-01

    Predefined components are connected to represent a wide variety of propulsion systems. The Hybrid and Electric Advanced Vehicle System (HEAVY) computer program is a flexible tool for evaluating the performance and cost of electric and hybrid vehicle propulsion systems. It allows the designer to quickly, conveniently, and economically predict the performance of a proposed drive train.

  7. 14 CFR Appendix H to Part 121 - Advanced Simulation

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 3 2014-01-01 2014-01-01 false Advanced Simulation H Appendix H to Part... REQUIREMENTS: DOMESTIC, FLAG, AND SUPPLEMENTAL OPERATIONS Pt. 121, App. H Appendix H to Part 121—Advanced... airmen used in appendix H training and checking are highly qualified to provide the training required...

  8. 14 CFR Appendix H to Part 121 - Advanced Simulation

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 3 2013-01-01 2013-01-01 false Advanced Simulation H Appendix H to Part... REQUIREMENTS: DOMESTIC, FLAG, AND SUPPLEMENTAL OPERATIONS Pt. 121, App. H Appendix H to Part 121—Advanced... ensure that all instructors and check airmen used in appendix H training and checking are...

  9. 14 CFR Appendix H to Part 121 - Advanced Simulation

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 3 2012-01-01 2012-01-01 false Advanced Simulation H Appendix H to Part... REQUIREMENTS: DOMESTIC, FLAG, AND SUPPLEMENTAL OPERATIONS Pt. 121, App. H Appendix H to Part 121—Advanced... ensure that all instructors and check airmen used in appendix H training and checking are...

  10. 14 CFR Appendix H to Part 121 - Advanced Simulation

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 3 2011-01-01 2011-01-01 false Advanced Simulation H Appendix H to Part... REQUIREMENTS: DOMESTIC, FLAG, AND SUPPLEMENTAL OPERATIONS Pt. 121, App. H Appendix H to Part 121—Advanced... ensure that all instructors and check airmen used in appendix H training and checking are...

  11. 14 CFR Appendix H to Part 121 - Advanced Simulation

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 3 2010-01-01 2010-01-01 false Advanced Simulation H Appendix H to Part... REQUIREMENTS: DOMESTIC, FLAG, AND SUPPLEMENTAL OPERATIONS Pt. 121, App. H Appendix H to Part 121—Advanced... ensure that all instructors and check airmen used in appendix H training and checking are...

  12. Understanding I/O workload characteristics of a Peta-scale storage system

    SciTech Connect

    Kim, Youngjae; Gunasekaran, Raghul

    2015-01-01

    Understanding workload characteristics is critical for optimizing and improving the performance of current systems and software, and for architecting new storage systems based on observed workload patterns. In this paper, we characterize the I/O workloads of scientific applications on one of the world's fastest high performance computing (HPC) storage clusters, Spider, at the Oak Ridge Leadership Computing Facility (OLCF). OLCF's flagship petascale simulation platform, Titan, and other large HPC clusters, totaling over 250,000 compute cores, depend on Spider for their I/O needs. We characterize the system utilization, the demands of reads and writes, idle time, storage space utilization, and the distribution of read requests to write requests for this peta-scale storage system. From this study, we develop synthesized workloads, and we show that the read and write I/O bandwidth usage as well as the inter-arrival time of requests can be modeled as a Pareto distribution. We also study the I/O load imbalance problems using I/O performance data collected from the Spider storage system.
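    The Pareto model mentioned above for request inter-arrival times can be used directly to synthesize a workload. The sketch below uses inverse-transform sampling, x = x_m * u^(-1/alpha); the shape and scale values are placeholders, not the parameters actually fitted to the Spider traces.

```cpp
// Toy sketch of a synthesized I/O workload: request inter-arrival times
// drawn from a Pareto distribution via inverse transform sampling.
#include <cmath>
#include <random>
#include <vector>
#include <cstdio>

int main() {
    const double alpha = 1.5;    // shape parameter (assumed, not fitted)
    const double x_m   = 0.01;   // scale: minimum inter-arrival time in seconds (assumed)

    std::mt19937_64 rng(42);
    std::uniform_real_distribution<double> uniform(0.0, 1.0);

    double t = 0.0;
    std::vector<double> arrival_times;
    for (int i = 0; i < 1000; ++i) {
        double u = 1.0 - uniform(rng);                 // u in (0, 1]
        double gap = x_m * std::pow(u, -1.0 / alpha);  // Pareto-distributed gap
        t += gap;
        arrival_times.push_back(t);                    // issue an I/O request at time t
    }
    std::printf("last synthetic request at t = %.2f s\n", t);
    return 0;
}
```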

  13. Simulator design for advanced ISDN satellite design and experiments

    NASA Technical Reports Server (NTRS)

    Pepin, Gerald R.

    1992-01-01

    This simulation design task completion report documents the simulation techniques associated with the network models of both the Interim Service ISDN (integrated services digital network) Satellite (ISIS) and the Full Service ISDN Satellite (FSIS) architectures. The ISIS network model design represents satellite systems like the Advanced Communication Technology Satellite (ACTS) orbiting switch. The FSIS architecture, the ultimate aim of this element of the Satellite Communications Applications Research (SCAR) program, moves all control and switching functions on-board the next generation ISDN communication satellite. The technical and operational parameters for the advanced ISDN communications satellite design will be obtained from the simulation of ISIS and FSIS engineering software models for their major subsystems. Discrete-event simulation experiments will be performed with these models using various traffic scenarios, design parameters and operational procedures. The data from these simulations will be used to determine the engineering parameters for the advanced ISDN communications satellite.

  14. Molecular dynamics simulations: advances and applications

    PubMed Central

    Hospital, Adam; Goñi, Josep Ramon; Orozco, Modesto; Gelpí, Josep L

    2015-01-01

    Molecular dynamics simulations have evolved into a mature technique that can be used effectively to understand macromolecular structure-to-function relationships. Present simulation times are close to biologically relevant ones. Information gathered about the dynamic properties of macromolecules is rich enough to shift the usual paradigm of structural bioinformatics from studying single structures to analyzing conformational ensembles. Here, we describe the foundations of molecular dynamics and the improvements made toward obtaining such ensembles. Specific application of the technique to three main issues (allosteric regulation, docking, and structure refinement) is discussed.

  15. Interactive visualization to advance earthquake simulation

    USGS Publications Warehouse

    Kellogg, L.H.; Bawden, G.W.; Bernardin, T.; Billen, M.; Cowgill, E.; Hamann, B.; Jadamec, M.; Kreylos, O.; Staadt, O.; Sumner, D.

    2008-01-01

    The geological sciences are challenged to manage and interpret increasing volumes of data as observations and simulations increase in size and complexity. For example, simulations of earthquake-related processes typically generate complex, time-varying data sets in two or more dimensions. To facilitate interpretation and analysis of these data sets, evaluate the underlying models, and to drive future calculations, we have developed methods of interactive visualization with a special focus on using immersive virtual reality (VR) environments to interact with models of Earth's surface and interior. Virtual mapping tools allow virtual "field studies" in inaccessible regions. Interactive tools allow us to manipulate shapes in order to construct models of geological features for geodynamic models, while feature extraction tools support quantitative measurement of structures that emerge from numerical simulation or field observations, thereby enabling us to improve our interpretation of the dynamical processes that drive earthquakes. VR has traditionally been used primarily as a presentation tool, albeit with active navigation through data. Reaping the full intellectual benefits of immersive VR as a tool for scientific analysis requires building on the method's strengths, that is, using both 3D perception and interaction with observed or simulated data. This approach also takes advantage of the specialized skills of geological scientists who are trained to interpret the often limited geological and geophysical data available from field observations. © Birkhäuser 2008.

  16. Advanced Simulation and Computing Business Plan

    SciTech Connect

    Rummel, E.

    2015-07-09

    To maintain a credible nuclear weapons program, the National Nuclear Security Administration’s (NNSA’s) Office of Defense Programs (DP) needs to make certain that the capabilities, tools, and expert staff are in place and are able to deliver validated assessments. This requires a complete and robust simulation environment backed by an experimental program to test ASC Program models. This ASC Business Plan document encapsulates a complex set of elements, each of which is essential to the success of the simulation component of the Nuclear Security Enterprise. The ASC Business Plan addresses the hiring, mentoring, and retaining of programmatic technical staff responsible for building the simulation tools of the nuclear security complex. The ASC Business Plan describes how the ASC Program engages with industry partners—partners upon whom the ASC Program relies for today’s and tomorrow’s high performance architectures. Each piece in this chain is essential to assure policymakers, who must make decisions based on the results of simulations, that they are receiving all the actionable information they need.

  17. Simulation Toolkit for Renewable Energy Advanced Materials Modeling

    2013-11-13

    STREAMM is a collection of python classes and scripts that enables and eases the setup of input files and configuration files for simulations of advanced energy materials. The core STREAMM python classes provide a general framework for storing, manipulating and analyzing atomic/molecular coordinates to be used in quantum chemistry and classical molecular dynamics simulations of soft materials systems. The design focuses on enabling the interoperability of materials simulation codes such as GROMACS, LAMMPS and Gaussian.

  18. Advances in NLTE modeling for integrated simulations

    NASA Astrophysics Data System (ADS)

    Scott, H. A.; Hansen, S. B.

    2010-01-01

    The last few years have seen significant progress in constructing the atomic models required for non-local thermodynamic equilibrium (NLTE) simulations. Along with this has come an increased understanding of the requirements for accurately modeling the ionization balance, energy content and radiative properties of different atomic species for a wide range of densities and temperatures. Much of this progress is the result of a series of workshops dedicated to comparing the results from different codes and computational approaches applied to a series of test problems. The results of these workshops emphasized the importance of atomic model completeness, especially in doubly-excited states and autoionization transitions, to calculating ionization balance, and the importance of accurate, detailed atomic data to producing reliable spectra. We describe a simple screened-hydrogenic model that calculates NLTE ionization balance with sufficient accuracy, at a low enough computational cost for routine use in radiation-hydrodynamics codes. The model incorporates term splitting, Δn = 0 transitions, and approximate UTA widths for spectral calculations, with results comparable to those of much more detailed codes. Simulations done with this model have been increasingly successful at matching experimental data for laser-driven systems and hohlraums. Accurate and efficient atomic models are just one requirement for integrated NLTE simulations. Coupling the atomic kinetics to hydrodynamics and radiation transport constrains both discretizations and algorithms to retain energy conservation, accuracy and stability. In particular, the strong coupling between radiation and populations can require either very short time steps or significantly modified radiation transport algorithms to account for NLTE material response. Considerations such as these continue to provide challenges for NLTE simulations.

  19. Process simulation for advanced composites production

    SciTech Connect

    Allendorf, M.D.; Ferko, S.M.; Griffiths, S.

    1997-04-01

    The objective of this project is to improve the efficiency and lower the cost of chemical vapor deposition (CVD) processes used to manufacture advanced ceramics by providing the physical and chemical understanding necessary to optimize and control these processes. Project deliverables include: numerical process models; databases of thermodynamic and kinetic information related to the deposition process; and process sensors and software algorithms that can be used for process control. Target manufacturing techniques include CVD fiber coating technologies (used to deposit interfacial coatings on continuous fiber ceramic preforms), chemical vapor infiltration, thin-film deposition processes used in the glass industry, and coating techniques used to deposit wear-, abrasion-, and corrosion-resistant coatings for use in the pulp and paper, metals processing, and aluminum industries.

  20. Use of advanced computers for aerodynamic flow simulation

    NASA Technical Reports Server (NTRS)

    Bailey, F. R.; Ballhaus, W. F.

    1980-01-01

    The current and projected use of advanced computers for large-scale aerodynamic flow simulation applied to engineering design and research is discussed. The design use of mature codes run on conventional, serial computers is compared with the fluid research use of new codes run on parallel and vector computers. The role of flow simulations in design is illustrated by the application of a three dimensional, inviscid, transonic code to the Sabreliner 60 wing redesign. Research computations that include a more complete description of the fluid physics by use of Reynolds averaged Navier-Stokes and large-eddy simulation formulations are also presented. Results of studies for a numerical aerodynamic simulation facility are used to project the feasibility of design applications employing these more advanced three dimensional viscous flow simulations.

  1. SciDAC Advances in Beam Dynamics Simulation: From Light Sources to Colliders

    SciTech Connect

    Qiang, J.; Borland, M.; Kabel, A.; Li, R.; Ryne, R.; Stern, E.; Wang, Y.; Wasserman, H.; Zhang, Y.; /SLAC

    2011-11-14

    In this paper, we report on progress that has been made in beam dynamics simulation, from light sources to colliders, during the first year of the SciDAC-2 accelerator project 'Community Petascale Project for Accelerator Science and Simulation (ComPASS).' Several parallel computational tools for beam dynamics simulation are described. Also presented are a number of applications in current and future accelerator facilities (e.g., LCLS, RHIC, Tevatron, LHC, and ELIC). Particle accelerators are some of the most important tools of scientific discovery. They are widely used in high-energy physics, nuclear physics, and other basic and applied sciences to study the interaction of elementary particles, to probe the internal structure of matter, and to generate high-brightness radiation for research in materials science, chemistry, biology, and other fields. Modern accelerators are complex and expensive devices that may be several kilometers long and may consist of thousands of beamline elements. An accelerator may transport trillions of charged particles that interact electromagnetically among themselves, that interact with fields produced by the accelerator components, and that interact with beam-induced fields. Large-scale beam dynamics simulations on massively parallel computers can help provide understanding of these complex physical phenomena, help minimize design cost, and help optimize machine operation. In this paper, we report on beam dynamics simulations in a variety of accelerators ranging from next generation light sources to high-energy ring colliders that have been studied during the first year of the SciDAC-2 accelerator project.

  2. Alignment and Initial Operation of an Advanced Solar Simulator

    NASA Technical Reports Server (NTRS)

    Jaworske, Donald A.; Jefferies, Kent S.; Mason, Lee S.

    1996-01-01

    A solar simulator utilizing nine 30-kW xenon arc lamps was built to provide radiant power for testing a solar dynamic space power system in a thermal vacuum environment. The advanced solar simulator achieved the following values specific to the solar dynamic system: (1) a subtense angle of 1 deg; (2) the ability to vary solar simulator intensity up to 1.7 kW/sq m; (3) a beam diameter of 4.8 m; and (4) uniformity of illumination on the order of +/-10%. The flexibility of the solar simulator design allows for other potential uses of the facility.

  3. ADVANCED WAVEFORM SIMULATION FOR SEISMIC MONITORING EVENTS

    SciTech Connect

    Helmberger, Donald V.; Tromp, Jeroen; Rodgers, Arthur J.

    2008-04-15

    The recent Nevada Earthquake (M=6) produced an extraordinary set of crustal guided waves. In this study, we examine the three-component data at all the USArray stations in terms of how well existing models perform in predicting the various phases, Rayleigh waves, Love waves, and Pnl waves. To establish the source parameters, we applied the Cut and Paste Code up to a distance of 5° for an average local crustal model which produced a normal mechanism (strike=35°, dip=41°, rake=-85°) at a depth of 9 km and Mw=5.9. Assuming this mechanism, we generated synthetics at all distances for a number of 1D and 3D models. The Pnl observations fit the synthetics for the simple models well both in timing (VPn=7.9 km/s) and waveform fits out to a distance of about 5°. Beyond this distance a great deal of complexity can be seen to the northwest, apparently caused by shallow subducted slab material. These paths require considerable crustal thinning and higher P-velocities. Small delays and advances outline the various tectonic provinces to the south (Colorado Plateau, etc.) with velocities compatible with those reported by Song et al. (1996). Five-second Rayleigh waves (Airy Phase) can be observed throughout the whole array and show a great deal of variation (up to 30 s). In general, the Love waves are better behaved than the Rayleigh waves. We are presently adding higher frequencies to the source description by including source complexity. Preliminary inversions suggest rupture to the northeast with a shallow asperity. We are also inverting the aftershocks to extend the frequencies to 2 Hz and beyond, following the calibration method outlined in Tan and Helmberger (2007). This will allow accurate directivity measurements for events with magnitude larger than 3.5. Thus, we will address the energy decay with distance as a function of frequency band for the various source types.

  4. Brush seal numerical simulation: Concepts and advances

    NASA Technical Reports Server (NTRS)

    Braun, M. J.; Kudriavtsev, V. V.

    1994-01-01

    The development of the brush seal is considered to be the most promising among the advanced-type seals presently in use in high-speed turbomachinery. The brush is usually mounted on the stationary portions of the engine and has direct contact with the rotating element, in the process of limiting the 'unwanted' leakage flows between stages, or various engine cavities. This type of sealing technology provides high pressure drops (in comparison with conventional seals), due mainly to the high packing density (around 100 bristles/sq mm) and brush compliance with the rotor motions. In the design of modern aerospace turbomachinery, leakage flows between the stages must be minimal, thus contributing to the higher efficiency of the engine. Use of the brush seal instead of the labyrinth seal reduces the leakage flow by one order of magnitude. Brush seals also have been found to enhance dynamic performance, cost less, and are lighter than labyrinth seals. Even though industrial brush seals have been successfully developed through extensive experimentation, there is no comprehensive numerical methodology for the design or prediction of their performance. The existing analytical/numerical approaches are based on bulk flow models and do not allow the investigation of the effects of brush morphology (bristle arrangement), or brushes arrangement (number of brushes, spacing between them), on the pressure drops and flow leakage. An increase in the brush seal efficiency is clearly a complex problem that is closely related to the brush geometry and arrangement, and can be solved most likely only by means of a numerically distributed model.

  5. Brush seal numerical simulation: Concepts and advances

    NASA Astrophysics Data System (ADS)

    Braun, M. J.; Kudriavtsev, V. V.

    1994-07-01

    The development of the brush seal is considered to be the most promising among the advanced-type seals presently in use in high-speed turbomachinery. The brush is usually mounted on the stationary portions of the engine and has direct contact with the rotating element, in the process of limiting the 'unwanted' leakage flows between stages, or various engine cavities. This type of sealing technology provides high pressure drops (in comparison with conventional seals), due mainly to the high packing density (around 100 bristles/sq mm) and brush compliance with the rotor motions. In the design of modern aerospace turbomachinery, leakage flows between the stages must be minimal, thus contributing to the higher efficiency of the engine. Use of the brush seal instead of the labyrinth seal reduces the leakage flow by one order of magnitude. Brush seals also have been found to enhance dynamic performance, cost less, and are lighter than labyrinth seals. Even though industrial brush seals have been successfully developed through extensive experimentation, there is no comprehensive numerical methodology for the design or prediction of their performance. The existing analytical/numerical approaches are based on bulk flow models and do not allow the investigation of the effects of brush morphology (bristle arrangement), or brushes arrangement (number of brushes, spacing between them), on the pressure drops and flow leakage. An increase in the brush seal efficiency is clearly a complex problem that is closely related to the brush geometry and arrangement, and can be solved most likely only by means of a numerically distributed model.

  6. MOGO: Model-Oriented Global Optimization of Petascale Applications

    SciTech Connect

    Malony, Allen D.; Shende, Sameer S.

    2012-09-14

    The MOGO project was initiated in 2008 under the DOE Program Announcement for Software Development Tools for Improved Ease-of-Use on Petascale systems (LAB 08-19). The MOGO team consisted of Oak Ridge National Lab, Argonne National Lab, and the University of Oregon. The overall goal of MOGO was to attack petascale performance analysis by developing a general framework where empirical performance data could be efficiently and accurately compared with performance expectations at various levels of abstraction. This information could then be used to automatically identify and remediate performance problems. MOGO was based on performance models derived from application knowledge, performance experiments, and symbolic analysis. MOGO was able to make a reasonable impact on existing DOE applications and systems. New tools and techniques were developed, which, in turn, were used on important DOE applications on DOE LCF systems to show significant performance improvements.

  7. An advanced photovoltaic system simulator to demonstrate the performance of advanced photovoltaic cells and devices

    SciTech Connect

    Mrig, L.; DeBlasio, R.; O'Sullivan, G.A.; Tomko, R.P.

    1982-09-01

    This paper describes a photovoltaic system simulator for characterizing and evaluating the performance of advanced photovoltaic cells, modules, and arrays as well as for simulating the operation of advanced conceptual photovoltaic systems. The system simulator is capable of extrapolating the performance from a single laboratory cell, or of a module, to power levels up to 10 kW. The major subsystems comprising the system simulator are Solar Array Simulator, Power Conditioning Unit, Load Controller and Resistive Load Unit, Data Acquisition and Control Unit, and Cell Test Bed. The system was designed and fabricated by Abacus Controls, Inc., Somerville, NJ, under subcontract to SERI, and has recently been installed (except the cell test bed) at SERI, where initial operation is taking place.

  8. High-Fidelity Simulation for Advanced Cardiac Life Support Training

    PubMed Central

    Davis, Lindsay E.; Storjohann, Tara D.; Spiegel, Jacqueline J.; Beiber, Kellie M.

    2013-01-01

    Objective. To determine whether a high-fidelity simulation technique compared with lecture would produce greater improvement in advanced cardiac life support (ACLS) knowledge, confidence, and overall satisfaction with the training method. Design. This sequential, parallel-group, crossover trial randomized students into 2 groups distinguished by the sequence of teaching technique delivered for ACLS instruction (ie, classroom lecture vs high-fidelity simulation exercise). Assessment. Test scores on a written examination administered at baseline and after each teaching technique improved significantly from baseline in all groups but were highest when lecture was followed by simulation. Simulation was associated with a greater degree of overall student satisfaction compared with lecture. Participation in a simulation exercise did not improve pharmacy students’ knowledge of ACLS more than attending a lecture, but it was associated with improved student confidence in skills and satisfaction with learning and application. Conclusions. College curricula should incorporate simulation to complement but not replace lecture for ACLS education. PMID:23610477

  9. Advances in modeling and simulation of vacuum electronic devices

    SciTech Connect

    Antonsen, T.M. Jr.; Mondelli, A.A.; Levush, B.; Verboncoeur, J.P.; Birdsall, C.K.

    1999-05-01

    Recent advances in the modeling and simulation of vacuum electronic devices are reviewed. Design of these devices makes use of a variety of physical models and numerical code types. Progress in the development of these models and codes is outlined and illustrated with specific examples. The state of the art in device simulation is evolving to the point that devices can be designed on the computer, thereby eliminating many trial and error fabrication and test steps. The role of numerical simulation in the design process can be expected to grow further in the future.

  10. Advanced computer graphic techniques for laser range finder (LRF) simulation

    NASA Astrophysics Data System (ADS)

    Bedkowski, Janusz; Jankowski, Stanislaw

    2008-11-01

    This paper shows advanced computer graphics techniques for laser range finder (LRF) simulation. The LRF is a common sensor for unmanned ground vehicles, autonomous mobile robots and security applications. The cost of the measurement system is extremely high, therefore a simulation tool was designed. The simulation gives an opportunity to exercise algorithms such as obstacle avoidance [1], SLAM for robot localization [2], detection of vegetation and water obstacles in the surroundings of the robot chassis [3], and LRF measurement in a crowd of people [1]. The axis-aligned bounding box (AABB) technique and an alternative technique based on CUDA (NVIDIA Compute Unified Device Architecture) are presented.
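    The AABB technique named above is, at its core, a ray/box intersection test. The following sketch shows the standard slab method for a single simulated LRF ray; it is an illustration of the general technique under assumed data structures, not the authors' implementation (which also includes a CUDA variant).

```cpp
// Hedged sketch: one simulated LRF ray tested against an axis-aligned
// bounding box using the standard slab method.
#include <algorithm>
#include <cstdio>
#include <limits>

struct Vec3 { double x, y, z; };

// Returns true and the entry distance tHit if the ray origin + t*dir hits the box.
bool rayAABB(const Vec3& origin, const Vec3& dir,
             const Vec3& boxMin, const Vec3& boxMax, double& tHit) {
    double tMin = 0.0;
    double tMax = std::numeric_limits<double>::max();
    const double o[3]  = {origin.x, origin.y, origin.z};
    const double d[3]  = {dir.x, dir.y, dir.z};
    const double lo[3] = {boxMin.x, boxMin.y, boxMin.z};
    const double hi[3] = {boxMax.x, boxMax.y, boxMax.z};
    for (int i = 0; i < 3; ++i) {
        if (d[i] == 0.0) {                        // ray parallel to this slab
            if (o[i] < lo[i] || o[i] > hi[i]) return false;
        } else {
            double t1 = (lo[i] - o[i]) / d[i];
            double t2 = (hi[i] - o[i]) / d[i];
            if (t1 > t2) std::swap(t1, t2);
            tMin = std::max(tMin, t1);
            tMax = std::min(tMax, t2);
            if (tMin > tMax) return false;        // slab intervals do not overlap
        }
    }
    tHit = tMin;                                  // simulated range reading
    return true;
}

int main() {
    Vec3 origin{0, 0, 0}, dir{1, 0, 0}, lo{2, -1, -1}, hi{4, 1, 1};
    double range;
    if (rayAABB(origin, dir, lo, hi, range))
        std::printf("simulated LRF return at %.2f m\n", range);
    return 0;
}
```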

  11. On-line Machine Learning and Event Detection in Petascale Data Streams

    NASA Astrophysics Data System (ADS)

    Thompson, David R.; Wagstaff, K. L.

    2012-01-01

    Traditional statistical data mining involves off-line analysis in which all data are available and equally accessible. However, petascale datasets have challenged this premise since it is often impossible to store, let alone analyze, the relevant observations. This has led the machine learning community to investigate adaptive processing chains where data mining is a continuous process. Here pattern recognition permits triage and followup decisions at multiple stages of a processing pipeline. Such techniques can also benefit new astronomical instruments such as the Large Synoptic Survey Telescope (LSST) and Square Kilometre Array (SKA) that will generate petascale data volumes. We summarize some machine learning perspectives on real time data mining, with representative cases of astronomical applications and event detection in high volume data streams. The first is a "supervised classification" approach currently used for transient event detection at the Very Long Baseline Array (VLBA). It injects known signals of interest - faint single-pulse anomalies - and tunes system parameters to recover these events. This permits meaningful event detection for diverse instrument configurations and observing conditions whose noise cannot be well-characterized in advance. Second, "semi-supervised novelty detection" finds novel events based on statistical deviations from previous patterns. It detects outlier signals of interest while considering known examples of false alarm interference. Applied to data from the Parkes pulsar survey, the approach identifies anomalous "peryton" phenomena that do not match previous event models. Finally, we consider online light curve classification that can trigger adaptive followup measurements of candidate events. Classifier performance analyses suggest optimal survey strategies, and permit principled followup decisions from incomplete data. These examples trace a broad range of algorithm possibilities available for online astronomical data
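    The "statistical deviation from previous patterns" idea behind semi-supervised novelty detection can be sketched generically. The example below keeps a running mean and variance over a stream (Welford's algorithm) and flags samples that deviate by more than k standard deviations; the threshold and toy data are assumptions, and this is not the VLBA or Parkes pipeline.

```cpp
// Generic illustration of online novelty detection in a data stream:
// flag samples far from the running statistics of the history seen so far.
#include <cmath>
#include <cstdio>
#include <vector>

struct RunningStats {
    long long n = 0;
    double mean = 0.0, m2 = 0.0;
    void update(double x) { ++n; double d = x - mean; mean += d / n; m2 += d * (x - mean); }
    double stddev() const { return n > 1 ? std::sqrt(m2 / (n - 1)) : 0.0; }
};

int main() {
    const double k = 5.0;                           // sigma threshold (assumed)
    std::vector<double> stream = {1.0, 1.1, 0.9, 1.05, 0.95, 1.0, 8.0, 1.02};
    RunningStats stats;
    for (std::size_t i = 0; i < stream.size(); ++i) {
        if (stats.n > 3 && stats.stddev() > 0.0 &&
            std::fabs(stream[i] - stats.mean) > k * stats.stddev())
            std::printf("sample %zu flagged as a candidate event\n", i);
        stats.update(stream[i]);                    // every sample updates the history
    }
    return 0;
}
```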

  12. Gasification CFD Modeling for Advanced Power Plant Simulations

    SciTech Connect

    Zitney, S.E.; Guenther, C.P.

    2005-09-01

    In this paper we have described recent progress on developing CFD models for two commercial-scale gasifiers, including a two-stage, coal slurry-fed, oxygen-blown, pressurized, entrained-flow gasifier and a scaled-up design of the PSDF transport gasifier. Also highlighted was NETL’s Advanced Process Engineering Co-Simulator for coupling high-fidelity equipment models with process simulation for the design, analysis, and optimization of advanced power plants. Using APECS, we have coupled the entrained-flow gasifier CFD model into a coal-fired, gasification-based FutureGen power and hydrogen production plant. The results for the FutureGen co-simulation illustrate how the APECS technology can help engineers better understand and optimize gasifier fluid dynamics and related phenomena that impact overall power plant performance.

  13. Simulating advanced life support systems to test integrated control approaches

    NASA Astrophysics Data System (ADS)

    Kortenkamp, D.; Bell, S.

    Simulations allow for testing of life support control approaches before hardware is designed and built. Simulations also allow for the safe exploration of alternative control strategies during life support operation. As such, they are an important component of any life support research program and testbed. This paper describes a specific advanced life support simulation being created at NASA Johnson Space Center. It is a discrete-event simulation that is dynamic and stochastic. It simulates all major components of an advanced life support system, including crew (with variable ages, weights and genders), biomass production (with scalable plantings of ten different crops), water recovery, air revitalization, food processing, solid waste recycling and energy production. Each component is modeled as a producer of certain resources and a consumer of certain resources. The control system must monitor (via sensors) and control (via actuators) the flow of resources throughout the system to provide life support functionality. The simulation is written in an object-oriented paradigm that makes it portable, extensible and reconfigurable.
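    The producer/consumer resource model described above can be sketched in a few lines. The components, resources, and rates below are hypothetical placeholders, not values from the JSC simulation; the point is only the structure: each component declares what it consumes and produces per tick, and a shared store mediates the flow that a control system would monitor via sensors and adjust via actuators.

```cpp
// Minimal sketch of a life support component as a producer/consumer of
// named resources drawn from and returned to a shared store each tick.
#include <map>
#include <string>
#include <vector>
#include <cstdio>

using Store = std::map<std::string, double>;

struct Component {
    std::string name;
    Store consumes;   // resource -> amount per tick
    Store produces;   // resource -> amount per tick

    // A component only runs if every input it needs is available.
    bool step(Store& store) const {
        for (const auto& [res, amt] : consumes)
            if (store[res] < amt) return false;
        for (const auto& [res, amt] : consumes) store[res] -= amt;
        for (const auto& [res, amt] : produces) store[res] += amt;
        return true;
    }
};

int main() {
    Store store{{"O2", 10.0}, {"CO2", 0.0}, {"biomass", 5.0}};
    std::vector<Component> components = {
        {"crew",   {{"O2", 0.8}, {"biomass", 0.6}}, {{"CO2", 1.0}}},
        {"plants", {{"CO2", 1.0}},                  {{"O2", 0.9}, {"biomass", 0.5}}},
    };
    for (int tick = 0; tick < 24; ++tick)           // 24 simulated hours
        for (const auto& c : components)
            if (!c.step(store))
                std::printf("tick %d: %s starved of an input resource\n", tick, c.name.c_str());
    std::printf("O2 remaining: %.2f\n", store["O2"]);
    return 0;
}
```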

  14. CASL: The Consortium for Advanced Simulation of Light Water Reactors

    NASA Astrophysics Data System (ADS)

    Kothe, Douglas B.

    2010-11-01

    Like the fusion community, the nuclear engineering community is embarking on a new computational effort to create integrated, multiphysics simulations. The Consortium for Advanced Simulation of Light Water Reactors (CASL), one of three newly funded DOE Energy Innovation Hubs, brings together an exceptionally capable team that will apply existing modeling and simulation capabilities and develop advanced capabilities to create a usable environment for predictive simulation of light water reactors (LWRs). This environment, designated the Virtual Reactor (VR), will: 1) Enable the use of leadership-class computing for engineering design and analysis to improve reactor capabilities, 2) Promote an enhanced scientific basis and understanding by replacing empirically based design and analysis tools with predictive capabilities, 3) Develop a highly integrated multiphysics environment for engineering analysis through increased fidelity methods, and 4) Incorporate UQ as a basis for developing priorities and supporting application of the VR tools for predictive simulation. In this presentation, we present the plans for CASL and comment on the similarities and differences with the proposed Fusion Simulation Project (FSP).

  15. Patient Simulation Software to Augment an Advanced Pharmaceutics Course

    PubMed Central

    Schonder, Kristine

    2011-01-01

    Objective To implement and assess the effectiveness of adding a pharmaceutical care simulation program to an advanced therapeutics course. Design PharmaCAL (University of Pittsburgh), a software program that uses a branched-outcome decision making model, was used to create patient simulations to augment lectures given in the course. In each simulation, students were presented with a challenge, given choices, and then provided with consequences specific to their choices. Assessments A survey was administered at the end of the course and students indicated the simulations were enjoyable (92%), easy to use (90%), stimulated interest in critically ill patients (82%), and allowed for application of lecture material (91%). A 5-item presimulation and postsimulation test on the anemia simulation was administered to assess learning. Students answered significantly more questions correctly on the postsimulation test than on the presimulation test (p < 0.001). Seventy-eight percent of students answered the same 5 questions correctly on the final examination. Conclusion Patient simulation software that used a branched-outcome decision model was an effective supplement to class lectures in an advanced pharmaceutics course and was well-received by pharmacy students. PMID:21519411

  16. Integration of Advanced Simulation and Visualization for Manufacturing Process Optimization

    NASA Astrophysics Data System (ADS)

    Zhou, Chenn; Wang, Jichao; Tang, Guangwu; Moreland, John; Fu, Dong; Wu, Bin

    2016-05-01

    The integration of simulation and visualization can provide a cost-effective tool for process optimization, design, scale-up and troubleshooting. The Center for Innovation through Visualization and Simulation (CIVS) at Purdue University Northwest has developed methodologies for such integration with applications in various manufacturing processes. The methodologies have proven to be useful for virtual design and virtual training to provide solutions addressing issues on energy, environment, productivity, safety, and quality in steel and other industries. In collaboration with its industrial partners, CIVS has provided solutions to companies, saving over US$38 million. CIVS is currently working with the steel industry to establish an industry-led Steel Manufacturing Simulation and Visualization Consortium through the support of a National Institute of Standards and Technology AMTech Planning Grant. The consortium focuses on supporting development and implementation of simulation and visualization technologies to advance steel manufacturing across the value chain.

  17. Requirements for advanced simulation of nuclear reactor and chemical separation plants.

    SciTech Connect

    Palmiotti, G.; Cahalan, J.; Pfeiffer, P.; Sofu, T.; Taiwo, T.; Wei,T.; Yacout, A.; Yang, W.; Siegel, A.; Insepov, Z.; Anitescu, M.; Hovland,P.; Pereira, C.; Regalbuto, M.; Copple, J.; Willamson, M.

    2006-12-11

    This report presents requirements for advanced simulation of nuclear reactor and chemical processing plants that are of interest to the Global Nuclear Energy Partnership (GNEP) initiative. Justification for advanced simulation and some examples of grand challenges that will benefit from it are provided. An integrated software tool, with its main components based whenever possible on first principles, is proposed as a possible future approach for dealing with the complex problems linked to the simulation of nuclear reactor and chemical processing plants. The main benefits associated with better integrated simulation have been identified as: a reduction of design margins, a decrease in the number of experiments in support of the design process, a shortening of the developmental design cycle, and a better understanding of the physical phenomena and the related underlying fundamental processes. For each component of the proposed integrated software tool, background information, functional requirements, current tools and approaches, and proposed future approaches have been provided. Whenever possible, current uncertainties have been quoted and existing limitations have been presented. Desired target accuracies with associated benefits to the different aspects of the nuclear reactor and chemical processing plants were also given. In many cases the possible gains associated with better simulation have been identified, quantified, and translated into economic benefits.

  18. Hybrid and electric advanced vehicle systems (heavy) simulation

    NASA Technical Reports Server (NTRS)

    Hammond, R. A.; Mcgehee, R. K.

    1981-01-01

    A computer program to simulate hybrid and electric advanced vehicle systems (HEAVY) is described. It is intended for use early in the design process: concept evaluation, alternative comparison, preliminary design, control and management strategy development, component sizing, and sensitivity studies. It allows the designer to quickly, conveniently, and economically predict the performance of a proposed drive train. The user defines the system to be simulated using a library of predefined component models that may be connected to represent a wide variety of propulsion systems. The development of three models are discussed as examples.

  19. Advanced 3D Photocathode Modeling and Simulations Final Report

    SciTech Connect

    Dimitre A Dimitrov; David L Bruhwiler

    2005-06-06

    High brightness electron beams required by the proposed Next Linear Collider demand strong advances in photocathode electron gun performance. Significant improvement in the production of such beams with rf photocathode electron guns is hampered by the lack of high-fidelity simulations. The critical missing piece in existing gun codes is a physics-based, detailed treatment of the very complex and highly nonlinear photoemission process.

  20. Super instruction architecture of petascale electronic structure software: the story

    NASA Astrophysics Data System (ADS)

    Lotrich, V. F.; Ponton, J. M.; Perera, A. S.; Deumens, E.; Bartlett, R. J.; Sanders, B. A.

    2010-11-01

    Theoretical methods in chemistry lead to algorithms for the computation of electronic energies and other properties of electronic wave functions that require large numbers of floating point operations and involve large data sets. Thus, computational chemists are very interested in using massively parallel computer systems and in particular the new petascale systems. In this paper we discuss a new programming paradigm that was developed at the Quantum Theory Project to construct electronic structure software that can scale to large numbers of cores of the order of 100,000 and beyond to solve problems in materials engineering relevant to the problems facing society today.

  1. Capturing Petascale Application Characteristics with the Sequoia Toolkit

    SciTech Connect

    Vetter, Jeffrey S; Bhatia, Nikhil; Grobelny, Eric M; Roth, Philip C

    2005-09-01

    Characterization of the computation, communication, memory, and I/O demands of current scientific applications is crucial for identifying which technologies will enable petascale scientific computing. In this paper, we present the Sequoia Toolkit for characterizing HPC applications. The Sequoia Toolkit consists of the Sequoia trace capture library and the Sequoia Event Analysis Library, or SEAL, that facilitates the development of tools for analyzing Sequoia event traces. Using the Sequoia Toolkit, we have characterized the behavior of application runs with up to 2048 application processes. To illustrate the use of the Sequoia Toolkit, we present a preliminary characterization of LAMMPS, a molecular dynamics application of great interest to the computational biology community.

  2. The Consortium for Advanced Simulation of Light Water Reactors

    SciTech Connect

    Ronaldo Szilard; Hongbin Zhang; Doug Kothe; Paul Turinsky

    2011-10-01

    The Consortium for Advanced Simulation of Light Water Reactors (CASL) is a DOE Energy Innovation Hub for modeling and simulation of nuclear reactors. It brings together an exceptionally capable team from national labs, industry and academia that will apply existing modeling and simulation capabilities and develop advanced capabilities to create a usable environment for predictive simulation of light water reactors (LWRs). This environment, designated as the Virtual Environment for Reactor Applications (VERA), will incorporate science-based models, state-of-the-art numerical methods, modern computational science and engineering practices, and uncertainty quantification (UQ) and validation against data from operating pressurized water reactors (PWRs). It will couple state-of-the-art fuel performance, neutronics, thermal-hydraulics (T-H), and structural models with existing tools for systems and safety analysis and will be designed for implementation on both today's leadership-class computers and the advanced architecture platforms now under development by the DOE. CASL focuses on a set of challenge problems such as CRUD induced power shift and localized corrosion, grid-to-rod fretting fuel failures, pellet clad interaction, fuel assembly distortion, etc. that encompass the key phenomena limiting the performance of PWRs. It is expected that much of the capability developed will be applicable to other types of reactors. CASL's mission is to develop and apply modeling and simulation capabilities to address three critical areas of performance for nuclear power plants: (1) reduce capital and operating costs per unit energy by enabling power uprates and plant lifetime extension, (2) reduce nuclear waste volume generated by enabling higher fuel burnup, and (3) enhance nuclear safety by enabling high-fidelity predictive capability for component performance.

  3. Optimizing GW for Petascale HPC and Beyond

    NASA Astrophysics Data System (ADS)

    Deslippe, Jack; Canning, Andrew; Saad, Yousef; Chelikowsky, James; Louie, Steven

    2014-03-01

    The traditional GW-Bethe-Salpeter (BSE) approach has, in practice, been prohibitively expensive on systems with more than 50 atoms. We show that through a combination of methodological and algorithmic improvements, the standard GW-BSE approach can be applied to systems with hundreds of atoms. We will discuss the massively parallel GW-BSE implementation in the BerkeleyGW package (on top of common DFT packages), including the importance of hybrid MPI-OpenMP parallelism, parallel IO and library performance. We will discuss optimization strategies for, and performance on, many-core architectures. Support for this work is provided through the Scientific Discovery through Advanced Computing (SciDAC) program funded by U.S. Department of Energy, Office of Science, Advanced Scientific Computing Research and Basic Energy Sciences. Grant Number DE-FG02-12ER4

  4. Advanced simulation study on bunch gap transient effect

    NASA Astrophysics Data System (ADS)

    Kobayashi, Tetsuya; Akai, Kazunori

    2016-06-01

    Bunch phase shift along the train due to a bunch gap transient is a concern in high-current colliders. In KEKB operation, the measured phase shift along the train agreed well with a simulation and a simple analytical form in most part of the train. However, a rapid phase change was observed at the leading part of the train, which was not predicted by the simulation or by the analytical form. In order to understand the cause of this observation, we have developed an advanced simulation, which treats the transient loading in each of the cavities of the three-cavity system of the accelerator resonantly coupled with energy storage (ARES) instead of the equivalent single cavities used in the previous simulation, operating in the accelerating mode. In this paper, we show that the new simulation reproduces the observation, and clarify that the rapid phase change at the leading part of the train is caused by a transient loading in the three-cavity system of ARES. KEKB is being upgraded to SuperKEKB, which is aiming at 40 times higher luminosity than KEKB. The gap transient in SuperKEKB is investigated using the new simulation, and the result shows that the rapid phase change at the leading part of the train is much larger due to higher beam currents. We will also present measures to mitigate possible luminosity reduction or beam performance deterioration due to the rapid phase change caused by the gap transient.

  5. Simulated herbivory advances autumn phenology in Acer rubrum

    NASA Astrophysics Data System (ADS)

    Forkner, Rebecca E.

    2014-05-01

    To determine the degree to which herbivory contributes to phenotypic variation in autumn phenology for deciduous trees, red maple (Acer rubrum) branches were subjected to low and high levels of simulated herbivory and surveyed at the end of the season to assess abscission and degree of autumn coloration. Overall, branches with simulated herbivory abscised ~7% more leaves at each autumn survey date than did control branches within trees. While branches subjected to high levels of damage showed advanced phenology, abscission rates did not differ from those of undamaged branches within trees because heavy damage induced earlier leaf loss on adjacent branch nodes in this treatment. Damaged branches had greater proportions of leaf area colored than undamaged branches within trees, having twice the amount of leaf area colored at the onset of autumn and having ~16% greater leaf area colored in late October when nearly all leaves were colored. When senescence was scored as the percent of all leaves abscised and/or colored, branches in both treatments reached peak senescence earlier than did control branches within trees: dates of 50% senescence occurred 2.5 days earlier for low herbivory branches and 9.7 days earlier for branches with high levels of simulated damage. These advanced rates are of the same time length as reported delays in autumn senescence and advances in spring onset due to climate warming. Thus, results suggest that should insect damage increase as a consequence of climate change, it may offset a lengthening of leaf life spans in some tree species.

  6. Hierarchical Petascale Simulation Framework For Stress Corrosion Cracking

    SciTech Connect

    Grama, Ananth

    2013-12-18

    A number of major accomplishments resulted from the project. These include:
    • Data Structures, Algorithms, and Numerical Methods for Reactive Molecular Dynamics. We have developed a range of novel data structures, algorithms, and solvers (amortized ILU, Spike) for use with ReaxFF and charge equilibration.
    • Parallel Formulations of Reactive MD (Purdue Reactive Molecular Dynamics Package: PuReMD, PuReMD-GPU, and PG-PuReMD) for Messaging, GPU, and GPU Cluster Platforms. We have developed efficient serial, parallel (MPI), GPU (CUDA), and GPU cluster (MPI/CUDA) implementations. Our implementations have been demonstrated to be significantly better than the state of the art, both in terms of performance and scalability.
    • Comprehensive Validation in the Context of Diverse Applications. We have demonstrated the use of our software in diverse systems, including silica-water and silicon-germanium nanorods, and, as part of other projects, have extended it to applications ranging from explosives (RDX) to lipid bilayers (biomembranes under oxidative stress).
    • Open Source Software Packages for Reactive Molecular Dynamics. All versions of our software have been released into the public domain. There are over 100 major research groups worldwide using our software.
    • Implementation into the Department of Energy LAMMPS Software Package. We have also integrated our software into the Department of Energy LAMMPS software package.

  7. The Advanced Gamma-ray Imaging System (AGIS) - Simulation Studies

    SciTech Connect

    Maier, G.; Buckley, J.; Bugaev, V.; Fegan, S.; Vassiliev, V. V.; Funk, S.; Konopelko, A.

    2008-12-24

    The Advanced Gamma-ray Imaging System (AGIS) is a US-led concept for a next-generation instrument in ground-based very-high-energy gamma-ray astronomy. The most important design requirement for AGIS is a sensitivity about 10 times greater than that of current observatories such as VERITAS, H.E.S.S., or MAGIC. We present results of simulation studies of various possible designs for AGIS. The primary characteristics of the array performance, collecting area, angular resolution, background rejection, and sensitivity are discussed.

  8. The Advanced Gamma-ray Imaging System (AGIS): Simulation studies

    SciTech Connect

    Maier, G.; Buckley, J.; Bugaev, V.; Fegan, S.; Funk, S.; Konopelko, A.; Vassiliev, V.V.; /UCLA

    2011-06-14

    The Advanced Gamma-ray Imaging System (AGIS) is a next-generation ground-based gamma-ray observatory being planned in the U.S. The anticipated sensitivity of AGIS is about one order of magnitude better than the sensitivity of current observatories, allowing it to measure gamma-ray emission from a large number of Galactic and extra-galactic sources. We present here results of simulation studies of various possible designs for AGIS. The primary characteristics of the array performance - collecting area, angular resolution, background rejection, and sensitivity - are discussed.

  9. Direct Simulation Monte Carlo: Recent Advances and Applications

    NASA Astrophysics Data System (ADS)

    Oran, E. S.; Oh, C. K.; Cybyk, B. Z.

    The principles of and procedures for implementing direct simulation Monte Carlo (DSMC) are described. Guidelines to inherent and external errors common in DSMC applications are provided. Three applications of DSMC to transitional and nonequilibrium flows are considered: rarefied atmospheric flows, growth of thin films, and microsystems. Selected new, potentially important advances in DSMC capabilities are described: Lagrangian DSMC, optimization on parallel computers, and hybrid algorithms for computations in mixed flow regimes. Finally, the limitations of current computer technology for using DSMC to compute low-speed, high-Knudsen-number flows are outlined as future challenges.
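
    To make the basic DSMC procedure concrete, the sketch below shows the core of a single-cell, hard-sphere collision step in the spirit of the no-time-counter pair-selection scheme. The particle count, gas properties, and bookkeeping are illustrative assumptions for this sketch and are not taken from the review above; a full DSMC code also moves particles, sorts them into cells, and applies boundary conditions every step.

        # Illustrative single-cell DSMC collision step (hard-sphere gas,
        # no-time-counter style pair selection). All numbers are assumptions.
        import numpy as np

        rng = np.random.default_rng(0)
        kB = 1.380649e-23
        N, F_N = 2000, 1.0e9           # simulator particles; real molecules per particle
        m, d = 4.65e-26, 3.7e-10       # molecular mass (kg) and hard-sphere diameter (m)
        T0, V_cell, dt = 300.0, 1.0e-9, 1.0e-6

        v = rng.normal(0.0, np.sqrt(kB * T0 / m), size=(N, 3))   # Maxwellian start
        sigma = np.pi * d**2
        cr_max = 5.0 * np.sqrt(kB * T0 / m)                      # running max relative speed

        for step in range(50):
            # Number of candidate collision pairs this step (NTC estimate)
            n_cand = int(0.5 * N * (N - 1) * F_N * sigma * cr_max * dt / V_cell)
            for _ in range(n_cand):
                i, j = rng.choice(N, size=2, replace=False)
                cr = np.linalg.norm(v[i] - v[j])
                cr_max = max(cr_max, cr)
                if rng.random() < cr / cr_max:                   # accept with probability cr/cr_max
                    # Isotropic hard-sphere scattering in the center-of-mass frame
                    vcm = 0.5 * (v[i] + v[j])
                    cos_t = 2.0 * rng.random() - 1.0
                    sin_t = np.sqrt(1.0 - cos_t**2)
                    phi = 2.0 * np.pi * rng.random()
                    rel = cr * np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
                    v[i], v[j] = vcm + 0.5 * rel, vcm - 0.5 * rel

        T = m * np.mean(np.sum(v**2, axis=1)) / (3.0 * kB)
        print(f"kinetic temperature after collisions: {T:.1f} K")   # stays near 300 K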

  10. New Developments in the Simulation of Advanced Accelerator Concepts

    SciTech Connect

    Bruhwiler, David L.; Cary, John R.; Cowan, Benjamin M.; Paul, Kevin; Mullowney, Paul J.; Messmer, Peter; Geddes, Cameron G. R.; Esarey, Eric; Cormier-Michel, Estelle; Leemans, Wim; Vay, Jean-Luc

    2009-01-22

    Improved computational methods are essential to the diverse and rapidly developing field of advanced accelerator concepts. We present an overview of some computational algorithms for laser-plasma concepts and high-brightness photocathode electron sources. In particular, we discuss algorithms for reduced laser-plasma models that can be orders of magnitude faster than their higher-fidelity counterparts, as well as important on-going efforts to include relevant additional physics that has been previously neglected. As an example of the former, we present 2D laser wakefield accelerator simulations in an optimal Lorentz frame, demonstrating >10 GeV energy gain of externally injected electrons over a 2 m interaction length, showing good agreement with predictions from scaled simulations and theory, with a speedup factor of ~2,000 as compared to standard particle-in-cell.
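
    For a rough sense of where a speedup factor of this size comes from, the snippet below evaluates the commonly quoted boosted-frame heuristic that the ratio of the longest to the shortest resolved scale, and hence the number of time steps needed, shrinks by roughly (1 + beta)^2 * gamma^2 in a frame moving with the wake. This is only an order-of-magnitude illustration under that assumption, not the paper's own accounting.

        # Heuristic boosted-frame speedup estimate (assumption for illustration).
        import math

        def boosted_frame_speedup(gamma_frame):
            beta = math.sqrt(1.0 - 1.0 / gamma_frame**2)
            return (1.0 + beta) ** 2 * gamma_frame**2

        for g in (5, 10, 23):
            print(f"gamma = {g:2d}  ->  speedup ~ {boosted_frame_speedup(g):6.0f}")
        # gamma ~ 23 already gives a factor on the order of the ~2,000 quoted above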

  11. New Developments in the Simulation of Advanced Accelerator Concepts

    SciTech Connect

    Paul, K.; Cary, J.R.; Cowan, B.; Bruhwiler, D.L.; Geddes, C.G.R.; Mullowney, P.J.; Messmer, P.; Esarey, E.; Cormier-Michel, E.; Leemans, W.P.; Vay, J.-L.

    2008-09-10

    Improved computational methods are essential to the diverse and rapidly developing field of advanced accelerator concepts. We present an overview of some computational algorithms for laser-plasma concepts and high-brightness photocathode electron sources. In particular, we discuss algorithms for reduced laser-plasma models that can be orders of magnitude faster than their higher-fidelity counterparts, as well as important on-going efforts to include relevant additional physics that has been previously neglected. As an example of the former, we present 2D laser wakefield accelerator simulations in an optimal Lorentz frame, demonstrating >10 GeV energy gain of externally injected electrons over a 2 m interaction length, showing good agreement with predictions from scaled simulations and theory, with a speedup factor of ~2,000 as compared to standard particle-in-cell.

  12. Advanced simulations of optical transition and diffraction radiation

    NASA Astrophysics Data System (ADS)

    Aumeyr, T.; Billing, M. G.; Bobb, L. M.; Bolzon, B.; Bravin, E.; Karataev, P.; Kruchinin, K.; Lefevre, T.; Mazzoni, S.

    2015-04-01

    Charged particle beam diagnostics is a key task in modern and future accelerator installations. The diagnostic tools are practically the "eyes" of the operators. The precision and resolution of the diagnostic equipment are crucial to define the performance of the accelerator. Transition and diffraction radiation (TR and DR) are widely used for electron beam parameter monitoring. However, the precision and resolution of those devices are determined by how well the production, transport and detection of these radiation types are understood. This paper reports on simulations of TR and DR spatial-spectral characteristics using the physical optics propagation (POP) mode of the Zemax advanced optics simulation software. A good consistency with theory is demonstrated. Also, realistic optical system alignment issues are discussed.

  13. Advanced radiometric millimeter-wave scene simulation: ARMSS

    NASA Astrophysics Data System (ADS)

    Hauss, Bruce I.; Agravante, Hiroshi H.; Chaiken, Steven

    1997-06-01

    In order to predict the performance of a passive millimeter wave sensor under a variety of weather, terrain and sensor operational conditions, TRW has developed the Advanced Radiometric Millimeter-Wave Scene Simulation (ARMSS) code. This code provides a comprehensive, end-to-end scene simulation capability based on rigorous, 'first-principles' physics models of the passive millimeter wave phenomenology and sensor characteristics. The ARMSS code has been extensively benchmarked against both data in the literature and a wide array of millimeter-wave field-imaging data. The code has been used in support of numerous passive millimeter wave technology programs for interpreting millimeter wave data, establishing scene signatures, performing mission analyses, and developing system requirements for the design of millimeter wave sensor systems. In this paper, we will present details of the ARMSS code and describe its current use in defining system requirements for the passive millimeter wave camera being developed under the Passive Millimeter Wave Camera Consortium led by TRW.

  14. Recent advances of strong-strong beam-beam simulation

    SciTech Connect

    Qiang, Ji; Furman, Miguel A.; Ryne, Robert D.; Fischer, Wolfram; Ohmi, Kazuhito

    2004-09-15

    In this paper, we report on recent advances in strong-strong beam-beam simulation. Numerical methods used in the calculation of the beam-beam forces are reviewed. A new computational method to solve the Poisson equation on a nonuniform grid is presented. This method reduces the computational cost by half compared with the standard FFT-based method on a uniform grid. It is also more accurate than the standard method for a colliding beam with a low transverse aspect ratio. In applications, we present the study of coherent modes with multi-bunch, multi-collision beam-beam interactions at RHIC. We also present the strong-strong simulation of the luminosity evolution at KEKB with and without finite crossing angle.
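
    For reference, the uniform-grid baseline that the new nonuniform-grid solver is compared against is the familiar FFT-based Poisson solve sketched below. Periodic boundaries and unit permittivity are assumed here purely for brevity; free-space Green's-function variants used in beam-beam codes follow the same spectral pattern.

        # Minimal FFT-based solve of  -laplace(phi) = rho  on a periodic grid.
        import numpy as np

        def solve_poisson_periodic(rho, lx, ly):
            ny, nx = rho.shape
            kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=lx / nx)
            ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=ly / ny)
            k2 = kx[None, :]**2 + ky[:, None]**2
            rho_hat = np.fft.fft2(rho)
            phi_hat = np.zeros_like(rho_hat)
            nonzero = k2 > 0
            phi_hat[nonzero] = rho_hat[nonzero] / k2[nonzero]   # zero-mean solution
            return np.fft.ifft2(phi_hat).real

        # Self-check against an analytic mode: rho = sin(2*pi*x) has
        # phi = rho / (2*pi)**2 on the unit periodic box.
        nx = ny = 64
        x = np.linspace(0.0, 1.0, nx, endpoint=False)
        rho = np.sin(2.0 * np.pi * x)[None, :].repeat(ny, axis=0)
        phi = solve_poisson_periodic(rho, 1.0, 1.0)
        print(np.allclose(phi, rho / (2.0 * np.pi) ** 2, atol=1e-10))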

  15. Advanced radiometric and interferometric millimeter-wave scene simulations

    NASA Technical Reports Server (NTRS)

    Hauss, B. I.; Moffa, P. J.; Steele, W. G.; Agravante, H.; Davidheiser, R.; Samec, T.; Young, S. K.

    1993-01-01

    Smart munitions and weapons utilize various imaging sensors (including passive IR, active and passive millimeter-wave, and visible wavebands) to detect/identify targets at short standoff ranges and in varied terrain backgrounds. In order to design and evaluate these sensors under a variety of conditions, a high-fidelity scene simulation capability is necessary. Such a capability for passive millimeter-wave scene simulation exists at TRW. TRW's Advanced Radiometric Millimeter-Wave Scene Simulation (ARMSS) code is a rigorous, benchmarked, end-to-end passive millimeter-wave scene simulation code for interpreting millimeter-wave data, establishing scene signatures and evaluating sensor performance. In passive millimeter-wave imaging, resolution is limited due to wavelength and aperture size. Where high resolution is required, the utility of passive millimeter-wave imaging is confined to short ranges. Recent developments in interferometry have made possible high resolution applications on military platforms. Interferometry or synthetic aperture radiometry allows the creation of a high resolution image with a sparsely filled aperture. Borrowing from research work in radio astronomy, we have developed and tested at TRW scene reconstruction algorithms that allow the recovery of the scene from a relatively small number of spatial frequency components. In this paper, the TRW modeling capability is described and numerical results are presented.

  16. Advanced radiometric and interferometric millimeter-wave scene simulations

    NASA Astrophysics Data System (ADS)

    Hauss, B. I.; Moffa, P. J.; Steele, W. G.; Agravante, H.; Davidheiser, R.; Samec, T.; Young, S. K.

    1993-12-01

    Smart munitions and weapons utilize various imaging sensors (including passive IR, active and passive millimeter-wave, and visible wavebands) to detect/identify targets at short standoff ranges and in varied terrain backgrounds. In order to design and evaluate these sensors under a variety of conditions, a high-fidelity scene simulation capability is necessary. Such a capability for passive millimeter-wave scene simulation exists at TRW. TRW's Advanced Radiometric Millimeter-Wave Scene Simulation (ARMSS) code is a rigorous, benchmarked, end-to-end passive millimeter-wave scene simulation code for interpreting millimeter-wave data, establishing scene signatures and evaluating sensor performance. In passive millimeter-wave imaging, resolution is limited due to wavelength and aperture size. Where high resolution is required, the utility of passive millimeter-wave imaging is confined to short ranges. Recent developments in interferometry have made possible high resolution applications on military platforms. Interferometry or synthetic aperture radiometry allows the creation of a high resolution image with a sparsely filled aperture. Borrowing from research work in radio astronomy, we have developed and tested at TRW scene reconstruction algorithms that allow the recovery of the scene from a relatively small number of spatial frequency components. In this paper, the TRW modeling capability is described and numerical results are presented.

  17. Design and Optimization of Large Accelerator Systems through High-Fidelity Electromagnetic Simulations

    SciTech Connect

    Ng, Cho; Akcelik, Volkan; Candel, Arno; Chen, Sheng; Ge, Lixin; Kabel, Andreas; Lee, Lie-Quan; Li, Zenghai; Prudencio, Ernesto; Schussman, Greg; Uplenchwar, Ravi; Xiao, Liling; Ko, Kwok; Austin, T.; Cary, J.R.; Ovtchinnikov, S.; Smith, D.N.; Werner, G.R.; Bellantoni, L.; /SLAC /TechX Corp. /Fermilab

    2008-08-01

    SciDAC1, with its support for the 'Advanced Computing for 21st Century Accelerator Science and Technology' (AST) project, witnessed dramatic advances in electromagnetic (EM) simulations for the design and optimization of important accelerators across the Office of Science. In SciDAC2, EM simulations continue to play an important role in the 'Community Petascale Project for Accelerator Science and Simulation' (ComPASS), through close collaborations with SciDAC CETs/Institutes in computational science. Existing codes will be improved and new multi-physics tools will be developed to model large accelerator systems with unprecedented realism and high accuracy using computing resources at the petascale. These tools target the most challenging problems facing the ComPASS project. Supported by advances in computational science research, they have been successfully applied to the International Linear Collider (ILC) and the Large Hadron Collider (LHC) in High Energy Physics (HEP), the JLab 12-GeV Upgrade in Nuclear Physics (NP), as well as the Spallation Neutron Source (SNS) and the Linac Coherent Light Source (LCLS) in Basic Energy Sciences (BES).

  18. Co-Simulation for Advanced Process Design and Optimization

    SciTech Connect

    Stephen E. Zitney

    2009-01-01

    Meeting the increasing demand for clean, affordable, and secure energy is arguably the most important challenge facing the world today. Fossil fuels can play a central role in a portfolio of carbon-neutral energy options provided CO2 emissions can be dramatically reduced by capturing CO2 and storing it safely and effectively. The fossil energy industry faces the challenge of meeting aggressive design goals for next-generation power plants with carbon capture and storage (CCS). Process designs will involve large, highly integrated, and multipurpose systems with advanced equipment items that have complex geometries and multiphysics. APECS is enabling software that facilitates effective integration, solution, and analysis of high-fidelity process/equipment (CFD) co-simulations. APECS helps to optimize fluid flow and related phenomena that impact overall power plant performance. APECS offers many advanced capabilities including reduced-order models (ROMs), design optimization, parallel execution, stochastic analysis, and virtual plant co-simulations. NETL and its collaborative R&D partners are using APECS to reduce the time, cost, and technical risk of developing high-efficiency, zero-emission power plants with CCS.

  19. Graphics simulation and training aids for advanced teleoperation

    NASA Technical Reports Server (NTRS)

    Kim, Won S.; Schenker, Paul S.; Bejczy, Antal K.

    1993-01-01

    Graphics displays can be of significant aid in accomplishing a teleoperation task throughout all three phases of off-line task analysis and planning, operator training, and online operation. In the first phase, graphics displays provide substantial aid to investigate work cell layout, motion planning with collision detection and with possible redundancy resolution, and planning for camera views. In the second phase, graphics displays can serve as very useful tools for introductory training of operators before training them on actual hardware. In the third phase, graphics displays can be used for previewing planned motions and monitoring actual motions in any desired viewing angle, or, when communication time delay prevails, for providing predictive graphics overlay on the actual camera view of the remote site to show the non-time-delayed consequences of commanded motions in real time. This paper addresses potential space applications of graphics displays in all three operational phases of advanced teleoperation. Possible applications are illustrated with techniques developed and demonstrated in the Advanced Teleoperation Laboratory at JPL. The examples described include task analysis and planning of a simulated Solar Maximum Satellite Repair task, a novel force-reflecting teleoperation simulator for operator training, and preview and predictive displays for on-line operations.

  20. A Virtual Engineering Framework for Simulating Advanced Power System

    SciTech Connect

    Mike Bockelie; Dave Swensen; Martin Denison; Stanislav Borodai

    2008-06-18

    This report describes the work performed to provide NETL with VE-Suite-based virtual engineering software and enhanced equipment models to support NETL's Advanced Process Engineering Co-simulation (APECS) framework for advanced power generation systems. Enhancements to the software framework facilitated an important link between APECS and the virtual engineering capabilities provided by VE-Suite (e.g., equipment and process visualization, information assimilation). Model enhancements focused on improving predictions for the performance of entrained flow coal gasifiers and important auxiliary equipment (e.g., Air Separation Units) used in coal gasification systems. In addition, a Reduced Order Model generation tool and software to provide a coupling between APECS/AspenPlus and the GE GateCycle simulation system were developed. CAPE-Open model interfaces were employed where needed. The improved simulation capability is demonstrated on selected test problems. As part of the project an Advisory Panel was formed to provide guidance on the issues on which to focus the work effort. The Advisory Panel included experts from industry and academia in gasification, CO2 capture issues, and process simulation, as well as representatives from technology developers and the electric utility industry. To optimize the benefit to NETL, REI coordinated its efforts with NETL and NETL-funded projects at Iowa State University, Carnegie Mellon University and ANSYS/Fluent, Inc. The improved simulation capabilities incorporated into APECS will enable researchers and engineers to better understand the interactions of different equipment components, identify weaknesses and processes needing improvement, and thereby allow more efficient, less expensive plants to be developed and brought on-line faster and in a more cost-effective manner. These enhancements to APECS represent an important step toward having a fully integrated environment for performing plant simulation and engineering

  1. Quantum Monte Carlo for large chemical systems: implementing efficient strategies for petascale platforms and beyond.

    PubMed

    Scemama, Anthony; Caffarel, Michel; Oseret, Emmanuel; Jalby, William

    2013-04-30

    Various strategies to implement efficiently quantum Monte Carlo (QMC) simulations for large chemical systems are presented. These include: (i) the introduction of an efficient algorithm to calculate the computationally expensive Slater matrices. This novel scheme is based on the use of the highly localized character of atomic Gaussian basis functions (not the molecular orbitals as usually done), (ii) the possibility of keeping the memory footprint minimal, (iii) the important enhancement of single-core performance when efficient optimization tools are used, and (iv) the definition of a universal, dynamic, fault-tolerant, and load-balanced framework adapted to all kinds of computational platforms (massively parallel machines, clusters, or distributed grids). These strategies have been implemented in the QMC=Chem code developed at Toulouse and illustrated with numerical applications on small peptides of increasing sizes (158, 434, 1056, and 1731 electrons). Using 10-80 k computing cores of the Curie machine (GENCI-TGCC-CEA, France), QMC=Chem has been shown to be capable of running at the petascale level, thus demonstrating that for this machine a large part of the peak performance can be achieved. Implementation of large-scale QMC simulations for future exascale platforms with a comparable level of efficiency is expected to be feasible. PMID:23288704

  2. Investigations and advanced concepts on gyrotron interaction modeling and simulations

    SciTech Connect

    Avramidis, K. A.

    2015-12-15

    In gyrotron theory, the interaction between the electron beam and the high frequency electromagnetic field is commonly modeled using the slow variables approach. The slow variables are quantities that vary slowly in time in comparison to the electron cyclotron frequency. They represent the electron momentum and the high frequency field of the resonant TE modes in the gyrotron cavity. For their definition, some reference frequencies need to be introduced. These include the so-called averaging frequency, used to define the slow variable corresponding to the electron momentum, and the carrier frequencies, used to define the slow variables corresponding to the field envelopes of the modes. From the mathematical point of view, the choice of the reference frequencies is, to some extent, arbitrary. However, from the numerical point of view, there are arguments that point toward specific choices, in the sense that these choices are advantageous in terms of simulation speed and accuracy. In this paper, the typical monochromatic gyrotron operation is considered, and the numerical integration of the interaction equations is performed by the trajectory approach, since it is the fastest, and therefore it is the one that is most commonly used. The influence of the choice of the reference frequencies on the interaction simulations is studied using theoretical arguments, as well as numerical simulations. From these investigations, appropriate choices for the values of the reference frequencies are identified. In addition, novel, advanced concepts for the definitions of these frequencies are addressed, and their benefits are demonstrated numerically.

  3. Advanced altitude simulation facility P8 - current status

    NASA Astrophysics Data System (ADS)

    Pauly, C.; Suslov, D.; Haidn, O. J.

    2011-10-01

    The paper reports the current status of a DLR Lampoldshausen project towards the design, erection, and operation of an advanced altitude simulation facility at the European R&T Facility P8. The system will allow for testing subscale thrust chamber assemblies (TCAs) including the surrounding supersonic flow around the nozzle. This facility will allow for investigation not only into the specific features of altitude simulation facilities but also into the interaction of the nozzle and its exhaust plume with the surrounding coflow for subsonic, transitional, and low supersonic coflow conditions. The design is based entirely on broad experience in the design and operation of various altitude simulation facilities, such as the satellite engine bench P1.0 and the cryogenic and storable upper-stage engine facilities P4.1 and P4.2, as well as on sophisticated engineering design tools and continuous numerical effort. Knowledge about nozzle and thrust chamber design and operation is based on broad investigations carried out at the cold-flow facility P6.2 and the hot-fire M3 and P8 test benches.

  4. PRATHAM: Parallel Thermal Hydraulics Simulations using Advanced Mesoscopic Methods

    SciTech Connect

    Joshi, Abhijit S; Jain, Prashant K; Mudrich, Jaime A; Popov, Emilian L

    2012-01-01

    At the Oak Ridge National Laboratory, efforts are under way to develop a 3D, parallel lattice Boltzmann method (LBM) code called PRATHAM (PaRAllel Thermal Hydraulic simulations using Advanced Mesoscopic Methods) to demonstrate the accuracy and scalability of LBM for turbulent flow simulations in nuclear applications. The code has been developed using FORTRAN-90 and parallelized using the Message Passing Interface (MPI) library. The Silo library is used to compact and write the data files, and the VisIt visualization software is used to post-process the simulation data in parallel. Both the single-relaxation-time (SRT) and multi-relaxation-time (MRT) LBM schemes have been implemented in PRATHAM. To capture turbulence without prohibitively increasing the grid resolution requirements, a large eddy simulation (LES) approach [5] is adopted, allowing large-scale eddies to be numerically resolved while the smaller (subgrid) eddies are modeled. In this work, a Smagorinsky model has been used, which modifies the fluid viscosity by an additional eddy viscosity depending on the magnitude of the rate-of-strain tensor. In LBM, this is achieved by locally varying the relaxation time of the fluid.
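
    The last point can be made concrete with a short sketch: given a resolved velocity field, the Smagorinsky eddy viscosity is added to the molecular viscosity and the sum is converted into a locally varying BGK relaxation time. The Smagorinsky constant, the lattice-unit conversion, and the finite-difference strain rate below are assumptions chosen for illustration, not necessarily PRATHAM's actual implementation.

        # Sketch of a Smagorinsky-LES closure feeding a locally varying LBM
        # relaxation time (lattice units dx = dt = 1, c_s^2 = 1/3 assumed, so
        # tau = 3*nu + 0.5). Constants and the strain-rate evaluation are toy choices.
        import numpy as np

        def local_relaxation_time(u, v, nu0, c_smag=0.17, delta=1.0):
            dudy, dudx = np.gradient(u)          # axis 0 is y, axis 1 is x
            dvdy, dvdx = np.gradient(v)
            s_xx, s_yy = dudx, dvdy
            s_xy = 0.5 * (dudy + dvdx)
            s_mag = np.sqrt(2.0 * (s_xx**2 + s_yy**2 + 2.0 * s_xy**2))   # |S|
            nu_eddy = (c_smag * delta) ** 2 * s_mag                      # Smagorinsky eddy viscosity
            return 3.0 * (nu0 + nu_eddy) + 0.5

        # A sheared layer: tau rises where the local strain rate is large
        ny, nx = 32, 32
        y = np.linspace(0.0, 1.0, ny)[:, None]
        u = 0.1 * np.tanh((y - 0.5) / 0.05) * np.ones((ny, nx))
        v = np.zeros((ny, nx))
        tau = local_relaxation_time(u, v, nu0=1.0e-3)
        print(tau.min(), tau.max())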

  5. Investigations and advanced concepts on gyrotron interaction modeling and simulations

    NASA Astrophysics Data System (ADS)

    Avramidis, K. A.

    2015-12-01

    In gyrotron theory, the interaction between the electron beam and the high frequency electromagnetic field is commonly modeled using the slow variables approach. The slow variables are quantities that vary slowly in time in comparison to the electron cyclotron frequency. They represent the electron momentum and the high frequency field of the resonant TE modes in the gyrotron cavity. For their definition, some reference frequencies need to be introduced. These include the so-called averaging frequency, used to define the slow variable corresponding to the electron momentum, and the carrier frequencies, used to define the slow variables corresponding to the field envelopes of the modes. From the mathematical point of view, the choice of the reference frequencies is, to some extent, arbitrary. However, from the numerical point of view, there are arguments that point toward specific choices, in the sense that these choices are advantageous in terms of simulation speed and accuracy. In this paper, the typical monochromatic gyrotron operation is considered, and the numerical integration of the interaction equations is performed by the trajectory approach, since it is the fastest, and therefore it is the one that is most commonly used. The influence of the choice of the reference frequencies on the interaction simulations is studied using theoretical arguments, as well as numerical simulations. From these investigations, appropriate choices for the values of the reference frequencies are identified. In addition, novel, advanced concepts for the definitions of these frequencies are addressed, and their benefits are demonstrated numerically.

  6. Computational Chemistry at the Petascale: Are We There Yet?

    SciTech Connect

    Harrison, Robert J; Apra, Edoardo; Shelton Jr, William Allison; Tipparaju, Vinod; Vazquez-Mayagoitia, Alvaro

    2009-01-01

    The field of electronic structure is struggling to achieve efficient parallel implementations on petascale-class hardware. One notable exception has been the achievement of Qbox, a planewave pseudopotential electronic structure code that obtained a performance of 207 TFlops on a BlueGene/L computer. Qbox makes use of the message-passing MPI library for parallelization. Instead, NWChem makes use of the Global Arrays library; this allows the software developer to reach a high level of abstraction and, at the same time, to use one-sided communication to efficiently exploit the network hardware. In the remainder of the paper, we will discuss recent benchmarks and scientific results obtained with NWChem on a parallel computer whose theoretical peak performance is in excess of 1 PFlops.

  7. LSST Data Management: Entering the Era of Petascale Optical Astronomy

    NASA Astrophysics Data System (ADS)

    Juric, Mario; Tyson, Tony

    2015-03-01

    The Large Synoptic Survey Telescope (LSST; Ivezic et al. 2008, http://lsst.org) is a planned, large-aperture, wide-field, ground-based telescope that will survey half the sky every few nights in six optical bands from 320 to 1050 nm. It will explore a wide range of astrophysical questions, ranging from discovering killer asteroids to examining the nature of dark energy. LSST will produce on average 15 terabytes of data per night, yielding an (uncompressed) data set of 200 petabytes at the end of its 10-year mission. Dedicated HPC facilities (with a total of 320 TFLOPS at start, scaling up to 1.7 PFLOPS by the end) will process the image data in near real time, with full-dataset reprocessing on an annual scale. The nature, quality, and volume of LSST data will be unprecedented, so the data system design requires petascale storage, terascale computing, and gigascale communications.

  8. Advanced modeling and simulation to design and manufacture high performance and reliable advanced microelectronics and microsystems.

    SciTech Connect

    Nettleship, Ian (University of Pittsburgh, Pittsburgh, PA); Hinklin, Thomas; Holcomb, David Joseph; Tandon, Rajan; Arguello, Jose Guadalupe, Jr.; Dempsey, James Franklin; Ewsuk, Kevin Gregory; Neilsen, Michael K.; Lanagan, Michael (Pennsylvania State University, University Park, PA)

    2007-07-01

    An interdisciplinary team of scientists and engineers having broad expertise in materials processing and properties, materials characterization, and computational mechanics was assembled to develop science-based modeling/simulation technology to design and reproducibly manufacture high-performance and reliable, complex microelectronics and microsystems. The team's efforts focused on defining and developing a science-based infrastructure to enable predictive compaction, sintering, stress, and thermomechanical modeling in 'real systems', including: (1) developing techniques to determine the materials properties and constitutive behavior required for modeling; (2) developing new and improved/updated models and modeling capabilities; (3) ensuring that models are representative of the physical phenomena being simulated; and (4) assessing existing modeling capabilities to identify advances necessary to facilitate the practical application of Sandia's predictive modeling technology.

  9. Petascale Object Classification of the LSST Event Stream

    NASA Astrophysics Data System (ADS)

    Borne, Kirk D.; Laher, R.; Ivezic, Z.; Hamam, N.; LSST Collaboration

    2009-01-01

    The LSST object database will contain detailed information for 20 billion sources, including approximately 10 billion galaxies and a similar number of stars. After 10 years of LSST survey operations, the object database will comprise 10-20 Petabytes of science catalog attributes: over 200 science attributes per object will be available for classification, characterization, and mining. Deep co-added values (in multiple passbands) for static objects and long-term time series (at various cadences) for dynamic objects, based upon about 1000 individual observations of each, will yield an enormously rich potential for new scientific discoveries. As part of this petascale discovery process, the impressive quantity and quality of parameter data for each object will enable the classification of astronomical objects on a grand scale. This is especially critical for the LSST event stream -- the LSST is likely to detect 10 to 100 thousand astronomical events per night. An event is defined to be any source that has changed in position and/or brightness relative to the baseline "template sky". In order for the astronomical research community to assimilate, cope with, and process such an enormous nightly flood of events, it is essential to develop and deploy a petascale object classification pipeline. This science pipeline will generate object classifications and likelihoods, based upon spatial data (e.g., positional coincidences and associations), temporal data (e.g., LSST photometric and astrometric time series), and VO-accessible data (i.e., corroborating catalog data within other data repositories at this sky location). These classifications will permit knowledge-based prioritization of the most significant events for follow-up time-critical observation. We describe some specific examples using ANN (Artificial Neural Networks) based on data from SDSS (Sloan Digital Sky Survey).

  10. Advanced Simulation Capability for Environmental Management (ASCEM): Early Site Demonstration

    SciTech Connect

    Meza, Juan; Hubbard, Susan; Freshley, Mark D.; Gorton, Ian; Moulton, David; Denham, Miles E.

    2011-03-07

    The U.S. Department of Energy Office of Environmental Management, Technology Innovation and Development (EM-32), is supporting development of the Advanced Simulation Capability for Environmental Management (ASCEM). ASCEM is a state-of-the-art scientific tool and approach for understanding and predicting contaminant fate and transport in natural and engineered systems. The modular and open source high performance computing tool will facilitate integrated approaches to modeling and site characterization that enable robust and standardized assessments of performance and risk for EM cleanup and closure activities. As part of the initial development process, a series of demonstrations were defined to test ASCEM components and provide feedback to developers, engage end users in applications, and lead to an outcome that would benefit the sites. The demonstration was implemented for a sub-region of the Savannah River Site General Separations Area that includes the F-Area Seepage Basins. The physical domain included the unsaturated and saturated zones in the vicinity of the seepage basins and Fourmile Branch, using an unstructured mesh fit to the hydrostratigraphy and topography of the site. The calculations modeled variably saturated flow and the resulting flow field was used in simulations of the advection of non-reactive species and the reactive-transport of uranium. As part of the demonstrations, a new set of data management, visualization, and uncertainty quantification tools were developed to analyze simulation results and existing site data. These new tools can be used to provide summary statistics, including information on which simulation parameters were most important in the prediction of uncertainty and to visualize the relationships between model input and output.

  11. TID Simulation of Advanced CMOS Devices for Space Applications

    NASA Astrophysics Data System (ADS)

    Sajid, Muhammad

    2016-07-01

    This paper focuses on Total Ionizing Dose (TID) effects caused by the accumulation of charges in the silicon dioxide, at the substrate/silicon dioxide interface, and in the shallow trench isolation (STI) of scaled bulk CMOS devices, as well as in the buried oxide (BOX) layer of devices based on Silicon-On-Insulator (SOI) technology, when operated in the space radiation environment. The radiation-induced leakage current and the corresponding electron concentration in the leakage current path are presented for 180 nm, 130 nm, and 65 nm NMOS and PMOS transistors based on bulk CMOS as well as SOI process technologies on board LEO and GEO satellites. On the basis of the simulation results, a TID robustness analysis for advanced deep sub-micron technologies was carried out up to 500 krad. The correlation between technology scaling and the magnitude of the leakage current at a given total dose was established utilizing the Visual TCAD Genius program.

  12. Simulated Interactive Research Experiments as Educational Tools for Advanced Science

    NASA Astrophysics Data System (ADS)

    Tomandl, Mathias; Mieling, Thomas; Losert-Valiente Kroon, Christiane M.; Hopf, Martin; Arndt, Markus

    2015-09-01

    Experimental research has become complex and thus a challenge to science education. Only very few students can typically be trained on advanced scientific equipment. It is therefore important to find new tools that allow all students to acquire laboratory skills individually and independent of where they are located. In a design-based research process we have investigated the feasibility of using a virtual laboratory as a photo-realistic and scientifically valid representation of advanced scientific infrastructure to teach modern experimental science, here, molecular quantum optics. We found a concept based on three educational principles that allows undergraduate students to become acquainted with procedures and concepts of a modern research field. We find a significant increase in student understanding using our Simulated Interactive Research Experiment (SiReX), by evaluating the learning outcomes with semi-structured interviews in a pre/post design. This suggests that this concept of an educational tool can be generalized to disseminate findings in other fields.

  13. Simulated Interactive Research Experiments as Educational Tools for Advanced Science.

    PubMed

    Tomandl, Mathias; Mieling, Thomas; Losert-Valiente Kroon, Christiane M; Hopf, Martin; Arndt, Markus

    2015-01-01

    Experimental research has become complex and thus a challenge to science education. Only very few students can typically be trained on advanced scientific equipment. It is therefore important to find new tools that allow all students to acquire laboratory skills individually and independent of where they are located. In a design-based research process we have investigated the feasibility of using a virtual laboratory as a photo-realistic and scientifically valid representation of advanced scientific infrastructure to teach modern experimental science, here, molecular quantum optics. We found a concept based on three educational principles that allows undergraduate students to become acquainted with procedures and concepts of a modern research field. We find a significant increase in student understanding using our Simulated Interactive Research Experiment (SiReX), by evaluating the learning outcomes with semi-structured interviews in a pre/post design. This suggests that this concept of an educational tool can be generalized to disseminate findings in other fields. PMID:26370627

  14. Simulated Interactive Research Experiments as Educational Tools for Advanced Science

    PubMed Central

    Tomandl, Mathias; Mieling, Thomas; Losert-Valiente Kroon, Christiane M.; Hopf, Martin; Arndt, Markus

    2015-01-01

    Experimental research has become complex and thus a challenge to science education. Only very few students can typically be trained on advanced scientific equipment. It is therefore important to find new tools that allow all students to acquire laboratory skills individually and independent of where they are located. In a design-based research process we have investigated the feasibility of using a virtual laboratory as a photo-realistic and scientifically valid representation of advanced scientific infrastructure to teach modern experimental science, here, molecular quantum optics. We found a concept based on three educational principles that allows undergraduate students to become acquainted with procedures and concepts of a modern research field. We find a significant increase in student understanding using our Simulated Interactive Research Experiment (SiReX), by evaluating the learning outcomes with semi-structured interviews in a pre/post design. This suggests that this concept of an educational tool can be generalized to disseminate findings in other fields. PMID:26370627

  15. Toward Interoperable Mesh, Geometry and Field Components for PDE Simulation Development

    SciTech Connect

    Chand, K K; Diachin, L F; Li, X; Ollivier-Gooch, C; Seol, E S; Shephard, M; Tautges, T; Trease, H

    2005-07-11

    Mesh-based PDE simulation codes are becoming increasingly sophisticated and rely on advanced meshing and discretization tools. Unfortunately, it is still difficult to interchange or interoperate tools developed by different communities to experiment with various technologies or to develop new capabilities. To address these difficulties, we have developed component interfaces designed to support the information flow of mesh-based PDE simulations. We describe this information flow and discuss typical roles and services provided by the geometry, mesh, and field components of the simulation. Based on this delineation for the roles of each component, we give a high-level description of the abstract data model and set of interfaces developed by the Department of Energy's Interoperable Tools for Advanced Petascale Simulation (ITAPS) center. These common interfaces are critical to our interoperability goal, and we give examples of several services based upon these interfaces including mesh adaptation and mesh improvement.
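
    As a purely hypothetical illustration of what such a component interface looks like from an application's point of view, the sketch below exposes entity creation, downward adjacency queries, and tagged field data behind handles. The class and method names are invented for this sketch; the real iMesh/iGeom/iField interfaces are C APIs with different names and much finer-grained services.

        # Hypothetical, much-simplified mesh component in the spirit of a
        # data-structure-neutral interface: applications see only handles and tags.
        from dataclasses import dataclass, field
        from typing import Dict, List, Tuple

        @dataclass
        class SimpleMesh:
            coords: Dict[int, Tuple[float, float, float]] = field(default_factory=dict)
            elements: Dict[int, List[int]] = field(default_factory=dict)      # element -> vertex handles
            tags: Dict[str, Dict[int, float]] = field(default_factory=dict)   # tag name -> handle -> value

            def create_vertex(self, xyz):
                handle = len(self.coords)
                self.coords[handle] = xyz
                return handle

            def create_element(self, vertex_handles):
                handle = len(self.elements)
                self.elements[handle] = list(vertex_handles)
                return handle

            def get_adjacencies(self, elem_handle):
                """Downward adjacency: the vertices bounding an element."""
                return self.elements[elem_handle]

            def set_tag(self, name, handle, value):
                self.tags.setdefault(name, {})[handle] = value

        # A solver and a mesh-adaptation service can exchange data purely through
        # handles and tags, without sharing each other's native data structures.
        mesh = SimpleMesh()
        verts = [mesh.create_vertex((float(i), 0.0, 0.0)) for i in range(3)]
        tri = mesh.create_element(verts)
        mesh.set_tag("temperature", verts[0], 300.0)
        print(mesh.get_adjacencies(tri))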

  16. Advancements in Afterbody Radiative Heating Simulations for Earth Entry

    NASA Technical Reports Server (NTRS)

    Johnston, Christopher O.; Panesi, Marco; Brandis, Aaron M.

    2016-01-01

    Four advancements to the simulation of backshell radiative heating for Earth entry are presented. The first of these is the development of a flow field model that treats electronic levels of the dominant backshell radiator, N, as individual species. This is shown to allow improvements in the modeling of electron-ion recombination and two-temperature modeling, which are shown to increase backshell radiative heating by 10 to 40%. By computing the electronic state populations of N within the flow field solver, instead of through the quasi-steady state approximation in the radiation code, the coupling of radiative transition rates to the species continuity equations for the levels of N, including the impact of non-local absorption, becomes feasible. Implementation of this additional level of coupling between the flow field and radiation codes represents the second advancement presented in this work, which is shown to increase the backshell radiation by another 10 to 50%. The impact of radiative transition rates due to non-local absorption indicates the importance of accurate radiation transport in the relatively complex flow geometry of the backshell. This motivates the third advancement, which is the development of a ray-tracing radiation transport approach to compute the radiative transition rates and divergence of the radiative flux at every point for coupling to the flow field, therefore allowing the accuracy of the commonly applied tangent-slab approximation to be assessed for radiative source terms. For the sphere considered at lunar-return conditions, the tangent-slab approximation is shown to provide a sufficient level of accuracy for the radiative source terms, even for backshell cases. This is in contrast to the agreement between the two approaches for computing the radiative flux to the surface, which differ by up to 40%. The final advancement presented is the development of a nonequilibrium model for NO radiation, which provides significant backshell

  17. The advanced computational testing and simulation toolkit (ACTS)

    SciTech Connect

    Drummond, L.A.; Marques, O.

    2002-05-21

    During the past decades there has been a continuous growth in the number of physical and societal problems that have been successfully studied and solved by means of computational modeling and simulation. Distinctively, a number of these are important scientific problems ranging in scale from the atomic to the cosmic. For example, ionization is a phenomenon as ubiquitous in modern society as the glow of fluorescent lights and the etching on silicon computer chips; but it was not until 1999 that researchers finally achieved a complete numerical solution to the simplest example of ionization, the collision of a hydrogen atom with an electron. On the opposite scale, cosmologists have long wondered whether the expansion of the Universe, which began with the Big Bang, would ever reverse itself, ending the Universe in a Big Crunch. In 2000, analysis of new measurements of the cosmic microwave background radiation showed that the geometry of the Universe is flat, and thus the Universe will continue expanding forever. Both of these discoveries depended on high performance computer simulations that utilized computational tools included in the Advanced Computational Testing and Simulation (ACTS) Toolkit. The ACTS Toolkit is an umbrella project that brought together a number of general purpose computational tool development projects funded and supported by the U.S. Department of Energy (DOE). These tools, which have been developed independently, mainly at DOE laboratories, make it easier for scientific code developers to write high performance applications for parallel computers. They tackle a number of computational issues that are common to a large number of scientific applications, mainly implementation of numerical algorithms, and support for code development, execution and optimization. The ACTS Toolkit Project enables the use of these tools by a much wider community of computational scientists, and promotes code portability, reusability, reduction of duplicate efforts

  18. Enabling Advanced Modeling and Simulations for Fuel-Flexible Combustors

    SciTech Connect

    Pitsch, Heinz

    2010-05-31

    The overall goal of the present project is to enable advanced modeling and simulations for the design and optimization of fuel-flexible turbine combustors. For this purpose we use a high-fidelity, extensively tested large-eddy simulation (LES) code and state-of-the-art models for premixed/partially-premixed turbulent combustion developed in the PI's group. In the framework of the present project, these techniques are applied, assessed, and improved for hydrogen-enriched premixed and partially premixed gas-turbine combustion. Our innovative approaches include a completely consistent description of flame propagation; a coupled progress variable/level set method to resolve the detailed flame structure; and incorporation of thermal-diffusion (non-unity Lewis number) effects. In addition, we have developed a general flamelet-type transformation holding in the limits of both non-premixed and premixed burning. As a result, a model for partially premixed combustion has been derived. The coupled progress variable/level set method and the general flamelet transformation were validated by LES of a lean-premixed low-swirl burner that has been studied experimentally at Lawrence Berkeley National Laboratory. The model is extended to include the non-unity Lewis number effects, which play a critical role in fuel-flexible combustors with high-hydrogen-content fuel. More specifically, a two-scalar model for lean hydrogen and hydrogen-enriched combustion is developed and validated against experimental and direct numerical simulation (DNS) data. Results are presented to emphasize the importance of non-unity Lewis number effects in the lean-premixed low-swirl burner of interest in this project. The proposed model gives improved results, which shows that the inclusion of the non-unity Lewis number effects is essential for accurate prediction of the lean-premixed low-swirl flame.

  19. Enabling Advanced Modeling and Simulations for Fuel-Flexible Combustors

    SciTech Connect

    Heinz Pitsch

    2010-05-31

    The overall goal of the present project is to enable advanced modeling and simulations for the design and optimization of fuel-flexible turbine combustors. For this purpose we use a high-fidelity, extensively tested large-eddy simulation (LES) code and state-of-the-art models for premixed/partially-premixed turbulent combustion developed in the PI's group. In the framework of the present project, these techniques are applied, assessed, and improved for hydrogen-enriched premixed and partially premixed gas-turbine combustion. Our innovative approaches include a completely consistent description of flame propagation, a coupled progress variable/level set method to resolve the detailed flame structure, and incorporation of thermal-diffusion (non-unity Lewis number) effects. In addition, we have developed a general flamelet-type transformation holding in the limits of both non-premixed and premixed burning. As a result, a model for partially premixed combustion has been derived. The coupled progress variable/level set method and the general flamelet transformation were validated by LES of a lean-premixed low-swirl burner that has been studied experimentally at Lawrence Berkeley National Laboratory. The model is extended to include the non-unity Lewis number effects, which play a critical role in fuel-flexible combustors with high-hydrogen-content fuel. More specifically, a two-scalar model for lean hydrogen and hydrogen-enriched combustion is developed and validated against experimental and direct numerical simulation (DNS) data. Results are presented to emphasize the importance of non-unity Lewis number effects in the lean-premixed low-swirl burner of interest in this project. The proposed model gives improved results, which shows that the inclusion of the non-unity Lewis number effects is essential for accurate prediction of the lean-premixed low-swirl flame.

  20. Linear-scaling density-functional theory with wavelets: challenges and opportunities for petascale and beyond

    NASA Astrophysics Data System (ADS)

    Ratcliff, Laura; Genovese, Luigi; Mohr, Stephan; Deutsch, Thierry

    2015-03-01

    Density-functional theory (DFT) has been used to study a wide range of materials in simulations with a moderate level of parallelism. A common approach divides the electronic orbitals between MPI tasks; however, this limits the number of tasks that can be used for a given system. The most straightforward path to exploiting petascale machines is therefore to increase the size of the system being studied. However, standard implementations of DFT scale cubically with the number of atoms, so the time rapidly increases for large systems. Algorithms must therefore be designed with reduced scaling, such as the linear-scaling approach in BigDFT, which uses an adaptive localized basis set that is itself represented in an underlying wavelet basis set. It thus retains all the benefits of wavelets, such as systematic convergence, while also presenting some new advantages, e.g. the definition of a fragment approach. Nonetheless, as we move towards the exascale, there remain a number of challenges associated both with increasing parallelism and the treatment of large systems. We will outline the algorithms and parallelization used in BigDFT and present some recent results which have been facilitated by this approach, as well as discussing some of the future challenges.

  1. ADVANCED TECHNIQUES FOR RESERVOIR SIMULATION AND MODELING OF NONCONVENTIONAL WELLS

    SciTech Connect

    Louis J. Durlofsky; Khalid Aziz

    2004-08-20

    Nonconventional wells, which include horizontal, deviated, multilateral and ''smart'' wells, offer great potential for the efficient management of oil and gas reservoirs. These wells are able to contact larger regions of the reservoir than conventional wells and can also be used to target isolated hydrocarbon accumulations. The use of nonconventional wells instrumented with downhole inflow control devices allows for even greater flexibility in production. Because nonconventional wells can be very expensive to drill, complete and instrument, it is important to be able to optimize their deployment, which requires the accurate prediction of their performance. However, predictions of nonconventional well performance are often inaccurate. This is likely due to inadequacies in some of the reservoir engineering and reservoir simulation tools used to model and optimize nonconventional well performance. A number of new issues arise in the modeling and optimization of nonconventional wells. For example, the optimal use of downhole inflow control devices has not been addressed for practical problems. In addition, the impact of geological and engineering uncertainty (e.g., valve reliability) has not been previously considered. In order to model and optimize nonconventional wells in different settings, it is essential that the tools be implemented into a general reservoir simulator. This simulator must be sufficiently general and robust and must in addition be linked to a sophisticated well model. Our research under this five year project addressed all of the key areas indicated above. The overall project was divided into three main categories: (1) advanced reservoir simulation techniques for modeling nonconventional wells; (2) improved techniques for computing well productivity (for use in reservoir engineering calculations) and for coupling the well to the simulator (which includes the accurate calculation of well index and the modeling of multiphase flow in the wellbore
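
    One of the quantities mentioned above, the well index used to couple a well to the reservoir simulator, is commonly computed with Peaceman's formula for a vertical well in an anisotropic Cartesian grid block. The sketch below shows that textbook calculation as an illustration only; it is not the project's own well model, and the example values are arbitrary.

        # Standard Peaceman well index for a vertical well completed in a grid block.
        import math

        def peaceman_well_index(kx, ky, dx, dy, h, rw, skin=0.0):
            """Well index WI such that  q = WI * mobility * (p_block - p_well)."""
            r_eq = 0.28 * math.sqrt(math.sqrt(ky / kx) * dx**2 + math.sqrt(kx / ky) * dy**2) \
                   / ((ky / kx) ** 0.25 + (kx / ky) ** 0.25)
            return 2.0 * math.pi * math.sqrt(kx * ky) * h / (math.log(r_eq / rw) + skin)

        # Example in consistent SI units: 50 m x 50 m x 10 m block, 0.1 m wellbore radius
        print(peaceman_well_index(kx=1e-13, ky=1e-13, dx=50.0, dy=50.0, h=10.0, rw=0.1))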

  2. Advanced Simulation Capability for Environmental Management: Development and Demonstrations - 12532

    SciTech Connect

    Freshley, Mark D.; Freedman, Vicky; Gorton, Ian; Hubbard, Susan S.; Moulton, J. David; Dixon, Paul

    2012-07-01

    The U.S. Department of Energy Office of Environmental Management (EM), Technology Innovation and Development is supporting development of the Advanced Simulation Capability for Environmental Management (ASCEM). ASCEM is a state-of-the-art scientific tool and approach for understanding and predicting contaminant fate and transport in natural and engineered systems. The modular and open source high-performance computing tool facilitates integrated approaches to modeling and site characterization that enable robust and standardized assessments of performance and risk for EM cleanup and closure activities. The ASCEM project continues to make significant progress in development of capabilities, which are organized into Platform and Integrated Tool-sets and a High-Performance Computing Multi-process Simulator. The Platform capabilities target a level of functionality to allow end-to-end model development, starting with definition of the conceptual model and management of data for model input. The High-Performance Computing capabilities target increased functionality of process model representations, tool-sets for interaction with Platform, and verification and model confidence testing. The new capabilities are demonstrated through working groups, including one focused on the Hanford Site Deep Vadose Zone. The ASCEM program focused on planning during the first year and executing a prototype tool-set for an early demonstration of individual components. Subsequently, ASCEM has focused on developing and demonstrating an integrated set of capabilities, making progress toward a version of the capabilities that can be used to engage end users. Demonstration of capabilities continues to be implemented through working groups. Three different working groups, one focused on EM problems in the deep vadose zone, another investigating attenuation mechanisms for metals and radionuclides, and a third focusing on waste tank performance assessment, continue to make progress. The project

  3. Modelling and Simulation of the Advanced Plasma Source

    SciTech Connect

    Schroeder, Benjamin; Peter, Ralf; Harhausen, Jens; Ohl, Andreas

    2011-08-15

    Plasma ion assisted-deposition (PIAD) is a combination of conventional thermal evaporation deposition and plasma-beam surface modification; it serves as a well-established technology for the creation of high quality coatings on mirrors, lenses, and other optical devices. It is closely related to ion-assisted deposition to the extent that electrons preserve quasineutrality of the ion beam. This paper investigates the Advanced Plasma Source (APS), a plasma beam source employed for PIAD. A field enhanced glow discharge generates a radially expanding plasma flow with an ion energy of about 80-120 eV. Charge exchange collisions with the neutral background gas (pressure 0.1 Pa and below) produce a cold secondary plasma, which expands as well. A model is developed which describes the primary ions by a simplified Boltzmann equation, the secondary ions by the equations of continuity and momentum balance, and the electrons by the condition of Boltzmann equilibrium. Additionally, quasineutrality is assumed. The model can be reduced to a single nonlinear differential equation for the velocity of the secondary ions, which has several removable singularities and one essential singularity, identified as the Bohm singularity. Solving the model yields macroscopic plasma features, such as fluxes, densities, and the electrical field. An add-on Monte-Carlo simulation is employed to calculate the ion energy distribution function at the substrate. All results compare well to experiments conducted at a commercial APS system.

  4. An Advanced Leakage Scheme for Neutrino Treatment in Astrophysical Simulations

    NASA Astrophysics Data System (ADS)

    Perego, A.; Cabezón, R. M.; Käppeli, R.

    2016-04-01

    We present an Advanced Spectral Leakage (ASL) scheme to model neutrinos in the context of core-collapse supernovae (CCSNe) and compact binary mergers. Based on previous gray leakage schemes, the ASL scheme computes the neutrino cooling rates by interpolating local production and diffusion rates (relevant in optically thin and thick regimes, respectively) separately for discretized values of the neutrino energy. Neutrino trapped components are also modeled, based on equilibrium and timescale arguments. The better accuracy achieved by the spectral treatment allows a more reliable computation of neutrino heating rates in optically thin conditions. The scheme has been calibrated and tested against Boltzmann transport in the context of Newtonian spherically symmetric models of CCSNe. ASL shows a very good qualitative and a partial quantitative agreement for key quantities from collapse to a few hundreds of milliseconds after core bounce. We have proved the adaptability and flexibility of our ASL scheme, coupling it to an axisymmetric Eulerian and to a three-dimensional smoothed particle hydrodynamics code to simulate core collapse. Therefore, the neutrino treatment presented here is ideal for large parameter-space explorations, parametric studies, high-resolution tests, code developments, and long-term modeling of asymmetric configurations, where more detailed neutrino treatments are not available or are currently computationally too expensive.
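
    The central interpolation idea can be illustrated in a few lines: in each neutrino energy bin an effective loss rate is built from the local production rate (optically thin limit) and the diffusion rate (optically thick limit). The harmonic-mean blend and the numbers below are illustrative assumptions only; the actual ASL smoothing functions and trapped-component treatment are more elaborate.

        # Toy spectral-leakage interpolation between production- and diffusion-limited rates.
        import numpy as np

        energy_bins     = np.array([3.0, 9.0, 16.0, 25.0, 40.0])          # MeV, illustrative
        rate_production = np.array([2e33, 8e33, 1.5e34, 9e33, 2e33])      # 1/s per bin (thin limit)
        rate_diffusion  = np.array([5e35, 6e33, 9e32, 1e32, 8e30])        # 1/s per bin (thick limit)

        # Effective rate follows whichever process is slower in each bin.
        rate_effective = (rate_production * rate_diffusion) / (rate_production + rate_diffusion)

        for e, r in zip(energy_bins, rate_effective):
            print(f"{e:5.1f} MeV  ->  effective loss rate {r:.2e} 1/s")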

  5. Modeling and simulation challenges pursued by the Consortium for Advanced Simulation of Light Water Reactors (CASL)

    NASA Astrophysics Data System (ADS)

    Turinsky, Paul J.; Kothe, Douglas B.

    2016-05-01

    The Consortium for the Advanced Simulation of Light Water Reactors (CASL), the first Energy Innovation Hub of the Department of Energy, was established in 2010 with the goal of providing modeling and simulation (M&S) capabilities that support and accelerate the improvement of nuclear energy's economic competitiveness and the reduction of spent nuclear fuel volume per unit energy, and all while assuring nuclear safety. To accomplish this requires advances in M&S capabilities in radiation transport, thermal-hydraulics, fuel performance and corrosion chemistry. To focus CASL's R&D, industry challenge problems have been defined, which equate with long standing issues of the nuclear power industry that M&S can assist in addressing. To date CASL has developed a multi-physics "core simulator" based upon pin-resolved radiation transport and subchannel (within fuel assembly) thermal-hydraulics, capitalizing on the capabilities of high performance computing. CASL's fuel performance M&S capability can also be optionally integrated into the core simulator, yielding a coupled multi-physics capability with untapped predictive potential. Material models have been developed to enhance predictive capabilities of fuel clad creep and growth, along with deeper understanding of zirconium alloy clad oxidation and hydrogen pickup. Understanding of corrosion chemistry (e.g., CRUD formation) has evolved at all scales: micro, meso and macro. CFD R&D has focused on improvement in closure models for subcooled boiling and bubbly flow, and the formulation of robust numerical solution algorithms. For multiphysics integration, several iterative acceleration methods have been assessed, illuminating areas where further research is needed. Finally, uncertainty quantification and data assimilation techniques, based upon sampling approaches, have been made more feasible for practicing nuclear engineers via R&D on dimensional reduction and biased sampling. Industry adoption of CASL's evolving M

  6. Advanced wellbore thermal simulator GEOTEMP2 research report

    SciTech Connect

    Mitchell, R.F.

    1982-02-01

    The development of the GEOTEMP2 wellbore thermal simulator is described. The major technical features include a general purpose air and mist drilling simulator and a two-phase steam flow simulator that can model either injection or production.

  7. Million atom DFT calculations using coarse graining and petascale computing

    NASA Astrophysics Data System (ADS)

    Nicholson, Don; Odbadrakh, Kh.; Samolyuk, G. D.; Stoller, R. E.; Zhang, X. G.; Stocks, G. M.

    2014-03-01

    Researchers performing classical Molecular Dynamics (MD) on defect structures often find it necessary to use millions of atoms in their models. It would be useful to perform density functional calculations on these large configurations in order to observe electron-based properties such as local charge and spin and the Hellmann-Feynman forces on the atoms. The great number of atoms usually requires that a subset be "carved" from the configuration and terminated in a less than satisfactory manner, e.g. free space or inappropriate periodic boundary conditions. Coarse graining based on the Locally Self-consistent Multiple Scattering method (LSMS) and petascale computing can circumvent this problem by treating the whole system but dividing the atoms into two groups. In Coarse Grained LSMS (CG-LSMS) one group of atoms has its charge and scattering determined prescriptively based on neighboring atoms, while the remaining group of atoms has its charge and scattering determined according to DFT as implemented in the LSMS. The method will be demonstrated for a one-million-atom model of a displacement cascade in Fe, for which 24,130 atoms are treated with full DFT and the remaining atoms are treated prescriptively. Work supported as part of the Center for Defect Physics, an Energy Frontier Research Center funded by the U.S. DOE, Office of Science, Basic Energy Sciences; this work used the Oak Ridge Leadership Computing Facility at Oak Ridge National Laboratory, supported by the DOE Office of Science.

  8. Advancements in HWIL simulation at the U.S. Army Aviation and Missile Command

    NASA Astrophysics Data System (ADS)

    Buford, James A., Jr.; Jolly, Alexander C.; Mobley, Scott B.

    1999-07-01

    This paper describes the Advanced Simulation Center (ASC) role, recaps the past year, describes the hardware-in-the-loop (HWIL) components and advancements, and outlines the path ahead for the ASC in terms of both missile and complete-system HWIL simulations and tests, with a focus on imaging infrared systems.

  9. Genome Reshuffling for Advanced Intercross Permutation (GRAIP): Simulation and permutation for advanced intercross population analysis

    SciTech Connect

    Pierce, Jeremy; Broman, Karl; Lu, Lu; Chesler, Elissa J; Zhou, Guomin; Airey, David; Birmingham, Amanda; Williams, Robert

    2008-04-01

    Background: Advanced intercross lines (AIL) are segregating populations created using a multi-generation breeding protocol for fine mapping quantitative trait loci (QTL) in mice and other organisms. Applying QTL mapping methods for intercross and backcross populations, often followed by naïve permutation of individuals and phenotypes, does not account for the effect of AIL family structure, in which final generations have been expanded, and leads to inappropriately low significance thresholds. The critical problem with naïve mapping approaches in AIL populations is that the individual is not an exchangeable unit. Methodology/Principal Findings: The effect of family structure has immediate implications for optimal AIL creation (many crosses, few animals per cross, and population expansion before the final generation) and we discuss these and the utility of AIL populations for QTL fine mapping. We also describe Genome Reshuffling for Advanced Intercross Permutation (GRAIP), a method for analyzing AIL data that accounts for family structure. GRAIP permutes a more interchangeable unit in the final-generation crosses - the parental genome - and simulates regeneration of a permuted AIL population based on exchanged parental identities. GRAIP determines appropriate genome-wide significance thresholds and locus-specific P-values for AILs and other populations with similar family structures. We contrast GRAIP with naïve permutation using a large, densely genotyped mouse AIL population (1333 individuals from 32 crosses). A naïve permutation using coat color as a model phenotype demonstrates high false-positive locus identification and uncertain significance levels, which are corrected using GRAIP. GRAIP also detects an established hippocampus weight locus and a new locus, Hipp9a. Conclusions and Significance: GRAIP determines appropriate genome-wide significance thresholds and locus-specific P-values for AILs and other populations with similar family structures. The effect of
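
    The key point of GRAIP - that the exchangeable unit is the parental cross, not the individual - can be illustrated with a toy permutation test. The sketch below (plain numpy, equal family sizes, a simple correlation statistic, synthetic data) only contrasts an individual-level shuffle with a cross-level shuffle; it does not reproduce the published GRAIP procedure, which permutes parental genomes and re-simulates offspring.

      import numpy as np

      rng = np.random.default_rng(1)

      def max_stat(G, y):
          """Genome-wide maximum absolute marker-phenotype correlation."""
          Gz = (G - G.mean(0)) / (G.std(0) + 1e-12)
          yz = (y - y.mean()) / (y.std() + 1e-12)
          return np.abs(Gz.T @ yz / len(y)).max()

      def perm_threshold(G, y, fam, n_perm=200, alpha=0.05, by_family=True):
          """Null distribution of the genome-wide max statistic.
          by_family=True swaps whole-family phenotype blocks (the cross is the
          exchangeable unit); by_family=False is the naive individual shuffle."""
          fam_ids = np.unique(fam)
          null = np.empty(n_perm)
          for i in range(n_perm):
              if by_family:
                  y_perm = np.empty_like(y)
                  for old, new in zip(fam_ids, rng.permutation(fam_ids)):
                      y_perm[fam == old] = y[fam == new]
              else:
                  y_perm = rng.permutation(y)
              null[i] = max_stat(G, y_perm)
          return np.quantile(null, 1 - alpha)

      # Toy AIL-like data: 32 crosses x 40 offspring, 500 markers (sizes are arbitrary).
      n_fam, fam_size, n_mark = 32, 40, 500
      fam = np.repeat(np.arange(n_fam), fam_size)
      parent_dose = rng.integers(0, 3, size=(n_fam, n_mark)).astype(float)
      G = parent_dose[fam] + rng.normal(scale=0.5, size=(n_fam * fam_size, n_mark))
      y = rng.normal(size=n_fam * fam_size) + rng.normal(size=n_fam)[fam]   # family-structured phenotype

      print("naive threshold:       ", perm_threshold(G, y, fam, by_family=False))
      print("family-aware threshold:", perm_threshold(G, y, fam, by_family=True))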

  10. Advanced Simulation Capability for Environmental Management (ASCEM) Phase II Demonstration

    SciTech Connect

    Freshley, M.; Hubbard, S.; Flach, G.; Freedman, V.; Agarwal, D.; Andre, B.; Bott, Y.; Chen, X.; Davis, J.; Faybishenko, B.; Gorton, I.; Murray, C.; Moulton, D.; Meyer, J.; Rockhold, M.; Shoshani, A.; Steefel, C.; Wainwright, H.; Waichler, S.

    2012-09-28

    In 2009, the National Academy of Sciences (NAS) reviewed and validated the U.S. Department of Energy Office of Environmental Management (EM) Technology Program in its publication, Advice on the Department of Energy's Cleanup Technology Roadmap: Gaps and Bridges. The NAS report outlined prioritization needs for the Groundwater and Soil Remediation Roadmap, concluded that contaminant behavior in the subsurface is poorly understood, and recommended further research in this area as a high priority. To address this concern, the EM Office of Site Restoration began supporting the development of the Advanced Simulation Capability for Environmental Management (ASCEM). ASCEM is a state-of-the-art scientific approach that integrates toolsets for understanding and predicting contaminant fate and transport in natural and engineered systems. The ASCEM modeling toolset is modular and open source. It is divided into three thrust areas: Multi-Process High Performance Computing (HPC), Platform and Integrated Toolsets, and Site Applications. The ASCEM toolsets will facilitate integrated approaches to modeling and site characterization that enable robust and standardized assessments of performance and risk for EM cleanup and closure activities. During fiscal year 2012, the ASCEM project continued to make significant progress in capabilities development. Capability development occurred in both the Platform and Integrated Toolsets and the Multi-Process HPC Simulator areas. The new Platform and Integrated Toolsets capabilities provide the user an interface and the tools necessary for end-to-end model development, including conceptual model definition, data management for model input, model calibration and uncertainty analysis, and model output processing including visualization. The new HPC Simulator capabilities target increased functionality of process model representations, toolsets for interaction with the Platform, and model confidence testing and verification for

  11. Active Storage with Analytics Capabilities and I/O Runtime System for Petascale Systems

    SciTech Connect

    Choudhary, Alok

    2015-03-18

    Computational scientists must understand results from experimental, observational, and computational simulation data to gain insights and perform knowledge discovery. As systems approach the petascale range, problems that were unimaginable a few years ago are within reach. With the increasing volume and complexity of data produced by ultra-scale simulations and high-throughput experiments, understanding the science is largely hampered by the lack of comprehensive I/O, storage, acceleration of data manipulation, analysis, and mining tools. Scientists require techniques, tools and infrastructure to facilitate better understanding of their data, in particular the ability to effectively perform complex data analysis, statistical analysis and knowledge discovery. The goal of this work is to enable more effective analysis of scientific datasets through the integration of enhancements in the I/O stack, from active storage support at the file system layer to the MPI-IO and high-level I/O library layers. We propose to provide software components to accelerate data analytics, mining, I/O, and knowledge discovery for large-scale scientific applications, thereby increasing the productivity of both scientists and the systems. Our approaches include: 1) designing interfaces in high-level I/O libraries, such as parallel netCDF, for applications to activate data mining operations at the lower I/O layers; 2) enhancing MPI-IO runtime systems to incorporate the functionality developed as part of the runtime system design; 3) developing parallel data mining programs as part of the runtime library for the server-side file system in PVFS; and 4) prototyping an active storage cluster, which will utilize multicore CPUs, GPUs, and FPGAs to carry out the data mining workload.
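
    As context for the I/O-stack layering described above, the sketch below shows the kind of collective MPI-IO write (here via plain mpi4py) that the proposed runtime enhancements would sit beneath; the file name, array size and four-rank launch are illustrative assumptions, and none of the project's analytics hooks are shown.

      # Collective MPI-IO write with mpi4py; each rank writes its own block.
      # Run with e.g.: mpiexec -n 4 python write_blocks.py   (hypothetical script name)
      from mpi4py import MPI
      import numpy as np

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()

      local_n = 1_000_000                                   # values owned by this rank (placeholder)
      data = np.full(local_n, rank, dtype=np.float64)

      fh = MPI.File.Open(comm, "analysis_input.dat",
                         MPI.MODE_WRONLY | MPI.MODE_CREATE)
      offset = rank * local_n * data.itemsize               # byte offset of this rank's block
      fh.Write_at_all(offset, data)                         # collective write across all ranks
      fh.Close()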

  12. Simulation Code Development and Its Applications

    NASA Astrophysics Data System (ADS)

    Li, Zenghai

    2015-10-01

    Under the support of the U.S. DOE SciDAC program, SLAC has been developing a suite of 3D parallel finite-element codes aimed at high-accuracy, high-fidelity electromagnetic and beam physics simulations for the design and optimization of next-generation particle accelerators. Running on the latest supercomputers, these codes have made great strides in advancing the state of the art in applied math and computer science at the petascale, enabling the integrated modeling of electromagnetics, self-consistent particle-in-cell (PIC) particle dynamics, as well as thermal, mechanical, and multi-physics effects. This paper presents the latest development and application of ACE3P to a wide range of accelerator projects.

  13. Genome Reshuffling for Advanced Intercross Permutation (GRAIP): Simulation and permutation for advanced intercross population analysis

    SciTech Connect

    Pierce, Jeremy; Broman, Karl; Chesler, Elissa J; Zhou, Guomin; Airey, David; Birmingham, Amanda; Williams, Robert

    2008-01-01

    Abstract Background Advanced intercross lines (AIL) are segregating populations created using a multigeneration breeding protocol for fine mapping complex traits in mice and other organisms. Applying quantitative trait locus (QTL) mapping methods for intercross and backcross populations, often followed by naïve permutation of individuals and phenotypes, does not account for the effect of family structure in AIL populations in which final generations have been expanded, and leads to inappropriately low significance thresholds. The critical problem with a naïve mapping approach in such AIL populations is that the individual is not an exchangeable unit given the family structure. Methodology/Principal Findings The effect of family structure has immediate implications for optimal AIL creation (many crosses, few animals per cross, and population expansion before the final generation) and we discuss these and the utility of AIL populations for QTL fine mapping. We also describe Genome Reshuffling for Advanced Intercross Permutation (GRAIP), a method for analyzing AIL data that accounts for family structure. GRAIP permutes a more interchangeable unit in the final generation crosses - the parental genome - and simulates regeneration of a permuted AIL population based on exchanged parental identities. GRAIP determines appropriate genome-wide significance thresholds and locus-specific P-values for AILs and other populations with similar family structures. We contrast GRAIP with naïve permutation using a large, densely genotyped mouse AIL population (1333 individuals from 32 crosses). A naïve permutation using coat color as a model phenotype demonstrates high false-positive locus identification and uncertain significance levels in our AIL population, which are corrected by use of GRAIP. We also show that GRAIP detects an established hippocampus weight locus and a new locus, Hipp9a. Conclusions and Significance GRAIP determines appropriate genome-wide significance thresholds

  14. Advancement of DOE's EnergyPlus Building Energy Simulation Program

    SciTech Connect

    Gu, Lixing; Shirey, Don; Raustad, Richard; Nigusse, Bereket; Sharma, Chandan; Lawrie, Linda; Strand, Rick; Pedersen, Curt; Fisher, Dan; Lee, Edwin; Witte, Mike; Glazer, Jason; Barnaby, Chip

    2011-09-30

    EnergyPlus™ is a new-generation computer software analysis tool that has been developed, tested, and commercialized to support DOE's Building Technologies (BT) Program in terms of whole-building, component, and systems R&D (http://www.energyplus.gov). It is also being used to support evaluation and decision making regarding zero energy building (ZEB) energy efficiency and supply technologies during new building design and existing building retrofits. The 5-year project was managed by the National Energy Technology Laboratory and was divided into 5 budget periods between 2006 and 2011. During the project period, 11 versions of EnergyPlus were released. This report summarizes work performed by an EnergyPlus development team led by the University of Central Florida's Florida Solar Energy Center (UCF/FSEC). The team members consist of DHL Consulting, C. O. Pedersen Associates, University of Illinois at Urbana-Champaign, Oklahoma State University, GARD Analytics, Inc., and WrightSoft Corporation. The project tasks involved new feature development, testing and validation, user support and training, and general EnergyPlus support. The team developed 146 new features during the 5-year period to advance the EnergyPlus capabilities. Annual contributions of new features are 7 in budget period 1, 19 in period 2, 36 in period 3, 41 in period 4, and 43 in period 5. The testing and validation task focused on running the test suite and publishing reports, developing new IEA test suite cases, testing and validating new source code, addressing change requests, and creating and testing installation packages. The user support and training task provided support for users and interface developers, and organized and taught workshops. The general support task involved upgrading StarTeam (team sharing) software and updating existing utility software. The project met the DOE objectives and completed all tasks successfully. Although the EnergyPlus software was enhanced significantly

  15. Advanced Simulation in Undergraduate Pilot Training: Systems Integration. Final Report (February 1972-March 1975).

    ERIC Educational Resources Information Center

    Larson, D. F.; Terry, C.

    The Advanced Simulator for Undergraduate Pilot Training (ASUPT) was designed to investigate the role of simulation in the future Undergraduate Pilot Training (UPT) program. The problem addressed in this report was one of integrating two unlike components into one synchronized system. These two components were the Basic T-37 Simulators and their…

  16. Probabilistic Photometric Redshifts in the Era of Petascale Astronomy

    SciTech Connect

    Carrasco Kind, Matias

    2014-01-01

    to enable the development of precision cosmology in the era of petascale astronomical surveys.

  17. Using 100G Network Technology in Support of Petascale Science

    NASA Technical Reports Server (NTRS)

    Gary, James P.

    2011-01-01

    NASA, in collaboration with a number of partners, conducted a set of individual experiments and demonstrations during SC10 that collectively were titled "Using 100G Network Technology in Support of Petascale Science". The partners included iCAIR, Internet2, LAC, MAX, National LambdaRail (NLR), NOAA and the SCinet Research Sandbox (SRS), as well as the vendors Ciena, Cisco, ColorChip, cPacket, Extreme Networks, Fusion-io, HP and Panduit, who most generously allowed some of their leading-edge 40G/100G optical transport, Ethernet switch and Internet Protocol router equipment and file server technologies to be involved. The experiments and demonstrations featured different vendor-provided 40G/100G network technology solutions for full-duplex 40G and 100G LAN data flows across SRS-deployed single-mode fiber pairs among the Exhibit Booths of NASA, the National Center for Data Mining, NOAA and the SCinet Network Operations Center, as well as between the NASA Exhibit Booth in New Orleans and the Starlight Communications Exchange facility in Chicago across special SC10-only 80- and 100-Gbps wide-area network links provisioned respectively by NLR and Internet2, then on to GSFC across a 40-Gbps link provisioned by the Mid-Atlantic Crossroads. The networks and vendor equipment were load-stressed by sets of NASA/GSFC High End Computer Network Team-built, relatively inexpensive, net-test workstations that are capable of demonstrating greater than 100-Gbps uni-directional nuttcp-enabled memory-to-memory data transfers, greater than 80-Gbps aggregate bidirectional memory-to-memory data transfers, and near 40-Gbps uni-directional disk-to-disk file copying. This paper summarizes the background context, key accomplishments and some of the significance of these experiments and demonstrations.

  18. Final Report for Enhancing the MPI Programming Model for PetaScale Systems

    SciTech Connect

    Gropp, William Douglas

    2013-07-22

    This project performed research into enhancing the MPI programming model in two ways: developing improved algorithms and implementation strategies, tested and realized in the MPICH implementation, and exploring extensions to the MPI standard to better support PetaScale and ExaScale systems.

  19. Using Simulated Debates to Teach History of Engineering Advances

    ERIC Educational Resources Information Center

    Reynolds, Terry S.

    1976-01-01

    Described is a technique for utilizing debates of past engineering controversies in the classroom as a means of teaching the history of engineering advances. Included is a bibliography for three debate topics relating to important controversies. (SL)

  20. Multi-physics nuclear reactor simulator for advanced nuclear engineering education

    SciTech Connect

    Yamamoto, A.

    2012-07-01

    A multi-physics nuclear reactor simulator, intended for advanced nuclear engineering education, is being introduced at Nagoya University. The simulator consists of a 'macroscopic' physics simulator and a 'microscopic' physics simulator. The former performs real-time simulation of a whole nuclear power plant. The latter is responsible for more detailed numerical simulations based on sophisticated and precise numerical models, while taking into account the plant conditions obtained from the macroscopic physics simulator. Steady-state and kinetics core analyses, fuel mechanical analysis, fluid dynamics analysis, and sub-channel analysis can be carried out in the microscopic physics simulator. Simulation calculations are carried out through a dedicated graphical user interface, and the simulation results, i.e., spatial and temporal behaviors of major plant parameters, are shown graphically. The simulator will provide a bridge between the 'theories' studied in textbooks and the 'physical behaviors' of actual nuclear power plants. (authors)

  1. ADVANCED UTILITY SIMULATION MODEL DESCRIPTION OF MODIFICATIONS TO THE STATE LEVEL MODEL (VERSION 3.0)

    EPA Science Inventory

    The report documents modifications to the state level model portion of the Advanced Utility Simulation Model (AUSM), one of four stationary source emission and control cost forecasting models developed for the National Acid Precipitation Assessment Program (NAPAP). The AUSM model...

  2. Five-dimensional simulation for advanced decision making

    NASA Astrophysics Data System (ADS)

    Lammers, Craig; Steinman, Jeffrey; Valinski, Maria; Roth, Karen

    2009-05-01

    This paper describes the application of a new parallel and distributed modeling and simulation technology known as HyperWarpSpeed to facilitate the decision-making process in a time-critical simulated Command and Control environment. HyperWarpSpeed enables the exploration of multiple decision branches at key decision points within a single simulation execution. Whereas the traditional Monte Carlo approach re-computes the majority of calculations for each run, HyperWarpSpeed shares computations between the parallel behaviors, resulting in run times that are potentially orders of magnitude faster.

  3. Development of Kinetic Mechanisms for Next-Generation Fuels and CFD Simulation of Advanced Combustion Engines

    SciTech Connect

    Pitz, William J.; McNenly, Matt J.; Whitesides, Russell; Mehl, Marco; Killingsworth, Nick J.; Westbrook, Charles K.

    2015-12-17

    Predictive chemical kinetic models are needed to represent next-generation fuel components and their mixtures with conventional gasoline and diesel fuels. These kinetic models will allow the prediction of the effect of alternative fuel blends in CFD simulations of advanced spark-ignition and compression-ignition engines. Enabled by kinetic models, CFD simulations can be used to optimize fuel formulations for advanced combustion engines so that maximum engine efficiency, fossil fuel displacement goals, and low pollutant emission goals can be achieved.

  4. Advanced beam-dynamics simulation tools for RIA.

    SciTech Connect

    Garnett, R. W.; Wangler, T. P.; Billen, J. H.; Qiang, J.; Ryne, R.; Crandall, K. R.; Ostroumov, P.; York, R.; Zhao, Q.; Physics; LANL; LBNL; Tech Source; Michigan State Univ.

    2005-01-01

    We are developing multi-particle beam-dynamics simulation codes for RIA driver-linac simulations extending from the low-energy beam transport (LEBT) line to the end of the linac. These codes run on the NERSC parallel supercomputing platforms at LBNL, which allow us to run simulations with large numbers of macroparticles. The codes have the physics capabilities needed for RIA, including transport and acceleration of multiple-charge-state beams, beam-line elements such as high-voltage platforms within the linac, interdigital accelerating structures, charge-stripper foils, and capabilities for handling the effects of machine errors and other off-normal conditions. This year will mark the end of our project. In this paper we present the status of the work, describe some recent additions to the codes, and show some preliminary simulation results.

  5. [Research advances in soil nitrogen cycling models and their simulation].

    PubMed

    Tang, Guoyong; Huang, Daoyou; Tong, Chengli; Zhang, Wenju; Wu, Jinshui

    2005-11-01

    Nitrogen is one of the nutrients necessary for plants, and also a primary element leading to environmental pollution. Many studies have addressed the contribution of agricultural activities to environmental pollution by nitrogenous compounds, and the focus is how to correctly simulate soil nitrogen cycling processes. In this paper, the primary soil nitrogen cycling processes were reviewed in brief, with 13 cycling models and 6 simulated cycling processes introduced, and the parameterization of the models discussed. PMID:16471369

  6. Advanced Thermal Simulator Testing: Thermal Analysis and Test Results

    SciTech Connect

    Bragg-Sitton, Shannon M.; Dickens, Ricky; Dixon, David; Reid, Robert; Adams, Mike; Davis, Joe

    2008-01-21

    Work at the NASA Marshall Space Flight Center seeks to develop high fidelity, electrically heated thermal simulators that represent fuel elements in a nuclear reactor design to support non-nuclear testing applicable to the potential development of a space nuclear power or propulsion system. Comparison between the fuel pins and thermal simulators is made at the outer fuel clad surface, which corresponds to the outer sheath surface in the thermal simulator. The thermal simulators that are currently being tested correspond to a liquid metal cooled reactor design that could be applied for Lunar surface power. These simulators are designed to meet the geometric and power requirements of a proposed surface power reactor design, accommodate testing of various axial power profiles, and incorporate imbedded instrumentation. This paper reports the results of thermal simulator analysis and testing in a bare element configuration, which does not incorporate active heat removal, and testing in a water-cooled calorimeter designed to mimic the heat removal that would be experienced in a reactor core.

  7. Advanced Thermal Simulator Testing: Thermal Analysis and Test Results

    NASA Technical Reports Server (NTRS)

    Bragg-Sitton, Shannon M.; Dickens, Ricky; Dixon, David; Reid, Robert; Adams, Mike; Davis, Joe

    2008-01-01

    Work at the NASA Marshall Space Flight Center seeks to develop high fidelity, electrically heated thermal simulators that represent fuel elements in a nuclear reactor design to support non-nuclear testing applicable to the development of a space nuclear power or propulsion system. Comparison between the fuel pins and thermal simulators is made at the outer fuel clad surface, which corresponds to the outer sheath surface in the thermal simulator. The thermal simulators that are currently being tested correspond to a SNAP derivative reactor design that could be applied for Lunar surface power. These simulators are designed to meet the geometric and power requirements of a proposed surface power reactor design, accommodate testing of various axial power profiles, and incorporate imbedded instrumentation. This paper reports the results of thermal simulator analysis and testing in a bare element configuration, which does not incorporate active heat removal, and testing in a water-cooled calorimeter designed to mimic the heat removal that would be experienced in a reactor core.

  8. PoPLAR: Portal for Petascale Lifescience Applications and Research

    PubMed Central

    2013-01-01

    Background We are focusing specifically on fast data analysis and retrieval in bioinformatics that will have a direct impact on the quality of human health and the environment. The exponential growth of data generated in biology research, from small atoms to big ecosystems, necessitates an increasingly large computational component to perform analyses. Novel DNA sequencing technologies and complementary high-throughput approaches--such as proteomics, genomics, metabolomics, and meta-genomics--drive data-intensive bioinformatics. While individual research centers or universities could once provide for these applications, this is no longer the case. Today, only specialized national centers can deliver the level of computing resources required to meet the challenges posed by rapid data growth and the resulting computational demand. Consequently, we are developing massively parallel applications to analyze the growing flood of biological data and contribute to the rapid discovery of novel knowledge. Methods The efforts of previous National Science Foundation (NSF) projects provided for the generation of parallel modules for widely used bioinformatics applications on the Kraken supercomputer. We have profiled and optimized the code of some of the scientific community's most widely used desktop and small-cluster-based applications, including BLAST from the National Center for Biotechnology Information (NCBI), HMMER, and MUSCLE; scaled them to tens of thousands of cores on high-performance computing (HPC) architectures; made them robust and portable to next-generation architectures; and incorporated these parallel applications in science gateways with a web-based portal. Results This paper will discuss the various developmental stages, challenges, and solutions involved in taking bioinformatics applications from the desktop to petascale with a front-end portal for very-large-scale data analysis in the life sciences. Conclusions This research will help to bridge the gap

  9. Advanced SAR simulator with multi-beam interferometric capabilities

    NASA Astrophysics Data System (ADS)

    Reppucci, Antonio; Márquez, José; Cazcarra, Victor; Ruffini, Giulio

    2014-10-01

    State-of-the-art simulations are of great interest when designing a new instrument, studying the imaging mechanisms of a given scenario, or designing inversion algorithms, as they allow one to analyze and understand the effects of different instrument configurations and target compositions. In the framework of studies of a new instrument devoted to estimating ocean surface motion using synthetic aperture radar along-track interferometry (SAR-ATI), an end-to-end simulator has been developed. The simulator, built in a highly modular way to allow easy integration of different processing features, handles all the basic operations involved in an end-to-end scenario. This includes the computation of the position and velocity of the platform (airborne/spaceborne) and the geometric parameters defining the SAR scene, the surface definition, the backscattering computation, the atmospheric attenuation, the instrument configuration, and the simulation of the transmission/reception chains and the raw data. In addition, the simulator provides an InSAR processing suite and a sea surface movement retrieval module. Up to four beams (each composed of a monostatic and a bistatic channel) can be activated. Each channel provides raw data and SLC images, with the possibility of choosing between Stripmap and ScanSAR modes. Moreover, the software offers the possibility of radiometric sensitivity analysis and error analysis due to atmospheric disturbances, instrument noise, interferogram phase noise, and platform velocity and attitude variations. In this paper, the architecture and the capabilities of this simulator are presented, along with meaningful simulation examples.

  10. Toward high-fidelity subsonic jet noise prediction using petascale supercomputers

    NASA Astrophysics Data System (ADS)

    Martha, Chandra Sekhar

    The field of jet noise has become one of the most active areas of research due to increasingly stringent aircraft noise regulations. A petascalable noise prediction tool-set based on the large eddy simulation (LES) technique is designed and implemented to improve the fidelity of subsonic jet noise predictions. Such tools are needed to help drive the design of quieter jets. The focus is to target computational performance and improved noise prediction fidelity through better matching of experimental jet conditions and/or inclusion of the nozzle as part of the simulation. A communication-efficient SPIKE solver is used for spatial operations in conjunction with a non-overlapping multi-block topology based on a new concept of superblocks. These two choices have resulted in efficient scalability tested on up to 91,125 processors (or a theoretical speed of ~1 petaflop/s). Other important optimizations include parallel file I/O and data buffering while gathering the acoustics. The noise from a Mach-0.9, isothermal jet is studied without and with a round nozzle. Production runs with up to first-ever one-billion-point simple-block topology grids without the nozzle and 125-million-point multi-block topology grids with the nozzle are performed. A vortex ring is used to excite the shear layers in the cases without the nozzle. The fine-grid simulations with thinner shear layers predicted higher sideline noise levels caused by the vortex ring and hence established the need for nozzle inclusion. The problems of the centerline singularity and the smaller time step size due to cylindrical grids have been addressed. A new, faster method based on a sinc filter is discussed for the time step issue in cylindrical grids. Two approaches are considered for nozzle inclusion: 1) fully resolving the boundary layers at a lower Reynolds number; and 2) using a wall model to model the inner layer at the experimental Reynolds number. The wall-modeled cases exhibited numerical instabilities behind

  11. Advanced simulation of hydroelectric transient process with Comsol/Simulink

    NASA Astrophysics Data System (ADS)

    Li, L.; Yang, J. D.

    2010-08-01

    In the study of hydroelectric systems, research on transient processes and improvements in simulation accuracy are restricted mainly by the precision mismatch between the hydraulic and power system models. Simulink provides a very rich control and automation model library, so electrical and mechanical conditioning control systems can be accurately simulated. However, it can solve only temporal, not spatial, integration problems. For that reason, the hydraulic system model often needs to be simplified in the course of simulating hydroelectric transients. Comsol, partial differential equation (PDE)-based multi-physics finite-element analysis software, can precisely simulate the hydraulic system model. Being developed in the Matlab environment, it can also integrate seamlessly with Simulink. In this paper, based on the individual component models, an integrated hydraulic-mechanical-electric system model is established by embedding Comsol code in a Simulink S-Function. This model helps to study the interaction between the hydraulic system and the electric system, and to analyze the transients of a hydro plant. The calculation results are compared with and analyzed against those of a conventional simulation using Simulink alone.

  12. Numerical Simulations and Optimisation in Forming of Advanced Materials

    NASA Astrophysics Data System (ADS)

    Huétink, J.

    2007-04-01

    With the introduction of new materials such as high-strength steels, metastable steels and fiber-reinforced composites, the need for advanced, physically valid constitutive models arises. Biaxial test equipment is developed and applied for the determination of material data as well as for validation of material models. An adaptive through-thickness integration scheme for plate elements is developed, which improves the accuracy of springback prediction at minimal cost. An optimization strategy is proposed that assists an engineer in modeling an optimization problem.

  13. The Role of Numerical Simulation in Advancing Plasma Propulsion

    NASA Astrophysics Data System (ADS)

    Turchi, P. J.; Mikellides, P. G.; Mikellides, I. G.

    1999-11-01

    Plasma thrusters often involve a complex set of interactions among several distinct physical processes. While each process can yield to a separate mathematical representation, their combination generally requires numerical simulation. We have extended and used the MACH2 code successfully to simulate both self-field and applied-field magnetoplasmadynamic thrusters and, more recently, ablation-fed pulsed plasma microthrusters. MACH2 provides a framework in which to compute 2-1/2 dimensional, unsteady, MHD flows in two-temperature LTE. It couples to several options for electrical circuitry and allows access to both analytic formulas and tabular values for material properties and transport coefficients, including phenomenological models for anomalous transport. Even with all these capabilities, however, successful modeling demands comparison with experiment and with analytic solutions in idealized limits, and careful combination of MACH2 results with separate physical reasoning. Although well understood elsewhere in plasma physics, the strengths and limitations of numerical simulation for plasma propulsion need further discussion.

  14. Advancing botnet modeling techniques for military and security simulations

    NASA Astrophysics Data System (ADS)

    Banks, Sheila B.; Stytz, Martin R.

    2011-06-01

    Simulation environments serve many purposes, but they are only as good as their content. One of the most challenging and pressing areas that calls for improved content is the simulation of bot armies (botnets) and their effects upon networks and computer systems. Botnets are a new type of malware, one that is more powerful and potentially more dangerous than any other type of malware. A botnet's power derives from several capabilities, including the following: 1) the botnet's capability to be controlled and directed throughout all phases of its activity, 2) a command and control structure that grows increasingly sophisticated, and 3) the ability of a bot's software to be updated at any time by the owner of the bot (a person commonly called a bot master or bot herder). Not only is a bot army powerful and agile in its technical capabilities, it can also be extremely large: it can comprise tens of thousands, if not millions, of compromised computers, or it can be as small as a few thousand targeted systems. In all botnets, the members can surreptitiously communicate with each other and with their command and control centers. In sum, these capabilities allow a bot army to execute attacks that are technically sophisticated, difficult to trace, tactically agile, massive, and coordinated. To improve our understanding of their operation and potential, we believe that it is necessary to develop computer security simulations that accurately portray bot army activities, with the goal of including bot army simulations within military simulation environments. In this paper, we investigate issues that arise when simulating bot armies and propose a combination of the biologically inspired MSEIR infection spread model coupled with the jump-diffusion infection spread model to portray botnet propagation.
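
    To make the compartment structure concrete, the sketch below integrates a standard MSEIR system (compartments for passively protected, susceptible, exposed, infectious, and recovered hosts) with SciPy; all rate constants and the population split are invented placeholders, and the jump-diffusion coupling proposed in the paper is not modeled here.

      import numpy as np
      from scipy.integrate import solve_ivp

      def mseir(t, y, delta, beta, sigma, gamma):
          M, S, E, I, R = y
          N = y.sum()
          dM = -delta * M                     # hosts lose passive protection
          dS = delta * M - beta * S * I / N   # susceptible hosts contact infectious ones
          dE = beta * S * I / N - sigma * E   # latent (compromised but not yet active) hosts
          dI = sigma * E - gamma * I          # bots become actively infectious
          dR = gamma * I                      # cleanup / patching removes bots
          return [dM, dS, dE, dI, dR]

      y0 = [1_000, 98_000, 0, 10, 0]          # hypothetical host population split
      rates = (0.1, 0.8, 0.5, 0.2)            # delta, beta, sigma, gamma (placeholders, per day)
      sol = solve_ivp(mseir, (0.0, 60.0), y0, args=rates, max_step=0.5)
      print("peak infectious hosts:", sol.y[3].max())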

  15. Psychometric and Evidentiary Advances, Opportunities, and Challenges for Simulation-Based Assessment

    ERIC Educational Resources Information Center

    Levy, Roy

    2013-01-01

    This article characterizes the advances, opportunities, and challenges for psychometrics of simulation-based assessments through a lens that views assessment as evidentiary reasoning. Simulation-based tasks offer the prospect for student experiences that differ from traditional assessment. Such tasks may be used to support evidentiary arguments…

  16. Advanced Techniques for Reservoir Simulation and Modeling of Non-Conventional Wells

    SciTech Connect

    Durlofsky, Louis J.

    2000-08-28

    This project targets the development of (1) advanced reservoir simulation techniques for modeling non-conventional wells; (2) improved techniques for computing well productivity (for use in reservoir engineering calculations) and well index (for use in simulation models), including the effects of wellbore flow; and (3) accurate approaches to account for heterogeneity in the near-well region.

  17. Software Partitioning Schemes for Advanced Simulation Computer Systems. Final Report.

    ERIC Educational Resources Information Center

    Clymer, S. J.

    Conducted to design software partitioning techniques for use by the Air Force to partition a large flight simulator program for optimal execution on alternative configurations, this study resulted in a mathematical model which defines characteristics for an optimal partition, and a manually demonstrated partitioning algorithm design which…

  18. Recent Advances in Underwater Acoustic Modelling and Simulation

    NASA Astrophysics Data System (ADS)

    ETTER, P. C.

    2001-02-01

    A comprehensive review of international developments in underwater acoustic modelling is used to construct an updated technology baseline containing 107 propagation models, 16 noise models, 17 reverberation models and 25 sonar performance models. This updated technology baseline represents a 30% increase over a previous baseline published in 1996. When executed in higher-level simulations, these models can generate predictive and diagnostic outputs that are useful to acoustical oceanographers or sonar technologists in the analysis of complex systems operating in the undersea environment. Recent modelling developments described in the technical literature suggest two principal areas of application: low-frequency, inverse acoustics in deep water; and high-frequency, bottom-interacting acoustics in coastal regions. Rapid changes in global geopolitics have opened new avenues for collaboration, thereby facilitating the transfer of modelling and simulation technologies among members of the international community. This accelerated technology transfer has created new imperatives for international standards in modelling and simulation architectures. National and international activities to promote interoperability among modelling and simulation efforts in government, industry and academia are reviewed and discussed.

  19. Cross-Cultural Simulation to Advance Student Inquiry

    ERIC Educational Resources Information Center

    Inglis, Sue; Sammon, Sheila; Justice, Christopher; Cuneo, Carl; Miller, Stefania; Rice, James; Roy, Dale; Warry, Wayne

    2004-01-01

    This article reviews how and why the authors have used the cross-cultural simulation BAFA BAFA in a 1st-year social sciences inquiry course on social identity. The article discusses modifications made to Shirts's original script for BAFA BAFA, how the authors conduct the postsimulation debriefing, key aspects of the student-written reflection of…

  20. Advanced Simulation and Computing Co-Design Strategy

    SciTech Connect

    Ang, James A.; Hoang, Thuc T.; Kelly, Suzanne M.; McPherson, Allen; Neely, Rob

    2015-11-01

    This ASC Co-design Strategy lays out the full continuum and components of the co-design process, based on what we have experienced thus far and what we wish to do more in the future to meet the program’s mission of providing high performance computing (HPC) and simulation capabilities for NNSA to carry out its stockpile stewardship responsibility.

  1. Technical advances in molecular simulation since the 1980s.

    PubMed

    Field, Martin J

    2015-09-15

    This review describes how the theory and practice of molecular simulation have evolved since the beginning of the 1980s when the author started his career in this field. The account is of necessity brief and subjective and highlights the changes that the author considers have had significant impact on his research and mode of working. PMID:25772387

  2. Advanced Simulation in Undergraduate Pilot Training (ASUPT) Facility Utilization Plan.

    ERIC Educational Resources Information Center

    Hagin, William V.; Smith, James F.

    The capabilities of a flight simulation research facility located at Williams AFB, Arizona are described. Research philosophy to be applied is discussed. Long range and short range objectives are identified. A time phased plan for long range research accomplishment is described. In addition, some examples of near term research efforts which will…

  3. EarthServer: an Intercontinental Collaboration on Petascale Datacubes

    NASA Astrophysics Data System (ADS)

    Baumann, P.; Rossi, A. P.

    2015-12-01

    With the unprecedented increase of orbital sensor, in-situ measurement, and simulation data, there is a rich yet unleveraged potential for gaining insights by dissecting datasets and rejoining them with other datasets. The goal is to allow users to "ask any question, any time", thereby enabling them to "build their own product on the go". One of the most influential initiatives in Big Geo Data is EarthServer, which has demonstrated new directions for flexible, scalable EO services based on innovative NewSQL technology. Researchers from Europe, the US and, recently, Australia have teamed up to rigorously materialize the concept of the datacube. Such a datacube may have spatial and temporal dimensions (such as a satellite image time series) and may unite an unlimited number of scenes. Independently of whatever efficient data structuring a server network may perform internally, users will always see just a few datacubes they can slice and dice. EarthServer has established client and server technology for such spatio-temporal datacubes. The underlying scalable array engine, rasdaman, enables direct interaction, including 3-D visualization, what-if scenarios, common EO data processing, and general analytics. Services exclusively rely on the open OGC "Big Geo Data" standards suite, the Web Coverage Service (WCS) including the Web Coverage Processing Service (WCPS). Conversely, EarthServer has significantly shaped and advanced the OGC Big Geo Data standards landscape based on the experience gained. Phase 1 of EarthServer has advanced scalable array database technology into 100+ TB services; in phase 2, petabyte datacubes will be built in Europe and Australia to perform ad-hoc querying and merging. Standing between EarthServer phase 1 (2011 through 2014) and phase 2 (2015 through 2018), we present the main results and outline the impact on the international standards landscape; effectively, the Big Geo Data standards established through initiative of
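
    For a flavor of the "slice and dice" access the WCS/WCPS stack offers, the sketch below sends a WCPS query to a rasdaman-style endpoint; the endpoint URL, coverage name, and axis labels are hypothetical, and the request parameters follow common petascope deployments rather than any specific EarthServer service.

      # Illustrative WCPS request: extract one date from a hypothetical datacube
      # as GeoTIFF. Endpoint, coverage name and axis labels are assumptions.
      import requests

      wcps_query = """
      for $c in (SomeSatelliteTimeSeries)
      return encode($c[ansi("2014-06-01"), Lat(40:45), Long(10:15)], "image/tiff")
      """

      resp = requests.get(
          "https://example.org/rasdaman/ows",                  # hypothetical petascope endpoint
          params={"service": "WCS", "version": "2.0.1",
                  "request": "ProcessCoverages", "query": wcps_query},
          timeout=60)
      resp.raise_for_status()
      with open("slice.tif", "wb") as f:
          f.write(resp.content)                                # one time-slice of the datacube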

  4. A Distributed Simulation Facility to Support Human Factors Research in Advanced Air Transportation Technology

    NASA Technical Reports Server (NTRS)

    Amonlirdviman, Keith; Farley, Todd C.; Hansman, R. John, Jr.; Ladik, John F.; Sherer, Dana Z.

    1998-01-01

    A distributed real-time simulation of the civil air traffic environment developed to support human factors research in advanced air transportation technology is presented. The distributed environment is based on a custom simulation architecture designed for simplicity and flexibility in human experiments. Standard Internet protocols are used to create the distributed environment, linking an advanced cockpit simulator, an Air Traffic Control simulator, and a pseudo-aircraft control and simulation management station. The pseudo-aircraft control station also functions as a scenario design tool for coordinating human factors experiments. This station incorporates a pseudo-pilot interface designed to reduce workload for human operators piloting multiple aircraft simultaneously in real time. The application of this distributed simulation facility to support a study of the effect of shared information (via air-ground datalink) on pilot/controller shared situation awareness and re-route negotiation is also presented.

  5. Simulation studies of the impact of advanced observing systems on numerical weather prediction

    NASA Technical Reports Server (NTRS)

    Atlas, R.; Kalnay, E.; Susskind, J.; Reuter, D.; Baker, W. E.; Halem, M.

    1984-01-01

    To study the potential impact of advanced passive sounders and lidar temperature, pressure, humidity, and wind observing systems on large-scale numerical weather prediction, a series of realistic simulation studies is being conducted jointly by the European Centre for Medium-Range Weather Forecasts, the National Meteorological Center, and the Goddard Laboratory for Atmospheric Sciences. The project attempts to avoid the unrealistic character of earlier simulation studies. The previous simulation studies and real-data impact tests are reviewed and the design of the current simulation system is described. Consideration is given to the simulation of observations of space-based sounding systems.

  6. Design tradeoffs in the development of the advanced multispectral simulation test acceptance resource (AMSTAR) HWIL facilities

    NASA Astrophysics Data System (ADS)

    LeSueur, Kenneth G.; Almendinger, Frank J.

    2007-04-01

    The Army's Advanced Multispectral Simulation Test Acceptance Resource (AMSTAR) is a suite of missile Hardware-In-the-Loop (HWIL) simulation / test capabilities designed to support testing from concept through production. This paper presents the design tradeoffs that were conducted in the development of the AMSTAR sensor stimulators and the flight motion simulators. The AMSTAR facility design includes systems to stimulate each of the Millimeter Wave (MMW), Infrared (IR), and Semi-Active Laser (SAL) sensors. The flight motion simulator (FMS) performance was key to the success of the simulation but required many concessions to accommodate the design considerations for the tri-mode stimulation systems.

  7. Interim Service ISDN Satellite (ISIS) simulator development for advanced satellite designs and experiments

    NASA Technical Reports Server (NTRS)

    Pepin, Gerard R.

    1992-01-01

    The simulation development associated with the network models of both the Interim Service Integrated Services Digital Network (ISDN) Satellite (ISIS) and the Full Service ISDN Satellite (FSIS) architectures is documented. The ISIS Network Model design represents satellite systems like the Advanced Communications Technology Satellite (ACTS) orbiting switch. The FSIS architecture, the ultimate aim of this element of the Satellite Communications Applications Research (SCAR) Program, moves all control and switching functions on-board the next generation ISDN communications satellite. The technical and operational parameters for the advanced ISDN communications satellite design will be obtained from the simulation of ISIS and FSIS engineering software models for their major subsystems. Discrete event simulation experiments will be performed with these models using various traffic scenarios, design parameters, and operational procedures. The data from these simulations will be used to determine the engineering parameters for the advanced ISDN communications satellite.

  8. Simulating data processing for an Advanced Ion Mobility Mass Spectrometer

    SciTech Connect

    Chavarría-Miranda, Daniel; Clowers, Brian H.; Anderson, Gordon A.; Belov, Mikhail E.

    2007-11-03

    We have designed and implemented a Cray XD-1-based simulation of data capture and signal processing for an advanced Ion Mobility mass spectrometer (Hadamard transform Ion Mobility). Our simulation is a hybrid application that uses both an FPGA component and a CPU-based software component to simulate Ion Mobility mass spectrometry data processing. The FPGA component includes data capture and accumulation, as well as a more sophisticated deconvolution algorithm based on a PNNL-developed enhancement to standard Hadamard transform Ion Mobility spectrometry. The software portion is in charge of streaming data to the FPGA and collecting results. We expect the computational and memory addressing logic of the FPGA component to be portable to an instrument-attached FPGA board that can be interfaced with a Hadamard transform Ion Mobility mass spectrometer.
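
    The deconvolution step that the FPGA accelerates can be illustrated with the classical Hadamard S-matrix formulation: the recorded signal is a multiplexed combination of arrival-time bins, and the spectrum is recovered by inverting the known gating matrix. The sketch below uses arbitrary sizes, synthetic peaks, and a plain matrix solve; it does not reproduce the PNNL enhancement mentioned in the abstract.

      import numpy as np
      from scipy.linalg import hadamard

      m = 256                                              # Hadamard order (power of two)
      S = ((1 - hadamard(m)[1:, 1:]) // 2).astype(float)   # (m-1) x (m-1) binary S-matrix

      n = m - 1
      true_spectrum = np.zeros(n)
      true_spectrum[[40, 90, 150]] = [5.0, 2.0, 1.0]       # three synthetic arrival-time peaks

      rng = np.random.default_rng(0)
      measured = S @ true_spectrum + rng.normal(scale=0.2, size=n)   # multiplexed, noisy signal

      recovered = np.linalg.solve(S, measured)             # Hadamard-transform deconvolution
      print("strongest recovered bins:", np.sort(np.argsort(recovered)[-3:]))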

  9. Applicability of Randomdec technique to flight simulator for advanced aircraft

    NASA Technical Reports Server (NTRS)

    Reed, R. E., Jr.; Cole, H. A., Jr.

    1975-01-01

    The feasibility of Randomdec analysis to detect certain changes in a flight simulator system is studied. Results show that (1) additional studies are needed to ensure effectiveness; (2) a trade-off exists between development complexity and level of malfunction to be detected; and (3) although the system generally limits the input signals to less than about 5 Hz, higher frequency components in the range of 9 Hz and its harmonics are possible.

  10. Advanced wellbore thermal simulator: GEOTEMP2 user manual

    SciTech Connect

    Mitchell, R.F.

    1982-02-01

    GEOTEMP2 is a wellbore thermal simulator designed for geothermal well drilling and production problems. GEOTEMP2 includes the following features: fully transient heat conduction, wellbore fluid flow options, well completion options, and drilling-production histories. The data input format is given, along with input examples and comments on special features of the input. Ten examples that illustrate all of the flowing options and input options in GEOTEMP2 are included.

  11. ADVANCES IN COMPREHENSIVE GYROKINETIC SIMULATIONS OF TRANSPORT IN TOKAMAKS

    SciTech Connect

    WALTZ RE; CANDY J; HINTON FL; ESTRADA-MILA C; KINSEY JE

    2004-10-01

    A continuum global gyrokinetic code, GYRO, has been developed to comprehensively simulate core turbulent transport in actual experimental profiles and enable direct quantitative comparisons to the experimental transport flows. GYRO not only treats the now-standard ion temperature gradient (ITG) mode turbulence, but also treats trapped and passing electrons with collisions and finite beta, equilibrium ExB shear stabilization, and all in real tokamak geometry. Most importantly, the code operates at finite relative gyroradius (rho*) so as to treat the profile shear stabilization and nonlocal effects which can break gyroBohm scaling. The code operates either in a cyclic flux-tube limit (which allows only gyroBohm scaling) or globally with physical profile variation. Bohm scaling of the DIII-D L-mode has been simulated, with power flows matching experiment within error bars on the ion temperature gradient. Mechanisms for broken gyroBohm scaling, neoclassical ion flows embedded in turbulence, turbulent dynamos and profile corrugations, plasma pinches and impurity flow, and simulations at fixed flow rather than fixed gradient are illustrated and discussed.

  12. Simulation of an advanced techniques of ion propulsion Rocket system

    NASA Astrophysics Data System (ADS)

    Bakkiyaraj, R.

    2016-07-01

    The ion propulsion rocket (IPR) system is expected to become popular with the development of deuterium and argon gas techniques and a hexagonal-shaped magnetohydrodynamic (MHD) generator, because power is generated indirectly from the ionization chamber; the design thrust is 1.2 N at 40 kW of electric power with high efficiency. The proposed work studies MHD power generation through the ionization of deuterium gas and the combination of two gaseous ion species (deuterium ions and argon ions) at the acceleration stage. The IPR consists of three parts: (1) a hexagonal-shaped MHD-based power generator fed by the ionization chamber, (2) an ion accelerator, and (3) an exhaust nozzle. Initially, an energy of around 1312 kJ/mol is required to ionize the deuterium gas. The ionized deuterium gas flows from the RF ionization chamber through the MHD generator to the nozzle with enhanced velocity; a voltage is then generated across the two pairs of electrodes in the MHD section, and thrust is produced by mixing deuterium and argon ions at the acceleration stage. The IPR system has been simulated in MATLAB. Comparison of the simulation results with theoretical and previous results indicates that the proposed method achieves the design thrust at 40 kW of power.

  13. The role of advanced engineering simulation in model-based design

    SciTech Connect

    Hommert, P.J.; Biffle, J.H.

    1995-03-01

    The agile manufacturing paradigm engenders many new concepts and work approaches for manufacturing operations. A technology often invoked in the concept of agility is modeling and simulation. Few would disagree that modeling and simulation holds the potential to substantially reduce the product development cycle and lead to improved product reliability and performance. Advanced engineering simulation can impact manufacturing in three areas: process design, product design, and process control. However, despite that promise, the routine utilization of modeling and simulation by industry within the design process is very limited. Advanced simulation is still used primarily in a troubleshooting mode, examining design or process problems after the fact. Sandia National Laboratories has been engaged in the development of advanced engineering simulation tools for many years and more recently has begun to focus on the application of such models to manufacturing processes important for the defense industry. These efforts involve considerable interaction and cooperative research with US industry. Based upon this experience, this presentation examines the elements that are necessary for advanced engineering simulation to become an integral part of the design process.

  14. New Paradigms for Developing Peta-scalable Codes Workshop - May 3-4, 2004

    SciTech Connect

    Michael Levine

    2005-04-30

    On May 3 & 4, 2004, sixty-two of North America's finest computational scientists gathered in Pittsburgh, Pennsylvania to discuss the future of high-performance computing. Sponsored by the National Science Foundation, the Department of Energy, the Department of Defense and the Hewlett-Packard Corporation, New Methods for Developing Peta-scalable Codes introduced the tools and techniques that will be required to efficiently exploit the next generation of supercomputers. This workshop provided an opportunity for computational scientists to consider parallel programming methods other than the currently prevalent one in which they explicitly and directly manage all parallelism via MPI. Specifically, the question is how best to program the upcoming generation of computer systems that will use massive parallelism and complex memory hierarchies to reach from the terascale into the petascale regime over the next five years. The presentations, by leading computer scientists, focused on languages, runtimes and libraries, tool collections and I/O methods.

  15. Advanced visualization technology for terascale particle accelerator simulations

    SciTech Connect

    Ma, K-L; Schussman, G.; Wilson, B.; Ko, K.; Qiang, J.; Ryne, R.

    2002-11-16

    This paper presents two new hardware-assisted rendering techniques developed for interactive visualization of the terascale data generated from numerical modeling of next generation accelerator designs. The first technique, based on a hybrid rendering approach, makes possible interactive exploration of large-scale particle data from particle beam dynamics modeling. The second technique, based on a compact texture-enhanced representation, exploits the advanced features of commodity graphics cards to achieve perceptually effective visualization of the very dense and complex electromagnetic fields produced from the modeling of reflection and transmission properties of open structures in an accelerator design. Because of the collaborative nature of the overall accelerator modeling project, the visualization technology developed is for both desktop and remote visualization settings. We have tested the techniques using both time-varying particle data sets containing up to one billion particles per time step and electromagnetic field data sets with millions of mesh elements.

  16. Collaboratively Architecting a Scalable and Adaptable Petascale Infrastructure to Support Transdisciplinary Scientific Research for the Australian Earth and Environmental Sciences

    NASA Astrophysics Data System (ADS)

    Wyborn, L. A.; Evans, B. J. K.; Pugh, T.; Lescinsky, D. T.; Foster, C.; Uhlherr, A.

    2014-12-01

    The National Computational Infrastructure (NCI) at the Australian National University (ANU) is a partnership between CSIRO, ANU, Bureau of Meteorology (BoM) and Geoscience Australia. Recent investments in a 1.2 PFlop Supercomputer (Raijin), ~ 20 PB data storage using Lustre filesystems and a 3000 core high performance cloud have created a hybrid platform for higher performance computing and data-intensive science to enable large scale earth and climate systems modelling and analysis. There are > 3000 users actively logging in and > 600 projects on the NCI system. Efficiently scaling and adapting data and software systems to petascale infrastructures requires the collaborative development of an architecture that is designed, programmed and operated to enable users to interactively invoke different forms of in-situ computation over complex and large scale data collections. NCI makes available major and long tail data collections from both the government and research sectors based on six themes: 1) weather, climate and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology and 6) astronomy, bio and social. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. Collections are the operational form for data management and access. Similar data types from individual custodians are managed cohesively. Use of international standards for discovery and interoperability allow complex interactions within and between the collections. This design facilitates a transdisciplinary approach to research and enables a shift from small scale, 'stove-piped' science efforts to large scale, collaborative systems science. This new and complex infrastructure requires a move to shared, globally trusted software frameworks that can be maintained and updated. Workflow engines become essential and need to integrate provenance, versioning, traceability, repeatability

  17. Advanced Simulation Technology to Design Etching Process on CMOS Devices

    NASA Astrophysics Data System (ADS)

    Kuboi, Nobuyuki

    2015-09-01

    Prediction and control of plasma-induced damage is needed to mass-produce high performance CMOS devices. In particular, side-wall (SW) etching with low damage is a key process for the next generation of MOSFETs and FinFETs. To predict and control the damage, we have developed a SiN etching simulation technique for CHxFy/Ar/O2 plasma processes using a three-dimensional (3D) voxel model. This model includes new concepts for the gas transportation in the pattern, detailed surface reactions on the SiN reactive layer divided into several thin slabs and a C-F polymer layer dependent on the H/N ratio, and the use of ``smart voxels''. We successfully predicted etching properties such as the etch rate, polymer layer thickness, and selectivity for Si, SiO2, and SiN films along with process variations, and demonstrated the time-dependent 3D damage distribution during SW etching on MOSFETs and FinFETs. We confirmed that a large amount of Si damage accumulated over time in the source/drain region during the over-etch step, in spite of the existing 15 nm SiO2 layer, and that the Si fin was directly damaged by a large dose of high-energy H during removal of the parasitic fin spacer, leading to Si fin damage to a depth of 14 to 18 nm. By analyzing the results of these simulations and our previous simulations, we found that it is important to carefully control the dose of high-energy H, the incident energy of H, the polymer layer thickness, and the over-etch time, considering the effects of the pattern structure, chamber-wall condition, and wafer open area ratio. In collaboration with Masanaga Fukasawa and Tetsuya Tatsumi, Sony Corporation. We thank Mr. T. Shigetoshi and Mr. T. Kinoshita of Sony Corporation for their assistance with the experiments.

  18. Microwave Processing of Simulated Advanced Nuclear Fuel Pellets

    SciTech Connect

    D.E. Clark; D.C. Folz

    2010-08-29

    Throughout the three-year project funded by the Department of Energy (DOE) and led by Virginia Tech (VT), project tasks were modified by consensus to fit the changing needs of the DOE with respect to developing new inert matrix fuel processing techniques. The focus throughout the project was on the use of microwave energy to sinter fully stabilized zirconia pellets and to evaluate the effectiveness of the techniques that were developed. Additionally, the research team was to propose fundamental concepts for processing radioactive fuels based on the effectiveness of the microwave process in sintering the simulated matrix material.

  19. Advanced flight deck/crew station simulator functional requirements

    NASA Technical Reports Server (NTRS)

    Wall, R. L.; Tate, J. L.; Moss, M. J.

    1980-01-01

    This report documents a study of flight deck/crew system research facility requirements for investigating issues involved with developing systems and procedures for interfacing transport aircraft with air traffic control systems planned for 1985 to 2000. Crew system needs of NASA, the U.S. Air Force, and industry were investigated and reported. A matrix of these is included, as are recommended functional requirements and design criteria for simulation facilities in which to conduct this research. Methods of exploiting the commonality and similarity in facilities are identified, and plans for exploiting this in order to reduce implementation costs and allow efficient transfer of experiments from one facility to another are presented.

  20. ADVANCES IN COMPREHENSIVE GYROKINETIC SIMULATIONS OF TRANSPORT IN TOKAMAKS

    SciTech Connect

    WALTZ,R.E; CANDY,J; HINTON,F.L; ESTRADA-MILA,C; KINSEY,J.E

    2004-10-01

    A continuum global gyrokinetic code GYRO has been developed to comprehensively simulate core turbulent transport in actual experimental profiles and enable direct quantitative comparisons to the experimental transport flows. GYRO not only treats the now standard ion temperature gradient (ITG) mode turbulence, but also treats trapped and passing electrons with collisions and finite {beta}, equilibrium ExB shear stabilization, and all in real tokamak geometry. Most importantly the code operates at finite relative gyroradius ({rho}{sub *}) so as to treat the profile shear stabilization and nonlocal effects which can break gyroBohm scaling. The code operates in either a cyclic flux-tube limit (which allows only gyroBohm scaling) or globally with physical profile variation. Bohm scaling of DIII-D L-mode has been simulated with power flows matching experiment within error bars on the ion temperature gradient. Mechanisms for broken gyroBohm scaling, neoclassical ion flows embedded in turbulence, turbulent dynamos and profile corrugations, are illustrated.
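
    A minimal sketch (not part of GYRO) of the Bohm versus gyroBohm transport estimates the abstract refers to; the relative gyroradius rho_star = rho_s/a sets the ratio chi_gyroBohm/chi_Bohm. All parameter values below are assumed for illustration.

        import math

        E_CHARGE = 1.602e-19   # elementary charge, C
        M_PROTON = 1.673e-27   # proton mass, kg

        def bohm_diffusivity(T_eV, B):
            """Bohm estimate chi_B = T/(16 e B); with T in eV and B in tesla this is T_eV/(16 B) m^2/s."""
            return T_eV / (16.0 * B)

        def gyro_bohm_diffusivity(T_eV, B, a_minor, mass=2.0 * M_PROTON, Z=1):
            """gyroBohm estimate chi_gB = rho_star * chi_B, with rho_star = rho_s / a_minor."""
            c_s = math.sqrt(T_eV * E_CHARGE / mass)      # ion sound speed, m/s
            rho_s = mass * c_s / (Z * E_CHARGE * B)      # ion sound gyroradius, m
            return (rho_s / a_minor) * bohm_diffusivity(T_eV, B)

        if __name__ == "__main__":
            T, B, a = 2000.0, 2.0, 0.6   # eV, tesla, m -- assumed DIII-D-like values
            print(f"chi_Bohm     = {bohm_diffusivity(T, B):.2f} m^2/s")
            print(f"chi_gyroBohm = {gyro_bohm_diffusivity(T, B, a):.4f} m^2/s")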

  1. Langley advanced real-time simulation (ARTS) system

    NASA Technical Reports Server (NTRS)

    Crawford, Daniel J.; Cleveland, Jeff I., II

    1988-01-01

    A system of high-speed digital data networks was developed and installed to support real-time flight simulation at the NASA Langley Research Center. This system, unlike its predecessor, employs intelligence at each network node and uses distributed 10-V signal conversion equipment rather than centralized 100-V equipment. A network switch, which replaces an elaborate system of patch panels, allows the researcher to construct a customized network from the 25 available simulation sites by invoking a computer control statement. The intent of this paper is to provide a coherent functional description of the system. This development required many significant innovations to enhance performance and functionality such as the real-time clock, the network switch, and improvements to the CAMAC network to increase both distances to sites and data rates. The system has been successfully tested at a usable data rate of 24 M. The fiber optic lines allow distances of approximately 1.5 miles from switch to site. Unlike other local networks, CAMAC does not buffer data in blocks. Therefore, time delays in the network are kept below 10 microsec total. This system underwent months of testing and was put into full service in July 1987.

  2. Simulation models and designs for advanced Fischer-Tropsch technology

    SciTech Connect

    Choi, G.N.; Kramer, S.J.; Tam, S.S.

    1995-12-31

    Process designs and economics were developed for three grass-roots indirect Fischer-Tropsch coal liquefaction facilities. A baseline and an alternate upgrading design were developed for a mine-mouth plant located in southern Illinois using Illinois No. 6 coal, and one for a mine-mouth plant located in Wyoming using Powder River Basin coal. The alternate design used close-coupled ZSM-5 reactors to upgrade the vapor stream leaving the Fischer-Tropsch reactor. ASPEN process simulation models were developed for all three designs. These results have been reported previously. In this study, the ASPEN process simulation model was enhanced to improve the vapor/liquid equilibrium calculations for the products leaving the slurry bed Fischer-Tropsch reactors. This significantly improved the predictions for the alternate ZSM-5 upgrading design. Another model was developed for the Wyoming coal case using ZSM-5 upgrading of the Fischer-Tropsch reactor vapors. To date, this is the best indirect coal liquefaction case. Sensitivity studies showed that additional cost reductions are possible.

  3. Advanced simulation of electron heat transport in fusion plasmas

    SciTech Connect

    Lin, Zhihong; Xiao, Y.; Klasky, Scott A; Lofstead, J.

    2009-01-01

    Electron transport in burning plasmas is more important since fusion products first heat electrons. First-principles simulations of electron turbulence are much more challenging due to the multi-scale dynamics of the electron turbulence, and have been made possible by close collaborations between plasma physicists and computational scientists. The GTC simulations of collisionless trapped electron mode (CTEM) turbulence show that the electron heat transport exhibits a gradual transition from Bohm to gyroBohm scaling when the device size is increased. The deviation from the gyroBohm scaling can be induced by large turbulence eddies, turbulence spreading, and non-diffusive transport processes. Analysis of radial correlation function shows that CTEM turbulence eddies are predominantly microscopic but with a significant tail in the mesoscale. A comprehensive analysis of kinetic and fluid time scales shows that zonal flow shearing is the dominant decorrelation mechanism. The mesoscale eddies result from a dynamical process of linear streamers breaking by zonal flows and merging of microscopic eddies. The radial profile of the electron heat conductivity only follows the profile of fluctuation intensity on a global scale, whereas the ion transport tracks more sensitively the local fluctuation intensity. This suggests the existence of a nondiffusive component in the electron heat flux, which arises from the ballistic radial E x B drift of trapped electrons due to a combination of the presence of mesoscale eddies and the weak de-tuning of the toroidal precessional resonance that drives the CTEM instability. On the other hand, the ion radial excursion is not affected by the mesoscale eddies due to a parallel decorrelation, which is not operational for the trapped electrons because of a bounce averaging process associated with the electron fast motion along magnetic field lines. The presence of the nondiffusive component raises question on the applicability of the usual

  4. Advanced Simulation of Electron Heat Transport in Fusion Plasmas

    SciTech Connect

    Lin, Z.; Xiao, Y.; Holod, I.; Zhang, W. L.; Deng, Wenjun; Klasky, Scott A; Lofstead, J.; Kamath, Chandrika; Wichmann, Nathan

    2009-01-01

    Electron transport in burning plasmas is more important since fusion products first heat electrons. First-principles simulations of electron turbulence are much more challenging due to the multi-scale dynamics of the electron turbulence, and have been made possible by close collaborations between plasma physicists and computational scientists. The GTC simulations of collisionless trapped electron mode (CTEM) turbulence show that the electron heat transport exhibits a gradual transition from Bohm to gyroBohm scaling when the device size is increased. The deviation from the gyroBohm scaling can be induced by large turbulence eddies, turbulence spreading, and non-diffusive transport processes. Analysis of radial correlation function shows that CTEM turbulence eddies are predominantly microscopic but with a significant tail in the mesoscale. A comprehensive analysis of kinetic and fluid time scales shows that zonal flow shearing is the dominant decorrelation mechanism. The mesoscale eddies result from a dynamical process of linear streamers breaking by zonal flows and merging of microscopic eddies. The radial profile of the electron heat conductivity only follows the profile of fluctuation intensity on a global scale, whereas the ion transport tracks more sensitively the local fluctuation intensity. This suggests the existence of a nondiffusive component in the electron heat flux, which arises from the ballistic radial E x B drift of trapped electrons due to a combination of the presence of mesoscale eddies and the weak de-tuning of the toroidal precessional resonance that drives the CTEM instability. On the other hand, the ion radial excursion is not affected by the mesoscale eddies due to a parallel decorrelation, which is not operational for the trapped electrons because of a bounce averaging process associated with the electron fast motion along magnetic field lines. The presence of the nondiffusive component raises question on the applicability of the usual
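
    A hedged illustration, separate from the GTC code itself, of how a radial correlation function can be computed from fluctuation data to estimate eddy correlation lengths, the diagnostic used above to characterize eddy size. The synthetic field and grid parameters are assumptions.

        import numpy as np

        def radial_correlation(field):
            """Normalized correlation C(lag) of fluctuations along the radial (first) axis."""
            f = field - field.mean(axis=0, keepdims=True)
            n = f.shape[0]
            corr = np.array([np.mean(f[:n - lag] * f[lag:]) for lag in range(n)])
            return corr / corr[0]

        def correlation_length(corr, dr):
            """Radial lag at which the correlation first drops below 1/e."""
            below = np.where(corr < np.exp(-1.0))[0]
            return below[0] * dr if below.size else float("nan")

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            nr, ntheta, dr = 256, 64, 0.5          # assumed radial cells, poloidal cells, cm
            kernel = np.exp(-0.5 * (np.arange(-15, 16) / 5.0) ** 2)
            raw = rng.standard_normal((nr, ntheta))
            field = np.apply_along_axis(lambda col: np.convolve(col, kernel, mode="same"), 0, raw)
            corr = radial_correlation(field)
            print(f"estimated radial correlation length ~ {correlation_length(corr, dr):.1f} cm")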

  5. Numerical Forming Simulations and Optimisation in Advanced Materials

    NASA Astrophysics Data System (ADS)

    Huétink, J.; van den Boogaard, A. H.; Geijselears, H. J. M.; Meinders, T.

    2007-05-01

    With the introduction of new materials such as high strength steels, metastable steels and fibre reinforced composites, the need for advanced, physically valid constitutive models arises. In finite deformation problems constitutive relations are commonly formulated in terms of the Cauchy stress as a function of the elastic Finger tensor and an objective rate of the Cauchy stress as a function of the rate of deformation tensor. For isotropic material models this is rather straightforward, but for anisotropic material models, including elastic anisotropy as well as plastic anisotropy, this may lead to confusing formulations. It will be shown that it is more convenient to define the constitutive relations in terms of invariant tensors referred to the deformed metric. Experimental results are presented that show new combinations of strain rate and strain path sensitivity. An adaptive through-thickness integration scheme for plate elements is developed, which improves the accuracy of spring back prediction at minimal cost. A procedure is described to automatically compensate the CAD tool shape numerically to obtain the desired product shape. Forming processes need to be optimized for cost saving and product improvement. Until recently, this optimization was done primarily by trial and error in the factory. An optimisation strategy is proposed that assists an engineer in modelling an optimization problem that suits his needs, including an efficient algorithm for solving the problem.

  6. Simulation and ground testing with the Advanced Video Guidance Sensor

    NASA Technical Reports Server (NTRS)

    Howard, Richard T.; Johnston, Albert S.; Bryan, Thomas C.; Book, Michael L.

    2005-01-01

    The Advanced Video Guidance Sensor (AVGS), an active sensor system that provides near-range 6-degree-of-freedom sensor data, has been developed as part of an automatic rendezvous and docking system for the Demonstration of Autonomous Rendezvous Technology (DART). The sensor determines the relative positions and attitudes between the active sensor and the passive target at ranges up to 300 meters. The AVGS uses laser diodes to illuminate retro-reflectors in the target, a solid-state imager to detect the light returned from the target, and image capture electronics and a digital signal processor to convert the video information into the relative positions and attitudes. The development of the sensor, through initial prototypes, final prototypes, and three flight units, has required a great deal of testing at every phase, and the different types of testing, their effectiveness, and their results, are presented in this paper, focusing on the testing of the flight units. Testing has improved the sensor's performance.

  7. Advanced wellbore thermal simulator GEOTEMP2 user manual

    SciTech Connect

    Mondy, L.A.; Duda, L.E.

    1984-11-01

    GEOTEMP2 is a wellbore thermal simulator computer code designed for geothermal drilling and production applications. The code treats natural and forced convection and conduction within the wellbore and heat conduction within the surrounding rock matrix. A variety of well operations can be modeled, including injection, production, forward and reverse circulation with gas or liquid, gas or liquid drilling, and two-phase steam injection and production. Well completion with several different casing sizes and cement intervals can be modeled. The code allows variables such as flow rate to change with time, enabling a realistic treatment of well operations. This user manual describes the input required to properly operate the code. Ten sample problems are included which illustrate all the code options. Complete listings of the code and the output of each sample problem are provided.
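
    A hedged sketch, far simpler than GEOTEMP2 itself, of the rock-side physics mentioned above: explicit finite differences for transient radial heat conduction away from a wellbore held at a fixed temperature. All property values and dimensions are assumptions.

        import numpy as np

        def rock_conduction(t_well=150.0, t_rock=40.0, r_well=0.1, r_outer=10.0,
                            nr=200, alpha=1.0e-6, t_end=10 * 86400.0):
            """Rock temperature profile T(r) after t_end seconds with the wellbore wall held at t_well (degC)."""
            r = np.linspace(r_well, r_outer, nr)
            dr = r[1] - r[0]
            dt = 0.4 * dr**2 / alpha                     # explicit stability limit with margin
            T = np.full(nr, t_rock)
            T[0] = t_well
            for _ in range(int(t_end / dt)):
                # radial heat equation: dT/dt = alpha * (d2T/dr2 + (1/r) dT/dr)
                d2 = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dr**2
                d1 = (T[2:] - T[:-2]) / (2.0 * dr)
                T[1:-1] += dt * alpha * (d2 + d1 / r[1:-1])
                T[0], T[-1] = t_well, t_rock             # fixed boundary temperatures
            return r, T

        if __name__ == "__main__":
            r, T = rock_conduction()
            print(f"rock temperature 1 m from the well after 10 days ~ {np.interp(1.0, r, T):.1f} degC")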

  8. Advances in free-energy-based simulations of protein folding and ligand binding.

    PubMed

    Perez, Alberto; Morrone, Joseph A; Simmerling, Carlos; Dill, Ken A

    2016-02-01

    Free-energy-based simulations are increasingly providing the narratives about the structures, dynamics and biological mechanisms that constitute the fabric of protein science. Here, we review two recent successes. It is becoming practical: first, to fold small proteins with free-energy methods without knowing substructures and second, to compute ligand-protein binding affinities, not just their binding poses. Over the past 40 years, the timescales that can be simulated by atomistic MD are doubling every 1.3 years--which is faster than Moore's law. Thus, these advances are not simply due to the availability of faster computers. Force fields, solvation models and simulation methodology have kept pace with computing advancements, and are now quite good. At the tip of the spear recently are GPU-based computing, improved fast-solvation methods, continued advances in force fields, and conformational sampling methods that harness external information. PMID:26773233

  9. Using CONFIG for Simulation of Operation of Water Recovery Subsystems for Advanced Control Software Evaluation

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Flores, Luis; Fleming, Land; Throop, Daiv

    2002-01-01

    A hybrid discrete/continuous simulation tool, CONFIG, has been developed to support evaluation of the operability of life support systems. CONFIG simulates operations scenarios in which flows and pressures change continuously while system reconfigurations occur as discrete events. In simulations, intelligent control software can interact dynamically with hardware system models. CONFIG simulations have been used to evaluate control software and intelligent agents for automating life support systems operations. A CONFIG model of an advanced biological water recovery system has been developed to interact with intelligent control software that is being used in a water system test at NASA Johnson Space Center.
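
    A hedged sketch of the general hybrid discrete/continuous pattern described above (not CONFIG itself): a continuously integrated tank level is punctuated by discrete valve reconfiguration events. The names, flows and event schedule are illustrative assumptions.

        def simulate(t_end=100.0, dt=0.1):
            """Tank level driven by continuous flows, reconfigured by discrete valve events."""
            level = 50.0                                  # liters, continuous state
            inflow, outflow = 0.8, 0.0                    # L/s
            events = [(20.0, "open_drain"), (60.0, "close_drain")]   # (time s, event) schedule
            t, history = 0.0, []
            while t < t_end:
                while events and events[0][0] <= t:       # handle discrete events due at this instant
                    _, event = events.pop(0)
                    outflow = 1.5 if event == "open_drain" else 0.0
                level = max(0.0, level + (inflow - outflow) * dt)   # explicit Euler step of the continuous state
                history.append((t, level))
                t += dt
            return history

        if __name__ == "__main__":
            for t, level in simulate()[::100]:            # print every 10 s
                print(f"t = {t:6.1f} s   level = {level:6.2f} L")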

  10. Design and Test of Advanced Thermal Simulators for an Alkali Metal-Cooled Reactor Simulator

    NASA Technical Reports Server (NTRS)

    Garber, Anne E.; Dickens, Ricky E.

    2011-01-01

    The Early Flight Fission Test Facility (EFF-TF) at NASA Marshall Space Flight Center (MSFC) has as one of its primary missions the development and testing of fission reactor simulators for space applications. A key component in these simulated reactors is the thermal simulator, designed to closely mimic the form and function of a nuclear fuel pin using electric heating. Continuing effort has been made to design simple, robust, inexpensive thermal simulators that closely match the steady-state and transient performance of a nuclear fuel pin. A series of these simulators have been designed, developed, fabricated and tested individually and in a number of simulated reactor systems at the EFF-TF. The purpose of the thermal simulators developed under the Fission Surface Power (FSP) task is to ensure that non-nuclear testing can be performed at sufficiently high fidelity to allow a cost-effective qualification and acceptance strategy to be used. Prototype thermal simulator design is founded on the baseline Fission Surface Power reactor design. Recent efforts have been focused on the design, fabrication and test of a prototype thermal simulator appropriate for use in the Technology Demonstration Unit (TDU). While designing the thermal simulators described in this paper, efforts were made to improve the axial power profile matching of the thermal simulators. Simultaneously, a search was conducted for graphite materials with higher resistivities than had been employed in the past. The combination of these two efforts resulted in the creation of thermal simulators with power capacities of 2300-3300 W per unit. Six of these elements were installed in a simulated core and tested in the alkali metal-cooled Fission Surface Power Primary Test Circuit (FSP-PTC) at a variety of liquid metal flow rates and temperatures. This paper documents the design of the thermal simulators, test program, and test results.

  11. Methodological advances: using greenhouses to simulate climate change scenarios.

    PubMed

    Morales, F; Pascual, I; Sánchez-Díaz, M; Aguirreolea, J; Irigoyen, J J; Goicoechea, N; Antolín, M C; Oyarzun, M; Urdiain, A

    2014-09-01

    Human activities are increasing atmospheric CO2 concentration and temperature. Related to this global warming, periods of low water availability are also expected to increase. Thus, CO2 concentration, temperature and water availability are three of the main factors related to climate change that potentially may influence crops and ecosystems. In this report, we describe the use of growth chamber - greenhouses (GCG) and temperature gradient greenhouses (TGG) to simulate climate change scenarios and to investigate possible plant responses. In the GCG, CO2 concentration, temperature and water availability are set to act simultaneously, enabling comparison of a current situation with a future one. Other characteristics of the GCG are a relative large space of work, fine control of the relative humidity, plant fertirrigation and the possibility of light supplementation, within the photosynthetic active radiation (PAR) region and/or with ultraviolet-B (UV-B) light. In the TGG, the three above-mentioned factors can act independently or in interaction, enabling more mechanistic studies aimed to elucidate the limiting factor(s) responsible for a given plant response. Examples of experiments, including some aimed to study photosynthetic acclimation, a phenomenon that leads to decreased photosynthetic capacity under long-term exposures to elevated CO2, using GCG and TGG are reported. PMID:25113448

  12. Advanced numerical methods and software approaches for semiconductor device simulation

    SciTech Connect

    CAREY,GRAHAM F.; PARDHANANI,A.L.; BOVA,STEVEN W.

    2000-03-23

    In this article the authors concisely present several modern strategies that are applicable to drift-dominated carrier transport in higher-order deterministic models such as the drift-diffusion, hydrodynamic, and quantum hydrodynamic systems. The approaches include extensions of upwind and artificial dissipation schemes, generalization of the traditional Scharfetter-Gummel approach, Petrov-Galerkin and streamline-upwind Petrov Galerkin (SUPG), entropy variables, transformations, least-squares mixed methods and other stabilized Galerkin schemes such as Galerkin least squares and discontinuous Galerkin schemes. The treatment is representative rather than an exhaustive review and several schemes are mentioned only briefly with appropriate reference to the literature. Some of the methods have been applied to the semiconductor device problem while others are still in the early stages of development for this class of applications. They have included numerical examples from the recent research tests with some of the methods. A second aspect of the work deals with algorithms that employ unstructured grids in conjunction with adaptive refinement strategies. The full benefits of such approaches have not yet been developed in this application area and they emphasize the need for further work on analysis, data structures and software to support adaptivity. Finally, they briefly consider some aspects of software frameworks. These include dial-an-operator approaches such as that used in the industrial simulator PROPHET, and object-oriented software support such as those in the SANDIA National Laboratory framework SIERRA.
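
    A hedged sketch of the classical Scharfetter-Gummel flux mentioned above (not the authors' generalized schemes): the exponentially fitted electron flux between two nodes of a 1-D drift-diffusion discretization. The numerical values in the example are assumptions.

        import math

        def bernoulli(x):
            """B(x) = x / (exp(x) - 1), with the removable singularity at x = 0 handled."""
            return 1.0 if abs(x) < 1e-12 else x / math.expm1(x)

        def sg_electron_flux(n_left, n_right, v_left, v_right, h, D, Vt=0.0259):
            """Scharfetter-Gummel electron flux between two adjacent nodes.

            n_* : electron densities (cm^-3), v_* : potentials (V),
            h : mesh spacing (cm), D : diffusivity (cm^2/s), Vt : thermal voltage (V).
            """
            dv = (v_right - v_left) / Vt
            return (D / h) * (bernoulli(dv) * n_right - bernoulli(-dv) * n_left)

        if __name__ == "__main__":
            # assumed example values: a 10 nm cell with a 0.1 V potential drop
            flux = sg_electron_flux(n_left=1e17, n_right=1e15,
                                    v_left=0.0, v_right=0.1, h=1e-6, D=36.0)
            print(f"Scharfetter-Gummel electron flux ~ {flux:.3e} cm^-2 s^-1")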

  13. Advanced Numerical Methods and Software Approaches for Semiconductor Device Simulation

    DOE PAGES Beta

    Carey, Graham F.; Pardhanani, A. L.; Bova, S. W.

    2000-01-01

    In this article we concisely present several modern strategies that are applicable to drift-dominated carrier transport in higher-order deterministic models such as the drift-diffusion, hydrodynamic, and quantum hydrodynamic systems. The approaches include extensions of “upwind” and artificial dissipation schemes, generalization of the traditional Scharfetter-Gummel approach, Petrov-Galerkin and streamline-upwind Petrov Galerkin (SUPG), “entropy” variables, transformations, least-squares mixed methods and other stabilized Galerkin schemes such as Galerkin least squares and discontinuous Galerkin schemes. The treatment is representative rather than an exhaustive review and several schemes are mentioned only briefly with appropriate reference to the literature. Some of the methods have been applied to the semiconductor device problem while others are still in the early stages of development for this class of applications. We have included numerical examples from our recent research tests with some of the methods. A second aspect of the work deals with algorithms that employ unstructured grids in conjunction with adaptive refinement strategies. The full benefits of such approaches have not yet been developed in this application area and we emphasize the need for further work on analysis, data structures and software to support adaptivity. Finally, we briefly consider some aspects of software frameworks. These include dial-an-operator approaches such as that used in the industrial simulator PROPHET, and object-oriented software support such as those in the SANDIA National Laboratory framework SIERRA.

  14. UCLA Final Technical Report for the "Community Petascale Project for Accelerator Science and Simulation”.

    SciTech Connect

    Mori, Warren

    2015-08-14

    The UCLA Plasma Simulation Group is a major partner of the “Community Petascale Project for Accelerator Science and Simulation”. This is the final technical report. We include an overall summary, a list of publications, progress for the most recent year, and individual progress reports for each year. We have made tremendous progress during the three years. SciDAC funds have contributed to the development of a large number of skeleton codes that illustrate how to write PIC codes with a hierarchy of parallelism. These codes cover 2D and 3D as well as electrostatic solvers (which are used in beam dynamics codes and quasi-static codes) and electromagnetic solvers (which are used in plasma based accelerator codes). We also used these ideas to develop a GPU enabled version of OSIRIS. SciDAC funds also contributed to the development of strategies to eliminate the Numerical Cerenkov Instability (NCI), which is an issue when carrying out laser wakefield accelerator (LWFA) simulations in a boosted frame and when quantifying the emittance and energy spread of self-injected electron beams. This work included the development of a new code called UPIC-EMMA, which is an FFT-based electromagnetic PIC code, and of new hybrid algorithms in OSIRIS. A new hybrid (PIC in r-z and gridless in φ) algorithm was implemented into OSIRIS. In this algorithm the fields and current are expanded into azimuthal harmonics and the complex amplitude for each harmonic is calculated separately. The contributions from each harmonic are summed and then used to push the particles. This algorithm permits modeling plasma based acceleration with some 3D effects but with the computational load of a 2D r-z PIC code. We developed a rigorously charge conserving current deposit for this algorithm. Very recently, we made progress in combining the speed up from the quasi-3D algorithm with that from the Lorentz boosted frame. SciDAC funds also contributed to the improvement and speed up of the quasi-static PIC
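
    A hedged sketch, not OSIRIS code, of the azimuthal-harmonic idea behind the quasi-3D algorithm described above: a field sampled on an (r, phi) grid is expanded into a few complex harmonics, and the truncated expansion is evaluated back at arbitrary angles. The grid sizes and test field are assumptions.

        import numpy as np

        def azimuthal_modes(field_r_phi, n_modes):
            """Complex amplitudes F_m(r), m = 0..n_modes-1, from samples F(r, phi)."""
            return np.fft.fft(field_r_phi, axis=1)[:, :n_modes] / field_r_phi.shape[1]

        def evaluate(modes, phi):
            """Real field at angle phi from the truncated harmonic expansion."""
            m = np.arange(modes.shape[1])
            weights = np.where(m == 0, 1.0, 2.0)     # m > 0 modes carry their conjugate partners
            return np.real(modes * weights * np.exp(1j * m * phi)).sum(axis=1)

        if __name__ == "__main__":
            nr, nphi = 64, 128
            r = np.linspace(0.0, 1.0, nr)[:, None]
            phi = np.linspace(0.0, 2.0 * np.pi, nphi, endpoint=False)[None, :]
            field = r * (1.0 + 0.3 * np.cos(phi) + 0.1 * np.cos(2.0 * phi))   # test field with m = 0, 1, 2
            modes = azimuthal_modes(field, n_modes=3)
            exact = (1.0 + 0.3 * np.cos(0.7) + 0.1 * np.cos(1.4)) * r[:, 0]
            print("max reconstruction error at phi = 0.7 rad:",
                  np.max(np.abs(evaluate(modes, 0.7) - exact)))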

  15. Preliminary simulation of an advanced, hingless rotor XV-15 tilt-rotor aircraft

    NASA Technical Reports Server (NTRS)

    Mcveigh, M. A.

    1976-01-01

    The feasibility of the tilt-rotor concept was verified through investigation of the performance, stability and handling qualities of the XV-15 tilt rotor. The rotors were replaced by advanced-technology fiberglass/composite hingless rotors of larger diameter, combined with an advanced integrated fly-by-wire control system. A parametric simulation model of the HRXV-15 was developed, and the model was used to define acceptable preliminary ranges of primary and secondary control schedules as functions of the flight parameters, to evaluate performance, flying qualities and structural loads, and to have a Boeing-Vertol pilot conduct a simulated flight test evaluation of the aircraft.

  16. State of the Art Assessment of Simulation in Advanced Materials Development

    NASA Technical Reports Server (NTRS)

    Wise, Kristopher E.

    2008-01-01

    Advances in both the underlying theory and in the practical implementation of molecular modeling techniques have increased their value in the advanced materials development process. The objective is to accelerate the maturation of emerging materials by tightly integrating modeling with the other critical processes: synthesis, processing, and characterization. The aims of this report are to summarize the state of the art of existing modeling tools and to highlight a number of areas in which additional development is required. In an effort to maintain focus and limit length, this survey is restricted to classical simulation techniques including molecular dynamics and Monte Carlo simulations.

  17. Advanced Method to Estimate Fuel Slosh Simulation Parameters

    NASA Technical Reports Server (NTRS)

    Schlee, Keith; Gangadharan, Sathya; Ristow, James; Sudermann, James; Walker, Charles; Hubert, Carl

    2005-01-01

    The nutation (wobble) of a spinning spacecraft in the presence of energy dissipation is a well-known problem in dynamics and is of particular concern for space missions. The nutation of a spacecraft spinning about its minor axis typically grows exponentially and the rate of growth is characterized by the Nutation Time Constant (NTC). For launch vehicles using spin-stabilized upper stages, fuel slosh in the spacecraft propellant tanks is usually the primary source of energy dissipation. For analytical prediction of the NTC this fuel slosh is commonly modeled using simple mechanical analogies such as pendulums or rigid rotors coupled to the spacecraft. Identifying model parameter values which adequately represent the sloshing dynamics is the most important step in obtaining an accurate NTC estimate. Analytic determination of the slosh model parameters has met with mixed success and is made even more difficult by the introduction of propellant management devices and elastomeric diaphragms. By subjecting full-sized fuel tanks with actual flight fuel loads to motion similar to that experienced in flight and measuring the forces experienced by the tanks these parameters can be determined experimentally. Currently, the identification of the model parameters is a laborious trial-and-error process in which the equations of motion for the mechanical analog are hand-derived, evaluated, and their results are compared with the experimental results. The proposed research is an effort to automate the process of identifying the parameters of the slosh model using a MATLAB/SimMechanics-based computer simulation of the experimental setup. Different parameter estimation and optimization approaches are evaluated and compared in order to arrive at a reliable and effective parameter identification process. To evaluate each parameter identification approach, a simple one-degree-of-freedom pendulum experiment is constructed and motion is induced using an electric motor. By applying the
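
    A hedged sketch of the parameter-identification idea described above (not the authors' MATLAB/SimMechanics setup): a damped-pendulum analog is fitted to a synthetic "measured" force history by nonlinear least squares, recovering the slosh frequency and damping ratio. The data, bounds and initial guesses are assumptions.

        import numpy as np
        from scipy.optimize import curve_fit

        def pendulum_force(t, amplitude, freq_hz, zeta, phase):
            """Lateral force of a lightly damped pendulum analog (decaying sinusoid)."""
            wn = 2.0 * np.pi * freq_hz
            wd = wn * np.sqrt(1.0 - zeta**2)
            return amplitude * np.exp(-zeta * wn * t) * np.cos(wd * t + phase)

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            t = np.linspace(0.0, 20.0, 2000)
            # synthetic "measurement": true parameters plus sensor noise
            measured = pendulum_force(t, 4.0, 0.8, 0.03, 0.2) + 0.1 * rng.standard_normal(t.size)
            popt, _ = curve_fit(pendulum_force, t, measured,
                                p0=[3.0, 1.0, 0.05, 0.0],
                                bounds=([0.0, 0.0, 0.0, -np.pi], [10.0, 5.0, 0.99, np.pi]))
            for name, value in zip(["amplitude (N)", "frequency (Hz)", "damping ratio", "phase (rad)"], popt):
                print(f"{name:15s} = {value:.3f}")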

  18. Advanced Computer Simulations Of Nanomaterials And Stochastic Biological Processes

    NASA Astrophysics Data System (ADS)

    Minakova, Maria S.

    This dissertation consists of several parts. The first two chapters are devoted to the study of dynamic processes in cellular organelles called filopodia. A stochastic kinetics approach is used to describe non-equilibrium evolution of the filopodial system from nano- to micro scales. Dynamic coupling between chemistry and mechanics is also taken into account in order to investigate the influence of focal adhesions on cell motility. The second chapter explores the possibilities and effects of motor enhanced delivery of actin monomers to the polymerizing tips of filopodia, and how the steady-state filopodial length can exceed the limit set by pure diffusion. Finally, we also challenge the currently existing view of active transport and propose a new theoretical model that accurately describes the motor dynamics and concentration profiles seen in experiments in a physically meaningful way. The third chapter is a result of collaboration between three laboratories, as a part of the Energy Frontier Research Center at the University of North Carolina at Chapel Hill. The work presented here unified the fields of synthetic chemistry, photochemistry, and computational physical chemistry in order to investigate a novel bio-synthetic compound and its energy transfer capabilities. This particular peptide-based design has never been studied via Molecular Dynamics with high precision, and it is the first attempt known to us to simulate the whole chromophore-peptide complex in solution in order to gain detailed information about its structural and dynamic features. The fourth chapter deals with the non-equilibrium relaxation induced transport of water molecules in a microemulsion. This problem required a different set of methodologies and a more detailed, all-atomistic treatment of the system. We found interesting water clustering effects and elucidated the most probable mechanism of water transfer through oil under the condition of saturated Langmuir monolayers. Together these

  19. Iron Resources and Oceanic Nutrients: Advancement of Global Environment Simulations

    NASA Astrophysics Data System (ADS)

    Debaar, H. J.

    2002-12-01

    simulated. An existing plankton ecosystem model already well predicts limitation by four nutrients (N, P, Si, Fe) of two algal groups (diatoms and nanoplankton) including export and CO2 air/sea exchange. This is being expanded with 3 other groups of algae and DMS(P) pathways. Next, this extended ecosystem model is being simplified while maintaining reliable output for export and CO2/DMS gas exchange. This unit will then be put into two existing OBCM's. Inputs of Fe from above and below into the oceans have been modeled. Moreover, a simple global Fe cycling model has been verified versus field data and insights. Two different OBCM's with the same upper ocean ecosystem/DMS unit and Fe cycling will be verified versus pre-industrial and present conditions. Next, climate change scenarios, notably changes in Fe inputs, will be run, with special attention to climatic feedbacks (warming) on the oceanic cycles and fluxes.

  20. Proposing "the burns suite" as a novel simulation tool for advancing the delivery of burns education.

    PubMed

    Sadideen, Hazim; Wilson, David; Moiemen, Naiem; Kneebone, Roger

    2014-01-01

    Educational theory highlights the importance of contextualized simulation for effective learning. We explored this concept in a burns scenario in a novel, low-cost, high-fidelity, portable, immersive simulation environment (referred to as distributed simulation). This contextualized simulation/distributed simulation combination was named "The Burns Suite" (TBS). A pediatric burn resuscitation scenario was selected after high trainee demand. It was designed on Advanced Trauma and Life Support and Emergency Management of Severe Burns principles and refined using expert opinion through cognitive task analysis. TBS contained "realism" props, briefed nurses, and a simulated patient. Novices and experts were recruited. Five-point Likert-type questionnaires were developed for face and content validity. Cronbach's α was calculated for scale reliability. Semistructured interviews captured responses for qualitative thematic analysis allowing for data triangulation. Twelve participants completed TBS scenario. Mean face and content validity ratings were high (4.6 and 4.5, respectively; range, 4-5). The internal consistency of questions was high. Qualitative data analysis revealed that participants felt 1) the experience was "real" and they were "able to behave as if in a real resuscitation environment," and 2) TBS "addressed what Advanced Trauma and Life Support and Emergency Management of Severe Burns didn't" (including the efficacy of incorporating nontechnical skills). TBS provides a novel, effective simulation tool to significantly advance the delivery of burns education. Recreating clinical challenge is crucial to optimize simulation training. This low-cost approach also has major implications for surgical education, particularly during increasing financial austerity. Alternative scenarios and/or procedures can be recreated within TBS, providing a diverse educational immersive simulation experience. PMID:23877145

  1. WinSRFR: Current Advances in Software for Surface Irrigation Simulation and Analysis

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Significant advances have been made over the last decade in the development of software for surface irrigation analysis. WinSRFR is an integrated tool that combines unsteady flow simulation with tools for system evaluation/parameter estimation, system design, and for operational optimization. Ongoi...

  2. Battery Performance of ADEOS (Advanced Earth Observing Satellite) and Ground Simulation Test Results

    NASA Technical Reports Server (NTRS)

    Koga, K.; Suzuki, Y.; Kuwajima, S.; Kusawake, H.

    1997-01-01

    The Advanced Earth Observing Satellite (ADEOS) was developed with the aim of establishing platform technology for future spacecraft and inter-orbit communication technology for the transmission of earth observation data. ADEOS uses 5 batteries, consisting of two packs. This paper describes, using graphs and tables, the ground simulation tests and results that were carried out to determine the performance of the ADEOS batteries.

  3. ADVANCED UTILITY SIMULATION MODEL, DESCRIPTION OF THE NATIONAL LOOP (VERSION 3.0)

    EPA Science Inventory

    The report is one of 11 in a series describing the initial development of the Advanced Utility Simulation Model (AUSM) by the Universities Research Group on Energy (URGE) and its continued development by the Science Applications International Corporation (SAIC) research team. The...

  4. A graphical workstation based part-task flight simulator for preliminary rapid evaluation of advanced displays

    NASA Technical Reports Server (NTRS)

    Wanke, Craig; Kuchar, James; Hahn, Edward; Pritchett, Amy; Hansman, R. J.

    1992-01-01

    Advances in avionics and display technology are significantly changing the cockpit environment in current transport aircraft. The MIT Aeronautical Systems Lab (ASL) has developed a part-task flight simulator specifically to study the effects of these new technologies on flight crew situational awareness and performance. The simulator is based on a commercially-available graphics workstation, and can be rapidly reconfigured to meet the varying demands of experimental studies. The simulator has been successfully used to evaluate graphical microburst alerting displays, electronic instrument approach plates, terrain awareness and alerting displays, and ATC routing amendment delivery through digital datalinks.

  5. A graphical workstation based part-task flight simulator for preliminary rapid evaluation of advanced displays

    NASA Technical Reports Server (NTRS)

    Wanke, Craig; Kuchar, James; Hahn, Edward; Pritchett, A.; Hansman, R. John

    1994-01-01

    Advances in avionics and display technology are significantly changing the cockpit environment in current transport aircraft. The MIT Aeronautical Systems Lab (ASL) developed a part-task flight simulator specifically to study the effects of these new technologies on flight crew situational awareness and performance. The simulator is based on a commercially-available graphics workstation, and can be rapidly reconfigured to meet the varying demands of experimental studies. The simulator was successfully used to evaluate graphical microburst alerting displays, electronic instrument approach plates, terrain awareness and alerting displays, and ATC routing amendment delivery through digital datalinks.

  6. Wireless Hearing Aid System Simulations using Advanced Design System™: A Behavioral Modeling Approach.

    PubMed

    Singh Rana, Ram; Bin, Tang; Liang, Zhang; Hari Krishna, Garg; De Yun, Wang

    2005-01-01

    Stringent requirements on size and power consumption prevent conventional hearing aid devices from providing patients with an economical and user-friendly solution, specifically for better noise cancellation. With advancements in technologies such as integrated circuit design, wireless communications and digital signal processing, wireless hearing aids with multiple microphones; analog, digital, mixed-signal and radio-frequency processing circuits; and DSP and programmable units promise enhanced performance. The focus of this paper is the system simulation of a typical wireless hearing aid using Agilent Advanced Design System™. The behavioral modeling features are exploited to enable whole-system simulations, including electro-acoustic transducers. A few system-level simulation results are included. PMID:17282359

  7. Man-vehicle systems research facility advanced aircraft flight simulator throttle mechanism

    NASA Technical Reports Server (NTRS)

    Kurasaki, S. S.; Vallotton, W. C.

    1985-01-01

    The Advanced Aircraft Flight Simulator is equipped with a motorized mechanism that simulates a two-engine throttle control system that can be operated via a computer-driven performance management system or manually by the pilots. The throttle control system incorporates features to simulate normal engine operations and thrust reverse, and to vary the force feel to meet a variety of research needs. Additional testing is needed to complete integration, but the remaining work is now principally in software design, since the mechanical aspects function correctly. The mechanism is an important part of the flight control system and provides the capability to conduct human factors research of flight crews with advanced aircraft systems under various flight conditions such as go-arounds, coupled instrument flight rule approaches, normal and ground operations, and emergencies that would or would not normally be experienced in actual flight.

  8. Simulator training in gastrointestinal endoscopy - From basic training to advanced endoscopic procedures.

    PubMed

    van der Wiel, S E; Küttner Magalhães, R; Rocha Gonçalves, Carla Rolanda; Dinis-Ribeiro, M; Bruno, M J; Koch, A D

    2016-06-01

    Simulator-based gastrointestinal endoscopy training has gained acceptance over the last decades and has been extensively studied. Several types of simulators have been validated and it has been demonstrated that the use of simulators in the early training setting accelerates the learning curve in acquiring basic skills. Current GI endoscopy simulators lack the degree of realism that would be necessary to provide training to achieve full competency or to be applicable in certification. Virtual Reality and mechanical simulators are commonly used in basic flexible endoscopy training, whereas ex vivo and in vivo models are used in training the most advanced endoscopic procedures. Validated models for the training of more routine therapeutic interventions like polypectomy, EMR, stenting and haemostasis are lacking or scarce and developments in these areas should be encouraged. PMID:27345646

  9. Development of a VOR/DME model for an advanced concepts simulator

    NASA Technical Reports Server (NTRS)

    Steinmetz, G. G.; Bowles, R. L.

    1984-01-01

    The report presents a definition of a VOR/DME airborne and ground systems simulation model. This description was drafted in response to a need in the creation of an advanced concepts simulation in which flight station design for the 1980 era can be postulated and examined. The simulation model described herein provides a reasonable representation of VOR/DME stations in the continental United States, including area coverage by type and noise errors. The detail in which the model has been cast provides the interested researcher with a moderate-fidelity simulation tool for conducting research and evaluation of navigator algorithms. Assumptions made within the development are listed and place certain responsibilities (data bases, communication with other simulation modules, uniform round earth, etc.) upon the researcher.
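
    A hedged sketch, not the report's model, of a minimal VOR/DME measurement computation: bearing and slant range from aircraft to station with additive Gaussian noise and a reception-range gate. The station data, error levels and range limit are illustrative assumptions.

        import math
        import random

        def vor_dme_measurement(ac_x, ac_y, ac_alt, st_x, st_y, st_elev,
                                bearing_sigma_deg=1.0, range_sigma_nm=0.1, max_range_nm=130.0):
            """Return (bearing_deg, range_nm) or None if the station is out of range.

            Positions are east/north coordinates in nautical miles, altitudes in feet.
            """
            dx, dy = st_x - ac_x, st_y - ac_y
            dalt_nm = (st_elev - ac_alt) / 6076.12            # feet -> nautical miles
            slant = math.sqrt(dx * dx + dy * dy + dalt_nm * dalt_nm)
            if slant > max_range_nm:
                return None
            bearing = math.degrees(math.atan2(dx, dy)) % 360.0     # degrees from north
            bearing += random.gauss(0.0, bearing_sigma_deg)        # bearing noise
            slant += random.gauss(0.0, range_sigma_nm)             # DME range noise
            return bearing % 360.0, slant

        if __name__ == "__main__":
            meas = vor_dme_measurement(ac_x=0.0, ac_y=0.0, ac_alt=10000.0,
                                       st_x=40.0, st_y=30.0, st_elev=500.0)
            if meas:
                print(f"bearing = {meas[0]:.1f} deg, DME range = {meas[1]:.2f} nm")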

  10. Advances in the simulation and automated measurement of well-sorted granular material: 1. Simulation

    USGS Publications Warehouse

    Daniel Buscombe; Rubin, David M.

    2012-01-01

    1. In this, the first of a pair of papers which address the simulation and automated measurement of well-sorted natural granular material, a method is presented for simulation of two-phase (solid, void) assemblages of discrete non-cohesive particles. The purpose is to have a flexible, yet computationally and theoretically simple, suite of tools with well constrained and well known statistical properties, in order to simulate realistic granular material as a discrete element model with realistic size and shape distributions, for a variety of purposes. The stochastic modeling framework is based on three-dimensional tessellations with variable degrees of order in particle-packing arrangement. Examples of sediments with a variety of particle size distributions and spatial variability in grain size are presented. The relationship between particle shape and porosity conforms to published data. The immediate application is testing new algorithms for automated measurements of particle properties (mean and standard deviation of particle sizes, and apparent porosity) from images of natural sediment, as detailed in the second of this pair of papers. The model could also prove useful for simulating specific depositional structures found in natural sediments, the result of physical alterations to packing and grain fabric, using discrete particle flow models. While the principal focus here is on naturally occurring sediment and sedimentary rock, the methods presented might also be useful for simulations of similar granular or cellular material encountered in engineering, industrial and life sciences.
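
    A hedged sketch, far simpler than the tessellation framework in the paper, of a two-phase (solid, void) assemblage: random sequential addition of non-overlapping equal spheres in a box, followed by a porosity estimate. Box and particle sizes are assumptions.

        import numpy as np

        def pack_spheres(box=10.0, radius=0.5, n_target=400, max_tries=20000, seed=0):
            """Place up to n_target equal spheres without overlap; return their centers."""
            rng = np.random.default_rng(seed)
            centers, tries = [], 0
            while len(centers) < n_target and tries < max_tries:
                tries += 1
                c = rng.uniform(radius, box - radius, size=3)
                if all(np.linalg.norm(c - p) >= 2.0 * radius for p in centers):
                    centers.append(c)
            return np.array(centers)

        def porosity(centers, box, radius):
            solid = len(centers) * (4.0 / 3.0) * np.pi * radius**3
            return 1.0 - solid / box**3

        if __name__ == "__main__":
            box, radius = 10.0, 0.5
            centers = pack_spheres(box, radius)
            print(f"placed {len(centers)} spheres, porosity ~ {porosity(centers, box, radius):.3f}")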

  11. Advanced manned space flight simulation and training: An investigation of simulation host computer system concepts

    NASA Technical Reports Server (NTRS)

    Montag, Bruce C.; Bishop, Alfred M.; Redfield, Joe B.

    1989-01-01

    The findings of a preliminary investigation by Southwest Research Institute (SwRI) into simulation host computer concepts are presented. It is designed to aid NASA in evaluating simulation technologies for use in spaceflight training. The focus of the investigation is on the next generation of space simulation systems that will be utilized in training personnel for Space Station Freedom operations. SwRI concludes that NASA should pursue a distributed simulation host computer system architecture for the Space Station Training Facility (SSTF) rather than a centralized, mainframe-based arrangement. A distributed system offers many advantages and is seen by SwRI as the only architecture that will allow NASA to achieve established functional goals and operational objectives over the life of the Space Station Freedom program. Several distributed, parallel computing systems are available today that offer real-time capabilities for time-critical, man-in-the-loop simulation. These systems are flexible in terms of connectivity and configurability, and are easily scaled to meet increasing demands for more computing power.

  12. Advanced Simulation of Coupled Earthquake and Tsunami Events (ASCETE) - Simulation Techniques for Realistic Tsunami Process Studies

    NASA Astrophysics Data System (ADS)

    Behrens, Joern; Bader, Michael; Breuer, Alexander N.; van Dinther, Ylona; Gabriel, Alice-A.; Galvez Barron, Percy E.; Rahnema, Kaveh; Vater, Stefan; Wollherr, Stephanie

    2015-04-01

    At the end of phase 1 of the ASCETE project, a simulation framework for coupled physics-based rupture generation with tsunami propagation and inundation is available. Adaptive mesh tsunami propagation and inundation by discontinuous Galerkin Runge-Kutta methods allows for accurate and conservative inundation schemes. Combined with a tree-based refinement strategy to highly optimize the code for high-performance computing architectures, a modeling tool for high-fidelity tsunami simulations has been constructed. Validation results demonstrate the capacity of the software. Rupture simulation is performed by an unstructured tetrahedral discontinuous Galerkin ADER discretization, which allows for accurate representation of complex geometries. The implemented code was nominated for and was selected as a finalist for the Gordon Bell award in high-performance computing. Highly realistic rupture events can be simulated with this modeling tool. The coupling of rupture-induced wave activity and displacement with hydrodynamic equations still poses a major problem due to diverging time and spatial scales. Some insight from the ASCETE set-up could be gained and the presentation will focus on the coupled behavior of the simulation system. Finally, an outlook to phase 2 of the ASCETE project will be given, in which further development of detailed physical processes as well as near-realistic scenario computations are planned. ASCETE is funded by the Volkswagen Foundation.

  13. The Osseus platform: a prototype for advanced web-based distributed simulation

    NASA Astrophysics Data System (ADS)

    Franceschini, Derrick; Riecken, Mark

    2016-05-01

    Recent technological advances in web-based distributed computing and database technology have made possible a deeper and more transparent integration of some modeling and simulation applications. Despite these advances towards true integration of capabilities, disparate systems, architectures, and protocols will remain in the inventory for some time to come. These disparities present interoperability challenges for distributed modeling and simulation whether the application is training, experimentation, or analysis. Traditional approaches call for building gateways to bridge between disparate protocols and retaining interoperability specialists. Challenges in reconciling data models also persist. These challenges and their traditional mitigation approaches directly contribute to higher costs, schedule delays, and frustration for the end users. Osseus is a prototype software platform originally funded as a research project by the Defense Modeling & Simulation Coordination Office (DMSCO) to examine interoperability alternatives using modern, web-based technology and taking inspiration from the commercial sector. Osseus provides tools and services for nonexpert users to connect simulations, targeting the time and skillset needed to successfully connect disparate systems. The Osseus platform presents a web services interface to allow simulation applications to exchange data using modern techniques efficiently over Local or Wide Area Networks. Further, it provides Service Oriented Architecture capabilities such that finer granularity components such as individual models can contribute to simulation with minimal effort.
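
    A hedged sketch, not the Osseus API, of the general web-services integration pattern the abstract describes: simulations publish and poll shared state as JSON over HTTP. The endpoint behavior and payload schema are invented for illustration only.

        import json
        from http.server import BaseHTTPRequestHandler, HTTPServer

        STATE = {}   # latest state published by each simulation, keyed by entity id

        class ExchangeHandler(BaseHTTPRequestHandler):
            def do_POST(self):
                # a simulation publishes entity state as a JSON document
                length = int(self.headers.get("Content-Length", 0))
                update = json.loads(self.rfile.read(length))
                STATE[update["entity_id"]] = update
                self.send_response(204)
                self.end_headers()

            def do_GET(self):
                # any other simulation can poll the current shared state
                body = json.dumps(STATE).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body)

        if __name__ == "__main__":
            HTTPServer(("localhost", 8080), ExchangeHandler).serve_forever()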

  14. The role of numerical simulation for the development of an advanced HIFU system

    NASA Astrophysics Data System (ADS)

    Okita, Kohei; Narumi, Ryuta; Azuma, Takashi; Takagi, Shu; Matumoto, Yoichiro

    2014-10-01

    High-intensity focused ultrasound (HIFU) has been used clinically and is in clinical trials to treat various diseases. An advanced HIFU system employs ultrasound techniques for guidance during HIFU treatment instead of the magnetic resonance imaging used in current HIFU systems. HIFU beam imaging for monitoring the HIFU beam and localized motion imaging for validating tissue treatment are briefly introduced as real-time ultrasound monitoring techniques. Numerical simulations have a great impact on the development of real-time ultrasound monitoring as well as on improving the safety and efficacy of treatment in advanced HIFU systems. A HIFU simulator was developed to reproduce ultrasound propagation through the body, taking the elasticity of tissue into consideration, and was validated by comparison with in vitro experiments in which the ultrasound emitted from the phased-array transducer propagates through an acrylic plate acting as a bone phantom. As a result, the defocus and distortion of the ultrasound propagating through the acrylic plate in the simulation quantitatively agree with the experimental results. Therefore, the HIFU simulator accurately reproduces ultrasound propagation through a medium whose shape and physical properties are well known. In addition, it was experimentally confirmed that simulation-assisted focus control of the phased-array transducer enables efficient assignment of the focus to the target. Simulation-assisted focus control can contribute to the design of transducers and to treatment planning.
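
    A hedged sketch, far simpler than the HIFU simulator described above: a 1-D linear acoustic finite-difference model of a pressure pulse crossing a stiffer layer, a crude stand-in for the acrylic bone phantom. Material values are assumptions; nonlinearity, elasticity and 3-D focusing are ignored.

        import numpy as np

        def propagate(nx=2000, dx=1e-4, steps=4000):
            """1-D pressure field after a Gaussian pulse crosses an assumed stiffer layer."""
            c = np.full(nx, 1500.0)                  # sound speed in water/tissue, m/s
            c[900:1000] = 2700.0                     # assumed acrylic layer (cells 900-999)
            dt = 0.4 * dx / c.max()                  # CFL-stable time step
            x = np.arange(nx) * dx
            p = np.exp(-((x - 0.02) / 0.002) ** 2)   # initial Gaussian pressure pulse
            p_prev = p.copy()
            for _ in range(steps):
                lap = np.zeros(nx)
                lap[1:-1] = (p[2:] - 2.0 * p[1:-1] + p[:-2]) / dx**2
                p_prev, p = p, 2.0 * p - p_prev + (c * dt)**2 * lap
                p[0] = p[-1] = 0.0                   # simple pressure-release boundaries
            return x, p

        if __name__ == "__main__":
            x, p = propagate()
            print(f"peak transmitted pressure beyond the layer: {p[1000:].max():.3f} (relative)")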

  15. FY05-FY06 Advanced Simulation and Computing Implementation Plan, Volume 2

    SciTech Connect

    Baron, A L

    2004-07-19

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the safety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program will require the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapon design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile life extension programs and the resolution of significant finding investigations (SFIs). This requires a balanced system of technical staff, hardware, simulation software, and computer science solutions.

  16. Final Scientific Report: A Scalable Development Environment for Peta-Scale Computing

    SciTech Connect

    Karbach, Carsten; Frings, Wolfgang

    2013-02-22

    This document is the final scientific report of the project DE-SC000120 (A Scalable Development Environment for Peta-Scale Computing). The objective of this project is to extend the Parallel Tools Platform (PTP) so that it can be applied to peta-scale systems. PTP is an integrated development environment for parallel applications; it comprises code analysis, performance tuning, parallel debugging, and system monitoring. The contribution of the Juelich Supercomputing Centre (JSC) aims to provide a scalable solution for system monitoring of supercomputers. This includes the development of a new communication protocol for exchanging status data between the target remote system and the client running PTP; the communication has to tolerate high latency. PTP needs to be implemented robustly and should hide the complexity of the supercomputer's architecture in order to provide transparent access to various remote systems via a uniform user interface. This simplifies the porting of applications to different systems, because PTP functions as an abstraction layer between the parallel application developer and the compute resources. The common requirement for all PTP components is that they have to interact with the remote supercomputer: for example, applications are built remotely, performance tools are attached to job submissions, and their output data resides on the remote system. Status data has to be collected by evaluating the output of the remote job scheduler, and the parallel debugger needs to control an application executing on the supercomputer. The challenge is to provide this functionality for peta-scale systems in real time. The client-server architecture of the established monitoring application LLview, developed by the JSC, can be applied to PTP's system monitoring. LLview provides a well-arranged overview of the supercomputer's current status. A set of statistics, a list of running and queued jobs, as well as a node display mapping running jobs to their compute resources form the
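
    A small sketch of the status-data collection step described above, assuming an invented one-line-per-job scheduler output format (this is not LLview's actual protocol):

    ```python
    # Parse (hypothetical) scheduler output into a compact status summary that
    # a monitoring client could poll periodically.
    from collections import Counter

    def parse_job_lines(lines):
        """Each line: '<jobid> <user> <state> <num_nodes>' (invented format)."""
        jobs = []
        for line in lines:
            jobid, user, state, nodes = line.split()
            jobs.append({"id": jobid, "user": user, "state": state, "nodes": int(nodes)})
        return jobs

    def summarize(jobs):
        states = Counter(job["state"] for job in jobs)
        busy_nodes = sum(job["nodes"] for job in jobs if job["state"] == "RUNNING")
        return {"jobs_by_state": dict(states), "busy_nodes": busy_nodes}

    if __name__ == "__main__":
        sample = ["1001 alice RUNNING 512", "1002 bob QUEUED 2048", "1003 carol RUNNING 128"]
        print(summarize(parse_job_lines(sample)))
    ```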

  17. Fortran Transformational Tools in Support of Scientific Application Development for Petascale Computer Architectures

    SciTech Connect

    Sottille, Matthew

    2013-09-12

    This document is the final report for a multi-year effort building infrastructure to support tool development for Fortran programs. We also investigated static analysis and code transformation methods relevant to scientific programmers who are writing Fortran programs for petascale-class high performance computing systems. This report details our accomplishments, technical approaches, and provides information on where the research results and code may be obtained from an open source software repository. The report for the first year of the project that was performed at the University of Oregon prior to the PI moving to Galois, Inc. is included as an appendix.
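
    As a toy illustration of source-to-source transformation for Fortran, the sketch below promotes bare REAL declarations to a named working precision with a regex pass; the project's actual infrastructure performs genuine parsing and static analysis rather than pattern matching:

    ```python
    # Illustrative regex-based pass only: rewrite 'real ::' declarations as
    # 'real(kind=wp) ::'. A real tool would operate on a parsed syntax tree.
    import re

    PROMOTE = re.compile(r"^(\s*)real(\s*::)", flags=re.IGNORECASE | re.MULTILINE)

    def promote_reals(source: str) -> str:
        return PROMOTE.sub(r"\1real(kind=wp)\2", source)

    fortran = """\
    subroutine axpy(n, a, x, y)
      integer :: n
      real :: a
      real :: x(n), y(n)
      y = a*x + y
    end subroutine axpy
    """

    print(promote_reals(fortran))
    ```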

  18. Advanced simulation technology for etching process design for CMOS device applications

    NASA Astrophysics Data System (ADS)

    Kuboi, Nobuyuki; Fukasawa, Masanaga; Tatsumi, Tetsuya

    2016-07-01

    Plasma etching is a critical process for the realization of high performance in the next generation of CMOS devices. To predict and control fluctuations in the etching properties accurately during mass production, it is essential that etching process simulation technology considers fluctuations in the plasma chamber wall conditions, the effects of by-products on the critical dimensions, the Si recess dependence on the wafer open area ratio and local pattern structure, and the time-dependent plasma-induced damage distribution associated with the three-dimensional feature scale profile at the 100 nm level. This consideration can overcome the issues with conventional simulations performed under the assumed ideal conditions, which are not accurate enough for practical process design. In this article, these advanced process simulation technologies are reviewed, and, from the results of suitable process simulations, a new etching system that automatically controls the etching properties is proposed to enable stable CMOS device fabrication with high yields.

  19. Integrating advanced materials simulation techniques into an automated data analysis workflow at the Spallation Neutron Source

    SciTech Connect

    Borreguero Calvo, Jose M; Campbell, Stuart I; Delaire, Olivier A; Doucet, Mathieu; Goswami, Monojoy; Hagen, Mark E; Lynch, Vickie E; Proffen, Thomas E; Ren, Shelly; Savici, Andrei T; Sumpter, Bobby G

    2014-01-01

    This presentation reviews developments on integrating advanced modeling and simulation techniques into the analysis step for experimental data obtained at the Spallation Neutron Source. A workflow framework for refining molecular mechanics force fields against quasi-elastic neutron scattering data is presented. The workflow combines software components to submit model simulations to remote high-performance computers, a message broker interface for communication between the optimizer engine and the simulation production step, and tools to convolve the simulated data with the experimental resolution. A test application shows the correction to a popular fixed-charge water model needed to account for polarization effects due to the presence of solvated ions. Future enhancements to the refinement workflow are discussed. This work is funded through the DOE Center for Accelerating Materials Modeling.
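
    One step of such a refinement loop, convolving a simulated quasi-elastic spectrum with the instrument resolution, can be sketched as follows; the Lorentzian width and resolution FWHM are placeholder values, not fitted parameters:

    ```python
    # Convolve a simulated Lorentzian S(Q,E) with a Gaussian instrument resolution.
    # All widths below are illustrative placeholders.
    import numpy as np

    energy = np.linspace(-2.0, 2.0, 801)                    # energy transfer [meV]
    gamma = 0.1                                             # model HWHM [meV]
    simulated = (gamma / np.pi) / (energy**2 + gamma**2)    # Lorentzian line shape

    fwhm = 0.07                                             # resolution FWHM [meV]
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    resolution = np.exp(-0.5 * (energy / sigma) ** 2)
    resolution /= resolution.sum()                          # normalize the kernel

    broadened = np.convolve(simulated, resolution, mode="same")
    print("peak height before/after broadening:", simulated.max(), broadened.max())
    ```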

  20. Advances in POST2 End-to-End Descent and Landing Simulation for the ALHAT Project

    NASA Technical Reports Server (NTRS)

    Davis, Jody L.; Striepe, Scott A.; Maddock, Robert W.; Hines, Glenn D.; Paschall, Stephen, II; Cohanim, Babak E.; Fill, Thomas; Johnson, Michael C.; Bishop, Robert H.; DeMars, Kyle J.; Sostaric, Ronald R.; Johnson, Andrew E.

    2008-01-01

    Program to Optimize Simulated Trajectories II (POST2) is used as a basis for an end-to-end descent and landing trajectory simulation that is essential in determining design and integration capability and system performance of the lunar descent and landing system and environment models for the Autonomous Landing and Hazard Avoidance Technology (ALHAT) project. The POST2 simulation provides a six degree-of-freedom capability necessary to test, design and operate a descent and landing system for successful lunar landing. This paper presents advances in the development and model-implementation of the POST2 simulation, as well as preliminary system performance analysis, used for the testing and evaluation of ALHAT project system models.

  1. CAPE-OPEN Integration for Advanced Process Engineering Co-Simulation

    SciTech Connect

    Zitney, S.E.

    2006-11-01

    This paper highlights the use of the CAPE-OPEN (CO) standard interfaces in the Advanced Process Engineering Co-Simulator (APECS) developed at the National Energy Technology Laboratory (NETL). The APECS system uses the CO unit operation, thermodynamic, and reaction interfaces to provide its plug-and-play co-simulation capabilities, including the integration of process simulation with computational fluid dynamics (CFD) simulation. APECS also relies heavily on the use of a CO COM/CORBA bridge for running process/CFD co-simulations on multiple operating systems. For process optimization in the face of multiple and sometimes conflicting objectives, APECS offers stochastic modeling and multi-objective optimization capabilities developed to comply with the CO software standard. At NETL, system analysts are applying APECS to a wide variety of advanced power generation systems, ranging from small fuel cell systems to commercial-scale power plants, including the coal-fired, gasification-based FutureGen power and hydrogen production plant.

  2. Development and integration of the Army's Advanced Multispectral Simulation Test Acceptance Resource (AMSTAR) HWIL facilities

    NASA Astrophysics Data System (ADS)

    LeSueur, Kenneth G.; Lowry, William; Morris, Joe

    2006-05-01

    The Advanced Multispectral Simulation Test Acceptance Resource (AMSTAR) is a suite of state-of-the-art hardware-in-the-loop (HWIL) simulation/test capabilities designed to meet the life-cycle testing needs of multi-spectral systems. This paper presents the major AMSTAR facility design concepts and each of the Millimeter Wave (MMW), Infrared (IR), and Semi-Active Laser (SAL) in-band scene generation and projection system designs. The emergence of multispectral sensors in missile systems necessitates capabilities such as AMSTAR that can simultaneously project MMW, IR, and SAL wave bands into a common sensor aperture.

  3. Development and integration of the Army's advanced multispectral simulation test acceptance resource (AMSTAR) HWIL facilities

    NASA Astrophysics Data System (ADS)

    LeSueur, Kenneth G.; Lowry, William; Morris, Joe

    2005-05-01

    The Advanced Multispectral Simulation Test Acceptance Resource (AMSTAR) is a suite of state-of-the-art Hardware-In-the-Loop (HWIL) simulation/test capabilities designed to meet the life-cycle testing needs of multi-spectral systems. This paper presents the major AMSTAR facility design concepts and each of the Millimeter Wave (MMW), Infrared (IR), and Semi-Active Laser (SAL) in-band scene generation and projection system designs. The emergence of multispectral sensors in missile systems necessitates capabilities such as AMSTAR that can simultaneously project MMW, IR, and SAL wave bands into a common sensor aperture.

  4. Accuracy of a Decision Aid for Advance Care Planning: Simulated End-of-Life Decision Making

    PubMed Central

    Levi, Benjamin H.; Heverley, Steven R.; Green, Michael J.

    2013-01-01

    Purpose Advance directives have been criticized for failing to help physicians make decisions consistent with patients’ wishes. This pilot study sought to determine if an interactive, computer-based decision aid that generates an advance directive can help physicians accurately translate patients’ wishes into treatment decisions. Methods We recruited 19 patient-participants who had each previously created an advance directive using a computer-based decision aid, and 14 physicians who had no prior knowledge of the patient-participants. For each advance directive, three physicians were randomly assigned to review the advance directive and make five to six treatment decisions for each of six (potentially) end-of-life clinical scenarios. From the three individual physicians’ responses, a “consensus physician response” was generated for each treatment decision (total decisions = 32). This consensus response was shared with the patient whose advance directive had been reviewed, and she/he was then asked to indicate how well the physician translated his/her wishes into clinical decisions. Results Patient-participants agreed with the consensus physician responses 84 percent (508/608) of the time, including 82 percent agreement on whether to provide mechanical ventilation, and 75 percent on decisions about cardiopulmonary resuscitation (CPR). Across the six vignettes, patient-participants’ rating of how well physicians translated their advance directive into medical decisions was 8.4 (range = 6.5–10, where 1 = extremely poorly, and 10 = extremely well). Physicians’ overall rating of their confidence at accurately translating patients’ wishes into clinical decisions was 7.8 (range = 6.1–9.3, 1 = not at all confident, 10 = extremely confident). Conclusion For simulated cases, a computer-based decision aid for advance care planning can help physicians more confidently make end-of-life decisions that patients will endorse. PMID:22167985

  5. Advanced Simulation and Computing FY10-11 Implementation Plan Volume 2, Rev. 0

    SciTech Connect

    Carnes, B

    2009-06-08

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from one that

  6. Advanced Simulation & Computing FY09-FY10 Implementation Plan Volume 2, Rev. 0

    SciTech Connect

    Meisner, R; Perry, J; McCoy, M; Hopson, J

    2008-04-30

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the safety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future nonnuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC)1 is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear-weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable Stockpile Life Extension Programs (SLEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining the support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from one

  7. Advanced Simulation and Computing Fiscal Year 2011-2012 Implementation Plan, Revision 0

    SciTech Connect

    McCoy, M; Phillips, J; Hopson, J; Meisner, R

    2010-04-22

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering (D&E) programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model

  8. Advanced Simulation and Computing FY08-09 Implementation Plan Volume 2 Revision 0

    SciTech Connect

    McCoy, M; Kusnezov, D; Bikkel, T; Hopson, J

    2007-04-25

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the safety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future nonnuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear-weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable Stockpile Life Extension Programs (SLEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining the support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from one

  9. Advanced Simulation and Computing FY09-FY10 Implementation Plan Volume 2, Rev. 1

    SciTech Connect

    Kissel, L

    2009-04-01

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from one that

  10. Advanced Simulation and Computing FY09-FY10 Implementation Plan, Volume 2, Revision 0.5

    SciTech Connect

    Meisner, R; Hopson, J; Peery, J; McCoy, M

    2008-10-07

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC)1 is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from one

  11. Advanced Simulation and Computing FY08-09 Implementation Plan, Volume 2, Revision 0.5

    SciTech Connect

    Kusnezov, D; Bickel, T; McCoy, M; Hopson, J

    2007-09-13

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC)1 is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear-weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable Stockpile Life Extension Programs (SLEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining the support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from

  12. Advanced Simulation and Computing FY10-FY11 Implementation Plan Volume 2, Rev. 0.5

    SciTech Connect

    Meisner, R; Peery, J; McCoy, M; Hopson, J

    2009-09-08

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering (D&E) programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model

  13. Advanced Simulation and Computing FY07-08 Implementation Plan Volume 2

    SciTech Connect

    Kusnezov, D; Hale, A; McCoy, M; Hopson, J

    2006-06-22

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the safety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future nonnuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program will require the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear-weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable Stockpile Life Extension Programs (SLEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining the support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from

  14. Advanced Methodology for Simulation of Complex Flows Using Structured Grid Systems

    NASA Technical Reports Server (NTRS)

    Steinthorsson, Erlendur; Modiano, David

    1995-01-01

    Detailed simulations of viscous flows in complicated geometries pose a significant challenge to current capabilities of Computational Fluid Dynamics (CFD). To enable routine application of CFD to this class of problems, advanced methodologies are required that employ (a) automated grid generation, (b) adaptivity, (c) accurate discretizations and efficient solvers, and (d) advanced software techniques. Each of these ingredients contributes to increased accuracy, efficiency (in terms of human effort and computer time), and/or reliability of CFD software. In the long run, methodologies employing structured grid systems will remain a viable choice for routine simulation of flows in complex geometries only if genuinely automatic grid generation techniques for structured grids can be developed and if adaptivity is employed more routinely. More research in both these areas is urgently needed.

  15. Overview of the Consortium for the Advanced Simulation of Light Water Reactors (CASL)

    NASA Astrophysics Data System (ADS)

    Kulesza, Joel A.; Franceschini, Fausto; Evans, Thomas M.; Gehin, Jess C.

    2016-02-01

    The Consortium for Advanced Simulation of Light Water Reactors (CASL) was established in July 2010 for the purpose of providing advanced modeling and simulation solutions for commercial nuclear reactors. The primary goal is to provide coupled, usable modeling and simulation capabilities of higher fidelity than are currently available. These are needed to address light water reactor (LWR) operational and safety performance-defining phenomena that cannot yet be fully modeled with a first-principles approach. To pursue these goals, CASL has participation from laboratory, academic, and industry partners. These partners are pursuing the solution of ten major "Challenge Problems" in order to advance the state of the art in reactor design and analysis to permit power uprates, higher burnup, life extension, and increased safety. At present, the problems being addressed by CASL are primarily reactor physics-oriented; however, this paper is intended to introduce CASL to the reactor dosimetry community because reactor physics modelling and nuclear data are important for defining the source term for that community, and because the transport methods being developed are applicable and extensible.

  16. Beyond Petascale with the HipGISAXS Software Suite

    NASA Astrophysics Data System (ADS)

    Hexemer, Alexander; Li, Sherry; Chourou, Slim; Sarje, Abhinav

    2014-03-01

    We have developed HipGISAXS, a software suite to analyze GISAXS and SAXS data for structural characterization of materials at the nanoscale using X-rays. The software has been developed as a massively parallel system capable of harnessing the raw computational power offered by clusters and supercomputers built using graphics processors (GPUs), Intel Phi co-processors, or commodity multi-core CPUs. Currently the forward GISAXS simulation is a major component of HipGISAXS; it simulates the X-ray scattering process based on the Distorted Wave Born Approximation (DWBA) theory for any given nanostructures and morphologies with a set of experimental configurations. These simulations are compute-intensive and have a high degree of available parallelism, making them well suited for fine-grained parallel computation on highly parallel many-core processors such as GPUs. Furthermore, a large number of such simulations can be carried out simultaneously for various experimental input parameters. HipGISAXS also includes a Reverse Monte Carlo based modeling tool for SAXS data. With HipGISAXS we have demonstrated a sustained compute performance of over 1 Petaflop on 8000 GPU nodes of the Titan supercomputer at ORNL, and have shown it to be highly scalable.
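
    A Born-approximation toy version of the forward problem, the scattering intensity of a monodisperse sphere on a q-grid, gives a feel for the computation HipGISAXS vectorizes and offloads to accelerators; the real code evaluates DWBA form factors for arbitrary shapes:

    ```python
    # Analytic sphere form factor evaluated on a q-grid with NumPy (Born
    # approximation only; illustrative, not the HipGISAXS implementation).
    import numpy as np

    def sphere_form_factor(q, radius):
        """F(q) for a uniform sphere; the q -> 0 limit is the sphere volume."""
        qr = q * radius
        volume = 4.0 / 3.0 * np.pi * radius**3
        with np.errstate(divide="ignore", invalid="ignore"):
            f = 3.0 * volume * (np.sin(qr) - qr * np.cos(qr)) / qr**3
        return np.where(qr == 0.0, volume, f)

    q = np.linspace(0.0, 0.5, 2001)        # scattering vector magnitude [1/nm]
    intensity = sphere_form_factor(q, radius=10.0) ** 2
    print("first minimum near q =", q[np.argmin(intensity[1:]) + 1], "1/nm")
    ```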

  17. Enabling a Highly-Scalable Global Address Space Model for Petascale Computing

    SciTech Connect

    Apra, Edoardo; Vetter, Jeffrey S; Yu, Weikuan

    2010-01-01

    Over the past decade, the trajectory to the petascale has been built on increased complexity and scale of the underlying parallel architectures. Meanwhile, software developers have struggled to provide tools that maintain the productivity of computational science teams using these new systems. In this regard, Global Address Space (GAS) programming models provide a straightforward and easy to use addressing model, which can lead to improved productivity. However, the scalability of GAS depends directly on the design and implementation of the runtime system on the target petascale distributed-memory architecture. In this paper, we describe the design, implementation, and optimization of the Aggregate Remote Memory Copy Interface (ARMCI) runtime library on the Cray XT5 2.3 PetaFLOPs computer at Oak Ridge National Laboratory. We optimized our implementation with the flow intimation technique that we have introduced in this paper. Our optimized ARMCI implementation improves scalability of both the Global Arrays (GA) programming model and a real-world chemistry application NWChem from small jobs up through 180,000 cores.

  18. Advanced Simulation in Undergraduate Pilot Training: Automatic Instructional System. Final Report for the Period March 1971-January 1975.

    ERIC Educational Resources Information Center

    Faconti, Victor; Epps, Robert

    The Advanced Simulator for Undergraduate Pilot Training (ASUPT) was designed to investigate the role of simulation in the future Undergraduate Pilot Training (UPT) program. The Automated Instructional System designed for the ASUPT simulator was described in this report. The development of the Automated Instructional System for ASUPT was based upon…

  19. Vector Field Visual Data Analysis Technologies for Petascale Computational Science

    SciTech Connect

    Garth, Christoph; Deines, Eduard; Joy, Kenneth I.; Bethel, E. Wes; Childs, Hank; Weber, Gunther; Ahern, Sean; Pugmire, Dave; Sanderson, Allen; Johnson, Chris

    2009-11-13

    State-of-the-art computational science simulations generate large-scale vector field data sets. Visualization and analysis is a key aspect of obtaining insight into these data sets and represents an important challenge. This article discusses possibilities and challenges of modern vector field visualization and focuses on methods and techniques developed in the SciDAC Visualization and Analytics Center for Enabling Technologies (VACET) and deployed in the open-source visualization tool, VisIt.
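
    A core primitive behind many of these techniques is streamline integration; the sketch below traces a streamline through an analytic two-dimensional field with fourth-order Runge-Kutta, standing in for the data a tool like VisIt would load:

    ```python
    # Trace one streamline of a steady 2-D vector field with classical RK4.
    # The analytic solid-body-rotation field is a stand-in for real simulation data.
    import numpy as np

    def velocity(p):
        x, y = p
        return np.array([-y, x])          # rotation about the origin

    def trace_streamline(seed, h=0.01, steps=700):
        points = [np.asarray(seed, dtype=float)]
        for _ in range(steps):
            p = points[-1]
            k1 = velocity(p)
            k2 = velocity(p + 0.5 * h * k1)
            k3 = velocity(p + 0.5 * h * k2)
            k4 = velocity(p + h * k3)
            points.append(p + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4))
        return np.array(points)

    line = trace_streamline(seed=(1.0, 0.0))
    print("radius drift after one loop:", abs(np.linalg.norm(line[-1]) - 1.0))
    ```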

  20. Validation of an Advanced Material Model for Simulating the Impact and Shock Response of Composite Materials

    NASA Astrophysics Data System (ADS)

    Clegg, Richard A.; Hayhurst, Colin J.; Nahme, Hartwig

    2001-06-01

    Validation of an advanced continuum based numerical model for the simulation of the shock response of composite materials during high rate transient dynamic loading is described. The constitutive model, implemented in AUTODYN-2D and 3D, allows for the representation of non-linear shock effects in combination with orthotropic stiffness and damage. Simulations of uniaxial flyer plate experiments on aramid and polyethylene fibre composite systems are presented and compared with experiment. The continuum model is shown to reproduce well the experimental VISAR velocity traces at the rear surface of the targets. Finally, practical application of the model as implemented in AUTODYN is demonstrated through the simulation of ballistic and hypervelocity impact events. Comparison with experiment is given where possible.

  1. Technical Basis for Physical Fidelity of NRC Control Room Training Simulators for Advanced Reactors

    SciTech Connect

    Minsk, Brian S.; Branch, Kristi M.; Bates, Edward K.; Mitchell, Mark R.; Gore, Bryan F.; Faris, Drury K.

    2009-10-09

    The objective of this study is to determine how simulator physical fidelity influences the effectiveness of training the regulatory personnel responsible for examination and oversight of operating personnel and inspection of technical systems at nuclear power reactors. It seeks to contribute to the U.S. Nuclear Regulatory Commission’s (NRC’s) understanding of the physical fidelity requirements of training simulators. The goal of the study is to provide an analytic framework, data, and analyses that inform NRC decisions about the physical fidelity requirements of the simulators it will need to train its staff for assignment at advanced reactors. These staff are expected to come from increasingly diverse educational and experiential backgrounds.

  2. Do Advance Yield Markings Increase Safe Driver Behaviors at Unsignalized, Marked Midblock Crosswalks? Driving Simulator Study

    PubMed Central

    Gómez, Radhameris A.; Samuel, Siby; Gerardino, Luis Roman; Romoser, Matthew R. E.; Collura, John; Knodler, Michael; Fisher, Donald L.

    2012-01-01

    In the United States, 78% of pedestrian crashes occur at nonintersection crossings. As a result, unsignalized, marked midblock crosswalks are prime targets for remediation. Many of these crashes occur under sight-limited conditions in which the view of critical information by the driver or pedestrian is obstructed by a vehicle stopped in an adjacent travel or parking lane on the near side of the crosswalk. Study of such a situation on the open road is much too risky, but study of the situation in a driving simulator is not. This paper describes the development of scenarios with sight limitations to compare potential vehicle–pedestrian conflicts on a driving simulator under conditions with two different types of pavement markings. Under the first condition, advance yield markings and symbol signs (prompts) that indicated “yield here to pedestrians” were used to warn drivers of pedestrians at marked, midblock crosswalks. Under the second condition, standard crosswalk treatments and prompts were used to warn drivers of these hazards. Actual crashes as well as the drivers' point of gaze were measured to determine if the drivers approaching a marked midblock crosswalk looked for pedestrians in the crosswalk more frequently and sooner in high-risk scenarios when advance yield markings and prompts were present than when standard markings and prompts were used. Fewer crashes were found to occur with advance yield markings. Drivers were also found to look for pedestrians much more frequently and much sooner with advance yield markings. The advantages and limitations of the use of driving simulation to study problems such as these are discussed. PMID:23082040

  3. Mission simulation as an approach to develop requirements for automation in Advanced Life Support Systems

    NASA Technical Reports Server (NTRS)

    Erickson, J. D.; Eckelkamp, R. E.; Barta, D. J.; Dragg, J.; Henninger, D. L. (Principal Investigator)

    1996-01-01

    This paper examines mission simulation as an approach to develop requirements for automation and robotics for Advanced Life Support Systems (ALSS). The focus is on requirements and applications for command and control, control and monitoring, situation assessment and response, diagnosis and recovery, adaptive planning and scheduling, and other automation applications in addition to mechanized equipment and robotics applications to reduce the excessive human labor requirements to operate and maintain an ALSS. Based on principles of systems engineering, an approach is proposed to assess requirements for automation and robotics using mission simulation tools. First, the story of a simulated mission is defined in terms of processes with attendant types of resources needed, including options for use of automation and robotic systems. Next, systems dynamics models are used in simulation to reveal the implications for selected resource allocation schemes in terms of resources required to complete operational tasks. The simulations not only help establish ALSS design criteria, but also may offer guidance to ALSS research efforts by identifying gaps in knowledge about procedures and/or biophysical processes. Simulations of a planned one-year mission with 4 crewmembers in a Human Rated Test Facility are presented as an approach to evaluation of mission feasibility and definition of automation and robotics requirements.
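
    The flavor of such systems-dynamics resource simulations can be sketched with a single stock-and-flow balance, here a water buffer drained by crew consumption and replenished by a recycler; all numbers are illustrative, not ALSS design values:

    ```python
    # Forward-Euler integration of one resource stock (water) with illustrative
    # flows; a real systems-dynamics model would track many coupled stocks.
    crew_size = 4
    use_per_person = 25.0        # water demand [kg/day/person] (illustrative)
    recovery_fraction = 0.90     # fraction of used water recovered by the recycler
    makeup_supply = 8.0          # fresh water added per day [kg/day] (illustrative)

    water = 500.0                # buffer inventory [kg]
    dt = 0.25                    # time step [day]
    for step in range(int(365 / dt)):
        demand = crew_size * use_per_person
        recycled = recovery_fraction * demand
        water += (recycled + makeup_supply - demand) * dt
        if water <= 0.0:
            print(f"buffer exhausted on day {step * dt:.1f}")
            break
    else:
        print(f"water remaining after one year: {water:.0f} kg")
    ```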

  4. Retention of Advanced Cardiac Life Support Knowledge and Skills Following High-Fidelity Mannequin Simulation Training

    PubMed Central

    Sen, Sanchita; Finn, Laura A.; Cawley, Michael J.

    2015-01-01

    Objective. To assess pharmacy students’ ability to retain advanced cardiac life support (ACLS) knowledge and skills within 120 days of previous high-fidelity mannequin simulation training. Design. Students were randomly assigned to rapid response teams of 5-6. Skills in ACLS and mannequin survival were compared between teams some members of which had simulation training 120 days earlier and teams who had not had previous training. Assessment. A checklist was used to record and assess performance in the simulations. Teams with previous simulation training (n=10) demonstrated numerical superiority to teams without previous training (n=12) for 6 out of 8 (75%) ACLS skills observed, including time calculating accurate vasopressor infusion rate (83 sec vs 113 sec; p=0.01). Mannequin survival was 37% higher for teams who had previous simulation training, but this result was not significant (70% vs 33%; p=0.20). Conclusion. Teams with students who had previous simulation training demonstrated numerical superiority in ACLS knowledge and skill retention within 120 days of previous training compared to those who had no previous training. Future studies are needed to add to the current evidence of pharmacy students’ and practicing pharmacists’ ACLS knowledge and skill retention. PMID:25741028

  5. Advanced virtual energy simulation training and research: IGCC with CO2 capture power plant

    SciTech Connect

    Zitney, S.; Liese, E.; Mahapatra, P.; Bhattacharyya, D.; Provost, G.

    2011-01-01

    In this presentation, we highlight the deployment of a real-time dynamic simulator of an integrated gasification combined cycle (IGCC) power plant with CO2 capture at the Department of Energy's (DOE) National Energy Technology Laboratory's (NETL) Advanced Virtual Energy Simulation Training and Research (AVESTAR(TM)) Center. The Center was established as part of the DOE's accelerating initiative to advance new clean coal technology for power generation. IGCC systems are an attractive technology option, generating low-cost electricity by converting coal and/or other fuels into a clean synthesis gas mixture in a process that is efficient and environmentally superior to conventional power plants. The IGCC dynamic simulator builds on, and reaches beyond, conventional power plant simulators to merge, for the first time, a 'gasification with CO2 capture' process simulator with a 'combined-cycle' power simulator. Fueled with coal, petroleum coke, and/or biomass, the gasification island of the simulated IGCC plant consists of two oxygen-blown, downward-fired, entrained-flow, slagging gasifiers with radiant syngas coolers and two-stage sour shift reactors, followed by a dual-stage acid gas removal process for CO2 capture. The combined-cycle island consists of two F-class gas turbines, a steam turbine, and a heat recovery steam generator with three pressure levels. The dynamic simulator can be used for normal base-load operation, as well as plant start-up and shutdown. The real-time dynamic simulator also responds satisfactorily to process disturbances, feedstock blending and switchovers, fluctuations in ambient conditions, and power demand load shedding. In addition, the full-scope simulator handles a wide range of abnormal situations, including equipment malfunctions and failures, together with changes initiated through actions from plant field operators. By providing a comprehensive IGCC operator training system, the AVESTAR Center is poised to develop a

  6. The Advanced Modeling, Simulation and Analysis Capability Roadmap Vision for Engineering

    NASA Technical Reports Server (NTRS)

    Zang, Thomas; Lieber, Mike; Norton, Charles; Fucik, Karen

    2006-01-01

    This paper summarizes a subset of the Advanced Modeling Simulation and Analysis (AMSA) Capability Roadmap that was developed for NASA in 2005. The AMSA Capability Roadmap Team was chartered "to identify what is needed to enhance NASA's capabilities to produce leading-edge exploration and science missions by improving engineering system development, operations, and science understanding through broad application of advanced modeling, simulation and analysis techniques." The AMSA roadmap stressed the need for integration, not just within the science, engineering and operations domains themselves, but also across these domains. Here we discuss the roadmap element pertaining to integration within the engineering domain, with a particular focus on implications for future observatory missions. The AMSA products supporting the system engineering function are mission information, bounds on information quality, and system validation guidance. The Engineering roadmap element contains five sub-elements: (1) Large-Scale Systems Models, (2) Anomalous Behavior Models, (3) Advanced Uncertainty Models, (4) Virtual Testing Models, and (5) Space-Based Robotics Manufacture and Servicing Models.

  7. Recent advances in 3D computed tomography techniques for simulation and navigation in hepatobiliary pancreatic surgery.

    PubMed

    Uchida, Masafumi

    2014-04-01

    A few years ago it could take several hours to complete a 3D image using a 3D workstation. Thanks to advances in computer science, obtaining results of interest now requires only a few minutes. Many recent 3D workstations or multimedia computers are equipped with onboard 3D virtual patient modeling software, which enables patient-specific preoperative assessment and virtual planning, navigation, and tool positioning. Although medical 3D imaging can now be conducted using various modalities, including computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), and ultrasonography (US) among others, the highest quality images are obtained using CT data, and CT images are now the most commonly used source of data for 3D simulation and navigation imaging. If the 2D source image is poor, no amount of 3D image manipulation in software will provide a quality 3D image. In this exhibition, recent advances in CT imaging techniques and in 3D visualization of hepatobiliary and pancreatic abnormalities are featured, including scan and image reconstruction techniques, contrast-enhancement techniques, new applications of advanced CT scan techniques, and new virtual reality simulation and navigation imaging. PMID:24464989

  8. Design, simulation and evaluation of advanced display concepts for the F-16 control configured vehicle

    NASA Technical Reports Server (NTRS)

    Klein, R. W.; Hollister, W. M.

    1982-01-01

    Advanced display concepts to augment the tracking ability of the F-16 Control Configured Vehicle (CCV) were designed, simulated, and evaluated. A fixed-base simulator was modified to represent the F-16 CCV. An isometric sidearm control stick and a two-axis CCV thumb button were installed in the cockpit. The forward cockpit CRT was programmed to present an external scene (numbered runway, horizon) and the designed Heads-Up Display. The cockpit interior was modified to represent a fighter, and the F-16 CCV dynamics and direct lift and side force modes were programmed. Compensatory displays were designed from man-machine considerations. Pilots evaluated the Heads-Up Display and compensatory displays during simulated descents in the presence of several levels of filtered, zero-mean wind gusts. During a descent from 2500 feet to the runway, the pilots tracked a point on the runway utilizing the basic F-16, the F-16 CCV, and the F-16 CCV with advanced displays. Substantial tracking improvements resulted from utilizing the CCV modes, and the displays were found to further enhance the tracking ability of the F-16 CCV.

  9. Validation of an Advanced Material Model for Simulating the Impact and Shock Response of Composite Materials

    NASA Astrophysics Data System (ADS)

    Clegg, Richard A.; Hayhurst, Colin J.; Nahme, Hartwig

    2002-07-01

    Composite materials are now commonly used as ballistic and hypervelocity protection materials, and the demand for simulation of impact on these materials is increasing. A new material model specifically designed for the shock response of anisotropic materials has been developed and implemented in the hydrocode AUTODYN. The model allows for the representation of non-linear shock effects in combination with anisotropic material stiffness and damage. The coupling of the equation of state and the anisotropic response is based on the methodology proposed by Anderson et al. [2]. An overview of the coupled formulation is given in order to point out the important assumptions, key innovations, and basic theoretical framework. The coupled model was originally developed by Century Dynamics and Fhg-EMI for assessing the hypervelocity impact response of composite satellite protection systems [1]. It was also identified that the developed model should offer new possibilities and capabilities for modelling modern advanced armour materials. Validation of the advanced composite model is first shown via simulations of uniaxial strain flyer plate experiments on aramid and polyethylene fibre composite systems. Finally, practical application of the model as implemented in AUTODYN is demonstrated through the simulation of ballistic and hypervelocity impact events. Comparison with experiment is given where possible.

  10. ADVANCED UTILITY SIMULATION MODEL, REPORT OF SENSITIVITY TESTING, CALIBRATION, AND MODEL OUTPUT COMPARISONS (VERSION 3.0)

    EPA Science Inventory

    The report gives results of activities relating to the Advanced Utility Simulation Model (AUSM): sensitivity testing. comparison with a mature electric utility model, and calibration to historical emissions. The activities were aimed at demonstrating AUSM's validity over input va...

  11. Simulation for supporting scale-up of a fluidized bed reactor for advanced water oxidation.

    PubMed

    Tisa, Farhana; Raman, Abdul Aziz Abdul; Daud, Wan Mohd Ashri Wan

    2014-01-01

    Simulation of a fluidized bed reactor (FBR) was accomplished for treating wastewater using the Fenton reaction, which is an advanced oxidation process (AOP). The simulation was performed to determine characteristics of FBR performance, the concentration profile of the contaminants, and various prominent hydrodynamic properties (e.g., Reynolds number, velocity, and pressure) in the reactor. The simulation was implemented for a 2.8 L working volume using hydrodynamic correlations, the continuity equation, and simplified kinetic information for phenol degradation as a model. The simulation shows that, by using Fe(3+) and Fe(2+) mixtures as catalyst, TOC degradation of up to 45% was achieved for a contaminant range of 40-90 mg/L within 60 min. The concentration profiles and hydrodynamic characteristics were also generated. A subsequent scale-up study was also conducted using the similitude method. The analysis shows that the models developed are applicable up to a 10 L working volume. The study proves that, using appropriate modeling and simulation, data can be predicted for designing and operating an FBR for wastewater treatment. PMID:25309949
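
    The simplified kinetic picture used in such simulations can be sketched as pseudo-first-order degradation of total organic carbon (TOC); the rate constant below is an illustrative value chosen so that roughly 45% of the TOC is removed in 60 min, consistent with the result reported above:

    ```python
    # Forward-Euler integration of pseudo-first-order TOC degradation.
    # The rate constant is illustrative, not a fitted Fenton-kinetics parameter.
    k = 0.0100          # pseudo-first-order rate constant [1/min] (illustrative)
    dt = 0.1            # time step [min]
    t_end = 60.0

    toc = 90.0          # initial TOC [mg/L]
    history = [toc]
    for _ in range(int(t_end / dt)):
        toc += -k * toc * dt
        history.append(toc)

    removal = 1.0 - history[-1] / history[0]
    print(f"TOC removal after {t_end:.0f} min: {100 * removal:.1f}%")
    ```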

  12. Simulation for Supporting Scale-Up of a Fluidized Bed Reactor for Advanced Water Oxidation

    PubMed Central

    Abdul Raman, Abdul Aziz; Daud, Wan Mohd Ashri Wan

    2014-01-01

    Simulation of a fluidized bed reactor (FBR) was accomplished for treating wastewater using the Fenton reaction, which is an advanced oxidation process (AOP). The simulation was performed to determine characteristics of FBR performance, the concentration profile of the contaminants, and various prominent hydrodynamic properties (e.g., Reynolds number, velocity, and pressure) in the reactor. The simulation was implemented for a 2.8 L working volume using hydrodynamic correlations, the continuity equation, and simplified kinetic information for phenol degradation as a model. The simulation shows that, by using Fe3+ and Fe2+ mixtures as catalyst, TOC degradation of up to 45% was achieved for a contaminant range of 40–90 mg/L within 60 min. The concentration profiles and hydrodynamic characteristics were also generated. A subsequent scale-up study was also conducted using the similitude method. The analysis shows that the models developed are applicable up to a 10 L working volume. The study proves that, using appropriate modeling and simulation, data can be predicted for designing and operating an FBR for wastewater treatment. PMID:25309949

  13. Large eddy simulation of unsteady wind farm behavior using advanced actuator disk models

    NASA Astrophysics Data System (ADS)

    Moens, Maud; Duponcheel, Matthieu; Winckelmans, Gregoire; Chatelain, Philippe

    2014-11-01

    The present project aims at improving the level of fidelity of unsteady wind-farm-scale simulations through an effort on the representation and the modeling of the rotors. The chosen tool for the simulations is a fourth-order finite difference code, developed at Universite catholique de Louvain; this solver implements Large Eddy Simulation (LES) approaches. The wind turbines are modeled as advanced actuator disks: these disks are coupled with the Blade Element Momentum (BEM) method and also take into account the turbine dynamics and controller. A special effort is made here to reproduce the specific wake behaviors. Wake decay and expansion are indeed initially governed by vortex instabilities, information that cannot be obtained from the BEM calculations. We thus aim at achieving this by matching the large scales of the actuator disk flow to high-fidelity wake simulations produced using a Vortex Particle-Mesh method; this is obtained by adding a controlled excitation at the disk. We apply this tool to the investigation of atmospheric turbulence effects on the power production and on the wake behavior at the wind farm level. A turbulent velocity field is then used as the inflow boundary condition for the simulations. We gratefully acknowledge the support of GDF Suez for the fellowship of Mrs Maud Moens.

  14. [Objective surgery -- advanced robotic devices and simulators used for surgical skill assessment].

    PubMed

    Suhánszki, Norbert; Haidegger, Tamás

    2014-12-01

    Robotic assistance has become a leading trend in minimally invasive surgery, building on the global success of laparoscopic surgery. Manual laparoscopy requires advanced skills and capabilities, which are acquired through a tedious learning procedure, while da Vinci-type surgical systems offer intuitive control and advanced ergonomics. Nevertheless, in either case, the key issue is to be able to assess the surgeons' skills and capabilities objectively. Robotic devices offer a radically new way to collect data during surgical procedures, opening the space for new ways of skill parameterization. This may be revolutionary in MIS training, given the new and objective surgical curriculum and examination methods. The article reviews currently developed skill assessment techniques for robotic surgery and simulators, thoroughly inspecting their validation procedure and utility. In the coming years, these methods will become the mainstream of Western surgical education. PMID:25500641

  15. Development of Computational Approaches for Simulation and Advanced Controls for Hybrid Combustion-Gasification Chemical Looping

    SciTech Connect

    Joshi, Abhinaya; Lou, Xinsheng; Neuschaefer, Carl; Chaudry, Majid; Quinn, Joseph

    2012-07-31

    This document provides the results of the project through September 2009. The Phase I project has recently been extended from September 2009 to March 2011. The project extension will begin work on Chemical Looping (CL) prototype modeling and advanced control design exploration in preparation for a scale-up phase. The results to date include: successful development of dual-loop chemical looping process models and dynamic simulation software tools; development and testing of several advanced control concepts and applications for chemical looping transport control; and investigation of several sensor concepts, with two feasible sensor candidates established and recommended for further prototype development and controls integration. This summary and conclusions is presented in three sections. Section 1 presents the project scope and objectives. Section 2 highlights the detailed accomplishments by project task area. Section 3 provides conclusions to date and recommendations for future work.

  16. The Nuclear Energy Advanced Modeling and Simulation Enabling Computational Technologies FY09 Report

    SciTech Connect

    Diachin, L F; Garaizar, F X; Henson, V E; Pope, G

    2009-10-12

    In this document we report on the status of the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Enabling Computational Technologies (ECT) effort. In particular, we provide the context for ECT in the broader NEAMS program and describe the three pillars of the ECT effort, namely, (1) tools and libraries, (2) software quality assurance, and (3) computational facility (computers, storage, etc.) needs. We report on our FY09 deliverables to determine the needs of the integrated performance and safety codes (IPSCs) in these three areas and lay out the general plan for software quality assurance to meet the requirements of DOE and the DOE Advanced Fuel Cycle Initiative (AFCI). We conclude with a brief description of our interactions with the Idaho National Laboratory computer center to determine what is needed to expand their role as a NEAMS user facility.

  17. Simulations of Failure via Three-Dimensional Cracking in Fuel Cladding for Advanced Nuclear Fuels

    SciTech Connect

    Lu, Hongbing; Bukkapatnam, Satish; Harimkar, Sandip; Singh, Raman; Bardenhagen, Scott

    2014-01-09

    Enhancing performance of fuel cladding and duct alloys is a key means of increasing fuel burnup. This project will address the failure of fuel cladding via three-dimensional cracking models. Researchers will develop a simulation code for the failure of the fuel cladding and validate the code through experiments. The objective is to develop an algorithm to determine the failure of fuel cladding in the form of three-dimensional cracking due to prolonged exposure under varying conditions of pressure, temperature, chemical environment, and irradiation. This project encompasses the following tasks: 1. Simulate 3D crack initiation and growth under instantaneous and/or fatigue loads using a new variant of the material point method (MPM); 2. Simulate debonding of the materials in the crack path using cohesive elements, considering normal and shear traction separation laws; 3. Determine the crack propagation path, considering damage of the materials incorporated in the cohesive elements to allow the energy release rate to be minimized; 4. Simulate the three-dimensional fatigue crack growth as a function of loading histories; 5. Verify the simulation code by comparing results to theoretical and numerical studies available in the literature; 6. Conduct experiments to observe the crack path and surface profile in unused fuel cladding and validate against simulation results; and 7. Expand the adaptive mesh refinement infrastructure parallel processing environment to allow adaptive mesh refinement at the 3D crack fronts and adaptive mesh merging in the wake of cracks. Fuel cladding is made of materials such as stainless steels and ferritic steels with added alloying elements, which increase stability and durability under irradiation. As fuel cladding is subjected to water, chemicals, fission gas, pressure, high temperatures, and irradiation while in service, understanding performance is essential. In the fast fuel used in advanced burner reactors, simulations of the nuclear

  18. Generation of large scale urban environments to support advanced sensor and seeker simulation

    NASA Astrophysics Data System (ADS)

    Giuliani, Joseph; Hershey, Daniel; McKeown, David, Jr.; Willis, Carla; Van, Tan

    2009-05-01

    One of the key aspects for the design of a next generation weapon system is the need to operate in cluttered and complex urban environments. Simulation systems rely on accurate representation of these environments and require automated software tools to construct the underlying 3D geometry and associated spectral and material properties that are then formatted for various objective seeker simulation systems. Under an Air Force Small Business Innovative Research (SBIR) contract, we have developed an automated process to generate 3D urban environments with user defined properties. These environments can be composed from a wide variety of source materials, including vector source data, pre-existing 3D models, and digital elevation models, and rapidly organized into a geo-specific visual simulation database. This intermediate representation can be easily inspected in the visible spectrum for content and organization and interactively queried for accuracy. Once the database contains the required contents, it can then be exported into specific synthetic scene generation runtime formats, preserving the relationship between geometry and material properties. To date an exporter for the Irma simulation system developed and maintained by AFRL/Eglin has been created and a second exporter to Real Time Composite Hardbody and Missile Plume (CHAMP) simulation system for real-time use is currently being developed. This process supports significantly more complex target environments than previous approaches to database generation. In this paper we describe the capabilities for content creation for advanced seeker processing algorithms simulation and sensor stimulation, including the overall database compilation process and sample databases produced and exported for the Irma runtime system. We also discuss the addition of object dynamics and viewer dynamics within the visual simulation into the Irma runtime environment.

  19. Toward faster OPC convergence: advanced analysis for OPC iterations and simulation environment

    NASA Astrophysics Data System (ADS)

    Bahnas, Mohamed; Al-Imam, Mohamed; Tawfik, Tamer

    2008-10-01

    Achieving faster Turn-Around-Time (TAT) is one of the most attractive objectives for silicon wafer manufacturers, regardless of the technology node they are processing. This is valid for all active technology nodes, from 130 nm to the cutting-edge technologies. Several approaches have been adopted to cut down OPC simulation runtime without sacrificing OPC output quality; among them is using stronger CPU power and hardware acceleration, which makes good use of advancing processing technology. Another favorable approach for cutting down the runtime is to look deeper inside the OPC algorithm and the implemented OPC recipe. The OPC algorithm includes the convergence iterations and the simulation-site distribution, and the OPC recipe defines how to smartly tune the OPC knobs to use the implemented algorithm efficiently. Many previous works have monitored OPC convergence through the iterations and analyzed the size of the shift per iteration; similarly, several works have tried to calculate the amount of simulation capacity needed for all these iterations and how to reduce it. The scope of the work presented here is an attempt to decrease the number of optical simulations by reducing the number of control points per site without affecting OPC accuracy. The concept is supported by extensive simulation results and analysis. Implementing this flow illustrated the achievable simulation runtime reduction, which is reflected in faster TAT. Beyond runtime optimization, it also adds intelligence to the sparse OPC engine by eliminating the need to specify the optimum simulation site length.

  20. Neural network setpoint control of an advanced test reactor experiment loop simulation

    SciTech Connect

    Cordes, G.A.; Bryan, S.R.; Powell, R.H.; Chick, D.R.

    1990-09-01

    This report describes the design, implementation, and application of artificial neural networks to achieve temperature and flow rate control for a simulation of a typical experiment loop in the Advanced Test Reactor (ATR) located at the Idaho National Engineering Laboratory (INEL). The goal of the project was to research multivariate, nonlinear control using neural networks. A loop simulation code was adapted for the project and used to create a training set and test the neural network controller for comparison with the existing loop controllers. The results for three neural network designs are documented and compared with existing loop controller action. The neural network was shown to be as accurate at loop control as the classical controllers in the operating region represented by the training set. 9 refs., 28 figs., 2 tabs.

  1. Recent Advances in the Theory and Simulation of Model Colloidal Microphase Formers.

    PubMed

    Zhuang, Yuan; Charbonneau, Patrick

    2016-08-18

    This mini-review synthesizes our understanding of the equilibrium behavior of particle-based models with short-range attractive and long-range repulsive (SALR) interactions. These models, which can form stable periodic microphases, aim to reproduce the essence of colloidal suspensions with competing interparticle interactions. Ordered structures, however, have yet to be obtained in experiments. In order to better understand the hurdles to periodic microphase assembly, marked theoretical and simulation advances have been made over the past few years. Here, we present recent progress in the study of microphases in models with SALR interactions using liquid-state theory and density-functional theory as well as numerical simulations. Combining these various approaches provides a description of periodic microphases, and gives insights into the rich phenomenology of the surrounding disordered regime. Ongoing research directions in the thermodynamics of models with SALR interactions are also presented. PMID:27466702
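
    For readers unfamiliar with SALR models, a commonly used illustrative pair potential (not necessarily one of the specific models covered in the review) combines a short-range Lennard-Jones-like attraction with a long-range screened-Coulomb (Yukawa) repulsion; the parameter values below are placeholders.

      # Hedged sketch of a generic SALR pair potential: LJ attraction plus Yukawa repulsion.
      # Parameters (epsilon, sigma, A, xi) are illustrative, not taken from the cited review.
      import numpy as np

      def salr_potential(r, epsilon=1.0, sigma=1.0, A=0.2, xi=2.0):
          """Short-range attraction (Lennard-Jones) plus long-range repulsion (Yukawa)."""
          lj = 4.0 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)
          yukawa = A * epsilon * np.exp(-r / xi) / (r / sigma)
          return lj + yukawa

      r = np.linspace(0.9, 5.0, 500)
      u = salr_potential(r)
      print("attractive minimum near r =", r[np.argmin(u)])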

  2. Technical note: Large-eddy simulation of cloudy boundary layer with the Advanced Research WRF model

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Takanobu; Feingold, Graham

    2012-03-01

    A thorough evaluation of the large-eddy simulation (LES) mode of the Advanced Research WRF model is performed using three cloudy boundary layer cases developed as LES intercomparison cases by the GEWEX Cloud System Study. Our evaluation reveals two problems that must be recognized and carefully addressed before proceeding with production runs. These are (i) sensitivity of results to the prescribed number of acoustic time steps per physical time step; and (ii) the assumption of saturation adjustment in the initial cloudy state. A temporary but effective method for coping with these issues is suggested. With the proper treatment, the simulation results are comparable to the ensemble mean of the other LES models, and sometimes closer to the observational estimate than the ensemble mean. In order to ease the burden of configuration and post-processing, two new packages are developed and implemented. A detailed description of each package is presented. These packages are freely available to the public.

  3. Motion-base simulator results of advanced supersonic transport handling qualities with active controls

    NASA Technical Reports Server (NTRS)

    Feather, J. B.; Joshi, D. S.

    1981-01-01

    Handling qualities of the unaugmented advanced supersonic transport (AST) are deficient in the low-speed, landing approach regime. Consequently, improvement in handling with active control augmentation systems has been achieved using implicit model-following techniques. Extensive fixed-base simulator evaluations were used to validate these systems prior to tests with full motion and visual capabilities on a six-axis motion-base simulator (MBS). These tests compared the handling qualities of the unaugmented AST with several augmented configurations to ascertain the effectiveness of these systems. Cooper-Harper ratings, tracking errors, and control activity data from the MBS tests have been analyzed statistically. The results show the fully augmented AST handling qualities have been improved to an acceptable level.

  4. Advanced thermal energy management: A thermal test bed and heat pipe simulation

    NASA Technical Reports Server (NTRS)

    Barile, Ronald G.

    1986-01-01

    Work initiated on a common-module thermal test simulation was continued, and a second project on heat pipe simulation was begun. The test bed, constructed from surplus Skylab equipment, was modeled and solved for various thermal load and flow conditions. Low thermal load caused the radiator fluid, Coolanol 25, to thicken at low temperature; this was avoided by using a regenerator heat exchanger. Other possible solutions modeled include a radiator heater and shunting heat from the central thermal bus to the radiator. Also, module air temperature can become excessive with a high avionics load. A second project concerning advanced heat pipe concepts was initiated. A program was written which calculates fluid physical properties, liquid and vapor pressure in the evaporator and condenser, fluid flow rates, and thermal flux. The program is directed at evaluating newer heat pipe wicks and geometries, especially water in an artery surrounded by six vapor channels. Effects of temperature, groove and slot dimensions, and wick properties are reported.

  5. Strategic Plan for Nuclear Energy -- Knowledge Base for Advanced Modeling and Simulation (NE-KAMS)

    SciTech Connect

    Kimberlyn C. Mousseau

    2011-10-01

    The Nuclear Energy Computational Fluid Dynamics Advanced Modeling and Simulation (NE-CAMS) system is being developed at the Idaho National Laboratory (INL) in collaboration with Bettis Laboratory, Sandia National Laboratory (SNL), Argonne National Laboratory (ANL), Utah State University (USU), and other interested parties with the objective of developing and implementing a comprehensive and readily accessible data and information management system for computational fluid dynamics (CFD) verification and validation (V&V) in support of nuclear energy systems design and safety analysis. The two key objectives of the NE-CAMS effort are to identify, collect, assess, store and maintain high resolution and high quality experimental data and related expert knowledge (metadata) for use in CFD V&V assessments specific to the nuclear energy field and to establish a working relationship with the U.S. Nuclear Regulatory Commission (NRC) to develop a CFD V&V database, including benchmark cases, that addresses and supports the associated NRC regulations and policies on the use of CFD analysis. In particular, the NE-CAMS system will support the Department of Energy Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Program, which aims to develop and deploy advanced modeling and simulation methods and computational tools for reliable numerical simulation of nuclear reactor systems for design and safety analysis. Primary NE-CAMS Elements There are four primary elements of the NE-CAMS knowledge base designed to support computer modeling and simulation in the nuclear energy arena as listed below. Element 1. The database will contain experimental data that can be used for CFD validation that is relevant to nuclear reactor and plant processes, particularly those important to the nuclear industry and the NRC. Element 2. Qualification standards for data evaluation and classification will be incorporated and applied such that validation data sets will result in well

  6. Current Advances in the Computational Simulation of the Formation of Low-Mass Stars

    SciTech Connect

    Klein, R I; Inutsuka, S; Padoan, P; Tomisaka, K

    2005-10-24

    Developing a theory of low-mass star formation (~0.1 to 3 M⊙) remains one of the most elusive and important goals of theoretical astrophysics. The star-formation process is the outcome of the complex dynamics of interstellar gas involving non-linear interactions of turbulence, gravity, magnetic field and radiation. The evolution of protostellar condensations, from the moment they are assembled by turbulent flows to the time they reach stellar densities, spans an enormous range of scales, resulting in a major computational challenge for simulations. Since the previous Protostars and Planets conference, dramatic advances in the development of new numerical algorithmic techniques have been successfully implemented on large scale parallel supercomputers. Among such techniques, Adaptive Mesh Refinement and Smooth Particle Hydrodynamics have provided frameworks to simulate the process of low-mass star formation with a very large dynamic range. It is now feasible to explore the turbulent fragmentation of molecular clouds and the gravitational collapse of cores into stars self-consistently within the same calculation. The increased sophistication of these powerful methods comes with substantial caveats associated with the use of the techniques and the interpretation of the numerical results. In this review, we examine what has been accomplished in the field and present a critique of both numerical methods and scientific results. We stress that computational simulations should obey the available observational constraints and demonstrate numerical convergence. Failing this, results of large scale simulations do not advance our understanding of low-mass star formation.

  7. A demonstration of motion base design alternatives for the National Advanced Driving Simulator

    NASA Technical Reports Server (NTRS)

    Mccauley, Michael E.; Sharkey, Thomas J.; Sinacori, John B.; Laforce, Soren; Miller, James C.; Cook, Anthony

    1992-01-01

    A demonstration of the capability of NASA's Vertical Motion Simulator (VMS) to simulate two alternative motion base designs for the National Advanced Driving Simulator (NADS) is reported. The VMS is located at ARC. The motion base conditions used in this demonstration were as follows: (1) a large translational motion base; and (2) a motion base design with limited translational capability. The latter had translational capability representative of a typical synergistic motion platform. These alternatives were selected to test the prediction that large-amplitude translational motion would result in a lower incidence or severity of simulator-induced sickness (SIS) than would a limited translational motion base. A total of 10 drivers performed two tasks, slaloms and quick-stops, using each of the motion bases. Physiological, objective, and subjective measures were collected. No reliable differences in SIS between the motion base conditions were found in this demonstration. However, in light of the cost considerations and engineering challenges associated with implementing a large translational motion base, performance of a formal study is recommended.

  8. Advanced optical system simulation in a coupled CAD/optical analysis package

    NASA Astrophysics Data System (ADS)

    Stevenson, Michael A.; Campillo, Chris J.; Jenkins, David G.

    1999-05-01

    Software packages capable of simulating complex optical systems have the power to shorten the design process for non-imaging illumination, projection display, and other imaging illumination systems. Breault Research Organization's Advanced Systems Analysis Program (ASAP) and Robert McNeel and Associates' Rhinoceros computer-aided design software, together, allow complicated optical systems to be simulated and analyzed. Through the use of Rhinoceros, an optical system can be accurately modeled in a 3D design environment. ASAP is then used to assign optical properties to the Rhinoceros CAD model. After the optical system has been characterized, it can be analyzed and optimized by way of features specific to the ASAP optical analysis engine. Using this simulation technique, an HID arc source manufactured by Ushio America, Inc. is accurately represented. 2D CCD images are gathered of the source's emitting volume across its spectral bandwidth. The images are processed within ASAP, via the inverse Abel command, to produce a 3D emitting volume. This emitting volume is combined with an accurate model of the source geometry and its optical properties to finalize a functioning virtual source model. The characterized source is then joined with a simulated optical system for detailed performance analysis: namely, a projection display system.
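
    The inverse Abel step mentioned above reconstructs a radial emission profile from a line-of-sight-integrated (projected) image, assuming axial symmetry. A minimal numerical sketch of that transform is given below; it is a generic discretization for illustration, not the ASAP implementation.

      # Hedged sketch: numerical inverse Abel transform of a 1-D projection F(y),
      # recovering the radial profile f(r) of an axisymmetric emitting volume.
      #   f(r) = -(1/pi) * integral_r^inf [ dF/dy / sqrt(y^2 - r^2) ] dy
      import numpy as np

      def inverse_abel(F, dy):
          """Simple rectangle-rule evaluation of the inverse Abel integral."""
          n = len(F)
          y = np.arange(n) * dy
          dFdy = np.gradient(F, dy)
          f = np.zeros(n)
          for i, r in enumerate(y):
              yy = y[i + 1:]                 # integrate strictly above r to avoid the singularity
              integrand = dFdy[i + 1:] / np.sqrt(yy ** 2 - r ** 2)
              f[i] = -np.sum(integrand) * dy / np.pi
          return f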

  9. Nuclear Energy Advanced Modeling and Simulation Waste Integrated Performance and Safety Codes (NEAMS Waste IPSC).

    SciTech Connect

    Schultz, Peter Andrew

    2011-12-01

    The objective of the U.S. Department of Energy Office of Nuclear Energy Advanced Modeling and Simulation Waste Integrated Performance and Safety Codes (NEAMS Waste IPSC) is to provide an integrated suite of computational modeling and simulation (M&S) capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive-waste storage facility or disposal repository. Achieving the objective of modeling the performance of a disposal scenario requires describing processes involved in waste form degradation and radionuclide release at the subcontinuum scale, beginning with mechanistic descriptions of chemical reactions and chemical kinetics at the atomic scale, and upscaling into effective, validated constitutive models for input to high-fidelity continuum-scale codes for coupled multiphysics simulations of release and transport. Verification and validation (V&V) is required throughout the system to establish evidence-based metrics for the level of confidence in M&S codes and capabilities, including at the subcontinuum scale and the constitutive models they inform or generate. This report outlines the nature of the V&V challenge at the subcontinuum scale, an approach to incorporate V&V concepts into subcontinuum-scale modeling and simulation (M&S), and a plan to incrementally incorporate effective V&V into subcontinuum-scale M&S destined for use in the NEAMS Waste IPSC workflow to meet requirements of quantitative confidence in the constitutive models informed by subcontinuum-scale phenomena.

  10. Space-based radar representation in the advanced warfighting simulation (AWARS)

    NASA Astrophysics Data System (ADS)

    Phend, Andrew E.; Buckley, Kathryn; Elliott, Steven R.; Stanley, Page B.; Shea, Peter M.; Rutland, Jimmie A.

    2004-09-01

    Space and orbiting systems impact multiple battlefield operating systems (BOS). Space support to current operations is a perfect example of how the United States fights. Satellite-aided munitions, communications, navigation and weather systems combine to achieve military objectives in a relatively short amount of time. Through representation of space capabilities within models and simulations, the military will have the ability to train and educate officers and soldiers to fight from the high ground of space or to conduct analysis and determine the requirements or utility of transformed forces empowered with advanced space-based capabilities. The Army Vice Chief of Staff acknowledged deficiencies in space modeling and simulation during the September 2001 Space Force Management Analysis Review (FORMAL) and directed that a multi-disciplinary team be established to recommend a service-wide roadmap to address shortcomings. A Focus Area Collaborative Team (FACT), led by the U.S. Army Space & Missile Defense Command with participation across the Army, confirmed the weaknesses in scope, consistency, correctness, completeness, availability, and usability of space modeling and simulation (M&S) for Army applications. The FACT addressed the need to develop a roadmap to remedy Space M&S deficiencies using a highly parallelized process and schedule designed to support a recommendation during the Sep 02 meeting of the Army Model and Simulation Executive Council (AMSEC).

  11. A direct-execution parallel architecture for the Advanced Continuous Simulation Language (ACSL)

    NASA Technical Reports Server (NTRS)

    Carroll, Chester C.; Owen, Jeffrey E.

    1988-01-01

    A direct-execution parallel architecture for the Advanced Continuous Simulation Language (ACSL) is presented which overcomes the traditional disadvantages of simulations executed on a digital computer. The incorporation of parallel processing allows the mapping of simulations into a digital computer to be done in the same inherently parallel manner as they are currently mapped onto an analog computer. The direct-execution format maximizes the efficiency of the executed code since the need for a high-level language compiler is eliminated. Resolution is greatly increased over that which is available with an analog computer without the sacrifice in execution speed normally expected with digital computer simulations. Although this report covers all aspects of the new architecture, key emphasis is placed on the processing element configuration and the microprogramming of the ACSL constructs. The execution times for all ACSL constructs are computed using a model of a processing element based on the AMD 29000 CPU and the AMD 29027 FPU. The increase in execution speed provided by parallel processing is exemplified by comparing the derived execution times of two ACSL programs with the execution times for the same programs executed on a similar sequential architecture.

  12. Development of an advanced actuator disk model for Large-Eddy Simulation of wind farms

    NASA Astrophysics Data System (ADS)

    Moens, Maud; Duponcheel, Matthieu; Winckelmans, Gregoire; Chatelain, Philippe

    2015-11-01

    This work aims at improving the fidelity of the wind turbine modelling for Large-Eddy Simulation (LES) of wind farms, in order to accurately predict the loads, the production, and the wake dynamics. In those simulations, the wind turbines are accounted for through actuator disks, i.e., a body-force term acting over the regularised disk swept by the rotor. These forces are computed using the Blade Element theory to estimate the normal and tangential components (based on the local simulated flow and the blade characteristics). The local velocities are modified using the Glauert tip-loss factor in order to account for the finite number of blades; the computation of this correction is here improved thanks to a local estimation of the effective upstream velocity at every point of the disk. These advanced actuator disks are implemented in a 4th order finite difference LES solver and are compared to a classical Blade Element Momentum method and to high fidelity wake simulations performed using a Vortex Particle-Mesh method in uniform and turbulent flows.
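
    The tip-loss correction referred to above is commonly written in the Prandtl/Glauert form; the sketch below assumes that standard form (the locally estimated effective upstream velocity enters through the inflow angle) and uses illustrative inputs.

      # Hedged sketch: Prandtl/Glauert tip-loss factor for a B-bladed rotor, as commonly
      # used with Blade Element Momentum methods. Inputs are illustrative placeholders.
      import numpy as np

      def tip_loss_factor(r, R, B, phi):
          """F = (2/pi) * arccos(exp(-B * (R - r) / (2 * r * sin(phi)))), phi in radians."""
          f = B * (R - r) / (2.0 * r * np.sin(phi))
          return (2.0 / np.pi) * np.arccos(np.exp(-f))

      print(tip_loss_factor(r=45.0, R=50.0, B=3, phi=np.radians(8.0)))  # < 1, tends to 0 at the tip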

  13. Characterization and Simulation of the Thermoacoustic Instability Behavior of an Advanced, Low Emissions Combustor Prototype

    NASA Technical Reports Server (NTRS)

    DeLaat, John C.; Paxson, Daniel E.

    2008-01-01

    Extensive research is being done toward the development of ultra-low-emissions combustors for aircraft gas turbine engines. However, these combustors have an increased susceptibility to thermoacoustic instabilities. This type of instability was recently observed in an advanced, low emissions combustor prototype installed in a NASA Glenn Research Center test stand. The instability produces pressure oscillations that grow with increasing fuel/air ratio, preventing full power operation. The instability behavior makes the combustor a potentially useful test bed for research into active control methods for combustion instability suppression. The instability behavior was characterized by operating the combustor at various pressures, temperatures, and fuel and air flows representative of operation within an aircraft gas turbine engine. Trends in instability behavior versus operating condition have been identified and documented, and possible explanations for the trends provided. A simulation developed at NASA Glenn captures the observed instability behavior. The physics-based simulation includes the relevant physical features of the combustor and test rig, employs a Sectored 1-D approach, includes simplified reaction equations, and provides time-accurate results. A computationally efficient method is used for area transitions, which decreases run times and allows the simulation to be used for parametric studies, including control method investigations. Simulation results show that the simulation exhibits a self-starting, self-sustained combustion instability and also replicates the experimentally observed instability trends versus operating condition. Future plans are to use the simulation to investigate active control strategies to suppress combustion instabilities and then to experimentally demonstrate active instability suppression with the low emissions combustor prototype, enabling full power, stable operation.

  14. Report of the Workshop on Petascale Systems Integration for LargeScale Facilities

    SciTech Connect

    Kramer, William T.C.; Walter, Howard; New, Gary; Engle, Tom; Pennington, Rob; Comes, Brad; Bland, Buddy; Tomlison, Bob; Kasdorf, Jim; Skinner, David; Regimbal, Kevin

    2007-10-01

    There are significant issues regarding large-scale system integration that are not being addressed in other forums, such as current research portfolios or vendor user groups. Unfortunately, the issues in the area of large-scale system integration often fall into a netherworld: not research, not facilities, not procurement, not operations, not user services. Taken together, these issues, along with the impact of sub-optimal integration technology, mean the time required to deploy, integrate, and stabilize a large-scale system may consume up to 20 percent of the useful life of such systems. Improving the state of the art for large-scale systems integration has the potential to increase the scientific productivity of these systems. Sites have significant expertise, but there are no easy ways to leverage this expertise among them. Many issues inhibit the sharing of information, including available time and effort, as well as issues with sharing proprietary information. Vendors also benefit in the long run from the solutions to issues detected during site testing and integration. There is a great deal of enthusiasm for making large-scale system integration a full-fledged partner, along with the other major thrusts supported by funding agencies, in the definition, design, and use of petascale systems. Integration technology and issues should have a full 'seat at the table' as petascale and exascale initiatives and programs are planned. The workshop attendees identified a wide range of issues and suggested paths forward. Pursuing these with funding opportunities and innovation offers the opportunity to dramatically improve the state of large-scale system integration.

  15. A driver linac for the Advanced Exotic Beam Laboratory : physics design and beam dynamics simulations.

    SciTech Connect

    Ostroumov, P. N.; Mustapha, B.; Nolen, J.; Physics

    2007-01-01

    The Advanced Exotic Beam Laboratory (AEBL) being developed at ANL consists of an 833 MV heavy-ion driver linac capable of producing uranium ions up to 200 MeV/u and protons to 580 MeV with 400 kW beam power. We have designed all accelerator components, including a two-charge-state LEBT, an RFQ, a MEBT, a superconducting linac, a stripper station and chicane. We present the results of an optimized linac design and end-to-end simulations including machine errors and detailed beam loss analysis. The Advanced Exotic Beam Laboratory (AEBL) has been proposed at ANL as a reduced-scale version of the original Rare Isotope Accelerator (RIA) project with about half the cost but the same beam power. AEBL will address 90% or more of RIA physics but with reduced multi-user capabilities. The focus of this paper is the physics design and beam dynamics simulations of the AEBL driver linac. The reported results are for a multiple-charge-state 238U beam.

  16. A review on recent advances in the numerical simulation for coalbed-methane-recovery process

    SciTech Connect

    Wei, X.R.; Wang, G.X.; Massarotto, P.; Golding, S.D.; Rudolph, V.

    2007-12-15

    The recent advances in numerical simulation for primary coalbed methane (CBM) recovery and enhanced coalbed methane recovery (ECBMR) processes are reviewed, primarily focusing on the progress that has occurred since the late 1980s. Two major issues regarding the numerical modeling are discussed in this review: first, multicomponent gas transport in in-situ bulk coal and, second, changes in coal properties during methane (CH4) production. For the former issue, a detailed review of more recent advances in modeling gas and water transport within the coal matrix is presented. Further, various factors influencing gas diffusion through the coal matrix are highlighted as well, such as pore structure, concentration and pressure, and water effects. An ongoing bottleneck for evaluating the total mass transport rate is developing a reasonable representation of the multiscale pore space that considers coal type and rank. Moreover, few efforts have been concerned with modeling water-flow behavior in the coal matrix and its effects on CH4 production and on the exchange of carbon dioxide (CO2) and CH4. As for the second issue, theoretical coupled fluid-flow and geomechanical models have been proposed to describe the evolution of pore structure during CH4 production, instead of traditional empirical equations. However, there is currently no effective coupled model for engineering applications. Finally, perspectives on developing suitable simulation models for CBM production and for predicting CO2-sequestration ECBMR are suggested.
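
    As background for the matrix-transport discussion above, one classical baseline is the unipore (Fickian) description of gas diffusion out of an equivalent spherical coal-matrix block, whose fractional desorption is often written as below; it is quoted here only as common background, not as the formulation advocated by the review.

      \frac{M_t}{M_\infty} = 1 - \frac{6}{\pi^{2}} \sum_{n=1}^{\infty} \frac{1}{n^{2}}
      \exp\!\left(-\frac{n^{2}\pi^{2} D t}{a^{2}}\right),

    where D is the matrix diffusivity and a the radius of the equivalent spherical matrix block.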

  17. Computational Advances in the Arctic Terrestrial Simulator: Modeling Permafrost Degradation in a Warming Arctic

    NASA Astrophysics Data System (ADS)

    Coon, E.; Berndt, M.; Garimella, R.; Moulton, J. D.; Manzini, G.; Painter, S. L.

    2013-12-01

    The terrestrial Arctic has been a net sink of carbon for thousands of years, but warming trends suggest this may change. As the terrestrial Arctic warms, degradation of the permafrost results in significant melting of the ice wedges that support low-centered polygonal ground. This leads to subsidence of the topography, inversion of the polygonal ground, and restructuring of drainage networks. The change in hydrology and vegetation that result from these processes is poorly understood. Predictive simulation of the fate of this carbon is critical for understanding feedback effects between the terrestrial Arctic and climate change. Simulation of this system at fine scales presents many challenges. Flow and energy equations are solved on both the surface and subsurface domains, and deformation of the soil subsurface must couple with both. Additional processes such as snow, evapo-transpiration, and biogeochemistry supplement this THMC model. While globally implicit coupling methods enable conservation of mass and energy on the combined domain, care must be taken to ensure conservation as the soil subsides and the mesh deforms. Uncertainty in both critical physics of each process model and in coupling to maintain accuracy between processes suggests the need for a versatile many-physics framework. This framework should allow swapping of both processes and constitutive relations, and enable easy numerical experimentation of coupling strategies. Deformation dictates the need for advanced discretizations which maintain accuracy and a mesh framework capable of calculating smooth deformation with remapped fields. And latent heat introduces strong nonlinearities, requiring robust solvers and an efficient globalization strategy. Here we discuss advances as implemented in the Arctic Terrestrial Simulator (ATS), a many-physics framework and collection of physics kernels based upon Amanzi. We demonstrate the deformation capability, conserving mass and energy while simulating soil

  18. Annoyance response to simulated advanced turboprop aircraft interior noise containing tonal beats

    NASA Technical Reports Server (NTRS)

    Leatherwood, Jack D.

    1987-01-01

    A study is done to investigate the effects on subjective annoyance of simulated advanced turboprop (ATP) interior noise environments containing tonal beats. The simulated environments consisted of low-frequency tones superimposed on a turbulent-boundary-layer noise spectrum. The variables used in the study included propeller tone frequency (100 to 250 Hz), propeller tone levels (84 to 105 dB), and tonal beat frequency (0 to 1.0 Hz). Results indicated that propeller tones within the simulated ATP environment resulted in increased annoyance response that was fully predictable in terms of the increase in overall sound pressure level due to the tones. Implications for ATP aircraft include the following: (1) the interior noise environment with propeller tones is more annoying than an environment without tones if the tone is present at a level sufficient to increase the overall sound pressure level; (2) the increased annoyance due to the fundamental propeller tone frequency without harmonics is predictable from the overall sound pressure level; and (3) no additional noise penalty due to the perception of single discrete-frequency tones and/or beats was observed.

  19. Efficient Multi-Dimensional Simulation of Quantum Confinement Effects in Advanced MOS Devices

    NASA Technical Reports Server (NTRS)

    Biegel, Bryan A.; Rafferty, Conor S.; Ancona, Mario G.; Yu, Zhi-Ping

    2000-01-01

    We investigate the density-gradient (DG) transport model for efficient multi-dimensional simulation of quantum confinement effects in advanced MOS devices. The formulation of the DG model is described as a quantum correction to the classical drift-diffusion model. Quantum confinement effects are shown to be significant in sub-100nm MOSFETs. In thin-oxide MOS capacitors, quantum effects may reduce gate capacitance by 25% or more. As a result, the inclusion of quantum effects in simulations dramatically improves the match between C-V simulations and measurements for oxide thickness down to 2 nm. Significant quantum corrections also occur in the I-V characteristics of short-channel (30 to 100 nm) n-MOSFETs, with current drive reduced by up to 70%. This effect is shown to result from reduced inversion charge due to quantum confinement of electrons in the channel. Also, subthreshold slope is degraded by 15 to 20 mV/decade with the inclusion of quantum effects via the density-gradient model, and short channel effects (in particular, drain-induced barrier lowering) are noticeably increased.
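
    For reference, a commonly quoted form of the density-gradient correction (coefficient and sign conventions vary between references, and the paper's exact formulation may differ) introduces a quantum potential built from the curvature of the square root of the carrier density,

      \Lambda_n = 2\, b_n\, \frac{\nabla^{2}\sqrt{n}}{\sqrt{n}}, \qquad
      b_n = \frac{\hbar^{2}}{12\, q\, m_n^{*}},

    which is added to the potential seen by the electrons in an otherwise standard drift-diffusion current equation; here n is the electron density, m_n^{*} the electron effective mass, and q the elementary charge.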

  20. Efficient Multi-Dimensional Simulation of Quantum Confinement Effects in Advanced MOS Devices

    NASA Technical Reports Server (NTRS)

    Biegel, Bryan A.; Ancona, Mario G.; Rafferty, Conor S.; Yu, Zhiping

    2000-01-01

    We investigate the density-gradient (DG) transport model for efficient multi-dimensional simulation of quantum confinement effects in advanced MOS devices. The formulation of the DG model is described as a quantum correction to the classical drift-diffusion model. Quantum confinement effects are shown to be significant in sub-100nm MOSFETs. In thin-oxide MOS capacitors, quantum effects may reduce gate capacitance by 25% or more. As a result, the inclusion of quantum effects in simulations dramatically improves the match between C-V simulations and measurements for oxide thickness down to 2 nm. Significant quantum corrections also occur in the I-V characteristics of short-channel (30 to 100 nm) n-MOSFETs, with current drive reduced by up to 70%. This effect is shown to result from reduced inversion charge due to quantum confinement of electrons in the channel. Also, subthreshold slope is degraded by 15 to 20 mV/decade with the inclusion of quantum effects via the density-gradient model, and short channel effects (in particular, drain-induced barrier lowering) are noticeably increased.

  1. Recent advances in computational methodology for simulation of mechanical circulatory assist devices

    PubMed Central

    Marsden, Alison L.; Bazilevs, Yuri; Long, Christopher C.; Behr, Marek

    2014-01-01

    Ventricular assist devices (VADs) provide mechanical circulatory support to offload the work of one or both ventricles during heart failure. They are used in the clinical setting as destination therapy, as bridge to transplant, or more recently as bridge to recovery to allow for myocardial remodeling. Recent developments in computational simulation allow for detailed assessment of VAD hemodynamics for device design and optimization for both children and adults. Here, we provide a focused review of the recent literature on finite element methods and optimization for VAD simulations. As VAD designs typically fall into two categories, pulsatile and continuous flow devices, we separately address computational challenges of both types of designs, and the interaction with the circulatory system with three representative case studies. In particular, we focus on recent advancements in finite element methodology that has increased the fidelity of VAD simulations. We outline key challenges, which extend to the incorporation of biological response such as thrombosis and hemolysis, as well as shape optimization methods and challenges in computational methodology. PMID:24449607

  2. Design and development of a virtual reality simulator for advanced cardiac life support training.

    PubMed

    Vankipuram, Akshay; Khanal, Prabal; Ashby, Aaron; Vankipuram, Mithra; Gupta, Ashish; DrummGurnee, Denise; Josey, Karen; Smith, Marshall

    2014-07-01

    The use of virtual reality (VR) training tools for medical education could lead to improvements in the skills of clinicians while providing economic incentives for healthcare institutions. The use of VR tools can also mitigate some of the drawbacks currently associated with providing medical training in a traditional clinical environment such as scheduling conflicts and the need for specialized equipment (e.g., high-fidelity manikins). This paper presents the details of the framework and the development methodology associated with a VR-based training simulator for advanced cardiac life support, a time-critical, team-based medical scenario. In addition, we also report the key findings of a usability study conducted to assess the efficacy of various features of this VR simulator through a postuse questionnaire administered to various care providers. The usability questionnaires were completed by two groups that used two different versions of the VR simulator: one version consisted of the VR trainer with all its features, while the other was a minified version with certain immersive features disabled. We found an increase in usability scores from the minified group to the full VR group. PMID:24122608

  3. Simulation of Thin-Film Damping and Thermal Mechanical Noise Spectra for Advanced Micromachined Microphone Structures

    PubMed Central

    Hall, Neal A.; Okandan, Murat; Littrell, Robert; Bicen, Baris; Degertekin, F. Levent

    2008-01-01

    In many micromachined sensors the thin (2–10 μm thick) air film between a compliant diaphragm and backplate electrode plays a dominant role in shaping both the dynamic and thermal noise characteristics of the device. Silicon microphone structures used in grating-based optical-interference microphones have recently been introduced that employ backplates with minimal area to achieve low damping and low thermal noise levels. Finite-element based modeling procedures based on 2-D discretization of the governing Reynolds equation are ideally suited for studying thin-film dynamics in such structures which utilize relatively complex backplate geometries. In this paper, the dynamic properties of both the diaphragm and thin air film are studied using a modal projection procedure in a commonly used finite element software and the results are used to simulate the dynamic frequency response of the coupled structure to internally generated electrostatic actuation pressure. The model is also extended to simulate thermal mechanical noise spectra of these advanced sensing structures. In all cases simulations are compared with measured data and show excellent agreement—demonstrating 0.8 pN/√Hz and 1.8 μPa/√Hz thermal force and thermal pressure noise levels, respectively, for the 1.5 mm diameter structures under study which have a fundamental diaphragm resonance-limited bandwidth near 20 kHz. PMID:19081811
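
    For context, thin-film (squeeze-film) damping analyses of this kind typically start from the isothermal compressible Reynolds equation for the air gap; a standard linearized small-signal form is sketched below as a reminder, noting that the paper's 2-D finite-element discretization, and any perforation or rarefaction corrections, may differ.

      \nabla\cdot\!\left(\frac{h_{0}^{3} P_{a}}{12\,\mu_{\mathrm{eff}}}\,\nabla \tilde{p}\right)
      = h_{0}\,\frac{\partial \tilde{p}}{\partial t} + P_{a}\,\frac{\partial \tilde{h}}{\partial t},

    where h_{0} is the nominal gap, P_{a} the ambient pressure, \mu_{\mathrm{eff}} an effective (possibly rarefaction-corrected) viscosity, and \tilde{p}, \tilde{h} the small pressure and gap perturbations.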

  4. Monte Carlo simulations of the vacuum performance of differential pumps at the Advanced Photon Source

    NASA Astrophysics Data System (ADS)

    Liu, C.; Shu, D.; Kuzay, T. M.; Kersevan, R.

    1996-09-01

    Monte Carlo computer simulations have been successfully applied in the design of vacuum systems. These simulations allow the user to check the vacuum performance without the need of making a prototype of the vacuum system. In this paper we demonstrate the effectiveness and aptitude of these simulations in the design of differential pumps for synchrotron radiation beamlines. Eventually a good number of the beamline front ends at the Advanced Photon Source (APS) will use differential pumps to protect the synchrotron storage ring vacuum. A Monte Carlo computer program is used to calculate the molecular flow transmission and pressure distribution across the differential pump. A differential pump system, which consists of two 170 l/s ion pumps with three conductance-limiting apertures, was previously tested on an APS insertion-device beamline front end. Pressure distribution measurements using controlled leaks demonstrated a pressure difference of over two decades across the differential pump. A new differential pump utilizes a fixed mask between two 170 l/s ion pumps. The fixed mask, which has a conical channel with a small cross section of 4.5×4.5 mm² in the far end, is used in the beamline to confine the photon beam. Monte Carlo simulations indicate that this configuration with the fixed mask significantly improves the pressure reduction capability of the differential pump, to ~3×10⁻⁵, within the operational range from ~10⁻⁴ to 10⁻¹⁰ Torr. The lower end of pressure is limited by outgassing from front-end components and the higher end by the pumping ability of the ion pump.
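
    A minimal test-particle sketch of the kind of free-molecular-flow calculation described above: molecules are launched from the entrance with a cosine-law distribution, traced through a cylindrical tube, re-emitted diffusely at every wall collision, and counted if they reach the exit; the fraction transmitted approximates the Clausing factor. This is a generic illustration only, not the program used at the APS, and the geometry is a plain tube rather than the aperture/mask stack of the actual differential pump.

      # Hedged sketch: test-particle Monte Carlo estimate of the molecular-flow
      # transmission probability (Clausing factor) of a cylindrical tube.
      import numpy as np

      rng = np.random.default_rng(0)

      def cosine_direction():
          """Sample a diffuse (cosine-law) emission direction about the local +z normal."""
          phi = 2.0 * np.pi * rng.random()
          sin_t = np.sqrt(rng.random())          # cosine-law polar angle
          cos_t = np.sqrt(1.0 - sin_t ** 2)
          return np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])

      def transmission(length, radius, n=20000):
          passed = 0
          for _ in range(n):
              # launch uniformly over the entrance disk, directed into the tube (+z)
              rr = radius * np.sqrt(rng.random()); a = 2.0 * np.pi * rng.random()
              pos = np.array([rr * np.cos(a), rr * np.sin(a), 0.0])
              d = cosine_direction()
              while True:
                  # distance along d to the cylindrical wall: |pos_xy + t * d_xy| = radius
                  A = d[0] ** 2 + d[1] ** 2
                  if A < 1e-12:
                      t_wall = np.inf
                  else:
                      B = pos[0] * d[0] + pos[1] * d[1]
                      C = pos[0] ** 2 + pos[1] ** 2 - radius ** 2
                      t_wall = (-B + np.sqrt(max(B * B - A * C, 0.0))) / A
                  z_hit = pos[2] + t_wall * d[2]
                  if z_hit >= length:            # crosses the exit plane before the wall
                      passed += 1
                      break
                  if z_hit <= 0.0:               # returns out through the entrance
                      break
                  # diffuse re-emission from the wall about the inward normal
                  pos = pos + t_wall * d
                  normal = np.array([-pos[0], -pos[1], 0.0]); normal /= np.linalg.norm(normal)
                  t1 = np.array([-normal[1], normal[0], 0.0]); t2 = np.array([0.0, 0.0, 1.0])
                  s = cosine_direction()
                  d = s[0] * t1 + s[1] * t2 + s[2] * normal
          return passed / n

      print(transmission(length=1.0, radius=1.0))   # ~0.67 expected for L/R = 1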

  5. Advances in simulating radiance signatures for dynamic air/water interfaces

    NASA Astrophysics Data System (ADS)

    Goodenough, Adam A.; Brown, Scott D.; Gerace, Aaron

    2015-05-01

    The air-water interface poses a number of problems for both collecting and simulating imagery. At the surface, the magnitude of observed radiance can change by multiple orders of magnitude at high spatiotemporal frequency due to glinting effects. In the volume, similarly high frequency focusing of photons by a dynamic wave surface significantly changes the reflected radiance of in-water objects and the scattered return of the volume itself. These phenomena are often manifest as saturated pixels and artifacts in collected imagery (often enhanced by time delays between neighboring pixels or interpolation between adjacent filters) and as noise and greater required computation times in simulated imagery. This paper describes recent advances made to the Digital Image and Remote Sensing Image Generation (DIRSIG) model to address the simulation issues to better facilitate an understanding of a multi/hyper-spectral collection. Glint effects are simulated using a dynamic height field that can be driven by wave frequency models and generates a sea state at arbitrary time scales. The volume scattering problem is handled by coupling the geometry representing the surface (facetization by the height field) with the single scattering contribution at any point in the water. The problem is constrained somewhat by assuming that contributions come from a Snell's window above the scattering point and by assuming a direct source (sun). Diffuse single scattered and multiple scattered energy contributions are handled by Monte Carlo techniques employed previously. The model is compared to existing radiative transfer codes where possible, with the objective of providing a robust model of time-dependent absolute radiance at many wavelengths.
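
    One concrete piece of the in-water geometry mentioned above is the Snell's window constraint: from a point below a flat surface, directly transmitted sky and sun light can only arrive within a cone whose half-angle is the critical angle of the interface. A two-line sketch, assuming a refractive index of about 1.34 for sea water:

      # Hedged sketch: half-angle of the Snell's window below a flat air-water interface.
      import math

      n_water = 1.34                                   # assumed refractive index of sea water
      print(f"Snell's window half-angle ~ {math.degrees(math.asin(1.0 / n_water)):.1f} deg")  # ~48 deg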

  6. Advanced Maintenance Simulation by Means of Hand-Based Haptic Interfaces

    NASA Astrophysics Data System (ADS)

    Nappi, Michele; Paolino, Luca; Ricciardi, Stefano; Sebillo, Monica; Vitiello, Giuliana

    Aerospace industry has been involved in virtual simulation for design and testing since the birth of virtual reality. Today this industry is showing a growing interest in the development of haptic-based maintenance training applications, which represent the most advanced way to simulate maintenance and repair tasks within a virtual environment by means of a visual-haptic approach. The goal is to allow the trainee to experience the service procedures not only as a workflow reproduced at a visual level but also in terms of the kinaesthetic feedback involved with the manipulation of tools and components. This study, conducted in collaboration with aerospace industry specialists, is aimed at the development of an immersive virtual environment capable of immersing the trainees into a virtual setting where mechanics and technicians can perform maintenance simulation or training tasks by directly manipulating 3D virtual models of aircraft parts while perceiving force feedback through the haptic interface. The proposed system is based on ViRstperson, a virtual reality engine under development at the Italian Center for Aerospace Research (CIRA) to support engineering and technical activities such as design-time maintenance procedure validation, and maintenance training. This engine has been extended to support haptic-based interaction, enabling a more complete level of interaction, also in terms of impedance control, and thus fostering the development of haptic knowledge in the user. The user’s “sense of touch” within the immersive virtual environment is simulated through an Immersion CyberForce® hand-based force-feedback device. Preliminary testing of the proposed system seems encouraging.

  7. Exploring the use of standardized patients for simulation-based learning in preparing advanced practice nurses.

    PubMed

    Kowitlawakul, Yanika; Chow, Yeow Leng; Salam, Zakir Hussian Abdul; Ignacio, Jeanette

    2015-07-01

    The use of standardized patients for simulation-based learning was integrated into the Master of Nursing curriculum in the 2012-2013 academic year. The study aimed to explore the Master of Nursing students' experiences with and perceptions of using standardized patients in simulations, and to identify the students' learning needs in preparing to become advanced practice nurses. The study adopted an exploratory descriptive qualitative design, using a focus group interview. The study was conducted at a university in Singapore. Seven Master of Nursing students who were enrolled in the Acute Care Track of Master of Nursing program in the 2012-2013 academic year participated in the study. The data were gathered at the end of the first semester. Content analysis was used to analyze the data. Three main categories - usefulness, clinical limitations, and realism - were identified in the study. The results revealed that the students felt using standardized patients was useful and realistic for developing skills in history taking, communication, and responding to an emergency situation. On the other hand, they found that the standardized patients were limited in providing critical signs and symptoms of case scenarios. To meet the learning objectives, future development and integration of standardized patients in the Master of Nursing curriculum might need to be considered along with the use of a high-fidelity simulator. This can be an alternative strategy to fill the gaps in each method. Obviously, using standardized patients for simulation-based learning has added value to the students' learning experiences. It is highly recommended that future studies explore the impact of using standardized patients on students' performance in clinical settings. PMID:25819268

  8. Advances in edge-diffraction modeling for virtual-acoustic simulations

    NASA Astrophysics Data System (ADS)

    Calamia, Paul Thomas

    In recent years there has been growing interest in modeling sound propagation in complex, three-dimensional (3D) virtual environments. With diverse applications for the military, the gaming industry, psychoacoustics researchers, architectural acousticians, and others, advances in computing power and 3D audio-rendering techniques have driven research and development aimed at closing the gap between the auralization and visualization of virtual spaces. To this end, this thesis focuses on improving the physical and perceptual realism of sound-field simulations in virtual environments through advances in edge-diffraction modeling. To model sound propagation in virtual environments, acoustical simulation tools commonly rely on geometrical-acoustics (GA) techniques that assume asymptotically high frequencies, large flat surfaces, and infinitely thin ray-like propagation paths. Such techniques can be augmented with diffraction modeling to compensate for the effect of surface size on the strength and directivity of a reflection, to allow for propagation around obstacles and into shadow zones, and to maintain soundfield continuity across reflection and shadow boundaries. Using a time-domain, line-integral formulation of the Biot-Tolstoy-Medwin (BTM) diffraction expression, this thesis explores various aspects of diffraction calculations for virtual-acoustic simulations. Specifically, we first analyze the periodic singularity of the BTM integrand and describe the relationship between the singularities and higher-order reflections within wedges with open angle less than 180°. Coupled with analytical approximations for the BTM expression, this analysis allows for accurate numerical computations and a continuous sound field in the vicinity of an arbitrary wedge geometry insonified by a point source. Second, we describe an edge-subdivision strategy that allows for fast diffraction calculations with low error relative to a numerically more accurate solution. Third, to address

  9. Strategic Plan for Nuclear Energy -- Knowledge Base for Advanced Modeling and Simulation (NE-KAMS)

    SciTech Connect

    Rich Johnson; Kimberlyn C. Mousseau; Hyung Lee

    2011-09-01

    NE-KAMS knowledge base will assist computational analysts, physics model developers, experimentalists, nuclear reactor designers, and federal regulators by: (1) Establishing accepted standards, requirements and best practices for V&V and UQ of computational models and simulations, (2) Establishing accepted standards and procedures for qualifying and classifying experimental and numerical benchmark data, (3) Providing readily accessible databases for nuclear energy related experimental and numerical benchmark data that can be used in V&V assessments and computational methods development, (4) Providing a searchable knowledge base of information, documents and data on V&V and UQ, and (5) Providing web-enabled applications, tools and utilities for V&V and UQ activities, data assessment and processing, and information and data searches. From its inception, NE-KAMS will directly support nuclear energy research, development and demonstration programs within the U.S. Department of Energy (DOE), including the Consortium for Advanced Simulation of Light Water Reactors (CASL), the Nuclear Energy Advanced Modeling and Simulation (NEAMS), the Light Water Reactor Sustainability (LWRS), the Small Modular Reactors (SMR), and the Next Generation Nuclear Power Plant (NGNP) programs. These programs all involve computational modeling and simulation (M&S) of nuclear reactor systems, components and processes, and it is envisioned that NE-KAMS will help to coordinate and facilitate collaboration and sharing of resources and expertise for V&V and UQ across these programs. In addition, from the outset, NE-KAMS will support the use of computational M&S in the nuclear industry by developing guidelines and recommended practices aimed at quantifying the uncertainty and assessing the applicability of existing analysis models and methods. The NE-KAMS effort will initially focus on supporting the use of computational fluid dynamics (CFD) and thermal hydraulics (T/H) analysis for M&S of nuclear

  10. Advanced Simulation and Computing Fiscal Year 2016 Implementation Plan, Version 0

    SciTech Connect

    McCoy, M.; Archer, B.; Hendrickson, B.

    2015-08-27

    The Stockpile Stewardship Program (SSP) is an integrated technical program for maintaining the safety, surety, and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational capabilities to support these programs. The purpose of this IP is to outline key work requirements to be performed and to control individual work activities within the scope of work. Contractors may not deviate from this plan without a revised WA or subsequent IP.

  11. Simulations of Liners and Test Objects for a New Atlas Advanced Radiography Source

    SciTech Connect

    D. V. Morgan; S. Iversen; R. A. Hilko

    2002-06-01

    The Advanced Radiographic Source (ARS) will improve the data significantly due to its smaller source width. Because of the enhanced ARS output, larger source-to-object distances are a reality. The harder ARS source will allow radiography of thick high-Z targets. The five different spectral simulations resulted in similar imaging detector weighted transmission. This work used a limited set of test objects and imaging detectors. Other test objects and imaging detectors could possibly change the MVp-sensitivity result. The effect of material motion blur must be considered for the ARS due to the expected smaller X-ray source size. This study supports the original 1.5-MVp value.

  12. Photocatalytic removal of microcystin-LR by advanced WO3-based nanoparticles under simulated solar light.

    PubMed

    Zhao, Chao; Li, Dawei; Liu, Yonggang; Feng, Chuanping; Zhang, Zhenya; Sugiura, Norio; Yang, Yingnan

    2015-01-01

    A series of advanced WO3-based photocatalysts including CuO/WO3, Pd/WO3, and Pt/WO3 were synthesized for the photocatalytic removal of microcystin-LR (MC-LR) under simulated solar light. In the present study, Pt/WO3 exhibited the best performance for the photocatalytic degradation of MC-LR. The MC-LR degradation can be described by pseudo-first-order kinetic model. Chloride ion (Cl-) with proper concentration could enhance the MC-LR degradation. The presence of metal cations (Cu2+ and Fe3+) improved the photocatalytic degradation of MC-LR. This study suggests that Pt/WO3 photocatalytic oxidation under solar light is a promising option for the purification of water containing MC-LR. PMID:25884038
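
    The pseudo-first-order kinetics invoked above correspond to C(t) = C0*exp(-k_app*t), so ln(C0/C) grows linearly with irradiation time. A minimal fitting sketch in Python, using placeholder concentrations rather than data from this study:

```python
import numpy as np

# Hypothetical MC-LR concentrations (ug/L) vs. irradiation time (min);
# placeholder values for illustration only, not measurements from the study.
t = np.array([0.0, 10.0, 20.0, 30.0, 60.0])
c = np.array([10.0, 7.4, 5.5, 4.1, 1.7])

# Pseudo-first-order model: C(t) = C0 * exp(-k_app * t)
# => ln(C0 / C) = k_app * t, so fit a straight line through the origin.
y = np.log(c[0] / c)
k_app = np.sum(t * y) / np.sum(t * t)      # least-squares slope, zero intercept

half_life = np.log(2.0) / k_app
print(f"apparent rate constant k_app = {k_app:.4f} 1/min")
print(f"half-life = {half_life:.1f} min")
```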

  13. Photocatalytic Removal of Microcystin-LR by Advanced WO3-Based Nanoparticles under Simulated Solar Light

    PubMed Central

    Zhao, Chao; Li, Dawei; Feng, Chuanping; Zhang, Zhenya; Sugiura, Norio; Yang, Yingnan

    2015-01-01

    A series of advanced WO3-based photocatalysts including CuO/WO3, Pd/WO3, and Pt/WO3 were synthesized for the photocatalytic removal of microcystin-LR (MC-LR) under simulated solar light. In the present study, Pt/WO3 exhibited the best performance for the photocatalytic degradation of MC-LR. The MC-LR degradation can be described by pseudo-first-order kinetic model. Chloride ion (Cl−) with proper concentration could enhance the MC-LR degradation. The presence of metal cations (Cu2+ and Fe3+) improved the photocatalytic degradation of MC-LR. This study suggests that Pt/WO3 photocatalytic oxidation under solar light is a promising option for the purification of water containing MC-LR. PMID:25884038

  14. Advanced adaptive computational methods for Navier-Stokes simulations in rotorcraft aerodynamics

    NASA Technical Reports Server (NTRS)

    Stowers, S. T.; Bass, J. M.; Oden, J. T.

    1993-01-01

    A phase 2 research and development effort was conducted in the area of transonic, compressible, inviscid flows, with the ultimate goal of numerically modeling complex flows inherent in advanced helicopter blade designs. The algorithms and methodologies developed are classified as adaptive methods: they use error-estimation techniques to approximate the local numerical error and automatically refine or unrefine the mesh so as to deliver a given level of accuracy. The result is a scheme that attempts to produce the best possible results with the least number of grid points, degrees of freedom, and operations. Such schemes automatically locate and resolve shocks, shear layers, and other flow details to an accuracy level specified by the user of the code. The phase 1 work involved a feasibility study of h-adaptive methods for steady viscous flows, with emphasis on accurate simulation of vortex initiation, migration, and interaction. The phase 2 effort focused on extending these algorithms and methodologies to a three-dimensional topology.
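
    As a generic illustration of the h-adaptive idea summarized above (not the authors' actual scheme), the sketch below bisects 1D mesh elements wherever a local error indicator exceeds a tolerance, concentrating points around a steep gradient:

```python
import numpy as np

def refine_1d(nodes, error_indicator, tol):
    """One h-adaptation pass: bisect elements whose indicator exceeds tol."""
    new_nodes = [nodes[0]]
    for a, b in zip(nodes[:-1], nodes[1:]):
        if error_indicator(a, b) > tol:
            new_nodes.append(0.5 * (a + b))   # refine: insert the midpoint
        new_nodes.append(b)
    return np.array(new_nodes)

# Example: resolve a steep gradient (a crude stand-in for a shock) near x = 0.5.
f = lambda x: np.tanh(50.0 * (x - 0.5))
indicator = lambda a, b: abs(f(b) - f(a))      # jump of f across the element

mesh = np.linspace(0.0, 1.0, 11)
for _ in range(5):                              # several adaptation passes
    mesh = refine_1d(mesh, indicator, tol=0.2)
print(f"{mesh.size} nodes, smallest element = {np.diff(mesh).min():.4f}")
```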

  15. A Computational Methodology for Simulating Thermal Loss Testing of the Advanced Stirling Convertor

    NASA Technical Reports Server (NTRS)

    Reid, Terry V.; Wilson, Scott D.; Schifer, Nicholas A.; Briggs, Maxwell H.

    2012-01-01

    The U.S. Department of Energy (DOE) and Lockheed Martin Space Systems Company (LMSSC) have been developing the Advanced Stirling Radioisotope Generator (ASRG) for use as a power system for space science missions. This generator would use two high-efficiency Advanced Stirling Convertors (ASCs), developed by Sunpower Inc. and NASA Glenn Research Center (GRC). The ASCs convert thermal energy from a radioisotope heat source into electricity. As part of ground testing of these ASCs, different operating conditions are used to simulate expected mission conditions. These conditions require achieving a particular operating frequency, hot end and cold end temperatures, and specified electrical power output for a given net heat input. In an effort to improve net heat input predictions, numerous tasks have been performed which provided a more accurate value for net heat input into the ASCs, including the use of multi-dimensional numerical models. Validation test hardware has also been used to provide a direct comparison of numerical results and validate the multi-dimensional numerical models used to predict convertor net heat input and efficiency. These validation tests were designed to simulate the temperature profile of an operating Stirling convertor and resulted in a measured net heat input of 244.4 W. The methodology was applied to the multi-dimensional numerical model, which resulted in a net heat input of 240.3 W. The computational methodology resulted in a value of net heat input that was 1.7 percent less than that measured during laboratory testing. The resulting computational methodology and results are discussed.
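
    The quoted 1.7 percent figure can be checked directly from the two net-heat-input values:

```python
measured = 244.4   # W, from the validation test hardware
computed = 240.3   # W, from the multi-dimensional numerical model

rel_diff = (measured - computed) / measured * 100.0
print(f"computed value is {rel_diff:.1f}% below the measured value")  # ~1.7%
```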

  16. Advanced Simulation and Computing: A Summary Report to the Director's Review

    SciTech Connect

    McCoy, M G; Peck, T

    2003-06-01

    It has now been three years since the Advanced Simulation and Computing Program (ASCI), as managed by the Defense and Nuclear Technologies (DNT) Directorate, was reviewed by this Director's Review Committee (DRC). Since that time, there has been considerable progress for all components of the ASCI Program, and these developments will be highlighted in this document and in the presentations planned for June 9 and 10, 2003. There have also been some name changes. Today, the Program is called ''Advanced Simulation and Computing.'' Although it retains the familiar acronym ASCI, the initiative nature of the effort has given way to sustained services as an integral part of the Stockpile Stewardship Program (SSP). All computing efforts at LLNL and the other two Defense Program (DP) laboratories are funded and managed under ASCI. This includes the so-called legacy codes, which remain essential tools in stockpile stewardship. The contract between the Department of Energy (DOE) and the University of California (UC) specifies an independent appraisal of Directorate technical work and programmatic management. Such an appraisal is the work of this DNT Review Committee. Beginning this year, the Laboratory is implementing a new review system. This process was negotiated between UC, the National Nuclear Security Administration (NNSA), and the Laboratory Directors. Central to this approach are eight performance objectives that focus on key programmatic and administrative goals. Associated with each of these objectives are a number of performance measures to more clearly characterize the attainment of the objectives. Each performance measure has a lead directorate and one or more contributing directorates. Each measure has an evaluation plan and has identified expected documentation to be included in the ''Assessment File''.

  17. The SCEC Petascale Cyberfacility for Physics-based Seismic Hazard Analysis (PetaSHA): Accelerating SCEC Research Using High Performance Computing

    NASA Astrophysics Data System (ADS)

    Maechling, P. J.; Jordan, T. H.; Kesselman, C.; Moore, R.; Minster, J. B.; SCEC Collaboration

    2007-12-01

    The SCEC Community Modeling Environment (SCEC/CME) collaboration is extending SCEC's program of seismic hazard research using high performance computing with the NSF-funded Petascale Cyberfacility for Physics-based Seismic Hazard Analysis (PetaSHA) Project. The SCEC PetaSHA project is a collaboration of geoscientists and computer scientists that integrates geophysical numerical modeling codes with leading-edge cyberinfrastructure to perform seismic hazard research at large scales and high resolution using national academic supercomputing facilities. The PetaSHA computational capabilities are organized around the development of robust, re-usable, well-validated simulation systems we call computational platforms. Researchers on the PetaSHA Project are currently developing the DynaShake Platform (dynamic rupture simulations), the TeraShake Platform (wave propagation simulations), the CyberShake Platform (physics-based probabilistic seismic hazard analysis), the BroadBand Platform (deterministic and stochastic modeling of high frequency synthetic waveforms), the Full 3D Tomography (F3DT) Platform (improvements in structural representations), as well as using and extending the OpenSHA Platform (Probabilistic Seismic Hazard Analysis). We will describe several current PetaSHA research projects including the application of the DynaShake Platform to dynamic rupture modeling of the ShakeOut source, the use of the TeraShake Platform, including the URS-Graves, SDSU-Olsen and CMU-Hercules Anelastic Wave Propagation codes, to model 1 Hz ShakeOut simulations, the use of the CyberShake Platform to investigate physics-based PSHA hazard curves, and the use of the F3DT Platform to produce an improved structural model for a large region in southern California.

  18. A new paradigm for petascale Monte Carlo simulation: Replica exchange Wang Landau sampling

    SciTech Connect

    Li, Ying Wai; Vogel, Thomas; Wuest, Thomas; Landau, David P

    2014-01-01

    We introduce a generic, parallel Wang Landau method that is naturally suited to implementation on massively parallel, petaflop supercomputers. The approach introduces a replica-exchange framework in which densities of states for overlapping sub-windows in energy space are determined iteratively by traditional Wang Landau sampling. The advantages and general applicability of the method are demonstrated for several distinct systems that possess discrete or continuous degrees of freedom, including those with complex free energy landscapes and topological constraints.
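
    For orientation, a single-window Wang-Landau sketch for a toy system whose exact density of states is known (a binomial coefficient) is given below; it illustrates only the flat-histogram iteration, not the replica-exchange framework introduced in the paper:

```python
import math, random

# Toy system: N independent two-state units; the energy E is the number of
# "up" units, so E lies in {0, ..., N} and the exact density of states is
# g(E) = C(N, E).
N = 20
ln_g = [0.0] * (N + 1)     # running estimate of ln g(E)
hist = [0] * (N + 1)       # visit histogram used for the flatness check
ln_f = 1.0                 # modification factor, halved whenever hist is "flat"

state = [0] * N
E = 0
while ln_f > 1e-4:
    for _ in range(20000):
        i = random.randrange(N)                     # symmetric single-unit flip
        E_new = E + (1 if state[i] == 0 else -1)
        # Accept with probability min(1, g(E)/g(E_new)) -> flat visits in E.
        if math.log(random.random() + 1e-300) < ln_g[E] - ln_g[E_new]:
            state[i] ^= 1
            E = E_new
        ln_g[E] += ln_f
        hist[E] += 1
    if min(hist) > 0.8 * sum(hist) / len(hist):     # crude flatness criterion
        hist = [0] * (N + 1)
        ln_f *= 0.5

shift = -ln_g[0]                                    # fix the constant: ln g(0) = 0
for E in range(0, N + 1, 5):
    exact = math.lgamma(N + 1) - math.lgamma(E + 1) - math.lgamma(N - E + 1)
    print(f"E={E:2d}  WL={ln_g[E] + shift:7.2f}  exact ln C(N,E)={exact:7.2f}")
```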

  19. From Petascale to Exascale: Eight Focus Areas of R&D Challenges for HPC Simulation Environments

    SciTech Connect

    Springmeyer, R; Still, C; Schulz, M; Ahrens, J; Hemmert, S; Minnich, R; McCormick, P; Ward, L; Knoll, D

    2011-03-17

    Programming models bridge the gap between the underlying hardware architecture and the supporting layers of software available to applications. Programming models are different from both programming languages and application programming interfaces (APIs). Specifically, a programming model is an abstraction of the underlying computer system that allows for the expression of both algorithms and data structures. In comparison, languages and APIs provide implementations of these abstractions and allow the algorithms and data structures to be put into practice - a programming model exists independently of the choice of both the programming language and the supporting APIs. Programming models are typically focused on achieving increased developer productivity, performance, and portability to other system designs. The rapidly changing nature of processor architectures and the complexity of designing an exascale platform provide significant challenges for these goals. Several other factors are likely to impact the design of future programming models. In particular, the representation and management of increasing levels of parallelism, concurrency and memory hierarchies, combined with the ability to maintain a progressive level of interoperability with today's applications are of significant concern. Overall the design of a programming model is inherently tied not only to the underlying hardware architecture, but also to the requirements of applications and libraries including data analysis, visualization, and uncertainty quantification. Furthermore, the successful implementation of a programming model is dependent on exposed features of the runtime software layers and features of the operating system. Successful use of a programming model also requires effective presentation to the software developer within the context of traditional and new software development tools. Consideration must also be given to the impact of programming models on both languages and the associated compiler infrastructure. Exascale programming models must reflect several, often competing, design goals. These design goals include desirable features such as abstraction and separation of concerns. However, some aspects are unique to large-scale computing. For example, interoperability and composability with existing implementations will prove critical. In particular, performance is the essential underlying goal for large-scale systems. A key evaluation metric for exascale models will be the extent to which they support these goals rather than merely enable them.

  20. A simulation study of crew performance in operating an advanced transport aircraft in an automated terminal area environment

    NASA Technical Reports Server (NTRS)

    Houck, J. A.

    1983-01-01

    A simulation study assessing crew performance operating an advanced transport aircraft in an automated terminal area environment is described. The linking together of the Langley Advanced Transport Operating Systems Aft Flight Deck Simulator with the Terminal Area Air Traffic Model Simulation was required. The realism of an air traffic control (ATC) environment with audio controller instructions for the flight crews and the capability of inserting a live aircraft into the terminal area model to interact with computer generated aircraft was provided. Crew performance using the advanced displays and two separate control systems (automatic and manual) in flying area navigation routes in the automated ATC environment was assessed. Although the crews did not perform as well using the manual control system, their performances were within acceptable operational limits with little increase in workload. The crews favored using the manual control system and felt they were more alert and aware of their environment when using it.

  1. Some Specific CASL Requirements for Advanced Multiphase Flow Simulation of Light Water Reactors

    SciTech Connect

    R. A. Berry

    2010-11-01

    Because of the diversity of physical phenomena occurring in boiling, flashing, and bubble collapse, and of the length and time scales of LWR systems, it is imperative that the models have the following features: • Both vapor and liquid phases (and noncondensible phases, if present) must be treated as compressible. • Models must be mathematically and numerically well-posed. • The modeling methodology must be multi-scale. A fundamental derivation of the multiphase governing equation system, which should be used as a basis for advanced multiphase modeling in LWR coolant systems, is given in the Appendix using the ensemble averaging method. The remainder of this work focuses specifically on the compressible, well-posed, and multi-scale requirements of advanced simulation methods for these LWR coolant systems, because these are the most fundamental aspects, without which widespread advancement cannot be claimed. Because of the expense of developing multiple special-purpose codes and the inherent inability to couple information from the multiple, separate length- and time-scales, efforts within CASL should be focused toward development of multi-scale approaches to solve those multiphase flow problems relevant to LWR design and safety analysis. Efforts should be aimed at developing well-designed unified physical/mathematical and high-resolution numerical models for compressible, all-speed multiphase flows spanning: (1) Well-posed general mixture level (true multiphase) models for fast transient situations and safety analysis, (2) DNS (Direct Numerical Simulation)-like models to resolve interface-level phenomena like flashing and boiling flows, and critical heat flux determination (necessarily including conjugate heat transfer), and (3) Multi-scale methods to resolve both (1) and (2) automatically, depending upon specified mesh resolution, and to couple different flow models (single-phase, multiphase with several velocities and pressures, multiphase with single
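
    For orientation, the ensemble-averaged (two-fluid) balance laws referred to above take, for the mass of each phase k, the standard textbook form below; this is a generic statement, not the specific system derived in the report's Appendix:

```latex
% Phasic mass balance for phase k with volume fraction \alpha_k, density \rho_k,
% velocity \mathbf{u}_k, and interphase mass-exchange rate \Gamma_k:
\frac{\partial (\alpha_k \rho_k)}{\partial t}
  + \nabla \cdot \left( \alpha_k \rho_k \mathbf{u}_k \right) = \Gamma_k ,
\qquad \sum_k \alpha_k = 1 , \qquad \sum_k \Gamma_k = 0 .
```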

  2. Technology advancement for the ASCENDS mission using the ASCENDS CarbonHawk Experiment Simulator (ACES)

    NASA Astrophysics Data System (ADS)

    Obland, M. D.; Antill, C.; Browell, E. V.; Campbell, J. F.; CHEN, S.; Cleckner, C.; Dijoseph, M. S.; Harrison, F. W.; Ismail, S.; Lin, B.; Meadows, B. L.; Mills, C.; Nehrir, A. R.; Notari, A.; Prasad, N. S.; Kooi, S. A.; Vitullo, N.; Dobler, J. T.; Bender, J.; Blume, N.; Braun, M.; Horney, S.; McGregor, D.; Neal, M.; Shure, M.; Zaccheo, T.; Moore, B.; Crowell, S.; Rayner, P. J.; Welch, W.

    2013-12-01

    The ASCENDS CarbonHawk Experiment Simulator (ACES) is a NASA Langley Research Center project funded by NASA's Earth Science Technology Office that seeks to advance technologies critical to measuring atmospheric column carbon dioxide (CO2) mixing ratios in support of the NASA Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) mission. The technologies being advanced are: (1) multiple transmitter and telescope-aperture operations, (2) high-efficiency CO2 laser transmitters, (3) a high bandwidth detector and transimpedance amplifier (TIA), and (4) advanced algorithms for cloud and aerosol discrimination. The instrument architecture is being developed for ACES to operate on a high-altitude aircraft, and it will be directly scalable to meet the ASCENDS mission requirements. The above technologies are critical for developing an airborne simulator and spaceborne instrument with lower platform consumption of size, mass, and power, and with improved performance. This design employs several laser transmitters and telescope-apertures to demonstrate column CO2 retrievals with alignment of multiple laser beams in the far-field. ACES will transmit five laser beams: three from commercial lasers operating near 1.57-microns, and two from the Exelis atmospheric oxygen (O2) fiber laser amplifier system operating near 1.26-microns. The Master Oscillator Power Amplifier at 1.57-microns measures CO2 column concentrations using an Integrated-Path Differential Absorption (IPDA) lidar approach. O2 column amounts needed for calculating the CO2 mixing ratio will be retrieved using the Exelis laser system with a similar IPDA approach. The three aperture telescope design was built to meet the constraints of the Global Hawk high-altitude unmanned aerial vehicle (UAV). This assembly integrates fiber-coupled transmit collimators for all of the laser transmitters and fiber-coupled optical signals from the three telescopes to the aft optics and detector package. The detector

  3. The Synergy Between Total Scattering and Advanced Simulation Techniques: Quantifying Geopolymer Gel Evolution

    SciTech Connect

    White, Claire; Bloomer, Breaunnah E.; Provis, John L.; Henson, Neil J.; Page, Katharine L.

    2012-05-16

    With the ever increasing demands for technologically advanced structural materials, together with emerging environmental consciousness due to climate change, geopolymer cement is fast becoming a viable alternative to traditional cements due to proven mechanical engineering characteristics and the reduction in CO2 emitted (approximately 80% less CO2 emitted compared to ordinary Portland cement). Nevertheless, much remains unknown regarding the kinetics of the molecular changes responsible for nanostructural evolution during the geopolymerization process. Here, in-situ total scattering measurements in the form of X-ray pair distribution function (PDF) analysis are used to quantify the extent of reaction of metakaolin/slag alkali-activated geopolymer binders, including the effects of various activators (alkali hydroxide/silicate) on the kinetics of the geopolymerization reaction. Restricting quantification of the kinetics to the initial ten hours of reaction does not enable elucidation of the true extent of the reaction, but using X-ray PDF data obtained after 128 days of reaction enables more accurate determination of the initial extent of reaction. The synergies between the in-situ X-ray PDF data and simulations conducted by multiscale density functional theory-based coarse-grained Monte Carlo analysis are outlined, particularly with regard to the potential for the X-ray data to provide a time scale for kinetic analysis of the extent of reaction obtained from the multiscale simulation methodology.

  4. Propulsion Simulations Using Advanced Turbulence Models with the Unstructured Grid CFD Tool, TetrUSS

    NASA Technical Reports Server (NTRS)

    Abdol-Hamid, Khaled S.; Frink, Neal T.; Deere, Karen A.; Pandya, Mohangna J.

    2004-01-01

    A computational investigation has been completed to assess the capability of TetrUSS for exhaust nozzle flows. Three configurations were chosen for this study (1) an axisymmetric supersonic jet, (2) a transonic axisymmetric boattail with solid sting operated at different Reynolds number and Mach number, and (3) an isolated non-axisymmetric nacelle with a supersonic cruise nozzle. These configurations were chosen because existing experimental data provided a means for measuring the ability of TetrUSS for simulating complex nozzle flows. The main objective of this paper is to validate the implementation of advanced two-equation turbulence models in the unstructured-grid CFD code USM3D for propulsion flow cases. USM3D is the flow solver of the TetrUSS system. Three different turbulence models, namely, Menter Shear Stress Transport (SST), basic k epsilon, and the Spalart-Allmaras (SA) are used in the present study. The results are generally in agreement with other implementations of these models in structured-grid CFD codes. Results indicate that USM3D provides accurate simulations for complex aerodynamic configurations with propulsion integration.

  5. Real-time manned simulation of advanced terminal area guidance concepts for short-haul operations

    NASA Technical Reports Server (NTRS)

    Tobias, L.; Obrien, P. J.

    1977-01-01

    A real-time simulation was conducted of three-dimensional (3D) and four-dimensional (4D) area-navigation-equipped STOL aircraft operating in a high-density terminal area traffic environment. The objectives were to examine the effects of 3D RNAV and 4D RNAV equipped aircraft on terminal area traffic efficiency, and to examine the performance of an air traffic control system concept and associated controller display proposed for use with advanced RNAV systems. Three types of STOL aircraft were simulated, each with different performance capabilities. System performance was measured in both the 4D mode and in a 3D mode; the 3D mode, used as a baseline, was simply the 4D mode less any time specification. The results show that communications workload in the 4D mode was reduced by about 35 percent compared to the 3D mode, while 35 percent more traffic was handled with the 4D mode. Aircraft holding time in the 4D mode was only 30 percent of that required in the 3D mode. In addition, the orderliness of traffic was improved significantly in the 4D mode.

  6. CHARMM-GUI PDB Manipulator for Advanced Modeling and Simulations of Proteins Containing Non-standard Residues

    PubMed Central

    Jo, Sunhwan; Cheng, Xi; Islam, Shahidul M.; Huang, Lei; Rui, Huan; Zhu, Allen; Lee, Hui Sun; Qi, Yifei; Han, Wei; Vanommeslaeghe, Kenno; MacKerell, Alexander D.; Roux, Benoît; Im, Wonpil

    2016-01-01

    CHARMM-GUI, http://www.charmm-gui.org, is a web-based graphical user interface to prepare molecular simulation systems and input files to facilitate the usage of common and advanced simulation techniques. Since its original development in 2006, CHARMM-GUI has been widely adopted for various purposes and now contains a number of different modules designed to setup a broad range of simulations including free energy calculation and large-scale coarse-grained representation. Here, we describe functionalities that have recently been integrated into CHARMM-GUI PDB Manipulator, such as ligand force field generation, incorporation of methanethiosulfonate (MTS) spin labels and chemical modifiers, and substitution of amino acids with unnatural amino acids. These new features are expected to be useful in advanced biomolecular modeling and simulation of proteins. PMID:25443960

  7. Progress and opportunities in direct numerical simulations at the next higher resolution

    NASA Astrophysics Data System (ADS)

    Yeung, P. K.; Sreenivasan, K. R.

    2013-11-01

    In recent years, many researchers in the turbulence community have been able to exploit the steady advancement of computing power to advance our understanding of turbulence, including new parameter ranges and the effects of coupling with other physical processes. However, it is remarkable that the ``record'' grid resolution of 4096³, first achieved just over 10 years ago (Kaneda et al., Phys. Fluids 2003), still stands in the literature of the field. In this talk, we will present preliminary results from an 8192³ simulation of turbulence on a periodic domain, carried out using 262,144 CPU cores on the Blue Waters supercomputer under the NSF Track 1 Petascale Resource Allocations program. Since a simulation at this magnitude is still extremely expensive, and the resources required are not easily secured, very careful planning and very aggressive efforts at algorithmic enhancement are necessary (which we will also briefly discuss). This new simulation is expected to allow us to probe deeply into fundamental questions such as intermittency at the highest Reynolds numbers and the best possible resolution of the small scales at the current limit of computing power available. Supported by NSF Grant ACI-1036170.
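
    A back-of-the-envelope estimate, independent of the abstract, shows why an 8192³ grid is demanding: a single double-precision scalar field already occupies about 4 TiB, before counting the three velocity components and the work arrays a pseudo-spectral solver needs.

```python
n = 8192
points = n ** 3                              # ~5.5e11 grid points
bytes_per_field = points * 8                 # one double-precision scalar field
print(f"grid points:          {points:.3e}")
print(f"one scalar field:     {bytes_per_field / 2**40:.1f} TiB")
print(f"per core (262144):    {bytes_per_field / 262144 / 2**20:.1f} MiB")
```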

  8. A computer program for estimating the power-density spectrum of advanced continuous simulation language generated time histories

    NASA Technical Reports Server (NTRS)

    Dunn, H. J.

    1981-01-01

    A computer program for performing frequency analysis of time history data is presented. The program uses circular convolution and the fast Fourier transform to calculate power density spectrum (PDS) of time history data. The program interfaces with the advanced continuous simulation language (ACSL) so that a frequency analysis may be performed on ACSL generated simulation variables. An example of the calculation of the PDS of a Van de Pol oscillator is presented.
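
    A minimal modern analogue of that calculation, an FFT-based periodogram of a Van der Pol time history, might look as follows (a generic sketch, not the ACSL-interfaced program described above):

```python
import numpy as np

# Integrate a Van der Pol oscillator (mu = 1) with a simple RK4 loop
# to obtain a time history, then estimate its power-density spectrum.
mu, dt, n = 1.0, 0.01, 20000

def f(y):
    x, v = y
    return np.array([v, mu * (1.0 - x * x) * v - x])

y = np.array([2.0, 0.0])
xs = np.empty(n)
for i in range(n):
    k1 = f(y); k2 = f(y + 0.5 * dt * k1)
    k3 = f(y + 0.5 * dt * k2); k4 = f(y + dt * k3)
    y = y + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    xs[i] = y[0]

# One-sided periodogram via the FFT (power per unit frequency).
xs -= xs.mean()
X = np.fft.rfft(xs)
freqs = np.fft.rfftfreq(n, d=dt)
psd = np.abs(X) ** 2 * dt / n

print(f"dominant frequency ~ {freqs[1:][np.argmax(psd[1:])]:.3f} Hz")
```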

  9. Numerical Simulation of Multi-Material Mixing in an Inclined Interface Richtmyer-Meshkov Instability

    NASA Astrophysics Data System (ADS)

    Subramaniam, Akshay; Lele, Sanjiva

    2015-11-01

    The Richtmyer-Meshkov instability arises when a shock wave interacts with an interface separating two fluids. In this work, high fidelity simulations of shock induced multi-material mixing between N2 and CO2 in a shock tube are performed for a Mach 1.55 shock interacting with a planar material interface that is inclined with respect to the shock propagation direction. In the current configuration, unlike the classical perturbed flat interface case, the evolution of the interface is non-linear from early time onwards. Our previous simulations of this problem at multiple spatial resolutions have shown that very small 3D perturbations have a large effect on vortex breakdown mechanisms and hence fine scale turbulence. We propose a comparison of our simulations to the experiments performed at the Georgia Tech Shock Tube and Advanced Mixing Laboratory (STAML). Results before and after reshock of the interface will be shown. Results from simulations of a second case with a more complex initial interface will also be presented. Simulations shown are conducted with an extended version of the Miranda solver developed by Cook et al. (2007) which combines high-order compact finite differences with localized non-linear artificial properties for shock and interface capturing. This research is part of the Blue Waters sustained-petascale computing project, which is supported by the National Science Foundation (awards OCI-0725070 and ACI-1238993) and the state of Illinois.

  10. Cooperative Server Clustering for a Scalable GAS Model on Petascale Cray XT5 Systems

    SciTech Connect

    Yu, Weikuan; Que, Xinyu; Tipparaju, Vinod; Graham, Richard L; Vetter, Jeffrey S

    2010-05-01

    Global Address Space (GAS) programming models are attractive because they retain the easy-to-use addressing model that is the characteristic of shared-memory style load and store operations. The scalability of GAS models depends directly on the design and implementation of runtime libraries on the targeted platforms. In this paper, we examine the memory requirement of a popular GAS run-time library, Aggregate Remote Memory Copy Interface (ARMCI), on petascale Cray XT5 systems. Then we describe a new technique, cooperative server clustering, that enhances the memory scalability of ARMCI communication servers. In cooperative server clustering, ARMCI servers are organized into clusters, and cooperatively process incoming communication requests among them. A request intervention scheme is also designed to expedite the return of responses to the initiating processes. Our experimental results demonstrate that, with very little impact on ARMCI communication latency and bandwidth, cooperative server clustering is able to significantly reduce the memory requirement of ARMCI communication servers, thereby enabling highly scalable scientific applications. In particular, it dramatically reduces the total execution time of a scientific application, NWChem, by 45% on 2400 processes.

  11. Cooperative Server Clustering for a Scalable GAS Model on petascale cray XT5 Systems

    SciTech Connect

    Yu, Weikuan; Que, Xinyu; Graham, Richard L; Vetter, Jeffrey S

    2010-01-01

    Global Address Space (GAS) programming models are attractive because they retain the easy-to-use addressing model that is the characteristic of shared-memory style load and store operations. The scalability of GAS models depends directly on the design and implementation of runtime libraries on the targeted platforms. In this paper, we examine the memory requirement of a popular GAS runtime library, Aggregate Remote Memory Copy Interface (ARMCI), on petascale Cray XT5 systems. Then we describe a new technique, cooperative server clustering, that enhances the memory scalability of ARMCI communication servers. In cooperative server clustering, ARMCI servers are organized into clusters, and cooperatively process incoming communication requests among them. A request intervention scheme is also designed to expedite the return of responses to the initiating processes. Our experimental results demonstrate that, with very little impact on ARMCI communication latency and bandwidth, cooperative server clustering is able to significantly reduce the memory requirement of ARMCI communication servers, thereby enabling highly scalable scientific applications. In particular, it dramatically reduces the total execution time of a scientific application, NWChem, by 45% on 2400 processes.

  12. DAG Software Architectures for Multi-Scale Multi-Physics Problems at Petascale and Beyond

    NASA Astrophysics Data System (ADS)

    Berzins, Martin

    2015-03-01

    The challenge of computations at petascale and beyond is to make efficient calculations possible on hundreds of thousands of cores, or on large numbers of GPUs or Intel Xeon Phis. An important methodology for achieving this is at present thought to be asynchronous task-based parallelism. The success of this approach will be demonstrated using the Uintah software framework for the solution of coupled fluid-structure interaction problems with chemical reactions. The layered approach of this software makes it possible for the user to specify the physical problems without parallel code, and for that specification to be translated into a parallel set of tasks. These tasks are executed using a runtime system that executes tasks asynchronously and sometimes out of order. The scalability and portability of this approach will be demonstrated using examples from large-scale combustion problems, industrial detonations, and multi-scale, multi-physics models. The challenges of scaling such calculations to the next generations of leadership-class computers (with more than a hundred petaflops) will be discussed. Thanks to NSF, XSEDE, DOE NNSA, DOE NETL, DOE ALCC and DOE INCITE.
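
    The dependency-driven, possibly out-of-order task execution described above can be illustrated with a toy scheduler (a generic sketch, not the Uintah runtime; task names are hypothetical):

```python
from concurrent.futures import ThreadPoolExecutor

# Each task names the tasks whose results it needs; independent tasks may run
# concurrently and in any order once their inputs are available.
tasks = {
    "read_mesh":  ([],                          lambda: "mesh"),
    "init_fluid": (["read_mesh"],               lambda m: f"fluid({m})"),
    "init_solid": (["read_mesh"],               lambda m: f"solid({m})"),
    "couple":     (["init_fluid", "init_solid"], lambda f, s: f"coupled({f},{s})"),
}

def run(dag):
    futures = {}
    with ThreadPoolExecutor() as pool:
        def submit(name):
            if name in futures:
                return futures[name]
            deps, fn = dag[name]
            dep_futs = [submit(d) for d in deps]          # recurse on inputs
            futures[name] = pool.submit(
                lambda: fn(*[d.result() for d in dep_futs]))
            return futures[name]
        for name in dag:
            submit(name)
        return {name: fut.result() for name, fut in futures.items()}

print(run(tasks))
```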

  13. NWChem Meeting on Science Driven Petascale Computing and Capability Development at EMSL

    SciTech Connect

    De Jong, Wibe A.

    2007-02-19

    On January 25, and 26, 2007, an NWChem meeting was held that was attended by 65 scientists from 29 institutions including 22 universities and 5 national laboratories. The goals of the meeting were to look at major scientific challenges that could be addressed by computational modeling in environmental molecular sciences, and to identify the associated capability development needs. In addition, insights were sought into petascale computing developments in computational chemistry. During the meeting common themes were identified that will drive the need for the development of new or improved capabilities in NWChem. Crucial areas of development that the developer's team will be focusing on are (1) modeling of dynamics and kinetics in chemical transformations, (2) modeling of chemistry at interfaces and in the condensed phase, and (3) spanning longer time scales in biological processes modeled with molecular dynamics. Various computational chemistry methodologies were discussed during the meeting, which will provide the basis for the capability developments in the near or long term future of NWChem.

  14. Network-friendly one-sided communication through multinode cooperation on petascale cray xt5 systems

    SciTech Connect

    Tipparaju, Vinod; Que, Xinyu; Yu, Weikuan; Vetter, Jeffrey S

    2011-05-01

    One-sided communication is important to enable asynchronous communication and data movement for Global Address Space (GAS) programming models. Such communication is typically realized through direct messages between initiator and target processes. For petascale systems with 10,000s of nodes and 100,000s of cores, these direct messages require dedicated communication buffers and/or channels, which can lead to significant scalability challenges for GAS programming models. In this paper, we describe a network-friendly communication model, multinode cooperation, to enable indirect one-sided communication. Compute nodes work together to handle one-sided requests through (1) request forwarding in which one node can intercept a request and forward it to a target node, and (2) request aggregation in which one node can aggregate many requests to a target node. We have implemented multinode cooperation for a popular GAS runtime library, Aggregate Remote Memory Copy Interface (ARMCI). Our experimental results on a large-scale Cray XT5 system demonstrate that multinode cooperation is able to greatly increase the memory scalability by reducing the number of communication buffers. In addition, multinode cooperation improves the resiliency of the GAS runtime system to network contention. Furthermore, multinode cooperation can benefit the performance of scientific applications. In one case, it reduces the total execution time of an NWChem application by 52%.
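
    The request-aggregation idea, in which an intermediary node batches many small one-sided requests bound for the same target so the target sees far fewer messages and buffers, can be sketched generically as follows (illustrative only, not the ARMCI implementation):

```python
from collections import defaultdict

def aggregate(requests, max_batch=4):
    """Group one-sided requests by target node and emit batched messages.

    requests: iterable of (target_node, payload) pairs arriving at a
    cooperating intermediary node.  Returns a list of (target_node, [payloads])
    messages, so each target receives far fewer incoming messages.
    """
    pending = defaultdict(list)
    batches = []
    for target, payload in requests:
        pending[target].append(payload)
        if len(pending[target]) == max_batch:        # flush a full batch
            batches.append((target, pending.pop(target)))
    for target, payloads in pending.items():         # flush the remainder
        batches.append((target, payloads))
    return batches

reqs = [(i % 3, f"put#{i}") for i in range(20)]       # 20 requests to 3 targets
for target, batch in aggregate(reqs):
    print(f"node {target}: 1 message carrying {len(batch)} requests")
```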

  15. Network-friendly one-sided communication through multinode cooperation on petascale cray xt5 systems

    SciTech Connect

    Tipparaju, Vinod; Que, Xinyu; Yu, Weikuan; Vetter, Jeffrey S

    2011-01-01

    One-sided communication is important to enable asynchronous communication and data movement for Global Address Space (GAS) programming models. Such communication is typically realized through direct messages between initiator and target processes. For petascale systems with 10,000s of nodes and 100,000s of cores, these direct messages require dedicated communication buffers and/or channels, which can lead to significant scalability challenges for GAS programming models. In this paper, we describe a network-friendly communication model, multinode cooperation, to enable indirect one-sided communication. Compute nodes work together to handle one-sided requests through (1) request forwarding in which one node can intercept a request and forward it to a target node, and (2) request aggregation in which one node can aggregate many requests to a target node. We have implemented multinode cooperation for a popular GAS runtime library, Aggregate Remote Memory Copy Interface (ARMCI). Our experimental results on a large-scale Cray XT5 system demonstrate that multinode cooperation is able to greatly increase the memory scalability by reducing the number of communication buffers. In addition, multinode cooperation improves the resiliency of the GAS runtime system to network contention. Furthermore, multinode cooperation can benefit the performance of scientific applications. In one case, it reduces the total execution time of an NWChem application by 52%.

  16. ADVANCED UTILITY SIMULATION MODEL, REPORT OF SENSITIVITY TESTING, CALIBRATION, AND MODEL OUTPUT COMPARISONS (VERSION 3.0) TAPE

    EPA Science Inventory

    The report is one of 11 in a series describing the initial development of the Advanced Utility Simulation Model (AUSM) by the Universities Research Group on Energy (URGE) and its continued development by the Science Applications International Corporation (SAIC) research team. The...

  17. ADVANCED SIMULATION CAPABILITY FOR ENVIRONMENTAL MANAGEMENT – CURRENT STATUS AND PHASE II DEMONSTRATION RESULTS

    SciTech Connect

    Seitz, Roger; Freshley, Mark D.; Dixon, Paul; Hubbard, Susan S.; Freedman, Vicky L.; Flach, Gregory P.; Faybishenko, Boris; Gorton, Ian; Finsterle, Stefan A.; Moulton, John D.; Steefel, Carl I.; Marble, Justin

    2013-06-27

    The U.S. Department of Energy (USDOE) Office of Environmental Management (EM), Office of Soil and Groundwater, is supporting development of the Advanced Simulation Capability for Environmental Management (ASCEM). ASCEM is a state-of-the-art scientific tool and approach for understanding and predicting contaminant fate and transport in natural and engineered systems. The modular and open source high-performance computing tool facilitates integrated approaches to modeling and site characterization that enable robust and standardized assessments of performance and risk for EM cleanup and closure activities. The ASCEM project continues to make significant progress in development of computer software capabilities with an emphasis on integration of capabilities in FY12. Capability development is occurring for both the Platform and Integrated Toolsets and High-Performance Computing (HPC) Multiprocess Simulator. The Platform capabilities provide the user interface and tools for end-to-end model development, starting with definition of the conceptual model, management of data for model input, model calibration and uncertainty analysis, and processing of model output, including visualization. The HPC capabilities target increased functionality of process model representations, toolsets for interaction with Platform, and verification and model confidence testing. The Platform and HPC capabilities are being tested and evaluated for EM applications in a set of demonstrations as part of Site Applications Thrust Area activities. The Phase I demonstration focusing on individual capabilities of the initial toolsets was completed in 2010. The Phase II demonstration completed in 2012 focused on showcasing integrated ASCEM capabilities. For Phase II, the Hanford Site deep vadose zone (BC Cribs) served as an application site for an end-to-end demonstration of capabilities, with emphasis on integration and linkages between the Platform and HPC components. Other demonstrations

  18. The Nuclear Energy Advanced Modeling and Simulation Safeguards and Separations Reprocessing Plant Toolkit

    SciTech Connect

    McCaskey, Alex; Billings, Jay Jay; de Almeida, Valmor F

    2011-08-01

    This report details the progress made in the development of the Reprocessing Plant Toolkit (RPTk) for the DOE Nuclear Energy Advanced Modeling and Simulation (NEAMS) program. RPTk is an ongoing development effort intended to provide users with an extensible, integrated, and scalable software framework for the modeling and simulation of spent nuclear fuel reprocessing plants by enabling the insertion and coupling of user-developed physicochemical modules of variable fidelity. The NEAMS Safeguards and Separations IPSC (SafeSeps) and the Enabling Computational Technologies (ECT) supporting program element have partnered to release an initial version of the RPTk with a focus on software usability and utility. RPTk implements a data flow architecture that is the source of the system's extensibility and scalability. Data flows through physicochemical modules sequentially, with each module importing data, evolving it, and exporting the updated data to the next downstream module. This is accomplished through various architectural abstractions designed to give RPTk true plug-and-play capabilities. A simple application of this architecture, as well as RPTk data flow and evolution, is demonstrated in Section 6 with an application consisting of two coupled physicochemical modules. The remaining sections describe this ongoing work in full, from system vision and design inception to full implementation. Section 3 describes the relevant software development processes used by the RPTk development team. These processes allow the team to manage system complexity and ensure stakeholder satisfaction. This section also details the work done on the RPTk ``black box'' and ``white box'' models, with a special focus on the separation of concerns between the RPTk user interface and application runtime. Sections 4 and 5 discuss the application runtime component in more detail, and describe the dependencies, behavior, and rigorous testing of its constituent components.
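
    The import-evolve-export data flow described above can be illustrated with a minimal plug-and-play pipeline (a generic sketch with hypothetical module names, not RPTk's actual interfaces):

```python
class Module:
    """A physicochemical module: imports data, evolves it, exports the result."""
    def evolve(self, data: dict) -> dict:
        raise NotImplementedError

class Dissolver(Module):
    def evolve(self, data):
        data["dissolved_fraction"] = 0.95            # placeholder physics
        return data

class SolventExtraction(Module):
    def evolve(self, data):
        feed = data.get("dissolved_fraction", 0.0)
        data["u_recovery"] = 0.99 * feed             # placeholder physics
        return data

def run_pipeline(modules, data):
    # Data flows downstream: each module receives the previous module's export.
    for module in modules:
        data = module.evolve(data)
    return data

print(run_pipeline([Dissolver(), SolventExtraction()], {"feed_kg": 100.0}))
```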

  19. Advanced End-to-end Simulation for On-board Processing (AESOP)

    NASA Technical Reports Server (NTRS)

    Mazer, Alan S.

    1994-01-01

    Developers of data compression algorithms typically use their own software together with commercial packages to implement, evaluate and demonstrate their work. While convenient for an individual developer, this approach makes it difficult to build on or use another's work without intimate knowledge of each component. When several people or groups work on different parts of the same problem, the larger view can be lost. What's needed is a simple piece of software to stand in the gap and link together the efforts of different people, enabling them to build on each other's work, and providing a base for engineers and scientists to evaluate the parts as a cohesive whole and make design decisions. AESOP (Advanced End-to-end Simulation for On-board Processing) attempts to meet this need by providing a graphical interface to a developer-selected set of algorithms, interfacing with compiled code and standalone programs, as well as procedures written in the IDL and PV-Wave command languages. As a proof of concept, AESOP is outfitted with several data compression algorithms integrating previous work on different processors (AT&T DSP32C, TI TMS320C30, SPARC). The user can specify at run-time the processor on which individual parts of the compression should run. Compressed data is then fed through simulated transmission and uncompression to evaluate the effects of compression parameters, noise and error correction algorithms. The following sections describe AESOP in detail. Section 2 describes fundamental goals for usability. Section 3 describes the implementation. Sections 4 through 5 describe how to add new functionality to the system and present the existing data compression algorithms. Sections 6 and 7 discuss portability and future work.

  20. ADVANCED SIMULATION CAPABILITY FOR ENVIRONMENTAL MANAGEMENT- CURRENT STATUS AND PHASE II DEMONSTRATION RESULTS

    SciTech Connect

    Seitz, R.

    2013-02-26

    The U.S. Department of Energy (USDOE) Office of Environmental Management (EM), Office of Soil and Groundwater, is supporting development of the Advanced Simulation Capability for Environmental Management (ASCEM). ASCEM is a state-of-the-art scientific tool and approach for understanding and predicting contaminant fate and transport in natural and engineered systems. The modular and open source high-performance computing tool facilitates integrated approaches to modeling and site characterization that enable robust and standardized assessments of performance and risk for EM cleanup and closure activities. The ASCEM project continues to make significant progress in development of computer software capabilities with an emphasis on integration of capabilities in FY12. Capability development is occurring for both the Platform and Integrated Toolsets and High-Performance Computing (HPC) Multiprocess Simulator. The Platform capabilities provide the user interface and tools for end-to-end model development, starting with definition of the conceptual model, management of data for model input, model calibration and uncertainty analysis, and processing of model output, including visualization. The HPC capabilities target increased functionality of process model representations, toolsets for interaction with Platform, and verification and model confidence testing. The Platform and HPC capabilities are being tested and evaluated for EM applications in a set of demonstrations as part of Site Applications Thrust Area activities. The Phase I demonstration focusing on individual capabilities of the initial toolsets was completed in 2010. The Phase II demonstration completed in 2012 focused on showcasing integrated ASCEM capabilities. For Phase II, the Hanford Site deep vadose zone (BC Cribs) served as an application site for an end-to-end demonstration of capabilities, with emphasis on integration and linkages between the Platform and HPC components. Other demonstrations

  1. Recent advances in large-eddy simulation of spray and coal combustion

    NASA Astrophysics Data System (ADS)

    Zhou, L. X.

    2013-07-01

    Large-eddy simulation (LES) is undergoing rapid development and is recognized as a possible second generation of CFD methods for engineering use. Spray and coal combustion is widely used in power, transportation, chemical and metallurgical, iron- and steel-making, and aeronautical and astronautical engineering; hence, LES of spray and coal two-phase combustion is particularly important for engineering applications. LES of two-phase combustion is attracting more and more attention, since it can give detailed instantaneous flow and flame structures and more accurate statistical results than Reynolds-averaged (RANS) modeling. One of the key problems in LES is developing sub-grid scale (SGS) models, including SGS stress models and combustion models. Different investigators have proposed or adopted various SGS models. In this paper the author reviews advances in studies on LES of spray and coal combustion, including studies done by the author and his colleagues. Different SGS models adopted by different investigators are described, some of their main results are summarized, and finally some research needs are discussed.
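
    As one concrete example of the SGS closures surveyed in such reviews, the classical Smagorinsky model sets the eddy viscosity to nu_t = (Cs*Delta)^2*|S|; a minimal sketch of that standard textbook form (not a model proposed in this paper) is:

```python
import numpy as np

def smagorinsky_nu_t(dudx, dudy, dvdx, dvdy, delta, cs=0.17):
    """Smagorinsky eddy viscosity nu_t = (Cs*Delta)^2 * |S| for a 2D resolved field.

    dudx...dvdy: resolved velocity gradients; delta: filter width;
    |S| = sqrt(2 S_ij S_ij) with S_ij the resolved strain-rate tensor.
    """
    s11 = dudx
    s22 = dvdy
    s12 = 0.5 * (dudy + dvdx)
    s_mag = np.sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))
    return (cs * delta) ** 2 * s_mag

# Example: simple shear du/dy = 10 1/s with a 1 mm filter width.
print(smagorinsky_nu_t(0.0, 10.0, 0.0, 0.0, delta=1e-3))   # m^2/s
```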

  2. Performance experiments with alternative advanced teleoperator control modes for a simulated solar maximum satellite repair

    NASA Technical Reports Server (NTRS)

    Das, H.; Zak, H.; Kim, W. S.; Bejczy, A. K.; Schenker, P. S.

    1992-01-01

    Experiments are described which were conducted at the JPL Advanced Teleoperator Lab to demonstrate and evaluate the effectiveness of various teleoperator control modes in the performance of a simulated Solar Max Satellite Repair (SMSR) task. The SMSR was selected as a test case because it is very rich in performance capability requirements and because it was actually performed by two EVA astronauts in the Space Shuttle bay in 1984. The main subtasks are: thermal blanket removal; installation of a hinge attachment for electrical panel opening; opening of electrical panel; removal of electrical connectors; relining of cable bundles; replacement of electrical panel; securing parts and cables; re-mate electrical connectors; closing of electrical panel; and reinstating thermal blanket. The current performance experiments are limited to thermal blanket cutting, electrical panel unbolting and handling electrical bundles and connectors. In one formal experiment, seven different control modes were applied to the unbolting and reinsertion of electrical panel screws subtasks. The seven control modes are alternative combinations of manual position and rate control with force feedback and remote compliance referenced to force-torque sensor information. Force-torque sensor and end effector position data and task completion times were recorded for analysis and quantification of operator performance.

  3. Recent advances in auxiliary-field methods --- simulations in lattice models and real materials

    NASA Astrophysics Data System (ADS)

    Zhang, Shiwei

    2007-03-01

    We have developed an auxiliary-field (AF) quantum Monte Carlo (QMC) method for many-body simulations. The method takes the form of a linear superposition of independent-particle calculations in fluctuating external fields. ``Entanglement'' of the different field configurations leads to random walks in Slater determinant space. We formulate an approximate constraint on the random walk paths to control the sign/phase problem, which has been shown to be very accurate even with simple mean-field solutions as the constraining trial wave function. The same method can be applied to both simplified lattice models and real materials. For realistic electronic Hamiltonians, each random walk stream resembles a density-functional theory (DFT) calculation in random local fields. Thus, the AF QMC method can directly import existing technology from standard electronic structure methods into a many-body QMC framework. We have demonstrated this method with calculations in close to 100 systems, including Si solid, first- and second-row molecular systems, molecules of heavier post-d elements, transition-metal systems, and ultra-cold atomic gases. In these we have operated largely in an automated mode, inputting the DFT or Hartree-Fock solutions as trial wave functions. The AF QMC results showed consistently good agreement with near-exact quantum chemistry results and/or experiment. I will also discuss additional algorithmic advances which can further improve the method in strongly correlated systems. Supported by ARO, NSF, ONR, and DOE-cmsn.

  4. An expanded framework for the advanced computational testing and simulation toolkit

    SciTech Connect

    Marques, Osni A.; Drummond, Leroy A.

    2003-11-09

    The Advanced Computational Testing and Simulation (ACTS) Toolkit is a set of computational tools developed primarily at DOE laboratories and is aimed at simplifying the solution of common and important computational problems. The use of the tools reduces the development time for new codes and the tools provide functionality that might not otherwise be available. This document outlines an agenda for expanding the scope of the ACTS Project based on lessons learned from current activities. Highlights of this agenda include peer-reviewed certification of new tools; finding tools to solve problems that are not currently addressed by the Toolkit; working in collaboration with other software initiatives and DOE computer facilities; expanding outreach efforts; promoting interoperability, further development of the tools; and improving functionality of the ACTS Information Center, among other tasks. The ultimate goal is to make the ACTS tools more widely used and more effective in solving DOE's and the nation's scientific problems through the creation of a reliable software infrastructure for scientific computing.

  5. An architecture and model for cognitive engineering simulation analysis - Application to advanced aviation automation

    NASA Technical Reports Server (NTRS)

    Corker, Kevin M.; Smith, Barry R.

    1993-01-01

    The process of designing crew stations for large-scale, complex automated systems is made difficult because of the flexibility of roles that the crew can assume, and by the rapid rate at which system designs become fixed. Modern cockpit automation frequently involves multiple layers of control and display technology in which human operators must exercise equipment in augmented, supervisory, and fully automated control modes. In this context, we maintain that effective human-centered design is dependent on adequate models of human/system performance in which representations of the equipment, the human operator(s), and the mission tasks are available to designers for manipulation and modification. The joint Army-NASA Aircrew/Aircraft Integration (A3I) Program, with its attendant Man-machine Integration Design and Analysis System (MIDAS), was initiated to meet this challenge. MIDAS provides designers with a test bed for analyzing human-system integration in an environment in which both cognitive human function and 'intelligent' machine function are described in similar terms. This distributed object-oriented simulation system, its architecture and assumptions, and our experiences from its application in advanced aviation crew stations are described.

  6. Advanced simulation for analysis of critical infrastructure : abstract cascades, the electric power grid, and Fedwire.

    SciTech Connect

    Glass, Robert John, Jr.; Stamber, Kevin Louis; Beyeler, Walter Eugene

    2004-08-01

    Critical Infrastructures are formed by a large number of components that interact within complex networks. As a rule, infrastructures contain strong feedbacks either explicitly through the action of hardware/software control, or implicitly through the action/reaction of people. Individual infrastructures influence others and grow, adapt, and thus evolve in response to their multifaceted physical, economic, cultural, and political environments. Simply put, critical infrastructures are complex adaptive systems. In the Advanced Modeling and Techniques Investigations (AMTI) subgroup of the National Infrastructure Simulation and Analysis Center (NISAC), we are studying infrastructures as complex adaptive systems. In one of AMTI's efforts, we are focusing on cascading failure as can occur with devastating results within and between infrastructures. Over the past year we have synthesized and extended the large variety of abstract cascade models developed in the field of complexity science and have started to apply them to specific infrastructures that might experience cascading failure. In this report we introduce our comprehensive model, Polynet, which simulates cascading failure over a wide range of network topologies, interaction rules, and adaptive responses as well as multiple interacting and growing networks. We first demonstrate Polynet for the classical Bak, Tang, and Wiesenfeld (BTW) sand-pile in several network topologies. We then apply Polynet to two very different critical infrastructures: the high voltage electric power transmission system which relays electricity from generators to groups of distribution-level consumers, and Fedwire which is a Federal Reserve service for sending large-value payments between banks and other large financial institutions. For these two applications, we tailor interaction rules to represent appropriate unit behavior and consider the influence of random transactions within two stylized networks: a regular homogeneous array and a

  7. Update on ORNL TRANSFORM Tool: Simulating Multi-Module Advanced Reactor with End-to-End I&C

    SciTech Connect

    Hale, Richard Edward; Fugate, David L.; Cetiner, Sacit M.; Qualls, A. L.

    2015-05-01

    The Small Modular Reactor (SMR) Dynamic System Modeling Tool project is in the fourth year of development. The project is designed to support collaborative modeling and study of various advanced SMR (non-light water cooled reactor) concepts, including the use of multiple coupled reactors at a single site. The focus of this report is the development of a steam generator and drum system model that includes the complex dynamics of typical steam drum systems, the development of instrumentation and controls for the steam generator with drum system model, and the development of multi-reactor module models that reflect the full power reactor innovative small module design concept. The objective of the project is to provide a common simulation environment and baseline modeling resources to facilitate rapid development of dynamic advanced reactor models; ensure consistency among research products within the Instrumentation, Controls, and Human-Machine Interface technical area; and leverage cross-cutting capabilities while minimizing duplication of effort. The combined simulation environment and suite of models are identified as the TRANSFORM tool. The critical elements of this effort include (1) defining a standardized, common simulation environment that can be applied throughout the Advanced Reactors Technology program; (2) developing a library of baseline component modules that can be assembled into full plant models using available geometry, design, and thermal-hydraulic data; (3) defining modeling conventions for interconnecting component models; and (4) establishing user interfaces and support tools to facilitate simulation development (i.e., configuration and parameterization), execution, and results display and capture.

  8. Simulation of rice plant temperatures using the UC Davis Advanced Canopy-Atmosphere-Soil Algorithm (ACASA)

    NASA Astrophysics Data System (ADS)

    Maruyama, A.; Pyles, D.; Paw U, K.

    2009-12-01

    The thermal environment in the plant canopy affects plant growth processes such as flowering and ripening. High temperatures often cause grain sterility and poor filling in cereal crops, and reduce their production in tropical and temperate regions. With global warming predicted, these effects have become a major concern worldwide. In this study, we observed plant body temperature profiles in a rice canopy and simulated them using a higher-order closure micrometeorological model to understand the relationship between plant temperatures and atmospheric conditions. Experiments were conducted in a rice paddy during the 2007 summer season under a warm temperate climate in Japan. Leaf temperatures at three different heights (0.3, 0.5, and 0.7 m) and panicle temperatures at 0.9 m were measured using fine thermocouples. The UC Davis Advanced Canopy-Atmosphere-Soil Algorithm (ACASA) was used to calculate plant body temperature profiles in the canopy. ACASA is based on radiative transfer, higher-order closure of the turbulence equations for mass and heat exchange, and detailed plant physiological parameterization for the canopy-atmosphere-soil system. Water temperature was almost constant at 21-23 °C throughout the summer because of continuous irrigation; therefore, a larger difference between the air temperature at 2 m and the water temperature was found during the daytime. Observed leaf/panicle temperatures were lower near the water surface and higher in the upper layer of the canopy. The temperature difference between 0.3 m and 0.9 m was around 3-4 °C during the daytime and around 1-2 °C at night. The ACASA calculations reproduced these trends in the plant temperature profile reasonably well. However, the simulated relationship between plant and air temperature in the canopy differed somewhat from the observations: the observed leaf/panicle temperatures were almost the same as the air temperature, whereas the simulated air temperature was 0.5-1.5 °C higher than the plant temperatures during both daytime and nighttime.
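
    For readers unfamiliar with how such models arrive at a leaf or panicle temperature, the standard single-layer energy balance below gives the flavor; it is textbook background, not ACASA's actual layered, higher-order-closure formulation.

        \[
        R_{n} = H + \lambda E, \qquad
        H = \rho c_{p}\,\frac{T_{\mathrm{leaf}} - T_{\mathrm{air}}}{r_{H}}, \qquad
        \lambda E = \frac{\rho c_{p}}{\gamma}\,\frac{e_{s}(T_{\mathrm{leaf}}) - e_{a}}{r_{H} + r_{s}},
        \]

    where R_n is the net radiation absorbed by the foliage element, H and λE are the sensible and latent heat fluxes, r_H and r_s are the boundary-layer and stomatal resistances, γ is the psychrometric constant, and e_s(T_leaf) - e_a is the leaf-to-air vapor pressure difference; solving this balance for T_leaf at each canopy layer yields the kind of temperature profile compared with the thermocouple data above.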

  9. Advanced Simulation Capability for Environmental Management - Current Status and Phase II Demonstration Results - 13161

    SciTech Connect

    Seitz, Roger R.; Flach, Greg; Freshley, Mark D.; Freedman, Vicky; Gorton, Ian; Dixon, Paul; Moulton, J. David; Hubbard, Susan S.; Faybishenko, Boris; Steefel, Carl I.; Finsterle, Stefan; Marble, Justin

    2013-07-01

    The U.S. Department of Energy (US DOE) Office of Environmental Management (EM), Office of Soil and Groundwater, is supporting development of the Advanced Simulation Capability for Environmental Management (ASCEM). ASCEM is a state-of-the-art scientific tool and approach for understanding and predicting contaminant fate and transport in natural and engineered systems. The modular and open source high-performance computing tool facilitates integrated approaches to modeling and site characterization that enable robust and standardized assessments of performance and risk for EM cleanup and closure activities. The ASCEM project continues to make significant progress in development of computer software capabilities with an emphasis on integration of capabilities in FY12. Capability development is occurring for both the Platform and Integrated Tool-sets and High-Performance Computing (HPC) Multi-process Simulator. The Platform capabilities provide the user interface and tools for end-to-end model development, starting with definition of the conceptual model, management of data for model input, model calibration and uncertainty analysis, and processing of model output, including visualization. The HPC capabilities target increased functionality of process model representations, tool-sets for interaction with Platform, and verification and model confidence testing. The Platform and HPC capabilities are being tested and evaluated for EM applications in a set of demonstrations as part of Site Applications Thrust Area activities. The Phase I demonstration focusing on individual capabilities of the initial tool-sets was completed in 2010. The Phase II demonstration completed in 2012 focused on showcasing integrated ASCEM capabilities. For Phase II, the Hanford Site deep vadose zone (BC Cribs) served as an application site for an end-to-end demonstration of capabilities, with emphasis on integration and linkages between the Platform and HPC components. Other demonstrations

  10. WRF4G project: Advances in running climate simulations on the EGI Infrastructure

    NASA Astrophysics Data System (ADS)

    Blanco, Carlos; Cofino, Antonio S.; Fernández Quiruelas, Valvanuz; García, Markel; Fernández, Jesús

    2014-05-01

    The Weather Research and Forecasting For Grid (WRF4G) project is a two-year Spanish National R&D project which started in 2011. It is now a well established project, involving scientists and technical staff from several institutions and contributing results to international initiatives such as CORDEX and to European FP7 projects such as SPECS and EUPORIAS. The aim of the WRF4G project is to homogenize access to hybrid Distributed Computing Infrastructures (DCIs), such as HPC and Grid infrastructures, for climate researchers. Additionally, it provides a productive interface to accomplish ambitious climate experiments such as regional hindcast/forecast and sensitivity studies. Although Grid infrastructures are very powerful, they have some drawbacks for executing climate applications such as the WRF model. This makes it necessary to encapsulate the applications in a middleware in order to provide the appropriate services for monitoring and management. The challenge of the WRF4G project is therefore to develop a generic adaptation framework (the WRF4G framework) and to disseminate it to the scientific community. The framework aims at simplifying model access by relieving climate scientists of technical and computational aspects. In this contribution, we present some new advances in the WRF4G framework, including new components for designing experiments, simulation monitoring and data management. Additionally, we will show how WRF4G makes it possible to run complex experiments on EGI infrastructures concurrently over several VOs such as esr and earth.vo.ibergrid. http://www.meteo.unican.es/software/wrf4g This work has been partially funded by the European Regional Development Fund (ERDF) and the Spanish National R&D Plan 2008-2011 (CGL2011-28864, WRF4G)

  11. Noble gas and hydrocarbon tracers in multiphase unconventional hydrocarbon systems: Toward integrated advanced reservoir simulators

    NASA Astrophysics Data System (ADS)

    Darrah, T.; Moortgat, J.; Poreda, R. J.; Muehlenbachs, K.; Whyte, C. J.

    2015-12-01

    Although hydrocarbon production from unconventional energy resources has increased dramatically in the last decade, total unconventional oil and gas recovery from black shales is still less than 25% and 9% of the totals in place, respectively. Further, the majority of increased hydrocarbon production results from increasing the lengths of laterals, the number of hydraulic fracturing stages, and the volume of consumptive water usage. These strategies all reduce the economic efficiency of hydrocarbon extraction. The poor recovery statistics result from an insufficient understanding of some of the key physical processes in complex, organic-rich, low porosity formations (e.g., phase behavior, fluid-rock interactions, flow mechanisms at nano-scale confinement, and the role of natural fractures and faults as conduits for flow). Noble gases and other hydrocarbon tracers are capable of recording subsurface fluid-rock interactions on a variety of geological scales (micro-, meso-, to macro-scale) and provide analogs for the movement of hydrocarbons in the subsurface. As such, geochemical data enrich the input for the numerical modeling of multi-phase (e.g., oil, gas, and brine) fluid flow in highly heterogeneous, low permeability formations. Herein we will present a combination of noble gas (He, Ne, Ar, Kr, and Xe abundances and isotope ratios) and molecular and isotopic hydrocarbon data from a geographically and geologically diverse set of unconventional hydrocarbon reservoirs in North America. Specifically, we will include data from the Marcellus, Utica, Barnett, and Eagle Ford formations and the Illinois basin. Our presentation will include geochemical and geological interpretation and our perspective on the first steps toward building an advanced reservoir simulator for tracer transport in multicomponent multiphase compositional flow (presented separately, in Moortgat et al., 2015).

  12. In-Service Design & Performance Prediction of Advanced Fusion Material Systems by Computational Modeling and Simulation

    SciTech Connect

    G. R. Odette; G. E. Lucas

    2005-11-15

    This final report on "In-Service Design & Performance Prediction of Advanced Fusion Material Systems by Computational Modeling and Simulation" (DE-FG03-01ER54632) consists of a series of summaries of work that has been published, or presented at meetings, or both. It briefly describes results on the following topics: 1) A Transport and Fate Model for Helium and Helium Management; 2) Atomistic Studies of Point Defect Energetics, Dynamics and Interactions; 3) Multiscale Modeling of Fracture consisting of: 3a) A Micromechanical Model of the Master Curve (MC) Universal Fracture Toughness-Temperature Curve Relation, KJc(T - To), 3b) An Embrittlement DTo Prediction Model for the Irradiation Hardening Dominated Regime, 3c) Non-hardening Irradiation Assisted Thermal and Helium Embrittlement of 8Cr Tempered Martensitic Steels: Compilation and Analysis of Existing Data, 3d) A Model for the KJc(T) of a High Strength NFA MA957, 3e) Cracked Body Size and Geometry Effects of Measured and Effective Fracture Toughness-Model Based MC and To Evaluations of F82H and Eurofer 97, 3f) Size and Geometry Effects on the Effective Toughness of Cracked Fusion Structures; 4) Modeling the Multiscale Mechanics of Flow Localization-Ductility Loss in Irradiation Damaged BCC Alloys; and 5) A Universal Relation Between Indentation Hardness and True Stress-Strain Constitutive Behavior. Further details can be found in the cited references or presentations that generally can be accessed on the internet, or provided upon request to the authors. Finally, it is noted that this effort was integrated with our base program in fusion materials, also funded by the DOE OFES.

  13. Development of Advanced Electrochemical Emission Spectroscopy for Monitoring Corrosion in Simulated DOE Liquid Waste

    SciTech Connect

    Digby Macdonald; Brian Marx; Balaji Soundararajan; Morgan Smith

    2005-07-28

    The different tasks that have been carried out under the current program are as follows: (1) Theoretical and experimental assessment of general corrosion of iron/steel in borate buffer solutions by using electrochemical impedance spectroscopy (EIS), ellipsometry and XPS techniques; (2) Development of a damage function analysis (DFA), which would help in predicting the accumulation of damage due to pitting corrosion in an environment prototypical of DOE liquid waste systems; (3) Experimental measurement of crack growth rate, acoustic emission signals, and coupling currents for fracture in carbon and low alloy steels as functions of mechanical (stress intensity), chemical (conductivity), electrochemical (corrosion potential, ECP), and microstructural (grain size, precipitate size, etc) variables in a systematic manner, with particular attention being focused on the structure of the noise in the current and its correlation with the acoustic emissions; (4) Development of fracture mechanisms for carbon and low alloy steels that are consistent with the crack growth rate, coupling current data and acoustic emissions; (5) Inserting advanced crack growth rate models for SCC into existing deterministic codes for predicting the evolution of corrosion damage in DOE liquid waste storage tanks; (6) Computer simulation of the anodic and cathodic activity on the surface of the steel samples in order to exactly predict the corrosion mechanisms; (7) Wavelet analysis of EC noise data from steel samples undergoing corrosion in an environment similar to that of the high level waste storage containers, to extract data pertaining to general, pitting and stress corrosion processes, from the overall data. The work has yielded a number of important findings, including an unequivocal demonstration of the role of chloride ion in passivity breakdown on nickel in terms of cation vacancy generation within the passive film, the first detection and characterization of individual micro fracture

  14. Development of Advanced Wear and Corrosion Resistant Systems Through Laser Surface Alloying and Materials Simulations

    SciTech Connect

    R. P. Martukanitz and S. Babu

    2007-05-03

    Laser surfacing in the form of cladding, alloying, and modification is gaining widespread use because of its ability to provide high deposition rates, low thermal distortion, and refined microstructure due to high solidification rates. Because of these advantages, laser surface alloying is considered a prime candidate for producing ultra-hard coatings through the establishment or in situ formation of composite structures. Therefore, a program was conducted by the Applied Research Laboratory, Pennsylvania State University and Oak Ridge National Laboratory to develop the scientific and engineering basis for performing laser-based surface modifications involving the addition of hard particles, such as carbides, borides, and nitrides, within a metallic matrix for improved wear, fatigue, creep, and corrosion resistance. This has involved the development of advanced laser processing and simulation techniques, along with the refinement and application of these techniques for predicting and selecting materials and processing parameters for the creation of new surfaces having improved properties over current coating technologies. This program has also resulted in the formulation of process and material simulation tools capable of examining the potential for the formation and retention of composite coatings and deposits produced using laser processing techniques, as well as positive laboratory demonstrations in producing these coatings. In conjunction with the process simulation techniques, the application of computational thermodynamic and kinetic models to design laser surface alloying materials was demonstrated and resulted in a vast improvement in the formulation of materials used for producing composite coatings. The methodology was used to identify materials and to selectively modify microstructures for increasing hardness of deposits produced by the laser surface alloying process. Computational thermodynamic calculations indicated that it was possible to induce the

  15. Investigation of Alien Wavelength Quality in Live Multi-Domain, Multi-Vendor Link Using Advanced Simulation Tool

    NASA Astrophysics Data System (ADS)

    Nordal Petersen, Martin; Nuijts, Roeland; Lange Bjørn, Lars

    2014-05-01

    This article presents an advanced optical model for simulation of alien wavelengths in multi-domain and multi-vendor dense wavelength-division multiplexing networks. The model aids optical network planners with a better understanding of the non-linear effects present in dense wavelength-division multiplexing systems and better utilization of alien wavelengths in future applications. The limiting physical effects for alien wavelengths are investigated in relation to power levels, channel spacing, and other factors. The simulation results are verified through experimental setup in live multi-domain dense wavelength-division multiplexing systems between two national research networks: SURFnet in Holland and NORDUnet in Denmark.

  16. CFD Simulations of a Regenerative Process for Carbon Dioxide Capture in Advanced Gasification Based Power Systems

    SciTech Connect

    Arastoopour, Hamid; Abbasian, Javad

    2014-07-31

    This project describes the work carried out to prepare highly reactive and mechanically strong MgO-based sorbents and to develop a Population Balance Equations (PBE) approach to describe the evolution of the particle porosity distribution, which is linked with Computational Fluid Dynamics (CFD) to perform simulations of CO2 capture and sorbent regeneration. A large number of MgO-based regenerable sorbents were prepared using low cost and abundant dolomite as the base material. Among the various preparation parameters investigated, the potassium/magnesium (K/Mg) ratio was identified as the key variable affecting the reactivity and CO2 capacity of the sorbent. The optimum K/Mg ratio is about 0.15. The sorbent formulation HD52-P2 was identified as the “best” sorbent formulation and a large batch (one kg) of the sorbent was prepared for the detailed study. The results of the parametric study indicate that the optimum carbonation and regeneration temperatures are 360 °C and 500 °C, respectively. The results also indicate that steam has a beneficial effect on the rate of carbonation and regeneration of the sorbent and that the reactivity and capacity of the sorbent decrease in the cycling process (sorbent deactivation). The results indicate that to achieve a high CO2 removal efficiency, the bed of sorbent should be operated at a temperature range of 370-410 °C, which also favors production of hydrogen through the WGS reaction. To describe the carbonation reaction kinetics of the MgO, the Variable Diffusivity shrinking core Model (VDM) was developed in this project, which was shown to accurately fit the experimental data. An important advantage of this model is that the changes in the sorbent conversion with time can be expressed in an explicit manner, which will significantly reduce the CFD computation time. A Computational Fluid Dynamic/Population Balance Equations (CFD/PBE) model was developed that accounts for the particle (sorbent) porosity distribution and a new version of

  17. Big data to smart data in Alzheimer's disease: Real-world examples of advanced modeling and simulation.

    PubMed

    Haas, Magali; Stephenson, Diane; Romero, Klaus; Gordon, Mark Forrest; Zach, Neta; Geerts, Hugo

    2016-09-01

    Many disease-modifying clinical development programs in Alzheimer's disease (AD) have failed to date, and development of new and advanced preclinical models that generate actionable knowledge is desperately needed. This review reports on the computer-based modeling and simulation approach as a powerful tool in AD research. Statistical data-analysis techniques can identify associations between certain data and phenotypes, such as diagnosis or disease progression. Other approaches integrate domain expertise in a formalized mathematical way to understand how specific components of pathology integrate into complex brain networks. Private-public partnerships focused on data sharing, causal inference and pathway-based analysis, crowdsourcing, and mechanism-based quantitative systems modeling represent successful real-world modeling examples with substantial impact on CNS diseases. Similar to other disease indications, successful real-world examples of advanced simulation can generate actionable support for drug discovery and development in AD, illustrating the value that can be generated for different stakeholders. PMID:27327540

  18. Correlation of Simulation Examination to Written Test Scores for Advanced Cardiac Life Support Testing: Prospective Cohort Study

    PubMed Central

    Strom, Suzanne L.; Anderson, Craig L.; Yang, Luanna; Canales, Cecilia; Amin, Alpesh; Lotfipour, Shahram; McCoy, C. Eric; Langdorf, Mark I.

    2015-01-01

    Introduction Traditional Advanced Cardiac Life Support (ACLS) courses are evaluated using written multiple-choice tests. High-fidelity simulation is a widely used adjunct to didactic content, and has been used in many specialties as a training resource as well as an evaluative tool. There are no data to our knowledge that compare simulation examination scores with written test scores for ACLS courses. Objective To compare and correlate a novel high-fidelity simulation-based evaluation with traditional written testing for senior medical students in an ACLS course. Methods We performed a prospective cohort study to determine the correlation between simulation-based evaluation and traditional written testing in a medical school simulation center. Students were tested on a standard acute coronary syndrome/ventricular fibrillation cardiac arrest scenario. Our primary outcome measure was correlation of exam results for 19 volunteer fourth-year medical students after a 32-hour ACLS-based Resuscitation Boot Camp course. Our secondary outcome was comparison of simulation-based vs. written outcome scores. Results The composite average score on the written evaluation was substantially higher (93.6%) than the simulation performance score (81.3%, absolute difference 12.3%, 95% CI [10.6–14.0%], p<0.00005). We found a statistically significant moderate correlation between simulation scenario test performance and traditional written testing (Pearson r=0.48, p=0.04), validating the new evaluation method. Conclusion Simulation-based ACLS evaluation methods correlate with traditional written testing and demonstrate resuscitation knowledge and skills. Simulation may be a more discriminating and challenging testing method, as students scored higher on written evaluation methods compared to simulation. PMID:26594288
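
    For readers who want to run this kind of analysis on their own data, a minimal sketch is shown below; the paired score arrays are hypothetical placeholders rather than the study's data, and NumPy/SciPy are assumed to be available.

        import numpy as np
        from scipy import stats

        # Hypothetical paired scores (percent correct) for a small cohort;
        # the actual study evaluated 19 fourth-year medical students.
        written = np.array([95.0, 92.5, 90.0, 97.5, 88.0, 93.0, 96.0, 91.0])
        simulation = np.array([85.0, 78.0, 80.0, 88.0, 72.0, 84.0, 86.0, 75.0])

        # Pearson correlation between written-test and simulation performance.
        r, p_corr = stats.pearsonr(written, simulation)

        # Paired t-test for the mean difference between the two methods.
        t_stat, p_diff = stats.ttest_rel(written, simulation)

        print(f"Pearson r = {r:.2f} (p = {p_corr:.3f})")
        print(f"mean written-minus-simulation difference = "
              f"{np.mean(written - simulation):.1f} points (p = {p_diff:.4f})")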

  19. A real-time simulation evaluation of an advanced detection, isolation and accommodation algorithm for sensor failures in turbine engines

    NASA Technical Reports Server (NTRS)

    Merrill, W. C.; Delaat, J. C.

    1986-01-01

    An advanced sensor failure detection, isolation, and accommodation (ADIA) algorithm has been developed for use with an aircraft turbofan engine control system. In a previous paper the authors described the ADIA algorithm and its real-time implementation. Subsequent improvements made to the algorithm and implementation are discussed, and the results of an evaluation presented. The evaluation used a real-time, hybrid computer simulation of an F100 turbofan engine.

  1. Analog-digital simulation of transient-induced logic errors and upset susceptibility of an advanced control system

    NASA Technical Reports Server (NTRS)

    Carreno, Victor A.; Choi, G.; Iyer, R. K.

    1990-01-01

    A simulation study is described which predicts the susceptibility of an advanced control system to electrical transients resulting in logic errors, latched errors, error propagation, and digital upset. The system is based on a custom-designed microprocessor and it incorporates fault-tolerant techniques. The system under test and the method to perform the transient injection experiment are described. Results for 2100 transient injections are analyzed and classified according to charge level, type of error, and location of injection.

  2. Distance-Learning for Advanced Military Education: Using Wargame Simulation Course as an Example

    ERIC Educational Resources Information Center

    Keh, Huan-Chao; Wang, Kuei-Min; Wai, Shu-Shen; Huang, Jiung-yao; Hui, Lin; Wu, Ji-Jen

    2008-01-01

    Distance learning in advanced military education can assist officers around the world to become more skilled and qualified for future challenges. Through well-chosen technology, the efficiency of distance-learning can be improved significantly. In this paper we present the architecture of Advanced Military Education-Distance Learning (AME-DL)…

  3. An Advanced, Interactive, High-Performance Liquid Chromatography Simulator and Instructor Resources

    ERIC Educational Resources Information Center

    Boswell, Paul G.; Stoll, Dwight R.; Carr, Peter W.; Nagel, Megan L.; Vitha, Mark F.; Mabbott, Gary A.

    2013-01-01

    High-performance liquid chromatography (HPLC) simulation software has long been recognized as an effective educational tool, yet many of the existing HPLC simulators are either too expensive or outdated, or they lack many important features necessary to make them widely useful for educational purposes. Here, a free, open-source HPLC simulator is…

  4. Advancements in hardware-in-the-loop simulations at the U.S. Army Aviation and Missile Command

    NASA Astrophysics Data System (ADS)

    Buford, James A.; Jolly, Alexander C.; Mobley, Scott B.; Sholes, William J.

    2000-07-01

    A greater awareness of and increased interest in the use of modeling and simulation (M&S) has been demonstrated at many levels within the Department of Defense (DoD) and all the Armed Services agencies in recent years. M&S application is regarded as a viable means of lowering the life cycle costs of missile defense and tactical missile weapon system acquisition beginning with studies of new concepts of war-fighting through user training and post-deployment support. The Aviation and Missile Research, Engineering, and Development Center (AMRDEC) of the U.S. Army Aviation and Missile Command (AMCOM) has an extensive history of applying all types of M&S to weapons system development and has been a particularly strong advocate of hardware-in-the-loop (HWIL) simulation and test for many years. Over the past 40 years AMRDEC has developed and maintained the Advanced Simulation Center (ASC) which provides world-class, high fidelity, specific and dedicated HWIL simulation and test capabilities for the Army's missile defense and tactical missile program offices in both the infrared and radio frequency sensor domains. The ASC facility uses M&S to conduct daily HWIL missile simulations and tests to support flight tests, missile/system development, independent verification and validation of weapon system embedded software and simulations, and missile/system performance against current and future threat environments. This paper describes the ASC role, recaps the past year, describes the HWIL components and advancements, and outlines the path-ahead for the ASC in terms of both missile and complete system HWIL simulations and test with a focus on the imaging infrared systems.

  5. Role of 3D photo-resist simulation for advanced technology nodes

    NASA Astrophysics Data System (ADS)

    Narayana Samy, Aravind; Seltmann, Rolf; Kahlenberg, Frank; Schramm, Jessy; Küchler, Bernd; Klostermann, Ulrich

    2013-04-01

    3D resist models are gaining significant interest for advanced technology node development. Correct prediction of resist profiles, resist top-loss and top-rounding is acquiring greater importance in ORC hotspot verification because of its impact on etch resistance and post-etch results. We highlight the specific procedure used to calibrate a rigorous 3D model. Special focus is on the importance of high quality metrology data both for a successful calibration and for allowing a reduction of the number of data points used for calibration [1]. In a production application the calibration could be performed using a subset of 20 features measured through dose and focus, and model validation was done with 500 features through dose and focus. This data reduction minimized the actual calibration effort of the 3D resist model and enabled calibration run times of less than one hour. The successful validation with the complete data set showed that the data reduction did not cause over-fitting of the model. The model is applied and verified at hotspots showing defects such as bottom bridging or top loss that would not be visible in a 2D resist model. The model performance is also evaluated with a conventional CD error metric in which the bottom CD from simulation and measurement are compared. We achieved excellent results for both metrics using SEM CD, SEM images, AFM measurements and wafer cross sections. An additional modeling criterion is resist model portability. A prerequisite is the separability of the resist model and the optical model, i.e. the resist model shall characterize the resist only and should not lump in characteristics from the optical model. This is a requirement for porting the resist model to different optical setups, such as another illumination source, without the need for re-calibration. Resist model portability is shown by validation and application of the model to a second process with significantly different optical settings. The resist model can predict hot
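
    The calibrate-on-a-subset, validate-on-the-full-set workflow described above can be sketched generically as follows; the compact resist response function, its parameters, and the data are hypothetical stand-ins and are not the rigorous 3D resist model discussed in this record.

        import numpy as np
        from scipy.optimize import least_squares

        def resist_cd(params, dose, focus):
            """Hypothetical compact resist response: bottom CD (nm) as a smooth
            function of dose and focus, standing in for a rigorous 3D model."""
            cd0, a_dose, a_focus2 = params
            return cd0 + a_dose * dose + a_focus2 * focus ** 2

        rng = np.random.default_rng(0)
        # Hypothetical measured bottom-CD data through dose and focus.
        dose = rng.uniform(0.9, 1.1, 500)
        focus = rng.uniform(-0.08, 0.08, 500)
        measured = resist_cd([45.0, -20.0, 300.0], dose, focus) + rng.normal(0.0, 0.5, 500)

        # Calibrate on a small subset (cf. the 20-feature subset above) ...
        cal = slice(0, 20)
        fit = least_squares(
            lambda p: resist_cd(p, dose[cal], focus[cal]) - measured[cal],
            x0=[40.0, 0.0, 0.0],
        )

        # ... then validate against the full data set to check for over-fitting.
        residual = resist_cd(fit.x, dose, focus) - measured
        print(f"validation RMS CD error: {np.sqrt(np.mean(residual ** 2)):.2f} nm")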

  6. Parameter identification studies on the NASA/Ames Research Center Advanced Concepts Flight Simulator. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Mckavitt, Thomas P., Jr.

    1990-01-01

    The results of an aircraft parameter identification study conducted on the National Aeronautics and Space Administration/Ames Research Center Advanced Concepts Flight Simulator (ACFS) in conjunction with the Navy-NASA Joint Institute of Aeronautics are given. The ACFS is a commercial airline simulator with a design based on future technology. The simulator is used as a laboratory for human factors research and engineering as applied to the commercial airline industry. Parametric areas examined were engine pressure ratio (EPR), optimum long range cruise Mach number, flap reference speed, and critical take-off speeds. Results were compared with corresponding parameters of the Boeing 757 and 767 aircraft. This comparison identified two areas where improvements can be made: (1) low maximum lift coefficients (on the order of 20-25 percent less than those of a 757); and (2) low optimum cruise Mach numbers. Recommendations were made to bring these parameters in line with those anticipated from the application of future technologies.

  7. Advancing lighting and daylighting simulation: The transition from analysis to design aid tools

    SciTech Connect

    Hitchcock, R.J.

    1995-05-01

    This paper explores three significant software development requirements for making the transition from stand-alone lighting simulation/analysis tools to simulation-based design aid tools. These requirements include specialized lighting simulation engines, facilitated methods for creating detailed simulatable building descriptions, and automated techniques for providing lighting design guidance. Initial computer implementations meant to address each of these requirements are discussed to further elaborate these requirements and to illustrate work in progress.

  8. Development of Advanced Electrochemical Emission Spectroscopy for Monitoring Corrosion in Simulated DOE Liquid Waste

    SciTech Connect

    Digby D. Macdonald; Brian M. Marx; Sejin Ahn; Julio de Ruiz; Balaji Soundararaja; Morgan Smith; and Wendy Coulson

    2008-01-15

    Various forms of general and localized corrosion represent principal threats to the integrity of DOE liquid waste storage tanks. These tanks, which are of a single wall or double wall design, depending upon their age, are fabricated from welded carbon steel and contain a complex waste-form comprised of NaOH and NaNO3, along with trace amounts of phosphate, sulfate, carbonate, and chloride. Because waste leakage can have a profound environmental impact, considerable interest exists in predicting the accumulation of corrosion damage, so as to more effectively schedule maintenance and repair. The different tasks that are being carried out under the current program are as follows: (1) Theoretical and experimental assessment of general corrosion of iron/steel in borate buffer solutions by using electrochemical impedance spectroscopy (EIS), ellipsometry and XPS techniques; (2) Development of a damage function analysis (DFA) which would help in predicting the accumulation of damage due to pitting corrosion in an environment prototypical of DOE liquid waste systems; (3) Experimental measurement of crack growth rate, acoustic emission signals and coupling currents for fracture in carbon and low alloy steels as functions of mechanical (stress intensity), chemical (conductivity), electrochemical (corrosion potential, ECP), and microstructural (grain size, precipitate size, etc) variables in a systematic manner, with particular attention being focused on the structure of the noise in the current and its correlation with the acoustic emissions; (4) Development of fracture mechanisms for carbon and low alloy steels that are consistent with the crack growth rate, coupling current data and acoustic emissions; (5) Inserting advanced crack growth rate models for SCC into existing deterministic codes for predicting the evolution of corrosion damage in DOE liquid waste storage tanks; (6) Computer simulation of the anodic and cathodic activity on the surface of the steel samples

  9. Student Award Finalist: Advances in the three-dimensional simulation of streamer discharges

    NASA Astrophysics Data System (ADS)

    Teunissen, Jannis; Ebert, Ute

    2015-09-01

    We have implemented a 2D and 3D streamer model inside AFiVO, a simulation framework that we have recently developed. We use numerical techniques such as adaptive mesh refinement, parallel multigrid and a novel implementation of photoionization to push simulations to new limits. This allows us to study the interaction of two streamers in 3D, the branching of streamers in 3D, or the propagation of a streamer over a dielectric surface. Simulations in 2D also benefit, allowing for a relatively interactive exploration of parameter regimes. We present highlights of the new simulation possibilities. JT acknowledges support from STW project 10755.

  10. Recent advances in numerical simulation of space-plasma-physics problems

    NASA Technical Reports Server (NTRS)

    Birmingham, T. J.

    1983-01-01

    Computer simulations have become an increasingly popular, important and insightful tool for studying space plasmas. This review describes MHD and particle simulations, both of which treat the plasma and the electromagnetic field in which it moves in a self consistent fashion but on drastically different spatial and temporal scales. The complementary roles of simulation, observations and theory are stressed. Several examples of simulations being carried out in the area of magnetospheric plasma physics are described to illustrate the power, potential and limitations of the approach.

  11. Progress and new advances in simulating electron microscopy datasets using MULTEM.

    PubMed

    Lobato, I; Van Aert, S; Verbeeck, J

    2016-09-01

    A new version of the open source program MULTEM is presented here. It includes a graphical user interface, tapering truncation of the atomic potential, CPU multithreading functionality, single/double precision calculations, scanning transmission electron microscopy (STEM) simulations using experimental detector sensitivities, imaging STEM (ISTEM) simulations, energy filtered transmission electron microscopy (EFTEM) simulations, STEM electron energy loss spectroscopy (EELS) simulations along with other improvements in the algorithms. We also present a mixed channeling approach for the calculation of inelastic excitations, which allows one to considerably speed up time consuming EFTEM/STEM-EELS calculations. PMID:27323350
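
    For context (this is standard multislice background rather than anything stated in the abstract), programs of this kind propagate the electron wave through the specimen one thin slice at a time:

        \[
        \psi_{n+1}(\mathbf{r}) \;=\; p_{n}(\mathbf{r},\Delta z) \otimes \bigl[\, t_{n}(\mathbf{r})\,\psi_{n}(\mathbf{r}) \,\bigr],
        \qquad
        t_{n}(\mathbf{r}) = e^{\, i\sigma V_{n}(\mathbf{r})},
        \]

    where t_n is the transmission function of slice n (interaction constant σ, projected potential V_n), p_n is the Fresnel propagator over the slice thickness Δz, and the convolution is evaluated with FFTs; features such as the tapering truncation of the atomic potential mentioned above enter through V_n.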

  12. Advanced Techniques for Reservoir Simulation and Modeling of Non-Conventional Wells

    SciTech Connect

    Durlofsky, Louis J.; Aziz, Khalid

    2001-08-23

    Research results for the second year of this project on the development of improved modeling techniques for non-conventional (e.g., horizontal, deviated or multilateral) wells were presented. The overall program entails the development of enhanced well modeling and general simulation capabilities. A general formulation for black-oil and compositional reservoir simulation was presented.

  13. Multi-center development and testing of a simulation-based cardiovascular assessment curriculum for advanced practice nurses.

    PubMed

    Jeffries, Pamela R; Beach, Michael; Decker, Sharon I; Dlugasch, Lucie; Groom, Jeffrey; Settles, Julie; O'Donnell, John M

    2011-01-01

    Cardiovascular assessment skills are deficient among advanced practice nursing students, and effective instructional methods to improve assessment skills are needed. The purpose of this study was to develop, implement, and evaluate outcomes of a cardiovascular assessment curriculum for advanced practice nurses at four institutions. Each institution used a one-group pre-to-post-intervention design. Educational interventions included faculty-led, simulation-based case presentations using the Harvey cardiopulmonary patient simulator (CPS), and independent learning sessions using the CPS and a multimedia, computer-based CD-ROM program. Outcome measures included a 31-item cognitive written exam, a 13-item skills checklist used at each station of a three-station objective structured clinical exam, a learner self-efficacy and satisfaction survey, an instructor satisfaction and self-efficacy survey, and a participant logbook to record practice time using the self-learning materials. Thirty-six students who received the simulation-based training showed statistically significant pre-to-post-test improvement in cognitive knowledge and cardiovascular assessment skills. PMID:22029244

  14. Simulation of fast-ion-driven Alfvén eigenmodes on the Experimental Advanced Superconducting Tokamak

    NASA Astrophysics Data System (ADS)

    Hu, Youjun; Todo, Y.; Pei, Youbin; Li, Guoqiang; Qian, Jinping; Xiang, Nong; Zhou, Deng; Ren, Qilong; Huang, Juan; Xu, Liqing

    2016-02-01

    Kinetic-MHD hybrid simulations are carried out to investigate possible fast-ion-driven modes on the Experimental Advanced Superconducting Tokamak. Three typical kinds of fast-ion-driven modes, namely, toroidicity-induced Alfvén eigenmodes, reversed shear Alfvén eigenmodes, and energetic-particle continuum modes, are observed simultaneously in the simulations. The simulation results are compared with the results of an ideal MHD eigenvalue code, which shows agreement with respect to the mode frequency, dominant poloidal mode numbers, and radial location. However, the modes in the hybrid simulations take a twisted structure on the poloidal plane, which is different from the results of the ideal MHD eigenvalue code. The twist is due to the radial phase variation of the eigenfunction, which may be attributed to the non-perturbative kinetic effects of the fast ions. By varying the stored energy of fast ions to change the fast ion drive in the simulations, it is demonstrated that the twist (i.e., the radial phase variation) is positively correlated with the fast ion drive.

  15. Brain-wave measures of workload in advanced cockpits: The transition of technology from laboratory to cockpit simulator, phase 2

    NASA Technical Reports Server (NTRS)

    Horst, Richard L.; Mahaffey, David L.; Munson, Robert C.

    1989-01-01

    The present Phase 2 small business innovation research study was designed to address issues related to scalp-recorded event-related potential (ERP) indices of mental workload and to transition this technology from the laboratory to cockpit simulator environments for use as a systems engineering tool. The project involved five main tasks: (1) Two laboratory studies confirmed the generality of the ERP indices of workload obtained in the Phase 1 study and revealed two additional ERP components related to workload. (2) A task analysis of flight scenarios and pilot tasks in the Advanced Concepts Flight Simulator (ACFS) defined cockpit events (i.e., displays, messages, alarms) that would be expected to elicit ERPs related to workload. (3) Software was developed to support ERP data analysis. An existing ARD-proprietary package of ERP data analysis routines was upgraded, new graphics routines were developed to enhance interactive data analysis, and routines were developed to compare alternative single-trial analysis techniques using simulated ERP data. (4) Working in conjunction with NASA Langley research scientists and simulator engineers, preparations were made for an ACFS validation study of ERP measures of workload. (5) A design specification was developed for a general purpose, computerized, workload assessment system that can function in simulators such as the ACFS.

  16. A Visual Basic simulation software tool for performance analysis of a membrane-based advanced water treatment plant.

    PubMed

    Pal, P; Kumar, R; Srivastava, N; Chowdhury, J

    2014-02-01

    A Visual Basic simulation software package (WATTPPA) has been developed to analyse the performance of an advanced wastewater treatment plant. This user-friendly and menu-driven software is based on a dynamic mathematical model for an industrial wastewater treatment scheme that integrates chemical, biological and membrane-based unit operations. The software-predicted results corroborate very well with the experimental findings, as indicated by an overall correlation coefficient of the order of 0.99. The software permits pre-analysis and manipulation of input data, helps in optimization and exhibits performance of an integrated plant visually on a graphical platform. It allows quick performance analysis of the whole system as well as the individual units. The software, the first of its kind in its domain and built in the well-known Microsoft Excel environment, is likely to be very useful in successful design, optimization and operation of an advanced hybrid treatment plant for hazardous wastewater. PMID:23982824

  17. Advanced simulation technology used to reduce accident rates through a better understanding of human behaviors and human perception

    NASA Astrophysics Data System (ADS)

    Manser, Michael P.; Hancock, Peter A.

    1996-06-01

    Human beings and technology have attained a mutually dependent and symbiotic relationship. It is easy to recognize how each depends on the other for survival. It is also easy to see how technology advances due to human activities. However, the role technology plays in advancing humankind is seldom examined. This presentation examines two research areas in which advanced visual simulation systems play an integral and essential role in understanding human perception and behavior. The ultimate goal of this research is the betterment of humankind through reduced accident and death rates in transportation environments. The first research area examined involved the estimation of time-to-contact. A high-fidelity wrap-around simulator (RAS) was used to examine people's ability to estimate time-to-contact. The ability of people to estimate the amount of time before an oncoming vehicle will collide with them is a necessary skill for avoiding collisions. A vehicle approached participants at one of three velocities, and while en route to the participant, the vehicle disappeared. The participants' task was to respond at the moment they felt the vehicle would have reached them. Results are discussed in terms of the accuracy of time-to-contact estimates and the practical applications of the result. The second area of research investigates how various visual stimuli on underground transportation tunnel walls affect the perception of vehicle speed. A RAS is paramount in creating visual patterns in peripheral vision. Flat-screen or front-screen simulators do not have this ability. Results are discussed in terms of speed perception and the application of these results to real world environments.

  18. Real-time application of advanced three-dimensional graphic techniques for research aircraft simulation

    NASA Technical Reports Server (NTRS)

    Davis, Steven B.

    1990-01-01

    Visual aids are valuable assets to engineers for design, demonstration, and evaluation. Discussed here are a variety of advanced three-dimensional graphic techniques used to enhance the displays of test aircraft dynamics. The new software's capabilities are examined and possible future uses are considered.

  19. Why the Petascale era will drive improvements in the management of the full lifecycle of earth science data.

    NASA Astrophysics Data System (ADS)

    Wyborn, L.

    2012-04-01

    The advent of the petascale era, in both storage and compute facilities, will offer new opportunities for earth scientists to transform the way they do their science and to undertake cross-disciplinary science at a global scale. No longer will data have to be averaged and subsampled: it can be analysed at its fullest resolution at national or even global scales. Much larger data volumes can be analysed in single passes and at higher resolution: large scale cross domain science is now feasible. However, in general, earth sciences have been slow to capitalise on the potential of these new petascale compute facilities: many struggle to even use terascale facilities. Making use of these new facilities will require a vast improvement in the management of the full life cycle of data: in reality it will need to be transformed. Many of our current issues with earth science data are historic and stem from the limitations of early data storage systems. As storage was so expensive, metadata was usually stored separately from the data and attached as a readme file. Likewise, attributes that defined uncertainty, reliability and traceability were recorded in lab notebooks and rarely stored with the data. Data were routinely transferred as files. These new opportunities mean that the traditional paradigm of discover, display, and locally download and process is too limited. For data access and assimilation to be improved, data will need to be self-describing. For heterogeneous data to be rapidly integrated, attributes such as reliability, uncertainty and traceability will need to be systematically recorded with each observation. The petascale era also requires that individual data files be transformed and aggregated into calibrated data arrays or data cubes. Standards become critical and are the enablers of integration. These changes are common to almost every science discipline. What makes earth sciences unique is that many domains record time series data, particularly in the

  20. Akuna - Integrated Toolsets Supporting Advanced Subsurface Flow and Transport Simulations for Environmental Management

    SciTech Connect

    Schuchardt, Karen L.; Agarwal, Deborah A.; Finsterle, Stefan A.; Gable, Carl W.; Gorton, Ian; Gosink, Luke J.; Keating, Elizabeth H.; Lansing, Carina S.; Meyer, Joerg; Moeglein, William A.M.; Pau, George S.H.; Porter, Ellen A.; Purohit, Sumit; Rockhold, Mark L.; Shoshani, Arie; Sivaramakrishnan, Chandrika

    2012-04-24

    A next generation open source subsurface simulator and user environment for environmental management is being developed through a collaborative effort across Department of Energy National Laboratories. The flow and transport simulator, Amanzi, will be capable of modeling complex subsurface environments and processes using both unstructured and adaptive meshes at very fine spatial resolutions that require supercomputing-scale resources. The user environment, Akuna, provides users with a range of tools to manage environmental and simulator data sets, create models, manage and share simulation data, and visualize results. Underlying the user interface are core toolsets that provide algorithms for sensitivity analysis, parameter estimation, and uncertainty quantification. Akuna is open-source, cross platform software that is initially being demonstrated on the Hanford BC Cribs remediation site. In this paper, we describe the emerging capabilities of Akuna and illustrate how these are being applied to the BC Cribs site.

  1. Final Report for "Implementation and Evaluation of Multigrid Linear Solvers into Extended Magnetohydrodynamic Codes for Petascale Computing"

    SciTech Connect

    Srinath Vadlamani; Scott Kruger; Travis Austin

    2008-06-19

    Extended magnetohydrodynamic (MHD) codes are used to model the large, slow-growing instabilities that are projected to limit the performance of the International Thermonuclear Experimental Reactor (ITER). The multiscale nature of the extended MHD equations requires an implicit approach. The current linear solvers needed for the implicit algorithm scale poorly because the resultant matrices are so ill-conditioned. A new solver is needed, especially one that scales to the petascale. The most successful scalable parallel-processor solvers to date are multigrid solvers. Applying multigrid techniques to a set of equations whose fundamental modes are dispersive waves is a promising solution to CEMM problems. For Phase 1, we implemented multigrid preconditioners from the HYPRE project of the Center for Applied Scientific Computing at LLNL, via PETSc of the DOE SciDAC TOPS project, for the real matrix systems of the extended MHD code NIMROD, which is one of the primary modeling codes of the OFES-funded Center for Extended Magnetohydrodynamic Modeling (CEMM) SciDAC. We successfully implemented the multigrid solvers on a fusion test problem that gives rise to real matrix systems, and in the process learned about the details of the NIMROD data structures and the difficulties of inverting NIMROD operators. The further success of this project will allow for efficient usage of future petascale computers at the National Leadership Facilities: Oak Ridge National Laboratory, Argonne National Laboratory, and the National Energy Research Scientific Computing Center. The project will be a collaborative effort between computational plasma physicists and applied mathematicians at Tech-X Corporation, applied mathematicians at Front Range Scientific Computations, Inc. (who are collaborators on the HYPRE project), and other computational plasma physicists involved with the CEMM project.
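
    As a rough illustration of this approach (not the actual NIMROD integration, and assuming a petsc4py build linked against hypre), selecting a BoomerAMG multigrid preconditioner for a Krylov solve through PETSc looks roughly like this:

        from petsc4py import PETSc

        n = 100
        # Toy 1D Laplacian standing in for an ill-conditioned implicit operator.
        A = PETSc.Mat().createAIJ([n, n], nnz=3)
        for i in range(n):
            if i > 0:
                A.setValue(i, i - 1, -1.0)
            A.setValue(i, i, 2.0)
            if i < n - 1:
                A.setValue(i, i + 1, -1.0)
        A.assemble()

        x, b = A.createVecs()
        b.set(1.0)

        # GMRES preconditioned with hypre's BoomerAMG algebraic multigrid.
        opts = PETSc.Options()
        opts['pc_hypre_type'] = 'boomeramg'
        ksp = PETSc.KSP().create()
        ksp.setOperators(A)
        ksp.setType('gmres')
        ksp.getPC().setType('hypre')
        ksp.setFromOptions()
        ksp.solve(b, x)
        print('converged in', ksp.getIterationNumber(), 'iterations')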

  2. A mathematical representation of an advanced helicopter for piloted simulator investigations of control system and display variations

    NASA Technical Reports Server (NTRS)

    Aiken, E. W.

    1980-01-01

    A mathematical model of an advanced helicopter is described. The model is suitable for use in control/display research involving piloted simulation. The general design approach for the six degree of freedom equations of motion is to use the full set of nonlinear gravitational and inertial terms of the equations and to express the aerodynamic forces and moments as the reference values and first order terms of a Taylor series expansion about a reference trajectory defined as a function of longitudinal airspeed. Provisions for several different specific and generic flight control systems are included in the model. The logic required to drive various flight control and weapon delivery symbols on a pilot's electronic display is also provided. Finally, the model includes a simplified representation of low altitude wind and turbulence effects. This model was used in a piloted simulator investigation of the effects of control system and display variations for an attack helicopter mission.

  3. An Aerodynamic Performance Evaluation of the NASA/Ames Research Center Advanced Concepts Flight Simulator. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Donohue, Paul F.

    1987-01-01

    The results of an aerodynamic performance evaluation of the National Aeronautics and Space Administration (NASA)/Ames Research Center Advanced Concepts Flight Simulator (ACFS), conducted in association with the Navy-NASA Joint Institute of Aeronautics, are presented. The ACFS is a full-mission flight simulator which provides an excellent platform for the critical evaluation of emerging flight systems and aircrew performance. The propulsion and flight dynamics models were evaluated using classical flight test techniques. The aerodynamic performance model of the ACFS was found to realistically represent that of current day, medium range transport aircraft. Recommendations are provided to enhance the capabilities of the ACFS to a level forecast for 1995 transport aircraft. The graphical and tabular results of this study will establish a performance section of the ACFS Operations Manual.

  4. ATHENA (Advanced Thermal Hydraulic Energy Network Analyzer) simulation of a loss of coolant accident in a space reactor

    SciTech Connect

    Roth, P.A.; Shumway, R.W.

    1988-01-01

    The Advanced Thermal Hydraulic Energy Network Analyzer (ATHENA) code was used to simulate a loss-of-coolant accident (LOCA) in a conceptual space reactor design. ATHENA provides the capability of simulating the thermal-hydraulic behavior of the wide variety of systems which are being considered for use in space reactors. Flow loops containing any one of several available working fluids may interact through thermal connections with other loops containing the same or a different working fluid. The code can be used to model special systems such as: heat pipes, point reactor kinetics, plant control systems, turbines, valves, and pumps. This work demonstrates the application of the thermal radiation model which has been recently incorporated into ATHENA and verifies the need for supplemental reactor cooling to prevent reactor fuel damage in the event of a LOCA.

  5. A study on optimization of hybrid drive train using Advanced Vehicle Simulator (ADVISOR)

    NASA Astrophysics Data System (ADS)

    Same, Adam; Stipe, Alex; Grossman, David; Park, Jae Wan

    This study investigates the advantages and disadvantages of three hybrid drive train configurations: series, parallel, and "through-the-ground" parallel. Power flow simulations are conducted with the MATLAB/Simulink-based software ADVISOR. These simulations are then applied to the UC Davis SAE Formula Hybrid vehicle. ADVISOR performs simulation calculations for vehicle position using a combined backward/forward method. These simulations are used to study how efficiency and agility are affected by the motor, fuel converter, and hybrid configuration. Three different vehicle models are developed to optimize the drive train of a vehicle for three stages of the SAE Formula Hybrid competition: autocross, endurance, and acceleration. Input cycles are created based on rough estimates of track geometry. The output from these ADVISOR simulations is a series of plots of velocity profile and energy storage state of charge that provide a good estimate of how the Formula Hybrid vehicle will perform on the given course. The most noticeable discrepancy between the input cycle and the actual velocity profile of the vehicle occurs during deceleration. A weighted ranking system is developed to organize the simulation results and to determine the best drive train configuration for the Formula Hybrid vehicle. Results show that the through-the-ground parallel configuration with front-mounted motors achieves an optimal balance of efficiency, simplicity, and cost. ADVISOR proves to be a useful tool for vehicle power train design for the SAE Formula Hybrid competition. This vehicle model based on ADVISOR simulation is applicable to various studies concerning performance and efficiency of hybrid drive trains.
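
    As a schematic of the weighted-ranking step described above, the snippet below combines per-event scores into a single figure of merit for each configuration. The event weights and scores are invented placeholders; they are not values from the study or from ADVISOR output.

        # Hypothetical weighted ranking of drive-train configurations across the three events.
        weights = {"autocross": 0.3, "endurance": 0.5, "acceleration": 0.2}
        scores = {  # normalized 0-1 scores per configuration and event (placeholders)
            "series":             {"autocross": 0.70, "endurance": 0.60, "acceleration": 0.80},
            "parallel":           {"autocross": 0.75, "endurance": 0.70, "acceleration": 0.75},
            "through_the_ground": {"autocross": 0.80, "endurance": 0.75, "acceleration": 0.78},
        }

        def merit(cfg):
            """Weighted sum of the per-event scores for one configuration."""
            return sum(weights[event] * scores[cfg][event] for event in weights)

        for cfg in sorted(scores, key=merit, reverse=True):
            print(f"{cfg:20s} weighted score = {merit(cfg):.3f}")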

  6. OPENING REMARKS: Scientific Discovery through Advanced Computing

    NASA Astrophysics Data System (ADS)

    Strayer, Michael

    2006-01-01

    Good morning. Welcome to SciDAC 2006 and Denver. I share greetings from the new Undersecretary for Energy, Ray Orbach. Five years ago SciDAC was launched as an experiment in computational science. The goal was to form partnerships among science applications, computer scientists, and applied mathematicians to take advantage of the potential of emerging terascale computers. This experiment has been a resounding success. SciDAC has emerged as a powerful concept for addressing some of the biggest challenges facing our world. As significant as these successes were, I believe there is also significance in the teams that achieved them. In addition to their scientific aims these teams have advanced the overall field of computational science and set the stage for even larger accomplishments as we look ahead to SciDAC-2. I am sure that many of you are expecting to hear about the results of our current solicitation for SciDAC-2. I’m afraid we are not quite ready to make that announcement. Decisions are still being made and we will announce the results later this summer. Nearly 250 unique proposals were received and evaluated, involving literally thousands of researchers, postdocs, and students. These collectively requested more than five times our expected budget. This response is a testament to the success of SciDAC in the community. In SciDAC-2 our budget has been increased to about 70 million for FY 2007 and our partnerships have expanded to include the Environment and National Security missions of the Department. The National Science Foundation has also joined as a partner. These new partnerships are expected to expand the application space of SciDAC, and broaden the impact and visibility of the program. We have, with our recent solicitation, expanded to turbulence, computational biology, and groundwater reactive modeling and simulation. We are currently talking with the Department’s applied energy programs about risk assessment, optimization of complex systems - such

  7. Computational Plasma Physics at the Bleeding Edge: Simulating Kinetic Turbulence Dynamics in Fusion Energy Sciences

    NASA Astrophysics Data System (ADS)

    Tang, William

    2013-04-01

    Advanced computing is generally recognized to be an increasingly vital tool for accelerating progress in scientific research in the 21st Century. The imperative is to translate the combination of the rapid advances in super-computing power together with the emergence of effective new algorithms and computational methodologies to help enable corresponding increases in the physics fidelity and the performance of the scientific codes used to model complex physical systems. If properly validated against experimental measurements and verified with mathematical tests and computational benchmarks, these codes can provide more reliable predictive capability for the behavior of complex systems, including fusion energy relevant high temperature plasmas. The magnetic fusion energy research community has made excellent progress in developing advanced codes for which computer run-time and problem size scale very well with the number of processors on massively parallel supercomputers. A good example is the effective usage of the full power of modern leadership class computational platforms from the terascale to the petascale and beyond to produce nonlinear particle-in-cell simulations which have accelerated progress in understanding the nature of plasma turbulence in magnetically-confined high temperature plasmas. Illustrative results provide great encouragement for being able to include increasingly realistic dynamics in extreme-scale computing campaigns to enable predictive simulations with unprecedented physics fidelity. Some illustrative examples will be presented of the algorithmic progress from the magnetic fusion energy sciences area in dealing with low memory per core extreme scale computing challenges for the current top 3 supercomputers worldwide. These include advanced CPU systems (such as the IBM-Blue-Gene-Q system and the Fujitsu K Machine) as well as the GPU-CPU hybrid system (Titan).

  8. Quantifying the Effect of Fast Charger Deployments on Electric Vehicle Utility and Travel Patterns via Advanced Simulation: Preprint

    SciTech Connect

    Wood, E.; Neubauer, J.; Burton, E.

    2015-02-01

    The disparate characteristics between conventional (CVs) and battery electric vehicles (BEVs) in terms of driving range, refill/recharge time, and availability of refuel/recharge infrastructure inherently limit the relative utility of BEVs when benchmarked against traditional driver travel patterns. However, given a high penetration of high-power public charging combined with driver tolerance for rerouting travel to facilitate charging on long-distance trips, the difference in utility between CVs and BEVs could be marginalized. We quantify the relationships between BEV utility, the deployment of fast chargers, and driver tolerance for rerouting travel and extending travel durations by simulating BEVs operated over real-world travel patterns using the National Renewable Energy Laboratory's Battery Lifetime Analysis and Simulation Tool for Vehicles (BLAST-V). With support from the U.S. Department of Energy's Vehicle Technologies Office, BLAST-V has been developed to include algorithms for estimating the available range of BEVs prior to the start of trips, for rerouting baseline travel to utilize public charging infrastructure when necessary, and for making driver travel decisions for those trips in the presence of available public charging infrastructure, all while conducting advanced vehicle simulations that account for battery electrical, thermal, and degradation response. Results from BLAST-V simulations on vehicle utility, frequency of inserted stops, duration of charging events, and additional time and distance necessary for rerouting travel are presented to illustrate how BEV utility and travel patterns can be affected by various fast charge deployments.
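
    The following sketch illustrates, in highly simplified form, the kind of pre-trip feasibility and rerouting decision the abstract describes; the function, thresholds, and numbers are hypothetical and are not BLAST-V internals.

        def plan_trip(trip_km, range_km, detour_km, detour_min, extra_time_tolerance_min):
            """Decide whether a BEV trip is feasible directly, with one fast-charge stop, or not at all."""
            if trip_km <= range_km:
                return "drive BEV directly"
            # assume one full fast charge en route roughly restores the original range
            if trip_km + detour_km <= 2 * range_km and detour_min <= extra_time_tolerance_min:
                return "drive BEV with one fast-charge stop (rerouted)"
            return "trip exceeds BEV utility (would require an alternative vehicle)"

        print(plan_trip(trip_km=180, range_km=120, detour_km=15,
                        detour_min=35, extra_time_tolerance_min=45))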

  9. Energy Simulation studies in IEA/SHC Task 18 advanced glazing and associated materials for solar and building applications

    SciTech Connect

    Sullivan, R.; Selkowitz, S.; Lyons, P.

    1995-04-01

    Researchers participating in IEA/SHC Task 18 on advanced glazing materials have as their primary objective the development of new innovative glazing products such as high performance glazings, wavelength selective glazings, chromogenic optical switching devices, and light transport mechanisms that will lead to significant energy use reductions and increased comfort in commercial and residential buildings. Part of the Task 18 effort involves evaluation of the energy and comfort performance of these new glazings through the use of various performance analysis simulation tools. Eleven countries (Australia, Denmark, Finland, Germany, Italy, Netherlands, Norway, Spain, Sweden, Switzerland, and the United States) are contributing to this multi-year simulation study to better understand the complex heat transfer interactions that determine window performance. Each country has selected particular simulation programs and identified the following items to guide the simulation tasks: (1) geographic locations; (2) building types; (3) window systems and control strategies; and (4) analysis parameters of interest. This paper summarizes the results obtained thus far by several of the research organizations.

  10. Simulation evaluation of the advanced control concept for the NASA V/STOL research aircraft (VSRA)

    NASA Technical Reports Server (NTRS)

    Moralez, E.; Merrick, V. K.; Schroeder, J. A.

    1987-01-01

    Two candidate control systems for the Vertical/Short Takeoff and Landing (V/STOL) Research Aircraft (VSRA) are described, both of which are limited-authority, digital, fly-by-wire variants of the original YAV-8B Harrier control system. The performance of these systems was compared with that of an ideal, full-authority system in simulated, adverse-weather V/STOL shipboard operations using the Ames Research Center's Vertical Motion Simulator. Both systems showed some performance degradation relative to the ideal, but both were adequate to meet VSRA program objectives. The favored system, selected because of safety considerations, was further simulated using a precision visual hovering task that verified its acceptability.

  11. A Computational Methodology for Simulating Thermal Loss Testing of the Advanced Stirling Convertor

    NASA Technical Reports Server (NTRS)

    Reid, Terry V.; Wilson, Scott D.; Schifer, Nicholas A.; Briggs, Maxwell H.

    2011-01-01

    Convertor and generator testing is carried out in tests designed to characterize convertor performance when subjected to environments intended to simulate launch and space conditions. The value of net heat input must be known in order to calculate convertor efficiency and to validate convertor performance. Specially designed test hardware was used to verify and validate a two-step methodology for the prediction of net heat input. The lessons learned from these simulations have been applied to previous convertor simulations. As heat is supplied to the convertors, electric power is produced and measured. Net heat input to the convertor is one parameter that contributes to the calculation of efficiency, but it is not measured directly; it is inferred from the insulation losses by determining the current status of the thermal conductivity of the micro-porous insulation, matching the heat source and hot-end temperatures, and matching the temperature difference across the Kaowool insulation.

  12. Validation of advanced NSSS simulator model for loss-of-coolant accidents

    SciTech Connect

    Kao, S.P.; Chang, S.K.; Huang, H.C.

    1995-09-01

    The replacement of the NSSS (Nuclear Steam Supply System) model on the Millstone 2 full-scope simulator has significantly increased its fidelity to simulate adverse conditions in the RCS. The new simulator NSSS model is a real-time derivative of the Nuclear Plant Analyzer by ABB. The thermal-hydraulic model is a five-equation, non-homogeneous model for water, steam, and non-condensible gases. The neutronic model is a three-dimensional nodal diffusion model. In order to certify the new NSSS model for operator training, an extensive validation effort has been performed by benchmarking the model performance against RELAP5/MOD2. This paper presents the validation results for the cases of small- and large-break loss-of-coolant accidents (LOCA). Detailed comparisons in the phenomena of reflux-condensation, phase separation, and two-phase natural circulation are discussed.

  13. Advanced methods in global gyrokinetic full f particle simulation of tokamak transport

    SciTech Connect

    Ogando, F.; Heikkinen, J. A.; Henriksson, S.; Janhunen, S. J.; Kiviniemi, T. P.; Leerink, S.

    2006-11-30

    A new full f nonlinear gyrokinetic simulation code, named ELMFIRE, has been developed for simulating transport phenomena in tokamak plasmas. The code is based on a gyrokinetic particle-in-cell algorithm, which can consider electrons and ions jointly or separately, as well as arbitrary impurities. The implicit treatment of the ion polarization drift and the use of full f methods allow for simulations of strongly perturbed plasmas including wide orbit effects, steep gradients and rapid dynamic changes. This article presents in more detail the algorithms incorporated into ELMFIRE, as well as benchmarking comparisons to both neoclassical theory and other codes. ELMFIRE calculates plasma dynamics by following the evolution of a number of sample particles. Because it uses a stochastic algorithm, its results are influenced by statistical noise. The effect of noise on relevant quantities is analyzed. Turbulence spectra of FT-2 plasmas have been calculated with ELMFIRE, obtaining results consistent with experimental data.
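
    To illustrate the particle-in-cell idea behind such codes in the simplest possible setting, the sketch below advances a 1D electrostatic plasma on a periodic grid (nearest-grid-point deposition, FFT Poisson solve, explicit push). It is a generic textbook toy with arbitrary normalized parameters, not ELMFIRE's full-f gyrokinetic algorithm; note that the finite number of markers introduces exactly the kind of statistical noise the abstract mentions, shrinking roughly as one over the square root of the number of markers per cell.

        import numpy as np

        # charge q = -1, mass m = 1 markers over a fixed neutralizing background (normalized units)
        rng = np.random.default_rng(0)
        L, n_grid, n_part, dt, q, m = 2.0 * np.pi, 64, 20000, 0.1, -1.0, 1.0
        dx = L / n_grid
        x = rng.uniform(0.0, L, n_part)                   # marker positions
        v = rng.normal(0.0, 1.0, n_part)                  # marker velocities

        def electric_field(x):
            idx = (x / dx).astype(int) % n_grid
            counts = np.bincount(idx, minlength=n_grid).astype(float)
            rho = q * (counts / counts.mean() - 1.0)      # charge-density perturbation
            k = 2.0 * np.pi * np.fft.fftfreq(n_grid, d=dx)
            rho_k = np.fft.fft(rho)
            phi_k = np.zeros(n_grid, dtype=complex)
            phi_k[1:] = rho_k[1:] / k[1:] ** 2            # Poisson:  d2(phi)/dx2 = -rho
            return np.real(np.fft.ifft(-1j * k * phi_k))  # E = -d(phi)/dx

        for step in range(200):
            E = electric_field(x)
            idx = (x / dx).astype(int) % n_grid
            v += (q / m) * E[idx] * dt                    # gather the cell field and accelerate
            x = (x + v * dt) % L                          # drift with periodic wrap-around

        print("rms electric field:", np.sqrt(np.mean(electric_field(x) ** 2)))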

  14. A review of recent advances in numerical simulations of microscale fuel processor for hydrogen production

    NASA Astrophysics Data System (ADS)

    Holladay, J. D.; Wang, Y.

    2015-05-01

    Microscale (<5 W) reformers for hydrogen production have been investigated for over a decade. These devices are intended to provide hydrogen for small fuel cells. Due to the reformer's small size, numerical simulations are critical to understand heat and mass transfer phenomena occurring in the systems and help guide the further improvements. This paper reviews the development of the numerical codes and details the reaction equations used. The majority of the devices utilized methanol as the fuel due to methanol's low reforming temperature and high conversion, although, there are several methane fueled systems. The increased computational power and more complex codes have led to improved accuracy of numerical simulations. Initial models focused on the reformer, while more recently, the simulations began including other unit operations such as vaporizers, inlet manifolds, and combustors. These codes are critical for developing the next generation systems. The systems reviewed included plate reactors, microchannel reactors, and annulus reactors for both wash-coated and packed bed systems.

  15. Advanced material modelling in numerical simulation of primary acetabular press-fit cup stability.

    PubMed

    Souffrant, R; Zietz, C; Fritsche, A; Kluess, D; Mittelmeier, W; Bader, R

    2012-01-01

    Primary stability of artificial acetabular cups, used for total hip arthroplasty, is required for the subsequent osteointegration and good long-term clinical results of the implant. Although closed-cell polymer foams represent an adequate bone substitute in experimental studies investigating primary stability, correct numerical modelling of this material depends on the parameter selection. Material parameters necessary for crushable foam plasticity behaviour were derived from numerical simulations matched with experimental tests of the polymethacrylimide raw material. Experimental primary stability tests of acetabular press-fit cups, consisting of static shell assembly followed by pull-out and lever-out testing, were subsequently simulated using finite element analysis. Identified and optimised parameters allowed the accurate numerical reproduction of the raw material tests. Correlation between experimental tests and the numerical simulation of primary implant stability depended on the value of interference fit. However, the validated material model provides the opportunity for subsequent parametric numerical studies. PMID:22817471

  16. Heads up display for the Flight Simulator for Advanced Aircraft (FSAA)

    NASA Technical Reports Server (NTRS)

    Brocker, D. H.; Ganzler, B. C.

    1975-01-01

    A heads-up flight director display designed for a V/STOL lift-fan transport simulation study is described. The pilot's visual flight scene had the heads-up display optically superimposed over the usual out-the-window, video flight scene. The flight director display required the development and integration of a flexible, programmable display generator, graphics assembler, display driver, computer interface system, and special collimating optics for the pilot's flight scene. The optical overlay was realistic because both scenes appeared at optical infinity, and the flexibility of this display device establishes its value as a research tool for use in future flight simulation programs.

  17. Material properties of low pressure chemical vapor deposited silicon nitride for modeling and calibrating the simulation of advanced isolation structures

    NASA Astrophysics Data System (ADS)

    Smeys, Peter I. L.; Griffin, Peter B.; Saraswat, Krishna C.

    1995-08-01

    The increasing cost and complexity of semiconductor process development has led to the widespread use of multidimensional semiconductor process simulators. The success of a program like SUPREM-IV is primarily due to the fact that it is based on physical models, rather than empirical equations. This is in contrast to the first generation of process simulators, which calculated impurity profiles and oxide thickness in one dimension based on semiempirical approaches. SUPREM-IV incorporates two-dimensional coupled stress-dependent oxidation and impurity diffusion, which allows accurate simulation of state-of-the-art integrated processes, provided that accurate model parameter sets are available. In this article we present an improved calibration methodology for simulation of advanced isolation technologies using SUPREM-IV, based on the experimental determination of the material properties of silicon nitride. The proposed strategy is applicable not only to SUPREM-IV but to any numerical simulator that uses the stress-dependent oxidation models to calculate oxide growth. In order to simulate experimental isolation boundary shapes, the oxidation models in SUPREM-IV must be calibrated. This requires a set of five fitting parameters, i.e., the material viscosities and activation volumes for stress-dependent diffusion, reaction rate, and critical stress. These parameters form a quintuplet but are not unique. Multiplying the viscosity values and dividing the activation volumes by a constant will yield exactly the same isolation structure boundary shape. The calculated stresses in the substrate, however, do not remain constant when different quintuplets are used. This has serious implications since isolation structures require the stress levels in the silicon substrate to remain well below the yield stress of silicon. If a nonoptimal parameter set is used, incorrect designs will result. Based on the experimental extraction of the silicon nitride viscosity by measuring the

  18. High Level Requirements for the Nuclear Energy -- Knowledge Base for Advanced Modeling and Simulation (NE-KAMS)

    SciTech Connect

    Rich Johnson; Hyung Lee; Kimberlyn C. Mousseau

    2011-09-01

    The US Department of Energy, Office of Nuclear Energy (DOE-NE), has been tasked with the important mission of ensuring that nuclear energy remains a compelling and viable energy source in the U.S. The motivations behind this mission include cost-effectively meeting the expected increases in the power needs of the country, reducing carbon emissions and reducing dependence on foreign energy sources. In the near term, to ensure that nuclear power remains a key element of U.S. energy strategy and portfolio, the DOE-NE will be working with the nuclear industry to support safe and efficient operations of existing nuclear power plants. In the long term, to meet the increasing energy needs of the U.S., the DOE-NE will be investing in research and development (R&D) and working in concert with the nuclear industry to build and deploy new, safer and more efficient nuclear power plants. The safe and efficient operations of existing nuclear power plants and designing, licensing and deploying new reactor designs, however, will require focused R&D programs as well as the extensive use and leveraging of advanced modeling and simulation (M&S). M&S will play a key role in ensuring safe and efficient operations of existing and new nuclear reactors. The DOE-NE has been actively developing and promoting the use of advanced M&S in reactor design and analysis through its R&D programs, e.g., the Nuclear Energy Advanced Modeling and Simulation (NEAMS) and Consortium for Advanced Simulation of Light Water Reactors (CASL) programs. Also, nuclear reactor vendors are already using CFD and CSM, for design, analysis, and licensing. However, these M&S tools cannot be used with confidence for nuclear reactor applications unless accompanied and supported by verification and validation (V&V) and uncertainty quantification (UQ) processes and procedures which provide quantitative measures of uncertainty for specific applications. The Nuclear Energy Knowledge base for Advanced Modeling and Simulation

  19. Advanced Signal Processing for Integrated LES-RANS Simulations: Anti-aliasing Filters

    NASA Technical Reports Server (NTRS)

    Schlueter, J. U.

    2003-01-01

    Currently, a wide variety of flow phenomena are addressed with numerical simulations. Many flow solvers are optimized to simulate a limited spectrum of flow effects effectively, such as single parts of a flow system, but are either inadequate or too expensive to be applied to a very complex problem. As an example, the flow through a gas turbine can be considered. In the compressor and the turbine section, the flow solver has to be able to handle the moving blades, model the wall turbulence, and predict the pressure and density distribution properly. This can be done by a flow solver based on the Reynolds-Averaged Navier-Stokes (RANS) approach. On the other hand, the flow in the combustion chamber is governed by large scale turbulence, chemical reactions, and the presence of fuel spray. Experience shows that these phenomena require an unsteady approach. Hence, for the combustor, the use of a Large Eddy Simulation (LES) flow solver is desirable. While many design problems of a single flow passage can be addressed by separate computations, only the simultaneous computation of all parts can guarantee the proper prediction of multi-component phenomena, such as compressor/combustor instability and combustor/turbine hot-streak migration. Therefore, a promising strategy to perform full aero-thermal simulations of gas-turbine engines is the use of a RANS flow solver for the compressor sections, an LES flow solver for the combustor, and again a RANS flow solver for the turbine section.

  20. Advances in Chimera Grid Tools for Multi-Body Dynamics Simulations and Script Creation

    NASA Technical Reports Server (NTRS)

    Chan, William M.

    2004-01-01

    This viewgraph presentation contains information about (1) Framework for multi-body dynamics - Geometry Manipulation Protocol (GMP), (2) Simulation procedure using Chimera Grid Tools (CGT) and OVERFLOW-2 (3) Further recent developments in Chimera Grid Tools OVERGRID, Grid modules, Script library and (4) Future work.

  1. A Feedback Intervention to Increase Digital and Paper Checklist Performance in Technically Advanced Aircraft Simulation

    ERIC Educational Resources Information Center

    Rantz, William G.; Van Houten, Ron

    2011-01-01

    This study examined whether pilots operating a flight simulator completed digital or paper flight checklists more accurately after receiving postflight graphic and verbal feedback. The dependent variable was the number of checklist items completed correctly per flight. Following treatment, checklist completion with paper and digital checklists…

  2. ADVANCED URBANIZED METEOROLOGICAL MODELING AND AIR QUALITY SIMULATIONS WITH CMAQ AT NEIGHBORHOOD SCALES

    EPA Science Inventory

    We present results from a study testing the new boundary layer parameterization method, the canopy drag approach (DA) which is designed to explicitly simulate the effects of buildings, street and tree canopies on the dynamic, thermodynamic structure and dispersion fields in urban...

  3. Simulations of the L-H transition on experimental advanced superconducting Tokamak

    SciTech Connect

    Weiland, Jan

    2014-12-15

    We have simulated the L-H transition on the EAST tokamak [Baonian Wan, EAST and HT-7 Teams, and International Collaborators, “Recent experiments in the EAST and HT-7 superconducting tokamaks,” Nucl. Fusion 49, 104011 (2009)] using a predictive transport code where ion and electron temperatures, electron density, and poloidal and toroidal momenta are simulated self-consistently. This is, as far as we know, the first theory-based simulation of an L-H transition including the whole radius and not making any assumptions about where the barrier should be formed. Another remarkable feature is that we get H-mode gradients in agreement with the α–α_d diagram of Rogers et al. [Phys. Rev. Lett. 81, 4396 (1998)]. Furthermore, the feedback loop emerging from the simulations means that the L-H power threshold increases with the temperature at the separatrix. This is a main feature of the C-Mod experiments [Hubbard et al., Phys. Plasmas 14, 056109 (2007)]. This is also why the power threshold depends on the direction of the grad B drift in the scrape-off layer and also why the power threshold increases with the magnetic field. A further significant general H-mode feature is that the density is much flatter in H-mode than in L-mode.

  4. Development of Advanced Electrochemical Emission Spectroscopy for Monitoring Corrosion in Simulated DOE Liquid Waste

    SciTech Connect

    MacDonald, Digby D.

    2005-06-01

    In this work, the examination of electrochemical noise data comprised three main approaches: one, a computer simulation of the anodic and cathodic activity relating to corrosion on a metal surface; two, experimental modeling of the electrochemical environment inside nuclear waste storage containers and collection of EN generated; and three, Wavelet analysis of the EN data from the first two parts. The simulation of EN proved to be effective in replicating the EN data of both general and pitting corrosion. Using competition mechanisms for the anodic and cathodic sites on the surface, the long-term, low-frequency data generated by localized pitting corrosion was reproduced. Disabling one or more of the rules of the simulation eliminated the low-frequency character of the data, and eliminating all of the rules effectively reproduced general corrosion noise. The simulation accuracy benefited from comparison to experimental data, and conversely, it improved the EN analysis by providing theory for the underlying mechanisms. The experimental electrochemical cell modeled the important factors in nuclear waste storage containers for this EN study; mainly increased temperature and the concentrations of corrosion-inducing or inhibiting chemicals. It also provided a platform for studying how the EN was affected by the competing chemicals.

  5. ANNUAL REPORT. DEVELOPMENT OF ADVANCED ELECTROCHEMICAL EMISSION SPECTROSCOPY FOR MONITORING CORROSION IN SIMULATED DOE LIQUID WASTE

    EPA Science Inventory

    The current report summarizes work performed on the project over the past calendar year (2001). The work concentrated on four areas: the fracture of AISI 4340 steel simulating weld heat affected zones in DOE liquid waste storage tanks, investigation of the passive state on nickel...

  6. Predicting Earthquake Occurrence at Subduction-Zone Plate Boundaries Through Advanced Computer Simulation

    NASA Astrophysics Data System (ADS)

    Matsu'Ura, M.; Hashimoto, C.; Fukuyama, E.

    2004-12-01

    In general, predicting the occurrence of earthquakes is very difficult, because of the complexity of actual faults and the nonlinear interaction between them. From the standpoint of earthquake prediction, however, our target is limited to the large events that completely break down a seismogenic zone. To such large events we may apply the concept of the earthquake cycle. The entire process of earthquake generation cycles generally consists of tectonic loading due to relative plate motion, quasi-static rupture nucleation, dynamic rupture propagation and arrest, and restoration of fault strength. This process can be completely described by a coupled nonlinear system, which consists of an elastic/viscoelastic slip-response function that relates fault slip to shear stress change and a fault constitutive law that prescribes change in shear strength with fault slip and contact time. The shear stress and the shear strength are related to each other through boundary conditions on the fault. The driving force of this system is observed relative plate motion. The system to describe the earthquake generation cycle is conceptually quite simple. The complexity in practical modeling mainly comes from the complexity of the structure of the real Earth. Recently, we have developed a physics-based, predictive simulation system for earthquake generation at plate boundaries in and around Japan, where the Pacific, North American, Philippine Sea, and Eurasian plates interact with one another. The simulation system consists of a crust-mantle structure model, a quasi-static tectonic loading model, and a dynamic rupture propagation model. First, we constructed a realistic 3D model of plate interfaces in and around Japan by applying an inversion technique to ISC hypocenter data, and computed viscoelastic slip-response functions for this structure model. Second, we introduced the slip- and time-dependent fault constitutive law with an inherent strength-restoration mechanism as a basic
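
    In generic form (the notation here is illustrative, not taken from the paper), the coupled system the abstract describes combines a slip-response relation for the shear stress on the fault surface Σ with a slip- and time-dependent strength law, failure occurring wherever stress reaches strength:

        \tau(\mathbf{x},t) \;=\; \tau_{\mathrm{load}}(\mathbf{x},t)
          \;+\; \int_{\Sigma} K(\mathbf{x},\boldsymbol{\xi};t)\,\Delta u(\boldsymbol{\xi},t)\,\mathrm{d}\Sigma(\boldsymbol{\xi}),
        \qquad
        \tau_{s}(\mathbf{x},t) \;=\; f\bigl(\Delta u(\mathbf{x},t),\,t_{c}(\mathbf{x},t)\bigr),

    where K is the elastic/viscoelastic slip-response kernel, Δu the fault slip, τ_load the tectonic loading from relative plate motion, and t_c the contact (healing) time; rupture nucleates and propagates wherever τ reaches the strength τ_s.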

  7. Data Collection Methods for Validation of Advanced Multi-Resolution Fast Reactor Simulations

    SciTech Connect

    Tokuhiro, Akiro; Ruggles, Art; Pointer, David

    2015-01-22

    In pool-type Sodium Fast Reactors (SFR) the regions most susceptible to thermal striping are the upper instrumentation structure (UIS) and the intermediate heat exchanger (IHX). This project experimentally and computationally (CFD) investigated the thermal mixing in the region exiting the reactor core to the UIS. The thermal mixing phenomenon was simulated using two vertical jets at different velocities and temperatures as prototypic of two adjacent channels out of the core. Thermal jet mixing of anticipated flows at different temperatures and velocities was investigated. Velocity profiles were measured throughout the flow region using Ultrasonic Doppler Velocimetry (UDV), and temperatures along the geometric centerline between the jets were recorded using a thermocouple array. CFD simulations, using COMSOL, were used to initially understand the flow, then to design the experimental apparatus, and finally to compare simulation results and measurements characterizing the flows. The experimental results and CFD simulations show that the flow field comprises three regions with respective transitions, namely, convective mixing, transitional (in the flow direction), and post-mixing. Both experiments and CFD simulations support this observation. For the anticipated SFR conditions the flow is momentum dominated and thus thermal mixing is limited due to the short flow length from the exit of the core to the bottom of the UIS. This means that there will be thermal striping at any surface where poorly mixed streams impinge; unless lateral mixing is actively promoted out of the core, thermal striping will prevail. Furthermore, we note that CFD can be considered a 'separate effects (computational) test' and is recommended as part of any integral analysis. Poorly mixed streams therefore have a potential impact on the rest of the SFR design and scaling, especially the placement of internal components, such as the IHX, that may see poorly mixed

  8. Recent advances in renal hypoxia: insights from bench experiments and computer simulations.

    PubMed

    Layton, Anita T

    2016-07-01

    The availability of oxygen in renal tissue is determined by the complex interactions among a host of processes, including renal blood flow, glomerular filtration, arterial-to-venous oxygen shunting, medullary architecture, Na(+) transport, and oxygen consumption. When this delicate balance is disrupted, the kidney may become susceptible to hypoxic injury. Indeed, renal hypoxia has been implicated as one of the major causes of acute kidney injury and chronic kidney diseases. This review highlights recent advances in our understanding of renal hypoxia; some of these studies were published in response to a recent Call for Papers of this journal: Renal Hypoxia. PMID:27147670

  9. Monte Carlo simulation and scatter correction of the GE Advance PET scanner with SimSET and Geant4

    NASA Astrophysics Data System (ADS)

    Barret, Olivier; Carpenter, T. Adrian; Clark, John C.; Ansorge, Richard E.; Fryer, Tim D.

    2005-10-01

    For Monte Carlo simulations to be used as an alternative solution to perform scatter correction, accurate modelling of the scanner as well as speed is paramount. General-purpose Monte Carlo packages (Geant4, EGS, MCNP) allow a detailed description of the scanner but are not efficient at simulating voxel-based geometries (patient images). On the other hand, dedicated codes (SimSET, PETSIM) will perform well for voxel-based objects but will be poor in their capacity of simulating complex geometries such as a PET scanner. The approach adopted in this work was to couple a dedicated code (SimSET) with a general-purpose package (Geant4) to have the efficiency of the former and the capabilities of the latter. The combined SimSET+Geant4 code (SimG4) was assessed on the GE Advance PET scanner and compared to the use of SimSET only. A better description of the resolution and sensitivity of the scanner and of the scatter fraction was obtained with SimG4. The accuracy of scatter correction performed with SimG4 and SimSET was also assessed from data acquired with the 20 cm NEMA phantom. SimG4 was found to outperform SimSET and to give slightly better results than the GE scatter correction methods installed on the Advance scanner (curve fitting and scatter modelling for the 300-650 keV and 375-650 keV energy windows, respectively). In the presence of a hot source close to the edge of the field of view (as found in oxygen scans), the GE curve-fitting method was found to fail whereas SimG4 maintained its performance.

  10. Monte Carlo simulation and scatter correction of the GE advance PET scanner with SimSET and Geant4.

    PubMed

    Barret, Olivier; Carpenter, T Adrian; Clark, John C; Ansorge, Richard E; Fryer, Tim D

    2005-10-21

    For Monte Carlo simulations to be used as an alternative solution to perform scatter correction, accurate modelling of the scanner as well as speed is paramount. General-purpose Monte Carlo packages (Geant4, EGS, MCNP) allow a detailed description of the scanner but are not efficient at simulating voxel-based geometries (patient images). On the other hand, dedicated codes (SimSET, PETSIM) will perform well for voxel-based objects but will be poor in their capacity of simulating complex geometries such as a PET scanner. The approach adopted in this work was to couple a dedicated code (SimSET) with a general-purpose package (Geant4) to have the efficiency of the former and the capabilities of the latter. The combined SimSET+Geant4 code (SimG4) was assessed on the GE Advance PET scanner and compared to the use of SimSET only. A better description of the resolution and sensitivity of the scanner and of the scatter fraction was obtained with SimG4. The accuracy of scatter correction performed with SimG4 and SimSET was also assessed from data acquired with the 20 cm NEMA phantom. SimG4 was found to outperform SimSET and to give slightly better results than the GE scatter correction methods installed on the Advance scanner (curve fitting and scatter modelling for the 300-650 keV and 375-650 keV energy windows, respectively). In the presence of a hot source close to the edge of the field of view (as found in oxygen scans), the GE curve-fitting method was found to fail whereas SimG4 maintained its performance. PMID:16204875

  11. Advanced time integration algorithms for dislocation dynamics simulations of work hardening

    DOE PAGESBeta

    Sills, Ryan B.; Aghaei, Amin; Cai, Wei

    2016-04-25

    Efficient time integration is a necessity for dislocation dynamics simulations of work hardening to achieve experimentally relevant strains. In this work, an efficient time integration scheme using a high-order explicit method with time step subcycling and a newly-developed collision detection algorithm are evaluated. First, time integrator performance is examined for an annihilating Frank–Read source, showing the effects of dislocation line collision. The integrator with subcycling is found to significantly outperform other integration schemes. The performance of the time integration and collision detection algorithms is then tested in a work hardening simulation. The new algorithms show a 100-fold speed-up relative to traditional schemes. As a result, subcycling is shown to improve efficiency significantly while maintaining an accurate solution, and the new collision algorithm allows an arbitrarily large time step size without missing collisions.
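
    A minimal illustration of the subcycling idea (not the authors' high-order integrator or collision detection) is sketched below: degrees of freedom flagged as fast take several small explicit substeps inside each global step, so the global step size is not limited by the stiffest node.

        import numpy as np

        def advance(x, velocity, t, dt, fast, n_sub=16):
            """One global step of size dt; 'fast' components take dt/n_sub explicit substeps."""
            x = x.copy()
            slow = ~fast
            x[slow] += dt * velocity(x, t)[slow]          # one explicit Euler step for slow nodes
            h = dt / n_sub
            for k in range(n_sub):                        # subcycle the fast nodes
                x[fast] += h * velocity(x, t + k * h)[fast]
            return x

        # toy system with widely separated relaxation rates; the last component is stiff
        rates = np.array([1.0, 2.0, 80.0])
        velocity = lambda x, t: -rates * x
        dt = 0.05
        fast = rates * dt > 1.0                           # flag nodes that the global step cannot resolve
        x = np.ones(3)
        for step in range(100):
            x = advance(x, velocity, step * dt, dt, fast)
        print(x)   # all components decay toward zero; without subcycling the stiff one would diverge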

  12. Recent advances in numerical simulation and control of asymmetric flows around slender bodies

    NASA Technical Reports Server (NTRS)

    Kandil, Osama A.; Wong, Tin-Chee; Sharaf, Hazem H.; Liu, C. H.

    1992-01-01

    The problems of asymmetric flow around slender bodies and its control are formulated using the unsteady, compressible, thin-layer or full Navier-Stokes equations, which are solved using an implicit, flux-difference splitting, finite-volume scheme. The problem is numerically simulated for both locally-conical and three-dimensional flows. The numerical applications include studies of the effects of relative incidence, Mach number and Reynolds number on the flow asymmetry. For the control of flow asymmetry, the numerical simulations cover passive and active control methods. For passive control, the effectiveness of vertical fins placed in the leeward plane of geometric symmetry and side strakes with different orientations is studied. For active control, the effectiveness of normal and tangential flow injection and surface heating and a combination of these methods is studied.

  13. Advanced time integration algorithms for dislocation dynamics simulations of work hardening

    NASA Astrophysics Data System (ADS)

    Sills, Ryan B.; Aghaei, Amin; Cai, Wei

    2016-05-01

    Efficient time integration is a necessity for dislocation dynamics simulations of work hardening to achieve experimentally relevant strains. In this work, an efficient time integration scheme using a high order explicit method with time step subcycling and a newly-developed collision detection algorithm are evaluated. First, time integrator performance is examined for an annihilating Frank–Read source, showing the effects of dislocation line collision. The integrator with subcycling is found to significantly out-perform other integration schemes. The performance of the time integration and collision detection algorithms is then tested in a work hardening simulation. The new algorithms show a 100-fold speed-up relative to traditional schemes. Subcycling is shown to improve efficiency significantly while maintaining an accurate solution, and the new collision algorithm allows an arbitrarily large time step size without missing collisions.

  14. A review of recent advances of numerical simulations of microscale fuel processors for hydrogen production

    SciTech Connect

    Holladay, Jamelyn D.; Wang, Yong

    2015-05-01

    Microscale (<5 W) reformers for hydrogen production have been investigated for over a decade. These devices are intended to provide hydrogen for small fuel cells. Due to the reformer’s small size, numerical simulations are critical to understand heat and mass transfer phenomena occurring in the systems. This paper reviews the development of the numerical codes and details the reaction equations used. The majority of the devices utilized methanol as the fuel due to methanol’s low reforming temperature and high conversion, although there are several methane-fueled systems. As computational power has decreased in cost and increased in availability, the codes have increased in complexity and accuracy. Initial models focused on the reformer, while more recently, the simulations began including other unit operations such as vaporizers, inlet manifolds, and combustors. These codes are critical for developing the next-generation systems. The systems reviewed included plate reactors, microchannel reactors, and annulus reactors, for both wash-coated and packed bed systems.

  15. Springback Simulation: Impact of Some Advanced Constitutive Models and Numerical Parameters

    NASA Astrophysics Data System (ADS)

    Haddag, Badis; Balan, Tudor; Abed-Meraim, Farid

    2005-08-01

    The impact of material models on the numerical simulation of springback is investigated. The study is focused on the strain-path sensitivity of two hardening models. While both models predict the Bauschinger effect, their response in the transient zone after a strain-path change is fairly different. Their respective predictions are compared in terms of sequential test response and of strip-drawing springback. For this purpose, an accurate and general time integration algorithm has been developed and implemented in the Abaqus code. The impact of several numerical parameters is also studied in order to assess the overall accuracy of the finite element prediction. For some test geometries, both material and numerical parameters are shown to clearly influence the springback behavior to a large extent. Moreover, a general trend cannot always be extracted, thus justifying the need for the finite element simulation of the stamping process.
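
    The abstract does not specify the two hardening laws, but one common nonlinear kinematic-hardening form that captures the Bauschinger effect, given here only as an illustration, is the Armstrong–Frederick back-stress evolution combined with a von Mises yield surface:

        \dot{\boldsymbol{\alpha}} \;=\; \tfrac{2}{3}\,C\,\dot{\boldsymbol{\varepsilon}}^{p} \;-\; \gamma\,\boldsymbol{\alpha}\,\dot{\bar{\varepsilon}}^{p},
        \qquad
        f \;=\; \sqrt{\tfrac{3}{2}\,(\mathbf{s}-\boldsymbol{\alpha}) : (\mathbf{s}-\boldsymbol{\alpha})} \;-\; \sigma_{y} \;\le\; 0,

    where s is the deviatoric stress, α the back stress, ε^p the plastic strain, and C, γ, σ_y material parameters; the translation of the yield surface through α is what produces the reduced reverse-loading yield stress after a strain-path change.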

  16. Simulation of an advanced scout attack helicopter for crew station studies

    NASA Technical Reports Server (NTRS)

    Lypaczewski, P. A.; Jones, A. D.; Voorhees, J. W.

    1987-01-01

    The system complexity and high workload of the next generation of light scout/attack helicopters is being evaluated in the Crew Station Research and Development Program. This program has been established to study the issues of battle captain performance for one-man versus two-man crews when confronted by a hostile environment. The crew station experiments are described along with the facility elements and simulation.

  17. Advanced software development workstation: Effectiveness of constraint-checking. [spaceflight simulation and planning

    NASA Technical Reports Server (NTRS)

    Izygon, Michel

    1992-01-01

    This report summarizes the findings and lessons learned from the development of an intelligent user interface for a space flight planning simulation program, in the specific area related to constraint-checking. The different functionalities of the Graphical User Interface part and of the rule-based part of the system have been identified. Their respective domain of applicability for error prevention and error checking have been specified.

  18. Study of Plasma Liner Driven Magnetized Target Fusion Via Advanced Simulations

    SciTech Connect

    Samulyak, Roman V.; Parks, Paul

    2013-08-31

    The feasibility of plasma-liner-driven Magnetized Target Fusion (MTF) will be assessed via terascale numerical simulations. In the MTF concept, a plasma liner, formed by merging of a number (60 or more) of radial, highly supersonic plasma jets, implodes on the target in the form of two compact plasma toroids, and compresses it to fusion ignition conditions. By avoiding major difficulties associated with both traditional laser-driven inertial confinement fusion and solid-liner-driven MTF, the plasma-liner-driven MTF potentially provides a low-cost and fast R&D path towards the demonstration of practical fusion energy. High-fidelity numerical simulations of full nonlinear models associated with the plasma liner MTF, using state-of-the-art numerical algorithms and terascale computing, are necessary in order to resolve uncertainties and provide guidance for future experiments. At Stony Brook University, we have developed unique computational capabilities that ideally suit the MTF problem. The FronTier code, developed in collaboration with BNL and LANL under DOE funding including SciDAC for the simulation of 3D multi-material hydro and MHD flows, has been benchmarked and used for fundamental and engineering problems in energy science applications. We have performed 3D simulations of converging supersonic plasma jets, their merger and the formation of the plasma liner, and a study of the corresponding oblique shock problem. We have studied the implosion of the plasma liner on the magnetized plasma target by resolving Rayleigh-Taylor instabilities in 2D and 3D and other relevant physics, and estimated the thermodynamic conditions of the target at the moment of maximum compression as well as the hydrodynamic efficiency of the method.

  19. Advances in HYDRA and its application to simulations of Inertial Confinement Fusion targets

    NASA Astrophysics Data System (ADS)

    Marinak, M. M.; Kerbel, G. D.; Koning, J. M.; Patel, M. V.; Sepke, S. M.; Brown, P. N.; Chang, B.; Procassini, R.; Veitzer, S. A.

    2008-11-01

    We will outline new capabilities added to the HYDRA 2D/3D multiphysics ICF simulation code. These include a new SN multigroup radiation transport package (1D), constitutive models for elastic-plastic (strength) effects, and a mix model. A Monte Carlo burn package is being incorporated to model diagnostic signatures of neutrons, gamma rays and charged particles. A 3D MHD package that treats resistive MHD is available. Improvements to HYDRA's implicit Monte Carlo photonics package, including the addition of angular biasing, now enable integrated hohlraum simulations to complete in substantially shorter time. The heavy ion beam deposition package now includes a new model for ion stopping power developed by the Tech-X Corporation, with improved accuracy below the Bragg peak. Examples will illustrate HYDRA's enhanced capabilities to simulate various aspects of inertial confinement fusion targets. This work was performed under the auspices of Lawrence Livermore National Security, LLC (LLNS) under Contract No. DE-AC52-07NA27344. The work of Tech-X personnel was funded by the Department of Energy under Small Business Innovation Research Contract No. DE-FG02-03ER83797.

  20. Advanced Coupled Simulation of Borehole Thermal Energy Storage Systems and Above Ground Installations

    NASA Astrophysics Data System (ADS)

    Welsch, Bastian; Rühaak, Wolfram; Schulte, Daniel O.; Bär, Kristian; Sass, Ingo

    2016-04-01

    Seasonal thermal energy storage in borehole heat exchanger arrays is a promising technology to reduce primary energy consumption and carbon dioxide emissions. These systems usually consist of several subsystems such as the heat source (e.g. solar thermal collectors or a combined heat and power plant), the heat consumer (e.g. a heating system), diurnal storages (i.e. water tanks), the borehole thermal energy storage, additional heat sources for peak load coverage (e.g. a heat pump or a gas boiler) and the distribution network. For the design of an integrated system, numerical simulations of all subsystems are imperative. A separate simulation of the borehole energy storage is well-established but represents a simplification. In reality, the subsystems interact with each other. The fluid temperatures of the heat generation system, the heating system and the underground storage are interdependent and affect the performance of each subsystem. To take these interdependencies into account, we coupled software for the simulation of the above-ground facilities with finite element software for the modeling of the heat flow in the subsurface and the borehole heat exchangers. This allows for a more realistic view of the entire system. Consequently, a finer adjustment of the system components and a more precise prognosis of the system's performance can be ensured.
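
    The sketch below shows, in deliberately simplified form, the kind of fixed-point coupling loop such a setup implies: within each time step the plant model and the subsurface/borehole model exchange supply and return temperatures until they agree. The two toy models and all numbers are placeholders, not the software actually coupled in this work.

        import math

        def plant_model(t_return, load_kw):
            """Toy above-ground plant: heats the fluid it receives by load / (m_dot * c_p)."""
            m_dot, c_p = 2.0, 4.18                       # kg/s, kJ/(kg K)
            return t_return + load_kw / (m_dot * c_p), m_dot

        def storage_model(t_supply, m_dot, t_ground=12.0, ua=4.0):
            """Toy borehole: the fluid relaxes part of the way toward the ground temperature."""
            c_p = 4.18
            eff = 1.0 - math.exp(-ua / (m_dot * c_p))    # heat-exchanger-like effectiveness
            return t_supply - eff * (t_supply - t_ground)

        def coupled_step(t_return_guess, load_kw, tol=0.01, max_iter=50):
            """Iterate the two submodels until the exchanged return temperature converges."""
            t_return = t_return_guess
            for _ in range(max_iter):
                t_supply, m_dot = plant_model(t_return, load_kw)
                t_return_new = storage_model(t_supply, m_dot)
                if abs(t_return_new - t_return) < tol:
                    return t_supply, t_return_new
                t_return = t_return_new
            return t_supply, t_return                    # not converged; caller could cut the step

        print(coupled_step(t_return_guess=20.0, load_kw=100.0))   # charging the store at 100 kW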

  1. Implementation and verification of an advancement to the Galileo system search and rescue service signal simulator

    NASA Astrophysics Data System (ADS)

    Lin, Mo; Wu, Qiongzhi; Liu, Shaobo

    2009-12-01

    Based on the Sino-European cooperation on the Galileo satellite navigation system Search And Rescue (SAR) service, a multi-channel distress beacon signal simulator is presented here. A mathematical model and spectrum analysis of current 406 MHz distress beacon signals is presented, covering Emergency Position Indicating Radio Beacons (EPIRB) for maritime use, Personal Locator Beacons (PLB) for land (road and rail), and Emergency Locator Transmitters (ELT) for aeronautical applications. Based on Software Defined Radio (SDR) and digital IF techniques, the design of a SAR Signal Simulator (SAR-SS) is proposed, which can generate no fewer than 20 distress signals simultaneously. The Doppler shift and space propagation effects such as ionospheric delay and free-space attenuation are simulated in SAR-SS, which provides a significant test and evaluation capability for the SAR/Galileo project. The performance of SAR-SS is more accurate and stable than the Cospas-Sarsat (C-S) requirement. SAR-SS will be a significant instrument for ground testing of the Galileo system search and rescue project.
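
    As a back-of-the-envelope illustration of two of the propagation effects mentioned above, the snippet below computes the Doppler shift on a 406 MHz carrier for a given beacon-to-satellite radial velocity, along with the free-space path loss at a given range; the carrier value and the sample geometry are nominal assumptions, not parameters of SAR-SS.

        import math

        C = 299_792_458.0            # speed of light, m/s
        F_CARRIER = 406.05e6         # nominal carrier in the 406.0-406.1 MHz distress band, Hz

        def doppler_shift_hz(radial_velocity_mps):
            """Positive radial velocity = satellite approaching the beacon."""
            return F_CARRIER * radial_velocity_mps / C

        def free_space_loss_db(range_m):
            """Free-space path loss: 20*log10(4*pi*d*f/c)."""
            return 20.0 * math.log10(4.0 * math.pi * range_m * F_CARRIER / C)

        print(f"Doppler at +6 km/s radial velocity: {doppler_shift_hz(6000.0):+.0f} Hz")
        print(f"Path loss at 2000 km slant range:  {free_space_loss_db(2.0e6):.1f} dB")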

  2. Goal-directed transthoracic echocardiography during advanced cardiac life support: A pilot study using simulation to assess ability

    PubMed Central

    Greenstein, Yonatan Y.; Martin, Thomas J.; Rolnitzky, Linda; Felner, Kevin; Kaufman, Brian

    2015-01-01

    Introduction Goal-directed echocardiography (GDE) is used to answer specific clinical questions which provide invaluable information to physicians managing a hemodynamically unstable patient. We studied perception and ability of housestaff previously trained in GDE to accurately diagnose common causes of cardiac arrest during simulated advanced cardiac life support (ACLS); we compared their results to those of expert echocardiographers. Methods Eleven pulmonary and critical care medicine fellows, seven emergency medicine residents, and five cardiologists board-certified in echocardiography were enrolled. Baseline ability to acquire four transthoracic echocardiography views was assessed and participants were exposed to six simulated cardiac arrests and were asked to perform a GDE during ACLS. Housestaff performance was compared to the performance of five expert echocardiographers. Results Average baseline and scenario views by housestaff were of good or excellent quality 89% and 83% of the time, respectively. Expert average baseline and scenario views were always of good or excellent quality. Housestaff and experts made the correct diagnosis in 68% and 77% of cases, respectively. On average, participants required 1.5 pulse checks to make the correct diagnosis. 94% of housestaff perceived this study as an accurate assessment of ability. Conclusions In an ACLS compliant manner, housestaff are capable of diagnosing management altering pathologies the majority of the time and they reach similar diagnostic conclusions in the same amount of time as expert echocardiographers in a simulated cardiac arrest scenario. PMID:25932707

  3. Nuclear Energy Advanced Modeling and Simulation (NEAMS) Waste Integrated Performance and Safety Codes (IPSC) : FY10 development and integration.

    SciTech Connect

    Criscenti, Louise Jacqueline; Sassani, David Carl; Arguello, Jose Guadalupe, Jr.; Dewers, Thomas A.; Bouchard, Julie F.; Edwards, Harold Carter; Freeze, Geoffrey A.; Wang, Yifeng; Schultz, Peter Andrew

    2011-02-01

    This report describes the progress in fiscal year 2010 in developing the Waste Integrated Performance and Safety Codes (IPSC) in support of the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The goal of the Waste IPSC is to develop an integrated suite of computational modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with robust verification, validation, and software quality requirements. Waste IPSC activities in fiscal year 2010 focused on specifying a challenge problem to demonstrate proof of concept, developing a verification and validation plan, and performing an initial gap analyses to identify candidate codes and tools to support the development and integration of the Waste IPSC. The current Waste IPSC strategy is to acquire and integrate the necessary Waste IPSC capabilities wherever feasible, and develop only those capabilities that cannot be acquired or suitably integrated, verified, or validated. This year-end progress report documents the FY10 status of acquisition, development, and integration of thermal-hydrologic-chemical-mechanical (THCM) code capabilities, frameworks, and enabling tools and infrastructure.

  4. Parallel processing of real-time dynamic systems simulation on OSCAR (Optimally SCheduled Advanced multiprocessoR)

    NASA Technical Reports Server (NTRS)

    Kasahara, Hironori; Honda, Hiroki; Narita, Seinosuke

    1989-01-01

    Parallel processing of real-time dynamic systems simulation on a multiprocessor system named OSCAR is presented. In the simulation of dynamic systems, the same calculations are generally repeated every time step. However, Do-all or Do-across techniques cannot be applied to parallel processing of the simulation, since data dependencies exist from the end of one iteration to the beginning of the next, and furthermore data input and data output are required every sampling period. Therefore, parallelism inside the calculation required for a single time step, i.e., a large basic block consisting of arithmetic assignment statements, must be used. In the proposed method, near-fine-grain tasks, each of which consists of one or more floating point operations, are generated to extract the parallelism from the calculation and assigned to processors using optimal static scheduling at compile time, in order to reduce the large run-time overhead caused by the use of near-fine-grain tasks. The practicality of the scheme is demonstrated on OSCAR (Optimally SCheduled Advanced multiprocessoR), which has been developed to exploit the advantageous features of static scheduling algorithms to the maximum extent.

  5. Integrating Simulation and Data for Materials in Extreme Environments

    NASA Astrophysics Data System (ADS)

    Germann, Timothy

    2014-03-01

    We are using large-scale molecular dynamics (MD) simulations to study the response of nanocrystalline metals such as tantalum to uniaxial (e.g., shock) compression. With modern petascale-class platforms, we are able to model sample sizes with edge lengths over one micrometer, which match the length and time scales experimentally accessible at Argonne's Advanced Photon Source (APS) and SLAC's Linac Coherent Light Source (LCLS). I will describe our simulation predictions and their recent verification at LCLS, as well as outstanding challenges in modeling the response of materials to extreme mechanical and radiation environments, and our efforts to tackle these as part of the multi-institutional, multi-disciplinary Exascale Co-design Center for Materials in Extreme Environments (ExMatEx). ExMatEx has initiated an early and deep collaboration between domain (computational materials) scientists, applied mathematicians, computer scientists, and hardware architects, in order to establish the relationships between algorithms, software stacks, and architectures needed to enable exascale-ready materials science application codes within the next decade. We anticipate that we will be able to exploit hierarchical, heterogeneous architectures to achieve more realistic large-scale simulations with adaptive physics refinement, and are using tractable application scale-bridging proxy application testbeds to assess new approaches and requirements. The current scale-bridging strategies accumulate (or recompute) a distributed response database from fine-scale calculations, in a top-down rather than bottom-up multiscale approach. I will demonstrate this approach and our initial assessments, using the newly emerging capabilities at new 4th generation synchrotron light sources as an experimental driver.
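
    The scale-bridging strategy mentioned above accumulates (or recomputes) a database of fine-scale responses queried by the coarse-scale model. A minimal Python sketch of that accumulate-or-recompute pattern is given below; the class name, lookup tolerance, and surrogate fine-scale model are hypothetical placeholders, not the ExMatEx software.

```python
# Minimal sketch of the accumulate-or-recompute idea described above.
import math

class ResponseDatabase:
    def __init__(self, fine_scale_model, tol=1e-8):
        self.entries = []                  # list of (state, response) pairs
        self.fine_scale_model = fine_scale_model
        self.tol = tol

    def query(self, state):
        # Reuse a stored fine-scale response if a sufficiently close state exists...
        for stored_state, response in self.entries:
            if math.dist(stored_state, state) < self.tol:
                return response
        # ...otherwise run the expensive fine-scale calculation and keep the result.
        response = self.fine_scale_model(state)
        self.entries.append((tuple(state), response))
        return response

# Hypothetical fine-scale surrogate: a linear stress response to a strain state.
def fine_scale_stress(strain):
    return [100.0 * component for component in strain]

db = ResponseDatabase(fine_scale_stress)
print(db.query((0.01, 0.0)))   # triggers a fine-scale evaluation
print(db.query((0.01, 0.0)))   # served from the accumulated database
```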

  6. A Simulation Study Comparing Incineration and Composting in a Mars-Based Advanced Life Support System

    NASA Technical Reports Server (NTRS)

    Hogan, John; Kang, Sukwon; Cavazzoni, Jim; Levri, Julie; Finn, Cory; Luna, Bernadette (Technical Monitor)

    2000-01-01

    The objective of this study is to compare incineration and composting in a Mars-based advanced life support (ALS) system. The variables explored include waste pre-processing requirements, reactor sizing and buffer capacities. The study incorporates detailed mathematical models of biomass production and waste processing into an existing dynamic ALS system model. The ALS system and incineration models (written in MATLAB/SIMULINK(c)) were developed at the NASA Ames Research Center. The composting process is modeled using first-order kinetics, with different degradation rates for individual waste components (carbohydrates, proteins, fats, cellulose and lignin). The biomass waste streams are generated using modified "Energy Cascade" crop models, which use light- and dark-cycle temperatures, irradiance, photoperiod, [CO2], planting density, and relative humidity as model inputs. The study also includes an evaluation of equivalent system mass (ESM).
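
    As an illustration of the composting model described above (first-order kinetics with component-specific degradation rates), here is a minimal Python sketch; the rate constants, masses, and times are invented values, not parameters from the NASA Ames model.

```python
# Illustrative sketch, not the NASA Ames model: first-order degradation of
# individual waste components with component-specific rate constants.
import math

def compost_remaining(initial_mass, rate_constants, t_days):
    """m_i(t) = m_i(0) * exp(-k_i * t) for each waste component i."""
    return {c: m0 * math.exp(-rate_constants[c] * t_days)
            for c, m0 in initial_mass.items()}

initial_mass = {"carbohydrates": 1.0, "proteins": 0.5, "fats": 0.3,
                "cellulose": 2.0, "lignin": 0.8}            # kg (hypothetical)
rate_constants = {"carbohydrates": 0.20, "proteins": 0.10, "fats": 0.05,
                  "cellulose": 0.02, "lignin": 0.005}        # 1/day (hypothetical)

for t in (10, 30, 90):
    print(t, compost_remaining(initial_mass, rate_constants, t))
```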

  7. Simulation of concomitant magnetic fields on fast switched gradient coils used in advanced application of MRI

    NASA Astrophysics Data System (ADS)

    Salinas-Muciño, G.; Torres-García, E.; Hidalgo-Tobon, S.

    2012-10-01

    The process to produce an MR image includes nuclear alignment, RF excitation, spatial encoding, and image formation. To form an image, it is necessary to perform spatial localization of the MR signals, which is achieved using gradient coils. MRI requires the use of gradient coils that generate magnetic fields which vary linearly with position over the imaging volume. Safety concerns have motivated in-depth study of the relation between gradient magnetic fields and peripheral nerve stimulation. This work presents a numerical model of the concomitant magnetic fields produced by the gradient coils and the electric field they induce in a cube of conductivity σ when the gradient field is switched in pulse sequences such as echo planar imaging (EPI), since this kind of sequence is the most widely used in advanced applications of magnetic resonance imaging such as functional MRI, cardiac imaging, and diffusion imaging.
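
    As background for the induced-field calculation described above, a common quasi-static first-order estimate (an assumption here, not necessarily the formulation used by the authors) follows from Faraday's law for a uniformly switched field and Ohm's law in a medium of conductivity σ:

```latex
% Quasi-static first-order estimate (assumption, not the authors' model):
% induced electric field on a circular path of radius r in a uniformly
% switched field B(t), and the resulting eddy-current density.
\oint \mathbf{E}\cdot d\mathbf{l} = -\frac{d\Phi_B}{dt}
\quad\Rightarrow\quad
E(r) \approx \frac{r}{2}\left|\frac{dB}{dt}\right|,
\qquad
\mathbf{J} = \sigma\,\mathbf{E}
```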

  8. The dynamic information architecture system : an advanced simulation framework for military and civilian applications.

    SciTech Connect

    Campbell, A. P.; Hummel, J. R.

    1998-01-08

    DIAS, the Dynamic Information Architecture System, is an object-oriented simulation system that was designed to provide an integrating framework in which new or legacy software applications can operate in a context-driven frame of reference. DIAS provides a flexible and extensible mechanism to allow disparate, and mixed language, software applications to interoperate. DIAS captures the dynamic interplay between different processes or phenomena in the same frame of reference. Finally, DIAS accommodates a broad range of analysis contexts, with widely varying spatial and temporal resolutions and fidelity.

  9. Development of an Advanced Stimulation / Production Predictive Simulator for Enhanced Geothermal Systems

    SciTech Connect

    Pritchett, John W.

    2015-04-15

    There are several well-known obstacles to the successful deployment of EGS projects on a commercial scale, of course. EGS projects are expected to be deeper, on the average, than conventional “natural” geothermal reservoirs, and drilling costs are already a formidable barrier to conventional geothermal projects. Unlike conventional resources (which frequently announce their presence with natural manifestations such as geysers, hot springs and fumaroles), EGS prospects are likely to appear fairly undistinguished from the earth surface. And, of course, the probable necessity of fabricating a subterranean fluid circulation network to mine the heat from the rock (instead of simply relying on natural, pre-existing permeable fractures) adds a significant degree of uncertainty to the prospects for success. Accordingly, the basic motivation for the work presented herein was to try to develop a new set of tools that would be more suitable for this purpose. Several years ago, the Department of Energy’s Geothermal Technologies Office recognized this need and funded a cost-shared grant to our company (then SAIC, now Leidos) to partner with Geowatt AG of Zurich, Switzerland and undertake the development of a new reservoir simulator that would be more suitable for EGS forecasting than the existing tools. That project has now been completed and a new numerical geothermal reservoir simulator has been developed. It is named “HeatEx” (for “Heat Extraction”) and is almost completely new, although its methodology owes a great deal to other previous geothermal software development efforts, including Geowatt’s “HEX-S” code, the STAR and SPFRAC simulators developed here at SAIC/Leidos, the MINC approach originally developed at LBNL, and tracer analysis software originally formulated at INEL. Furthermore, the development effort was led by engineers with many years of experience in using reservoir simulation software to make meaningful forecasts for real geothermal

  10. Researcher's guide to the NASA Ames Flight Simulator for Advanced Aircraft (FSAA)

    NASA Technical Reports Server (NTRS)

    Sinacori, J. B.; Stapleford, R. L.; Jewell, W. F.; Lehman, J. M.

    1977-01-01

    Performance, limitations, supporting software, and current checkout and operating procedures are presented for the flight simulator, in terms useful to the researcher who intends to use it. Suggestions to help the researcher prepare the experimental plan are also given. The FSAA's central computer, cockpit, and visual and motion systems are addressed individually, but their interaction is considered as well. Data required, available options, user responsibilities, and occupancy procedures are given in a form that facilitates the initial communication required with the NASA operations group.

  11. Utilizing NX Advanced Simulation for NASA's New Mobile Launcher for Ares-I

    NASA Technical Reports Server (NTRS)

    Brown, Christopher

    2010-01-01

    This slide presentation reviews the use of NX to simulate the new Mobile Launcher (ML) for the Ares-I. It includes: a comparison of the sizes of the Saturn 5, the Space Shuttle, the Ares I, and the Ares V, with their heights and payload capabilities; the loads control plan; drawings of the base framing, the underside of the ML, the beam arrangement, and the finished base; and the origin of the 3D CAD data. It also reviews the modeling approach, meshing, the assembly finite element modeling, the model summary, and beam improvements.

  12. Simultaneous Multiple-Jet Impacts in Concrete-Experiments and Advanced Computational Simulations

    SciTech Connect

    Baum, D.W.; Kuklo, R.M.; Routh, J.W.; Simonson, S.C.

    1999-08-12

    The simultaneous impact of multiple shaped-charge jets on a concrete target has been observed experimentally to lead to the formation of a larger and deeper entrance crater than would be expected from the superposition of the craters of the individual jets. The problem has been modeled with the 3-D simulation code ALE3D, running on massively parallel processors. These calculations indicate that the enlarged damage area is the result of tensile stresses caused by the interactions among the pressure waves simultaneously emanating from the three impact sites. This phenomenon has the potential for enhancing the penetration of a follow-on projectile.

  13. Advancements in tailored hot stamping simulations: Cooling channel and distortion analyses

    NASA Astrophysics Data System (ADS)

    Billur, Eren; Wang, Chao; Bloor, Colin; Holecek, Martin; Porzner, Harald; Altan, Taylan

    2013-12-01

    Hot-stamped components have been widely used in the automotive industry over the last decade where ultra-high strength is required. These parts, however, may not provide sufficient toughness to absorb crash energy. Therefore, these components are "tailored" by controlling the microstructure at various locations. Simulation of tailored hot-stamped components requires more detailed analysis of microstructural changes. Furthermore, since the part is not uniformly quenched, severe distortion can be observed. CPF, together with ESI, has developed a number of techniques to predict the final properties of a tailored part. This paper discusses the recent improvements in modeling distortion and die design with cooling channels.

  14. Recent Advances in Simulation of Eddy Current Testing of Tubes and Experimental Validations

    SciTech Connect

    Reboud, C.; Premel, D.; Bisiaux, B.

    2007-03-21

    Eddy current testing (ECT) is widely used in the iron and steel industry for the inspection of tubes during manufacturing. A collaboration between CEA and the Vallourec Research Center led to the development of new numerical functionalities dedicated to the simulation of ECT of non-magnetic tubes by external probes. The successful experimental validations led us to integrate these models into the CIVA platform. The modeling approach and validation results are discussed here. A new numerical scheme is also proposed in order to improve the accuracy of the model.

  15. Performance evaluation in full-mission simulation - Methodological advances and research challenges. [in air transport operations

    NASA Technical Reports Server (NTRS)

    Chidester, Thomas R.; Kanki, Barbara G.; Helmreich, Robert L.

    1989-01-01

    The crew-factors research program at NASA Ames has developed a methodology for studying the impact of a variety of variables on the effectiveness of crews flying realistic but high workload simulated trips. The validity of investigations using the methodology is enhanced by careful design of full-mission scenarios, performance assessment using converging sources of data, and recruitment of representative subjects. Recently, portions of this methodology have been adapted for use in assessing the effectiveness of crew coordination among participants in line-oriented flight training.

  16. Advanced computational simulation for design and manufacturing of lightweight material components for automotive applications

    SciTech Connect

    Simunovic, S.; Aramayo, G.A.; Zacharia, T.; Toridis, T.G.; Bandak, F.; Ragland, C.L.

    1997-04-01

    Computational vehicle models for the analysis of lightweight material performance in automobiles have been developed through collaboration between Oak Ridge National Laboratory, the National Highway Transportation Safety Administration, and George Washington University. The vehicle models have been verified against experimental data obtained from vehicle collisions. The crashed vehicles were analyzed, and the main impact energy dissipation mechanisms were identified and characterized. Important structural parts were extracted and digitized and directly compared with simulation results. High-performance computing played a key role in the model development because it allowed for rapid computational simulations and model modifications. The deformation of the computational model shows a very good agreement with the experiments. This report documents the modifications made to the computational model and relates them to the observations and findings on the test vehicle. Procedural guidelines are also provided that the authors believe need to be followed to create realistic models of passenger vehicles that could be used to evaluate the performance of lightweight materials in automotive structural components.

  17. Simulating the dynamic behavior of chain drive systems by advanced CAE programs

    SciTech Connect

    Ross, J.; Meyer, J.

    1996-09-01

    Due to the increased requirements for chain drive systems of 4-stroke internal combustion engines, CAE tools are necessary to design the optimum dynamic system. In comparison to models used in the past, the advantage of the new model CDD (Chain Drive Dynamics) is the capability of simulating the trajectory of each chain link around the drive system. Each chain link is represented by a mass with two degrees of freedom and is coupled to the next by a spring-damper element. The drive sprocket can be moved with a constant or non-constant speed. As in reality, the other sprockets are driven by the running chain and can be excited by torques. Due to these unique model features, it is possible to calculate all vibration types of the chain, polygon effects, and radial or angular vibrations of the sprockets very accurately. The model also includes the detailed simulation of a mechanical or a hydraulic tensioner. The method is ready to be coupled to other detailed calculation models (e.g., valve train systems, crankshaft, etc.). The high efficiency of the tool in predicting the dynamic and acoustic behavior of a chain drive system will be demonstrated in comparison to measurements.
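
    To illustrate the modelling idea described above (each chain link as a point mass with two degrees of freedom, coupled to its neighbour by a spring-damper element), the following Python sketch integrates a short chain with a symplectic Euler step; the parameters and integrator are illustrative only, not the CDD implementation.

```python
# Illustrative sketch, not the CDD code: point masses with two translational
# degrees of freedom, coupled by spring-damper elements, advanced in time
# with a simple symplectic Euler integrator.
import math

def step(pos, vel, mass, k, c, rest_len, dt):
    """Advance a 2-D chain of point masses by one explicit time step."""
    n = len(pos)
    forces = [[0.0, 0.0] for _ in range(n)]
    for i in range(n - 1):                       # spring-damper between links i and i+1
        dx = pos[i + 1][0] - pos[i][0]
        dy = pos[i + 1][1] - pos[i][1]
        dist = math.hypot(dx, dy) or 1e-12
        ux, uy = dx / dist, dy / dist            # unit vector along the element
        rel_v = (vel[i + 1][0] - vel[i][0]) * ux + (vel[i + 1][1] - vel[i][1]) * uy
        f = k * (dist - rest_len) + c * rel_v    # spring force + damper force
        forces[i][0] += f * ux
        forces[i][1] += f * uy
        forces[i + 1][0] -= f * ux
        forces[i + 1][1] -= f * uy
    for i in range(n):                           # update velocities, then positions
        vel[i][0] += forces[i][0] / mass * dt
        vel[i][1] += forces[i][1] / mass * dt
        pos[i][0] += vel[i][0] * dt
        pos[i][1] += vel[i][1] * dt
    return pos, vel

# Three links, the first element stretched beyond its rest spacing.
pos = [[0.0, 0.0], [0.012, 0.0], [0.022, 0.0]]
vel = [[0.0, 0.0] for _ in pos]
for _ in range(1000):
    pos, vel = step(pos, vel, mass=0.01, k=5e4, c=2.0, rest_len=0.01, dt=1e-5)
print(pos)
```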

  18. Advances in Rotor Performance and Turbulent Wake Simulation Using DES and Adaptive Mesh Refinement

    NASA Technical Reports Server (NTRS)

    Chaderjian, Neal M.

    2012-01-01

    Time-dependent Navier-Stokes simulations have been carried out for a rigid V22 rotor in hover, and a flexible UH-60A rotor in forward flight. Emphasis is placed on understanding and characterizing the effects of high-order spatial differencing, grid resolution, and Spalart-Allmaras (SA) detached eddy simulation (DES) in predicting the rotor figure of merit (FM) and resolving the turbulent rotor wake. The FM was accurately predicted within experimental error using SA-DES. Moreover, a new adaptive mesh refinement (AMR) procedure revealed a complex and more realistic turbulent rotor wake, including the formation of turbulent structures resembling vortical worms. Time-dependent flow visualization played a crucial role in understanding the physical mechanisms involved in these complex viscous flows. The predicted vortex core growth with wake age was in good agreement with experiment. High-resolution wakes for the UH-60A in forward flight exhibited complex turbulent interactions and turbulent worms, similar to the V22. The normal force and pitching moment coefficients were in good agreement with flight-test data.

  19. Development of advanced computational fluid dynamics tools and their application to simulation of internal turbulent flows

    NASA Astrophysics Data System (ADS)

    Emelyanov, V. N.; Karpenko, A. G.; Volkov, K. N.

    2015-06-01

    Modern graphics processing units (GPU) provide architectures and new programming models that make it possible to harness their large processing power and to design computational fluid dynamics (CFD) simulations at both high performance and low cost. Possibilities of the use of GPUs for the simulation of internal fluid flows are discussed. The finite volume method is applied to solve three-dimensional (3D) unsteady compressible Euler and Navier-Stokes equations on unstructured meshes. Compute Unified Device Architecture (CUDA) technology is used for the programming implementation of parallel computational algorithms. Solutions of some fluid dynamics problems on GPUs are presented, and approaches to optimization of the CFD code related to the use of different types of memory are discussed. The speedup of the solution on GPUs with respect to the solution on a central processing unit (CPU) is compared for different meshes and different methods of distributing input data into blocks. Performance measurements show that the numerical schemes developed achieve a 20- to 50-fold speedup on GPU hardware compared to the CPU reference implementation. The results obtained provide a promising perspective for designing a GPU-based software framework for applications in CFD.
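
    The abstract above describes a finite volume CFD solver ported to GPUs with CUDA. The toy Python/NumPy sketch below is only meant to illustrate the per-cell, data-parallel structure of a finite volume update (here, first-order upwind advection in 1-D) that makes such solvers map naturally onto one-thread-per-cell GPU kernels; it is not the authors' code, and the grid, CFL number, and initial condition are arbitrary.

```python
# Illustrative only: a first-order finite-volume update for 1-D linear
# advection, written as an array operation so every cell is updated
# independently -- the same data-parallel structure that a CUDA kernel
# exploits by assigning one cell (or face) to one thread.
import numpy as np

def advect(u, a, dx, dt, steps):
    for _ in range(steps):
        flux = a * u                                   # upwind flux (a > 0)
        u = u - dt / dx * (flux - np.roll(flux, 1))    # periodic boundary
    return u

nx = 200
x = np.linspace(0.0, 1.0, nx, endpoint=False)
u0 = np.exp(-200.0 * (x - 0.3) ** 2)                   # Gaussian pulse
u = advect(u0, a=1.0, dx=1.0 / nx, dt=0.4 / nx, steps=500)
print(float(u.max()))                                  # pulse advected, slightly diffused
```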

  20. Simulated flight acoustic investigation of treated ejector effectiveness on advanced mechanical suppressors for high velocity jet noise reduction

    NASA Technical Reports Server (NTRS)

    Brausch, J. F.; Motsinger, R. E.; Hoerst, D. J.

    1986-01-01

    Ten scale-model nozzles were tested in an anechoic free-jet facility to evaluate the acoustic characteristics of a mechanically suppressed inverted-velocity-profile coannular nozzle with an acoustically treated ejector system. The nozzle system used was developed from aerodynamic flow lines evolved in a previous contract, defined to incorporate the restraints imposed by the aerodynamic performance requirements of an Advanced Supersonic Technology/Variable Cycle Engine system through all its mission phases. Acoustic data for 188 test points were obtained, 87 under static and 101 under simulated flight conditions. The tests investigated variables of hardwall ejector application to a coannular nozzle with a 20-chute outer annular suppressor, ejector axial positioning, treatment application to ejector and plug surfaces, and treatment design. Laser velocimeter, shadowgraph photography, aerodynamic static pressure, and temperature measurements were acquired on select models to yield diagnostic information regarding the flow field and aerodynamic performance characteristics of the nozzles.

  1. ELMy H-mode linear simulation with 3-field model on experimental advanced superconducting tokamak using BOUT++

    SciTech Connect

    Liu, Z. X.; Gao, X.; Liu, S. C.; Ding, S. Y.; Li, J. G.; Xia, T. Y.; Xu, X. Q.; Hughes, J. W.

    2012-10-15

    H-mode plasmas with ELMs (edge localized modes) have been realized on the Experimental Advanced Superconducting Tokamak (EAST) with 2.45 GHz lower hybrid waves at P_LHW ≈ 1 MW in 2010. Data from EAST experiments, including magnetic geometry, measured pressure profiles, and calculated current profiles, are used to investigate the physics of ELMs utilizing the BOUT++ code. Results from linear simulations show that the ELMs in EAST are dominated by resistive ballooning modes. When the Lundquist number (the dimensionless ratio of the resistive diffusion time to the Alfven time) is equal to or less than 10^7, the resistive ballooning modes are found to become unstable in the ELMy H-mode plasma. For a fixed pedestal pressure profile, increasing the plasma current generates more low-n ELM activity.
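
    For reference, the Lundquist number mentioned above is conventionally defined as the ratio of the resistive diffusion time to the Alfven time; the specific normalization used in the BOUT++ study may differ from this textbook form:

```latex
% Conventional definition of the Lundquist number, for a characteristic
% length L, Alfven speed v_A, and resistivity \eta (normalization in the
% BOUT++ study may differ).
S \;=\; \frac{\tau_R}{\tau_A}
  \;=\; \frac{\mu_0 L^{2}/\eta}{L/v_A}
  \;=\; \frac{\mu_0\, L\, v_A}{\eta}
```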

  2. Advanced timing analysis based on post-OPC patterning process simulations

    NASA Astrophysics Data System (ADS)

    Yang, Jie; Capodieci, Luigi; Sylvester, Dennis

    2005-05-01

    For current and upcoming technology nodes (90, 65, 45 nm and beyond) one of the fundamental enablers of Moore's Law is the use of Resolution Enhancement Techniques (RET) in optical lithography. While RETs allow for continuing reduction in integrated circuits' critical dimensions (CD), layout distortions are introduced as an undesired consequence due to proximity effects. Complex and costly Optical Proximity Correction (OPC) is then deployed to compensate for CD variations and loss of pattern fidelity, in an effort to improve yield. This, together with other sources for CD variations, causes the actual on-silicon chip performance to be quite different from sign-off expectations. In current design optimization methodologies, process variation modeling, aimed at providing guardbands for performance analysis, is based on "worst-case scenarios" (corner cases) and yields overly pessimistic simulation results which makes meeting design targets unnecessarily difficult. Assumptions of CD distributions in Monte Carlo simulations, and statistical timing analysis in general, can be made more rigorous by considering realistic systematic and random contributions to the overall process variation. A novel methodology is presented in this paper for extracting residual OPC errors from a placed and routed full chip layout and for deriving actual (i.e., calibrated to silicon) CD values, to be used in timing analysis and speed path characterization. The implementation of this automated flow is achieved through a combination of tagging critical gates, post-OPC layout back-annotation, and selective extraction from the global circuit netlist. This approach improves upon traditional design flow practices where ideal (i.e., drawn) CD values are employed, which leads to poor performance predictability of the as-fabricated design. With this more accurate timing analysis, we are able to highlight the necessity of a post-OPC verification embedded design flow by showing substantial differences
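
    As a toy illustration of the timing idea discussed above (propagating calibrated rather than drawn critical dimensions through a delay model), consider the following Python sketch; the linear delay model, nominal values, sensitivity, and residual-error statistics are invented for illustration and are not taken from the paper.

```python
# Toy sketch, not the authors' flow: compare a path delay computed from drawn
# critical dimensions (CD) with one computed from post-OPC, "silicon-calibrated"
# CDs perturbed by residual systematic and random errors.
import random

NOMINAL_CD_NM = 65.0
NOMINAL_DELAY_PS = 10.0
SENSITIVITY = 0.15     # fractional delay change per fractional CD change (invented)

def gate_delay(cd_nm):
    return NOMINAL_DELAY_PS * (1.0 + SENSITIVITY * (cd_nm - NOMINAL_CD_NM) / NOMINAL_CD_NM)

def path_delay(cds):
    return sum(gate_delay(cd) for cd in cds)

random.seed(0)
drawn = [NOMINAL_CD_NM] * 10                              # drawn CDs along a 10-gate path
actual = [cd + random.gauss(1.5, 1.0) for cd in drawn]    # systematic offset + random part
print(f"drawn:  {path_delay(drawn):.2f} ps")
print(f"actual: {path_delay(actual):.2f} ps")
```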

  3. Advances in Computational Radiation Biophysics for Cancer Therapy: Simulating Nano-Scale Damage by Low-Energy Electrons

    NASA Astrophysics Data System (ADS)

    Kuncic, Zdenka

    Computational radiation biophysics is a rapidly growing area that is contributing, alongside new hardware technologies, to ongoing developments in cancer imaging and therapy. Recent advances in theoretical and computational modeling have enabled the simulation of discrete, event-by-event interactions of very low energy (≪ 100 eV) electrons with water in its liquid thermodynamic phase. This represents a significant advance in our ability to investigate the initial stages of radiation induced biological damage at the molecular level. Such studies are important for the development of novel cancer treatment strategies, an example of which is given by microbeam radiation therapy (MRT). Here, new results are shown demonstrating that when excitations and ionizations are resolved down to nano-scales, their distribution extends well outside the primary microbeam path, into regions that are not directly irradiated. This suggests that radiation dose alone is insufficient to fully quantify biological damage. These results also suggest that the radiation cross-fire may be an important clue to understanding the different observed responses of healthy cells and tumor cells to MRT.

  4. Advances in Computational Radiation Biophysics for Cancer Therapy: Simulating Nano-Scale Damage by Low-Energy Electrons

    NASA Astrophysics Data System (ADS)

    Kuncic, Zdenka

    2015-10-01

    Computational radiation biophysics is a rapidly growing area that is contributing, alongside new hardware technologies, to ongoing developments in cancer imaging and therapy. Recent advances in theoretical and computational modeling have enabled the simulation of discrete, event-by-event interactions of very low energy (≪ 100 eV) electrons with water in its liquid thermodynamic phase. This represents a significant advance in our ability to investigate the initial stages of radiation induced biological damage at the molecular level. Such studies are important for the development of novel cancer treatment strategies, an example of which is given by microbeam radiation therapy (MRT). Here, new results are shown demonstrating that when excitations and ionizations are resolved down to nano-scales, their distribution extends well outside the primary microbeam path, into regions that are not directly irradiated. This suggests that radiation dose alone is insufficient to fully quantify biological damage. These results also suggest that the radiation cross-fire may be an important clue to understanding the different observed responses of healthy cells and tumor cells to MRT.

  5. Evaluation of Temperature Gradient in Advanced Automated Directional Solidification Furnace (AADSF) by Numerical Simulation

    NASA Technical Reports Server (NTRS)

    Bune, Andris V.; Gillies, Donald C.; Lehoczky, Sandor L.

    1996-01-01

    A numerical model of heat transfer using combined conduction, radiation and convection in the AADSF was used to evaluate temperature gradients in the vicinity of the crystal/melt interface for a variety of hot- and cold-zone set point temperatures, specifically for the growth of mercury cadmium telluride (MCT). Reversed usage of the hot and cold zones was simulated to aid the choice of the proper orientation of the crystal/melt interface with respect to the residual acceleration vector without actually changing the furnace location on board the orbiter. It appears that an additional booster heater will be extremely helpful to ensure the desired temperature gradient when the hot and cold zones are reversed. Further efforts are required to investigate the advantages/disadvantages of a symmetrical furnace design (i.e., with similar lengths of the hot and cold zones).

  6. Numerical simulation of the reactive flow in advanced (HSR) combustors using KIVA-2

    NASA Technical Reports Server (NTRS)

    Winowich, Nicholas S.

    1991-01-01

    Recent work has been done with the goal of establishing ultralow emission aircraft gas turbine combustors. A significant portion of the effort is the development of three dimensional computational combustor models. The KIVA-II computer code which is based on the Implicit Continuous Eulerian Difference mesh Arbitrary Lagrangian Eulerian (ICED-ALE) numerical scheme is one of the codes selected by NASA to achieve these goals. This report involves a simulation of jet injection through slanted slots within the Rich burn/Quick quench/Lean burn (RQL) baseline experimental rig. The RQL combustor distinguishes three regions of combustion. This work specifically focuses on modeling the quick quench mixer region in which secondary injection air is introduced radially through 12 equally spaced slots around the mixer circumference. Steady state solutions are achieved with modifications to the KIVA-II program. Work currently underway will evaluate thermal mixing as a function of injection air velocity and angle of inclination of the slots.

  7. Simulation Manikin Modifications for High-Fidelity Training of Advanced Airway Procedures.

    PubMed

    Hirsch, Jan; Generoso, Jose R; Latoures, Renee; Acar, Yahya; Fidler, Richard L

    2016-05-01

    Thoracic anesthesia procedures are challenging to master during anesthesia training. A Laerdal ALS Simulator® manikin was modified by adding a bronchial tree module to create anatomical fidelity to the fourth airway generation. After modification, placement of endotracheal tubes up to 8.0 mm is possible by direct laryngoscopy, video laryngoscopy, and fiberoptically; in addition, it allows fiberoptically guided insertion of endobronchial blockers. Insertion of left and right 35-Fr double-lumen tubes permits double- and single-lung ventilation with continuous positive airway pressure and positive end-expiratory pressure. This anatomical modification created a high-fidelity training tool for thoracic anesthesia that has been incorporated into educational curricula for anesthesia. PMID:26752178

  8. Numerical simulation of fine blanking process using fully coupled advanced constitutive equations with ductile damage

    NASA Astrophysics Data System (ADS)

    Labergere, C.; Saanouni, K.; Benafia, S.; Galmiche, J.; Sulaiman, H.

    2013-05-01

    This paper presents the modelling and adaptive numerical simulation of the fine blanking process. Thermodynamically consistent constitutive equations, strongly coupled with ductile damage, together with specific boundary conditions (in particular, the control of forces on the blank holder and counterpunch), are presented. This model is implemented in ABAQUS/EXPLICIT using the Vumat user subroutine and connected with an adaptive 2D remeshing procedure. The different material parameters are identified for the steel S600MC using experimental tensile tests conducted until final fracture. A parametric study examines the sensitivity of the punch force and the fracture surface topology (convex zone, sheared zone, fracture zone and burr) to the process parameters (die radius, die/punch clearance).

  9. Numerical Simulations of Optical Turbulence Using an Advanced Atmospheric Prediction Model: Implications for Adaptive Optics Design

    NASA Astrophysics Data System (ADS)

    Alliss, R.

    2014-09-01

    Optical turbulence (OT) acts to distort light in the atmosphere, degrading imagery from astronomical telescopes and reducing the data quality of optical imaging and communication links. Some of the degradation due to turbulence can be corrected by adaptive optics. However, the severity of optical turbulence, and thus the amount of correction required, is largely dependent upon the turbulence at the location of interest. Therefore, it is vital to understand the climatology of optical turbulence at such locations. In many cases, it is impractical and expensive to set up instrumentation to characterize the climatology of OT, so numerical simulations become a less expensive and convenient alternative. The strength of OT is characterized by the refractive index structure parameter Cn2, which in turn is used to calculate atmospheric seeing parameters. While attempts have been made to characterize Cn2 using empirical models, Cn2 can be calculated more directly from Numerical Weather Prediction (NWP) simulations using pressure, temperature, thermal stability, vertical wind shear, turbulent Prandtl number, and turbulence kinetic energy (TKE). In this work we use the Weather Research and Forecast (WRF) NWP model to generate Cn2 climatologies in the planetary boundary layer and free atmosphere, allowing for both point-to-point and ground-to-space seeing estimates of the Fried coherence length (r0) and other seeing parameters. Simulations are performed using a multi-node Linux cluster using the Intel chip architecture. The WRF model is configured to run at 1 km horizontal resolution and centered on the Mauna Loa Observatory (MLO) of the Big Island. The vertical resolution varies from 25 meters in the boundary layer to 500 meters in the stratosphere. The model top is 20 km. The Mellor-Yamada-Janjic (MYJ) TKE scheme has been modified to diagnose the turbulent Prandtl number as a function of the Richardson number, following observations by Kondo and others. This modification
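
    As context for the seeing estimates mentioned above, the Fried coherence length r0 is commonly obtained from the vertical integral of the Cn2 profile; the short Python sketch below applies the standard zenith-path relation to a made-up layered profile (not WRF output).

```python
# Standard relation r0 = [0.423 k^2 * integral(Cn2 dh)]^(-3/5) for a zenith
# path; the layered Cn2 profile below is an invented example, not model output.
import math

def fried_r0(cn2_layers, wavelength_m=0.5e-6):
    """cn2_layers: list of (Cn2 [m^-2/3], layer thickness [m])."""
    k = 2.0 * math.pi / wavelength_m
    integral = sum(cn2 * dh for cn2, dh in cn2_layers)
    return (0.423 * k**2 * integral) ** (-3.0 / 5.0)

profile = [(1e-14, 100.0),    # boundary layer
           (1e-16, 900.0),    # lower free atmosphere
           (1e-17, 9000.0)]   # upper free atmosphere
print(f"r0 = {fried_r0(profile) * 100:.1f} cm at 500 nm")
```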

  10. A Damage Model for the Simulation of Delamination in Advanced Composites under Variable-Mode Loading

    NASA Technical Reports Server (NTRS)

    Turon, A.; Camanho, P. P.; Costa, J.; Davila, C. G.

    2006-01-01

    A thermodynamically consistent damage model is proposed for the simulation of progressive delamination in composite materials under variable-mode ratio. The model is formulated in the context of Damage Mechanics. A novel constitutive equation is developed to model the initiation and propagation of delamination. A delamination initiation criterion is proposed to assure that the formulation can account for changes in the loading mode in a thermodynamically consistent way. The formulation accounts for crack closure effects to avoid interfacial penetration of two adjacent layers after complete decohesion. The model is implemented in a finite element formulation, and the numerical predictions are compared with experimental results obtained in both composite test specimens and structural components.

  11. Advances in Impedance Probe Applications and Design in the NRL Space Physics Simulation Chamber

    NASA Astrophysics Data System (ADS)

    Blackwell, David; Walker, David; Cothran, Christopher; Gatling, George; Tejero, Erik; Amatucci, William

    2013-10-01

    We will present recent progress in plasma impedance probe experiments and design at NRL's Space Physics Simulation Chamber. These include our network analyzer S-parameter methods as well as more portable self-contained diagnostics with an eye towards space vehicle applications. The experiments are performed under a variety of conditions with magnetized and unmagnetized collisionless, cold (Te ~ 1 - 2 eV) plasmas in the density range of 10^5-10^8 cm^-3. Large and small spheres, disks, floating dipoles and monopoles are all in development with various electronic setups, along with traditional emissive and Langmuir probes for measurement redundancy. New computational results provide experimental predictions over a larger parameter space. This work is supported by the Naval Research Laboratory Base Program.
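
    For context (an assumption about the diagnostic principle, not a statement from the abstract), impedance probes typically locate resonances near the electron plasma frequency; for the quoted density range this falls roughly in the HF/VHF band, as the short Python sketch below shows.

```python
# Context only: electron plasma frequency for the quoted density range, using
# the standard approximation f_pe [Hz] ~ 8980 * sqrt(n_e [cm^-3]).
import math

def plasma_frequency_hz(n_e_cm3):
    return 8980.0 * math.sqrt(n_e_cm3)

for n_e in (1e5, 1e8):
    print(f"n_e = {n_e:.0e} cm^-3  ->  f_pe ~ {plasma_frequency_hz(n_e) / 1e6:.1f} MHz")
```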

  12. Advancing adaptive optics technology: Laboratory turbulence simulation and optimization of laser guide stars

    NASA Astrophysics Data System (ADS)

    Rampy, Rachel A.

    Since Galileo's first telescope some 400 years ago, astronomers have been building ever-larger instruments. Yet only within the last two decades has it become possible to realize the potential angular resolutions of large ground-based telescopes, by using adaptive optics (AO) technology to counter the blurring effects of Earth's atmosphere. And only within the past decade have the development of laser guide stars (LGS) extended AO capabilities to observe science targets nearly anywhere in the sky. Improving turbulence simulation strategies and LGS are the two main topics of my research. In the first part of this thesis, I report on the development of a technique for manufacturing phase plates for simulating atmospheric turbulence in the laboratory. The process involves strategic application of clear acrylic paint onto a transparent substrate. Results of interferometric characterization of the plates are described and compared to Kolmogorov statistics. The range of r0 (Fried's parameter) achieved thus far is 0.2--1.2 mm at 650 nm measurement wavelength, with a Kolmogorov power law. These plates proved valuable at the Laboratory for Adaptive Optics at University of California, Santa Cruz, where they have been used in the Multi-Conjugate Adaptive Optics testbed, during integration and testing of the Gemini Planet Imager, and as part of the calibration system of the on-sky AO testbed named ViLLaGEs (Visible Light Laser Guidestar Experiments). I present a comparison of measurements taken by ViLLaGEs of the power spectrum of a plate and the real sky turbulence. The plate is demonstrated to follow Kolmogorov theory well, while the sky power spectrum does so in a third of the data. This method of fabricating phase plates has been established as an effective and low-cost means of creating simulated turbulence. Due to the demand for such devices, they are now being distributed to other members of the AO community. The second topic of this thesis pertains to understanding and

  13. Advancing adaptive optics technology: Laboratory turbulence simulation and optimization of laser guide stars

    NASA Astrophysics Data System (ADS)

    Rampy, Rachel A.

    Since Galileo's first telescope some 400 years ago, astronomers have been building ever-larger instruments. Yet only within the last two decades has it become possible to realize the potential angular resolutions of large ground-based telescopes, by using adaptive optics (AO) technology to counter the blurring effects of Earth's atmosphere. And only within the past decade have the development of laser guide stars (LGS) extended AO capabilities to observe science targets nearly anywhere in the sky. Improving turbulence simulation strategies and LGS are the two main topics of my research. In the first part of this thesis, I report on the development of a technique for manufacturing phase plates for simulating atmospheric turbulence in the laboratory. The process involves strategic application of clear acrylic paint onto a transparent substrate. Results of interferometric characterization of the plates are described and compared to Kolmogorov statistics. The range of r0 (Fried's parameter) achieved thus far is 0.2--1.2 mm at 650 nm measurement wavelength, with a Kolmogorov power law. These plates proved valuable at the Laboratory for Adaptive Optics at University of California, Santa Cruz, where they have been used in the Multi-Conjugate Adaptive Optics testbed, during integration and testing of the Gemini Planet Imager, and as part of the calibration system of the on-sky AO testbed named ViLLaGEs (Visible Light Laser Guidestar Experiments). I present a comparison of measurements taken by ViLLaGEs of the power spectrum of a plate and the real sky turbulence. The plate is demonstrated to follow Kolmogorov theory well, while the sky power spectrum does so in a third of the data. This method of fabricating phase plates has been established as an effective and low-cost means of creating simulated turbulence. Due to the demand for such devices, they are now being distributed to other members of the AO community. The second topic of this thesis pertains to understanding and

  14. Scientific and computational challenges of the fusion simulation project (FSP)

    NASA Astrophysics Data System (ADS)

    Tang, W. M.

    2008-07-01

    This paper highlights the scientific and computational challenges facing the Fusion Simulation Project (FSP). The primary objective is to develop advanced software designed to use leadership-class computers for carrying out multiscale physics simulations to provide information vital to delivering a realistic integrated fusion simulation model with unprecedented physics fidelity. This multiphysics capability will be unprecedented in that in the current FES applications domain, the largest-scale codes are used to carry out first-principles simulations of mostly individual phenomena in realistic 3D geometry while the integrated models are much smaller-scale, lower-dimensionality codes with significant empirical elements used for modeling and designing experiments. The FSP is expected to be the most up-to-date embodiment of the theoretical and experimental understanding of magnetically confined thermonuclear plasmas and to provide a living framework for the simulation of such plasmas as the associated physics understanding continues to advance over the next several decades. Substantive progress on answering the outstanding scientific questions in the field will drive the FSP toward its ultimate goal of developing a reliable ability to predict the behavior of plasma discharges in toroidal magnetic fusion devices on all relevant time and space scales. From a computational perspective, the fusion energy science application goal to produce high-fidelity, whole-device modeling capabilities will demand computing resources in the petascale range and beyond, together with the associated multicore algorithmic formulation needed to address burning plasma issues relevant to ITER — a multibillion dollar collaborative device involving seven international partners representing over half the world's population. Even more powerful exascale platforms will be needed to meet the future challenges of designing a demonstration fusion reactor (DEMO). Analogous to other major applied

  15. Advances in Turbulent Combustion Dynamics Simulations in Bluff-Body Stabilized Flames

    NASA Astrophysics Data System (ADS)

    Tovar, Jonathan Michael

    This work examines the three main aspects of bluff-body stabilized flames: stationary combustion, lean blow-out, and thermo-acoustic instabilities. For the cases of stationary combustion and lean blow-out, an improved version of the Linear Eddy Model approach is used, while in the case of thermo-acoustic instabilities, the effect of boundary conditions on the predictions are studied. The improved version couples the Linear Eddy Model with the full-set of resolved scale Large Eddy Simulation equations for continuity, momentum, energy, and species transport. In traditional implementations the species equations are generally solved using a Lagrangian method which has some significant limitations. The novelty in this work is that the Eulerian species concentration equations are solved at the resolved scale and the Linear Eddy Model is strictly used to close the species production term. In this work, the improved Linear Eddy Model approach is applied to predict the flame properties inside the Volvo rig and it is shown to over-predict the flame temperature and normalized velocity when compared to experimental data using a premixed single step global propane reaction with an equivalence ratio of 0.65. The model is also applied to predict lean blow-out and is shown to predict a stable flame at an equivalence ratio of 0.5 when experiments achieve flame extinction at an equivalence ratio of 0.55. The improved Linear Eddy Model is, however, shown to be closer to experimental data than a comparable reactive flow simulation that uses laminar closure of the species source terms. The thermo-acoustic analysis is performed on a combustor rig designed at the Air Force Research Laboratory. The analysis is performed using a premixed single step global methane reaction for laminar reactive flow and shows that imposing a non-physical boundary condition at the rig exhaust will result in the suppression of acoustic content inside the domain and can alter the temperature contours in non

  16. Science-Based Approach for Advancing Marine and Hydrokinetic Energy: Integrating Numerical Simulations with Experiments

    NASA Astrophysics Data System (ADS)

    Sotiropoulos, F.; Kang, S.; Chamorro, L. P.; Hill, C.

    2011-12-01

    The field of MHK energy is still in its infancy lagging approximately a decade or more behind the technology and development progress made in wind energy engineering. Marine environments are characterized by complex topography and three-dimensional (3D) turbulent flows, which can greatly affect the performance and structural integrity of MHK devices and impact the Levelized Cost of Energy (LCoE). Since the deployment of multi-turbine arrays is envisioned for field applications, turbine-to-turbine interactions and turbine-bathymetry interactions need to be understood and properly modeled so that MHK arrays can be optimized on a site specific basis. Furthermore, turbulence induced by MHK turbines alters and interacts with the nearby ecosystem and could potentially impact aquatic habitats. Increased turbulence in the wake of MHK devices can also change the shear stress imposed on the bed ultimately affecting the sediment transport and suspension processes in the wake of these structures. Such effects, however, remain today largely unexplored. In this work a science-based approach integrating state-of-the-art experimentation with high-resolution computational fluid dynamics is proposed as a powerful strategy for optimizing the performance of MHK devices and assessing environmental impacts. A novel numerical framework is developed for carrying out Large-Eddy Simulation (LES) in arbitrarily complex domains with embedded MHK devices. The model is able to resolve the geometrical complexity of real-life MHK devices using the Curvilinear Immersed Boundary (CURVIB) method along with a wall model for handling the flow near solid surfaces. Calculations are carried out for an axial flow hydrokinetic turbine mounted on the bed of rectangular open channel on a grid with nearly 200 million grid nodes. The approach flow corresponds to fully developed turbulent open channel flow and is obtained from a separate LES calculation. The specific case corresponds to that studied

  17. Finite difference simulations of seismic wave propagation for understanding earthquake physics and predicting ground motions: Advances and challenges

    NASA Astrophysics Data System (ADS)

    Aochi, Hideo; Ulrich, Thomas; Ducellier, Ariane; Dupros, Fabrice; Michea, David

    2013-08-01

    Seismic waves radiated from an earthquake propagate in the Earth, and the ground shaking is felt and recorded at (or near) the ground surface. Understanding the wave propagation with respect to the Earth's structure and the earthquake mechanisms is one of the main objectives of seismology, and predicting the strong ground shaking for moderate and large earthquakes is essential for quantitative seismic hazard assessment. The finite difference scheme for solving the wave propagation problem in elastic (sometimes anelastic) media has been more widely used since the 1970s than any other numerical method, because of its simple formulation and implementation, and its easy scalability to large computations. This paper briefly overviews the advances in finite difference simulations, focusing particularly on earthquake mechanics and the resultant wave radiation in the near field. Because the finite difference formulation is simple (interpolation is smooth), easy coupling with other approaches is one of its advantages. A coupling with a boundary integral equation method (BIEM) allows us to simulate complex earthquake source processes.
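
    As a minimal illustration of the finite difference approach discussed above, the Python sketch below advances a 1-D acoustic wave equation with second-order centred differences in space and time; the grid, velocity model, and impulsive source are illustrative, whereas production seismic codes are 3-D, heterogeneous, and often anelastic.

```python
# Minimal 1-D acoustic wave equation with second-order centred differences;
# fixed (reflecting) ends for simplicity. CFL = c*dt/dx = 0.2 here.
import numpy as np

nx, dx, dt, nt = 400, 10.0, 1e-3, 800     # grid points, m, s, time steps
c = np.full(nx, 2000.0)                   # P-wave velocity (m/s), homogeneous here
u_prev = np.zeros(nx)
u = np.zeros(nx)
u[nx // 2] = 1.0                          # impulsive initial displacement as a "source"

for _ in range(nt):
    lap = np.zeros(nx)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    u_next = 2.0 * u - u_prev + (c * dt)**2 * lap
    u_prev, u = u, u_next

print(float(np.abs(u).max()))
```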

  18. Recent Advances in Polarizable Force Fields for Macromolecules: Microsecond Simulations of Proteins Using the Classical Drude Oscillator Model

    PubMed Central

    2015-01-01

    In this Perspective, we summarize recent efforts to include the explicit treatment of induced electronic polarization in biomolecular force fields. Methods used to treat polarizability, including the induced dipole, fluctuating charge, and classical Drude oscillator models, are presented, including recent advances in force fields using those methods. This is followed by recent results obtained with the Drude model, including microsecond molecular dynamics (MD) simulations of multiple proteins in explicit solvent. Results show significant variability of backbone and side-chain dipole moments as a function of environment, including significant changes during individual simulations. Dipole moments of water in the vicinity of the proteins reveal small but systematic changes, with the direction of the changes dependent on the environment. Analyses of the full proteins show that the polarizable Drude model leads to larger values of the dielectric constant of the protein interior, especially in the case of hydrophobic regions. These results indicate that the inclusion of explicit electronic polarizability leads to significant differences in the physical forces affecting the structure and dynamics of proteins, which can be investigated in a computationally tractable fashion in the context of the Drude model. PMID:25247054
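
    As a minimal illustration of the classical Drude oscillator picture summarized above, the Python sketch below evaluates the equilibrium displacement, induced dipole, and polarizability of a Drude particle tethered to its parent atom by a harmonic spring; the numerical values are illustrative, not published force-field parameters.

```python
# Classical Drude oscillator sketch: an auxiliary charge q_D on a harmonic
# spring of stiffness k_D gives, at equilibrium in a field E, a displacement
# d = q_D*E/k_D, an induced dipole mu = q_D*d, and a polarizability
# alpha = q_D**2/k_D. Values below are illustrative only.

def drude_response(q_drude, k_spring, e_field):
    displacement = q_drude * e_field / k_spring
    induced_dipole = q_drude * displacement
    polarizability = q_drude**2 / k_spring
    return displacement, induced_dipole, polarizability

d, mu, alpha = drude_response(q_drude=-1.0, k_spring=1000.0, e_field=0.05)
print(f"d = {d:.2e}, mu = {mu:.2e}, alpha = {alpha:.2e}  (arbitrary consistent units)")
```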

  19. A station blackout simulation for the Advanced Neutron Source Reactor using the integrated primary and secondary system model

    SciTech Connect

    Schneider, E.A.

    1994-06-01

    The Advanced Neutron Source Reactor (ANSR) is a research reactor to be built at Oak Ridge National Laboratory. This paper deals with thermal-hydraulic analysis of ANSR's cooling systems during nominal and transient conditions, with the major effort focusing upon the construction and testing of computer models of the reactor's primary, secondary and reflector vessel cooling systems. The code RELAP5 was used to simulate transients, such as loss of coolant accidents and loss of off-site power, as well as to model the behavior of the reactor in steady state. Three stages are involved in constructing and using a RELAP5 model: (1) construction and encoding of the desired model, (2) testing and adjustment of the model until a satisfactory steady state is achieved, and (3) running actual transients using the steady-state results obtained earlier as initial conditions. By use of the ANSR design specifications, a model of the reactor's primary and secondary cooling systems has been constructed to run a transient simulating a loss of off-site power. This incident assumes a pump coastdown in both the primary and secondary loops. The results determine whether the reactor can survive the transition from forced convection to natural circulation.

  20. Advance Care Planning Norms May Contribute to Hospital Variation in End-of-life ICU Use: A Simulation Study

    PubMed Central

    Barnato, Amber E.; Mohan, Deepika; Lane, Rondall K.; Huang, Yue Ming; Angus, Derek C.; Farris, Coreen; Arnold, Robert M.

    2014-01-01

    Background There is wide variation in end-of-life (EOL) intensive care unit (ICU) use among academic medical centers (AMCs). Objective To develop hypotheses regarding medical decision-making factors underlying this variation. Design High-fidelity simulation experiment involving a critically and terminally ill elder, followed by a survey and debriefing cognitive interview and evaluated using triangulated quantitative-qualitative comparative analysis. Setting 2 AMCs in the same state and health care system with disparate EOL ICU use. Subjects Hospital-based physicians responsible for ICU admission decisions. Measurements Treatment plan, prognosis, diagnosis, qualitative case perceptions and clinical reasoning. Main Results Sixty-seven of 111 (60%) eligible physicians agreed to participate; 48 (72%) could be scheduled. There were no significant between-AMC differences in 3-month prognosis or treatment plan, but there were systematic differences in perceptions of the case. Case perceptions at the low-intensity AMC seemed to be influenced by the absence of a DNR order in the context of norms of universal code status discussion and documentation upon admission, whereas case perceptions at the high-intensity AMC seemed to be influenced by the patient’s known metastatic gastric cancer in the context of norms of oncologists’ avoiding code status discussions. Conclusions In this simulation study of 2 AMCs, hospital-based physicians had different perceptions of an identical case. We hypothesize that different advance care planning norms may have influenced their decision-making heuristics. PMID:24615275

  1. Science based integrated approach to advanced nuclear fuel development - integrated multi-scale multi-physics hierarchical modeling and simulation framework Part III: cladding

    SciTech Connect

    Tome, Carlos N; Caro, J A; Lebensohn, R A; Unal, Cetin; Arsenlis, A; Marian, J; Pasamehmetoglu, K

    2010-01-01

    Advancing the performance of Light Water Reactors, Advanced Nuclear Fuel Cycles, and Advanced Reactors, such as the Next Generation Nuclear Power Plants, requires enhancing our fundamental understanding of fuel and materials behavior under irradiation. The capability to accurately model the nuclear fuel systems to develop predictive tools is critical. Not only are fabrication and performance models needed to understand specific aspects of the nuclear fuel, fully coupled fuel simulation codes are required to achieve licensing of specific nuclear fuel designs for operation. The backbone of these codes, models, and simulations is a fundamental understanding and predictive capability for simulating the phase and microstructural behavior of the nuclear fuel system materials and matrices. In this paper we review the current status of the advanced modeling and simulation of nuclear reactor cladding, with emphasis on what is available and what is to be developed in each scale of the project, how we propose to pass information from one scale to the next, and what experimental information is required for benchmarking and advancing the modeling at each scale level.

  2. Advancing predictive models for particulate formation in turbulent flames via massively parallel direct numerical simulations

    PubMed Central

    Bisetti, Fabrizio; Attili, Antonio; Pitsch, Heinz

    2014-01-01

    Combustion of fossil fuels is likely to continue for the near future due to the growing trends in energy consumption worldwide. The increase in efficiency and the reduction of pollutant emissions from combustion devices are pivotal to achieving meaningful levels of carbon abatement as part of the ongoing climate change efforts. Computational fluid dynamics featuring adequate combustion models will play an increasingly important role in the design of more efficient and cleaner industrial burners, internal combustion engines, and combustors for stationary power generation and aircraft propulsion. Today, turbulent combustion modelling is hindered severely by the lack of data that are accurate and sufficiently complete to assess and remedy model deficiencies effectively. In particular, the formation of pollutants is a complex, nonlinear and multi-scale process characterized by the interaction of molecular and turbulent mixing with a multitude of chemical reactions with disparate time scales. The use of direct numerical simulation (DNS) featuring a state of the art description of the underlying chemistry and physical processes has contributed greatly to combustion model development in recent years. In this paper, the analysis of the intricate evolution of soot formation in turbulent flames demonstrates how DNS databases are used to illuminate relevant physico-chemical mechanisms and to identify modelling needs. PMID:25024412

  3. Advancement in polarimetric glucose sensing: simulation and measurement of birefringence properties of cornea

    NASA Astrophysics Data System (ADS)

    Malik, Bilal H.; Coté, Gerard L.

    2011-03-01

    Clinical guidelines dictate that frequent blood glucose monitoring in diabetic patients is critical to proper management of the disease. Although several different types of glucose monitors are now commercially available, most of these devices are invasive, thereby adversely affecting patient compliance. To this end, optical polarimetric glucose sensing through the eye has been proposed as a potential noninvasive means to aid in the control of diabetes. Arguably, the most critical and limiting factor towards successful application of such a technique is the time-varying corneal birefringence due to eye motion artifact. We present a spatially variant uniaxial eye model to serve as a tool towards better understanding of the cornea's birefringence properties. The simulations show that index-unmatched coupling of light is spatially limited to a smaller range when compared to the index-matched situation. Polarimetric measurements on rabbits' eyes indicate relative agreement between the modeled and experimental values of corneal birefringence. In addition, the observed rotation in the plane of polarized light for multiple wavelengths demonstrates the potential for using a dual-wavelength polarimetric approach to overcome the noise due to time-varying corneal birefringence. These results will ultimately aid us in the development of an appropriate eye coupling mechanism for in vivo polarimetric glucose measurements.
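
    For background (standard polarimetry, not a relation taken from the paper), the optical rotation produced by an optically active solution scales with the specific rotation at the measurement wavelength, the path length, and the concentration, which is why combining measurements at two wavelengths can help separate the glucose signal from birefringence artifacts:

```latex
% Standard polarimetric relation assumed for background: rotation angle
% \alpha_\lambda for specific rotation [\alpha]_\lambda, path length L, and
% glucose concentration C.
\alpha_{\lambda} = [\alpha]_{\lambda}\, L\, C
```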

  4. Flying Boresight for Advanced Testing and Calibration of Tracking Antennas and Flight Path Simulations

    NASA Astrophysics Data System (ADS)

    Hafner, D.

    2015-09-01

    The application of ground-based boresight sources for calibration and testing of tracking antennas usually entails various difficulties, mostly due to unwanted ground effects. To avoid this problem, DLR MORABA developed a small, lightweight, frequency-adjustable S-band boresight source, mounted on a small remote-controlled multirotor aircraft. Highly accurate GPS-supported, position and altitude control functions allow both, very steady positioning of the aircraft in mid-air, and precise waypoint-based, semi-autonomous flights. In contrast to fixed near-ground boresight sources this flying setup enables to avoid obstructions in the Fresnel zone between source and antenna. Further, it minimizes ground reflections and other multipath effects which can affect antenna calibration. In addition, the large operating range of a flying boresight simplifies measurements in the far field of the antenna and permits undisturbed antenna pattern tests. A unique application is the realistic simulation of sophisticated flight paths, including overhead tracking and demanding trajectories of fast objects such as sounding rockets. Likewise, dynamic tracking tests are feasible which provide crucial information about the antenna pedestal performance — particularly at high elevations — and reveal weaknesses in the autotrack control loop of tracking antenna systems. During acceptance tests of MORABA's new tracking antennas, a manned aircraft was never used, since the Flying Boresight surpassed all expectations regarding usability, efficiency, and precision. Hence, it became an integral part of MORABA's standard antenna setup and calibration procedures.

  5. Advancing predictive models for particulate formation in turbulent flames via massively parallel direct numerical simulations.

    PubMed

    Bisetti, Fabrizio; Attili, Antonio; Pitsch, Heinz

    2014-08-13

    Combustion of fossil fuels is likely to continue for the near future due to the growing trends in energy consumption worldwide. The increase in efficiency and the reduction of pollutant emissions from combustion devices are pivotal to achieving meaningful levels of carbon abatement as part of the ongoing climate change efforts. Computational fluid dynamics featuring adequate combustion models will play an increasingly important role in the design of more efficient and cleaner industrial burners, internal combustion engines, and combustors for stationary power generation and aircraft propulsion. Today, turbulent combustion modelling is severely hindered by the lack of data that are accurate and sufficiently complete to assess and remedy model deficiencies effectively. In particular, the formation of pollutants is a complex, nonlinear and multi-scale process characterized by the interaction of molecular and turbulent mixing with a multitude of chemical reactions with disparate time scales. The use of direct numerical simulation (DNS) featuring a state-of-the-art description of the underlying chemistry and physical processes has contributed greatly to combustion model development in recent years. In this paper, the analysis of the intricate evolution of soot formation in turbulent flames demonstrates how DNS databases are used to illuminate relevant physico-chemical mechanisms and to identify modelling needs. PMID:25024412

  6. Advanced modeling of electron avalanche process in polymeric dielectric voids: Simulations and experimental validation

    NASA Astrophysics Data System (ADS)

    Testa, L.; Serra, S.; Montanari, G. C.

    2010-08-01

    This paper deals with aging phenomena in polymers under electric stress. In particular, we focus our efforts on the development of a novel theoretical method accounting for the discharge process (partial discharge) in well known defects present in polymers, which are essentially tiny air gaps embedded in a polymeric matrix. Such defects are believed to act as trigger points for the partial discharges and their induced aging process. The model accounts for the amplitude as well as the energy distribution of the electrons during their motion, particularly at the time at which they impact on the polymer surface. Knowledge of the number of generated electrons and of their energy distributions is fundamental to evaluating the amount of damage caused by an avalanche at the polymer-void interface and to gaining new insight into the basic phenomena underlying the relevant aging processes. The calculation of such quantities would generally require the combined solution of the Boltzmann equation in the energy and space/time domains. The proposed method simplifies the problem, taking into account only the main phenomena involved in the process, and provides a partial discharge (PD) model virtually free of adjustable parameters. This model is validated by an accurate experimental procedure aimed at reproducing the same conditions as the simulations for air gaps embedded in polymeric dielectrics. The experimental results confirm the validity and accuracy of the proposed approach.
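
    For orientation only (this is not the authors' model), the textbook Townsend relation below shows how the number of electrons generated by an avalanche crossing a void grows exponentially with the gap; the seed-electron count, effective ionization coefficient, and void size are assumed values.

      import math

      def avalanche_electrons(n0, alpha_per_m, gap_m):
          # First-Townsend-coefficient growth: N = N0 * exp(alpha * d).
          return n0 * math.exp(alpha_per_m * gap_m)

      # e.g. 10 seed electrons, an assumed effective alpha of 1e5 /m, 50 um void.
      print(f"~{avalanche_electrons(10, 1e5, 50e-6):.0f} electrons reach the far surface")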

  7. Simulation of the hybrid Tunka Advanced International Gamma-ray and Cosmic ray Astrophysics (TAIGA)

    NASA Astrophysics Data System (ADS)

    Kunnas, M.; Astapov, I.; Barbashina, N.; Beregnev, S.; Bogdanov, A.; Bogorodskii, D.; Boreyko, V.; Brückner, M.; Budnev, N.; Chiavassa, A.; Chvalaev, O.; Dyachok, A.; Epimakhov, S.; Eremin, T.; Gafarov, A.; Gorbunov, N.; Grebenyuk, V.; Gress, O.; Gress, T.; Grinyuk, A.; Grishin, O.; Horns, D.; Ivanova, A.; Karpov, N.; Kalmykov, N.; Kazarina, Y.; Kindin, V.; Kirichkov, N.; Kiryuhin, S.; Kokoulin, R.; Kompaniets, K.; Konstantinov, E.; Korobchenko, A.; Korosteleva, E.; Kozhin, V.; Kuzmichev, L.; Lenok, V.; Lubsandorzhiev, B.; Lubsandorzhiev, N.; Mirgazov, R.; Mirzoyan, R.; Monkhoev, R.; Nachtigall, R.; Pakhorukov, A.; Panasyuk, M.; Pankov, L.; Perevalov, A.; Petrukhin, A.; Platonov, V.; Poleschuk, V.; Popescu, M.; Popova, E.; Porelli, A.; Porokhovoy, S.; Prosin, V.; Ptuskin, V.; Romanov, V.; Rubtsov, G. I.; Müger; Rybov, E.; Samoliga, V.; Satunin, P.; Saunkin, A.; Savinov, V.; Semeney, Yu; Shaibonov (junior), B.; Silaev, A.; Silaev (junior), A.; Skurikhin, A.; Slunecka, M.; Spiering, C.; Sveshnikova, L.; Tabolenko, V.; Tkachenko, A.; Tkachev, L.; Tluczykont, M.; Veslopopov, A.; Veslopopova, E.; Voronov, D.; Wischnewski, R.; Yashin, I.; Yurin, K.; Zagorodnikov, A.; Zirakashvili, V.; Zurbanov, V.

    2015-08-01

    Up to several tens of TeV, Imaging Air Cherenkov Telescopes (IACTs) have proven to be the instruments of choice for GeV/TeV gamma-ray astronomy due to their good reconstruction quality and gamma-hadron separation power. However, sensitive observations at and above 100 TeV require very large effective areas (10 km2 and more), which are difficult and expensive to achieve. The alternative to IACTs is shower-front sampling arrays (non-imaging technique, or timing arrays) with a large area and a wide field of view. Such experiments provide good core position, energy and angular resolution, but only poor gamma-hadron separation. Combining both experimental approaches, using the strengths of each technique, could optimize the sensitivity at the highest energies. The TAIGA project plans to combine the non-imaging HiSCORE [8] array with small (∼10 m2) imaging telescopes. This paper covers simulation results of this hybrid approach.

  8. An advanced object-based software framework for complex ecosystem modeling and simulation

    SciTech Connect

    Sydelko, P. J.; Dolph, J. E.; Majerus, K. A.; Taxon, T. N.

    2000-06-29

    Military land managers and decision makers face an ever-increasing challenge to balance maximum flexibility for the mission with a diverse set of land use, social, political, and economic goals. In addition, these goals encompass environmental requirements for maintaining ecosystem health and sustainability over the long term. Spatiotemporal modeling and simulation in support of adaptive ecosystem management is best accomplished through a dynamic, integrated, and flexible approach that incorporates scientific and technological components into a comprehensive ecosystem modeling framework. The Integrated Dynamic Landscape Analysis and Modeling System (IDLAMS) integrates ecological models and decision support techniques through a geographic information system (GIS)-based backbone. Recently, an object-oriented (OO) architectural framework was developed for IDLAMS (OO-IDLAMS). This OO-IDLAMS prototype builds upon and leverages the Dynamic Information Architecture System (DIAS) developed by Argonne National Laboratory. DIAS is an object-based architectural framework that affords a more integrated, dynamic, and flexible approach to comprehensive ecosystem modeling than was possible with the GIS-based integration approach of the original IDLAMS. The flexibility, dynamics, and interoperability demonstrated through this case study of an object-oriented approach have the potential to provide key technology solutions for many of the military's multiple-use goals and needs for integrated natural resource planning and ecosystem management.

  9. MHD Simulation of Magnetic Nozzle Plasma with the NIMROD Code: Applications to the VASIMR Advanced Space Propulsion Concept

    NASA Astrophysics Data System (ADS)

    Tarditi, Alfonso G.; Shebalin, John V.

    2002-11-01

    A simulation study with the NIMROD code [1] is being carried out to investigate the efficiency of the thrust generation process and the properties of the plasma detachment in a magnetic nozzle. In the simulation, hot plasma is injected into the magnetic nozzle, modeled as a 2D, axisymmetric domain. NIMROD has two-fluid, 3D capabilities, but the present runs are being conducted within the MHD, 2D approximation. As the plasma travels through the magnetic field, part of its thermal energy is converted into longitudinal kinetic energy along the axis of the nozzle. The plasma eventually detaches from the magnetic field at a certain distance from the nozzle throat, where the kinetic energy becomes larger than the magnetic energy. Preliminary NIMROD 2D runs have been benchmarked against a particle trajectory code with satisfactory results [2]. Further testing is reported here, with emphasis on the analysis of the diffusion rate across the field lines and of the overall nozzle efficiency. These simulation runs are specifically designed to allow comparisons with laboratory measurements of the VASIMR experiment by examining the evolution of the radial plasma density and temperature profiles in the nozzle. VASIMR (Variable Specific Impulse Magnetoplasma Rocket [3]) is an advanced space propulsion concept currently under experimental development at the Advanced Space Propulsion Laboratory, NASA Johnson Space Center. A plasma (typically ionized hydrogen or helium) is generated by an RF (helicon) discharge and heated by an Ion Cyclotron Resonance Heating antenna. The heated plasma is then guided into a magnetic nozzle to convert the thermal plasma energy into effective thrust. The VASIMR system has no electrodes, and a solenoidal magnetic field produced by an asymmetric mirror configuration ensures magnetic insulation of the plasma from the material surfaces. By powering the plasma source and the heating antenna at different levels it is possible to vary smoothly the
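
    A minimal sketch of the detachment criterion stated above, taking the plasma to detach where its directed kinetic energy density exceeds the local magnetic energy density; the density, exhaust speed, and field strength are illustrative assumptions, not VASIMR parameters.

      import math

      MU0 = 4.0e-7 * math.pi  # vacuum permeability, H/m

      def detaches(density_per_m3, ion_mass_kg, speed_m_per_s, b_field_t):
          # Compare kinetic energy density (1/2 rho v^2) with B^2 / (2 mu0).
          kinetic = 0.5 * density_per_m3 * ion_mass_kg * speed_m_per_s ** 2
          magnetic = b_field_t ** 2 / (2.0 * MU0)
          return kinetic > magnetic

      M_PROTON = 1.67e-27  # kg, hydrogen plasma assumed
      print(detaches(1e18, M_PROTON, 3.0e4, 1.0e-3))  # weak field far downstream -> True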

  10. Challenge problem and milestones for the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Waste Integrated Performance and Safety Codes (IPSC).

    SciTech Connect

    Freeze, Geoffrey A.; Wang, Yifeng; Howard, Robert; McNeish, Jerry A.; Schultz, Peter Andrew; Arguello, Jose Guadalupe, Jr.

    2010-09-01

    This report describes the specification of a challenge problem and associated challenge milestones for the Waste Integrated Performance and Safety Codes (IPSC) supporting the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The NEAMS challenge problems are designed to demonstrate proof of concept and progress towards IPSC goals. The goal of the Waste IPSC is to develop an integrated suite of modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with robust verification, validation, and software quality requirements. To demonstrate proof of concept and progress towards these goals and requirements, a Waste IPSC challenge problem is specified that includes coupled thermal-hydrologic-chemical-mechanical (THCM) processes that describe (1) the degradation of a borosilicate glass waste form and the corresponding mobilization of radionuclides (i.e., the processes that produce the radionuclide source term), (2) the associated near-field physical and chemical environment for waste emplacement within a salt formation, and (3) radionuclide transport in the near field (i.e., through the engineered components - waste form, waste package, and backfill - and the immediately adjacent salt). The initial details of a set of challenge milestones that collectively comprise the full challenge problem are also specified.

  11. Detection of acute nervous system injury with advanced diffusion-weighted MRI: a simulation and sensitivity analysis.

    PubMed

    Skinner, Nathan P; Kurpad, Shekar N; Schmit, Brian D; Budde, Matthew D

    2015-11-01

    Diffusion-weighted imaging (DWI) is a powerful tool to investigate the microscopic structure of the central nervous system (CNS). Diffusion tensor imaging (DTI), a common model of the DWI signal, has a demonstrated sensitivity to detect microscopic changes as a result of injury or disease. However, DTI and other similar models have inherent limitations that reduce their specificity for certain pathological features, particularly in tissues with complex fiber arrangements. Methods such as double pulsed field gradient (dPFG) and q-vector magic angle spinning (qMAS) have been proposed to specifically probe the underlying microscopic anisotropy without interference from the macroscopic tissue organization. This is particularly important for the study of acute injury, where abrupt changes in the microscopic morphology of axons and dendrites manifest as focal enlargements known as beading. The purpose of this work was to assess the relative sensitivity of DWI measures to beading in the context of macroscopic fiber organization and edema. Computational simulations of DWI experiments in normal and beaded axons demonstrated that, although DWI models can be highly specific for the simulated pathologies of beading and volume fraction changes in coherent fiber pathways, their sensitivity to a single idealized pathology is considerably reduced in crossing and dispersed fibers. However, dPFG and qMAS have a high sensitivity for beading, even in complex fiber tracts. Moreover, in tissues with coherent arrangements, such as the spinal cord or nerve fibers in which tract orientation is known a priori, a specific dPFG sequence variant decreases the effects of edema and improves specificity for beading. Collectively, the simulation results demonstrate that advanced DWI methods, particularly those which sample diffusion along multiple directions within a single acquisition, have improved sensitivity to acute axonal injury over conventional DTI metrics and hold promise for more
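
    To make the conventional DTI metrics mentioned above concrete, the sketch below computes fractional anisotropy, one such scalar metric, from a diffusion tensor using the standard formula; the example tensor values are assumed, not taken from the study.

      import numpy as np

      def fractional_anisotropy(tensor):
          # Standard FA formula computed from the tensor eigenvalues.
          evals = np.linalg.eigvalsh(tensor)
          md = evals.mean()
          num = np.sqrt(1.5 * np.sum((evals - md) ** 2))
          den = np.sqrt(np.sum(evals ** 2))
          return float(num / den) if den > 0 else 0.0

      # A roughly axially symmetric "coherent fiber" tensor (units um^2/ms, assumed).
      D = np.diag([1.7, 0.3, 0.3])
      print(f"FA = {fractional_anisotropy(D):.2f}")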

  12. Advanced simulation methods to detect resonant frequency stack up in focal plane design

    NASA Astrophysics Data System (ADS)

    Adams, Craig; Malone, Neil R.; Torres, Raymond; Fajardo, Armando; Vampola, John; Drechsler, William; Parlato, Russell; Cobb, Christopher; Randolph, Max; Chiourn, Surath; Swinehart, Robert

    2014-09-01

    The wire used to connect focal plane electrical connections to external circuitry can be modeled using its length, diameter, and loop height to determine its resonant frequency. The design of the adjacent electronics board and mounting platform can also be analyzed. The combined resonant frequency analysis can then be used to decouple the different component resonant frequencies and eliminate the potential for metal fatigue in the wires. It is important to note that the nominal maximum stress values that cause metal fatigue can be much less than the ultimate tensile stress limit or the yield stress limit, and are degraded further at resonant frequencies. It is critical that tests be done to qualify designs that are not easily simulated due to material property variation and complex structures. Sine-wave vibration testing is a critical component of qualification vibration and provides the highest accuracy in determining the resonant frequencies, which can then be reduced or decoupled through small changes in design damping or modern space material selection, improving the structural performance of the focal plane assembly. Vibration flow-down from higher levels of assembly must account for intermediary hardware, which may amplify or attenuate the full-up system vibration profile. A simple pass-through of vibration requirements may result in over-testing or in missing amplified resonant frequencies that can cause system failure. Examples are shown of metal wire fatigue, such as discoloration and microscopic cracks, which are visible at the submicron level with a scanning electron microscope. While it is important to model and test resonant frequencies, the focal plane must also be mounted so that coefficient-of-thermal-expansion mismatches are accommodated by allowing movement without overstressing the FPA.
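
    As a hedged illustration of the kind of first-pass wire-resonance estimate described above, the sketch below treats the bond wire as a clamped-clamped beam and estimates its fundamental frequency from length and diameter; loop height, bond compliance, and board coupling are ignored, and the material values are assumptions.

      import math

      def wire_fundamental_hz(length_m, diameter_m, youngs_pa=78e9, density_kg_m3=19300.0):
          # Clamped-clamped Euler-Bernoulli beam, first mode (beta * L = 4.730);
          # the default material values are roughly those of gold (assumed).
          area = math.pi * diameter_m ** 2 / 4.0
          inertia = math.pi * diameter_m ** 4 / 64.0
          beta_l = 4.730
          return (beta_l ** 2 / (2.0 * math.pi)) * math.sqrt(
              youngs_pa * inertia / (density_kg_m3 * area * length_m ** 4))

      # An assumed 1 mm long, 25 um diameter wire lands in the tens of kHz.
      print(f"{wire_fundamental_hz(1.0e-3, 25e-6) / 1e3:.0f} kHz")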

  13. An advanced disruption predictor for JET tested in a simulated real-time environment

    NASA Astrophysics Data System (ADS)

    Rattá, G. A.; Vega, J.; Murari, A.; Vagliasindi, G.; Johnson, M. F.; de Vries, P. C.; EFDA Contributors, JET

    2010-02-01

    Disruptions are sudden and unavoidable losses of confinement that may put at risk the integrity of a tokamak. However, the physical phenomena leading to disruptions are very complex and non-linear, and therefore no satisfactory model has been devised so far for either their avoidance or their prediction. For this reason, machine learning techniques have been extensively pursued in recent years. In this paper a real-time predictor specifically developed for JET and based on support vector machines is presented. The main aim of the present investigation is to obtain high recognition rates in a simulated real-time environment. To this end the predictor has been tested on the time slices of entire discharges, exactly as in real-world operation. Since the year 2000, the experiments at JET have been organized in campaigns named sequentially beginning with campaign C1. In this paper results from campaign C1 (year 2000) up to C19 (year 2007) are reported. The predictor has been trained with data from JET's campaigns up to C7, with particular attention to reducing the number of missed alarms, which are less than 1% for a test set of discharges from the same campaigns used for the training. The false alarms plus premature alarms are of the order of 6.4%, for a total success rate of more than 92%. The robustness of the predictor has been proven by testing it with a wide subset of shots from more recent campaigns (from C8 to C19) without any retraining. The success rate over the period between C8 and C14 is on average 88% and never falls below 82%, confirming the good generalization capabilities of the developed technique. After C14, significant modifications were implemented on JET and its diagnostics, and consequently the success rate of the predictor between C15 and C19 decays to an average of 79%. Finally, the performance of the developed detection system has been compared with the predictions of the JET protection system (JPS). The new predictor clearly outperforms JPS
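
    A minimal sketch, not the JET predictor itself: a support-vector-machine classifier trained on per-time-slice features and then evaluated slice by slice, as a real-time system would be; the diagnostics, feature values, and drift are synthetic, and scikit-learn is assumed to be available.

      import numpy as np
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      # Synthetic "safe" vs "pre-disruptive" time slices, 4 assumed diagnostics each.
      safe = rng.normal(0.0, 1.0, size=(500, 4))
      pre_disruptive = rng.normal(3.0, 1.0, size=(500, 4))
      X = np.vstack([safe, pre_disruptive])
      y = np.array([0] * 500 + [1] * 500)

      clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, y)

      # Stream a synthetic discharge slice by slice and raise an alarm on the
      # first positive classification.
      discharge = rng.normal(0.0, 1.0, size=(200, 4))
      discharge[150:] += 3.5  # drift toward the disruptive region of feature space
      for t, features in enumerate(discharge):
          if clf.predict(features.reshape(1, -1))[0] == 1:
              print(f"alarm raised at slice {t}")
              break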

  14. Advances in estimating the climate sensitivity of a large lake using scenario simulations

    NASA Astrophysics Data System (ADS)

    Eder, M. M.; Schlabing, D.; Frassl, M. A.; Rinke, K.; Bárdossy, A.

    2012-04-01

    The poster "Simulating the effect of meteorological variability on a lake ecosystem" by Marieke Anna Frassl in this session, "Lakes and Inland Seas", shows lake model runs focusing on water quality and ecosystem behavior under different climate change scenarios. For further information on VG, see "Stochastic Downscaling for Hydrodynamic and Ecological Modeling of Lakes" by Dirk Schlabing in session "Hydroclimatic stochastics" (HS7.5 / NP8.3).

  15. Advanced Variance Reduction for Global k-Eigenvalue Simulations in MCNP

    SciTech Connect

    Edward W. Larsen

    2008-06-01

    to the correlations between fission source estimates. In the new FMC method, the eigenvalue problem (expressed in terms of the Boltzmann equation) is integrated over the energy and direction variables. Then these equations are multiplied by J special "tent" functions in space and integrated over the spatial variable. This yields J equations that are exactly satisfied by the eigenvalue k and J space-angle-energy moments of the eigenfunction. Multiplying and dividing by suitable integrals of the eigenfunction, one obtains J algebraic equations for k and the space-angle-energy moments of the eigenfunction, which contain nonlinear functionals that depend weakly on the eigenfunction. In the FMC method, information from the standard Monte Carlo solution for each active cycle is used to estimate the functionals, and at the end of each cycle the J equations for k and the space-angle-energy moments of the eigenfunction are solved. Finally, these results are averaged over N active cycles to obtain estimated means and standard deviations for k and the space-angle-energy moments of the eigenfunction. Our limited testing shows that for large single fissile systems such as a commercial reactor core, (i) the FMC estimate of the eigenvalue is at least one order of magnitude more accurate than estimates obtained from the standard Monte Carlo approach, (ii) the FMC estimate of the eigenfunction converges and is several orders of magnitude more accurate than the standard estimate, and (iii) the FMC estimate of the standard deviation in k is at least one order of magnitude closer to the correct standard deviation than the standard estimate. These advances occur because: (i) the Monte Carlo estimates of the nonlinear functionals are much more accurate than the direct Monte Carlo estimates of the eigenfunction, (ii) the system of discrete equations that determines the FMC estimates of k is robust, and (iii) the functionals are only very weakly correlated between different fission
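
    For readers unfamiliar with the quantities being estimated, the toy power iteration below (not the FMC method, and deterministic rather than Monte Carlo) shows how a k-eigenvalue and a fission-source shape emerge from repeated cycles on an assumed three-region fission operator.

      import numpy as np

      # Assumed region-to-region fission/transport operator for a toy 3-region core.
      F = np.array([[0.9, 0.3, 0.0],
                    [0.3, 0.8, 0.3],
                    [0.0, 0.3, 0.9]])

      source = np.ones(3)
      for cycle in range(50):            # analogous to the "active cycles" above
          new = F @ source
          k = new.sum() / source.sum()   # cycle estimate of the multiplication factor
          source = new / k               # renormalize the fission source

      print(f"k ~ {k:.4f}, source shape ~ {source / source.sum()}")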

  16. Leveraging data analytics, patterning simulations and metrology models to enhance CD metrology accuracy for advanced IC nodes

    NASA Astrophysics Data System (ADS)

    Rana, Narender; Zhang, Yunlin; Kagalwala, Taher; Hu, Lin; Bailey, Todd

    2014-04-01

    Integrated circuit (IC) technology is changing in multiple ways: from 193i to EUV exposure, from planar to non-planar device architectures, and from single-exposure lithography to multiple-exposure and DSA patterning. Critical dimension (CD) control requirements are becoming more stringent and more exhaustive: CDs and process windows are shrinking, three-sigma CD control of < 2 nm is required in complex geometries, and metrology uncertainty of < 0.2 nm is required to achieve the target CD control for advanced IC nodes (e.g. the 14 nm, 10 nm and 7 nm nodes). There are fundamental capability and accuracy limits in all the metrology techniques that are detrimental to the success of advanced IC nodes. Reference or physical CD metrology is provided by CD-AFM and TEM, while workhorse metrology is provided by CD-SEM, scatterometry, and model-based infrared reflectometry (MBIR). Precision alone is not sufficient moving forward, and no single technique is sufficient to ensure the required accuracy of patterning. The accuracy of CD-AFM is ~1 nm, and precision in TEM is poor due to limited statistics. CD-SEM, scatterometry and MBIR need to be calibrated against reference measurements to ensure the accuracy of patterned CDs and patterning models. There is a dire need for measurements with < 0.5 nm accuracy, and the industry currently does not have that capability with inline measurements. Aware of the capability gaps of the various metrology techniques, we have employed data processing techniques and predictive data analytics, along with patterning simulations, metrology models, and data integration techniques, in selected applications, demonstrating the potential and practicality of such an approach to enhancing CD metrology accuracy. Data from multiple metrology techniques have been analyzed in multiple ways to extract information with associated uncertainties and then integrated to obtain more accurate CD and profile information for the structures. This paper presents the optimization of
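
    One simple, commonly used way to integrate measurements that carry associated uncertainties is an inverse-variance weighted mean; it is shown here purely as an illustration, since the abstract does not spell out its data-integration scheme, and the technique names and numbers below are assumptions.

      import numpy as np

      def fuse(values_nm, sigmas_nm):
          # Inverse-variance weighted mean and its combined uncertainty.
          w = 1.0 / np.asarray(sigmas_nm, dtype=float) ** 2
          mean = np.sum(w * np.asarray(values_nm, dtype=float)) / np.sum(w)
          sigma = np.sqrt(1.0 / np.sum(w))
          return mean, sigma

      # e.g. assumed CD-SEM, scatterometry, and CD-AFM reference readings (nm).
      cd, u = fuse([20.4, 19.8, 20.1], [0.6, 0.4, 1.0])
      print(f"fused CD = {cd:.2f} +/- {u:.2f} nm")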

  17. Scientific and Computational Challenges of the Fusion Simulation Program (FSP)

    SciTech Connect

    William M. Tang

    2011-02-09

    This paper highlights the scientific and computational challenges facing the Fusion Simulation Program (FSP), a major national initiative in the United States whose primary objective is to enable scientific discovery of important new plasma phenomena, together with the associated understanding that emerges only upon integration. This requires developing a predictive integrated simulation capability for magnetically confined fusion plasmas that is properly validated against experiments in regimes relevant to producing practical fusion energy. The program is expected to provide a suite of advanced modeling tools for reliably predicting fusion device behavior with comprehensive and targeted science-based simulations of nonlinearly coupled phenomena in the core plasma, edge plasma, and wall region on the time and space scales required for fusion energy production. As such, it will strive to embody the most current theoretical and experimental understanding of magnetic fusion plasmas and to provide a living framework for the simulation of such plasmas as the associated physics understanding continues to advance over the next several decades. Substantive progress on answering the outstanding scientific questions in the field will drive the FSP toward its ultimate goal of developing the ability to predict the behavior of plasma discharges in toroidal magnetic fusion devices with high physics fidelity on all relevant time and space scales. From a computational perspective, this will demand computing resources in the petascale range and beyond, together with the associated multi-core algorithmic formulations needed to address burning plasma issues relevant to ITER, a multibillion-dollar collaborative experiment involving seven international partners representing over half the world's population. Even more powerful exascale platforms will be needed to meet the future challenges of designing a demonstration fusion reactor (DEMO). Analogous to other major applied physics modeling projects (e

  18. Lightweighting Automotive Materials for Increased Fuel Efficiency and Delivering Advanced Modeling and Simulation Capabilities to U.S. Manufacturers

    SciTech Connect

    Hale, Steve

    2013-09-11

    The National Center for Manufacturing Sciences (NCMS) worked with the U.S. Department of Energy (DOE), National Energy Technology Laboratory (NETL), to bring together research and development (R&D) collaborations to develop and accelerate the knowledge base and infrastructure for lightweighting materials and manufacturing processes for use in structural applications in the automotive sector. The purpose and importance of this DOE program: • 2016 CAFE standards. • Automotive industry adoption of lightweighting material concepts in the manufacture of production vehicles. • Development and manufacture of advanced research tools for modeling and simulation (M&S) applications to reduce manufacturing and material costs. • U.S. competitiveness that will help drive the development and manufacture of the next generation of materials. NCMS established a focused portfolio of applied R&D projects utilizing lightweighting materials for manufacture into automotive structures and components. Areas targeted in this program: • Functionality of new lightweighting materials to meet present safety requirements. • Manufacturability using new lightweighting materials. • Cost reduction for the development and use of new lightweighting materials. The automotive industry's future continuously evolves through innovation, and lightweight materials are key in achieving a new era of lighter, more efficient vehicles. Lightweight materials are among the technical advances needed to achieve fuel/energy efficiency and reduce carbon dioxide (CO2) emissions: • Establish design criteria methodology to identify the best materials for lightweighting. • Employ state-of-the-art design tools for optimum material development for their specific applications. • Match new manufacturing technology to production volume. • Address new process variability with new production-ready processes.

  19. Advancements for Active Remote Sensing of Carbon Dioxide from Space using the ASCENDS CarbonHawk Experiment Simulator: First Results

    NASA Astrophysics Data System (ADS)

    Obland, M. D.; Nehrir, A. R.; Lin, B.; Harrison, F. W.; Kooi, S. A.; Choi, Y.; Plant, J.; Yang, M. M.; Antill, C.; Campbell, J. F.; Ismail, S.; Browell, E. V.; Meadows, B.; Dobler, J. T.; Zaccheo, T. S.; Moore, B., III; Crowell, S.

    2014-12-01

    The ASCENDS CarbonHawk Experiment Simulator (ACES) is an Intensity-Modulated Continuous-Wave lidar system recently developed at NASA Langley Research Center that seeks to advance technologies and techniques critical to measuring atmospheric column carbon dioxide (CO2) mixing ratios in support of the NASA Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) mission. These advancements include: (1) increasing the power-aperture product to approach ASCENDS mission requirements by implementing multi-aperture telescopes and multiple co-aligned laser transmitters; (2) incorporating high-efficiency, high-power Erbium-Doped Fiber Amplifiers (EDFAs); (3) developing and incorporating a high-bandwidth, low-noise HgCdTe detector and transimpedance amplifier (TIA) subsystem capable of long-duration operation on Global Hawk aircraft; and (4) advancing algorithms for cloud and aerosol discrimination. The ACES instrument architecture is being developed for operation on high-altitude aircraft and will be directly scalable to meet the ASCENDS mission requirements. ACES simultaneously transmits five laser beams: three from commercial EDFAs operating near 1571 nm, and two from the Exelis oxygen (O2) Raman fiber laser amplifier system operating near 1260 nm. The Integrated-Path Differential Absorption (IPDA) lidar approach is used at both wavelengths to independently measure the CO2 and O2 column number densities and retrieve the average column CO2 mixing ratio. The outgoing laser beams are aligned to the field of view of ACES' three fiber-coupled 17.8-cm diameter athermal telescopes. The backscattered light collected by the three telescopes is sent to the detector/TIA subsystem, which has a bandwidth of 4.7 MHz and operates service-free using a tactical dewar and cryocooler. Two key laser modulation approaches are being tested to significantly mitigate the effects of thin clouds on the retrieved CO2 column amounts. Full instrument development concluded in the
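
    A minimal sketch of the IPDA relation the abstract relies on: the energy-normalized ratio of the off-line to on-line returns gives a round-trip differential optical depth, and the one-way value divided by the differential absorption cross-section yields a column number density. The return values and cross-section below are illustrative assumptions, not ACES data.

      import math

      def co2_column_per_m2(p_on, p_off, e_on, e_off, delta_sigma_m2):
          # Energy-normalized on/off ratio -> round-trip differential optical
          # depth; halve for one way, then divide by the differential
          # absorption cross-section to get a column number density.
          one_way_daod = 0.5 * math.log((p_off / e_off) / (p_on / e_on))
          return one_way_daod / delta_sigma_m2

      # Assumed returns (off-line stronger than on-line) and an assumed CO2
      # differential cross-section near 1571 nm.
      n_co2 = co2_column_per_m2(p_on=0.62, p_off=1.0, e_on=1.0, e_off=1.0,
                                delta_sigma_m2=5.0e-27)
      print(f"CO2 column ~ {n_co2:.2e} molecules/m^2")

    Dividing such a CO2 column by the dry-air column retrieved in the same way from the O2 channel would give the average column mixing ratio the abstract refers to.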

  20. Stress trajectory and advanced hydraulic-fracture simulations for the Eastern Gas Shales Project. Final report, April 30, 1981-July 30, 1983

    SciTech Connect

    Advani, S.H.; Lee, J.K.

    1983-01-01

    A summary review of hydraulic fracture modeling is given. Advanced hydraulic fracture model formulations and simulation, using the finite element method, are presented. The numerical examples include the determination of fracture width, height, length, and stress intensity factors with the effects of frac fluid properties, layered strata, in situ stresses, and joints. Future model extensions are also recommended. 66 references, 23 figures.