Science.gov

Sample records for advanced petascale simulations

  1. Interoperable Technologies for Advanced Petascale Simulations (ITAPS)

    SciTech Connect

    Shephard, Mark S

    2010-02-05

    Efforts during the past year have contributed to the continued development of the ITAPS interfaces and services as well as specific efforts to support ITAPS applications. The ITAPS interface efforts have two components. The first is working with the ITAPS team on improving the ITAPS software infrastructure and the level of compliance of our implementations of the ITAPS interfaces (iMesh, iMeshP, iRel and iGeom). The second is involvement in the discussions on the design of the iField fields interface. Efforts to move the ITAPS technologies to petascale computers have identified a number of key technical developments that are required to effectively execute the ITAPS interfaces and services. Research to address these parallel method developments has been a major emphasis of the RPI team's efforts over the past year. The development of parallel unstructured mesh methods has considered the need to scale unstructured mesh solves to massively parallel computers. These efforts, summarized in Section 2.1, show that with the addition of the ITAPS procedures described in Sections 2.2 and 2.3 we are able to obtain excellent strong scaling with our unstructured mesh CFD code on up to 294,912 cores of the IBM Blue Gene/P, the highest-core-count machine available. The ITAPS developments that have contributed to the scaling and performance of PHASTA include an iterative migration algorithm to improve the combined region and vertex balance of the mesh partition, which increases scalability, and mesh data reordering, which improves computational performance. The other developments are associated with the further development of the ITAPS parallel unstructured mesh
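    To make the balance criterion concrete, here is a minimal Python sketch (not the ITAPS API; the part counts and helper are hypothetical) of the combined region/vertex imbalance that such an iterative migration algorithm drives toward 1.0.

```python
def imbalance(loads):
    """Max part load divided by mean part load (1.0 is perfect balance)."""
    return max(loads) / (sum(loads) / len(loads))

# Hypothetical per-part counts for a 4-part mesh.
region_counts = [260, 240, 255, 245]   # mesh regions (elements) owned by each part
vertex_counts = [900, 1400, 950, 980]  # mesh vertices owned by each part

print("region imbalance:", round(imbalance(region_counts), 3))
print("vertex imbalance:", round(imbalance(vertex_counts), 3))

# One migration step would move boundary regions off the part that is
# worst with respect to the criterion currently limiting scalability:
worst = max(range(len(vertex_counts)), key=lambda p: vertex_counts[p])
print("migrate boundary regions away from part", worst)
```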

  2. Interoperable Technologies for Advanced Petascale Simulations

    SciTech Connect

    Li, Xiaolin

    2013-01-14

    Our final report on the accomplishments of ITAPS at Stony Brook during the period covered by the research award includes component services, interface services, and applications. On the component service, we have designed and implemented robust functionality for the Lagrangian tracking of dynamic interfaces. We have migrated the hyperbolic, parabolic, and elliptic solvers from stage-wise second-order toward global second-order schemes. We have implemented high-order coupling between interface propagation and interior PDE solvers. On the interface service, we have constructed the FronTier application programmer's interface (API) and its manual page using doxygen. We have installed the FronTier functional interface to conform with the ITAPS specifications, especially the iMesh and iMeshP interfaces. On applications, we have implemented deposition and dissolution models with flow, and implemented the two-reactant model for more realistic precipitation at the pore level and its coupling with the Darcy-level model. We have continued our support of the study of fluid mixing problems in inertial confinement fusion. We have continued our support of the MHD model and its application to plasma liner implosion in fusion confinement. We have simulated a step in the reprocessing and separation of spent fuel from nuclear power plant fuel rods. We have implemented fluid-structure interaction for 3D windmill and parachute simulations. We have continued our collaboration with PNNL, BNL, LANL, ORNL, and other SciDAC institutions.

  3. Advances in petascale kinetic plasma simulation with VPIC and Roadrunner

    SciTech Connect

    Bowers, Kevin J; Albright, Brian J; Yin, Lin; Daughton, William S; Roytershteyn, Vadim; Kwan, Thomas J T

    2009-01-01

    VPIC, a first-principles 3D electromagnetic charge-conserving relativistic kinetic particle-in-cell (PIC) code, was recently adapted to run on Los Alamos's Roadrunner, the first supercomputer to break a petaflop (10^15 floating point operations per second) in the TOP500 supercomputer performance rankings. The authors give a brief overview of the modeling capabilities and optimization techniques used in VPIC and the computational characteristics of petascale supercomputers like Roadrunner. They then discuss three applications enabled by VPIC's unprecedented performance on Roadrunner: modeling laser-plasma interaction in upcoming inertial confinement fusion experiments at the National Ignition Facility (NIF), modeling short-pulse laser GeV ion acceleration, and modeling reconnection in magnetic confinement fusion experiments.
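    As an illustration of the kinetic particle push at the core of PIC codes like VPIC, the following is a minimal sketch of the standard relativistic Boris integrator (not VPIC's implementation; fields, units, and parameters are invented, and the gather/scatter and Maxwell solve are omitted).

```python
import numpy as np

def boris_push(u, E, B, q_over_m, dt):
    """Advance normalized momentum u = gamma*v for one particle by dt (c = 1)."""
    half = 0.5 * q_over_m * dt
    u_minus = u + half * E                           # first half electric kick
    gamma = np.sqrt(1.0 + np.dot(u_minus, u_minus))
    t = half * B / gamma                             # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    u_prime = u_minus + np.cross(u_minus, t)
    u_plus = u_minus + np.cross(u_prime, s)          # magnetic rotation
    return u_plus + half * E                         # second half electric kick

# Made-up field values and timestep, just to exercise the push.
u = np.array([0.1, 0.0, 0.0])
E = np.array([0.0, 0.01, 0.0])
B = np.array([0.0, 0.0, 1.0])
for _ in range(10):
    u = boris_push(u, E, B, q_over_m=-1.0, dt=0.05)
print("u after 10 steps:", u)
```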

  4. Collaboration Portal for Petascale Simulations

    SciTech Connect

    Klasky, Scott A; Podhorszki, Norbert; Mouallem, P. A.; Vouk, Mladen

    2009-01-01

    The emergence of leadership-class computing is creating a tsunami of data from petascale simulations. Results are typically analyzed by dozens of scientists. For scientists to digest the vast amount of data produced by the simulations and auxiliary programs, it is critical to automate the effort to manage, analyze, visualize, and share this data. One aspect of this is leveraging the scientists' collective knowledge and experience through a scientific social network. This can be achieved through a combination of parallel back-end services, provenance capturing, and an easy-to-use front-end tool. eSimMon is one such tool, which we developed as part of the Scientific Discovery through Advanced Computing (SciDAC) program. In this paper we describe eSimMon and discuss its ease of use, its efficiency, and its ability to accelerate scientific discovery through advanced computing.

  5. Enabling technologies for petascale electromagnetic accelerator simulation

    NASA Astrophysics Data System (ADS)

    Lee, Lie-Quan; Akcelik, Volkan; Chen, Sheng; Ge, Lixin; Prudencio, Ernesto; Schussman, Greg; Uplenchwar, Ravi; Ng, Cho; Ko, Kwok; Luo, Xiaojun; Shephard, Mark

    2007-07-01

    The SciDAC2 accelerator project at SLAC aims to simulate an entire three-cryomodule radio frequency (RF) unit of the International Linear Collider (ILC) main Linac. Petascale computing resources, supported by advances in Applied Mathematics (AM) and Computer Science (CS) and by the INCITE Program, are essential to enable the very large-scale electromagnetic accelerator simulations required by the ILC Global Design Effort. This poster presents the recent advances and achievements in the areas of CS/AM through collaborations.

  6. Enabling Technologies for Petascale Electromagnetic Accelerator Simulation

    SciTech Connect

    Lee, Lie-Quan; Akcelik, Volkan; Chen, Sheng; Ge, Li-Xin; Prudencio, Ernesto; Schussman, Greg; Uplenchwar, Ravi; Ng, Cho; Ko, Kwok; Luo, Xiaojun; Shephard, Mark; /Rensselaer Poly.

    2007-11-09

    The SciDAC2 accelerator project at SLAC aims to simulate an entire three-cryomodule radio frequency (RF) unit of the International Linear Collider (ILC) main Linac. Petascale computing resources, supported by advances in Applied Mathematics (AM) and Computer Science (CS) and by the INCITE Program, are essential to enable the very large-scale electromagnetic accelerator simulations required by the ILC Global Design Effort. This poster presents the recent advances and achievements in the areas of CS/AM through collaborations.

  7. Community Petascale Project for Accelerator Science and Simulation: Advancing Computational Science for Future Accelerators and Accelerator Technologies

    SciTech Connect

    Spentzouris, P.; Cary, J.; McInnes, L.C.; Mori, W.; Ng, C.; Ng, E.; Ryne, R.; /LBL, Berkeley

    2011-11-14

    The design and performance optimization of particle accelerators are essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC-1 Accelerator Science and Technology project, the SciDAC-2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modeling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single physics process modeling (covered under SciDAC-1) to high-fidelity multiphysics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors. ComPASS is in the first year of executing its plan to develop the next-generation HPC accelerator modeling tools. ComPASS aims to develop an integrated simulation environment that will utilize existing and new accelerator physics modules with petascale capabilities, by employing modern computing and solver technologies. The ComPASS vision is to deliver to accelerator scientists a virtual accelerator and virtual prototyping modeling environment, with the necessary multiphysics, multiscale capabilities. The plan for this development includes delivering accelerator modeling applications appropriate for each stage of the ComPASS software evolution. Such applications are already being used to address challenging problems in accelerator design and optimization. The ComPASS organization

  8. Petascale Core-Collapse Supernova Simulation

    NASA Astrophysics Data System (ADS)

    Messer, Bronson

    2009-11-01

    The advent of petascale computing brings with it the promise of substantial increases in physical fidelity for a host of scientific problems. However, the realities of computing on these resources are daunting, and the architectural features of petascale machines will require considerable innovation for effective use. Nevertheless, there exists a class of scientific problems whose ultimate answer requires the application of petascale (and beyond) computing. One example is ascertaining the core-collapse supernova mechanism and explaining the rich phenomenology associated with these events. These stellar explosions produce and disseminate a dominant fraction of the elements in the Universe; are prodigious sources of neutrinos, gravitational waves, and photons across the electromagnetic spectrum; and lead to the formation of neutron stars and black holes. I will describe our recent multidimensional supernova simulations performed on petascale platforms fielded by the DOE and NSF.

  9. Community Petascale Project for Accelerator Science and Simulation: Advancing Computational Science for Future Accelerators and Accelerator Technologies

    SciTech Connect

    Spentzouris, Panagiotis; Cary, John; Mcinnes, Lois Curfman; Mori, Warren; Ng, Cho; Ng, Esmond; Ryne, Robert; /LBL, Berkeley

    2008-07-01

    The design and performance optimization of particle accelerators are essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC-1 Accelerator Science and Technology project, the SciDAC-2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modeling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single physics process modeling (covered under SciDAC-1) to high-fidelity multiphysics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors.

  10. Community Petascale Project for Accelerator Science and Simulation: Advancing Computational Science for Future Accelerators and Accelerator Technologies

    SciTech Connect

    Spentzouris, Panagiotis; Cary, John; Mcinnes, Lois Curfman; Mori, Warren; Ng, Cho; Ng, Esmond; Ryne, Robert; /LBL, Berkeley

    2011-10-21

    The design and performance optimization of particle accelerators are essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC-1 Accelerator Science and Technology project, the SciDAC-2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modeling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single physics process modeling (covered under SciDAC-1) to high-fidelity multiphysics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors.

  11. Biomolecular simulations on petascale: promises and challenges

    NASA Astrophysics Data System (ADS)

    Agarwal, Pratul K.; Alam, Sadaf R.

    2006-09-01

    Proteins work as highly efficient machines at the molecular level and are responsible for a variety of processes in all living cells. There is wide interest in understanding these machines, with implications for the biochemical/biotechnology industries as well as for health-related fields. Over the last century, investigations of proteins based on a variety of experimental techniques have provided a wealth of information. More recently, theoretical and computational modeling using large-scale simulations has been providing novel insights into the functioning of these machines. The next generation of supercomputers, with petascale computing power, holds great promise as well as challenges for biomolecular simulation scientists. We briefly discuss the progress being made in this area.

  12. Community Petascale Project for Accelerator Science and Simulation

    SciTech Connect

    Warren B. Mori

    2013-02-01

    The UCLA Plasma Simulation Group is a major partner of the "Community Petascale Project for Accelerator Science and Simulation". This is the final technical report. We include an overall summary, a list of publications, and individual progress reports for each year. During the past five years we have made tremendous progress in enhancing the capabilities of OSIRIS and QuickPIC, in developing new algorithms and data structures for PIC codes to run on GPUs and future many-core architectures, and in using these codes to model experiments and make new scientific discoveries. Here we summarize some highlights for which SciDAC was a major contributor.

  13. Hierarchical petascale simulation framework for stress corrosion cracking

    NASA Astrophysics Data System (ADS)

    Vashishta, P.; Kalia, R. K.; Nakano, A.; Kaxiras, E.; Grama, A.; Lu, G.; Eidenbenz, S.; Voter, A. F.; Hood, R. Q.; Moriarty, J. A.; Yang, L. H.

    2008-07-01

    We are developing a scalable parallel and distributed computational framework consisting of methods, algorithms, and integrated software tools for multi-terascale-to-petascale simulations of stress corrosion cracking (SCC) with quantum-level accuracy. We have performed multimillion- to billion-atom molecular dynamics (MD) simulations of deformation, flow, and fracture in amorphous silica with interatomic potentials and forces validated by density functional theory (DFT) calculations. Optimized potentials have been developed to study sulfur embrittlement of nickel with multimillion- to multibillion-atom MD simulations based on DFT and temperature-dependent model generalized pseudopotential theory. We have also developed a quasi-continuum method embedded with quantum simulations based on DFT to reach macroscopic length scales, and an accelerated molecular dynamics scheme to reach macroscopic time scales, in simulations of solid-fluid interfaces that are relevant to SCC. A hybrid MD and mesoscale lattice Boltzmann simulation algorithm is being designed to study fluid flow through cracks.

  14. Petascale Flow Simulations Using Particles and Grids

    NASA Astrophysics Data System (ADS)

    Koumoutsakos, Petros

    2014-11-01

    How should we choose the discretization of flow models in order to harness the power of available computer architectures? Our group explores this question for particle (vortex methods, molecular and dissipative particle dynamics) and grid-based (finite difference, finite volume) discretizations for flow simulations across scales. I will discuss methodologies to transition between these methods and their implementation on massively parallel computer architectures. I will present simulations ranging from flows of cells in microfluidic channels to cloud cavitation collapse at 14.5 PFLOP/s. This research was supported by the European Research Council, the Swiss National Science Foundation and the Swiss National Supercomputing Center.

  15. Cray XT4: An Early Evaluation for Petascale Scientific Simulation

    SciTech Connect

    Alam, Sadaf R; Barrett, Richard F; Fahey, Mark R; Kuehn, Jeffery A; Sankaran, Ramanan; Worley, Patrick H; Larkin, Jeffrey M

    2007-01-01

    The scientific simulation capabilities of next generation high-end computing technology will depend on striking a balance among memory, processor, I/O, and local and global network performance across the breadth of the scientific simulation space. The Cray XT4 combines commodity AMD dual-core Opteron processor technology with the second generation of Cray's custom communication accelerator in a system design whose balance is claimed to be driven by the demands of scientific simulation. This paper presents an evaluation of the Cray XT4 using microbenchmarks to develop a controlled understanding of individual system components, providing the context for analyzing and comprehending the performance of several petascale-ready applications. Results gathered from several strategic application domains are compared with observations on the previous generation Cray XT3 and other high-end computing systems, demonstrating performance improvements across a wide variety of application benchmark problems.
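    For flavor, here is a minimal STREAM-style triad microbenchmark of the kind used to characterize memory bandwidth (a sketch only: the array size is an arbitrary choice, and the traffic estimate ignores the temporary that numpy's expression form allocates).

```python
import numpy as np, time

n = 10_000_000
b, c = np.random.rand(n), np.random.rand(n)
a = np.empty(n)
s = 3.0

best = float("inf")
for _ in range(5):                       # report the best of several trials
    t0 = time.perf_counter()
    np.add(b, s * c, out=a)              # triad: a = b + s*c
    best = min(best, time.perf_counter() - t0)

bytes_moved = 3 * n * 8                  # read b, read c, write a (approximate)
print(f"triad bandwidth ~ {bytes_moved / best / 1e9:.1f} GB/s")
```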

  16. Towards petascale simulation of atmospheric circulations with soundproof equations

    NASA Astrophysics Data System (ADS)

    Piotrowski, Zbigniew; Wyszogrodzki, Andrzej; Smolarkiewicz, Piotr

    2011-12-01

    This paper highlights progress with the development of a petascale implementation of the general-purpose high-resolution (nonoscillatory) hydrodynamical simulation code EULAG [Prusa et al. 2008, Comput. Fluids 37, 1193]. The applications addressed are anelastic atmospheric flows in the range of scales from micro to planetary. A new model domain decomposition onto a three-dimensional processor array has been implemented to increase model performance and scalability. The performance of the new code is demonstrated on the IBM Blue Gene/L and Cray XT4/XT5 supercomputers. The results show a significant improvement in model efficiency compared to the original decomposition onto a two-dimensional processor array in the horizontal, the standard in meteorological models.
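    The advantage of a 3D processor array can be seen with simple halo accounting: at high rank counts a horizontal-only decomposition leaves each rank with pencil-thin subdomains and proportionally more halo to exchange. A sketch, with made-up grid and processor counts:

```python
def halo_cells(grid, procs):
    """Halo cells and owned cells per rank, assuming 1-cell-wide halos."""
    nx, ny, nz = (g // p for g, p in zip(grid, procs))
    halo = 0
    if procs[0] > 1: halo += 2 * ny * nz   # faces exchanged in x
    if procs[1] > 1: halo += 2 * nx * nz   # faces exchanged in y
    if procs[2] > 1: halo += 2 * nx * ny   # faces exchanged in z
    return halo, nx * ny * nz

grid = (2048, 2048, 128)                     # illustrative global grid
for procs in [(256, 256, 1), (64, 64, 16)]:  # 65,536 ranks: 2D vs 3D array
    halo, owned = halo_cells(grid, procs)
    print(procs, f"halo/owned = {halo / owned:.3f}")
# The 3D array exchanges markedly fewer halo cells per owned cell,
# so communication shrinks relative to computation.
```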

  17. Scalable parallel programming for high performance seismic simulation on petascale heterogeneous supercomputers

    NASA Astrophysics Data System (ADS)

    Zhou, Jun

    The 1994 Northridge earthquake in Los Angeles, California, killed 57 people, injured over 8,700, and caused an estimated $20 billion in damage. Petascale simulations are needed in California and elsewhere to provide society with a better understanding of the rupture and wave dynamics of the largest earthquakes at the shaking frequencies required to engineer safe structures. As heterogeneous supercomputing infrastructures become more common, numerical developments in earthquake system research are particularly challenged by the dependence on accelerator elements to enable "the Big One" simulations at higher frequency and finer resolution. Reducing time to solution and power consumption are the two primary focus areas today for the enabling technology of fault rupture dynamics and seismic wave propagation in realistic 3D models of the crust's heterogeneous structure. This dissertation presents scalable parallel programming techniques for high-performance seismic simulation running on petascale heterogeneous supercomputers. A real-world earthquake simulation code, AWP-ODC, one of the most advanced earthquake codes to date, was chosen as the base code in this research, and the testbed is Titan at Oak Ridge National Laboratory, the world's largest heterogeneous supercomputer. The research work is primarily related to architecture study, computational performance tuning, and software system scalability. An earthquake simulation workflow has also been developed to support the efficient production of sets of simulations. The highlights of the technical development are an aggressive performance optimization focusing on data locality and a notable data communication model that hides the data communication latency. This development results in optimal computational efficiency and throughput for the 13-point stencil code on heterogeneous systems, which can be extended to general high-order stencil codes. Started from scratch, the hybrid CPU/GPU version of AWP
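    The latency-hiding communication model follows a standard pattern: post nonblocking halo exchanges, update the interior cells that need no remote data, then finish the boundary once the messages arrive. A 1D three-point sketch using mpi4py (not AWP-ODC's code; sizes and coefficients are invented):

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
left, right = (rank - 1) % size, (rank + 1) % size

n = 1 << 20
u = np.random.rand(n + 2)                # owned cells u[1..n], one ghost each side
new = np.empty_like(u)

reqs = [comm.Isend(u[1:2],     dest=left,    tag=0),  # my left edge -> left nbr
        comm.Isend(u[n:n+1],   dest=right,   tag=1),  # my right edge -> right nbr
        comm.Irecv(u[0:1],     source=left,  tag=1),  # left nbr's right edge
        comm.Irecv(u[n+1:n+2], source=right, tag=0)]  # right nbr's left edge

# Interior update needs no ghosts, so it overlaps the messages in flight.
new[2:n] = 0.25 * u[1:n-1] + 0.5 * u[2:n] + 0.25 * u[3:n+1]

MPI.Request.Waitall(reqs)                # halos have arrived; finish boundary
new[1] = 0.25 * u[0] + 0.5 * u[1] + 0.25 * u[2]
new[n] = 0.25 * u[n-1] + 0.5 * u[n] + 0.25 * u[n+1]
u[:] = new
```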

  18. Final Report: Towards Optimal Petascale Simulations (TOPS), ER25785

    SciTech Connect

    Reynolds, Daniel R

    2011-04-15

    Multiscale, multirate scientific and engineering applications in the SciDAC portfolio possess resolution requirements that are practically inexhaustible and demand execution on the highest-capability computers available, which will soon reach the petascale. While the variety of applications is enormous, their needs for mathematical software infrastructure are surprisingly coincident; moreover, the chief bottleneck is often the solver. At their current scalability limits, many applications spend the vast majority of their operations in solvers, due to solver algorithmic complexity that is superlinear in the problem size, whereas other phases scale linearly. Furthermore, the solver may be the phase of the simulation with the poorest parallel scalability, due to intrinsic global dependencies. This project brings together the providers of some of the world's most widely distributed, freely available, scalable solver software and focuses them on relieving this bottleneck for many specific applications within SciDAC, which are representative of many others outside it. Solver software directly supported under TOPS includes hypre, PETSc, SUNDIALS, SuperLU, TAO, and Trilinos. Transparent access is also provided to other solver software through the TOPS interface. The primary goals of TOPS are the development, testing, and dissemination of solver software, especially for systems governed by PDEs. Upon discretization, these systems possess mathematical structure that must be exploited for optimal scalability; therefore, application-targeted algorithmic research is included. TOPS software development includes attention to high performance as well as interoperability among the solver components. Integration of TOPS solvers into SciDAC applications is also directly supported. The role of the UCSD PI in this overall CET is one of direct interaction between the TOPS software partners and various DOE applications scientists, specifically toward

  19. Petascale Simulation Initiative Tech Base: FY2007 Final Report

    SciTech Connect

    May, J; Chen, R; Jefferson, D; Leek, J; Kaplan, I; Tannahill, J

    2007-10-26

    The Petascale Simulation Initiative began as an LDRD project in the middle of Fiscal Year 2004. The goal of the project was to develop techniques to allow large-scale scientific simulation applications to better exploit the massive parallelism that will come with computers running at petaflop/s rates. One of the major products of this work was the design and prototype implementation of a programming model and a runtime system that lets developers extend data-parallel applications to use task parallelism. By adopting task parallelism, applications can use processing resources more flexibly, exploit multiple forms of parallelism, and support more sophisticated multiscale and multiphysics models. Our programming model was originally called the Symponents Architecture but is now known as Cooperative Parallelism, and the runtime software that supports it is called Coop. (However, we sometimes refer to the programming model as Coop for brevity.) We have documented the programming model and runtime system in a submitted conference paper [1]. This report focuses on the specific accomplishments of the Cooperative Parallelism project (as we now call it) under Tech Base funding in FY2007. Development and implementation of the model under LDRD funding alone proceeded to the point of demonstrating a large-scale materials modeling application using Coop on more than 1300 processors by the end of FY2006. Beginning in FY2007, the project received funding from both LDRD and the Computation Directorate Tech Base program. Later in the year, after the three-year term of the LDRD funding ended, the ASC program supported the project with additional funds. The goal of the Tech Base effort was to bring Coop from a prototype to a production-ready system that a variety of LLNL users could work with. Specifically, the major tasks that we planned for the project were: (1) Port SARS [former name of the Coop runtime system] to another LLNL platform, probably Thunder or Peloton (depending
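    The flavor of layering task parallelism on a data-parallel application can be sketched with stock Python tools (a stand-in illustration only, not Coop itself, which couples separate MPI programs; the sub-model and sizes are invented):

```python
from concurrent.futures import ProcessPoolExecutor
import math

def fine_scale_model(state):
    # Hypothetical expensive sub-model invoked on demand.
    return sum(math.sin(state * k) for k in range(1, 50_000))

def main():
    zones = [0.1 * i for i in range(16)]        # coarse data-parallel domain
    with ProcessPoolExecutor(max_workers=4) as servers:
        # Farm irregular side tasks out to "server" workers instead of
        # blocking the data-parallel ranks.
        futures = {z: servers.submit(fine_scale_model, z) for z in zones}
        coarse = [z * 2.0 for z in zones]        # coarse-grid work continues
        results = {z: f.result() for z, f in futures.items()}
    print(len(coarse), "coarse updates,", len(results), "fine-scale tasks")

if __name__ == "__main__":
    main()
```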

  20. Final Report for "Community Petascale Project for Accelerator Science and Simulations".

    SciTech Connect

    Cary, J. R.; Bruhwiler, D. L.; Stoltz, P. H.; Cormier-Michel, E.; Cowan, B.; Schwartz, B. T.; Bell, G.; Paul, K.; Veitzer, S.

    2013-04-19

    This final report describes the work accomplished over the past 5 years under the Community Petascale Project for Accelerator Science and Simulation (ComPASS) at Tech-X Corporation. Tech-X has been involved in the full range of ComPASS activities: simulation of laser-plasma accelerator concepts, mainly in collaboration with the LOASIS program at LBNL; simulation of coherent electron cooling in collaboration with BNL; modeling of electron clouds in high-intensity accelerators in collaboration with researchers at Fermilab; and accurate modeling of superconducting RF cavities in collaboration with Fermilab, JLab, and the Cockcroft Institute in the UK.

  21. Topology for statistical modeling of petascale data.

    SciTech Connect

    Pascucci, Valerio; Mascarenhas, Ajith Arthur; Rusek, Korben; Bennett, Janine Camille; Levine, Joshua; Pebay, Philippe Pierre; Gyulassy, Attila; Thompson, David C.; Rojas, Joseph Maurice

    2011-07-01

    This document presents current technical progress and dissemination of results for the Mathematics for Analysis of Petascale Data (MAPD) project titled 'Topology for Statistical Modeling of Petascale Data', funded by the Office of Science Advanced Scientific Computing Research (ASCR) Applied Math program. Many commonly used algorithms for mathematical analysis do not scale well enough to accommodate the size or complexity of petascale data produced by computational simulations. The primary goal of this project is thus to develop new mathematical tools that address both the petascale size and uncertain nature of current data. At a high level, our approach is based on the complementary techniques of combinatorial topology and statistical modeling. In particular, we use combinatorial topology to filter out spurious data that would otherwise skew statistical modeling techniques, and we employ advanced algorithms from algebraic statistics to efficiently find globally optimal fits to statistical models. This document summarizes the technical advances we have made to date that were made possible in whole or in part by MAPD funding. These technical contributions can be divided loosely into three categories: (1) advances in the field of combinatorial topology, (2) advances in statistical modeling, and (3) new integrated topological and statistical methods.
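    As a one-dimensional cartoon of the topological filtering step, the sketch below computes the persistence of each local maximum of a noisy signal with a union-find sweep and keeps only peaks above a persistence threshold (illustrative only; the project's methods operate on higher-dimensional Morse complexes).

```python
import numpy as np

def peak_persistence(y):
    """Return (persistence, peak index) pairs for local maxima of 1D array y."""
    order = np.argsort(y)[::-1]          # visit samples from high to low
    parent, birth, pairs = {}, {}, []

    def find(i):                         # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in order:
        i = int(i)
        parent[i], birth[i] = i, y[i]    # new component born at a local max
        for j in (i - 1, i + 1):
            if j in parent:
                ri, rj = find(i), find(j)
                if ri != rj:             # merging: lower-birth component dies
                    lo, hi = sorted((ri, rj), key=lambda r: birth[r])
                    pairs.append((birth[lo] - y[i], lo))
                    parent[lo] = hi
    pairs.append((np.inf, int(order[0])))  # global max never dies
    return pairs

rng = np.random.default_rng(0)
x = np.linspace(0, 4 * np.pi, 400)
y = np.sin(x) + 0.2 * rng.standard_normal(400)
keep = [(p, i) for p, i in peak_persistence(y) if p > 0.5]
print("significant peaks at indices:", sorted(i for _, i in keep))
```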

  22. Enabling Breakthrough Kinetic Simulations of the Magnetosphere Using Petascale Computing

    NASA Astrophysics Data System (ADS)

    Vu, H. X.; Karimabadi, H.; Omelchenko, Y.; Tatineni, M.; Majumdar, A.; Krauss-Varban, D.; Dorelli, J.

    2009-12-01

    Currently, global magnetospheric simulations are predominantly based on single-fluid magnetohydrodynamics (MHD). MHD simulations have proven useful in studies of the global dynamics of the magnetosphere, with the goal of predicting salient features of substorms and other global events. But it is well known that the magnetosphere is dominated by ion kinetic effects, which are ignored in MHD simulations, and many key aspects of the magnetosphere relating to transport and the structure of boundaries await global kinetic simulations. We are using our recent innovations in hybrid (electron fluid, kinetic ions) simulations, as developed in our Hybrid3D (H3D) code, and the power of massively parallel machines to make breakthrough 3D global kinetic simulations of the magnetosphere. The innovations include (i) a multi-zone (asynchronous) algorithm, (ii) dynamic load balancing, and (iii) code adaptation and optimization for large numbers of processors. In this presentation we will show preliminary results of our progress to date using from 512 to over 8192 cores. In particular, we focus on what we believe to be the first demonstration of the formation of a flux rope in 3D global hybrid simulations. As in the MHD simulations, the resulting flux rope has a very complex structure, wrapping up field lines from different regions, and appears to be connected on at least one end to Earth. The magnetic topology of the FTE is examined to reveal the existence of several separators (3D X-lines). The formation and growth of this structure will be discussed, and spatial profiles of the magnetic and plasma variables will be compared with those from MHD simulations.

  23. Enabling Global Kinetic Simulations of the Magnetosphere via Petascale Computing

    NASA Astrophysics Data System (ADS)

    Karimabadi, H.; Vu, H. X.; Omelchenko, Y. A.; Tatineni, M.; Majumdar, A.; Catalyurek, U. V.; Saule, E.

    2009-11-01

    The ultimate goal in magnetospheric physics is to understand how the solar wind transfers its mass, momentum, and energy to the magnetosphere. This problem has turned out to be much more complex intellectually than originally thought. MHD simulations have proven useful in predicting salient features of substorms and other global events. Given the complexity of solar wind-magnetosphere interactions, hybrid (electron fluid, kinetic ion) simulations have recently been emerging in studies of the global dynamics of the magnetosphere, with the goal of accurately predicting energetic particle transport and the structure of plasma boundaries. We take advantage of our recent innovations in hybrid simulations and the power of massively parallel computers to make breakthrough 3D global kinetic simulations of the magnetosphere. The preliminary results reveal many major differences from global MHD simulations. For example, the hybrid simulations predict the formation of the quadrupole structure associated with reconnection events, ion/ion kink instability in the tail, turbulence in the magnetosheath, and the formation of the ion foreshock region.

  24. Petascale Molecular Dynamics Simulations of Polymers and Liquid Crystals

    NASA Astrophysics Data System (ADS)

    Nguyen, Trung Dac; Carrillo, Jan-Michael; Brown, W. Michael

    2014-03-01

    The availability of faster and larger supercomputers and more efficient parallel algorithms now enables us to perform unprecedented simulations approaching experimental scales. Here we present two examples of our latest large-scale molecular dynamics simulations using the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). In the first study, we address the rupture origin of liquid crystal thin films wetting a solid substrate. Our simulations show the key signatures of spinodal instability in isotropic and nematic films on top of thermal nucleation. Importantly, we found evidence of a common rupture mechanism independent of initial thickness and LC orientational ordering. In the second study, we used coarse-grained molecular dynamics to simulate the thermal annealing of poly(3-hexylthiophene) (P3HT) and phenyl-C61-butyric acid methyl ester (PCBM) blends in the presence of a silicon substrate, as found in organic solar cells. Our simulations show different phase-segregated morphologies depending on the P3HT chain length and the PCBM volume fraction in the blend. Furthermore, the ternary blend of short and long P3HT chains with PCBM affects the vertical phase segregation of PCBM, decreasing its concentration in the vicinity of the substrate. U.S. DOE Contract No. DE-AC05-00OR22725.

  25. Hierarchical Petascale Simulation Framework for Stress Corrosion Cracking

    SciTech Connect

    Vashishta, Priya

    2014-12-01

    Reaction Dynamics in Energetic Materials: Detonation is a prototype of mechanochemistry, in which mechanically and thermally induced chemical reactions far from equilibrium exhibit vastly different behaviors. It is also one of the hardest multiscale physics problems, in which diverse length and time scales play important roles. The CACS group has performed multimillion-atom reactive MD simulations to reveal a novel two-stage reaction mechanism during the detonation of cyclotrimethylenetrinitramine (RDX) crystal. Rapid production of N2 and H2O within ~10 ps is followed by delayed production of CO molecules within ~1 ns. They found that further decomposition toward the final products is inhibited by the formation of large metastable C- and O-rich clusters with fractal geometry. The CACS group has also simulated the oxidation dynamics of close-packed aggregates of aluminum nanoparticles passivated by oxide shells. Their simulation results suggest an unexpectedly active role of the oxide shell as a nanoreactor.

  26. Peta-scale QMC simulations on DOE leadership computing facilities

    NASA Astrophysics Data System (ADS)

    Kim, Jeongnim; Ab Initio Network Collaboration

    2014-03-01

    Continuum quantum Monte Carlo (QMC) has proved to be an invaluable tool for predicting the properties of matter from fundamental principles. Even with numerous innovations in methods, algorithms, and codes, QMC simulations of realistic problems with thousands or more electrons are demanding, requiring millions of core hours to achieve the target chemical accuracy. The multiple forms of parallelism afforded by QMC algorithms and their high compute-to-communication ratio make them ideal candidates for acceleration in the multi/many-core paradigm. We have ported and tuned QMCPACK for the recently deployed DOE deca-petaflop systems, Titan (Cray XK7 CPU/GPGPU) and Mira (IBM Blue Gene/Q). The efficiency gains through improved algorithms and architecture-specific tuning and, most importantly, the vast increase in computing power have opened up opportunities to apply QMC at unprecedented scale, accuracy, and time-to-solution. We present large-scale QMC simulations to study the energetics of layered materials where vdW interactions play critical roles. Collaboration supported through the Predictive Theory and Modeling for Materials and Chemical Science program by Basic Energy Sciences, Department of Energy.
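    The Metropolis sampling pattern underlying such QMC runs can be shown on a toy problem. Below is a variational Monte Carlo sketch for the 1D harmonic oscillator (not QMCPACK; the problem, trial function, and parameters are textbook choices). With trial wavefunction psi = exp(-a x^2), the local energy is E_L(x) = a + x^2 (1/2 - 2 a^2), which is exactly 0.5 at the optimal a = 0.5.

```python
import numpy as np

def vmc_energy(a, steps=200_000, step_size=1.0, seed=0):
    """Metropolis-sampled <E_L> for the 1D oscillator (hbar = m = omega = 1)."""
    rng = np.random.default_rng(seed)
    x, e_sum = 0.0, 0.0
    for _ in range(steps):
        x_new = x + step_size * rng.uniform(-1, 1)
        # Accept with probability min(1, |psi(x_new)|^2 / |psi(x)|^2).
        if rng.random() < np.exp(-2 * a * (x_new**2 - x**2)):
            x = x_new
        e_sum += a + x**2 * (0.5 - 2 * a**2)   # local energy accumulator
    return e_sum / steps

for a in (0.3, 0.5, 0.7):
    print(f"a = {a}: <E> ~ {vmc_energy(a):.4f}")   # minimum near a = 0.5
```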

  27. First-Principles Petascale Simulations for Predicting Deflagration to Detonation Transition in Hydrogen-Oxygen Mixtures

    SciTech Connect

    Khokhlov, Alexei; Austin, Joanna; Bacon, C.

    2015-03-02

    Hydrogen has emerged as an important fuel across a range of industries as a means of achieving energy independence and reducing emissions. Deflagration-to-detonation transition (DDT) and the resulting detonation waves in hydrogen-oxygen mixtures can have especially catastrophic consequences in a variety of industrial and energy-producing settings related to hydrogen. First-principles numerical simulations of flame acceleration and DDT are required for an in-depth understanding of the phenomena and to facilitate the design of safe hydrogen systems. The goals of this project were (1) to develop a first-principles petascale reactive-flow Navier-Stokes simulation code for predicting gaseous high-speed combustion and detonation (HSCD) phenomena and (2) to demonstrate the feasibility of first-principles simulations of rapid flame acceleration and DDT in a stoichiometric hydrogen-oxygen mixture (2H2 + O2). The goals of the project have been accomplished. We have developed a novel numerical simulation code, named HSCD, for performing first-principles direct numerical simulations of high-speed hydrogen combustion. We carried out a series of validating numerical simulations of inert and reactive shock reflection experiments in shock tubes. We then performed a pilot numerical simulation of flame acceleration in a long pipe. The simulation showed the transition of the rapidly accelerating flame into a detonation. The DDT simulations were performed using the BG/Q Mira at Argonne National Laboratory, currently the fourth-fastest supercomputer in the world.

  28. Petascale algorithms for reactor hydrodynamics.

    SciTech Connect

    Fischer, P.; Lottes, J.; Pointer, W. D.; Siegel, A.

    2008-01-01

    We describe recent algorithmic developments that have enabled large eddy simulations of reactor flows on up to P = 65,000 processors on the IBM BG/P at the Argonne Leadership Computing Facility. Petascale computing is expected to play a pivotal role in the design and analysis of next-generation nuclear reactors. Argonne's SHARP project is focused on advanced reactor simulation, with a current emphasis on modeling coupled neutronics and thermal-hydraulics (TH). The TH modeling comprises a hierarchy of computational fluid dynamics approaches ranging from detailed turbulence computations, using DNS (direct numerical simulation) and LES (large eddy simulation), to full-core analysis based on RANS (Reynolds-averaged Navier-Stokes) and subchannel models. Our initial study is focused on LES of sodium-cooled fast reactor cores. The aim is to leverage petascale platforms at DOE's Leadership Computing Facilities (LCFs) to provide detailed information about heat transfer within the core and to provide baseline data for less expensive RANS and subchannel models.

  29. Computational Science Research in Support of Petascale Electromagnetic Modeling

    SciTech Connect

    Lee, L.-Q.; Akcelik, V; Ge, L; Chen, S; Schussman, G; Candel, A; Li, Z; Xiao, L; Kabel, A; Uplenchwar, R; Ng, C; Ko, K; /SLAC

    2008-06-20

    Computational science research components were vital parts of the SciDAC-1 accelerator project and are continuing to play a critical role in the newly funded SciDAC-2 accelerator project, the Community Petascale Project for Accelerator Science and Simulation (ComPASS). Recent advances and achievements in the area of computational science research in support of petascale electromagnetic modeling for accelerator design analysis are presented. These include shape determination of superconducting RF cavities, a mesh-based multilevel preconditioner for solving highly indefinite linear systems, a moving window using h- or p-refinement for time-domain short-range wakefield calculations, and improved scalable application I/O.

  30. Petascale Simulations of the Morphology and the Molecular Interface of Bulk Heterojunctions

    SciTech Connect

    Carrillo, Jan-Michael Y.; Seibers, Zach; Kumar, Rajeev; Matheson, Michael A.; Ankner, John F.; Goswami, Monojoy; Bhaskaran-Nair, Kiran; Shelton, William A.; Sumpter, Bobby G.; Kilbey, S. Michael

    2016-07-14

    Understanding how additives interact and segregate within bulk heterojunction (BHJ) thin films is critical for exercising control over structure at multiple length scales and delivering improvements in photovoltaic performance. The morphological evolution of poly(3-hexylthiophene) (P3HT) and phenyl-C61-butyric acid methyl ester (PCBM) blends commensurate with the size of a BHJ thin film is examined using petascale coarse-grained molecular dynamics simulations. Comparing two-component and three-component systems containing short P3HT chains as additives during thermal annealing, we demonstrate that the short chains alter the morphology in apparently useful ways: they efficiently migrate to the P3HT/PCBM interface, increasing the P3HT domain size and interfacial area. Simulation results agree with depth profiles determined from neutron reflectometry measurements, which reveal PCBM enrichment near substrate and air interfaces but a decrease in that PCBM enrichment when a small amount of short P3HT chains is integrated into the BHJ blend. Atomistic simulations of the P3HT/PCBM blend interfaces show a non-monotonic dependence of the interfacial thickness on the number of repeat units in the oligomeric P3HT additive, and the thiophene rings orient parallel to the interfacial plane as they approach the PCBM domain. Using the nanoscale geometries of the P3HT oligomers, LUMO and HOMO energy levels calculated by density functional theory are found to be invariant across the donor/acceptor interface. Finally, these connections between additives, processing, and morphology at all length scales are generally useful for efforts to improve device performance.

  31. Petascale Simulations of the Morphology and the Molecular Interface of Bulk Heterojunctions

    DOE PAGES

    Carrillo, Jan-Michael Y.; Seibers, Zach; Kumar, Rajeev; ...

    2016-07-14

    Understanding how additives interact and segregate within bulk heterojunction (BHJ) thin films is critical for exercising control over structure at multiple length scales and delivering improvements in photovoltaic performance. The morphological evolution of poly(3-hexylthiophene) (P3HT) and phenyl-C61-butyric acid methyl ester (PCBM) blends commensurate with the size of a BHJ thin film is examined using petascale coarse-grained molecular dynamics simulations. Comparing two-component and three-component systems containing short P3HT chains as additives during thermal annealing, we demonstrate that the short chains alter the morphology in apparently useful ways: they efficiently migrate to the P3HT/PCBM interface, increasing the P3HT domain size and interfacial area. Simulation results agree with depth profiles determined from neutron reflectometry measurements, which reveal PCBM enrichment near substrate and air interfaces but a decrease in that PCBM enrichment when a small amount of short P3HT chains is integrated into the BHJ blend. Atomistic simulations of the P3HT/PCBM blend interfaces show a non-monotonic dependence of the interfacial thickness on the number of repeat units in the oligomeric P3HT additive, and the thiophene rings orient parallel to the interfacial plane as they approach the PCBM domain. Using the nanoscale geometries of the P3HT oligomers, LUMO and HOMO energy levels calculated by density functional theory are found to be invariant across the donor/acceptor interface. Finally, these connections between additives, processing, and morphology at all length scales are generally useful for efforts to improve device performance.

  32. Topology for Statistical Modeling of Petascale Data

    SciTech Connect

    Bennett, Janine Camille; Pebay, Philippe Pierre; Pascucci, Valerio; Levine, Joshua; Gyulassy, Attila; Rojas, Maurice

    2014-07-01

    This document presents current technical progress and dissemination of results for the Mathematics for Analysis of Petascale Data (MAPD) project titled "Topology for Statistical Modeling of Petascale Data", funded by the Office of Science Advanced Scientific Computing Research (ASCR) Applied Math program.

  33. Science and Engineering in the Petascale Era

    PubMed Central

    Dunning, Thom H.; Schulten, Klaus; Tromp, Jeroen; Ostriker, Jeremiah P.; Droegemeier, Kelvin; Xue, Ming; Fussell, Paul

    2011-01-01

    What breakthrough advances will petascale computing bring to various science and engineering fields? Experts in everything from astronomy to seismology envision the opportunities ahead and the impact they’ll have on advancing our understanding of the world. PMID:21998556

  34. Petascale Kinetic Simulations in Space Sciences: New Simulations and Data Discovery Techniques and Physics Results

    NASA Astrophysics Data System (ADS)

    Karimabadi, Homa

    2012-03-01

    Recent advances in simulation technology and hardware are enabling breakthrough science where many longstanding problems can now be addressed for the first time. In this talk, we focus on kinetic simulations of the Earth's magnetosphere and the magnetic reconnection process, which is the key mechanism that breaks the protective shield of the Earth's dipole field, allowing the solar wind to enter the Earth's magnetosphere. This leads to so-called space weather, in which storms on the Sun can affect space-borne and ground-based technological systems on Earth. The talk will consist of three parts: (a) an overview of a new multi-scale simulation technique where each computational grid is updated based on its own unique timestep, (b) a presentation of a new approach to data analysis that we refer to as Physics Mining, which entails combining data mining and computer vision algorithms with scientific visualization to extract physics from the resulting massive data sets, and (c) a presentation of several recent discoveries in studies of space plasmas, including the role of vortex formation and the resulting turbulence in magnetized plasmas.
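    The per-grid timestep idea in (a) can be sketched with an event queue: each zone advances with its own dt instead of everyone marching at the global minimum. (Illustrative only; the zone timesteps are invented and flux synchronization between neighbors is omitted.)

```python
import heapq

def async_advance(zone_dt, t_end):
    """Count updates when each zone advances with its own timestep."""
    heap = [(dt, z) for z, dt in enumerate(zone_dt)]  # (next update time, zone)
    heapq.heapify(heap)
    updates = [0] * len(zone_dt)
    while heap:
        t, z = heapq.heappop(heap)
        updates[z] += 1                     # advance zone z to time t
        if t + zone_dt[z] <= t_end:
            heapq.heappush(heap, (t + zone_dt[z], z))
    return updates

# One stiff zone (dt = 0.01) among coarse zones (dt = 0.1).
print(async_advance([0.1, 0.1, 0.01, 0.1], t_end=1.0))
# Global time stepping at dt = 0.01 would need 4 * 100 = 400 zone
# updates; the asynchronous scheme does ~130.
```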

  35. Foundational Tools for Petascale Computing

    SciTech Connect

    Miller, Barton

    2014-05-19

    The Paradyn project has a history of developing algorithms, techniques, and software that push the cutting edge of tool technology for high-end computing systems. Under this funding, we are working on a three-year agenda to make substantial new advances in support of new and emerging petascale systems. The overall goal of this work is to address the steady increase in complexity of these petascale systems. Our work covers two key areas: (1) the analysis, instrumentation, and control of binary programs, under the general framework of the Dyninst API toolkits; and (2) infrastructure for building tools and applications at extreme scale, under the general framework of the MRNet scalability framework. Note that work done under this funding is closely related to work done under a contemporaneous grant, "High-Performance Energy Applications and Systems", SC0004061/FG02-10ER25972, UW PRJ36WV.

  36. Topology for Statistical Modeling of Petascale Data

    SciTech Connect

    Pascucci, Valerio; Levine, Joshua; Gyulassy, Attila; Bremer, P. -T.

    2013-10-31

    Many commonly used algorithms for mathematical analysis do not scale well enough to accommodate the size or complexity of petascale data produced by computational simulations. The primary goal of this project is to develop new mathematical tools that address both the petascale size and uncertain nature of current data. At a high level, the approach of the entire team, involving all three institutions, is based on the complementary techniques of combinatorial topology and statistical modeling. In particular, we use combinatorial topology to filter out spurious data that would otherwise skew statistical modeling techniques, and we employ advanced algorithms from algebraic statistics to efficiently find globally optimal fits to statistical models. The overall technical contributions can be divided loosely into three categories: (1) advances in the field of combinatorial topology, (2) advances in statistical modeling, and (3) new integrated topological and statistical methods. Roughly speaking, the division of labor between our three groups (Sandia Labs in Livermore, Texas A&M in College Station, and U Utah in Salt Lake City) is as follows: the Sandia group focuses on statistical methods and their formulation in algebraic terms, and finds the application problems (and data sets) most relevant to this project; the Texas A&M group develops new algebraic geometry algorithms, in particular with fewnomial theory; and the Utah group develops new algorithms in computational topology via discrete Morse theory. However, we hasten to point out that our three groups stay in tight contact via videoconference every two weeks, so there is much synergy of ideas between the groups. The remainder of this document focuses on the contributions that had greater direct involvement from the team at the University of Utah in Salt Lake City.

  37. Petascale direct numerical simulation of blood flow on 200K cores and heterogeneous architectures

    SciTech Connect

    Sampath, Rahul S; Veerapaneni, Shravan; Biros, George; Zorin, Denis; Vuduc, Richard; Vetter, Jeffrey S; Moon, Logan; Malhotra, Dhairya; Shringarpure, Aashay; Rahimian, Abtin; Lashuk, Ilya; Chandramowlishwaran, Aparna

    2010-01-01

    We present a fast, petaflop-scalable algorithm for Stokesian particulate flows. Our goal is the direct simulation of blood, which we model as a mixture of a Stokesian fluid (plasma) and red blood cells (RBCs). Directly simulating blood is a challenging multiscale, multiphysics problem. We report simulations with up to 260 million deformable RBCs. The largest simulation amounts to 90 billion unknowns in space. In terms of the number of cells, we improve the state of the art by several orders of magnitude: the previous largest simulation, at the same physical fidelity as ours, resolved the flow of O(1,000-10,000) RBCs. Our approach has three distinct characteristics: (1) we faithfully represent the physics of RBCs by using nonlinear solid mechanics to capture the deformations of each cell; (2) we accurately resolve the long-range, N-body, hydrodynamic interactions between RBCs (which are caused by the surrounding plasma); and (3) we allow for highly non-uniform spatial distributions of RBCs. The new method has been implemented in the software library MOBO (for 'Moving Boundaries'). We designed MOBO to support parallelism at all levels, including inter-node distributed memory parallelism, intra-node shared memory parallelism, data parallelism (vectorization), and fine-grained multithreading for GPUs. We have implemented and optimized the majority of the computation kernels on both Intel/AMD x86 and NVIDIA Tesla/Fermi platforms for single and double floating-point precision. Overall, the code has scaled on 256 CPU-GPUs on the TeraGrid's Lincoln cluster and on 200,000 AMD cores of Oak Ridge National Laboratory's Jaguar PF system. In our largest simulation, we achieved 0.7 petaflop/s of sustained performance on Jaguar.

  38. Damaris: Addressing performance variability in data management for post-petascale simulations

    SciTech Connect

    Dorier, Matthieu; Antoniu, Gabriel; Cappello, Franck; Snir, Marc; Sisneros, Robert; Yildiz, Orcun; Ibrahim, Shadi; Peterka, Tom; Orf, Leigh

    2016-10-01

    With exascale computing on the horizon, reducing performance variability in data management tasks (storage, visualization, analysis, etc.) is becoming a key challenge in sustaining high performance. This variability significantly impacts overall application performance at scale and its predictability over time. In this article, we present Damaris, a system that leverages dedicated cores in multicore nodes to offload data management tasks, including I/O, data compression, scheduling of data movements, in situ analysis, and visualization. We evaluate Damaris with the CM1 atmospheric simulation and the Nek5000 computational fluid dynamics simulation on four platforms, including NICS's Kraken and NCSA's Blue Waters. Our results show that (1) Damaris fully hides the I/O variability as well as all I/O-related costs, thus making simulation performance predictable; (2) it increases the sustained write throughput by a factor of up to 15 compared with standard I/O approaches; (3) it allows almost perfect scalability of the simulation up to over 9,000 cores, as opposed to state-of-the-art approaches that fail to scale; and (4) it enables a seamless connection to the VisIt visualization software to perform in situ analysis and visualization in a way that impacts neither the performance of the simulation nor its variability. In addition, we extended our implementation of Damaris to also support the use of dedicated nodes and conducted a thorough comparison of the two approaches, dedicated cores and dedicated nodes, for I/O tasks with the aforementioned applications.
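    The dedicated-core approach reduces to a simple pattern: the compute loop hands each output off to a queue drained by a process that does nothing but write, so I/O jitter never reaches the timestep loop. A stripped-down sketch (not Damaris's implementation; the file format and workload are placeholders):

```python
import multiprocessing as mp
import time

def writer(queue):
    """Dedicated I/O process: drain the queue and write, nothing else."""
    while True:
        item = queue.get()
        if item is None:                  # sentinel: shut down
            break
        step, data = item
        with open(f"out_step{step:04d}.txt", "w") as f:  # stands in for HDF5 etc.
            f.write(",".join(map(str, data)))

def main():
    queue = mp.Queue(maxsize=8)           # bounded: applies backpressure if I/O lags
    io_proc = mp.Process(target=writer, args=(queue,))
    io_proc.start()
    for step in range(5):
        data = [step * 1.0] * 1000        # pretend simulation state
        queue.put((step, data))           # hand off; compute continues immediately
        time.sleep(0.01)                  # next timestep's "compute"
    queue.put(None)
    io_proc.join()

if __name__ == "__main__":
    main()
```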

  39. Damaris: Addressing performance variability in data management for post-petascale simulations

    DOE PAGES

    Dorier, Matthieu; Antoniu, Gabriel; Cappello, Franck; ...

    2016-10-01

    With exascale computing on the horizon, reducing performance variability in data management tasks (storage, visualization, analysis, etc.) is becoming a key challenge in sustaining high performance. This variability significantly impacts overall application performance at scale and its predictability over time. In this article, we present Damaris, a system that leverages dedicated cores in multicore nodes to offload data management tasks, including I/O, data compression, scheduling of data movements, in situ analysis, and visualization. We evaluate Damaris with the CM1 atmospheric simulation and the Nek5000 computational fluid dynamics simulation on four platforms, including NICS's Kraken and NCSA's Blue Waters. Our results show that (1) Damaris fully hides the I/O variability as well as all I/O-related costs, thus making simulation performance predictable; (2) it increases the sustained write throughput by a factor of up to 15 compared with standard I/O approaches; (3) it allows almost perfect scalability of the simulation up to over 9,000 cores, as opposed to state-of-the-art approaches that fail to scale; and (4) it enables a seamless connection to the VisIt visualization software to perform in situ analysis and visualization in a way that impacts neither the performance of the simulation nor its variability. In addition, we extended our implementation of Damaris to also support the use of dedicated nodes and conducted a thorough comparison of the two approaches, dedicated cores and dedicated nodes, for I/O tasks with the aforementioned applications.

  40. Scaling of Multimillion-Atom Biological Molecular Dynamics Simulation on a Petascale Supercomputer.

    PubMed

    Schulz, Roland; Lindner, Benjamin; Petridis, Loukas; Smith, Jeremy C

    2009-10-13

    A strategy is described for a fast all-atom molecular dynamics simulation of multimillion-atom biological systems on massively parallel supercomputers. The strategy is developed using benchmark systems of particular interest to bioenergy research, comprising models of cellulose and lignocellulosic biomass in an aqueous solution. The approach involves using the reaction field (RF) method for the computation of long-range electrostatic interactions, which permits efficient scaling on many thousands of cores. Although the range of applicability of the RF method for biomolecular systems remains to be demonstrated, for the benchmark systems the use of the RF produces molecular dipole moments, Kirkwood G factors, other structural properties, and mean-square fluctuations in excellent agreement with those obtained with the commonly used Particle Mesh Ewald method. With RF, three million- and five million-atom biological systems scale well up to ∼30k cores, producing ∼30 ns/day. Atomistic simulations of very large systems for time scales approaching the microsecond would therefore now appear to be within reach.
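    For reference, the reaction-field pair energy has a closed form that vanishes smoothly at the cutoff. A sketch in reduced units (q_i q_j / 4 pi eps0 = 1; the k_rf/c_rf expressions follow the standard RF formulation, e.g. as documented for GROMACS, and the parameter values are illustrative):

```python
import numpy as np

def rf_energy(r, r_c=1.2, eps_rf=78.0):
    """Reaction-field pair energy: beyond r_c the medium is a continuum eps_rf."""
    k_rf = (eps_rf - 1.0) / ((2.0 * eps_rf + 1.0) * r_c**3)
    c_rf = 1.0 / r_c + k_rf * r_c**2          # shift so the energy is 0 at r_c
    return np.where(r < r_c, 1.0 / r + k_rf * r**2 - c_rf, 0.0)

r = np.linspace(0.3, 1.2, 4)
print("plain Coulomb: ", 1.0 / r)
print("reaction field:", rf_energy(r))        # goes smoothly to 0 at r_c
```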

  1. Performance of hybrid programming models for multiscale cardiac simulations: preparing for petascale computation.

    PubMed

    Pope, Bernard J; Fitch, Blake G; Pitman, Michael C; Rice, John J; Reumann, Matthias

    2011-10-01

    Future multiscale and multiphysics models that support research into human disease, translational medical science, and treatment can utilize the power of high-performance computing (HPC) systems. We anticipate that computationally efficient multiscale models will require the use of sophisticated hybrid programming models, mixing distributed message-passing processes [e.g., the message-passing interface (MPI)] with multithreading (e.g., OpenMP, Pthreads). The objective of this study is to compare the performance of such hybrid programming models when applied to the simulation of a realistic physiological multiscale model of the heart. Our results show that the hybrid models perform favorably when compared to an implementation using MPI alone and, furthermore, that OpenMP in combination with MPI provides a satisfactory compromise between performance and code complexity. Having the ability to use threads within MPI processes enables the sophisticated use of all processor cores for both computation and communication phases. Considering that HPC systems in 2012 will have two orders of magnitude more cores than what was used in this study, we believe that faster-than-real-time multiscale cardiac simulations can be achieved on these systems.
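
    A toy sketch of the hybrid pattern under comparison, with mpi4py as the message-passing layer and a Python thread pool standing in for OpenMP; the kernel, sizes, and thread count are invented. (NumPy kernels release the GIL, so the threads genuinely overlap.)

        from concurrent.futures import ThreadPoolExecutor
        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        NTHREADS = 4

        cells = np.random.rand(100_000)   # this rank's share of the model

        def relax(chunk):
            # stand-in for the per-cell electrophysiology kernel
            return np.sum(np.sqrt(chunk))

        # Compute phase: threads work on chunks of the local partition.
        with ThreadPoolExecutor(max_workers=NTHREADS) as pool:
            partial = sum(pool.map(relax, np.array_split(cells, NTHREADS)))

        # Communication phase: one MPI reduction per rank, not per thread.
        total = comm.allreduce(partial, op=MPI.SUM)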

  2. The Petascale Data Storage Institute

    SciTech Connect

    Gibson, Garth; Long, Darrell; Honeyman, Peter; Grider, Gary; Kramer, William; Shalf, John; Roth, Philip; Felix, Evan; Ward, Lee

    2013-07-01

    Petascale computing infrastructures for scientific discovery make petascale demands on information storage capacity, performance, concurrency, reliability, availability, and manageability. The Petascale Data Storage Institute focuses on the data storage problems found in petascale scientific computing environments, with special attention to community issues such as interoperability, community buy-in, and shared tools. The Petascale Data Storage Institute is a collaboration between researchers at Carnegie Mellon University, National Energy Research Scientific Computing Center, Pacific Northwest National Laboratory, Oak Ridge National Laboratory, Sandia National Laboratory, Los Alamos National Laboratory, University of Michigan, and the University of California at Santa Cruz.

  3. Towards Optimal Petascale Simulations

    SciTech Connect

    Demmel, James W.

    2013-11-08

    Our goal in this project was to design scalable numerical algorithms needed by SciDAC applications that adapt to use evolving hardware resources as efficiently as possible. Our primary challenge is minimizing communication costs, where communication means moving data either between levels of a memory hierarchy (L1 cache to L2 cache to main memory etc.) or between processors over a network. Floating point rates are improving exponentially faster than bandwidth, which is improving exponentially faster than latency. So our goal is to minimize communication. We describe our progress in this area, both for direct and iterative linear algebra. In both areas we have (1) identified lower bounds on the amount of communication (measured both by the number of words moved and the number of messages) required to perform these algorithms, (2) analyzed existing algorithms, which by and large do not attain these lower bounds, and (3) identified or invented new algorithms that do attain them, and evaluated their speedups, which can be quite large.
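
    As a concrete instance of these bounds: classical O(n^3) matrix multiply with a fast memory of M words must move on the order of n^3/sqrt(M) words, and blocking with b x b tiles (b chosen near sqrt(M/3), so three tiles fit in fast memory) attains that bound. A schematic NumPy illustration, with sizes invented:

        # C = A @ B computed tile by tile: each b x b tile of A, B, and C is
        # (conceptually) resident in fast memory while it is reused b times,
        # cutting main-memory traffic from O(n^3) to O(n^3 / b) words.
        import numpy as np

        def blocked_matmul(A, B, b):
            n = A.shape[0]
            C = np.zeros((n, n))
            for i in range(0, n, b):
                for j in range(0, n, b):
                    for k in range(0, n, b):
                        C[i:i+b, j:j+b] += A[i:i+b, k:k+b] @ B[k:k+b, j:j+b]
            return C

        A = np.random.rand(256, 256)
        B = np.random.rand(256, 256)
        assert np.allclose(blocked_matmul(A, B, 64), A @ B)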

  4. The grand challenge of managing the petascale facility.

    SciTech Connect

    Aiken, R. J.; Mathematics and Computer Science

    2007-02-28

    This report is the result of a study of networks and how they may need to evolve to support petascale leadership computing and science. As Dr. Ray Orbach, director of the Department of Energy's Office of Science, says in the spring 2006 issue of SciDAC Review, 'One remarkable example of growth in unexpected directions has been in high-end computation'. In the same article Dr. Michael Strayer states, 'Moore's law suggests that before the end of the next cycle of SciDAC, we shall see petaflop computers'. Given the Office of Science's strong leadership and support for petascale computing and facilities, we should expect to see petaflop computers in operation in support of science before the end of the decade, and DOE/SC Advanced Scientific Computing Research programs are focused on making this a reality. This study took its lead from this strong focus on petascale computing and the networks required to support such facilities, but it grew to include almost all aspects of the DOE/SC petascale computational and experimental science facilities, all of which will face daunting challenges in managing and analyzing the voluminous amounts of data expected. In addition, trends indicate the increased coupling of unique experimental facilities with computational facilities, along with the integration of multidisciplinary datasets and high-end computing with data-intensive computing; and we can expect these trends to continue at the petascale level and beyond. Coupled with recent technology trends, they clearly indicate the need for including capability petascale storage, networks, and experiments, as well as collaboration tools and programming environments, as integral components of the Office of Science's petascale capability metafacility. The objective of this report is to recommend a new cross-cutting program to support the management of petascale science and infrastructure. The appendices of the report document current and projected DOE computation facilities, science

  5. Petascale Parallelization of the Gyrokinetic Toroidal Code

    SciTech Connect

    Ethier, Stephane; Adams, Mark; Carter, Jonathan; Oliker, Leonid

    2010-05-01

    The Gyrokinetic Toroidal Code (GTC) is a global, three-dimensional particle-in-cell application developed to study microturbulence in tokamak fusion devices. The global capability of GTC is unique, allowing researchers to systematically analyze important dynamics such as turbulence spreading. In this work we examine a new radial domain decomposition approach to allow scalability onto the latest generation of petascale systems. Extensive performance evaluation is conducted on three high performance computing systems: the IBM BG/P, the Cray XT4, and an Intel Xeon Cluster. Overall results show that the radial decomposition approach dramatically increases scalability, while reducing the memory footprint, allowing for fusion device simulations at an unprecedented scale. After a decade where high-end computing (HEC) was dominated by the rapid pace of improvements to processor frequencies, the performance of next-generation supercomputers is increasingly differentiated by varying interconnect designs and levels of integration. Understanding the tradeoffs of these system designs is a key step towards making effective petascale computing a reality. In this work, we examine a new parallelization scheme for the GTC micro-turbulence fusion application. Extensive scalability results and analysis are presented on three HEC systems: the IBM BlueGene/P (BG/P) at Argonne National Laboratory, the Cray XT4 at Lawrence Berkeley National Laboratory, and an Intel Xeon cluster at Lawrence Livermore National Laboratory. Overall results indicate that the new radial decomposition approach successfully attains unprecedented scalability to 131,072 BG/P cores by overcoming the memory limitations of the previous approach. The new version is well suited to utilize emerging petascale resources to access new regimes of physical phenomena.
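
    The radial decomposition idea admits a simple illustration: partition minor radius into annuli, one per rank, so particle and field storage shrink with the process count. The equal-width binning below is schematic, not GTC's actual scheme.

        import numpy as np

        nranks = 8
        # Annulus boundaries in normalized minor radius, one annulus per rank.
        r_edges = np.linspace(0.1, 1.0, nranks + 1)

        def owner_rank(r):
            """Rank owning the annulus that contains radial coordinate r."""
            return min(int(np.searchsorted(r_edges, r, side='right')) - 1,
                       nranks - 1)

        r_particles = np.random.uniform(0.1, 1.0, 10_000)
        counts = np.bincount([owner_rank(r) for r in r_particles],
                             minlength=nranks)
        # After each particle push, particles that crossed an annulus boundary
        # are shipped to the neighboring radial rank, keeping the per-rank
        # memory footprint bounded as the machine grows.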

  6. A scalable sparse eigensolver for petascale applications

    NASA Astrophysics Data System (ADS)

    Keceli, Murat; Zhang, Hong; Zapol, Peter; Dixon, David; Wagner, Albert

    2015-03-01

    Exploiting locality of chemical interactions and therefore sparsity is necessary to push the limits of quantum simulations beyond petascale. However, sparse numerical algorithms are known to have poor strong scaling. Here, we show that the shift-and-invert parallel spectral transformation (SIPs) method can scale up to two hundred thousand cores for density-functional-based tight-binding (DFTB) or semi-empirical molecular orbital (SEMO) applications. We demonstrated the robustness and scalability of the SIPs method on various kinds of systems including metallic carbon nanotubes, diamond crystals, and water clusters. We analyzed how the sparsity patterns and eigenvalue spectra of these different types of applications affect the computational performance of SIPs. The SIPs method enables us to perform simulations with more than five hundred thousand basis functions utilizing hundreds of thousands of cores. SIPs scales better in memory and computational time than dense eigensolvers, and it does not require fast interconnects.
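
    The shift-and-invert slicing behind SIPs can be sketched serially, with SciPy's sparse eigensolver standing in for the SLEPc-based solver; the matrix, shifts, and slice width here are invented.

        # Spectrum slicing: each slice runs an independent shift-invert solve
        # targeting eigenvalues nearest its own shift, which is why the slices
        # parallelize so well. Real SIPs chooses slice boundaries so that each
        # eigenvalue is computed exactly once.
        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import eigsh

        n = 2000
        H = sp.diags([np.full(n - 1, -1.0), np.linspace(0.0, 4.0, n),
                      np.full(n - 1, -1.0)], offsets=[-1, 0, 1], format='csc')

        spectrum = []
        for sigma in [0.5, 1.5, 2.5, 3.5]:   # one slice per process group
            vals = eigsh(H, k=10, sigma=sigma, which='LM',
                         return_eigenvectors=False)
            spectrum.extend(vals)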

  7. Petascale Supernova Simulation with CHIMERA

    SciTech Connect

    Messer, Bronson; Bruenn, S. W.; Blondin, J. M.; Mezzacappa, Anthony; Hix, William Raphael; Dirk, Charlotte

    2007-01-01

    CHIMERA is a multi-dimensional radiation hydrodynamics code designed to study core-collapse supernovae. The code is made up of three essentially independent parts: a hydrodynamics module, a nuclear burning module, and a neutrino transport solver combined within an operator-split approach. We describe some major algorithmic facets of the code and briefly discuss some recent results. The multi-physics nature of the problem, and the specific implementation of that physics in CHIMERA, provide a rather straightforward path to effective use of multi-core platforms in the near future.

  8. PETASCALE DATA STORAGE INSTITUTE (PDSI) Final Report

    SciTech Connect

    Gibson, Garth

    2012-11-26

    , and has a large team at EMC supporting and enhancing it. PLFS is open sourced with a BSD license on sourceforge. Post-PDSI funding comes from NNSA and industry sources. Moreover, PLFS has spun out half a dozen or more papers, partnered on research with multiple schools and vendors, and has projects to transparently 1) distribute metadata over independent metadata servers, 2) exploit drastically non-POSIX Hadoop storage for HPC POSIX applications, 3) compress checkpoints on the fly, 4) batch delayed writes for write speed, 5) compress read-back indexes and parallelize their redistribution, 6) double-buffer writes in NAND Flash storage to decouple host blocking during checkpoint from disk write time in the storage system, and 7) pack small files into a smaller number of bigger containers. There are two large-scale open source Linux software projects that PDSI significantly incubated, though neither was initiated in PDSI. These are 1) Ceph, a UCSC parallel object storage research project that has continued to be a vehicle for research and has become a released part of Linux, and 2) Parallel NFS (pNFS), a portion of the IETF's NFSv4.1 that brings the core data parallelism found in Lustre, PanFS, PVFS, and Ceph to the industry-standard NFS, with released code in Linux 3.0 and its vendor offerings, with products from NetApp, EMC, BlueArc and RedHat. Both are fundamentally supported and advanced by vendor companies now, but were critically transferred from research demonstration to viable product with funding, in part, from PDSI. At this point Lustre remains the primary path to scalable IO in exascale systems, but both Ceph and pNFS are viable alternatives with different fundamental advantages. Finally, research community building was a big success for PDSI. Through the HECFSIO workshops and the HECURA project with NSF, PDSI stimulated and helped to steer leveraged funding of over $25M. Through the Petascale (now Parallel) Data Storage Workshop series, www.pdsw.org, colocated with

  9. Advanced electromagnetic gun simulation

    NASA Astrophysics Data System (ADS)

    Brown, J. L.; George, E. B.; Lippert, J. R.; Balius, A. R.

    1986-11-01

    The architecture, software and application of a simulation system for evaluating electromagnetic gun (EMG) operability, maintainability, test data and performance tradeoffs are described. The system features a generic preprocessor designed for handling the large data rates necessary for EMG simulations. The preprocessor and postprocessor operate independently of the EMG simulation, which the user views through windows, selecting the areas of the simulation desired. The simulation considers a homopolar generator, busbars, pulse-shaping coils, the barrel, switches, and prime movers. In particular, account is taken of barrel loading by the magnetic field, Lorentz force and plasma pressure.

  10. Lightweight and Statistical Techniques for Petascale Debugging

    SciTech Connect

    Miller, Barton

    2014-06-30

    This project investigated novel techniques for debugging scientific applications on petascale architectures. In particular, we developed lightweight tools that narrow the problem space when bugs are encountered. We also developed techniques that either limit the number of tasks and the code regions to which a developer must apply a traditional debugger or that apply statistical techniques to provide direct suggestions of the location and type of error. We extended previous work on the Stack Trace Analysis Tool (STAT), which had already demonstrated scalability to over one hundred thousand MPI tasks. We also extended statistical techniques developed to isolate programming errors in widely used sequential or threaded applications in the Cooperative Bug Isolation (CBI) project to large-scale parallel applications. Overall, our research substantially improved productivity on petascale platforms through a tool set for debugging that complements existing commercial tools. Previously, Office of Science application developers relied either on primitive manual debugging techniques based on printf or on tools, such as TotalView, that do not scale beyond a few thousand processors. However, bugs often arise at scale, and substantial effort and computation cycles are wasted in either reproducing the problem in a smaller run that can be analyzed with the traditional tools or in repeated runs at scale that use the primitive techniques. New techniques that work at scale and automate the process of identifying the root cause of errors were needed. These techniques significantly reduced the time spent debugging petascale applications, thus leading to a greater overall amount of time for application scientists to pursue the scientific objectives for which the systems are purchased. We developed a new paradigm for debugging at scale: techniques that reduced the debugging scenario to a scale suitable for traditional debuggers, e.g., by narrowing the search for the root-cause analysis
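
    The equivalence-class idea behind STAT is simple to sketch: sample one call stack per task and merge identical stacks, so a huge job collapses into a few behavior classes. Traces below are toy tuples; STAT gathers real stacks through a debugger backend.

        from collections import defaultdict

        traces = {                 # rank -> sampled call chain (invented)
            0: ("main", "solve", "MPI_Allreduce"),
            1: ("main", "solve", "MPI_Allreduce"),
            2: ("main", "exchange_halo", "MPI_Wait"),  # the outlier to debug
            3: ("main", "solve", "MPI_Allreduce"),
        }

        classes = defaultdict(list)
        for rank, trace in traces.items():
            classes[trace].append(rank)

        # Small classes are usually where the bug is hiding.
        for trace, ranks in sorted(classes.items(), key=lambda kv: len(kv[1])):
            print(len(ranks), "task(s)", ranks, ":", " -> ".join(trace))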

  11. Multi-petascale highly efficient parallel supercomputer

    DOEpatents

    Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.; Blumrich, Matthias A.; Boyle, Peter; Brunheroto, Jose R.; Chen, Dong; Cher, Chen -Yong; Chiu, George L.; Christ, Norman; Coteus, Paul W.; Davis, Kristan D.; Dozsa, Gabor J.; Eichenberger, Alexandre E.; Eisley, Noel A.; Ellavsky, Matthew R.; Evans, Kahn C.; Fleischer, Bruce M.; Fox, Thomas W.; Gara, Alan; Giampapa, Mark E.; Gooding, Thomas M.; Gschwind, Michael K.; Gunnels, John A.; Hall, Shawn A.; Haring, Rudolf A.; Heidelberger, Philip; Inglett, Todd A.; Knudson, Brant L.; Kopcsay, Gerard V.; Kumar, Sameer; Mamidala, Amith R.; Marcella, James A.; Megerian, Mark G.; Miller, Douglas R.; Miller, Samuel J.; Muff, Adam J.; Mundy, Michael B.; O'Brien, John K.; O'Brien, Kathryn M.; Ohmacht, Martin; Parker, Jeffrey J.; Poole, Ruth J.; Ratterman, Joseph D.; Salapura, Valentina; Satterfield, David L.; Senger, Robert M.; Smith, Brian; Steinmacher-Burow, Burkhard; Stockdell, William M.; Stunkel, Craig B.; Sugavanam, Krishnan; Sugawara, Yutaka; Takken, Todd E.; Trager, Barry M.; Van Oosten, James L.; Wait, Charles D.; Walkup, Robert E.; Watson, Alfred T.; Wisniewski, Robert W.; Wu, Peng

    2015-07-14

    A Multi-Petascale Highly Efficient Parallel Supercomputer delivering 100 petaOPS-scale computing at decreased cost, power, and footprint, and allowing a maximum packaging density of processing nodes from an interconnect point of view. The Supercomputer exploits technological advances in VLSI that enable a computing model where many processors can be integrated into a single Application Specific Integrated Circuit (ASIC). Each ASIC computing node comprises a system-on-chip ASIC utilizing four or more processors integrated into one die, each having full access to all system resources, enabling adaptive partitioning of the processors to functions such as compute or messaging I/O on an application-by-application basis and, preferably, adaptive partitioning of functions in accordance with various algorithmic phases within an application; if I/O or other processors are underutilized, they can participate in computation or communication. Nodes are interconnected by a five-dimensional torus network with DMA that optimally maximizes the throughput of packet communications between nodes and minimizes latency.
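
    For orientation, addressing on such a five-dimensional torus is plain modular arithmetic: every node has ten nearest neighbors, one hop in each direction of each dimension, with wraparound. The extents below are invented, not the patented machine's.

        DIMS = (4, 4, 4, 4, 2)          # torus extent in each of 5 dimensions

        def neighbors(coord):
            """The 10 nearest neighbors of a node, with wraparound."""
            out = []
            for d, extent in enumerate(DIMS):
                for step in (-1, +1):
                    nbr = list(coord)
                    nbr[d] = (nbr[d] + step) % extent
                    out.append(tuple(nbr))
            return out

        assert len(neighbors((0, 0, 0, 0, 0))) == 10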

  12. Advanced concepts flight simulation facility.

    PubMed

    Chappell, S L; Sexton, G A

    1986-12-01

    The cockpit environment is changing rapidly. New technology allows airborne computerised information, flight automation and data transfer with the ground. By 1995, not only will the pilot's task have changed, but also the tools for doing that task. To provide knowledge and direction for these changes, the National Aeronautics and Space Administration (NASA) and the Lockheed-Georgia Company have completed three identical Advanced Concepts Flight Simulation Facilities. Many advanced features have been incorporated into the simulators, e.g., cathode ray tube (CRT) displays of flight and systems information operated via touch-screen or voice, print-outs of clearances, cockpit traffic displays, current databases containing navigational charts, weather and flight plan information, and fuel-efficient autopilot control from take-off to touchdown. More importantly, this cockpit is a versatile test bed for studying displays, controls, procedures and crew management in a full-mission context. The facility also has an air traffic control simulation, with radio and data communications, and an outside visual scene with variable weather conditions. These provide a veridical flight environment to accurately evaluate advanced concepts in flight stations.

  13. Petascale Diagnostic Assessment of the Global Portfolio Rainfall Space Missions' Ability to Support Flood Forecasting

    NASA Astrophysics Data System (ADS)

    Reed, P. M.; Chaney, N.; Herman, J. D.; Wood, E. F.; Ferringer, M. P.

    2015-12-01

    This research represents a multi-institutional collaboration between Cornell University, The Aerospace Corporation, and Princeton University that has completed a petascale diagnostic assessment of the current 10 satellite missions providing rainfall observations. Our diagnostic assessment has required four core tasks: (1) formally linking high-resolution astrodynamics design and coordination of space assets with their global hydrological impacts within a petascale "many-objective" global optimization framework; (2) developing a baseline diagnostic evaluation of a 1-degree resolution global implementation of the Variable Infiltration Capacity (VIC) model to establish the required satellite observation frequencies and coverage to maintain acceptable global flood forecasts; (3) evaluating the limitations and vulnerabilities of the full suite of current satellite precipitation missions including the recently approved Global Precipitation Measurement (GPM) mission; and (4) conceptualizing the next generation of space-based platforms for water cycle observation. Our team exploited over 100 million hours of computing access on the 700,000+ core Blue Waters machine to radically advance our ability to discover and visualize key system tradeoffs and sensitivities. This project represents, to our knowledge, the first attempt to develop a 10,000-member Monte Carlo global hydrologic simulation at one-degree resolution that characterizes the uncertain effects of changing the available frequencies of satellite precipitation on drought and flood forecasts. The simulation-optimization components of the work have set a theoretical baseline for the best possible frequencies and coverages for global precipitation given unlimited investment, broad international coordination in reconfiguring existing assets, and new satellite constellation design objectives informed directly by key global hydrologic forecasting requirements. Our research poses a step towards realizing the integrated

  14. Advanced simulation of digital filters

    NASA Astrophysics Data System (ADS)

    Doyle, G. S.

    1980-09-01

    An Advanced Simulation of Digital Filters has been implemented on the IBM 360/67 computer utilizing Tektronix hardware and software. The program package is appropriate for use by persons beginning their study of digital signal processing or for filter analysis. The ASDF programs provide the user with an interactive method by which filter pole and zero locations can be manipulated. Graphical output on both the Tektronix graphics screen and the Versatec plotter are provided to observe the effects of pole-zero movement.

  15. Exploring the connectome: petascale volume visualization of microscopy data streams.

    PubMed

    Beyer, Johanna; Hadwiger, Markus; Al-Awami, Ali; Jeong, Won-Ki; Kasthuri, Narayanan; Lichtman, Jeff W; Pfister, Hanspeter

    2013-01-01

    Recent advances in high-resolution microscopy let neuroscientists acquire neural-tissue volume data of extremely large sizes. However, the tremendous resolution and the high complexity of neural structures present big challenges to storage, processing, and visualization at interactive rates. A proposed system provides interactive exploration of petascale (petavoxel) volumes resulting from high-throughput electron microscopy data streams. The system can concurrently handle multiple volumes and can support the simultaneous visualization of high-resolution voxel segmentation data. Its visualization-driven design restricts most computations to a small subset of the data. It employs a multiresolution virtual-memory architecture for better scalability than previous approaches and for handling incomplete data. Researchers have employed it for a 1-teravoxel mouse cortex volume, of which several hundred axons and dendrites as well as synapses have been segmented and labeled.

  16. Final Project Report. Scalable fault tolerance runtime technology for petascale computers

    SciTech Connect

    Krishnamoorthy, Sriram; Sadayappan, P

    2015-06-16

    With the massive number of components comprising the forthcoming petascale computer systems, hardware failures will be routinely encountered during execution of large-scale applications. Due to the multidisciplinary, multiresolution, and multiscale nature of scientific problems that drive the demand for high-end systems, applications place increasingly differing demands on the system resources: disk, network, memory, and CPU. In addition to MPI, future applications are expected to use advanced programming models such as those developed under the DARPA HPCS program as well as existing global address space programming models such as Global Arrays, UPC, and Co-Array Fortran. While there has been a considerable amount of work in fault-tolerant MPI, with a number of strategies and extensions for fault tolerance proposed, virtually none of the advanced models proposed for emerging petascale systems is currently fault aware. To achieve fault tolerance, development of underlying runtime and OS technologies able to scale to the petascale level is needed. This project evaluated a range of runtime techniques for fault tolerance for advanced programming models.

  17. Working Towards New Transformative Geoscience Analytics Enabled by Petascale Computing

    NASA Astrophysics Data System (ADS)

    Woodcock, R.; Wyborn, L.

    2012-04-01

    Currently the top 10 supercomputers in the world are petascale, and exascale computers are already being planned. Cloud computing facilities are becoming mainstream as either private or commercial investments. These computational developments will provide abundant opportunities for the earth science community to tackle the data deluge that has resulted from new instrumentation enabling data to be gathered at a greater rate and at higher resolution. Combined, the new computational environments should enable the earth sciences to be transformed. However, experience in Australia and elsewhere has shown that it is not easy to scale existing earth science methods, software and analytics to take advantage of the increased computational capacity that is now available. It is not simply a matter of 'transferring' current work practices to the new facilities: they have to be extensively 'transformed'. In particular, new geoscientific methods will need to be developed using advanced data mining, assimilation, machine learning and integration algorithms. Software will have to be capable of operating in highly parallelised environments, and will also need to be able to scale as the compute systems grow. Data access will have to improve, and the earth science community needs to move from the file-discovery, display and local-download paradigm to self-describing data cubes and data arrays that are available as online resources from either major data repositories or the cloud. In the new transformed world, rather than analysing satellite data scene by scene, sensor-agnostic data cubes of calibrated earth observation data will enable researchers to move across data from multiple sensors at varying spatial data resolutions. In using geophysics to characterise basement and cover, rather than analysing individual gridded airborne geophysical data sets and then combining the results, petascale computing will enable analysis of multiple data types, collected at varying

  18. PreDatA - Preparatory Data Analytics on Peta-Scale Machines

    SciTech Connect

    Zheng, Fang; Abbasi, H.; Docan, Ciprian; Lofstead, J.; Klasky, Scott A; Parashar, Manish; Podhorszki, Norbert; Schwan, Karsten; Wolf, Matthew D; Liu, Gary

    2010-01-01

    Peta-scale scientific applications running on High End Computing (HEC) platforms can generate large volumes of data. For high performance storage and in order to be useful to science end users, such data must be organized in its layout, indexed, sorted, and otherwise manipulated for subsequent data presentation, visualization, and detailed analysis. In addition, scientists desire to gain insights into selected data characteristics 'hidden' or 'latent' in the massive datasets while data is being produced by simulations. PreDatA, short for Preparatory Data Analytics, is an approach for preparing and characterizing data while it is being produced by the large scale simulations running on peta-scale machines. By dedicating additional compute nodes on the peta-scale machine as staging nodes and staging simulation's output data through these nodes, PreDatA can exploit their computational power to perform selected data manipulations with lower latency than attainable by first moving data into file systems and storage. Such in-transit manipulations are supported by the PreDatA middleware through RDMA-based data movement to reduce write latency, application-specific operations on streaming data that are able to discover latent data characteristics, and appropriate data reorganization and metadata annotation to speed up subsequent data access. As a result, PreDatA enhances the scalability and flexibility of current I/O stack on HEC platforms and is useful for data pre-processing, runtime data analysis and inspection, as well as for data exchange between concurrently running simulation models. Performance evaluations with several production peta-scale applications on Oak Ridge National Laboratory's Leadership Computing Facility demonstrate the feasibility and advantages of the PreDatA approach.
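
    A schematic of the staging pattern, assuming mpi4py: a few ranks act as staging nodes, compute ranks stream output through them, and the staging ranks characterize the data in transit (here, a running histogram) before it reaches storage. The ratio, tag, and analytics are invented, not the PreDatA middleware.

        from mpi4py import MPI
        import numpy as np

        world = MPI.COMM_WORLD
        NSTAGING = max(1, world.size // 16)     # e.g. 1 staging rank per 16

        if world.rank < NSTAGING:               # staging node
            senders = [r for r in range(NSTAGING, world.size)
                       if r % NSTAGING == world.rank]
            hist = np.zeros(32)
            for _ in senders:
                chunk = world.recv(source=MPI.ANY_SOURCE, tag=3)
                # In-transit analytics on streaming data, then reorganize,
                # annotate, and write (elided).
                hist += np.histogram(chunk, bins=32, range=(0, 1))[0]
        else:                                   # compute node
            data = np.random.rand(1 << 16)      # stand-in simulation output
            world.send(data, dest=world.rank % NSTAGING, tag=3)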

  19. Lightweight and Statistical Techniques for Petascale Debugging: Correctness on Petascale Systems (CoPS) Preliminary Report

    SciTech Connect

    de Supinski, B R; Miller, B P; Liblit, B

    2011-09-13

    Petascale platforms with O(10^5) and O(10^6) processing cores are driving advancements in a wide range of scientific disciplines. These large systems create unprecedented application development challenges. Scalable correctness tools are critical to shorten the time-to-solution on these systems. Currently, many DOE application developers use primitive manual debugging based on printf or traditional debuggers such as TotalView or DDT. This paradigm breaks down beyond a few thousand cores, yet bugs often arise above that scale. Programmers must reproduce problems in smaller runs to analyze them with traditional tools, or else perform repeated runs at scale using only primitive techniques. Even when traditional tools run at scale, the approach wastes substantial effort and computation cycles. Continued scientific progress demands new paradigms for debugging large-scale applications. The Correctness on Petascale Systems (CoPS) project is developing a revolutionary debugging scheme that will reduce the debugging problem to a scale that human developers can comprehend. The scheme can provide precise diagnoses of the root causes of failure, including suggestions of the location and the type of errors down to the level of code regions or even a single execution point. Our fundamentally new strategy combines and expands three relatively new complementary debugging approaches. The Stack Trace Analysis Tool (STAT), a 2011 R&D 100 Award Winner, identifies behavior equivalence classes in MPI jobs and highlights behavior when elements of the class demonstrate divergent behavior, often the first indicator of an error. The Cooperative Bug Isolation (CBI) project has developed statistical techniques for isolating programming errors in widely deployed code that we will adapt to large-scale parallel applications. Finally, we are developing a new approach to parallelizing expensive correctness analyses, such as analysis of memory usage in the Memgrind tool. In the first two

  20. Advanced Space Shuttle simulation model

    NASA Technical Reports Server (NTRS)

    Tatom, F. B.; Smith, S. R.

    1982-01-01

    A non-recursive model (based on von Karman spectra) for atmospheric turbulence along the flight path of the shuttle orbiter was developed. It provides for simulation of instantaneous vertical and horizontal gusts at the vehicle center-of-gravity, and also for simulation of instantaneous gust gradients. Based on this model, the time series for both gusts and gust gradients were generated and stored on a series of magnetic tapes, entitled Shuttle Simulation Turbulence Tapes (SSTT). The time series are designed to represent atmospheric turbulence from ground level to an altitude of 120,000 meters. A description of the turbulence generation procedure is provided. The results of validating the simulated turbulence are described. Conclusions and recommendations are presented. One-dimensional von Karman spectra are tabulated, and a discussion of the minimum frequency simulated is provided. The results of spectral and statistical analyses of the SSTT are presented.

  1. Quantum Monte Carlo Endstation for Petascale Computing

    SciTech Connect

    Lubos Mitas

    2011-01-26

    The NCSU research group has focused on accomplishing the key goals of this initiative: establishing a new generation of quantum Monte Carlo (QMC) computational tools as part of the Endstation petaflop initiative for use at the DOE ORNL computational facilities and by the computational electronic structure community at large; carrying out high-accuracy quantum Monte Carlo demonstration projects applying these tools to forefront electronic structure problems in molecular and solid systems; expanding the impact of QMC methods and approaches; and explaining and enhancing the impact of these advanced computational approaches. In particular, we have developed the quantum Monte Carlo code QWalk (www.qwalk.org), which was significantly expanded and optimized using funds from this support and at present is an actively used tool in the petascale regime by ORNL researchers and beyond. These developments build upon efforts undertaken by the PI's group and collaborators over the last decade. The code was optimized and tested extensively on a number of parallel architectures including the petaflop ORNL Jaguar machine. We have developed and redesigned a number of code modules such as evaluation of wave functions and orbitals, calculation of pfaffians, and introduction of backflow coordinates, together with the overall organization of the code and random-walker distribution over multicore architectures. We have addressed several bottlenecks such as load balancing and verified the efficiency and accuracy of the calculations with the other groups of the Endstation team. The QWalk package contains about 50,000 lines of high-quality object-oriented C++ and includes interfaces to data files from other conventional electronic structure codes such as Gamess, Gaussian, Crystal and others. This grant supported the PI for one month during summers, a full-time postdoc, and partially three graduate students over the period of the grant duration; it has resulted in 13

  2. Advanced Vadose Zone Simulations Using TOUGH

    SciTech Connect

    Finsterle, S.; Doughty, C.; Kowalsky, M.B.; Moridis, G.J.; Pan,L.; Xu, T.; Zhang, Y.; Pruess, K.

    2007-02-01

    The vadose zone can be characterized as a complex subsurface system in which intricate physical and biogeochemical processes occur in response to a variety of natural forcings and human activities. This makes it difficult to describe, understand, and predict the behavior of this specific subsurface system. The TOUGH nonisothermal multiphase flow simulators are well-suited to perform advanced vadose zone studies. The conceptual models underlying the TOUGH simulators are capable of representing features specific to the vadose zone, and of addressing a variety of coupled phenomena. Moreover, the simulators are integrated into software tools that enable advanced data analysis, optimization, and system-level modeling. We discuss fundamental and computational challenges in simulating vadose zone processes, review recent advances in modeling such systems, and demonstrate some capabilities of the TOUGH suite of codes using illustrative examples.

  3. Advances in atomic oxygen simulation

    NASA Technical Reports Server (NTRS)

    Froechtenigt, Joseph F.; Bareiss, Lyle E.

    1990-01-01

    Atomic oxygen (AO) present in the atmosphere at orbital altitudes of 200 to 700 km has been shown to degrade various exposed materials on Shuttle flights. The relative velocity of the AO with the spacecraft, together with the AO density, combine to yield an environment consisting of a 5 eV beam energy with a flux of 10^14 to 10^15 oxygen atoms/sq cm/s. An AO ion beam apparatus that produces flux levels and energy similar to those encountered by spacecraft in low Earth orbit (LEO) has been in existence since 1987. Test data were obtained from the interaction of the AO ion beam with materials used in space applications (carbon, silver, Kapton) and with several special coatings of interest deposited on various surfaces. The ultimate design goal of the AO beam simulation device is to produce neutral AO at sufficient flux levels to replicate on-orbit conditions. A newly acquired mass spectrometer with energy discrimination has allowed 5 eV neutral oxygen atoms to be separated and detected from the background of thermal oxygen atoms of approximately 0.2 eV. Neutralization of the AO ion beam at 5 eV was shown at the Martin Marietta AO facility.

  4. Center for Advanced Modeling and Simulation Intern

    SciTech Connect

    Gertman, Vanessa

    2010-01-01

    Some interns just copy papers and seal envelopes. Not at INL! Check out how Vanessa Gertman, an INL intern working at the Center for Advanced Modeling and Simulation, spent her summer working with some intense visualization software. Lots more content like this is available at INL's facebook page http://www.facebook.com/idahonationallaboratory.

  5. Center for Advanced Modeling and Simulation Intern

    ScienceCinema

    Gertman, Vanessa

    2016-07-12

    Some interns just copy papers and seal envelopes. Not at INL! Check out how Vanessa Gertman, an INL intern working at the Center for Advanced Modeling and Simulation, spent her summer working with some intense visualization software. Lots more content like this is available at INL's facebook page http://www.facebook.com/idahonationallaboratory.

  6. DNS/LES of Complex Turbulent Flows beyond Petascale

    NASA Astrophysics Data System (ADS)

    Fischer, Paul

    2014-11-01

    Petascale computing platforms currently feature million-way parallelism and it is anticipated that exascale computers with billion-way concurrency will be deployed by 2020. In this talk, we explore the potential of computing at these scales with a focus on turbulent fluid flow and heat transfer in a variety of applications including nuclear energy, combustion, oceanography, vascular flows, and astrophysics. Following Kreiss and Oliger '72, we argue that high-order methods are essential for scalable simulation of transport phenomena. We demonstrate that these methods can be realized at costs equivalent to those of low-order methods having the same number of gridpoints. We further show that, with care, efficient multilevel solvers having bounded iteration counts will scale to billion-way concurrency. Using data from leading-edge platforms over the past 25 years, we analyze the scalability of state-of-the-art solvers to predict parallel performance on exascale architectures. The analysis sheds light on the expected scope of exascale physics simulations and provides insight to design requirements for future algorithms, codes, and architectures. Supported by DOE Applied Mathematics Research Program.
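
    One concrete reason high-order methods can match low-order cost per gridpoint, as argued here, is matrix-free tensor-product evaluation: on a p^3-point spectral element, a derivative is three small contractions costing O(p^4), not an assembled p^3-by-p^3 matrix costing O(p^6). A sketch on one element, with a random stand-in for the 1-D differentiation matrix:

        import numpy as np

        p = 8                           # points per direction in one element
        D = np.random.rand(p, p)        # 1-D differentiation matrix (stand-in)
        u = np.random.rand(p, p, p)     # field on one element

        # d/dx, d/dy, d/dz as tensor contractions -- three O(p^4) operations.
        ux = np.einsum('ai,ijk->ajk', D, u)
        uy = np.einsum('bj,ijk->ibk', D, u)
        uz = np.einsum('ck,ijk->ijc', D, u)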

  7. Quantum Monte Carlo Endstation for Petascale Computing

    SciTech Connect

    David Ceperley

    2011-03-02

    The major achievements enabled by the QMC Endstation grant include * Performance improvement on clusters of x86 multi-core systems, especially on Cray XT systems * New and improved methods for wavefunction optimization * New forms of trial wavefunctions * Implementation of the full application on NVIDIA GPUs using CUDA. The scaling studies of QMCPACK on large-scale systems show excellent parallel efficiency up to 216K cores on Jaguarpf (Cray XT5). The GPU implementation shows speedups of 10-15x over the CPU implementation on an older generation of x86. We have implemented a hybrid OpenMP/MPI scheme in QMC to take advantage of the multi-core shared-memory processors of petascale systems. Our hybrid scheme has several advantages over the standard MPI-only scheme. * Memory optimized: large read-only data to store one-body orbitals and other shared properties to represent the trial wave function and many-body Hamiltonian can be shared among threads, which reduces the memory footprint of a large-scale problem. * Cache optimized: the data associated with an active Walker are in cache during the compute-intensive drift-diffusion process and the operations on a Walker are optimized for cache reuse. Thread-local objects are used to ensure the data affinity to a thread. * Load balanced: Walkers in an ensemble are evenly distributed among threads and MPI tasks. The two-level parallelism reduces the population imbalance among MPI tasks and reduces the number of point-to-point communications of large messages (serialized objects) for the Walker exchange. * Communication optimized: the communication overhead, especially for the collective operations necessary to determine E_T and measure the properties of an ensemble, is significantly lowered by using fewer MPI tasks. The multiple forms of parallelism afforded by QMC algorithms make them ideal candidates for acceleration in the many-core paradigm. We presented the results of our effort to port the QMCPACK simulation code to the NVIDIA

  8. Petascale Computing Enabling Technologies Project Final Report

    SciTech Connect

    de Supinski, B R

    2010-02-14

    The Petascale Computing Enabling Technologies (PCET) project addressed challenges arising from current trends in computer architecture that will lead to large-scale systems with many more nodes, each of which uses multicore chips. These factors will soon lead to systems that have over one million processors. Also, the use of multicore chips will lead to less memory and less memory bandwidth per core. We need fundamentally new algorithmic approaches to cope with these memory constraints and the huge number of processors. Further, correct, efficient code development is difficult even with the number of processors in current systems; more processors will only make it harder. The goal of PCET was to overcome these challenges by developing the computer science and mathematical underpinnings needed to realize the full potential of our future large-scale systems. Our research results will significantly increase the scientific output obtained from LLNL large-scale computing resources by improving application scientist productivity and system utilization. Our successes include scalable mathematical algorithms that adapt to these emerging architecture trends, code correctness and performance methodologies that automate critical aspects of application development, and the foundations for application-level fault tolerance techniques. PCET's scope encompassed several research thrusts in computer science and mathematics: code correctness and performance methodologies, scalable mathematics algorithms appropriate for multicore systems, and application-level fault tolerance techniques. Due to funding limitations, we focused primarily on the first three thrusts although our work also lays the foundation for the needed advances in fault tolerance. In the area of scalable mathematics algorithms, our preliminary work established that OpenMP performance of the AMG linear solver benchmark and important individual kernels on Atlas did not match the predictions of our

  9. PSH3D fast Poisson solver for petascale DNS

    NASA Astrophysics Data System (ADS)

    Adams, Darren; Dodd, Michael; Ferrante, Antonino

    2016-11-01

    Direct numerical simulation (DNS) of high-Reynolds-number turbulent flows, Re >= O(10^5), requires computational meshes of >= O(10^12) grid points and, thus, the use of petascale supercomputers. DNS often requires the solution of a Helmholtz (or Poisson) equation for pressure, which constitutes the bottleneck of the solver. We have developed a parallel solver of the Helmholtz equation in 3D, PSH3D. The numerical method underlying PSH3D combines a parallel 2D fast Fourier transform in two spatial directions and a parallel linear solver in the third direction. For computational meshes up to 8192^3 grid points, our numerical results show that PSH3D scales up to at least 262k cores of Cray XT5 (Blue Waters). PSH3D has a peak performance 6x faster than 3D FFT-based methods when used with the 'partial-global' optimization, and for an 8192^3 mesh it solves the Poisson equation in 1 second using 128k cores. Also, we have verified that the use of PSH3D with the 'partial-global' optimization in our DNS solver does not reduce the accuracy of the numerical solution of the incompressible Navier-Stokes equations.
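
    The structure of such a solver is easy to show serially: a 2D FFT in the two periodic directions reduces the 3D Poisson equation to an independent tridiagonal solve in z for every (kx, ky) pair. A NumPy/SciPy sketch with an invented grid and a simple Dirichlet treatment in z follows; PSH3D's contribution, distributing these steps across ~10^5 cores, is not attempted here.

        import numpy as np
        from scipy.linalg import solve_banded

        n, L = 64, 2 * np.pi
        f = np.random.rand(n, n, n)               # right-hand side
        fh = np.fft.fftn(f, axes=(0, 1))          # 2-D FFT in x and y
        k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
        dz = L / n

        u_hat = np.zeros_like(fh)
        ab = np.zeros((3, n), dtype=complex)      # banded (tridiagonal) form
        for i in range(n):
            for j in range(n):
                # (d^2/dz^2 - kx^2 - ky^2) u_hat = f_hat: one line solve
                ab[0, 1:] = 1.0 / dz**2           # superdiagonal
                ab[1, :] = -2.0 / dz**2 - k[i]**2 - k[j]**2
                ab[2, :-1] = 1.0 / dz**2          # subdiagonal
                u_hat[i, j, :] = solve_banded((1, 1), ab, fh[i, j, :])

        u = np.real(np.fft.ifftn(u_hat, axes=(0, 1)))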

  10. Advanced Civil Transport Simulator Cockpit View

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The Advanced Civil Transport Simulator (ACTS) is a futuristic aircraft cockpit simulator designed to provide full-mission capabilities for researching issues that will affect future transport aircraft flight stations and crews. The objective is to heighten the pilot's situation awareness through improved information availability and ease of interpretation in order to reduce the possibility of misinterpreted data. The simulator's five 13-inch cathode ray tubes are designed to display flight information in a logical, easy-to-see format. Two color flat-panel Control Display Units with touch-sensitive screens provide monitoring and modification of aircraft parameters, flight plans, flight computers, and aircraft position. Three collimated visual display units have been installed to provide out-the-window scenes via the Computer Generated Image system. The major research objectives are to examine needs for transfer of information to and from the flight crew; study the use of advanced controls and displays for all-weather flying; explore ideas for using computers to help the crew in decision making; and study visual scanning and reach behavior under different conditions with various levels of automation and flight-deck arrangements.

  11. Onyx-Advanced Aeropropulsion Simulation Framework Created

    NASA Technical Reports Server (NTRS)

    Reed, John A.

    2001-01-01

    The Numerical Propulsion System Simulation (NPSS) project at the NASA Glenn Research Center is developing a new software environment for analyzing and designing aircraft engines and, eventually, space transportation systems. Its purpose is to dramatically reduce the time, effort, and expense necessary to design and test jet engines by creating sophisticated computer simulations of an aerospace object or system (refs. 1 and 2). Through a university grant as part of that effort, researchers at the University of Toledo have developed Onyx, an extensible Java-based (Sun Microsystems, Inc.) object-oriented simulation framework, to investigate how advanced software design techniques can be successfully applied to aeropropulsion system simulation (refs. 3 and 4). The design of Onyx's architecture enables users to customize and extend the framework to add new functionality or adapt simulation behavior as required. It exploits object-oriented technologies, such as design patterns, domain frameworks, and software components, to develop a modular system in which users can dynamically replace components with others having different functionality.

  12. Petascale visual data analysis in a production computing environment

    NASA Astrophysics Data System (ADS)

    Ahern, Sean

    2007-07-01

    Supporting the visualization and analysis needs of the users of the Department of Energy's premiere high-performance computing centers requires a careful engineering of software and hardware system architectures to provide maximum capability and algorithmic breadth. Data set growth follows an inverse power law that has implications for the platforms that are deployed for analysis and visualization; central storage and coupled analysis platforms are critical for petascale post-production. Software architectures like VisIt - which exploit parallel platforms, as well as provide remote capability, extensibility, and optimization - are fruitful ground for delivering new analysis capabilities for petascale applications. Finally, direct interaction with customers is key to deploying successful results.

  13. Simulation methods for advanced scientific computing

    SciTech Connect

    Booth, T.E.; Carlson, J.A.; Forster, R.A.

    1998-11-01

    This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The objective of the project was to create effective new algorithms for solving N-body problems by computer simulation. The authors concentrated on developing advanced classical and quantum Monte Carlo techniques. For simulations of phase transitions in classical systems, they produced a framework generalizing the famous Swendsen-Wang cluster algorithms for Ising and Potts models. For spin-glass-like problems, they demonstrated the effectiveness of an extension of the multicanonical method for the two-dimensional, random bond Ising model. For quantum mechanical systems, they generated a new method to compute the ground-state energy of systems of interacting electrons. They also improved methods to compute excited states when the diffusion quantum Monte Carlo method is used and to compute longer time dynamics when the stationary phase quantum Monte Carlo method is used.
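
    The Swendsen-Wang move that the framework generalizes has a short standard form; a hedged sketch for the plain 2-D Ising model follows (open boundaries, invented parameters): bonds between aligned spins are activated with probability 1 - exp(-2*beta*J), connected clusters are identified, and each cluster is flipped with probability 1/2.

        import numpy as np
        from scipy.sparse import coo_matrix
        from scipy.sparse.csgraph import connected_components

        def swendsen_wang_step(spins, beta, J=1.0, rng=np.random.default_rng()):
            L = spins.shape[0]
            idx = np.arange(L * L).reshape(L, L)
            p_bond = 1.0 - np.exp(-2.0 * beta * J)
            rows, cols = [], []
            # Horizontal and vertical neighbor pairs (open boundaries).
            for a, b in [(idx[:, :-1], idx[:, 1:]), (idx[:-1, :], idx[1:, :])]:
                aligned = spins.ravel()[a.ravel()] == spins.ravel()[b.ravel()]
                active = aligned & (rng.random(a.size) < p_bond)
                rows.extend(a.ravel()[active])
                cols.extend(b.ravel()[active])
            graph = coo_matrix((np.ones(len(rows)), (rows, cols)),
                               shape=(L * L, L * L))
            n_clusters, labels = connected_components(graph, directed=False)
            flip = rng.random(n_clusters) < 0.5   # flip each cluster at random
            return np.where(flip[labels].reshape(L, L), -spins, spins)

        spins = np.random.choice([-1, 1], size=(32, 32))
        spins = swendsen_wang_step(spins, beta=0.44)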

  14. 14 CFR Appendix H to Part 121 - Advanced Simulation

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... simulator instructors and check airmen must include training policies and procedures, instruction methods... Simulation This appendix provides guidelines and a means for achieving flightcrew training in advanced... simulator, as appropriate. Advanced Simulation Training Program For an operator to conduct Level C or...

  15. 14 CFR Appendix H to Part 121 - Advanced Simulation

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... simulator instructors and check airmen must include training policies and procedures, instruction methods... Simulation This appendix provides guidelines and a means for achieving flightcrew training in advanced... simulator, as appropriate. Advanced Simulation Training Program For an operator to conduct Level C or...

  16. 14 CFR Appendix H to Part 121 - Advanced Simulation

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... simulator instructors and check airmen must include training policies and procedures, instruction methods... Simulation This appendix provides guidelines and a means for achieving flightcrew training in advanced... simulator, as appropriate. Advanced Simulation Training Program For an operator to conduct Level C or...

  17. 14 CFR Appendix H to Part 121 - Advanced Simulation

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... simulator instructors and check airmen must include training policies and procedures, instruction methods... Simulation This appendix provides guidelines and a means for achieving flightcrew training in advanced... simulator, as appropriate. Advanced Simulation Training Program For an operator to conduct Level C or...

  18. Software Framework for Advanced Power Plant Simulations

    SciTech Connect

    John Widmann; Sorin Munteanu; Aseem Jain; Pankaj Gupta; Mark Moales; Erik Ferguson; Lewis Collins; David Sloan; Woodrow Fiveland; Yi-dong Lang; Larry Biegler; Michael Locke; Simon Lingard; Jay Yun

    2010-08-01

    This report summarizes the work accomplished during the Phase II development effort of the Advanced Process Engineering Co-Simulator (APECS). The objective of the project is to develop the tools to efficiently combine high-fidelity computational fluid dynamics (CFD) models with process modeling software. During the course of the project, a robust integration controller was developed that can be used in any CAPE-OPEN compliant process modeling environment. The controller mediates the exchange of information between the process modeling software and the CFD software. Several approaches to reducing the time disparity between CFD simulations and process modeling have been investigated and implemented. These include enabling the CFD models to be run on a remote cluster and enabling multiple CFD models to be run simultaneously. Furthermore, computationally fast reduced-order models (ROMs) have been developed that can be 'trained' using the results from CFD simulations and then used directly within flowsheets. Unit operation models (both CFD and ROMs) can be uploaded to a model database and shared between multiple users.

  19. Recent advances in superconducting-mixer simulations

    NASA Technical Reports Server (NTRS)

    Withington, S.; Kennedy, P. R.

    1992-01-01

    Over the last few years, considerable progress has been made in the development of techniques for fabricating high-quality superconducting circuits, and this success, together with major advances in the theoretical understanding of quantum detection and mixing at millimeter and submillimeter wavelengths, has made the development of CAD techniques for superconducting nonlinear circuits an important new enterprise. For example, arrays of quasioptical mixers are now being manufactured, where the antennas, matching networks, filters and superconducting tunnel junctions are all fabricated by depositing niobium and a variety of oxides on a single quartz substrate. There are no adjustable tuning elements on these integrated circuits, and therefore one must be able to predict their electrical behavior precisely. This requirement, together with a general interest in the generic behavior of devices such as direct detectors and harmonic mixers, has led us to develop a range of CAD tools for simulating the large-signal, small-signal, and noise behavior of superconducting tunnel junction circuits.

  20. Understanding I/O workload characteristics of a Peta-scale storage system

    SciTech Connect

    Kim, Youngjae; Gunasekaran, Raghul

    2015-01-01

    Understanding workload characteristics is critical for optimizing and improving the performance of current systems and software, and for architecting new storage systems based on observed workload patterns. In this paper, we characterize the I/O workloads of scientific applications of one of the world's fastest high performance computing (HPC) storage clusters, Spider, at the Oak Ridge Leadership Computing Facility (OLCF). OLCF's flagship petascale simulation platform, Titan, and other large HPC clusters, with over 250 thousand compute cores in total, depend on Spider for their I/O needs. We characterize the system utilization, the demands of reads and writes, idle time, storage space utilization, and the distribution of read requests to write requests for this peta-scale storage system. From this study, we develop synthesized workloads, and we show that the read and write I/O bandwidth usage as well as the inter-arrival time of requests can be modeled as a Pareto distribution. We also study I/O load imbalance problems using I/O performance data collected from the Spider storage system.
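
    The Pareto-modeling claim is straightforward to reproduce in miniature; the synthetic arrivals below stand in for the Spider traces, and all parameters are invented.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        # Stand-in for measured request inter-arrival times (seconds).
        interarrival = stats.pareto.rvs(b=1.8, scale=0.01, size=50_000,
                                        random_state=rng)

        # Fit a Pareto with the location pinned at zero and inspect the tail.
        b_hat, loc_hat, scale_hat = stats.pareto.fit(interarrival, floc=0.0)
        print("fitted tail index b =", round(b_hat, 2))
        # A small tail index means heavy tails: long idle gaps punctuated by
        # intense bursts, which is what makes provisioning for peak I/O hard.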

  1. Advanced Modeling, Simulation and Analysis (AMSA) Capability Roadmap Progress Review

    NASA Technical Reports Server (NTRS)

    Antonsson, Erik; Gombosi, Tamas

    2005-01-01

    Contents include the following: NASA capability roadmap activity. Advanced modeling, simulation, and analysis overview. Scientific modeling and simulation. Operations modeling. Multi-spectral sensing (UV-gamma). System integration. M&S environments and infrastructure.

  2. MOGO: Model-Oriented Global Optimization of Petascale Applications

    SciTech Connect

    Malony, Allen D.; Shende, Sameer S.

    2012-09-14

    The MOGO project was initiated in 2008 under the DOE Program Announcement for Software Development Tools for Improved Ease-of-Use on Petascale Systems (LAB 08-19). The MOGO team consisted of Oak Ridge National Lab, Argonne National Lab, and the University of Oregon. The overall goal of MOGO was to attack petascale performance analysis by developing a general framework in which empirical performance data could be efficiently and accurately compared with performance expectations at various levels of abstraction. This information could then be used to automatically identify and remediate performance problems. MOGO was based on performance models derived from application knowledge, performance experiments, and symbolic analysis. MOGO was able to make a reasonable impact on existing DOE applications and systems: new tools and techniques were developed which, in turn, were used on important DOE applications on DOE LCF systems to show significant performance improvements.

  3. Precision Casting via Advanced Simulation and Manufacturing

    NASA Technical Reports Server (NTRS)

    1997-01-01

    A two-year program was conducted to develop and commercially implement selected casting manufacturing technologies to enable significant reductions in the costs of castings, increase the complexity and dimensional accuracy of castings, and reduce the development times for delivery of high-quality castings. The industry-led R&D project was cost-shared with NASA's Aerospace Industry Technology Program (AITP). The Rocketdyne Division of Boeing North American, Inc. served as the team lead, with participation from Lockheed Martin, Ford Motor Company, Howmet Corporation, PCC Airfoils, General Electric, UES, Inc., University of Alabama, Auburn University, Robinson, Inc., Aracor, and NASA-LeRC. The technical effort was organized into four distinct tasks, whose accomplishments are reported herein. Task 1.0 developed advanced simulation technology for core molding; Ford headed up this task, and on this program a specialized core machine was designed and built. Task 2.0 focused on intelligent process control for precision core molding; Howmet led this effort, and the primary focus of the experimental work was to characterize the process parameters that have a strong impact on dimensional control issues of injection-molded cores during their fabrication. Task 3.0 developed and applied rapid prototyping to produce near-net-shape castings; Rocketdyne was responsible for this task. CAD files were generated using reverse engineering, rapid prototype patterns were fabricated using SLS and SLA, and castings were produced and evaluated. Task 4.0 was aimed at technology transfer; Rocketdyne coordinated this task. Casting-related technology explored and evaluated in the first three tasks of this program was implemented into manufacturing processes.

  4. Emulation of an Advanced G-Seat on the Advanced Simulator for Pilot Training.

    DTIC Science & Technology

    1978-04-01

    This report describes an effort on the Advanced Simulator for Pilot Training (ASPT) which culminated in the emulation of an advanced approach to G-seat simulation. The development of the software, the design of the advanced seat...components, the implementation of the advanced design on the ASPT, and the results of the study are presented. (Author)

  5. On-line Machine Learning and Event Detection in Petascale Data Streams

    NASA Astrophysics Data System (ADS)

    Thompson, David R.; Wagstaff, K. L.

    2012-01-01

    Traditional statistical data mining involves off-line analysis in which all data are available and equally accessible. However, petascale datasets have challenged this premise since it is often impossible to store, let alone analyze, the relevant observations. This has led the machine learning community to investigate adaptive processing chains where data mining is a continuous process. Here pattern recognition permits triage and followup decisions at multiple stages of a processing pipeline. Such techniques can also benefit new astronomical instruments such as the Large Synoptic Survey Telescope (LSST) and Square Kilometre Array (SKA) that will generate petascale data volumes. We summarize some machine learning perspectives on real time data mining, with representative cases of astronomical applications and event detection in high volume datastreams. The first is a "supervised classification" approach currently used for transient event detection at the Very Long Baseline Array (VLBA). It injects known signals of interest - faint single-pulse anomalies - and tunes system parameters to recover these events. This permits meaningful event detection for diverse instrument configurations and observing conditions whose noise cannot be well-characterized in advance. Second, "semi-supervised novelty detection" finds novel events based on statistical deviations from previous patterns. It detects outlier signals of interest while considering known examples of false alarm interference. Applied to data from the Parkes pulsar survey, the approach identifies anomalous "peryton" phenomena that do not match previous event models. Finally, we consider online light curve classification that can trigger adaptive followup measurements of candidate events. Classifier performance analyses suggest optimal survey strategies, and permit principled followup decisions from incomplete data. These examples trace a broad range of algorithm possibilities available for online astronomical data
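
    As a toy illustration of the "semi-supervised novelty detection" idea described above (a generic sketch, not the VLBA or Parkes pipelines; all thresholds and the window/template layout are assumptions), one can flag stream segments that deviate strongly from a background estimate while vetoing matches to known false-alarm templates:

        # Toy semi-supervised novelty detector for windowed stream data
        # (illustrative only). Each row of `windows` is a stream segment;
        # `templates` are z-normalized shapes of known interference.
        import numpy as np

        def detect_novelties(windows, templates, z_thresh=5.0, veto_corr=0.9):
            energies = windows.std(axis=1)
            z = (energies - energies.mean()) / (energies.std() + 1e-12)
            novel = []
            for idx in np.nonzero(z > z_thresh)[0]:
                seg = windows[idx]
                seg = (seg - seg.mean()) / (seg.std() + 1e-12)  # z-normalize
                # Veto candidates matching known false-alarm templates
                # (for z-normalized sequences, Pearson correlation = dot/len).
                if not any(abs(seg @ t) / len(seg) > veto_corr for t in templates):
                    novel.append(idx)
            return novel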

  6. Petascale lattice-Boltzmann studies of amphiphilic cubic liquid crystalline materials in a globally distributed high-performance computing and visualization environment.

    PubMed

    Saksena, Radhika S; Mazzeo, Marco D; Zasada, Stefan J; Coveney, Peter V

    2010-08-28

    We present very large-scale rheological studies of self-assembled cubic gyroid liquid crystalline phases in ternary mixtures of oil, water and amphiphilic species performed on petascale supercomputers using the lattice-Boltzmann method. These nanomaterials have found diverse applications in materials science and biotechnology, for example, in photovoltaic devices and protein crystallization. They are increasingly gaining importance as delivery vehicles for active agents in pharmaceuticals, personal care products and food technology. In many of these applications, the self-assembled structures are subject to flows of varying strengths and we endeavour to understand their rheological response with the objective of eventually predicting it under given flow conditions. Computationally, our lattice-Boltzmann simulations of ternary fluids are inherently memory- and data-intensive. Furthermore, our interest in dynamical processes necessitates remote visualization and analysis as well as the associated transfer and storage of terabytes of time-dependent data. These simulations are distributed on a high-performance grid infrastructure using the application hosting environment; we employ a novel parallel in situ visualization approach which is particularly suited for such computations on petascale resources. We present computational and I/O performance benchmarks of our application on three different petascale systems.

  7. Simulator design for advanced ISDN satellite design and experiments

    NASA Technical Reports Server (NTRS)

    Pepin, Gerald R.

    1992-01-01

    This simulation design task completion report documents the simulation techniques associated with the network models of both the Interim Service ISDN (integrated services digital network) Satellite (ISIS) and the Full Service ISDN Satellite (FSIS) architectures. The ISIS network model design represents satellite systems like the Advanced Communication Technology Satellite (ACTS) orbiting switch. The FSIS architecture, the ultimate aim of this element of the Satellite Communications Applications Research (SCAR) program, moves all control and switching functions on-board the next generation ISDN communication satellite. The technical and operational parameters for the advanced ISDN communications satellite design will be obtained from the simulation of ISIS and FSIS engineering software models for their major subsystems. Discrete-event simulation experiments will be performed with these models using various traffic scenarios, design parameters and operational procedures. The data from these simulations will be used to determine the engineering parameters for the advanced ISDN communications satellite.

  8. Enhanced Capabilities of Advanced Airborne Radar Simulation.

    DTIC Science & Technology

    1996-01-01

    CPU run times on the RCF UNIX-based machine BAUHAUS are provided to illustrate the enhancements in run time, as compared to the original version of the simulation [1]. Figure 27 presents some CPU run times for executing the enhanced simulation on BAUHAUS. Appendices include illustrations explaining how GTD files are read and an input file for sidelobe jammer nulling.

  9. Predicting Performance in Technical Preclinical Dental Courses Using Advanced Simulation.

    PubMed

    Gottlieb, Riki; Baechle, Mary A; Janus, Charles; Lanning, Sharon K

    2017-01-01

    The aim of this study was to investigate whether advanced simulation parameters, such as simulation exam scores, number of student self-evaluations, time to complete the simulation, and time to complete self-evaluations, served as predictors of dental students' preclinical performance. Students from three consecutive classes (n=282) at one U.S. dental school completed advanced simulation training and exams within the first four months of their dental curriculum. The students then completed conventional preclinical instruction and exams in operative dentistry (OD) and fixed prosthodontics (FP) courses, taken during the first and second years of dental school, respectively. Two advanced simulation exam scores (ASES1 and ASES2) were tested as predictors of performance in the two preclinical courses based on final course grades. ASES1 and ASES2 were found to be predictors of OD and FP preclinical course grades. Other advanced simulation parameters were not significantly related to grades in the preclinical courses. These results highlight the value of an early psychomotor skills assessment in dentistry. Advanced simulation scores may allow early intervention in students' learning process and assist in efficient allocation of resources such as faculty coverage and tutor assignment.

  10. Capturing Petascale Application Characteristics with the Sequoia Toolkit

    SciTech Connect

    Vetter, Jeffrey S; Bhatia, Nikhil; Grobelny, Eric M; Roth, Philip C

    2005-09-01

    Characterization of the computation, communication, memory, and I/O demands of current scientific applications is crucial for identifying which technologies will enable petascale scientific computing. In this paper, we present the Sequoia Toolkit for characterizing HPC applications. The Sequoia Toolkit consists of the Sequoia trace capture library and the Sequoia Event Analysis Library, or SEAL, that facilitates the development of tools for analyzing Sequoia event traces. Using the Sequoia Toolkit, we have characterized the behavior of application runs with up to 2048 application processes. To illustrate the use of the Sequoia Toolkit, we present a preliminary characterization of LAMMPS, a molecular dynamics application of great interest to the computational biology community.
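
    To give a flavor of the kind of post-mortem analysis an event-trace toolkit enables (a generic sketch; the event schema and field names below are hypothetical, not the actual Sequoia or SEAL API), consider summarizing per-process communication volume from a list of trace events:

        # Generic trace-analysis sketch: aggregate event counts and bytes per
        # (rank, kind) pair from timestamped events. Schema is hypothetical.
        from collections import defaultdict
        from dataclasses import dataclass

        @dataclass
        class TraceEvent:
            timestamp: float   # seconds since run start
            rank: int          # MPI process rank
            kind: str          # e.g. "send", "recv", "io_write"
            nbytes: int

        def summarize(events):
            totals = defaultdict(lambda: {"count": 0, "bytes": 0})
            for ev in events:
                key = (ev.rank, ev.kind)
                totals[key]["count"] += 1
                totals[key]["bytes"] += ev.nbytes
            return dict(totals)

        events = [TraceEvent(0.10, 0, "send", 4096), TraceEvent(0.12, 1, "recv", 4096)]
        print(summarize(events))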

  11. 14 CFR Appendix H to Part 121 - Advanced Simulation

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... check airmen must include training policies and procedures, instruction methods and techniques... and a means for achieving flightcrew training in advanced airplane simulators. The requirements in... Simulation Training Program For an operator to conduct Level C or D training under this appendix all...

  12. Molecular dynamics simulations: advances and applications

    PubMed Central

    Hospital, Adam; Goñi, Josep Ramon; Orozco, Modesto; Gelpí, Josep L

    2015-01-01

    Molecular dynamics simulations have evolved into a mature technique that can be used effectively to understand macromolecular structure-to-function relationships. Present simulation times are close to biologically relevant ones. Information gathered about the dynamic properties of macromolecules is rich enough to shift the usual paradigm of structural bioinformatics from studying single structures to analyzing conformational ensembles. Here, we describe the foundations of molecular dynamics and the improvements made toward obtaining such ensembles. Specific application of the technique to three main issues (allosteric regulation, docking, and structure refinement) is discussed. PMID:26604800
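
    At its core, a molecular dynamics engine repeatedly integrates Newton's equations of motion for all atoms. A minimal sketch of the standard velocity Verlet integrator for a single particle in a harmonic well (a toy in reduced units; production MD codes add force fields, neighbor lists, thermostats, and constraints) looks like this:

        # Minimal velocity Verlet integration of one particle in a 1-D
        # harmonic well U(x) = 0.5*k*x^2. Illustrative toy, not a force field.
        def force(x, k=1.0):
            return -k * x

        def velocity_verlet(x, v, dt=0.01, steps=1000, m=1.0):
            traj = []
            f = force(x)
            for _ in range(steps):
                x += v * dt + 0.5 * (f / m) * dt * dt    # position update
                f_new = force(x)
                v += 0.5 * (f + f_new) / m * dt          # averaged-force velocity update
                f = f_new
                traj.append((x, v))
            return traj

        traj = velocity_verlet(x=1.0, v=0.0)
        print(traj[-1])  # stays near the unit-energy ellipse x^2 + v^2 = 1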

  13. Advanced Simulation and Computing Business Plan

    SciTech Connect

    Rummel, E.

    2015-07-09

    To maintain a credible nuclear weapons program, the National Nuclear Security Administration’s (NNSA’s) Office of Defense Programs (DP) needs to make certain that the capabilities, tools, and expert staff are in place and are able to deliver validated assessments. This requires a complete and robust simulation environment backed by an experimental program to test ASC Program models. This ASC Business Plan document encapsulates a complex set of elements, each of which is essential to the success of the simulation component of the Nuclear Security Enterprise. The ASC Business Plan addresses the hiring, mentoring, and retaining of programmatic technical staff responsible for building the simulation tools of the nuclear security complex. The ASC Business Plan describes how the ASC Program engages with industry partners—partners on whom the ASC Program relies for today’s and tomorrow’s high performance architectures. Each piece in this chain is essential to assure policymakers, who must make decisions based on the results of simulations, that they are receiving all the actionable information they need.

  14. Interactive visualization to advance earthquake simulation

    USGS Publications Warehouse

    Kellogg, L.H.; Bawden, G.W.; Bernardin, T.; Billen, M.; Cowgill, E.; Hamann, B.; Jadamec, M.; Kreylos, O.; Staadt, O.; Sumner, D.

    2008-01-01

    The geological sciences are challenged to manage and interpret increasing volumes of data as observations and simulations increase in size and complexity. For example, simulations of earthquake-related processes typically generate complex, time-varying data sets in two or more dimensions. To facilitate interpretation and analysis of these data sets, evaluate the underlying models, and drive future calculations, we have developed methods of interactive visualization with a special focus on using immersive virtual reality (VR) environments to interact with models of Earth's surface and interior. Virtual mapping tools allow virtual "field studies" in inaccessible regions. Interactive tools allow us to manipulate shapes in order to construct models of geological features for geodynamic models, while feature extraction tools support quantitative measurement of structures that emerge from numerical simulation or field observations, thereby enabling us to improve our interpretation of the dynamical processes that drive earthquakes. VR has traditionally been used primarily as a presentation tool, albeit with active navigation through data. Reaping the full intellectual benefits of immersive VR as a tool for scientific analysis requires building on the method's strengths, that is, using both 3D perception and interaction with observed or simulated data. This approach also takes advantage of the specialized skills of geological scientists, who are trained to interpret the often limited geological and geophysical data available from field observations. © Birkhäuser 2008.

  15. Simulation Credibility: Advances in Verification, Validation, and Uncertainty Quantification

    NASA Technical Reports Server (NTRS)

    Mehta, Unmeel B. (Editor); Eklund, Dean R.; Romero, Vicente J.; Pearce, Jeffrey A.; Keim, Nicholas S.

    2016-01-01

    Decision makers and other users of simulations need to know quantified simulation credibility to make simulation-based critical decisions and effectively use simulations, respectively. The credibility of a simulation is quantified by its accuracy in terms of uncertainty, and the responsibility of establishing credibility lies with the creator of the simulation. In this volume, we present some state-of-the-art philosophies, principles, and frameworks. The contributing authors involved in this publication have been dedicated to advancing simulation credibility. They detail and provide examples of key advances over the last 10 years in the processes used to quantify simulation credibility: verification, validation, and uncertainty quantification. The philosophies and assessment methods presented here are anticipated to be useful to other technical communities conducting continuum physics-based simulations; for example, issues related to the establishment of simulation credibility in the discipline of propulsion are discussed. We envision that simulation creators will find this volume very useful to guide and assist them in quantitatively conveying the credibility of their simulations.

  16. Simulation Toolkit for Renewable Energy Advanced Materials Modeling

    SciTech Connect

    Sides, Scott; Kemper, Travis; Larsen, Ross; Graf, Peter

    2013-11-13

    STREAMM is a collection of python classes and scripts that enables and eases the setup of input files and configuration files for simulations of advanced energy materials. The core STREAMM python classes provide a general framework for storing, manipulating and analyzing atomic/molecular coordinates to be used in quantum chemistry and classical molecular dynamics simulations of soft materials systems. The design focuses on enabling the interoperability of materials simulation codes such as GROMACS, LAMMPS and Gaussian.
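
    As an illustration of the kind of core functionality such a toolkit provides (a hedged sketch of a generic coordinate container, not the actual STREAMM class hierarchy), a minimal structure object might store atomic symbols and positions and support the simple manipulations shared by quantum chemistry and MD setups:

        # Generic molecular-structure container sketch (illustrative; not the
        # actual STREAMM API): store atoms, translate them, emit an .xyz file.
        import numpy as np

        class Structure:
            def __init__(self):
                self.symbols = []               # element symbols, e.g. "C"
                self.coords = np.empty((0, 3))  # Cartesian positions, Angstroms

            def add_atom(self, symbol, xyz):
                self.symbols.append(symbol)
                self.coords = np.vstack([self.coords, np.asarray(xyz, dtype=float)])

            def translate(self, shift):
                self.coords += np.asarray(shift, dtype=float)

            def to_xyz(self):
                lines = [str(len(self.symbols)), "generated by Structure.to_xyz"]
                for s, (x, y, z) in zip(self.symbols, self.coords):
                    lines.append(f"{s} {x:.6f} {y:.6f} {z:.6f}")
                return "\n".join(lines)

        mol = Structure()
        mol.add_atom("O", (0.0, 0.0, 0.0))
        mol.add_atom("H", (0.96, 0.0, 0.0))
        print(mol.to_xyz())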

  17. Advances in NLTE Modeling for Integrated Simulations

    SciTech Connect

    Scott, H A; Hansen, S B

    2009-07-08

    The last few years have seen significant progress in constructing the atomic models required for non-local thermodynamic equilibrium (NLTE) simulations. Along with this has come an increased understanding of the requirements for accurately modeling the ionization balance, energy content and radiative properties of different elements for a wide range of densities and temperatures. Much of this progress is the result of a series of workshops dedicated to comparing the results from different codes and computational approaches applied to a series of test problems. The results of these workshops emphasized the importance of atomic model completeness, especially in doubly excited states and autoionization transitions, to calculating ionization balance, and the importance of accurate, detailed atomic data to producing reliable spectra. We describe a simple screened-hydrogenic model that calculates NLTE ionization balance with surprising accuracy, at a low enough computational cost for routine use in radiation-hydrodynamics codes. The model incorporates term splitting, Δn = 0 transitions, and approximate UTA widths for spectral calculations, with results comparable to those of much more detailed codes. Simulations done with this model have been increasingly successful at matching experimental data for laser-driven systems and hohlraums. Accurate and efficient atomic models are just one requirement for integrated NLTE simulations. Coupling the atomic kinetics to hydrodynamics and radiation transport constrains both discretizations and algorithms to retain energy conservation, accuracy and stability. In particular, the strong coupling between radiation and populations can require either very short timesteps or significantly modified radiation transport algorithms to account for NLTE material response. Considerations such as these continue to provide challenges for NLTE simulations.
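
    To convey what a screened-hydrogenic model computes at its simplest (a sketch of the textbook level-energy formula only; the paper's model adds term splitting, Δn = 0 transitions, and UTA widths, and its screening constants are fitted, unlike the placeholders here), level energies follow the hydrogenic expression with an effective, screening-reduced nuclear charge:

        # Textbook screened-hydrogenic level energies: E_n = -Ry*(Z_eff/n)^2,
        # with Z_eff = Z - sigma_n for a screening constant sigma_n. The
        # screening values below are ad hoc placeholders, not fitted data.
        RYDBERG_EV = 13.605693

        def level_energy_ev(Z, n, sigma_n):
            z_eff = Z - sigma_n
            return -RYDBERG_EV * (z_eff / n) ** 2

        # Example: crude level energies for iron (Z = 26).
        print(level_energy_ev(26, 1, sigma_n=0.3))   # deep 1s level, keV scale
        print(level_energy_ev(26, 2, sigma_n=4.15))  # shallower n = 2 level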

  18. Process simulation for advanced composites production

    SciTech Connect

    Allendorf, M.D.; Ferko, S.M.; Griffiths, S.

    1997-04-01

    The objective of this project is to improve the efficiency and lower the cost of chemical vapor deposition (CVD) processes used to manufacture advanced ceramics by providing the physical and chemical understanding necessary to optimize and control these processes. Project deliverables include: numerical process models; databases of thermodynamic and kinetic information related to the deposition process; and process sensors and software algorithms that can be used for process control. Target manufacturing techniques include CVD fiber coating technologies (used to deposit interfacial coatings on continuous fiber ceramic preforms), chemical vapor infiltration, thin-film deposition processes used in the glass industry, and coating techniques used to deposit wear-, abrasion-, and corrosion-resistant coatings for use in the pulp and paper, metals processing, and aluminum industries.

  19. Brush seal numerical simulation: Concepts and advances

    NASA Technical Reports Server (NTRS)

    Braun, M. J.; Kudriavtsev, V. V.

    1994-01-01

    The development of the brush seal is considered the most promising among the advanced seal types presently in use in high-speed turbomachinery. The brush is usually mounted on the stationary portions of the engine and has direct contact with the rotating element, in the process limiting the 'unwanted' leakage flows between stages or various engine cavities. This type of sealing technology provides high pressure drops (in comparison with conventional seals) due mainly to the high packing density (around 100 bristles/sq mm) and brush compliance with the rotor motions. In the design of modern aerospace turbomachinery, leakage flows between the stages must be minimal, thus contributing to the higher efficiency of the engine. Use of a brush seal instead of a labyrinth seal reduces the leakage flow by one order of magnitude. Brush seals have also been found to enhance dynamic performance, cost less, and weigh less than labyrinth seals. Even though industrial brush seals have been successfully developed through extensive experimentation, there is no comprehensive numerical methodology for the design or prediction of their performance. The existing analytical/numerical approaches are based on bulk flow models and do not allow investigation of the effects of brush morphology (bristle arrangement) or brush arrangement (number of brushes, spacing between them) on the pressure drops and flow leakage. Increasing brush seal efficiency is clearly a complex problem that is closely related to the brush geometry and arrangement, and can most likely be solved only by means of a numerically distributed model.

  20. Time parallelization of advanced operation scenario simulations of ITER plasma

    SciTech Connect

    Samaddar, D.; Casper, T. A.; Kim, S. H.; Berry, Lee A; Elwasif, Wael R; Batchelor, Donald B; Houlberg, Wayne A

    2013-01-01

    This work demonstrates that simulations of advanced burning plasma operation scenarios can be successfully parallelized in time using the parareal algorithm. CORSICA, an advanced operation scenario code for tokamak plasmas, is used as a test case. This is a unique application, since the parareal algorithm has so far been applied to much simpler systems, except for the case of turbulence. In the present application, a computational gain of an order of magnitude has been achieved, which is extremely promising. A successful implementation of the parareal algorithm in codes like CORSICA ushers in the possibility of time-efficient simulations of ITER plasmas.
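
    The parareal algorithm iteratively corrects a cheap, sequential coarse propagator with accurate fine propagations that can run in parallel across time slices. A minimal serial sketch for a scalar ODE (illustrative only; a production implementation distributes the fine solves over MPI ranks and uses real integrators, not forward Euler) follows:

        # Minimal parareal iteration for dy/dt = f(y) on [0, T].
        # G: cheap coarse propagator; F: fine propagator (many small steps).
        import numpy as np

        def f(y):
            return -y  # toy ODE with known decaying solution

        def G(y, dt):
            return y + dt * f(y)              # coarse: one forward-Euler step

        def F(y, dt, substeps=100):
            h = dt / substeps                 # fine: many small Euler steps
            for _ in range(substeps):
                y = y + h * f(y)
            return y

        def parareal(y0, T, slices=10, iterations=5):
            dt = T / slices
            U = np.zeros(slices + 1); U[0] = y0
            for n in range(slices):           # initial coarse sweep
                U[n + 1] = G(U[n], dt)
            for _ in range(iterations):
                F_vals = [F(U[n], dt) for n in range(slices)]  # parallelizable
                U_new = np.zeros_like(U); U_new[0] = y0
                for n in range(slices):       # sequential correction sweep
                    U_new[n + 1] = G(U_new[n], dt) + F_vals[n] - G(U[n], dt)
                U = U_new
            return U

        print(parareal(1.0, T=2.0)[-1], np.exp(-2.0))  # converges toward exp(-2)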

  1. Hierarchical Petascale Simulation Framework For Stress Corrosion Cracking

    SciTech Connect

    Grama, Ananth

    2013-12-18

    A number of major accomplishments resulted from the project. These include: • Data Structures, Algorithms, and Numerical Methods for Reactive Molecular Dynamics. We have developed a range of novel data structures, algorithms, and solvers (amortized ILU, Spike) for use with ReaxFF and charge equilibration. • Parallel Formulations of Reactive MD (Purdue Reactive Molecular Dynamics Package: PuReMD, PuReMD-GPU, and PG-PuReMD) for message-passing, GPU, and GPU-cluster platforms. We have developed efficient serial, parallel (MPI), GPU (CUDA), and GPU-cluster (MPI/CUDA) implementations. Our implementations have been demonstrated to be significantly better than the state of the art, both in terms of performance and scalability. • Comprehensive Validation in the Context of Diverse Applications. We have demonstrated the use of our software in diverse systems, including silica-water and silicon-germanium nanorods, and, as part of other projects, extended it to applications ranging from explosives (RDX) to lipid bilayers (biomembranes under oxidative stress). • Open Source Software Packages for Reactive Molecular Dynamics. All versions of our software have been released to the public domain. There are over 100 major research groups worldwide using our software. • Implementation into the Department of Energy LAMMPS Software Package. We have also integrated our software into the Department of Energy LAMMPS software package.

  2. LSST Data Management: Entering the Era of Petascale Optical Astronomy

    NASA Astrophysics Data System (ADS)

    Juric, Mario; Tyson, Tony

    2015-03-01

    The Large Synoptic Survey Telescope (LSST; Ivezic et al. 2008, http://lsst.org) is a planned, large-aperture, wide-field, ground-based telescope that will survey half the sky every few nights in six optical bands from 320 to 1050 nm. It will explore a wide range of astrophysical questions, ranging from discovering killer asteroids, to examining the nature of dark energy. LSST will produce on average 15 terabytes of data per night, yielding an (uncompressed) data set of 200 petabytes at the end of its 10-year mission. Dedicated HPC facilities (with a total of 320 TFLOPS at start, scaling up to 1.7 PFLOPS by the end) will process the image data in near real time, with full-dataset reprocessing on annual scale. The nature, quality, and volume of LSST data will be unprecedented, so the data system design requires petascale storage, terascale computing, and gigascale communications.
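
    As a quick sanity check on the figures quoted above (simple arithmetic only; the assumption of nightly observing is mine, and the gap between the raw image stream and the 200 PB total is presumably filled by processed data products and annual reprocessings), the raw acquisition rate accounts for roughly a quarter of the final volume:

        # Back-of-envelope check of the LSST data volumes quoted above.
        tb_per_night = 15
        nights = 365 * 10                  # 10-year survey, assuming nightly observing
        raw_pb = tb_per_night * nights / 1000
        print(raw_pb, "PB of raw images")  # ~54.8 PB, versus the 200 PB total data set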

  3. Implications of advanced collision operators for gyrokinetic simulation

    NASA Astrophysics Data System (ADS)

    Belli, E. A.; Candy, J.

    2017-04-01

    In this work, we explore both the potential improvements and pitfalls that arise when using advanced collision models in gyrokinetic simulations of plasma microinstabilities. Comparisons are made between the simple-but-standard electron Lorentz operator and specific variations of the advanced Sugama operator. The Sugama operator describes multi-species collisions including energy diffusion, momentum and energy conservation terms, and is valid for arbitrary wavelength. We report scans over collision frequency for both low and high k_θ ρ_s modes, with relevance for multiscale simulations that couple ion and electron scale physics. The influence of the ion–ion collision terms—not retained in the electron Lorentz model—on the damping of zonal flows is also explored. Collision frequency scans for linear and nonlinear simulations of ion-temperature-gradient instabilities including impurity ions are presented. Finally, implications for modeling turbulence in the highly collisional edge are discussed.

  4. Gasification CFD Modeling for Advanced Power Plant Simulations

    SciTech Connect

    Zitney, S.E.; Guenther, C.P.

    2005-09-01

    In this paper we have described recent progress on developing CFD models for two commercial-scale gasifiers, including a two-stage, coal slurry-fed, oxygen-blown, pressurized, entrained-flow gasifier and a scaled-up design of the PSDF transport gasifier. Also highlighted was NETL’s Advanced Process Engineering Co-Simulator for coupling high-fidelity equipment models with process simulation for the design, analysis, and optimization of advanced power plants. Using APECS, we have coupled the entrained-flow gasifier CFD model into a coal-fired, gasification-based FutureGen power and hydrogen production plant. The results for the FutureGen co-simulation illustrate how the APECS technology can help engineers better understand and optimize gasifier fluid dynamics and related phenomena that impact overall power plant performance.

  5. Lessons Learned From Dynamic Simulations of Advanced Fuel Cycles

    SciTech Connect

    Steven J. Piet; Brent W. Dixon; Jacob J. Jacobson; Gretchen E. Matthern; David E. Shropshire

    2009-04-01

    Years of performing dynamic simulations of advanced nuclear fuel cycle options provide insights into how they could work and how one might transition from the current once-through fuel cycle. This paper summarizes those insights from the context of the 2005 objectives and goals of the Advanced Fuel Cycle Initiative (AFCI). Our intent is not to compare options, assess options versus those objectives and goals, nor recommend changes to those objectives and goals. Rather, we organize what we have learned from dynamic simulations in the context of the AFCI objectives for waste management, proliferation resistance, uranium utilization, and economics. Thus, we do not merely describe “lessons learned” from dynamic simulations but attempt to answer the “so what” question by using this context. The analyses have been performed using the Verifiable Fuel Cycle Simulation of Nuclear Fuel Cycle Dynamics (VISION). We observe that the 2005 objectives and goals do not address many of the inherently dynamic discriminators among advanced fuel cycle options and transitions thereof.

  6. Design and Optimization of Large Accelerator Systems through High-Fidelity Electromagnetic Simulations

    SciTech Connect

    Ng, Cho; Akcelik, Volkan; Candel, Arno; Chen, Sheng; Ge, Lixin; Kabel, Andreas; Lee, Lie-Quan; Li, Zenghai; Prudencio, Ernesto; Schussman, Greg; Uplenchwar, Ravi; Xiao, Liling; Ko, Kwok; Austin, T.; Cary, J.R.; Ovtchinnikov, S.; Smith, D.N.; Werner, G.R.; Bellantoni, L.; /SLAC /TechX Corp. /Fermilab

    2008-08-01

    SciDAC1, with its support for the 'Advanced Computing for 21st Century Accelerator Science and Technology' (AST) project, witnessed dramatic advances in electromagnetic (EM) simulations for the design and optimization of important accelerators across the Office of Science. In SciDAC2, EM simulations continue to play an important role in the 'Community Petascale Project for Accelerator Science and Simulation' (ComPASS), through close collaborations with SciDAC CETs/Institutes in computational science. Existing codes will be improved and new multi-physics tools will be developed to model large accelerator systems with unprecedented realism and high accuracy using computing resources at petascale. These tools aim at targeting the most challenging problems facing the ComPASS project. Supported by advances in computational science research, they have been successfully applied to the International Linear Collider (ILC) and the Large Hadron Collider (LHC) in High Energy Physics (HEP), the JLab 12-GeV Upgrade in Nuclear Physics (NP), as well as the Spallation Neutron Source (SNS) and the Linac Coherent Light Source (LCLS) in Basic Energy Sciences (BES).

  7. Integration of Advanced Simulation and Visualization for Manufacturing Process Optimization

    NASA Astrophysics Data System (ADS)

    Zhou, Chenn; Wang, Jichao; Tang, Guangwu; Moreland, John; Fu, Dong; Wu, Bin

    2016-05-01

    The integration of simulation and visualization can provide a cost-effective tool for process optimization, design, scale-up and troubleshooting. The Center for Innovation through Visualization and Simulation (CIVS) at Purdue University Northwest has developed methodologies for such integration with applications in various manufacturing processes. The methodologies have proven to be useful for virtual design and virtual training to provide solutions addressing issues on energy, environment, productivity, safety, and quality in steel and other industries. In collaboration with its industrial partners, CIVS has provided solutions to companies, saving over US$38 million. CIVS is currently working with the steel industry to establish an industry-led Steel Manufacturing Simulation and Visualization Consortium through the support of a National Institute of Standards and Technology AMTech Planning Grant. The consortium focuses on supporting development and implementation of simulation and visualization technologies to advance steel manufacturing across the value chain.

  8. Advanced Simulation and Computing FY17 Implementation Plan, Version 0

    SciTech Connect

    McCoy, Michel; Archer, Bill; Hendrickson, Bruce; Wade, Doug; Hoang, Thuc

    2016-08-29

    The Stockpile Stewardship Program (SSP) is an integrated technical program for maintaining the safety, surety, and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational capabilities to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources that support annual stockpile assessment and certification, study advanced nuclear weapons design and manufacturing processes, analyze accident scenarios and weapons aging, and provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balance of resources, including technical staff, hardware, simulation software, and computer science solutions. ASC is now focused on increasing predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (sufficient resolution, dimensionality, and scientific details), and quantifying critical margins and uncertainties. Resolving each issue requires increasingly difficult analyses because the aging process has progressively moved the stockpile further away from the original test base. Where possible, the program also enables the use of high performance computing (HPC) and simulation tools to address broader national security needs, such as foreign nuclear weapon assessments and counter nuclear terrorism.

  9. Requirements for advanced simulation of nuclear reactor and chemical separation plants.

    SciTech Connect

    Palmiotti, G.; Cahalan, J.; Pfeiffer, P.; Sofu, T.; Taiwo, T.; Wei, T.; Yacout, A.; Yang, W.; Siegel, A.; Insepov, Z.; Anitescu, M.; Hovland, P.; Pereira, C.; Regalbuto, M.; Copple, J.; Willamson, M.

    2006-12-11

    This report presents requirements for advanced simulation of nuclear reactor and chemical processing plants that are of interest to the Global Nuclear Energy Partnership (GNEP) initiative. Justification for advanced simulation and some examples of grand challenges that will benefit from it are provided. An integrated software tool whose main components are, whenever possible, based on first principles is proposed as a possible future approach for dealing with the complex problems linked to the simulation of nuclear reactor and chemical processing plants. The main benefits that are associated with a better integrated simulation have been identified as: a reduction of design margins, a decrease of the number of experiments in support of the design process, a shortening of the developmental design cycle, and a better understanding of the physical phenomena and the related underlying fundamental processes. For each component of the proposed integrated software tool, background information, functional requirements, current tools and approaches, and proposed future approaches have been provided. Whenever possible, current uncertainties have been quoted and existing limitations have been presented. Desired target accuracies with associated benefits to the different aspects of the nuclear reactor and chemical processing plants were also given. In many cases the possible gains associated with a better simulation have been identified, quantified, and translated into economic benefits.

  10. Hybrid and electric advanced vehicle systems (heavy) simulation

    NASA Technical Reports Server (NTRS)

    Hammond, R. A.; Mcgehee, R. K.

    1981-01-01

    A computer program to simulate hybrid and electric advanced vehicle systems (HEAVY) is described. It is intended for use early in the design process: concept evaluation, alternative comparison, preliminary design, control and management strategy development, component sizing, and sensitivity studies. It allows the designer to quickly, conveniently, and economically predict the performance of a proposed drive train. The user defines the system to be simulated using a library of predefined component models that may be connected to represent a wide variety of propulsion systems. The development of three models is discussed as an example.
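
    To illustrate the component-library idea described above (a generic sketch; the component names, interfaces, and efficiency values are hypothetical placeholders, not HEAVY's actual model library), a drive-train simulation can chain component models that each transform a power request:

        # Generic component-chain sketch for a drive-train simulator.
        # All components and efficiencies are hypothetical placeholders.
        class Component:
            def __init__(self, name, efficiency):
                self.name = name
                self.efficiency = efficiency

            def output_power(self, input_power_kw):
                return input_power_kw * self.efficiency

        def drivetrain_power(components, source_power_kw):
            power = source_power_kw
            for c in components:
                power = c.output_power(power)  # losses accumulate multiplicatively
            return power

        train = [Component("battery", 0.95), Component("motor", 0.90),
                 Component("transmission", 0.97)]
        print(f"{drivetrain_power(train, 50.0):.1f} kW at the wheels")  # ~41.5 kW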

  11. Preface to advances in numerical simulation of plasmas

    NASA Astrophysics Data System (ADS)

    Parker, Scott E.; Chacon, Luis

    2016-10-01

    This Journal of Computational Physics Special Issue, titled "Advances in Numerical Simulation of Plasmas," presents a snapshot of the international state of the art in the field of computational plasma physics. The articles herein are a subset of the topics presented as invited talks at the 24th International Conference on the Numerical Simulation of Plasmas (ICNSP), August 12-14, 2015 in Golden, Colorado. The choice of papers was highly selective. The ICNSP is held every other year and is the premier scientific meeting in the field of computational plasma physics.

  12. Advances in Simulation of Wave Interaction with Extended MHD Phenomena

    SciTech Connect

    Batchelor, Donald B; Abla, Gheni; D'Azevedo, Ed F; Bateman, Glenn; Bernholdt, David E; Berry, Lee A; Bonoli, P.; Bramley, R; Breslau, Joshua; Chance, M.; Chen, J.; Choi, M.; Elwasif, Wael R; Foley, S.; Fu, GuoYong; Harvey, R. W.; Jaeger, Erwin Frederick; Jardin, S. C.; Jenkins, T; Keyes, David E; Klasky, Scott A; Kruger, Scott; Ku, Long-Poe; Lynch, Vickie E; McCune, Douglas; Ramos, J.; Schissel, D.; Schnack, D. D.; Wright, J.

    2009-01-01

    The Integrated Plasma Simulator (IPS) provides a framework within which some of the most advanced, massively-parallel fusion modeling codes can be interoperated to provide a detailed picture of the multi-physics processes involved in fusion experiments. The presentation will cover four topics: 1) recent improvements to the IPS, 2) application of the IPS for very high resolution simulations of ITER scenarios, 3) studies of resistive and ideal MHD stability in tokamak discharges using IPS facilities, and 4) the application of RF power in the electron cyclotron range of frequencies to control slowly growing MHD modes in tokamaks and initial evaluations of optimized location for RF power deposition.

  13. Advances in Simulation of Wave Interactions with Extended MHD Phenomena

    SciTech Connect

    Batchelor, Donald B; D'Azevedo, Eduardo; Bateman, Glenn; Bernholdt, David E; Bonoli, P.; Bramley, Randall B; Breslau, Joshua; Elwasif, Wael R; Foley, S.; Jaeger, Erwin Frederick; Jardin, S. C.; Klasky, Scott A; Kruger, Scott E; Ku, Long-Poe; McCune, Douglas; Ramos, J.; Schissel, David P; Schnack, Dalton D

    2009-01-01

    The Integrated Plasma Simulator (IPS) provides a framework within which some of the most advanced, massively-parallel fusion modeling codes can be interoperated to provide a detailed picture of the multi-physics processes involved in fusion experiments. The presentation will cover four topics: (1) recent improvements to the IPS, (2) application of the IPS for very high resolution simulations of ITER scenarios, (3) studies of resistive and ideal MHD stability in tokamak discharges using IPS facilities, and (4) the application of RF power in the electron cyclotron range of frequencies to control slowly growing MHD modes in tokamaks and initial evaluations of optimized location for RF power deposition.

  14. Distributed Data-Flow for In-Situ Visualization and Analysis at Petascale

    SciTech Connect

    Laney, D E; Childs, H R

    2009-03-13

    We conducted a feasibility study to research modifications to data-flow architectures to enable data-flow to be distributed across multiple machines automatically. Distributed data-flow is a crucial technology to ensure that tools like the VisIt visualization application can provide in-situ data analysis and post-processing for simulations on peta-scale machines. We modified a version of VisIt to study load-balancing trade-offs between light-weight kernel compute environments and dedicated post-processing cluster nodes. Our research focused on memory overheads for contouring operations, which involves variable amounts of generated geometry on each node and computation of normal vectors for all generated vertices. Each compute node independently decided whether to send data to dedicated post-processing nodes at each stage of pipeline execution, depending on available memory. We instrumented the code to allow user settable available memory amounts to test extremely low-overhead compute environments. We performed initial testing of this prototype distributed streaming framework, but did not have time to perform scaling studies at and beyond 1000 compute-nodes.

  15. The Consortium for Advanced Simulation of Light Water Reactors

    SciTech Connect

    Ronaldo Szilard; Hongbin Zhang; Doug Kothe; Paul Turinsky

    2011-10-01

    The Consortium for Advanced Simulation of Light Water Reactors (CASL) is a DOE Energy Innovation Hub for modeling and simulation of nuclear reactors. It brings together an exceptionally capable team from national labs, industry and academia that will apply existing modeling and simulation capabilities and develop advanced capabilities to create a usable environment for predictive simulation of light water reactors (LWRs). This environment, designated as the Virtual Environment for Reactor Applications (VERA), will incorporate science-based models, state-of-the-art numerical methods, modern computational science and engineering practices, and uncertainty quantification (UQ) and validation against data from operating pressurized water reactors (PWRs). It will couple state-of-the-art fuel performance, neutronics, thermal-hydraulics (T-H), and structural models with existing tools for systems and safety analysis and will be designed for implementation on both today's leadership-class computers and the advanced architecture platforms now under development by the DOE. CASL focuses on a set of challenge problems such as CRUD induced power shift and localized corrosion, grid-to-rod fretting fuel failures, pellet clad interaction, fuel assembly distortion, etc. that encompass the key phenomena limiting the performance of PWRs. It is expected that much of the capability developed will be applicable to other types of reactors. CASL's mission is to develop and apply modeling and simulation capabilities to address three critical areas of performance for nuclear power plants: (1) reduce capital and operating costs per unit energy by enabling power uprates and plant lifetime extension, (2) reduce nuclear waste volume generated by enabling higher fuel burnup, and (3) enhance nuclear safety by enabling high-fidelity predictive capability for component performance.

  16. Advanced simulation study on bunch gap transient effect

    NASA Astrophysics Data System (ADS)

    Kobayashi, Tetsuya; Akai, Kazunori

    2016-06-01

    Bunch phase shift along the train due to a bunch gap transient is a concern in high-current colliders. In KEKB operation, the measured phase shift along the train agreed well with a simulation and a simple analytical form in most part of the train. However, a rapid phase change was observed at the leading part of the train, which was not predicted by the simulation or by the analytical form. In order to understand the cause of this observation, we have developed an advanced simulation, which treats the transient loading in each of the cavities of the three-cavity system of the accelerator resonantly coupled with energy storage (ARES) instead of the equivalent single cavities used in the previous simulation, operating in the accelerating mode. In this paper, we show that the new simulation reproduces the observation, and clarify that the rapid phase change at the leading part of the train is caused by a transient loading in the three-cavity system of ARES. KEKB is being upgraded to SuperKEKB, which is aiming at 40 times higher luminosity than KEKB. The gap transient in SuperKEKB is investigated using the new simulation, and the result shows that the rapid phase change at the leading part of the train is much larger due to higher beam currents. We will also present measures to mitigate possible luminosity reduction or beam performance deterioration due to the rapid phase change caused by the gap transient.

  17. Advanced Virtual Reality Simulations in Aerospace Education and Research

    NASA Astrophysics Data System (ADS)

    Plotnikova, L.; Trivailo, P.

    2002-01-01

    Recent research developments at Aerospace Engineering, RMIT University have demonstrated great potential for using Virtual Reality simulations as a very effective tool in advanced structures and dynamics applications. They have also been extremely successful in the teaching of various undergraduate and postgraduate courses for presenting complex concepts in structural and dynamics designs. Characteristic examples are related to classical orbital mechanics, spacecraft attitude and structural dynamics. Advanced simulations, reflecting current research by the authors, are mainly related to the implementation of various non-linear dynamic techniques, including using Kane's equations to study dynamics of space tethered satellite systems and the Co-rotational Finite Element method to study reconfigurable robotic systems undergoing large rotations and large translations. The current article will describe the numerical implementation of the modern methods of dynamics, and will concentrate on the post-processing stage of the dynamic simulations. Numerous examples of building Virtual Reality stand-alone animations, designed by the authors, will be discussed in detail. The striking feature of the developed technology is the use of standard mathematical packages, like MATLAB, as a post-processing tool to generate Virtual Reality Modelling Language files with brilliant interactive, graphics and audio effects. These stand-alone demonstration files can be run under Netscape or Microsoft Explorer and do not require MATLAB. Use of this technology enables scientists to easily share their results with colleagues using the Internet, contributing to flexible learning development at schools and universities.

  18. Toward Interoperable Mesh, Geometry and Field Components for PDE Simulation Development

    SciTech Connect

    Chand, K K; Diachin, L F; Li, X; Ollivier-Gooch, C; Seol, E S; Shephard, M; Tautges, T; Trease, H

    2005-07-11

    Mesh-based PDE simulation codes are becoming increasingly sophisticated and rely on advanced meshing and discretization tools. Unfortunately, it is still difficult to interchange or interoperate tools developed by different communities to experiment with various technologies or to develop new capabilities. To address these difficulties, we have developed component interfaces designed to support the information flow of mesh-based PDE simulations. We describe this information flow and discuss typical roles and services provided by the geometry, mesh, and field components of the simulation. Based on this delineation for the roles of each component, we give a high-level description of the abstract data model and set of interfaces developed by the Department of Energy's Interoperable Tools for Advanced Petascale Simulation (ITAPS) center. These common interfaces are critical to our interoperability goal, and we give examples of several services based upon these interfaces including mesh adaptation and mesh improvement.
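
    To make the notion of a common mesh interface concrete (a hedged, language-neutral sketch; the actual ITAPS iMesh interface is a C API with its own naming conventions, which is not reproduced here), a mesh component might expose entity iteration and adjacency queries behind a small abstract surface, so that services like mesh adaptation can be written once against the interface:

        # Illustrative mesh-interface sketch (not the actual iMesh API):
        # interoperable services only need iteration and adjacency queries.
        from abc import ABC, abstractmethod

        class MeshInterface(ABC):
            @abstractmethod
            def entities(self, dimension):
                """Yield handles of all entities of the given topological dimension."""

            @abstractmethod
            def adjacencies(self, entity, target_dimension):
                """Return entities of target_dimension adjacent to the given entity."""

        def count_triangles(mesh: MeshInterface):
            # A service written only against the interface works with any
            # conforming mesh implementation: the interoperability goal.
            return sum(1 for e in mesh.entities(2) if len(mesh.adjacencies(e, 0)) == 3)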

  19. Simulated herbivory advances autumn phenology in Acer rubrum.

    PubMed

    Forkner, Rebecca E

    2014-05-01

    To determine the degree to which herbivory contributes to phenotypic variation in autumn phenology for deciduous trees, red maple (Acer rubrum) branches were subjected to low and high levels of simulated herbivory and surveyed at the end of the season to assess abscission and degree of autumn coloration. Overall, branches with simulated herbivory abscised ∼7 % more leaves at each autumn survey date than did control branches within trees. While branches subjected to high levels of damage showed advanced phenology, abscission rates did not differ from those of undamaged branches within trees because heavy damage induced earlier leaf loss on adjacent branch nodes in this treatment. Damaged branches had greater proportions of leaf area colored than undamaged branches within trees, having twice the amount of leaf area colored at the onset of autumn and having ~16 % greater leaf area colored in late October when nearly all leaves were colored. When senescence was scored as the percent of all leaves abscised and/or colored, branches in both treatments reached peak senescence earlier than did control branches within trees: dates of 50 % senescence occurred 2.5 days earlier for low herbivory branches and 9.7 days earlier for branches with high levels of simulated damage. These advanced rates are of the same time length as reported delays in autumn senescence and advances in spring onset due to climate warming. Thus, results suggest that should insect damage increase as a consequence of climate change, it may offset a lengthening of leaf life spans in some tree species.

  20. The Advanced Gamma-ray Imaging System (AGIS): Simulation studies

    SciTech Connect

    Maier, G.; Buckley, J.; Bugaev, V.; Fegan, S.; Funk, S.; Konopelko, A.; Vassiliev, V.V.; /UCLA

    2011-06-14

    The Advanced Gamma-ray Imaging System (AGIS) is a next-generation ground-based gamma-ray observatory being planned in the U.S. The anticipated sensitivity of AGIS is about one order of magnitude better than the sensitivity of current observatories, allowing it to measure gamma-ray emission from a large number of Galactic and extra-galactic sources. We present here results of simulation studies of various possible designs for AGIS. The primary characteristics of the array performance - collecting area, angular resolution, background rejection, and sensitivity - are discussed.

  1. The Advanced Gamma-ray Imaging System (AGIS) - Simulation Studies

    SciTech Connect

    Maier, G.; Buckley, J.; Bugaev, V.; Fegan, S.; Vassiliev, V. V.; Funk, S.; Konopelko, A.

    2008-12-24

    The Advanced Gamma-ray Imaging System (AGIS) is a US-led concept for a next-generation instrument in ground-based very-high-energy gamma-ray astronomy. The most important design requirement for AGIS is a sensitivity about 10 times greater than that of current observatories such as VERITAS, H.E.S.S., or MAGIC. We present results of simulation studies of various possible designs for AGIS. The primary characteristics of the array performance, collecting area, angular resolution, background rejection, and sensitivity are discussed.

  2. EGR Distribution in Engine Cylinders Using Advanced Virtual Simulation

    SciTech Connect

    Fan, Xuetong

    2000-08-20

    Exhaust Gas Recirculation (EGR) is a well-known technology for reduction of NOx in diesel engines. With the demand for extremely low engine out NOx emissions, it is important to have a consistently balanced EGR flow to individual engine cylinders. Otherwise, the variation in the cylinders' NOx contribution to the overall engine emissions will produce unacceptable variability. This presentation will demonstrate the effective use of advanced virtual simulation in the development of a balanced EGR distribution in engine cylinders. An initial design is analyzed reflecting the variance in the EGR distribution, quantitatively and visually. Iterative virtual lab tests result in an optimized system.

  3. New Developments in the Simulation of Advanced Accelerator Concepts

    SciTech Connect

    Bruhwiler, David L.; Cary, John R.; Cowan, Benjamin M.; Paul, Kevin; Mullowney, Paul J.; Messmer, Peter; Geddes, Cameron G. R.; Esarey, Eric; Cormier-Michel, Estelle; Leemans, Wim; Vay, Jean-Luc

    2009-01-22

    Improved computational methods are essential to the diverse and rapidly developing field of advanced accelerator concepts. We present an overview of some computational algorithms for laser-plasma concepts and high-brightness photocathode electron sources. In particular, we discuss algorithms for reduced laser-plasma models that can be orders of magnitude faster than their higher-fidelity counterparts, as well as important on-going efforts to include relevant additional physics that has been previously neglected. As an example of the former, we present 2D laser wakefield accelerator simulations in an optimal Lorentz frame, demonstrating >10 GeV energy gain of externally injected electrons over a 2 m interaction length, showing good agreement with predictions from scaled simulations and theory, with a speedup factor of approximately 2,000 as compared to standard particle-in-cell.
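
    The quoted speedup comes from running the calculation in a Lorentz-boosted frame, where the disparity between the laser wavelength and the plasma length shrinks. A rough version of the commonly cited scaling estimate (a hedged sketch, not the paper's detailed accounting; the estimate that savings scale as roughly (1+β)²γ² for a boost of Lorentz factor γ is an assumption drawn from the boosted-frame literature) can be checked numerically:

        # Rough boosted-frame speedup estimate, ~(1 + beta)^2 * gamma^2 for a
        # boost of Lorentz factor gamma (back-of-envelope scaling only).
        import math

        def boosted_frame_speedup(gamma):
            beta = math.sqrt(1.0 - 1.0 / gamma**2)
            return (1.0 + beta) ** 2 * gamma**2

        for gamma in (5, 10, 23):
            print(gamma, round(boosted_frame_speedup(gamma)))
        # gamma ~ 23 already yields a speedup on the order of the ~2,000 quoted above.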

  4. Benchmarking of Advanced Control Strategies for a Simulated Hydroelectric System

    NASA Astrophysics Data System (ADS)

    Finotti, S.; Simani, S.; Alvisi, S.; Venturini, M.

    2017-01-01

    This paper analyses and develops the design of advanced control strategies for a typical hydroelectric plant during unsteady conditions, performed in the Matlab and Simulink environments. The hydraulic system consists of a high water head and a long penstock with upstream and downstream surge tanks, and is equipped with a Francis turbine. The nonlinear characteristics of hydraulic turbine and the inelastic water hammer effects were considered to calculate and simulate the hydraulic transients. With reference to the control solutions addressed in this work, the proposed methodologies rely on data-driven and model-based approaches applied to the system under monitoring. Extensive simulations and comparisons serve to determine the best solution for the development of the most effective, robust and reliable control tool when applied to the considered hydraulic system.

  5. New Developments in the Simulation of Advanced Accelerator Concepts

    SciTech Connect

    Paul, K.; Cary, J.R.; Cowan, B.; Bruhwiler, D.L.; Geddes, C.G.R.; Mullowney, P.J.; Messmer, P.; Esarey, E.; Cormier-Michel, E.; Leemans, W.P.; Vay, J.-L.

    2008-09-10

    Improved computational methods are essential to the diverse and rapidly developing field of advanced accelerator concepts. We present an overview of some computational algorithms for laser-plasma concepts and high-brightness photocathode electron sources. In particular, we discuss algorithms for reduced laser-plasma models that can be orders of magnitude faster than their higher-fidelity counterparts, as well as important on-going efforts to include relevant additional physics that has been previously neglected. As an example of the former, we present 2D laser wakefield accelerator simulations in an optimal Lorentz frame, demonstrating >10 GeV energy gain of externally injected electrons over a 2 m interaction length, showing good agreement with predictions from scaled simulations and theory, with a speedup factor of ~2,000 as compared to standard particle-in-cell.

  6. Recent advances of strong-strong beam-beam simulation

    SciTech Connect

    Qiang, Ji; Furman, Miguel A.; Ryne, Robert D.; Fischer, Wolfram; Ohmi,Kazuhito

    2004-09-15

    In this paper, we report on recent advances in strong-strong beam-beam simulation. Numerical methods used in the calculation of the beam-beam forces are reviewed. A new computational method to solve the Poisson equation on a nonuniform grid is presented. This method reduces the computational cost by half compared with the standard FFT-based method on a uniform grid. It is also more accurate than the standard method for a colliding beam with a low transverse aspect ratio. In applications, we present the study of coherent modes with multi-bunch, multi-collision beam-beam interactions at RHIC. We also present the strong-strong simulation of the luminosity evolution at KEKB with and without a finite crossing angle.
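
    For reference, the uniform-grid baseline mentioned above solves the Poisson equation spectrally. A minimal periodic 2-D FFT Poisson solver (a textbook sketch of the standard uniform-grid method only; the paper's contribution is the nonuniform-grid solver, which is not shown here) looks like this:

        # Textbook FFT-based Poisson solver on a uniform periodic 2-D grid:
        # solve laplacian(phi) = -rho, i.e. -k^2 * phi_hat = -rho_hat.
        import numpy as np

        def poisson_fft(rho, dx):
            n = rho.shape[0]
            k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
            kx, ky = np.meshgrid(k, k, indexing="ij")
            k2 = kx**2 + ky**2
            k2[0, 0] = 1.0                   # avoid dividing the mean mode by zero
            phi_hat = np.fft.fft2(rho) / k2  # phi_hat = rho_hat / k^2
            phi_hat[0, 0] = 0.0              # fix the arbitrary constant (zero mean)
            return np.real(np.fft.ifft2(phi_hat))

        # Self-check: rho = 2 sin(x) sin(y) on [0, 2*pi)^2 has exact phi = sin(x) sin(y).
        n = 64; dx = 2 * np.pi / n
        x = np.arange(n) * dx
        X, Y = np.meshgrid(x, x, indexing="ij")
        phi = poisson_fft(2 * np.sin(X) * np.sin(Y), dx)
        print(np.max(np.abs(phi - np.sin(X) * np.sin(Y))))  # near machine precision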

  7. Active Storage with Analytics Capabilities and I/O Runtime System for Petascale Systems

    SciTech Connect

    Choudhary, Alok

    2015-03-18

    Computational scientists must understand results from experimental, observational and computational simulation generated data to gain insights and perform knowledge discovery. As systems approach the petascale range, problems that were unimaginable a few years ago are within reach. With the increasing volume and complexity of data produced by ultra-scale simulations and high-throughput experiments, understanding the science is largely hampered by the lack of comprehensive I/O, storage, acceleration of data manipulation, analysis, and mining tools. Scientists require techniques, tools and infrastructure to facilitate better understanding of their data, in particular the ability to effectively perform complex data analysis, statistical analysis and knowledge discovery. The goal of this work is to enable more effective analysis of scientific datasets through the integration of enhancements in the I/O stack, from active storage support at the file system layer to MPI-IO and high-level I/O library layers. We propose to provide software components to accelerate data analytics, mining, I/O, and knowledge discovery for large-scale scientific applications, thereby increasing the productivity of both scientists and the systems. Our approaches include: (1) designing interfaces in high-level I/O libraries, such as parallel netCDF, for applications to activate data mining operations at the lower I/O layers; (2) enhancing MPI-IO runtime systems to incorporate the functionality developed as part of the runtime system design; (3) developing parallel data mining programs as part of the server-side runtime library of the PVFS file system; and (4) prototyping an active storage cluster, which will utilize multicore CPUs, GPUs, and FPGAs to carry out the data mining workload.

  8. Advanced radiometric and interferometric millimeter-wave scene simulations

    NASA Technical Reports Server (NTRS)

    Hauss, B. I.; Moffa, P. J.; Steele, W. G.; Agravante, H.; Davidheiser, R.; Samec, T.; Young, S. K.

    1993-01-01

    Smart munitions and weapons utilize various imaging sensors (including passive IR, active and passive millimeter-wave, and visible wavebands) to detect/identify targets at short standoff ranges and in varied terrain backgrounds. In order to design and evaluate these sensors under a variety of conditions, a high-fidelity scene simulation capability is necessary. Such a capability for passive millimeter-wave scene simulation exists at TRW. TRW's Advanced Radiometric Millimeter-Wave Scene Simulation (ARMSS) code is a rigorous, benchmarked, end-to-end passive millimeter-wave scene simulation code for interpreting millimeter-wave data, establishing scene signatures and evaluating sensor performance. In passive millimeter-wave imaging, resolution is limited due to wavelength and aperture size. Where high resolution is required, the utility of passive millimeter-wave imaging is confined to short ranges. Recent developments in interferometry have made possible high resolution applications on military platforms. Interferometry or synthetic aperture radiometry allows the creation of a high resolution image with a sparsely filled aperture. Borrowing from research work in radio astronomy, we have developed and tested at TRW scene reconstruction algorithms that allow the recovery of the scene from a relatively small number of spatial frequency components. In this paper, the TRW modeling capability is described and numerical results are presented.

  9. Advanced radiometric and interferometric millimeter-wave scene simulations

    NASA Astrophysics Data System (ADS)

    Hauss, B. I.; Moffa, P. J.; Steele, W. G.; Agravante, H.; Davidheiser, R.; Samec, T.; Young, S. K.

    1993-12-01

    Smart munitions and weapons utilize various imaging sensors (including passive IR, active and passive millimeter-wave, and visible wavebands) to detect/identify targets at short standoff ranges and in varied terrain backgrounds. In order to design and evaluate these sensors under a variety of conditions, a high-fidelity scene simulation capability is necessary. Such a capability for passive millimeter-wave scene simulation exists at TRW. TRW's Advanced Radiometric Millimeter-Wave Scene Simulation (ARMSS) code is a rigorous, benchmarked, end-to-end passive millimeter-wave scene simulation code for interpreting millimeter-wave data, establishing scene signatures and evaluating sensor performance. In passive millimeter-wave imaging, resolution is limited due to wavelength and aperture size. Where high resolution is required, the utility of passive millimeter-wave imaging is confined to short ranges. Recent developments in interferometry have made possible high resolution applications on military platforms. Interferometry or synthetic aperture radiometry allows the creation of a high resolution image with a sparsely filled aperture. Borrowing from research work in radio astronomy, we have developed and tested at TRW scene reconstruction algorithms that allow the recovery of the scene from a relatively small number of spatial frequency components. In this paper, the TRW modeling capability is described and numerical results are presented.

  10. Graphics simulation and training aids for advanced teleoperation

    NASA Technical Reports Server (NTRS)

    Kim, Won S.; Schenker, Paul S.; Bejczy, Antal K.

    1993-01-01

    Graphics displays can be of significant aid in accomplishing a teleoperation task throughout all three phases of off-line task analysis and planning, operator training, and online operation. In the first phase, graphics displays provide substantial aid to investigate work cell layout, motion planning with collision detection and with possible redundancy resolution, and planning for camera views. In the second phase, graphics displays can serve as very useful tools for introductory training of operators before training them on actual hardware. In the third phase, graphics displays can be used for previewing planned motions and monitoring actual motions in any desired viewing angle, or, when communication time delay prevails, for providing predictive graphics overlay on the actual camera view of the remote site to show the non-time-delayed consequences of commanded motions in real time. This paper addresses potential space applications of graphics displays in all three operational phases of advanced teleoperation. Possible applications are illustrated with techniques developed and demonstrated in the Advanced Teleoperation Laboratory at JPL. The examples described include task analysis and planning of a simulated Solar Maximum Satellite Repair task, a novel force-reflecting teleoperation simulator for operator training, and preview and predictive displays for on-line operations.
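
    The predictive-display idea in the third phase reduces to running a local model of the remote manipulator in parallel with the real one: the overlay integrates the operator's commands immediately, while the camera view arrives a round trip later. The sketch below is a generic illustration with an assumed delay and time step, not the JPL simulator.

      from collections import deque

      # Generic predictive display: the overlay state updates instantly from
      # the commanded velocity, while the "camera" reports the same motion
      # only after a fixed communication delay.
      DELAY_STEPS = 5                    # round-trip delay in ticks (assumed)
      DT = 0.1                           # control period, seconds (assumed)

      overlay_x = 0.0
      remote_x = 0.0
      camera = deque([0.0] * DELAY_STEPS)

      for tick in range(20):
          v_cmd = 1.0 if tick < 10 else 0.0   # simple commanded velocity profile
          overlay_x += v_cmd * DT             # non-delayed predicted consequence
          remote_x += v_cmd * DT              # remote robot executes the command
          camera.append(remote_x)
          seen_x = camera.popleft()           # what the delayed camera shows
          print(f"t={tick * DT:4.1f}s  overlay={overlay_x:5.2f}  camera={seen_x:5.2f}")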

  11. A Virtual Engineering Framework for Simulating Advanced Power System

    SciTech Connect

    Mike Bockelie; Dave Swensen; Martin Denison; Stanislav Borodai

    2008-06-18

    This report describes the work performed to provide NETL with VE-Suite-based Virtual Engineering software and enhanced equipment models to support NETL's Advanced Process Engineering Co-simulation (APECS) framework for advanced power generation systems. Enhancements to the software framework facilitated an important link between APECS and the virtual engineering capabilities provided by VE-Suite (e.g., equipment and process visualization, information assimilation). Model enhancements focused on improving predictions for the performance of entrained-flow coal gasifiers and important auxiliary equipment (e.g., Air Separation Units) used in coal gasification systems. In addition, a Reduced Order Model generation tool and software to couple APECS/AspenPlus with the GE GateCycle simulation system were developed. CAPE-Open model interfaces were employed where needed. The improved simulation capability is demonstrated on selected test problems. As part of the project, an Advisory Panel was formed to provide guidance on the issues on which to focus the work effort. The Advisory Panel included experts from industry and academia in gasification, CO2 capture and process simulation, as well as representatives from technology developers and the electric utility industry. To optimize the benefit to NETL, REI coordinated its efforts with NETL and NETL-funded projects at Iowa State University, Carnegie Mellon University and ANSYS/Fluent, Inc. The improved simulation capabilities incorporated into APECS will enable researchers and engineers to better understand the interactions of different equipment components, identify weaknesses and processes needing improvement, and thereby allow more efficient, less expensive plants to be developed and brought on-line faster and in a more cost-effective manner. These enhancements to APECS represent an important step toward having a fully integrated environment for performing plant simulation and engineering
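
    Reduced Order Model generation of the kind mentioned above typically means fitting a fast surrogate to a modest number of expensive simulation runs. The sketch below fits a least-squares quadratic response surface to samples of a stand-in "gasifier response" function; the function, inputs, and names are invented for illustration and do not represent REI's tool or the APECS interfaces.

      import numpy as np

      def expensive_model(x):
          """Stand-in for a CFD gasifier run; x = (oxygen/coal ratio, pressure)."""
          return np.sin(3.0 * x[0]) + 0.5 * x[1] ** 2 + 0.2 * x[0] * x[1]

      rng = np.random.default_rng(3)
      X = rng.uniform(0.0, 1.0, size=(30, 2))        # 30 "simulation" runs
      y = np.array([expensive_model(x) for x in X])

      def basis(X):
          # Quadratic polynomial basis: [1, x1, x2, x1^2, x1*x2, x2^2]
          x1, x2 = X[:, 0], X[:, 1]
          return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x1 * x2, x2**2])

      coeffs, *_ = np.linalg.lstsq(basis(X), y, rcond=None)

      x_new = np.array([[0.4, 0.7]])
      print("ROM prediction :", float(basis(x_new) @ coeffs))
      print("full model run :", expensive_model(x_new[0]))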

  12. Using 100G Network Technology in Support of Petascale Science

    NASA Technical Reports Server (NTRS)

    Gary, James P.

    2011-01-01

    NASA, in collaboration with a number of partners, conducted a set of individual experiments and demonstrations during SC 10 that collectively were titled "Using 100G Network Technology in Support of Petascale Science". The partners included iCAIR, Internet2, LAC, MAX, National LambdaRail (NLR), NOAA and the SCinet Research Sandbox (SRS), as well as the vendors Ciena, Cisco, ColorChip, cPacket, Extreme Networks, Fusion-io, HP and Panduit, who most generously allowed some of their leading-edge 40G/100G optical transport, Ethernet switch and Internet Protocol router equipment and file server technologies to be involved. The experiments and demonstrations featured different vendor-provided 40G/100G network technology solutions for full-duplex 40G and 100G LAN data flows across SRS-deployed single-mode fiber pairs among the exhibit booths of NASA, the National Center for Data Mining, NOAA and the SCinet Network Operations Center, as well as between the NASA exhibit booth in New Orleans and the StarLight communications exchange facility in Chicago across special SC 10-only 80- and 100-Gbps wide area network links provisioned respectively by NLR and Internet2, then on to GSFC across a 40-Gbps link provisioned by the Mid-Atlantic Crossroads. The networks and vendor equipment were load-stressed by sets of NASA/GSFC High End Computer Network Team-built, relatively inexpensive net-test-workstations that are capable of demonstrating greater than 100-Gbps uni-directional nuttcp-enabled memory-to-memory data transfers, greater than 80-Gbps aggregate bidirectional memory-to-memory data transfers, and near 40-Gbps uni-directional disk-to-disk file copying. This paper summarizes the background context, key accomplishments and the significance of these experiments and demonstrations.
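
    The memory-to-memory transfers described above are the kind of measurement tools such as nuttcp make: data moves between RAM buffers over the network with no disk in the path. As a rough illustration of the idea (not the GSFC net-test-workstation tooling), a loopback TCP throughput probe looks like this; the port number and transfer size are arbitrary:

      import socket
      import threading
      import time

      PORT = 50101                      # arbitrary test port (assumption)
      TOTAL = 256 * 1024 * 1024         # bytes to push (256 MiB)
      CHUNK = bytes(1024 * 1024)        # 1 MiB RAM buffer; no disk involved

      def sink():
          # Receiver: drain bytes as fast as they arrive.
          with socket.create_server(("127.0.0.1", PORT)) as srv:
              conn, _ = srv.accept()
              with conn:
                  while conn.recv(1 << 20):
                      pass

      threading.Thread(target=sink, daemon=True).start()
      time.sleep(0.2)                   # give the listener time to start

      start = time.perf_counter()
      with socket.create_connection(("127.0.0.1", PORT)) as c:
          for _ in range(TOTAL // len(CHUNK)):
              c.sendall(CHUNK)
      elapsed = time.perf_counter() - start
      print(f"{8 * TOTAL / elapsed / 1e9:.2f} Gbps memory-to-memory (loopback)")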

  13. Probabilistic Photometric Redshifts in the Era of Petascale Astronomy

    SciTech Connect

    Carrasco Kind, Matias

    2014-01-01

    to enable the development of precision cosmology in the era of petascale astronomical surveys.

  14. Final Report for Enhancing the MPI Programming Model for PetaScale Systems

    SciTech Connect

    Gropp, William Douglas

    2013-07-22

    This project performed research into enhancing the MPI programming model in two ways: developing improved algorithms and implementation strategies, tested and realized in the MPICH implementation, and exploring extensions to the MPI standard to better support PetaScale and ExaScale systems.
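
    A concrete example of the kind of MPI extension pursued for petascale systems is the nonblocking collective, which lets a global reduction overlap with local computation. The sketch below uses mpi4py's binding of MPI_Iallreduce (standardized in MPI-3) purely to illustrate the idea; it is not code from this project.

      import numpy as np
      from mpi4py import MPI

      # Run with, e.g.:  mpiexec -n 4 python iallreduce_demo.py
      comm = MPI.COMM_WORLD
      local = np.full(1_000_000, comm.rank, dtype="d")
      total = np.empty_like(local)

      req = comm.Iallreduce(local, total, op=MPI.SUM)   # returns immediately

      busy = np.sin(local).sum()    # local work overlaps the reduction

      req.Wait()                    # reduction guaranteed complete after this
      if comm.rank == 0:
          print("reduced element:", total[0], "expected:", sum(range(comm.size)))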

  15. Investigations and advanced concepts on gyrotron interaction modeling and simulations

    SciTech Connect

    Avramidis, K. A.

    2015-12-15

    In gyrotron theory, the interaction between the electron beam and the high frequency electromagnetic field is commonly modeled using the slow variables approach. The slow variables are quantities that vary slowly in time in comparison to the electron cyclotron frequency. They represent the electron momentum and the high frequency field of the resonant TE modes in the gyrotron cavity. For their definition, some reference frequencies need to be introduced. These include the so-called averaging frequency, used to define the slow variable corresponding to the electron momentum, and the carrier frequencies, used to define the slow variables corresponding to the field envelopes of the modes. From the mathematical point of view, the choice of the reference frequencies is, to some extent, arbitrary. However, from the numerical point of view, there are arguments that point toward specific choices, in the sense that these choices are advantageous in terms of simulation speed and accuracy. In this paper, the typical monochromatic gyrotron operation is considered, and the numerical integration of the interaction equations is performed by the trajectory approach, since it is the fastest, and therefore it is the one that is most commonly used. The influence of the choice of the reference frequencies on the interaction simulations is studied using theoretical arguments, as well as numerical simulations. From these investigations, appropriate choices for the values of the reference frequencies are identified. In addition, novel, advanced concepts for the definitions of these frequencies are addressed, and their benefits are demonstrated numerically.
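
    In schematic form (with notation assumed here for illustration, not quoted from the paper), the slow-variable ansatz factors the fast oscillation out of the transverse momentum and the modal field amplitudes:

      % Slow-variable decomposition: \omega_{av} is the averaging frequency
      % for the electron momentum and \omega_m the carrier frequency chosen
      % for mode m; \hat{p} and \hat{E}_m vary slowly compared with the
      % cyclotron period, which is what permits large integration steps.
      p_\perp(t) = \hat{p}(t)\, e^{-i\,\omega_{av} t},
      \qquad
      E_m(t) = \hat{E}_m(t)\, e^{-i\,\omega_m t}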

  16. Advanced modeling and simulation to design and manufacture high performance and reliable advanced microelectronics and microsystems.

    SciTech Connect

    Nettleship, Ian (University of Pittsburgh, Pittsburgh, PA); Hinklin, Thomas; Holcomb, David Joseph; Tandon, Rajan; Arguello, Jose Guadalupe, Jr.; Dempsey, James Franklin; Ewsuk, Kevin Gregory; Neilsen, Michael K.; Lanagan, Michael (Pennsylvania State University, University Park, PA)

    2007-07-01

    An interdisciplinary team of scientists and engineers having broad expertise in materials processing and properties, materials characterization, and computational mechanics was assembled to develop science-based modeling/simulation technology to design and reproducibly manufacture high performance and reliable, complex microelectronics and microsystems. The team's efforts focused on defining and developing a science-based infrastructure to enable predictive compaction, sintering, stress, and thermomechanical modeling in "real systems", including: (1) developing techniques to determine the materials properties and constitutive behavior required for modeling; (2) developing new, improved/updated models and modeling capabilities; (3) ensuring that models are representative of the physical phenomena being simulated; and (4) assessing existing modeling capabilities to identify advances necessary to facilitate the practical application of Sandia's predictive modeling technology.

  17. Advanced Simulation Capability for Environmental Management (ASCEM): Early Site Demonstration

    SciTech Connect

    Meza, Juan; Hubbard, Susan; Freshley, Mark D.; Gorton, Ian; Moulton, David; Denham, Miles E.

    2011-03-07

    The U.S. Department of Energy Office of Environmental Management, Technology Innovation and Development (EM-32), is supporting development of the Advanced Simulation Capability for Environmental Management (ASCEM). ASCEM is a state-of-the-art scientific tool and approach for understanding and predicting contaminant fate and transport in natural and engineered systems. The modular and open source high performance computing tool will facilitate integrated approaches to modeling and site characterization that enable robust and standardized assessments of performance and risk for EM cleanup and closure activities. As part of the initial development process, a series of demonstrations were defined to test ASCEM components and provide feedback to developers, engage end users in applications, and lead to an outcome that would benefit the sites. The demonstration was implemented for a sub-region of the Savannah River Site General Separations Area that includes the F-Area Seepage Basins. The physical domain included the unsaturated and saturated zones in the vicinity of the seepage basins and Fourmile Branch, using an unstructured mesh fit to the hydrostratigraphy and topography of the site. The calculations modeled variably saturated flow and the resulting flow field was used in simulations of the advection of non-reactive species and the reactive-transport of uranium. As part of the demonstrations, a new set of data management, visualization, and uncertainty quantification tools were developed to analyze simulation results and existing site data. These new tools can be used to provide summary statistics, including information on which simulation parameters were most important in the prediction of uncertainty and to visualize the relationships between model input and output.
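
    For background, the variably saturated flow computed in this demonstration is the regime governed by Richards' equation, shown here in its standard mixed form (not quoted from the report):

      % Richards' equation: \theta is volumetric water content, \psi pressure
      % head, K(\psi) the unsaturated hydraulic conductivity, and z the
      % vertical coordinate; the saturated zone recovers ordinary
      % groundwater flow.
      \frac{\partial \theta(\psi)}{\partial t}
        = \nabla \cdot \bigl[ K(\psi)\, \nabla(\psi + z) \bigr]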

  18. TID Simulation of Advanced CMOS Devices for Space Applications

    NASA Astrophysics Data System (ADS)

    Sajid, Muhammad

    2016-07-01

    This paper focuses on Total Ionizing Dose (TID) effects caused by the accumulation of charge in the silicon dioxide, at the substrate/silicon dioxide interface, and in the Shallow Trench Isolation (STI) of scaled CMOS bulk devices, as well as in the Buried Oxide (BOX) layer of devices based on Silicon-On-Insulator (SOI) technology, for operation in the space radiation environment. The radiation-induced leakage current, and the corresponding electron density/concentration in the leakage-current path, are presented for 180 nm, 130 nm and 65 nm NMOS and PMOS transistors based on CMOS bulk as well as SOI process technologies on board LEO and GEO satellites. On the basis of the simulation results, a TID robustness analysis for advanced deep sub-micron technologies was carried out up to 500 krad. The correlation between technology scaling and the magnitude of the leakage current at the corresponding total dose was established using the Visual TCAD Genius program.

  19. Simulated Interactive Research Experiments as Educational Tools for Advanced Science.

    PubMed

    Tomandl, Mathias; Mieling, Thomas; Losert-Valiente Kroon, Christiane M; Hopf, Martin; Arndt, Markus

    2015-09-15

    Experimental research has become complex and thus a challenge to science education. Only very few students can typically be trained on advanced scientific equipment. It is therefore important to find new tools that allow all students to acquire laboratory skills individually and independently of where they are located. In a design-based research process we have investigated the feasibility of using a virtual laboratory as a photo-realistic and scientifically valid representation of advanced scientific infrastructure to teach modern experimental science, here, molecular quantum optics. We found a concept based on three educational principles that allows undergraduate students to become acquainted with procedures and concepts of a modern research field. We find a significant increase in student understanding using our Simulated Interactive Research Experiment (SiReX), by evaluating the learning outcomes with semi-structured interviews in a pre/post design. This suggests that this concept of an educational tool can be generalized to disseminate findings in other fields.

  20. Simulated Interactive Research Experiments as Educational Tools for Advanced Science

    PubMed Central

    Tomandl, Mathias; Mieling, Thomas; Losert-Valiente Kroon, Christiane M.; Hopf, Martin; Arndt, Markus

    2015-01-01

    Experimental research has become complex and thus a challenge to science education. Only very few students can typically be trained on advanced scientific equipment. It is therefore important to find new tools that allow all students to acquire laboratory skills individually and independently of where they are located. In a design-based research process we have investigated the feasibility of using a virtual laboratory as a photo-realistic and scientifically valid representation of advanced scientific infrastructure to teach modern experimental science, here, molecular quantum optics. We found a concept based on three educational principles that allows undergraduate students to become acquainted with procedures and concepts of a modern research field. We find a significant increase in student understanding using our Simulated Interactive Research Experiment (SiReX), by evaluating the learning outcomes with semi-structured interviews in a pre/post design. This suggests that this concept of an educational tool can be generalized to disseminate findings in other fields. PMID:26370627

  1. Advancements in Afterbody Radiative Heating Simulations for Earth Entry

    NASA Technical Reports Server (NTRS)

    Johnston, Christopher O.; Panesi, Marco; Brandis, Aaron M.

    2016-01-01

    Four advancements to the simulation of backshell radiative heating for Earth entry are presented. The first of these is the development of a flow field model that treats electronic levels of the dominant backshell radiator, N, as individual species. This is shown to allow improvements in the modeling of electron-ion recombination and two-temperature modeling, which are shown to increase backshell radiative heating by 10 to 40%. By computing the electronic state populations of N within the flow field solver, instead of through the quasi-steady state approximation in the radiation code, the coupling of radiative transition rates to the species continuity equations for the levels of N, including the impact of non-local absorption, becomes feasible. Implementation of this additional level of coupling between the flow field and radiation codes represents the second advancement presented in this work, which is shown to increase the backshell radiation by another 10 to 50%. The impact of radiative transition rates due to non-local absorption indicates the importance of accurate radiation transport in the relatively complex flow geometry of the backshell. This motivates the third advancement, which is the development of a ray-tracing radiation transport approach to compute the radiative transition rates and divergence of the radiative flux at every point for coupling to the flow field, thereby allowing the accuracy of the commonly applied tangent-slab approximation to be assessed for radiative source terms. For the sphere considered at lunar-return conditions, the tangent-slab approximation is shown to provide a sufficient level of accuracy for the radiative source terms, even for backshell cases. This is in contrast to the radiative flux to the surface, for which the two approaches differ by up to 40%. The final advancement presented is the development of a nonequilibrium model for NO radiation, which provides significant backshell
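
    For context, the radiative source terms discussed above derive from the radiative transfer equation along a ray, written here in its standard form (not quoted from the paper); the tangent-slab approximation integrates it in one dimension normal to the surface, while ray tracing integrates it along many lines of sight.

      % Radiative transfer along path length s for spectral intensity I_\nu,
      % with emission coefficient j_\nu and absorption coefficient \kappa_\nu.
      \frac{dI_\nu}{ds} = j_\nu - \kappa_\nu\, I_\nu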

  2. The advanced computational testing and simulation toolkit (ACTS)

    SciTech Connect

    Drummond, L.A.; Marques, O.

    2002-05-21

    During the past decades there has been continuous growth in the number of physical and societal problems that have been successfully studied and solved by means of computational modeling and simulation. A number of these are important scientific problems ranging in scale from the atomic to the cosmic. For example, ionization is a phenomenon as ubiquitous in modern society as the glow of fluorescent lights and the etching on silicon computer chips; but it was not until 1999 that researchers finally achieved a complete numerical solution to the simplest example of ionization, the collision of a hydrogen atom with an electron. On the opposite scale, cosmologists have long wondered whether the expansion of the Universe, which began with the Big Bang, would ever reverse itself, ending the Universe in a Big Crunch. In 2000, analysis of new measurements of the cosmic microwave background radiation showed that the geometry of the Universe is flat, and thus the Universe will continue expanding forever. Both of these discoveries depended on high performance computer simulations that utilized computational tools included in the Advanced Computational Testing and Simulation (ACTS) Toolkit. The ACTS Toolkit is an umbrella project that brought together a number of general purpose computational tool development projects funded and supported by the U.S. Department of Energy (DOE). These tools, which have been developed independently, mainly at DOE laboratories, make it easier for scientific code developers to write high performance applications for parallel computers. They tackle a number of computational issues that are common to a large number of scientific applications, mainly implementation of numerical algorithms, and support for code development, execution and optimization. The ACTS Toolkit Project enables the use of these tools by a much wider community of computational scientists, and promotes code portability, reusability, reduction of duplicate efforts

  3. Enabling Advanced Modeling and Simulations for Fuel-Flexible Combustors

    SciTech Connect

    Pitsch, Heinz

    2010-05-31

    The overall goal of the present project is to enable advanced modeling and simulations for the design and optimization of fuel-flexible turbine combustors. For this purpose we use a high fidelity, extensively-tested large-eddy simulation (LES) code and state-of-the-art models for premixed/partially-premixed turbulent combustion developed in the PI's group. In the frame of the present project, these techniques are applied, assessed, and improved for hydrogen enriched premixed and partially premixed gas-turbine combustion. Our innovative approaches include a completely consistent description of flame propagation; a coupled progress variable/level set method to resolve the detailed flame structure, and incorporation of thermal-diffusion (non-unity Lewis number) effects. In addition, we have developed a general flamelet-type transformation holding in the limits of both non-premixed and premixed burning. As a result, a model for partially premixed combustion has been derived. The coupled progress variable/level method and the general flamelet transformation were validated by LES of a lean-premixed low-swirl burner that has been studied experimentally at Lawrence Berkeley National Laboratory. The model is extended to include the non-unity Lewis number effects, which play a critical role in fuel-flexible combustor with high hydrogen content fuel. More specifically, a two-scalar model for lean hydrogen and hydrogen-enriched combustion is developed and validated against experimental and direct numerical simulation (DNS) data. Results are presented to emphasize the importance of non-unity Lewis number effects in the lean-premixed low-swirl burner of interest in this project. The proposed model gives improved results, which shows that the inclusion of the non-unity Lewis number effects is essential for accurate prediction of the lean-premixed low-swirl flame.
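
    For reference, the level-set description of premixed flame propagation used in the coupled progress variable/level set method is commonly written as the G-equation (standard form, not quoted from the report):

      % G-equation: the flame front is the iso-surface G(x,t) = G_0, advected
      % by the flow velocity u and propagating normal to itself at the
      % laminar burning speed s_L.
      \frac{\partial G}{\partial t} + \mathbf{u} \cdot \nabla G
        = s_L\, \lvert \nabla G \rvert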

  4. Enabling Advanced Modeling and Simulations for Fuel-Flexible Combustors

    SciTech Connect

    Heinz Pitsch

    2010-05-31

    The overall goal of the present project is to enable advanced modeling and simulations for the design and optimization of fuel-flexible turbine combustors. For this purpose we use a high-fidelity, extensively-tested large-eddy simulation (LES) code and state-of-the-art models for premixed/partially-premixed turbulent combustion developed in the PI's group. In the frame of the present project, these techniques are applied, assessed, and improved for hydrogen enriched premixed and partially premixed gas-turbine combustion. Our innovative approaches include a completely consistent description of flame propagation, a coupled progress variable/level set method to resolve the detailed flame structure, and incorporation of thermal-diffusion (non-unity Lewis number) effects. In addition, we have developed a general flamelet-type transformation holding in the limits of both non-premixed and premixed burning. As a result, a model for partially premixed combustion has been derived. The coupled progress variable/level method and the general flamelet transformation were validated by LES of a lean-premixed low-swirl burner that has been studied experimentally at Lawrence Berkeley National Laboratory. The model is extended to include the non-unity Lewis number effects, which play a critical role in fuel-flexible combustor with high hydrogen content fuel. More specifically, a two-scalar model for lean hydrogen and hydrogen-enriched combustion is developed and validated against experimental and direct numerical simulation (DNS) data. Results are presented to emphasize the importance of non-unity Lewis number effects in the lean-premixed low-swirl burner of interest in this project. The proposed model gives improved results, which shows that the inclusion of the non-unity Lewis number effects is essential for accurate prediction of the lean-premixed low-swirl flame.

  5. PoPLAR: Portal for Petascale Lifescience Applications and Research

    PubMed Central

    2013-01-01

    Background: We are focusing specifically on fast data analysis and retrieval in bioinformatics that will have a direct impact on the quality of human health and the environment. The exponential growth of data generated in biology research, from small atoms to big ecosystems, necessitates an increasingly large computational component to perform analyses. Novel DNA sequencing technologies and complementary high-throughput approaches--such as proteomics, genomics, metabolomics, and meta-genomics--drive data-intensive bioinformatics. While individual research centers or universities could once provide for these applications, this is no longer the case. Today, only specialized national centers can deliver the level of computing resources required to meet the challenges posed by rapid data growth and the resulting computational demand. Consequently, we are developing massively parallel applications to analyze the growing flood of biological data and contribute to the rapid discovery of novel knowledge. Methods: The efforts of previous National Science Foundation (NSF) projects provided for the generation of parallel modules for widely used bioinformatics applications on the Kraken supercomputer. We have profiled and optimized the code of some of the scientific community's most widely used desktop and small-cluster-based applications, including BLAST from the National Center for Biotechnology Information (NCBI), HMMER, and MUSCLE; scaled them to tens of thousands of cores on high-performance computing (HPC) architectures; made them robust and portable to next-generation architectures; and incorporated these parallel applications in science gateways with a web-based portal. Results: This paper will discuss the various developmental stages, challenges, and solutions involved in taking bioinformatics applications from the desktop to petascale with a front-end portal for very-large-scale data analysis in the life sciences. Conclusions: This research will help to bridge the gap

  6. ADVANCED TECHNIQUES FOR RESERVOIR SIMULATION AND MODELING OF NONCONVENTIONAL WELLS

    SciTech Connect

    Louis J. Durlofsky; Khalid Aziz

    2004-08-20

    Nonconventional wells, which include horizontal, deviated, multilateral and "smart" wells, offer great potential for the efficient management of oil and gas reservoirs. These wells are able to contact larger regions of the reservoir than conventional wells and can also be used to target isolated hydrocarbon accumulations. The use of nonconventional wells instrumented with downhole inflow control devices allows for even greater flexibility in production. Because nonconventional wells can be very expensive to drill, complete and instrument, it is important to be able to optimize their deployment, which requires the accurate prediction of their performance. However, predictions of nonconventional well performance are often inaccurate. This is likely due to inadequacies in some of the reservoir engineering and reservoir simulation tools used to model and optimize nonconventional well performance. A number of new issues arise in the modeling and optimization of nonconventional wells. For example, the optimal use of downhole inflow control devices has not been addressed for practical problems. In addition, the impact of geological and engineering uncertainty (e.g., valve reliability) has not been previously considered. In order to model and optimize nonconventional wells in different settings, it is essential that the tools be implemented into a general reservoir simulator. This simulator must be sufficiently general and robust and must in addition be linked to a sophisticated well model. Our research under this five year project addressed all of the key areas indicated above. The overall project was divided into three main categories: (1) advanced reservoir simulation techniques for modeling nonconventional wells; (2) improved techniques for computing well productivity (for use in reservoir engineering calculations) and for coupling the well to the simulator (which includes the accurate calculation of well index and the modeling of multiphase flow in the wellbore
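
    The well-index calculation referenced above is classically handled with Peaceman's formula, reproduced here as standard background rather than as this project's specific result:

      % Peaceman well index for a vertical well in an isotropic grid block of
      % horizontal size \Delta x, thickness \Delta z, permeability k,
      % wellbore radius r_w and skin s; r_o is the equivalent block radius.
      WI = \frac{2\pi\, k\, \Delta z}{\ln(r_o / r_w) + s},
      \qquad r_o \approx 0.2\, \Delta x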

  7. An efficient time advancing strategy for energy-preserving simulations

    NASA Astrophysics Data System (ADS)

    Capuano, F.; Coppola, G.; de Luca, L.

    2015-08-01

    Energy-conserving numerical methods are widely employed within the broad area of convection-dominated systems. Semi-discrete conservation of energy is usually obtained by adopting the so-called skew-symmetric splitting of the non-linear convective term, defined as a suitable average of the divergence and advective forms. Although generally allowing global conservation of kinetic energy, it has the drawback of being roughly twice as expensive as standard divergence or advective forms alone. In this paper, a general theoretical framework has been developed to derive an efficient time-advancement strategy in the context of explicit Runge-Kutta schemes. The novel technique retains the conservation properties of skew-symmetric-based discretizations at a reduced computational cost. It is found that optimal energy conservation can be achieved by properly constructed Runge-Kutta methods in which only divergence and advective forms for the convective term are used. As a consequence, a considerable improvement in computational efficiency over existing practices is achieved. The overall procedure has proved to be able to produce new schemes with a specified order of accuracy on both solution and energy. The effectiveness of the method as well as the asymptotic behavior of the schemes is demonstrated by numerical simulation of Burgers' equation.
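
    The skew-symmetric splitting referred to above is the average of the divergence and advective forms of the convective term; in standard notation:

      % Skew-symmetric form: the mean of the divergence form
      % \nabla\cdot(\mathbf{u}\mathbf{u}) and the advective form
      % (\mathbf{u}\cdot\nabla)\mathbf{u}. It conserves kinetic energy
      % semi-discretely but costs roughly twice either form alone, which is
      % the overhead the proposed Runge-Kutta strategy removes.
      C_{skew}(\mathbf{u}) = \tfrac{1}{2}\, \nabla \cdot (\mathbf{u}\mathbf{u})
                           + \tfrac{1}{2}\, (\mathbf{u} \cdot \nabla)\, \mathbf{u}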

  8. An Advanced Leakage Scheme for Neutrino Treatment in Astrophysical Simulations

    NASA Astrophysics Data System (ADS)

    Perego, A.; Cabezón, R. M.; Käppeli, R.

    2016-04-01

    We present an Advanced Spectral Leakage (ASL) scheme to model neutrinos in the context of core-collapse supernovae (CCSNe) and compact binary mergers. Based on previous gray leakage schemes, the ASL scheme computes the neutrino cooling rates by interpolating local production and diffusion rates (relevant in optically thin and thick regimes, respectively) separately for discretized values of the neutrino energy. Neutrino trapped components are also modeled, based on equilibrium and timescale arguments. The better accuracy achieved by the spectral treatment allows a more reliable computation of neutrino heating rates in optically thin conditions. The scheme has been calibrated and tested against Boltzmann transport in the context of Newtonian spherically symmetric models of CCSNe. ASL shows a very good qualitative and a partial quantitative agreement for key quantities from collapse to a few hundreds of milliseconds after core bounce. We have proved the adaptability and flexibility of our ASL scheme, coupling it to an axisymmetric Eulerian and to a three-dimensional smoothed particle hydrodynamics code to simulate core collapse. Therefore, the neutrino treatment presented here is ideal for large parameter-space explorations, parametric studies, high-resolution tests, code developments, and long-term modeling of asymmetric configurations, where more detailed neutrino treatments are not available or are currently computationally too expensive.
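
    The interpolation of production and diffusion rates at the heart of leakage schemes is often written as a harmonic-mean-like blend, so that the slower process dominates in each regime; the gray form below is generic background (the ASL scheme applies an analogous blend separately in each neutrino energy bin):

      % Generic leakage interpolation between the local production rate
      % R_prod (valid where optically thin) and the diffusion rate R_diff
      % (valid where optically thick).
      R_{eff} = \frac{R_{prod}\, R_{diff}}{R_{prod} + R_{diff}}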

  9. AN ADVANCED LEAKAGE SCHEME FOR NEUTRINO TREATMENT IN ASTROPHYSICAL SIMULATIONS

    SciTech Connect

    Perego, A.; Cabezón, R. M.; Käppeli, R.

    2016-04-15

    We present an Advanced Spectral Leakage (ASL) scheme to model neutrinos in the context of core-collapse supernovae (CCSNe) and compact binary mergers. Based on previous gray leakage schemes, the ASL scheme computes the neutrino cooling rates by interpolating local production and diffusion rates (relevant in optically thin and thick regimes, respectively) separately for discretized values of the neutrino energy. Neutrino trapped components are also modeled, based on equilibrium and timescale arguments. The better accuracy achieved by the spectral treatment allows a more reliable computation of neutrino heating rates in optically thin conditions. The scheme has been calibrated and tested against Boltzmann transport in the context of Newtonian spherically symmetric models of CCSNe. ASL shows a very good qualitative and a partial quantitative agreement for key quantities from collapse to a few hundreds of milliseconds after core bounce. We have proved the adaptability and flexibility of our ASL scheme, coupling it to an axisymmetric Eulerian and to a three-dimensional smoothed particle hydrodynamics code to simulate core collapse. Therefore, the neutrino treatment presented here is ideal for large parameter-space explorations, parametric studies, high-resolution tests, code developments, and long-term modeling of asymmetric configurations, where more detailed neutrino treatments are not available or are currently computationally too expensive.

  10. EarthServer: an Intercontinental Collaboration on Petascale Datacubes

    NASA Astrophysics Data System (ADS)

    Baumann, P.; Rossi, A. P.

    2015-12-01

    With the unprecedented increase of orbital sensor, in-situ measurement, and simulation data there is a rich, yet so far unleveraged, potential for getting insights from dissecting datasets and rejoining them with other datasets. Obviously, the goal is to allow users to "ask any question, any time", thereby enabling them to "build their own product on the go". One of the most influential initiatives in Big Geo Data is EarthServer, which has demonstrated new directions for flexible, scalable EO services based on innovative NewSQL technology. Researchers from Europe, the US and recently Australia have teamed up to rigorously materialize the concept of the datacube. Such a datacube may have spatial and temporal dimensions (such as a satellite image time series) and may unite an unlimited number of scenes. Independently of whatever efficient data structuring a server network may perform internally, users will always see just a few datacubes they can slice and dice. EarthServer has established client and server technology for such spatio-temporal datacubes. The underlying scalable array engine, rasdaman, enables direct interaction, including 3-D visualization, what-if scenarios, common EO data processing, and general analytics. Services exclusively rely on the open OGC "Big Geo Data" standards suite, the Web Coverage Service (WCS) including the Web Coverage Processing Service (WCPS). Conversely, EarthServer has significantly shaped and advanced the OGC Big Geo Data standards landscape based on the experience gained. Phase 1 of EarthServer has advanced scalable array database technology into 100+ TB services; in phase 2, petabyte datacubes will be built in Europe and Australia to perform ad-hoc querying and merging. Standing between EarthServer phase 1 (from 2011 through 2014) and phase 2 (from 2015 through 2018), we present the main results and outline the impact on the international standards landscape; effectively, the Big Geo Data standards established through initiative of
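
    The "slice and dice" interaction is expressed through WCPS queries that push processing to the server so only small results travel back. The snippet below issues a WCPS request with Python's requests library in the key-value style rasdaman endpoints accept; the endpoint URL and coverage name are placeholders, not EarthServer's actual services.

      import requests

      ENDPOINT = "https://example.org/rasdaman/ows"   # placeholder endpoint
      QUERY = """
      for $c in (MeanSurfaceTemperature)
      return avg($c[ansi("2014-07-01")])
      """

      # Server-side aggregation: the datacube is reduced remotely and only a
      # scalar is returned to the client.
      resp = requests.get(ENDPOINT, params={
          "service": "WCS",
          "version": "2.0.1",
          "request": "ProcessCoverages",
          "query": QUERY,
      })
      print(resp.status_code, resp.text[:200])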

  11. Modeling and simulation challenges pursued by the Consortium for Advanced Simulation of Light Water Reactors (CASL)

    NASA Astrophysics Data System (ADS)

    Turinsky, Paul J.; Kothe, Douglas B.

    2016-05-01

    The Consortium for the Advanced Simulation of Light Water Reactors (CASL), the first Energy Innovation Hub of the Department of Energy, was established in 2010 with the goal of providing modeling and simulation (M&S) capabilities that support and accelerate the improvement of nuclear energy's economic competitiveness and the reduction of spent nuclear fuel volume per unit energy, and all while assuring nuclear safety. To accomplish this requires advances in M&S capabilities in radiation transport, thermal-hydraulics, fuel performance and corrosion chemistry. To focus CASL's R&D, industry challenge problems have been defined, which equate with long standing issues of the nuclear power industry that M&S can assist in addressing. To date CASL has developed a multi-physics “core simulator” based upon pin-resolved radiation transport and subchannel (within fuel assembly) thermal-hydraulics, capitalizing on the capabilities of high performance computing. CASL's fuel performance M&S capability can also be optionally integrated into the core simulator, yielding a coupled multi-physics capability with untapped predictive potential. Material models have been developed to enhance predictive capabilities of fuel clad creep and growth, along with deeper understanding of zirconium alloy clad oxidation and hydrogen pickup. Understanding of corrosion chemistry (e.g., CRUD formation) has evolved at all scales: micro, meso and macro. CFD R&D has focused on improvement in closure models for subcooled boiling and bubbly flow, and the formulation of robust numerical solution algorithms. For multiphysics integration, several iterative acceleration methods have been assessed, illuminating areas where further research is needed. Finally, uncertainty quantification and data assimilation techniques, based upon sampling approaches, have been made more feasible for practicing nuclear engineers via R&D on dimensional reduction and biased sampling. Industry adoption of CASL's evolving M

  12. Advanced wellbore thermal simulator GEOTEMP2 research report

    SciTech Connect

    Mitchell, R.F.

    1982-02-01

    The development of the GEOTEMP2 wellbore thermal simulator is described. The major technical features include a general purpose air and mist drilling simulator and a two-phase steam flow simulator that can model either injection or production.

  13. Collaboratively Architecting a Scalable and Adaptable Petascale Infrastructure to Support Transdisciplinary Scientific Research for the Australian Earth and Environmental Sciences

    NASA Astrophysics Data System (ADS)

    Wyborn, L. A.; Evans, B. J. K.; Pugh, T.; Lescinsky, D. T.; Foster, C.; Uhlherr, A.

    2014-12-01

    The National Computational Infrastructure (NCI) at the Australian National University (ANU) is a partnership between CSIRO, ANU, Bureau of Meteorology (BoM) and Geoscience Australia. Recent investments in a 1.2 PFlop Supercomputer (Raijin), ~ 20 PB data storage using Lustre filesystems and a 3000 core high performance cloud have created a hybrid platform for higher performance computing and data-intensive science to enable large scale earth and climate systems modelling and analysis. There are > 3000 users actively logging in and > 600 projects on the NCI system. Efficiently scaling and adapting data and software systems to petascale infrastructures requires the collaborative development of an architecture that is designed, programmed and operated to enable users to interactively invoke different forms of in-situ computation over complex and large scale data collections. NCI makes available major and long tail data collections from both the government and research sectors based on six themes: 1) weather, climate and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology and 6) astronomy, bio and social. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. Collections are the operational form for data management and access. Similar data types from individual custodians are managed cohesively. Use of international standards for discovery and interoperability allow complex interactions within and between the collections. This design facilitates a transdisciplinary approach to research and enables a shift from small scale, 'stove-piped' science efforts to large scale, collaborative systems science. This new and complex infrastructure requires a move to shared, globally trusted software frameworks that can be maintained and updated. Workflow engines become essential and need to integrate provenance, versioning, traceability, repeatability

  14. Genome Reshuffling for Advanced Intercross Permutation (GRAIP): Simulation and permutation for advanced intercross population analysis

    SciTech Connect

    Pierce, Jeremy; Broman, Karl; Lu, Lu; Chesler, Elissa J; Zhou, Guomin; Airey, David; Birmingham, Amanda; Williams, Robert

    2008-04-01

    Background: Advanced intercross lines (AIL) are segregating populations created using a multi-generation breeding protocol for fine mapping quantitative trait loci (QTL) in mice and other organisms. Applying QTL mapping methods for intercross and backcross populations, often followed by naïve permutation of individuals and phenotypes, does not account for the effect of AIL family structure in which final generations have been expanded, and leads to inappropriately low significance thresholds. The critical problem with naïve mapping approaches in AIL populations is that the individual is not an exchangeable unit. Methodology/Principal Findings: The effect of family structure has immediate implications for optimal AIL creation (many crosses, few animals per cross, and population expansion before the final generation) and we discuss these and the utility of AIL populations for QTL fine mapping. We also describe Genome Reshuffling for Advanced Intercross Permutation (GRAIP), a method for analyzing AIL data that accounts for family structure. GRAIP permutes a more interchangeable unit in the final generation crosses (the parental genome) and simulates regeneration of a permuted AIL population based on exchanged parental identities. GRAIP determines appropriate genome-wide significance thresholds and locus-specific P-values for AILs and other populations with similar family structures. We contrast GRAIP with naïve permutation using a large densely genotyped mouse AIL population (1333 individuals from 32 crosses). A naïve permutation using coat color as a model phenotype demonstrates high false-positive locus identification and uncertain significance levels, which are corrected using GRAIP. GRAIP also detects an established hippocampus weight locus and a new locus, Hipp9a. Conclusions and Significance: GRAIP determines appropriate genome-wide significance thresholds and locus-specific P-values for AILs and other populations with similar family structures. The effect of
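
    The role of permutation in setting genome-wide thresholds can be sketched generically: shuffle the phenotype, record the maximum association statistic across all loci, and repeat to build a null distribution of maxima. The toy code below performs exactly the naive individual-level shuffle that the paragraph above criticizes; GRAIP's refinement, permuting parental genomes and regenerating the population so family structure is respected, is not reproduced here.

      import numpy as np

      rng = np.random.default_rng(2)
      n_mice, n_loci = 200, 500
      genotypes = rng.integers(0, 3, size=(n_mice, n_loci)).astype(float)
      phenotype = rng.normal(size=n_mice)               # null phenotype

      def max_abs_corr(y):
          # Maximum |correlation| between the phenotype and any locus.
          g = (genotypes - genotypes.mean(0)) / genotypes.std(0)
          yz = (y - y.mean()) / y.std()
          return np.abs(g.T @ yz / n_mice).max()

      # Null distribution of the genome-wide maximum under naive shuffling.
      null_max = [max_abs_corr(rng.permutation(phenotype)) for _ in range(200)]
      threshold = float(np.quantile(null_max, 0.95))
      print(f"naive genome-wide 5% threshold on max |r|: {threshold:.3f}")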

  15. Advanced Simulation Capability for Environmental Management (ASCEM) Phase II Demonstration

    SciTech Connect

    Freshley, M.; Hubbard, S.; Flach, G.; Freedman, V.; Agarwal, D.; Andre, B.; Bott, Y.; Chen, X.; Davis, J.; Faybishenko, B.; Gorton, I.; Murray, C.; Moulton, D.; Meyer, J.; Rockhold, M.; Shoshani, A.; Steefel, C.; Wainwright, H.; Waichler, S.

    2012-09-28

    In 2009, the National Academies of Science (NAS) reviewed and validated the U.S. Department of Energy Office of Environmental Management (EM) Technology Program in its publication, Advice on the Department of Energy’s Cleanup Technology Roadmap: Gaps and Bridges. The NAS report outlined prioritization needs for the Groundwater and Soil Remediation Roadmap, concluded that contaminant behavior in the subsurface is poorly understood, and recommended further research in this area as a high priority. To address this NAS concern, the EM Office of Site Restoration began supporting the development of the Advanced Simulation Capability for Environmental Management (ASCEM). ASCEM is a state-of-the-art scientific approach that uses an integration of toolsets for understanding and predicting contaminant fate and transport in natural and engineered systems. The ASCEM modeling toolset is modular and open source. It is divided into three thrust areas: Multi-Process High Performance Computing (HPC), Platform and Integrated Toolsets, and Site Applications. The ASCEM toolsets will facilitate integrated approaches to modeling and site characterization that enable robust and standardized assessments of performance and risk for EM cleanup and closure activities. During fiscal year 2012, the ASCEM project continued to make significant progress in capabilities development. Capability development occurred in both the Platform and Integrated Toolsets and Multi-Process HPC Simulator areas. The new Platform and Integrated Toolsets capabilities provide the user an interface and the tools necessary for end-to-end model development that includes conceptual model definition, data management for model input, model calibration and uncertainty analysis, and model output processing including visualization. The new HPC Simulator capabilities target increased functionality of process model representations, toolsets for interaction with the Platform, and model confidence testing and verification for

  16. Modeling and simulation challenges pursued by the Consortium for Advanced Simulation of Light Water Reactors (CASL)

    SciTech Connect

    Turinsky, Paul J.; Kothe, Douglas B.

    2016-05-15

    The Consortium for the Advanced Simulation of Light Water Reactors (CASL), the first Energy Innovation Hub of the Department of Energy, was established in 2010 with the goal of providing modeling and simulation (M&S) capabilities that support and accelerate the improvement of nuclear energy's economic competitiveness and the reduction of spent nuclear fuel volume per unit energy, and all while assuring nuclear safety. To accomplish this requires advances in M&S capabilities in radiation transport, thermal-hydraulics, fuel performance and corrosion chemistry. To focus CASL's R&D, industry challenge problems have been defined, which equate with long standing issues of the nuclear power industry that M&S can assist in addressing. To date CASL has developed a multi-physics “core simulator” based upon pin-resolved radiation transport and subchannel (within fuel assembly) thermal-hydraulics, capitalizing on the capabilities of high performance computing. CASL's fuel performance M&S capability can also be optionally integrated into the core simulator, yielding a coupled multi-physics capability with untapped predictive potential. Material models have been developed to enhance predictive capabilities of fuel clad creep and growth, along with deeper understanding of zirconium alloy clad oxidation and hydrogen pickup. Understanding of corrosion chemistry (e.g., CRUD formation) has evolved at all scales: micro, meso and macro. CFD R&D has focused on improvement in closure models for subcooled boiling and bubbly flow, and the formulation of robust numerical solution algorithms. For multiphysics integration, several iterative acceleration methods have been assessed, illuminating areas where further research is needed. Finally, uncertainty quantification and data assimilation techniques, based upon sampling approaches, have been made more feasible for practicing nuclear engineers via R&D on dimensional reduction and biased sampling. Industry adoption of CASL's evolving M

  17. UCLA Final Technical Report for the "Community Petascale Project for Accelerator Science and Simulation”.

    SciTech Connect

    Mori, Warren

    2015-08-14

    The UCLA Plasma Simulation Group is a major partner of the “Community Petascale Project for Accelerator Science and Simulation”. This is the final technical report. We include an overall summary, a list of publications, progress for the most recent year, and individual progress reports for each year. We have made tremendous progress during the three years. SciDAC funds have contributed to the development of a large number of skeleton codes that illustrate how to write PIC codes with a hierarchy of parallelism. These codes cover 2D and 3D as well as electrostatic solvers (which are used in beam dynamics codes and quasi-static codes) and electromagnetic solvers (which are used in plasma-based accelerator codes). We also used these ideas to develop a GPU-enabled version of OSIRIS. SciDAC funds also contributed to the development of strategies to eliminate the Numerical Cerenkov Instability (NCI), which is an issue when carrying out laser wakefield accelerator (LWFA) simulations in a boosted frame and when quantifying the emittance and energy spread of self-injected electron beams. This work included the development of a new code called UPIC-EMMA, an FFT-based electromagnetic PIC code, and of new hybrid algorithms in OSIRIS. A new hybrid (PIC in r-z and gridless in φ) algorithm was implemented in OSIRIS. In this algorithm the fields and current are expanded into azimuthal harmonics and the complex amplitude for each harmonic is calculated separately. The contributions from each harmonic are summed and then used to push the particles. This algorithm permits modeling plasma-based acceleration with some 3D effects but with the computational load of a 2D r-z PIC code. We developed a rigorously charge-conserving current deposit for this algorithm. Very recently, we made progress in combining the speed-up from the quasi-3D algorithm with that from the Lorentz boosted frame. SciDAC funds also contributed to the improvement and speed-up of the quasi-static PIC
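
    The azimuthal-harmonic expansion described above can be written schematically as follows (standard quasi-3D notation, not quoted from the report):

      % Quasi-3D r-z PIC: each field or current component F is carried as a
      % few complex amplitudes F_m(r,z,t); the sum is truncated at low M
      % because laser- and beam-driven wakes are nearly axisymmetric, giving
      % roughly 2D cost while retaining some 3D physics.
      F(r, \varphi, z, t) = \mathrm{Re} \sum_{m=0}^{M} F_m(r, z, t)\, e^{-i m \varphi}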

  18. Genome Reshuffling for Advanced Intercross Permutation (GRAIP): Simulation and permutation for advanced intercross population analysis

    SciTech Connect

    Pierce, Jeremy; Broman, Karl; Chesler, Elissa J; Zhou, Guomin; Airey, David; Birmingham, Amanda; Williams, Robert

    2008-01-01

    Background: Advanced intercross lines (AIL) are segregating populations created using a multigeneration breeding protocol for fine mapping complex traits in mice and other organisms. Applying quantitative trait locus (QTL) mapping methods for intercross and backcross populations, often followed by naïve permutation of individuals and phenotypes, does not account for the effect of family structure in AIL populations in which final generations have been expanded, and leads to inappropriately low significance thresholds. The critical problem with a naïve mapping approach in such AIL populations is that the individual is not an exchangeable unit given the family structure. Methodology/Principal Findings: The effect of family structure has immediate implications for optimal AIL creation (many crosses, few animals per cross, and population expansion before the final generation) and we discuss these and the utility of AIL populations for QTL fine mapping. We also describe Genome Reshuffling for Advanced Intercross Permutation (GRAIP), a method for analyzing AIL data that accounts for family structure. GRAIP permutes a more interchangeable unit in the final generation crosses (the parental genome) and simulates regeneration of a permuted AIL population based on exchanged parental identities. GRAIP determines appropriate genome-wide significance thresholds and locus-specific P-values for AILs and other populations with similar family structures. We contrast GRAIP with naïve permutation using a large densely genotyped mouse AIL population (1333 individuals from 32 crosses). A naïve permutation using coat color as a model phenotype demonstrates high false-positive locus identification and uncertain significance levels in our AIL population, which are corrected by use of GRAIP. We also show that GRAIP detects an established hippocampus weight locus and a new locus, Hipp9a. Conclusions and Significance: GRAIP determines appropriate genome-wide significance thresholds

  19. Advancement of DOE's EnergyPlus Building Energy Simulation Program

    SciTech Connect

    Gu, Lixing; Shirey, Don; Raustad, Richard; Nigusse, Bereket; Sharma, Chandan; Lawrie, Linda; Strand, Rick; Pedersen, Curt; Fisher, Dan; Lee, Edwin; Witte, Mike; Glazer, Jason; Barnaby, Chip

    2011-09-30

    EnergyPlus™ is a new-generation computer software analysis tool that has been developed, tested, and commercialized to support DOE's Building Technologies (BT) Program in terms of whole-building, component, and systems R&D (http://www.energyplus.gov). It is also being used to support evaluation and decision making of zero energy building (ZEB) energy efficiency and supply technologies during new building design and existing building retrofits. The 5-year project was managed by the National Energy Technology Laboratory and was divided into five budget periods between 2006 and 2011. During the project period, 11 versions of EnergyPlus were released. This report summarizes work performed by an EnergyPlus development team led by the University of Central Florida's Florida Solar Energy Center (UCF/FSEC). The team members consisted of DHL Consulting, C. O. Pedersen Associates, University of Illinois at Urbana-Champaign, Oklahoma State University, GARD Analytics, Inc., and WrightSoft Corporation. The project tasks involved new feature development, testing and validation, user support and training, and general EnergyPlus support. The team developed 146 new features during the 5-year period to advance the EnergyPlus capabilities. Annual contributions of new features were 7 in budget period 1, 19 in period 2, 36 in period 3, 41 in period 4, and 43 in period 5. The testing and validation task focused on running the test suite and publishing reports, developing new IEA test suite cases, testing and validating new source code, addressing change requests, and creating and testing installation packages. The user support and training task provided support for users and interface developers, and organized and taught workshops. The general support task involved upgrading StarTeam (team sharing) software and updating existing utility software. The project met the DOE objectives and completed all tasks successfully. Although the EnergyPlus software was enhanced significantly

  20. Advanced Simulation in Undergraduate Pilot Training: Systems Integration. Final Report (February 1972-March 1975).

    ERIC Educational Resources Information Center

    Larson, D. F.; Terry, C.

    The Advanced Simulator for Undergraduate Pilot Training (ASUPT) was designed to investigate the role of simulation in the future Undergraduate Pilot Training (UPT) program. The problem addressed in this report was one of integrating two unlike components into one synchronized system. These two components were the Basic T-37 Simulators and their…

  1. Development of Kinetic Mechanisms for Next-Generation Fuels and CFD Simulation of Advanced Combustion Engines

    SciTech Connect

    Pitz, William J.; McNenly, Matt J.; Whitesides, Russell; Mehl, Marco; Killingsworth, Nick J.; Westbrook, Charles K.

    2015-12-17

    Predictive chemical kinetic models are needed to represent next-generation fuel components and their mixtures with conventional gasoline and diesel fuels. These kinetic models will allow the prediction of the effect of alternative fuel blends in CFD simulations of advanced spark-ignition and compression-ignition engines. Enabled by kinetic models, CFD simulations can be used to optimize fuel formulations for advanced combustion engines so that maximum engine efficiency, fossil fuel displacement goals, and low pollutant emission goals can be achieved.
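
    As an illustration of how such kinetic models feed simulation, the sketch below computes a constant-volume ignition delay with the open-source Cantera toolkit and its bundled GRI-Mech 3.0 methane mechanism; the fuel, conditions, and ignition criterion are arbitrary example choices, not values or code from this project.

      import numpy as np
      import cantera as ct

      # Adiabatic constant-volume ignition: the kinetic mechanism drives the
      # reactor; ignition delay is taken at the steepest temperature rise.
      gas = ct.Solution("gri30.yaml")     # GRI-Mech 3.0, shipped with Cantera
      gas.TPX = 1000.0, 20.0 * ct.one_atm, "CH4:1, O2:2, N2:7.52"

      reactor = ct.IdealGasReactor(gas)
      net = ct.ReactorNet([reactor])

      times, temps = [], []
      while net.time < 0.05:              # integrate 50 ms of physical time
          net.step()
          times.append(net.time)
          temps.append(reactor.T)

      times, temps = np.array(times), np.array(temps)
      tau = times[np.argmax(np.gradient(temps, times))]
      print(f"ignition delay ~ {tau * 1e3:.2f} ms at 1000 K, 20 atm")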

  2. Advanced beam-dynamics simulation tools for RIA.

    SciTech Connect

    Garnett, R. W.; Wangler, T. P.; Billen, J. H.; Qiang, J.; Ryne, R.; Crandall, K. R.; Ostroumov, P.; York, R.; Zhao, Q.; Physics; LANL; LBNL; Tech Source; Michigan State Univ.

    2005-01-01

    We are developing multi-particle beam-dynamics simulation codes for RIA driver-linac simulations extending from the low-energy beam transport (LEBT) line to the end of the linac. These codes run on the NERSC parallel supercomputing platforms at LBNL, which allow us to run simulations with large numbers of macroparticles. The codes have the physics capabilities needed for RIA, including transport and acceleration of multiple-charge-state beams, beam-line elements such as high-voltage platforms within the linac, interdigital accelerating structures, charge-stripper foils, and capabilities for handling the effects of machine errors and other off-normal conditions. This year will mark the end of our project. In this paper we present the status of the work, describe some recent additions to the codes, and show some preliminary simulation results.
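
    The production RIA codes named above include space charge, multiple charge states, RF gaps, and machine-error studies; none of that fits in a few lines. As a hedged sketch of the core of any multi-particle beam-dynamics code, the fragment below tracks macroparticles through a drift-quadrupole-drift line with linear transfer matrices; all parameter values are illustrative.

    ```python
    # Minimal sketch: transfer-matrix tracking of macroparticles through a
    # drift-quad-drift line in one transverse plane. The production RIA codes
    # add space charge, multiple charge states, RF gaps, and error studies.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000                                  # macroparticle count
    # Columns: x [m], x' [rad]; sample a small Gaussian beam.
    beam = rng.normal(0.0, [1e-3, 1e-4], size=(n, 2))

    def drift(L):
        return np.array([[1.0, L], [0.0, 1.0]])

    def thin_quad(f):                            # focal length f [m]
        return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

    line = [drift(0.5), thin_quad(0.8), drift(0.5)]
    for M in line:
        beam = beam @ M.T                        # apply element to every particle

    print("rms x after line: %.3e m" % beam[:, 0].std())
    ```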

  3. Advanced Simulator Development for Power Flow and Sources

    DTIC Science & Technology

    2006-02-01

    …specifications for sub-system (primary energy store, water pulse compression/transmission lines, vacuum power flow) design. Using our experience with pulsed…also enable beneficial upgrades to existing simulator facilities. …minimize cost for large dose X area products. Based upon simple scaling from existing pulsed power simulators, we assumed that we could achieve yields… (Subject terms: Marx generator; plasma radiation source; pulsed power. 109 pages.)

  4. [Research advances in soil nitrogen cycling models and their simulation].

    PubMed

    Tang, Guoyong; Huang, Daoyou; Tong, Chengli; Zhang, Wenju; Wu, Jinshui

    2005-11-01

    Nitrogen is one of the essential nutrients for plants, and it is also a primary element in environmental pollution. Much research has addressed the contribution of agricultural activities to environmental pollution by nitrogenous compounds, and the focus is how to correctly simulate soil nitrogen cycling processes. In this paper, the primary soil nitrogen cycling processes are briefly reviewed, 13 cycling models and 6 simulated cycling processes are introduced, and the parameterization of the models is discussed.
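
    The review itself presents no equations, but the process models it surveys are typically first-order rate laws coupled into ODE systems. The sketch below is a deliberately minimal example of that structure (mineralization, nitrification, leaching); the pools, rate constants, and initial values are illustrative assumptions, not taken from any of the 13 reviewed models.

    ```python
    # Minimal sketch of a first-order soil nitrogen cycling model: organic N
    # mineralizes to ammonium, which nitrifies to nitrate; nitrate can leach.
    # Rate constants are illustrative, not taken from the reviewed models.
    import numpy as np
    from scipy.integrate import solve_ivp

    K_MIN, K_NIT, K_LEACH = 0.002, 0.05, 0.01    # 1/day (assumed)

    def rhs(t, y):
        n_org, nh4, no3 = y
        mineralization = K_MIN * n_org
        nitrification = K_NIT * nh4
        leaching = K_LEACH * no3
        return [-mineralization,
                mineralization - nitrification,
                nitrification - leaching]

    sol = solve_ivp(rhs, (0, 365), [1000.0, 5.0, 10.0],   # kg N/ha initial pools
                    t_eval=np.linspace(0, 365, 366))
    print("nitrate after one year: %.1f kg N/ha" % sol.y[2, -1])
    ```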

  5. Advanced Thermal Simulator Testing: Thermal Analysis and Test Results

    NASA Technical Reports Server (NTRS)

    Bragg-Sitton, Shannon M.; Dickens, Ricky; Dixon, David; Reid, Robert; Adams, Mike; Davis, Joe

    2008-01-01

    Work at the NASA Marshall Space Flight Center seeks to develop high fidelity, electrically heated thermal simulators that represent fuel elements in a nuclear reactor design to support non-nuclear testing applicable to the development of a space nuclear power or propulsion system. Comparison between the fuel pins and thermal simulators is made at the outer fuel clad surface, which corresponds to the outer sheath surface in the thermal simulator. The thermal simulators that are currently being tested correspond to a SNAP derivative reactor design that could be applied for Lunar surface power. These simulators are designed to meet the geometric and power requirements of a proposed surface power reactor design, accommodate testing of various axial power profiles, and incorporate imbedded instrumentation. This paper reports the results of thermal simulator analysis and testing in a bare element configuration, which does not incorporate active heat removal, and testing in a water-cooled calorimeter designed to mimic the heat removal that would be experienced in a reactor core.

  6. Advanced Thermal Simulator Testing: Thermal Analysis and Test Results

    SciTech Connect

    Bragg-Sitton, Shannon M.; Dickens, Ricky; Dixon, David; Reid, Robert; Adams, Mike; Davis, Joe

    2008-01-21

    Work at the NASA Marshall Space Flight Center seeks to develop high fidelity, electrically heated thermal simulators that represent fuel elements in a nuclear reactor design to support non-nuclear testing applicable to the potential development of a space nuclear power or propulsion system. Comparison between the fuel pins and thermal simulators is made at the outer fuel clad surface, which corresponds to the outer sheath surface in the thermal simulator. The thermal simulators that are currently being tested correspond to a liquid metal cooled reactor design that could be applied for Lunar surface power. These simulators are designed to meet the geometric and power requirements of a proposed surface power reactor design, accommodate testing of various axial power profiles, and incorporate imbedded instrumentation. This paper reports the results of thermal simulator analysis and testing in a bare element configuration, which does not incorporate active heat removal, and testing in a water-cooled calorimeter designed to mimic the heat removal that would be experienced in a reactor core.

  7. Advanced SAR simulator with multi-beam interferometric capabilities

    NASA Astrophysics Data System (ADS)

    Reppucci, Antonio; Márquez, José; Cazcarra, Victor; Ruffini, Giulio

    2014-10-01

    State-of-the-art simulations are of great interest when designing a new instrument, studying the imaging mechanisms of a given scenario, or designing inversion algorithms, as they allow the effects of different instrument configurations and target compositions to be analyzed and understood. In the framework of studies of a new instrument devoted to estimating ocean surface movements using Synthetic Aperture Radar along-track interferometry (SAR-ATI), an end-to-end simulator has been developed. The simulator, built in a highly modular way to allow easy integration of different processing features, handles all the basic operations involved in an end-to-end scenario. This includes computation of the platform (airborne/spaceborne) position and velocity and the geometric parameters defining the SAR scene, surface definition, backscattering computation, atmospheric attenuation, instrument configuration, and simulation of the transmission/reception chains and the raw data. In addition, the simulator provides an InSAR processing suite and a sea-surface movement retrieval module. Up to four beams (each composed of a monostatic and a bistatic channel) can be activated. Each channel provides raw data and SLC images, with a choice between Strip-map and ScanSAR modes. Moreover, the software offers radiometric sensitivity analysis and error analysis due to atmospheric disturbances, instrument noise, interferogram phase noise, and platform velocity and attitude variations. In this paper, the architecture and capabilities of this simulator are presented, along with meaningful simulation examples.
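
    The retrieval module mentioned above ultimately rests on the standard along-track interferometry relation between interferometric phase and line-of-sight surface velocity. The sketch below states that relation in code; the monostatic/bistatic time-lag convention (tau = B / 2v) and all parameter values are assumptions for illustration, not taken from the paper.

    ```python
    # Minimal sketch of the core along-track interferometry (ATI) relation the
    # retrieval module relies on: interferometric phase maps to line-of-sight
    # surface velocity. Parameter values are illustrative assumptions.
    import numpy as np

    def ati_los_velocity(phase_rad, wavelength_m, baseline_m, v_platform_ms):
        """Line-of-sight velocity from ATI phase.

        For a monostatic/bistatic channel pair the effective time lag between
        the two acquisitions is tau = B / (2 * v_platform); the ATI phase is
        phi = (4 * pi / lambda) * v_los * tau.
        """
        tau = baseline_m / (2.0 * v_platform_ms)
        return phase_rad * wavelength_m / (4.0 * np.pi * tau)

    # Example: X-band (3.1 cm), 2 m baseline, 7.5 km/s spaceborne platform.
    print("%.3f m/s" % ati_los_velocity(0.5, 0.031, 2.0, 7500.0))
    ```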

  8. Fortran Transformational Tools in Support of Scientific Application Development for Petascale Computer Architectures

    SciTech Connect

    Sottille, Matthew

    2013-09-12

    This document is the final report for a multi-year effort building infrastructure to support tool development for Fortran programs. We also investigated static analysis and code transformation methods relevant to scientific programmers who are writing Fortran programs for petascale-class high performance computing systems. This report details our accomplishments, technical approaches, and provides information on where the research results and code may be obtained from an open source software repository. The report for the first year of the project that was performed at the University of Oregon prior to the PI moving to Galois, Inc. is included as an appendix.

  9. Final Scientific Report: A Scalable Development Environment for Peta-Scale Computing

    SciTech Connect

    Karbach, Carsten; Frings, Wolfgang

    2013-02-22

    This document is the final scientific report of the project DE-SC000120 (A Scalable Development Environment for Peta-Scale Computing). The objective of this project is the extension of the Parallel Tools Platform (PTP) for applying it to peta-scale systems. PTP is an integrated development environment for parallel applications. It comprises code analysis, performance tuning, parallel debugging and system monitoring. The contribution of the Juelich Supercomputing Centre (JSC) aims to provide a scalable solution for system monitoring of supercomputers. This includes the development of a new communication protocol for exchanging status data between the target remote system and the client running PTP. The communication has to work over high-latency connections. PTP needs to be implemented robustly and should hide the complexity of the supercomputer's architecture in order to provide transparent access to various remote systems via a uniform user interface. This simplifies the porting of applications to different systems, because PTP functions as an abstraction layer between the parallel application developer and the compute resources. The common requirement for all PTP components is that they have to interact with the remote supercomputer. For example, applications are built remotely, performance tools are attached to job submissions, and their output data resides on the remote system. Status data has to be collected by evaluating outputs of the remote job scheduler, and the parallel debugger needs to control an application executed on the supercomputer. The challenge is to provide this functionality for peta-scale systems in real time. The client-server architecture of the established monitoring application LLview, developed by the JSC, can be applied to PTP's system monitoring. LLview provides a well-arranged overview of the supercomputer's current status. A set of statistics, a list of running and queued jobs as well as a node display mapping running jobs to their compute resources form the

  10. Numerical simulation of turbomachinery flows with advanced turbulence models

    NASA Technical Reports Server (NTRS)

    Lakshminarayana, B.; Kunz, R.; Luo, J.; Fan, S.

    1992-01-01

    A three-dimensional full Navier-Stokes (FNS) code is used to simulate complex turbomachinery flows. The code incorporates an explicit multistep scheme and solves a conservative form of the density-averaged continuity, momentum, and energy equations. A compressible low Reynolds number form of the k-epsilon turbulence model, a q-omega model, and an algebraic Reynolds stress model have been incorporated in a fully coupled manner to approximate Reynolds stresses. The code is used to predict the viscous flow field in a backswept transonic centrifugal compressor for which laser two-focus data are available. The code is also used to simulate the tip clearance flow in a cascade. The code has been extended to include unsteady Euler solutions for predicting the unsteady flow through a cascade due to incoming wakes, simulating rotor-stator interactions.
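
    Of the closures listed above, the k-epsilon model is the simplest to state compactly: the Reynolds stresses are modeled through an eddy viscosity computed from k and epsilon. The sketch below shows that relation; the low-Reynolds-number variant used in the code adds near-wall damping functions, represented here only by a placeholder argument.

    ```python
    # Minimal sketch of the eddy-viscosity relation at the heart of the
    # k-epsilon closure: nu_t = C_mu * k^2 / epsilon. The code described above
    # uses a low-Reynolds-number variant with near-wall damping functions.
    C_MU = 0.09  # standard model constant

    def eddy_viscosity(k, eps, f_mu=1.0):
        """Turbulent kinematic viscosity [m^2/s].

        k    : turbulent kinetic energy [m^2/s^2]
        eps  : dissipation rate [m^2/s^3]
        f_mu : near-wall damping function (1.0 recovers the high-Re model)
        """
        return f_mu * C_MU * k * k / eps

    # Example: k = 1.5 m^2/s^2, eps = 100 m^2/s^3
    print(eddy_viscosity(1.5, 100.0))  # -> 2.025e-03 m^2/s
    ```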

  11. Advances in Discrete-Event Simulation for MSL Command Validation

    NASA Technical Reports Server (NTRS)

    Patrikalakis, Alexander; O'Reilly, Taifun

    2013-01-01

    In the last five years, the discrete event simulator, SEQuence GENerator (SEQGEN), developed at the Jet Propulsion Laboratory to plan deep-space missions, has greatly increased uplink operations capacity to deal with increasingly complicated missions. In this paper, we describe how the Mars Science Laboratory (MSL) project makes full use of an interpreted environment to simulate change in more than fifty thousand flight software parameters and conditional command sequences to predict the result of executing a conditional branch in a command sequence, and enable the ability to warn users whenever one or more simulated spacecraft states change in an unexpected manner. Using these new SEQGEN features, operators plan more activities in one sol than ever before.
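
    SEQGEN itself is not public in this record, so the sketch below is only a schematic of the mechanism described: a time-ordered event queue mutates simulated state, conditional logic is evaluated against that state, and a warning is emitted when a state change contradicts the operator's expectation. All names and values are illustrative, not SEQGEN's.

    ```python
    # Minimal sketch of a discrete-event sequence simulator in the spirit the
    # record describes: timed commands mutate simulated state, conditional
    # branches are evaluated against that state, and unexpected state changes
    # raise warnings. All names here are illustrative.
    import heapq

    def simulate(events, state, expected):
        """events: list of (time, name, update_fn); update_fn(state) -> None."""
        queue = list(events)
        heapq.heapify(queue)
        while queue:
            t, name, update = heapq.heappop(queue)
            update(state)
            for key, value in expected.get(name, {}).items():
                if state.get(key) != value:
                    print(f"WARNING t={t}: after '{name}', {key}={state.get(key)} "
                          f"(expected {value})")
        return state

    state = {"heater": "off", "power_w": 100}
    events = [
        (10.0, "heater_on", lambda s: s.update(heater="on", power_w=180)),
        (20.0, "cond_branch", lambda s: s.update(mode="safe" if s["power_w"] > 150
                                                 else "nominal")),
    ]
    expected = {"cond_branch": {"mode": "nominal"}}   # operator's expectation
    simulate(events, state, expected)                 # emits a warning at t=20
    ```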

  12. Advances in simulation study on organic small molecular solar cells

    NASA Astrophysics Data System (ADS)

    Zhang, Xuan; Guo, Wenge; Li, Ming; Ma, Wentao; Meng, Sen

    2015-02-01

    Recently, more focus has been put on organic semiconductors because of their advantages, such as flexibility, ease of fabrication, and potentially low cost. The reasons we highlight small-molecule photovoltaic materials are their ease of purification, the ease of adjusting and determining their structure, and the ease of assembling repeat units and obtaining high carrier mobility. Simulation study of organic small-molecule solar cells before experiment can help researchers find the relationship between efficiency, structural parameters, and material properties, estimate the performance of the device, and provide optimization guidance. Also, the applicability of the model used in the simulation can be assessed by comparison with experimental data. This paper summarizes the principles, structure, and progress of numerical simulation of organic small-molecule solar cells.

  13. Design and simulation of advanced fault tolerant flight control schemes

    NASA Astrophysics Data System (ADS)

    Gururajan, Srikanth

    This research effort describes the design and simulation of a distributed Neural Network (NN) based fault tolerant flight control scheme and the interface of the scheme within a simulation/visualization environment. The goal of the fault tolerant flight control scheme is to recover an aircraft from failures to its sensors or actuators. A commercially available simulation package, Aviator Visual Design Simulator (AVDS), was used for the simulation and visualization of the aircraft dynamics and the performance of the control schemes. For the sensor failure detection, identification and accommodation (SFDIA) task, it is assumed that the pitch, roll and yaw rate gyros onboard are without physical redundancy. The task is accomplished through the use of a Main Neural Network (MNN) and a set of three De-Centralized Neural Networks (DNNs), providing analytical redundancy for the pitch, roll and yaw gyros. The purpose of the MNN is to detect a sensor failure while the purpose of the DNNs is to identify the failed sensor and then to provide failure accommodation. The actuator failure detection, identification and accommodation (AFDIA) scheme also features the MNN, for detection of actuator failures, along with three Neural Network Controllers (NNCs) for providing the compensating control surface deflections to neutralize the failure-induced pitching, rolling and yawing moments. All NNs continue to train on-line, in addition to an offline-trained baseline network structure, using the Extended Back-Propagation Algorithm (EBPA), with the flight data provided by the AVDS simulation package. The above-mentioned adaptive flight control schemes have traditionally been implemented sequentially on a single computer. This research addresses the implementation of these fault tolerant flight control schemes on parallel and distributed computer architectures, using Berkeley Software Distribution (BSD) sockets and Message Passing Interface (MPI) for inter
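
    The detection half of the SFDIA scheme reduces to comparing a network's estimate of a sensor channel with the measured value and flagging persistent residuals. The sketch below shows that residual-threshold logic with a frozen-sensor fault; the estimator is a placeholder for the paper's MNN/DNNs, and the threshold and persistence values are assumptions.

    ```python
    # Minimal sketch of the residual-threshold logic behind neural-network
    # sensor failure detection: a network's estimate of a gyro channel is
    # compared with the measured value, and a persistent residual flags a
    # failure. The estimator here is a placeholder, not the paper's MNN/DNNs.
    import numpy as np

    def detect_failure(measured, estimated, threshold, persist=5):
        """Return the first index at which |residual| exceeds `threshold`
        for `persist` consecutive samples, or None."""
        exceed = np.abs(np.asarray(measured) - np.asarray(estimated)) > threshold
        run = 0
        for i, flag in enumerate(exceed):
            run = run + 1 if flag else 0
            if run >= persist:
                return i - persist + 1
        return None

    t = np.linspace(0, 10, 500)
    true_rate = 0.1 * np.sin(t)            # network's pitch-rate estimate [rad/s]
    faulty = true_rate.copy()
    faulty[300:] = 0.0                     # sensor output freezes at t ~ 6 s
    idx = detect_failure(faulty, true_rate, threshold=0.02)
    print("failure declared at t = %.2f s" % t[idx])
    ```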

  14. Advanced Techniques for Reservoir Simulation and Modeling of Non-Conventional Wells

    SciTech Connect

    Durlofsky, Louis J.

    2000-08-28

    This project targets the development of (1) advanced reservoir simulation techniques for modeling non-conventional wells; (2) improved techniques for computing well productivity (for use in reservoir engineering calculations) and well index (for use in simulation models), including the effects of wellbore flow; and (3) accurate approaches to account for heterogeneity in the near-well region.
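
    For item (2), the standard starting point in the literature is the Peaceman well index, which relates a well's flow rate to the pressure difference between the wellbore and its host grid block. The sketch below implements that textbook formula (isotropic permeability, vertical well); it is a baseline illustration, not the advanced non-conventional-well treatment the project developed.

    ```python
    # Minimal sketch of a Peaceman-type well index for a vertical well in a
    # Cartesian grid block, the kind of quantity item (2) refers to. Isotropic
    # permeability and a rectangular block are assumed for brevity.
    import math

    def peaceman_well_index(k, h, dx, dy, rw, skin=0.0):
        """Well index WI [m^3] such that q = WI * (p_block - p_well) / mu.

        k      : block permeability [m^2]
        h      : block (completion) thickness [m]
        dx, dy : horizontal block dimensions [m]
        rw     : wellbore radius [m]
        """
        r_eq = 0.14 * math.hypot(dx, dy)      # Peaceman equivalent radius
        return 2.0 * math.pi * k * h / (math.log(r_eq / rw) + skin)

    # Example: 100 mD block, 10 m thick, 50 m x 50 m cell, 0.1 m wellbore.
    print("%.3e" % peaceman_well_index(100e-15, 10.0, 50.0, 50.0, 0.1))
    ```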

  15. Cost-efficiency assessment of Advanced Life Support (ALS) courses based on the comparison of advanced simulators with conventional manikins

    PubMed Central

    Iglesias-Vázquez, José Antonio; Rodríguez-Núñez, Antonio; Penas-Penas, Mónica; Sánchez-Santos, Luís; Cegarra-García, Maria; Barreiro-Díaz, Maria Victoria

    2007-01-01

    Background Simulation is an essential tool in modern medical education. The objective of this study was to assess, in cost-effectiveness terms, the introduction of new-generation simulators in an advanced life support (ALS) education program. Methods Two hundred fifty primary care physicians and nurses were admitted to ten ALS courses (25 students per course). Students were distributed at random into two groups (125 each). Group A candidates were trained and tested with standard ALS manikins and Group B candidates with new-generation emergency and life support integrated simulator systems. Results In group A, 98 (78%) candidates passed the course, compared with 110 (88%) in group B (p < 0.01). The total cost of the conventional courses was €7689 per course and the cost of the advanced simulator courses was €29034 per course (p < 0.001). Cost per passed student was €392 in group A and €1320 in group B (p < 0.001). Conclusion Although ALS advanced simulator systems may slightly increase the rate of students who pass the course, the cost-effectiveness of ALS courses with standard manikins is clearly superior. PMID:17953771
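
    The reported cost-effectiveness figures follow from the study design: 125 candidates per group at 25 students per course implies five courses per group, and dividing each group's five-course cost by its number of passing students reproduces the published values, as the check below shows.

    ```python
    # Reproducing the reported cost-effectiveness figures, assuming each
    # 125-candidate group was spread over five 25-student courses.
    courses_per_group = 125 // 25                 # = 5

    cost_a = courses_per_group * 7689    # conventional manikins, group A (EUR)
    cost_b = courses_per_group * 29034   # advanced simulators, group B (EUR)
    passed_a, passed_b = 98, 110

    print(round(cost_a / passed_a))  # -> 392  EUR per passed student, group A
    print(round(cost_b / passed_b))  # -> 1320 EUR per passed student, group B
    ```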

  16. A Distributed Simulation Facility to Support Human Factors Research in Advanced Air Transportation Technology

    NASA Technical Reports Server (NTRS)

    Amonlirdviman, Keith; Farley, Todd C.; Hansman, R. John, Jr.; Ladik, John F.; Sherer, Dana Z.

    1998-01-01

    A distributed real-time simulation of the civil air traffic environment developed to support human factors research in advanced air transportation technology is presented. The distributed environment is based on a custom simulation architecture designed for simplicity and flexibility in human experiments. Standard Internet protocols are used to create the distributed environment, linking an advanced cockpit simulator, an Air Traffic Control simulator, and a pseudo-aircraft control and simulation management station. The pseudo-aircraft control station also functions as a scenario design tool for coordinating human factors experiments. This station incorporates a pseudo-pilot interface designed to reduce workload for human operators piloting multiple aircraft simultaneously in real time. The application of this distributed simulation facility to support a study of the effect of shared information (via air-ground datalink) on pilot/controller shared situation awareness and re-route negotiation is also presented.

  17. Advanced Simulation and Computing Co-Design Strategy

    SciTech Connect

    Ang, James A.; Hoang, Thuc T.; Kelly, Suzanne M.; McPherson, Allen; Neely, Rob

    2015-11-01

    This ASC Co-design Strategy lays out the full continuum and components of the co-design process, based on what we have experienced thus far and what we wish to do more in the future to meet the program’s mission of providing high performance computing (HPC) and simulation capabilities for NNSA to carry out its stockpile stewardship responsibility.

  18. Advanced Computation Dynamics Simulation of Protective Structures Research

    DTIC Science & Technology

    2013-02-01

    …between the steel and CMU, grout, a flowable concrete mixture, is placed into the reinforced cells. If grout is placed into every cell (including…multi-wythe walls that were fully grouted and had a brick veneer filled with a foam-insulated cavity. He simulated the grout and CMU with a single…

  19. Technical advances in molecular simulation since the 1980s.

    PubMed

    Field, Martin J

    2015-09-15

    This review describes how the theory and practice of molecular simulation have evolved since the beginning of the 1980s when the author started his career in this field. The account is of necessity brief and subjective and highlights the changes that the author considers have had significant impact on his research and mode of working.

  20. Advanced Shuttle Simulation Turbulence Tapes (SSTT) users guide

    NASA Technical Reports Server (NTRS)

    Tatom, F. B.; Smith, S. R.

    1981-01-01

    A nonrecursive model (based on von Karman spectra) for atmospheric turbulence along the flight path of the shuttle orbiter was developed which provides for simulation of instantaneous vertical and horizontal gusts at the vehicle center-of-gravity, and also for simulation of instantaneous gust gradients. Based on this model, time series for both gusts and gust gradients were generated and stored on a series of magnetic tapes entitled shuttle simulation turbulence tapes (SSTT). The time series are designed to represent atmospheric turbulence from ground level to an altitude of 120,000 meters. A description of the characteristics of the simulated turbulence stored on the tapes, as well as instructions regarding their proper use, is provided. The characteristics of the turbulence series, including the spectral shape, cutoff frequencies, and variation of turbulence parameters with altitude, are discussed. Information regarding the tapes and their use is presented. Appendices provide results of spectral and statistical analyses of the SSTT and examples of how the SSTT should be used.
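
    The tapes themselves are the product; the generation method (a nonrecursive, von Karman-spectrum model) can be sketched compactly as frequency-domain shaping of white noise. The fragment below uses the standard longitudinal von Karman PSD; the intensity, length scale, and airspeed are illustrative assumptions, and the altitude-dependent parameter variation of the SSTT is omitted.

    ```python
    # Minimal sketch of nonrecursive turbulence synthesis: shape white Gaussian
    # noise in the frequency domain with the longitudinal von Karman spectrum.
    # Parameters (sigma, length scale, airspeed) are illustrative assumptions.
    import numpy as np

    def von_karman_gust(n, dt, sigma=1.0, L=200.0, V=100.0, seed=0):
        """Return a gust time series [m/s] with a von Karman longitudinal PSD.

        sigma : gust intensity [m/s], L : turbulence length scale [m],
        V : airspeed [m/s] (converts temporal to spatial frequency).
        """
        f = np.fft.rfftfreq(n, dt)                 # temporal frequency [Hz]
        omega = 2.0 * np.pi * f / V                # spatial frequency [rad/m]
        psd = sigma**2 * (2.0 * L / np.pi) / (1.0 + (1.339 * L * omega) ** 2) ** (5.0 / 6.0)
        noise = np.fft.rfft(np.random.default_rng(seed).standard_normal(n))
        gust = np.fft.irfft(noise * np.sqrt(psd), n)
        return gust * sigma / gust.std()           # renormalize to sigma

    u = von_karman_gust(8192, 0.05)
    print("gust rms: %.2f m/s" % u.std())
    ```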

  1. Interim Service ISDN Satellite (ISIS) simulator development for advanced satellite designs and experiments

    NASA Technical Reports Server (NTRS)

    Pepin, Gerard R.

    1992-01-01

    The simulation development associated with the network models of both the Interim Service Integrated Services Digital Network (ISDN) Satellite (ISIS) and the Full Service ISDN Satellite (FSIS) architectures is documented. The ISIS Network Model design represents satellite systems like the Advanced Communications Technology Satellite (ACTS) orbiting switch. The FSIS architecture, the ultimate aim of this element of the Satellite Communications Applications Research (SCAR) Program, moves all control and switching functions on-board the next generation ISDN communications satellite. The technical and operational parameters for the advanced ISDN communications satellite design will be obtained from the simulation of ISIS and FSIS engineering software models for their major subsystems. Discrete event simulation experiments will be performed with these models using various traffic scenarios, design parameters, and operational procedures. The data from these simulations will be used to determine the engineering parameters for the advanced ISDN communications satellite.

  2. Simulating data processing for an Advanced Ion Mobility Mass Spectrometer

    SciTech Connect

    Chavarría-Miranda, Daniel; Clowers, Brian H.; Anderson, Gordon A.; Belov, Mikhail E.

    2007-11-03

    We have designed and implemented a Cray XD-1-based simulation of data capture and signal processing for an advanced Ion Mobility mass spectrometer (Hadamard transform Ion Mobility). Our simulation is a hybrid application that uses both an FPGA component and a CPU-based software component to simulate Ion Mobility mass spectrometry data processing. The FPGA component includes data capture and accumulation, as well as a more sophisticated deconvolution algorithm based on a PNNL-developed enhancement to standard Hadamard transform Ion Mobility spectrometry. The software portion is in charge of streaming data to the FPGA and collecting results. We expect the computational and memory addressing logic of the FPGA component to be portable to an instrument-attached FPGA board that can be interfaced with a Hadamard transform Ion Mobility mass spectrometer.
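
    The deconvolution stage is the mathematically interesting part: a Hadamard-multiplexed acquisition is recovered by applying the inverse transform. The sketch below shows that round trip in floating point with a plain Hadamard matrix; the actual instrument uses a pseudo-random weighing (S-matrix) scheme and the PNNL enhancement, neither of which is reproduced here.

    ```python
    # Minimal sketch of Hadamard-transform multiplexing and deconvolution, the
    # operation the FPGA's deconvolution stage performs (here in floating point,
    # without the PNNL enhancement or cyclic S-matrix details).
    import numpy as np
    from scipy.linalg import hadamard

    n = 64
    H = hadamard(n).astype(float)                  # +/-1 Hadamard matrix
    spectrum = np.zeros(n)
    spectrum[[10, 23, 40]] = [5.0, 2.0, 1.0]       # sparse "ion arrival" signal

    measured = H @ spectrum                        # multiplexed acquisition
    measured += np.random.default_rng(1).normal(0, 0.1, n)   # detector noise

    recovered = (H.T @ measured) / n               # inverse: H^-1 = H^T / n
    print(np.round(recovered[[10, 23, 40]], 2))    # ~ [5.0, 2.0, 1.0]
    ```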

  3. Advanced Dynamically Adaptive Algorithms for Stochastic Simulations on Extreme Scales

    SciTech Connect

    Xiu, Dongbin

    2016-06-21

    The focus of the project is the development of mathematical methods and high-performance computational tools for stochastic simulations, with a particular emphasis on computations on extreme scales. The core of the project revolves around the design of highly efficient and scalable numerical algorithms that can adaptively and accurately, in high dimensional spaces, resolve stochastic problems with limited smoothness, even containing discontinuities.

  4. Advanced Simulation in Undergraduate Pilot Training: Motion System Development

    DTIC Science & Technology

    1975-10-01

    …Air Force Human Resources Laboratory, October 1975 (AFHRL-TR-75-59(II)). Report covering … – March 1975. Approved for public release; distribution unlimited. Distributed by: National Technical Information Service, U.S. Department of Commerce. …Air Force Human Resources Laboratory (AFSC), Wright-Patterson Air Force Base, Ohio 45433. Mr. Don R. Gum, Simulation Techniques Branch, was the contract…

  5. Simulation of an advanced techniques of ion propulsion Rocket system

    NASA Astrophysics Data System (ADS)

    Bakkiyaraj, R.

    2016-07-01

    The ion propulsion rocket (IPR) system is expected to become popular with the development of deuterium/argon gas and hexagonal-geometry magnetohydrodynamic (MHD) techniques, because power is generated indirectly from the ionization chamber; the design thrust is 1.2 N at 40 kW of electric power with high efficiency. The proposed work studies MHD power generation through the ionization of deuterium gas and the combination of two gaseous ion species (deuterium ions + argon ions) at the acceleration stage. The IPR consists of three parts: (1) a hexagonal MHD-based power generator fed by the ionization chamber, (2) an ion accelerator, and (3) an exhaust nozzle. Initially, about 1312 kJ/mol is required to ionize the deuterium gas. The ionized deuterium flows from the RF ionization chamber through the MHD generator to the nozzle with enhanced velocity; a voltage is then generated across the two pairs of MHD electrodes, and thrust is produced by mixing deuterium and argon ions at the acceleration stage. The IPR system was simulated in MATLAB. Comparison of the simulation results with theoretical and previous results indicates that the proposed method achieves the design thrust at 40 kW for the simulated IPR system.

  6. ADVANCES IN COMPREHENSIVE GYROKINETIC SIMULATIONS OF TRANSPORT IN TOKAMAKS

    SciTech Connect

    WALTZ RE; CANDY J; HINTON FL; ESTRADA-MILA C; KINSEY JE

    2004-10-01

    A continuum global gyrokinetic code GYRO has been developed to comprehensively simulate core turbulent transport in actual experimental profiles and enable direct quantitative comparisons to the experimental transport flows. GYRO not only treats the now standard ion temperature gradient (ITG) mode turbulence, but also treats trapped and passing electrons with collisions and finite {beta}, equilibrium ExB shear stabilization, and all in real tokamak geometry. Most importantly the code operates at finite relative gyroradius ({rho}{sub *}) so as to treat the profile shear stabilization and nonlocal effects which can break gyroBohm scaling. The code operates in either a cyclic flux-tube limit (which allows only gyroBohm scaling) or globally with physical profile variation. Bohm scaling of DIII-D L-mode has been simulated with power flows matching experiment within error bars on the ion temperature gradient. Mechanisms for broken gyroBohm scaling, neoclassical ion flows embedded in turbulence, turbulent dynamos and profile corrugations, plasma pinches and impurity flow, and simulations at fixed flow rather than fixed gradient are illustrated and discussed.
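
    For readers outside the field, the scalings named above can be stated compactly; the following definitions are the standard ones (assumed here, not quoted from the record), with rho_s the ion sound gyroradius and a the minor radius:

    ```latex
    % Standard definitions (not from the record): Bohm vs. gyroBohm transport
    % scalings, with \rho_* = \rho_s / a the relative gyroradius.
    \chi_{\mathrm{Bohm}} \sim \frac{cT}{eB}, \qquad
    \chi_{\mathrm{gyroBohm}} \sim \rho_* \, \chi_{\mathrm{Bohm}}
      = \frac{\rho_s}{a} \, \frac{cT}{eB}, \qquad
    \rho_* \equiv \frac{\rho_s}{a}.
    ```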

  7. Interactive Volume Exploration of Petascale Microscopy Data Streams Using a Visualization-Driven Virtual Memory Approach.

    PubMed

    Hadwiger, M; Beyer, J; Jeong, Won-Ki; Pfister, H

    2012-12-01

    This paper presents the first volume visualization system that scales to petascale volumes imaged as a continuous stream of high-resolution electron microscopy images. Our architecture scales to dense, anisotropic petascale volumes because it: (1) decouples construction of the 3D multi-resolution representation required for visualization from data acquisition, and (2) decouples sample access time during ray-casting from the size of the multi-resolution hierarchy. Our system is designed around a scalable multi-resolution virtual memory architecture that handles missing data naturally, does not pre-compute any 3D multi-resolution representation such as an octree, and can accept a constant stream of 2D image tiles from the microscopes. A novelty of our system design is that it is visualization-driven: we restrict most computations to the visible volume data. Leveraging the virtual memory architecture, missing data are detected during volume ray-casting as cache misses, which are propagated backwards for on-demand out-of-core processing. 3D blocks of volume data are only constructed from 2D microscope image tiles when they have actually been accessed during ray-casting. We extensively evaluate our system design choices with respect to scalability and performance, compare to previous best-of-breed systems, and illustrate the effectiveness of our system for real microscopy data from neuroscience.
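
    The cache-miss-driven design can be sketched independently of any rendering code: bricks of volume data are constructed on demand the first time the ray caster touches them and are kept in a least-recently-used cache. The fragment below is a schematic of that policy only; the names, capacity, and build function are illustrative.

    ```python
    # Minimal sketch of a visualization-driven virtual-memory idea: 3D bricks
    # are built on demand from 2D image tiles only when the renderer actually
    # touches them (a cache miss), and evicted LRU. Illustrative only.
    from collections import OrderedDict

    class BrickCache:
        def __init__(self, build_fn, capacity=1024):
            self._build = build_fn          # (brick_id) -> 3D brick data
            self._cap = capacity
            self._lru = OrderedDict()       # brick_id -> brick

        def get(self, brick_id):
            if brick_id in self._lru:                 # cache hit
                self._lru.move_to_end(brick_id)
                return self._lru[brick_id]
            brick = self._build(brick_id)             # miss: on-demand build
            self._lru[brick_id] = brick
            if len(self._lru) > self._cap:
                self._lru.popitem(last=False)         # evict least recently used
            return brick

    # During ray casting, only visible bricks are ever requested:
    cache = BrickCache(build_fn=lambda bid: f"brick {bid} built from 2D tiles")
    for bid in [(0, 0, 0), (0, 0, 1), (0, 0, 0)]:     # third access is a hit
        cache.get(bid)
    ```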

  8. Advanced simulation model for IPM motor drive with considering phase voltage and stator inductance

    NASA Astrophysics Data System (ADS)

    Lee, Dong-Myung; Park, Hyun-Jong; Lee, Ju

    2016-10-01

    This paper proposes an advanced simulation model of the driving system for Interior Permanent Magnet (IPM) BrushLess Direct Current (BLDC) motors driven by the 120-degree conduction method (two-phase conduction method, TPCM), which is widely used for sensorless control of BLDC motors. BLDC motors can be classified as SPM (Surface-mounted Permanent Magnet) or IPM motors. The simulation model of a driving system with SPM motors is simple because the stator inductance is constant regardless of the rotor position, and such models have been proposed in many studies. On the other hand, simulation models for IPM driving systems built with graphic-based simulation tools such as Matlab/Simulink have not been proposed. Simulation of IPM driving systems with TPCM is complex because the stator inductances of an IPM vary with the rotor position, as the permanent magnets are embedded in the rotor. To develop sensorless schemes or improve control performance, developing the control algorithm through simulation is essential, and a simulation model that accurately reflects the characteristics of the IPM is required. Therefore, this paper presents an advanced simulation model of the IPM driving system that takes into account the unique position-dependent inductances of the IPM. The validity of the proposed simulation model is confirmed by comparison with experimental and simulation results using an IPM under the TPCM control scheme.
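
    The position dependence that complicates IPM simulation enters through the stator self- and mutual inductances. The sketch below uses a standard two-term (saliency) approximation for the phase-a self-inductance; this is the textbook form rather than anything taken from the paper, and the parameter values are assumed.

    ```python
    # Minimal sketch of the rotor-position-dependent stator inductance that
    # makes IPM simulation harder than SPM: a standard two-term approximation
    # for the phase-a self-inductance (illustrative parameter values).
    import numpy as np

    L_LS = 0.1e-3   # leakage inductance [H] (assumed)
    L_A = 1.0e-3    # average magnetizing term [H] (assumed)
    L_B = 0.3e-3    # saliency term [H]; L_B = 0 recovers the SPM case

    def l_aa(theta_e):
        """Phase-a self-inductance vs. electrical rotor angle [rad]."""
        return L_LS + L_A - L_B * np.cos(2.0 * theta_e)

    theta = np.linspace(0, 2 * np.pi, 5)
    print(l_aa(theta))   # varies with 2*theta; constant for an SPM (L_B = 0)
    ```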

  9. Beyond Petascale with the HipGISAXS Software Suite

    NASA Astrophysics Data System (ADS)

    Hexemer, Alexander; Li, Sherry; Chourou, Slim; Sarje, Abhinav

    2014-03-01

    We have developed HipGISAXS, a software suite to analyze GISAXS and SAXS data for structural characterization of materials at the nanoscale using X-rays. The software has been developed as a massively-parallel system capable of harnessing the raw computational power offered by clusters and supercomputers built using graphics processors (GPUs), Intel Phi co-processors, or commodity multi-core CPUs. Currently the forward GISAXS simulation is a major component of HipGISAXS, which simulates the X-ray scattering process based on the Distorted Wave Born Approximation (DWBA) theory, for any given nanostructures and morphologies with a set of experimental configurations. These simulations are compute-intensive and have a high degree of parallelism available, making them well-suited for fine-grained parallel computations on highly parallel many-core processors like GPUs. Furthermore, a large number of such simulations can be carried out simultaneously for various experimental input parameters. HipGISAXS also includes a Reverse Monte Carlo based modeling tool for SAXS data. With HipGISAXS we have demonstrated a sustained compute performance of over 1 Petaflop on 8000 GPU nodes of the Titan supercomputer at ORNL, and have shown it to be highly scalable.

  10. Advanced Simulation Technology to Design Etching Process on CMOS Devices

    NASA Astrophysics Data System (ADS)

    Kuboi, Nobuyuki

    2015-09-01

    Prediction and control of plasma-induced damage is needed to mass-produce high-performance CMOS devices. In particular, side-wall (SW) etching with low damage is a key process for the next generation of MOSFETs and FinFETs. To predict and control the damage, we have developed a SiN etching simulation technique for CHxFy/Ar/O2 plasma processes using a three-dimensional (3D) voxel model. This model includes new concepts for gas transport in the pattern, detailed surface reactions on the SiN reactive layer (divided into several thin slabs) and on a C-F polymer layer dependent on the H/N ratio, and the use of ``smart voxels''. We successfully predicted etching properties such as the etch rate, polymer layer thickness, and selectivity for Si, SiO2, and SiN films across process variations, and demonstrated the time-dependent 3D damage distribution during SW etching on MOSFETs and FinFETs. We confirmed that a large amount of Si damage accumulated in the source/drain region over time during the over-etch step, in spite of the existing 15 nm SiO2 layer, and that the Si fin was directly damaged by a large dose of high-energy H during the removal step of the parasitic fin spacer, leading to Si fin damage to a depth of 14 to 18 nm. By analyzing the results of these simulations and our previous simulations, we found that it is important to carefully control the dose of high-energy H, the incident energy of H, the polymer layer thickness, and the over-etch time, considering the effects of the pattern structure, chamber-wall condition, and wafer open-area ratio. In collaboration with Masanaga Fukasawa and Tetsuya Tatsumi, Sony Corporation. We thank Mr. T. Shigetoshi and Mr. T. Kinoshita of Sony Corporation for their assistance with the experiments.

  11. Advanced flight deck/crew station simulator functional requirements

    NASA Technical Reports Server (NTRS)

    Wall, R. L.; Tate, J. L.; Moss, M. J.

    1980-01-01

    This report documents a study of flight deck/crew system research facility requirements for investigating issues involved with developing systems, and procedures for interfacing transport aircraft with air traffic control systems planned for 1985 to 2000. Crew system needs of NASA, the U.S. Air Force, and industry were investigated and reported. A matrix of these is included, as are recommended functional requirements and design criteria for simulation facilities in which to conduct this research. Methods of exploiting the commonality and similarity in facilities are identified, and plans for exploiting this in order to reduce implementation costs and allow efficient transfer of experiments from one facility to another are presented.

  12. Microwave Processing of Simulated Advanced Nuclear Fuel Pellets

    SciTech Connect

    D.E. Clark; D.C. Folz

    2010-08-29

    Throughout the three-year project funded by the Department of Energy (DOE) and led by Virginia Tech (VT), project tasks were modified by consensus to fit the changing needs of the DOE with respect to developing new inert matrix fuel processing techniques. The focus throughout the project was on the use of microwave energy to sinter fully stabilized zirconia pellets and to evaluate the effectiveness of the techniques that were developed. Additionally, the research team was to propose fundamental concepts for processing radioactive fuels based on the effectiveness of the microwave process in sintering the simulated matrix material.

  13. Advanced solid elements for sheet metal forming simulation

    NASA Astrophysics Data System (ADS)

    Mataix, Vicente; Rossi, Riccardo; Oñate, Eugenio; Flores, Fernando G.

    2016-08-01

    Solid-shells are an attractive kind of element for the simulation of forming processes, because any generic 3D constitutive law can be employed without additional hypotheses. The present work improves a triangular-prism solid-shell originally developed by Flores [2, 3]. The solid-shell can be used in the analysis of thin/thick shells undergoing large deformations. The element is formulated in a total Lagrangian framework and employs the neighbouring (adjacent) elements to perform a local patch that enriches the displacement field. In the original formulation a modified right Cauchy-Green deformation tensor (C) was obtained; in the present work a modified deformation gradient (F) is obtained, which generalises the methodology and allows the pull-back and push-forward operations to be employed. The element is based on three modifications: (a) a classical assumed-strain approach for transverse shear strains, (b) an assumed-strain approach for the in-plane components using information from neighbouring elements, and (c) an averaging of the volumetric strain over the element. The objective is to use this type of element for the simulation of shells while avoiding transverse shear locking, improving the membrane behaviour of the in-plane triangle, and handling quasi-incompressible materials or materials with isochoric plastic flow.

  14. Simulation models and designs for advanced Fischer-Tropsch technology

    SciTech Connect

    Choi, G.N.; Kramer, S.J.; Tam, S.S.

    1995-12-31

    Process designs and economics were developed for three grass-roots indirect Fischer-Tropsch coal liquefaction facilities. A baseline and an alternate upgrading design were developed for a mine-mouth plant located in southern Illinois using Illinois No. 6 coal, and one for a mine-mouth plant located in Wyoming using Powder River Basin coal. The alternate design used close-coupled ZSM-5 reactors to upgrade the vapor stream leaving the Fischer-Tropsch reactor. ASPEN process simulation models were developed for all three designs. These results have been reported previously. In this study, the ASPEN process simulation model was enhanced to improve the vapor/liquid equilibrium calculations for the products leaving the slurry bed Fischer-Tropsch reactors. This significantly improved the predictions for the alternate ZSM-5 upgrading design. Another model was developed for the Wyoming coal case using ZSM-5 upgrading of the Fischer-Tropsch reactor vapors. To date, this is the best indirect coal liquefaction case. Sensitivity studies showed that additional cost reductions are possible.

  15. ADVANCES IN COMPREHENSIVE GYROKINETIC SIMULATIONS OF TRANSPORT IN TOKAMAKS

    SciTech Connect

    WALTZ,R.E; CANDY,J; HINTON,F.L; ESTRADA-MILA,C; KINSEY,J.E

    2004-10-01

    A continuum global gyrokinetic code GYRO has been developed to comprehensively simulate core turbulent transport in actual experimental profiles and enable direct quantitative comparisons to the experimental transport flows. GYRO not only treats the now standard ion temperature gradient (ITG) mode turbulence, but also treats trapped and passing electrons with collisions and finite {beta}, equilibrium ExB shear stabilization, and all in real tokamak geometry. Most importantly the code operates at finite relative gyroradius ({rho}{sub *}) so as to treat the profile shear stabilization and nonlocal effects which can break gyroBohm scaling. The code operates in either a cyclic flux-tube limit (which allows only gyroBohm scaling) or globally with physical profile variation. Bohm scaling of DIII-D L-mode has been simulated with power flows matching experiment within error bars on the ion temperature gradient. Mechanisms for broken gyroBohm scaling, neoclassical ion flows embedded in turbulence, turbulent dynamos and profile corrugations, are illustrated.

  16. Vector Field Visual Data Analysis Technologies for Petascale Computational Science

    SciTech Connect

    Garth, Christoph; Deines, Eduard; Joy, Kenneth I.; Bethel, E. Wes; Childs, Hank; Weber, Gunther; Ahern, Sean; Pugmire, Dave; Sanderson, Allen; Johnson, Chris

    2009-11-13

    State-of-the-art computational science simulations generate large-scale vector field data sets. Visualization and analysis is a key aspect of obtaining insight into these data sets and represents an important challenge. This article discusses possibilities and challenges of modern vector field visualization and focuses on methods and techniques developed in the SciDAC Visualization and Analytics Center for Enabling Technologies (VACET) and deployed in the open-source visualization tool, VisIt.

  17. Simulation and ground testing with the Advanced Video Guidance Sensor

    NASA Technical Reports Server (NTRS)

    Howard, Richard T.; Johnston, Albert S.; Bryan, Thomas C.; Book, Michael L.

    2005-01-01

    The Advanced Video Guidance Sensor (AVGS), an active sensor system that provides near-range 6-degree-of-freedom sensor data, has been developed as part of an automatic rendezvous and docking system for the Demonstration of Autonomous Rendezvous Technology (DART). The sensor determines the relative positions and attitudes between the active sensor and the passive target at ranges up to 300 meters. The AVGS uses laser diodes to illuminate retro-reflectors in the target, a solid-state imager to detect the light returned from the target, and image capture electronics and a digital signal processor to convert the video information into the relative positions and attitudes. The development of the sensor, through initial prototypes, final prototypes, and three flight units, has required a great deal of testing at every phase, and the different types of testing, their effectiveness, and their results, are presented in this paper, focusing on the testing of the flight units. Testing has improved the sensor's performance.

  18. Using CONFIG for Simulation of Operation of Water Recovery Subsystems for Advanced Control Software Evaluation

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Flores, Luis; Fleming, Land; Throop, Daiv

    2002-01-01

    A hybrid discrete/continuous simulation tool, CONFIG, has been developed to support evaluation of the operability of life support systems. CONFIG simulates operations scenarios in which flows and pressures change continuously while system reconfigurations occur as discrete events. In simulations, intelligent control software can interact dynamically with hardware system models. CONFIG simulations have been used to evaluate control software and intelligent agents for automating life support system operations. A CONFIG model of an advanced biological water recovery system has been developed to interact with intelligent control software that is being used in a water system test at NASA Johnson Space Center.

  19. Design and Test of Advanced Thermal Simulators for an Alkali Metal-Cooled Reactor Simulator

    NASA Technical Reports Server (NTRS)

    Garber, Anne E.; Dickens, Ricky E.

    2011-01-01

    The Early Flight Fission Test Facility (EFF-TF) at NASA Marshall Space Flight Center (MSFC) has as one of its primary missions the development and testing of fission reactor simulators for space applications. A key component in these simulated reactors is the thermal simulator, designed to closely mimic the form and function of a nuclear fuel pin using electric heating. Continuing effort has been made to design simple, robust, inexpensive thermal simulators that closely match the steady-state and transient performance of a nuclear fuel pin. A series of these simulators have been designed, developed, fabricated and tested individually and in a number of simulated reactor systems at the EFF-TF. The purpose of the thermal simulators developed under the Fission Surface Power (FSP) task is to ensure that non-nuclear testing can be performed at sufficiently high fidelity to allow a cost-effective qualification and acceptance strategy to be used. Prototype thermal simulator design is founded on the baseline Fission Surface Power reactor design. Recent efforts have been focused on the design, fabrication and test of a prototype thermal simulator appropriate for use in the Technology Demonstration Unit (TDU). While designing the thermal simulators described in this paper, efforts were made to improve the axial power profile matching of the thermal simulators. Simultaneously, a search was conducted for graphite materials with higher resistivities than had been employed in the past. The combination of these two efforts resulted in the creation of thermal simulators with power capacities of 2300-3300 W per unit. Six of these elements were installed in a simulated core and tested in the alkali metal-cooled Fission Surface Power Primary Test Circuit (FSP-PTC) at a variety of liquid metal flow rates and temperatures. This paper documents the design of the thermal simulators, the test program, and the test results.

  20. Preliminary simulation of an advanced, hingless rotor XV-15 tilt-rotor aircraft

    NASA Technical Reports Server (NTRS)

    Mcveigh, M. A.

    1976-01-01

    The feasibility of the tilt-rotor concept was verified through investigation of the performance, stability and handling qualities of the XV-15 tilt rotor. The rotors were replaced by advanced-technology fiberglass/composite hingeless rotors of larger diameter, combined with an advanced integrated fly-by-wire control system. A parametric simulation model of the HRXV-15 was developed; the model was used to define acceptable preliminary ranges of primary and secondary control schedules as functions of the flight parameters, to evaluate performance, flying qualities and structural loads, and to support a simulated flight test evaluation of the aircraft by a Boeing-Vertol pilot.

  1. State of the Art Assessment of Simulation in Advanced Materials Development

    NASA Technical Reports Server (NTRS)

    Wise, Kristopher E.

    2008-01-01

    Advances in both the underlying theory and in the practical implementation of molecular modeling techniques have increased their value in the advanced materials development process. The objective is to accelerate the maturation of emerging materials by tightly integrating modeling with the other critical processes: synthesis, processing, and characterization. The aims of this report are to summarize the state of the art of existing modeling tools and to highlight a number of areas in which additional development is required. In an effort to maintain focus and limit length, this survey is restricted to classical simulation techniques including molecular dynamics and Monte Carlo simulations.

  2. Methodological advances: using greenhouses to simulate climate change scenarios.

    PubMed

    Morales, F; Pascual, I; Sánchez-Díaz, M; Aguirreolea, J; Irigoyen, J J; Goicoechea, N; Antolín, M C; Oyarzun, M; Urdiain, A

    2014-09-01

    Human activities are increasing atmospheric CO2 concentration and temperature. Related to this global warming, periods of low water availability are also expected to increase. Thus, CO2 concentration, temperature and water availability are three of the main factors related to climate change that may influence crops and ecosystems. In this report, we describe the use of growth chamber-greenhouses (GCG) and temperature gradient greenhouses (TGG) to simulate climate change scenarios and to investigate possible plant responses. In the GCG, CO2 concentration, temperature and water availability are set to act simultaneously, enabling comparison of a current situation with a future one. Other characteristics of the GCG are a relatively large working space, fine control of the relative humidity, plant fertirrigation, and the possibility of light supplementation within the photosynthetically active radiation (PAR) region and/or with ultraviolet-B (UV-B) light. In the TGG, the three above-mentioned factors can act independently or in interaction, enabling more mechanistic studies aimed at elucidating the limiting factor(s) responsible for a given plant response. Examples of experiments using the GCG and TGG are reported, including some aimed at studying photosynthetic acclimation, a phenomenon that leads to decreased photosynthetic capacity under long-term exposure to elevated CO2.

  3. Advanced numerical methods and software approaches for semiconductor device simulation

    SciTech Connect

    CAREY,GRAHAM F.; PARDHANANI,A.L.; BOVA,STEVEN W.

    2000-03-23

    In this article the authors concisely present several modern strategies that are applicable to drift-dominated carrier transport in higher-order deterministic models such as the drift-diffusion, hydrodynamic, and quantum hydrodynamic systems. The approaches include extensions of upwind and artificial dissipation schemes, generalization of the traditional Scharfetter-Gummel approach, Petrov-Galerkin and streamline-upwind Petrov Galerkin (SUPG), entropy variables, transformations, least-squares mixed methods and other stabilized Galerkin schemes such as Galerkin least squares and discontinuous Galerkin schemes. The treatment is representative rather than an exhaustive review and several schemes are mentioned only briefly with appropriate reference to the literature. Some of the methods have been applied to the semiconductor device problem while others are still in the early stages of development for this class of applications. They have included numerical examples from the recent research tests with some of the methods. A second aspect of the work deals with algorithms that employ unstructured grids in conjunction with adaptive refinement strategies. The full benefits of such approaches have not yet been developed in this application area and they emphasize the need for further work on analysis, data structures and software to support adaptivity. Finally, they briefly consider some aspects of software frameworks. These include dial-an-operator approaches such as that used in the industrial simulator PROPHET, and object-oriented software support such as those in the SANDIA National Laboratory framework SIERRA.
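
    Among the schemes listed, the Scharfetter-Gummel discretization is compact enough to show directly: the electron flux between two mesh nodes is exponentially fitted through the Bernoulli function. The sketch below is a generic 1D form with assumed parameter values, not code from PROPHET or SIERRA.

    ```python
    # Minimal sketch of the Scharfetter-Gummel flux mentioned above: the
    # exponentially fitted electron flux between two nodes of a 1D mesh,
    # written with a numerically stable Bernoulli function.
    import numpy as np

    def bernoulli(x):
        """B(x) = x / (exp(x) - 1), numerically stable near x = 0."""
        x = np.asarray(x, dtype=float)
        safe = np.where(np.abs(x) < 1e-10, 1.0, x)   # dodge 0/0 in unused branch
        return np.where(np.abs(x) < 1e-10, 1.0 - x / 2.0, safe / np.expm1(safe))

    def sg_flux(n_i, n_ip1, v_i, v_ip1, D, h, vt=0.0259):
        """Electron flux between nodes i and i+1 (Scharfetter-Gummel form).

        n_*: carrier densities, v_*: potentials [V], D: diffusivity,
        h: node spacing, vt: thermal voltage [V] at 300 K.
        """
        d = (v_ip1 - v_i) / vt                       # normalized potential drop
        return (D / h) * (bernoulli(d) * n_ip1 - bernoulli(-d) * n_i)

    print(sg_flux(1e16, 1e15, 0.0, 0.2, 3.6e-3, 1e-7))
    ```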

  4. Advanced Numerical Methods and Software Approaches for Semiconductor Device Simulation

    DOE PAGES

    Carey, Graham F.; Pardhanani, A. L.; Bova, S. W.

    2000-01-01

    In this article we concisely present several modern strategies that are applicable to drift-dominated carrier transport in higher-order deterministic models such as the drift-diffusion, hydrodynamic, and quantum hydrodynamic systems. The approaches include extensions of "upwind" and artificial dissipation schemes, generalization of the traditional Scharfetter-Gummel approach, Petrov-Galerkin and streamline-upwind Petrov Galerkin (SUPG), "entropy" variables, transformations, least-squares mixed methods and other stabilized Galerkin schemes such as Galerkin least squares and discontinuous Galerkin schemes. The treatment is representative rather than an exhaustive review and several schemes are mentioned only briefly with appropriate reference to the literature. Some of the methods have been applied to the semiconductor device problem while others are still in the early stages of development for this class of applications. We have included numerical examples from our recent research tests with some of the methods. A second aspect of the work deals with algorithms that employ unstructured grids in conjunction with adaptive refinement strategies. The full benefits of such approaches have not yet been developed in this application area and we emphasize the need for further work on analysis, data structures and software to support adaptivity. Finally, we briefly consider some aspects of software frameworks. These include dial-an-operator approaches such as that used in the industrial simulator PROPHET, and object-oriented software support such as those in the SANDIA National Laboratory framework SIERRA.

  5. CHARMM-GUI PDB manipulator for advanced modeling and simulations of proteins containing nonstandard residues.

    PubMed

    Jo, Sunhwan; Cheng, Xi; Islam, Shahidul M; Huang, Lei; Rui, Huan; Zhu, Allen; Lee, Hui Sun; Qi, Yifei; Han, Wei; Vanommeslaeghe, Kenno; MacKerell, Alexander D; Roux, Benoît; Im, Wonpil

    2014-01-01

    CHARMM-GUI, http://www.charmm-gui.org, is a web-based graphical user interface used to prepare molecular simulation systems and input files, facilitating the usage of common and advanced simulation techniques. Since it was originally developed in 2006, CHARMM-GUI has been widely adopted for various purposes and now contains a number of different modules designed to set up a broad range of simulations, including free energy calculations and large-scale coarse-grained representations. Here, we describe functionalities that have recently been integrated into CHARMM-GUI PDB Manipulator, such as ligand force field generation, incorporation of methanethiosulfonate spin labels and chemical modifiers, and substitution of amino acids with unnatural amino acids. These new features are expected to be useful in advanced biomolecular modeling and simulation of proteins.

  6. Advanced Flight Simulator: Utilization in A-10 Conversion and Air-to-Surface Attack Training.

    DTIC Science & Technology

    1981-01-01

    …blocks of instruction on the Advanced Simulator for Pilot Training (ASPT). The first…training, the transfer of training from the ASPT to the A-10 is nearly 100 percent; therefore, in the early phases of A/S training, one simulator…(ASPT) could be suitably modified, an alternative to initially dangerous and expensive aircraft training would exist which also offered considerable…

  7. Advanced Simulator for Pilot Training: Design of Automated Performance Measurement System

    DTIC Science & Technology

    1980-08-01

    Subject terms: pilot performance measurement; Advanced Simulator for Pilot Training (ASPT); aircrew performance. …Simulator for Pilot Training (ASPT). This report documents that development effort and describes the current status of the measurement system. It was…(Continued): To date, the following scenarios have been implemented on the ASPT: (a) Transition Tasks - Straight and Level, Airspeed Changes, Turns

  8. Advances in Constitutive and Failure Models for Sheet Forming Simulation

    NASA Astrophysics Data System (ADS)

    Yoon, Jeong Whan; Stoughton, Thomas B.

    2016-08-01

    The Non-Associated Flow Rule (Non-AFR) can be used as a convenient way to account for anisotropic material response in metal deformation processes, making it possible, for example, to eliminate the problem of anomalous yielding in equibiaxial tension that is mistakenly attributed to limitations of the quadratic yield function but may instead be attributed to the Associated Flow Rule (AFR). Since Non-AFR-based models adopt two separate functions for the yield condition and the plastic potential, there is no constraint on which models are used to describe each of them. In this work, the flexible combination of two different yield criteria as yield function and plastic potential under Non-AFR is proposed and evaluated. FE simulations were carried out to verify the accuracy of the material directionalities predicted using these constitutive material models. The stability conditions for non-associated flow connected with the prediction of yield-point elongation are also reviewed. Anisotropic distortional hardening is further incorporated under non-associated flow; it has been found that anisotropic hardening noticeably improves both earing and spring-back predictions. This presentation is followed by a discussion of forming limits and necking, the evidence in favor of stress analysis, and the motivation for the development of a new type of forming limit diagram based on the polar effective plastic strain (PEPS) diagram. In order to connect necking to fracture in metals, the stress-based necking limit is combined with a stress-based fracture criterion in principal stress space, which provides an efficient method for the analysis of necking and fracture limits. The concept of the PEPS diagram is further developed to cover path-independent PEPS fracture, which is compatible with the stress-based fracture approach; this fracture criterion can thus be utilized to describe post-necking behavior and to cover nonlinear strain paths. Fracture
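
    The structural difference between associated and non-associated flow can be stated in two lines; the notation below is standard (assumed, not copied from the paper): a yield function f bounds the elastic domain while a distinct plastic potential g directs the plastic strain rate.

    ```latex
    % Sketch of the Non-AFR structure described above (standard notation, not
    % taken from the paper): a yield function f and a distinct plastic
    % potential g drive the plastic flow.
    f(\boldsymbol{\sigma}) = \bar{\sigma}_f(\boldsymbol{\sigma})
      - \sigma_Y(\bar{\varepsilon}^p) \le 0, \qquad
    \dot{\boldsymbol{\varepsilon}}^p = \dot{\lambda} \,
      \frac{\partial g(\boldsymbol{\sigma})}{\partial \boldsymbol{\sigma}}, \qquad
    g \ne f \;\text{(non-associated)}, \quad g = f \;\text{(associated)}.
    ```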

  9. Advanced Method to Estimate Fuel Slosh Simulation Parameters

    NASA Technical Reports Server (NTRS)

    Schlee, Keith; Gangadharan, Sathya; Ristow, James; Sudermann, James; Walker, Charles; Hubert, Carl

    2005-01-01

    The nutation (wobble) of a spinning spacecraft in the presence of energy dissipation is a well-known problem in dynamics and is of particular concern for space missions. The nutation of a spacecraft spinning about its minor axis typically grows exponentially, and the rate of growth is characterized by the Nutation Time Constant (NTC). For launch vehicles using spin-stabilized upper stages, fuel slosh in the spacecraft propellant tanks is usually the primary source of energy dissipation. For analytical prediction of the NTC, this fuel slosh is commonly modeled using simple mechanical analogies, such as pendulums or rigid rotors coupled to the spacecraft. Identifying model parameter values that adequately represent the sloshing dynamics is the most important step in obtaining an accurate NTC estimate. Analytic determination of the slosh model parameters has met with mixed success and is made even more difficult by the introduction of propellant management devices and elastomeric diaphragms. By subjecting full-sized fuel tanks with actual flight fuel loads to motion similar to that experienced in flight, and measuring the forces experienced by the tanks, these parameters can be determined experimentally. Currently, the identification of the model parameters is a laborious trial-and-error process in which the equations of motion for the mechanical analog are hand-derived, evaluated, and compared with the experimental results. The proposed research is an effort to automate the process of identifying the parameters of the slosh model using a MATLAB/SimMechanics-based computer simulation of the experimental setup. Different parameter estimation and optimization approaches are evaluated and compared in order to arrive at a reliable and effective parameter identification process. To evaluate each parameter identification approach, a simple one-degree-of-freedom pendulum experiment is constructed and motion is induced using an electric motor. By applying the
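
    The kind of automated parameter identification described above can be illustrated with a short sketch: fit the length and damping of a one-degree-of-freedom pendulum analog to a measured response by least squares. This is our own minimal Python illustration, with a synthetic "measured" signal standing in for the experimental records; it is not the MATLAB/SimMechanics implementation the abstract describes.

        # Minimal sketch: automated identification of pendulum slosh-analog
        # parameters by least squares, in the spirit of the automated
        # approach described above. All names and data here are illustrative.
        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import least_squares

        G = 9.81  # m/s^2
        rng = np.random.default_rng(0)

        def pendulum_response(params, t):
            """Angle history of a damped pendulum analog:
            theta'' = -(g/L) sin(theta) - c theta'."""
            L, c = params
            def rhs(_, y):
                theta, omega = y
                return [omega, -(G / L) * np.sin(theta) - c * omega]
            sol = solve_ivp(rhs, (t[0], t[-1]), [0.2, 0.0], t_eval=t, rtol=1e-8)
            return sol.y[0]

        # Hypothetical "experimental" record: true parameters plus sensor noise
        # (a stand-in for the measured tank force histories).
        t = np.linspace(0.0, 10.0, 500)
        measured = pendulum_response([0.35, 0.15], t) + rng.normal(0, 0.005, t.size)

        # Identify L and c by minimizing the model-measurement residual.
        fit = least_squares(lambda p: pendulum_response(p, t) - measured,
                            x0=[0.5, 0.05], bounds=([0.05, 0.0], [2.0, 1.0]))
        print("identified length, damping:", fit.x)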

  10. WinSRFR: Current Advances in Software for Surface Irrigation Simulation and Analysis

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Significant advances have been made over the last decade in the development of software for surface irrigation analysis. WinSRFR is an integrated tool that combines unsteady flow simulation with tools for system evaluation/parameter estimation, system design, and operational optimization. Ongoi...

  11. Battery Performance of ADEOS (Advanced Earth Observing Satellite) and Ground Simulation Test Results

    NASA Technical Reports Server (NTRS)

    Koga, K.; Suzuki, Y.; Kuwajima, S.; Kusawake, H.

    1997-01-01

    The Advanced Earth Observing Satellite (ADEOS) was developed with the aim of establishing platform technology for future spacecraft and inter-orbit communication technology for the transmission of earth observation data. ADEOS uses 5 batteries, each consisting of two packs. This paper describes, using graphs and tables, the ground simulation tests that were carried out to determine the performance of the ADEOS batteries, and their results.

  12. ADVANCED UTILITY SIMULATION MODEL, DESCRIPTION OF THE NATIONAL LOOP (VERSION 3.0)

    EPA Science Inventory

    The report is one of 11 in a series describing the initial development of the Advanced Utility Simulation Model (AUSM) by the Universities Research Group on Energy (URGE) and its continued development by the Science Applications International Corporation (SAIC) research team. The...

  13. Analysis of PV Advanced Inverter Functions and Setpoints under Time Series Simulation.

    SciTech Connect

    Seuss, John; Reno, Matthew J.; Broderick, Robert Joseph; Grijalva, Santiago

    2016-05-01

    Utilities are increasingly concerned about the potential negative impacts distributed PV may have on the operational integrity of their distribution feeders. Some have proposed novel methods for controlling a PV system's grid-tie inverter to mitigate potential PV-induced problems. This report investigates the effectiveness of several of these PV advanced inverter controls on improving distribution feeder operational metrics. The controls are simulated on a large PV system interconnected at several locations within two realistic distribution feeder models. Due to the time-domain nature of the advanced inverter controls, quasi-static time series simulations are performed under one week of representative variable irradiance and load data for each feeder. A parametric study is performed on each control type to determine how well certain measurable network metrics improve as a function of the control parameters. This methodology is used to determine appropriate advanced inverter settings for each location on the feeder and overall for any interconnection location on the feeder.
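
    As a concrete illustration of one advanced inverter control of the kind such a parametric study sweeps, the sketch below implements a generic piecewise-linear volt-var droop curve (reactive power commanded as a function of terminal voltage). The breakpoints and limits are hypothetical parameters, not settings from the report.

        # Illustrative volt-var droop: reactive power setpoint (fraction of
        # inverter rating) versus terminal voltage (per unit). Breakpoints
        # below are example parameters a study like this one would sweep.
        import numpy as np

        def volt_var(v_pu, v_low=0.95, v_dead_lo=0.98,
                     v_dead_hi=1.02, v_high=1.05, q_max=0.44):
            """Piecewise-linear volt-var curve: inject vars when voltage is
            low, absorb vars when high, do nothing inside the deadband."""
            v = np.asarray(v_pu, dtype=float)
            q = np.zeros_like(v)
            low, high = v < v_dead_lo, v > v_dead_hi
            q[low] = q_max * np.clip((v_dead_lo - v[low]) / (v_dead_lo - v_low), 0, 1)
            q[high] = -q_max * np.clip((v[high] - v_dead_hi) / (v_high - v_dead_hi), 0, 1)
            return q

        # Quasi-static time series use: evaluate the control at each power-flow step.
        voltages = np.array([0.93, 0.97, 1.00, 1.03, 1.06])
        print(volt_var(voltages))  # approx [ 0.44  0.147  0.0  -0.147 -0.44 ]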

  14. Petascale cyberinfrastructure for ground-based solar physics: approach of the DKIST data center

    NASA Astrophysics Data System (ADS)

    Berukoff, S.; Hays, T.; Reardon, K.; Spiess, DJ; Watson, F.; Wiant, S.

    2016-07-01

    The Daniel K. Inouye Solar Telescope (DKIST), under construction on Maui, is designed to perform high-resolution spectropolarimetric visible and infrared measurements of the Sun, and will annually produce 3 PB of data, via 5x10^8 images and 2x10^11 metadata elements, requiring calibration, long-term data management, and open and free distribution. After briefly describing the DKIST and its instrument suite, we provide an overview of the functions that the DKIST Data Center will provide and focus on major challenges in its development. We conclude by discussing our approach and mentioning some of the technologies that the Data Center team is using to develop a petascale computational and data storage resource to support this unique world-class facility and its long-term scientific and operational goals.
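
    A back-of-envelope check of the quoted figures (our arithmetic, not the paper's): 3 PB spread over 5x10^8 images implies an average image of roughly 6 MB, and a 3 PB/year ingest averages out near 100 MB/s sustained.

        # Back-of-envelope rates implied by the quoted DKIST figures
        # (our arithmetic; the paper itself only states the totals).
        PB = 1e15  # bytes, decimal petabytes

        annual_volume = 3 * PB            # 3 PB per year
        images_per_year = 5e8             # 5 x 10^8 images
        seconds_per_year = 365.25 * 24 * 3600

        print(f"average image size ~ {annual_volume / images_per_year / 1e6:.1f} MB")
        print(f"sustained ingest   ~ {annual_volume / seconds_per_year / 1e6:.1f} MB/s")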

  15. A graphical workstation based part-task flight simulator for preliminary rapid evaluation of advanced displays

    NASA Technical Reports Server (NTRS)

    Wanke, Craig; Kuchar, James; Hahn, Edward; Pritchett, Amy; Hansman, R. J.

    1992-01-01

    Advances in avionics and display technology are significantly changing the cockpit environment in current transport aircraft. The MIT Aeronautical Systems Lab (ASL) has developed a part-task flight simulator specifically to study the effects of these new technologies on flight crew situational awareness and performance. The simulator is based on a commercially-available graphics workstation, and can be rapidly reconfigured to meet the varying demands of experimental studies. The simulator has been successfully used to evaluate graphical microburst alerting displays, electronic instrument approach plates, terrain awareness and alerting displays, and ATC routing amendment delivery through digital datalinks.

  16. A graphical workstation based part-task flight simulator for preliminary rapid evaluation of advanced displays

    NASA Technical Reports Server (NTRS)

    Wanke, Craig; Kuchar, James; Hahn, Edward; Pritchett, A.; Hansman, R. John

    1994-01-01

    Advances in avionics and display technology are significantly changing the cockpit environment in current transport aircraft. The MIT Aeronautical Systems Lab (ASL) developed a part-task flight simulator specifically to study the effects of these new technologies on flight crew situational awareness and performance. The simulator is based on a commercially-available graphics workstation, and can be rapidly reconfigured to meet the varying demands of experimental studies. The simulator was successfully used to evaluate graphical microburst alerting displays, electronic instrument approach plates, terrain awareness and alerting displays, and ATC routing amendment delivery through digital datalinks.

  17. Report of the Workshop on Petascale Systems Integration for Large Scale Facilities

    SciTech Connect

    Kramer, William T.C.; Walter, Howard; New, Gary; Engle, Tom; Pennington, Rob; Comes, Brad; Bland, Buddy; Tomlison, Bob; Kasdorf, Jim; Skinner, David; Regimbal, Kevin

    2007-10-01

    There are significant issues regarding large-scale system integration that are not being addressed in other forums, such as current research portfolios or vendor user groups. Unfortunately, issues in the area of large-scale system integration often fall into a netherworld: not research, not facilities, not procurement, not operations, not user services. Taken together, these issues, along with the impact of sub-optimal integration technology, mean that the time required to deploy, integrate, and stabilize a large-scale system may consume up to 20 percent of the useful life of such systems. Improving the state of the art for large-scale systems integration has the potential to increase the scientific productivity of these systems. Sites have significant expertise, but there are no easy ways to leverage this expertise among them. Many issues inhibit the sharing of information, including available time and effort, as well as issues with sharing proprietary information. Vendors also benefit in the long run from the solutions to issues detected during site testing and integration. There is a great deal of enthusiasm for making large-scale system integration a full-fledged partner along with the other major thrusts supported by funding agencies in the definition, design, and use of petascale systems. Integration technology and issues should have a full 'seat at the table' as petascale and exascale initiatives and programs are planned. The workshop attendees identified a wide range of issues and suggested paths forward. Pursuing these with funding opportunities and innovation offers the opportunity to dramatically improve the state of large-scale system integration.

  18. Development of a VOR/DME model for an advanced concepts simulator

    NASA Technical Reports Server (NTRS)

    Steinmetz, G. G.; Bowles, R. L.

    1984-01-01

    This report presents the definition of a VOR/DME airborne and ground systems simulation model. The description was drafted in response to a need in the creation of an advanced concepts simulation in which flight station designs for the 1980 era can be postulated and examined. The simulation model described herein provides a reasonable representation of VOR/DME stations in the continental United States, including area coverage by type and noise errors. The detail in which the model has been cast provides the interested researcher with a moderate-fidelity simulation tool for conducting research and evaluation of navigation algorithms. Assumptions made within the development are listed and place certain responsibilities (data bases, communication with other simulation modules, uniform round earth, etc.) upon the researcher.

  19. Advanced manned space flight simulation and training: An investigation of simulation host computer system concepts

    NASA Technical Reports Server (NTRS)

    Montag, Bruce C.; Bishop, Alfred M.; Redfield, Joe B.

    1989-01-01

    The findings of a preliminary investigation by Southwest Research Institute (SwRI) into simulation host computer concepts are presented. It is designed to aid NASA in evaluating simulation technologies for use in spaceflight training. The focus of the investigation is on the next generation of space simulation systems that will be utilized in training personnel for Space Station Freedom operations. SwRI concludes that NASA should pursue a distributed simulation host computer system architecture for the Space Station Training Facility (SSTF) rather than a centralized mainframe-based arrangement. A distributed system offers many advantages and is seen by SwRI as the only architecture that will allow NASA to achieve established functional goals and operational objectives over the life of the Space Station Freedom program. Several distributed, parallel computing systems are available today that offer real-time capabilities for time-critical, man-in-the-loop simulation. These systems are flexible in terms of connectivity and configurability, and are easily scaled to meet increasing demands for more computing power.

  20. Advances in the simulation and automated measurement of well-sorted granular material: 1. Simulation

    NASA Astrophysics Data System (ADS)

    Buscombe, D.; Rubin, D. M.

    2012-06-01

    In this, the first of a pair of papers which address the simulation and automated measurement of well-sorted natural granular material, a method is presented for simulation of two-phase (solid, void) assemblages of discrete non-cohesive particles. The purpose is to have a flexible, yet computationally and theoretically simple, suite of tools with well constrained and well known statistical properties, in order to simulate realistic granular material as a discrete element model with realistic size and shape distributions, for a variety of purposes. The stochastic modeling framework is based on three-dimensional tessellations with variable degrees of order in particle-packing arrangement. Examples of sediments with a variety of particle size distributions and spatial variability in grain size are presented. The relationship between particle shape and porosity conforms to published data. The immediate application is testing new algorithms for automated measurements of particle properties (mean and standard deviation of particle sizes, and apparent porosity) from images of natural sediment, as detailed in the second of this pair of papers. The model could also prove useful for simulating specific depositional structures found in natural sediments, the result of physical alterations to packing and grain fabric, using discrete particle flow models. While the principal focus here is on naturally occurring sediment and sedimentary rock, the methods presented might also be useful for simulations of similar granular or cellular material encountered in engineering, industrial and life sciences.
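
    A minimal sketch of the tessellation idea, assuming seed points on a jittered 3D lattice whose bounded Voronoi cells stand in for particles; the disorder parameter below is an illustrative stand-in for the paper's variable degrees of order in particle packing.

        # Sketch of a tessellation-based granular-material generator: seed
        # points on a 3D lattice, jittered by a disorder parameter, then a
        # Voronoi tessellation whose bounded cells play the role of particles.
        import numpy as np
        from scipy.spatial import Voronoi, ConvexHull

        rng = np.random.default_rng(42)

        def granular_tessellation(n_side=6, disorder=0.3):
            grid = np.stack(np.meshgrid(*[np.arange(n_side)] * 3, indexing="ij"), -1)
            seeds = grid.reshape(-1, 3) + disorder * rng.uniform(-0.5, 0.5, (n_side**3, 3))
            vor = Voronoi(seeds)
            volumes = []
            for region_idx in vor.point_region:
                region = vor.regions[region_idx]
                if -1 in region or len(region) == 0:
                    continue  # unbounded boundary cell: skip
                # Voronoi cells are convex, so the hull volume is the cell volume.
                volumes.append(ConvexHull(vor.vertices[region]).volume)
            return np.array(volumes)

        vols = granular_tessellation(disorder=0.3)
        print(f"{vols.size} bounded cells, volume mean {vols.mean():.3f}, std {vols.std():.3f}")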

  1. Advances in the simulation and automated measurement of well-sorted granular material: 1. Simulation

    USGS Publications Warehouse

    Daniel Buscombe,; Rubin, David M.

    2012-01-01

    In this, the first of a pair of papers which address the simulation and automated measurement of well-sorted natural granular material, a method is presented for simulation of two-phase (solid, void) assemblages of discrete non-cohesive particles. The purpose is to have a flexible, yet computationally and theoretically simple, suite of tools with well constrained and well known statistical properties, in order to simulate realistic granular material as a discrete element model with realistic size and shape distributions, for a variety of purposes. The stochastic modeling framework is based on three-dimensional tessellations with variable degrees of order in particle-packing arrangement. Examples of sediments with a variety of particle size distributions and spatial variability in grain size are presented. The relationship between particle shape and porosity conforms to published data. The immediate application is testing new algorithms for automated measurements of particle properties (mean and standard deviation of particle sizes, and apparent porosity) from images of natural sediment, as detailed in the second of this pair of papers. The model could also prove useful for simulating specific depositional structures found in natural sediments, the result of physical alterations to packing and grain fabric, using discrete particle flow models. While the principal focus here is on naturally occurring sediment and sedimentary rock, the methods presented might also be useful for simulations of similar granular or cellular material encountered in engineering, industrial and life sciences.

  2. A stochastic model updating strategy-based improved response surface model and advanced Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Zhai, Xue; Fei, Cheng-Wei; Choy, Yat-Sze; Wang, Jian-Jun

    2017-01-01

    To improve the accuracy and efficiency of computation models for complex structures, a stochastic model updating (SMU) strategy was proposed by combining an improved response surface model (IRSM) with an advanced Monte Carlo (MC) method, based on experimental static tests, prior information, and uncertainties. First, the IRSM and its mathematical model were developed with emphasis on the moving least-squares method, and the advanced MC simulation method was studied based on Latin hypercube sampling. The SMU procedure was then presented with an experimental static test for a complex structure. SMUs of a simply supported beam and an aeroengine stator system (casings) were implemented to validate the proposed IRSM and advanced MC simulation method. The results show that (1) the SMU strategy holds high computational precision and efficiency for the SMU of complex structural systems; (2) the IRSM is demonstrated to be an effective model, since its SMU time is far less than that of the traditional response surface method, which is promising for improving the computational speed and accuracy of SMU; and (3) the advanced MC method observably decreases the number of samples drawn from finite element simulations and the elapsed time of SMU. The efforts of this paper provide a promising SMU strategy for complex structures and enrich the theory of model updating.
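
    The variance-reduction idea behind the advanced MC step can be sketched with Latin hypercube sampling, as the abstract describes; the toy two-parameter response below is our illustrative stand-in for an expensive finite element evaluation, not the paper's aeroengine model.

        # Latin hypercube sampling versus plain Monte Carlo for estimating
        # the mean of a toy structural response. The response function is an
        # illustrative stand-in for an expensive finite element solve.
        import numpy as np
        from scipy.stats import qmc

        def response(x):
            """Toy 2-parameter 'structural response' (stand-in for FE)."""
            e_modulus, load = x[:, 0], x[:, 1]
            return load / e_modulus + 0.1 * np.sin(5 * load)

        rng = np.random.default_rng(0)
        n = 64
        lo, hi = np.array([1.0, 0.5]), np.array([2.0, 1.5])

        # Latin hypercube: stratifies each parameter axis into n bins.
        lhs = qmc.scale(qmc.LatinHypercube(d=2, seed=0).random(n), lo, hi)

        # Plain Monte Carlo with the same sample budget.
        mc = lo + (hi - lo) * rng.random((n, 2))

        print("LHS estimate:", response(lhs).mean())
        print("MC  estimate:", response(mc).mean())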

  3. The Osseus platform: a prototype for advanced web-based distributed simulation

    NASA Astrophysics Data System (ADS)

    Franceschini, Derrick; Riecken, Mark

    2016-05-01

    Recent technological advances in web-based distributed computing and database technology have made possible a deeper and more transparent integration of some modeling and simulation applications. Despite these advances towards true integration of capabilities, disparate systems, architectures, and protocols will remain in the inventory for some time to come. These disparities present interoperability challenges for distributed modeling and simulation whether the application is training, experimentation, or analysis. Traditional approaches call for building gateways to bridge between disparate protocols and retaining interoperability specialists. Challenges in reconciling data models also persist. These challenges and their traditional mitigation approaches directly contribute to higher costs, schedule delays, and frustration for the end users. Osseus is a prototype software platform originally funded as a research project by the Defense Modeling & Simulation Coordination Office (DMSCO) to examine interoperability alternatives using modern, web-based technology and taking inspiration from the commercial sector. Osseus provides tools and services for nonexpert users to connect simulations, targeting the time and skillset needed to successfully connect disparate systems. The Osseus platform presents a web services interface to allow simulation applications to exchange data using modern techniques efficiently over Local or Wide Area Networks. Further, it provides Service Oriented Architecture capabilities such that finer granularity components such as individual models can contribute to simulation with minimal effort.

  4. The role of numerical simulation for the development of an advanced HIFU system

    NASA Astrophysics Data System (ADS)

    Okita, Kohei; Narumi, Ryuta; Azuma, Takashi; Takagi, Shu; Matumoto, Yoichiro

    2014-10-01

    High-intensity focused ultrasound (HIFU) has been used clinically, and is under clinical trials, to treat various diseases. An advanced HIFU system employs ultrasound techniques for guidance during HIFU treatment, instead of the magnetic resonance imaging used in current HIFU systems. HIFU beam imaging for monitoring the HIFU beam, and localized motion imaging for treatment validation of tissue, are briefly introduced as real-time ultrasound monitoring techniques. Numerical simulations have a great impact on the development of real-time ultrasound monitoring, as well as on the improvement of the safety and efficacy of treatment, in advanced HIFU systems. A HIFU simulator was developed to reproduce ultrasound propagation through the body in consideration of the elasticity of tissue, and was validated by comparison with in vitro experiments in which the ultrasound emitted from a phased-array transducer propagates through an acrylic plate acting as a bone phantom. As a result, the defocus and distortion of the ultrasound propagating through the acrylic plate in the simulation quantitatively agree with the experimental results. Therefore, the HIFU simulator accurately reproduces ultrasound propagation through a medium whose shape and physical properties are well known. In addition, it is experimentally confirmed that simulation-assisted focus control of the phased-array transducer enables efficient assignment of the focus to the target. Simulation-assisted focus control can contribute to the design of transducers and to treatment planning.

  5. Monte Carlo Simulations in Statistical Physics -- From Basic Principles to Advanced Applications

    NASA Astrophysics Data System (ADS)

    Janke, Wolfhard

    2013-08-01

    This chapter starts with an overview of Monte Carlo computer simulation methodologies which are illustrated for the simple case of the Ising model. After reviewing importance sampling schemes based on Markov chains and standard local update rules (Metropolis, Glauber, heat-bath), nonlocal cluster-update algorithms are explained which drastically reduce the problem of critical slowing down at second-order phase transitions and thus improve the performance of simulations. How this can be quantified is explained in the section on statistical error analyses of simulation data including the effect of temporal correlations and autocorrelation times. Histogram reweighting methods are explained in the next section. Eventually, more advanced generalized ensemble methods (simulated and parallel tempering, multicanonical ensemble, Wang-Landau method) are discussed which are particularly important for simulations of first-order phase transitions and, in general, of systems with rare-event states. The setup of scaling and finite-size scaling analyses is the content of the following section. The chapter concludes with two advanced applications to complex physical systems. The first example deals with a quenched, diluted ferromagnet, and in the second application we consider the adsorption properties of macromolecules such as polymers and proteins to solid substrates. Such systems often require especially tailored algorithms for their efficient and successful simulation.
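
    To ground the overview, here is a minimal sketch of the local Metropolis single-spin-flip update for the 2D Ising model, the baseline the chapter starts from; the cluster updates, reweighting, and generalized-ensemble methods it goes on to discuss are beyond this illustration.

        # Minimal Metropolis single-spin-flip sampler for the 2D Ising model,
        # illustrating the local-update baseline discussed above.
        import numpy as np

        rng = np.random.default_rng(1)
        L, beta, sweeps = 32, 0.44, 200   # lattice size, inverse temperature, sweeps
        spins = rng.choice([-1, 1], size=(L, L))

        for _ in range(sweeps):
            for _ in range(L * L):        # one sweep = L*L attempted flips
                i, j = rng.integers(L, size=2)
                # Energy change of flipping spin (i, j), periodic boundaries.
                nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                      + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
                dE = 2 * spins[i, j] * nn
                if dE <= 0 or rng.random() < np.exp(-beta * dE):
                    spins[i, j] *= -1

        print("magnetization per spin:", abs(spins.mean()))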

  6. FY05-FY06 Advanced Simulation and Computing Implementation Plan, Volume 2

    SciTech Connect

    Baron, A L

    2004-07-19

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the safety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program will require the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapon design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile life extension programs and the resolution of significant finding investigations (SFIs). This requires a balanced system of technical staff, hardware, simulation software, and computer science solutions.

  7. In-silico simulations of advanced drug delivery systems: what will the future offer?

    PubMed

    Siepmann, Juergen

    2013-09-15

    This commentary enlarges on some of the topics addressed in the Position Paper "Towards more effective advanced drug delivery systems" by Crommelin and Florence (2013). Inter alia, the role of mathematical modeling and computer-assisted device design is briefly addressed in the Position Paper. This emerging and particularly promising field is considered in more depth in this commentary. In fact, in-silico simulations have become of fundamental importance in numerous scientific and related domains, allowing for a better understanding of various phenomena and for facilitated device design. The development of novel prototypes of space shuttles, nuclear power plants, and automobiles are just a few examples. In-silico simulations are nowadays also well established in the field of pharmacokinetics/pharmacodynamics (PK/PD) and have become an integral part of the discovery and development process of novel drug products. Since Takeru Higuchi published his seminal equation in 1961, the use of mathematical models for the analysis and optimization of drug delivery systems in vitro has also become more and more popular. However, applying in-silico simulations for facilitated optimization of advanced drug delivery systems is not yet common practice. One of the reasons is the gap between in vitro and in vivo (PK/PD) simulations. In the future it can be expected that this gap will be closed and that computer-assisted device design will play a central role in the research on, and development of, advanced drug delivery systems.
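
    For context, Higuchi's equation models cumulative release per unit area from a planar matrix as Q(t) = sqrt(D t (2 c0 - cs) cs), which reduces to the familiar square-root-of-time law Q(t) = kH sqrt(t); the sketch below uses illustrative parameter values, not data from the commentary.

        # Higuchi square-root-of-time release model: cumulative drug released
        # per unit area from a planar matrix with initial loading c0 well
        # above the drug solubility cs. Parameter values are illustrative.
        import math

        def higuchi_release(t_hours, D=1e-6, c0=50.0, cs=1.0):
            """Cumulative release per unit area (mg/cm^2) after t_hours.
            D in cm^2/h, concentrations in mg/cm^3."""
            return math.sqrt(D * t_hours * (2 * c0 - cs) * cs)

        # Release grows with sqrt(t): quadrupling t doubles Q.
        for t in (1, 4, 9, 16):
            print(f"t = {t:2d} h: Q = {higuchi_release(t):.4f} mg/cm^2")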

  8. A Flight Dynamic Simulation Program in Air-Path Axes Using ACSL (Advanced Continuous Simulation Language).

    DTIC Science & Technology

    1986-06-01

    Excerpt (OCR-recovered): A Flight Dynamic Simulation Program in Air-Path Axes Using ACSL (Advanced Continuous Simulation Language), Aeronautical Research Laboratories, P.O. Box 4331, Melbourne, Victoria 3001, Australia... of time-dependent results. The DERIVATIVE section contains the six-degree-of-freedom flight model. Trimming of the air...

  9. Simulation Neurotechnologies for Advancing Brain Research: Parallelizing Large Networks in NEURON.

    PubMed

    Lytton, William W; Seidenstein, Alexandra H; Dura-Bernal, Salvador; McDougal, Robert A; Schürmann, Felix; Hines, Michael L

    2016-10-01

    Large multiscale neuronal network simulations are of increasing value as more big data are gathered about brain wiring and organization under the auspices of a current major research initiative, such as Brain Research through Advancing Innovative Neurotechnologies. The development of these models requires new simulation technologies. We describe here the current use of the NEURON simulator with message passing interface (MPI) for simulation in the domain of moderately large networks on commonly available high-performance computers (HPCs). We discuss the basic layout of such simulations, including the methods of simulation setup, the run-time spike-passing paradigm, and postsimulation data storage and data management approaches. Using the Neuroscience Gateway, a portal for computational neuroscience that provides access to large HPCs, we benchmark simulations of neuronal networks of different sizes (500-100,000 cells), and using different numbers of nodes (1-256). We compare three types of networks, composed of either Izhikevich integrate-and-fire neurons (I&F), single-compartment Hodgkin-Huxley (HH) cells, or a hybrid network with half of each. Results show simulation run time increased approximately linearly with network size and decreased almost linearly with the number of nodes. Networks with I&F neurons were faster than HH networks, although differences were small since all tested cells were point neurons with a single compartment.
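
    A minimal sketch of the gid-based spike-passing setup described above, using NEURON's ParallelContext with MPI; the network here (a ring of single-compartment Hodgkin-Huxley cells with hypothetical sizes and weights) is our own illustration, not one of the benchmarked models.

        # Minimal NEURON + MPI sketch of the gid-based spike-exchange paradigm
        # described above: a ring of single-compartment Hodgkin-Huxley cells.
        # Run with e.g. `mpiexec -n 4 python ring.py`. Sizes are illustrative.
        from neuron import h
        h.load_file("stdrun.hoc")

        pc = h.ParallelContext()
        rank, nhost = int(pc.id()), int(pc.nhost())
        ncell = 100
        cells, netcons = {}, []

        # Round-robin distribution: each rank owns gids with gid % nhost == rank.
        for gid in range(rank, ncell, nhost):
            soma = h.Section(name=f"soma_{gid}")
            soma.insert("hh")
            pc.set_gid2node(gid, rank)                       # register owner
            nc = h.NetCon(soma(0.5)._ref_v, None, sec=soma)  # spike source
            nc.threshold = -20
            pc.cell(gid, nc)                                 # associate gid, source
            cells[gid] = soma

        # Connect each cell to its ring neighbor; the source may be off-rank.
        for gid in cells:
            syn = h.ExpSyn(cells[gid](0.5))
            nc = pc.gid_connect((gid - 1) % ncell, syn)
            nc.weight[0], nc.delay = 0.01, 1.0
            netcons.append((syn, nc))

        # Kick the ring with a current pulse into cell 0 (if local).
        if 0 in cells:
            stim = h.IClamp(cells[0](0.5))
            stim.delay, stim.dur, stim.amp = 1.0, 5.0, 0.5

        pc.set_maxstep(10)   # allowed MPI spike-exchange interval (ms)
        h.stdinit()
        pc.psolve(100.0)     # integrate to 100 ms with spike exchange
        pc.barrier()
        if rank == 0:
            print("done")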

  10. Advances in POST2 End-to-End Descent and Landing Simulation for the ALHAT Project

    NASA Technical Reports Server (NTRS)

    Davis, Jody L.; Striepe, Scott A.; Maddock, Robert W.; Hines, Glenn D.; Paschall, Stephen, II; Cohanim, Babak E.; Fill, Thomas; Johnson, Michael C.; Bishop, Robert H.; DeMars, Kyle J.; Sostaric, Ronald R.; Johnson, Andrew E.

    2008-01-01

    Program to Optimize Simulated Trajectories II (POST2) is used as a basis for an end-to-end descent and landing trajectory simulation that is essential in determining design and integration capability and system performance of the lunar descent and landing system and environment models for the Autonomous Landing and Hazard Avoidance Technology (ALHAT) project. The POST2 simulation provides a six degree-of-freedom capability necessary to test, design and operate a descent and landing system for successful lunar landing. This paper presents advances in the development and model-implementation of the POST2 simulation, as well as preliminary system performance analysis, used for the testing and evaluation of ALHAT project system models.

  11. Advanced Initiatives in Medical Simulation, 3rd Annual Conference to Create Awareness of Medical Simulation

    DTIC Science & Technology

    2006-06-30

    Excerpt (OCR-recovered): ...expertise in psychomotor skills. That understanding makes it possible to predict which measures distinguish among levels of expertise. With a... students have "virtual mentors" that tell them whenever they make an error. Most simulators focus on psychomotor skills, but they need to also assess and... features at which the student is looking, to assess the student's judgment. Hand motions can be monitored to quantify psychomotor skills during the...

  12. Advanced Simulation & Computing FY15 Implementation Plan Volume 2, Rev. 0.5

    SciTech Connect

    McCoy, Michel; Archer, Bill; Matzen, M. Keith

    2014-09-16

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources that support annual stockpile assessment and certification, study advanced nuclear weapons design and manufacturing processes, analyze accident scenarios and weapons aging, and provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balance of resources, including technical staff, hardware, simulation software, and computer science solutions. As the program approaches the end of its second decade, ASC is intently focused on increasing predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (sufficient resolution, dimensionality, and scientific details), quantify critical margins and uncertainties, and resolve increasingly difficult analyses needed for the SSP. Where possible, the program also enables the use of high-performance simulation and computing tools to address broader national security needs, such as foreign nuclear weapon assessments and counternuclear terrorism.

  13. CAPE-OPEN Integration for Advanced Process Engineering Co-Simulation

    SciTech Connect

    Zitney, S.E.

    2006-11-01

    This paper highlights the use of the CAPE-OPEN (CO) standard interfaces in the Advanced Process Engineering Co-Simulator (APECS) developed at the National Energy Technology Laboratory (NETL). The APECS system uses the CO unit operation, thermodynamic, and reaction interfaces to provide its plug-and-play co-simulation capabilities, including the integration of process simulation with computational fluid dynamics (CFD) simulation. APECS also relies heavily on the use of a CO COM/CORBA bridge for running process/CFD co-simulations on multiple operating systems. For process optimization in the face of multiple and sometimes conflicting objectives, APECS offers stochastic modeling and multi-objective optimization capabilities developed to comply with the CO software standard. At NETL, system analysts are applying APECS to a wide variety of advanced power generation systems, ranging from small fuel cell systems to commercial-scale power plants including the coal-fired, gasification-based FutureGen power and hydrogen production plant.

  14. Simulation Study of Injection Performance for the Advanced Photon Source Upgrade

    SciTech Connect

    Xiao, A.; Sajaev, V.

    2015-01-01

    A vertical on-axis injection scheme has been proposed for the hybrid seven-bend-achromat (H7BA) [1] Advanced Photon Source upgrade (APSU) lattice. In order to evaluate the injection performance, various errors, such as injection beam jitter, optical mismatch and errors, and injection element errors, have been investigated and their significance assessed. Injection efficiency is then simulated under different error levels. Based on these simulation results, specifications and an error budget for individual systems have been defined.

  15. Advanced Simulation and Computing Fiscal Year 2011-2012 Implementation Plan, Revision 0

    SciTech Connect

    McCoy, M; Phillips, J; Hopson, J; Meisner, R

    2010-04-22

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering (D&E) programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model

  16. Advanced Simulation and Computing FY08-09 Implementation Plan Volume 2 Revision 0

    SciTech Connect

    McCoy, M; Kusnezov, D; Bickel, T; Hopson, J

    2007-04-25

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the safety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future nonnuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear-weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable Stockpile Life Extension Programs (SLEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining the support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from one

  17. Advanced Simulation and Computing FY07-08 Implementation Plan Volume 2

    SciTech Connect

    Kusnezov, D; Hale, A; McCoy, M; Hopson, J

    2006-06-22

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the safety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future nonnuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program will require the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear-weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable Stockpile Life Extension Programs (SLEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining the support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from

  18. Advanced Simulation & Computing FY09-FY10 Implementation Plan Volume 2, Rev. 0

    SciTech Connect

    Meisner, R; Perry, J; McCoy, M; Hopson, J

    2008-04-30

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the safety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future nonnuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC)1 is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear-weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable Stockpile Life Extension Programs (SLEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining the support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from one

  19. Advanced Simulation and Computing FY09-FY10 Implementation Plan, Volume 2, Revision 0.5

    SciTech Connect

    Meisner, R; Hopson, J; Peery, J; McCoy, M

    2008-10-07

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC)1 is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from one

  20. Advanced Simulation and Computing FY08-09 Implementation Plan, Volume 2, Revision 0.5

    SciTech Connect

    Kusnezov, D; Bickel, T; McCoy, M; Hopson, J

    2007-09-13

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC)1 is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear-weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable Stockpile Life Extension Programs (SLEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining the support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from

  1. Advanced Simulation and Computing FY09-FY10 Implementation Plan Volume 2, Rev. 1

    SciTech Connect

    Kissel, L

    2009-04-01

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from one that

  2. Advanced Simulation and Computing FY10-FY11 Implementation Plan Volume 2, Rev. 0.5

    SciTech Connect

    Meisner, R; Peery, J; McCoy, M; Hopson, J

    2009-09-08

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering (D&E) programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model

  3. Advanced Methodology for Simulation of Complex Flows Using Structured Grid Systems

    NASA Technical Reports Server (NTRS)

    Steinthorsson, Erlendur; Modiano, David

    1995-01-01

    Detailed simulations of viscous flows in complicated geometries pose a significant challenge to current capabilities of Computational Fluid Dynamics (CFD). To enable routine application of CFD to this class of problems, advanced methodologies are required that employ (a) automated grid generation, (b) adaptivity, (c) accurate discretizations and efficient solvers, and (d) advanced software techniques. Each of these ingredients contributes to increased accuracy, efficiency (in terms of human effort and computer time), and/or reliability of CFD software. In the long run, methodologies employing structured grid systems will remain a viable choice for routine simulation of flows in complex geometries only if genuinely automatic grid generation techniques for structured grids can be developed and if adaptivity is employed more routinely. More research in both these areas is urgently needed.

  4. Advanced Distributed Simulation Technology II Global Positioning System Interactive Simulation (GPS DIS) Experiment

    DTIC Science & Technology

    2007-11-02

    Excerpt (OCR-recovered): ...utilized were the Single Channel Ground and Airborne Radio System (SINCGARS) Radio Emulator (SRE), the ASTi Radio, and the Tactical Internet Model (TIM)... SGIs at the MWTB and ASTi radios at Ft. Rucker. (Approved for public release; distribution is unlimited. ADST-II-CDRL-GPSDIS-9800018A)

  5. Overview of the Consortium for the Advanced Simulation of Light Water Reactors (CASL)

    NASA Astrophysics Data System (ADS)

    Kulesza, Joel A.; Franceschini, Fausto; Evans, Thomas M.; Gehin, Jess C.

    2016-02-01

    The Consortium for Advanced Simulation of Light Water Reactors (CASL) was established in July 2010 for the purpose of providing advanced modeling and simulation solutions for commercial nuclear reactors. The primary goal is to provide coupled, higher-fidelity, usable modeling and simulation capabilities than are currently available. These are needed to address light water reactor (LWR) operational and safety performance-defining phenomena that are not yet able to be fully modeled taking a first-principles approach. In order to pursue these goals, CASL has participation from laboratory, academic, and industry partners. These partners are pursuing the solution of ten major "Challenge Problems" in order to advance the state-of-the-art in reactor design and analysis to permit power uprates, higher burnup, life extension, and increased safety. At present, the problems being addressed by CASL are primarily reactor physics-oriented; however, this paper is intended to introduce CASL to the reactor dosimetry community because of the importance of reactor physics modelling and nuclear data to define the source term for that community and the applicability and extensibility of the transport methods being developed.

  6. The SCEC Petascale Cyberfacility for Physics-based Seismic Hazard Analysis (PetaSHA): Accelerating SCEC Research Using High Performance Computing

    NASA Astrophysics Data System (ADS)

    Maechling, P. J.; Jordan, T. H.; Kesselman, C.; Moore, R.; Minster, J. B.; SCEC Collaboration

    2007-12-01

    The SCEC Community Modeling Environment (SCEC/CME) collaboration is extending SCEC's program of seismic hazard research using high performance computing with the NSF-funded Petascale Cyberfacility for Physics-based Seismic Hazard Analysis (PetaSHA) Project. The SCEC PetaSHA project is a collaboration of geoscientists and computer scientists who integrate geophysical numerical modeling codes with leading-edge cyberinfrastructure to perform seismic hazard research at large scale and high resolution using national academic supercomputing facilities. The PetaSHA computational capabilities are organized around the development of robust, re-usable, well-validated simulation systems we call computational platforms. Researchers on the PetaSHA Project are currently developing the DynaShake Platform (dynamic rupture simulations), the TeraShake Platform (wave propagation simulations), the CyberShake Platform (physics-based probabilistic seismic hazard analysis), the BroadBand Platform (deterministic and stochastic modeling of high-frequency synthetic waveforms), and the Full 3D Tomography (F3DT) Platform (improvements in structural representations), as well as using and extending the OpenSHA Platform (probabilistic seismic hazard analysis). We will describe several current PetaSHA research projects, including the application of the DynaShake Platform to dynamic rupture modeling of the ShakeOut source; the use of the TeraShake Platform, including the URS-Graves, SDSU-Olsen, and CMU-Hercules anelastic wave propagation codes, to model 1 Hz ShakeOut simulations; the use of the CyberShake Platform to investigate physics-based PSHA hazard curves; and the use of the F3DT Platform to produce an improved structural model for a large region in southern California.
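
    At its core, the physics-based PSHA computation that the CyberShake Platform performs reduces to combining per-rupture exceedance probabilities into an annual hazard curve; the sketch below shows that reduction under the standard Poisson assumption, with hypothetical rupture rates and simulated ground motions (in real CyberShake runs the exceedance probabilities come from physics-based waveform simulations).

        # Toy probabilistic seismic hazard curve: annual probability that
        # ground motion at a site exceeds each level, combining independent
        # ruptures under the usual Poisson assumption. Data are hypothetical.
        import numpy as np

        # (annual rate, simulated ground motions at the site in g) per rupture
        ruptures = [
            (0.010,  np.array([0.05, 0.08, 0.12, 0.07])),
            (0.002,  np.array([0.20, 0.35, 0.28, 0.18])),
            (0.0005, np.array([0.55, 0.40, 0.70, 0.62])),
        ]

        levels = np.logspace(-2, 0, 50)  # 0.01 g .. 1 g
        total_rate = np.zeros_like(levels)
        for rate, gms in ruptures:
            # P(exceed level | rupture), estimated from simulated realizations
            p_exceed = (gms[None, :] > levels[:, None]).mean(axis=1)
            total_rate += rate * p_exceed

        annual_prob = 1.0 - np.exp(-total_rate)  # Poissonian exceedance
        for lv, p in zip(levels[::10], annual_prob[::10]):
            print(f"{lv:6.3f} g : P(exceed)/yr = {p:.2e}")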

  7. Impact of an Advanced Cardiac Life Support Simulation Laboratory Experience on Pharmacy Student Confidence and Knowledge

    PubMed Central

    Mohorn, Phillip L.; Haney, Jason S.; Phillips, Cynthia M.; Lu, Z. Kevin; Clark, Kimberly; Corboy, Alex; Ragucci, Kelly R.

    2016-01-01

    Objective. To assess the impact of an advanced cardiac life support (ACLS) simulation on pharmacy student confidence and knowledge. Design. Third-year pharmacy students participated in a simulation experience that consisted of team roles training, high-fidelity ACLS simulations, and debriefing. Students completed a pre/postsimulation confidence and knowledge assessment. Assessment. Overall, student knowledge assessment scores and student confidence scores improved significantly. Student confidence and knowledge changes from baseline were not significantly correlated. Conversely, a significant, weak positive correlation between presimulation studying and both presimulation confidence and presimulation knowledge was discovered. Conclusions. Overall, student confidence and knowledge assessment scores in ACLS significantly improved from baseline; however, student confidence and knowledge were not significantly correlated. PMID:27899836

  8. Technical Basis for Physical Fidelity of NRC Control Room Training Simulators for Advanced Reactors

    SciTech Connect

    Minsk, Brian S.; Branch, Kristi M.; Bates, Edward K.; Mitchell, Mark R.; Gore, Bryan F.; Faris, Drury K.

    2009-10-09

    The objective of this study is to determine how simulator physical fidelity influences the effectiveness of training the regulatory personnel responsible for examination and oversight of operating personnel and inspection of technical systems at nuclear power reactors. It seeks to contribute to the U.S. Nuclear Regulatory Commission’s (NRC’s) understanding of the physical fidelity requirements of training simulators. The goal of the study is to provide an analytic framework, data, and analyses that inform NRC decisions about the physical fidelity requirements of the simulators it will need to train its staff for assignment at advanced reactors. These staff are expected to come from increasingly diverse educational and experiential backgrounds.

  9. Impact of an Advanced Cardiac Life Support Simulation Laboratory Experience on Pharmacy Student Confidence and Knowledge.

    PubMed

    Maxwell, Whitney D; Mohorn, Phillip L; Haney, Jason S; Phillips, Cynthia M; Lu, Z Kevin; Clark, Kimberly; Corboy, Alex; Ragucci, Kelly R

    2016-10-25

    Objective. To assess the impact of an advanced cardiac life support (ACLS) simulation on pharmacy student confidence and knowledge. Design. Third-year pharmacy students participated in a simulation experience that consisted of team roles training, high-fidelity ACLS simulations, and debriefing. Students completed a pre/postsimulation confidence and knowledge assessment. Assessment. Overall, student knowledge assessment scores and student confidence scores improved significantly. Student confidence and knowledge changes from baseline were not significantly correlated. Conversely, a significant, weak positive correlation between presimulation studying and both presimulation confidence and presimulation knowledge was discovered. Conclusions. Overall, student confidence and knowledge assessment scores in ACLS significantly improved from baseline; however, student confidence and knowledge were not significantly correlated.

  10. Do Advance Yield Markings Increase Safe Driver Behaviors at Unsignalized, Marked Midblock Crosswalks? Driving Simulator Study

    PubMed Central

    Gómez, Radhameris A.; Samuel, Siby; Gerardino, Luis Roman; Romoser, Matthew R. E.; Collura, John; Knodler, Michael; Fisher, Donald L.

    2012-01-01

    In the United States, 78% of pedestrian crashes occur at nonintersection crossings. As a result, unsignalized, marked midblock crosswalks are prime targets for remediation. Many of these crashes occur under sight-limited conditions in which the view of critical information by the driver or pedestrian is obstructed by a vehicle stopped in an adjacent travel or parking lane on the near side of the crosswalk. Study of such a situation on the open road is much too risky, but study of the situation in a driving simulator is not. This paper describes the development of scenarios with sight limitations to compare potential vehicle–pedestrian conflicts on a driving simulator under conditions with two different types of pavement markings. Under the first condition, advance yield markings and symbol signs (prompts) that indicated “yield here to pedestrians” were used to warn drivers of pedestrians at marked, midblock crosswalks. Under the second condition, standard crosswalk treatments and prompts were used to warn drivers of these hazards. Actual crashes as well as the drivers' point of gaze were measured to determine if the drivers approaching a marked midblock crosswalk looked for pedestrians in the crosswalk more frequently and sooner in high-risk scenarios when advance yield markings and prompts were present than when standard markings and prompts were used. Fewer crashes were found to occur with advance yield markings. Drivers were also found to look for pedestrians much more frequently and much sooner with advance yield markings. The advantages and limitations of the use of driving simulation to study problems such as these are discussed. PMID:23082040

  11. Do Advance Yield Markings Increase Safe Driver Behaviors at Unsignalized, Marked Midblock Crosswalks? Driving Simulator Study.

    PubMed

    Gómez, Radhameris A; Samuel, Siby; Gerardino, Luis Roman; Romoser, Matthew R E; Collura, John; Knodler, Michael; Fisher, Donald L

    2011-01-01

    In the United States, 78% of pedestrian crashes occur at nonintersection crossings. As a result, unsignalized, marked midblock crosswalks are prime targets for remediation. Many of these crashes occur under sight-limited conditions in which the view of critical information by the driver or pedestrian is obstructed by a vehicle stopped in an adjacent travel or parking lane on the near side of the crosswalk. Study of such a situation on the open road is much too risky, but study of the situation in a driving simulator is not. This paper describes the development of scenarios with sight limitations to compare potential vehicle-pedestrian conflicts on a driving simulator under conditions with two different types of pavement markings. Under the first condition, advance yield markings and symbol signs (prompts) that indicated "yield here to pedestrians" were used to warn drivers of pedestrians at marked, midblock crosswalks. Under the second condition, standard crosswalk treatments and prompts were used to warn drivers of these hazards. Actual crashes as well as the drivers' point of gaze were measured to determine if the drivers approaching a marked midblock crosswalk looked for pedestrians in the crosswalk more frequently and sooner in high-risk scenarios when advance yield markings and prompts were present than when standard markings and prompts were used. Fewer crashes were found to occur with advance yield markings. Drivers were also found to look for pedestrians much more frequently and much sooner with advance yield markings. The advantages and limitations of the use of driving simulation to study problems such as these are discussed.

  12. Retention of Advanced Cardiac Life Support Knowledge and Skills Following High-Fidelity Mannequin Simulation Training

    PubMed Central

    Sen, Sanchita; Finn, Laura A.; Cawley, Michael J.

    2015-01-01

    Objective. To assess pharmacy students' ability to retain advanced cardiac life support (ACLS) knowledge and skills within 120 days of previous high-fidelity mannequin simulation training. Design. Students were randomly assigned to rapid response teams of 5-6. ACLS skills and mannequin survival were compared between teams in which some members had simulation training 120 days earlier and teams with no previous training. Assessment. A checklist was used to record and assess performance in the simulations. Teams with previous simulation training (n=10) demonstrated numerical superiority to teams without previous training (n=12) for 6 out of 8 (75%) ACLS skills observed, including time calculating accurate vasopressor infusion rate (83 sec vs 113 sec; p=0.01). Mannequin survival was 37% higher for teams who had previous simulation training, but this result was not significant (70% vs 33%; p=0.20). Conclusion. Teams with students who had previous simulation training demonstrated numerical superiority in ACLS knowledge and skill retention within 120 days of previous training compared to those who had no previous training. Future studies are needed to add to the current evidence of pharmacy students' and practicing pharmacists' ACLS knowledge and skill retention. PMID:25741028

  13. Mission simulation as an approach to develop requirements for automation in Advanced Life Support Systems

    NASA Technical Reports Server (NTRS)

    Erickson, J. D.; Eckelkamp, R. E.; Barta, D. J.; Dragg, J.; Henninger, D. L. (Principal Investigator)

    1996-01-01

    This paper examines mission simulation as an approach to develop requirements for automation and robotics for Advanced Life Support Systems (ALSS). The focus is on requirements and applications for command and control, control and monitoring, situation assessment and response, diagnosis and recovery, adaptive planning and scheduling, and other automation applications in addition to mechanized equipment and robotics applications to reduce the excessive human labor requirements to operate and maintain an ALSS. Based on principles of systems engineering, an approach is proposed to assess requirements for automation and robotics using mission simulation tools. First, the story of a simulated mission is defined in terms of processes with attendant types of resources needed, including options for use of automation and robotic systems. Next, systems dynamics models are used in simulation to reveal the implications for selected resource allocation schemes in terms of resources required to complete operational tasks. The simulations not only help establish ALSS design criteria, but also may offer guidance to ALSS research efforts by identifying gaps in knowledge about procedures and/or biophysical processes. Simulations of a planned one-year mission with 4 crewmembers in a Human Rated Test Facility are presented as an approach to evaluation of mission feasibility and definition of automation and robotics requirements.
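
    As an illustration of this approach, consider a toy stock-and-flow model in which stored food evolves under a crew-labor allocation, and sweeping an assumed automation level reveals whether the mission stays feasible. Every name, rate, and coefficient below is invented for illustration and is not from the paper.

```python
# Toy systems-dynamics sketch: crew labor is split between maintenance
# (reduced by automation), fixed operations, and food production; the stock
# of stored food shows whether a given automation level is sufficient.
def mission_food_balance(automation, days=365):
    """Return stored food (kg) at mission end for a given automation level."""
    food = 400.0                                 # kg, assumed initial stores
    for _ in range(days):
        maint_hours = 16.0 * (1.0 - automation)           # crew maintenance labor
        grow_hours = max(32.0 - maint_hours - 12.0, 0.0)  # 4 crew x 8 h, 12 h ops
        food += 0.2 * grow_hours                 # assumed kg produced per labor hour
        food -= 4 * 0.62                         # 4 crew eating 0.62 kg/person/day
    return food

for a in (0.2, 0.5, 0.8):
    print(f"automation {a:.1f}: food remaining at day 365 = "
          f"{mission_food_balance(a):6.1f} kg")
```

    Sweeping the automation parameter in this way is the sense in which such simulations turn resource-allocation assumptions into automation and robotics requirements.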

  14. A new paradigm for petascale Monte Carlo simulation: Replica exchange Wang-Landau sampling

    NASA Astrophysics Data System (ADS)

    Li, Ying Wai; Vogel, Thomas; Wüst, Thomas; Landau, David P.

    2014-05-01

    We introduce a generic, parallel Wang-Landau method that is naturally suited to implementation on massively parallel, petaflop supercomputers. The approach introduces a replica-exchange framework in which densities of states for overlapping sub-windows in energy space are determined iteratively by traditional Wang-Landau sampling. The advantages and general applicability of the method are demonstrated for several distinct systems that possess discrete or continuous degrees of freedom, including those with complex free energy landscapes and topological constraints.
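
    A minimal sketch of the replica-exchange idea on a toy system (a periodic 1D Ising chain, chosen so the code stays short): two Wang-Landau walkers sample overlapping energy windows, and a configuration swap is periodically attempted and accepted with the ratio of their density-of-states estimates. The window bounds and sweep count are illustrative, and the flatness test with the usual f -> f/2 schedule is omitted for brevity; none of these are the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16                                     # spins in a periodic 1D Ising chain

def energy(s):
    return -int(np.sum(s * np.roll(s, 1)))  # J = 1, periodic boundary

class Walker:
    """One Wang-Landau walker restricted to the energy window [emin, emax]."""
    def __init__(self, emin, emax):
        self.emin, self.emax, self.f = emin, emax, 1.0   # f: ln-modification factor
        self.lng = {}                      # ln g(E), built on the fly
        self.s = rng.choice([-1, 1], N)
        while not (emin <= energy(self.s) <= emax):      # start inside the window
            self.s = rng.choice([-1, 1], N)
        self.E = energy(self.s)

    def lng_at(self, E):
        return self.lng.get(E, 0.0)

    def step(self):
        i = rng.integers(N)
        Enew = self.E + 2 * self.s[i] * (self.s[i - 1] + self.s[(i + 1) % N])
        if self.emin <= Enew <= self.emax and rng.random() < np.exp(
                min(0.0, self.lng_at(self.E) - self.lng_at(Enew))):
            self.s[i] *= -1
            self.E = Enew
        self.lng[self.E] = self.lng_at(self.E) + self.f  # update ln g at current E

# two walkers on overlapping energy windows covering [-N, 0]
w1, w2 = Walker(-N, -N // 4), Walker(-N // 2, 0)
for sweep in range(20000):
    w1.step(); w2.step()
    # replica exchange when both states lie in the window overlap
    if sweep % 100 == 0 and w1.emin <= w2.E <= w1.emax \
                        and w2.emin <= w1.E <= w2.emax:
        lnp = (w1.lng_at(w1.E) - w1.lng_at(w2.E)
               + w2.lng_at(w2.E) - w2.lng_at(w1.E))
        if rng.random() < np.exp(min(0.0, lnp)):
            (w1.s, w1.E), (w2.s, w2.E) = (w2.s.copy(), w2.E), (w1.s.copy(), w1.E)

print(sorted(w1.lng.items()))   # ln g(E) over window 1, up to an additive constant
```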

  15. From Petascale to Exascale: Eight Focus Areas of R&D Challenges for HPC Simulation Environments

    SciTech Connect

    Springmeyer, R; Still, C; Schulz, M; Ahrens, J; Hemmert, S; Minnich, R; McCormick, P; Ward, L; Knoll, D

    2011-03-17

    Programming models bridge the gap between the underlying hardware architecture and the supporting layers of software available to applications. Programming models are different from both programming languages and application programming interfaces (APIs). Specifically, a programming model is an abstraction of the underlying computer system that allows for the expression of both algorithms and data structures. In comparison, languages and APIs provide implementations of these abstractions and allow the algorithms and data structures to be put into practice - a programming model exists independently of the choice of both the programming language and the supporting APIs. Programming models are typically focused on achieving increased developer productivity, performance, and portability to other system designs. The rapidly changing nature of processor architectures and the complexity of designing an exascale platform provide significant challenges for these goals. Several other factors are likely to impact the design of future programming models. In particular, the representation and management of increasing levels of parallelism, concurrency and memory hierarchies, combined with the ability to maintain a progressive level of interoperability with today's applications are of significant concern. Overall the design of a programming model is inherently tied not only to the underlying hardware architecture, but also to the requirements of applications and libraries including data analysis, visualization, and uncertainty quantification. Furthermore, the successful implementation of a programming model is dependent on exposed features of the runtime software layers and features of the operating system. Successful use of a programming model also requires effective presentation to the software developer within the context of traditional and new software development tools. Consideration must also be given to the impact of programming models on both languages and the associated compiler infrastructure. Exascale programming models must reflect several, often competing, design goals. These design goals include desirable features such as abstraction and separation of concerns. However, some aspects are unique to large-scale computing. For example, interoperability and composability with existing implementations will prove critical. In particular, performance is the essential underlying goal for large-scale systems. A key evaluation metric for exascale models will be the extent to which they support these goals rather than merely enable them.

  16. Advanced virtual energy simulation training and research: IGCC with CO2 capture power plant

    SciTech Connect

    Zitney, S.; Liese, E.; Mahapatra, P.; Bhattacharyya, D.; Provost, G.

    2011-01-01

    In this presentation, we highlight the deployment of a real-time dynamic simulator of an integrated gasification combined cycle (IGCC) power plant with CO2 capture at the Department of Energy's (DOE) National Energy Technology Laboratory's (NETL) Advanced Virtual Energy Simulation Training and Research (AVESTAR™) Center. The Center was established as part of the DOE's accelerating initiative to advance new clean coal technology for power generation. IGCC systems are an attractive technology option, generating low-cost electricity by converting coal and/or other fuels into a clean synthesis gas mixture in a process that is efficient and environmentally superior to conventional power plants. The IGCC dynamic simulator builds on, and reaches beyond, conventional power plant simulators to merge, for the first time, a 'gasification with CO2 capture' process simulator with a 'combined-cycle' power simulator. Fueled with coal, petroleum coke, and/or biomass, the gasification island of the simulated IGCC plant consists of two oxygen-blown, downward-fired, entrained-flow, slagging gasifiers with radiant syngas coolers and two-stage sour shift reactors, followed by a dual-stage acid gas removal process for CO2 capture. The combined cycle island consists of two F-class gas turbines, a steam turbine, and a heat recovery steam generator with three pressure levels. The dynamic simulator can be used for normal base-load operation, as well as plant start-up and shutdown. The real-time dynamic simulator also responds satisfactorily to process disturbances, feedstock blending and switchovers, fluctuations in ambient conditions, and power demand load shedding. In addition, the full-scope simulator handles a wide range of abnormal situations, including equipment malfunctions and failures, together with changes initiated through actions from plant field operators. By providing a comprehensive IGCC operator training system, the AVESTAR Center is poised to develop a

  17. Recent advances in 3D computed tomography techniques for simulation and navigation in hepatobiliary pancreatic surgery.

    PubMed

    Uchida, Masafumi

    2014-04-01

    A few years ago it could take several hours to complete a 3D image using a 3D workstation. Thanks to advances in computer science, obtaining results of interest now requires only a few minutes. Many recent 3D workstations or multimedia computers are equipped with onboard 3D virtual patient modeling software, which enables patient-specific preoperative assessment and virtual planning, navigation, and tool positioning. Although medical 3D imaging can now be conducted using various modalities, including computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), and ultrasonography (US) among others, the highest quality images are obtained using CT data, and CT images are now the most commonly used source of data for 3D simulation and navigation images. If the 2D source image is bad, no amount of 3D image manipulation in software will provide a quality 3D image. In this exhibition, recent advances in CT imaging techniques and 3D visualization of hepatobiliary and pancreatic abnormalities are featured, including scan and image reconstruction techniques, contrast-enhanced techniques, new applications of advanced CT scan techniques, and new virtual reality simulation and navigation imaging.

  18. The Advanced Modeling, Simulation and Analysis Capability Roadmap Vision for Engineering

    NASA Technical Reports Server (NTRS)

    Zang, Thomas; Lieber, Mike; Norton, Charles; Fucik, Karen

    2006-01-01

    This paper summarizes a subset of the Advanced Modeling Simulation and Analysis (AMSA) Capability Roadmap that was developed for NASA in 2005. The AMSA Capability Roadmap Team was chartered "to identify what is needed to enhance NASA's capabilities to produce leading-edge exploration and science missions by improving engineering system development, operations, and science understanding through broad application of advanced modeling, simulation and analysis techniques." The AMSA roadmap stressed the need for integration, not just within the science, engineering and operations domains themselves, but also across these domains. Here we discuss the roadmap element pertaining to integration within the engineering domain, with a particular focus on implications for future observatory missions. The AMSA products supporting the system engineering function are mission information, bounds on information quality, and system validation guidance. The Engineering roadmap element contains 5 sub-elements: (1) Large-Scale Systems Models, (2) Anomalous Behavior Models, (3) Advanced Uncertainty Models, (4) Virtual Testing Models, and (5) Space-Based Robotics Manufacture and Servicing Models.

  19. NWChem Meeting on Science Driven Petascale Computing and Capability Development at EMSL

    SciTech Connect

    De Jong, Wibe A.

    2007-02-19

    On January 25 and 26, 2007, an NWChem meeting was held that was attended by 65 scientists from 29 institutions, including 22 universities and 5 national laboratories. The goals of the meeting were to look at major scientific challenges that could be addressed by computational modeling in environmental molecular sciences, and to identify the associated capability development needs. In addition, insights were sought into petascale computing developments in computational chemistry. During the meeting, common themes were identified that will drive the need for the development of new or improved capabilities in NWChem. Crucial areas of development that the developers' team will be focusing on are (1) modeling of dynamics and kinetics in chemical transformations, (2) modeling of chemistry at interfaces and in the condensed phase, and (3) spanning longer time scales in biological processes modeled with molecular dynamics. Various computational chemistry methodologies were discussed during the meeting, which will provide the basis for the capability developments in the near or long term future of NWChem.

  20. DAG Software Architectures for Multi-Scale Multi-Physics Problems at Petascale and Beyond

    NASA Astrophysics Data System (ADS)

    Berzins, Martin

    2015-03-01

    The challenge of computations at Petascale and beyond is to ensure efficient calculations on potentially hundreds of thousands of cores or on large numbers of GPUs or Intel Xeon Phis. An important methodology for achieving this is at present thought to be that of asynchronous task-based parallelism. The success of this approach will be demonstrated using the Uintah software framework for the solution of coupled fluid-structure interaction problems with chemical reactions. The layered approach of this software makes it possible for the user to specify the physical problems without parallel code, and for that specification to be translated into a parallel set of tasks. These tasks are executed using a runtime system that executes tasks asynchronously and sometimes out-of-order. The scalability and portability of this approach will be demonstrated using examples from large-scale combustion problems, industrial detonations, and multi-scale, multi-physics models. The challenges of scaling such calculations to the next generations of leadership-class computers (with more than a hundred petaflops) will be discussed. Thanks to NSF, XSEDE, DOE NNSA, DOE NETL, DOE ALCC and DOE INCITE.
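
    A compact sketch of this execution model: tasks declare dependencies, any task whose inputs are complete is launched, and completions are consumed in whatever order they arrive, i.e. effectively out-of-order. The task names and thread-pool runtime below are illustrative stand-ins, not Uintah's API.

```python
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait

# Hypothetical task graph: name -> (dependencies, work function).
graph = {
    "read_mesh":  (set(),                        lambda: "mesh"),
    "init_fluid": ({"read_mesh"},                lambda: "fluid state"),
    "init_solid": ({"read_mesh"},                lambda: "solid state"),
    "advect":     ({"init_fluid"},               lambda: "advected"),
    "react":      ({"init_fluid", "init_solid"}, lambda: "reacted"),
    "output":     ({"advect", "react"},          lambda: "written"),
}

def run(graph, workers=4):
    done, running = set(), {}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        while len(done) < len(graph):
            # launch every task whose dependencies are already satisfied
            for name, (deps, fn) in graph.items():
                if name not in done and name not in running and deps <= done:
                    running[name] = pool.submit(fn)
            # consume completions as they arrive: completion order is not
            # submission order, so execution is effectively out-of-order
            finished, _ = wait(running.values(), return_when=FIRST_COMPLETED)
            for name in [n for n, f in running.items() if f in finished]:
                print(name, "->", running.pop(name).result())
                done.add(name)

run(graph)
```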

  1. Network-friendly one-sided communication through multinode cooperation on petascale cray xt5 systems

    SciTech Connect

    Tipparaju, Vinod; Que, Xinyu; Yu, Weikuan; Vetter, Jeffrey S

    2011-01-01

    One-sided communication is important to enable asynchronous communication and data movement for Global Address Space (GAS) programming models. Such communication is typically realized through direct messages between initiator and target processes. For petascale systems with 10,000s of nodes and 100,000s of cores, these direct messages require dedicated communication buffers and/or channels, which can lead to significant scalability challenges for GAS programming models. In this paper, we describe a network-friendly communication model, multinode cooperation, to enable indirect one-sided communication. Compute nodes work together to handle one-sided requests through (1) request forwarding, in which one node can intercept a request and forward it to a target node, and (2) request aggregation, in which one node can aggregate many requests to a target node. We have implemented multinode cooperation for a popular GAS runtime library, the Aggregate Remote Memory Copy Interface (ARMCI). Our experimental results on a large-scale Cray XT5 system demonstrate that multinode cooperation is able to greatly increase memory scalability by reducing the number of communication buffers. In addition, multinode cooperation improves the resiliency of the GAS runtime system to network contention. Furthermore, multinode cooperation can benefit the performance of scientific applications. In one case, it reduces the total execution time of an NWChem application by 52%.
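
    A conceptual sketch of the request-aggregation half of this model: small one-sided puts bound for the same target node are queued and shipped as one aggregated message, cutting the number of network messages and buffers required. The class name, threshold, and transport below are invented for illustration; this is not the ARMCI interface.

```python
from collections import defaultdict

MAX_BATCH = 4                         # flush after this many queued requests

class AggregatingPutChannel:
    def __init__(self, send_fn):
        self.send_fn = send_fn            # actual network send (one message)
        self.pending = defaultdict(list)  # target node -> queued (addr, data)

    def put(self, target, addr, data):
        self.pending[target].append((addr, data))
        if len(self.pending[target]) >= MAX_BATCH:
            self.flush(target)

    def flush(self, target=None):
        for t in [target] if target is not None else list(self.pending):
            if self.pending[t]:
                # one aggregated message instead of len(pending[t]) messages
                self.send_fn(t, self.pending.pop(t))

# toy transport that just counts the messages actually "sent"
sent = []
chan = AggregatingPutChannel(lambda t, batch: sent.append((t, len(batch))))
for i in range(10):
    chan.put(target=i % 2, addr=0x1000 + i, data=i)
chan.flush()
print(sent)   # [(0, 4), (1, 4), (0, 1), (1, 1)]: 4 messages carry 10 puts
```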

  2. Network-friendly one-sided communication through multinode cooperation on petascale cray xt5 systems

    SciTech Connect

    Tipparaju, Vinod; Que, Xinyu; Yu, Weikuan; Vetter, Jeffrey S

    2011-05-01

    One-sided communication is important to enable asynchronous communication and data movement for Global Address Space (GAS) programming models. Such communication is typically realized through direct messages between initiator and target processes. For petascale systems with 10,000s of nodes and 100,000s of cores, these direct messages require dedicated communication buffers and/or channels, which can lead to significant scalability challenges for GAS programming models. In this paper, we describe a network-friendly communication model, multinode cooperation, to enable indirect one-sided communication. Compute nodes work together to handle one-sided requests through (1) request forwarding, in which one node can intercept a request and forward it to a target node, and (2) request aggregation, in which one node can aggregate many requests to a target node. We have implemented multinode cooperation for a popular GAS runtime library, the Aggregate Remote Memory Copy Interface (ARMCI). Our experimental results on a large-scale Cray XT5 system demonstrate that multinode cooperation is able to greatly increase memory scalability by reducing the number of communication buffers. In addition, multinode cooperation improves the resiliency of the GAS runtime system to network contention. Furthermore, multinode cooperation can benefit the performance of scientific applications. In one case, it reduces the total execution time of an NWChem application by 52%.

  3. ADVANCED UTILITY SIMULATION MODEL, REPORT OF SENSITIVITY TESTING, CALIBRATION, AND MODEL OUTPUT COMPARISONS (VERSION 3.0)

    EPA Science Inventory

    The report gives results of activities relating to the Advanced Utility Simulation Model (AUSM): sensitivity testing. comparison with a mature electric utility model, and calibration to historical emissions. The activities were aimed at demonstrating AUSM's validity over input va...

  4. Large eddy simulation of unsteady wind farm behavior using advanced actuator disk models

    NASA Astrophysics Data System (ADS)

    Moens, Maud; Duponcheel, Matthieu; Winckelmans, Gregoire; Chatelain, Philippe

    2014-11-01

    The present project aims at improving the level of fidelity of unsteady wind-farm-scale simulations through an effort on the representation and the modeling of the rotors. The chosen tool for the simulations is a fourth-order finite difference code, developed at Universite catholique de Louvain; this solver implements Large Eddy Simulation (LES) approaches. The wind turbines are modeled as advanced actuator disks: these disks are coupled with the Blade Element Momentum method (BEM method) and also take into account the turbine dynamics and controller. A special effort is made here to reproduce the specific wake behaviors. Wake decay and expansion are indeed initially governed by vortex instabilities, information that cannot be obtained from the BEM calculations. We thus aim at achieving this by matching the large scales of the actuator disk flow to high-fidelity wake simulations produced using a Vortex Particle-Mesh method; this is obtained by adding a controlled excitation at the disk. We apply this tool to the investigation of atmospheric turbulence effects on the power production and on the wake behavior at the wind farm level. A turbulent velocity field is then used as inflow boundary condition for the simulations. We gratefully acknowledge the support of GDF Suez for the fellowship of Mrs Maud Moens.

  5. Simulation for Supporting Scale-Up of a Fluidized Bed Reactor for Advanced Water Oxidation

    PubMed Central

    Abdul Raman, Abdul Aziz; Daud, Wan Mohd Ashri Wan

    2014-01-01

    Simulation of a fluidized bed reactor (FBR) was accomplished for treating wastewater using the Fenton reaction, which is an advanced oxidation process (AOP). The simulation was performed to determine characteristics of FBR performance, the concentration profile of the contaminants, and various prominent hydrodynamic properties (e.g., Reynolds number, velocity, and pressure) in the reactor. The simulation was implemented for a 2.8 L working volume using hydrodynamic correlations, the continuity equation, and simplified kinetic information for phenols degradation as a model. The simulation shows that, by using Fe3+ and Fe2+ mixtures as catalyst, TOC degradation up to 45% was achieved for a contaminant range of 40–90 mg/L within 60 min. The concentration profiles and hydrodynamic characteristics were also generated. A subsequent scale-up study was also conducted using the similitude method. The analysis shows that the models developed are applicable up to a 10 L working volume. The study proves that, using appropriate modeling and simulation, data can be predicted for designing and operating an FBR for wastewater treatment. PMID:25309949
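
    The simplified kinetics referred to above are typically a pseudo-first-order rate law; a minimal sketch, with a made-up rate constant chosen so that it reproduces roughly the 45% TOC removal in 60 min reported here, is:

```python
import numpy as np

k = 0.010                        # 1/min, assumed pseudo-first-order constant
t = np.linspace(0.0, 60.0, 61)   # reaction time, minutes
TOC0 = 90.0                      # mg/L, upper end of the contaminant range

TOC = TOC0 * np.exp(-k * t)      # C(t) = C0 * exp(-k*t)
print(f"TOC removal after {t[-1]:.0f} min: {1.0 - TOC[-1] / TOC0:.0%}")  # ~45%
```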

  6. The Nuclear Energy Advanced Modeling and Simulation Enabling Computational Technologies FY09 Report

    SciTech Connect

    Diachin, L F; Garaizar, F X; Henson, V E; Pope, G

    2009-10-12

    In this document we report on the status of the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Enabling Computational Technologies (ECT) effort. In particular, we provide the context for ECT in the broader NEAMS program and describe the three pillars of the ECT effort, namely, (1) tools and libraries, (2) software quality assurance, and (3) computational facility (computers, storage, etc.) needs. We report on our FY09 deliverables to determine the needs of the integrated performance and safety codes (IPSCs) in these three areas and lay out the general plan for software quality assurance to meet the requirements of DOE and the DOE Advanced Fuel Cycle Initiative (AFCI). We conclude with a brief description of our interactions with the Idaho National Laboratory computer center to determine what is needed to expand their role as a NEAMS user facility.

  7. [Objective surgery -- advanced robotic devices and simulators used for surgical skill assessment].

    PubMed

    Suhánszki, Norbert; Haidegger, Tamás

    2014-12-01

    Robotic assistance became a leading trend in minimally invasive surgery, building on the global success of laparoscopic surgery. Manual laparoscopy requires advanced skills and capabilities acquired through a tedious learning procedure, while da Vinci-type surgical systems offer intuitive control and advanced ergonomics. Nevertheless, in either case, the key issue is to be able to assess the surgeons' skills and capabilities objectively. Robotic devices offer a radically new way to collect data during surgical procedures, opening the space for new ways of skill parameterization. This may be revolutionary in MIS training, enabling a new and objective surgical curriculum and examination methods. The article reviews currently developed skill assessment techniques for robotic surgery and simulators, thoroughly inspecting their validation procedure and utility. In the coming years, these methods will become the mainstream of Western surgical education.

  8. Development of Computational Approaches for Simulation and Advanced Controls for Hybrid Combustion-Gasification Chemical Looping

    SciTech Connect

    Joshi, Abhinaya; Lou, Xinsheng; Neuschaefer, Carl; Chaudry, Majid; Quinn, Joseph

    2012-07-31

    This document provides the results of the project through September 2009. The Phase I project has recently been extended from September 2009 to March 2011. The project extension will begin work on Chemical Looping (CL) prototype modeling and advanced control design exploration in preparation for a scale-up phase. The results to date include successful development of dual-loop chemical looping process models and dynamic simulation software tools; development and testing of several advanced control concepts and applications for chemical looping transport control; and investigation of several sensor concepts, with two feasible sensor candidates established and recommended for further prototype development and controls integration. There are three sections in this summary and conclusions. Section 1 presents the project scope and objectives. Section 2 highlights the detailed accomplishments by project task area. Section 3 provides conclusions to date and recommendations for future work.

  9. Development of Advanced Electrochemical Emission Spectroscopy for Monitoring Corrosion in Simulated DOE Liquid Waste

    SciTech Connect

    Macdonald, Digby; Liu, Jun; Liu, Sue; Al-Rifaie, Mohammed; Sikora; Elzbieta

    2000-06-01

    The principal goals of this project are to develop advanced electrochemical emission spectroscopic (EES) methods for monitoring the corrosion of carbon steel in simulated DOE liquid waste and to develop a better understanding of the mechanisms of the corrosion of metals (e.g., iron, nickel, and chromium) and alloys (carbon steel, low alloy steels, stainless steels) in these environments. During the first two years of this project, significant advances have been made in developing a better understanding of the corrosion of iron in aqueous solutions as a function of pH, in developing a better understanding of the growth of passive films on metal surfaces, and in developing EES techniques for corrosion monitoring. This report summarizes work at the beginning of the third year of the 3-year project.

  10. Simulations of Failure via Three-Dimensional Cracking in Fuel Cladding for Advanced Nuclear Fuels

    SciTech Connect

    Lu, Hongbing; Bukkapatnam, Satish; Harimkar, Sandip; Singh, Raman; Bardenhagen, Scott

    2014-01-09

    Enhancing performance of fuel cladding and duct alloys is a key means of increasing fuel burnup. This project will address the failure of fuel cladding via three-dimensional cracking models. Researchers will develop a simulation code for the failure of the fuel cladding and validate the code through experiments. The objective is to develop an algorithm to determine the failure of fuel cladding in the form of three-dimensional cracking due to prolonged exposure under varying conditions of pressure, temperature, chemical environment, and irradiation. This project encompasses the following tasks: 1. Simulate 3D crack initiation and growth under instantaneous and/or fatigue loads using a new variant of the material point method (MPM); 2. Simulate debonding of the materials in the crack path using cohesive elements, considering normal and shear traction separation laws; 3. Determine the crack propagation path, considering damage of the materials incorporated in the cohesive elements to allow the energy release rate to be minimized; 4. Simulate the three-dimensional fatigue crack growth as a function of loading histories; 5. Verify the simulation code by comparing results to theoretical and numerical studies available in the literature; 6. Conduct experiments to observe the crack path and surface profile in unused fuel cladding and validate against simulation results; and 7. Expand the adaptive mesh refinement infrastructure parallel processing environment to allow adaptive mesh refinement at the 3D crack fronts and adaptive mesh merging in the wake of cracks. Fuel cladding is made of materials such as stainless steels and ferritic steels with added alloying elements, which increase stability and durability under irradiation. As fuel cladding is subjected to water, chemicals, fission gas, pressure, high temperatures, and irradiation while in service, understanding performance is essential. In the fast fuel used in advanced burner reactors, simulations of the nuclear
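
    The cohesive elements mentioned above rely on a traction-separation law; a short sketch of a common bilinear (linear-softening) choice follows, with strength and separation values that are illustrative assumptions rather than project data.

```python
import numpy as np

T_max   = 800.0e6   # Pa, assumed cohesive strength
delta_0 = 1.0e-6    # m, separation at peak traction
delta_f = 1.0e-5    # m, separation at complete failure

def traction(delta):
    """Normal traction for a monotonically opening cohesive surface."""
    delta = np.asarray(delta, dtype=float)
    rising = np.where(delta <= delta_0, T_max * delta / delta_0, 0.0)
    soften = np.where((delta > delta_0) & (delta < delta_f),
                      T_max * (delta_f - delta) / (delta_f - delta_0), 0.0)
    return rising + soften        # zero beyond delta_f: the surface has failed

# fracture energy = area under the curve = 0.5 * T_max * delta_f
d = np.linspace(0.0, delta_f, 1001)
print(f"G_c = {np.trapz(traction(d), d):.0f} J/m^2")   # 4000 J/m^2
```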

  11. Pantograph catenary dynamic optimisation based on advanced multibody and finite element co-simulation tools

    NASA Astrophysics Data System (ADS)

    Massat, Jean-Pierre; Laurent, Christophe; Bianchi, Jean-Philippe; Balmès, Etienne

    2014-05-01

    This paper presents recent developments undertaken by the SNCF Innovation & Research Department on numerical modelling of pantograph-catenary interaction. It aims at describing an efficient co-simulation process between finite element (FE) and multibody (MB) modelling methods. FE catenary models are coupled with a fully flexible MB representation of the pantograph with pneumatic actuation. These advanced functionalities allow new kinds of numerical analyses, such as dynamic improvements based on innovative pneumatic suspensions or assessment of crash risks in crossing areas, that demonstrate the powerful capabilities of this computing approach.
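
    A toy sketch of the explicit co-simulation pattern underlying such FE/MB coupling: at each step a one-degree-of-freedom "catenary" model and a "pantograph" model advance separately and exchange only the interface quantities, here a penalty contact force computed from the exchanged displacements. All masses, stiffnesses, and forces are illustrative, not SNCF model data.

```python
dt, n_steps = 1.0e-4, 30000            # 3 s of simulated time
k_contact = 5.0e4                      # N/m, penalty contact stiffness

# "FE" catenary dof (position, velocity, mass, stiffness, damping)
xc, vc, mc, kc, cc = 0.0, 0.0, 10.0, 1.0e5, 200.0
# "MB" pantograph dof, with a constant pneumatic uplift force
xp, vp, mp_, cp, F_up = 0.0, 0.0, 8.0, 100.0, 150.0

for _ in range(n_steps):
    # interface exchange: each subsystem sees only the contact force
    pen = xp - xc                              # penetration (contact if > 0)
    f_contact = k_contact * pen if pen > 0.0 else 0.0
    # each solver then advances one step on its own (explicit coupling)
    vc += dt * (f_contact - kc * xc - cc * vc) / mc
    xc += dt * vc
    vp += dt * (F_up - f_contact - cp * vp) / mp_
    xp += dt * vp

print(f"settled contact force ~ {f_contact:.1f} N (uplift force = 150 N)")
```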

  12. Advanced Numerical methods for F. E. Simulation of Metal Forming Processes

    NASA Astrophysics Data System (ADS)

    Chenot, Jean-Loup; Bernacki, Marc; Fourment, Lionel; Ducloux, Richard

    2010-06-01

    The classical scientific basis for finite element modeling of metal forming processes is first recalled. Several developments in advanced topics are summarized: adaptive and anisotropic remeshing, parallel solving, and multi-material deformation. More recent research in numerical analysis is outlined, including multi-grid and multi-mesh methods, mainly devoted to decreasing computation time, and automatic optimization methods for faster and more effective design of forming processes. The link between forming simulation and structural computations is considered, with emphasis on the necessity to predict the final mechanical properties. Finally, a brief account of computation at the micro-scale level is given.

  13. The GEANT low energy Compton scattering (GLECS) package for use in simulating advanced Compton telescopes

    NASA Astrophysics Data System (ADS)

    Kippen, R. Marc

    2004-02-01

    Compton γ-ray imaging is inherently based on the assumption of γ-rays scattering with free electrons. In reality, the non-zero momentum of target electrons bound in atoms blurs this ideal scattering response in a process known as Doppler broadening. The design and understanding of advanced Compton telescopes, thus, depends critically on the ability to accurately account for Doppler broadening effects. For this purpose, a Monte Carlo package that simulates detailed Doppler broadening has been developed for use with the powerful, general-purpose GEANT3 and GEANT4 radiation transport codes. This paper describes the design of this package, and illustrates results of comparison with selected experimental data.
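
    An illustrative sketch of the effect being modeled: for a free electron at rest the scattered photon energy is a sharp line at the Compton value, while the bound electron's momentum smears it. The Gaussian width below is a crude stand-in for tabulated Compton profiles, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
mec2 = 511.0                           # keV, electron rest energy
E, theta = 511.0, np.radians(60.0)     # incident photon energy, scatter angle

# free-electron (Compton) scattered energy: a line with zero intrinsic width
E_free = E / (1.0 + (E / mec2) * (1.0 - np.cos(theta)))

# crude Doppler smearing: the electron momentum component along the scattering
# vector shifts the scattered energy by roughly pz/(me*c) in fractional terms
pz = rng.normal(0.0, 8.0, 100_000)     # keV/c, assumed profile width
E_doppler = E_free * (1.0 + pz / mec2)

print(f"free-electron line: {E_free:.1f} keV")
print(f"Doppler broadened : FWHM ~ {2.355 * E_doppler.std():.1f} keV")
```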

  14. On Simulation of Edge Stretchability of an 800MPa Advanced High Strength Steel

    NASA Astrophysics Data System (ADS)

    Pathak, Nikky; Butcher, Cliff; Worswick, Michael

    2016-08-01

    In the present work, the edge stretchability of advanced high strength steel (AHSS) was investigated experimentally and numerically using both a hole expansion test and a tensile specimen with a central hole. The experimental fracture strains obtained using the hole expansion and hole tension test in both reamed and sheared edge conditions were in very good agreement, suggesting the tests are equivalent for fracture characterization. Isotropic finite-element simulations of both tests were performed to compare the stress-state near the hole edge.

  15. Absolute Time Error Calibration of GPS Receivers Using Advanced GPS Simulators

    DTIC Science & Technology

    1997-12-01

    29th Annual Precise Time and Time Interval (PTTI) Meeting. Abstract: Precise time transfer experiments using GPS with time stabilities under ten nanoseconds are commonly being reported within the time transfer community. Relative calibrations are done by measuring the time error of one GPS receiver versus a "known master reference receiver." ...

  16. Advances in Systems and Technologies Toward Interopoerating Operational Military C2 and Simulation Systems

    DTIC Science & Technology

    2014-06-01

    19th ICCRTS, "C2 Agility: Lessons Learned from Research and Operations." The Simulation Interoperability Standards Organization (SISO) provides a collaborative environment for exchange of information about ... Their vision is a future where military organizations can link their C2 and simulation systems without special preparation in support of coalition ...

  17. Orthogonal Metal Cutting Simulation Using Advanced Constitutive Equations with Damage and Fully Adaptive Numerical Procedure

    NASA Astrophysics Data System (ADS)

    Saanouni, Kkemais; Labergère, Carl; Issa, Mazen; Rassineux, Alain

    2010-06-01

    This work proposes a complete adaptive numerical methodology for 2D machining simulation which uses 'advanced' elastoplastic constitutive equations coupling thermal effects, large elasto-viscoplasticity with mixed nonlinear hardening, ductile damage, and contact with friction. Fully coupled (strong coupling) thermo-elasto-visco-plastic-damage constitutive equations based on state variables under large plastic deformation, developed for metal forming simulation, are presented. The relevant numerical aspects concerning the local integration scheme as well as the global resolution strategy and the adaptive remeshing facility are briefly discussed. Applications are made to orthogonal metal cutting by chip formation and segmentation under high velocity. The interactions between hardening, plasticity, ductile damage, and thermal effects, and their role in adiabatic shear band formation including the formation of cracks, are investigated.

  18. Advanced thermal energy management: A thermal test bed and heat pipe simulation

    NASA Technical Reports Server (NTRS)

    Barile, Ronald G.

    1986-01-01

    Work initiated on a common-module thermal test simulation was continued, and a second project on heat pipe simulation was begun. The test bed, constructed from surplus Skylab equipment, was modeled and solved for various thermal load and flow conditions. Low thermal load caused the radiator fluid, Coolanol 25, to thicken at low temperature; this was avoided by using a regenerator-heat-exchanger. Other possible solutions modeled include a radiator heater and shunting heat from the central thermal bus to the radiator. Also, module air temperature can become excessive with high avionics load. A second project concerning advanced heat pipe concepts was initiated. A program was written which calculates fluid physical properties, liquid and vapor pressure in the evaporator and condenser, fluid flow rates, and thermal flux. The program is directed at evaluating newer heat pipe wicks and geometries, especially water in an artery surrounded by six vapor channels. Effects of temperature, groove and slot dimensions, and wick properties are reported.
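
    One representative calculation such a program performs is the capillary limit: the pumping pressure available from the wick pores is balanced against the Darcy pressure drop of the returning liquid to bound the transportable heat. A minimal sketch follows; the water properties are rough values near 80 degC, and the wick geometry is invented for illustration.

```python
sigma = 0.0626     # N/m, surface tension of water
rho   = 972.0      # kg/m^3, liquid density
h_fg  = 2.308e6    # J/kg, latent heat of vaporization
mu    = 3.55e-4    # Pa*s, liquid viscosity
r_c   = 50e-6      # m, assumed effective capillary (pore) radius
K     = 1.0e-10    # m^2, assumed wick permeability
A_w   = 1.0e-5     # m^2, assumed artery cross-section
L_eff = 0.5        # m, assumed effective transport length

dP_cap = 2.0 * sigma / r_c    # maximum capillary pumping pressure, 2*sigma/r
# Darcy liquid return: dP_liq = mu * L_eff * Q / (rho * h_fg * K * A_w);
# setting dP_liq = dP_cap and solving for Q gives the capillary limit
Q_max = dP_cap * rho * h_fg * K * A_w / (mu * L_eff)
print(f"capillary pressure {dP_cap:.0f} Pa -> Q_max ~ {Q_max:.1f} W")
```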

  19. Ejector nozzle test results at simulated flight conditions for an advanced supersonic transport propulsion system

    NASA Technical Reports Server (NTRS)

    Nelson, D. P.; Bresnahan, D. L.

    1983-01-01

    Results are presented of wind tunnel tests conducted to verify the performance improvements of a refined ejector nozzle design for advanced supersonic transport propulsion systems. The analysis of results obtained at simulated engine operating conditions is emphasized. Tests were conducted with models of approximately 1/10th scale which were configured to simulate nozzle operation at takeoff, subsonic cruise, transonic cruise, and supersonic cruise. Transonic cruise operation was not a consideration during the nozzle design phase, although an evaluation at this condition was later conducted. Test results, characterized by thrust and flow coefficients, are given for a range of nozzle pressure ratios, emphasizing the thrust performance at the engine operating conditions predicted for each flight Mach number. The results indicate that nozzle performance goals were met or closely approximated at takeoff and supersonic cruise, while subsonic cruise performance was within 2.3 percent of the goal with further improvement possible.

  20. Strategic Plan for Nuclear Energy -- Knowledge Base for Advanced Modeling and Simulation (NE-KAMS)

    SciTech Connect

    Kimberlyn C. Mousseau

    2011-10-01

    The Nuclear Energy Computational Fluid Dynamics Advanced Modeling and Simulation (NE-CAMS) system is being developed at the Idaho National Laboratory (INL) in collaboration with Bettis Laboratory, Sandia National Laboratory (SNL), Argonne National Laboratory (ANL), Utah State University (USU), and other interested parties with the objective of developing and implementing a comprehensive and readily accessible data and information management system for computational fluid dynamics (CFD) verification and validation (V&V) in support of nuclear energy systems design and safety analysis. The two key objectives of the NE-CAMS effort are to identify, collect, assess, store and maintain high resolution and high quality experimental data and related expert knowledge (metadata) for use in CFD V&V assessments specific to the nuclear energy field and to establish a working relationship with the U.S. Nuclear Regulatory Commission (NRC) to develop a CFD V&V database, including benchmark cases, that addresses and supports the associated NRC regulations and policies on the use of CFD analysis. In particular, the NE-CAMS system will support the Department of Energy Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Program, which aims to develop and deploy advanced modeling and simulation methods and computational tools for reliable numerical simulation of nuclear reactor systems for design and safety analysis.
    Primary NE-CAMS Elements. There are four primary elements of the NE-CAMS knowledge base designed to support computer modeling and simulation in the nuclear energy arena, as listed below.
    Element 1: The database will contain experimental data that can be used for CFD validation that is relevant to nuclear reactor and plant processes, particularly those important to the nuclear industry and the NRC.
    Element 2: Qualification standards for data evaluation and classification will be incorporated and applied such that validation data sets will result in well

  1. Current Advances in the Computational Simulation of the Formation of Low-Mass Stars

    SciTech Connect

    Klein, R I; Inutsuka, S; Padoan, P; Tomisaka, K

    2005-10-24

    Developing a theory of low-mass star formation (approximately 0.1 to 3 solar masses) remains one of the most elusive and important goals of theoretical astrophysics. The star-formation process is the outcome of the complex dynamics of interstellar gas involving non-linear interactions of turbulence, gravity, magnetic field and radiation. The evolution of protostellar condensations, from the moment they are assembled by turbulent flows to the time they reach stellar densities, spans an enormous range of scales, resulting in a major computational challenge for simulations. Since the previous Protostars and Planets conference, dramatic advances in the development of new numerical algorithmic techniques have been successfully implemented on large scale parallel supercomputers. Among such techniques, Adaptive Mesh Refinement and Smooth Particle Hydrodynamics have provided frameworks to simulate the process of low-mass star formation with a very large dynamic range. It is now feasible to explore the turbulent fragmentation of molecular clouds and the gravitational collapse of cores into stars self-consistently within the same calculation. The increased sophistication of these powerful methods comes with substantial caveats associated with the use of the techniques and the interpretation of the numerical results. In this review, we examine what has been accomplished in the field and present a critique of both numerical methods and scientific results. We stress that computational simulations should obey the available observational constraints and demonstrate numerical convergence. Failing this, results of large scale simulations do not advance our understanding of low-mass star formation.

  2. Development of an advanced actuator disk model for Large-Eddy Simulation of wind farms

    NASA Astrophysics Data System (ADS)

    Moens, Maud; Duponcheel, Matthieu; Winckelmans, Gregoire; Chatelain, Philippe

    2015-11-01

    This work aims at improving the fidelity of the wind turbine modelling for Large-Eddy Simulation (LES) of wind farms, in order to accurately predict the loads, the production, and the wake dynamics. In those simulations, the wind turbines are accounted for through actuator disks, i.e., a body-force term acting over the regularised disk swept by the rotor. These forces are computed using the Blade Element theory to estimate the normal and tangential components (based on the local simulated flow and the blade characteristics). The local velocities are modified using the Glauert tip-loss factor in order to account for the finite number of blades; the computation of this correction is here improved thanks to a local estimation of the effective upstream velocity at every point of the disk. These advanced actuator disks are implemented in a 4th order finite difference LES solver and are compared to a classical Blade Element Momentum method and to high fidelity wake simulations performed using a Vortex Particle-Mesh method in uniform and turbulent flows.
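
    For reference, the tip-loss correction mentioned above is commonly implemented as Prandtl's factor in the form given by Glauert; a minimal sketch with an invented rotor geometry:

```python
import numpy as np

B, R = 3, 50.0                     # assumed blade count and rotor radius (m)

def tip_loss(r, phi):
    """Prandtl/Glauert tip-loss factor F at radius r for inflow angle phi."""
    f = B * (R - r) / (2.0 * r * np.sin(phi))
    return (2.0 / np.pi) * np.arccos(np.exp(-f))

r = np.linspace(5.0, 0.999 * R, 6)
print(np.round(tip_loss(r, phi=np.radians(7.0)), 3))
# F is ~1 inboard and falls to 0 at the tip, reducing the local blade loading
```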

  3. On the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods

    PubMed Central

    Lee, Anthony; Yau, Christopher; Giles, Michael B.; Doucet, Arnaud; Holmes, Christopher C.

    2011-01-01

    We present a case-study on the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods. Graphics cards, containing multiple Graphics Processing Units (GPUs), are self-contained parallel computational devices that can be housed in conventional desktop and laptop computers and can be thought of as prototypes of the next generation of many-core processors. For certain classes of population-based Monte Carlo algorithms they offer massively parallel simulation, with the added advantage over conventional distributed multi-core processors that they are cheap, easily accessible, easy to maintain, easy to code, dedicated local devices with low power consumption. On a canonical set of stochastic simulation examples including population-based Markov chain Monte Carlo methods and Sequential Monte Carlo methods, we find speedups from 35 to 500 fold over conventional single-threaded computer code. Our findings suggest that GPUs have the potential to facilitate the growth of statistical modelling into complex data rich domains through the availability of cheap and accessible many-core computation. We believe the speedup we observe should motivate wider use of parallelizable simulation methods and greater methodological attention to their design. PMID:22003276
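
    A minimal sketch of the data-parallel pattern that makes these methods GPU-friendly: many Metropolis chains advanced in lockstep, with each update written as a single vectorized operation across all chains. NumPy stands in here for per-thread GPU execution, and the standard-normal target is an arbitrary example.

```python
import numpy as np

rng = np.random.default_rng(1)
n_chains, n_steps, step = 4096, 2000, 0.8

def log_target(x):                 # example target: standard normal density
    return -0.5 * x * x

x = rng.normal(size=n_chains)      # one state per chain
lp = log_target(x)
for _ in range(n_steps):
    prop = x + step * rng.normal(size=n_chains)           # all proposals at once
    lp_prop = log_target(prop)
    accept = np.log(rng.random(n_chains)) < lp_prop - lp  # vectorized accept
    x = np.where(accept, prop, x)
    lp = np.where(accept, lp_prop, lp)

print(f"pooled mean {x.mean():+.3f}, variance {x.var():.3f}")   # ~0 and ~1
```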

  4. Nuclear Energy Advanced Modeling and Simulation Waste Integrated Performance and Safety Codes (NEAMS Waste IPSC).

    SciTech Connect

    Schultz, Peter Andrew

    2011-12-01

    The objective of the U.S. Department of Energy Office of Nuclear Energy Advanced Modeling and Simulation Waste Integrated Performance and Safety Codes (NEAMS Waste IPSC) is to provide an integrated suite of computational modeling and simulation (M&S) capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive-waste storage facility or disposal repository. Achieving the objective of modeling the performance of a disposal scenario requires describing processes involved in waste form degradation and radionuclide release at the subcontinuum scale, beginning with mechanistic descriptions of chemical reactions and chemical kinetics at the atomic scale, and upscaling into effective, validated constitutive models for input to high-fidelity continuum scale codes for coupled multiphysics simulations of release and transport. Verification and validation (V&V) is required throughout the system to establish evidence-based metrics for the level of confidence in M&S codes and capabilities, including at the subcontinuum scale and the constitutive models they inform or generate. This report outlines the nature of the V&V challenge at the subcontinuum scale, an approach to incorporate V&V concepts into subcontinuum scale modeling and simulation (M&S), and a plan to incrementally incorporate effective V&V into subcontinuum scale M&S destined for use in the NEAMS Waste IPSC work flow to meet requirements of quantitative confidence in the constitutive models informed by subcontinuum scale phenomena.

  5. On the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods.

    PubMed

    Lee, Anthony; Yau, Christopher; Giles, Michael B; Doucet, Arnaud; Holmes, Christopher C

    2010-12-01

    We present a case-study on the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods. Graphics cards, containing multiple Graphics Processing Units (GPUs), are self-contained parallel computational devices that can be housed in conventional desktop and laptop computers and can be thought of as prototypes of the next generation of many-core processors. For certain classes of population-based Monte Carlo algorithms they offer massively parallel simulation, with the added advantage over conventional distributed multi-core processors that they are cheap, easily accessible, easy to maintain, easy to code, dedicated local devices with low power consumption. On a canonical set of stochastic simulation examples including population-based Markov chain Monte Carlo methods and Sequential Monte Carlo methods, we find speedups from 35 to 500 fold over conventional single-threaded computer code. Our findings suggest that GPUs have the potential to facilitate the growth of statistical modelling into complex data rich domains through the availability of cheap and accessible many-core computation. We believe the speedup we observe should motivate wider use of parallelizable simulation methods and greater methodological attention to their design.

  6. A demonstration of motion base design alternatives for the National Advanced Driving Simulator

    NASA Technical Reports Server (NTRS)

    Mccauley, Michael E.; Sharkey, Thomas J.; Sinacori, John B.; Laforce, Soren; Miller, James C.; Cook, Anthony

    1992-01-01

    A demonstration of the capability of NASA's Vertical Motion Simulator (VMS) to simulate two alternative motion base designs for the National Advanced Driving Simulator (NADS) is reported. The VMS is located at ARC. The motion base conditions used in this demonstration were as follows: (1) a large translational motion base; and (2) a motion base design with limited translational capability. The latter had translational capability representative of a typical synergistic motion platform. These alternatives were selected to test the prediction that large amplitude translational motion would result in a lower incidence or severity of simulator-induced sickness (SIS) than would a limited translational motion base. A total of 10 drivers performed two tasks, slaloms and quick-stops, using each of the motion bases. Physiological, objective, and subjective measures were collected. No reliable differences in SIS between the motion base conditions were found in this demonstration. However, in light of the cost considerations and engineering challenges associated with implementing a large translational motion base, performance of a formal study is recommended.

  7. Space-based radar representation in the advanced warfighting simulation (AWARS)

    NASA Astrophysics Data System (ADS)

    Phend, Andrew E.; Buckley, Kathryn; Elliott, Steven R.; Stanley, Page B.; Shea, Peter M.; Rutland, Jimmie A.

    2004-09-01

    Space and orbiting systems impact multiple battlefield operating systems (BOS). Space support to current operations is a perfect example of how the United States fights. Satellite-aided munitions, communications, navigation, and weather systems combine to achieve military objectives in a relatively short amount of time. Through representation of space capabilities within models and simulations, the military will have the ability to train and educate officers and soldiers to fight from the high ground of space or to conduct analysis and determine the requirements or utility of transformed forces empowered with advanced space-based capabilities. The Army Vice Chief of Staff acknowledged deficiencies in space modeling and simulation during the September 2001 Space Force Management Analysis Review (FORMAL) and directed that a multi-disciplinary team be established to recommend a service-wide roadmap to address shortcomings. A Focus Area Collaborative Team (FACT), led by the U.S. Army Space & Missile Defense Command with participation across the Army, confirmed the weaknesses in scope, consistency, correctness, completeness, availability, and usability of space modeling and simulation (M&S) for Army applications. The FACT addressed the need to develop a roadmap to remedy space M&S deficiencies using a highly parallelized process and schedule designed to support a recommendation during the Sep 02 meeting of the Army Model and Simulation Executive Council (AMSEC).

  8. Direct-execution parallel architecture for the Advanced Continuous Simulation Language (ACSL)

    SciTech Connect

    Carroll, C.C.; Owen, J.E.

    1988-05-01

    A direct-execution parallel architecture for the Advanced Continuous Simulation Language (ACSL) is presented which overcomes the traditional disadvantages of simulations executed on a digital computer. The incorporation of parallel processing allows the mapping of simulations into a digital computer to be done in the same inherently parallel manner as they are currently mapped onto an analog computer. The direct-execution format maximizes the efficiency of the executed code since the need for a high level language compiler is eliminated. Resolution is greatly increased over that which is available with an analog computer without the sacrifice in execution speed normally expected with digital computer simulations. Although this report covers all aspects of the new architecture, key emphasis is placed on the processing element configuration and the microprogramming of the ACSL constructs. The execution times for all ACSL constructs are computed using a model of a processing element based on the AMD 29000 CPU and the AMD 29027 FPU. The increase in execution speed provided by parallel processing is exemplified by comparing the derived execution times of two ACSL programs with the execution times for the same programs executed on a similar sequential architecture.

  9. A direct-execution parallel architecture for the Advanced Continuous Simulation Language (ACSL)

    NASA Technical Reports Server (NTRS)

    Carroll, Chester C.; Owen, Jeffrey E.

    1988-01-01

    A direct-execution parallel architecture for the Advanced Continuous Simulation Language (ACSL) is presented which overcomes the traditional disadvantages of simulations executed on a digital computer. The incorporation of parallel processing allows the mapping of simulations into a digital computer to be done in the same inherently parallel manner as they are currently mapped onto an analog computer. The direct-execution format maximizes the efficiency of the executed code since the need for a high level language compiler is eliminated. Resolution is greatly increased over that which is available with an analog computer without the sacrifice in execution speed normally expected with digital computer simulations. Although this report covers all aspects of the new architecture, key emphasis is placed on the processing element configuration and the microprogramming of the ACSL constructs. The execution times for all ACSL constructs are computed using a model of a processing element based on the AMD 29000 CPU and the AMD 29027 FPU. The increase in execution speed provided by parallel processing is exemplified by comparing the derived execution times of two ACSL programs with the execution times for the same programs executed on a similar sequential architecture.

  10. Characterization and Simulation of the Thermoacoustic Instability Behavior of an Advanced, Low Emissions Combustor Prototype

    NASA Technical Reports Server (NTRS)

    DeLaat, John C.; Paxson, Daniel E.

    2008-01-01

    Extensive research is being done toward the development of ultra-low-emissions combustors for aircraft gas turbine engines. However, these combustors have an increased susceptibility to thermoacoustic instabilities. This type of instability was recently observed in an advanced, low emissions combustor prototype installed in a NASA Glenn Research Center test stand. The instability produces pressure oscillations that grow with increasing fuel/air ratio, preventing full power operation. The instability behavior makes the combustor a potentially useful test bed for research into active control methods for combustion instability suppression. The instability behavior was characterized by operating the combustor at various pressures, temperatures, and fuel and air flows representative of operation within an aircraft gas turbine engine. Trends in instability behavior versus operating condition have been identified and documented, and possible explanations for the trends provided. A simulation developed at NASA Glenn captures the observed instability behavior. The physics-based simulation includes the relevant physical features of the combustor and test rig, employs a Sectored 1-D approach, includes simplified reaction equations, and provides time-accurate results. A computationally efficient method is used for area transitions, which decreases run times and allows the simulation to be used for parametric studies, including control method investigations. Simulation results show that the simulation exhibits a self-starting, self-sustained combustion instability and also replicates the experimentally observed instability trends versus operating condition. Future plans are to use the simulation to investigate active control strategies to suppress combustion instabilities and then to experimentally demonstrate active instability suppression with the low emissions combustor prototype, enabling full power, stable operation.

  11. A driver linac for the Advanced Exotic Beam Laboratory : physics design and beam dynamics simulations.

    SciTech Connect

    Ostroumov, P. N.; Mustapha, B.; Nolen, J.; Physics

    2007-01-01

    The Advanced Exotic Beam Laboratory (AEBL) being developed at ANL consists of an 833 MV heavy-ion driver linac capable of producing uranium ions up to 200 MeV/u and protons to 580 MeV with 400 kW beam power. We have designed all accelerator components including a two charge state LEBT, an RFQ, a MEBT, a superconducting linac, a stripper station and chicane. We present the results of an optimized linac design and end-to-end simulations including machine errors and detailed beam loss analysis. The Advanced Exotic Beam Laboratory (AEBL) has been proposed at ANL as a reduced-scale version of the original Rare Isotope Accelerator (RIA) project with about half the cost but the same beam power. AEBL will address 90% or more of RIA physics but with reduced multi-user capabilities. The focus of this paper is the physics design and beam dynamics simulations of the AEBL driver linac. The reported results are for a multiple charge state U²³⁸ beam.

  12. Progress and opportunities in direct numerical simulations at the next higher resolution

    NASA Astrophysics Data System (ADS)

    Yeung, P. K.; Sreenivasan, K. R.

    2013-11-01

    In recent years, many researchers in the turbulence community have been able to exploit the steady advancement of computing power to advance our understanding of turbulence, including new parameter ranges and the effects of coupling with other physical processes. However, it is remarkable that the ``record'' grid resolution of 4096³, first achieved just over 10 years ago (Kaneda et al., Phys. Fluids 2003), still stands in the literature of the field. In this talk, we will present preliminary results from an 8192³ simulation of turbulence on a periodic domain, carried out using 262144 CPU cores on the Blue Waters supercomputer under the NSF Track 1 Petascale Resource Allocations program. Since a simulation at this magnitude is still extremely expensive, and the resources required are not easily secured, very careful planning and very aggressive efforts at algorithmic enhancement are necessary (which we will also briefly discuss). This new simulation is expected to allow us to probe deeply into fundamental questions such as intermittency at the highest Reynolds numbers and the best possible resolution of the small scales at the current limit of computing power available. Supported by NSF Grant ACI-1036170.
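
    To see why a single 8192³ run is described as extremely expensive, it helps to count bytes: one double-precision scalar field at that resolution already occupies 4 TiB. The sketch below does the arithmetic under an assumed count of resident fields; production pseudo-spectral codes differ in how many velocity, right-hand-side, and FFT work arrays they keep in memory.

      # Back-of-envelope memory footprint for a pseudo-spectral DNS at 8192^3.
      # The resident-field count is an illustrative assumption.
      n = 8192
      bytes_per_value = 8                    # double precision
      one_field = n**3 * bytes_per_value     # one scalar field, in bytes
      n_fields = 12                          # e.g. velocity + RHS + work arrays (assumed)
      total = n_fields * one_field
      print(f"one field: {one_field / 2**40:.2f} TiB")
      print(f"{n_fields} fields: {total / 2**40:.1f} TiB")
      print(f"per core on 262144 cores: {total / 262144 / 2**20:.1f} MiB")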

  13. Numerical Simulation of Multi-Material Mixing in an Inclined Interface Richtmyer-Meshkov Instability

    NASA Astrophysics Data System (ADS)

    Subramaniam, Akshay; Lele, Sanjiva

    2015-11-01

    The Richtmyer-Meshkov instability arises when a shock wave interacts with an interface separating two fluids. In this work, high-fidelity simulations of shock induced multi-material mixing between N2 and CO2 in a shock tube are performed for a Mach 1.55 shock interacting with a planar material interface that is inclined with respect to the shock propagation direction. In the current configuration, unlike the classical perturbed flat interface case, the evolution of the interface is non-linear from early time onwards. Our previous simulations of this problem at multiple spatial resolutions have shown that very small 3D perturbations have a large effect on vortex breakdown mechanisms and hence fine scale turbulence. We propose a comparison of our simulations to the experiments performed at the Georgia Tech Shock Tube and Advanced Mixing Laboratory (STAML). Results before and after reshock of the interface will be shown. Results from simulations of a second case with a more complex initial interface will also be presented. Simulations shown are conducted with an extended version of the Miranda solver developed by Cook et al. (2007) which combines high-order compact finite differences with localized non-linear artificial properties for shock and interface capturing. This research is part of the Blue Waters sustained-petascale computing project, which is supported by the National Science Foundation (awards OCI-0725070 and ACI-1238993) and the state of Illinois.
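
    The compact finite differences mentioned here couple neighboring derivative values implicitly, obtaining spectral-like resolution from narrow stencils at the cost of a banded solve per direction. Below is a minimal sketch of the classical fourth-order Padé variant on a periodic grid; Miranda itself uses higher-order versions, so the coefficients are the textbook ones rather than the solver's.

      import numpy as np

      def compact_derivative_periodic(f, h):
          """4th-order Pade compact first derivative on a periodic grid:
          (1/4) f'_{i-1} + f'_i + (1/4) f'_{i+1} = 3/(4h) * (f_{i+1} - f_{i-1})."""
          n = len(f)
          A = np.eye(n) + 0.25 * (np.eye(n, k=1) + np.eye(n, k=-1))
          A[0, -1] = A[-1, 0] = 0.25   # periodic wrap-around
          rhs = 3.0 / (4.0 * h) * (np.roll(f, -1) - np.roll(f, 1))
          return np.linalg.solve(A, rhs)

      n = 64
      x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
      df = compact_derivative_periodic(np.sin(x), x[1] - x[0])
      print("max error vs cos(x):", np.abs(df - np.cos(x)).max())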

  14. Recent advances in computational methodology for simulation of mechanical circulatory assist devices

    PubMed Central

    Marsden, Alison L.; Bazilevs, Yuri; Long, Christopher C.; Behr, Marek

    2014-01-01

    Ventricular assist devices (VADs) provide mechanical circulatory support to offload the work of one or both ventricles during heart failure. They are used in the clinical setting as destination therapy, as bridge to transplant, or more recently as bridge to recovery to allow for myocardial remodeling. Recent developments in computational simulation allow for detailed assessment of VAD hemodynamics for device design and optimization for both children and adults. Here, we provide a focused review of the recent literature on finite element methods and optimization for VAD simulations. As VAD designs typically fall into two categories, pulsatile and continuous flow devices, we separately address the computational challenges of both types of designs and their interaction with the circulatory system, with three representative case studies. In particular, we focus on recent advancements in finite element methodology that have increased the fidelity of VAD simulations. We outline key challenges, which extend to the incorporation of biological response such as thrombosis and hemolysis, as well as shape optimization methods and challenges in computational methodology. PMID:24449607

  15. An educational training simulator for advanced perfusion techniques using a high-fidelity virtual patient model.

    PubMed

    Tokaji, Megumi; Ninomiya, Shinji; Kurosaki, Tatsuya; Orihashi, Kazumasa; Sueda, Taijiro

    2012-12-01

    Operating a cardiopulmonary bypass requires advanced skill and both physiological and mechanical knowledge. We developed a virtual patient simulator system using a numerical cardiovascular regulation model for training in the management of perfusion crises. This article evaluates the ability of the new simulator to prevent perfusion crisis. It combined short-term baroreflex regulation of venous capacity, vascular resistance, heart rate, time-varying elastance of the heart, and plasma refilling with a simple lumped parameter model of the cardiovascular system. The combination of parameters related to baroreflex regulation was calculated using clinical hemodynamic data. We examined the effect of differences in autonomous-nerve control parameter settings on changes in blood volume and hemodynamic parameters, and determined the influence of the model on control of arterial line flow and blood volume during initiation of and weaning from cardiopulmonary bypass. Typical blood pressure (BP) changes (hypertension, stable, and hypotension) were reproducible using a combination of four control parameters that can be estimated from changes in patient physiology, BP, and blood volume. This simulation model is a useful educational tool for learning the recognition and management skills of extracorporeal circulation. The identification method for the control parameters can also be applied to the diagnosis of heart failure.
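
    The "simple lumped parameter model" family referenced here reduces the circulation to a handful of resistances and compliances. Below is a minimal two-element Windkessel sketch to show the basic structure; it is not the authors' model, and the parameter values are only loosely physiological.

      import numpy as np

      # Two-element Windkessel: C * dP/dt = Q_in(t) - P / R
      # P in mmHg, Q in mL/s, R in mmHg*s/mL, C in mL/mmHg (illustrative values).
      R, C = 1.0, 1.5
      dt, T_beat = 1e-3, 0.8         # time step [s], cardiac period [s]

      def q_in(t):
          # crude half-sine ejection over the first 30% of each beat
          phase = (t % T_beat) / T_beat
          return 420.0 * np.sin(np.pi * phase / 0.3) if phase < 0.3 else 0.0

      t_grid = np.arange(0.0, 10 * T_beat, dt)
      P = np.empty_like(t_grid)
      P[0] = 80.0
      for i in range(1, len(t_grid)):
          P[i] = P[i - 1] + dt * (q_in(t_grid[i - 1]) - P[i - 1] / R) / C

      last_beat = P[-int(T_beat / dt):]
      print(f"pressure over last beat: {last_beat.min():.0f} to {last_beat.max():.0f} mmHg")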

  16. Annoyance response to simulated advanced turboprop aircraft interior noise containing tonal beats

    NASA Technical Reports Server (NTRS)

    Leatherwood, Jack D.

    1987-01-01

    A study was conducted to investigate the effects on subjective annoyance of simulated advanced turboprop (ATP) interior noise environments containing tonal beats. The simulated environments consisted of low-frequency tones superimposed on a turbulent-boundary-layer noise spectrum. The variables used in the study included propeller tone frequency (100 to 250 Hz), propeller tone levels (84 to 105 dB), and tonal beat frequency (0 to 1.0 Hz). Results indicated that propeller tones within the simulated ATP environment resulted in increased annoyance response that was fully predictable in terms of the increase in overall sound pressure level due to the tones. Implications for ATP aircraft include the following: (1) the interior noise environment with propeller tones is more annoying than an environment without tones if the tone is present at a level sufficient to increase the overall sound pressure level; (2) the increased annoyance due to the fundamental propeller tone frequency without harmonics is predictable from the overall sound pressure level; and (3) no additional noise penalty due to the perception of single discrete-frequency tones and/or beats was observed.

  17. Efficient Multi-Dimensional Simulation of Quantum Confinement Effects in Advanced MOS Devices

    NASA Technical Reports Server (NTRS)

    Biegel, Bryan A.; Ancona, Mario G.; Rafferty, Conor S.; Yu, Zhiping

    2000-01-01

    We investigate the density-gradient (DG) transport model for efficient multi-dimensional simulation of quantum confinement effects in advanced MOS devices. The formulation of the DG model is described as a quantum correction to the classical drift-diffusion model. Quantum confinement effects are shown to be significant in sub-100nm MOSFETs. In thin-oxide MOS capacitors, quantum effects may reduce gate capacitance by 25% or more. As a result, the inclusion of quantum effects in simulations dramatically improves the match between C-V simulations and measurements for oxide thickness down to 2 nm. Significant quantum corrections also occur in the I-V characteristics of short-channel (30 to 100 nm) n-MOSFETs, with current drive reduced by up to 70%. This effect is shown to result from reduced inversion charge due to quantum confinement of electrons in the channel. Also, subthreshold slope is degraded by 15 to 20 mV/decade with the inclusion of quantum effects via the density-gradient model, and short channel effects (in particular, drain-induced barrier lowering) are noticeably increased.
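
    For reference, the density-gradient correction enters through a quantum potential built from the curvature of the square root of the carrier density. A commonly quoted form from the density-gradient literature is sketched below; the prefactor convention varies with carrier statistics and is not taken from this abstract:

      \varphi_{qm} \;=\; 2\, b_n \,\frac{\nabla^{2}\sqrt{n}}{\sqrt{n}},
      \qquad
      b_n \;=\; \frac{\hbar^{2}}{4\, q\, r_n\, m_n^{*}},

    where n is the electron density, m_n^* the effective mass, and r_n a statistics-dependent factor; the classical drift-diffusion model is recovered in the limit b_n -> 0.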

  18. Efficient Multi-Dimensional Simulation of Quantum Confinement Effects in Advanced MOS Devices

    NASA Technical Reports Server (NTRS)

    Biegel, Bryan A.; Rafferty, Conor S.; Ancona, Mario G.; Yu, Zhi-Ping

    2000-01-01

    We investigate the density-gradient (DG) transport model for efficient multi-dimensional simulation of quantum confinement effects in advanced MOS devices. The formulation of the DG model is described as a quantum correction to the classical drift-diffusion model. Quantum confinement effects are shown to be significant in sub-100nm MOSFETs. In thin-oxide MOS capacitors, quantum effects may reduce gate capacitance by 25% or more. As a result, the inclusion of quantum effects in simulations dramatically improves the match between C-V simulations and measurements for oxide thickness down to 2 nm. Significant quantum corrections also occur in the I-V characteristics of short-channel (30 to 100 nm) n-MOSFETs, with current drive reduced by up to 70%. This effect is shown to result from reduced inversion charge due to quantum confinement of electrons in the channel. Also, subthreshold slope is degraded by 15 to 20 mV/decade with the inclusion of quantum effects via the density-gradient model, and short channel effects (in particular, drain-induced barrier lowering) are noticeably increased.

  19. Simulation of Thin-Film Damping and Thermal Mechanical Noise Spectra for Advanced Micromachined Microphone Structures

    PubMed Central

    Hall, Neal A.; Okandan, Murat; Littrell, Robert; Bicen, Baris; Degertekin, F. Levent

    2008-01-01

    In many micromachined sensors the thin (2–10 μm thick) air film between a compliant diaphragm and backplate electrode plays a dominant role in shaping both the dynamic and thermal noise characteristics of the device. Silicon microphone structures used in grating-based optical-interference microphones have recently been introduced that employ backplates with minimal area to achieve low damping and low thermal noise levels. Finite-element based modeling procedures based on 2-D discretization of the governing Reynolds equation are ideally suited for studying thin-film dynamics in such structures which utilize relatively complex backplate geometries. In this paper, the dynamic properties of both the diaphragm and thin air film are studied using a modal projection procedure in commonly used finite element software, and the results are used to simulate the dynamic frequency response of the coupled structure to internally generated electrostatic actuation pressure. The model is also extended to simulate thermal mechanical noise spectra of these advanced sensing structures. In all cases simulations are compared with measured data and show excellent agreement, demonstrating 0.8 pN/√Hz and 1.8 μPa/√Hz thermal force and thermal pressure noise levels, respectively, for the 1.5 mm diameter structures under study which have a fundamental diaphragm resonance-limited bandwidth near 20 kHz. PMID:19081811

  20. Simulation of Thin-Film Damping and Thermal Mechanical Noise Spectra for Advanced Micromachined Microphone Structures.

    PubMed

    Hall, Neal A; Okandan, Murat; Littrell, Robert; Bicen, Baris; Degertekin, F Levent

    2008-06-01

    In many micromachined sensors the thin (2-10 μm thick) air film between a compliant diaphragm and backplate electrode plays a dominant role in shaping both the dynamic and thermal noise characteristics of the device. Silicon microphone structures used in grating-based optical-interference microphones have recently been introduced that employ backplates with minimal area to achieve low damping and low thermal noise levels. Finite-element based modeling procedures based on 2-D discretization of the governing Reynolds equation are ideally suited for studying thin-film dynamics in such structures which utilize relatively complex backplate geometries. In this paper, the dynamic properties of both the diaphragm and thin air film are studied using a modal projection procedure in commonly used finite element software, and the results are used to simulate the dynamic frequency response of the coupled structure to internally generated electrostatic actuation pressure. The model is also extended to simulate thermal mechanical noise spectra of these advanced sensing structures. In all cases simulations are compared with measured data and show excellent agreement, demonstrating 0.8 pN/√Hz and 1.8 μPa/√Hz thermal force and thermal pressure noise levels, respectively, for the 1.5 mm diameter structures under study which have a fundamental diaphragm resonance-limited bandwidth near 20 kHz.

  1. Exploring the use of standardized patients for simulation-based learning in preparing advanced practice nurses.

    PubMed

    Kowitlawakul, Yanika; Chow, Yeow Leng; Salam, Zakir Hussian Abdul; Ignacio, Jeanette

    2015-07-01

    The use of standardized patients for simulation-based learning was integrated into the Master of Nursing curriculum in the 2012-2013 academic year. The study aimed to explore the Master of Nursing students' experiences with and perceptions of using standardized patients in simulations, and to identify the students' learning needs in preparing to become advanced practice nurses. The study adopted an exploratory descriptive qualitative design, using a focus group interview. The study was conducted at a university in Singapore. Seven Master of Nursing students who were enrolled in the Acute Care Track of Master of Nursing program in the 2012-2013 academic year participated in the study. The data were gathered at the end of the first semester. Content analysis was used to analyze the data. Three main categories - usefulness, clinical limitations, and realism - were identified in the study. The results revealed that the students felt using standardized patients was useful and realistic for developing skills in history taking, communication, and responding to an emergency situation. On the other hand, they found that the standardized patients were limited in providing critical signs and symptoms of case scenarios. To meet the learning objectives, future development and integration of standardized patients in the Master of Nursing curriculum might need to be considered along with the use of a high-fidelity simulator. This can be an alternative strategy to fill the gaps in each method. Obviously, using standardized patients for simulation-based learning has added value to the students' learning experiences. It is highly recommended that future studies explore the impact of using standardized patients on students' performance in clinical settings.

  2. Advances in simulating radiance signatures for dynamic air/water interfaces

    NASA Astrophysics Data System (ADS)

    Goodenough, Adam A.; Brown, Scott D.; Gerace, Aaron

    2015-05-01

    The air-water interface poses a number of problems for both collecting and simulating imagery. At the surface, the magnitude of observed radiance can change by multiple orders of magnitude at high spatiotemporal frequency due to glinting effects. In the volume, similarly high frequency focusing of photons by a dynamic wave surface significantly changes the reflected radiance of in-water objects and the scattered return of the volume itself. These phenomena are often manifest as saturated pixels and artifacts in collected imagery (often enhanced by time delays between neighboring pixels or interpolation between adjacent filters) and as noise and greater required computation times in simulated imagery. This paper describes recent advances made to the Digital Image and Remote Sensing Image Generation (DIRSIG) model to address the simulation issues to better facilitate an understanding of a multi/hyper-spectral collection. Glint effects are simulated using a dynamic height field that can be driven by wave frequency models and generates a sea state at arbitrary time scales. The volume scattering problem is handled by coupling the geometry representing the surface (facetization by the height field) with the single scattering contribution at any point in the water. The problem is constrained somewhat by assuming that contributions come from a Snell's window above the scattering point and by assuming a direct source (sun). Diffuse single scattered and multiple scattered energy contributions are handled by Monte Carlo techniques employed previously. The model is compared to existing radiative transfer codes where possible, with the objective of providing a robust model of time-dependent absolute radiance at many wavelengths.
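
    The dynamic height field idea can be illustrated compactly: superpose random-phase sinusoidal components whose temporal frequencies follow the deep-water dispersion relation ω = √(gk), so the surface can be evaluated at arbitrary times. The sketch below is illustrative only and does not use DIRSIG's actual wave spectrum or interfaces.

      import numpy as np

      rng = np.random.default_rng(1)
      g = 9.81
      n, L = 128, 50.0                              # grid points per side, patch size [m]
      x = np.linspace(0.0, L, n, endpoint=False)
      X, Y = np.meshgrid(x, x)

      # a handful of random wave components (illustrative, not a fitted spectrum)
      n_waves = 20
      k = rng.uniform(0.1, 2.0, n_waves)            # wavenumber magnitudes [rad/m]
      theta = rng.uniform(0.0, 2 * np.pi, n_waves)  # propagation directions
      amp = 0.05 / k                                # crude amplitude falloff with k
      phase = rng.uniform(0.0, 2 * np.pi, n_waves)
      omega = np.sqrt(g * k)                        # deep-water dispersion relation

      def height(t):
          z = np.zeros_like(X)
          for i in range(n_waves):
              arg = k[i] * (X * np.cos(theta[i]) + Y * np.sin(theta[i]))
              z += amp[i] * np.cos(arg - omega[i] * t + phase[i])
          return z

      print("rms surface height at t = 0.0 s:", height(0.0).std())
      print("rms surface height at t = 0.5 s:", height(0.5).std())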

  3. Assessment of driving-related performance in chronic whiplash using an advanced driving simulator.

    PubMed

    Takasaki, Hiroshi; Treleaven, Julia; Johnston, Venerina; Rakotonirainy, Andry; Haines, Andrew; Jull, Gwendolen

    2013-11-01

    Driving is often nominated as problematic by individuals with chronic whiplash associated disorders (WAD), yet driving-related performance has not been evaluated objectively. The purpose of this study was to test driving-related performance in persons with chronic WAD against healthy controls of similar age, gender and driving experience to determine if driving-related performance in the WAD group was sufficiently impaired to recommend fitness to drive assessment. Driving-related performance was assessed using an advanced driving simulator during three driving scenarios; freeway, residential and a central business district (CBD). Total driving duration was approximately 15 min. Five driving tasks which could cause a collision (critical events) were included in the scenarios. In addition, the effect of divided attention (identify red dots projected onto side or rear view mirrors) was assessed three times in each scenario. Driving performance was measured using the simulator performance index (SPI) which is calculated from 12 measures. z-Scores for all SPI measures were calculated for each WAD subject based on mean values of the control subjects. The z-scores were then averaged for the WAD group. A z-score of ≤-2 indicated a driving failing grade in the simulator. The number of collisions over the five critical events was compared between the WAD and control groups, as was reaction time and missed response ratio in identifying the red dots. Seventeen WAD and 26 control subjects commenced the driving assessment. Demographic data were comparable between the groups. All subjects completed the freeway scenario but four withdrew during the residential and eight during the CBD scenario because of motion sickness. All scenarios were completed by 14 WAD and 17 control subjects. The mean z-score for the SPI over the three scenarios was statistically lower in the WAD group (-0.3±0.3; P<0.05), but the score was not below the cut-off point for safe driving. There were no

  4. Strategic Plan for Nuclear Energy -- Knowledge Base for Advanced Modeling and Simulation (NE-KAMS)

    SciTech Connect

    Rich Johnson; Kimberlyn C. Mousseau; Hyung Lee

    2011-09-01

    The NE-KAMS knowledge base will assist computational analysts, physics model developers, experimentalists, nuclear reactor designers, and federal regulators by: (1) Establishing accepted standards, requirements and best practices for V&V and UQ of computational models and simulations, (2) Establishing accepted standards and procedures for qualifying and classifying experimental and numerical benchmark data, (3) Providing readily accessible databases for nuclear energy related experimental and numerical benchmark data that can be used in V&V assessments and computational methods development, (4) Providing a searchable knowledge base of information, documents and data on V&V and UQ, and (5) Providing web-enabled applications, tools and utilities for V&V and UQ activities, data assessment and processing, and information and data searches. From its inception, NE-KAMS will directly support nuclear energy research, development and demonstration programs within the U.S. Department of Energy (DOE), including the Consortium for Advanced Simulation of Light Water Reactors (CASL), the Nuclear Energy Advanced Modeling and Simulation (NEAMS), the Light Water Reactor Sustainability (LWRS), the Small Modular Reactors (SMR), and the Next Generation Nuclear Power Plant (NGNP) programs. These programs all involve computational modeling and simulation (M&S) of nuclear reactor systems, components and processes, and it is envisioned that NE-KAMS will help to coordinate and facilitate collaboration and sharing of resources and expertise for V&V and UQ across these programs. In addition, from the outset, NE-KAMS will support the use of computational M&S in the nuclear industry by developing guidelines and recommended practices aimed at quantifying the uncertainty and assessing the applicability of existing analysis models and methods. The NE-KAMS effort will initially focus on supporting the use of computational fluid dynamics (CFD) and thermal hydraulics (T/H) analysis for M&S of nuclear

  5. Special issue on the "Consortium for Advanced Simulation of Light Water Reactors Research and Development Progress"

    NASA Astrophysics Data System (ADS)

    Turinsky, Paul J.; Martin, William R.

    2017-04-01

    In this special issue of the Journal of Computational Physics, the research and development completed at the time of manuscript submission by the Consortium for Advanced Simulation of Light Water Reactors (CASL) is presented. CASL is the first of several Energy Innovation Hubs that have been created by the Department of Energy. The Hubs are modeled after the strong scientific management characteristics of the Manhattan Project and AT&T Bell Laboratories, and function as integrated research centers that combine basic and applied research with engineering to accelerate scientific discovery that addresses critical energy issues. Lifetime of a Hub is expected to be five or ten years depending upon performance, with CASL being granted a ten year lifetime.

  6. Advanced adaptive computational methods for Navier-Stokes simulations in rotorcraft aerodynamics

    NASA Technical Reports Server (NTRS)

    Stowers, S. T.; Bass, J. M.; Oden, J. T.

    1993-01-01

    A phase 2 research and development effort was conducted in the area of transonic, compressible, inviscid flows with an ultimate goal of numerically modeling complex flows inherent in advanced helicopter blade designs. The algorithms and methodologies are classified as adaptive methods: error estimation techniques approximate the local numerical error, and the mesh is automatically refined or unrefined to deliver a given level of accuracy. The result is a scheme which attempts to produce the best possible results with the least number of grid points, degrees of freedom, and operations. These types of schemes automatically locate and resolve shocks, shear layers, and other flow details to an accuracy level specified by the user of the code. The phase 1 work involved a feasibility study of h-adaptive methods for steady viscous flows, with emphasis on accurate simulation of vortex initiation, migration, and interaction. The phase 2 effort focused on extending these algorithms and methodologies to a three-dimensional topology.
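
    The core h-adaptive loop is: estimate the local error cell by cell, refine where the estimate exceeds a tolerance, and repeat. Below is a minimal one-dimensional sketch; the slope-jump indicator is a crude stand-in for the error estimation techniques the report develops.

      import numpy as np

      def refine(x, f, tol, max_passes=6):
          """Bisect intervals where a crude error indicator is large.
          Indicator: jump in the piecewise-linear slope across a node."""
          for _ in range(max_passes):
              u = f(x)
              slope = np.diff(u) / np.diff(x)
              jump = np.abs(np.diff(slope))        # one value per interior node
              flag = np.zeros(len(x) - 1, dtype=bool)
              flag[:-1] |= jump > tol              # node is right end of interval
              flag[1:] |= jump > tol               # node is left end of interval
              if not flag.any():
                  break
              mids = 0.5 * (x[:-1] + x[1:])[flag]  # bisect flagged intervals
              x = np.sort(np.concatenate([x, mids]))
          return x

      x0 = np.linspace(-1.0, 1.0, 11)
      x = refine(x0, lambda s: np.tanh(20 * s), tol=0.5)
      print(f"{len(x0)} -> {len(x)} points; smallest interval {np.diff(x).min():.4f}")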

  7. Advanced Simulation and Computing Fiscal Year 2016 Implementation Plan, Version 0

    SciTech Connect

    McCoy, M.; Archer, B.; Hendrickson, B.

    2015-08-27

    The Stockpile Stewardship Program (SSP) is an integrated technical program for maintaining the safety, surety, and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational capabilities to support these programs. The purpose of this IP is to outline key work requirements to be performed and to control individual work activities within the scope of work. Contractors may not deviate from this plan without a revised WA or subsequent IP.

  8. Simulations of Liners and Test Objects for a New Atlas Advanced Radiography Source

    SciTech Connect

    D. V. Morgan; S. Iversen; R. A. Hilko

    2002-06-01

    The Advanced Radiographic Source (ARS) will improve the data significantly due to its smaller source width. Because of the enhanced ARS output, larger source-to-object distances are a reality. The harder ARS source will allow radiography of thick high-Z targets. The five different spectral simulations resulted in similar imaging detector weighted transmission. This work used a limited set of test objects and imaging detectors. Other test objects and imaging detectors could possibly change the MVp-sensitivity result. The effect of material motion blur must be considered for the ARS due to the expected smaller X-ray source size. This study supports the original 1.5-MVp value.

  9. Photocatalytic Removal of Microcystin-LR by Advanced WO3-Based Nanoparticles under Simulated Solar Light

    PubMed Central

    Zhao, Chao; Li, Dawei; Feng, Chuanping; Zhang, Zhenya; Sugiura, Norio; Yang, Yingnan

    2015-01-01

    A series of advanced WO3-based photocatalysts including CuO/WO3, Pd/WO3, and Pt/WO3 were synthesized for the photocatalytic removal of microcystin-LR (MC-LR) under simulated solar light. In the present study, Pt/WO3 exhibited the best performance for the photocatalytic degradation of MC-LR. The MC-LR degradation can be described by a pseudo-first-order kinetic model. Chloride ion (Cl−) at an appropriate concentration could enhance the MC-LR degradation. The presence of metal cations (Cu2+ and Fe3+) improved the photocatalytic degradation of MC-LR. This study suggests that Pt/WO3 photocatalytic oxidation under solar light is a promising option for the purification of water containing MC-LR. PMID:25884038
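
    Pseudo-first-order kinetics means the concentration decays as C(t) = C0·exp(-k_app·t), so the apparent rate constant k_app is the slope of ln(C0/C) against time. Below is a minimal fitting sketch on synthetic data; the numbers are illustrative, not the paper's measurements.

      import numpy as np

      # Pseudo-first-order model: C(t) = C0 * exp(-k * t), so ln(C0 / C) = k * t.
      rng = np.random.default_rng(2)
      t = np.array([0.0, 10.0, 20.0, 30.0, 60.0, 90.0])     # minutes
      C = np.exp(-0.025 * t) * (1 + 0.02 * rng.normal(size=t.size))

      k_app = np.polyfit(t, np.log(C[0] / C), 1)[0]         # slope of the linearized fit
      print(f"apparent rate constant: {k_app:.4f} 1/min")
      print(f"half-life: {np.log(2) / k_app:.1f} min")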

  10. Virtual charge state separator as an advanced tool coupling measurements and simulations

    NASA Astrophysics Data System (ADS)

    Yaramyshev, S.; Vormann, H.; Adonin, A.; Barth, W.; Dahl, L.; Gerhard, P.; Groening, L.; Hollinger, R.; Maier, M.; Mickat, S.; Orzhekhovskaya, A.

    2015-05-01

    A new low energy beam transport for a multicharge uranium beam will be built at the GSI High Current Injector (HSI). All uranium charge states coming from the new ion source will be injected into the GSI heavy ion high current HSI Radio Frequency Quadrupole (RFQ), but only the design ions U⁴⁺ will be accelerated to the final RFQ energy. Detailed knowledge of the injected beam current and emittance for the pure design U⁴⁺ ions is necessary for proper beam line design, commissioning, and operation, while measurements are possible only for the full beam including all charge states. Detailed measurements of the beam current and emittance are performed behind the first quadrupole triplet of the beam line. A dedicated algorithm, based on a combination of measurements and the results of advanced beam dynamics simulations, provides for the extraction of beam current and emittance values for only the U⁴⁺ component of the beam. The proposed methods and obtained results are presented.

  11. Petascale self-consistent electromagnetic computations using scalable and accurate algorithms for complex structures

    NASA Astrophysics Data System (ADS)

    Cary, John R.; Abell, D.; Amundson, J.; Bruhwiler, D. L.; Busby, R.; Carlsson, J. A.; Dimitrov, D. A.; Kashdan, E.; Messmer, P.; Nieter, C.; Smithe, D. N.; Spentzouris, P.; Stoltz, P.; Trines, R. M.; Wang, H.; Werner, G. R.

    2006-09-01

    As the size and cost of particle accelerators escalate, high-performance computing plays an increasingly important role; optimization through accurate, detailed computer modeling increases performance and reduces costs. But consequently, computer simulations face enormous challenges. Early approximation methods, such as expansions in distance from the design orbit, were unable to supply detailed accurate results, such as in the computation of wake fields in complex cavities. Since the advent of message-passing supercomputers with thousands of processors, earlier approximations are no longer necessary, and it is now possible to compute wake fields, the effects of dampers, and self-consistent dynamics in cavities accurately. In this environment, the focus has shifted towards the development and implementation of algorithms that scale to large numbers of processors. So-called charge-conserving algorithms evolve the electromagnetic fields without the need for any global solves (which are difficult to scale up to many processors). Using cut-cell (or embedded) boundaries, these algorithms can simulate the fields in complex accelerator cavities with curved walls. New implicit algorithms, which are stable for any time-step, conserve charge as well, allowing faster simulation of structures with details small compared to the characteristic wavelength. These algorithmic and computational advances have been implemented in the VORPAL framework, a flexible, object-oriented, massively parallel computational application that allows run-time assembly of algorithms and objects, thus composing an application on the fly.
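
    The scalability argument here rests on locality: explicit field updates touch only neighboring cells, so no global linear solve is needed and the domain can be decomposed across processors. The classic Yee finite-difference time-domain update has exactly this structure; below is a minimal one-dimensional vacuum sketch (an illustration of the principle, not VORPAL's charge-conserving or cut-cell algorithms).

      import numpy as np

      c = 1.0
      nx, nt = 400, 600
      dx = 1.0
      dt = 0.99 * dx / c                 # CFL-limited explicit time step

      Ey = np.zeros(nx)
      Bz = np.zeros(nx - 1)              # staggered half-cell grid

      for n in range(nt):
          # each update uses nearest neighbors only -- no global solve
          Bz -= dt / dx * (Ey[1:] - Ey[:-1])                  # Faraday's law
          Ey[1:-1] -= dt * c**2 / dx * (Bz[1:] - Bz[:-1])     # Ampere's law (vacuum)
          Ey[nx // 4] += np.exp(-((n - 60) / 20.0) ** 2)      # soft Gaussian source

      print("field energy proxy:", (Ey**2).sum() + (Bz**2).sum())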

  12. A Computational Methodology for Simulating Thermal Loss Testing of the Advanced Stirling Convertor

    NASA Technical Reports Server (NTRS)

    Reid, Terry V.; Wilson, Scott D.; Schifer, Nicholas A.; Briggs, Maxwell H.

    2012-01-01

    The U.S. Department of Energy (DOE) and Lockheed Martin Space Systems Company (LMSSC) have been developing the Advanced Stirling Radioisotope Generator (ASRG) for use as a power system for space science missions. This generator would use two high-efficiency Advanced Stirling Convertors (ASCs), developed by Sunpower Inc. and NASA Glenn Research Center (GRC). The ASCs convert thermal energy from a radioisotope heat source into electricity. As part of ground testing of these ASCs, different operating conditions are used to simulate expected mission conditions. These conditions require achieving a particular operating frequency, hot end and cold end temperatures, and specified electrical power output for a given net heat input. In an effort to improve net heat input predictions, numerous tasks have been performed which provided a more accurate value for net heat input into the ASCs, including the use of multidimensional numerical models. Validation test hardware has also been used to provide a direct comparison of numerical results and validate the multi-dimensional numerical models used to predict convertor net heat input and efficiency. These validation tests were designed to simulate the temperature profile of an operating Stirling convertor and resulted in a measured net heat input of 244.4 W. The methodology was applied to the multi-dimensional numerical model which resulted in a net heat input of 240.3 W. The computational methodology resulted in a value of net heat input that was 1.7 percent less than that measured during laboratory testing. The resulting computational methodology and results are discussed.

  13. Advanced Simulation and Computing: A Summary Report to the Director's Review

    SciTech Connect

    McCoy, M G; Peck, T

    2003-06-01

    It has now been three years since the Advanced Simulation and Computing Program (ASCI), as managed by the Defense and Nuclear Technologies (DNT) Directorate, was reviewed by this Director's Review Committee (DRC). Since that time, there has been considerable progress for all components of the ASCI Program, and these developments will be highlighted in this document and in the presentations planned for June 9 and 10, 2003. There have also been some name changes. Today, the Program is called ''Advanced Simulation and Computing.'' Although it retains the familiar acronym ASCI, the initiative nature of the effort has given way to sustained services as an integral part of the Stockpile Stewardship Program (SSP). All computing efforts at LLNL and the other two Defense Program (DP) laboratories are funded and managed under ASCI. This includes the so-called legacy codes, which remain essential tools in stockpile stewardship. The contract between the Department of Energy (DOE) and the University of California (UC) specifies an independent appraisal of Directorate technical work and programmatic management. That appraisal is the work of this DNT Review Committee. Beginning this year, the Laboratory is implementing a new review system. This process was negotiated between UC, the National Nuclear Security Administration (NNSA), and the Laboratory Directors. Central to this approach are eight performance objectives that focus on key programmatic and administrative goals. Associated with each of these objectives are a number of performance measures to more clearly characterize the attainment of the objectives. Each performance measure has a lead directorate and one or more contributing directorates. Each measure has an evaluation plan and has identified expected documentation to be included in the ''Assessment File''.

  14. Complex Spine Pathology Simulator: An Innovative Tool for Advanced Spine Surgery Training.

    PubMed

    Gragnaniello, Cristian; Abou-Hamden, Amal; Mortini, Pietro; Colombo, Elena V; Bailo, Michele; Seex, Kevin A; Litvack, Zachary; Caputy, Anthony J; Gagliardi, Filippo

    2016-11-01

    Background Technical advancements in spine surgery have made possible the treatment of increasingly complex pathologies with less morbidity. Time constraints in surgeons' training have made it necessary to develop new training models for spine pathology. Objective To describe the application of a novel compound, Stratathane resin ST-504 derived polymer (SRSDP), that can be injected at different spinal target locations to mimic spinal epidural, subdural extra-axial, and intra-axial pathologies for the use in advanced surgical training. Material and Methods Fresh-frozen thoracolumbar and cervical spine segments of human and sheep cadavers were used to study the model. SRSDP is initially liquid after mixing, allowing it to be injected into target areas where it expands and solidifies, mimicking the entire spectrum of spinal pathologies. Results Different polymer concentrations have been codified to vary adhesiveness, texture, spread capability, deformability, and radiologic visibility. Polymer injection was performed under fluoroscopic guidance through pathology-specific injection sites that avoided compromising the surgical approach for subsequent excision of the artificial lesion. Inflation of a balloon catheter of the desired size was used to displace stiff cadaveric neurovascular structures to mimic pathology-related mass effect. Conclusion The traditional cadaveric training models principally only allow surgeons to practice the surgical approach. The complex spine pathology simulator is a novel educational tool that in a user-friendly, low-cost fashion allows trainees to practice advanced technical skills in the removal of complex spine pathology, potentially shortening some of the aspects of the learning curve of operative skills that may otherwise take many years to acquire.

  15. Influence of setback and advancement osseous genioplasty on facial outcome: A computer-simulated study.

    PubMed

    Möhlhenrich, Stephan Christian; Heussen, Nicole; Kamal, Mohammad; Peters, Florian; Fritz, Ulrike; Hölzle, Frank; Modabber, Ali

    2015-12-01

    The aim of this virtual study was to investigate the influence of angular deviation and displacement distance on the overlying soft tissue during chin genioplasty. Computed tomography data from 21 patients were read using ProPlan CMF software. Twelve simulated genioplasties were performed per patient with variable osteotomy angles and displacement distances. Soft-tissue deformations and cephalometric analyses were compared. Changes in anterior and inferior soft tissue of the chin along with the resultant lower facial third area were determined. The maximum average soft-tissue changes were obtained after 10-mm advancement: about 4.19 (SD 0.84) mm anteriorly and about -1.55 (SD 0.96) mm inferiorly. After 10-mm setback, deviations of -4.63 (SD 0.56) mm anteriorly and 0.75 (SD 1.16) mm inferiorly were found. The anterior soft tissue showed a statistically significant change with bony displacement in both directions independent of osteotomy angle (p < 0.001), and only after a 10-mm advancement with an angle of -5° were significant differences in the inferior soft tissue noted (p = 0.0055). The average area of the total lower third of the face was 24,807.80 (SD 4,091.72) mm², of which up to 62.75% was influenced. Advancement genioplasty leads to greater changes in the overlying soft tissue, whereas the affected area is larger after setback displacement. The ratio between soft and hard tissue movements largely depends on the displacement distance.

  16. A simulation study of crew performance in operating an advanced transport aircraft in an automated terminal area environment

    NASA Technical Reports Server (NTRS)

    Houck, J. A.

    1983-01-01

    A simulation study assessing crew performance in operating an advanced transport aircraft in an automated terminal area environment is described. The study required linking the Langley Advanced Transport Operating Systems Aft Flight Deck Simulator with the Terminal Area Air Traffic Model Simulation. This provided the realism of an air traffic control (ATC) environment with audio controller instructions for the flight crews, along with the capability of inserting a live aircraft into the terminal area model to interact with computer-generated aircraft. Crew performance in flying area navigation routes in the automated ATC environment was assessed using the advanced displays and two separate control systems (automatic and manual). Although the crews did not perform as well using the manual control system, their performance was within acceptable operational limits with little increase in workload. The crews favored the manual control system and felt they were more alert and aware of their environment when using it.

  17. Technology advancement for the ASCENDS mission using the ASCENDS CarbonHawk Experiment Simulator (ACES)

    NASA Astrophysics Data System (ADS)

    Obland, M. D.; Antill, C.; Browell, E. V.; Campbell, J. F.; CHEN, S.; Cleckner, C.; Dijoseph, M. S.; Harrison, F. W.; Ismail, S.; Lin, B.; Meadows, B. L.; Mills, C.; Nehrir, A. R.; Notari, A.; Prasad, N. S.; Kooi, S. A.; Vitullo, N.; Dobler, J. T.; Bender, J.; Blume, N.; Braun, M.; Horney, S.; McGregor, D.; Neal, M.; Shure, M.; Zaccheo, T.; Moore, B.; Crowell, S.; Rayner, P. J.; Welch, W.

    2013-12-01

    The ASCENDS CarbonHawk Experiment Simulator (ACES) is a NASA Langley Research Center project funded by NASA's Earth Science Technology Office that seeks to advance technologies critical to measuring atmospheric column carbon dioxide (CO2) mixing ratios in support of the NASA Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) mission. The technologies being advanced are: (1) multiple transmitter and telescope-aperture operations, (2) high-efficiency CO2 laser transmitters, (3) a high bandwidth detector and transimpedance amplifier (TIA), and (4) advanced algorithms for cloud and aerosol discrimination. The instrument architecture is being developed for ACES to operate on a high-altitude aircraft, and it will be directly scalable to meet the ASCENDS mission requirements. The above technologies are critical for developing an airborne simulator and spaceborne instrument with lower platform consumption of size, mass, and power, and with improved performance. This design employs several laser transmitters and telescope-apertures to demonstrate column CO2 retrievals with alignment of multiple laser beams in the far-field. ACES will transmit five laser beams: three from commercial lasers operating near 1.57-microns, and two from the Exelis atmospheric oxygen (O2) fiber laser amplifier system operating near 1.26-microns. The Master Oscillator Power Amplifier at 1.57-microns measures CO2 column concentrations using an Integrated-Path Differential Absorption (IPDA) lidar approach. O2 column amounts needed for calculating the CO2 mixing ratio will be retrieved using the Exelis laser system with a similar IPDA approach. The three-aperture telescope design was built to meet the constraints of the Global Hawk high-altitude unmanned aerial vehicle (UAV). This assembly integrates fiber-coupled transmit collimators for all of the laser transmitters and fiber-coupled optical signals from the three telescopes to the aft optics and detector package. The detector
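
    The IPDA approach mentioned here infers the column amount from the ratio of energy-normalized on-line and off-line returns. Below is a minimal sketch of the standard two-wavelength relation; all numerical values are placeholders, not ACES instrument parameters.

      import numpy as np

      # IPDA retrieval: differential optical depth from on/off-line returns,
      #   tau = 0.5 * ln[(P_off / E_off) / (P_on / E_on)]
      # and column number density N = tau / delta_sigma.
      P_on, P_off = 0.82e-9, 1.31e-9   # received powers [W] (placeholders)
      E_on, E_off = 1.0e-4, 1.0e-4     # transmitted pulse energies [J]
      delta_sigma = 5.0e-27            # on/off cross-section difference [m^2]

      tau = 0.5 * np.log((P_off / E_off) / (P_on / E_on))
      N_col = tau / delta_sigma        # column number density [molecules/m^2]
      print(f"differential optical depth: {tau:.3f}")
      print(f"column density: {N_col:.3e} molecules/m^2")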

  18. Some Specific CASL Requirements for Advanced Multiphase Flow Simulation of Light Water Reactors

    SciTech Connect

    R. A. Berry

    2010-11-01

    Because of the diversity of physical phenomena occurring in boiling, flashing, and bubble collapse, and of the length and time scales of LWR systems, it is imperative that the models have the following features: • Both vapor and liquid phases (and noncondensible phases, if present) must be treated as compressible. • Models must be mathematically and numerically well-posed. • The modeling methodology must be multi-scale. A fundamental derivation of the multiphase governing equation system, which should be used as a basis for advanced multiphase modeling in LWR coolant systems, is given in the Appendix using the ensemble averaging method. The remainder of this work focuses specifically on the compressible, well-posed, and multi-scale requirements of advanced simulation methods for these LWR coolant systems, because these are the most fundamental aspects, without which widespread advancement cannot be claimed. Because of the expense of developing multiple special-purpose codes and the inherent inability to couple information from the multiple, separate length- and time-scales, efforts within CASL should be focused toward development of multi-scale approaches to solve those multiphase flow problems relevant to LWR design and safety analysis. Efforts should be aimed at developing well-designed unified physical/mathematical and high-resolution numerical models for compressible, all-speed multiphase flows spanning: (1) Well-posed general mixture level (true multiphase) models for fast transient situations and safety analysis, (2) DNS (Direct Numerical Simulation)-like models to resolve interface-level phenomena like flashing and boiling flows, and critical heat flux determination (necessarily including conjugate heat transfer), and (3) Multi-scale methods to resolve both (1) and (2) automatically, depending upon specified mesh resolution, and to couple different flow models (single-phase, multiphase with several velocities and pressures, multiphase with single

  19. Reactivity Initiated Accident Simulation to Inform Transient Testing of Candidate Advanced Cladding

    SciTech Connect

    Brown, Nicholas R; Wysocki, Aaron J; Terrani, Kurt A

    2016-01-01

    Advanced cladding materials with potentially enhanced accident tolerance will yield different light water reactor performance and safety characteristics than the present zirconium-based cladding alloys. These differences are due to different cladding material properties and responses to the transient, and to some extent, reactor physics, thermal, and hydraulic characteristics. Some of the differences in reactor physics characteristics will be driven by the fundamental properties (e.g., absorption in iron for an iron-based cladding) and others will be driven by design modifications necessitated by the candidate cladding materials (e.g., a larger fuel pellet to compensate for parasitic absorption). Potential changes in thermal hydraulic limits after transition from the current zirconium-based cladding to the advanced materials will also affect the transient response of the integral fuel. This paper leverages three-dimensional reactor core simulation capabilities to inform appropriate experimental test conditions for candidate advanced cladding materials in a control rod ejection event. These test conditions are derived from three-dimensional nodal kinetics simulations of a reactivity initiated accident (RIA) in a representative state-of-the-art pressurized water reactor with both nuclear-grade iron-chromium-aluminum (FeCrAl) and silicon carbide based (SiC-SiC) cladding materials. The effort yields boundary conditions for experimental mechanical tests, specifically peak cladding strain during the power pulse following the rod ejection. The impact of candidate cladding materials on the reactor kinetics behavior of RIA progression versus reference zirconium cladding is predominantly due to differences in: (1) fuel mass/volume/specific power density, (2) spectral effects due to parasitic neutron absorption, (3) control rod worth due to hardened (or softened) spectrum, and (4) initial conditions due to power peaking and neutron transport cross sections in the
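
    In its most reduced description, a rod ejection is a step reactivity insertion in the point kinetics equations, with the prompt jump and the slower delayed-neutron rise shaping the power pulse that loads the cladding. Below is a minimal one-delayed-group sketch as a zero-dimensional stand-in for the paper's three-dimensional nodal kinetics; the parameter values are generic PWR-like numbers, not the paper's.

      beta = 0.0065          # delayed neutron fraction
      Lambda = 2.0e-5        # prompt neutron generation time [s]
      lam = 0.08             # delayed-precursor decay constant [1/s]
      rho = 0.8 * beta       # sub-prompt-critical reactivity step

      dt, t_end = 1.0e-5, 0.5
      n, C = 1.0, beta / (Lambda * lam)   # equilibrium initial conditions
      peak = n
      for _ in range(int(t_end / dt)):
          dn = ((rho - beta) / Lambda) * n + lam * C
          dC = (beta / Lambda) * n - lam * C
          n += dt * dn
          C += dt * dC
          peak = max(peak, n)

      print(f"relative power after {t_end} s: {n:.2f} (peak {peak:.2f})")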

  20. The Synergy Between Total Scattering and Advanced Simulation Techniques: Quantifying Geopolymer Gel Evolution

    SciTech Connect

    White, Claire; Bloomer, Breaunnah E.; Provis, John L.; Henson, Neil J.; Page, Katharine L.

    2012-05-16

    With the ever increasing demands for technologically advanced structural materials, together with emerging environmental consciousness due to climate change, geopolymer cement is fast becoming a viable alternative to traditional cements due to proven mechanical engineering characteristics and the reduction in CO2 emitted (approximately 80% less CO2 emitted compared to ordinary Portland cement). Nevertheless, much remains unknown regarding the kinetics of the molecular changes responsible for nanostructural evolution during the geopolymerization process. Here, in-situ total scattering measurements in the form of X-ray pair distribution function (PDF) analysis are used to quantify the extent of reaction of metakaolin/slag alkali-activated geopolymer binders, including the effects of various activators (alkali hydroxide/silicate) on the kinetics of the geopolymerization reaction. Restricting quantification of the kinetics to the initial ten hours of reaction does not enable elucidation of the true extent of the reaction, but using X-ray PDF data obtained after 128 days of reaction enables more accurate determination of the initial extent of reaction. The synergies between the in-situ X-ray PDF data and simulations conducted by multiscale density functional theory-based coarse-grained Monte Carlo analysis are outlined, particularly with regard to the potential for the X-ray data to provide a time scale for kinetic analysis of the extent of reaction obtained from the multiscale simulation methodology.

  1. A low-cost RK time advancing strategy for energy-preserving turbulent simulations

    NASA Astrophysics Data System (ADS)

    Capuano, Francesco; Coppola, Gennaro; de Luca, Luigi; Balarac, Guillaume

    2014-11-01

    Energy-conserving numerical methods are widely employed in direct and large eddy simulation of turbulent flows. Semi-discrete conservation of energy is usually obtained by adopting the so-called skew-symmetric splitting of the non-linear term, defined as a suitable average of the divergence and advective forms. Although generally allowing global conservation of kinetic energy by convection, it has the drawback of being roughly twice as expensive as standard divergence or advective forms alone. A novel time-advancement strategy that retains the conservation properties of skew-symmetric-based schemes at a reduced computational cost has been developed in the framework of explicit Runge-Kutta schemes. It is found that optimal energy-conservation can be achieved by properly constructed Runge-Kutta methods in which only divergence and advective forms for the convective term are adopted. The new schemes can be considerably faster than skew-symmetric-based techniques. A general framework for the construction of optimized Runge-Kutta coefficients is developed, which has proven to be able to produce new methods with a specified order of accuracy on both solution and energy. The effectiveness of the method is demonstrated by numerical simulation of homogeneous isotropic turbulence.
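
    For reference, the skew-symmetric splitting referred to here is the arithmetic mean of the divergence and advective forms of the convective term, written out below in its standard form:

      \frac{1}{2}\,\nabla\cdot(\mathbf{u}\,\mathbf{u}) \;+\; \frac{1}{2}\,(\mathbf{u}\cdot\nabla)\,\mathbf{u}

    Evaluating both forms at every step is what makes the splitting roughly twice as expensive; the strategy described above instead uses only the divergence or the advective form within individual Runge-Kutta stages, with the stage coefficients constructed so that the energy-conservation property of the average is recovered across the full time step.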

  2. Advanced techniques and painless procedures for nonlinear contact analysis and forming simulation via implicit FEM

    NASA Astrophysics Data System (ADS)

    Zhuang, Shoubing

    2013-05-01

    Nonlinear contact analysis, including forming simulation, via finite element methods has crucial and practical applications in many engineering fields. However, because of high nonlinearity, nonlinear contact analysis still remains an extremely challenging obstacle for many industrial applications. The implicit finite element scheme is generally more accurate than the explicit finite element scheme, but it has a known challenge of convergence because of complex geometries, large relative motion, and rapid contact state change. Diagnosing the convergence issues of nonlinear contact can be a very painful process. Complicated contact models often have a great many contact surfaces, and defining the contact pairs well using common contact-definition methods is hard work that either results in hundreds of contact pairs or is time-consuming. This paper presents advanced techniques for nonlinear contact analysis and forming simulation via the implicit finite element scheme and the penalty method. The calculation of the default automatic contact stiffness is addressed. Furthermore, this paper presents the idea of selection groups to help easily and efficiently define contact pairs for complicated contact analysis, and the corresponding implementation and usage are discussed. Lastly, typical nonlinear contact models and forming models with nonlinear material models are shown in the paper to demonstrate the key presented methods and technologies.
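
    The penalty method referred to here enforces non-penetration only approximately: when a node penetrates a surface, a restoring force proportional to the penetration depth is applied, and the proportionality constant is the contact stiffness whose automatic default the paper discusses. Below is a minimal point-mass-on-rigid-plane sketch illustrating the idea and its central trade-off; it is not the paper's formulation.

      # Penalty contact of a point mass dropped onto a rigid plane at y = 0.
      k_pen = 1.0e5      # penalty stiffness: too low -> deep penetration,
                         # too high -> ill-conditioning and convergence trouble
      m, g = 1.0, 9.81
      dt = 1.0e-4
      y, v = 1.0, 0.0    # initial height [m] and velocity [m/s]

      min_y = y
      for _ in range(20000):                    # 2 s of simulated time
          penetration = max(0.0, -y)
          f = -m * g + k_pen * penetration      # gravity + penalty contact force
          v += dt * f / m                       # semi-implicit Euler
          y += dt * v
          min_y = min(min_y, y)

      print(f"maximum penetration: {-min_y:.4f} m")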

  3. Propulsion Simulations Using Advanced Turbulence Models with the Unstructured Grid CFD Tool, TetrUSS

    NASA Technical Reports Server (NTRS)

    Abdol-Hamid, Khaled S.; Frink, Neal T.; Deere, Karen A.; Pandya, Mohangna J.

    2004-01-01

    A computational investigation has been completed to assess the capability of TetrUSS for exhaust nozzle flows. Three configurations were chosen for this study: (1) an axisymmetric supersonic jet, (2) a transonic axisymmetric boattail with a solid sting operated at different Reynolds and Mach numbers, and (3) an isolated non-axisymmetric nacelle with a supersonic cruise nozzle. These configurations were chosen because existing experimental data provided a means for measuring the ability of TetrUSS to simulate complex nozzle flows. The main objective of this paper is to validate the implementation of advanced two-equation turbulence models in the unstructured-grid CFD code USM3D for propulsion flow cases. USM3D is the flow solver of the TetrUSS system. Three turbulence models, namely the Menter Shear Stress Transport (SST), the basic k-epsilon, and the Spalart-Allmaras (SA) models, are used in the present study. The results are generally in agreement with other implementations of these models in structured-grid CFD codes. Results indicate that USM3D provides accurate simulations for complex aerodynamic configurations with propulsion integration.

  4. Advanced coal-fueled industrial cogeneration gas turbine system: Hot End Simulation Rig

    SciTech Connect

    Galica, M.A.

    1994-02-01

    The Hot End Simulation Rig (HESR) was an integral part of the overall Solar/METC program chartered to prove the technical, economic, and environmental feasibility of a coal-fueled gas turbine for cogeneration applications. The program was to culminate in a test of a Solar Centaur Type H engine system operated on coal slurry fuel throughout the engine design operating range. This particular activity was designed to verify the performance of the Centaur Type H engine hot section materials in a coal-fired environment, varying the amounts of alkali, ash, and sulfur in the coal to assess material corrosion. Success in the program was dependent upon the satisfactory resolution of several key issues. Included was the control of hot end corrosion and erosion, necessary to ensure adequate operating life. The Hot End Simulation Rig addressed this important issue by exposing currently used hot section turbine alloys, alternate alloys, and commercially available advanced protective coating systems to a representative coal-fueled environment at turbine inlet temperatures typical of Solar's Centaur Type H. Turbine hot end components which would experience material degradation include the transition duct from the combustor outlet to the turbine inlet, the shroud, nozzles, and blades. A ceramic candle filter vessel was included in the system as the particulate removal device for the HESR. In addition to turbine material testing, the candle material was exposed and evaluated. Long-term testing was intended to sufficiently characterize the performance of these materials for the turbine.

  5. A computer program for estimating the power-density spectrum of advanced continuous simulation language generated time histories

    NASA Technical Reports Server (NTRS)

    Dunn, H. J.

    1981-01-01

    A computer program for performing frequency analysis of time history data is presented. The program uses circular convolution and the fast Fourier transform to calculate the power-density spectrum (PDS) of time history data. The program interfaces with the advanced continuous simulation language (ACSL) so that a frequency analysis may be performed on ACSL-generated simulation variables. An example of the calculation of the PDS of a Van der Pol oscillator is presented.
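
    A minimal modern re-creation of the core computation is sketched below: an FFT-based periodogram of a Van der Pol time history. The original program's circular-convolution smoothing and its ACSL interface are not reproduced:

      import numpy as np

      def power_density_spectrum(x, dt):
          """Estimate the one-sided power-density spectrum of a uniformly
          sampled time history via the FFT (a simple periodogram)."""
          n = len(x)
          X = np.fft.rfft(x - np.mean(x))
          psd = (dt / n) * np.abs(X) ** 2
          psd[1:-1] *= 2.0                      # fold in negative frequencies
          freqs = np.fft.rfftfreq(n, d=dt)
          return freqs, psd

      # Van der Pol oscillator time history from a crude semi-implicit Euler step
      mu, dt, n = 1.0, 0.01, 4096
      x, v = 1.0, 0.0
      hist = np.empty(n)
      for i in range(n):
          a = mu * (1 - x * x) * v - x          # x'' = mu*(1 - x^2)*x' - x
          v += a * dt
          x += v * dt
          hist[i] = x
      freqs, psd = power_density_spectrum(hist, dt)
      print(freqs[np.argmax(psd)])              # dominant frequency (roughly 0.15 Hz)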

  6. ADVANCED UTILITY SIMULATION MODEL, REPORT OF SENSITIVITY TESTING, CALIBRATION, AND MODEL OUTPUT COMPARISONS (VERSION 3.0) TAPE

    EPA Science Inventory

    The report is one of 11 in a series describing the initial development of the Advanced Utility Simulation Model (AUSM) by the Universities Research Group on Energy (URGE) and its continued development by the Science Applications International Corporation (SAIC) research team. The...

  7. ADVANCED SIMULATION CAPABILITY FOR ENVIRONMENTAL MANAGEMENT- CURRENT STATUS AND PHASE II DEMONSTRATION RESULTS

    SciTech Connect

    Seitz, R.

    2013-02-26

    The U.S. Department of Energy (USDOE) Office of Environmental Management (EM), Office of Soil and Groundwater, is supporting development of the Advanced Simulation Capability for Environmental Management (ASCEM). ASCEM is a state-of-the-art scientific tool and approach for understanding and predicting contaminant fate and transport in natural and engineered systems. The modular and open source high-performance computing tool facilitates integrated approaches to modeling and site characterization that enable robust and standardized assessments of performance and risk for EM cleanup and closure activities. The ASCEM project continues to make significant progress in development of computer software capabilities with an emphasis on integration of capabilities in FY12. Capability development is occurring for both the Platform and Integrated Toolsets and High-Performance Computing (HPC) Multiprocess Simulator. The Platform capabilities provide the user interface and tools for end-to-end model development, starting with definition of the conceptual model, management of data for model input, model calibration and uncertainty analysis, and processing of model output, including visualization. The HPC capabilities target increased functionality of process model representations, toolsets for interaction with Platform, and verification and model confidence testing. The Platform and HPC capabilities are being tested and evaluated for EM applications in a set of demonstrations as part of Site Applications Thrust Area activities. The Phase I demonstration focusing on individual capabilities of the initial toolsets was completed in 2010. The Phase II demonstration completed in 2012 focused on showcasing integrated ASCEM capabilities. For Phase II, the Hanford Site deep vadose zone (BC Cribs) served as an application site for an end-to-end demonstration of capabilities, with emphasis on integration and linkages between the Platform and HPC components. Other demonstrations

  8. ADVANCED SIMULATION CAPABILITY FOR ENVIRONMENTAL MANAGEMENT – CURRENT STATUS AND PHASE II DEMONSTRATION RESULTS

    SciTech Connect

    Seitz, Roger; Freshley, Mark D.; Dixon, Paul; Hubbard, Susan S.; Freedman, Vicky L.; Flach, Gregory P.; Faybishenko, Boris; Gorton, Ian; Finsterle, Stefan A.; Moulton, John D.; Steefel, Carl I.; Marble, Justin

    2013-06-27

    The U.S. Department of Energy (USDOE) Office of Environmental Management (EM), Office of Soil and Groundwater, is supporting development of the Advanced Simulation Capability for Environmental Management (ASCEM). ASCEM is a state-of-the-art scientific tool and approach for understanding and predicting contaminant fate and transport in natural and engineered systems. The modular and open source high-performance computing tool facilitates integrated approaches to modeling and site characterization that enable robust and standardized assessments of performance and risk for EM cleanup and closure activities. The ASCEM project continues to make significant progress in development of computer software capabilities with an emphasis on integration of capabilities in FY12. Capability development is occurring for both the Platform and Integrated Toolsets and High-Performance Computing (HPC) Multiprocess Simulator. The Platform capabilities provide the user interface and tools for end-to-end model development, starting with definition of the conceptual model, management of data for model input, model calibration and uncertainty analysis, and processing of model output, including visualization. The HPC capabilities target increased functionality of process model representations, toolsets for interaction with Platform, and verification and model confidence testing. The Platform and HPC capabilities are being tested and evaluated for EM applications in a set of demonstrations as part of Site Applications Thrust Area activities. The Phase I demonstration focusing on individual capabilities of the initial toolsets was completed in 2010. The Phase II demonstration completed in 2012 focused on showcasing integrated ASCEM capabilities. For Phase II, the Hanford Site deep vadose zone (BC Cribs) served as an application site for an end-to-end demonstration of capabilities, with emphasis on integration and linkages between the Platform and HPC components. Other demonstrations

  9. The Nuclear Energy Advanced Modeling and Simulation Safeguards and Separations Reprocessing Plant Toolkit

    SciTech Connect

    McCaskey, Alex; Billings, Jay Jay; de Almeida, Valmor F

    2011-08-01

    This report details the progress made in the development of the Reprocessing Plant Toolkit (RPTk) for the DOE Nuclear Energy Advanced Modeling and Simulation (NEAMS) program. RPTk is an ongoing development effort intended to provide users with an extensible, integrated, and scalable software framework for the modeling and simulation of spent nuclear fuel reprocessing plants by enabling the insertion and coupling of user-developed physicochemical modules of variable fidelity. The NEAMS Safeguards and Separations IPSC (SafeSeps) and the Enabling Computational Technologies (ECT) supporting program element have partnered to release an initial version of RPTk with a focus on software usability and utility. RPTk implements a data flow architecture that is the source of the system's extensibility and scalability. Data flows through physicochemical modules sequentially, with each module importing data, evolving it, and exporting the updated data to the next downstream module. This is accomplished through various architectural abstractions designed to give RPTk true plug-and-play capabilities. A simple application of this architecture, as well as RPTk data flow and evolution, is demonstrated in Section 6 with an application consisting of two coupled physicochemical modules. The remaining sections describe this ongoing work in full, from system vision and design inception to full implementation. Section 3 describes the relevant software development processes used by the RPTk development team. These processes allow the team to manage system complexity and ensure stakeholder satisfaction. This section also details the work done on the RPTk "black box" and "white box" models, with a special focus on the separation of concerns between the RPTk user interface and application runtime. Sections 4 and 5 discuss the application runtime component in more detail, and describe the dependencies, behavior, and rigorous testing of its constituent components.
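
    The sequential data-flow pattern described above is easy to illustrate; the sketch below uses hypothetical class and function names (RPTk's actual API is not reproduced here):

      class Module:
          """Sketch of the sequential data-flow pattern described for RPTk:
          each physicochemical module imports data, evolves it, and exports
          it to the next downstream module. Names are hypothetical."""
          def __init__(self, name, evolve):
              self.name, self.evolve = name, evolve

          def process(self, data):
              return self.evolve(dict(data))     # import -> evolve -> export

      def run_pipeline(modules, data):
          for m in modules:                      # strictly sequential data flow
              data = m.process(data)
          return data

      # Two toy "physicochemical" stages, illustrative only
      plant = [Module("dissolver", lambda d: {**d, "dissolved": True}),
               Module("extraction", lambda d: {**d, "u_recovered": 0.99})]
      print(run_pipeline(plant, {"feed": "spent fuel"}))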

  10. An architecture and model for cognitive engineering simulation analysis - Application to advanced aviation automation

    NASA Technical Reports Server (NTRS)

    Corker, Kevin M.; Smith, Barry R.

    1993-01-01

    The process of designing crew stations for large-scale, complex automated systems is made difficult because of the flexibility of roles that the crew can assume, and by the rapid rate at which system designs become fixed. Modern cockpit automation frequently involves multiple layers of control and display technology in which human operators must exercise equipment in augmented, supervisory, and fully automated control modes. In this context, we maintain that effective human-centered design is dependent on adequate models of human/system performance in which representations of the equipment, the human operator(s), and the mission tasks are available to designers for manipulation and modification. The joint Army-NASA Aircrew/Aircraft Integration (A3I) Program, with its attendant Man-machine Integration Design and Analysis System (MIDAS), was initiated to meet this challenge. MIDAS provides designers with a test bed for analyzing human-system integration in an environment in which both cognitive human function and 'intelligent' machine function are described in similar terms. This distributed object-oriented simulation system, its architecture and assumptions, and our experiences from its application in advanced aviation crew stations are described.

  11. Performance experiments with alternative advanced teleoperator control modes for a simulated solar maximum satellite repair

    NASA Technical Reports Server (NTRS)

    Das, H.; Zak, H.; Kim, W. S.; Bejczy, A. K.; Schenker, P. S.

    1992-01-01

    Experiments are described which were conducted at the JPL Advanced Teleoperator Lab to demonstrate and evaluate the effectiveness of various teleoperator control modes in the performance of a simulated Solar Max Satellite Repair (SMSR) task. The SMSR was selected as a test case because it is very rich in performance capability requirements and was actually performed by two EVA astronauts in the Space Shuttle Bay in 1984. The main subtasks are: thermal blanket removal; installation of a hinge attachment for electrical panel opening; opening of the electrical panel; removal of electrical connectors; relining of cable bundles; replacement of the electrical panel; securing of parts and cables; re-mating of electrical connectors; closing of the electrical panel; and reinstating the thermal blanket. The current performance experiments are limited to thermal blanket cutting, electrical panel unbolting, and handling of electrical bundles and connectors. In one formal experiment, seven different control modes were applied to the unbolting and reinsertion of electrical panel screws subtasks. The seven control modes are alternative combinations of manual position and rate control with force feedback and remote compliance referenced to force-torque sensor information. Force-torque sensor and end effector position data and task completion times were recorded for analysis and quantification of operator performance.

  12. Recent advances in large-eddy simulation of spray and coal combustion

    NASA Astrophysics Data System (ADS)

    Zhou, L. X.

    2013-07-01

    Large-eddy simulation (LES) is undergoing rapid development and is recognized as a possible second generation of CFD methods for use in engineering. Spray and coal combustion is widely used in power, transportation, chemical and metallurgical, iron and steel making, and aeronautical and astronautical engineering; hence LES of spray and coal two-phase combustion is particularly important for engineering applications. LES of two-phase combustion is attracting more and more attention, since it can give the detailed instantaneous flow and flame structures and more exact statistical results than Reynolds-averaged (RANS) modeling. One of the key problems in LES is to develop sub-grid scale (SGS) models, including SGS stress models and combustion models. Different investigators have proposed or adopted various SGS models. In this paper the author attempts to review the advances in studies on LES of spray and coal combustion, including the studies done by the author and his colleagues. The SGS models adopted by different investigators are described, some of their main results are summarized, and finally some research needs are discussed.

  13. An expanded framework for the advanced computational testing and simulation toolkit

    SciTech Connect

    Marques, Osni A.; Drummond, Leroy A.

    2003-11-09

    The Advanced Computational Testing and Simulation (ACTS) Toolkit is a set of computational tools developed primarily at DOE laboratories and aimed at simplifying the solution of common and important computational problems. The use of the tools reduces the development time for new codes, and the tools provide functionality that might not otherwise be available. This document outlines an agenda for expanding the scope of the ACTS Project based on lessons learned from current activities. Highlights of this agenda include peer-reviewed certification of new tools; finding tools to solve problems that are not currently addressed by the Toolkit; working in collaboration with other software initiatives and DOE computer facilities; expanding outreach efforts; promoting interoperability; further developing the tools; and improving the functionality of the ACTS Information Center, among other tasks. The ultimate goal is to make the ACTS tools more widely used and more effective in solving DOE's and the nation's scientific problems through the creation of a reliable software infrastructure for scientific computing.

  14. Time controlled descent guidance algorithm for simulation of advanced ATC systems

    NASA Technical Reports Server (NTRS)

    Lee, H. Q.; Erzberger, H.

    1983-01-01

    Concepts and computer algorithms for generating time controlled four dimensional descent trajectories are described. The algorithms were implemented in the air traffic control simulator and used by experienced controllers in studies of advanced air traffic flow management procedures. A time controlled descent trajectory comprises a vector function of time, including position, altitude, and heading, that starts at the initial position of the aircraft and ends at touchdown. The trajectory provides a four dimensional reference path which will cause an aircraft tracking it to touchdown at a predetermined time with a minimum of fuel consumption. The problem of constructing such trajectories is divided into three subproblems involving synthesis of horizontal, vertical, and speed profiles. The horizontal profile is constructed as a sequence of turns and straight lines passing through a specified set of waypoints. The vertical profile consists of a sequence of level flight and constant descent angle segments defined by altitude waypoints. The speed profile is synthesized as a sequence of constant Mach number, constant indicated airspeed, and acceleration/deceleration legs. It is generated by integrating point mass differential equations of motion, which include the thrust and drag models of the aircraft.
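
    As an illustration of the speed-profile synthesis step, the sketch below integrates a single point-mass deceleration leg. The thrust and drag models and all numbers are placeholders, not the algorithm's actual aircraft performance data:

      def descent_deceleration_leg(v0, v_target, mass, thrust_idle, drag_fn, dt=0.1):
          """Integrate a point-mass deceleration leg: m*dv/dt = T - D(v).
          Returns the leg duration and along-track distance. Real trajectory
          synthesis also handles altitude, winds, and Mach/CAS transitions."""
          t, v, dist = 0.0, v0, 0.0
          while v > v_target:                    # requires D(v) > T on this leg
              a = (thrust_idle - drag_fn(v)) / mass
              v += a * dt
              dist += v * dt
              t += dt
          return t, dist

      # Hypothetical transport-category numbers (illustrative only)
      drag = lambda v: 0.5 * 1.0 * v**2 * 120 * 0.04   # 0.5*rho*V^2*S*CD placeholder
      t, d = descent_deceleration_leg(v0=120.0, v_target=90.0, mass=60000.0,
                                      thrust_idle=10000.0, drag_fn=drag)
      print(t, d)                                # leg duration (s), distance (m)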

  15. Advanced simulation capability for environmental management - current status and future applications

    SciTech Connect

    Freshley, Mark; Scheibe, Timothy; Robinson, Bruce; Moulton, J. David; Dixon, Paul; Marble, Justin; Gerdes, Kurt; Stockton, Tom; Seitz, Roger; Black, Paul

    2013-07-01

    The U.S. Department of Energy (US DOE) Office of Environmental Management (EM), Office of Soil and Groundwater (EM-12), is supporting development of the Advanced Simulation Capability for Environmental Management (ASCEM). ASCEM is a state-of-the-art scientific tool and approach that is currently aimed at understanding and predicting contaminant fate and transport in natural and engineered systems. ASCEM is a modular and open source high-performance computing tool. It will be used to facilitate integrated approaches to modeling and site characterization, and provide robust and standardized assessments of performance and risk for EM cleanup and closure activities. The ASCEM project continues to make significant progress in development of capabilities, with current emphasis on integration of capabilities in FY12. Capability development is occurring for both the Platform and Integrated Tool-sets and the High-Performance Computing (HPC) multi-process simulator. The Platform capabilities provide the user interface and tools for end-to-end model development, starting with definition of the conceptual model, management of data for model input, model calibration and uncertainty analysis, and processing of model output, including visualization. The HPC capabilities target increased functionality of process model representations, tool-sets for interaction with the Platform, and verification and model confidence testing. The integration of the Platform and HPC capabilities was tested and evaluated for EM applications in a set of demonstrations as part of Site Applications Thrust Area activities in 2012. The current maturity of the ASCEM computational and analysis capabilities has afforded the opportunity for collaborative efforts to develop decision analysis tools to support and optimize radioactive waste disposal. Recent advances in computerized decision analysis frameworks provide the perfect opportunity to bring this capability into ASCEM. This will allow radioactive waste

  16. Update on ORNL TRANSFORM Tool: Simulating Multi-Module Advanced Reactor with End-to-End I&C

    SciTech Connect

    Hale, Richard Edward; Fugate, David L.; Cetiner, Sacit M.; Qualls, A. L.

    2015-05-01

    The Small Modular Reactor (SMR) Dynamic System Modeling Tool project is in the fourth year of development. The project is designed to support collaborative modeling and study of various advanced SMR (non-light water cooled reactor) concepts, including the use of multiple coupled reactors at a single site. The focus of this report is the development of a steam generator and drum system model that includes the complex dynamics of typical steam drum systems, the development of instrumentation and controls for the steam generator with drum system model, and the development of multi-reactor module models that reflect the full power reactor innovative small module design concept. The objective of the project is to provide a common simulation environment and baseline modeling resources to facilitate rapid development of dynamic advanced reactor models; ensure consistency among research products within the Instrumentation, Controls, and Human-Machine Interface technical area; and leverage cross-cutting capabilities while minimizing duplication of effort. The combined simulation environment and suite of models are identified as the TRANSFORM tool. The critical elements of this effort include (1) defining a standardized, common simulation environment that can be applied throughout the Advanced Reactors Technology program; (2) developing a library of baseline component modules that can be assembled into full plant models using available geometry, design, and thermal-hydraulic data; (3) defining modeling conventions for interconnecting component models; and (4) establishing user interfaces and support tools to facilitate simulation development (i.e., configuration and parameterization), execution, and results display and capture.

  17. Advanced simulation for analysis of critical infrastructure : abstract cascades, the electric power grid, and Fedwire.

    SciTech Connect

    Glass, Robert John, Jr.; Stamber, Kevin Louis; Beyeler, Walter Eugene

    2004-08-01

    Critical Infrastructures are formed by a large number of components that interact within complex networks. As a rule, infrastructures contain strong feedbacks either explicitly through the action of hardware/software control, or implicitly through the action/reaction of people. Individual infrastructures influence others and grow, adapt, and thus evolve in response to their multifaceted physical, economic, cultural, and political environments. Simply put, critical infrastructures are complex adaptive systems. In the Advanced Modeling and Techniques Investigations (AMTI) subgroup of the National Infrastructure Simulation and Analysis Center (NISAC), we are studying infrastructures as complex adaptive systems. In one of AMTI's efforts, we are focusing on cascading failure as can occur with devastating results within and between infrastructures. Over the past year we have synthesized and extended the large variety of abstract cascade models developed in the field of complexity science and have started to apply them to specific infrastructures that might experience cascading failure. In this report we introduce our comprehensive model, Polynet, which simulates cascading failure over a wide range of network topologies, interaction rules, and adaptive responses as well as multiple interacting and growing networks. We first demonstrate Polynet for the classical Bak, Tang, and Wiesenfeld or BTW sand-pile in several network topologies. We then apply Polynet to two very different critical infrastructures: the high voltage electric power transmission system which relays electricity from generators to groups of distribution-level consumers, and Fedwire which is a Federal Reserve service for sending large-value payments between banks and other large financial institutions. For these two applications, we tailor interaction rules to represent appropriate unit behavior and consider the influence of random transactions within two stylized networks: a regular homogeneous array and a
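
    Since the abstract names the BTW sand-pile as its benchmark, a textbook version of that model is sketched below (the classical model only, not the Polynet implementation):

      import numpy as np

      def btw_sandpile(n=50, grains=5000, seed=0):
          """Minimal Bak-Tang-Wiesenfeld sandpile on an n x n grid: drop
          grains at random sites; any site holding 4+ grains topples, sending
          one grain to each neighbor (grains fall off the open boundary).
          Returns the avalanche sizes, the classic abstract-cascade benchmark."""
          rng = np.random.default_rng(seed)
          z = np.zeros((n, n), dtype=int)
          sizes = []
          for _ in range(grains):
              i, j = rng.integers(n, size=2)
              z[i, j] += 1
              size = 0
              while True:
                  unstable = np.argwhere(z >= 4)
                  if len(unstable) == 0:
                      break
                  for a, b in unstable:          # topple every unstable site
                      z[a, b] -= 4
                      size += 1
                      for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                          p, q = a + da, b + db
                          if 0 <= p < n and 0 <= q < n:
                              z[p, q] += 1
              sizes.append(size)
          return sizes                           # power-law distributed sizes

      sizes = btw_sandpile()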

  18. Advanced Simulation Capability for Environmental Management - Current Status and Phase II Demonstration Results - 13161

    SciTech Connect

    Seitz, Roger R.; Flach, Greg; Freshley, Mark D.; Freedman, Vicky; Gorton, Ian; Dixon, Paul; Moulton, J. David; Hubbard, Susan S.; Faybishenko, Boris; Steefel, Carl I.; Finsterle, Stefan; Marble, Justin

    2013-07-01

    The U.S. Department of Energy (US DOE) Office of Environmental Management (EM), Office of Soil and Groundwater, is supporting development of the Advanced Simulation Capability for Environmental Management (ASCEM). ASCEM is a state-of-the-art scientific tool and approach for understanding and predicting contaminant fate and transport in natural and engineered systems. The modular and open source high-performance computing tool facilitates integrated approaches to modeling and site characterization that enable robust and standardized assessments of performance and risk for EM cleanup and closure activities. The ASCEM project continues to make significant progress in development of computer software capabilities with an emphasis on integration of capabilities in FY12. Capability development is occurring for both the Platform and Integrated Tool-sets and High-Performance Computing (HPC) Multi-process Simulator. The Platform capabilities provide the user interface and tools for end-to-end model development, starting with definition of the conceptual model, management of data for model input, model calibration and uncertainty analysis, and processing of model output, including visualization. The HPC capabilities target increased functionality of process model representations, tool-sets for interaction with Platform, and verification and model confidence testing. The Platform and HPC capabilities are being tested and evaluated for EM applications in a set of demonstrations as part of Site Applications Thrust Area activities. The Phase I demonstration focusing on individual capabilities of the initial tool-sets was completed in 2010. The Phase II demonstration completed in 2012 focused on showcasing integrated ASCEM capabilities. For Phase II, the Hanford Site deep vadose zone (BC Cribs) served as an application site for an end-to-end demonstration of capabilities, with emphasis on integration and linkages between the Platform and HPC components. Other demonstrations

  19. Noble gas and hydrocarbon tracers in multiphase unconventional hydrocarbon systems: Toward integrated advanced reservoir simulators

    NASA Astrophysics Data System (ADS)

    Darrah, T.; Moortgat, J.; Poreda, R. J.; Muehlenbachs, K.; Whyte, C. J.

    2015-12-01

    Although hydrocarbon production from unconventional energy resources has increased dramatically in the last decade, total unconventional oil and gas recovery from black shales is still less than 25% and 9% of the totals in place, respectively. Further, the majority of increased hydrocarbon production results from increasing the lengths of laterals, the number of hydraulic fracturing stages, and the volume of consumptive water usage. These strategies all reduce the economic efficiency of hydrocarbon extraction. The poor recovery statistics result from an insufficient understanding of some of the key physical processes in complex, organic-rich, low porosity formations (e.g., phase behavior, fluid-rock interactions, flow mechanisms at nano-scale confinement, and the role of natural fractures and faults as conduits for flow). Noble gases and other hydrocarbon tracers are capable of recording subsurface fluid-rock interactions on a variety of geological scales (micro-, meso-, to macro-scale) and provide analogs for the movement of hydrocarbons in the subsurface. As such, geochemical data enrich the input for the numerical modeling of multi-phase (e.g., oil, gas, and brine) fluid flow in highly heterogeneous, low permeability formations. Herein we present a combination of noble gas (He, Ne, Ar, Kr, and Xe abundances and isotope ratios) and molecular and isotopic hydrocarbon data from a geographically and geologically diverse set of unconventional hydrocarbon reservoirs in North America. Specifically, we include data from the Marcellus, Utica, Barnett, and Eagle Ford formations and the Illinois basin. Our presentation includes geochemical and geological interpretation and our perspective on the first steps toward building an advanced reservoir simulator for tracer transport in multicomponent multiphase compositional flow (presented separately, in Moortgat et al., 2015).

  20. Development of Advanced Wear and Corrosion Resistant Systems Through Laser Surface Alloying and Materials Simulations

    SciTech Connect

    R. P. Martukanitz and S. Babu

    2007-05-03

    Laser surfacing in the form of cladding, alloying, and modification is gaining widespread use because of its ability to provide high deposition rates, low thermal distortion, and refined microstructure due to high solidification rates. Because of these advantages, laser surface alloying is considered a prime candidate for producing ultra-hard coatings through the establishment or in situ formation of composite structures. Therefore, a program was conducted by the Applied Research Laboratory, Pennsylvania State University and Oak Ridge National Laboratory to develop the scientific and engineering basis for performing laser-based surface modifications involving the addition of hard particles, such as carbides, borides, and nitrides, within a metallic matrix for improved wear, fatigue, creep, and corrosion resistance. This has involved the development of advanced laser processing and simulation techniques, along with the refinement and application of these techniques for predicting and selecting materials and processing parameters for the creation of new surfaces having improved properties over current coating technologies. This program has also resulted in the formulation of process and material simulation tools capable of examining the potential for the formation and retention of composite coatings and deposits produced using laser processing techniques, as well as positive laboratory demonstrations in producing these coatings. In conjunction with the process simulation techniques, the application of computational thermodynamic and kinetic models to design laser surface alloying materials was demonstrated and resulted in a vast improvement in the formulation of materials used for producing composite coatings. The methodology was used to identify materials and to selectively modify microstructures for increasing hardness of deposits produced by the laser surface alloying process. Computational thermodynamic calculations indicated that it was possible to induce the

  1. Development of Advanced Electrochemical Emission Spectroscopy for Monitoring Corrosion in Simulated DOE Liquid Waste

    SciTech Connect

    Digby Macdonald; Brian Marx; Balaji Soundararajan; Morgan Smith

    2005-07-28

    The different tasks that have been carried out under the current program are as follows: (1) Theoretical and experimental assessment of general corrosion of iron/steel in borate buffer solutions by using electrochemical impedance spectroscopy (EIS), ellipsometry and XPS techniques; (2) Development of a damage function analysis (DFA), which would help in predicting the accumulation of damage due to pitting corrosion in an environment prototypical of DOE liquid waste systems; (3) Experimental measurement of crack growth rate, acoustic emission signals, and coupling currents for fracture in carbon and low alloy steels as functions of mechanical (stress intensity), chemical (conductivity), electrochemical (corrosion potential, ECP), and microstructural (grain size, precipitate size, etc) variables in a systematic manner, with particular attention being focused on the structure of the noise in the current and its correlation with the acoustic emissions; (4) Development of fracture mechanisms for carbon and low alloy steels that are consistent with the crack growth rate, coupling current data and acoustic emissions; (5) Inserting advanced crack growth rate models for SCC into existing deterministic codes for predicting the evolution of corrosion damage in DOE liquid waste storage tanks; (6) Computer simulation of the anodic and cathodic activity on the surface of the steel samples in order to exactly predict the corrosion mechanisms; (7) Wavelet analysis of EC noise data from steel samples undergoing corrosion in an environment similar to that of the high level waste storage containers, to extract data pertaining to general, pitting and stress corrosion processes, from the overall data. The work has yielded a number of important findings, including an unequivocal demonstration of the role of chloride ion in passivity breakdown on nickel in terms of cation vacancy generation within the passive film, the first detection and characterization of individual micro fracture

  2. Investigation of Alien Wavelength Quality in Live Multi-Domain, Multi-Vendor Link Using Advanced Simulation Tool

    NASA Astrophysics Data System (ADS)

    Nordal Petersen, Martin; Nuijts, Roeland; Lange Bjørn, Lars

    2014-05-01

    This article presents an advanced optical model for simulation of alien wavelengths in multi-domain and multi-vendor dense wavelength-division multiplexing networks. The model aids optical network planners with a better understanding of the non-linear effects present in dense wavelength-division multiplexing systems and better utilization of alien wavelengths in future applications. The limiting physical effects for alien wavelengths are investigated in relation to power levels, channel spacing, and other factors. The simulation results are verified through experimental setup in live multi-domain dense wavelength-division multiplexing systems between two national research networks: SURFnet in Holland and NORDUnet in Denmark.

  3. Big data to smart data in Alzheimer's disease: Real-world examples of advanced modeling and simulation.

    PubMed

    Haas, Magali; Stephenson, Diane; Romero, Klaus; Gordon, Mark Forrest; Zach, Neta; Geerts, Hugo

    2016-09-01

    Many disease-modifying clinical development programs in Alzheimer's disease (AD) have failed to date, and development of new and advanced preclinical models that generate actionable knowledge is desperately needed. This review reports on computer-based modeling and simulation approaches as a powerful tool in AD research. Statistical data-analysis techniques can identify associations between certain data and phenotypes, such as diagnosis or disease progression. Other approaches integrate domain expertise in a formalized mathematical way to understand how specific components of pathology integrate into complex brain networks. Private-public partnerships focused on data sharing, causal inference and pathway-based analysis, crowdsourcing, and mechanism-based quantitative systems modeling represent successful real-world modeling examples with substantial impact on CNS diseases. As in other disease indications, successful real-world examples of advanced simulation can generate actionable support for drug discovery and development in AD, illustrating the value that can be generated for different stakeholders.

  4. CFD Simulations of a Regenerative Process for Carbon Dioxide Capture in Advanced Gasification Based Power Systems

    SciTech Connect

    Arastoopour, Hamid; Abbasian, Javad

    2014-07-31

    This project describes the work carried out to prepare highly reactive and mechanically strong MgO-based sorbents and to develop a Population Balance Equations (PBE) approach to describe the evolution of the particle porosity distribution, linked with Computational Fluid Dynamics (CFD) to perform simulations of CO2 capture and sorbent regeneration. A large number of MgO-based regenerable sorbents were prepared using low cost and abundant dolomite as the base material. Among the various preparation parameters investigated, the potassium/magnesium (K/Mg) ratio was identified as the key variable affecting the reactivity and CO2 capacity of the sorbent. The optimum K/Mg ratio is about 0.15. The sorbent formulation HD52-P2 was identified as the “best” sorbent formulation and a large batch (one kg) of the sorbent was prepared for the detailed study. The results of the parametric study indicate that the optimum carbonation and regeneration temperatures are 360°C and 500°C, respectively. The results also indicate that steam has a beneficial effect on the rate of carbonation and regeneration of the sorbent and that the reactivity and capacity of the sorbent decrease in the cycling process (sorbent deactivation). The results indicate that to achieve a high CO2 removal efficiency, the bed of sorbent should be operated at a temperature range of 370-410°C, which also favors production of hydrogen through the WGS reaction. To describe the carbonation reaction kinetics of the MgO, the Variable Diffusivity shrinking-core Model (VDM) was developed in this project and was shown to accurately fit the experimental data. An important advantage of this model is that the changes in the sorbent conversion with time can be expressed in an explicit manner, which will significantly reduce the CFD computation time. A Computational Fluid Dynamic/Population Balance Equations (CFD/PBE) model was developed that accounts for the particle (sorbent) porosity distribution and a new version of
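
    To make the shrinking-core idea concrete, the sketch below integrates a generic conversion law whose effective rate decays as the product layer grows; it is an illustrative stand-in, not the project's actual VDM (which, per the abstract, expresses conversion versus time in explicit closed form):

      import numpy as np

      def carbonation_conversion(t_end, k0, decay, dt=1.0):
          """Generic shrinking-core-style sketch: conversion X slows as the
          product layer grows, modeled here with an effective diffusivity
          decaying exponentially with conversion, D_eff = k0*exp(-decay*X).
          All parameters are illustrative assumptions."""
          t, X, out = 0.0, 0.0, []
          while t < t_end and X < 0.999:
              dXdt = k0 * np.exp(-decay * X) * (1.0 - X) ** (2.0 / 3.0)
              X += dXdt * dt
              t += dt
              out.append((t, X))
          return out

      curve = carbonation_conversion(t_end=3600.0, k0=5e-3, decay=6.0)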

  5. Distance-Learning for Advanced Military Education: Using Wargame Simulation Course as an Example

    ERIC Educational Resources Information Center

    Keh, Huan-Chao; Wang, Kuei-Min; Wai, Shu-Shen; Huang, Jiung-yao; Hui, Lin; Wu, Ji-Jen

    2008-01-01

    Distance learning in advanced military education can assist officers around the world to become more skilled and qualified for future challenges. Through well-chosen technology, the efficiency of distance-learning can be improved significantly. In this paper we present the architecture of Advanced Military Education-Distance Learning (AME-DL)…

  6. An Advanced, Interactive, High-Performance Liquid Chromatography Simulator and Instructor Resources

    ERIC Educational Resources Information Center

    Boswell, Paul G.; Stoll, Dwight R.; Carr, Peter W.; Nagel, Megan L.; Vitha, Mark F.; Mabbott, Gary A.

    2013-01-01

    High-performance liquid chromatography (HPLC) simulation software has long been recognized as an effective educational tool, yet many of the existing HPLC simulators are either too expensive, outdated, or lack many important features necessary to make them widely useful for educational purposes. Here, a free, open-source HPLC simulator is…

  7. JSMARTS Initiative: Advanced Distributed Simulation Across the Government of Canada, Academia and Industry - Technical Description

    DTIC Science & Technology

    2005-07-01

    DAR (Director Aerospace Requirements) provided an NTS (Networked Tactical Simulator) CH-146 Griffon helicopter human-in-the-loop simulator... Scenario 1: a CH-146 Griffon helicopter operating on its own; Scenario 2: a CH-146 and a UAV operating with a third-party comms link; Scenario 3: a CH-146 and a

  8. Advanced Distributed Simulation Technology 2 (ASDT 2), Joint Combat Search and Rescue Virtual Simulation 2 (JCSAR VS2)

    DTIC Science & Technology

    2007-11-02

    System components included ASTi servers, SIMNET devices, MetaVR Channel Surfer, JTIDS (Link-16), and Theater of... ...models for the simulator devices at the Aviation Test Bed, Ft. Rucker were performed by TASC in San Antonio, TX. ASTi Corporation in Herndon, VA performed modifications to the ASTi radio simulators. The final integration phase was completed at the AVTB from January to April 1998 during scheduled

  9. Role of 3D photo-resist simulation for advanced technology nodes

    NASA Astrophysics Data System (ADS)

    Narayana Samy, Aravind; Seltmann, Rolf; Kahlenberg, Frank; Schramm, Jessy; Küchler, Bernd; Klostermann, Ulrich

    2013-04-01

    3D resist models are gaining significant interest for advanced technology node development. Correct prediction of resist profiles, resist top-loss, and top-rounding is acquiring higher importance in ORC hotspot verification due to their impact on etch resistance and post-etch results. We would like to highlight the specific procedure used to calibrate a rigorous 3D model. Special focus is on the importance of high quality metrology data both for a successful calibration and for allowing a reduction of the number of data points used for calibration [1]. In a productive application the calibration could be performed using a subset of 20 features measured through dose and focus, and model validation was done with 500 features through dose and focus. This data reduction minimized the actual calibration effort of the 3D resist model and enabled calibration run times of less than one hour. The successful validation with the complete data set showed that the data reduction did not cause overfitting of the model. The model is applied and verified at hotspots showing defects such as bottom bridging or top loss that would not be visible in a 2D resist model. The model performance is also evaluated with a conventional CD error metric in which the bottom CD of simulation and measurement are compared. We could achieve excellent results for both metrics using SEM CD, SEM images, AFM measurements, and wafer cross sections. An additional modeling criterion is resist model portability. A prerequisite is the separability of the resist model and the optical model, i.e., the resist model shall characterize the resist only and should not lump in characteristics from the optical model. This is a requirement to port the resist model to different optical setups, such as another illumination source, without the need for re-calibration. Resist model portability is shown by validation and application of the model to a second process with significantly different optical settings. The resist model can predict hot

  10. Computational Plasma Physics at the Bleeding Edge: Simulating Kinetic Turbulence Dynamics in Fusion Energy Sciences

    NASA Astrophysics Data System (ADS)

    Tang, William

    2013-04-01

    Advanced computing is generally recognized to be an increasingly vital tool for accelerating progress in scientific research in the 21st Century. The imperative is to translate the combination of the rapid advances in super-computing power together with the emergence of effective new algorithms and computational methodologies to help enable corresponding increases in the physics fidelity and the performance of the scientific codes used to model complex physical systems. If properly validated against experimental measurements and verified with mathematical tests and computational benchmarks, these codes can provide more reliable predictive capability for the behavior of complex systems, including fusion energy relevant high temperature plasmas. The magnetic fusion energy research community has made excellent progress in developing advanced codes for which computer run-time and problem size scale very well with the number of processors on massively parallel supercomputers. A good example is the effective usage of the full power of modern leadership class computational platforms from the terascale to the petascale and beyond to produce nonlinear particle-in-cell simulations which have accelerated progress in understanding the nature of plasma turbulence in magnetically-confined high temperature plasmas. Illustrative results provide great encouragement for being able to include increasingly realistic dynamics in extreme-scale computing campaigns to enable predictive simulations with unprecedented physics fidelity. Some illustrative examples will be presented of the algorithmic progress from the magnetic fusion energy sciences area in dealing with low memory per core extreme scale computing challenges for the current top 3 supercomputers worldwide. These include advanced CPU systems (such as the IBM-Blue-Gene-Q system and the Fujitsu K Machine) as well as the GPU-CPU hybrid system (Titan).

  11. An advanced configuration management system for full scope power plant simulators

    SciTech Connect

    Storm, J.; Goemann, A.

    1996-11-01

    In August 1993, KSG Kraftwerks-Simulator-Gesellschaft, Germany, awarded a contract to STN ATLAS Elektronik for the delivery of two full scope replica training simulators for the German BWR plants Isar 1 and Philippsburg 1, known as the double simulator project S30 (S31/S32). For both projects a computer-based Configuration Management System (CMS) was required to overcome deficiencies of older simulator systems in terms of limited upgrade and maintenance capabilities and incomplete documentation. The CMS allows complete control over the entire simulator system, covering all software and hardware items, and thereby exceeds the quality assurance requirements defined in ISO 9000-3, which gives recommendations for software configuration management only. The system was realized under the project using the UNIX-based relational database system EMPRESS and is in use as a development and maintenance tool to improve simulator quality and ensure simulator configuration integrity.

  12. Using simulation to improve the cognitive and psychomotor skills of novice students in advanced laparoscopic surgery: a meta-analysis.

    PubMed

    Al-Kadi, Azzam S; Donnon, Tyrone

    2013-01-01

    Advances in simulation technologies have enhanced the ability to introduce the teaching and learning of laparoscopic surgical skills to novice students. In this meta-analysis, a total of 18 randomized controlled studies were identified that specifically looked at training novices, in comparison with a control group, on knowledge retention, time to completion, and suturing and knot-tying skills. The combined effect sizes (ESs) from random-effects models showed that novice students who trained on laparoscopic simulators developed considerably better laparoscopic suturing and knot-tying skills (d = 1.96, p < 0.01), committed fewer errors (d = 2.13, p < 0.01), retained more knowledge (d = 1.57, p < 0.01) than their respective control groups, and were significantly faster on time to completion (d = 1.98, p < 0.01). As illustrated in the corresponding Forest plots, the majority of the primary study outcomes included in this meta-analysis show statistically significant support (p < 0.05) for the use of laparoscopic simulators for novice student training on both knowledge and advanced surgical skill development (28 of 35 outcomes, 80%). The findings of this meta-analysis strongly support the use of simulators for teaching laparoscopic surgery skills to novice students in surgical residency programs.
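
    For readers unfamiliar with the statistics, combining study effect sizes under a random-effects model can be done with the DerSimonian-Laird estimator; the sketch below uses hypothetical input values, not the 18 studies of this meta-analysis:

      import math

      def dersimonian_laird(d, var):
          """Combine per-study effect sizes d with variances var under a
          random-effects model: estimate between-study variance tau^2, then
          take the inverse-variance weighted mean and a 95% CI."""
          w = [1.0 / v for v in var]
          d_fixed = sum(wi * di for wi, di in zip(w, d)) / sum(w)
          q = sum(wi * (di - d_fixed) ** 2 for wi, di in zip(w, d))
          c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
          tau2 = max(0.0, (q - (len(d) - 1)) / c)
          w_re = [1.0 / (v + tau2) for v in var]
          d_re = sum(wi * di for wi, di in zip(w_re, d)) / sum(w_re)
          se = math.sqrt(1.0 / sum(w_re))
          return d_re, (d_re - 1.96 * se, d_re + 1.96 * se)

      # Hypothetical study effect sizes and variances (illustrative only)
      print(dersimonian_laird(d=[1.8, 2.3, 1.5, 2.1], var=[0.10, 0.15, 0.08, 0.12]))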

  13. Full electromagnetic Vlasov code simulation of the Kelvin-Helmholtz instability

    SciTech Connect

    Umeda, Takayuki; Miwa, Jun-ichiro; Matsumoto, Yosuke; Togano, Kentaro; Nakamura, Takuma K. M.; Shinohara, Iku; Fukazawa, Keiichiro

    2010-05-15

    Recent advancements in numerical techniques for Vlasov simulations and their application to cross-scale coupling in the plasma universe are discussed. Magnetohydrodynamic (MHD) simulations are now widely used for numerical modeling of global and macroscopic phenomena. In the framework of the MHD approximation, however, diffusion coefficients such as resistivity and the adiabatic index are given by empirical models. Thus there are recent attempts to understand first-principles kinetic processes in macroscopic phenomena, such as magnetic reconnection and the Kelvin-Helmholtz (KH) instability, via full kinetic particle-in-cell and Vlasov codes. In the present study, a benchmark test of a new four-dimensional full electromagnetic Vlasov code is performed. First, the computational speed of the Vlasov code is measured and linear performance scaling is obtained on a massively parallel supercomputer with more than 12 000 cores. Second, a first-principles Vlasov simulation of the KH instability is performed in order to evaluate the current status of numerical techniques for Vlasov simulations. The KH instability is usually adopted as a benchmark test problem for guiding-center Vlasov codes, in which the cyclotron motion of charged particles is neglected. There has been no full electromagnetic Vlasov simulation of the KH instability, because it is difficult to follow the E×B drift motion accurately without approximations. The present first-principles Vlasov simulation has successfully represented the formation of KH vortices and their secondary instability. These results suggest that Vlasov code simulations would be a powerful approach for studies of cross-scale coupling on future peta-scale supercomputers.
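
    To illustrate the kind of kernel inside a Vlasov code, the sketch below performs one split-step semi-Lagrangian advection of a discretized phase-space density (a generic textbook kernel; the benchmarked code's actual scheme and its electromagnetic field solve are not reproduced):

      import numpy as np

      def advect_periodic(f, speeds, dx, dt):
          """Advect each velocity row of the phase-space density f(v, x) in x
          at its own speed by tracing characteristics back to departure points
          and interpolating linearly (periodic boundary in x)."""
          nx = f.shape[1]
          x = np.arange(nx)
          out = np.empty_like(f)
          for j, v in enumerate(speeds):
              dep = (x - v * dt / dx) % nx        # departure points, grid units
              i0 = np.floor(dep).astype(int)
              frac = dep - i0
              out[j] = (1 - frac) * f[j, i0] + frac * f[j, (i0 + 1) % nx]
          return out

      # Tiny demo: 16 velocity rows advecting a Gaussian bump on 64 cells
      nx, nv, dx, dt = 64, 16, 1.0, 0.5
      v_grid = np.linspace(-2, 2, nv)
      f = np.exp(-0.5 * ((np.arange(nx) - 32) / 4.0) ** 2)[None, :] * np.ones((nv, 1))
      f = advect_periodic(f, v_grid, dx, dt)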

  14. Effect of simulation on knowledge of advanced cardiac life support, knowledge retention, and confidence of nursing students in Jordan.

    PubMed

    Tawalbeh, Loai I; Tubaishat, Ahmad

    2014-01-01

    This study examined the effect of simulation on nursing students' knowledge of advanced cardiac life support (ACLS), knowledge retention, and confidence in applying ACLS skills. An experimental, randomized controlled (pretest-posttest) design was used. The experimental group (n = 40) attended an ACLS simulation scenario, a 4-hour PowerPoint presentation, and a demonstration on a static manikin, whereas the control group (n = 42) attended the PowerPoint presentation and a demonstration only. A paired t test indicated that posttest mean knowledge of ACLS and confidence were higher in both groups. The experimental group showed higher knowledge of ACLS and higher confidence in applying ACLS, compared with the control group. Traditional training involving a PowerPoint presentation and demonstration on a static manikin is an effective teaching strategy; however, simulation is significantly more effective than traditional training in helping to improve nursing students' knowledge acquisition, knowledge retention, and confidence about ACLS.

  15. Parameter identification studies on the NASA/Ames Research Center Advanced Concepts Flight Simulator. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Mckavitt, Thomas P., Jr.

    1990-01-01

    The results of an aircraft parameter identification study conducted on the National Aeronautics and Space Administration/Ames Research Center Advanced Concepts Flight Simulator (ACFS) in conjunction with the Navy-NASA Joint Institute of Aeronautics are given. The ACFS is a commercial airline simulator with a design based on future technology. The simulator is used as a laboratory for human factors research and engineering as applied to the commercial airline industry. Parametric areas examined were engine pressure ratio (EPR), optimum long range cruise Mach number, flap reference speed, and critical take-off speeds. Results were compared with corresponding parameters of the Boeing 757 and 767 aircraft. This comparison identified two areas where improvements can be made: (1) low maximum lift coefficients (on the order of 20-25 percent less than those of a 757); and (2) low optimum cruise Mach numbers. Recommendations were made to bring these parameters in line with those anticipated with the application of future technologies.

  16. Patient Simulation to Demonstrate Students’ Competency in Core Domain Abilities Prior to Beginning Advanced Pharmacy Practice Experiences

    PubMed Central

    Bhutada, Nilesh S.; Feng, Xiaodong

    2012-01-01

    Objective. To implement a simulation-based introductory pharmacy practice experience (IPPE) and determine its effectiveness in assessing pharmacy students’ core domain abilities prior to beginning advanced pharmacy practice experience (APPE). Design. A 60-hour IPPE that used simulation-based techniques to provide clinical experiences was implemented. Twenty-eight students were enrolled in this simulation IPPE, while 60 were enrolled in hospital and specialty IPPEs within the region. Assessment. The IPPE assessed 10 out of 11 of the pre-APPE core domain abilities, and on the practical examination, 67% of students passed compared to 52% of students in the control group. Students performed better on all 6 knowledge quizzes after completing the simulation IPPE. Based on scores on the Perception of Preparedness to Perform (PREP) survey, students felt more prepared regarding “technical” aspects after completing the simulation experience (p<0.001). Ninety-six percent of the respondents agreed with the statement “I am more aware of medication errors after this IPPE.” Conclusion. Simulation is an effective method for assessing the pre-APPE abilities of pharmacy students, preparing them for real clinical encounters, and for making them more aware of medication errors and other patient safety issues. PMID:23193340

  17. Advanced Distributed Simulation Technology II (ADST II), Battle Command Reengineering II (BCR II)

    DTIC Science & Technology

    2007-11-02

    Components included the BCV Driver's Station (DIS 2.03), an ASTi Radio Simulator for simulated radio communications (DIS 2.03), and a Plan View Display terrain map of the battlefield for... Radio communications were conducted over ASTi radio simulators. The ASTi inventory consisted of six Digital Aural-cue/Communications System (DACS) units and 12 Remote Interface Units... Radio communications were primarily conducted using various configurations of the ASTi Digital Aural-cue/Communications Systems (DACS) integrated

  18. Development of Advanced Electrochemical Emission Spectroscopy for Monitoring Corrosion in Simulated DOE Liquid Waste

    SciTech Connect

    Digby D. Macdonald; Brian M. Marx; Sejin Ahn; Julio de Ruiz; Balaji Soundararaja; Morgan Smith; and Wendy Coulson

    2008-01-15

    Various forms of general and localized corrosion represent principal threats to the integrity of DOE liquid waste storage tanks. These tanks, which are of a single wall or double wall design, depending upon their age, are fabricated from welded carbon steel and contain a complex waste-form comprised of NaOH and NaNO₃, along with trace amounts of phosphate, sulfate, carbonate, and chloride. Because waste leakage can have a profound environmental impact, considerable interest exists in predicting the accumulation of corrosion damage, so as to more effectively schedule maintenance and repair. The different tasks that are being carried out under the current program are as follows: (1) Theoretical and experimental assessment of general corrosion of iron/steel in borate buffer solutions by using electrochemical impedance spectroscopy (EIS), ellipsometry and XPS techniques; (2) Development of a damage function analysis (DFA) which would help in predicting the accumulation of damage due to pitting corrosion in an environment prototypical of DOE liquid waste systems; (3) Experimental measurement of crack growth rate, acoustic emission signals and coupling currents for fracture in carbon and low alloy steels as functions of mechanical (stress intensity), chemical (conductivity), electrochemical (corrosion potential, ECP), and microstructural (grain size, precipitate size, etc) variables in a systematic manner, with particular attention being focused on the structure of the noise in the current and its correlation with the acoustic emissions; (4) Development of fracture mechanisms for carbon and low alloy steels that are consistent with the crack growth rate, coupling current data and acoustic emissions; (5) Inserting advanced crack growth rate models for SCC into existing deterministic codes for predicting the evolution of corrosion damage in DOE liquid waste storage tanks; (6) Computer simulation of the anodic and cathodic activity on the surface of the steel samples

  19. A real-time simulation facility for advanced digital guidance and control system research

    NASA Technical Reports Server (NTRS)

    Bryant, W. H.; Downing, D. R.; Ostroff, A. J.

    1979-01-01

    A real-time simulation facility built at NASA's Langley Research Center to support digital guidance and control research and development activities is examined. The unit has recently been used to develop autoland systems for VTOL. The paper describes the autoland experiment and the flight environment, the simulation facility hardware and software, and presents typical simulation data to illustrate the type of data analysis carried out during software development. Finally, flight data for a later version of the autoland system are presented to demonstrate the simulation's capability to predict overall system behavior.

  20. Progress and new advances in simulating electron microscopy datasets using MULTEM.

    PubMed

    Lobato, I; Van Aert, S; Verbeeck, J

    2016-09-01

    A new version of the open-source program MULTEM is presented here. It includes a graphical user interface, tapering truncation of the atomic potential, CPU multithreading functionality, single/double precision calculations, scanning transmission electron microscopy (STEM) simulations using experimental detector sensitivities, imaging STEM (ISTEM) simulations, energy filtered transmission electron microscopy (EFTEM) simulations, and STEM electron energy loss spectroscopy (EELS) simulations, along with other improvements in the algorithms. We also present a mixed channeling approach for the calculation of inelastic excitations, which allows one to considerably speed up time-consuming EFTEM/STEM-EELS calculations.

  1. Advanced Techniques for Reservoir Simulation and Modeling of Non-Conventional Wells

    SciTech Connect

    Durlofsky, Louis J.; Aziz, Khalid

    2001-08-23

    Research results for the second year of this project on the development of improved modeling techniques for non-conventional (e.g., horizontal, deviated or multilateral) wells were presented. The overall program entails the development of enhanced well modeling and general simulation capabilities. A general formulation for black-oil and compositional reservoir simulation was presented.

  2. Integrating Simulation and Data for Materials in Extreme Environments

    NASA Astrophysics Data System (ADS)

    Germann, Timothy

    2014-03-01

    We are using large-scale molecular dynamics (MD) simulations to study the response of nanocrystalline metals such as tantalum to uniaxial (e.g., shock) compression. With modern petascale-class platforms, we are able to model sample sizes with edge lengths over one micrometer, which match the length and time scales experimentally accessible at Argonne's Advanced Photon Source (APS) and SLAC's Linac Coherent Light Source (LCLS). I will describe our simulation predictions and their recent verification at LCLS, as well as outstanding challenges in modeling the response of materials to extreme mechanical and radiation environments, and our efforts to tackle these as part of the multi-institutional, multi-disciplinary Exascale Co-design Center for Materials in Extreme Environments (ExMatEx). ExMatEx has initiated an early and deep collaboration between domain (computational materials) scientists, applied mathematicians, computer scientists, and hardware architects, in order to establish the relationships between algorithms, software stacks, and architectures needed to enable exascale-ready materials science application codes within the next decade. We anticipate that we will be able to exploit hierarchical, heterogeneous architectures to achieve more realistic large-scale simulations with adaptive physics refinement, and are using tractable application scale-bridging proxy application testbeds to assess new approaches and requirements. The current scale-bridging strategies accumulate (or recompute) a distributed response database from fine-scale calculations, in a top-down rather than bottom-up multiscale approach. I will demonstrate this approach and our initial assessments, using the newly emerging capabilities at new 4th generation synchrotron light sources as an experimental driver.

  3. Computational cardiology: how computer simulations could be used to develop new therapies and advance existing ones

    PubMed Central

    Trayanova, Natalia A.; O'Hara, Thomas; Bayer, Jason D.; Boyle, Patrick M.; McDowell, Kathleen S.; Constantino, Jason; Arevalo, Hermenegild J.; Hu, Yuxuan; Vadakkumpadan, Fijoy

    2012-01-01

    This article reviews the latest developments in computational cardiology. It focuses on the contribution of cardiac modelling to the development of new therapies as well as the advancement of existing ones for cardiac arrhythmias and pump dysfunction. Reviewed are cardiac modelling efforts aimed at advancing and optimizing existent therapies for cardiac disease (defibrillation, ablation of ventricular tachycardia, and cardiac resynchronization therapy) and at suggesting novel treatments, including novel molecular targets, as well as efforts to use cardiac models in stratification of patients likely to benefit from a given therapy, and the use of models in diagnostic procedures. PMID:23104919

  4. Simulation of fast-ion-driven Alfvén eigenmodes on the Experimental Advanced Superconducting Tokamak

    NASA Astrophysics Data System (ADS)

    Hu, Youjun; Todo, Y.; Pei, Youbin; Li, Guoqiang; Qian, Jinping; Xiang, Nong; Zhou, Deng; Ren, Qilong; Huang, Juan; Xu, Liqing

    2016-02-01

    Kinetic-MHD hybrid simulations are carried out to investigate possible fast-ion-driven modes on the Experimental Advanced Superconducting Tokamak. Three typical kinds of fast-ion-driven modes, namely, toroidicity-induced Alfvén eigenmodes, reversed shear Alfvén eigenmodes, and energetic-particle continuum modes, are observed simultaneously in the simulations. The simulation results are compared with the results of an ideal MHD eigenvalue code, which shows agreement with respect to the mode frequency, dominant poloidal mode numbers, and radial location. However, the modes in the hybrid simulations take a twisted structure on the poloidal plane, which is different from the results of the ideal MHD eigenvalue code. The twist is due to the radial phase variation of the eigenfunction, which may be attributed to the non-perturbative kinetic effects of the fast ions. By varying the stored energy of fast ions to change the fast ion drive in the simulations, it is demonstrated that the twist (i.e., the radial phase variation) is positively correlated with the fast ion drive.

  5. Brain-wave measures of workload in advanced cockpits: The transition of technology from laboratory to cockpit simulator, phase 2

    NASA Technical Reports Server (NTRS)

    Horst, Richard L.; Mahaffey, David L.; Munson, Robert C.

    1989-01-01

    The present Phase 2 small business innovation research study was designed to address issues related to scalp-recorded event-related potential (ERP) indices of mental workload and to transition this technology from the laboratory to cockpit simulator environments for use as a systems engineering tool. The project involved five main tasks: (1) Two laboratory studies confirmed the generality of the ERP indices of workload obtained in the Phase 1 study and revealed two additional ERP components related to workload. (2) A task analysis of flight scenarios and pilot tasks in the Advanced Concepts Flight Simulator (ACFS) defined cockpit events (i.e., displays, messages, alarms) that would be expected to elicit ERPs related to workload. (3) Software was developed to support ERP data analysis. An existing ARD-proprietary package of ERP data analysis routines was upgraded, new graphics routines were developed to enhance interactive data analysis, and routines were developed to compare alternative single-trial analysis techniques using simulated ERP data. (4) Working in conjunction with NASA Langley research scientists and simulator engineers, preparations were made for an ACFS validation study of ERP measures of workload. (5) A design specification was developed for a general-purpose, computerized workload assessment system that can function in simulators such as the ACFS.

  6. A Visual Basic simulation software tool for performance analysis of a membrane-based advanced water treatment plant.

    PubMed

    Pal, P; Kumar, R; Srivastava, N; Chowdhury, J

    2014-02-01

    A Visual Basic simulation software package (WATTPPA) has been developed to analyse the performance of an advanced wastewater treatment plant. This user-friendly, menu-driven software is based on the dynamic mathematical model for an industrial wastewater treatment scheme that integrates chemical, biological and membrane-based unit operations. The software's predictions agree very well with the experimental findings, as indicated by an overall correlation coefficient of about 0.99. The software permits pre-analysis and manipulation of input data, helps in optimization, and displays the performance of an integrated plant visually on a graphical platform. It allows quick performance analysis of the whole system as well as of the individual units. The software, the first of its kind in its domain and built in the well-known Microsoft Excel environment, is likely to be very useful in the design, optimization and operation of an advanced hybrid treatment plant for hazardous wastewater.
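
    The reported model-versus-experiment agreement is a one-line statistic to reproduce. A minimal sketch in Python (the paper's own tool is Visual Basic); the two arrays are hypothetical stand-ins for predicted and measured values, not data from the paper:

      import numpy as np

      # Hypothetical model predictions vs. experimental measurements
      predicted = np.array([12.1, 9.8, 7.6, 5.2, 3.9, 2.5])
      measured  = np.array([12.0, 10.1, 7.4, 5.5, 3.7, 2.6])

      # Overall (Pearson) correlation coefficient, as quoted in the abstract
      r = np.corrcoef(predicted, measured)[0, 1]
      print(f"correlation coefficient r = {r:.3f}")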

  7. An Advanced, Interactive, High-Performance Liquid Chromatography Simulator and Instructor Resources

    PubMed Central

    Stoll, Dwight R.; Carr, Peter W.; Nagel, Megan L.; Vitha, Mark F.; Mabbott, Gary A.

    2013-01-01

    High-Performance Liquid Chromatography (HPLC) simulation software has long been recognized as an effective educational tool, yet we found that existing HPLC simulators are either too expensive, out-dated, or lack many important features we deemed necessary to make them widely useful for educational purposes. Here we describe a free, open-source HPLC simulator we developed that we believe meets this need. The web-based simulator is uniquely sophisticated, yet accessible to a diverse user group with varied expertise in HPLC. It features intuitive controls and indicators for a wide range of experimental conditions, and it displays a graphical chromatogram to provide immediate feedback when conditions are changed. The simulator can be found at hplcsimulator.org. At that website, we also provide a number of example problem sets that can be used by educators to more easily incorporate the simulator into their curriculum. Comments from students who used the simulator in an undergraduate Analytical Chemistry class were very positive. PMID:23543870
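
    The simulator's core task, predicting a chromatogram from user-set conditions, can be illustrated with textbook plate theory. A minimal sketch assuming isocratic elution, user-supplied retention factors, and Gaussian peaks; this is generic chromatography theory, not the site's actual code:

      import numpy as np

      def chromatogram(t, t0, retention_factors, plate_count):
          """Sum of Gaussian peaks: t_R = t0*(1 + k), sigma = t_R/sqrt(N)."""
          signal = np.zeros_like(t)
          for k in retention_factors:
              t_r = t0 * (1.0 + k)                 # isocratic retention time
              sigma = t_r / np.sqrt(plate_count)   # peak width from plate theory
              signal += np.exp(-0.5 * ((t - t_r) / sigma) ** 2)
          return signal

      t = np.linspace(0.0, 10.0, 2000)             # time axis, minutes
      y = chromatogram(t, t0=1.0, retention_factors=[1.2, 2.5, 4.0], plate_count=10_000)
      print(f"{len(y)} points, max signal {y.max():.2f}")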

  8. An Advanced, Interactive, High-Performance Liquid Chromatography Simulator and Instructor Resources.

    PubMed

    Boswell, Paul G; Stoll, Dwight R; Carr, Peter W; Nagel, Megan L; Vitha, Mark F; Mabbott, Gary A

    2013-02-12

    High-Performance Liquid Chromatography (HPLC) simulation software has long been recognized as an effective educational tool, yet we found that existing HPLC simulators are either too expensive, out-dated, or lack many important features we deemed necessary to make them widely useful for educational purposes. Here we describe a free, open-source HPLC simulator we developed that we believe meets this need. The web-based simulator is uniquely sophisticated, yet accessible to a diverse user group with varied expertise in HPLC. It features intuitive controls and indicators for a wide range of experimental conditions, and it displays a graphical chromatogram to provide immediate feedback when conditions are changed. The simulator can be found at hplcsimulator.org. At that website, we also provide a number of example problem sets that can be used by educators to more easily incorporate the simulator into their curriculum. Comments from students who used the simulator in an undergraduate Analytical Chemistry class were very positive.

  9. Scalable Iterative Solvers Applied to 3D Parallel Simulation of Advanced Semiconductor Devices

    NASA Astrophysics Data System (ADS)

    García-Loureiro, A. J.; Aldegunde, M.; Seoane, N.

    2009-08-01

    We have studied the performance of a preconditioned iterative solver to speed up a 3D semiconductor device simulator. Since 3D simulations necessitate large computing resources, the choice of algorithms and their parameters become of utmost importance. This code uses a density gradient drift-diffusion semiconductor transport model based on the finite element method which is one of the most general and complex discretisation techniques. It has been implemented for a distributed memory multiprocessor environment using the Message Passing Interface (MPI) library. We have applied this simulator to a 67 nm effective gate length Si MOSFET.
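
    The solver strategy described, a preconditioned Krylov iteration on a large sparse system from a finite element discretisation, can be sketched in a few lines. The assumptions here are a stand-in 1D Poisson matrix and a simple Jacobi preconditioner; the actual device simulator uses MPI-distributed matrices and more elaborate preconditioning:

      import numpy as np
      import scipy.sparse as sp
      from scipy.sparse.linalg import LinearOperator, cg

      n = 10_000
      # Symmetric positive-definite stand-in for an assembled FEM matrix
      A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
      b = np.ones(n)

      # Jacobi (diagonal) preconditioner wrapped as a LinearOperator
      d = A.diagonal()
      M = LinearOperator((n, n), matvec=lambda x: x / d)

      x, info = cg(A, b, M=M, atol=1e-8)
      print("converged" if info == 0 else f"cg info = {info}",
            "| residual =", np.linalg.norm(b - A @ x))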

  10. Real-time application of advanced three-dimensional graphic techniques for research aircraft simulation

    NASA Technical Reports Server (NTRS)

    Davis, Steven B.

    1990-01-01

    Visual aids are valuable assets to engineers for design, demonstration, and evaluation. Discussed here are a variety of advanced three-dimensional graphic techniques used to enhance the displays of test aircraft dynamics. The new software's capabilities are examined and possible future uses are considered.

  11. Akuna - Integrated Toolsets Supporting Advanced Subsurface Flow and Transport Simulations for Environmental Management

    SciTech Connect

    Schuchardt, Karen L.; Agarwal, Deborah A.; Finsterle, Stefan A.; Gable, Carl W.; Gorton, Ian; Gosink, Luke J.; Keating, Elizabeth H.; Lansing, Carina S.; Meyer, Joerg; Moeglein, William A.M.; Pau, George S.H.; Porter, Ellen A.; Purohit, Sumit; Rockhold, Mark L.; Shoshani, Arie; Sivaramakrishnan, Chandrika

    2012-04-24

    A next-generation open-source subsurface simulator and user environment for environmental management is being developed through a collaborative effort across Department of Energy National Laboratories. The flow and transport simulator, Amanzi, will be capable of modeling complex subsurface environments and processes using both unstructured and adaptive meshes at very fine spatial resolutions that require supercomputing-scale resources. The user environment, Akuna, provides users with a range of tools to manage environmental and simulator data sets, create models, manage and share simulation data, and visualize results. Underlying the user interface are core toolsets that provide algorithms for sensitivity analysis, parameter estimation, and uncertainty quantification. Akuna is open-source, cross-platform software that is initially being demonstrated on the Hanford BC Cribs remediation site. In this paper, we describe the emerging capabilities of Akuna and illustrate how these are being applied to the BC Cribs site.
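
    The kind of analysis such toolsets automate can be shown with a bare Monte Carlo uncertainty loop. A sketch under stated assumptions: the one-line Darcy travel-time "model" and the parameter distribution are invented for illustration and are not taken from Amanzi or the BC Cribs model:

      import numpy as np

      rng = np.random.default_rng(0)
      # Uncertain hydraulic conductivity (m/s), lognormally distributed
      K = rng.lognormal(mean=np.log(1e-5), sigma=0.5, size=10_000)
      gradient, porosity, length_m = 0.01, 0.3, 100.0   # hypothetical site values

      # Advective travel time over 100 m, converted to days
      t_days = length_m * porosity / (K * gradient) / 86_400.0
      print(f"median {np.median(t_days):.0f} d, 90% interval "
            f"[{np.percentile(t_days, 5):.0f}, {np.percentile(t_days, 95):.0f}] d")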

  12. [Research advances in simulating regional crop growth under water stress by remote sensing].

    PubMed

    Zhang, Li; Wang, Shili; Ma, Yuping

    2005-06-01

    It is of practical significance to simulate crop growth under water stress, especially at the regional scale. Combined with remote sensing information, crop growth simulation models could provide an effective way to estimate regional crop growth, development and yield formation under water stress. In this paper, related research methods and results are summarized, and some problems that need further study are discussed.

  13. Comprehensive studies on the accuracy of trap characterization by using advanced random telegraph noise simulator

    NASA Astrophysics Data System (ADS)

    Higashi, Yusuke; Matsuzawa, Kazuya; Ishihara, Takamitsu

    2015-04-01

    The noise simulator we have developed can represent the dynamic behavior of electron and hole trapping and de-trapping via interactions with both the Si substrate and the poly-Si gate. Simulations reveal that the conventional analytical model, which uses the ratio between the capture and emission time constants, yields large errors in the estimated trap site positions due to interactions with the Si substrate and poly-Si gate, especially in thin-gate-insulator MOSFETs.

  14. Advanced numerical techniques for accurate unsteady simulations of a wingtip vortex

    NASA Astrophysics Data System (ADS)

    Ahmad, Shakeel

    A numerical technique is developed to simulate the vortices associated with stationary and flapping wings. The Unsteady Reynolds-Averaged Navier-Stokes (URANS) equations are used over an unstructured grid. The present work assesses the locations of the origins of vortex generation, models those locations and develops a systematic mesh refinement strategy to simulate vortices more accurately using the URANS model. The vortex center plays a key role in the analysis of the simulation data. A novel approach to locating the vortex center, referred to as the Max-Max criterion, is also developed. Experimental validation of the simulated vortex from a stationary NACA0012 wing is achieved. The tangential velocity along the core of the vortex falls within five percent of the experimental data in the case of the stationary NACA0012 simulation. The wing surface pressure coefficient also matches the experimental data. The refinement techniques are then focused on unsteady simulations of pitching and dual-mode wing flapping. Tip vortex strength, location, and wing surface pressure are analyzed. Links between vortex behavior and wing motion are inferred. Key words: vortex, tangential velocity, Cp, vortical flow, unsteady vortices, URANS, Max-Max, Vortex center
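
    The thesis names but does not spell out its Max-Max criterion, so the sketch below uses a generic stand-in: take the vortex center as the grid point where the vorticity magnitude peaks. The synthetic Gaussian vortex and all parameters are illustrative:

      import numpy as np

      def vortex_center(x, y, u, v):
          """Return (x, y) of the peak |z-vorticity| on a 2-D grid."""
          omega = np.gradient(v, x, axis=1) - np.gradient(u, y, axis=0)
          j, i = np.unravel_index(np.argmax(np.abs(omega)), omega.shape)
          return x[i], y[j]

      # Synthetic Gaussian vortex centered at (0.3, -0.2)
      x = np.linspace(-1, 1, 201); y = np.linspace(-1, 1, 201)
      X, Y = np.meshgrid(x, y)
      r2 = (X - 0.3)**2 + (Y + 0.2)**2
      u, v = -(Y + 0.2) * np.exp(-r2 / 0.05), (X - 0.3) * np.exp(-r2 / 0.05)
      print(vortex_center(x, y, u, v))   # ~ (0.3, -0.2)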

  15. An Aerodynamic Performance Evaluation of the NASA/Ames Research Center Advanced Concepts Flight Simulator. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Donohue, Paul F.

    1987-01-01

    The results of an aerodynamic performance evaluation of the National Aeronautics and Space Administration (NASA)/Ames Research Center Advanced Concepts Flight Simulator (ACFS), conducted in association with the Navy-NASA Joint Institute of Aeronautics, are presented. The ACFS is a full-mission flight simulator which provides an excellent platform for the critical evaluation of emerging flight systems and aircrew performance. The propulsion and flight dynamics models were evaluated using classical flight test techniques. The aerodynamic performance model of the ACFS was found to realistically represent that of current day, medium range transport aircraft. Recommendations are provided to enhance the capabilities of the ACFS to a level forecast for 1995 transport aircraft. The graphical and tabular results of this study will establish a performance section of the ACFS Operation's Manual.

  16. A study on optimization of hybrid drive train using Advanced Vehicle Simulator (ADVISOR)

    NASA Astrophysics Data System (ADS)

    Same, Adam; Stipe, Alex; Grossman, David; Park, Jae Wan

    This study investigates the advantages and disadvantages of three hybrid drive train configurations: series, parallel, and "through-the-ground" parallel. Power flow simulations are conducted with the MATLAB/Simulink-based software ADVISOR. These simulations are then applied to the UC Davis SAE Formula Hybrid vehicle. ADVISOR performs simulation calculations for vehicle position using a combined backward/forward method. These simulations are used to study how efficiency and agility are affected by the motor, fuel converter, and hybrid configuration. Three different vehicle models are developed to optimize the drive train of a vehicle for three stages of the SAE Formula Hybrid competition: autocross, endurance, and acceleration. Input cycles are created based on rough estimates of track geometry. The output from these ADVISOR simulations is a series of plots of velocity profile and energy storage State of Charge that provide a good estimate of how the Formula Hybrid vehicle will perform on the given course. The most noticeable discrepancy between the input cycle and the actual velocity profile of the vehicle occurs during deceleration. A weighted ranking system is developed to organize the simulation results and to determine the best drive train configuration for the Formula Hybrid vehicle. Results show that the through-the-ground parallel configuration with front-mounted motors achieves an optimal balance of efficiency, simplicity, and cost. ADVISOR is proven to be a useful tool for vehicle power train design for the SAE Formula Hybrid competition. This vehicle model based on ADVISOR simulation is applicable to various studies concerning performance and efficiency of hybrid drive trains.
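
    The backward-facing step that ADVISOR combines with a forward pass can be illustrated directly: start from the prescribed velocity trace, compute tractive power from road-load physics, and integrate battery state of charge. The vehicle parameters below are hypothetical, not the UC Davis car's values:

      import numpy as np

      m, g, rho = 320.0, 9.81, 1.2          # mass (kg), gravity, air density (made up)
      Cd, A, Crr = 0.9, 1.1, 0.015          # drag coeff., frontal area (m^2), rolling coeff.
      E_batt, eta = 1.5e6, 0.9              # battery energy (J), drivetrain efficiency

      def backward_power_flow(t, v, soc0=0.9):
          """Wheel power and battery SOC for a prescribed drive cycle."""
          a = np.gradient(v, t)
          p_wheel = (m * a + m * g * Crr + 0.5 * rho * Cd * A * v**2) * v
          p_batt = np.where(p_wheel >= 0, p_wheel / eta, p_wheel * eta)  # regen braking
          soc = soc0 - np.cumsum(p_batt * np.gradient(t)) / E_batt
          return p_wheel, soc

      t = np.linspace(0.0, 60.0, 601)           # a 60 s synthetic input cycle
      v = 15.0 * np.sin(np.pi * t / 60.0)**2    # speed trace (m/s)
      p, soc = backward_power_flow(t, v)
      print(f"peak power {p.max()/1e3:.1f} kW, final SOC {soc[-1]:.3f}")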

  17. Simulation of the hybrid and steady state advanced operating modes in ITER

    NASA Astrophysics Data System (ADS)

    Kessel, C. E.; Giruzzi, G.; Sips, A. C. C.; Budny, R. V.; Artaud, J. F.; Basiuk, V.; Imbeaux, F.; Joffrin, E.; Schneider, M.; Murakami, M.; Luce, T.; St. John, Holger; Oikawa, T.; Hayashi, N.; Takizuka, T.; Ozeki, T.; Na, Y.-S.; Park, J. M.; Garcia, J.; Tucillo, A. A.

    2007-09-01

    Integrated simulations are performed to establish a physics basis, in conjunction with present tokamak experiments, for the operating modes in the International Thermonuclear Experimental Reactor (ITER). Simulations of the hybrid mode are done using both fixed and free-boundary 1.5D transport evolution codes including CRONOS, ONETWO, TSC/TRANSP, TOPICS and ASTRA. The hybrid operating mode is simulated using the GLF23 and CDBM05 energy transport models. The injected powers are limited to the negative ion neutral beam, ion cyclotron and electron cyclotron heating systems. Several plasma parameters and source parameters are specified for the hybrid cases to provide a comparison of 1.5D core transport modelling assumptions, source physics modelling assumptions, and numerous peripheral physics models. Initial results indicate that very strict guidelines will need to be imposed on the application of GLF23, for example, to make useful comparisons. Some of the variations among the simulations are due to source models, which vary widely among the codes used. In addition, there are a number of peripheral physics models that should be examined, some of which include fusion power production, bootstrap current, treatment of fast particles and treatment of impurities. The hybrid simulations project to fusion gains of 5.6-8.3, βN values of 2.1-2.6 and fusion powers ranging from 350 to 500 MW, under the assumptions outlined in section 3. Simulations of the steady state operating mode are done with the same 1.5D transport evolution codes cited above, except the ASTRA code. In these cases the energy transport model is more difficult to prescribe, so the energy confinement models range from theory-based to empirically based. The injected powers include the same sources as used for the hybrid, with the possible addition of lower hybrid. The simulations of the steady state mode project to fusion gains of 3.5-7, βN values of 2.3-3.0 and fusion powers of 290 to 415 MW.

  18. Simulation Modeling of Advanced Pilot Training: The Effects of a New Aircraft Family of Systems

    DTIC Science & Technology

    2014-03-01

    [Figure 2: Advanced Pilot Training.] The shaded portion of Figure 2 depicts T-38s utilized by the Air Education and Training Command ... requirements and resource availability on student throughput. The model runs each scenario fifty times to generate the appropriate data in analysis ... parameters in this study can be determined with 10 or 20 replications; however, MTBM requires fifty replications to gain accuracy within ±0.1 maintenance ...

  19. Energy Simulation studies in IEA/SHC Task 18 advanced glazing and associated materials for solar and building applications

    SciTech Connect

    Sullivan, R.; Selkowitz, S.; Lyons, P.

    1995-04-01

    Researchers participating in IEA/SHC Task 18 on advanced glazing materials have as their primary objective the development of new innovative glazing products such as high performance glazings, wavelength selective glazings, chromogenic optical switching devices, and light transport mechanisms that will lead to significant energy use reductions and increased comfort in commercial and residential buildings. Part of the Task 18 effort involves evaluation of the energy and comfort performance of these new glazings through the use of various performance analysis simulation tools. Eleven countries (Australia, Denmark, Finland, Germany, Italy, Netherlands, Norway, Spain, Sweden, Switzerland, and the United States) are contributing to this multi-year simulation study to better understand the complex heat transfer interactions that determine window performance. Each country has selected particular simulation programs and identified the following items to guide the simulation tasks: (1) geographic locations; (2) building types; (3) window systems and control strategies; and (4) analysis parameters of interest. This paper summarizes the results obtained thus far by several of the research organizations.

  20. Quantifying the Effect of Fast Charger Deployments on Electric Vehicle Utility and Travel Patterns via Advanced Simulation: Preprint

    SciTech Connect

    Wood, E.; Neubauer, J.; Burton, E.

    2015-02-01

    The disparate characteristics between conventional (CVs) and battery electric vehicles (BEVs) in terms of driving range, refill/recharge time, and availability of refuel/recharge infrastructure inherently limit the relative utility of BEVs when benchmarked against traditional driver travel patterns. However, given a high penetration of high-power public charging combined with driver tolerance for rerouting travel to facilitate charging on long-distance trips, the difference in utility between CVs and BEVs could be marginalized. We quantify the relationships between BEV utility, the deployment of fast chargers, and driver tolerance for rerouting travel and extending travel durations by simulating BEVs operated over real-world travel patterns using the National Renewable Energy Laboratory's Battery Lifetime Analysis and Simulation Tool for Vehicles (BLAST-V). With support from the U.S. Department of Energy's Vehicle Technologies Office, BLAST-V has been developed to include algorithms for estimating the available range of BEVs prior to the start of trips, for rerouting baseline travel to utilize public charging infrastructure when necessary, and for making driver travel decisions for those trips in the presence of available public charging infrastructure, all while conducting advanced vehicle simulations that account for battery electrical, thermal, and degradation response. Results from BLAST-V simulations on vehicle utility, frequency of inserted stops, duration of charging events, and additional time and distance necessary for rerouting travel are presented to illustrate how BEV utility and travel patterns can be affected by various fast charge deployments.
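
    The travel-decision logic described above (estimate available range before each trip, then drive directly, reroute through a fast charger, or forgo the adaptation) reduces to a small decision rule. The function below is a hypothetical sketch of that logic; its name, parameters, and thresholds are illustrative and are not BLAST-V's API:

      def plan_trip(trip_km, est_range_km, detour_km, added_min, tolerance_min):
          """Decide how a BEV handles one trip, given estimated range and the
          driver's tolerance for extra travel time (all inputs hypothetical)."""
          if trip_km <= est_range_km:
              return "drive directly"
          # Assume one mid-route fast charge roughly doubles reachable distance
          if added_min <= tolerance_min and trip_km + detour_km <= 2 * est_range_km:
              return "reroute via fast charger"
          return "trip not adapted (counts against BEV utility)"

      print(plan_trip(trip_km=180, est_range_km=120, detour_km=8,
                      added_min=35, tolerance_min=45))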

  1. A Microcomputer-Based Control And Simulation Of An Advanced Ipm Synchronous Machine Drive System For Electric Vehicle Propulsion

    NASA Astrophysics Data System (ADS)

    Bose, B. K.; Szczesny, P. M.

    1987-10-01

    Advanced digital control and computer-aided control system design techniques are playing key roles in complex drive system design and control implementation. The paper describes a high-performance microcomputer-based control and digital simulation of an inverter-fed interior permanent magnet (IPM) synchronous machine that uses Neodymium-Iron-Boron magnets. The fully operational four-quadrant drive system includes a constant-torque region with zero-speed operation and a high-speed field-weakening constant-power region. The control uses a vector (field-oriented) technique in the constant-torque region, with the direct axis aligned to the stator flux, whereas the constant-power region control is based on torque-angle orientation of the impressed square-wave voltage. All the key feedback signals for the control are estimated with precision. The drive system is basically designed with an outer torque control loop for electric vehicle application, but speed and position control loops can be added for other industrial applications. The distributed microcomputer-based control system is based on an Intel 8096 microcontroller and a Texas Instruments TMS32010 digital signal processor. The complete drive system has been simulated using the VAX-based simulation language SIMNON to verify the feasibility of the control laws and to study the performance of the drive system. The simulation results are found to have excellent correlation with the laboratory breadboard tests.

  2. Advances in RNA molecular dynamics: a simulator's guide to RNA force fields.

    PubMed

    Vangaveti, Sweta; Ranganathan, Srivathsan V; Chen, Alan A

    2017-03-01

    Molecular simulations have become an essential tool for biochemical research. When they work properly, they are able to provide invaluable interpretations of experimental results and ultimately provide novel, experimentally testable predictions. Unfortunately, not all simulation models are created equal, and with inaccurate models it becomes unclear what is a bona fide prediction versus a simulation artifact. RNA models are still in their infancy compared to the many robust protein models that are widely in use, and for that reason the number of RNA force field revisions in recent years has been rapidly increasing. As there is no universally accepted 'best' RNA force field at the current time, RNA simulators must decide which one is most suited to their purposes, cognizant of its essential assumptions and their inherent strengths and weaknesses. Hopefully, armed with a better understanding of what goes inside the simulation 'black box,' RNA biochemists can devise novel experiments and provide crucial thermodynamic and structural data that will guide the development and testing of improved RNA models. WIREs RNA 2017, 8:e1396. doi: 10.1002/wrna.1396

  3. Predictive transport simulations of real-time profile control in JET advanced tokamak plasmas

    NASA Astrophysics Data System (ADS)

    Tala, T.; Laborde, L.; Mazon, D.; Moreau, D.; Corrigan, G.; Crisanti, F.; Garbet, X.; Heading, D.; Joffrin, E.; Litaudon, X.; Parail, V.; Salmi, A.; EFDA-JET workprogramme, contributors to the

    2005-09-01

    Predictive, time-dependent transport simulations with a semi-empirical plasma model have been used in closed-loop simulations to control the q-profile and the strength and location of the internal transport barrier (ITB). Five transport equations (Te, Ti, q, ne, vΦ) are solved, and the power levels of lower hybrid current drive, NBI and ICRH are calculated in a feedback loop determined by the feedback controller matrix. The real-time control (RTC) technique and algorithms used in the transport simulations are identical to those implemented and used in JET experiments (Laborde L. et al 2005 Plasma Phys. Control. Fusion 47 155 and Moreau D. et al 2003 Nucl. Fusion 43 870). The closed-loop simulations with RTC demonstrate that a variety of q-profiles and ITB pressure profiles can be achieved and controlled simultaneously. The simulations also showed that with the same RTC technique as used in JET experiments, it is possible to sustain the q-profiles and pressure profiles close to their set-point profiles for longer than the current diffusion time. In addition, the importance of being able to handle multiple time scales to control the location and strength of the ITB is pointed out. Several future improvements and perspectives of the RTC scheme are presented.
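
    The closed loop described, actuator powers computed from a feedback controller matrix acting on profile errors, has this minimal discrete-time shape. The two-state plant, gain matrix, and set-points below are toy values for illustration, not the JET controller:

      import numpy as np

      A = np.array([[-0.5, 0.1],             # toy "plasma profile" dynamics (made up)
                    [ 0.0, -0.3]])
      B = np.eye(2)                          # two actuators (e.g., heating powers)
      K = np.diag([5.0, 5.0])                # feedback controller matrix
      y_set = np.array([1.0, 0.5])           # set-point profile quantities

      y, dt = np.zeros(2), 0.05
      for _ in range(400):                   # 20 s of closed-loop evolution
          u = K @ (y_set - y)                # actuator request from profile error
          y = y + dt * (A @ y + B @ u)       # explicit Euler plant update
      print(np.round(y, 3))                  # settles near y_set (small P-offset)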

  4. Advanced methods in global gyrokinetic full f particle simulation of tokamak transport

    SciTech Connect

    Ogando, F.; Heikkinen, J. A.; Henriksson, S.; Janhunen, S. J.; Kiviniemi, T. P.; Leerink, S.

    2006-11-30

    A new full f nonlinear gyrokinetic simulation code, named ELMFIRE, has been developed for simulating transport phenomena in tokamak plasmas. The code is based on a gyrokinetic particle-in-cell algorithm, which can consider electrons and ions jointly or separately, as well as arbitrary impurities. The implicit treatment of the ion polarization drift and the use of full f methods allow for simulations of strongly perturbed plasmas including wide orbit effects, steep gradients and rapid dynamic changes. This article presents in more detail the algorithms incorporated into ELMFIRE, as well as benchmarking comparisons to both neoclassical theory and other codes. ELMFIRE calculates plasma dynamics by following the evolution of a number of sample particles. Because it uses a stochastic algorithm, its results are influenced by statistical noise; the effect of this noise on relevant quantities is analyzed. Turbulence spectra of the FT-2 plasma have been calculated with ELMFIRE, obtaining results consistent with experimental data.

  5. Validation of advanced NSSS simulator model for loss-of-coolant accidents

    SciTech Connect

    Kao, S.P.; Chang, S.K.; Huang, H.C.

    1995-09-01

    The replacement of the NSSS (Nuclear Steam Supply System) model on the Millstone 2 full-scope simulator has significantly increased its fidelity to simulate adverse conditions in the RCS. The new simulator NSSS model is a real-time derivative of the Nuclear Plant Analyzer by ABB. The thermal-hydraulic model is a five-equation, non-homogeneous model for water, steam, and non-condensible gases. The neutronic model is a three-dimensional nodal diffusion model. In order to certify the new NSSS model for operator training, an extensive validation effort has been performed by benchmarking the model performance against RELAP5/MOD2. This paper presents the validation results for the cases of small- and large-break loss-of-coolant accidents (LOCA). Detailed comparisons in the phenomena of reflux condensation, phase separation, and two-phase natural circulation are discussed.

  6. A Computational Methodology for Simulating Thermal Loss Testing of the Advanced Stirling Convertor

    NASA Technical Reports Server (NTRS)

    Reid, Terry V.; Wilson, Scott D.; Schifer, Nicholas A.; Briggs, Maxwell H.

    2011-01-01

    Convertor and generator testing is carried out in tests designed to characterize convertor performance when subjected to environments intended to simulate launch and space conditions. The value of net heat input must be known in order to calculate convertor efficiency and to validate convertor performance. Specially designed test hardware was used to verify and validate a two-step methodology for the prediction of net heat input, and the lessons learned from these simulations have been applied to previous convertor simulations. As heat is supplied to the convertors, electric power is produced and measured. Net heat input to the convertor is one parameter that contributes to the calculation of efficiency; it is not measured directly. The insulation-loss step determines the current status of the thermal conductivity of the micro-porous insulation by matching the heat source and hot-end temperatures and by matching the temperature difference across the Kaowool insulation.
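
    The efficiency bookkeeping behind a net-heat-input prediction is simple once an insulation-loss estimate exists. A sketch assuming one-dimensional Fourier conduction through the insulation; every number below is hypothetical, not ASC test data:

      def net_heat_input(q_gross_w, k_ins, area_m2, thickness_m, dT_k):
          """Gross heater power minus a 1-D conduction estimate of insulation loss."""
          q_loss = k_ins * area_m2 * dT_k / thickness_m   # Fourier's law (W)
          return q_gross_w - q_loss

      q_net = net_heat_input(q_gross_w=250.0, k_ins=0.03, area_m2=0.05,
                             thickness_m=0.025, dT_k=600.0)
      p_elec = 88.0                                        # measured electric output (W)
      print(f"net heat input {q_net:.1f} W, efficiency {p_elec / q_net:.1%}")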

  7. A review of recent advances in numerical simulations of microscale fuel processor for hydrogen production

    NASA Astrophysics Data System (ADS)

    Holladay, J. D.; Wang, Y.

    2015-05-01

    Microscale (<5 W) reformers for hydrogen production have been investigated for over a decade. These devices are intended to provide hydrogen for small fuel cells. Due to the reformer's small size, numerical simulations are critical to understanding the heat and mass transfer phenomena occurring in these systems and to guiding further improvements. This paper reviews the development of the numerical codes and details the reaction equations used. The majority of the devices utilized methanol as the fuel due to methanol's low reforming temperature and high conversion, although there are several methane-fueled systems. Increased computational power and more complex codes have led to improved accuracy of numerical simulations. Initial models focused on the reformer, while more recently the simulations began including other unit operations such as vaporizers, inlet manifolds, and combustors. These codes are critical for developing the next-generation systems. The systems reviewed included plate reactors, microchannel reactors, and annulus reactors for both wash-coated and packed bed systems.

  8. Comparison of CFD simulations with experimental data for a tanker model advancing in waves

    NASA Astrophysics Data System (ADS)

    Orihara, Hideo

    2011-03-01

    In this paper, CFD simulation results for a tanker model are compared with experimental data over a range of wave conditions to verify a capability to predict the sea-keeping performance of practical hull forms. CFD simulations are conducted using WISDAM-X code which is capable of unsteady RANS calculations in arbitrary wave conditions. Comparisons are made of unsteady surface pressures, added resistance and ship motions in regular waves for cases of fully-loaded and ballast conditions of a large tanker model. It is shown that the simulation results agree fairly well with the experimental data, and that WISDAM-X code can predict sea-keeping performance of practical hull forms.

  9. Integrated Physics Advances in Simulation of Wave Interactions with Extended MHD Phenomena

    SciTech Connect

    Batchelor, Donald B; D'Azevedo, Eduardo; Bateman, Glenn; Bernholdt, David E; Berry, Lee A; Bonoli, P.; Bramley, R; Breslau, J.; Chance, M.; Chen, J.; Choi, M.; Elwasif, Wael R; Fu, GuoYong; Harvey, R. W.; Houlberg, Wayne A; Jaeger, Erwin Frederick; Jardin, S. C.; Keyes, David E; Klasky, Scott A; Kruger, Scott; Ku, Long-Poe; McCune, Douglas; Schissel, D.; Schnack, D.; Wright, J. C.

    2007-06-01

    The broad scientific objectives of the SWIM (Simulation of Wave Interaction with MHD) project are: (A) to improve our understanding of the interactions that both RF wave and particle sources have on extended-MHD phenomena, and to substantially improve our capability for predicting and optimizing the performance of burning plasmas in devices such as ITER; and (B) to develop an integrated computational system for treating multi-physics phenomena with the required flexibility and extensibility to serve as a prototype for the Fusion Simulation Project (FSP).

  10. Steady state operation simulation of the Francis-99 turbine by means of advanced turbulence models

    NASA Astrophysics Data System (ADS)

    Gavrilov, A.; Dekterev, A.; Minakov, A.; Platonov, D.; Sentyabov, A.

    2017-01-01

    The paper presents numerical simulation of the flow in a hydraulic turbine based on the experimental data of the II Francis-99 workshop. The calculation domain includes the wicket gate, runner and draft tube, with a rotating reference frame for the runner zone. Different turbulence models, such as k-ω SST, ζ-f and RSM, were considered. The calculations were performed with the in-house CFD code SigmaFlow. Numerical simulations for the part-load, high-load and best-efficiency operating points were performed.

  11. An Advanced Objective Structured Clinical Examination Using Patient Simulators to Evaluate Pharmacy Students’ Skills in Physical Assessment

    PubMed Central

    Takamura, Norito; Ogata, Kenji; Setoguchi, Nao; Utsumi, Miho; Kourogi, Yasuyuki; Osaki, Takashi; Ozaki, Mineo; Sato, Keizo; Arimori, Kazuhiko

    2014-01-01

    Objective. To implement an advanced objective structured clinical examination (OSCE) in the curriculum and to evaluate Japanese pharmacy students’ skills in physical assessment such as measuring pulse and blood pressure, and assessing heart, lung, and intestinal sounds. Design. An advanced OSCE was implemented in a hospital pharmacy seminar as a compulsory subject. We programmed patient simulators with 21 different patient cases in which normal and abnormal physiological conditions were produced. The virtual patients were then used to evaluate the physical assessment skills of fifth-year pharmacy students. Assessment. Significant differences were observed between the average of all the detailed evaluations and the mean results for the following skills: pulse measurement, blood pressure measurement, deflating the cuff at a rate of 2-3 mmHg/sec, listening to heart sounds, and listening to lung sounds. Conclusion. Administering an advanced OSCE using virtual patients was an effective way of assessing pharmacy students’ skills in a realistic setting. Several areas in which pharmacy students require further training were identified. PMID:25657371

  12. High Level Requirements for the Nuclear Energy -- Knowledge Base for Advanced Modeling and Simulation (NE-KAMS)

    SciTech Connect

    Rich Johnson; Hyung Lee; Kimberlyn C. Mousseau

    2011-09-01

    The US Department of Energy, Office of Nuclear Energy (DOE-NE), has been tasked with the important mission of ensuring that nuclear energy remains a compelling and viable energy source in the U.S. The motivations behind this mission include cost-effectively meeting the expected increases in the power needs of the country, reducing carbon emissions and reducing dependence on foreign energy sources. In the near term, to ensure that nuclear power remains a key element of U.S. energy strategy and portfolio, the DOE-NE will be working with the nuclear industry to support safe and efficient operations of existing nuclear power plants. In the long term, to meet the increasing energy needs of the U.S., the DOE-NE will be investing in research and development (R&D) and working in concert with the nuclear industry to build and deploy new, safer and more efficient nuclear power plants. The safe and efficient operation of existing nuclear power plants and the designing, licensing and deploying of new reactor designs, however, will require focused R&D programs as well as the extensive use and leveraging of advanced modeling and simulation (M&S). M&S will play a key role in ensuring safe and efficient operations of existing and new nuclear reactors. The DOE-NE has been actively developing and promoting the use of advanced M&S in reactor design and analysis through its R&D programs, e.g., the Nuclear Energy Advanced Modeling and Simulation (NEAMS) and Consortium for Advanced Simulation of Light Water Reactors (CASL) programs. Also, nuclear reactor vendors are already using CFD and CSM for design, analysis, and licensing. However, these M&S tools cannot be used with confidence for nuclear reactor applications unless accompanied and supported by verification and validation (V&V) and uncertainty quantification (UQ) processes and procedures which provide quantitative measures of uncertainty for specific applications. The Nuclear Energy Knowledge base for Advanced Modeling and Simulation (NE-KAMS) is intended to provide that V&V and UQ foundation.

  13. Large-Scale Simulation of Nuclear Reactors: Issues and Perspectives

    SciTech Connect

    Merzari, Elia; Obabko, Aleks; Fischer, Paul; Halford, Noah; Walker, Justin; Siegel, Andrew; Yu, Yiqi

    2015-01-01

    Numerical simulation has been an intrinsic part of nuclear engineering research since its inception. In recent years a transition is occurring toward predictive, first-principle-based tools such as computational fluid dynamics. Even with the advent of petascale computing, however, such tools still have significant limitations. In the present work some of these issues, and in particular the presence of massive multiscale separation, are discussed, as well as some of the research conducted to mitigate them. Petascale simulations at high fidelity (large eddy simulation/direct numerical simulation) were conducted with the massively parallel spectral element code Nek5000 on a series of representative problems. These simulations shed light on the requirements of several types of simulation: (1) axial flow around fuel rods, with particular attention to wall effects; (2) natural convection in the primary vessel; and (3) flow in a rod bundle in the presence of spacing devices. The focus of the work presented here is on the lessons learned and the requirements to perform these simulations at exascale. Additional physical insight gained from these simulations is also emphasized.

  14. A Feedback Intervention to Increase Digital and Paper Checklist Performance in Technically Advanced Aircraft Simulation

    ERIC Educational Resources Information Center

    Rantz, William G.; Van Houten, Ron

    2011-01-01

    This study examined whether pilots operating a flight simulator completed digital or paper flight checklists more accurately after receiving postflight graphic and verbal feedback. The dependent variable was the number of checklist items completed correctly per flight. Following treatment, checklist completion with paper and digital checklists…

  15. Advances in Chimera Grid Tools for Multi-Body Dynamics Simulations and Script Creation

    NASA Technical Reports Server (NTRS)

    Chan, William M.

    2004-01-01

    This viewgraph presentation contains information about: (1) a framework for multi-body dynamics, the Geometry Manipulation Protocol (GMP); (2) the simulation procedure using Chimera Grid Tools (CGT) and OVERFLOW-2; (3) further recent developments in Chimera Grid Tools (OVERGRID, grid modules, and the script library); and (4) future work.

  16. Advanced Signal Processing for Integrated LES-RANS Simulations: Anti-aliasing Filters

    NASA Technical Reports Server (NTRS)

    Schlueter, J. U.

    2003-01-01

    Currently, a wide variety of flow phenomena are addressed with numerical simulations. Many flow solvers are optimized to simulate a limited spectrum of flow effects effectively, such as single parts of a flow system, but are either inadequate or too expensive to be applied to a very complex problem. As an example, the flow through a gas turbine can be considered. In the compressor and the turbine section, the flow solver has to be able to handle the moving blades, model the wall turbulence, and predict the pressure and density distribution properly. This can be done by a flow solver based on the Reynolds-Averaged Navier-Stokes (RANS) approach. On the other hand, the flow in the combustion chamber is governed by large scale turbulence, chemical reactions, and the presence of fuel spray. Experience shows that these phenomena require an unsteady approach. Hence, for the combustor, the use of a Large Eddy Simulation (LES) flow solver is desirable. While many design problems of a single flow passage can be addressed by separate computations, only the simultaneous computation of all parts can guarantee the proper prediction of multi-component phenomena, such as compressor/combustor instability and combustor/turbine hot-streak migration. Therefore, a promising strategy to perform full aero-thermal simulations of gas-turbine engines is the use of a RANS flow solver for the compressor sections, an LES flow solver for the combustor, and again a RANS flow solver for the turbine section.

  17. ADVANCED URBANIZED METEOROLOGICAL MODELING AND AIR QUALITY SIMULATIONS WITH CMAQ AT NEIGHBORHOOD SCALES

    EPA Science Inventory

    We present results from a study testing the new boundary layer parameterization method, the canopy drag approach (DA) which is designed to explicitly simulate the effects of buildings, street and tree canopies on the dynamic, thermodynamic structure and dispersion fields in urban...

  18. Simulations of the L-H transition on experimental advanced superconducting Tokamak

    SciTech Connect

    Weiland, Jan

    2014-12-15

    We have simulated the L-H transition on the EAST tokamak [Baonian Wan, EAST and HT-7 Teams, and International Collaborators, “Recent experiments in the EAST and HT-7 superconducting tokamaks,” Nucl. Fusion 49, 104011 (2009)] using a predictive transport code where ion and electron temperatures, electron density, and poloidal and toroidal momenta are simulated self-consistently. This is, as far as we know, the first theory-based simulation of an L-H transition including the whole radius and not making any assumptions about where the barrier should be formed. Another remarkable feature is that we get H-mode gradients in agreement with the α–α{sub d} diagram of Rogers et al. [Phys. Rev. Lett. 81, 4396 (1998)]. The feedback loop emerging from the simulations means that the L-H power threshold increases with the temperature at the separatrix. This is a main feature of the C-Mod experiments [Hubbard et al., Phys. Plasmas 14, 056109 (2007)]. This is also why the power threshold depends on the direction of the grad B drift in the scrape-off layer and why the power threshold increases with the magnetic field. A further significant general H-mode feature is that the density is much flatter in H-mode than in L-mode.

  19. Addressing unknown constants and metabolic network behaviors through petascale computing: understanding H2 production in green algae

    NASA Astrophysics Data System (ADS)

    Chang, Christopher; Alber, David; Graf, Peter; Kim, Kwiseon; Seibert, Michael

    2007-07-01

    ranging from distributed grids to unified petascale architectures.

  20. Message passing interface and multithreading hybrid for parallel molecular docking of large databases on petascale high performance computing machines.

    PubMed

    Zhang, Xiaohua; Wong, Sergio E; Lightstone, Felice C

    2013-04-30

    A mixed parallel scheme that combines message passing interface (MPI) and multithreading was implemented in the AutoDock Vina molecular docking program. The resulting program, named VinaLC, was tested on the petascale high performance computing (HPC) machines at Lawrence Livermore National Laboratory. To exploit the typical cluster-type supercomputers, thousands of docking calculations were dispatched by the master process to run simultaneously on thousands of slave processes, where each docking calculation takes one slave process on one node, and within the node each docking calculation runs via multithreading on multiple CPU cores and shared memory. Input and output of the program and the data handling within the program were carefully designed to deal with large databases and ultimately achieve HPC on a large number of CPU cores. Parallel performance analysis of the VinaLC program shows that the code scales up to more than 15K CPUs with a very low overhead cost of 3.94%. One million flexible compound docking calculations took only 1.4 h to finish on about 15K CPUs. The docking accuracy of VinaLC has been validated against the DUD data set by the re-docking of X-ray ligands and an enrichment study: 64.4% of the top-scoring poses have RMSD values under 2.0 Å. The program has been demonstrated to have good enrichment performance on 70% of the targets in the DUD data set. An analysis of the enrichment factors calculated at various percentages of the screening database indicates VinaLC has very good early recovery of actives.
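
    The MPI-plus-multithreading dispatch pattern the paper describes (a master handing single docking jobs to slave processes, each of which multithreads within its node) looks roughly like this with mpi4py. The dock function and ligand names are placeholders, not VinaLC code; a sketch to run with mpiexec -n <procs> python script.py:

      from concurrent.futures import ThreadPoolExecutor
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      def dock(ligand):                      # stand-in for one docking calculation
          return ligand, hash(ligand) % 100

      if rank == 0:                          # master: dispatch work, gather results
          ligands = [f"ligand_{i}" for i in range(1000)]
          results, status = [], MPI.Status()
          for job in ligands + [None] * (size - 1):   # None tells a worker to stop
              reply = comm.recv(source=MPI.ANY_SOURCE, status=status)
              if reply is not None:
                  results.append(reply)
              comm.send(job, dest=status.Get_source())
          print(len(results), "ligands docked")
      else:                                  # worker: request jobs until stopped
          pool = ThreadPoolExecutor()        # intra-node threads, as in the hybrid scheme
          comm.send(None, dest=0)            # initial work request
          while (job := comm.recv(source=0)) is not None:
              comm.send(pool.submit(dock, job).result(), dest=0)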

  1. Data Collection Methods for Validation of Advanced Multi-Resolution Fast Reactor Simulations

    SciTech Connect

    Tokuhiro, Akiro; Ruggles, Art; Pointer, David

    2015-01-22

    In pool-type Sodium Fast Reactors (SFRs) the regions most susceptible to thermal striping are the upper instrumentation structure (UIS) and the intermediate heat exchanger (IHX). This project investigated, experimentally and computationally (CFD), the thermal mixing in the region extending from the reactor core exit to the UIS. The thermal mixing phenomenon was simulated using two vertical jets at different velocities and temperatures, prototypic of two adjacent channels out of the core. Thermal jet mixing of anticipated flows at different temperatures and velocities was investigated. Velocity profiles were measured throughout the flow region using Ultrasonic Doppler Velocimetry (UDV), and temperatures along the geometric centerline between the jets were recorded using a thermocouple array. CFD simulations, using COMSOL, were used initially to understand the flow, then to design the experimental apparatus, and finally to compare simulation results with measurements characterizing the flows. Both the experimental results and the CFD simulations show that the flow field divides into three regions with respective transitions: convective mixing, (flow-direction) transitional, and post-mixing. For the anticipated SFR conditions the flow is momentum dominated, and thermal mixing is therefore limited by the short flow length from the exit of the core to the bottom of the UIS. This means that there will be thermal striping at any surface where poorly mixed streams impinge; unless lateral mixing is actively promoted out of the core, thermal striping will prevail. Furthermore, we note that CFD can be considered a 'separate effects (computational) test' and is recommended as part of any integral analysis. To this effect, poorly mixed streams have a potential impact on the rest of the SFR design and scaling, especially the placement of internal components, such as the IHX, that may see poorly mixed streams.

  2. Advancing Nucleosynthesis in Core-Collapse Supernovae Models Using 2D CHIMERA Simulations

    NASA Astrophysics Data System (ADS)

    Harris, J. A.; Hix, W. R.; Chertkow, M. A.; Bruenn, S. W.; Lentz, E. J.; Messer, O. B.; Mezzacappa, A.; Blondin, J. M.; Marronetti, P.; Yakunin, K.

    2014-01-01

    The deaths of massive stars as core-collapse supernovae (CCSN) serve as a crucial link in understanding galactic chemical evolution since the birth of the universe via the Big Bang. We investigate CCSN in polar axisymmetric simulations using the multidimensional radiation hydrodynamics code CHIMERA. Computational costs have traditionally constrained the evolution of the nuclear composition in CCSN models to, at best, a 14-species α-network. However, the limited capacity of the α-network to accurately evolve detailed composition, the neutronization and the nuclear energy generation rate has fettered the ability of prior CCSN simulations to accurately reproduce the chemical abundances and energy distributions as known from observations. These deficits can be partially ameliorated by "post-processing" with a more realistic network. Lagrangian tracer particles placed throughout the star record the temporal evolution of the initial simulation and enable the extension of the nuclear network evolution by incorporating larger systems in post-processing nucleosynthesis calculations. We present post-processing results of the four ab initio axisymmetric CCSN 2D models of Bruenn et al. (2013) evolved with the smaller α-network, and initiated from stellar metallicity, non-rotating progenitors of mass 12, 15, 20, and 25 M⊙ from Woosley & Heger (2007). As a test of the limitations of post-processing, we provide preliminary results from an ongoing simulation of the 15 M⊙ model evolved with a realistic 150 species nuclear reaction network in situ. With more accurate energy generation rates and an improved determination of the thermodynamic trajectories of the tracer particles, we can better unravel the complicated multidimensional "mass-cut" in CCSN simulations and probe for less energetically significant nuclear processes like the νp-process and the r-process, which require still larger networks.

  3. Recent advances in renal hemodynamics: insights from bench experiments and computer simulations

    PubMed Central

    2015-01-01

    It has long been known that the kidney plays an essential role in the control of body fluids and blood pressure and that impairment of renal function may lead to the development of diseases such as hypertension (Guyton AC, Coleman TG, Granger HJ. Annu Rev Physiol 34: 13-46, 1972). In this review, we highlight recent advances in our understanding of renal hemodynamics, obtained from experimental and theoretical studies. Some of these studies were published in response to a recent Call for Papers of this journal: Renal Hemodynamics: Integrating with the Nephron and Beyond. PMID:25715984

  4. Monte Carlo simulation and scatter correction of the GE Advance PET scanner with SimSET and Geant4

    NASA Astrophysics Data System (ADS)

    Barret, Olivier; Carpenter, T. Adrian; Clark, John C.; Ansorge, Richard E.; Fryer, Tim D.

    2005-10-01

    For Monte Carlo simulations to be used as an alternative solution to perform scatter correction, accurate modelling of the scanner as well as speed is paramount. General-purpose Monte Carlo packages (Geant4, EGS, MCNP) allow a detailed description of the scanner but are not efficient at simulating voxel-based geometries (patient images). On the other hand, dedicated codes (SimSET, PETSIM) will perform well for voxel-based objects but will be poor in their capacity of simulating complex geometries such as a PET scanner. The approach adopted in this work was to couple a dedicated code (SimSET) with a general-purpose package (Geant4) to have the efficiency of the former and the capabilities of the latter. The combined SimSET+Geant4 code (SimG4) was assessed on the GE Advance PET scanner and compared to the use of SimSET only. A better description of the resolution and sensitivity of the scanner and of the scatter fraction was obtained with SimG4. The accuracy of scatter correction performed with SimG4 and SimSET was also assessed from data acquired with the 20 cm NEMA phantom. SimG4 was found to outperform SimSET and to give slightly better results than the GE scatter correction methods installed on the Advance scanner (curve fitting and scatter modelling for the 300-650 keV and 375-650 keV energy windows, respectively). In the presence of a hot source close to the edge of the field of view (as found in oxygen scans), the GE curve-fitting method was found to fail whereas SimG4 maintained its performance.

  5. Monte Carlo simulation and scatter correction of the GE Advance PET scanner with SimSET and Geant4.

    PubMed

    Barret, Olivier; Carpenter, T Adrian; Clark, John C; Ansorge, Richard E; Fryer, Tim D

    2005-10-21

    For Monte Carlo simulations to be used as an alternative solution to perform scatter correction, accurate modelling of the scanner as well as speed is paramount. General-purpose Monte Carlo packages (Geant4, EGS, MCNP) allow a detailed description of the scanner but are not efficient at simulating voxel-based geometries (patient images). On the other hand, dedicated codes (SimSET, PETSIM) will perform well for voxel-based objects but will be poor in their capacity of simulating complex geometries such as a PET scanner. The approach adopted in this work was to couple a dedicated code (SimSET) with a general-purpose package (Geant4) to have the efficiency of the former and the capabilities of the latter. The combined SimSET+Geant4 code (SimG4) was assessed on the GE Advance PET scanner and compared to the use of SimSET only. A better description of the resolution and sensitivity of the scanner and of the scatter fraction was obtained with SimG4. The accuracy of scatter correction performed with SimG4 and SimSET was also assessed from data acquired with the 20 cm NEMA phantom. SimG4 was found to outperform SimSET and to give slightly better results than the GE scatter correction methods installed on the Advance scanner (curve fitting and scatter modelling for the 300-650 keV and 375-650 keV energy windows, respectively). In the presence of a hot source close to the edge of the field of view (as found in oxygen scans), the GE curve-fitting method was found to fail whereas SimG4 maintained its performance.

  6. Advanced time integration algorithms for dislocation dynamics simulations of work hardening

    NASA Astrophysics Data System (ADS)

    Sills, Ryan B.; Aghaei, Amin; Cai, Wei

    2016-05-01

    Efficient time integration is a necessity for dislocation dynamics simulations of work hardening to achieve experimentally relevant strains. In this work, an efficient time integration scheme using a high order explicit method with time step subcycling and a newly-developed collision detection algorithm are evaluated. First, time integrator performance is examined for an annihilating Frank-Read source, showing the effects of dislocation line collision. The integrator with subcycling is found to significantly out-perform other integration schemes. The performance of the time integration and collision detection algorithms is then tested in a work hardening simulation. The new algorithms show a 100-fold speed-up relative to traditional schemes. Subcycling is shown to improve efficiency significantly while maintaining an accurate solution, and the new collision algorithm allows an arbitrarily large time step size without missing collisions.
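
    A minimal sketch of the subcycling idea, assuming a simple explicit Euler base integrator: degrees of freedom flagged as fast (e.g. nodes near a collision) are re-advanced with several smaller substeps inside each global step. The force model and the fast/slow split below are placeholders, not the paper's dislocation dynamics formulation.

    # Hedged sketch of time-step subcycling with an explicit Euler base scheme.
    import numpy as np

    def subcycled_step(x, v, force, fast, dt, nsub=10):
        """One global explicit step; entries flagged in 'fast' take nsub substeps."""
        a = force(x)
        x_new = x + dt * v
        v_new = v + dt * a
        # Redo the fast nodes with nsub smaller steps, re-evaluating forces.
        xs, vs = x.copy(), v.copy()
        for _ in range(nsub):
            a_sub = force(xs)
            xs[fast] += (dt / nsub) * vs[fast]
            vs[fast] += (dt / nsub) * a_sub[fast]
        x_new[fast], v_new[fast] = xs[fast], vs[fast]
        return x_new, v_new

    # Toy usage: harmonic restoring force, first two nodes flagged fast.
    f = lambda x: -x
    x0, v0 = np.ones(6), np.zeros(6)
    fast = np.array([True, True, False, False, False, False])
    x1, v1 = subcycled_step(x0, v0, f, fast, dt=0.1)
    print(x1, v1)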

  7. Advanced time integration algorithms for dislocation dynamics simulations of work hardening

    DOE PAGES

    Sills, Ryan B.; Aghaei, Amin; Cai, Wei

    2016-04-25

    Efficient time integration is a necessity for dislocation dynamics simulations of work hardening to achieve experimentally relevant strains. In this work, an efficient time integration scheme using a high order explicit method with time step subcycling and a newly-developed collision detection algorithm are evaluated. First, time integrator performance is examined for an annihilating Frank–Read source, showing the effects of dislocation line collision. The integrator with subcycling is found to significantly out-perform other integration schemes. The performance of the time integration and collision detection algorithms is then tested in a work hardening simulation. The new algorithms show a 100-fold speed-up relative to traditional schemes. As a result, subcycling is shown to improve efficiency significantly while maintaining an accurate solution, and the new collision algorithm allows an arbitrarily large time step size without missing collisions.

  8. Advanced time integration algorithms for dislocation dynamics simulations of work hardening

    SciTech Connect

    Sills, Ryan B.; Aghaei, Amin; Cai, Wei

    2016-04-25

    Efficient time integration is a necessity for dislocation dynamics simulations of work hardening to achieve experimentally relevant strains. In this work, an efficient time integration scheme using a high order explicit method with time step subcycling and a newly-developed collision detection algorithm are evaluated. First, time integrator performance is examined for an annihilating Frank–Read source, showing the effects of dislocation line collision. The integrator with subcycling is found to significantly out-perform other integration schemes. The performance of the time integration and collision detection algorithms is then tested in a work hardening simulation. The new algorithms show a 100-fold speed-up relative to traditional schemes. As a result, subcycling is shown to improve efficiency significantly while maintaining an accurate solution, and the new collision algorithm allows an arbitrarily large time step size without missing collisions.

  9. A review of recent advances of numerical simulations of microscale fuel processors for hydrogen production

    SciTech Connect

    Holladay, Jamelyn D.; Wang, Yong

    2015-05-01

    Microscale (<5 W) reformers for hydrogen production have been investigated for over a decade. These devices are intended to provide hydrogen for small fuel cells. Due to the reformers' small size, numerical simulations are critical to understanding the heat and mass transfer phenomena occurring in these systems. This paper reviews the development of the numerical codes and details the reaction equations used. The majority of the devices utilize methanol as the fuel, owing to methanol's low reforming temperature and high conversion, although there are several methane-fueled systems. As computational power has become cheaper and more widely available, the codes have increased in complexity and accuracy. Initial models focused on the reformer alone, while more recent simulations include other unit operations such as vaporizers, inlet manifolds, and combustors. These codes are critical for developing the next generation of systems. The systems reviewed include plate reactors, microchannel reactors, annulus reactors, wash-coated systems, and packed-bed systems.
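
    For reference, the reaction set around which most methanol microreformer models in this literature build their rate expressions is the familiar triple of steam reforming, methanol decomposition, and water-gas shift; the reviewed codes differ mainly in the kinetic expressions attached to each step:

    \mathrm{CH_3OH + H_2O \rightleftharpoons CO_2 + 3\,H_2}    % steam reforming
    \mathrm{CH_3OH \rightleftharpoons CO + 2\,H_2}             % decomposition
    \mathrm{CO + H_2O \rightleftharpoons CO_2 + H_2}           % water-gas shift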

  10. A FEEDBACK INTERVENTION TO INCREASE DIGITAL AND PAPER CHECKLIST PERFORMANCE IN TECHNICALLY ADVANCED AIRCRAFT SIMULATION

    PubMed Central

    Rantz, William G; Van Houten, Ron

    2011-01-01

    This study examined whether pilots operating a flight simulator completed digital or paper flight checklists more accurately after receiving postflight graphic and verbal feedback. The dependent variable was the number of checklist items completed correctly per flight. Following treatment, checklist completion with paper and digital checklists increased from 38% and 39%, respectively, to nearly 100% and remained close to 100% after feedback and praise for improvement were withdrawn. Performance was maintained at or near 100% during follow-up probes. PMID:21541133

  11. Study of Plasma Liner Driven Magnetized Target Fusion Via Advanced Simulations

    SciTech Connect

    Samulyak, Roman V.; Parks, Paul

    2013-08-31

    The feasibility of plasma liner driven Magnetized Target Fusion (MTF) is assessed via terascale numerical simulations. In the MTF concept, a plasma liner, formed by the merger of a number (60 or more) of radial, highly supersonic plasma jets, implodes on a target in the form of two compact plasma toroids and compresses it to fusion ignition conditions. By avoiding major difficulties associated with both traditional laser-driven inertial confinement fusion and solid liner driven MTF, plasma liner driven MTF potentially provides a low-cost and fast R&D path towards the demonstration of practical fusion energy. High-fidelity numerical simulations of the full nonlinear models associated with plasma liner MTF, using state-of-the-art numerical algorithms and terascale computing, are necessary in order to resolve uncertainties and provide guidance for future experiments. At Stony Brook University, we have developed unique computational capabilities ideally suited to the MTF problem. The FronTier code, developed in collaboration with BNL and LANL under DOE funding including SciDAC for the simulation of 3D multi-material hydro and MHD flows, has been benchmarked and used for fundamental and engineering problems in energy science applications. We have performed 3D simulations of converging supersonic plasma jets, their merger, and the formation of the plasma liner, together with a study of the corresponding oblique shock problem. We have studied the implosion of the plasma liner on the magnetized plasma target by resolving Rayleigh-Taylor instabilities in 2D and 3D and other relevant physics, and have estimated the thermodynamic conditions of the target at the moment of maximum compression and the hydrodynamic efficiency of the method.

  12. Advanced Conveyance Security Device System Scalability Assessment: Combined 802.15.4 and IP Network Simulation

    DTIC Science & Technology

    2009-02-01

    ...channel non-beaconed scan. The Test Report on CSD-DCP Latency [1] shows that using beacons for network discovery takes more time than not using beacons... [1] SSC Pacific, "CSD Communications Test Report on CSD-DCP Latency and Multi-Lane Portal Wireless Range, Version 1.7," Space and Naval Warfare... BIBLIOGRAPHY: 1. Bagrodia, R., et al., "An Accurate, Scalable Communication Effects Server for the FCS System of Systems Simulation Environment

  13. Training Research Program and Plans: Advanced Simulation in Undergraduate Pilot Training

    DTIC Science & Technology

    1975-06-01

    ...the area of recording techniques, performance measures and proficiency measurement using both on-line and off-line techniques. The ADACS facility can... performance equivalence approach is to provide more objective data through direct comparison of performance both in the aircraft and in the simulator... of the number of lines used to define an object and the shades of gray used to provide contrast between objects or between objects and background

  14. Advanced software development workstation: Effectiveness of constraint-checking. [spaceflight simulation and planning

    NASA Technical Reports Server (NTRS)

    Izygon, Michel

    1992-01-01

    This report summarizes the findings and lessons learned from the development of an intelligent user interface for a space flight planning simulation program, in the specific area related to constraint-checking. The different functionalities of the Graphical User Interface part and of the rule-based part of the system have been identified. Their respective domain of applicability for error prevention and error checking have been specified.

  15. A review of recent advances in the spherical harmonics expansion method for semiconductor device simulation.

    PubMed

    Rupp, K; Jungemann, C; Hong, S-M; Bina, M; Grasser, T; Jüngel, A

    The Boltzmann transport equation is commonly considered to be the best semi-classical description of carrier transport in semiconductors, providing precise information about the distribution of carriers with respect to time (one dimension), location (three dimensions), and momentum (three dimensions). However, numerical solutions of the seven-dimensional carrier distribution function are very demanding. The most common solution approach is the stochastic Monte Carlo method, because the gigabytes of memory required by deterministic direct solution approaches have only recently become available. As a remedy, the higher accuracy provided by solutions of the Boltzmann transport equation is often exchanged for lower computational expense by using simpler models based on macroscopic quantities such as carrier density and mean carrier velocity. Recent developments in the deterministic spherical harmonics expansion method have reduced the computational cost of solving the Boltzmann transport equation, enabling the computation of carrier distribution functions even for spatially three-dimensional device simulations within minutes to hours. We summarize recent progress for the spherical harmonics expansion method and show that small currents, reasonable execution times, and rare events such as low-frequency noise, which are all hard or even impossible to simulate with the established Monte Carlo method, can be handled in a straightforward manner. The applicability of the method for important practical applications is demonstrated for noise simulation, small-signal analysis, hot-carrier degradation, and avalanche breakdown.
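
    The method's name refers to expanding the angular dependence of the distribution function in momentum space in spherical harmonics; schematically, truncated at order L:

    f(\mathbf{x},\mathbf{k},t) \approx \sum_{l=0}^{L}\sum_{m=-l}^{l}
        f_{l,m}(\mathbf{x},\varepsilon,t)\, Y_{l,m}(\theta,\varphi)

    Here epsilon is the carrier energy and (theta, phi) are the angles of k; substituting this expansion into the Boltzmann transport equation yields coupled equations for the coefficients f_{l,m} over position and energy, which is where the dimensional reduction comes from.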

  16. Advanced fluid modeling and PIC/MCC simulations of low-pressure ccrf discharges

    NASA Astrophysics Data System (ADS)

    Becker, M. M.; Kählert, H.; Sun, A.; Bonitz, M.; Loffhagen, D.

    2017-04-01

    Comparative studies of capacitively coupled radio-frequency discharges in helium and argon at pressures between 10 and 80 Pa are presented applying two different fluid modeling approaches as well as two independently developed particle-in-cell/Monte Carlo collision (PIC/MCC) codes. The focus is on the analysis of the range of applicability of a recently proposed fluid model including an improved drift-diffusion approximation for the electron component as well as its comparison with fluid modeling results using the classical drift-diffusion approximation and benchmark results obtained by PIC/MCC simulations. Main features of this time- and space-dependent fluid model are given. It is found that the novel approach shows generally quite good agreement with the macroscopic properties derived by the kinetic simulations and is largely able to characterize qualitatively and quantitatively the discharge behavior even at conditions when the classical fluid modeling approach fails. Furthermore, the excellent agreement between the two PIC/MCC simulation codes using the velocity Verlet method for the integration of the equations of motion verifies their accuracy and applicability.
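
    For context, the classical drift-diffusion closure referred to above approximates the electron particle flux (in the notation commonly used for such discharge models; the improved approximation studied by the authors adds corrections to this closure) as:

    \Gamma_e = -\mu_e n_e \mathbf{E} - D_e \nabla n_e ,
    \qquad
    \frac{\partial n_e}{\partial t} + \nabla\cdot\Gamma_e = S_e

    with mobility mu_e, diffusion coefficient D_e, electric field E, and net source term S_e from ionization and loss processes.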

  17. Advanced Coupled Simulation of Borehole Thermal Energy Storage Systems and Above Ground Installations

    NASA Astrophysics Data System (ADS)

    Welsch, Bastian; Rühaak, Wolfram; Schulte, Daniel O.; Bär, Kristian; Sass, Ingo

    2016-04-01

    Seasonal thermal energy storage in borehole heat exchanger arrays is a promising technology for reducing primary energy consumption and carbon dioxide emissions. These systems usually consist of several subsystems: the heat source (e.g. solar thermal collectors or a combined heat and power plant), the heat consumer (e.g. a heating system), diurnal storage (i.e. water tanks), the borehole thermal energy storage, additional heat sources for peak load coverage (e.g. a heat pump or a gas boiler), and the distribution network. For the design of an integrated system, numerical simulations of all subsystems are imperative. A separate simulation of the borehole energy storage is well established but represents a simplification. In reality, the subsystems interact with each other: the fluid temperatures of the heat generation system, the heating system, and the underground storage are interdependent and affect the performance of each subsystem. To take these interdependencies into account, we coupled a software package for the simulation of the above-ground facilities with a finite element code for modeling heat flow in the subsurface and the borehole heat exchangers. This allows for a more realistic view of the entire system. Consequently, a finer adjustment of the system components and a more precise prognosis of the system's performance can be ensured.
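
    The coupling strategy can be illustrated with a minimal fixed-point ("ping-pong") iteration over the interface fluid temperatures; both subsystem models below are stand-ins, not the actual software pair used in the study.

    # Hedged sketch: the above-ground plant model and the subsurface/BHE model
    # exchange fluid temperatures until the interface temperatures agree.
    def plant_model(T_from_storage):
        """Above-ground system: returns the inlet temperature sent to the storage."""
        return 0.7 * T_from_storage + 20.0     # placeholder heat balance

    def storage_model(T_inlet):
        """Subsurface/BHE model: returns the outlet temperature to the plant."""
        return 0.9 * T_inlet - 2.0             # placeholder ground response

    def coupled_step(T_out_guess, tol=1e-6, max_iter=100):
        """Fixed-point iteration over the interface temperatures for one time step."""
        T_out = T_out_guess
        for _ in range(max_iter):
            T_in = plant_model(T_out)
            T_out_new = storage_model(T_in)
            if abs(T_out_new - T_out) < tol:
                return T_in, T_out_new
            T_out = T_out_new
        raise RuntimeError("coupling iteration did not converge")

    print(coupled_step(10.0))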

  18. Advanced Spacecraft EM Modelling Based on Geometric Simplification Process and Multi-Methods Simulation

    NASA Astrophysics Data System (ADS)

    Leman, Samuel; Hoeppe, Frederic

    2016-05-01

    This paper presents the first results of a new generation of ElectroMagnetic (EM) methodology applied to spacecraft system modelling in the low frequency range (where the system's dimensions are of the same order of magnitude as the wavelength). This innovative approach aims at implementing appropriate simplifications of the real system based on the identification of the dominant electrical and geometrical parameters driving the global EM behaviour. One rigorous but expensive simulation is performed to quantify the error introduced by the use of simpler multi-models. If both the speed-up of the simulation time and the quality of the EM response are satisfactory, uncertainty simulations can be performed based on a library of simple models implemented in a flexible and robust Kron's network formalism. This methodology is expected to open up new perspectives on fast parametric analysis and deep understanding of system behaviour. It will allow the identification of the main radiated and conducted coupling paths and the sensitive EM parameters, in order to optimize protections and control disturbance sources during spacecraft design phases.

  19. Advancing the boundaries of high-connectivity network simulation with distributed computing.

    PubMed

    Morrison, Abigail; Mehring, Carsten; Geisel, Theo; Aertsen, A D; Diesmann, Markus

    2005-08-01

    The availability of efficient and reliable simulation tools is one of the mission-critical technologies in the fast-moving field of computational neuroscience. Research indicates that higher brain functions emerge from large and complex cortical networks and their interactions. The large number of elements (neurons) combined with the high connectivity (synapses) of the biological network and the specific type of interactions impose severe constraints on the explorable system size that have previously been hard to overcome. Here we present a collection of new techniques combined into a coherent simulation tool that removes the fundamental obstacle in the computational study of biological neural networks: the enormous number of synaptic contacts per neuron. Distributing an individual simulation over multiple computers enables the investigation of networks orders of magnitude larger than previously possible. The software scales excellently on a wide range of tested hardware, so it can be used in an interactive and iterative fashion for the development of ideas, and results can be produced quickly even for very large networks. In contrast to earlier approaches, a wide class of neuron models and synaptic dynamics can be represented.
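
    One way to picture the key distribution trick: assign neurons to processes round-robin and store each synapse only on the process that owns its target neuron, so per-process synapse memory shrinks with the number of processes. A toy sketch (not the authors' actual data structures):

    # Hedged sketch of distributing synapses by target-neuron ownership.
    n_neurons, n_procs = 12, 3
    synapses = [(0, 5), (3, 7), (7, 2), (9, 11), (4, 0)]   # (source, target) pairs

    def owner(neuron_id):
        return neuron_id % n_procs        # round-robin neuron placement

    # Each process keeps only the synapses whose target neuron is local.
    local_synapses = {rank: [] for rank in range(n_procs)}
    for (src, tgt) in synapses:
        local_synapses[owner(tgt)].append((src, tgt))

    # When 'src' spikes, the spike is broadcast; only processes owning its
    # targets deliver it through their locally stored synapses.
    for rank in range(n_procs):
        print(rank, local_synapses[rank])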

  20. Parallel processing of real-time dynamic systems simulation on OSCAR (Optimally SCheduled Advanced multiprocessoR)

    NASA Technical Reports Server (NTRS)

    Kasahara, Hironori; Honda, Hiroki; Narita, Seinosuke

    1989-01-01

    Parallel processing of real-time dynamic systems simulation on a multiprocessor system named OSCAR is presented. In the simulation of dynamic systems, the same calculations are generally repeated every time step. However, Do-all or Do-across techniques cannot be applied to parallel processing of the simulation, since there exist data dependencies from the end of one iteration to the beginning of the next, and furthermore data input and data output are required every sampling period. Therefore, parallelism inside the calculation required for a single time step, i.e., a large basic block consisting of arithmetic assignment statements, must be exploited. In the proposed method, near-fine-grain tasks, each of which consists of one or more floating point operations, are generated to extract the parallelism from the calculation and are assigned to processors by optimal static scheduling at compile time, in order to avoid the large run-time overhead that the use of near-fine-grain tasks would otherwise cause. The practicality of the scheme is demonstrated on OSCAR (Optimally SCheduled Advanced multiprocessoR), which has been developed to exploit the advantageous features of static scheduling algorithms to the maximum extent.
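
    A minimal sketch of compile-time list scheduling in the spirit described above: ready tasks are placed on the processor that can start them earliest, producing a static schedule before run time. The task graph, costs, and priority rule below are illustrative, not OSCAR's actual scheduler.

    # Hedged sketch of static list scheduling of fine-grain tasks.
    tasks = {"a": 2, "b": 3, "c": 1, "d": 2}           # task -> cost
    deps = {"a": [], "b": [], "c": ["a", "b"], "d": ["a"]}
    n_procs = 2

    finish = {}                      # task -> finish time
    proc_free = [0.0] * n_procs      # next free time per processor
    schedule = []

    for t in sorted(tasks, key=lambda t: len(deps[t])):    # simple priority rule
        ready = max((finish[d] for d in deps[t]), default=0.0)
        p = min(range(n_procs), key=lambda i: max(proc_free[i], ready))
        start = max(proc_free[p], ready)
        finish[t] = start + tasks[t]
        proc_free[p] = finish[t]
        schedule.append((t, p, start, finish[t]))

    print(schedule)   # static (task, processor, start, finish) schedule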

  1. Kinetic-MHD hybrid simulation of fishbone modes excited by fast ions on the experimental advanced superconducting tokamak (EAST)

    NASA Astrophysics Data System (ADS)

    Pei, Youbin; Xiang, Nong; Hu, Youjun; Todo, Y.; Li, Guoqiang; Shen, Wei; Xu, Liqing

    2017-03-01

    Kinetic-MagnetoHydroDynamic hybrid simulations are carried out to investigate fishbone modes excited by fast ions on the Experimental Advanced Superconducting Tokamak. The simulations use a realistic equilibrium reconstructed from experimental data with the constraint of the q = 1 surface location (q is the safety factor). An anisotropic slowing-down distribution is used to model the distribution of the fast ions from neutral beam injection. The resonance condition is used to identify the interaction between the fishbone mode and the fast ions, which shows that the fishbone mode is simultaneously in resonance with the bounce motion of the trapped particles and the transit motion of the passing particles. Both the passing and trapped particles are important in destabilizing the fishbone mode. The simulations show that the mode frequency chirps down as the mode reaches the nonlinear stage, during which there is a substantial flattening of the perpendicular pressure of the fast ions, compared with that of the parallel pressure. For passing particles, the resonance remains within the q = 1 surface, while for trapped particles the resonant location moves out radially during the nonlinear evolution. In addition, parameter scanning is performed to examine the dependence of the linear frequency and growth rate of the fishbone on the pressure and injection velocity of the fast ions.
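
    The resonance condition mentioned above is commonly written in terms of the fast ions' orbital frequencies; in one standard form (a generic expression, not necessarily the exact convention used by the authors):

    \omega \simeq n\,\omega_\varphi + p\,\omega_\theta , \qquad p \in \mathbb{Z}

    where omega is the mode frequency, n the toroidal mode number, and omega_phi, omega_theta the toroidal (transit or precession) and poloidal (bounce) frequencies of the particle orbit; passing and trapped particles satisfy it through different combinations of these frequencies.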

  2. Investigation of the Hot-Stamping Process for Advanced High-Strength Steel Sheet by Numerical Simulation

    NASA Astrophysics Data System (ADS)

    Liu, H. S.; Xing, Z. W.; Bao, J.; Song, B. Y.

    2010-04-01

    Hot forming is a new way to manufacture complex-shaped components of advanced high-strength steel (AHSS) sheet with a minimum of spring-back. Numerical simulation is an effective way to examine the hot-forming process, particularly to determine thermal and thermo-mechanical characteristics and their dependence on temperature, strain, and strain rate. The flow behavior of the 22MnB5 AHSS is investigated through hot tensile tests. A 3D finite element (FE) model of the hot-stamping process for the shaped part is built in the ABAQUS/Explicit environment, based on the solutions of several key problems, such as the treatment of contact between blank and tools, the determination of material characteristics, and meshing. Numerical simulation is carried out to investigate the influence of blank holder force (BHF) and die gap on the hot-forming process for the shaped part. Numerical results show that the FE model is effective in simulating the hot-forming process. A large BHF reduces the amount of spring-back and improves the contact of the flange with the tools while avoiding cracking of the stamped part. The die gap has a considerable influence on the temperature distribution on the side walls; the larger the die gap, the higher the temperature on the side wall of the shaped part.

  3. Simulation of nucleation and growth of atomic layer deposition phosphorus for doping of advanced FinFETs

    SciTech Connect

    Seidel, Thomas E.; Goldberg, Alexander; Halls, Mat D.; Current, Michael I.

    2016-01-15

    Simulations of the nucleation and growth of phosphorus films were carried out using density functional theory. The surface was represented by a Si9H12 truncated cluster surface model with 2 × 1-reconstructed (100) Si-OH terminations for the initial reaction sites. Chemistries included phosphorus halides (PF3, PCl3, and PBr3) and disilane (Si2H6). Atomic layer deposition (ALD) reaction sequences were illustrated with three-dimensional molecular models using sequential PF3 and Si2H6 reactions and featuring SiFH3 as a byproduct. Exothermic reaction pathways were developed for both nucleation and growth on a Si-OH surface. Energetically favorable reactions for the deposition of four phosphorus atoms, including lateral P-P bonding, were simulated. This paper suggests energetically favorable thermodynamic reactions for the growth of elemental phosphorus on (100) silicon. Phosphorus layers made by ALD are an option for doping advanced fin field-effect transistors (FinFETs). Phosphorus may be thermally diffused into the silicon or recoil knocked in; simulations of the recoil profile of phosphorus into a FinFET surface are illustrated.

  4. Nuclear Energy Advanced Modeling and Simulation (NEAMS) Waste Integrated Performance and Safety Codes (IPSC) : FY10 development and integration.

    SciTech Connect

    Criscenti, Louise Jacqueline; Sassani, David Carl; Arguello, Jose Guadalupe, Jr.; Dewers, Thomas A.; Bouchard, Julie F.; Edwards, Harold Carter; Freeze, Geoffrey A.; Wang, Yifeng; Schultz, Peter Andrew

    2011-02-01

    This report describes the progress in fiscal year 2010 in developing the Waste Integrated Performance and Safety Codes (IPSC) in support of the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The goal of the Waste IPSC is to develop an integrated suite of computational modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with robust verification, validation, and software quality requirements. Waste IPSC activities in fiscal year 2010 focused on specifying a challenge problem to demonstrate proof of concept, developing a verification and validation plan, and performing an initial gap analysis to identify candidate codes and tools to support the development and integration of the Waste IPSC. The current Waste IPSC strategy is to acquire and integrate the necessary Waste IPSC capabilities wherever feasible, and to develop only those capabilities that cannot be acquired or suitably integrated, verified, or validated. This year-end progress report documents the FY10 status of acquisition, development, and integration of thermal-hydrologic-chemical-mechanical (THCM) code capabilities, frameworks, and enabling tools and infrastructure.

  5. Simulation of concomitant magnetic fields on fast switched gradient coils used in advanced application of MRI

    NASA Astrophysics Data System (ADS)

    Salinas-Muciño, G.; Torres-García, E.; Hidalgo-Tobon, S.

    2012-10-01

    The process of producing an MR image includes nuclear alignment, RF excitation, spatial encoding, and image formation. To form an image, it is necessary to perform spatial localization of the MR signals, which is achieved using gradient coils. MRI requires gradient coils that generate magnetic fields varying linearly with position over the imaging volume. Safety concerns have motivated in-depth study of the relation between gradient magnetic fields and peripheral nerve stimulation. This work presents a numerical model relating the concomitant magnetic fields produced by the gradient coils to the electric field they induce, through gradient-field switching, in a cube of conductivity σ during pulse sequences such as echo-planar imaging (EPI). This sequence was chosen because it is the one most used in advanced magnetic resonance imaging applications such as functional MRI, cardiac imaging, and diffusion imaging.
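
    The induced electric field in such models follows from Faraday's law; for the idealized case of a spatially uniform time-varying field B(t) threading a circular path of radius r, the azimuthal induced field is:

    \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}
    \quad\Longrightarrow\quad
    E_\phi(r) = -\frac{r}{2}\,\frac{dB}{dt}

    which is why fast gradient switching (large dB/dt), as in EPI, is the regime of interest for nerve stimulation studies.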

  6. A Simulation Study Comparing Incineration and Composting in a Mars-Based Advanced Life Support System

    NASA Technical Reports Server (NTRS)

    Hogan, John; Kang, Sukwon; Cavazzoni, Jim; Levri, Julie; Finn, Cory; Luna, Bernadette (Technical Monitor)

    2000-01-01

    The objective of this study is to compare incineration and composting in a Mars-based advanced life support (ALS) system. The variables explored include waste pre-processing requirements, reactor sizing, and buffer capacities. The study incorporates detailed mathematical models of biomass production and waste processing into an existing dynamic ALS system model. The ALS system and incineration models (written in MATLAB/SIMULINK(c)) were developed at the NASA Ames Research Center. The composting process is modeled using first-order kinetics, with different degradation rates for individual waste components (carbohydrates, proteins, fats, cellulose, and lignin). The biomass waste streams are generated using modified "Energy Cascade" crop models, which use light- and dark-cycle temperatures, irradiance, photoperiod, [CO2], planting density, and relative humidity as model inputs. The study also includes an evaluation of equivalent system mass (ESM).
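
    A minimal sketch of the first-order degradation kinetics described above, with one rate constant per waste component; the rate values below are illustrative, not the study's calibrated parameters.

    # Hedged sketch: per-component first-order composting kinetics.
    import numpy as np

    rates = {"carbohydrate": 0.10, "protein": 0.08, "fat": 0.05,
             "cellulose": 0.02, "lignin": 0.005}     # 1/day, placeholders
    mass = {k: 1.0 for k in rates}                   # kg of each component

    dt, t_end = 0.5, 30.0                            # days
    for _ in np.arange(0.0, t_end, dt):
        for comp, k in rates.items():
            mass[comp] *= np.exp(-k * dt)            # exact first-order update

    print({c: round(m, 3) for c, m in mass.items()})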

  7. Advances in the Thermodynamics of Ideal Gases by Means of Computer Simulations

    NASA Astrophysics Data System (ADS)

    Sands, David; Dunning-Davies, Jeremy

    2010-12-01

    Irreversible thermodynamic processes in ideal gases are investigated by computer simulations of the compound piston. A hard-sphere model of the gas on either side of a compound piston shows that damping occurs naturally without invoking extraneous mechanisms such as friction. Inter-particle collisions are identified as being responsible, as these redistribute the particle energies by altering all the components of momentum. In collisions with the piston, on the other hand, only the component of particle momentum in the direction of the piston motion is affected. Thus inter-particle collisions effectively dissipate the energy of the piston. These ideas are then incorporated into a simpler, one-dimensional model based on kinetic theory, in which all the particles have the same initial energy and inter-particle collisions are simulated by randomly adjusting the energy distribution. Varying the rate of energy redistribution alters the rate of decay of the piston motion. In addition, this simple model allows thermal interactions with the walls of the vessel to be simulated easily, and we observe a second mechanism of damping due to delayed heating and cooling. These ideas lead directly to a macroscopic formulation of thermodynamics in terms of rate equations. The models give an insight into the micro-dynamical origins of irreversibility in ideal gases and allow the thermodynamics of these irreversible processes to be investigated. We find surprisingly simple relationships between the volume changes and characteristic pressures in the system. Finally, we apply these ideas to the Carnot cycle and show that a dynamic cycle is executed if the piston is allowed to move under alternately ideal isothermal and adiabatic conditions. In this dynamic Carnot cycle not only is work done but power is developed through the motion of the piston. The implications for classical thermodynamics are discussed briefly.
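
    The random energy-redistribution device used to mimic inter-particle collisions can be sketched as follows; the particle count, pairing rate, and uniform energy split are illustrative choices, not the paper's exact scheme.

    # Hedged sketch: mimic inter-particle collisions by randomly re-pairing
    # particles and resampling how each pair's total energy is split.
    import numpy as np

    rng = np.random.default_rng(1)
    E = np.full(1000, 1.0)                  # equal initial particle energies

    def redistribute(E, n_pairs, rng):
        """Randomly split the summed energy of randomly chosen disjoint pairs."""
        idx = rng.choice(E.size, size=(n_pairs, 2), replace=False)
        total = E[idx[:, 0]] + E[idx[:, 1]]
        frac = rng.random(n_pairs)          # random share for the first partner
        E[idx[:, 0]] = frac * total
        E[idx[:, 1]] = (1.0 - frac) * total
        return E

    for _ in range(2000):                   # vary the rate to vary the damping
        E = redistribute(E, n_pairs=50, rng=rng)

    print(E.mean(), E.std())   # total energy conserved, a spread develops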

  8. Simultaneous Multiple-Jet Impacts in Concrete-Experiments and Advanced Computational Simulations

    SciTech Connect

    Baum, D.W.; Kuklo, R.M.; Routh, J.W.; Simonson, S.C.

    1999-08-12

    The simultaneous impact of multiple shaped-charge jets on a concrete target has been observed experimentally to lead to the formation of a larger and deeper entrance crater than would be expected from the superposition of the craters of the individual jets. The problem has been modeled with the 3-D simulation code ALE3D, running on massively parallel processors. These calculations indicate that the enlarged damage area is the result of tensile stresses caused by the interactions among the pressure waves simultaneously emanating from the three impact sites. This phenomenon has the potential for enhancing the penetration of a follow-on projectile.

  9. The dynamic information architecture system : an advanced simulation framework for military and civilian applications.

    SciTech Connect

    Campbell, A. P.; Hummel, J. R.

    1998-01-08

    DIAS, the Dynamic Information Architecture System, is an object-oriented simulation system that was designed to provide an integrating framework in which new or legacy software applications can operate in a context-driven frame of reference. DIAS provides a flexible and extensible mechanism to allow disparate, and mixed language, software applications to interoperate. DIAS captures the dynamic interplay between different processes or phenomena in the same frame of reference. Finally, DIAS accommodates a broad range of analysis contexts, with widely varying spatial and temporal resolutions and fidelity.

  10. Advanced Distributed Simulation Technology II (ADST-II) Maneuver Support Testbed. Final Report

    DTIC Science & Technology

    2007-11-02

    [Fragmentary hardware/work-breakdown listing: Pentium Pro workstation with sound system; ASTi radio set, 8 channels with 4 headsets (WBS 1.2.5); control desk with Stealth, ModSAF, video switch, and data logger; ASTi and SINCGARS radios (WBS 1.2.7); 37-inch Mitsubishi displays; Stealth image generators (STRIPES, MetaVR, TASC, NPS); ASTi PC system radio simulators (1.2.7); Borland C++ compiler, Dial A Tank software, and Microsoft Office suite as miscellaneous site hardware/software (1.2.10).]

  11. Utilizing NX Advanced Simulation for NASA's New Mobile Launcher for Ares-I

    NASA Technical Reports Server (NTRS)

    Brown, Christopher

    2010-01-01

    This slide presentation reviews the use of NX to simulate the new Mobile Launcher (ML) for the Ares I. It includes: a comparison of the sizes of the Saturn V, the Space Shuttle, the Ares I, and the Ares V, with their heights and payload capabilities; the loads control plan; drawings of the base framing, the underside of the ML, the beam arrangement, and the finished base; and the origin of the 3D CAD data. It also reviews the modeling approach, meshing, the assembly finite element model, the model summary, and beam improvements.

  12. Researcher's guide to the NASA Ames Flight Simulator for Advanced Aircraft (FSAA)

    NASA Technical Reports Server (NTRS)

    Sinacori, J. B.; Stapleford, R. L.; Jewell, W. F.; Lehman, J. M.

    1977-01-01

    Performance, limitations, supporting software, and current checkout and operating procedures are presented for the flight simulator, in terms useful to the researcher who intends to use it. Suggestions to help the researcher prepare the experimental plan are also given. The FSAA's central computer, cockpit, and visual and motion systems are addressed individually, but their interaction is considered as well. Data required, available options, user responsibilities, and occupancy procedures are given in a form that facilitates the initial communication required with the NASA operations group.

  13. Application of Advanced Concepts and Techniques in Electromagnetic Topology Based Simulations: CRIPTE and Related Codes

    DTIC Science & Technology

    2008-12-01

    ...simulation validation process. The experimental setup used to study the interaction of the electromagnetic field with an aperture is seen in Fig. IVA-1. The... distance of 3 meters from the metallic plate to satisfy the far-field condition at low frequencies. [Fig. IVA-1: Experiment setup, semi-anechoic chamber.] ...The transfer function is then calculated from the recorded electric fields as H_ap(ω) = E_trans,all(ω) / E_M(ω) (IVA-1), where H_ap(ω) is the

  14. Development of an Advanced Stimulation / Production Predictive Simulator for Enhanced Geothermal Systems

    SciTech Connect

    Pritchett, John W.

    2015-04-15

    There are several well-known obstacles to the successful deployment of EGS projects on a commercial scale. EGS projects are expected to be deeper, on average, than conventional "natural" geothermal reservoirs, and drilling costs are already a formidable barrier to conventional geothermal projects. Unlike conventional resources (which frequently announce their presence with natural manifestations such as geysers, hot springs and fumaroles), EGS prospects are likely to appear fairly undistinguished from the earth's surface. And the probable necessity of fabricating a subterranean fluid circulation network to mine the heat from the rock (instead of simply relying on natural, pre-existing permeable fractures) adds a significant degree of uncertainty to the prospects for success. Accordingly, the basic motivation for the work presented herein was to develop a new set of tools more suitable for this purpose. Several years ago, the Department of Energy's Geothermal Technologies Office recognized this need and funded a cost-shared grant to our company (then SAIC, now Leidos) to partner with Geowatt AG of Zurich, Switzerland and undertake the development of a new reservoir simulator more suitable for EGS forecasting than the existing tools. That project has now been completed and a new numerical geothermal reservoir simulator has been developed. It is named "HeatEx" (for "Heat Extraction") and is almost completely new, although its methodology owes a great deal to previous geothermal software development efforts, including Geowatt's "HEX-S" code, the STAR and SPFRAC simulators developed here at SAIC/Leidos, the MINC approach originally developed at LBNL, and tracer analysis software originally formulated at INEL. Furthermore, the development effort was led by engineers with many years of experience in using reservoir simulation software to make meaningful forecasts for real geothermal

  15. Toward petascale computing in geosciences: application to the Hanford 300 area

    NASA Astrophysics Data System (ADS)

    Hammond, G. E.; Lichtner, P. C.; Mills, R. T.; Lu, C.

    2008-07-01

    Modeling uranium transport at the Hanford 300 Area presents new challenges for high performance computing. A field-scale three-dimensional domain with an hourly fluctuating Columbia river stage coupled to flow in highly permeable sediments results in fast groundwater flow rates requiring small time steps. In this work, high-performance computing has been applied to simulate variably saturated groundwater flow and tracer transport at the 300 Area using PFLOTRAN. Simulation results are presented for discretizations up to 10.8 million degrees of freedom, while PFLOTRAN performance was assessed on up to one billion degrees of freedom and 12,000 processor cores on Jaguar, the Cray XT4 supercomputer at ORNL.

  16. Toward petascale computing in geosciences: application to the Hanford 300 area

    SciTech Connect

    Hammond, Glenn; Lichtner, Peter; Mills, Richard T; Lu, Chuan

    2008-01-01

    Modeling uranium transport at the Hanford 300 Area presents new challenges for high performance computing. A field-scale three-dimensional domain with an hourly fluctuating Columbia river stage coupled to flow in highly permeable sediments results in fast groundwater flow rates requiring small time steps. In this work, high-performance computing has been applied to simulate variably saturated groundwater flow and tracer transport at the 300 Area using PFLOTRAN. Simulation results are presented for discretizations up to 10.8 million degrees of freedom, while PFLOTRAN performance was assessed on up to one billion degrees of freedom and 12,000 processor cores on Jaguar, the Cray XT4 supercomputer at ORNL.

  17. Toward petascale computing in geosciences: application to the Hanford 300 Area

    SciTech Connect

    Hammond, Glenn E.; Lichtner, Peter C.; Mills, Richard T.; Lu, Chuan

    2008-09-01

    Modeling uranium transport at the Hanford 300 Area presents new challenges for high performance computing. A field-scale three-dimensional domain with an hourly fluctuating Columbia river stage coupled to flow in highly permeable sediments results in fast groundwater flow rates requiring small time steps. In this work, high-performance computing has been applied to simulate variably saturated groundwater flow and tracer transport at the 300 Area using PFLOTRAN. Simulation results are presented for discretizations up to 10.8 million degrees of freedom, while PFLOTRAN performance was assessed on up to one billion degrees of freedom and 12,000 processor cores on Jaguar, the Cray XT4 supercomputer at ORNL.

  18. A Petascale Non-Hydrostatic Atmospheric Dynamical Core in the HOMME Framework

    SciTech Connect

    Tufo, Henry

    2015-09-15

    The High-Order Method Modeling Environment (HOMME) is a framework for building scalable, conservative atmospheric models for climate simulation and general atmospheric-modeling applications. Its spatial discretizations are based on Spectral-Element (SE) and Discontinuous Galerkin (DG) methods. These are local methods employing high-order accurate spectral basis functions that have been shown to perform well on massively parallel supercomputers at any resolution, and to scale particularly well at high resolutions. HOMME provides the framework upon which the CAM-SE community atmosphere model dynamical core is constructed. In its current incarnation, CAM-SE employs the hydrostatic primitive equations (PE) of motion, which limits its resolution to simulations coarser than 0.1° per grid cell. The primary objective of this project is to remove this resolution limitation by providing HOMME with the capabilities needed to build nonhydrostatic models that solve the compressible Euler/Navier-Stokes equations.

  19. Mastery Learning of Advanced Cardiac Life Support Skills by Internal Medicine Residents Using Simulation Technology and Deliberate Practice

    PubMed Central

    Wayne, Diane B; Butter, John; Siddall, Viva J; Fudala, Monica J; Wade, Leonard D; Feinglass, Joe; McGaghie, William C

    2006-01-01

    BACKGROUND Internal medicine residents must be competent in advanced cardiac life support (ACLS) for board certification. OBJECTIVE To use a medical simulator to assess postgraduate year 2 (PGY-2) residents' baseline proficiency in ACLS scenarios and evaluate the impact of an educational intervention grounded in deliberate practice on skill development to mastery standards. DESIGN Pretest-posttest design without control group. After baseline evaluation, residents received 4, 2-hour ACLS education sessions using a medical simulator. Residents were then retested. Residents who did not achieve a research-derived minimum passing score (MPS) on each ACLS problem had more deliberate practice and were retested until the MPS was reached. PARTICIPANTS Forty-one PGY-2 internal medicine residents in a university-affiliated program. MEASUREMENTS Observational checklists based on American Heart Association (AHA) guidelines with interrater and internal consistency reliability estimates; deliberate practice time needed for residents to achieve minimum competency standards; demographics; United States Medical Licensing Examination Step 1 and Step 2 scores; and resident ratings of program quality and utility. RESULTS Performance improved significantly after simulator training. All residents met or exceeded the mastery competency standard. The amount of practice time needed to reach the MPS was a powerful (negative) predictor of posttest performance. The education program was rated highly. CONCLUSIONS A curriculum featuring deliberate practice dramatically increased the skills of residents in ACLS scenarios. Residents needed different amounts of training time to achieve minimum competency standards. Residents enjoy training, evaluation, and feedback in a simulated clinical environment. This mastery learning program and other competency-based efforts illustrate outcome-based medical education that is now prominent in accreditation reform of residency education. PMID:16637824

  20. Advances in Rotor Performance and Turbulent Wake Simulation Using DES and Adaptive Mesh Refinement

    NASA Technical Reports Server (NTRS)

    Chaderjian, Neal M.

    2012-01-01

    Time-dependent Navier-Stokes simulations have been carried out for a rigid V22 rotor in hover, and a flexible UH-60A rotor in forward flight. Emphasis is placed on understanding and characterizing the effects of high-order spatial differencing, grid resolution, and Spalart-Allmaras (SA) detached eddy simulation (DES) in predicting the rotor figure of merit (FM) and resolving the turbulent rotor wake. The FM was accurately predicted within experimental error using SA-DES. Moreover, a new adaptive mesh refinement (AMR) procedure revealed a complex and more realistic turbulent rotor wake, including the formation of turbulent structures resembling vortical worms. Time-dependent flow visualization played a crucial role in understanding the physical mechanisms involved in these complex viscous flows. The predicted vortex core growth with wake age was in good agreement with experiment. High-resolution wakes for the UH-60A in forward flight exhibited complex turbulent interactions and turbulent worms, similar to the V22. The normal force and pitching moment coefficients were in good agreement with flight-test data.

  1. Development of advanced computational fluid dynamics tools and their application to simulation of internal turbulent flows

    NASA Astrophysics Data System (ADS)

    Emelyanov, V. N.; Karpenko, A. G.; Volkov, K. N.

    2015-06-01

    Modern graphics processing units (GPUs) provide architectures and new programming models that make it possible to harness their large processing power and to design computational fluid dynamics (CFD) simulations at both high performance and low cost. Possibilities for the use of GPUs in the simulation of internal fluid flows are discussed. The finite volume method is applied to solve three-dimensional (3D) unsteady compressible Euler and Navier-Stokes equations on unstructured meshes. Compute Unified Device Architecture (CUDA) technology is used for the programming implementation of the parallel computational algorithms. Solutions of some fluid dynamics problems on GPUs are presented, and approaches to optimization of the CFD code related to the use of different types of memory are discussed. The speedup of the solution on GPUs with respect to the solution on a central processing unit (CPU) is compared for different meshes and different methods of distributing input data into blocks. Performance measurements show that the numerical schemes developed achieve a 20- to 50-fold speedup on GPU hardware compared to the CPU reference implementation. The results obtained provide a promising perspective for designing a GPU-based software framework for CFD applications.
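
    The data-parallel structure that makes finite-volume updates map well onto GPU threads (one thread per cell, every cell updated independently from precomputed face fluxes) can be illustrated in vectorized form; this 1D linear-advection sketch with a Rusanov flux is a generic illustration, not the authors' 3D unstructured solver.

    # Hedged sketch: the per-cell-independent update pattern behind GPU
    # finite-volume solvers, shown vectorized in NumPy.
    import numpy as np

    def step(u, dt, dx, a=1.0):
        """One explicit finite-volume step for u_t + a u_x = 0."""
        uL, uR = u[:-1], u[1:]                                   # interior face states
        flux = 0.5 * a * (uL + uR) - 0.5 * abs(a) * (uR - uL)    # Rusanov flux
        flux = np.concatenate(([a * u[0]], flux, [a * u[-1]]))   # crude boundaries
        return u - dt / dx * (flux[1:] - flux[:-1])              # independent per cell

    x = np.linspace(0.0, 1.0, 201)
    u = np.exp(-200.0 * (x - 0.3) ** 2)      # Gaussian pulse initial condition
    for _ in range(100):
        u = step(u, dt=0.002, dx=x[1] - x[0])   # CFL = 0.4
    print(u.max())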

  2. "Partial Panel" Operator Training: Advanced Simulator Training to Enhance Situational Awareness in Off-Normal Situations

    SciTech Connect

    Dagle, Jeffery E.

    2006-06-01

    On August 14, 2003, the largest blackout in the history of the North American electricity grid occurred. The four root causes identified by the blackout investigation team were inadequate system understanding, inadequate situational awareness, inadequate tree trimming, and inadequate reliability coordinator diagnostic support. Three of these four root causes can be attributed to deficiencies in training, communication, and the tools used by the control room operators. Using the issues revealed in the August 14, 2003 blackout, and addressing concerns associated with the security of control systems, the Pacific Northwest National Laboratory (PNNL) developed a hands-on training curriculum that utilizes a dispatcher training simulator to evoke loss of situational awareness in the dispatcher. PNNL made novel modifications to the dispatcher training software in order to accomplish this training. This presentation will describe a vision for a future training environment that will incorporate hands-on training with a dispatcher training simulator in a realistic environment to train operators to recognize and respond to cyber security issues associated with their control systems.

  3. Advanced computational simulation for design and manufacturing of lightweight material components for automotive applications

    SciTech Connect

    Simunovic, S.; Aramayo, G.A.; Zacharia, T.; Toridis, T.G.; Bandak, F.; Ragland, C.L.

    1997-04-01

    Computational vehicle models for the analysis of lightweight material performance in automobiles have been developed through collaboration between Oak Ridge National Laboratory, the National Highway Transportation Safety Administration, and George Washington University. The vehicle models have been verified against experimental data obtained from vehicle collisions. The crashed vehicles were analyzed, and the main impact energy dissipation mechanisms were identified and characterized. Important structural parts were extracted and digitized and directly compared with simulation results. High-performance computing played a key role in the model development because it allowed for rapid computational simulations and model modifications. The deformation of the computational model shows a very good agreement with the experiments. This report documents the modifications made to the computational model and relates them to the observations and findings on the test vehicle. Procedural guidelines are also provided that the authors believe need to be followed to create realistic models of passenger vehicles that could be used to evaluate the performance of lightweight materials in automotive structural components.

  4. Simulating the dynamic behavior of chain drive systems by advanced CAE programs

    SciTech Connect

    Ross, J.; Meyer, J.

    1996-09-01

    Due to the increased requirements for chain drive systems of 4-stroke internal combustion engines, CAE tools are necessary to design the optimum dynamic system. In comparison to models used in the past, the advantage of the new model CDD (Chain Drive Dynamics) is its capability of simulating the trajectory of each chain link around the drive system. Each chain link is represented by a mass with two degrees of freedom and is coupled to the next by a spring-damper element. The drive sprocket can be moved with a constant or non-constant speed. As in reality, the other sprockets are driven by the running chain and can be excited by torques. Due to these unique model features, it is possible to calculate all vibration types of the chain, polygon effects, and radial or angular vibrations of the sprockets very accurately. The model also includes the detailed simulation of a mechanical or a hydraulic tensioner. The method is ready to be coupled to other detailed calculation models (e.g. valve train systems, crankshaft, etc.). The high efficiency of the tool in predicting the dynamic and acoustic behavior of a chain drive system is demonstrated in comparison to measurements.
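
    The link-by-link modeling idea can be sketched as a closed loop of point masses joined by spring-damper elements; the stiffness, damping, and driven-sprocket boundary condition below are illustrative stand-ins for CDD's detailed model.

    # Hedged sketch: chain links as masses coupled by spring-damper elements,
    # in 1D arc-length coordinates around a closed loop.
    import numpy as np
    from scipy.integrate import solve_ivp

    n, m, k, c, L0 = 20, 0.05, 1e5, 5.0, 0.01   # links, mass, stiffness, damping, rest length

    def rhs(t, y):
        s, v = y[:n], y[n:]                 # positions and velocities along the chain
        ds = np.roll(s, -1) - s             # stretch to the next link (closed loop)
        ds[-1] += n * L0                    # unwrap the loop closure
        dv = np.roll(v, -1) - v
        f_next = k * (ds - L0) + c * dv     # force from the following link
        f_prev = -np.roll(f_next, 1)        # reaction from the previous link
        a = (f_next + f_prev) / m
        a[0] = 0.0                          # link 0 driven at constant sprocket speed
        return np.concatenate([v, a])

    s0 = np.arange(n) * L0
    v0 = np.full(n, 1.0); v0[5] += 0.1      # small velocity perturbation
    sol = solve_ivp(rhs, (0.0, 0.05), np.concatenate([s0, v0]), max_step=1e-4)
    print(sol.y[n:, -1])                    # link velocities after the transient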

  5. Simulated flight acoustic investigation of treated ejector effectiveness on advanced mechanical suppressors for high velocity jet noise reduction

    NASA Technical Reports Server (NTRS)

    Brausch, J. F.; Motsinger, R. E.; Hoerst, D. J.

    1986-01-01

    Ten scale-model nozzles were tested in an anechoic free-jet facility to evaluate the acoustic characteristics of a mechanically suppressed inverted-velocity-profile coannular nozzle with an acoustically treated ejector system. The nozzle system used was developed from aerodynamic flow lines evolved in a previous contract, defined to incorporate the restraints imposed by the aerodynamic performance requirements of an Advanced Supersonic Technology/Variable Cycle Engine system through all its mission phases. Acoustic data for 188 test points were obtained, 87 under static and 101 under simulated flight conditions. The tests investigated the variables of hardwall ejector application to a coannular nozzle with a 20-chute outer annular suppressor, ejector axial positioning, treatment application to ejector and plug surfaces, and treatment design. Laser velocimeter, shadowgraph photography, aerodynamic static pressure, and temperature measurements were acquired on select models to yield diagnostic information regarding the flow field and aerodynamic performance characteristics of the nozzles.

  6. Technology Advancements for Active Remote Sensing of Carbon Dioxide from Space using the ASCENDS CarbonHawk Experiment Simulator

    NASA Astrophysics Data System (ADS)

    Obland, M. D.; Nehrir, A. R.; Liu, Z.; Chen, S.; Campbell, J. F.; Lin, B.; Kooi, S. A.; Fan, T. F.; Choi, Y.; Plant, J.; Yang, M. M.; Browell, E. V.; Harrison, F. W.; Meadows, B.; Dobler, J. T.; Zaccheo, T. S.

    2015-12-01

    This work describes advances in critical lidar technologies and techniques developed as part of the ASCENDS CarbonHawk Experiment Simulator (ACES) system for measuring atmospheric column carbon dioxide (CO2) mixing ratios in support of the NASA Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) mission. The ACES design demonstrates advancements in: (1) enhanced power-aperture product through the use and operation of multiple co-aligned laser transmitters and a multi-aperture telescope design; (2) high-efficiency, high-power Erbium-Doped Fiber Amplifiers (EDFAs); (3) a high-bandwidth, low-noise HgCdTe detector and transimpedance amplifier (TIA) subsystem capable of long-duration operation; and (4) advanced algorithms for cloud and aerosol discrimination. The ACES instrument, an Intensity-Modulated Continuous-Wave (IM-CW) lidar, was designed for high-altitude aircraft operations and can be directly applied to space instrumentation to meet the ASCENDS mission requirements. Specifically, the lidar simultaneously transmits three IM-CW laser beams from the high power EDFAs operating near 1571 nm. The outgoing laser beams are aligned to the fields of view of three fiber-coupled 17.8-cm diameter telescopes, and the backscattered light collected by the same three telescopes is sent to the detector/TIA subsystem, which has a bandwidth of 4.9 MHz and operates service-free with a tactical Dewar and cryocooler. The electronic bandwidth is only slightly higher than 1 MHz, effectively limiting the noise level. Two key laser modulation approaches are being tested to significantly mitigate the effects of thin clouds on the retrieved CO2 column amounts. This work provides an overview of these technologies, the modulation approaches, and results from recent test flights.

  7. Advances in Computational Radiation Biophysics for Cancer Therapy: Simulating Nano-Scale Damage by Low-Energy Electrons

    NASA Astrophysics Data System (ADS)

    Kuncic, Zdenka

    2015-10-01

    Computational radiation biophysics is a rapidly growing area that is contributing, alongside new hardware technologies, to ongoing developments in cancer imaging and therapy. Recent advances in theoretical and computational modeling have enabled the simulation of discrete, event-by-event interactions of very low energy (≪ 100 eV) electrons with water in its liquid thermodynamic phase. This represents a significant advance in our ability to investigate the initial stages of radiation induced biological damage at the molecular level. Such studies are important for the development of novel cancer treatment strategies, an example of which is given by microbeam radiation therapy (MRT). Here, new results are shown demonstrating that when excitations and ionizations are resolved down to nano-scales, their distribution extends well outside the primary microbeam path, into regions that are not directly irradiated. This suggests that radiation dose alone is insufficient to fully quantify biological damage. These results also suggest that the radiation cross-fire may be an important clue to understanding the different observed responses of healthy cells and tumor cells to MRT.

  9. Physics and Chemistry of Advanced Nanoscale Materials: Experiment, Simulation, and Theory

    NASA Astrophysics Data System (ADS)

    Kiang, Ching-Hwa

    1995-01-01

    This thesis discusses simulation and theory of lattice dynamics as well as experiments on novel forms of carbon. A new crystalline AgBr interaction potential was constructed by fitting literature experimental data. The shell model was successfully used to account for the polarizabilities of the ions. This approach overcame difficulties previous investigators faced in determining the AgBr potential. The very useful shell model was generalized to allow, for the first time, its use in dynamical simulations. The rapid shell dynamics, simulating the electron polarization, were integrated out in a generalized Born-Oppenheimer-like approach. Effective Hamiltonians were derived for both quantum and classical descriptions of the shells. The first crystallization and characterization of a metallofullerene were performed. Endohedral metallofullerenes containing metals such as Sc, Y, and Er, which form stable compounds in fullerene cages, were synthesized and the products purified. The crystal structure of Sc₂C₈₄ was determined by transmission electron microscopy. Experimental studies on fullerenes and related materials led to the first example of a catalytically grown, fullerene-like material. We discovered that single-layer carbon nanotubes can be produced by vaporizing cobalt and carbon with an electric arc in a helium atmosphere. Catalyst promoters such as sulfur, bismuth, and lead were found not only to enhance the yield of single-layer nanotubes but also to produce tubes in a diameter range not accessible with cobalt alone. Sulfur, bismuth, and tungsten were found to catalyze the formation of cobalt crystals encapsulated in graphitic polyhedra. Various carbon structures were also produced concurrently, e.g. multilayer nanotubes, strings of carbon nanocompartments, carbon nanofibers, and metal-filled nanomaterials. Nanotubes were observed to undergo real-time structural changes under electron beam heating. A growth model of single

  10. Advancements in the Coupling of State-of-the-Art Energetic Particle and Magnetohydrodynamic Simulations

    NASA Astrophysics Data System (ADS)

    Gorby, M.

    2015-12-01

    Recent advancements in coupling the Earth Moon Mars Radiation Environment Module (EMMREM) and two MHD models, Magnetohydrodynamics Around a Sphere (MAS) and ENLIL, have yielded promising results for predicting differential energy flux and radiation doses at 1 AU. The EMMREM+MAS coupling focuses on the details of particle acceleration due to CMEs initiated low in the corona (1 Rs to 20 Rs). The EMMREM+ENLIL coupling gives results for CMEs initiated at ~20 Rs and is part of a predictive capability being developed in conjunction with the CCMC. The challenge in forming large solar energetic particle events, whether in the prompt scenario low in the corona or for a gradual CME farther out, is to have enhanced scattering within the acceleration regions while also allowing for efficient escape of accelerated particles downstream. We present details of the MHD parameters and topology of a CME around the acceleration regions during its early evolution (below 2 Rs), dose and flux predictions at 1 AU, and how compression regions versus shocks affect the evolution and spectrum of an SEP event.

  11. Advances in the simulation of toroidal gyro Landau fluid model turbulence

    SciTech Connect

    Waltz, R.E.; Kerbel, G.D.; Milovich, J.; Hammett, G.W.

    1994-12-01

    The gyro-Landau fluid (GLF) model equations for toroidal geometry have recently been applied to the study of ion temperature gradient (ITG) mode turbulence using the 3D nonlinear ballooning mode representation (BMR). The present paper extends this work by treating some unresolved issues concerning ITG turbulence with adiabatic electrons. Although eddies are highly elongated in the radial direction, long-time radial correlation lengths are short and comparable to poloidal lengths. Although transport at vanishing shear is not particularly large, transport at reversed global shear is significantly less. Electrostatic transport at moderate shear is not much affected by the inclusion of local shear and average favorable curvature. Transport is suppressed when the critical E×B rotational shear is comparable to the maximum linear growth rate, with only a weak dependence on magnetic shear. Self-consistent turbulent transport of toroidal momentum can result in a transport bifurcation at sufficiently large r/(Rq). However, the main thrust of the new formulation in this paper deals with advances in the development of finite-beta GLF models with trapped electrons and BMR numerical methods for treating the fast parallel field motion of the untrapped electrons.
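
    As a minimal illustration of the E×B shear suppression described above, the toy rule below reduces a reference diffusivity linearly with the ratio of shearing rate to maximum linear growth rate; the linear form and the coefficient alpha are illustrative assumptions, not the GLF model's actual quench rule.

      def quenched_diffusivity(chi0, gamma_exb, gamma_max, alpha=1.0):
          # Toy quench rule: transport falls linearly with the ratio of the
          # E x B shearing rate to the maximum linear growth rate and is
          # fully suppressed once shearing wins (illustrative only).
          return chi0 * max(0.0, 1.0 - alpha * gamma_exb / gamma_max)

      print(quenched_diffusivity(1.0, 0.5, 1.0))   # -> 0.5
      print(quenched_diffusivity(1.0, 1.2, 1.0))   # -> 0.0 (suppressed)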

  12. GPU-advanced 3D electromagnetic simulations of superconductors in the Ginzburg–Landau formalism

    SciTech Connect

    Stošić, Darko; Stošić, Dušan; Ludermir, Teresa

    2016-10-01

    Ginzburg–Landau theory is one of the most powerful phenomenological theories in physics, with particular predictive value in superconductivity. The formalism solves coupled nonlinear differential equations for both the electronic and magnetic response of a given superconductor to external electromagnetic excitations. With the order parameter varying on the short scale of the coherence length, and the magnetic field being long-range, the numerical handling of 3D simulations becomes extremely challenging and time-consuming for realistic samples. Here we show precisely how one can employ graphics-processing units (GPUs) for this type of calculation, and obtain physics answers of interest in a reasonable time frame, with a speedup of over 100× compared to the best available CPU implementations of the theory on a 256³ grid.
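
    A serial toy version of the kind of stencil update such a GPU code parallelizes is sketched below: a dimensionless time-dependent Ginzburg–Landau relaxation of the order parameter alone (zero applied field) on a small periodic grid. The grid size, time step, and initial noise are arbitrary illustrative choices; the actual solver couples the order parameter to the magnetic field in 3D.

      import numpy as np

      # Dimensionless 2D time-dependent Ginzburg-Landau relaxation,
      # order parameter only (zero field):  d(psi)/dt = Lap(psi) + (1-|psi|^2) psi
      N, dt, h = 128, 0.01, 0.5
      rng = np.random.default_rng(0)
      psi = 0.1 * (rng.standard_normal((N, N)) + 1j*rng.standard_normal((N, N)))

      def laplacian(f, h):
          # 5-point stencil with periodic boundaries
          return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                  np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4*f) / h**2

      for step in range(2000):
          psi += dt * (laplacian(psi, h) + (1.0 - np.abs(psi)**2) * psi)

      print("mean |psi|^2 =", float(np.mean(np.abs(psi)**2)))  # approaches 1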

  13. Advances in quantum simulations of ATPase catalysis in the myosin motor.

    PubMed

    Kiani, Farooq Ahmad; Fischer, Stefan

    2015-04-01

    During its contraction cycle, the myosin motor catalyzes the hydrolysis of ATP. Several combined quantum/classical mechanics (QM/MM) studies of this step have been published, which have substantially contributed to our thinking about the catalytic mechanism. The methodological difficulties encountered over the years in the simulation of this complex reaction are now understood: (a) polarization of the protein peptide groups surrounding the highly charged ATP⁴⁻ cannot be neglected; (b) some unsuspected protein groups need to be treated quantum mechanically; (c) interactions with the γ-phosphate versus the β-phosphate favor a concurrent versus a sequential mechanism, respectively. Thus, these practical aspects strongly influence the computed mechanism and should be considered when studying other catalyzed phosphoester hydrolysis reactions, such as in ATPases or GTPases.

  14. Numerical simulation of the reactive flow in advanced (HSR) combustors using KIVA-2

    NASA Technical Reports Server (NTRS)

    Winowich, Nicholas S.

    1991-01-01

    Recent work has been done with the goal of establishing ultralow-emission aircraft gas turbine combustors. A significant portion of the effort is the development of three-dimensional computational combustor models. The KIVA-II computer code, which is based on the Implicit Continuous Eulerian Difference mesh Arbitrary Lagrangian Eulerian (ICED-ALE) numerical scheme, is one of the codes selected by NASA to achieve these goals. This report involves a simulation of jet injection through slanted slots within the Rich burn/Quick quench/Lean burn (RQL) baseline experimental rig. The RQL combustor distinguishes three regions of combustion. This work specifically focuses on modeling the quick-quench mixer region, in which secondary injection air is introduced radially through 12 equally spaced slots around the mixer circumference. Steady-state solutions are achieved with modifications to the KIVA-II program. Work currently underway will evaluate thermal mixing as a function of injection air velocity and the angle of inclination of the slots.

  15. Advances in molecular dynamics simulation of ultra-precision machining of hard and brittle materials

    NASA Astrophysics Data System (ADS)

    Guo, Xiaoguang; Li, Qiang; Liu, Tao; Kang, Renke; Jin, Zhuji; Guo, Dongming

    2016-12-01

    Hard and brittle materials, such as silicon, SiC, and optical glasses, are widely used in aerospace, military, integrated-circuit, and other fields because of their excellent physical and chemical properties. However, these materials display poor machinability because of their hardness and brittleness; damage such as surface micro-cracking and subsurface damage often occurs during their machining. Ultra-precision machining is widely used in processing hard and brittle materials to obtain nanoscale machining quality, but the theoretical mechanism underlying this method remains unclear. This paper reviews present research on the molecular dynamics simulation of ultra-precision machining of hard and brittle materials. Future trends in this field are also discussed.

  16. Numerical Simulations of Optical Turbulence Using an Advanced Atmospheric Prediction Model: Implications for Adaptive Optics Design

    NASA Astrophysics Data System (ADS)

    Alliss, R.

    2014-09-01

    Optical turbulence (OT) acts to distort light in the atmosphere, degrading imagery from astronomical telescopes and reducing the data quality of optical imaging and communication links. Some of the degradation due to turbulence can be corrected by adaptive optics. However, the severity of optical turbulence, and thus the amount of correction required, is largely dependent upon the turbulence at the location of interest. Therefore, it is vital to understand the climatology of optical turbulence at such locations. In many cases, it is impractical and expensive to set up instrumentation to characterize the climatology of OT, so numerical simulations become a less expensive and convenient alternative. The strength of OT is characterized by the refractive index structure parameter Cn², which in turn is used to calculate atmospheric seeing parameters. While attempts have been made to characterize Cn² using empirical models, Cn² can be calculated more directly from Numerical Weather Prediction (NWP) simulations using pressure, temperature, thermal stability, vertical wind shear, turbulent Prandtl number, and turbulence kinetic energy (TKE). In this work we use the Weather Research and Forecast (WRF) NWP model to generate Cn² climatologies in the planetary boundary layer and free atmosphere, allowing for both point-to-point and ground-to-space seeing estimates of the Fried coherence length (r0) and other seeing parameters. Simulations are performed on a multi-node Linux cluster using the Intel chip architecture. The WRF model is configured to run at 1-km horizontal resolution and centered on the Mauna Loa Observatory (MLO) on the Big Island. The vertical resolution varies from 25 meters in the boundary layer to 500 meters in the stratosphere, and the model top is 20 km. The Mellor-Yamada-Janjic (MYJ) TKE scheme has been modified to diagnose the turbulent Prandtl number as a function of the Richardson number, following observations by Kondo and others. This modification
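
    Once a Cn² profile is in hand, seeing parameters follow from standard path integrals; a minimal sketch for the Fried coherence length at zenith, r0 = [0.423 k² ∫ Cn²(h) dh]^(-3/5), is given below. The exponential Cn² profile and the wavelength are invented placeholders standing in for the WRF-diagnosed profiles described above.

      import numpy as np

      # Illustrative Cn^2 profile (m^(-2/3)); the exponential decay with
      # height is a made-up shape, not a WRF output.
      h = np.linspace(0.0, 20e3, 200)        # height above the site (m)
      cn2 = 1e-16 * np.exp(-h / 1500.0)      # toy turbulence profile

      lam = 500e-9                           # observing wavelength (m)
      k = 2 * np.pi / lam
      # Path-integrated turbulence (trapezoid rule), zenith viewing
      integral = np.sum(0.5 * (cn2[1:] + cn2[:-1]) * np.diff(h))
      r0 = (0.423 * k**2 * integral) ** (-3.0 / 5.0)
      print("Fried parameter r0 = %.1f cm" % (100 * r0))   # ~25 cm here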

  17. A Damage Model for the Simulation of Delamination in Advanced Composites under Variable-Mode Loading

    NASA Technical Reports Server (NTRS)

    Turon, A.; Camanho, P. P.; Costa, J.; Davila, C. G.

    2006-01-01

    A thermodynamically consistent damage model is proposed for the simulation of progressive delamination in composite materials under variable-mode ratio. The model is formulated in the context of Damage Mechanics. A novel constitutive equation is developed to model the initiation and propagation of delamination. A delamination initiation criterion is proposed to assure that the formulation can account for changes in the loading mode in a thermodynamically consistent way. The formulation accounts for crack closure effects to avoid interfacial penetration of two adjacent layers after complete decohesion. The model is implemented in a finite element formulation, and the numerical predictions are compared with experimental results obtained in both composite test specimens and structural components.
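
    A minimal single-mode sketch of the kind of bilinear cohesive law underlying such formulations is shown below: linear elastic loading up to a damage-onset opening, then linear softening to complete decohesion, expressed through a scalar damage variable. The penalty stiffness and opening thresholds are illustrative placeholders, and the actual model adds mode-mixity dependence and crack-closure effects.

      def bilinear_traction(delta, K=1.0e5, delta0=1.0e-3, deltaf=1.0e-2):
          # Single-mode bilinear cohesive law (illustrative parameters):
          # elastic up to delta0, linear softening to zero at deltaf.
          if delta <= delta0:
              return K * delta                    # undamaged interface
          if delta >= deltaf:
              return 0.0                          # complete decohesion
          # Scalar damage variable of the standard bilinear law
          d = deltaf * (delta - delta0) / (delta * (deltaf - delta0))
          return (1.0 - d) * K * delta            # softening branch

      for delta in (5e-4, 1e-3, 5e-3, 1e-2):
          print(delta, bilinear_traction(delta))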

  18. GPU-advanced 3D electromagnetic simulations of superconductors in the Ginzburg-Landau formalism

    NASA Astrophysics Data System (ADS)

    Stošić, Darko; Stošić, Dušan; Ludermir, Teresa; Stošić, Borko; Milošević, Milorad V.

    2016-10-01

    Ginzburg-Landau theory is one of the most powerful phenomenological theories in physics, with particular predictive value in superconductivity. The formalism solves coupled nonlinear differential equations for both the electronic and magnetic response of a given superconductor to external electromagnetic excitations. With the order parameter varying on the short scale of the coherence length, and the magnetic field being long-range, the numerical handling of 3D simulations becomes extremely challenging and time-consuming for realistic samples. Here we show precisely how one can employ graphics-processing units (GPUs) for this type of calculation, and obtain physics answers of interest in a reasonable time frame, with a speedup of over 100× compared to the best available CPU implementations of the theory on a 256³ grid.

  19. Numerical simulation of fine blanking process using fully coupled advanced constitutive equations with ductile damage

    NASA Astrophysics Data System (ADS)

    Labergere, C.; Saanouni, K.; Benafia, S.; Galmiche, J.; Sulaiman, H.

    2013-05-01

    This paper presents the modelling and adaptive numerical simulation of the fine blanking process. Thermodynamically consistent constitutive equations, strongly coupled with ductile damage, together with specific boundary conditions (dedicated force control on the blank holder and counterpunch), are presented. The model is implemented in ABAQUS/EXPLICIT via the VUMAT user subroutine and connected with an adaptive 2D remeshing procedure. The material parameters are identified for the steel S600MC using experimental tensile tests conducted until final fracture. A parametric study examines the sensitivity of the punch force and the fracture-surface topology (convex zone, sheared zone, fracture zone, and burr) to process parameters such as the die radius and the die/punch clearance.

  20. Evaluation of Temperature Gradient in Advanced Automated Directional Solidification Furnace (AADSF) by Numerical Simulation

    NASA Technical Reports Server (NTRS)

    Bune, Andris V.; Gillies, Donald C.; Lehoczky, Sandor L.

    1996-01-01

    A numerical model of heat transfer combining conduction, radiation, and convection in the AADSF was used to evaluate temperature gradients in the vicinity of the crystal/melt interface for a variety of hot- and cold-zone set-point temperatures, specifically for the growth of mercury cadmium telluride (MCT). Reversed usage of the hot and cold zones was simulated to aid the choice of the proper orientation of the crystal/melt interface with respect to the residual acceleration vector, without actually changing the furnace location on board the orbiter. It appears that an additional booster heater will be extremely helpful in ensuring the desired temperature gradient when the hot and cold zones are reversed. Further efforts are required to investigate the advantages and disadvantages of a symmetrical furnace design (i.e., with similar lengths of the hot and cold zones).

  1. Decadal Simulation and Comprehensive Evaluation of CESM/CAM5 with Advanced Chemistry, Aerosol Microphysics, and Aerosol-Cloud Interactions

    NASA Astrophysics Data System (ADS)

    He, J.; Glotfelty, T.; Zhang, Y.

    2013-12-01

    The Community Earth System Model (CESM) is a global Earth system model developed by the National Center for Atmospheric Research (NCAR) to simulate the entire Earth system by coupling the physical climate system with chemistry, biogeochemistry, biology, and human systems. It can also quantify the certainties and uncertainties in Earth system feedbacks on time scales up to centuries and longer. The Community Atmosphere Model version 5.1 (CAM5.1) is the atmosphere component of CESM version 1.0.5. CESM/CAM5.1 has been applied by NCAR to simulate climate change as part of the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report (AR5). The IPCC-AR5 indicates that the uncertainties associated with clouds, aerosols, and their feedbacks, as well as uncertainties in near- and long-term projections, are emerging issues to be addressed by the scientific community. CESM/CAM5.1 has recently been further developed and improved with advanced treatments for gas-phase chemistry, aerosol chemistry and dynamics, and aerosol-cloud interactions by North Carolina State University (NCSU) to reduce the uncertainties associated with those treatments in the model predictions. Our ultimate goal is to enhance CESM/CAM5's capability in representing the current atmosphere and projecting future climate change. In this work, as the first step toward this goal, NCSU's version of CESM/CAM5 with those advanced treatments is applied for 2001-2010, which will provide valuable information about the model's capability in capturing decadal variation trends in climate and its potential in projecting future climate changes. The model simulation is conducted at a horizontal resolution of 0.9° × 1.25° and a vertical resolution of 30 layers. The simulation results based on a 10-year average are evaluated comprehensively against a variety of datasets, including global surface observations of meteorological and radiative variables; satellite observations of the column mass of chemical species and

  2. Advances in helical stent design and fabrication thermal treatment and structural interaction studies of the simulated plaque-laden artery

    NASA Astrophysics Data System (ADS)

    Welch, Tre Raymond

    Advancements in processing biomaterials have led to the development of bioresorbable PLLA drug-loaded stents with different geometric configurations. To further advance the technology, systematic studies have been carried out. This dissertation consists of five specific aims: (1) to characterize the effects of thermal annealing on the mechanical characteristics of the PLLA helical stent; (2) to characterize the mechanical characteristics of a PLLA double-helix stent; (3) to characterize the physical and chemical properties of PLLA films impregnated with niacin and curcumin; (4) to characterize the mechanical interaction of the expanded stent and the vascular wall with both model simulation and experimental studies using PDMS phantom arteries; and (5) to simulate the stent-plaque-artery interactions using computer models. Results and their significance for bioresorbable PLLA drug-loaded stent technology, as well as clinical prospects, are presented. For Aim 1, thermal annealing is shown to improve the mechanical characteristics of the helical stent, including the pressure-diameter response curves, incremental stiffness, and collapse pressure. Differential scanning calorimetric analysis of the stent fiber reveals that thermal annealing contributes to increased percent crystallinity, and thus to the enhanced mechanical characteristics of the stent. For Aim 2, the new double-helix design was shown to lead to improved mechanical characteristics of the stent, including the pressure-diameter response curves, incremental stiffness, and collapse pressure. Further, it was found to lead to an increased percent crystallinity and a reduced degradation rate. For Aim 3, the changes from control in the mechanical properties and crystallinity of PLLA polymer loaded with curcumin, niacin, or both are clearly delineated. Results from Aim 4 shed light on the mechanical disturbance in the vicinity of the deployed stent and vascular wall, as well as the abnormal shear stresses on the vascular endothelium. Their implications in

  3. On-the-Fly Kinetic Monte Carlo Simulation of Aqueous Phase Advanced Oxidation Processes.

    PubMed

    Guo, Xin; Minakata, Daisuke; Crittenden, John

    2015-08-04

    We have developed an on-the-fly kinetic Monte Carlo (KMC) model to predict the degradation mechanisms and fates of intermediates and byproducts that are produced during aqueous-phase advanced oxidation processes (AOPs). The on-the-fly KMC model is composed of a reaction pathway generator, a reaction rate constant estimator, a mechanistic reduction module, and a KMC solver. The novelty of this work is that we develop the pathway as we march forward in time rather than developing the pathway before we use the KMC method to solve the equations. As a result, we have fewer reactions to consider, and we have greater computational efficiency. We have verified this on-the-fly KMC model for the degradation of polyacrylamide (PAM) using UV light and titanium dioxide (i.e., UV/TiO2). Using the on-the-fly KMC model, we were able to predict the time-dependent profiles of the average molecular weight for PAM. The model provided detailed and quantitative insights into the time evolution of the molecular weight distribution and reaction mechanism. We also verified our on-the-fly KMC model for the destruction of (1) acetone, (2) trichloroethylene (TCE), and (3) polyethylene glycol (PEG) for the ultraviolet light and hydrogen peroxide AOP. We demonstrated that the on-the-fly KMC model can achieve the same accuracy as the computer-based first-principles KMC (CF-KMC) model, which has already been validated in our earlier work. The on-the-fly KMC is particularly suitable for molecules with large molecular weights (e.g., polymers) because the degradation mechanisms for large molecules can result in hundreds of thousands to even millions of reactions. The ordinary differential equations (ODEs) that describe the degradation pathways cannot be solved using traditional numerical methods, but the KMC can solve these equations.
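
    The KMC solver component can be illustrated by a generic Gillespie step, sketched below: propensities are summed, an exponentially distributed waiting time is drawn, and one reaction is selected with probability proportional to its propensity. The (propensity, update) pair interface is an invented convenience, not the paper's API, and the on-the-fly pathway generation that distinguishes the model is omitted.

      import math, random

      def gillespie_step(state, reactions, rng=random.random):
          # One kinetic Monte Carlo (Gillespie) step over a list of
          # (propensity_fn, apply_fn) pairs -- a generic sketch of the
          # KMC solver, not the paper's pathway generator.
          props = [p(state) for p, _ in reactions]
          total = sum(props)
          if total == 0.0:
              return None                          # no reaction can fire
          tau = -math.log(1.0 - rng()) / total     # exponential waiting time
          pick, acc = rng() * total, 0.0
          for prop, (_, apply_fn) in zip(props, reactions):
              acc += prop
              if pick < acc:
                  apply_fn(state)                  # fire selected reaction
                  break
          return tau

      # Tiny demo: A -> B with rate k*[A]
      state = {"A": 100, "B": 0}
      rxn = [(lambda s: 0.1 * s["A"],
              lambda s: s.update(A=s["A"] - 1, B=s["B"] + 1))]
      t = 0.0
      while state["A"] > 0:
          t += gillespie_step(state, rxn)
      print("all A consumed by t = %.2f" % t)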

  4. Advancing adaptive optics technology: Laboratory turbulence simulation and optimization of laser guide stars

    NASA Astrophysics Data System (ADS)

    Rampy, Rachel A.

    Since Galileo's first telescope some 400 years ago, astronomers have been building ever-larger instruments. Yet only within the last two decades has it become possible to realize the potential angular resolution of large ground-based telescopes, by using adaptive optics (AO) technology to counter the blurring effects of Earth's atmosphere. And only within the past decade has the development of laser guide stars (LGS) extended AO capabilities to observe science targets nearly anywhere in the sky. Improving turbulence simulation strategies and LGS are the two main topics of my research. In the first part of this thesis, I report on the development of a technique for manufacturing phase plates for simulating atmospheric turbulence in the laboratory. The process involves strategic application of clear acrylic paint onto a transparent substrate. Results of interferometric characterization of the plates are described and compared to Kolmogorov statistics. The range of r0 (Fried's parameter) achieved thus far is 0.2-1.2 mm at a 650 nm measurement wavelength, with a Kolmogorov power law. These plates proved valuable at the Laboratory for Adaptive Optics at the University of California, Santa Cruz, where they have been used in the Multi-Conjugate Adaptive Optics testbed, during integration and testing of the Gemini Planet Imager, and as part of the calibration system of the on-sky AO testbed named ViLLaGEs (Visible Light Laser Guidestar Experiments). I present a comparison of measurements taken by ViLLaGEs of the power spectrum of a plate and of real sky turbulence. The plate is demonstrated to follow Kolmogorov theory well, while the sky power spectrum does so in only a third of the data. This method of fabricating phase plates has been established as an effective and low-cost means of creating simulated turbulence. Due to the demand for such devices, they are now being distributed to other members of the AO community. The second topic of this thesis pertains to understanding and

  5. Science based integrated approach to advanced nuclear fuel development - integrated multi-scale multi-physics hierarchical modeling and simulation framework Part III: cladding

    SciTech Connect

    Tome, Carlos N; Caro, J A; Lebensohn, R A; Unal, Cetin; Arsenlis, A; Marian, J; Pasamehmetoglu, K

    2010-01-01

    Advancing the performance of Light Water Reactors, Advanced Nuclear Fuel Cycles, and Advanced Reactors, such as the Next Generation Nuclear Power Plants, requires enhancing our fundamental understanding of fuel and materials behavior under irradiation. The capability to accurately model the nuclear fuel systems to develop predictive tools is critical. Not only are fabrication and performance models needed to understand specific aspects of the nuclear fuel, fully coupled fuel simulation codes are required to achieve licensing of specific nuclear fuel designs for operation. The backbone of these codes, models, and simulations is a fundamental understanding and predictive capability for simulating the phase and microstructural behavior of the nuclear fuel system materials and matrices. In this paper we review the current status of the advanced modeling and simulation of nuclear reactor cladding, with emphasis on what is available and what is to be developed in each scale of the project, how we propose to pass information from one scale to the next, and what experimental information is required for benchmarking and advancing the modeling at each scale level.

  6. Petascale computation performance of lightweight multiscale cardiac models using hybrid programming models.

    PubMed

    Pope, Bernard J; Fitch, Blake G; Pitman, Michael C; Rice, John J; Reumann, Matthias

    2011-01-01

    Future multiscale and multiphysics models must use the power of high performance computing (HPC) systems to enable research into human disease, translational medical science, and treatment. Previously we showed that computationally efficient multiscale models will require the use of sophisticated hybrid programming models, mixing distributed message passing processes (e.g. the message passing interface (MPI)) with multithreading (e.g. OpenMP, POSIX pthreads). The objective of this work is to compare the performance of such hybrid programming models when applied to the simulation of a lightweight multiscale cardiac model. Our results show that the hybrid models do not perform favourably when compared to an implementation using only MPI, which is in contrast to our results using complex physiological models. Thus, with regard to lightweight multiscale cardiac models, the user may not need to increase programming complexity by using a hybrid programming approach. However, considering that model complexity will increase, as will HPC system size in both node count and number of cores per node, it is still foreseeable that we will achieve faster-than-real-time multiscale cardiac simulations on these systems using hybrid programming models.

  7. A station blackout simulation for the Advanced Neutron Source Reactor using the integrated primary and secondary system model

    SciTech Connect

    Schneider, E.A.

    1994-06-01

    The Advanced Neutron Source Reactor (ANSR) is a research reactor to be built at Oak Ridge National Laboratory. This paper deals with thermal-hydraulic analysis of the ANSR's cooling systems during nominal and transient conditions, with the major effort focusing upon the construction and testing of computer models of the reactor's primary, secondary, and reflector vessel cooling systems. The code RELAP5 was used to simulate transients, such as loss-of-coolant accidents and loss of off-site power, as well as to model the behavior of the reactor in steady state. Three stages are involved in constructing and using a RELAP5 model: (1) construction and encoding of the desired model, (2) testing and adjustment of the model until a satisfactory steady state is achieved, and (3) running actual transients using the steady-state results obtained earlier as initial conditions. By use of the ANSR design specifications, a model of the reactor's primary and secondary cooling systems has been constructed to run a transient simulating a loss of off-site power. This incident assumes a pump coastdown in both the primary and secondary loops. The results determine whether the reactor can survive the transition from forced convection to natural circulation.

  8. Advances in Turbulent Combustion Dynamics Simulations in Bluff-Body Stabilized Flames

    NASA Astrophysics Data System (ADS)

    Tovar, Jonathan Michael

    This work examines the three main aspects of bluff-body stabilized flames: stationary combustion, lean blow-out, and thermo-acoustic instabilities. For the cases of stationary combustion and lean blow-out, an improved version of the Linear Eddy Model approach is used, while in the case of thermo-acoustic instabilities, the effect of boundary conditions on the predictions is studied. The improved version couples the Linear Eddy Model with the full set of resolved-scale Large Eddy Simulation equations for continuity, momentum, energy, and species transport. In traditional implementations the species equations are generally solved using a Lagrangian method, which has some significant limitations. The novelty in this work is that the Eulerian species concentration equations are solved at the resolved scale and the Linear Eddy Model is used strictly to close the species production term. The improved Linear Eddy Model approach is applied to predict the flame properties inside the Volvo rig and is shown to over-predict the flame temperature and normalized velocity when compared to experimental data using a premixed single-step global propane reaction with an equivalence ratio of 0.65. The model is also applied to predict lean blow-out and is shown to predict a stable flame at an equivalence ratio of 0.5, whereas experiments achieve flame extinction at an equivalence ratio of 0.55. The improved Linear Eddy Model is, however, shown to be closer to experimental data than a comparable reactive flow simulation that uses laminar closure of the species source terms. The thermo-acoustic analysis is performed on a combustor rig designed at the Air Force Research Laboratory. The analysis is performed using a premixed single-step global methane reaction for laminar reactive flow and shows that imposing a non-physical boundary condition at the rig exhaust will result in the suppression of acoustic content inside the domain and can alter the temperature contours in non

  9. Performance Refactoring of Instrumentation, Measurement, and Analysis Technologies for Petascale Computing. The PRIMA Project

    SciTech Connect

    Malony, Allen D.; Wolf, Felix G.

    2014-01-31

    The growing number of cores provided by today’s high-end computing systems present substantial challenges to application developers in their pursuit of parallel efficiency. To find the most effective optimization strategy, application developers need insight into the runtime behavior of their code. The University of Oregon (UO) and the Juelich Supercomputing Centre of Forschungszentrum Juelich (FZJ) develop the performance analysis tools TAU and Scalasca, respectively, which allow high-performance computing (HPC) users to collect and analyze relevant performance data – even at very large scales. TAU and Scalasca are considered among the most advanced parallel performance systems available, and are used extensively across HPC centers in the U.S., Germany, and around the world. The TAU and Scalasca groups share a heritage of parallel performance tool research and partnership throughout the past fifteen years. Indeed, the close interactions of the two groups resulted in a cross-fertilization of tool ideas and technologies that pushed TAU and Scalasca to what they are today. It also produced two performance systems with an increasing degree of functional overlap. While each tool has its specific analysis focus, the tools were implementing measurement infrastructures that were substantially similar. Because each tool provides complementary performance analysis, sharing of measurement results is valuable to provide the user with more facets to understand performance behavior. However, each measurement system was producing performance data in different formats, requiring data interoperability tools to be created. A common measurement and instrumentation system was needed to more closely integrate TAU and Scalasca and to avoid the duplication of development and maintenance effort. The PRIMA (Performance Refactoring of Instrumentation, Measurement, and Analysis) project was proposed over three years ago as a joint international effort between UO and FZJ to

  10. Performance Refactoring of Instrumentation, Measurement, and Analysis Technologies for Petascale Computing: the PRIMA Project

    SciTech Connect

    Malony, Allen D.; Wolf, Felix G.

    2014-01-31

    The growing number of cores provided by today’s high-end computing systems present substantial challenges to application developers in their pursuit of parallel efficiency. To find the most effective optimization strategy, application developers need insight into the runtime behavior of their code. The University of Oregon (UO) and the Juelich Supercomputing Centre of Forschungszentrum Juelich (FZJ) develop the performance analysis tools TAU and Scalasca, respectively, which allow high-performance computing (HPC) users to collect and analyze relevant performance data – even at very large scales. TAU and Scalasca are considered among the most advanced parallel performance systems available, and are used extensively across HPC centers in the U.S., Germany, and around the world. The TAU and Scalasca groups share a heritage of parallel performance tool research and partnership throughout the past fifteen years. Indeed, the close interactions of the two groups resulted in a cross-fertilization of tool ideas and technologies that pushed TAU and Scalasca to what they are today. It also produced two performance systems with an increasing degree of functional overlap. While each tool has its specific analysis focus, the tools were implementing measurement infrastructures that were substantially similar. Because each tool provides complementary performance analysis, sharing of measurement results is valuable to provide the user with more facets to understand performance behavior. However, each measurement system was producing performance data in different formats, requiring data interoperability tools to be created. A common measurement and instrumentation system was needed to more closely integrate TAU and Scalasca and to avoid the duplication of development and maintenance effort. The PRIMA (Performance Refactoring of Instrumentation, Measurement, and Analysis) project was proposed over three years ago as a joint international effort between UO and FZJ to accomplish

  11. Simulation of the cabling process for Rutherford cables: An advanced finite element model

    NASA Astrophysics Data System (ADS)

    Cabanes, J.; Garlasche, M.; Bordini, B.; Dallocchio, A.

    2016-12-01

    In all existing large particle accelerators (Tevatron, HERA, RHIC, LHC) the main superconducting magnets are based on Rutherford cables, which are characterized by strands fully transposed with respect to the magnetic field, a significant compaction that assures a large engineering critical current density, and a geometry that allows efficient winding of the coils. The Nb3Sn magnets developed in the framework of the HL-LHC project for improving the luminosity of the Large Hadron Collider (LHC) are also based on Rutherford cables. Due to the characteristics of Nb3Sn wires, the cabling process has become a crucial step in magnet manufacturing. During cabling the wires experience large plastic deformations that strongly modify the geometrical dimensions of the sub-elements constituting the superconducting strand. These deformations are particularly severe on the cable edges and can result in a significant reduction of the cable critical current as well as of the Residual Resistivity Ratio (RRR) of the stabilizing copper. In order to understand the main parameters that govern the cabling process and their impact on cable performance, CERN has developed a 3D Finite Element (FE) model based on the LS-Dyna® software that simulates the whole cabling process. In this paper the model is presented, together with a comparison between experimental and numerical results for a copper cable produced at CERN.

  12. Advancing predictive models for particulate formation in turbulent flames via massively parallel direct numerical simulations

    PubMed Central

    Bisetti, Fabrizio; Attili, Antonio; Pitsch, Heinz

    2014-01-01

    Combustion of fossil fuels is likely to continue for the near future due to the growing trends in energy consumption worldwide. The increase in efficiency and the reduction of pollutant emissions from combustion devices are pivotal to achieving meaningful levels of carbon abatement as part of the ongoing climate change efforts. Computational fluid dynamics featuring adequate combustion models will play an increasingly important role in the design of more efficient and cleaner industrial burners, internal combustion engines, and combustors for stationary power generation and aircraft propulsion. Today, turbulent combustion modelling is hindered severely by the lack of data that are accurate and sufficiently complete to assess and remedy model deficiencies effectively. In particular, the formation of pollutants is a complex, nonlinear and multi-scale process characterized by the interaction of molecular and turbulent mixing with a multitude of chemical reactions with disparate time scales. The use of direct numerical simulation (DNS) featuring a state of the art description of the underlying chemistry and physical processes has contributed greatly to combustion model development in recent years. In this paper, the analysis of the intricate evolution of soot formation in turbulent flames demonstrates how DNS databases are used to illuminate relevant physico-chemical mechanisms and to identify modelling needs. PMID:25024412

  13. Simulation study of a novel 3D SPAD pixel in an advanced FD-SOI technology

    NASA Astrophysics Data System (ADS)

    Vignetti, M. M.; Calmon, F.; Lesieur, P.; Savoy-Navarro, A.

    2017-02-01

    In this paper, a novel SPAD architecture implemented in a Fully-Depleted Silicon-On-Insulator (SOI) CMOS technology is presented. Thanks to its intrinsic vertical 3D structure, the proposed solution is expected to allow further scaling of the pixel size while ensuring high fill factors. Moreover, the pixel and the detector electronics can benefit from the well-known advantages of SOI technology with respect to bulk CMOS, such as higher speed and lower power consumption. TCAD simulations based on realistic process parameters and dedicated post-processing analysis are carried out in order to optimize and validate the avalanche diode architecture for an optimal electric field distribution in the device, but also to extract the main parameters of the SPAD, such as the breakdown voltage, the avalanche triggering probability, the dark count rate, and the photon detection probability. A comparison between the efficiency of back-side and front-side approaches is carried out with a particular focus on time-of-flight applications.

  14. Correlations between visual test results and flying performance on the advanced simulator for pilot training (ASPT).

    PubMed

    Kruk, R; Regan, D; Beverley, K I; Longridge, T

    1981-08-01

    Looking for visual differences in pilots to account for differences in flying performance, we tested five groups of subjects: Air Force primary student jet pilots, graduating (T38 aircraft) students, Air Force pilot instructors, and two control groups made up of experienced nonpilot aircrew and nonflying civilians. This interim report compares 13 different visual test results with low-visibility landing performance on the Air Force Human Resources Laboratory ASPT simulator. Performance was assessed by the number of crashes and by the distance of the aircraft from the runway threshold at the time of the first visual flight correction. Our main finding was that, for student pilots, landing performance correlated with tracking performance for a target that changed size (as if moving in depth) and also with tracking performance for a target that moved sideways. On the other hand, landing performance correlated comparatively weakly with psychophysical thresholds for motion and contrast. For student pilots, several of the visual tests gave results that correlated with flying grades in T37 and T38 jet aircraft. Tracking tests clearly distinguished between the nonflying group and all the flying groups. On the other hand, visual threshold tests did not distinguish between nonflying and flying groups except for grating contrast, which distinguished between the nonflying group and the pilot instructors. The sideways-motion tracking task was sensitive enough to distinguish between the various flying groups.

  15. Simulation studies of time-control procedures for the advanced air traffic control system

    NASA Technical Reports Server (NTRS)

    Tobias, L.; Alcabin, M.; Erzberger, H.; Obrien, P. J.

    1985-01-01

    The problem of mixing aircraft equipped with time-controlled guidance systems and unequipped aircraft in the terminal area has been investigated via a real-time air traffic control simulation. These four-dimensional (4D) guidance systems can predict and control the touchdown time of an aircraft to an accuracy of a few seconds throughout the descent. The objectives of this investigation were to (1) develop scheduling algorithms and operational procedures for various traffic mixes that ranged from 25% to 75% 4D-equipped aircraft; (2) examine the effect of time errors at 120 n. mi. from touchdown on touchdown time scheduling for the various mix conditions; and (3) develop efficient algorithms and procedures to null the initial time errors prior to reaching the final control sector, 30 n. mi. from touchdown. Results indicate a substantial reduction in controller workload and an increase in orderliness when more than 25% of the aircraft are equipped with 4D guidance systems; initial random errors of up to ±2 min can be handled via a single speed advisory issued in the arrival control sector, thus avoiding disruption of the time schedule.
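
    The arithmetic behind a single speed advisory can be sketched as follows: if an aircraft is late by a known amount at the 120-n.mi. fix, solve for the ground speed that absorbs the error over the segment remaining before the 30-n.mi. final control sector. The segment length, nominal speed, and error below are invented numbers, and this constant-speed treatment is a simplification of the actual 4D guidance logic.

      # Illustrative single speed advisory: absorb a 90 s lateness over the
      # 90 n.mi. between the 120 n.mi. fix and the 30 n.mi. control sector.
      segment = 90.0              # n.mi. available to absorb the error
      v_nom = 300.0               # nominal ground speed (knots) -- invented
      late_by = 90.0 / 3600.0     # schedule error (hours)

      # Require segment/v_adv = segment/v_nom - late_by
      v_adv = segment / (segment / v_nom - late_by)
      print("advisory: fly %.0f kt instead of %.0f kt" % (v_adv, v_nom))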

  16. Advanced Discontinuous Galerkin Algorithms and First Open-Field Line Turbulence Simulations

    NASA Astrophysics Data System (ADS)

    Hammett, G. W.; Hakim, A.; Shi, E. L.

    2016-10-01

    New versions of Discontinuous Galerkin (DG) algorithms have interesting features that may help with the challenges of higher-dimensional kinetic problems. We are developing the gyrokinetic code Gkeyll based on DG. DG also has features that may help with the next generation of exascale computers. Higher-order methods do more FLOPS to extract more information per byte, thus reducing memory and communication costs (which are a bottleneck at exascale). DG uses efficient Gaussian quadrature like finite elements, but keeps the calculation local for the kinetic solver, further reducing communication. Sparse grid methods might reduce the cost significantly more in higher dimensions. The inner-product norm can be chosen to preserve energy conservation with non-polynomial basis functions (such as Maxwellian-weighted bases), which can be viewed as a Petrov-Galerkin method. This allows a full-F code to benefit from Gaussian quadrature similar to that used in popular δf gyrokinetic codes. Consistent basis functions avoid high-frequency numerical modes from electromagnetic terms. We will show our first results of 3x2v simulations of open-field-line/SOL turbulence in a simple helical geometry (like Helimak/TORPEX), with parameters from LAPD, TORPEX, and NSTX. Supported by the Max-Planck/Princeton Center for Plasma Physics, the SciDAC Center for the Study of Plasma Microturbulence, and DOE Contract DE-AC02-09CH11466.
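
    The Maxwellian-weighted inner product mentioned above can be made concrete with Gauss-Hermite quadrature, which integrates polynomials exactly against a Gaussian weight; the sketch below builds a small Gram (mass) matrix for a monomial basis under the weight exp(-v²/2). The monomial basis and matrix size are illustrative choices, not Gkeyll's actual basis.

      import numpy as np
      from numpy.polynomial.hermite_e import hermegauss

      # Gauss-Hermite nodes/weights for the Maxwellian-like weight
      # exp(-v^2/2); 8 nodes integrate polynomials to degree 15 exactly.
      nodes, weights = hermegauss(8)

      def phi(i, v):
          return v**i            # illustrative monomial basis

      n = 3
      M = np.array([[np.sum(weights * phi(i, nodes) * phi(j, nodes))
                     for j in range(n)] for i in range(n)])
      print(M / np.sqrt(2 * np.pi))   # moments 1, 0, 1, 0, 3 appear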

  17. Advancing predictive models for particulate formation in turbulent flames via massively parallel direct numerical simulations.

    PubMed

    Bisetti, Fabrizio; Attili, Antonio; Pitsch, Heinz

    2014-08-13

    Combustion of fossil fuels is likely to continue for the near future due to the growing trends in energy consumption worldwide. The increase in efficiency and the reduction of pollutant emissions from combustion devices are pivotal to achieving meaningful levels of carbon abatement as part of the ongoing climate change efforts. Computational fluid dynamics featuring adequate combustion models will play an increasingly important role in the design of more efficient and cleaner industrial burners, internal combustion engines, and combustors for stationary power generation and aircraft propulsion. Today, turbulent combustion modelling is hindered severely by the lack of data that are accurate and sufficiently complete to assess and remedy model deficiencies effectively. In particular, the formation of pollutants is a complex, nonlinear and multi-scale process characterized by the interaction of molecular and turbulent mixing with a multitude of chemical reactions with disparate time scales. The use of direct numerical simulation (DNS) featuring a state of the art description of the underlying chemistry and physical processes has contributed greatly to combustion model development in recent years. In this paper, the analysis of the intricate evolution of soot formation in turbulent flames demonstrates how DNS databases are used to illuminate relevant physico-chemical mechanisms and to identify modelling needs.

  18. Simulation of the hybrid Tunka Advanced International Gamma-ray and Cosmic ray Astrophysics (TAIGA)

    NASA Astrophysics Data System (ADS)

    Kunnas, M.; Astapov, I.; Barbashina, N.; Beregnev, S.; Bogdanov, A.; Bogorodskii, D.; Boreyko, V.; Brückner, M.; Budnev, N.; Chiavassa, A.; Chvalaev, O.; Dyachok, A.; Epimakhov, S.; Eremin, T.; Gafarov, A.; Gorbunov, N.; Grebenyuk, V.; Gress, O.; Gress, T.; Grinyuk, A.; Grishin, O.; Horns, D.; Ivanova, A.; Karpov, N.; Kalmykov, N.; Kazarina, Y.; Kindin, V.; Kirichkov, N.; Kiryuhin, S.; Kokoulin, R.; Kompaniets, K.; Konstantinov, E.; Korobchenko, A.; Korosteleva, E.; Kozhin, V.; Kuzmichev, L.; Lenok, V.; Lubsandorzhiev, B.; Lubsandorzhiev, N.; Mirgazov, R.; Mirzoyan, R.; Monkhoev, R.; Nachtigall, R.; Pakhorukov, A.; Panasyuk, M.; Pankov, L.; Perevalov, A.; Petrukhin, A.; Platonov, V.; Poleschuk, V.; Popescu, M.; Popova, E.; Porelli, A.; Porokhovoy, S.; Prosin, V.; Ptuskin, V.; Romanov, V.; Rubtsov, G. I.; Müger; Rybov, E.; Samoliga, V.; Satunin, P.; Saunkin, A.; Savinov, V.; Semeney, Yu; Shaibonov, B. (junior); Silaev, A.; Silaev, A. (junior); Skurikhin, A.; Slunecka, M.; Spiering, C.; Sveshnikova, L.; Tabolenko, V.; Tkachenko, A.; Tkachev, L.; Tluczykont, M.; Veslopopov, A.; Veslopopova, E.; Voronov, D.; Wischnewski, R.; Yashin, I.; Yurin, K.; Zagorodnikov, A.; Zirakashvili, V.; Zurbanov, V.

    2015-08-01

    Up to several tens of TeV, Imaging Air Cherenkov Telescopes (IACTs) have proven to be the instruments of choice for GeV/TeV gamma-ray astronomy due to their good reconstruction quality and gamma-hadron separation power. However, sensitive observations at and above 100 TeV require very large effective areas (10 km² and more), which are difficult and expensive to achieve. The alternative to IACTs is shower-front sampling arrays (the non-imaging technique, or timing arrays) with a large area and a wide field of view. Such experiments provide good core position, energy, and angular resolution, but only poor gamma-hadron separation. Combining both experimental approaches, using the strengths of each technique, could optimize the sensitivity at the highest energies. The TAIGA project plans to combine the non-imaging HiSCORE [8] array with small (~10 m²) imaging telescopes. This paper covers simulation results for this hybrid approach.

  19. Advancement in polarimetric glucose sensing: simulation and measurement of birefringence properties of cornea

    NASA Astrophysics Data System (ADS)

    Malik, Bilal H.; Coté, Gerard L.

    2011-03-01

    Clinical guidelines dictate that frequent blood glucose monitoring in diabetic patients is critical for proper management of the disease. Although several different types of glucose monitors are now commercially available, most of these devices are invasive, thereby adversely affecting patient compliance. To this end, optical polarimetric glucose sensing through the eye has been proposed as a potential noninvasive means to aid in the control of diabetes. Arguably, the most critical and limiting factor for successful application of such a technique is the time-varying corneal birefringence due to eye motion artifact. We present a spatially variant uniaxial eye model to serve as a tool toward better understanding of the cornea's birefringence properties. The simulations show that index-unmatched coupling of light is spatially limited to a smaller range than in the index-matched situation. Polarimetric measurements on rabbit eyes indicate relative agreement between the modeled and experimental values of corneal birefringence. In addition, the observed rotation in the plane of polarized light for multiple wavelengths demonstrates the potential of a dual-wavelength polarimetric approach to overcome the noise due to time-varying corneal birefringence. These results will ultimately aid us in the development of an appropriate eye-coupling mechanism for in vivo polarimetric glucose measurements.
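
    One way to see the dual-wavelength idea is a two-equation reduction, sketched below: model each measured rotation as a glucose term (specific rotation times concentration) plus a birefringence offset assumed common to both wavelengths, then solve the 2×2 linear system. The specific-rotation coefficients, measured angles, and the common-offset assumption are all illustrative simplifications, not the paper's actual retrieval.

      import numpy as np

      # Dual-wavelength reduction (illustrative):
      #   theta(lambda) = k(lambda) * concentration + common offset
      k = np.array([4.56e-5, 2.50e-5])       # deg per (mg/dL), invented
      theta = np.array([6.56e-3, 4.50e-3])   # measured rotations (deg)

      M = np.column_stack([k, np.ones(2)])
      conc, offset = np.linalg.solve(M, theta)
      print("glucose ~ %.0f mg/dL, birefringence offset ~ %.1e deg"
            % (conc, offset))                # -> ~100 mg/dL, ~2e-3 deg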

  20. MHD Simulation of Magnetic Nozzle Plasma with the NIMROD Code: Applications to the VASIMR Advanced Space Propulsion Concept

    NASA Astrophysics Data System (ADS)

    Tarditi, Alfonso G.; Shebalin, John V.

    2002-11-01

    A simulation study with the NIMROD code [1] is being carried out to investigate the efficiency of the thrust generation process and the properties of plasma detachment in a magnetic nozzle. In the simulation, hot plasma is injected into the magnetic nozzle, modeled as a 2D, axisymmetric domain. NIMROD has two-fluid, 3D capabilities, but the present runs are being conducted within the MHD, 2D approximation. As the plasma travels through the magnetic field, part of its thermal energy is converted into longitudinal kinetic energy along the axis of the nozzle. The plasma eventually detaches from the magnetic field at a certain distance from the nozzle throat, where the kinetic energy becomes larger than the magnetic energy. Preliminary NIMROD 2D runs have been benchmarked against a particle trajectory code, showing satisfactory results [2]. Further testing is reported here, with emphasis on the analysis of the diffusion rate across the field lines and of the overall nozzle efficiency. These simulation runs are specifically designed to obtain comparisons with laboratory measurements from the VASIMR experiment, by looking at the evolution of the radial plasma density and temperature profiles in the nozzle. VASIMR (Variable Specific Impulse Magnetoplasma Rocket, [3]) is an advanced space propulsion concept currently under experimental development at the Advanced Space Propulsion Laboratory, NASA Johnson Space Center. A plasma (typically ionized hydrogen or helium) is generated by an RF (helicon) discharge and heated by an ion cyclotron resonance heating antenna. The heated plasma is then guided into a magnetic nozzle to convert the thermal plasma energy into effective thrust. The VASIMR system has no electrodes, and a solenoidal magnetic field produced by an asymmetric mirror configuration ensures magnetic insulation of the plasma from the material surfaces. By powering the plasma source and the heating antenna at different levels it is possible to vary smoothly the
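
    The detachment condition described above, kinetic energy exceeding magnetic energy, reduces to comparing the energy densities ρv²/2 and B²/2μ0; a one-line check is sketched below with invented plasma parameters (hydrogen ions assumed).

      import math

      mu0 = 4e-7 * math.pi
      m_i = 1.67e-27                  # hydrogen ion mass (kg)

      def detached(n, v, B):
          # Detachment check: directed kinetic energy density exceeds
          # magnetic energy density (n in m^-3, v in m/s, B in tesla).
          return 0.5 * n * m_i * v**2 > B**2 / (2.0 * mu0)

      # Invented conditions along the plume: same plasma, weakening field
      for B in (1e-2, 1e-3, 1e-4):
          print("B = %.0e T -> detached: %s" % (B, detached(1e19, 3e4, B)))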

  1. Open-Source Development of the Petascale Reactive Flow and Transport Code PFLOTRAN

    NASA Astrophysics Data System (ADS)

    Hammond, G. E.; Andre, B.; Bisht, G.; Johnson, T.; Karra, S.; Lichtner, P. C.; Mills, R. T.

    2013-12-01

    Open-source software development has become increasingly popular in recent years. Open-source encourages collaborative and transparent software development and promotes unlimited free redistribution of source code to the public. Open-source development is good for science as it reveals implementation details that are critical to scientific reproducibility, but generally excluded from journal publications. In addition, research funds that would have been spent on licensing fees can be redirected to code development that benefits more scientists. In 2006, the developers of PFLOTRAN open-sourced their code under the U.S. Department of Energy SciDAC-II program. Since that time, the code has gained popularity among code developers and users from around the world seeking to employ PFLOTRAN to simulate thermal, hydraulic, mechanical and biogeochemical processes in the Earth's surface/subsurface environment. PFLOTRAN is a massively-parallel subsurface reactive multiphase flow and transport simulator designed from the ground up to run efficiently on computing platforms ranging from the laptop to leadership-class supercomputers, all from a single code base. The code employs domain decomposition for parallelism and is founded upon the well-established and open-source parallel PETSc and HDF5 frameworks. PFLOTRAN leverages modern Fortran (i.e. Fortran 2003-2008) in its extensible object-oriented design. The use of this progressive, yet domain-friendly programming language has greatly facilitated collaboration in the code's software development. Over the past year, PFLOTRAN's top-level data structures were refactored as Fortran classes (i.e. extendible derived types) to improve the flexibility of the code, ease the addition of new process models, and enable coupling to external simulators. For instance, PFLOTRAN has been coupled to the parallel electrical resistivity tomography code E4D to enable hydrogeophysical inversion while the same code base can be used as a third

  2. Advanced finite-difference time-domain techniques for simulation of optical devices with complex material properties and geometric configurations

    NASA Astrophysics Data System (ADS)

    Zhou, Dong

    2005-11-01

    Modeling and simulation play increasingly important roles in the development and commercialization of optical devices and integrated circuits. The current trend in photonic technologies is to push the level of integration and to utilize materials and structures of increasing complexity. On the other hand, the superb characteristics of free-space and fiber optics continue to hold a strong position in serving a wide range of applications. All these constitute significant challenges for the computer-aided modeling, simulation, and design of such optical devices and systems. The research work in this thesis deals with the investigation and development of advanced finite-difference time-domain (FDTD) methods, with a focus on emerging optical devices and integrated circuits with complex material and/or structural properties. On the material side, we consider in a systematic fashion the dispersive and anisotropic characteristics of different materials (i.e., insulators, semiconductors, and conductors) over a broad wavelength range. The Lorentz model is examined and adapted as a general model for treating material dispersion in the context of FDTD solutions. A dispersive FDTD method based on the multi-term Lorentz dispersion model is developed and employed for the modeling and design of optical devices. In the FDTD scheme, the perfectly matched layer (PML) boundary condition is extended to dispersive media with arbitrarily high-order Lorentz terms. Finally, a parameter extraction scheme that links the Lorentz model to experimental results is established. The dispersive FDTD method is then applied to the modeling and simulation of a magneto-optical (MO) disk system, in combination with vector diffraction theory. While the former is used to analyze the focused optical field interacting with the conducting materials on the surface of the disk, the latter is used to simulate the beam propagation from the objective lens to the disk surface. The
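
    A single-pole version of the Lorentz-dispersive FDTD idea can be sketched via the auxiliary differential equation (ADE) route: the polarization obeys P'' + γP' + ω0²P = ε0 Δε ω0² E, leapfrogged alongside the usual Yee updates. The 1D grid, pole parameters, and soft source below are invented for illustration (and the boundaries simply reflect); the thesis's method handles multiple Lorentz terms, anisotropy, and PML boundaries.

      import numpy as np

      # 1D Yee-grid FDTD with one Lorentz pole via the ADE method.
      eps0, mu0 = 8.854e-12, 4e-7 * np.pi
      c0 = 1.0 / np.sqrt(eps0 * mu0)
      nx, dx = 400, 10e-9
      dt = 0.5 * dx / c0                     # Courant number 0.5

      eps_inf, deps = 2.25, 1.0              # background and pole strength
      w0, g = 2 * np.pi * 5e14, 1e13         # pole frequency, damping (rad/s)

      E = np.zeros(nx); D = np.zeros(nx); H = np.zeros(nx - 1)
      P = np.zeros(nx); Pold = np.zeros(nx)

      a = (2.0 - (w0 * dt)**2) / (1.0 + g * dt / 2.0)
      b = (1.0 - g * dt / 2.0) / (1.0 + g * dt / 2.0)
      s = eps0 * deps * (w0 * dt)**2 / (1.0 + g * dt / 2.0)

      for n in range(2000):
          H += dt / (mu0 * dx) * np.diff(E)          # Faraday update
          D[1:-1] += dt / dx * np.diff(H)            # Ampere update (for D)
          D[50] += np.exp(-((n - 80) / 25.0)**2)     # soft Gaussian source
          P, Pold = a * P - b * Pold + s * E, P      # ADE polarization step
          E = (D - P) / (eps0 * eps_inf)             # recover E from D

      print("peak |E| after 2000 steps:", float(np.abs(E).max()))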

  3. Challenge problem and milestones for Nuclear Energy Advanced Modeling and Simulation (NEAMS) Waste Integrated Performance and Safety Codes (IPSC)

    SciTech Connect

    Freeze, Geoffrey A.; Wang, Yifeng; Howard, Robert; McNeish, Jerry A.; Schultz, Peter Andrew; Arguello, Jose Guadalupe, Jr.

    2010-09-01

    This report describes the specification of a challenge problem and associated challenge milestones for the Waste Integrated Performance and Safety Codes (IPSC) supporting the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The NEAMS challenge problems are designed to demonstrate proof of concept and progress towards IPSC goals. The goal of the Waste IPSC is to develop an integrated suite of modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with robust verification, validation, and software quality requirements. To demonstrate proof of concept and progress towards these goals and requirements, a Waste IPSC challenge problem is specified that includes coupled thermal-hydrologic-chemical-mechanical (THCM) processes that describe (1) the degradation of a borosilicate glass waste form and the corresponding mobilization of radionuclides (i.e., the processes that produce the radionuclide source term), (2) the associated near-field physical and chemical environment for waste emplacement within a salt formation, and (3) radionuclide transport in the near field (i.e., through the engineered components - waste form, waste package, and backfill - and the immediately adjacent salt). The initial details of a set of challenge milestones that collectively comprise the full challenge problem are also specified.

  4. Flexible composition and execution of high performance, high fidelity multiscale biomedical simulations

    PubMed Central

    Groen, D.; Borgdorff, J.; Bona-Casas, C.; Hetherington, J.; Nash, R. W.; Zasada, S. J.; Saverchenko, I.; Mamonski, M.; Kurowski, K.; Bernabeu, M. O.; Hoekstra, A. G.; Coveney, P. V.

    2013-01-01

    Multiscale simulations are essential in the biomedical domain to accurately model human physiology. We present a modular approach for designing, constructing and executing multiscale simulations on a wide range of resources, from laptops to petascale supercomputers, including combinations of these. Our work features two multiscale applications, in-stent restenosis and cerebrovascular bloodflow, which combine multiple existing single-scale applications to create a multiscale simulation. These applications can be efficiently coupled, deployed and executed on computers up to the largest (peta) scale, incurring a coupling overhead of 1–10% of the total execution time. PMID:24427530
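
    The coupling pattern behind such multiscale applications can be sketched in a few lines: a macro-scale model takes large steps and, in each coupling cycle, calls a micro-scale model that takes many small steps and returns an upscaled quantity. A toy Python sketch (not the authors' actual coupling environment; the names and dynamics are invented for illustration):

        def micro_model(load, n_sub=10, dt=0.01):
            """Hypothetical fine-scale solver: relaxes a state toward the load."""
            state = 0.0
            for _ in range(n_sub):            # many fine steps per coupling cycle
                state += dt * (load - state)
            return state                      # upscaled quantity handed back

        def macro_model(n_cycles=5):
            load = 1.0
            for k in range(n_cycles):         # one macro step per coupling cycle
                response = micro_model(load)  # scale-bridging exchange
                load = 0.5 * load + 0.5 * response
                print(f"cycle {k}: load = {load:.3f}")

        macro_model()

    The 1-10% coupling overhead quoted above is, in this picture, the cost of the exchanges relative to the time spent inside each single-scale solver.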

  5. Solving a Large Scale Thermal Radiation Problem Using an Interoperable Executive Library Framework on Petascale Supercomputers

    SciTech Connect

    Wong, Kwai; D'Azevedo, Ed F; Hu, Harvy; Kail, Andrew A; Su, Shiquan

    2015-01-01

    We present a novel methodology to compute the transient thermal condition of a set of objects in an open-space environment. The governing energy equation and the convective energy transfer are solved by sparse iterative solvers. The average radiating energy on a set of surfaces is represented by a linear system of radiosity equations, which is factorized by an out-of-core parallel Cholesky decomposition solver. The coupling and interplay of the direct radiosity solver using GPUs and the CPU-based sparse solver are handled by a lightweight software integrator called the Interoperable Executive Library (IEL). IEL manages the distribution of data and memory, coordinates communication among parallel processes, and also directs execution of the set of loosely coupled physics tasks as warranted by the thermal condition of the simulated object and its surrounding environment.
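
    The radiosity step can be illustrated with a dense, in-core toy version of the linear system (the paper's solver is out-of-core, parallel, and GPU-accelerated; the surface values below are assumptions). Scaling the radiosity balance by diag(A/rho) makes the matrix symmetric positive definite, which is what permits a Cholesky factorization:

        import numpy as np
        from scipy.linalg import cho_factor, cho_solve

        A = np.array([1.0, 1.0, 1.0])          # surface areas (toy values)
        rho = np.array([0.3, 0.5, 0.2])        # surface reflectivities
        E = np.array([100.0, 0.0, 20.0])       # emitted flux per surface

        # View factors obeying reciprocity A_i F_ij = A_j F_ji
        F = np.array([[0.0, 0.6, 0.4],
                      [0.6, 0.0, 0.4],
                      [0.4, 0.4, 0.0]])

        # Radiosity balance B = E + rho * (F @ B); scaling by diag(A/rho)
        # turns it into the SPD system M @ B = (A/rho) * E.
        M = np.diag(A / rho) - np.diag(A) @ F
        B = cho_solve(cho_factor(M), (A / rho) * E)
        print("surface radiosities:", B)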

  6. Advanced simulation methods to detect resonant frequency stack up in focal plane design

    NASA Astrophysics Data System (ADS)

    Adams, Craig; Malone, Neil R.; Torres, Raymond; Fajardo, Armando; Vampola, John; Drechsler, William; Parlato, Russell; Cobb, Christopher; Randolph, Max; Chiourn, Surath; Swinehart, Robert

    2014-09-01

    Wire used to connect focal plane electrical connections to external circuitry can be modeled using its length, diameter, and loop height to determine the resonant frequency. The design of the adjacent electronic board and mounting platform can also be analyzed. The combined resonant frequency analysis can then be used to decouple the different component resonant frequencies and eliminate the potential for metal fatigue in the wires. It is important to note that the nominal maximum stress values that cause metal fatigue can be much lower than the ultimate tensile stress or yield stress limits, and are degraded further at resonant frequencies. It is critical that tests be done to qualify designs that are not easily simulated due to material property variation and complex structures. Sine-wave vibration testing is a critical component of qualification vibration and provides the highest accuracy in determining the resonant frequencies, which can then be reduced or decoupled through small changes in design damping or modern space material selection, improving the structural performance of the focal plane assembly. Vibration flow-down from higher levels of assembly needs to account for intermediary hardware, which may amplify or attenuate the full-up system vibration profile. A simple pass-through of vibration requirements may result in over-test or in missing amplified resonant frequencies that can cause system failure. Examples are shown of metal wire fatigue, such as discoloration and microscopic cracks, which are visible at the submicron level using a scanning electron microscope. While it is important to model and test resonant frequencies, the focal plane must also be mounted such that coefficient-of-thermal-expansion mismatches are free to move without overstressing the FPA.
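
    As a first-order illustration of the wire analysis described above, the fundamental bending mode of a straight clamped-clamped round wire follows from Euler-Bernoulli beam theory. Loop height and bond compliance are neglected, so this is only a screening estimate; the gold-wire properties and dimensions are assumptions:

        import numpy as np

        def wire_first_mode_hz(length, diameter, E=79e9, density=19300.0):
            """First bending mode of a straight clamped-clamped round wire.
            Defaults assume gold; loop height and bond stiffness are ignored."""
            I = np.pi * diameter**4 / 64.0     # area moment of inertia
            A = np.pi * diameter**2 / 4.0      # cross-sectional area
            lam2 = 22.373                      # (beta_1 * L)^2, clamped-clamped
            return lam2 / (2 * np.pi * length**2) * np.sqrt(E * I / (density * A))

        print(f"{wire_first_mode_hz(2e-3, 25e-6):,.0f} Hz")

    For a 2 mm long, 25 micron gold wire this gives a first mode near 11 kHz, i.e., within the range a qualification sine sweep would exercise.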

  7. FISPACT-II: An Advanced Simulation System for Activation, Transmutation and Material Modelling

    NASA Astrophysics Data System (ADS)

    Sublet, J.-Ch.; Eastwood, J. W.; Morgan, J. G.; Gilbert, M. R.; Fleming, M.; Arter, W.

    2017-01-01

    FISPACT-II is a code system and library database for modelling activation-transmutation processes, depletion and burn-up, time-dependent inventory, and radiation damage source terms caused by nuclear reactions and decays. The FISPACT-II code, written in object-style Fortran, follows the evolution of material irradiated by neutrons, alphas, gammas, protons, or deuterons, and provides a wide range of derived radiological output quantities to satisfy most needs for nuclear applications. It can be used with any ENDF-compliant group library data for nuclear reactions, particle-induced and spontaneous fission yields, and radioactive decay (including but not limited to TENDL-2015, ENDF/B-VII.1, JEFF-3.2, JENDL-4.0u, and CENDL-3.1 processed into fine-group-structure files, GEFY-5.2, and UKDD-16), as well as resolved and unresolved resonance range probability tables for self-shielding corrections and updated radiological hazard indices. The code has many novel features, including: extension of the energy range up to 1 GeV; additional neutron physics including self-shielding effects, temperature dependence, and thin and thick target yields; pathway analysis; and sensitivity and uncertainty quantification and propagation using full covariance data. The latest ENDF libraries such as TENDL encompass thousands of target isotopes. Nuclear data libraries for FISPACT-II are prepared from these using the processing codes PREPRO, NJOY, and CALENDF. These data include resonance parameters, cross sections with covariances, probability tables in the resonance ranges, PKA spectra, kerma, dpa, gas and radionuclide production, and energy-dependent fission yields, supplemented with all 27 decay types. All such data for the five most important incident particles are provided in evaluated data tables. The FISPACT-II simulation software is described in detail in this paper, together with the nuclear data libraries. The FISPACT-II system also includes several utility programs for code-use optimisation.
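
    At its core, the inventory problem such codes solve is a linear system of activation-transmutation-decay ODEs. A toy three-nuclide chain illustrates the structure (FISPACT-II's actual solver, data pipeline, and nuclide set are far more extensive; the decay constants below are invented, not evaluated nuclear data):

        import numpy as np
        from scipy.linalg import expm

        # Toy chain A -> B -> C with assumed decay constants (1/s)
        lam_a, lam_b = 1e-3, 5e-4
        M = np.array([[-lam_a,    0.0, 0.0],   # dN/dt = M @ N
                      [ lam_a, -lam_b, 0.0],
                      [   0.0,  lam_b, 0.0]])

        N0 = np.array([1e20, 0.0, 0.0])        # initial inventory (atoms)
        for t in (0.0, 1e3, 1e4):
            print(t, expm(M * t) @ N0)         # exact linear-chain solution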

  8. MADNESS: A Multiresolution, Adaptive Numerical Environment for Scientific Simulation

    SciTech Connect

    Harrison, Robert J.; Beylkin, Gregory; Bischoff, Florian A.; Calvin, Justus A.; Fann, George I.; Fosso-Tande, Jacob; Galindo, Diego; Hammond, Jeff R.; Hartman-Baker, Rebecca; Hill, Judith C.; Jia, Jun; Kottmann, Jakob S.; Yvonne Ou, M-J.; Pei, Junchen; Ratcliff, Laura E.; Reuter, Matthew G.; Richie-Halford, Adam C.; Romero, Nichols A.; Sekino, Hideo; Shelton, William A.; Sundahl, Bryan E.; Thornton, W. Scott; Valeev, Edward F.; Vázquez-Mayagoitia, Álvaro; Vence, Nicholas; Yanai, Takeshi; Yokoi, Yukina

    2016-01-01

    MADNESS (multiresolution adaptive numerical environment for scientific simulation) is a high-level software environment for solving integral and differential equations in many dimensions that uses adaptive and fast harmonic analysis methods with guaranteed precision based on multiresolution analysis and separated representations. Underpinning the numerical capabilities is a powerful petascale parallel programming environment that aims to increase both programmer productivity and code scalability. This paper describes the features and capabilities of MADNESS and briefly discusses some current applications in chemistry and several areas of physics.
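
    The idea of adaptive refinement with guaranteed precision can be illustrated with a toy one-dimensional version: subdivide a box until a local error estimate meets the tolerance. MADNESS itself works with multiwavelet bases in many dimensions; this sketch uses only a crude midpoint check against a linear model:

        import numpy as np

        def adaptive_boxes(f, a, b, tol=1e-6, depth=0, max_depth=20):
            """Recursively refine [a, b] until a midpoint check suggests the
            local linear model meets the tolerance (toy analogue of
            multiresolution refinement)."""
            mid = 0.5 * (a + b)
            linear = 0.5 * (f(a) + f(b))       # linear interpolant at midpoint
            if abs(f(mid) - linear) < tol or depth >= max_depth:
                return [(a, b)]
            return (adaptive_boxes(f, a, mid, tol, depth + 1, max_depth)
                    + adaptive_boxes(f, mid, b, tol, depth + 1, max_depth))

        boxes = adaptive_boxes(lambda x: np.exp(-50 * x * x), -1.0, 1.0, tol=1e-4)
        print(len(boxes), "leaf boxes; smallest width",
              min(b - a for a, b in boxes))

    The refinement concentrates small boxes where the function varies rapidly, which is how such methods achieve a prescribed precision at near-optimal cost.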

  10. Leveraging data analytics, patterning simulations and metrology models to enhance CD metrology accuracy for advanced IC nodes

    NASA Astrophysics Data System (ADS)

    Rana, Narender; Zhang, Yunlin; Kagalwala, Taher; Hu, Lin; Bailey, Todd

    2014-04-01

    Integrated Circuit (IC) technology is changing in multiple ways: 193i to EUV exposure, planar to non-planar device architectures, single-exposure lithography to multiple-exposure and DSA patterning, etc. Critical dimension (CD) control requirements are becoming more stringent and exhaustive: CD and process windows are shrinking, three-sigma CD control of < 2 nm is required in complex geometries, and metrology uncertainty of < 0.2 nm is required to achieve the target CD control for advanced IC nodes (e.g., the 14 nm, 10 nm, and 7 nm nodes). There are fundamental capability and accuracy limits in all metrology techniques that are detrimental to the success of advanced IC nodes. Reference or physical CD metrology is provided by CD-AFM and TEM, while workhorse metrology is provided by CD-SEM, scatterometry, and Model-Based Infrared Reflectometry (MBIR). Precision alone is not sufficient moving forward. No single technique is sufficient to ensure the required accuracy of patterning. The accuracy of CD-AFM is ~1 nm, and precision in TEM is poor due to limited statistics. CD-SEM, scatterometry, and MBIR need to be calibrated by reference measurements to ensure the accuracy of patterned CDs and patterning models. There is a dire need for measurements with < 0.5 nm accuracy, and the industry currently does not have that capability with inline measurements. Aware of the capability gaps of the various metrology techniques, we have employed data processing techniques and predictive data analytics, along with patterning simulation and metrology models and data integration techniques, in selected applications to demonstrate the potential and practicality of such an approach for enhancing CD metrology accuracy. Data from multiple metrology techniques has been analyzed in multiple ways to extract information with associated uncertainties and integrated to extract more accurate and useful CD and profile information about the structures. This paper presents the optimization of
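
    One elementary ingredient of such data integration is combining measurements of the same feature from several tools, each with its own uncertainty, into a single estimate. A minimal sketch using inverse-variance weighting (the tool list, values, and uncertainties are hypothetical):

        import numpy as np

        # Hypothetical CD measurements (nm) of one feature from three tools,
        # each with its own 1-sigma uncertainty after bias calibration.
        cd  = np.array([24.8, 25.3, 25.1])     # e.g. CD-SEM, scatterometry, CD-AFM
        sig = np.array([0.5, 0.3, 1.0])

        w = 1.0 / sig**2                       # inverse-variance weights
        cd_hat = np.sum(w * cd) / np.sum(w)    # fused estimate
        sig_hat = np.sqrt(1.0 / np.sum(w))     # uncertainty of the fused estimate
        print(f"fused CD = {cd_hat:.2f} +/- {sig_hat:.2f} nm")

    The fused uncertainty is smaller than that of any single tool, which is the statistical leverage that integrating multiple metrology sources provides.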

  11. Lightweighting Automotive Materials for Increased Fuel Efficiency and Delivering Advanced Modeling and Simulation Capabilities to U.S. Manufacturers

    SciTech Connect

    Hale, Steve

    2013-09-11

    The National Center for Manufacturing Sciences (NCMS) worked with the U.S. Department of Energy (DOE), National Energy Technology Laboratory (NETL), to bring together research and development (R&D) collaborations to develop and accelerate the knowledge base and infrastructure for lightweighting materials and manufacturing processes for use in structural applications in the automotive sector. The purpose and importance of this DOE program: • 2016 CAFE standards. • Automotive industry adoption of lightweighting material concepts in the manufacture of production vehicles. • Development and manufacture of advanced research tools for modeling and simulation (M&S) applications to reduce manufacturing and material costs. • U.S. competitiveness that will help drive the development and manufacture of the next generation of materials. NCMS established a focused portfolio of applied R&D projects utilizing lightweighting materials for manufacture into automotive structures and components. Areas targeted in this program: • Functionality of new lightweighting materials to meet present safety requirements. • Manufacturability using new lightweighting materials. • Cost reduction for the development and use of new lightweighting materials. The automotive industry's future continuously evolves through innovation, and lightweight materials are key to achieving a new era of lighter, more efficient vehicles. Lightweight materials are among the technical advances needed to achieve fuel/energy efficiency and reduce carbon dioxide (CO2) emissions: • Establish design criteria methodology to identify the best materials for lightweighting. • Employ state-of-the-art design tools for optimum material development for specific applications. • Match new manufacturing technology to production volume. • Address new process variability with new production-ready processes.

  12. Data compression in the petascale astronomy era: A GERLUMPH case study

    NASA Astrophysics Data System (ADS)

    Vohl, D.; Fluke, C. J.; Vernardos, G.

    2015-09-01

    As the volume of data grows, astronomers are increasingly faced with choices about what data to keep and what to throw away. Recent work evaluating the JPEG2000 (ISO/IEC 15444) standards as a future data format standard in astronomy has shown promising results on observational data. However, there is still a need to evaluate its potential on other types of astronomical data, such as output from numerical simulations. GERLUMPH (the GPU-Enabled High Resolution cosmological MicroLensing parameter survey) represents an example of a data-intensive project in theoretical astrophysics. In the next phase of processing, the ≈27 terabyte GERLUMPH dataset is set to grow by a factor of 100, well beyond the current storage capabilities of the supercomputing facility on which it resides. In order to minimize bandwidth usage, file transfer time, and storage space, this work evaluates several data compression techniques. Specifically, we investigate off-the-shelf and custom lossless compression algorithms as well as the lossy JPEG2000 compression format. Results of lossless compression algorithms on GERLUMPH data products show small compression ratios (1.35:1 to 4.69:1 of input file size), varying with the nature of the input data. Our results suggest that JPEG2000 could be suitable for other numerical datasets stored as gridded or volumetric data. When approaching lossy data compression, one should keep in mind the intended purposes of the data to be compressed and evaluate the effect of the loss on future analysis. In our case study, lossy compression and a high compression ratio do not significantly compromise the intended use of the data for constraining quasar source profiles from cosmological microlensing.
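
    The lossless part of such an evaluation is straightforward to reproduce in miniature with standard-library codecs (these are generic codecs, not necessarily the algorithms benchmarked in the paper, and the synthetic array merely stands in for a GERLUMPH map):

        import bz2, lzma, zlib
        import numpy as np

        # Synthetic stand-in for a gridded simulation product (smooth float32 map)
        x = np.linspace(0, 8 * np.pi, 1 << 20, dtype=np.float32)
        raw = np.sin(x).astype(np.float32).tobytes()

        for name, compress in (("zlib", zlib.compress),
                               ("bz2", bz2.compress),
                               ("lzma", lzma.compress)):
            ratio = len(raw) / len(compress(raw))
            print(f"{name}: {ratio:.2f}:1")

    As the paper finds, achievable lossless ratios depend strongly on the structure of the input data, which is why lossy formats such as JPEG2000 become attractive at scale.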

  13. Advancements for Active Remote Sensing of Carbon Dioxide from Space using the ASCENDS CarbonHawk Experiment Simulator: First Results

    NASA Astrophysics Data System (ADS)

    Obland, M. D.; Nehrir, A. R.; Lin, B.; Harrison, F. W.; Kooi, S. A.; Choi, Y.; Plant, J.; Yang, M. M.; Antill, C.; Campbell, J. F.; Ismail, S.; Browell, E. V.; Meadows, B.; Dobler, J. T.; Zaccheo, T. S.; Moore, B., III; Crowell, S.

    2014-12-01

    The ASCENDS CarbonHawk Experiment Simulator (ACES) is an Intensity-Modulated Continuous-Wave lidar system recently developed at NASA Langley Research Center that seeks to advance technologies and techniques critical to measuring atmospheric column carbon dioxide (CO2) mixing ratios in support of the NASA Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) mission. These advancements include: (1) increasing the power-aperture product to approach ASCENDS mission requirements by implementing multi-aperture telescopes and multiple co-aligned laser transmitters; (2) incorporating high-efficiency, high-power Erbium-Doped Fiber Amplifiers (EDFAs); (3) developing and incorporating a high-bandwidth, low-noise HgCdTe detector and transimpedance amplifier (TIA) subsystem capable of long-duration operation on Global Hawk aircraft; and (4) advancing algorithms for cloud and aerosol discrimination. The ACES instrument architecture is being developed for operation on high-altitude aircraft and will be directly scalable to meet the ASCENDS mission requirements. ACES simultaneously transmits five laser beams: three from commercial EDFAs operating near 1571 nm, and two from the Exelis oxygen (O2) Raman fiber laser amplifier system operating near 1260 nm. The Integrated-Path Differential Absorption (IPDA) lidar approach is used at both wavelengths to independently measure the CO2 and O2 column number densities and retrieve the average column CO2 mixing ratio. The outgoing laser beams are aligned to the field of view of ACES' three fiber-coupled 17.8-cm diameter athermal telescopes. The backscattered light collected by the three telescopes is sent to the detector/TIA subsystem, which has a bandwidth of 4.7 MHz and operates service-free using a tactical dewar and cryocooler. Two key laser modulation approaches are being tested to significantly mitigate the effects of thin clouds on the retrieved CO2 column amounts. Full instrument development concluded in the
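
    The IPDA retrieval rests on a simple relation: the two-way differential absorption optical depth follows from energy-normalized on-line and off-line returns. A textbook-level sketch (the ACES processing chain adds averaging, calibration, and cloud screening; the numbers below are illustrative):

        import numpy as np

        def daod(p_on, p_off, e_on, e_off):
            """Two-way differential absorption optical depth from
            energy-normalized on-line and off-line lidar returns."""
            return 0.5 * np.log((p_off / e_off) * (e_on / p_on))

        # Illustrative received powers and transmitted pulse energies only
        print(f"DAOD = {daod(p_on=0.62, p_off=1.00, e_on=1.0, e_off=1.0):.3f}")

    Dividing the CO2-line result by the corresponding O2-line measurement is what removes the dependence on surface pressure and path length in the average column mixing ratio.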

  14. Hybrid MPI/OpenMP First Principles Materials Science Codes for Intel Xeon Phi (MIC) based HPC: The Petascale and Beyond

    NASA Astrophysics Data System (ADS)

    Canning, Andrew; Deslippe, Jack; Chelikowsky, James; Louie, Steven G.

    2015-03-01

    Exploiting the full potential of present petascale and future exascale supercomputers based on many-core chips requires a high level of threading on the node as well as reduced communication between nodes to scale to large node counts. We will present results for a variety of first-principles materials science codes (BerkeleyGW, PARATEC, PARSEC) on Intel Xeon Phi (MIC) based supercomputers for algorithms using hybrid OpenMP/MPI parallelism to obtain both efficiently threaded single-chip performance and parallel scaling to large node counts. Support provided through the SciDAC program funded by the U.S. Department of Energy, Office of Science, ASCR and BES under Contract No. DE-AC02-05CH11231 at LBNL and Award No. DESC0008877 at UT, Austin.
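
    The two-level pattern, distributed memory between nodes plus threads within a node, can be sketched in Python with mpi4py and a thread pool. The codes named above implement it as MPI plus OpenMP in Fortran/C; this is only an analogue of the same structure, run under MPI, e.g. mpirun -n 4 python sketch.py:

        import numpy as np
        from concurrent.futures import ThreadPoolExecutor
        from mpi4py import MPI                 # requires an MPI installation

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        # Each MPI rank owns a slab of the data (distributed-memory level) ...
        local = np.random.default_rng(rank).random(1_000_000)

        def partial_sum(chunk):
            return float(np.sum(chunk))        # NumPy releases the GIL here

        # ... and uses a thread pool within the node (shared-memory level).
        with ThreadPoolExecutor(max_workers=4) as pool:
            local_sum = sum(pool.map(partial_sum, np.array_split(local, 4)))

        total = comm.allreduce(local_sum, op=MPI.SUM)
        if rank == 0:
            print("global sum:", total)

    Keeping the thread-level work on the node reduces the number of MPI endpoints and messages, which is the communication saving the abstract refers to.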

  15. Stress trajectory and advanced hydraulic-fracture simulations for the Eastern Gas Shales Project. Final report, April 30, 1981-July 30, 1983

    SciTech Connect

    Advani, S.H.; Lee, J.K.

    1983-01-01

    A summary review of hydraulic fracture modeling is given. Advanced hydraulic fracture model formulations and simulation, using the finite element method, are presented. The numerical examples include the determination of fracture width, height, length, and stress intensity factors with the effects of frac fluid properties, layered strata, in situ stresses, and joints. Future model extensions are also recommended. 66 references, 23 figures.
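
    For orientation, the simplest closed-form result behind such simulations is the mode-I stress intensity factor of a uniformly pressurized through-crack in an infinite elastic medium; the layered-strata, in-situ-stress, and joint effects treated in the report require the finite element models it describes. Pressure and half-length below are illustrative values only:

        import numpy as np

        def k_i(pressure, half_length):
            """Mode-I stress intensity factor K_I = p * sqrt(pi * a) for a
            uniformly pressurized crack in an infinite elastic medium."""
            return pressure * np.sqrt(np.pi * half_length)

        p = 5e6       # net fracturing pressure, Pa (illustrative)
        a = 30.0      # fracture half-length, m (illustrative)
        print(f"K_I = {k_i(p, a) / 1e6:.1f} MPa*sqrt(m)")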

  16. Simulation of upper airway occlusion without and with mandibular advancement in obstructive sleep apnea using fluid-structure interaction.

    PubMed

    Zhao, Moyin; Barber, Tracie; Cistulli, Peter A; Sutherland, Kate; Rosengarten, Gary

    2013-10-18

    Obstructive Sleep Apnea (OSA) is a common sleep disorder characterized by repetitive collapse of the upper airway (UA). One treatment option is a mandibular advancement splint (MAS), which protrudes the lower jaw, stabilizing the airway. However, not all patients respond to MAS therapy, and individual effects are not well understood. Simulations of airway behavior may represent a non-invasive means to understand OSA and individual treatment responses. Our aims were (1) to analyze UA occlusion and flow dynamics in OSA using the fluid-structure interaction (FSI) method, and (2) to observe the changes with MAS. Magnetic resonance imaging (MRI) scans were obtained at baseline and with MAS in a known treatment responder. Computational models of the patient's UA geometry were reconstructed for both conditions. The FSI model demonstrated full collapse of the UA (maximum 5.83 mm) pre-treatment (without MAS). The UA collapse was located at the oropharynx, with low oropharyngeal pressure (-51.18 Pa to -39.08 Pa) induced by velopharyngeal jet flow (maximum 10.0 m/s). By comparison, simulation results for the UA with MAS showed smaller deformation (maximum 2.03 mm), matching the known clinical response. Our FSI modeling method was validated by physical experiment on a 1:1 flexible UA model fabricated using 3D stereolithography. This is the first study of inspiratory airflow dynamics in a deformable UA structure. These results expand on previous UA models using computational fluid dynamics (CFD) and lay a platform for the application of computational models to study the biomechanical properties of the UA in the pathogenesis and treatment of OSA.

  17. Sandia National Laboratories Advanced Simulation and Computing (ASC) software quality plan : ASC software quality engineering practices Version 3.0.

    SciTech Connect

    Turgeon, Jennifer L.; Minana, Molly A.; Hackney, Patricia; Pilch, Martin M.

    2009-01-01

    The purpose of the Sandia National Laboratories (SNL) Advanced Simulation and Computing (ASC) Software Quality Plan is to clearly identify the practices that are the basis for continually improving the quality of ASC software products. Quality is defined in the U.S. Department of Energy/National Nuclear Security Administration (DOE/NNSA) Quality Criteria, Revision 10 (QC-1) as 'conformance to customer requirements and expectations'. This quality plan defines the SNL ASC Program software quality engineering (SQE) practices and provides a mapping of these practices to SNL Corporate Process Requirement (CPR) 001.3.6, 'Corporate Software Engineering Excellence'. The plan also identifies ASC management's and the software project teams' responsibilities in implementing the software quality practices and in assessing progress towards achieving their software quality goals. This SNL ASC Software Quality Plan establishes the signatories' commitments to improving software products by applying cost-effective SQE practices. The plan enumerates the SQE practices that comprise the development of SNL ASC's software products and explains the project teams' opportunities for tailoring and implementing the practices.

  18. Takeoff certification considerations for large subsonic and supersonic transport airplanes using the Ames flight simulator for advanced aircraft

    NASA Technical Reports Server (NTRS)

    Snyder, C. T.; Drinkwater, F. J., III; Fry, E. B.; Forrest, R. D.

    1973-01-01

    Data for use in development of takeoff airworthiness standards for new aircraft designs such as the supersonic transport (SST) and the large wide-body subsonic jet transport are provided. An advanced motion simulator was used to compare the performance and handling characteristics of three representative large jet transports during specific flight certification tasks. Existing regulatory constraints and methods for determining rotation speed were reviewed, and the effects on takeoff performance of variations in rotation speed, pitch attitude, and pitch attitude rate during the rotation maneuver were analyzed. A limited quantity of refused takeoff information was obtained. The aerodynamics, wing loading, and thrust-to-weight ratio of the subject SST resulted in takeoff speeds limited by climb (rather than lift-off) considerations. Take-off speeds based on U.S. subsonic transport requirements were found unacceptable because of the criticality of rotation-abuse effects on one-engine-inoperative climb performance. Adequate safety margin was provided by takeoff speeds based on proposed Anglo-French supersonic transport (TSS) criteria, with the limiting criterion being that takeoff safety speed be at least 1.15 times the one-engine-inoperative zero-rate-of-climb speed. Various observations related to SST certification are presented.

  19. A novel feedback algorithm for simulating controlled dynamics and confinement in the advanced reversed-field pinch

    SciTech Connect

    Dahlin, J.-E.; Scheffel, J.

    2005-06-15

    In the advanced reversed-field pinch (RFP), the current density profile is externally controlled to diminish tearing instabilities, so that the scaling of energy confinement time with plasma current and density is improved substantially compared to the conventional RFP. This may be simulated numerically by introducing an ad hoc electric field, adjusted to generate a tearing-mode-stable parallel current density profile. In the present work, a current profile control algorithm, based on feedback of the fluctuating electric field in Ohm's law, is introduced into the resistive magnetohydrodynamic code DEBSP [D. D. Schnack and D. C. Baxter, J. Comput. Phys. 55, 485 (1984); D. D. Schnack, D. C. Barnes, Z. Mikic, D. S. Marneal, E. J. Caramana, and R. A. Nebel, Comput. Phys. Commun. 43, 17 (1986)]. The resulting radial magnetic field is decreased considerably, causing an increase in energy confinement time and poloidal β. It is found that the parallel current density profile spontaneously becomes hollow, and that a formation, related to persisting resistive g modes, appears close to the reversal surface.
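
    The feedback idea can be caricatured in one dimension: apply an electric field proportional to the deviation of the parallel current profile from a tearing-stable target and let Ohm's law relax the profile toward it. The actual algorithm feeds back on the fluctuating electric field inside a 3D resistive-MHD code; the profiles, gain, and units below are invented for illustration:

        import numpy as np

        r = np.linspace(0.0, 1.0, 50)         # normalized minor radius
        target = 1.0 - 0.8 * r**2             # assumed tearing-stable profile
        j = np.ones_like(r)                   # initially flat current profile
        gain, eta, dt = 0.4, 0.1, 0.05        # feedback gain, resistivity, step

        for step in range(200):
            E_fb = gain * (target - j)        # proportional feedback field
            j += dt * E_fb / eta              # Ohm's law: eta * dj/dt ~ E_fb
        print("max deviation from target:", np.abs(j - target).max())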

  20. Boron neutron capture therapy applie