Sample records for high-performance computing facility

  1. High-Performance Computing User Facility | Computational Science | NREL

    Science.gov Websites

    The High-Performance Computing (HPC) User Facility provides advanced computing technologies, including the Peregrine supercomputer and the Gyrfalcon Mass Storage System. Learn more about these systems and how to access them.

  2. High-Performance Computing Data Center | Energy Systems Integration

    Science.gov Websites

    The Energy Systems Integration Facility's High-Performance Computing Data Center is home to Peregrine, the largest high-performance computing system in the world exclusively dedicated to advancing renewable energy and energy efficiency research.

  3. Facilities | Integrated Energy Solutions | NREL

    Science.gov Websites

    High-performance computing facilities at NREL provide the high-speed computing resources and strategies needed to optimize our entire energy system.

  4. High-Performance Computing and Visualization | Energy Systems Integration

    Science.gov Websites

    High-performance computing (HPC) and visualization at NREL propel technology innovation. NREL is home to Peregrine, the largest high-performance computing system exclusively dedicated to advancing renewable energy and energy efficiency research.

  5. Redirecting Under-Utilised Computer Laboratories into Cluster Computing Facilities

    ERIC Educational Resources Information Center

    Atkinson, John S.; Spenneman, Dirk H. R.; Cornforth, David

    2005-01-01

    Purpose: To provide administrators at an Australian university with data on the feasibility of redirecting under-utilised computer laboratory facilities into a distributed high-performance computing facility. Design/methodology/approach: The individual log-in records for each computer located in the computer laboratories at the university were…

  6. Expanding the Scope of High-Performance Computing Facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uram, Thomas D.; Papka, Michael E.

    The high-performance computing centers of the future will expand their roles as service providers, and as the machines scale up, so should the sizes of the communities they serve. National facilities must cultivate their users as much as they focus on operating machines reliably. The authors present five interrelated topic areas that are essential to expanding the value provided to those performing computational science.

  7. Money for Research, Not for Energy Bills: Finding Energy and Cost Savings in High Performance Computer Facility Designs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drewmark Communications; Sartor, Dale; Wilson, Mark

    2010-07-01

    High-performance computing facilities in the United States consume an enormous amount of electricity, cutting into research budgets and challenging public- and private-sector efforts to reduce energy consumption and meet environmental goals. However, these facilities can greatly reduce their energy demand through energy-efficient design of the facility itself. Using a case study of a facility under design, this article discusses strategies and technologies that can be used to help achieve energy reductions.

  8. HEPCloud, a New Paradigm for HEP Facilities: CMS Amazon Web Services Investigation

    DOE PAGES

    Holzman, Burt; Bauerdick, Lothar A. T.; Bockelman, Brian; ...

    2017-09-29

    Historically, high energy physics computing has been performed on large purpose-built computing systems. These began as single-site compute facilities, but have evolved into the distributed computing grids used today. Recently, there has been an exponential increase in the capacity and capability of commercial clouds. Cloud resources are highly virtualized and intended to be flexibly deployable for a variety of computing tasks. There is a growing interest among the cloud providers to demonstrate the capability to perform large-scale scientific computing. In this paper, we discuss results from the CMS experiment using the Fermilab HEPCloud facility, which utilized both local Fermilab resources and virtual machines in the Amazon Web Services Elastic Compute Cloud. We discuss the planning, technical challenges, and lessons learned involved in performing physics workflows on a large-scale set of virtualized resources. Additionally, we discuss the economics and operational efficiencies when executing workflows both in the cloud and on dedicated resources.

  10. Specialized computer architectures for computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Stevenson, D. K.

    1978-01-01

    In recent years, computational fluid dynamics has made significant progress in modelling aerodynamic phenomena. Currently, one of the major barriers to future development lies in the compute-intensive nature of the numerical formulations and the relatively high cost of performing these computations on commercially available general-purpose computers, a cost that is high in terms of both dollar expenditure and elapsed time. Today's computing technology will support a program designed to create specialized computing facilities to be dedicated to the important problems of computational aerodynamics. One of the still unresolved questions is the organization of the computing components in such a facility. The characteristics of fluid dynamic problems which will have significant impact on the choice of computer architecture for a specialized facility are reviewed.

  11. DOE High Performance Computing Operational Review (HPCOR): Enabling Data-Driven Scientific Discovery at HPC Facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerber, Richard; Allcock, William; Beggio, Chris

    2014-10-17

    U.S. Department of Energy (DOE) High Performance Computing (HPC) facilities are on the verge of a paradigm shift in the way they deliver systems and services to science and engineering teams. Research projects are producing a wide variety of data at unprecedented scale and level of complexity, with community-specific services that are part of the data collection and analysis workflow. On June 18-19, 2014, representatives from six DOE HPC centers met in Oakland, CA, at the DOE High Performance Computing Operational Review (HPCOR) to discuss how they can best provide facilities and services to enable large-scale data-driven scientific discovery at the DOE national laboratories. The report contains findings from that review.

  12. On Laminar to Turbulent Transition of Arc-Jet Flow in the NASA Ames Panel Test Facility

    NASA Technical Reports Server (NTRS)

    Gokcen, Tahir; Alunni, Antonella I.

    2012-01-01

    This paper provides experimental evidence and supporting computational analysis to characterize the laminar-to-turbulent flow transition in a high-enthalpy arc-jet facility at NASA Ames Research Center. The arc-jet test data obtained in the 20 MW Panel Test Facility include measurements of surface pressure and heat flux on a water-cooled calibration plate, and measurements of surface temperature on a reaction-cured glass coated tile plate. Computational fluid dynamics simulations are performed to characterize the arc-jet test environment and estimate its parameters consistent with the facility and calibration measurements. The present analysis comprises simulations of the nonequilibrium flowfield in the facility nozzle, test box, and flowfield over test articles. Both laminar and turbulent simulations are performed, and the computed results are compared with the experimental measurements, including Stanton number dependence on Reynolds number. Comparisons of computed and measured surface heat fluxes (and temperatures), along with the accompanying analysis, confirm that the boundary layer in the Panel Test Facility flow is transitional at certain arc-heater conditions.

  13. Prediction and characterization of application power use in a high-performance computing environment

    DOE PAGES

    Bugbee, Bruce; Phillips, Caleb; Egan, Hilary; ...

    2017-02-27

    Power use in data centers and high-performance computing (HPC) facilities has grown in tandem with increases in the size and number of these facilities. Substantial innovation is needed to enable meaningful reduction in energy footprints in leadership-class HPC systems. In this paper, we focus on characterizing and investigating application-level power usage. We demonstrate potential methods for predicting power usage based on a priori and in situ characteristics. Lastly, we highlight a potential use case of this method through a simulated power-aware scheduler using historical jobs from a real scientific HPC system.
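
    As a rough illustration of the power-aware scheduling idea the authors simulate, the sketch below starts a queued job only when its predicted power draw fits under a facility-wide cap. All names and power figures here are hypothetical, not taken from the paper.

    ```python
    # Hypothetical sketch of a power-aware scheduler: each job carries an
    # a-priori power prediction, and a queued job is started only if the
    # predicted facility total stays under a site-wide power cap.
    from dataclasses import dataclass, field

    @dataclass
    class Job:
        name: str
        predicted_kw: float   # a-priori power prediction for this job

    @dataclass
    class PowerAwareScheduler:
        power_cap_kw: float
        running: list = field(default_factory=list)

        def used_kw(self) -> float:
            return sum(j.predicted_kw for j in self.running)

        def try_start(self, job: Job) -> bool:
            """Start the job only if the facility stays under the cap."""
            if self.used_kw() + job.predicted_kw <= self.power_cap_kw:
                self.running.append(job)
                return True
            return False  # job stays queued until power headroom appears

    sched = PowerAwareScheduler(power_cap_kw=100.0)
    queue = [Job("cfd_run", 60.0), Job("md_sim", 55.0), Job("post_proc", 30.0)]
    for j in queue:
        print(j.name, "started" if sched.try_start(j) else "deferred")
    ```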

  14. Using high-performance networks to enable computational aerosciences applications

    NASA Technical Reports Server (NTRS)

    Johnson, Marjory J.

    1992-01-01

    One component of the U.S. Federal High Performance Computing and Communications Program (HPCCP) is the establishment of a gigabit network to provide a communications infrastructure for researchers across the nation. This gigabit network will provide new services and capabilities, in addition to increased bandwidth, to enable future applications. An understanding of these applications is necessary to guide the development of the gigabit network and other high-performance networks of the future. In this paper we focus on computational aerosciences applications run remotely using the Numerical Aerodynamic Simulation (NAS) facility located at NASA Ames Research Center. We characterize these applications in terms of network-related parameters and relate user experiences that reveal limitations imposed by the current wide-area networking infrastructure. Then we investigate how the development of a nationwide gigabit network would enable users of the NAS facility to work in new, more productive ways.

  15. Construction of Blaze at the University of Illinois at Chicago: A Shared, High-Performance, Visual Computer for Next-Generation Cyberinfrastructure-Accelerated Scientific, Engineering, Medical and Public Policy Research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Maxine D.; Leigh, Jason

    2014-02-17

    The Blaze high-performance visual computing system serves the high-performance computing research and education needs of the University of Illinois at Chicago (UIC). Blaze consists of a state-of-the-art, networked computer cluster and an ultra-high-resolution visualization system called CAVE2(TM) that is currently not available anywhere else in Illinois. This system is connected via a high-speed 100-Gigabit network to the State of Illinois' I-WIRE optical network, as well as to national and international high-speed networks, such as Internet2 and the Global Lambda Integrated Facility. This enables Blaze to serve as an on-ramp to national cyberinfrastructure, such as the National Science Foundation's Blue Waters petascale computer at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign and the Department of Energy's Argonne Leadership Computing Facility (ALCF) at Argonne National Laboratory. DOE award # DE-SC005067, leveraged with NSF award #CNS-0959053 for "Development of the Next-Generation CAVE Virtual Environment (NG-CAVE)," enabled us to create a first-of-its-kind high-performance visual computing system. The UIC Electronic Visualization Laboratory (EVL) worked with two U.S. companies to advance their commercial products and maintain U.S. leadership in the global information technology economy. New applications enabled by the CAVE2/Blaze visual computing system are advancing scientific research and education in the U.S. and globally and helping train the next-generation workforce.

  16. High performance network and channel-based storage

    NASA Technical Reports Server (NTRS)

    Katz, Randy H.

    1991-01-01

    In the traditional mainframe-centered view of a computer system, storage devices are coupled to the system through complex hardware subsystems called input/output (I/O) channels. With the dramatic shift towards workstation-based computing, and its associated client/server model of computation, storage facilities are now found attached to file servers and distributed throughout the network. We discuss the underlying technology trends that are leading to high performance network-based storage, namely advances in networks, storage devices, and I/O controller and server architectures. We review several commercial systems and research prototypes that are leading to a new approach to high performance computing based on network-attached storage.

  17. LBNL Computational Research and Theory Facility Groundbreaking - Full Press Conference. February 1st, 2012

    ScienceCinema

    Yelick, Kathy

    2018-01-24

    Energy Secretary Steven Chu, along with Berkeley Lab and UC leaders, broke ground on the Lab's Computational Research and Theory (CRT) facility yesterday. The CRT will be at the forefront of high-performance supercomputing research and be DOE's most efficient facility of its kind. Joining Secretary Chu as speakers were Lab Director Paul Alivisatos, UC President Mark Yudof, Office of Science Director Bill Brinkman, and UC Berkeley Chancellor Robert Birgeneau. The festivities were emceed by Associate Lab Director for Computing Sciences, Kathy Yelick, and Berkeley Mayor Tom Bates joined in the shovel ceremony.

  18. LBNL Computational Research and Theory Facility Groundbreaking. February 1st, 2012

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yelick, Kathy

    2012-02-02

    Energy Secretary Steven Chu, along with Berkeley Lab and UC leaders, broke ground on the Lab's Computational Research and Theory (CRT) facility yesterday. The CRT will be at the forefront of high-performance supercomputing research and be DOE's most efficient facility of its kind. Joining Secretary Chu as speakers were Lab Director Paul Alivisatos, UC President Mark Yudof, Office of Science Director Bill Brinkman, and UC Berkeley Chancellor Robert Birgeneau. The festivities were emceed by Associate Lab Director for Computing Sciences, Kathy Yelick, and Berkeley Mayor Tom Bates joined in the shovel ceremony.

  20. Sandia QIS Capabilities.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muller, Richard P.

    2017-07-01

    Sandia National Laboratories has developed a broad set of capabilities in quantum information science (QIS), including elements of quantum computing, quantum communications, and quantum sensing. The Sandia QIS program is built atop unique DOE investments at the laboratories, including the MESA microelectronics fabrication facility, the Center for Integrated Nanotechnologies (CINT) facilities (joint with LANL), the Ion Beam Laboratory, and ASC High Performance Computing (HPC) facilities. Sandia has invested $75 M of LDRD funding over 12 years to develop unique, differentiating capabilities that leverage these DOE infrastructure investments.

  1. High-Performance Computing Data Center Efficiency Dashboard

    Science.gov Websites

    The dashboard tracks components of the data center's cooling and energy-recovery systems: the energy-recovery water (ERW) loop, a heat exchanger for energy recovery, a thermosyphon, a heat exchanger between the ERW loop and the cooling tower loop, and evaporative cooling towers. Learn more about our energy-efficient facility.

  2. Data Transfer Study HPSS Archiving

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wynne, James; Parete-Koon, Suzanne T; Mitchell, Quinn

    2015-01-01

    The movement of the large amounts of data produced by codes run in a High Performance Computing (HPC) environment can be a bottleneck for project workflows. To balance filesystem capacity and performance requirements, HPC centers enforce data management policies that purge old files to make room for new computation and analysis results. Users at the Oak Ridge Leadership Computing Facility (OLCF) and many other HPC user facilities must archive data to avoid data loss during purges; therefore, the time associated with data movement for archiving is something that all users must consider. This study observed the difference in transfer speed from the originating location on the Lustre filesystem to the more permanent High Performance Storage System (HPSS). The tests were done with a number of different transfer methods for files that spanned a variety of sizes and compositions that reflect OLCF user data. This data will be used to help users of Titan and other Cray supercomputers plan their workflows and data transfers so that they are most efficient for their projects. We also discuss best practices for maintaining data at shared user facilities.
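
    For readers planning similar measurements, a minimal timing wrapper around htar, one of the standard HPSS client tools at OLCF, might look like the following sketch. The archive path and source directory are placeholders, and availability of htar is site-specific.

    ```python
    # Minimal sketch: time an archive transfer to HPSS with htar (the HPSS
    # bundling client; assumed present on the system). Paths are examples.
    import subprocess
    import time

    def time_htar(archive_name: str, source_dir: str) -> float:
        """Bundle source_dir into an HPSS archive; return elapsed seconds."""
        start = time.monotonic()
        subprocess.run(["htar", "-cvf", archive_name, source_dir], check=True)
        return time.monotonic() - start

    # Many small files usually transfer far slower than one bundled archive,
    # which is why bundling with htar is a common best practice.
    elapsed = time_htar("run42.tar", "run42/")
    print(f"archived in {elapsed:.1f} s")
    ```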

  3. Making Cloud Computing Available For Researchers and Innovators (Invited)

    NASA Astrophysics Data System (ADS)

    Winsor, R.

    2010-12-01

    High Performance Computing (HPC) facilities exist in most academic institutions but are almost invariably over-subscribed. Access is allocated based on academic merit, the only practical method of assigning valuable finite compute resources. Cloud computing, on the other hand, and particularly commercial clouds, draws flexibly on an almost limitless resource, as long as the user has sufficient funds to pay the bill. How can the commercial cloud model be applied to scientific computing? Is there a case to be made for a publicly available research cloud, and how would it be structured? This talk will explore these themes and describe how Cybera, a not-for-profit non-governmental organization in Alberta, Canada, aims to leverage its high-speed research and education network to provide cloud computing facilities for a much wider user base.

  4. Partners | Energy Systems Integration Facility | NREL

    Science.gov Websites

    Partner projects at the Energy Systems Integration Facility include renewable electricity-to-grid integration, evaluation of new technology (IGBT), high-performance computing and visualization, real-time data collection, and end-to-end communication and control, with partners including Asetek, Schneider Electric, and the Energy Commission.

  5. Validation of MCNP6 Version 1.0 with the ENDF/B-VII.1 Cross Section Library for Plutonium Metals, Oxides, and Solutions on the High Performance Computing Platform Moonlight

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chapman, Bryan Scott; Gough, Sean T.

    This report documents a validation of the MCNP6 Version 1.0 computer code on the high performance computing platform Moonlight, for operations at Los Alamos National Laboratory (LANL) that involve plutonium metals, oxides, and solutions. The validation is conducted using the ENDF/B-VII.1 continuous energy group cross section library at room temperature. The results are for use by nuclear criticality safety personnel in performing analysis and evaluation of various facility activities involving plutonium materials.

  6. Unified, Cross-Platform, Open-Source Library Package for High-Performance Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kozacik, Stephen

    Compute power is continually increasing, but this increased performance is largely found in sophisticated computing devices and supercomputer resources that are difficult to use, resulting in under-utilization. We developed a unified set of programming tools that will allow users to take full advantage of the new technology by allowing them to work at a level abstracted away from the platform specifics, encouraging the use of modern computing systems, including government-funded supercomputer facilities.

  7. GPU-based High-Performance Computing for Radiation Therapy

    PubMed Central

    Jia, Xun; Ziegenhein, Peter; Jiang, Steve B.

    2014-01-01

    Recent developments in radiation therapy demand high computational power to solve challenging problems in a timely fashion in a clinical environment. The graphics processing unit (GPU), as an emerging high-performance computing platform, has been introduced to radiotherapy. It is particularly attractive due to its high computational power, small size, and low cost for facility deployment and maintenance. Over the past few years, GPU-based high-performance computing in radiotherapy has experienced rapid development. A tremendous number of studies have been conducted, in which large acceleration factors compared with the conventional CPU platform have been observed. In this article, we first give a brief introduction to the GPU hardware structure and programming model. We then review the current applications of GPU in major imaging-related and therapy-related problems encountered in radiotherapy. A comparison of GPU with other platforms is also presented. PMID:24486639
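
    The acceleration factors described come from data-parallel kernels. As an illustration not taken from the article, the CuPy library mirrors enough of the NumPy API that a toy dose-accumulation step can run unchanged on either platform, assuming CuPy and a CUDA GPU are available.

    ```python
    # Illustrative sketch: the same elementwise dose-accumulation step runs
    # on CPU (NumPy) or GPU (CuPy) because the two libraries share an API.
    import numpy as np

    def accumulate_dose(xp, n=2_000_000):
        """Toy data-parallel step; xp is the array module (numpy or cupy)."""
        fluence = xp.ones(n, dtype=xp.float32)
        kernel = xp.full(n, 0.01, dtype=xp.float32)
        return float((fluence * kernel).sum())

    print("CPU:", accumulate_dose(np))
    try:
        import cupy as cp
        print("GPU:", accumulate_dose(cp))  # same code, GPU-resident arrays
    except ImportError:
        print("CuPy not available; GPU path skipped")
    ```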

  8. High Performance Computing Operations Review Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cupps, Kimberly C.

    2013-12-19

    The High Performance Computing Operations Review (HPCOR) meeting—requested by the ASC and ASCR program headquarters at DOE—was held November 5 and 6, 2013, at the Marriott Hotel in San Francisco, CA. The purpose of the review was to discuss the processes and practices for HPC integration and its related software and facilities. Experiences and lessons learned from the most recent systems deployed were covered in order to benefit the deployment of new systems.

  9. Experimental investigation of nozzle/plume aerodynamics at hypersonic speeds

    NASA Technical Reports Server (NTRS)

    Bogdanoff, David W.; Cambier, Jean-Luc; Papadopoulos, Perikles

    1994-01-01

    Much of the work involved the Ames 16-Inch Shock Tunnel facility. The facility was reactivated and upgraded, a data acquisition system was configured and upgraded several times, several facility calibrations were performed and test entries with a wedge model with hydrogen injection and a full scramjet combustor model, with hydrogen injection, were performed. Extensive CFD modeling of the flow in the facility was done. This includes modeling of the unsteady flow in the driver and driven tubes and steady flow modeling of the nozzle flow. Other modeling efforts include simulations of non-equilibrium flows and turbulence, plasmas, light gas guns and the use of non-ideal gas equations of state. New experimental techniques to improve the performance of gas guns, shock tubes and tunnels and scramjet combustors were conceived and studied computationally. Ways to improve scramjet engine performance using steady and pulsed detonation waves were also studied computationally. A number of studies were performed on the operation of the ram accelerator, including investigations of in-tube gasdynamic heating and the use of high explosives to raise the velocity capability of the device.

  10. Computational biomedicine: a challenge for the twenty-first century.

    PubMed

    Coveney, Peter V; Shublaq, Nour W

    2012-01-01

    With the relentless increase of computer power and the widespread availability of digital patient-specific medical data, we are now entering an era when it is becoming possible to develop predictive models of human disease and pathology, which can be used to support and enhance clinical decision-making. The approach amounts to a grand challenge to computational science insofar as we need to be able to provide seamless yet secure access to large-scale heterogeneous personal healthcare data, typically integrated into complex workflows, some parts of which may need to run on high-performance computers, in a facile way that is integrated into clinical decision-support software. In this paper, we review the state of the art in terms of case studies drawn from neurovascular pathologies and HIV/AIDS. These studies are representative of a large number of projects currently being performed within the Virtual Physiological Human initiative. They make demands of information technology at many scales, from the desktop to national and international infrastructures for data storage and processing, linked by high performance networks.

  11. HPC Annual Report 2017

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dennig, Yasmin

    Sandia National Laboratories has a long history of significant contributions to the high-performance computing community and industry. Our innovative computer architectures allowed the United States to become the first to break the teraFLOP barrier, propelling us to the international spotlight. Our advanced simulation and modeling capabilities have been integral in high-consequence US operations such as Operation Burnt Frost. Strong partnerships with industry leaders, such as Cray, Inc. and Goodyear, have enabled them to leverage our high performance computing (HPC) capabilities to gain a tremendous competitive edge in the marketplace. As part of our continuing commitment to providing modern computing infrastructure and systems in support of Sandia missions, we made a major investment in expanding Building 725 to serve as the new home of HPC systems at Sandia. Work is expected to be completed in 2018 and will result in a modern facility of approximately 15,000 square feet of computer center space. The facility will be ready to house the newest National Nuclear Security Administration/Advanced Simulation and Computing (NNSA/ASC) prototype platform being acquired by Sandia, with delivery in late 2019 or early 2020. This new system will enable continuing advances by Sandia science and engineering staff in the areas of operating system R&D, operational cost effectiveness (power and innovative cooling technologies), user environment, and application code performance.

  12. Rapid prototyping facility for flight research in artificial-intelligence-based flight systems concepts

    NASA Technical Reports Server (NTRS)

    Duke, E. L.; Regenie, V. A.; Deets, D. A.

    1986-01-01

    The Dryden Flight Research Facility of the NASA Ames Research Center is developing a rapid prototyping facility for flight research in flight systems concepts that are based on artificial intelligence (AI). The facility will include real-time high-fidelity aircraft simulators, conventional and symbolic processors, and a high-performance research aircraft specially modified to accept commands from the ground-based AI computers. This facility is being developed as part of the NASA-DARPA automated wingman program. This document discusses the need for flight research and for a national flight research facility for the rapid prototyping of AI-based avionics systems and the NASA response to those needs.

  13. A rapid prototyping facility for flight research in advanced systems concepts

    NASA Technical Reports Server (NTRS)

    Duke, Eugene L.; Brumbaugh, Randal W.; Disbrow, James D.

    1989-01-01

    The Dryden Flight Research Facility of the NASA Ames Research Center is developing a rapid prototyping facility for flight research in flight systems concepts that are based on artificial intelligence (AI). The facility will include real-time high-fidelity aircraft simulators, conventional and symbolic processors, and a high-performance research aircraft specially modified to accept commands from the ground-based AI computers. This facility is being developed as part of the NASA-DARPA automated wingman program. This document discusses the need for flight research and for a national flight research facility for the rapid prototyping of AI-based avionics systems and the NASA response to those needs.

  14. The development of an automated flight test management system for flight test planning and monitoring

    NASA Technical Reports Server (NTRS)

    Hewett, Marle D.; Tartt, David M.; Duke, Eugene L.; Antoniewicz, Robert F.; Brumbaugh, Randal W.

    1988-01-01

    The development of an automated flight test management system (ATMS) as a component of a rapid-prototyping flight research facility for AI-based flight systems concepts is described. The rapid-prototyping facility includes real-time high-fidelity simulators, numeric and symbolic processors, and high-performance research aircraft modified to accept commands from a ground-based remotely augmented vehicle facility. The flight system configuration of the ATMS includes three computers: the TI Explorer LX and two Gould SEL 32/27s.

  15. Development and application of computational aerothermodynamics flowfield computer codes

    NASA Technical Reports Server (NTRS)

    Venkatapathy, Ethiraj

    1994-01-01

    Research was performed in the area of computational modeling and application of hypersonic, high-enthalpy, thermo-chemical nonequilibrium flow (aerothermodynamics) problems. A number of computational fluid dynamic (CFD) codes were developed and applied to simulate high-altitude rocket plumes, the Aeroassist Flight Experiment (AFE), hypersonic base flow for planetary probes, the single expansion ramp nozzle (SERN) connected with the National Aerospace Plane, hypersonic drag devices, hypersonic ramp flows, ballistic range models, shock tunnel facility nozzles, transient and steady flows in the shock tunnel facility, arc-jet flows, thermochemical nonequilibrium flows around simple and complex bodies, axisymmetric ionized flows of interest to re-entry, unsteady shock-induced combustion phenomena, high-enthalpy pulsed facility simulations, and unsteady shock boundary layer interactions in shock tunnels. Computational modeling involved developing appropriate numerical schemes for the flows of interest and developing, applying, and validating appropriate thermochemical processes. As part of improving the accuracy of the numerical predictions, adaptive grid algorithms were explored, and a user-friendly, self-adaptive code (SAGE) was developed. Aerothermodynamic flows of interest included energy transfer due to strong radiation, and a significant level of effort was spent in developing computational codes for calculating radiation and radiation modeling. In addition, computational tools were developed and applied to predict the radiative heat flux and spectra that reach the model surface.

  16. High Performance Distributed Computing in a Supercomputer Environment: Computational Services and Applications Issues

    NASA Technical Reports Server (NTRS)

    Kramer, Williams T. C.; Simon, Horst D.

    1994-01-01

    This tutorial proposes to be a practical guide for the uninitiated to the main topics and themes of high-performance computing (HPC), with particular emphasis on distributed computing. The intent is first to provide some guidance and directions in the rapidly growing field of scientific computing using both massively parallel and traditional supercomputers. Because of their considerable potential computational power, loosely or tightly coupled clusters of workstations are increasingly considered as a third alternative to both the more conventional supercomputers based on a small number of powerful vector processors and massively parallel processors. Even though many research issues concerning the effective use of workstation clusters and their integration into a large-scale production facility are still unresolved, such clusters are already used for production computing. In this tutorial we utilize the unique experience gained at the NAS facility at NASA Ames Research Center. Over the last five years at NAS, massively parallel supercomputers such as the Connection Machines CM-2 and CM-5 from Thinking Machines Corporation and the iPSC/860 (Touchstone Gamma Machine) and Paragon machines from Intel were used in a production supercomputer center alongside traditional vector supercomputers such as the Cray Y-MP and C90.

  17. Crosscut report: Exascale Requirements Reviews, March 9–10, 2017 – Tysons Corner, Virginia. An Office of Science review sponsored by: Advanced Scientific Computing Research, Basic Energy Sciences, Biological and Environmental Research, Fusion Energy Sciences, High Energy Physics, Nuclear Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerber, Richard; Hack, James; Riley, Katherine

    The mission of the U.S. Department of Energy Office of Science (DOE SC) is the delivery of scientific discoveries and major scientific tools to transform our understanding of nature and to advance the energy, economic, and national security missions of the United States. To achieve these goals in today's world requires investments not only in the traditional scientific endeavors of theory and experiment, but also in computational science and the facilities that support large-scale simulation and data analysis. The Advanced Scientific Computing Research (ASCR) program addresses these challenges in the Office of Science. ASCR's mission is to discover, develop, and deploy computational and networking capabilities to analyze, model, simulate, and predict complex phenomena important to DOE. ASCR supports research in computational science, three high-performance computing (HPC) facilities — the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory and Leadership Computing Facilities at Argonne (ALCF) and Oak Ridge (OLCF) National Laboratories — and the Energy Sciences Network (ESnet) at Berkeley Lab. ASCR is guided by science needs as it develops research programs, computers, and networks at the leading edge of technologies. As we approach the era of exascale computing, technology changes are creating challenges for science programs in SC for those who need to use high performance computing and data systems effectively. Numerous significant modifications to today's tools and techniques will be needed to realize the full potential of emerging computing systems and other novel computing architectures. To assess these needs and challenges, ASCR held a series of Exascale Requirements Reviews in 2015-2017, one with each of the six SC program offices, and a subsequent Crosscut Review that sought to integrate the findings from each. Participants at the reviews were drawn from the communities of leading domain scientists, experts in computer science and applied mathematics, ASCR facility staff, and DOE program managers in ASCR and the respective program offices. The purpose of these reviews was to identify mission-critical scientific problems within the DOE Office of Science (including experimental facilities) and determine the requirements for the exascale ecosystem that would be needed to address those challenges. The exascale ecosystem includes exascale computing systems, high-end data capabilities, efficient software at scale, libraries, tools, and other capabilities. This effort will contribute to the development of a strategic roadmap for ASCR compute and data facility investments and will help the ASCR Facility Division establish partnerships with Office of Science stakeholders. It will also inform the Office of Science research needs and agenda. The results of the six reviews have been published in reports available on the web at http://exascaleage.org/. This report presents a summary of the individual reports and of common and crosscutting findings, and it identifies opportunities for productive collaborations among the DOE SC program offices.

  18. Grids and clouds in the Czech NGI

    NASA Astrophysics Data System (ADS)

    Kundrát, Jan; Adam, Martin; Adamová, Dagmar; Chudoba, Jiří; Kouba, Tomáš; Lokajíček, Miloš; Mikula, Alexandr; Říkal, Václav; Švec, Jan; Vohnout, Rudolf

    2016-09-01

    There are several infrastructure operators within the Czech Republic NGI (National Grid Initiative) which provide users with access to high-performance computing facilities over a grid and cloud interface. This article focuses on those where the primary author has personal first-hand experience. We cover some operational issues as well as the history of these facilities.

  19. National research and education network

    NASA Technical Reports Server (NTRS)

    Villasenor, Tony

    1991-01-01

    Some goals of this network are as follows: Extend U.S. technological leadership in high performance computing and computer communications; Provide wide dissemination and application of the technologies both to speed the pace of innovation and to serve the national economy, national security, education, and the global environment; and Spur gains in U.S. productivity and industrial competitiveness by making high-performance computing and networking technologies an integral part of the design and production process. Strategies for achieving these goals are as follows: Support solutions to important scientific and technical challenges through a vigorous R&D effort; Reduce the uncertainties to industry for R&D and use of this technology through increased cooperation between government, industry, and universities and by the continued use of government and government-funded facilities as prototype users for early commercial HPCC products; and Support underlying research, network, and computational infrastructures on which U.S. high-performance computing technology is based.

  20. Monitoring SLAC High Performance UNIX Computing Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lettsome, Annette K.; /Bethune-Cookman Coll. /SLAC

    2005-12-15

    Knowledge of the effectiveness and efficiency of computers is important when working with high performance systems. The monitoring of such systems is advantageous in order to foresee possible misfortunes or system failures. Ganglia is a software system designed for high performance computing systems to retrieve specific monitoring information. An alternative storage facility for Ganglia's collected data is needed since its default storage system, the round-robin database (RRD), struggles with data integrity. The creation of a script-driven MySQL database solves this dilemma. This paper describes the process taken in the creation and implementation of the MySQL database for use by Ganglia. Comparisons between data storage by both databases are made using gnuplot and Ganglia's real-time graphical user interface.
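
    A minimal sketch of the script-driven approach described, with a hypothetical schema; Python's standard-library sqlite3 stands in here so the example is self-contained, whereas the actual implementation targeted MySQL.

    ```python
    # Sketch (hypothetical schema): persist node metrics in a SQL table
    # instead of Ganglia's round-robin database. sqlite3 is a stand-in.
    import sqlite3
    import time

    conn = sqlite3.connect(":memory:")
    conn.execute("""CREATE TABLE metrics (
        host TEXT, metric TEXT, value REAL, ts INTEGER)""")

    def record(host: str, metric: str, value: float) -> None:
        """Insert one monitoring sample with a timestamp."""
        conn.execute("INSERT INTO metrics VALUES (?, ?, ?, ?)",
                     (host, metric, value, int(time.time())))
        conn.commit()

    record("node001", "load_one", 0.42)
    record("node001", "mem_free_kb", 1_234_567)
    for row in conn.execute("SELECT * FROM metrics"):
        print(row)
    ```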

  1. ESIF 2016: Modernizing Our Grid and Energy System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Becelaere, Kimberly

    This 2016 annual report highlights work conducted at the Energy Systems Integration Facility (ESIF) in FY 2016, including grid modernization, high-performance computing and visualization, and INTEGRATE projects.

  2. The multi-modal Australian ScienceS Imaging and Visualization Environment (MASSIVE) high performance computing infrastructure: applications in neuroscience and neuroinformatics research

    PubMed Central

    Goscinski, Wojtek J.; McIntosh, Paul; Felzmann, Ulrich; Maksimenko, Anton; Hall, Christopher J.; Gureyev, Timur; Thompson, Darren; Janke, Andrew; Galloway, Graham; Killeen, Neil E. B.; Raniga, Parnesh; Kaluza, Owen; Ng, Amanda; Poudel, Govinda; Barnes, David G.; Nguyen, Toan; Bonnington, Paul; Egan, Gary F.

    2014-01-01

    The Multi-modal Australian ScienceS Imaging and Visualization Environment (MASSIVE) is a national imaging and visualization facility established by Monash University, the Australian Synchrotron, the Commonwealth Scientific and Industrial Research Organisation (CSIRO), and the Victorian Partnership for Advanced Computing (VPAC), with funding from the National Computational Infrastructure and the Victorian Government. The MASSIVE facility provides hardware, software, and expertise to drive research in the biomedical sciences, particularly advanced brain imaging research using synchrotron x-ray and infrared imaging, functional and structural magnetic resonance imaging (MRI), x-ray computed tomography (CT), electron microscopy and optical microscopy. The development of MASSIVE has been based on best practice in system integration methodologies, frameworks, and architectures. The facility has: (i) integrated multiple different neuroimaging analysis software components, (ii) enabled cross-platform and cross-modality integration of neuroinformatics tools, and (iii) brought together neuroimaging databases and analysis workflows. MASSIVE is now operational as a nationally distributed and integrated facility for neuroinformatics and brain imaging research. PMID:24734019

  3. Ubiquitous Green Computing Techniques for High Demand Applications in Smart Environments

    PubMed Central

    Zapater, Marina; Sanchez, Cesar; Ayala, Jose L.; Moya, Jose M.; Risco-Martín, José L.

    2012-01-01

    Ubiquitous sensor network deployments, such as the ones found in Smart cities and Ambient intelligence applications, impose constantly increasing computational demands in order to process data and offer services to users. The nature of these applications implies the use of data centers. Research has paid much attention to the energy consumption of the sensor nodes in WSN infrastructures. However, supercomputing facilities are the ones presenting a higher economic and environmental impact due to their very high power consumption. The latter problem, however, has been disregarded in the field of smart environment services. This paper proposes an energy-minimization workload assignment technique, based on heterogeneity and application-awareness, that redistributes low-demand computational tasks from high-performance facilities to idle nodes with low and medium resources in the WSN infrastructure. These non-optimal allocation policies reduce the energy consumed by the whole infrastructure and the total execution time. PMID:23112621
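
    A toy version of the application-aware, energy-minimizing assignment described might look like this; node capacities, per-unit power costs, and task demands are invented for illustration and are not from the paper.

    ```python
    # Hedged sketch of energy-minimizing placement: each task goes to the
    # idle node that would burn the least energy running it.
    def assign(tasks, nodes):
        """tasks: {name: cpu_demand}; nodes: {name: (capacity, watts_per_unit)}."""
        placement = {}
        free = {n: cap for n, (cap, _) in nodes.items()}
        for task, demand in sorted(tasks.items(), key=lambda kv: -kv[1]):
            # Among nodes with room, pick the lowest energy cost.
            feasible = [n for n in free if free[n] >= demand]
            best = min(feasible, key=lambda n: demand * nodes[n][1])
            placement[task] = best
            free[best] -= demand
        return placement

    nodes = {"hpc": (100, 5.0), "edge1": (10, 1.2), "edge2": (10, 1.0)}
    tasks = {"video_feed": 4, "sensor_agg": 2, "big_sim": 60}
    print(assign(tasks, nodes))  # low-demand tasks land on cheap edge nodes
    ```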

  5. A Benders based rolling horizon algorithm for a dynamic facility location problem

    DOE PAGES

    Marufuzzaman, Mohammad; Gedik, Ridvan; Roni, Mohammad S.

    2016-06-28

    This study presents a well-known capacitated dynamic facility location problem (DFLP) that satisfies the customer demand at a minimum cost by determining the time period for opening, closing, or retaining an existing facility in a given location. To solve this challenging NP-hard problem, this paper develops a unique hybrid solution algorithm that combines a rolling horizon algorithm with an accelerated Benders decomposition algorithm. Extensive computational experiments are performed on benchmark test instances to evaluate the hybrid algorithm's efficiency and robustness in solving the DFLP problem. Computational results indicate that the hybrid Benders-based rolling horizon algorithm consistently offers high-quality feasible solutions in a much shorter computational time period than the standalone rolling horizon and accelerated Benders decomposition algorithms in the experimental range.
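
    The rolling-horizon structure can be sketched as follows; the per-window Benders-decomposed subproblem is replaced here by a trivial placeholder, so this only illustrates the fix-and-roll control flow, not the paper's algorithm. All numbers and names are invented.

    ```python
    # Rolling-horizon skeleton: solve a short look-ahead window, fix the
    # first period's decisions, then roll the window forward one period.
    def solve_window(demand_window, open_now):
        """Placeholder for the per-window (Benders-decomposed) subproblem:
        open a facility whenever windowed demand exceeds current capacity."""
        decision = set(open_now)
        need = max(demand_window)
        while 10 * len(decision) < need:   # toy capacity: 10 units/facility
            decision.add(f"fac{len(decision)}")
        return decision

    demand = [8, 14, 23, 25, 18, 9]
    window = 2
    open_facilities = set()
    for t in range(len(demand)):
        open_facilities = solve_window(demand[t:t + window], open_facilities)
        print(f"period {t}: open = {sorted(open_facilities)}")  # fix, then roll
    ```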

  6. NASA low-speed centrifugal compressor for 3-D viscous code assessment and fundamental flow physics research

    NASA Technical Reports Server (NTRS)

    Hathaway, M. D.; Wood, J. R.; Wasserbauer, C. A.

    1991-01-01

    A low-speed centrifugal compressor facility recently built by the NASA Lewis Research Center is described. The purpose of this facility is to obtain detailed flow field measurements for computational fluid dynamic code assessment and flow physics modeling in support of Army and NASA efforts to advance small gas turbine engine technology. The facility is heavily instrumented with pressure and temperature probes, both in the stationary and rotating frames of reference, and has provisions for flow visualization and laser velocimetry. The facility will accommodate rotational speeds to 2400 rpm and is rated at pressures to 1.25 atm. The initial compressor stage being tested is geometrically and dynamically representative of modern high-performance centrifugal compressor stages, with the exception of Mach number levels. Preliminary experimental investigations of inlet and exit flow uniformity and measurement repeatability are presented. These results demonstrate the high quality of the data which may be expected from this facility. The significance of synergism between computational fluid dynamic analysis and experimentation throughout the development of the low-speed centrifugal compressor facility is demonstrated.

  7. Instrument Systems Analysis and Verification Facility (ISAVF) users guide

    NASA Technical Reports Server (NTRS)

    Davis, J. F.; Thomason, J. O.; Wolfgang, J. L.

    1985-01-01

    The ISAVF facility is primarily an interconnected system of computers, special-purpose real-time hardware, and associated generalized software systems, which will permit instrument system analysts, design engineers, and instrument scientists to perform trade-off studies, specification development, instrument modeling, and verification of instrument hardware performance. It is not the intent of the ISAVF to duplicate or replace existing special-purpose facilities such as the Code 710 Optical Laboratories or the Code 750 Test and Evaluation facilities. The ISAVF will provide data acquisition and control services for these facilities, as needed, using remote computer stations attached to the main ISAVF computers via dedicated communication lines.

  8. High-Resiliency and Auto-Scaling of Large-Scale Cloud Computing for OCO-2 L2 Full Physics Processing

    NASA Astrophysics Data System (ADS)

    Hua, H.; Manipon, G.; Starch, M.; Dang, L. B.; Southam, P.; Wilson, B. D.; Avis, C.; Chang, A.; Cheng, C.; Smyth, M.; McDuffie, J. L.; Ramirez, P.

    2015-12-01

    Next-generation science data systems are needed to address the incoming flood of data from new missions such as SWOT and NISAR, where data volumes and data throughput rates are orders of magnitude larger than in present-day missions. Additionally, traditional means of procuring hardware on-premise are already limited due to facility capacity constraints for these new missions. Existing missions, such as OCO-2, may also require rapid turn-around for processing different science scenarios, where on-premise and even traditional HPC computing environments may not meet the high processing needs. We present our experiences deploying a hybrid-cloud computing science data system (HySDS) for the OCO-2 Science Computing Facility to support large-scale processing of their Level-2 full physics data products. We explore optimization approaches to getting the best performance out of hybrid-cloud computing as well as common issues that arise when dealing with large-scale computing. Novel approaches were utilized to do processing on Amazon's spot market, which can potentially offer ~10X cost savings but with an unpredictable computing environment based on market forces. We present how we enabled high-tolerance computing in order to achieve large-scale computing as well as operational cost savings.
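
    One common pattern for the high-tolerance computing mentioned is small, idempotent, checkpointed work units that a replacement worker can resume after a spot interruption. The sketch below simulates that pattern; the checkpoint file name and granule logic are hypothetical, not the HySDS implementation.

    ```python
    # Sketch: checkpointed, resumable work units that survive simulated
    # spot-instance interruptions. A replacement worker re-reads state.json.
    import json, os, random

    CKPT = "state.json"  # hypothetical checkpoint file

    def load_state() -> dict:
        if os.path.exists(CKPT):
            with open(CKPT) as f:
                return json.load(f)
        return {"done": 0}

    def save_state(state: dict) -> None:
        with open(CKPT, "w") as f:
            json.dump(state, f)

    def process_granule(i: int) -> int:
        if random.random() < 0.2:              # simulated spot reclamation
            raise RuntimeError("spot instance reclaimed")
        return i * i

    state = load_state()
    while state["done"] < 10:
        try:
            process_granule(state["done"])
            state["done"] += 1
            save_state(state)                  # checkpoint after every granule
        except RuntimeError:
            continue                           # a fresh worker resumes here
    print("all granules processed")
    os.remove(CKPT)
    ```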

  9. Jefferson Lab Virtual Tour

    ScienceCinema

    None

    2018-01-16

    Take a virtual tour of the campus of Thomas Jefferson National Accelerator Facility. You can see inside our two accelerators, three experimental areas, accelerator component fabrication and testing areas, high-performance computing areas and laser labs.

  10. Expanding Your Laboratory by Accessing Collaboratory Resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoyt, David W.; Burton, Sarah D.; Peterson, Michael R.

    2004-03-01

    The Environmental Molecular Sciences Laboratory (EMSL) in Richland, Washington, is the home of a research facility set up by the United States Department of Energy (DOE). The facility is atypical because it houses over 100 cutting-edge research systems for the use of researchers all over the United States and the world. Access to the lab is requested through a peer-review proposal process, and the scientists who use the facility are generally referred to as 'users'. There are six main research facilities housed in EMSL, all of which host visiting researchers. Several of these facilities also participate in the EMSL Collaboratory, a remote access capability supported by EMSL operations funds. Of these, the High-Field Magnetic Resonance Facility (HFMRF) and Molecular Science Computing Facility (MSCF) have a significant number of their users performing remote work. The HFMRF in EMSL currently houses 12 NMR spectrometers that range in magnet field strength from 7.05T to 21.1T. Staff associated with the NMR facility offer scientific expertise in the areas of structural biology, solid-state materials/catalyst characterization, and magnetic resonance imaging (MRI) techniques. The way in which the HFMRF operates, with a high level of dedication to remote operation across the full suite of High-Field NMR spectrometers, has earned it the name “Virtual NMR Facility”. This review will focus on the operational aspects of remote research done in the High-Field Magnetic Resonance Facility and the computer tools that make remote experiments possible.

  11. Laboratories | Energy Systems Integration Facility | NREL

    Science.gov Websites

    The facility's laboratories can be safely divided into multiple test stand locations (or "capability hubs"). Laboratories include the Fabrication Laboratory, the Energy Systems High-Pressure Test Laboratory, the Energy Systems Integration Laboratory, the Energy Systems Sensor Laboratory, the Fuel Cell Development and Test Laboratory, and High-Performance Computing.

  12. Wind Energy Facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laurie, Carol

    2017-02-01

    This book takes readers inside the places where daily discoveries shape the next generation of wind power systems. Energy Department laboratory facilities span the United States and offer wind research capabilities to meet industry needs. The facilities described in this book make it possible for industry players to increase reliability, improve efficiency, and reduce the cost of wind energy -- one discovery at a time. Whether you require blade testing or resource characterization, grid integration or high-performance computing, Department of Energy laboratory facilities offer a variety of capabilities to meet your wind research needs.

  14. High-Performance Computing Data Center Power Usage Effectiveness

    Science.gov Websites

    When the Energy Systems Integration Facility (ESIF) was conceived, NREL set an aggressive power usage effectiveness (PUE) target for its data center. The measurement includes heating, ventilation, and air conditioning (HVAC), which captures the fan walls and fan coils that support the data center.
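
    For reference, power usage effectiveness is the ratio of total facility energy to IT equipment energy, so values near 1.0 are ideal. A minimal computation with illustrative numbers (not NREL measurements):

    ```python
    # PUE = total facility energy / IT equipment energy; 1.0 is the ideal.
    def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
        return total_facility_kwh / it_equipment_kwh

    print(pue(1060.0, 1000.0))  # 1.06, illustrative of a very efficient center
    ```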

  15. A C++11 implementation of arbitrary-rank tensors for high-performance computing

    NASA Astrophysics Data System (ADS)

    Aragón, Alejandro M.

    2014-06-01

    This article discusses an efficient implementation of tensors of arbitrary rank by using some of the idioms introduced by the recently published C++ ISO Standard (C++11). With the aim of providing a basic building block for high-performance computing, a single Array class template is carefully crafted, from which vectors, matrices, and even higher-order tensors can be created. An expression template facility is also built around the array class template to provide convenient mathematical syntax. As a result, by using templates, an extra high-level layer is added to the C++ language when dealing with algebraic objects and their operations, without compromising performance. The implementation is tested running on both CPU and GPU.
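
    Expression templates are a C++ compile-time idiom, but the core idea, building an expression tree and deferring arithmetic until evaluation so intermediate temporaries are fused away, can be mimicked at runtime. This Python analogy sketch is not the article's implementation; all class names are invented.

    ```python
    # Runtime analogue of expression templates: a + b builds a lightweight
    # expression node, and no arithmetic happens until evaluate() is called.
    class Expr:
        def __add__(self, other):
            return Add(self, other)

    class Tensor(Expr):
        def __init__(self, data):
            self.data = list(data)
        def evaluate(self, i):
            return self.data[i]
        def __len__(self):
            return len(self.data)

    class Add(Expr):
        def __init__(self, lhs, rhs):
            self.lhs, self.rhs = lhs, rhs
        def evaluate(self, i):
            return self.lhs.evaluate(i) + self.rhs.evaluate(i)
        def __len__(self):
            return len(self.lhs)

    a, b, c = Tensor([1, 2]), Tensor([3, 4]), Tensor([5, 6])
    expr = a + b + c                     # builds a tree; nothing computed yet
    result = [expr.evaluate(i) for i in range(len(expr))]
    print(result)                        # [9, 12], computed in one fused pass
    ```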

  17. Human-Computer Interaction and Virtual Environments

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K. (Compiler)

    1995-01-01

    The proceedings of the Workshop on Human-Computer Interaction and Virtual Environments are presented along with a list of attendees. The objectives of the workshop were to assess the state of technology and level of maturity of several areas in human-computer interaction and to provide guidelines for focused future research leading to effective use of these facilities in the design/fabrication and operation of future high-performance engineering systems.

  18. Parallel computing works

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    An account of the Caltech Concurrent Computation Program (C³P), a five-year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C³P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C³P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  19. Integration of Russian Tier-1 Grid Center with High Performance Computers at NRC-KI for LHC experiments and beyond HENP

    NASA Astrophysics Data System (ADS)

    Belyaev, A.; Berezhnaya, A.; Betev, L.; Buncic, P.; De, K.; Drizhuk, D.; Klimentov, A.; Lazin, Y.; Lyalin, I.; Mashinistov, R.; Novikov, A.; Oleynik, D.; Polyakov, A.; Poyda, A.; Ryabinkin, E.; Teslyuk, A.; Tkachenko, I.; Yasnopolskiy, L.

    2015-12-01

    The LHC experiments are preparing for the precision measurements and further discoveries that will be made possible by higher LHC energies from April 2015 (LHC Run 2). The demand for simulation, data processing, and analysis would overwhelm the expected capacity of the grid infrastructure deployed by the Worldwide LHC Computing Grid (WLCG). To meet this challenge, the integration of opportunistic resources into the LHC computing model is highly important. The Tier-1 facility at the Kurchatov Institute (NRC-KI) in Moscow is part of the WLCG and will process, simulate, and store up to 10% of the total data obtained from the ALICE, ATLAS, and LHCb experiments. In addition, the Kurchatov Institute has supercomputers with a peak performance of 0.12 PFLOPS. Delegating even a fraction of these supercomputing resources to LHC computing will notably increase the total capacity. In 2014, development of a portal combining the Tier-1 center and a supercomputer at the Kurchatov Institute was started to provide common interfaces and storage. The portal will be used not only for HENP experiments but also by other data- and compute-intensive sciences, such as biology (genome sequencing analysis) and astrophysics (cosmic ray analysis, antimatter and dark matter searches).

  20. Real-time data-intensive computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parkinson, Dilworth Y., E-mail: dyparkinson@lbl.gov; Chen, Xian; Hexemer, Alexander

    2016-07-27

    Today users visit synchrotrons as sources of understanding and discovery—not as sources of just light, and not as sources of data. To achieve this, the synchrotron facilities frequently provide not just light but often the entire end station and, increasingly, advanced computational facilities that can reduce terabytes of data into a form that can reveal a new key insight. The Advanced Light Source (ALS) has partnered with high performance computing, fast networking, and applied mathematics groups to create a “super-facility”, giving users simultaneous access to the experimental, computational, and algorithmic resources to make this possible. This combination forms an efficient closed loop, where data—despite its high rate and volume—is transferred and processed immediately and automatically on appropriate computing resources, and results are extracted, visualized, and presented to users or to the experimental control system, both to provide immediate insight and to guide decisions about subsequent experiments during beamtime. We will describe our work at the ALS ptychography, scattering, micro-diffraction, and micro-tomography beamlines.

  1. Sensing and Active Flow Control for Advanced BWB Propulsion-Airframe Integration Concepts

    NASA Technical Reports Server (NTRS)

    Fleming, John; Anderson, Jason; Ng, Wing; Harrison, Neal

    2005-01-01

    In order to realize the substantial performance benefits of serpentine boundary layer ingesting diffusers, this study investigated the use of enabling flow control methods to reduce engine-face flow distortion. Computational methods and novel flow control modeling techniques were utilized that allowed for rapid, accurate analysis of flow control geometries. Results were validated experimentally using the Techsburg Ejector-based wind tunnel facility; this facility is capable of simulating the high-altitude, high subsonic Mach number conditions representative of BWB cruise conditions.

  2. High Energy Physics Exascale Requirements Review. An Office of Science review sponsored jointly by Advanced Scientific Computing Research and High Energy Physics, June 10-12, 2015, Bethesda, Maryland

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habib, Salman; Roser, Robert; Gerber, Richard

    The U.S. Department of Energy (DOE) Office of Science (SC) Offices of High Energy Physics (HEP) and Advanced Scientific Computing Research (ASCR) convened a programmatic Exascale Requirements Review on June 10–12, 2015, in Bethesda, Maryland. This report summarizes the findings, results, and recommendations derived from that meeting. The high-level findings and observations are as follows. Larger, more capable computing and data facilities are needed to support HEP science goals in all three frontiers: Energy, Intensity, and Cosmic. The expected scale of the demand at the 2025 timescale is at least two orders of magnitude — and in some cases greater — than that available currently. The growth rate of data produced by simulations is overwhelming the current ability of both facilities and researchers to store and analyze it. Additional resources and new techniques for data analysis are urgently needed. Data rates and volumes from experimental facilities are also straining the current HEP infrastructure in its ability to store and analyze large and complex data volumes. Appropriately configured leadership-class facilities can play a transformational role in enabling scientific discovery from these datasets. A close integration of high-performance computing (HPC) simulation and data analysis will greatly aid in interpreting the results of HEP experiments. Such an integration will minimize data movement and facilitate interdependent workflows. Long-range planning between HEP and ASCR will be required to meet HEP’s research needs. To best use ASCR HPC resources, the experimental HEP program needs (1) an established, long-term plan for access to ASCR computational and data resources, (2) the ability to map workflows to HPC resources, (3) the ability for ASCR facilities to accommodate workflows run by collaborations potentially comprising thousands of individual members, (4) to transition codes to the next-generation HPC platforms that will be available at ASCR facilities, and (5) to build up and train a workforce capable of developing and using simulations and analysis to support HEP scientific research on next-generation systems.

  3. High-Performance Computing Data Center Waste Heat Reuse | Computational

    Science.gov Websites

    With heat exchangers, heat energy in the energy recovery water (ERW) loop becomes available to heat the facility's process hot water (PHW) loop. Once heated, the PHW loop supplies the active heating loop in the courtyard of the ESIF's main entrance and, if additional heat is needed, the district heating loop.
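
    As a rough illustration of what such heat recovery is worth, the sketch below applies the standard energy-balance relation Q = m_dot * c_p * dT to an assumed ERW flow; the flow rate and temperature drop are hypothetical, not ESIF measurements.

        // Recoverable heat rate across a liquid-to-liquid heat exchanger:
        // Q = m_dot * c_p * (T_in - T_out). Values are illustrative only.
        #include <cstdio>

        int main() {
            const double m_dot = 10.0;    // ERW mass flow, kg/s (assumed)
            const double c_p   = 4186.0;  // specific heat of water, J/(kg*K)
            const double dT    = 8.0;     // temperature drop across exchanger, K
            const double q_kw  = m_dot * c_p * dT / 1000.0;
            std::printf("recoverable heat: %.0f kW\n", q_kw); // ~335 kW here
            return 0;
        }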

  4. Challenges of Future High-End Computing

    NASA Technical Reports Server (NTRS)

    Bailey, David; Kutler, Paul (Technical Monitor)

    1998-01-01

    The next major milestone in high performance computing is a sustained rate of one Pflop/s (also written one petaflops, or 10^15 floating-point operations per second). In addition to prodigiously high computational performance, such systems must of necessity feature very large main memories, as well as comparably high I/O bandwidth and huge mass storage facilities. The current consensus of scientists who have studied these issues is that "affordable" petaflops systems may be feasible by the year 2010, assuming that certain key technologies continue to progress at current rates. One important question is whether applications can be structured to perform efficiently on such systems, which are expected to incorporate many thousands of processors and deeply hierarchical memory systems. To answer these questions, advanced performance modeling techniques, including simulation of future architectures and applications, may be required. It may also be necessary to formulate "latency tolerant algorithms" and other completely new algorithmic approaches for certain applications. This talk will give an overview of these challenges.

  5. Numerical Investigation of Double-Cone Flows with High Enthalpy Effects

    NASA Astrophysics Data System (ADS)

    Nompelis, I.; Candler, G. V.

    2009-01-01

    A numerical study of shock/shock and shock/boundary layer interactions generated by a double-cone model that is placed in a hypersonic free-stream is presented. Computational results are compared with the experimental measurements made at the CUBRC LENS facility for nitrogen flows at high enthalpy conditions. The CFD predictions agree well with surface pressure and heat-flux measurements for all but one of the double-cone cases that have been studied by the authors. Unsteadiness is observed in computations of one of the LENS cases; however, for this case the experimental measurements show that the flowfield is steady. To understand this discrepancy, several double-cone experiments performed in two different facilities with both air and nitrogen as the working gas are examined in the present study. Computational results agree well with measurements made in both the AEDC Tunnel 9 and the CUBRC LENS facility for double-cone flows at low free-stream Reynolds numbers where the flow is steady. It is shown that at higher free-stream pressures the double-cone simulations develop instabilities that result in an unsteady separation.

  6. The Practical Obstacles of Data Transfer: Why researchers still love scp

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nam, Hai Ah; Hill, Jason J; Parete-Koon, Suzanne T

    The importance of computing facilities is heralded every six months with the announcement of the new Top500 list, showcasing the world's fastest supercomputers. Unfortunately, with great computing capability does not come great long-term data storage capacity, which often means users must move their data to their local site archive, to remote sites where they may be doing future computation or analysis, or back to their home institution, or else face the dreaded data purge that most HPC centers employ to keep utilization of large parallel filesystems low enough to manage performance and capacity. At HPC centers, data transfer is crucial to the scientific workflow and will increase in importance as computing systems grow in size. The Energy Sciences Network (ESnet) recently launched its fifth-generation network, a 100 Gbps high-performance, unclassified national network connecting more than 40 DOE research sites to support scientific research and collaboration. Despite the tenfold increase in bandwidth to DOE research sites amenable to multiple data transfer streams and high throughput, in practice, researchers often under-utilize the network and resort to painfully slow single-stream transfer methods such as scp to avoid the complexity of using multiple-stream tools such as GridFTP and bbcp, and contend with frustration from the lack of consistency of available tools between sites. In this study we survey and assess the data transfer methods provided at several DOE-supported computing facilities, including both leadership-computing facilities, connected through ESnet. We present observed transfer rates, suggest optimizations, and discuss the obstacles the tools must overcome to receive widespread adoption over scp.
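
    A back-of-envelope model makes the single-stream penalty concrete. The Mathis approximation bounds one TCP stream's throughput at roughly MSS/(RTT * sqrt(loss)), so on a long-haul path even a 100 Gbps link is starved until many streams run in parallel. The path parameters in the sketch below are assumptions chosen for illustration, not measurements from ESnet.

        // Mathis-model sketch: why single-stream scp under-utilizes a fat link.
        #include <algorithm>
        #include <cmath>
        #include <cstdio>
        #include <initializer_list>

        int main() {
            const double mss  = 1460.0 * 8.0; // segment size in bits
            const double rtt  = 0.05;         // 50 ms cross-country RTT (assumed)
            const double loss = 1.0e-5;       // packet loss rate (assumed)
            const double link = 100.0e9;      // 100 Gbps ESnet-class link
            const double per_stream = mss / (rtt * std::sqrt(loss)); // bits/s
            for (int n : {1, 4, 16, 64}) {
                const double total = std::min(n * per_stream, link);
                std::printf("%2d stream(s): %6.2f Gbps\n", n, total / 1.0e9);
            }
            return 0;
        }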

  7. UTILIZATION OF COMPUTER FACILITIES IN THE MATHEMATICS AND BUSINESS CURRICULUM IN A LARGE SUBURBAN HIGH SCHOOL.

    ERIC Educational Resources Information Center

    RENO, MARTIN; AND OTHERS

    A STUDY WAS UNDERTAKEN TO EXPLORE IN A QUALITATIVE WAY THE POSSIBLE UTILIZATION OF COMPUTER AND DATA PROCESSING METHODS IN HIGH SCHOOL EDUCATION. OBJECTIVES WERE--(1) TO ESTABLISH A WORKING RELATIONSHIP WITH A COMPUTER FACILITY SO THAT ABLE STUDENTS AND THEIR TEACHERS WOULD HAVE ACCESS TO THE FACILITIES, (2) TO DEVELOP A UNIT FOR THE UTILIZATION…

  8. Stockpile Stewardship: How We Ensure the Nuclear Deterrent Without Testing

    ScienceCinema

    None

    2018-01-16

    In the 1990s, the U.S. nuclear weapons program shifted emphasis from developing new designs to dismantling thousands of existing weapons and maintaining a much smaller enduring stockpile. The United States ceased underground nuclear testing, and the Department of Energy created the Stockpile Stewardship Program to maintain the safety, security, and reliability of the U.S. nuclear deterrent without full-scale testing. This video gives a behind-the-scenes look at a set of unique capabilities at Lawrence Livermore that are indispensable to the Stockpile Stewardship Program: high performance computing, the Superblock category II nuclear facility, JASPER (a two-stage gas gun), the High Explosive Applications Facility (HEAF), the National Ignition Facility (NIF), and the Site 300 contained firing facility.

  9. THC-MP: High performance numerical simulation of reactive transport and multiphase flow in porous media

    NASA Astrophysics Data System (ADS)

    Wei, Xiaohui; Li, Weishan; Tian, Hailong; Li, Hongliang; Xu, Haixiao; Xu, Tianfu

    2015-07-01

    The numerical simulation of multiphase flow and reactive transport in porous media for complex subsurface problems is a computationally intensive application. To meet increasing computational requirements, this paper presents a parallel computing method and architecture. Starting from TOUGHREACT, a well-established code for simulating subsurface multiphase flow and reactive transport problems, we developed THC-MP, a high-performance computing code for massively parallel computers that greatly extends the computational capability of the original code. The domain decomposition method was applied to the coupled numerical computing procedure in THC-MP. We designed the distributed data structures and implemented the data initialization and exchange between the computing nodes and the core solving module using hybrid parallel iterative and direct solvers. Numerical accuracy of THC-MP was verified on a CO2 injection-induced reactive transport problem by comparing the results obtained from parallel computing and sequential computing (the original code). Execution efficiency and code scalability were examined through field-scale carbon sequestration applications on a multicore cluster. The results demonstrate the enhanced performance achieved using THC-MP on parallel computing facilities.
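
    The domain decomposition described above hinges on exchanging ghost (halo) cells between neighboring subdomains each iteration. The hypothetical MPI sketch below shows that pattern for a simple 1D slab decomposition; it is a generic illustration, not THC-MP's actual code.

        // Halo exchange for a 1D slab decomposition: each rank owns nlocal
        // cells plus one ghost cell on each side, refreshed every iteration.
        #include <mpi.h>
        #include <cstdio>
        #include <vector>

        int main(int argc, char** argv) {
            MPI_Init(&argc, &argv);
            int rank, nprocs;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

            const int nlocal = 1000;                 // cells owned by this rank
            std::vector<double> u(nlocal + 2, rank); // +2 ghost cells
            int left  = (rank > 0)          ? rank - 1 : MPI_PROC_NULL;
            int right = (rank < nprocs - 1) ? rank + 1 : MPI_PROC_NULL;

            // Send first owned cell left; receive right neighbor's into ghost.
            MPI_Sendrecv(&u[1], 1, MPI_DOUBLE, left, 0,
                         &u[nlocal + 1], 1, MPI_DOUBLE, right, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            // Send last owned cell right; receive left neighbor's into ghost.
            MPI_Sendrecv(&u[nlocal], 1, MPI_DOUBLE, right, 1,
                         &u[0], 1, MPI_DOUBLE, left, 1,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);

            if (rank == 0) std::printf("halo exchange complete\n");
            MPI_Finalize();
            return 0;
        }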

  10. Integration of High-Performance Computing into Cloud Computing Services

    NASA Astrophysics Data System (ADS)

    Vouk, Mladen A.; Sills, Eric; Dreher, Patrick

    High-Performance Computing (HPC) projects span a spectrum of computer hardware implementations, ranging from peta-flop supercomputers and high-end tera-flop facilities running a variety of operating systems and applications to mid-range and smaller computational clusters used for HPC application development, pilot runs, and prototype staging. What they all have in common is that they operate as stand-alone systems rather than as scalable, shared, user-reconfigurable resources. The advent of cloud computing has changed the traditional HPC implementation. In this article, we discuss a very successful production-level architecture and policy framework for supporting HPC services within a more general cloud computing infrastructure. This integrated environment, called the Virtual Computing Lab (VCL), has been operating at NC State since fall 2004. Nearly 8,500,000 HPC CPU-hours were delivered by this environment to NC State faculty and students during 2009. In addition, we present and discuss operational data showing that integration of HPC and non-HPC (or general VCL) services in a cloud can substantially reduce the cost of delivering cloud services (down to cents per CPU-hour).

  11. Energy Systems Integration Facility (ESIF): Golden, CO - Energy Integration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheppy, Michael; VanGeet, Otto; Pless, Shanti

    2015-03-01

    At NREL's Energy Systems Integration Facility (ESIF) in Golden, Colorado, scientists and engineers work to overcome challenges related to how the nation generates, delivers, and uses energy by modernizing the interplay between energy sources, infrastructure, and data. Test facilities include a megawatt-scale AC electric grid, photovoltaic simulators, and a load bank. Additionally, a high-performance computing data center (HPCDC) is dedicated to advancing renewable energy and energy-efficient technologies. A key design strategy is to use waste heat from the HPCDC to heat parts of the building. The ESIF boasts an annual energy use intensity (EUI) of 168.3 kBtu/ft2. This article describes the building's procurement, design, and first year of performance.
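
    For readers more used to SI units, the reported EUI converts as shown below; only the standard unit-conversion factors are assumed, the figure itself comes from the record above.

        // Convert the reported EUI of 168.3 kBtu/ft2 to kWh/m2.
        #include <cstdio>

        int main() {
            const double eui_kbtu_ft2 = 168.3;
            const double kwh_per_kbtu = 0.293071; // 1 kBtu = 0.293071 kWh
            const double m2_per_ft2   = 0.092903; // 1 ft2  = 0.092903 m2
            const double eui_kwh_m2 = eui_kbtu_ft2 * kwh_per_kbtu / m2_per_ft2;
            std::printf("EUI = %.0f kWh/m2 per year\n", eui_kwh_m2); // ~531
            return 0;
        }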

  12. Ku-Band rendezvous radar performance computer simulation model

    NASA Technical Reports Server (NTRS)

    Magnusson, H. G.; Goff, M. F.

    1984-01-01

    All work performed on the Ku-band rendezvous radar performance computer simulation model program since the release of the preliminary final report is summarized. Developments on the program fall into three distinct categories: (1) modifications to the existing Ku-band radar tracking performance computer model; (2) the addition of a highly accurate, nonrealtime search and acquisition performance computer model to the total software package developed on this program; and (3) development of radar cross section (RCS) computation models for three additional satellites. All changes in the tracking model involved improvements in the automatic gain control (AGC) and the radar signal strength (RSS) computer models. Although the search and acquisition computer models were developed under the auspices of the Hughes Aircraft Company Ku-Band Integrated Radar and Communications Subsystem program office, they have been supplied to NASA as part of the Ku-band radar performance computer model package. Their purpose is to predict Ku-band acquisition performance for specific satellite targets on specific missions. The RCS models were developed for three satellites: the Long Duration Exposure Facility (LDEF) spacecraft, the Solar Maximum Mission (SMM) spacecraft, and the Space Telescope.

  13. Ku-Band rendezvous radar performance computer simulation model

    NASA Astrophysics Data System (ADS)

    Magnusson, H. G.; Goff, M. F.

    1984-06-01

    All work performed on the Ku-band rendezvous radar performance computer simulation model program since the release of the preliminary final report is summarized. Developments on the program fall into three distinct categories: (1) modifications to the existing Ku-band radar tracking performance computer model; (2) the addition of a highly accurate, nonrealtime search and acquisition performance computer model to the total software package developed on this program; and (3) development of radar cross section (RCS) computation models for three additional satellites. All changes in the tracking model involved improvements in the automatic gain control (AGC) and the radar signal strength (RSS) computer models. Although the search and acquisition computer models were developed under the auspices of the Hughes Aircraft Company Ku-Band Integrated Radar and Communications Subsystem program office, they have been supplied to NASA as part of the Ku-band radar performance computer model package. Their purpose is to predict Ku-band acquisition performance for specific satellite targets on specific missions. The RCS models were developed for three satellites: the Long Duration Exposure Facility (LDEF) spacecraft, the Solar Maximum Mission (SMM) spacecraft, and the Space Telescope.

  14. A high level language for a high performance computer

    NASA Technical Reports Server (NTRS)

    Perrott, R. H.

    1978-01-01

    The proposed computational aerodynamic facility will join the ranks of the supercomputers due to its architecture and increased execution speed. At present, the languages used to program these supercomputers have been modifications of programming languages which were designed many years ago for sequential machines. A new programming language should be developed based on the techniques which have proved valuable for sequential programming languages and incorporating the algorithmic techniques required for these supercomputers. The design objectives for such a language are outlined.

  15. NASA Aeronautics: Research and Technology Program Highlights

    NASA Technical Reports Server (NTRS)

    1990-01-01

    This report contains numerous color illustrations to describe the NASA programs in aeronautics. The basic ideas involved are explained in brief paragraphs. The seven chapters deal with Subsonic aircraft, High-speed transport, High-performance military aircraft, Hypersonic/Transatmospheric vehicles, Critical disciplines, National facilities, and Organizations & installations. Some individual aircraft discussed are: the SR-71 aircraft, aerospace planes, the high-speed civil transport (HSCT), the X-29 forward-swept wing research aircraft, and the X-31 aircraft. Critical disciplines discussed are numerical aerodynamic simulation, computational fluid dynamics, computational structural dynamics, and new experimental testing techniques.

  16. Wakefield Computations for the CLIC PETS using the Parallel Finite Element Time-Domain Code T3P

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Candel, A; Kabel, A.; Lee, L.

    In recent years, SLAC's Advanced Computations Department (ACD) has developed the high-performance parallel 3D electromagnetic time-domain code, T3P, for simulations of wakefields and transients in complex accelerator structures. T3P is based on advanced higher-order Finite Element methods on unstructured grids with quadratic surface approximation. Optimized for large-scale parallel processing on leadership supercomputing facilities, T3P allows simulations of realistic 3D structures with unprecedented accuracy, aiding the design of the next generation of accelerator facilities. Applications to the Compact Linear Collider (CLIC) Power Extraction and Transfer Structure (PETS) are presented.

  17. Air Vehicles Division Computational Structural Analysis Facilities Policy and Guidelines for Users

    DTIC Science & Technology

    2005-05-01

    "Thermal" as appropriate and the tolerance set to "default". b) Create the model geometry. c) Create the finite elements. d) Create the ... linear, non-linear, dynamic, thermal, acoustic analysis. The modelling of composite materials, creep, fatigue and plasticity are also covered ... perform professional, high-quality finite element analysis (FEA). FE analysts from many tasks within AVD are using the facilities to conduct FEA with

  18. Evaluation of the long-term performance of six alternative disposal methods for LLRW

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kossik, R.; Sharp, G.; Chau, T.

    1995-12-31

    The State of New York has carried out a comparison of six alternative disposal methods for low-level radioactive waste (LLRW). An important part of these evaluations involved quantitatively analyzing the long-term (10,000 yr) performance of the methods with respect to dose to humans, radionuclide concentrations in the environment, and cumulative release from the facility. Four near-surface methods (covered above-grade vault, uncovered above-grade vault, below-grade vault, augered holes) and two mine methods (vertical shaft mine and drift mine) were evaluated. Each method was analyzed for several generic site conditions applicable to the state. The evaluations were carried out using RIP (Repository Integration Program), an integrated, total-system performance assessment computer code that has been applied to radioactive waste disposal facilities both in the U.S. (Yucca Mountain, WIPP) and worldwide. The evaluations indicate that mines in intact low-permeability rock and near-surface facilities with engineered covers generally have a high potential to perform well (within regulatory limits). Uncovered above-grade vaults and mines in highly fractured crystalline rock, however, have a high potential to perform poorly, exceeding regulatory limits.

  19. Experience in using commercial clouds in CMS

    NASA Astrophysics Data System (ADS)

    Bauerdick, L.; Bockelman, B.; Dykstra, D.; Fuess, S.; Garzoglio, G.; Girone, M.; Gutsche, O.; Holzman, B.; Hufnagel, D.; Kim, H.; Kennedy, R.; Mason, D.; Spentzouris, P.; Timm, S.; Tiradani, A.; Vaandering, E.; CMS Collaboration

    2017-10-01

    Historically, high energy physics computing has been performed on large purpose-built computing systems. In the beginning there were single-site computing facilities, which evolved into the Worldwide LHC Computing Grid (WLCG) used today. The vast majority of the WLCG resources are used for LHC computing, and the resources are scheduled to be continuously used throughout the year. In the last several years there has been an explosion in the capacity and capability of commercial and academic computing clouds. Cloud resources are highly virtualized and intended to be flexibly deployed for a variety of computing tasks. There is a growing interest amongst the cloud providers to demonstrate the capability to perform large-scale scientific computing. In this presentation we will discuss results from the CMS experiment using the Fermilab HEPCloud Facility, which utilized both local Fermilab resources and Amazon Web Services (AWS). The goal was to work with AWS through a matching grant to demonstrate a sustained scale approximately equal to half of the worldwide processing resources available to CMS. We will discuss the planning and technical challenges involved in organizing the most I/O-intensive CMS workflows on a large-scale set of virtualized resources provisioned by the Fermilab HEPCloud. We will describe the data handling and data management challenges. Also, we will discuss the economic issues and the cost and operational efficiency comparison to our dedicated resources. At the end we will consider the changes in the working model of HEP computing in a domain with the availability of large-scale resources scheduled at peak times.

  20. Experience in using commercial clouds in CMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bauerdick, L.; Bockelman, B.; Dykstra, D.

    Historically, high energy physics computing has been performed on large purpose-built computing systems. In the beginning there were single-site computing facilities, which evolved into the Worldwide LHC Computing Grid (WLCG) used today. The vast majority of the WLCG resources are used for LHC computing, and the resources are scheduled to be continuously used throughout the year. In the last several years there has been an explosion in the capacity and capability of commercial and academic computing clouds. Cloud resources are highly virtualized and intended to be flexibly deployed for a variety of computing tasks. There is a growing interest amongst the cloud providers to demonstrate the capability to perform large scale scientific computing. In this presentation we will discuss results from the CMS experiment using the Fermilab HEPCloud Facility, which utilized both local Fermilab resources and Amazon Web Services (AWS). The goal was to work with AWS through a matching grant to demonstrate a sustained scale approximately equal to half of the worldwide processing resources available to CMS. We will discuss the planning and technical challenges involved in organizing the most I/O-intensive CMS workflows on a large-scale set of virtualized resources provisioned by the Fermilab HEPCloud. We will describe the data handling and data management challenges. Also, we will discuss the economic issues and the cost and operational efficiency comparison to our dedicated resources. At the end we will consider the changes in the working model of HEP computing in a domain with the availability of large scale resources scheduled at peak times.

  1. The National Virtual Observatory

    NASA Astrophysics Data System (ADS)

    Hanisch, Robert J.

    2001-06-01

    The National Virtual Observatory is a distributed computational facility that will provide access to the "virtual sky"—the federation of astronomical data archives, object catalogs, and associated information services. The NVO's "virtual telescope" is a common framework for requesting, retrieving, and manipulating information from diverse, distributed resources. The NVO will make it possible to seamlessly integrate data from the new all-sky surveys, enabling cross-correlations between multi-terabyte catalogs and providing transparent access to the underlying image or spectral data. Success requires high performance computational systems, high bandwidth network services, agreed-upon standards for the exchange of metadata, and collaboration among astronomers, astronomical data and information service providers, information technology specialists, funding agencies, and industry. International cooperation at the onset will help to assure that the NVO simultaneously becomes a global facility.

  2. Stockpile Stewardship: How We Ensure the Nuclear Deterrent Without Testing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2014-09-04

    In the 1990s, the U.S. nuclear weapons program shifted emphasis from developing new designs to dismantling thousands of existing weapons and maintaining a much smaller enduring stockpile. The United States ceased underground nuclear testing, and the Department of Energy created the Stockpile Stewardship Program to maintain the safety, security, and reliability of the U.S. nuclear deterrent without full-scale testing. This video gives a behind-the-scenes look at a set of unique capabilities at Lawrence Livermore that are indispensable to the Stockpile Stewardship Program: high performance computing, the Superblock category II nuclear facility, JASPER (a two-stage gas gun), the High Explosive Applications Facility (HEAF), the National Ignition Facility (NIF), and the Site 300 contained firing facility.

  3. Local storage federation through XRootD architecture for interactive distributed analysis

    NASA Astrophysics Data System (ADS)

    Colamaria, F.; Colella, D.; Donvito, G.; Elia, D.; Franco, A.; Luparello, G.; Maggi, G.; Miniello, G.; Vallero, S.; Vino, G.

    2015-12-01

    A cloud-based Virtual Analysis Facility (VAF) for the ALICE experiment at the LHC has been deployed in Bari. Similar facilities are currently running at other Italian sites, with the aim of creating a federation of interoperating farms able to provide their computing resources for interactive distributed analysis. The use of cloud technology, along with elastic provisioning of computing resources as an alternative to the grid for running data-intensive analyses, is the main challenge of these facilities. One of the crucial aspects of user-driven analysis execution is data access. A local storage facility has the disadvantage that the stored data can be accessed only locally, i.e., from within the single VAF. To overcome this limitation, a federated infrastructure, which provides full access to all the data belonging to the federation independently of the site where they are stored, has been set up. The federation architecture exploits both cloud computing and XRootD technologies in order to provide a dynamic, easy-to-use, and well-performing solution for data handling. It should allow users to store files and efficiently retrieve data, since it implements a dynamic distributed cache among many data centers in Italy connected to one another through the high-bandwidth national network. Details of the preliminary architecture implementation and performance studies are discussed.

  4. Mixing HTC and HPC Workloads with HTCondor and Slurm

    NASA Astrophysics Data System (ADS)

    Hollowell, C.; Barnett, J.; Caramarcu, C.; Strecker-Kellogg, W.; Wong, A.; Zaytsev, A.

    2017-10-01

    Traditionally, the RHIC/ATLAS Computing Facility (RACF) at Brookhaven National Laboratory (BNL) has only maintained High Throughput Computing (HTC) resources for our HEP/NP user community. We’ve been using HTCondor as our batch system for many years, as this software is particularly well suited for managing HTC processor farm resources. Recently, the RACF has also begun to design/administrate some High Performance Computing (HPC) systems for a multidisciplinary user community at BNL. In this paper, we’ll discuss our experiences using HTCondor and Slurm in an HPC context, and our facility’s attempts to allow our HTC and HPC processing farms/clusters to make opportunistic use of each other’s computing resources.

  5. Reynolds Number Effects on Leading Edge Radius Variations of a Supersonic Transport at Transonic Conditions

    NASA Technical Reports Server (NTRS)

    Rivers, S. M. B.; Wahls, R. A.; Owens, L. R.

    2001-01-01

    A computational study focused on leading-edge radius effects and associated Reynolds number sensitivity for a High Speed Civil Transport configuration at transonic conditions was conducted as part of NASA's High Speed Research Program. The primary purposes were to assess the capabilities of computational fluid dynamics to predict Reynolds number effects for a range of leading-edge radius distributions on a second-generation supersonic transport configuration, and to evaluate the potential performance benefits of each at the transonic cruise condition. Five leading-edge radius distributions are described, and the potential performance benefit including the Reynolds number sensitivity for each is presented. Computational results for two leading-edge radius distributions are compared with experimental results acquired in the National Transonic Facility over a broad Reynolds number range.

  6. Template Interfaces for Agile Parallel Data-Intensive Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramakrishnan, Lavanya; Gunter, Daniel; Pastorello, Gilerto Z.

    Tigres provides a programming library to compose and execute large-scale data-intensive scientific workflows from desktops to supercomputers. DOE User Facilities and large science collaborations are increasingly generating data sets large enough that it is no longer practical to download them to a desktop to operate on them. They are instead stored at centralized compute and storage resources such as high performance computing (HPC) centers. Analysis of this data requires an ability to run on these facilities, but with current technologies, scaling an analysis to an HPC center and to a large data set is difficult even for experts. Tigres is addressing the challenge of enabling collaborative analysis of DOE Science data through a new concept of reusable "templates" that enable scientists to easily compose, run, and manage collaborative computational tasks. These templates define common computation patterns used in analyzing a data set.

  7. Making Advanced Scientific Algorithms and Big Scientific Data Management More Accessible

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Venkatakrishnan, S. V.; Mohan, K. Aditya; Beattie, Keith

    2016-02-14

    Synchrotrons such as the Advanced Light Source (ALS) at Lawrence Berkeley National Laboratory are known as user facilities. They are sources of extremely bright X-ray beams, and scientists come from all over the world to perform experiments that require these beams. As the complexity of experiments has increased, and the size and rates of data sets have exploded, managing, analyzing, and presenting the data collected at synchrotrons has been an increasing challenge. The ALS has partnered with high performance computing, fast networking, and applied mathematics groups to create a "super-facility", giving users simultaneous access to the experimental, computational, and algorithmic resources to overcome this challenge. This combination forms an efficient closed loop, where data, despite its high rate and volume, is transferred and processed, in many cases immediately and automatically, on appropriate compute resources, and results are extracted, visualized, and presented to users or to the experimental control system, both to provide immediate insight and to guide decisions about subsequent experiments during beam-time. In this paper, we will present work done on advanced tomographic reconstruction algorithms to support users of the 3D micron-scale imaging instrument (Beamline 8.3.2, hard X-ray micro-tomography).

  8. A High Performance Computing Study of a Scalable FISST-Based Approach to Multi-Target, Multi-Sensor Tracking

    NASA Astrophysics Data System (ADS)

    Hussein, I.; Wilkins, M.; Roscoe, C.; Faber, W.; Chakravorty, S.; Schumacher, P.

    2016-09-01

    Finite Set Statistics (FISST) is a rigorous Bayesian multi-hypothesis management tool for the joint detection, classification, and tracking of multi-sensor, multi-object systems. Implicit within the approach are solutions to the data association and target label-tracking problems. The full FISST filtering equations, however, are intractable. While FISST-based methods such as the PHD and CPHD filters are tractable, they require heavy moment approximations to the full FISST equations that result in a significant loss of information contained in the collected data. In this paper, we review Smart Sampling Markov Chain Monte Carlo (SSMCMC), which enables FISST to be tractable while avoiding moment approximations. We study the effect of tuning key SSMCMC parameters on tracking quality and computation time. The study is performed on a representative space object catalog with varying numbers of RSOs. The solution is implemented in the Scala programming language at the Maui High Performance Computing Center (MHPCC) facility.
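
    SSMCMC itself is beyond a short sketch, but the tuning trade-off the study examines is visible even in a generic random-walk Metropolis skeleton: widening the proposal lowers the acceptance rate but can speed mixing. The target density and parameters below are stand-ins, not the paper's algorithm or values.

        // Generic random-walk Metropolis skeleton (not SSMCMC itself): the
        // proposal width is the kind of parameter such studies tune.
        #include <cmath>
        #include <cstdio>
        #include <random>

        int main() {
            std::mt19937 rng(42);
            std::normal_distribution<double> step(0.0, 0.5); // proposal width: tune me
            std::uniform_real_distribution<double> unif(0.0, 1.0);
            auto log_target = [](double x) { return -0.5 * x * x; }; // stand-in density

            double x = 0.0;
            int accepted = 0;
            const int n = 100000;
            for (int i = 0; i < n; ++i) {
                const double y = x + step(rng);  // propose a move
                if (std::log(unif(rng)) < log_target(y) - log_target(x)) {
                    x = y;                       // accept
                    ++accepted;
                }
            }
            std::printf("acceptance rate: %.2f\n", double(accepted) / n);
            return 0;
        }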

  9. ASCR Cybersecurity for Scientific Computing Integrity - Research Pathways and Ideas Workshop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peisert, Sean; Potok, Thomas E.; Jones, Todd

    At the request of the U.S. Department of Energy's (DOE) Office of Science (SC) Advanced Scientific Computing Research (ASCR) program office, a workshop was held June 2-3, 2015, in Gaithersburg, MD, to identify potential long-term (10 to 20+ year) cybersecurity fundamental basic research and development challenges, strategies, and roadmaps facing future high performance computing (HPC), networks, data centers, and extreme-scale scientific user facilities. This workshop was a follow-on to the workshop held January 7-9, 2015, in Rockville, MD, that examined higher-level ideas about scientific computing integrity specific to the mission of the DOE Office of Science. Issues included research computation and simulation that takes place on ASCR computing facilities and networks, as well as network-connected scientific instruments, such as those run by various DOE Office of Science programs. Workshop participants included researchers and operational staff from DOE national laboratories, as well as academic researchers and industry experts. Participants were selected based on the submission of abstracts relating to the topics discussed in the previous workshop report [1] and also from other ASCR reports, including "Abstract Machine Models and Proxy Architectures for Exascale Computing" [27], the DOE "Preliminary Conceptual Design for an Exascale Computing Initiative" [28], and the January 2015 machine learning workshop [29]. The workshop was also attended by several observers from DOE and other government agencies. The workshop was divided into three topic areas: (1) Trustworthy Supercomputing, (2) Extreme-Scale Data, Knowledge, and Analytics for Understanding and Improving Cybersecurity, and (3) Trust within High-end Networking and Data Centers. Participants were divided into three corresponding teams based on the category of their abstracts. The workshop began with a series of talks from the program manager and workshop chair, followed by the leaders for each of the three topics and a representative of each of the four major DOE Office of Science Advanced Scientific Computing Research facilities: the Argonne Leadership Computing Facility (ALCF), the Energy Sciences Network (ESnet), the National Energy Research Scientific Computing Center (NERSC), and the Oak Ridge Leadership Computing Facility (OLCF). The rest of the workshop consisted of topical breakout discussions and focused writing periods that produced much of this report.

  10. A research study for the preliminary definition of an aerophysics free-flight laboratory facility

    NASA Technical Reports Server (NTRS)

    Canning, Thomas N.

    1988-01-01

    A renewed interest in hypervelocity vehicles requires an increase in the knowledge of aerodynamic phenomena. Tests conducted with ground-based facilities can be used both to better understand the physics of hypervelocity flight, and to calibrate and validate computer codes designed to predict vehicle performance in the hypervelocity environment. This research reviews the requirements for aerothermodynamic testing and discusses the ballistic range and its capabilities. Examples of the kinds of testing performed in typical high performance ballistic ranges are described. We draw heavily on experience obtained in the ballistics facilities at NASA Ames Research Center, Moffett Field, California. Prospects for improving the capabilities of the ballistic range by using advanced instrumentation are discussed. Finally, recent developments in gun technology and their application to extend the capability of the ballistic range are summarized.

  11. Real-time data reduction capabilities at the Langley 7 by 10 foot high speed tunnel

    NASA Technical Reports Server (NTRS)

    Fox, C. H., Jr.

    1980-01-01

    The 7 by 10 foot high speed tunnel performs a wide range of tests employing a variety of model installation methods. To support the reduction of static data from this facility, a generalized wind tunnel data reduction program had been developed for use on the Langley central computer complex. The capabilities of a version of this generalized program adapted for real time use on a dedicated on-site computer are discussed. The input specifications, instructions for the console operator, and full descriptions of the algorithms are included.

  12. AHPCRC (Army High Performance Computing Research Center) Bulletin. Volume 1, Issue 2

    DTIC Science & Technology

    2011-01-01

    area and the researchers working on these projects. Also inside: news from the AHPCRC consortium partners at Morgan State University and the NASA ... Computing Research Center is provided by the supercomputing and research facilities at Stanford University and at the NASA Ames Research Center at ... atomic and molecular level, he said. He noted that "every general would like to have" a Star Trek-like holodeck, where holographic avatars could

  13. High Performance Computing Facility Operational Assessment 2015: Oak Ridge Leadership Computing Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barker, Ashley D.; Bernholdt, David E.; Bland, Arthur S.

    Oak Ridge National Laboratory’s (ORNL’s) Leadership Computing Facility (OLCF) continues to surpass its operational target goals: supporting users; delivering fast, reliable systems; creating innovative solutions for high-performance computing (HPC) needs; and managing risks, safety, and security aspects associated with operating one of the most powerful computers in the world. The results can be seen in the cutting-edge science delivered by users and the praise from the research community. Calendar year (CY) 2015 was filled with outstanding operational results and accomplishments: a very high rating from users on overall satisfaction that ties the highest-ever mark set in CY 2014; the greatest number of core-hours delivered to research projects; the largest percentage of capability usage since the OLCF began tracking the metric in 2009; and success in delivering on the allocation of 60, 30, and 10% of core hours offered for the INCITE (Innovative and Novel Computational Impact on Theory and Experiment), ALCC (Advanced Scientific Computing Research Leadership Computing Challenge), and Director’s Discretionary programs, respectively. These accomplishments, coupled with the extremely high utilization rate, represent the fulfillment of the promise of Titan: maximum use by maximum-size simulations. The impact of all of these successes and more is reflected in the accomplishments of OLCF users, with publications this year in notable journals Nature, Nature Materials, Nature Chemistry, Nature Physics, Nature Climate Change, ACS Nano, Journal of the American Chemical Society, and Physical Review Letters, as well as many others. The achievements included in the 2015 OLCF Operational Assessment Report reflect first-ever or largest simulations in their communities; for example, Titan enabled engineers in Los Angeles and the surrounding region to design and begin building improved critical infrastructure by enabling the highest-resolution CyberShake map for Southern California to date. The Titan system provides the largest extant heterogeneous architecture for computing and computational science. Usage is high, delivering on the promise of a system well-suited for capability simulations for science. This success is due in part to innovations in tracking and reporting the activity on the compute nodes, and using this information to further enable and optimize applications, extending and balancing workload across the entire node. The OLCF continues to invest in innovative processes, tools, and resources necessary to meet continuing user demand. The facility’s leadership in data analysis and workflows was featured at the Department of Energy (DOE) booth at SC15, for the second year in a row, highlighting work with researchers from the National Library of Medicine coupled with unique computational and data resources serving experimental and observational data across facilities. Effective operations of the OLCF play a key role in the scientific missions and accomplishments of its users. Building on the exemplary year of 2014, as shown by the 2014 Operational Assessment Report (OAR) review committee response in Appendix A, this OAR delineates the policies, procedures, and innovations implemented by the OLCF to continue delivering a multi-petaflop resource for cutting-edge research. This report covers CY 2015, which, unless otherwise specified, denotes January 1, 2015, through December 31, 2015.

  14. A large high vacuum, high pumping speed space simulation chamber for electric propulsion

    NASA Technical Reports Server (NTRS)

    Grisnik, Stanley P.; Parkes, James E.

    1994-01-01

    Testing high power electric propulsion devices poses unique requirements on space simulation facilities. Very high pumping speeds are required to maintain high vacuum levels while handling large volumes of exhaust products. These pumping speeds are significantly higher than those available in most existing vacuum facilities. There is also a requirement for relatively large vacuum chamber dimensions to minimize facility wall/thruster plume interactions and to accommodate far field plume diagnostic measurements. A 4.57 m (15 ft) diameter by 19.2 m (63 ft) long vacuum chamber at NASA Lewis Research Center is described. The chamber utilizes oil diffusion pumps in combination with cryopanels to achieve high vacuum pumping speeds at high vacuum levels. The facility is computer controlled for all phases of operation from start-up, through testing, to shutdown. The computer control system increases the utilization of the facility and reduces the manpower requirements needed for facility operations.

  15. Boundary Layer Transition and Trip Effectiveness on an Apollo Capsule in the JAXA High Enthalpy Shock Tunnel (HIEST) Facility

    NASA Technical Reports Server (NTRS)

    Kirk, Lindsay C.; Lillard, Randolph P.; Olejniczak, Joseph; Tanno, Hideyuki

    2015-01-01

    Computational assessments were performed to size boundary layer trips for a scaled Apollo capsule model in the High Enthalpy Shock Tunnel (HIEST) facility at the JAXA Kakuda Space Center in Japan. For stagnation conditions between 2 MJ/kg and 20 MJ/kg and between 10 MPa and 60 MPa, the appropriate trips were determined to be between 0.2 mm and 1.3 mm high, which provided kappa/delta values on the heatshield from 0.15 to 2.25. The tripped configuration consisted of an insert with a series of diamond-shaped trips along the heatshield downstream of the stagnation point. Surface heat flux measurements were obtained on a 250 mm diameter, 6.4% scale capsule model, and pressure measurements were taken at axial stations along the nozzle walls. At low enthalpy conditions, the computational predictions agree favorably with the test data along the heatshield centerline. However, agreement becomes less favorable as the enthalpy increases. The measured surface heat flux on the heatshield from the HIEST facility was under-predicted by the computations in these cases. Both smooth and tripped configurations were tested for comparison, and a post-test computational analysis showed that kappa/delta values based on the as-measured stagnation conditions ranged between 0.5 and 1.2. Tripped configurations with both 0.6 mm and 0.8 mm trip heights were able to effectively trip the flow to fully turbulent over a range of freestream conditions.

  16. Costa - Introduction to 2015 Annual Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Costa, James E.

    Just as Sandia National Laboratories has two major locations (NM and CA), along with a number of smaller facilities across the nation, so too are its scientific, engineering, and computing resources distributed. As part of Sandia’s Institutional Computing Program, CA site-based Sandia computer scientists and engineers have been providing mission and research staff with local CA-resident expertise on computing options while also focusing on two growing high performance computing research problems. The first is how to increase system resilience to failure as machines grow larger, more complex, and heterogeneous. The second is how to ensure that computer hardware and configurations are optimized for specialized data-analytical mission needs within the overall Sandia computing environment, including the HPC subenvironment. All of these activities support the larger Sandia effort in accelerating the development and integration of high performance computing into national security missions. Sandia continues both to promote national R&D objectives, including the recent Presidential Executive Order establishing the National Strategic Computing Initiative, and to work to ensure that the full range of computing services and capabilities are available for all mission responsibilities, from national security to energy to homeland defense.

  17. Flow Characterization Studies of the 10-MW TP3 Arc-Jet Facility: Probe Sweeps

    NASA Technical Reports Server (NTRS)

    Goekcen, Tahir; Alunni, Antonella I.

    2016-01-01

    This paper reports computational simulations and analysis in support of calibration and flow characterization tests in a high enthalpy arc-jet facility at NASA Ames Research Center. These tests were conducted in the NASA Ames 10-MW TP3 facility using flat-faced stagnation calorimeters at six conditions corresponding to the steps of a simulated flight heating profile. Data were obtained using a conical nozzle test configuration in which the models were placed in a free jet downstream of the nozzle. Experimental surveys of the arc-jet test flow with pitot pressure and heat flux probes were also performed at these arc-heater conditions, providing an assessment of the flow uniformity and valuable data for the flow characterization. Two different sets of pitot pressure and heat flux probes were used: 9.1-mm sphere-cone probes (nose radius of 4.57 mm or 0.18 in) with null-point heat flux gages, and 15.9-mm (0.625 in) diameter hemisphere probes with Gardon gages. The probe survey data clearly show that the test flow in the TP3 facility is not uniform at most conditions (not even axisymmetric at some conditions), and the extent of non-uniformity is highly dependent on various arc-jet parameters such as arc current, mass flow rate, and the amount of cold-gas injection at the arc-heater plenum. The present analysis comprises computational fluid dynamics simulations of the nonequilibrium flowfield in the facility nozzle and test box, including the models tested. Comparisons of computations with the experimental measurements show reasonably good agreement, except at the extreme low-pressure conditions of the facility envelope.

  18. GPU Acceleration of the Locally Selfconsistent Multiple Scattering Code for First Principles Calculation of the Ground State and Statistical Physics of Materials

    NASA Astrophysics Data System (ADS)

    Eisenbach, Markus

    The Locally Self-consistent Multiple Scattering (LSMS) code solves the first-principles density functional theory Kohn-Sham equation for a wide range of materials, with a special focus on metals, alloys, and metallic nanostructures. It has traditionally exhibited near-perfect scalability on massively parallel high performance computer architectures. We present our efforts to exploit GPUs to accelerate the LSMS code to enable first-principles calculations of O(100,000) atoms and statistical physics sampling of finite temperature properties. Using the Cray XK7 system Titan at the Oak Ridge Leadership Computing Facility, we achieve a sustained performance of 14.5 PFlop/s and a speedup of 8.6 compared to the CPU-only code. This work has been sponsored by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Material Sciences and Engineering Division and by the Office of Advanced Scientific Computing. This work used resources of the Oak Ridge Leadership Computing Facility, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lingerfelt, Eric J; Endeve, Eirik; Hui, Yawei

    Improvements in scientific instrumentation allow imaging at mesoscopic to atomic length scales and in many spectroscopic modes, and now--with the rise of multimodal acquisition systems and the associated processing capability--the era of multidimensional, informationally dense data sets has arrived. Technical issues in these combinatorial scientific fields are exacerbated by computational challenges best summarized as a necessity for drastic improvement in the capability to transfer, store, and analyze large volumes of data. The Bellerophon Environment for Analysis of Materials (BEAM) platform provides materials scientists the capability to directly leverage the integrated computational and analytical power of High Performance Computing (HPC) to perform scalable data analysis and simulation and to manage uploaded data files via an intuitive, cross-platform client user interface. This framework delivers authenticated, "push-button" execution of complex user workflows that deploy data analysis algorithms and computational simulations utilizing compute-and-data cloud infrastructures and HPC environments like Titan at the Oak Ridge Leadership Computing Facility (OLCF).

  20. An application of high authority/low authority control and positivity

    NASA Technical Reports Server (NTRS)

    Seltzer, S. M.; Irwin, D.; Tollison, D.; Waites, H. B.

    1988-01-01

    Control Dynamics Company (CDy), in conjunction with NASA Marshall Space Flight Center (MSFC), has supported the U.S. Air Force Wright Aeronautical Laboratory (AFWAL) in conducting an investigation of the implementation of several DOD controls techniques. These techniques are to provide vibration suppression and precise attitude control for flexible space structures. AFWAL issued a contract to Control Dynamics to perform this work under the Active Control Technique Evaluation for Spacecraft (ACES) Program. The High Authority Control/Low Authority Control (HAC/LAC) and Positivity control techniques, which were cultivated under the DARPA Active Control of Space Structures (ACOSS) Program, were applied to a structural model of the NASA/MSFC Ground Test Facility ACES configuration. The control system designs were accomplished, and linear post-analyses of the closed-loop systems are provided. The control system designs take into account effects of sampling and delay in the control computer. Nonlinear simulation runs were used to verify the control system designs and implementations in the facility control computers. Finally, test results are given to verify operation of the control systems in the test facility.

  1. The role of dedicated data computing centers in the age of cloud computing

    NASA Astrophysics Data System (ADS)

    Caramarcu, Costin; Hollowell, Christopher; Strecker-Kellogg, William; Wong, Antonio; Zaytsev, Alexandr

    2017-10-01

    Brookhaven National Laboratory (BNL) anticipates significant growth in scientific programs with large computing and data storage needs in the near future and has recently reorganized support for scientific computing to meet these needs. A key component is the enhanced role of the RHIC-ATLAS Computing Facility (RACF) in support of high-throughput and high-performance computing (HTC and HPC) at BNL. This presentation discusses the evolving role of the RACF at BNL, in light of its growing portfolio of responsibilities and its increasing integration with cloud (academic and for-profit) computing activities. We also discuss BNL’s plan to build a new computing center to support the new responsibilities of the RACF and present a summary of the cost-benefit analysis performed, including the types of computing activities that benefit most from a local data center vs. cloud computing. This analysis is partly based on an updated cost comparison of Amazon EC2 computing services and the RACF, which was originally conducted in 2012.

  2. An Automated DAKOTA and VULCAN-CFD Framework with Application to Supersonic Facility Nozzle Flowpath Optimization

    NASA Technical Reports Server (NTRS)

    Axdahl, Erik L.

    2015-01-01

    Removing human interaction from design processes by using automation may lead to gains in both productivity and design precision. This memorandum describes efforts to incorporate high fidelity numerical analysis tools into an automated framework and to apply that framework to applications of practical interest. The purpose of this effort was to integrate VULCAN-CFD into an automated, DAKOTA-enabled framework, with a proof-of-concept application being the optimization of supersonic test facility nozzles. It was shown that the optimization framework could be deployed on a high performance computing cluster with the flow of information handled effectively to guide the optimization process. Furthermore, the application of the framework to supersonic test facility nozzle flowpath design and optimization was demonstrated using multiple optimization algorithms.
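
    The abstract describes the general pattern of an automated design loop: an optimizer repeatedly drives a black-box flow solver through a wrapper and reads back an objective. The sketch below illustrates that pattern only; it uses SciPy in place of DAKOTA and a toy surrogate in place of a VULCAN-CFD run, and the design variables and model are invented for illustration.

```python
# Minimal sketch of an automated nozzle-optimization loop. Hypothetical
# throughout: the real framework couples DAKOTA to VULCAN-CFD; here SciPy's
# Nelder-Mead optimizer drives a toy surrogate standing in for the CFD run.
from scipy.optimize import minimize

def run_flow_solver(design):
    """Stand-in for a CFD evaluation: takes nozzle design parameters and
    returns a penalty measuring deviation from a target exit Mach number."""
    throat_radius, expansion_angle = design
    exit_mach = 1.0 + 2.5 * expansion_angle / (throat_radius + 0.5)  # toy model
    target_mach = 4.0
    return (exit_mach - target_mach) ** 2

result = minimize(run_flow_solver, x0=[0.3, 0.4], method="Nelder-Mead")
print("best (throat radius, expansion angle):", result.x)
```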

  3. Performance evaluation of the Engineering Analysis and Data Systems (EADS) 2

    NASA Technical Reports Server (NTRS)

    Debrunner, Linda S.

    1994-01-01

    The Engineering Analysis and Data System (EADS) II (1) was installed in March 1993 to provide high performance computing for science and engineering at Marshall Space Flight Center (MSFC). EADS II increased the computing capabilities over the existing EADS facility in the areas of throughput and mass storage. EADS II includes a Vector Processor Compute System (VPCS), a Virtual Memory Compute System, a Common File System (CFS), and a Common Output System (COS), as well as an Image Processing Station, Mini Super Computers, and Intelligent Workstations. These facilities are interconnected by a sophisticated network system. This work considers only the performance of the VPCS and the CFS. The VPCS is a Cray YMP. The CFS is implemented on an RS 6000 using the UniTree Mass Storage System. To better meet the science and engineering computing requirements, EADS II must be monitored, its performance analyzed, and appropriate modifications for performance improvement made. Implementing this approach requires tools to assist in performance monitoring and analysis. In Spring 1994, PerfStat 2.0 was purchased to meet these needs for the VPCS and the CFS. PerfStat (2) is a set of tools that can be used to analyze both historical and real-time performance data. Its flexible design allows significant user customization. The user identifies what data is collected, how it is classified, and how it is displayed for evaluation. Both graphical and tabular displays are supported. The capability of the PerfStat tool was evaluated, appropriate modifications to EADS II to optimize throughput and enhance productivity were suggested and implemented, and the effects of these modifications on system performance were observed. In this paper, the PerfStat tool is described, then its use with EADS II is outlined briefly. Next, the evaluation of the VPCS, as well as the modifications made to the system, are described. Finally, conclusions are drawn and recommendations for future work are outlined.

  4. The grand challenge of managing the petascale facility.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aiken, R. J.; Mathematics and Computer Science

    2007-02-28

    This report is the result of a study of networks and how they may need to evolve to support petascale leadership computing and science. As Dr. Ray Orbach, director of the Department of Energy's Office of Science, says in the spring 2006 issue of SciDAC Review, 'One remarkable example of growth in unexpected directions has been in high-end computation'. In the same article Dr. Michael Strayer states, 'Moore's law suggests that before the end of the next cycle of SciDAC, we shall see petaflop computers'. Given the Office of Science's strong leadership and support for petascale computing and facilities, we should expect to see petaflop computers in operation in support of science before the end of the decade, and DOE/SC Advanced Scientific Computing Research programs are focused on making this a reality. This study took its lead from this strong focus on petascale computing and the networks required to support such facilities, but it grew to include almost all aspects of the DOE/SC petascale computational and experimental science facilities, all of which will face daunting challenges in managing and analyzing the voluminous amounts of data expected. In addition, trends indicate the increased coupling of unique experimental facilities with computational facilities, along with the integration of multidisciplinary datasets and high-end computing with data-intensive computing; and we can expect these trends to continue at the petascale level and beyond. Coupled with recent technology trends, they clearly indicate the need for including capability petascale storage, networks, and experiments, as well as collaboration tools and programming environments, as integral components of the Office of Science's petascale capability metafacility. The objective of this report is to recommend a new cross-cutting program to support the management of petascale science and infrastructure. The appendices of the report document current and projected DOE computation facilities, science trends, and technology trends, whose combined impact can affect the manageability and stewardship of DOE's petascale facilities. This report is not meant to be all-inclusive. Rather, the facilities, science projects, and research topics presented are to be considered examples to clarify a point.

  5. A large-scale computer facility for computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Bailey, F. R.; Ballhaus, W. F., Jr.

    1985-01-01

    As a result of advances related to the combination of computer system technology and numerical modeling, computational aerodynamics has emerged as an essential element in aerospace vehicle design methodology. NASA has, therefore, initiated the Numerical Aerodynamic Simulation (NAS) Program with the objective to provide a basis for further advances in the modeling of aerodynamic flowfields. The Program is concerned with the development of a leading-edge, large-scale computer facility. This facility is to be made available to Government agencies, industry, and universities as a necessary element in ensuring continuing leadership in computational aerodynamics and related disciplines. Attention is given to the requirements for computational aerodynamics, the principal specific goals of the NAS Program, the high-speed processor subsystem, the workstation subsystem, the support processing subsystem, the graphics subsystem, the mass storage subsystem, the long-haul communication subsystem, the high-speed data-network subsystem, and software.

  6. Computational Analyses in Support of Sub-scale Diffuser Testing for the A-3 Facility. Part 1; Steady Predictions

    NASA Technical Reports Server (NTRS)

    Allgood, Daniel C.; Graham, Jason S.; Ahuja, Vineet; Hosangadi, Ashvin

    2010-01-01

    Simulation technology can play an important role in rocket engine test facility design and development by assessing risks, providing analysis of dynamic pressure and thermal loads, identifying failure modes, and predicting anomalous behavior of critical systems. Advanced numerical tools assume greater significance in supporting testing and design of high altitude testing facilities and plume induced testing environments of high thrust engines because of the greater inter-dependence and synergy in the functioning of the different sub-systems. This is especially true for facilities such as the proposed A-3 facility at NASA SSC because of a challenging operating envelope linked to variable throttle conditions at relatively low chamber pressures. Facility designs in this case will require a complex network of diffuser ducts, steam ejector trains, fast operating valves, cooling water systems and flow diverters that need to be characterized for steady state performance. In this paper, we demonstrate the use of advanced CFD analyses to evaluate supersonic diffuser and steam ejector performance in a sub-scale A-3 facility at NASA Stennis Space Center (SSC), where extensive testing was performed. Furthermore, the focus in this paper relates to modeling of critical sub-systems and components used in facilities such as the A-3 facility. The work here addresses deficiencies in empirical models and current CFD analyses that are used for design of supersonic diffusers/turning vanes/ejectors as well as analyses for confined plumes and venting processes. The primary areas addressed are: (1) supersonic diffuser performance, including analyses of thermal loads; (2) accurate shock capturing in the diffuser duct; (3) the effect of the turning duct on the performance of the facility; (4) prediction of mass flow rates and performance classification for steam ejectors; and (5) comparisons with test data from sub-scale diffuser testing and assessment of confidence levels in CFD-based flowpath modeling of the facility. The analysis tools used here expand on the multi-element unstructured CFD which has been tailored and validated for impingement dynamics of dry plumes, complex valve/feed systems, and high pressure propellant delivery systems used in engine and component test stands at NASA SSC. The analyses performed in the evaluation of the sub-scale diffuser facility explored several important factors that influence modeling and understanding of facility operation, such as (a) the importance of modeling the facility with a real-gas approximation, (b) approximating the cluster of steam ejector nozzles as a single annular nozzle, (c) the existence of mixed subsonic/supersonic flow downstream of the turning duct, and (d) the inadequacy of two-equation turbulence models in predicting the correct pressurization in the turning duct and expansion of the second stage steam ejectors. The procedure used for modeling the facility was as follows: (i) the engine, test cell and first stage ejectors were simulated with an axisymmetric approximation; (ii) the turning duct, second stage ejectors and the piping downstream of the second stage ejectors were analyzed with a three-dimensional simulation utilizing a half-plane symmetry approximation, with the solution (primitive variables such as pressure, velocity components, temperature and turbulence quantities) passed from the first computational domain and specified as a supersonic boundary condition for the second simulation; and (iii) the third domain comprised the exit diffuser and the region in the vicinity of the facility (primarily included to capture the correct shock structure at the exit of the facility and the entrainment characteristics). The first set of simulations, comprising the engine, test cell and first stage ejectors, was carried out both as a turbulent real-gas calculation and as a turbulent perfect-gas calculation. Comparisons of the Mach number and temperature distributions for the two cases (real-gas turbulent and perfect-gas turbulent) are shown in Figures 1 and 2, respectively.

  7. The HEPCloud Facility: elastic computing for High Energy Physics - The NOvA Use Case

    NASA Astrophysics Data System (ADS)

    Fuess, S.; Garzoglio, G.; Holzman, B.; Kennedy, R.; Norman, A.; Timm, S.; Tiradani, A.

    2017-10-01

    The need for computing in the HEP community follows cycles of peaks and valleys mainly driven by conference dates, accelerator shutdowns, holiday schedules, and other factors. Because of this, the classical method of provisioning these resources at the provider facilities has drawbacks such as potential overprovisioning. As the appetite for computing increases, however, so does the need to maximize cost efficiency by developing a model for dynamically provisioning resources only when needed. To address this issue, the HEPCloud project was launched by the Fermilab Scientific Computing Division in June 2015. Its goal is to develop a facility that provides a common interface to a variety of resources, including local clusters, grids, high performance computers, and community and commercial Clouds. Initially targeted experiments include CMS and NOvA, as well as other Fermilab stakeholders. In its first phase, the project has demonstrated the use of the “elastic” provisioning model offered by commercial clouds, such as Amazon Web Services. In this model, resources are rented and provisioned automatically over the Internet upon request. In January 2016, the project demonstrated the ability to increase the total amount of global CMS resources by 58,000 cores from 150,000 cores - a 38 percent increase - in preparation for the Rencontres de Moriond. In March 2016, the NOvA experiment also demonstrated resource burst capabilities with an additional 7,300 cores, achieving a scale almost four times as large as the locally allocated resources and utilizing the local AWS S3 storage to optimize data handling operations and costs. NOvA used the same familiar services used for local computations, such as data handling and job submission, in preparation for the Neutrino 2016 conference. In both cases, the cost was contained by the use of the Amazon Spot Instance Market and the Decision Engine, a HEPCloud component that aims to minimize cost and job interruption. This paper describes the Fermilab HEPCloud Facility and the challenges overcome for the CMS and NOvA communities.

  8. Data management and its role in delivering science at DOE BES user facilities - Past, Present, and Future

    NASA Astrophysics Data System (ADS)

    Miller, Stephen D.; Herwig, Kenneth W.; Ren, Shelly; Vazhkudai, Sudharshan S.; Jemian, Pete R.; Luitz, Steffen; Salnikov, Andrei A.; Gaponenko, Igor; Proffen, Thomas; Lewis, Paul; Green, Mark L.

    2009-07-01

    The primary mission of user facilities operated by Basic Energy Sciences under the Department of Energy is to produce data for users in support of open science and basic research [1]. We trace back almost 30 years of history across selected user facilities, illustrating the evolution of facility data management practices and how these practices have related to performing scientific research. The facilities cover multiple techniques such as X-ray and neutron scattering, imaging, and tomography sciences. Over time, detector and data acquisition technologies have dramatically increased the ability to produce prolific volumes of data, challenging the traditional paradigm of users taking data home upon completion of their experiments to process and publish their results. During this time, computing capacity has also increased dramatically, though the size of the data has grown significantly faster than the capacity of one's laptop to manage and process this new facility-produced data. Trends indicate that this will continue to be the case for yet some time. Thus users face a quandary of how to manage today's data complexity and size, as these may exceed the computing resources users have available to themselves. This same quandary can also stifle collaboration and sharing. Realizing this, some facilities are already providing web portal access to data and computing, thereby providing users access to the resources they need [2]. Portal-based computing is now driving researchers to think about how to use the data collected at multiple facilities in an integrated way to perform their research, and also how to collaborate and share data. In the future, inter-facility data management systems will enable next-tier cross-instrument, cross-facility scientific research fuelled by smart applications residing upon user computer resources. We can learn from the medical imaging community, which has been working since the early 1990s to integrate data from across multiple modalities to achieve better diagnoses [3]; similarly, data fusion across BES facilities will lead to new scientific discoveries.

  9. LXtoo: an integrated live Linux distribution for the bioinformatics community

    PubMed Central

    2012-01-01

    Background Recent advances in high-throughput technologies have dramatically increased biological data generation. However, many research groups lack computing facilities and specialists. This is an obstacle that remains to be addressed. Here, we present a Linux distribution, LXtoo, to provide a flexible computing platform for bioinformatics analysis. Findings Unlike most of the existing live Linux distributions for bioinformatics, which limit their usage to sequence analysis and protein structure prediction, LXtoo incorporates a comprehensive collection of bioinformatics software, including data mining tools for microarray and proteomics, protein-protein interaction analysis, and computationally complex tasks like molecular dynamics. Moreover, most of the programs have been configured and optimized for high performance computing. Conclusions LXtoo aims to provide a well-supported computing environment tailored for bioinformatics research, reducing duplication of efforts in building computing infrastructure. LXtoo is distributed as a Live DVD and freely available at http://bioinformatics.jnu.edu.cn/LXtoo. PMID:22813356

  10. LXtoo: an integrated live Linux distribution for the bioinformatics community.

    PubMed

    Yu, Guangchuang; Wang, Li-Gen; Meng, Xiao-Hua; He, Qing-Yu

    2012-07-19

    Recent advances in high-throughput technologies have dramatically increased biological data generation. However, many research groups lack computing facilities and specialists. This is an obstacle that remains to be addressed. Here, we present a Linux distribution, LXtoo, to provide a flexible computing platform for bioinformatics analysis. Unlike most of the existing live Linux distributions for bioinformatics, which limit their usage to sequence analysis and protein structure prediction, LXtoo incorporates a comprehensive collection of bioinformatics software, including data mining tools for microarray and proteomics, protein-protein interaction analysis, and computationally complex tasks like molecular dynamics. Moreover, most of the programs have been configured and optimized for high performance computing. LXtoo aims to provide a well-supported computing environment tailored for bioinformatics research, reducing duplication of efforts in building computing infrastructure. LXtoo is distributed as a Live DVD and freely available at http://bioinformatics.jnu.edu.cn/LXtoo.

  11. Automating NEURON Simulation Deployment in Cloud Resources.

    PubMed

    Stockton, David B; Santamaria, Fidel

    2017-01-01

    Simulations in neuroscience are performed on local servers or High Performance Computing (HPC) facilities. Recently, cloud computing has emerged as a potential computational platform for neuroscience simulation. In this paper we compare and contrast HPC and cloud resources for scientific computation, then report how we deployed NEURON, a widely used simulator of neuronal activity, in three clouds: Chameleon Cloud, a hybrid private academic cloud for cloud technology research based on the OpenStack software; Rackspace, a public commercial cloud, also based on OpenStack; and Amazon Elastic Compute Cloud, based on Amazon's proprietary software. We describe the manual procedures and how to automate cloud operations. We describe extending our simulation automation software called NeuroManager (Stockton and Santamaria, Frontiers in Neuroinformatics, 2015), so that the user is capable of recruiting private cloud, public cloud, HPC, and local servers simultaneously with a simple common interface. We conclude by performing several studies in which we examine speedup, efficiency, total session time, and cost for sets of simulations of a published NEURON model.

  12. Automating NEURON Simulation Deployment in Cloud Resources

    PubMed Central

    Santamaria, Fidel

    2016-01-01

    Simulations in neuroscience are performed on local servers or High Performance Computing (HPC) facilities. Recently, cloud computing has emerged as a potential computational platform for neuroscience simulation. In this paper we compare and contrast HPC and cloud resources for scientific computation, then report how we deployed NEURON, a widely used simulator of neuronal activity, in three clouds: Chameleon Cloud, a hybrid private academic cloud for cloud technology research based on the OpenStack software; Rackspace, a public commercial cloud, also based on OpenStack; and Amazon Elastic Compute Cloud, based on Amazon’s proprietary software. We describe the manual procedures and how to automate cloud operations. We describe extending our simulation automation software called NeuroManager (Stockton and Santamaria, Frontiers in Neuroinformatics, 2015), so that the user is capable of recruiting private cloud, public cloud, HPC, and local servers simultaneously with a simple common interface. We conclude by performing several studies in which we examine speedup, efficiency, total session time, and cost for sets of simulations of a published NEURON model. PMID:27655341

  13. Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers.

    PubMed

    Jordan, Jakob; Ippen, Tammo; Helias, Moritz; Kitayama, Itaru; Sato, Mitsuhisa; Igarashi, Jun; Diesmann, Markus; Kunkel, Susanne

    2018-01-01

    State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10% of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code, and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems.
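
    A minimal sketch of the idea behind a two-tier connection store (illustrative only, not NEST's actual data structures): on each compute node, tier one maps a global source-neuron id to a compact local index only if that source actually has targets on the node, and tier two holds the per-source target lists, so memory scales with local connectivity rather than with network size.

```python
# Illustrative two-tier connection store for one compute node. A source
# neuron with no local targets costs nothing here, which is what matters
# when brain-scale connectivity is extremely sparse per node.
class TwoTierConnections:
    def __init__(self):
        self.source_index = {}   # tier 1: global source id -> compact index
        self.targets = []        # tier 2: per-source lists of local target ids

    def connect(self, source_gid, target_lid):
        idx = self.source_index.get(source_gid)
        if idx is None:          # first connection from this source on this node
            idx = len(self.targets)
            self.source_index[source_gid] = idx
            self.targets.append([])
        self.targets[idx].append(target_lid)

    def deliver(self, source_gid):
        """Return local targets of a spiking source, or [] if none here."""
        idx = self.source_index.get(source_gid)
        return self.targets[idx] if idx is not None else []

conns = TwoTierConnections()
conns.connect(42, 3); conns.connect(42, 7); conns.connect(1_000_000, 5)
print(conns.deliver(42), conns.deliver(99))   # [3, 7] []
```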

  14. Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers

    PubMed Central

    Jordan, Jakob; Ippen, Tammo; Helias, Moritz; Kitayama, Itaru; Sato, Mitsuhisa; Igarashi, Jun; Diesmann, Markus; Kunkel, Susanne

    2018-01-01

    State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10% of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code, and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems. PMID:29503613

  15. EOS MLS Science Data Processing System: A Description of Architecture and Capabilities

    NASA Technical Reports Server (NTRS)

    Cuddy, David T.; Echeverri, Mark D.; Wagner, Paul A.; Hanzel, Audrey T.; Fuller, Ryan A.

    2006-01-01

    This paper describes the architecture and capabilities of the Science Data Processing System (SDPS) for the EOS MLS. The SDPS consists of two major components--the Science Computing Facility and the Science Investigator-led Processing System. The Science Computing Facility provides facilities for the EOS MLS Science Team to perform scientific algorithm development, processing software development, quality control of data products, and scientific analyses. The Science Investigator-led Processing System processes and reprocesses the science data for the entire mission and delivers the data products to the Science Computing Facility and to the Goddard Space Flight Center Earth Science Distributed Active Archive Center, which archives and distributes the standard science products.

  16. Performance Predictions for Proposed ILS Facilities at St. Louis Municipal Airport

    DOT National Transportation Integrated Search

    1978-01-01

    The results of computer simulations of performance of proposed ILS facilities on Runway 12L/30R at St. Louis Municipal Airport (Lambert Field) are reported. These simulations indicate that an existing industrial complex located near the runway is com...

  17. NREL, Hewlett-Packard Developed Ultra-Efficient, High-Performance Computing

    Science.gov Websites

    The system allows the heat captured from the supercomputer to provide all the heating needs for the Energy Systems Integration Facility, and there's even enough heat left over to melt snow on the sidewalks outside during the winter. During the summer, the unused heat can be rejected via cooling towers.

  18. A ``Cyber Wind Facility'' for HPC Wind Turbine Field Experiments

    NASA Astrophysics Data System (ADS)

    Brasseur, James; Paterson, Eric; Schmitz, Sven; Campbell, Robert; Vijayakumar, Ganesh; Lavely, Adam; Jayaraman, Balaji; Nandi, Tarak; Jha, Pankaj; Dunbar, Alex; Motta-Mena, Javier; Craven, Brent; Haupt, Sue

    2013-03-01

    The Penn State "Cyber Wind Facility" (CWF) is a high-fidelity, multi-scale, high performance computing (HPC) environment in which "cyber field experiments" are designed and "cyber data" collected from wind turbines operating within the atmospheric boundary layer (ABL) environment. Conceptually the "facility" is akin to a high-tech wind tunnel with a controlled physical environment, but unlike a wind tunnel it replicates commercial-scale wind turbines operating in the field and forced by true atmospheric turbulence with controlled stability state. The CWF is built from state-of-the-art, high-accuracy geometry and grid design and numerical methods, with high-resolution simulation strategies that blend unsteady RANS near the surface with high-fidelity large-eddy simulation (LES) in separated boundary layer, blade and rotor wake regions, embedded within high-resolution LES of the ABL. CWF experiments complement physical field facility experiments, which can capture wider ranges of meteorological events but with minimal control over the environment and with very small numbers of sensors at low spatial resolution. I shall report on the first CWF experiments, aimed at dynamical interactions between ABL turbulence and space-time wind turbine loadings. Supported by DOE and NSF.

  19. A comprehensive approach to decipher biological computation to achieve next generation high-performance exascale computing.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    James, Conrad D.; Schiess, Adrian B.; Howell, Jamie

    2013-10-01

    The human brain (volume = 1200 cm^3) consumes 20 W and is capable of performing >10^16 operations/s. Current supercomputer technology has reached 10^15 operations/s, yet it requires 1500 m^3 and 3 MW, giving the brain a 10^12 advantage in operations/s/W/cm^3. Thus, to reach exascale computation, two achievements are required: 1) improved understanding of computation in biological tissue, and 2) a paradigm shift towards neuromorphic computing where hardware circuits mimic properties of neural tissue. To address 1), we will interrogate corticostriatal networks in mouse brain tissue slices, specifically with regard to their frequency filtering capabilities as a function of input stimulus. To address 2), we will instantiate biological computing characteristics such as multi-bit storage into hardware devices with future computational and memory applications. Resistive memory devices will be modeled, designed, and fabricated in the MESA facility in consultation with our internal and external collaborators.
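
    The abstract's figure of merit can be checked directly from the numbers it quotes; the short computation below reproduces the roughly 10^12 advantage in operations/s/W/cm^3:

```python
# Reproducing the abstract's figure of merit (operations per second per
# watt per cubic centimeter) for the brain vs. a petascale machine,
# using only the numbers quoted in the abstract.
brain_ops   = 1e16          # operations/s
brain_power = 20.0          # W
brain_vol   = 1200.0        # cm^3

hpc_ops   = 1e15            # operations/s
hpc_power = 3e6             # 3 MW in W
hpc_vol   = 1500.0 * 1e6    # 1500 m^3 in cm^3

brain_fom = brain_ops / (brain_power * brain_vol)   # ~4.2e11
hpc_fom   = hpc_ops / (hpc_power * hpc_vol)         # ~0.22
print(f"brain advantage: {brain_fom / hpc_fom:.1e}")  # ~1.9e12, i.e. ~10^12
```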

  20. Automated Heat-Flux-Calibration Facility

    NASA Technical Reports Server (NTRS)

    Liebert, Curt H.; Weikle, Donald H.

    1989-01-01

    Computer control speeds operation of equipment and processing of measurements. A new heat-flux-calibration facility was developed at Lewis Research Center. It is used for fast-transient heat-transfer testing, durability testing, and calibration of heat-flux gauges. Calibrations are performed at constant or transient heat fluxes ranging from 1 to 6 MW/m^2 and at temperatures ranging from 80 K to the melting temperatures of most materials. The facility was developed because of the need to build and calibrate very small heat-flux gauges for the Space Shuttle main engine (SSME). It includes a lamp head attached to the side of a service module, an argon-gas-recirculation module, a reflector, a heat exchanger, and a high-speed positioning system. This type of automated heat-flux-calibration facility can be installed in industrial plants for onsite calibration of heat-flux gauges measuring fluxes of heat in advanced gas-turbine and rocket engines.

  1. High data volume and transfer rate techniques used at NASA's image processing facility

    NASA Technical Reports Server (NTRS)

    Heffner, P.; Connell, E.; Mccaleb, F.

    1978-01-01

    Data storage and transfer operations at a new image processing facility are described. The equipment includes high density digital magnetic tape drives and specially designed controllers that provide an interface between the tape drives and computerized image processing systems. The controller performs the functions necessary to convert the continuous serial data stream from the tape drive into a word-parallel, blocked data stream which then goes to the computer-based system. With regard to tape packing density, 1.8 × 10^10 data bits are stored on a reel of one-inch tape. System components and their operation are surveyed, and studies on advanced storage techniques are summarized.
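
    A minimal sketch of the controller's core conversion as described above, turning a serial bit stream into word-parallel, blocked data; the 16-bit word width and 256-word block size are assumptions chosen for the example, not the facility's actual parameters:

```python
# Illustrative serial-to-word-parallel conversion of the kind the tape
# controller is described as performing. Word width and block size are
# invented for the example.
def serial_to_blocks(bits, word_width=16, words_per_block=256):
    words = []
    for i in range(0, len(bits) - word_width + 1, word_width):
        word = 0
        for b in bits[i:i + word_width]:     # assemble one parallel word
            word = (word << 1) | b
        words.append(word)
    # group words into fixed-size blocks for the computer-based system
    return [words[j:j + words_per_block]
            for j in range(0, len(words), words_per_block)]

stream = [1, 0, 1, 1] * 1024                 # stand-in for the serial stream
blocks = serial_to_blocks(stream)
print(len(blocks), "block(s);", len(blocks[0]), "words in the first block")
```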

  2. Space Shuttle Underside Astronaut Communications Performance Evaluation

    NASA Technical Reports Server (NTRS)

    Hwu, Shian U.; Dobbins, Justin A.; Loh, Yin-Chung; Kroll, Quin D.; Sham, Catherine C.

    2005-01-01

    The Space Shuttle Ultra High Frequency (UHF) communications system is planned to provide Radio Frequency (RF) coverage for astronauts working on the underside of the Space Shuttle Orbiter (SSO) for thermal tile inspection and repair. This study assesses Space Shuttle UHF communication performance for astronauts in the shadow region without line-of-sight (LOS) to the Space Shuttle and Space Station UHF antennas. To ensure RF coverage performance at anticipated astronaut worksites, the link margin between the UHF antennas and Extravehicular Activity (EVA) astronauts with significant vehicle structure blockage was analyzed. A series of near-field measurements were performed using the NASA/JSC Anechoic Chamber antenna test facilities. Computational investigations were also performed using electromagnetic modeling techniques. A computer simulation tool based on the Geometrical Theory of Diffraction (GTD) was used to compute the signal strengths, obtained by computing the reflected and diffracted fields along the propagation paths between the transmitting and receiving antennas. Based on the results obtained in this study, RF coverage for UHF communication links was determined for the anticipated astronaut worksites in the shadow region underneath the Space Shuttle.
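
    The study itself used a GTD tool that sums reflected and diffracted ray contributions; as a much simpler illustration of how a link margin is assembled, the sketch below does free-space-only bookkeeping with an added blockage term. All numeric values (powers, gains, distance, frequency, sensitivity) are hypothetical.

```python
# Simplified link-margin bookkeeping (free-space path loss plus a lumped
# blockage term; the study used GTD ray contributions instead). All values
# below are invented, chosen only to illustrate the arithmetic.
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB."""
    c = 3e8
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

tx_power_dbm    = 30.0     # hypothetical transmit power
tx_gain_dbi     = 3.0      # hypothetical antenna gains
rx_gain_dbi     = 0.0
blockage_db     = 15.0     # extra loss from vehicle structure shadowing
sensitivity_dbm = -95.0    # hypothetical receiver sensitivity

loss = fspl_db(distance_m=50.0, freq_hz=400e6) + blockage_db  # UHF-band example
rx_power = tx_power_dbm + tx_gain_dbi + rx_gain_dbi - loss
print(f"received {rx_power:.1f} dBm, margin {rx_power - sensitivity_dbm:.1f} dB")
```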

  3. Data Acquisition Systems

    NASA Technical Reports Server (NTRS)

    1994-01-01

    In the mid-1980s, Kinetic Systems and Langley Research Center determined that high speed CAMAC (Computer Automated Measurement and Control) data acquisition systems could significantly improve Langley's ARTS (Advanced Real Time Simulation) system. The ARTS system supports flight simulation R&D, and the CAMAC equipment allowed 32 high performance simulators to be controlled by centrally located host computers. This technology broadened Kinetic Systems' capabilities and led to several commercial applications. One of them is General Atomics' fusion research program: Kinetic Systems equipment allows tokamak data to be acquired four to 15 times more rapidly. Ford Motor Company uses the same technology to control and monitor transmission testing facilities.

  4. Scenario-based modeling for multiple allocation hub location problem under disruption risk: multiple cuts Benders decomposition approach

    NASA Astrophysics Data System (ADS)

    Yahyaei, Mohsen; Bashiri, Mahdi

    2017-12-01

    The hub location problem arises in a variety of domains such as transportation and telecommunication systems. In many real-world situations, hub facilities are subject to disruption. This paper deals with the multiple allocation hub location problem in the presence of facility failure. To model the problem, a two-stage stochastic formulation is developed. In the proposed model, the number of scenarios grows exponentially with the number of facilities. To alleviate this issue, two approaches are applied simultaneously. The first is sample average approximation (SAA), which approximates the two-stage stochastic problem via sampling; computational performance is then enhanced by applying a multi-cut Benders decomposition approach. Numerical studies show the effective performance of the SAA in terms of optimality gap for small problem instances with numerous scenarios. Moreover, the performance of the multi-cut Benders decomposition is assessed through comparison with the classic version, and the computational results reveal the superiority of the multi-cut approach regarding computational time and number of iterations.
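
    A toy sketch of the sample average approximation step (illustrative only; the paper couples SAA with a multi-cut Benders decomposition rather than the brute-force enumeration used here): candidate hub sets are scored by their opening cost plus a sampled average of the recourse cost over random disruption scenarios. All numbers are invented.

```python
# Toy SAA sketch for hub location under disruption: pick the hub set whose
# *sampled* expected cost is lowest. Tiny invented instance; enumeration
# stands in for the paper's Benders-based solution of the sampled problem.
import itertools, random

random.seed(1)
hubs = [0, 1, 2]
open_cost  = {0: 10.0, 1: 12.0, 2: 8.0}
serve_cost = {0: 5.0, 1: 4.0, 2: 6.0}    # per-demand cost via each hub
fail_prob  = {0: 0.1, 1: 0.3, 2: 0.2}    # independent disruption probabilities
PENALTY = 50.0                            # cost when no open hub survives
N = 2000                                  # scenario sample size

def sampled_cost(open_set):
    total = sum(open_cost[h] for h in open_set)
    recourse = 0.0
    for _ in range(N):                    # draw N disruption scenarios
        alive = [h for h in open_set if random.random() > fail_prob[h]]
        recourse += min(serve_cost[h] for h in alive) if alive else PENALTY
    return total + recourse / N

best = min((s for r in (1, 2, 3) for s in itertools.combinations(hubs, r)),
           key=sampled_cost)
print("best hub set under SAA:", best)
```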

  5. User interface concerns

    NASA Technical Reports Server (NTRS)

    Redhed, D. D.

    1978-01-01

    Three possible goals for the Numerical Aerodynamic Simulation Facility (NASF) are: (1) a computational fluid dynamics (as opposed to aerodynamics) algorithm development tool; (2) a specialized research laboratory facility for nearly intractable aerodynamics problems that industry encounters; and (3) a facility for industry to use in its normal aerodynamics design work that requires high computing rates. The central system issue for industry use of such a computer is the quality of the user interface as implemented in some kind of a front end to the vector processor.

  6. Use of Continuous Integration Tools for Application Performance Monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vergara Larrea, Veronica G; Joubert, Wayne; Fuson, Christopher B

    High performance computing systems are becoming increasingly complex, both in node architecture and in the multiple layers of software stack required to compile and run applications. As a consequence, the likelihood is increasing for application performance regressions to occur as a result of routine upgrades of system software components which interact in complex ways. The purpose of this study is to evaluate the effectiveness of continuous integration tools for application performance monitoring on HPC systems. In addition, this paper also describes a prototype system for application performance monitoring based on Jenkins, a Java-based continuous integration tool. The monitoring system described leverages several features in Jenkins to track application performance results over time. Preliminary results and lessons learned from monitoring applications on Cray systems at the Oak Ridge Leadership Computing Facility are presented.
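
    The essential mechanic of CI-based performance monitoring is a post-benchmark check that fails the build on regression. A minimal sketch, with invented figures and thresholds (the actual system tracks results over time inside Jenkins):

```python
# Sketch of the kind of check a CI job (e.g., a Jenkins build step) can run
# after a benchmark: fail the build when performance drops more than a
# tolerance below the stored baseline. All figures here are invented.
import sys

TOLERANCE = 0.10   # flag regressions worse than 10%

def check(latest_gflops, baseline_gflops):
    if latest_gflops < baseline_gflops * (1.0 - TOLERANCE):
        print(f"REGRESSION: {latest_gflops:.1f} vs baseline "
              f"{baseline_gflops:.1f} GFlop/s")
        return 1       # nonzero exit status marks the CI job as failed
    print(f"OK: {latest_gflops:.1f} GFlop/s (baseline {baseline_gflops:.1f})")
    return 0

# In a real pipeline these numbers would be parsed from the benchmark's
# output and from a stored history; here they are hard-coded.
sys.exit(check(latest_gflops=698.0, baseline_gflops=812.0))
```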

  7. High-Performance Computing Data Center Cooling System Energy Efficiency |

    Science.gov Websites

    approaches involve a cooling distribution unit (CDU), which interfaces with the facility cooling loop and the energy recovery water (ERW) loop, a closed-loop system. There are three heat rejection options for this IT load: when possible, heat energy from the energy recovery loop is transferred

  8. Scidac-Data: Enabling Data Driven Modeling of Exascale Computing

    DOE PAGES

    Mubarak, Misbah; Ding, Pengfei; Aliaga, Leo; ...

    2017-11-23

    Here, the SciDAC-Data project is a DOE-funded initiative to analyze and exploit two decades of information and analytics that have been collected by the Fermilab data center on the organization, movement, and consumption of high energy physics (HEP) data. The project analyzes the analysis patterns and data organization that have been used by NOvA, MicroBooNE, MINERvA, CDF, D0, and other experiments to develop realistic models of HEP analysis workflows and data processing. The SciDAC-Data project aims to provide both realistic input vectors and corresponding output data that can be used to optimize and validate simulations of HEP analysis. These simulations are designed to address questions of data handling, cache optimization, and workflow structures that are the prerequisites for modern HEP analysis chains to be mapped and optimized to run on the next generation of leadership-class exascale computing facilities. We present the use of a subset of the SciDAC-Data distributions, acquired from analysis of approximately 71,000 HEP workflows run on the Fermilab data center and corresponding to over 9 million individual analysis jobs, as the input to detailed queuing simulations that model the expected data consumption and caching behaviors of the work running in high performance computing (HPC) and high throughput computing (HTC) environments. In particular we describe how the Sequential Access via Metadata (SAM) data-handling system in combination with the dCache/Enstore-based data archive facilities has been used to develop radically different models for analyzing the HEP data. We also show how the simulations may be used to assess the impact of design choices in archive facilities.
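
    A minimal sketch of the kind of queuing simulation described above: a single FIFO server fed by exponentially distributed job arrivals, with the simulated mean wait compared against the M/M/1 closed form. The rates are invented, not fit to the SciDAC-Data distributions.

```python
# Minimal discrete-event queuing sketch of the general kind used to model
# job/data workloads (single FIFO server; arrival and service rates are
# invented, not derived from the SciDAC-Data distributions).
import random

random.seed(0)
ARRIVAL_RATE, SERVICE_RATE, N_JOBS = 1.0, 1.2, 10000

t, server_free_at, total_wait = 0.0, 0.0, 0.0
for _ in range(N_JOBS):
    t += random.expovariate(ARRIVAL_RATE)          # next job arrives
    start = max(t, server_free_at)                 # waits if server is busy
    total_wait += start - t
    server_free_at = start + random.expovariate(SERVICE_RATE)

print(f"mean wait: {total_wait / N_JOBS:.2f} (M/M/1 theory: "
      f"{ARRIVAL_RATE / (SERVICE_RATE * (SERVICE_RATE - ARRIVAL_RATE)):.2f})")
```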

  9. Scidac-Data: Enabling Data Driven Modeling of Exascale Computing

    NASA Astrophysics Data System (ADS)

    Mubarak, Misbah; Ding, Pengfei; Aliaga, Leo; Tsaris, Aristeidis; Norman, Andrew; Lyon, Adam; Ross, Robert

    2017-10-01

    The SciDAC-Data project is a DOE-funded initiative to analyze and exploit two decades of information and analytics that have been collected by the Fermilab data center on the organization, movement, and consumption of high energy physics (HEP) data. The project analyzes the analysis patterns and data organization that have been used by NOvA, MicroBooNE, MINERvA, CDF, D0, and other experiments to develop realistic models of HEP analysis workflows and data processing. The SciDAC-Data project aims to provide both realistic input vectors and corresponding output data that can be used to optimize and validate simulations of HEP analysis. These simulations are designed to address questions of data handling, cache optimization, and workflow structures that are the prerequisites for modern HEP analysis chains to be mapped and optimized to run on the next generation of leadership-class exascale computing facilities. We present the use of a subset of the SciDAC-Data distributions, acquired from analysis of approximately 71,000 HEP workflows run on the Fermilab data center and corresponding to over 9 million individual analysis jobs, as the input to detailed queuing simulations that model the expected data consumption and caching behaviors of the work running in high performance computing (HPC) and high throughput computing (HTC) environments. In particular we describe how the Sequential Access via Metadata (SAM) data-handling system in combination with the dCache/Enstore-based data archive facilities has been used to develop radically different models for analyzing the HEP data. We also show how the simulations may be used to assess the impact of design choices in archive facilities.

  10. Scidac-Data: Enabling Data Driven Modeling of Exascale Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mubarak, Misbah; Ding, Pengfei; Aliaga, Leo

    Here, the SciDAC-Data project is a DOE-funded initiative to analyze and exploit two decades of information and analytics that have been collected by the Fermilab data center on the organization, movement, and consumption of high energy physics (HEP) data. The project analyzes the analysis patterns and data organization that have been used by NOvA, MicroBooNE, MINERvA, CDF, D0, and other experiments to develop realistic models of HEP analysis workflows and data processing. The SciDAC-Data project aims to provide both realistic input vectors and corresponding output data that can be used to optimize and validate simulations of HEP analysis. These simulations are designed to address questions of data handling, cache optimization, and workflow structures that are the prerequisites for modern HEP analysis chains to be mapped and optimized to run on the next generation of leadership-class exascale computing facilities. We present the use of a subset of the SciDAC-Data distributions, acquired from analysis of approximately 71,000 HEP workflows run on the Fermilab data center and corresponding to over 9 million individual analysis jobs, as the input to detailed queuing simulations that model the expected data consumption and caching behaviors of the work running in high performance computing (HPC) and high throughput computing (HTC) environments. In particular we describe how the Sequential Access via Metadata (SAM) data-handling system in combination with the dCache/Enstore-based data archive facilities has been used to develop radically different models for analyzing the HEP data. We also show how the simulations may be used to assess the impact of design choices in archive facilities.

  11. Development of a SaaS application probe to the physical properties of the Earth's interior: An attempt at moving HPC to the cloud

    NASA Astrophysics Data System (ADS)

    Huang, Qian

    2014-09-01

    Scientific computing often requires the availability of a massive number of computers for performing large-scale simulations, and computing in mineral physics is no exception. In order to investigate physical properties of minerals at extreme conditions in computational mineral physics, parallel computing technology is used to speed up performance by utilizing multiple computer resources to process a computational task simultaneously, thereby greatly reducing computation time. Traditionally, parallel computing has been addressed by using High Performance Computing (HPC) solutions and installed facilities such as clusters and supercomputers. Today, there is tremendous growth in cloud computing. Infrastructure as a Service (IaaS), the on-demand and pay-as-you-go model, creates a flexible and cost-effective means to access computing resources. In this paper, a feasibility report of HPC on a cloud infrastructure is presented. It is found that current cloud services at the IaaS layer still need to improve performance to be useful to research projects. On the other hand, Software as a Service (SaaS), another type of cloud computing, is introduced into an HPC system for computing in mineral physics, and an application is developed. An overall description of this SaaS application is presented. This contribution can promote cloud application development in computational mineral physics and cross-disciplinary studies.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laros, James H.; Grant, Ryan; Levenhagen, Michael J.

    Measuring and controlling the power and energy consumption of high performance computing systems by various components in the software stack is an active research area. Implementations in lower level software layers are beginning to emerge in some production systems, which is very welcome. A portable interface to measurement and control features would significantly facilitate participation by all levels of the software stack. We present a proposal for a standard power Application Programming Interface (API) that endeavors to cover the entire software space, from generic hardware interfaces to the input from the computer facility manager.
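
    Purely as a hypothetical illustration of what a portable measure-and-control interface might look like (these class and method names are invented for the sketch and are not the proposal's actual API):

```python
# Hypothetical illustration of a portable power measurement/control
# interface of the kind the report proposes. The class and method names
# are invented for this sketch; they are not the specification's API.
from abc import ABC, abstractmethod

class PowerObject(ABC):
    """One measurable/controllable component (node, socket, facility...)."""
    @abstractmethod
    def read_power_watts(self) -> float: ...
    @abstractmethod
    def set_power_cap_watts(self, cap: float) -> None: ...

class MockNode(PowerObject):
    def __init__(self, name):
        self.name, self.cap = name, None
    def read_power_watts(self):
        return 285.0                     # stand-in for a hardware counter read
    def set_power_cap_watts(self, cap):
        self.cap = cap                   # stand-in for a vendor control knob

# The same calls work whether the caller is a scheduler, a runtime, or a
# facility manager -- that uniformity is the point of a standard API.
node = MockNode("node042")
node.set_power_cap_watts(250.0)
print(node.name, node.read_power_watts(), "W, cap:", node.cap)
```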

  13. New project to support scientific collaboration electronically

    NASA Astrophysics Data System (ADS)

    Clauer, C. R.; Rasmussen, C. E.; Niciejewski, R. J.; Killeen, T. L.; Kelly, J. D.; Zambre, Y.; Rosenberg, T. J.; Stauning, P.; Friis-Christensen, E.; Mende, S. B.; Weymouth, T. E.; Prakash, A.; McDaniel, S. E.; Olson, G. M.; Finholt, T. A.; Atkins, D. E.

    A new multidisciplinary effort is linking research in the upper atmospheric and space, computer, and behavioral sciences to develop a prototype electronic environment for conducting team science worldwide. A real-world electronic collaboration testbed has been established to support scientific work centered around the experimental operations being conducted with instruments from the Sondrestrom Upper Atmospheric Research Facility in Kangerlussuaq, Greenland. Such group computing environments will become an important component of the National Information Infrastructure initiative, which is envisioned as the high-performance communications infrastructure to support national scientific research.

  14. Load management strategy for Particle-In-Cell simulations in high energy particle acceleration

    NASA Astrophysics Data System (ADS)

    Beck, A.; Frederiksen, J. T.; Dérouillat, J.

    2016-09-01

    In the wake of the intense effort made for the experimental CILEX project, numerical simulation campaigns have been carried out in order to finalize the design of the facility and to identify optimal laser and plasma parameters. These simulations bring, of course, important insight into the fundamental physics at play. As a by-product, they also characterize the quality of our theoretical and numerical models. In this paper, we compare the results given by different codes and point out algorithmic limitations both in terms of physical accuracy and computational performance. These limitations are illustrated in the context of electron laser wakefield acceleration (LWFA). The main limitation we identify in state-of-the-art Particle-In-Cell (PIC) codes is computational load imbalance. We propose an innovative algorithm to deal with this specific issue, as well as milestones towards a modern, accurate high-performance PIC code for high energy particle acceleration.
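
    A toy illustration of why load imbalance is so costly in a synchronous PIC step (invented numbers; the paper's algorithm is considerably more elaborate): the slowest rank sets the pace, so step time scales with the maximum per-rank particle load rather than the mean.

```python
# Toy load-imbalance diagnostic for a synchronous PIC step. With a barrier
# at every step, the slowest rank sets the pace, so the achievable speedup
# from perfect rebalancing is roughly max/mean. Numbers are invented to
# mimic an LWFA hotspot concentrated on one rank.
loads = [120_000, 95_000, 730_000, 110_000]   # particles per MPI rank

mean_load = sum(loads) / len(loads)
imbalance = max(loads) / mean_load            # 1.0 means perfectly balanced
print(f"imbalance factor (max/mean): {imbalance:.2f}")
print(f"potential speedup from rebalancing alone: ~{imbalance:.2f}x")
```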

  15. Towards a Scalable and Adaptive Application Support Platform for Large-Scale Distributed E-Sciences in High-Performance Network Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Chase Qishi; Zhu, Michelle Mengxia

    The advent of large-scale collaborative scientific applications has demonstrated the potential for broad scientific communities to pool globally distributed resources to produce unprecedented data acquisition, movement, and analysis. System resources including supercomputers, data repositories, computing facilities, network infrastructures, storage systems, and display devices have been increasingly deployed at national laboratories and academic institutes. These resources are typically shared by large communities of users over the Internet or dedicated networks and hence exhibit an inherent dynamic nature in their availability, accessibility, capacity, and stability. Scientific applications using either experimental facilities or computation-based simulations with various physical, chemical, climatic, and biological models feature diverse scientific workflows, as simple as linear pipelines or as complex as directed acyclic graphs, which must be executed and supported over wide-area networks with massively distributed resources. Application users oftentimes need to manually configure their computing tasks over networks in an ad hoc manner, significantly limiting the productivity of scientists and constraining the utilization of resources. The success of these large-scale distributed applications requires a highly adaptive and massively scalable workflow platform that provides automated and optimized computing and networking services. This project is to design and develop a generic Scientific Workflow Automation and Management Platform (SWAMP), which contains a web-based user interface specially tailored for a target application, a set of user libraries, and several easy-to-use computing and networking toolkits for application scientists to conveniently assemble, execute, monitor, and control complex computing workflows in heterogeneous high-performance network environments. SWAMP will enable the automation and management of the entire process of scientific workflows with the convenience of a few mouse clicks while hiding the implementation and technical details from end users. Particularly, we will consider two types of applications with distinct performance requirements: data-centric and service-centric applications. For data-centric applications, the main workflow task involves large-volume data generation, catalog, storage, and movement, typically from supercomputers or experimental facilities to a team of geographically distributed users; for service-centric applications, the main focus of the workflow is on data archiving, preprocessing, filtering, synthesis, visualization, and other application-specific analysis. We will conduct a comprehensive comparison of existing workflow systems and choose the best suited one with open-source code, a flexible system structure, and a large user base as the starting point for our development. Based on the chosen system, we will develop and integrate new components including a black-box design of computing modules, performance monitoring and prediction, and workflow optimization and reconfiguration, which are missing from existing workflow systems. A modular design separating specification, execution, and monitoring aspects will be adopted to establish a common generic infrastructure suited for a wide spectrum of science applications. We will further design and develop efficient workflow mapping and scheduling algorithms to optimize workflow performance in terms of minimum end-to-end delay, maximum frame rate, and highest reliability. We will develop and demonstrate the SWAMP system in a local environment, the grid network, and the 100 Gbps Advanced Network Initiative (ANI) testbed. The demonstration will target scientific applications in climate modeling and high energy physics, and the functions to be demonstrated include workflow deployment, execution, steering, and reconfiguration. Throughout the project period, we will work closely with the science communities in the fields of climate modeling and high energy physics, including the Spallation Neutron Source (SNS) and Large Hadron Collider (LHC) projects, to mature the system for production use.

  16. NREL's Building-Integrated Supercomputer Provides Heating and Efficient Computing (Fact Sheet)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    2014-09-01

    NREL's Energy Systems Integration Facility (ESIF) is meant to investigate new ways to integrate energy sources so they work together efficiently, and one of the key tools in that investigation, a new supercomputer, is itself a prime example of energy systems integration. NREL teamed with Hewlett-Packard (HP) and Intel to develop the innovative warm-water, liquid-cooled Peregrine supercomputer, which not only operates efficiently but also serves as the primary source of building heat for ESIF offices and laboratories. This innovative high-performance computer (HPC) can perform more than a quadrillion calculations per second as part of the world's most energy-efficient HPC data center.

  17. FermiLib v0.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MCCLEAN, JARROD; HANER, THOMAS; STEIGER, DAMIAN

    FermiLib is an open source software package designed to facilitate the development and testing of algorithms for simulations of fermionic systems on quantum computers. Fermionic simulations represent an important application of early quantum devices, with many potential high-value targets such as quantum chemistry for the development of new catalysts. This software strives to provide a link between the required domain expertise in specific fermionic applications and quantum computing, to enable more users to directly interface with, and develop for, these applications. It is an extensible Python library designed to interface with the high performance quantum simulator ProjectQ, as well as application-specific software such as PSI4 from the domain of quantum chemistry. Such software is key to enabling effective user facilities in quantum computation research.

  18. Computer program determines performance efficiency of remote measuring systems

    NASA Technical Reports Server (NTRS)

    Merewether, E. K.

    1966-01-01

    Computer programs control and evaluate instrumentation system performance for numerous rocket engine test facilities and prescribe calibration and maintenance techniques to maintain the systems within process specifications. Similar programs can be written for other test equipment in an industry such as the petrochemical industry.

  19. Job Management Requirements for NAS Parallel Systems and Clusters

    NASA Technical Reports Server (NTRS)

    Saphir, William; Tanner, Leigh Ann; Traversat, Bernard

    1995-01-01

    A job management system is a critical component of a production supercomputing environment, permitting oversubscribed resources to be shared fairly and efficiently. Job management systems that were originally designed for traditional vector supercomputers are not appropriate for the distributed-memory parallel supercomputers that are becoming increasingly important in the high performance computing industry. Newer job management systems offer new functionality but do not solve fundamental problems. We address some of the main issues in resource allocation and job scheduling we have encountered on two parallel computers - a 160-node IBM SP2 and a cluster of 20 high performance workstations located at the Numerical Aerodynamic Simulation facility. We describe the requirements for resource allocation and job management that are necessary to provide a production supercomputing environment on these machines, prioritizing according to difficulty and importance, and advocating a return to fundamental issues.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The Computing and Communications (C) Division is responsible for the Laboratory's Integrated Computing Network (ICN) as well as Laboratory-wide communications. Our computing network, used by 8,000 people distributed throughout the nation, constitutes one of the most powerful scientific computing facilities in the world. In addition to the stable production environment of the ICN, we have taken a leadership role in high-performance computing and have established the Advanced Computing Laboratory (ACL), the site of research on experimental, massively parallel computers; high-speed communication networks; distributed computing; and a broad variety of advanced applications. The computational resources available in the ACL are of the type needed to solve problems critical to national needs, the so-called "Grand Challenge" problems. The purpose of this publication is to inform our clients of our strategic and operating plans in these important areas. We review major accomplishments since late 1990 and describe our strategic planning goals and specific projects that will guide our operations over the next few years. Our mission statement, planning considerations, and management policies and practices are also included.

  2. Genetic algorithm based task reordering to improve the performance of batch scheduled massively parallel scientific applications

    DOE PAGES

    Sankaran, Ramanan; Angel, Jordan; Brown, W. Michael

    2015-04-08

    The growth in size of networked high performance computers along with novel accelerator-based node architectures has further emphasized the importance of communication efficiency in high performance computing. The world's largest high performance computers are usually operated as shared user facilities due to the costs of acquisition and operation. Applications are scheduled for execution in a shared environment and are placed on nodes that are not necessarily contiguous on the interconnect. Furthermore, the placement of tasks on the nodes allocated by the scheduler is sub-optimal, leading to performance loss and variability. Here, we investigate the impact of task placement on the performance of two massively parallel application codes on the Titan supercomputer, a turbulent combustion flow solver (S3D) and a molecular dynamics code (LAMMPS). Benchmark studies show a significant deviation from ideal weak scaling and variability in performance. The inter-task communication distance was determined to be one of the significant contributors to the performance degradation and variability. A genetic algorithm-based parallel optimization technique was used to optimize the task ordering. This technique provides an improved placement of the tasks on the nodes, taking into account the application's communication topology and the system interconnect topology. As a result, application benchmarks after task reordering through genetic algorithm show a significant improvement in performance and reduction in variability, therefore enabling the applications to achieve better time to solution and scalability on Titan during production.
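
    As a stripped-down illustration of the approach (not the authors' code), the sketch below evolves a task-to-node permutation by elitist selection and swap mutation, scoring candidates by total communication distance on an assumed linear node layout; the actual study used Titan's interconnect topology and the applications' measured communication graphs.

        import random

        def comm_cost(order, edges):
            """Total hop distance between communicating tasks, given a placement."""
            pos = {t: i for i, t in enumerate(order)}
            return sum(abs(pos[a] - pos[b]) for a, b in edges)

        def reorder_tasks(n_tasks, edges, pop=40, gens=300, seed=0):
            rng = random.Random(seed)
            population = [rng.sample(range(n_tasks), n_tasks) for _ in range(pop)]
            for _ in range(gens):
                population.sort(key=lambda o: comm_cost(o, edges))
                survivors = population[: pop // 2]            # elitist selection
                children = []
                for parent in survivors:
                    child = parent[:]
                    i, j = rng.randrange(n_tasks), rng.randrange(n_tasks)
                    child[i], child[j] = child[j], child[i]   # swap mutation
                    children.append(child)
                population = survivors + children
            return min(population, key=lambda o: comm_cost(o, edges))

        # Hypothetical nearest-neighbor stencil: task i talks to task i + 1
        edges = [(i, i + 1) for i in range(15)]
        best = reorder_tasks(16, edges)
        print(comm_cost(best, edges))   # approaches 15, the optimum

    A full genetic algorithm would add crossover between permutations; the fitness function is where the real system and application topologies enter.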

  3. Study of Solid State Drives performance in PROOF distributed analysis system

    NASA Astrophysics Data System (ADS)

    Panitkin, S. Y.; Ernst, M.; Petkus, R.; Rind, O.; Wenaus, T.

    2010-04-01

    Solid State Drives (SSDs) are a promising storage technology for High Energy Physics parallel analysis farms. Their combination of low random access time and relatively high read speed is very well suited to situations where multiple jobs concurrently access data located on the same drive. SSDs also have lower energy consumption and higher vibration tolerance than Hard Disk Drives (HDDs), which makes them an attractive choice in many applications ranging from personal laptops to large analysis farms. The Parallel ROOT Facility (PROOF) is a distributed analysis system that allows users to exploit the inherent event-level parallelism of high energy physics data. PROOF is especially efficient together with distributed local storage systems like Xrootd, when data are distributed over computing nodes. In such an architecture the local disk subsystem I/O performance becomes a critical factor, especially when computing nodes use multi-core CPUs. We will discuss our experience with SSDs in the PROOF environment. We will compare the performance of HDDs with SSDs in I/O-intensive analysis scenarios. In particular we will discuss PROOF system performance scaling with the number of simultaneously running analysis jobs.
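
    The scaling question posed at the end of the abstract, how read performance behaves as concurrent analysis jobs multiply, can be probed with a toy measurement such as the Python sketch below. The file name and sizes are arbitrary, and a serious benchmark would bypass the operating system's page cache and replay realistic I/O traces.

        import os, random, time
        from concurrent.futures import ThreadPoolExecutor

        PATH, BLOCK, NBLOCKS = "scratch.bin", 4096, 2048     # hypothetical test file

        with open(PATH, "wb") as f:                          # ~8 MiB of scratch data
            f.write(os.urandom(BLOCK * NBLOCKS))

        def reader(nreads=500, seed=0):
            """One 'analysis job': random 4 KiB reads across the file."""
            rng = random.Random(seed)
            with open(PATH, "rb") as f:
                for _ in range(nreads):
                    f.seek(rng.randrange(NBLOCKS) * BLOCK)
                    f.read(BLOCK)

        for jobs in (1, 2, 4, 8):                            # concurrency sweep
            t0 = time.perf_counter()
            with ThreadPoolExecutor(jobs) as ex:
                for j in range(jobs):
                    ex.submit(reader, seed=j)
            print(jobs, "jobs:", round(time.perf_counter() - t0, 3), "s")

    On an HDD, elapsed time tends to grow quickly with concurrency because of seek contention; an SSD's near-constant random access time is precisely what flattens that curve.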

  4. 1999 NCCS Highlights

    NASA Technical Reports Server (NTRS)

    Bennett, Jerome (Technical Monitor)

    2002-01-01

    The NASA Center for Computational Sciences (NCCS) is a high-performance scientific computing facility operated, maintained and managed by the Earth and Space Data Computing Division (ESDCD) of NASA Goddard Space Flight Center's (GSFC) Earth Sciences Directorate. The mission of the NCCS is to advance leading-edge science by providing the best people, computers, and data storage systems to NASA's Earth and space sciences programs and those of other U.S. Government agencies, universities, and private institutions. Among the many computationally demanding Earth science research efforts supported by the NCCS in Fiscal Year 1999 (FY99) are the NASA Seasonal-to-Interannual Prediction Project, the NASA Search and Rescue Mission, Earth gravitational model development efforts, the National Weather Service's North American Observing System program, Data Assimilation Office studies, a NASA-sponsored project at the Center for Ocean-Land-Atmosphere Studies, a NASA-sponsored microgravity project conducted by researchers at the City University of New York and the University of Pennsylvania, the completion of a satellite-derived global climate data set, simulations of a new geodynamo model, and studies of Earth's torque. This document presents highlights of these research efforts and an overview of the NCCS, its facilities, and its people.

  5. Experimental and Computational Investigation of a Translating-Throat Single-Expansion-Ramp Nozzle

    NASA Technical Reports Server (NTRS)

    Deere, Karen A.; Asbury, Scott C.

    1999-01-01

    An experimental and computational study was conducted on a high-speed, single-expansion-ramp nozzle (SERN) concept designed for efficient off-design performance. The translating-throat SERN concept adjusts the axial location of the throat to provide a variable expansion ratio and allow a more optimum jet exhaust expansion at various flight conditions in an effort to maximize nozzle performance. Three design points (throat locations) were investigated to simulate the operation of this concept at subsonic-transonic, low supersonic, and high supersonic flight conditions. The experimental study was conducted in the jet exit test facility at the Langley Research Center. Internal nozzle performance was obtained at nozzle pressure ratios (NPRs) up to 13 for six nozzles with design nozzle pressure ratios near 9, 42, and 102. Two expansion-ramp surfaces, one concave and one convex, were tested for each design point. Paint-oil flow and focusing schlieren flow visualization techniques were utilized to acquire additional flow data at selected NPRs. The Navier-Stokes code PAB3D was used with a two-equation k-ε turbulence model for the computational study. Nozzle performance characteristics were predicted at nozzle pressure ratios of 5, 9, and 13 for the concave-ramp, low Mach number nozzle and at 10, 13, and 102 for the concave-ramp, high Mach number nozzle.

  6. ERA 1103 UNIVAC 2 Calculating Machine

    NASA Image and Video Library

    1955-09-21

    The new 10-by 10-Foot Supersonic Wind Tunnel at the Lewis Flight Propulsion Laboratory included high tech data acquisition and analysis systems. The reliable gathering of pressure, speed, temperature, and other data from test runs in the facilities was critical to the research process. Throughout the 1940s and early 1950s female employees, known as computers, recorded all test data and performed initial calculations by hand. The introduction of punch card computers in the late 1940s gradually reduced the number of hands-on calculations. In the mid-1950s new computational machines were installed in the office building of the 10-by 10-Foot tunnel. The new systems included this UNIVAC 1103 vacuum tube computer—the lab’s first centralized computer system. The programming was done on paper tape and fed into the machine. The 10-by 10 computer center also included the Lewis-designed Computer Automated Digital Encoder (CADDE) and Digital Automated Multiple Pressure Recorder (DAMPR) systems which converted test data to binary-coded decimal numbers and recorded test pressures automatically, respectively. The systems primarily served the 10-by 10, but were also applied to the other large facilities. Engineering Research Associates (ERA) developed the initial UNIVAC computer for the Navy in the late 1940s. In 1952 the company designed a commercial version, the UNIVAC 1103. The 1103 was the first computer designed by Seymour Cray and the first commercially successful computer.

  7. Enabling Earth Science: The Facilities and People of the NCCS

    NASA Technical Reports Server (NTRS)

    2002-01-01

    The NCCS's mass data storage system allows scientists to store and manage the vast amounts of data generated by these computations, and its high-speed network connections allow the data to be accessed quickly from the NCCS archives. Some NCCS users perform studies that are directly related to their ability to run computationally expensive and data-intensive simulations. Because the number and type of questions scientists research often are limited by computing power, the NCCS continually pursues the latest technologies in computing, mass storage, and networking. Just as important as the processors, tapes, and routers of the NCCS are the personnel who administer this hardware, create and manage accounts, maintain security, and assist the scientists, often working one on one with them.

  8. HPC Access Using KVM over IP

    DTIC Science & Technology

    2007-06-08

    Lightwave VDE/200 KVM-over-Fiber (Keyboard, Video and Mouse) devices installed throughout the TARDEC campus. Implementation of this system required...development effort through the pursuit of an Army-funded Phase-II Small Business Innovative Research (SBIR) effort with IP Video Systems (formerly known as...visualization capabilities of a DoD High-Performance Computing facility, many advanced features are necessary. TARDEC-HPC's SBIR with IP Video Systems

  9. The HEPCloud Facility: elastic computing for High Energy Physics – The NOvA Use Case

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fuess, S.; Garzoglio, G.; Holzman, B.

    The need for computing in the HEP community follows cycles of peaks and valleys mainly driven by conference dates, accelerator shutdowns, holiday schedules, and other factors. Because of this, the classical method of provisioning these resources at providing facilities has drawbacks such as potential overprovisioning. As the appetite for computing increases, however, so does the need to maximize cost efficiency by developing a model for dynamically provisioning resources only when needed. To address this issue, the HEPCloud project was launched by the Fermilab Scientific Computing Division in June 2015. Its goal is to develop a facility that provides a common interface to a variety of resources, including local clusters, grids, high performance computers, and community and commercial Clouds. Initially targeted experiments include CMS and NOvA, as well as other Fermilab stakeholders. In its first phase, the project has demonstrated the use of the "elastic" provisioning model offered by commercial clouds, such as Amazon Web Services. In this model, resources are rented and provisioned automatically over the Internet upon request. In January 2016, the project demonstrated the ability to increase the total amount of global CMS resources by 58,000 cores from 150,000 cores (a 25 percent increase) in preparation for the Rencontres de Moriond. In March 2016, the NOvA experiment also demonstrated resource burst capabilities with an additional 7,300 cores, achieving a scale almost four times as large as the local allocated resources and utilizing the local AWS S3 storage to optimize data handling operations and costs. NOvA was using the same familiar services used for local computations, such as data handling and job submission, in preparation for the Neutrino 2016 conference. In both cases, the cost was contained by the use of the Amazon Spot Instance Market and the Decision Engine, a HEPCloud component that aims at minimizing cost and job interruption. This paper describes the Fermilab HEPCloud Facility and the challenges overcome for the CMS and NOvA communities.
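
    The Decision Engine's actual policy is not given in this record. Purely as a hedged illustration of cost-aware bursting, a toy decision rule might look like the following; every name and threshold here is hypothetical.

        def provision(pending_jobs, local_free, spot_price, max_price, interruption_risk):
            """Toy rule in the spirit of a cost-minimizing decision engine:
            fill local slots first, then burst to spot capacity while it is
            cheap and the estimated interruption risk is tolerable."""
            use_local = min(pending_jobs, local_free)
            burst = 0
            if (pending_jobs > use_local and spot_price <= max_price
                    and interruption_risk < 0.05):
                burst = pending_jobs - use_local
            return use_local, burst

        print(provision(pending_jobs=10000, local_free=2500,
                        spot_price=0.12, max_price=0.15, interruption_risk=0.02))

    A production engine also has to weigh factors such as provisioning latency and data-handling costs; the abstract notes only that HEPCloud's Decision Engine aims to minimize cost and job interruption.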

  10. Low-power logic computing realized in a single electric-double-layer MoS2 transistor gated with polymer electrolyte

    NASA Astrophysics Data System (ADS)

    Guo, Junjie; Xie, Dingdong; Yang, Bingchu; Jiang, Jie

    2018-06-01

    Due to its mechanical flexibility, large bandgap, and carrier mobility, atomically thin molybdenum disulphide (MoS2) has attracted widespread attention. However, it still lacks a facile route for fabricating low-power, high-performance logic gates/circuits, a prerequisite for real applications. Herein, we report a facile and environment-friendly method to establish low-power logic functions in a single MoS2 field-effect transistor (FET) configuration gated with a polymer electrolyte. Such a low-power, high-performance MoS2 FET can be implemented by using water-soluble polyvinyl alcohol (PVA) polymer as the proton-conducting electric-double-layer (EDL) dielectric layer. It exhibited an ultra-low operating voltage (1.5 V) and good performance, with a high current on/off ratio (Ion/off) of 1 × 10^5, a large electron mobility (μ) of 47.5 cm^2/(V·s), and a small subthreshold swing (S) of 0.26 V/dec. An inverter can be realized by using such a single MoS2 EDL FET, with a gain of ∼4 at an operation voltage of only ∼1 V. Most importantly, neuronal AND logic computing can also be demonstrated using such a double-lateral-gate single MoS2 EDL transistor. These results represent an effective step toward future applications of 2D MoS2 FETs in integrated electronics and low-energy, environment-friendly green electronics.

  11. Lightweight scheduling of elastic analysis containers in a competitive cloud environment: a Docked Analysis Facility for ALICE

    NASA Astrophysics Data System (ADS)

    Berzano, D.; Blomer, J.; Buncic, P.; Charalampidis, I.; Ganis, G.; Meusel, R.

    2015-12-01

    In recent years, several Grid computing centres have chosen virtualization as a better way to manage diverse use cases with self-consistent environments on the same bare infrastructure. The maturity of control interfaces (such as OpenNebula and OpenStack) opened the possibility to easily change the amount of resources assigned to each use case by simply turning virtual machines on and off. Some of those private clouds use, in production, copies of the Virtual Analysis Facility, a fully virtualized and self-contained batch analysis cluster capable of expanding and shrinking automatically upon need; however, resource starvation occurs frequently, as expansion has to compete with other virtual machines running long-living batch jobs. Such batch nodes cannot relinquish their resources in a timely fashion: the more jobs they run, the longer it takes to drain them and shut them off, and making one-job virtual machines introduces a non-negligible virtualization overhead. By improving several components of the Virtual Analysis Facility we have realized an experimental "Docked" Analysis Facility for ALICE, which leverages containers instead of virtual machines to provide performance and security isolation. We will present the techniques we have used to address practical problems, such as software provisioning through CVMFS, as well as our considerations on the maturity of containers for High Performance Computing. As the abstraction layer is thinner, our Docked Analysis Facilities may feature more fine-grained sizing, down to single-job node containers: we will show how this approach will positively impact automatic cluster resizing by deploying lightweight pilot containers instead of replacing central queue polls.

  12. The change in critical technologies for computational physics

    NASA Technical Reports Server (NTRS)

    Watson, Val

    1990-01-01

    It is noted that the types of technology required for computational physics are changing as the field matures. Emphasis has shifted from computer technology to algorithm technology and, finally, to visual analysis technology as areas of critical research for this field. High-performance graphical workstations tied to a supercomputer with high-speed communications, along with the development of specially tailored visualization software, have enabled analysis of highly complex fluid-dynamics simulations. Particular reference is made here to the development of visual analysis tools at NASA's Numerical Aerodynamic Simulation Facility. The next technology this field requires is one that eliminates visual clutter by extracting the key features of physics simulations in order to create displays that clearly portray those features. Research on tuning visual displays to human cognitive abilities is proposed. The immediate transfer of technology to all levels of computers, specifically the inclusion of visualization primitives in basic software developments for all workstations and PCs, is recommended.

  14. Automatic Between-Pulse Analysis of DIII-D Experimental Data Performed Remotely on a Supercomputer at Argonne Leadership Computing Facility

    DOE PAGES

    Kostuk, M.; Uram, T. D.; Evans, T.; ...

    2018-02-01

    For the first time, an automatically triggered, between-pulse fusion science analysis code was run on-demand at a remotely located supercomputer at Argonne Leadership Computing Facility (ALCF, Lemont, IL) in support of in-process experiments being performed at DIII-D (San Diego, CA). This represents a new paradigm for combining geographically distant experimental and high performance computing (HPC) facilities to provide enhanced data analysis that is quickly available to researchers. Enhanced analysis improves the understanding of the current pulse, translating into more efficient use of experimental resources and higher-quality science. The analysis code used here, called SURFMN, calculates the magnetic structure of the plasma using Fourier transforms. Increasing the number of Fourier components provides a more accurate determination of the stochastic boundary layer near the plasma edge by better resolving magnetic islands, but requires 26 minutes to complete using local DIII-D resources, putting it well outside the useful time range for between-pulse analysis. These islands relate to confinement and edge localized mode (ELM) suppression, and may be controlled by adjusting coil currents for the next pulse. Argonne has ensured on-demand execution of SURFMN by providing a reserved queue, a specialized service that launches the code after receiving an automatic trigger, and network access from the worker nodes for data transfer. Runs are executed on 252 cores of ALCF's Cooley cluster and the data is available locally at DIII-D within three minutes of triggering. The original SURFMN design limits additional improvements with more cores; however, our work shows a path forward where codes that benefit from thousands of processors can run between pulses.
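
    SURFMN's source is not reproduced in this record, but its kernel operation, extracting Fourier harmonics of the magnetic perturbation over the poloidal and toroidal angles of a flux surface, can be sketched with a 2-D FFT. The field below is synthetic, and only the decomposition step is shown.

        import numpy as np

        # Synthetic radial-field perturbation on one flux surface: a single
        # island-driving (m=3, n=1) harmonic plus measurement-like noise.
        ntheta, nphi = 64, 32
        theta = np.linspace(0, 2 * np.pi, ntheta, endpoint=False)
        phi = np.linspace(0, 2 * np.pi, nphi, endpoint=False)
        TH, PH = np.meshgrid(theta, phi, indexing="ij")
        rng = np.random.default_rng(0)
        b_r = 1e-4 * np.cos(3 * TH - PH) + 1e-6 * rng.standard_normal(TH.shape)

        # 2-D FFT: a cosine of amplitude A in cos(m*theta - n*phi) contributes
        # A/2 to the normalized coefficient at indices (m, -n mod nphi).
        coeff = np.fft.fft2(b_r) / (ntheta * nphi)
        m, n = 3, 1
        amp = 2 * abs(coeff[m, -n % nphi])
        print(f"|b_{m},{n}| ~ {amp:.2e}")   # recovers ~1e-4

    The abstract's point about resolution follows directly: resolving higher (m, n) harmonics, and hence narrower islands, requires finer angular grids and more transform work, which is what pushed SURFMN past the between-pulse time budget on local resources.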

  15. Feasibility of MHD submarine propulsion. Phase II, MHD propulsion: Testing in a two Tesla test facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doss, E.D.; Sikes, W.C.

    1992-09-01

    This report describes the work performed during Phase 1 and Phase 2 of the collaborative research program established between Argonne National Laboratory (ANL) and Newport News Shipbuilding and Dry Dock Company (NNS). Phase 1 of the program focused on the development of computer models for Magnetohydrodynamic (MHD) propulsion. Phase 2 focused on the experimental validation of the thruster performance models and the identification, through testing, of any phenomena which may impact the attractiveness of this propulsion system for shipboard applications. The report discusses in detail the work performed in Phase 2 of the program. In Phase 2, a two-Tesla test facility was designed, built, and operated. The facility test loop, its components, and their design are presented. The test matrix and its rationale are discussed. Representative experimental results of the test program are presented and compared to computer model predictions. In general, the results of the tests and their comparison with the predictions indicate that the phenomena affecting the performance of MHD seawater thrusters are well understood and can be accurately predicted with the developed thruster computer models.

  16. A Bioinformatics Facility for NASA

    NASA Technical Reports Server (NTRS)

    Schweighofer, Karl; Pohorille, Andrew

    2006-01-01

    Building on an existing prototype, we have fielded a facility with bioinformatics technologies that will help NASA meet its unique requirements for biological research. This facility consists of a cluster of computers capable of performing computationally intensive tasks, software tools, databases, and knowledge management systems. Novel computational technologies for analyzing and integrating new biological data and already existing knowledge have been developed. With continued development and support, the facility will fulfill NASA's strategic bioinformatics needs in astrobiology and space exploration. As a demonstration of these capabilities, we will present a detailed analysis of how spaceflight factors impact gene expression in the liver and kidney for mice flown aboard shuttle flight STS-108. We have found that many genes involved in signal transduction, cell cycle, and development respond to changes in microgravity, but that most metabolic pathways appear unchanged.

  17. Hybrid massively parallel fast sweeping method for static Hamilton-Jacobi equations

    NASA Astrophysics Data System (ADS)

    Detrixhe, Miles; Gibou, Frédéric

    2016-10-01

    The fast sweeping method is a popular algorithm for solving a variety of static Hamilton-Jacobi equations. Fast sweeping algorithms for parallel computing have been developed, but are severely limited. In this work, we present a multilevel, hybrid parallel algorithm that combines the desirable traits of two distinct parallel methods. The fine and coarse grained components of the algorithm take advantage of heterogeneous computer architecture common in high performance computing facilities. We present the algorithm and demonstrate its effectiveness on a set of example problems including optimal control, dynamic games, and seismic wave propagation. We give results for convergence, parallel scaling, and show state-of-the-art speedup values for the fast sweeping method.
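
    For readers unfamiliar with the method, the serial algorithm is compact: Gauss-Seidel updates of an upwind discretization, applied repeatedly in the four grid sweep orderings. The Python sketch below solves the simplest static Hamilton-Jacobi problem, the eikonal equation |grad u| = 1 with a point source; it is an illustrative serial reference, not the paper's hybrid code.

        import numpy as np

        def fast_sweep_eikonal(source, h=1.0, rounds=4, big=1e10):
            """Solve |grad u| = 1 with u = 0 where source is True, by
            Gauss-Seidel sweeps over the four orderings of a 2-D grid."""
            n, m = source.shape
            u = np.where(source, 0.0, big)
            orderings = [(range(n), range(m)),
                         (range(n - 1, -1, -1), range(m)),
                         (range(n), range(m - 1, -1, -1)),
                         (range(n - 1, -1, -1), range(m - 1, -1, -1))]
            for _ in range(rounds):
                for rows, cols in orderings:
                    for i in rows:
                        for j in cols:
                            if source[i, j]:
                                continue
                            a = min(u[i - 1, j] if i > 0 else big,
                                    u[i + 1, j] if i < n - 1 else big)
                            b = min(u[i, j - 1] if j > 0 else big,
                                    u[i, j + 1] if j < m - 1 else big)
                            if abs(a - b) >= h:     # one-sided update
                                cand = min(a, b) + h
                            else:                   # two-sided (quadratic) update
                                cand = (a + b + np.sqrt(2 * h * h - (a - b) ** 2)) / 2
                            u[i, j] = min(u[i, j], cand)
            return u

        src = np.zeros((41, 41), dtype=bool)
        src[20, 20] = True                  # point source at the center
        u = fast_sweep_eikonal(src)
        print(u[20, 40])                    # ~20: distance to the right edge

    The sequential data dependence inside each sweep is what makes parallelization hard, and it is the dependence that parallel fast sweeping schemes, including the hybrid one presented here, must work around.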

  18. Advanced Simulation & Computing FY15 Implementation Plan Volume 2, Rev. 0.5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCoy, Michel; Archer, Bill; Matzen, M. Keith

    2014-09-16

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources that support annual stockpile assessment and certification, study advanced nuclear weapons design and manufacturing processes, analyze accident scenarios and weapons aging, and provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balance of resources, including technical staff, hardware, simulation software, and computer science solutions. As the program approaches the end of its second decade, ASC is intently focused on increasing predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (sufficient resolution, dimensionality, and scientific details), quantify critical margins and uncertainties, and resolve increasingly difficult analyses needed for the SSP. Where possible, the program also enables the use of high-performance simulation and computing tools to address broader national security needs, such as foreign nuclear weapon assessments and counternuclear terrorism.

  19. Computational challenges in atomic, molecular and optical physics.

    PubMed

    Taylor, Kenneth T

    2002-06-15

    Six challenges are discussed. These are the laser-driven helium atom; the laser-driven hydrogen molecule and hydrogen molecular ion; electron scattering (with ionization) from one-electron atoms; the vibrational and rotational structure of molecules such as H3+ and water at their dissociation limits; laser-heated clusters; and quantum degeneracy and Bose-Einstein condensation. The first four concern fundamental few-body systems where use of high-performance computing (HPC) is currently making possible accurate modelling from first principles. This leads to reliable predictions and support for laboratory experiment as well as true understanding of the dynamics. Important aspects of these challenges addressable only via a terascale facility are set out. Such a facility makes the last two challenges in the above list meaningfully accessible for the first time, and the scientific interest together with the prospective role for HPC in these is emphasized.

  20. Water-Based Coating Simplifies Circuit Board Manufacturing

    NASA Technical Reports Server (NTRS)

    2008-01-01

    The Structures and Materials Division at Glenn Research Center is devoted to developing advanced, high-temperature materials and processes for future aerospace propulsion and power generation systems. The Polymers Branch falls under this division, and it is involved in the development of high-performance materials, including polymers for high-temperature polymer matrix composites; nanocomposites for both high- and low-temperature applications; durable aerogels; purification and functionalization of carbon nanotubes and their use in composites; computational modeling of materials and biological systems and processes; and polymer-derived molecular sensors. Essentially, this branch creates high-performance materials to reduce the weight and boost the performance of components for space missions and aircraft engines. Under the leadership of chemical engineer Dr. Michael Meador, the Polymers Branch boasts world-class laboratories, composite manufacturing facilities, testing stations, and some of the best scientists in the field.

  1. Intermediate Palomar Transient Factory: Realtime Image Subtraction Pipeline

    DOE PAGES

    Cao, Yi; Nugent, Peter E.; Kasliwal, Mansi M.

    2016-09-28

    A fast-turnaround pipeline for realtime data reduction plays an essential role in discovering and permitting follow-up observations of young supernovae and fast-evolving transients in modern time-domain surveys. In this paper, we present the realtime image subtraction pipeline in the intermediate Palomar Transient Factory. By using high-performance computing, efficient databases, and machine-learning algorithms, this pipeline manages to reliably deliver transient candidates within 10 minutes of images being taken. Our experience in using high-performance computing resources to process big data in astronomy serves as a trailblazer for dealing with data from large-scale time-domain facilities in the near future.
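
    As a minimal stand-in for the subtraction-and-detection step (the real pipeline also performs astrometric alignment, point-spread-function matching, and machine-learning vetting of candidates), consider this Python sketch with synthetic frames:

        import numpy as np

        def detect_transients(ref, new, nsigma=5.0):
            """Toy difference imaging: scale the new frame to the reference,
            subtract, and flag pixels deviating by more than nsigma."""
            scale = np.median(ref) / np.median(new)          # crude flux matching
            diff = new * scale - ref
            resid = diff - np.median(diff)
            sigma = 1.4826 * np.median(np.abs(resid))        # robust (MAD) width
            return np.argwhere(np.abs(resid) > nsigma * sigma)

        rng = np.random.default_rng(1)
        ref = 100 + rng.normal(0, 3, (64, 64))               # reference frame
        new = ref + rng.normal(0, 3, (64, 64))               # new epoch
        new[40, 17] += 60                                    # injected transient
        print(detect_transients(ref, new))                   # -> [[40 17]]

    Keeping the whole chain inside a 10-minute budget at survey data rates is what required the HPC resources, databases, and machine-learning vetting described in the abstract.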

  3. Numerical simulation of long-duration blast wave evolution in confined facilities

    NASA Astrophysics Data System (ADS)

    Togashi, F.; Baum, J. D.; Mestreau, E.; Löhner, R.; Sunshine, D.

    2010-10-01

    The objective of this research effort was to investigate the quasi-steady flow field produced by explosives in confined facilities. In this effort we modeled tests in which a high explosive (HE) cylindrical charge was hung in the center of a room and detonated. The HEs used for the tests were C-4 and AFX 757. While C-4 is just slightly under-oxidized and is typically modeled as an ideal explosive, AFX 757 includes a significant percentage of aluminum particles, so long-time afterburning and energy release must be considered. The Lawrence Livermore National Laboratory (LLNL)-produced thermo-chemical equilibrium algorithm, "Cheetah", was used to estimate the remaining burnable detonation products. From these remaining species, the afterburning energy was computed and added to the flow field. Computations of the detonation and afterburn of two HEs in the confined multi-room facility were performed. The results demonstrate excellent agreement with available experimental data in terms of blast wave time of arrival, peak shock amplitude, reverberation, and total impulse (and hence total energy release via either the detonation or afterburn processes).

  4. Safety Precautions and Operating Procedures in an (A)BSL-4 Laboratory: 4. Medical Imaging Procedures.

    PubMed

    Byrum, Russell; Keith, Lauren; Bartos, Christopher; St Claire, Marisa; Lackemeyer, Matthew G; Holbrook, Michael R; Janosko, Krisztina; Barr, Jason; Pusl, Daniela; Bollinger, Laura; Wada, Jiro; Coe, Linda; Hensley, Lisa E; Jahrling, Peter B; Kuhn, Jens H; Lentz, Margaret R

    2016-10-03

    Medical imaging using animal models for human diseases has been utilized for decades; however, until recently, medical imaging of diseases induced by high-consequence pathogens has not been possible. In 2014, the National Institutes of Health, National Institute of Allergy and Infectious Diseases, Integrated Research Facility at Fort Detrick opened an Animal Biosafety Level 4 (ABSL-4) facility to assess the clinical course and pathology of infectious diseases in experimentally infected animals. Multiple imaging modalities including computed tomography (CT), magnetic resonance imaging, positron emission tomography, and single photon emission computed tomography are available to researchers for these evaluations. The focus of this article is to describe the workflow for safely obtaining a CT image of a live guinea pig in an ABSL-4 facility. These procedures include animal handling, anesthesia, and preparing and monitoring the animal until recovery from sedation. We will also discuss preparing the imaging equipment, performing quality checks, communication methods from "hot side" (containing pathogens) to "cold side," and moving the animal from the holding room to the imaging suite.

  5. Icing simulation: A survey of computer models and experimental facilities

    NASA Technical Reports Server (NTRS)

    Potapczuk, M. G.; Reinmann, J. J.

    1991-01-01

    A survey of the current methods for simulation of the response of an aircraft or aircraft subsystem to an icing encounter is presented. The topics discussed include computer-code modeling of aircraft icing and performance degradation, an evaluation of experimental facility simulation capabilities, and ice protection system evaluation tests in simulated icing conditions. Current research focused on upgrading the simulation fidelity of both experimental and computational methods is discussed. The need for increased understanding of the physical processes governing ice accretion, ice shedding, and iced airfoil aerodynamics is examined.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mubarak, Misbah; Ross, Robert B.

    This technical report describes the experiments performed to validate the MPI performance measurements reported by the CODES dragonfly network simulation against measurements taken on the Theta Cray XC system at the Argonne Leadership Computing Facility (ALCF).

  8. Neutron Radiography and Computed Tomography at Oak Ridge National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raine, Dudley A. III; Hubbard, Camden R.; Whaley, Paul M.

    1997-12-31

    The capability to perform neutron radiography and computed tomography is being developed at Oak Ridge National Laboratory. The facility will be located at the High Flux Isotope Reactor (HFIR), which has the highest steady state neutron flux of any reactor in the world. The Monte Carlo N-Particle transport code (MCNP), versions 4A and 4B, has been used extensively in the design phase of the facility to predict and optimize the operating characteristics, and to ensure the safety of personnel working in and around the blockhouse. Neutrons are quite penetrating in most engineering materials and can be useful to detect internal flaws and features. Hydrogen atoms, such as in a hydrocarbon fuel, lubricant or a metal hydride, are relatively opaque to neutron transmission. Thus, neutron based tomography or radiography is ideal to image their presence. The source flux also provides unparalleled flexibility for future upgrades, including real time radiography where dynamic processes can be observed. A novel tomography detector has been designed using optical fibers and digital technology to provide a large dynamic range for reconstructions. Film radiography is also available for high resolution imaging applications. This paper summarizes the results of the design phase of this facility and the potential benefits to science and industry.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    Some of the major technical questions associated with the burial of radioactive high-level wastes in geologic formations are related to the thermal environments generated by the waste and the impact of this dissipated heat on the surrounding environment. The design of a high-level waste storage facility must be such that the temperature variations that occur do not adversely affect operating personnel and equipment. The objective of this investigation was to assist OWI by determining the thermal environment that would be experienced by personnel and equipment in a waste storage facility in salt. Particular emphasis was placed on determining the maximum floor and air temperatures, with and without ventilation, in the first 30 years after waste emplacement. The assumed facility design differs somewhat from those previously analyzed and reported, but many of the previous parametric surveys are useful for comparison. In this investigation a number of 2-dimensional and 3-dimensional simulations of the heat flow in a repository have been performed with the HEATING5 and TRUMP heat transfer codes. The representative repository constructs used in the simulations are described, as well as the computational models and computer codes. Results of the simulations are presented and discussed. Comparisons are made between the recent results and those from previous analyses. Finally, a summary of study limitations, comparisons, and conclusions is given.
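
    The HEATING5 and TRUMP codes themselves are not reproduced here, but the class of calculation they perform can be illustrated with one explicit finite-difference step of 2-D heat conduction. All numbers below are placeholders, not the repository design values.

        import numpy as np

        def heat_step(T, alpha, dx, dt):
            """One explicit step of dT/dt = alpha * laplacian(T) on a uniform
            grid, holding the domain boundary at its initial temperature."""
            lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
                   np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4 * T) / dx**2
            Tn = T + alpha * dt * lap
            Tn[0, :], Tn[-1, :] = T[0, :], T[-1, :]          # reset boundaries
            Tn[:, 0], Tn[:, -1] = T[:, 0], T[:, -1]
            return Tn

        alpha, dx = 3e-6, 0.5               # m^2/s (salt-like), m; assumed values
        dt = 0.2 * dx**2 / alpha            # stable: alpha*dt/dx^2 <= 0.25
        T = np.full((60, 60), 300.0)        # K, ambient rock
        T[28:32, 28:32] = 400.0             # hot canister footprint (no reheating)
        for _ in range(1000):
            T = heat_step(T, alpha, dx, dt)
        print(round(T[30, 40], 1))          # small rise 5 m from the source

    A repository analysis adds what this toy omits, including decaying heat generation in the waste, 3-D geometry, ventilation, and temperature-dependent salt properties, which is why purpose-built codes were used.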

  10. One-Dimensional Spontaneous Raman Measurements of Temperature Made in a Gas Turbine Combustor

    NASA Technical Reports Server (NTRS)

    Hicks, Yolanda R.; Locke, Randy J.; DeGroot, Wilhelmus A.; Anderson, Robert C.

    2002-01-01

    The NASA Glenn Research Center is working with the aeronautics industry to develop highly fuel-efficient and environmentally friendly gas turbine combustor technology. This effort includes testing new hardware designs at conditions that simulate the high-temperature, high-pressure environment expected in the next generation of high-performance engines. Glenn has the only facilities in which such tests can be performed. One aspect of these tests is the use of nonintrusive optical and laser diagnostics to measure combustion species concentration, fuel/air ratio, fuel drop size, and velocity, and to visualize the fuel injector spray pattern and some combustion species distributions. These data not only help designers to determine the efficacy of specific designs, but provide a database for computer modelers and enhance our understanding of the many processes that take place within a combustor. Until recently, we lacked one critical capability: the ability to measure temperature. This article summarizes our latest developments in that area. Recently, we demonstrated the first-ever use of spontaneous Raman scattering to measure combustion temperatures within the Advanced Subsonic Combustion Rig (ASCR) sector rig. We also established the highest rig pressure ever achieved for a continuous-flow combustor facility, 54.4 bar. The ASCR facility can provide operating pressures from 1 to 60 bar (60 atm). This photograph shows the Raman system set up next to the ASCR rig. The test was performed using a NASA-concept fuel injector and Jet-A fuel over a range of air inlet temperatures, pressures, and fuel/air ratios.

  11. Parallelization of NAS Benchmarks for Shared Memory Multiprocessors

    NASA Technical Reports Server (NTRS)

    Waheed, Abdul; Yan, Jerry C.; Saini, Subhash (Technical Monitor)

    1998-01-01

    This paper presents our experiences of parallelizing the sequential implementation of NAS benchmarks using compiler directives on SGI Origin2000 distributed shared memory (DSM) system. Porting existing applications to new high performance parallel and distributed computing platforms is a challenging task. Ideally, a user develops a sequential version of the application, leaving the task of porting to new generations of high performance computing systems to parallelization tools and compilers. Due to the simplicity of programming shared-memory multiprocessors, compiler developers have provided various facilities to allow the users to exploit parallelism. Native compilers on SGI Origin2000 support multiprocessing directives to allow users to exploit loop-level parallelism in their programs. Additionally, supporting tools can accomplish this process automatically and present the results of parallelization to the users. We experimented with these compiler directives and supporting tools by parallelizing sequential implementation of NAS benchmarks. Results reported in this paper indicate that with minimal effort, the performance gain is comparable with the hand-parallelized, carefully optimized, message-passing implementations of the same benchmarks.
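
    The SGI directives themselves are compiler pragmas in the Fortran/C sources and are not quoted in the paper. As a language-neutral illustration of the loop-level parallelism they express, the Python sketch below runs one benchmark-style loop serially and then across a process pool, checking that the results agree.

        import math
        from concurrent.futures import ProcessPoolExecutor

        def work(i):
            # Stand-in for one independent iteration of a compute loop.
            return sum(math.sin(i + k) for k in range(10000))

        if __name__ == "__main__":
            serial = [work(i) for i in range(64)]        # the original loop
            with ProcessPoolExecutor() as ex:            # the "parallelized" loop
                parallel = list(ex.map(work, range(64)))
            print(serial == parallel)                    # True: same results

    A parallelizing compiler performs the analogous transformation implicitly, after proving (or being told via a directive) that the loop iterations are independent.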

  12. ARC-1980-AC80-0512-2

    NASA Image and Video Library

    1980-06-05

    N-231 High Reynolds Number Channel Facility (an example of a versatile wind tunnel). Tunnel 1 is a blowdown facility that utilizes interchangeable test sections and nozzles. The facility provides experimental support for fluid mechanics research, including experimental verification of aerodynamic computer codes and boundary-layer and airfoil studies that require high Reynolds number simulation. (Tunnel 1)

  13. A Look at the Impact of High-End Computing Technologies on NASA Missions

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Dunbar, Jill; Hardman, John; Bailey, F. Ron; Wheeler, Lorien; Rogers, Stuart

    2012-01-01

    From its bold start nearly 30 years ago and continuing today, the NASA Advanced Supercomputing (NAS) facility at Ames Research Center has enabled remarkable breakthroughs in the space agency's science and engineering missions. Throughout this time, NAS experts have influenced the state of the art in high-performance computing (HPC) and related technologies such as scientific visualization, system benchmarking, batch scheduling, and grid environments. We highlight the pioneering achievements and innovations originating from and made possible by NAS resources and know-how, from early supercomputing environment design and software development, to long-term simulations and analyses critical to designing safe Space Shuttle operations and associated spinoff technologies, to the highly successful Kepler Mission's discovery of new planets now capturing the world's imagination.

  14. An inventory of aeronautical ground research facilities. Volume 4: Engineering flight simulation facilities

    NASA Technical Reports Server (NTRS)

    Pirrello, C. J.; Hardin, R. D.; Capelluro, L. P.; Harrison, W. D.

    1971-01-01

    The general purpose capabilities of government and industry in the area of real time engineering flight simulation are discussed. The information covers computer equipment, visual systems, crew stations, and motion systems, along with brief statements of facility capabilities. Facility construction and typical operational costs are included where available. The facilities provide for economical and safe solutions to vehicle design, performance, control, and flying qualities problems of manned and unmanned flight systems.

  15. (abstract) Simple Spreadsheet Thermal Models for Cryogenic Applications

    NASA Technical Reports Server (NTRS)

    Nash, A. E.

    1994-01-01

    Self-consistent circuit-analog thermal models, which can be run in commercial spreadsheet programs on personal computers, have been created to calculate the cooldown and steady-state performance of cryogen-cooled Dewars. The models include temperature-dependent conduction and radiation effects. The outputs of the models provide temperature distribution and Dewar performance information. These models have been used to analyze the Cryogenic Telescope Test Facility (CTTF). The facility will be on line in early 1995 for its first user, the Infrared Telescope Technology Testbed (ITTT), for the Space Infrared Telescope Facility (SIRTF) at JPL. The model algorithm as well as a comparison of the model predictions and actual performance of this facility will be presented.

  16. Simple Spreadsheet Thermal Models for Cryogenic Applications

    NASA Technical Reports Server (NTRS)

    Nash, Alfred

    1995-01-01

    Self-consistent circuit-analog thermal models that can be run in commercial spreadsheet programs on personal computers have been created to calculate the cooldown and steady-state performance of cryogen-cooled Dewars. The models include temperature-dependent conduction and radiation effects. The outputs of the models provide temperature distribution and Dewar performance information. These models have been used to analyze the SIRTF Telescope Test Facility (STTF). The facility has been brought on line for its first user, the Infrared Telescope Technology Testbed (ITTT), for the Space Infrared Telescope Facility (SIRTF) at JPL. The model algorithm as well as a comparison between the models' predictions and the actual performance of this facility will be presented.
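
    The circuit-analog idea maps heat capacities to capacitors and conduction or radiation paths to (possibly nonlinear) resistors, then integrates the network explicitly, one timestep per spreadsheet row. A minimal two-node Python equivalent, with assumed parameters rather than the STTF values, is:

        SIGMA = 5.67e-8          # W/m^2/K^4, Stefan-Boltzmann constant
        C = 5000.0               # J/K, instrument heat capacity (taken constant)
        G = 0.8                  # W/K, conductive link to the cryogen bath
        EPS_A = 0.001            # m^2, emissivity*area product to a warm shroud
        T_BATH, T_SHROUD = 77.0, 300.0
        T, dt, t = 300.0, 10.0, 0.0          # K, s, s

        while T - T_BATH > 1.0:              # cool to within 1 K of the bath
            q_cond = G * (T - T_BATH)                        # W out, conduction
            q_rad = EPS_A * SIGMA * (T_SHROUD**4 - T**4)     # W in, radiation
            T += dt * (q_rad - q_cond) / C
            t += dt
        print(f"reached {T:.1f} K after {t / 3600:.1f} h")

    Temperature-dependent conductances and capacities, as in the actual models, enter simply by re-evaluating G(T) and C(T) at each step, which is exactly what a spreadsheet row can do.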

  17. Applied Computational Fluid Dynamics at NASA Ames Research Center

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.; Kwak, Dochan (Technical Monitor)

    1994-01-01

    The field of Computational Fluid Dynamics (CFD) has advanced to the point where it can now be used for many applications in fluid mechanics research and aerospace vehicle design. A few applications being explored at NASA Ames Research Center will be presented and discussed. The examples presented will range in speed from hypersonic to low speed incompressible flow applications. Most of the results will be from numerical solutions of the Navier-Stokes or Euler equations in three space dimensions for general geometry applications. Computational results will be used to highlight the presentation as appropriate. Advances in computational facilities including those associated with NASA's CAS (Computational Aerosciences) Project of the Federal HPCC (High Performance Computing and Communications) Program will be discussed. Finally, opportunities for future research will be presented and discussed. All material will be taken from non-sensitive, previously-published and widely-disseminated work.

  18. Aerodynamic design trends for commercial aircraft

    NASA Technical Reports Server (NTRS)

    Hilbig, R.; Koerner, H.

    1986-01-01

    Recent research on advanced-configuration commercial aircraft at DFVLR is surveyed, with a focus on aerodynamic approaches to improved performance. Topics examined include transonic wings with variable camber or shock/boundary-layer control, wings with reduced friction drag or laminarized flow, prop-fan propulsion, and unusual configurations or wing profiles. Drawings, diagrams, and graphs of predicted performance are provided, and the need for extensive development efforts using powerful computer facilities, high-speed and low-speed wind tunnels, and flight tests of models (mounted on specially designed carrier aircraft) is indicated.

  19. The UK Human Genome Mapping Project online computing service.

    PubMed

    Rysavy, F R; Bishop, M J; Gibbs, G P; Williams, G W

    1992-04-01

    This paper presents an overview of computing and networking facilities developed by the Medical Research Council to provide online computing support to the Human Genome Mapping Project (HGMP) in the UK. The facility is connected to a number of other computing facilities in various centres of genetics and molecular biology research excellence, either directly via high-speed links or through national and international wide-area networks. The paper describes the design and implementation of the current system, a 'client/server' network of Sun, IBM, DEC and Apple servers, gateways and workstations. A short outline of online computing services currently delivered by this system to the UK human genetics research community is also provided. More information about the services and their availability could be obtained by a direct approach to the UK HGMP-RC.

  20. High-temperature behavior of advanced spacecraft TPS

    NASA Technical Reports Server (NTRS)

    Pallix, Joan

    1994-01-01

    The objective of this work has been to develop more efficient, lighter weight, and higher temperature thermal protection systems (TPS) for future reentry space vehicles. The research carried out during this funding period involved the design, analysis, testing, fabrication, and characterization of thermal protection materials to be used on future hypersonic vehicles. This work is important for the prediction of material performance at high temperature and aids in the design of thermal protection systems for a number of programs, such as the National Aerospace Plane (NASP), Pegasus and Pegasus/SWERVE, the Comet Rendezvous and Flyby Vehicle (CRAF), and the Mars mission entry vehicles. Research has been performed in two main areas: development and testing of thermal protection systems, and computational research. A variety of TPS materials and coatings have been developed during this funding period. Ceramic coatings were developed for flexible insulations as well as for low-density ceramic insulators. Chemical vapor deposition processes were established for the fabrication of ceramic matrix composites. Experimental testing and characterization of these materials has been carried out in the NASA Ames Research Center Thermophysics Facilities and in the Ames time-of-flight mass spectrometer facility. By means of computation, we have been better able to understand the flow structure and properties of the TPS components and to estimate the aerothermal heating, stress, ablation rate, thermal response, and shape change on the surfaces of TPS. In addition, work for the computational surface thermochemistry project has included modification of existing computer codes and creating new codes to model material response and shape change on atmospheric entry vehicles in a variety of environments (e.g., Earth and Mars atmospheres).

  2. Using Modeling and Simulation to Complement Testing for Increased Understanding of Weapon Subassembly Response.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wong, Michael K.; Davidson, Megan

    As part of Sandia's nuclear deterrence mission, the B61-12 Life Extension Program (LEP) aims to modernize the aging weapon system. Modernization requires requalification, and Sandia is using high-performance computing to perform advanced computational simulations to better understand, evaluate, and verify weapon system performance in conjunction with limited physical testing. The Nose Bomb Subassembly (NBSA) of the B61-12 is responsible for producing a fuzing signal upon ground impact. The fuzing signal depends on electromechanical impact sensors producing valid electrical fuzing signals at impact. Computer-generated models were used to assess the timing between the impact sensor's response to the deceleration of impact and damage to major components and system subassemblies. The modeling and simulation team worked alongside the physical test team to design a large-scale reverse ballistic test to not only assess system performance but also validate their computational models. The reverse ballistic test conducted at Sandia's sled test facility sent a rocket sled with a representative target into a stationary B61-12 NBSA to characterize the nose crush and the functional response of NBSA components. Data obtained from data recorders and high-speed photometrics were integrated with previously generated computer models in order to refine and validate the models' ability to reliably simulate real-world effects. Large-scale tests are impractical to conduct for every impact scenario. By creating reliable computer models, we can perform simulations that identify trends and produce estimates of outcomes over the entire range of required impact conditions. Sandia's HPCs enable geometric resolution that was unachievable before, allowing for more fidelity and detail and creating simulations that can provide insight to support evaluation of requirements and performance margins. As computing resources continue to improve, researchers at Sandia hope to improve these simulations so they provide increasingly credible analysis of the system response and performance over the full range of conditions.

  3. VisIVO: A Library and Integrated Tools for Large Astrophysical Dataset Exploration

    NASA Astrophysics Data System (ADS)

    Becciani, U.; Costa, A.; Ersotelos, N.; Krokos, M.; Massimino, P.; Petta, C.; Vitello, F.

    2012-09-01

    VisIVO provides an integrated suite of tools and services that can be used in many scientific fields. VisIVO development started within the Virtual Observatory framework. VisIVO allows users to create meaningful visualizations of highly complex, large-scale datasets and to produce movies of these visualizations based on distributed infrastructures. VisIVO supports high-performance, multi-dimensional visualization of large-scale astrophysical datasets. Users can rapidly obtain meaningful visualizations while preserving full and intuitive control of the relevant parameters. VisIVO consists of VisIVO Desktop, a stand-alone application for interactive visualization on standard PCs; VisIVO Server, a platform for high-performance visualization; VisIVO Web, a custom-designed web portal; VisIVO Smartphone, an application that exploits the VisIVO Server functionality; and the latest VisIVO feature, the VisIVO Library, which allows a job running on a computational system (grid, HPC, etc.) to produce movies directly from the code's internal data arrays, without the need to produce intermediate files. This is particularly important when running on large computational facilities, where the user wants to inspect the results during the data production phase. For example, in grid computing facilities, images can be produced directly in the grid catalogue while the user code is running on a system that cannot be accessed directly by the user (a worker node). The deployment of VisIVO on the DG and gLite is carried out with the support of the EDGI and EGI-Inspire projects. Depending on the structure and size of the datasets under consideration, the data exploration process can take several hours of CPU time to create customized views, and the production of movies can last several days. For this reason, an MPI-parallel version of VisIVO could play a fundamental role in increasing performance; for example, it could be deployed automatically on MPI-aware nodes. A central concept in our development is thus to produce unified code that can run either on serial nodes or in parallel on HPC-oriented grid nodes. Another important aspect in obtaining the highest possible performance is the integration of VisIVO processes with grid nodes where GPUs are available. We have selected CUDA for implementing a range of computationally heavy modules. VisIVO is supported by the EGI-Inspire, EDGI, and SCI-BUS projects.

  4. Application of Adjoint Method and Spectral-Element Method to Tomographic Inversion of Regional Seismological Structure Beneath Japanese Islands

    NASA Astrophysics Data System (ADS)

    Tsuboi, S.; Miyoshi, T.; Obayashi, M.; Tono, Y.; Ando, K.

    2014-12-01

    Recent progress in large-scale computing, using waveform modeling techniques and high-performance computing facilities, has demonstrated the possibility of performing full-waveform inversion of three-dimensional (3D) seismological structure inside the Earth. We apply the adjoint method (Liu and Tromp, 2006) to obtain 3D structure beneath the Japanese Islands. First, we implemented the Spectral-Element Method on the K computer in Kobe, Japan. We optimized SPECFEM3D_GLOBE (Komatitsch and Tromp, 2002) using OpenMP so that the code fits the hybrid architecture of the K computer. We can now use 82,134 nodes of the K computer (657,072 cores) to compute synthetic waveforms with about 1 s accuracy for a realistic 3D Earth model; the measured performance was 1.2 PFLOPS. We use this optimized SPECFEM3D_GLOBE code, take one chunk of the global mesh around the Japanese Islands, and compute synthetic seismograms with an accuracy of about 10 s. We use the GAP-P2 mantle tomography model (Obayashi et al., 2009) as the initial 3D model and use as many broadband seismic stations available in this region as possible in the inversion. We then use time windows for body waves and surface waves to compute adjoint sources and calculate adjoint kernels for the seismic structure. We have performed several iterations and obtained an improved 3D structure beneath the Japanese Islands. The results demonstrate that the waveform misfit between observed and synthetic seismograms decreases as the iterations proceed. We are now preparing to use much shorter periods in the synthetic waveform computation and to obtain seismic structure at the basin scale, such as for the Kanto basin, where there is a dense seismic network and high seismic activity. Acknowledgements: This research was partly supported by the MEXT Strategic Program for Innovative Research. We used F-net seismograms of the National Research Institute for Earth Science and Disaster Prevention.
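
    At its core, the adjoint workflow described above is an iterative, gradient-driven model update: run the forward solver, form adjoint sources from the residuals, and assemble a gradient (kernel) from the adjoint run. The sketch below illustrates that loop with a toy linear operator standing in for the spectral-element solver; the operator, step size, and all names are illustrative assumptions, not part of SPECFEM3D_GLOBE.

```python
# Schematic adjoint-method inversion loop. A toy linear operator G stands in
# for the (nonlinear, expensive) spectral-element forward simulation.
import numpy as np

rng = np.random.default_rng(0)
G = rng.standard_normal((200, 50))   # toy forward operator ("one SEM run")
m_true = rng.standard_normal(50)
d_obs = G @ m_true                   # "observed" seismograms

m = np.zeros(50)                     # initial model (analogue of GAP-P2)
step = 1e-3                          # assumed fixed step length
for it in range(20):
    d_syn = G @ m                    # forward simulation
    resid = d_syn - d_obs            # adjoint source within the time windows
    grad = G.T @ resid               # adjoint run assembles the kernel
    m -= step * grad                 # model update
    print(it, 0.5 * resid @ resid)   # waveform misfit decreases per iteration
```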

  5. Administration of Computer Resources.

    ERIC Educational Resources Information Center

    Franklin, Gene F.

    Computing at Stanford University has, until recently, been performed at one of five facilities. The Stanford hospital operates an IBM 370/135 mainly for administrative use. The university business office has an IBM 370/145 for its administrative needs and support of the medical clinic. Under the supervision of the Stanford Computation Center are…

  6. A Fluid Structure Algorithm with Lagrange Multipliers to Model Free Swimming

    NASA Astrophysics Data System (ADS)

    Sahin, Mehmet; Dilek, Ezgi

    2017-11-01

    A new monolithic approach is proposed to solve the fluid-structure interaction (FSI) problem with Lagrange multipliers in order to model free swimming/flying. In the present approach, the fluid domain is modeled by the incompressible Navier-Stokes equations and discretized using an Arbitrary Lagrangian-Eulerian (ALE) formulation based on the stable side-centered unstructured finite volume method. The solid domain is modeled by the constitutive laws for the nonlinear Saint Venant-Kirchhoff material, and the classical Galerkin finite element method is used to discretize the governing equations in a Lagrangian frame. In order to impose the body motion/deformation, the distance between the constraint pair nodes is prescribed using Lagrange multipliers, which is independent of the frame of reference. The resulting algebraic linear equations are solved in a fully coupled manner using a dual approach (null-space method). The present numerical algorithm is initially validated for classical FSI benchmark problems and then applied to the free swimming of three linked ellipses. The authors are grateful for the use of the computing resources provided by the National Center for High Performance Computing (UYBHM) under Grant Number 10752009 and the computing facilities at TUBITAK-ULAKBIM, High Performance and Grid Computing Center.
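
    The null-space (dual) solve mentioned above can be illustrated on a small equality-constrained linear system: minimize a quadratic energy subject to linear constraints, parameterize the solution over the null space of the constraint matrix, then recover the multipliers. The matrices below are random placeholders; this is a schematic of the idea, not the authors' solver.

```python
# Null-space (dual) solution of: minimize 0.5 u^T K u - f^T u  s.t.  B u = g.
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(1)
n, m = 8, 3
A = rng.standard_normal((n, n))
K = A @ A.T + n * np.eye(n)        # SPD "stiffness" matrix (toy)
f = rng.standard_normal(n)
B = rng.standard_normal((m, n))    # constraints (e.g., linearized node-pair distances)
g = rng.standard_normal(m)

u_p = np.linalg.lstsq(B, g, rcond=None)[0]  # particular solution of B u = g
Z = null_space(B)                           # basis of null(B)
y = np.linalg.solve(Z.T @ K @ Z, Z.T @ (f - K @ u_p))
u = u_p + Z @ y                             # satisfies the constraints exactly

lam = np.linalg.lstsq(B.T, f - K @ u, rcond=None)[0]  # Lagrange multipliers
print(np.allclose(B @ u, g))                # True: constraints hold
```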

  7. Scientific Visualization in High Speed Network Environments

    NASA Technical Reports Server (NTRS)

    Vaziri, Arsi; Kutler, Paul (Technical Monitor)

    1997-01-01

    In several cases, new visualization techniques have vastly increased the researcher's ability to analyze and comprehend data. Similarly, the role of networks in providing an efficient supercomputing environment has become more critical and continues to grow at a faster rate than the increase in the processing capabilities of supercomputers. A close relationship between scientific visualization and high-speed networks is identified as an important link in supporting efficient supercomputing. The two technologies are driven by the increasing complexity and volume of supercomputer data. The interaction of scientific visualization and high-speed networks in a Computational Fluid Dynamics simulation/visualization environment is described. Current capabilities supported by high-speed networks, supercomputers, and high-performance graphics workstations at the Numerical Aerodynamic Simulation Facility (NAS) at NASA Ames Research Center are described. Applied research in providing a supercomputer visualization environment to support future computational requirements is summarized.

  8. Comparison of x ray computed tomography number to proton relative linear stopping power conversion functions using a standard phantom.

    PubMed

    Moyers, M F

    2014-06-01

    Adequate evaluation of the results from multi-institutional trials involving light ion beam treatments requires consideration of the planning margins applied to both targets and organs at risk. A major uncertainty that affects the size of these margins is the conversion of x-ray computed tomography numbers (XCTNs) to relative linear stopping powers (RLSPs). Various facilities engaged in multi-institutional clinical trials involving proton beams have been applying significantly different margins in their patient planning. This study was performed to determine the variance in the conversion functions used at proton facilities in the U.S.A. wishing to participate in National Cancer Institute sponsored clinical trials. A simplified method of determining the conversion function was developed using a standard phantom containing only water and aluminum. The new method was based on the premise that all scanners have their XCTNs for air and water calibrated daily to constant values, but that the XCTNs for high density/high atomic number materials vary with scanning conditions. The standard phantom was taken to 10 different proton facilities and scanned with the local protocols, resulting in 14 derived conversion functions, which were compared to the conversion functions used at the local facilities. For tissues within ±300 XCTN of water, all facility functions produced converted RLSP values within ±6% of the values produced by the standard function and within 8% of the values from any other facility's function. For XCTNs corresponding to lung tissue, converted RLSP values differed by as much as ±8% from the standard and by up to 16% from the values of other facilities. For XCTNs corresponding to low-density immobilization foam, the maximum to minimum values differed by as much as 40%. The new method greatly simplifies determination of the conversion function, reduces ambiguity, and in the future could promote standardization between facilities. Although it was not possible from these experiments to determine which conversion function is most appropriate, the variation between facilities suggests that the margins used in some facilities to account for the uncertainty in converting XCTNs to RLSPs may be too small.
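
    As a rough illustration of how such a conversion function is typically applied in planning, the sketch below interpolates linearly between calibration points; the XCTN/RLSP pairs are invented placeholders, not data from this study.

```python
# Hypothetical piecewise-linear XCTN -> RLSP conversion. The calibration
# points below are illustrative placeholders, not values from the paper.
import numpy as np

xctn_cal = np.array([-1000.0, -700.0, 0.0, 300.0, 1200.0, 3000.0])  # air .. metal
rlsp_cal = np.array([0.001, 0.29, 1.000, 1.14, 1.65, 2.40])

def xctn_to_rlsp(xctn):
    """Convert x-ray CT numbers to relative linear stopping powers by
    linear interpolation between tissue calibration points."""
    return np.interp(xctn, xctn_cal, rlsp_cal)

print(xctn_to_rlsp(0.0))     # water maps to 1.0 by construction
print(xctn_to_rlsp(-300.0))  # a lung-like tissue value
```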

  9. Hybrid massively parallel fast sweeping method for static Hamilton–Jacobi equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Detrixhe, Miles, E-mail: mdetrixhe@engineering.ucsb.edu; University of California Santa Barbara, Santa Barbara, CA, 93106; Gibou, Frédéric, E-mail: fgibou@engineering.ucsb.edu

    The fast sweeping method is a popular algorithm for solving a variety of static Hamilton–Jacobi equations. Fast sweeping algorithms for parallel computing have been developed, but are severely limited. In this work, we present a multilevel, hybrid parallel algorithm that combines the desirable traits of two distinct parallel methods. The fine- and coarse-grained components of the algorithm take advantage of the heterogeneous computer architecture common in high-performance computing facilities. We present the algorithm and demonstrate its effectiveness on a set of example problems including optimal control, dynamic games, and seismic wave propagation. We give results for convergence and parallel scaling, and show state-of-the-art speedup values for the fast sweeping method.
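
    For readers unfamiliar with the underlying serial algorithm, the sketch below shows the classic Gauss-Seidel sweeps of the fast sweeping method on a 2D eikonal problem with unit speed; the paper's contribution is the hybrid coarse/fine parallel decomposition of these sweeps, which this toy deliberately omits.

```python
# Serial 2D fast sweeping sketch for the eikonal equation |grad u| = 1
# with a point source; four alternating sweep orderings per pass.
import numpy as np

N, h = 101, 1.0 / 100
BIG = 1e10
u = np.full((N, N), BIG)
u[50, 50] = 0.0                           # point source

def sweep(u, ii, jj):
    for i in ii:
        for j in jj:
            a = min(u[i - 1, j] if i > 0 else BIG,
                    u[i + 1, j] if i < N - 1 else BIG)
            b = min(u[i, j - 1] if j > 0 else BIG,
                    u[i, j + 1] if j < N - 1 else BIG)
            if abs(a - b) >= h:           # one-sided (causal) update
                cand = min(a, b) + h
            else:                         # two-sided quadratic update
                cand = 0.5 * (a + b + np.sqrt(2 * h * h - (a - b) ** 2))
            u[i, j] = min(u[i, j], cand)

for _ in range(2):                        # a few passes of the four orderings
    sweep(u, range(N), range(N))
    sweep(u, range(N - 1, -1, -1), range(N))
    sweep(u, range(N), range(N - 1, -1, -1))
    sweep(u, range(N - 1, -1, -1), range(N - 1, -1, -1))

print(u[50, 90], 40 * h)                  # matches the straight-line distance
```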

  10. Computational Nuclear Physics and Post Hartree-Fock Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lietz, Justin; Sam, Novario; Hjorth-Jensen, M.

    We present a computational approach to infinite nuclear matter employing Hartree-Fock theory, many-body perturbation theory, and coupled cluster theory. These lectures are closely linked with those of chapters 9, 10 and 11 and serve as input for the correlation functions employed in Monte Carlo calculations in chapter 9, the in-medium similarity renormalization group theory of dense fermionic systems of chapter 10, and the Green's function approach in chapter 11. We provide extensive code examples and benchmark calculations, thereby allowing the reader to start writing his or her own codes. We start with an object-oriented serial code and end with discussions on strategies for porting the code to present and planned high-performance computing facilities.
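
    In the spirit of the object-oriented serial starting point the lectures describe, the following sketch builds a periodic-box plane-wave basis and sums the kinetic energies of a filled Fermi sea, i.e. the zeroth-order reference energy for infinite matter. The class layout, units, and numbers are illustrative assumptions, not the chapter's actual code.

```python
# Illustrative serial starting point: plane-wave basis in a periodic box and
# the unperturbed (kinetic) reference energy of the filled Fermi sea.
import itertools, math

class PlaneWaveBasis:
    def __init__(self, nmax, L):
        self.L = L
        # states sorted by n^2 = nx^2 + ny^2 + nz^2, two spin projections each
        self.states = sorted(
            (nx * nx + ny * ny + nz * nz, (nx, ny, nz), spin)
            for nx, ny, nz in itertools.product(range(-nmax, nmax + 1), repeat=3)
            for spin in (+1, -1))

    def reference_kinetic_energy(self, n_particles, hbar2_over_2m=20.72):
        # E = sum_i hbar^2 k_i^2 / 2m with k = 2*pi*n/L (MeV, fm units assumed)
        factor = hbar2_over_2m * (2 * math.pi / self.L) ** 2
        return factor * sum(n2 for n2, _, _ in self.states[:n_particles])

basis = PlaneWaveBasis(nmax=2, L=5.0)
print(basis.reference_kinetic_energy(n_particles=14))  # closed shell of 14 neutrons
```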

  11. Computer and laboratory simulation of interactions between spacecraft surfaces and charged-particle environments

    NASA Technical Reports Server (NTRS)

    Stevens, N. J.

    1979-01-01

    Cases where the charged-particle environment acts on the spacecraft (e.g., spacecraft charging phenomena) and cases where a system on the spacecraft causes the interaction (e.g., high-voltage space power systems) are considered. Both categories were studied in ground simulation facilities to understand the processes involved and to measure the pertinent parameters. Computer simulations are based on the NASA Charging Analyzer Program (NASCAP) code. Analytical models are developed in this code and verified against the experimental data. Extrapolation from the small test samples to space conditions is made with this code. Typical results from laboratory and computer simulations are presented for both types of interactions. Extrapolations from these simulations to performance in space environments are discussed.

  12. Development of a CCD array as an imaging detector for advanced X-ray astrophysics facilities

    NASA Technical Reports Server (NTRS)

    Schwartz, D. A.

    1981-01-01

    The development of a charge coupled device (CCD) X-ray imager for a large aperture, high angular resolution X-ray telescope is discussed. Existing CCDs were surveyed and three candidate concepts were identified. An electronic camera control and computer interface, including software to drive a Fairchild 211 CCD, is described. In addition a vacuum mounting and cooling system is discussed. Performance data for the various components are given.

  13. Capsule modeling of high foot implosion experiments on the National Ignition Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clark, D. S.; Kritcher, A. L.; Milovich, J. L.

    This study summarizes the results of detailed, capsule-only simulations of a set of high foot implosion experiments conducted on the National Ignition Facility (NIF). These experiments span a range of ablator thicknesses, laser powers, and laser energies, and modeling these experiments as a set is important to assess whether the simulation model can reproduce the trends seen experimentally as the implosion parameters were varied. Two-dimensional (2D) simulations have been run including a number of effects—both nominal and off-nominal—such as hohlraum radiation asymmetries, surface roughness, the capsule support tent, and hot electron pre-heat. Selected three-dimensional simulations have also been run to assess the validity of the 2D axisymmetric approximation. As a composite, these simulations represent the current state of understanding of NIF high foot implosion performance using the best and most detailed computational model available. While the most detailed simulations show approximate agreement with the experimental data, it is evident that the model remains incomplete and further refinements are needed. Nevertheless, avenues for improved performance are clearly indicated.

  14. Capsule modeling of high foot implosion experiments on the National Ignition Facility

    DOE PAGES

    Clark, D. S.; Kritcher, A. L.; Milovich, J. L.; ...

    2017-03-21

    This study summarizes the results of detailed, capsule-only simulations of a set of high foot implosion experiments conducted on the National Ignition Facility (NIF). These experiments span a range of ablator thicknesses, laser powers, and laser energies, and modeling these experiments as a set is important to assess whether the simulation model can reproduce the trends seen experimentally as the implosion parameters were varied. Two-dimensional (2D) simulations have been run including a number of effects—both nominal and off-nominal—such as hohlraum radiation asymmetries, surface roughness, the capsule support tent, and hot electron pre-heat. Selected three-dimensional simulations have also been run to assess the validity of the 2D axisymmetric approximation. As a composite, these simulations represent the current state of understanding of NIF high foot implosion performance using the best and most detailed computational model available. While the most detailed simulations show approximate agreement with the experimental data, it is evident that the model remains incomplete and further refinements are needed. Nevertheless, avenues for improved performance are clearly indicated.

  15. Real science at the petascale.

    PubMed

    Saksena, Radhika S; Boghosian, Bruce; Fazendeiro, Luis; Kenway, Owain A; Manos, Steven; Mazzeo, Marco D; Sadiq, S Kashif; Suter, James L; Wright, David; Coveney, Peter V

    2009-06-28

    We describe computational science research that uses petascale resources to achieve scientific results at unprecedented scales and resolution. The applications span a wide range of domains, from investigation of fundamental problems in turbulence through computational materials science research to biomedical applications at the forefront of HIV/AIDS research and cerebrovascular haemodynamics. This work was mainly performed on the US TeraGrid 'petascale' resource, Ranger, at Texas Advanced Computing Center, in the first half of 2008 when it was the largest computing system in the world available for open scientific research. We have sought to use this petascale supercomputer optimally across application domains and scales, exploiting the excellent parallel scaling performance found on up to at least 32 768 cores for certain of our codes in the so-called 'capability computing' category as well as high-throughput intermediate-scale jobs for ensemble simulations in the 32-512 core range. Furthermore, this activity provides evidence that conventional parallel programming with MPI should be successful at the petascale in the short to medium term. We also report on the parallel performance of some of our codes on up to 65 636 cores on the IBM Blue Gene/P system at the Argonne Leadership Computing Facility, which has recently been named the fastest supercomputer in the world for open science.
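
    The split described above between tightly coupled "capability" runs and intermediate-scale ensemble jobs maps naturally onto MPI communicator splitting. Below is a minimal mpi4py sketch of that pattern with placeholder workloads; it illustrates the idea only and is unrelated to the authors' actual codes.

```python
# Split MPI.COMM_WORLD so some ranks form one large coupled job while the
# rest run independent ensemble members (workloads here are placeholders).
from mpi4py import MPI

world = MPI.COMM_WORLD
rank, size = world.Get_rank(), world.Get_size()

capability_ranks = size // 2              # first half: one tightly coupled job
color = 0 if rank < capability_ranks else 1
sub = world.Split(color=color, key=rank)

if color == 0:
    local = 1.0                           # stands in for coupled solver work
    total = sub.allreduce(local, op=MPI.SUM)
else:
    result = rank * rank                  # stands in for one ensemble member
    results = sub.gather(result, root=0)

world.Barrier()
if rank == 0:
    print("capability job ran on", capability_ranks, "ranks")
```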

  16. User's manual for COAST 4: a code for costing and sizing tokamaks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sink, D. A.; Iwinski, E. M.

    1979-09-01

    The purpose of this report is to document the computer program COAST 4 for the user/analyst. COAST (COst And Size Tokamak reactors) provides complete and self-consistent size models for the engineering features of D-T burning tokamak reactors and associated facilities, covering a continuum of performance from highly beam-driven through ignited plasma devices. TNS (The Next Step) devices with no tritium breeding or electrical power production are handled, as well as power-producing and fissile-producing fusion-fission hybrid reactors. The code has been normalized with a TFTR calculation which is consistent with cost, size, and performance data published in the conceptual design report for that device. Information on code development, computer implementation, and detailed user instructions is included in the text.

  17. It Takes A 'Village of Partnerships' To Raise A 'Big Data Facility' In A 'Big Data World'.

    NASA Astrophysics Data System (ADS)

    Evans, B. J. K.; Wyborn, L. A.

    2015-12-01

    The National Computational Infrastructure (NCI) at the Australian National University (ANU) has collocated a priority set of national and international data assets that span a wide range of domains, from climate, oceans, geophysics, environment, astronomy, and bioinformatics to the social sciences. The data are located on a 10 PB High Performance Data (HPD) node that is integrated with a High Performance Computing (HPC) facility to enable a new style of data-intensive in-situ analysis. Investigators can log in and access the data collections directly; access is also provided via modern standards-based web services. The NCI integrated HPD/HPC facility is supported by a 'village' of partnerships. NCI itself operates as a formal partnership between the ANU and three major national scientific agencies: CSIRO, the Bureau of Meteorology (BoM), and Geoscience Australia (GA). These same agencies are also the custodians of many of the national data collections hosted at NCI and, in partnership with other collaborating national and overseas organisations, have agreed to work together to develop a shared data environment and to use standards that enable interoperability between the collections, rather than isolating their collections as separate entities that each agency runs independently. To effectively analyse these complex and large-volume data sets, NCI has entered into a series of national and international partnerships to provide world-class digital analytical environments in which computational analyses can be conducted and shared. The ability of government and research to work in partnership at the NCI has been well established over the last decade, mainly with BoM, CSIRO, and GA. New industry linkages are now being encouraged by revised government agendas, and these promise to foster a new series of partnerships that will increase uptake of this major government-funded infrastructure and encourage further collaboration and innovation.

  18. Fault tolerant computer control for a Maglev transportation system

    NASA Technical Reports Server (NTRS)

    Lala, Jaynarayan H.; Nagle, Gail A.; Anagnostopoulos, George

    1994-01-01

    Magnetically levitated (Maglev) vehicles operating on dedicated guideways at speeds of 500 km/hr are an emerging transportation alternative to short-haul air and high-speed rail. They have the potential to offer a service significantly more dependable than air and with less operating cost than both air and high-speed rail. Maglev transportation derives these benefits by using magnetic forces to suspend a vehicle 8 to 200 mm above the guideway. Magnetic forces are also used for propulsion and guidance. The combination of high speed, short headways, stringent ride quality requirements, and a distributed offboard propulsion system necessitates high levels of automation for the Maglev control and operation. Very high levels of safety and availability will be required for the Maglev control system. This paper describes the mission scenario, functional requirements, and dependability and performance requirements of the Maglev command, control, and communications system. A distributed hierarchical architecture consisting of vehicle on-board computers, wayside zone computers, a central computer facility, and communication links between these entities was synthesized to meet the functional and dependability requirements on the maglev. Two variations of the basic architecture are described: the Smart Vehicle Architecture (SVA) and the Zone Control Architecture (ZCA). Preliminary dependability modeling results are also presented.

  19. Comparisons Between NO PLIF Imaging and CFD Simulations of Mixing Flowfields for High-Speed Fuel Injectors

    NASA Technical Reports Server (NTRS)

    Drozda, Tomasz, G.; Cabell, Karen F.; Ziltz, Austin R.; Hass, Neil E.; Inman, Jennifer A.; Burns, Ross A.; Bathel, Brett F.; Danehy, Paul M.; Abul-Huda, Yasin M.; Gamba, Mirko

    2017-01-01

    The current work compares experimentally and computationally obtained nitric oxide (NO) planar laser-induced fluorescence (PLIF) images of the mixing flowfields for three types of high-speed fuel injectors: a strut, a ramp, and a rectangular flush-wall. These injection devices, which exhibited promising mixing performance at lower flight Mach numbers, are currently being studied as a part of the Enhanced Injection and Mixing Project (EIMP) at the NASA Langley Research Center. The EIMP aims to investigate scramjet fuel injection and mixing physics, and improve the understanding of underlying physical processes relevant to flight Mach numbers greater than eight. In the experiments, conducted in the NASA Langley Arc-Heated Scramjet Test Facility (AHSTF), the injectors are placed downstream of a Mach 6 facility nozzle, which simulates the high Mach number air flow at the entrance of a scramjet combustor. Helium is used as an inert substitute for hydrogen fuel. The PLIF is obtained by using a tunable laser to excite the NO, which is present in the AHSTF air as a direct result of arc-heating. Consequently, the absence of signal is an indication of pure helium (fuel). The PLIF images computed from the computational fluid dynamics (CFD) simulations are obtained by combining a fluorescence model for NO with the Reynolds-Averaged Simulation results carried out using the VULCAN-CFD solver to obtain a computational equivalent of the experimentally measured PLIF signal. The measured NO PLIF signal is mainly a function of NO concentration allowing for semi-quantitative comparisons between the CFD and the experiments. The PLIF signal intensity is also sensitive to pressure and temperature variations in the flow, allowing additional flow features to be identified and compared with the CFD. Good agreement between the PLIF and the CFD results provides increased confidence in the CFD simulations for investigations of injector performance.

  20. Parallel computing in enterprise modeling.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldsby, Michael E.; Armstrong, Robert C.; Shneider, Max S.

    2008-08-01

    This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent-based simulation. What simulations of this class share, and what differs from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistic, economic, and social simulations are members of this class, where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that will greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language, which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is necessary to achieve the scale necessary for realistic results. With the recent upheavals in the financial markets and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.

  1. High-Throughput Computing on High-Performance Platforms: A Case Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oleynik, D; Panitkin, S; Matteo, Turilli

    The computing systems used by LHC experiments have historically consisted of the federation of hundreds to thousands of distributed resources, ranging from small to mid-size. In spite of the impressive scale of the existing distributed computing solutions, the federation of small to mid-size resources will be insufficient to meet projected future demands. This paper is a case study of how the ATLAS experiment has embraced Titan, a DOE leadership computing facility, in conjunction with traditional distributed high-throughput computing to reach sustained production scales of approximately 52M core-hours a year. The three main contributions of this paper are: (i) a critical evaluation of design and operational considerations to support the sustained, scalable, and production usage of Titan; (ii) a preliminary characterization of a next-generation executor for PanDA to support new workloads and advanced execution modes; and (iii) early lessons for how current and future experimental and observational systems can be integrated with production supercomputers and other platforms in a general and extensible manner.

  2. Software systems for modeling articulated figures

    NASA Technical Reports Server (NTRS)

    Phillips, Cary B.

    1989-01-01

    Research in computer animation and simulation of human task performance requires sophisticated geometric modeling and user interface tools. The software for a research environment should present the programmer with a powerful but flexible substrate of facilities for displaying and manipulating geometric objects, yet insure that future tools have a consistent and friendly user interface. Jack is a system which provides a flexible and extensible programmer and user interface for displaying and manipulating complex geometric figures, particularly human figures in a 3D working environment. It is a basic software framework for high-performance Silicon Graphics IRIS workstations for modeling and manipulating geometric objects in a general but powerful way. It provides a consistent and user-friendly interface across various applications in computer animation and simulation of human task performance. Currently, Jack provides input and control for applications including lighting specification and image rendering, anthropometric modeling, figure positioning, inverse kinematics, dynamic simulation, and keyframe animation.

  3. Enabling a high throughput real time data pipeline for a large radio telescope array with GPUs

    NASA Astrophysics Data System (ADS)

    Edgar, R. G.; Clark, M. A.; Dale, K.; Mitchell, D. A.; Ord, S. M.; Wayth, R. B.; Pfister, H.; Greenhill, L. J.

    2010-10-01

    The Murchison Widefield Array (MWA) is a next-generation radio telescope currently under construction in the remote Western Australia Outback. Raw data will be generated continuously at 5 GiB s⁻¹, grouped into 8 s cadences. This high throughput motivates the development of on-site, real time processing and reduction in preference to archiving, transport and off-line processing. Each batch of 8 s data must be completely reduced before the next batch arrives. Maintaining real time operation will require a sustained performance of around 2.5 TFLOP s⁻¹ (including convolutions, FFTs, interpolations and matrix multiplications). We describe a scalable heterogeneous computing pipeline implementation, exploiting both the high computing density and FLOP-per-Watt ratio of modern GPUs. The architecture is highly parallel within and across nodes, with all major processing elements performed by GPUs. Necessary scatter-gather operations along the pipeline are loosely synchronized between the nodes hosting the GPUs. The MWA will be a frontier scientific instrument and a pathfinder for planned peta- and exa-scale facilities.
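
    The quoted figures fix the real-time budget. A small back-of-envelope check, using only the numbers stated in the abstract:

```python
# Real-time budget implied by 5 GiB/s input, 8 s cadences, and a sustained
# requirement of ~2.5 TFLOP/s.
GIB = 2**30

rate_bytes = 5 * GIB                 # raw data rate, bytes per second
cadence_s = 8.0
batch_bytes = rate_bytes * cadence_s
flops_sustained = 2.5e12

print(f"per-cadence volume: {batch_bytes / GIB:.0f} GiB")            # 40 GiB
print(f"work per cadence:  {flops_sustained * cadence_s / 1e12:.0f} TFLOP")
print(f"arithmetic intensity ~ "
      f"{flops_sustained * cadence_s / batch_bytes:.0f} FLOP/byte")
```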

  4. Computational Analyses in Support of Sub-scale Diffuser Testing for the A-3 Facility. Part 1; Steady Predictions

    NASA Technical Reports Server (NTRS)

    Allgood, Daniel C.; Graham, Jason S.; Ahuja, Vineet; Hosangadi, Ashvin

    2008-01-01

    Simulation technology can play an important role in rocket engine test facility design and development by assessing risks, providing analysis of dynamic pressure and thermal loads, identifying failure modes, and predicting anomalous behavior of critical systems. Advanced numerical tools assume greater significance in supporting testing and design of high-altitude test facilities and plume-induced testing environments of high-thrust engines because of the greater inter-dependence and synergy in the functioning of the different sub-systems. This is especially true for facilities such as the proposed A-3 facility at NASA SSC because of a challenging operating envelope linked to variable throttle conditions at relatively low chamber pressures. Facility designs in this case will require a complex network of diffuser ducts, steam ejector trains, fast-operating valves, cooling water systems, and flow diverters that need to be characterized for steady-state performance. In this paper, we demonstrate the advanced capability of CFD analyses to evaluate supersonic diffuser and steam ejector performance in a sub-scale A-3 facility at NASA Stennis Space Center (SSC), where extensive testing was performed. Furthermore, the focus in this paper relates to modeling of critical sub-systems and components used in facilities such as the A-3 facility. The work here addresses deficiencies in empirical models and current CFD analyses that are used for the design of supersonic diffusers/turning vanes/ejectors, as well as analyses for confined plumes and venting processes. The primary areas addressed are: (1) supersonic diffuser performance, including analyses of thermal loads; (2) accurate shock capturing in the diffuser duct; (3) the effect of the turning duct on the performance of the facility; (4) prediction of mass flow rates and performance classification for steam ejectors; and (5) comparisons with test data from sub-scale diffuser testing and assessment of confidence levels in CFD-based flowpath modeling of the facility. The analysis tools used here expand on the multi-element unstructured CFD methodology which has been tailored and validated for impingement dynamics of dry plumes, complex valve/feed systems, and high-pressure propellant delivery systems used in engine and component test stands at NASA SSC. The analyses performed in the evaluation of the sub-scale diffuser facility explored several important factors that influence modeling and understanding of facility operation, such as (a) the importance of modeling the facility with a real-gas approximation, (b) approximating the cluster of steam ejector nozzles as a single annular nozzle, (c) the existence of mixed subsonic/supersonic flow downstream of the turning duct, and (d) the inadequacy of two-equation turbulence models in predicting the correct pressurization in the turning duct and the expansion of the second-stage steam ejectors. The procedure used for modeling the facility was as follows: (i) the engine, test cell, and first-stage ejectors were simulated with an axisymmetric approximation; (ii) the turning duct, second-stage ejectors, and the piping downstream of the second-stage ejectors were analyzed with a three-dimensional simulation utilizing a half-plane symmetry approximation, with the solution, i.e., primitive variables such as pressure, velocity components, temperature, and turbulence quantities, passed from the first computational domain and specified as a supersonic boundary condition for the second simulation; and (iii) the third domain comprised the exit diffuser and the region in the vicinity of the facility (included primarily to capture the correct shock structure at the exit of the facility and the entrainment characteristics). The first set of simulations, comprising the engine, test cell, and first-stage ejectors, was carried out both as a turbulent real-gas calculation and as a turbulent perfect-gas calculation. Comparisons of the Mach number and temperature distributions for the two cases (real-gas turbulent and perfect-gas turbulent) are shown in Figures 1 and 2, respectively. The Mach number distribution shows small yet distinct differences between the two cases, such as the locations of shocks/shock reflections and a slightly different impingement point on the wall of the diffuser from the expansion at the exit of the nozzle. Similarly, the temperature distribution indicates different flow recirculation patterns in the test cell. Both cases capture all the essential flow phenomena, such as the shock-boundary layer interaction, plume expansion, expansion of the first-stage ejectors, mixing between the engine plume and the first-stage ejector flow, and pressurization due to the first-stage ejectors. The final paper will discuss thermal loads on the walls of the diffuser and the cooling mechanisms investigated.

  5. Human performance models for computer-aided engineering

    NASA Technical Reports Server (NTRS)

    Elkind, Jerome I. (Editor); Card, Stuart K. (Editor); Hochberg, Julian (Editor); Huey, Beverly Messick (Editor)

    1989-01-01

    This report discusses a topic important to the field of computational human factors: models of human performance and their use in computer-based engineering facilities for the design of complex systems. It focuses on a particular human factors design problem -- the design of cockpit systems for advanced helicopters -- and on a particular aspect of human performance -- vision and related cognitive functions. By focusing in this way, the authors were able to address the selected topics in some depth and develop findings and recommendations that they believe have application to many other aspects of human performance and to other design domains.

  6. Autonomous Electrothermal Facility for Oil Recovery Intensification Fed by Wind Driven Power Unit

    NASA Astrophysics Data System (ADS)

    Belsky, Aleksey A.; Dobush, Vasiliy S.

    2017-10-01

    This paper describes the structure of an autonomous facility, fed by a wind-driven power unit, for the intensification of viscous and heavy crude oil recovery by means of heat impact on productive strata. A computer-based simulation of this facility was performed. Operational energy characteristics were obtained for various operating modes of the facility. The optimal resistance of the heating element of the downhole heater was determined for maximum operating efficiency of the wind power unit.
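
    Why an optimal heater resistance exists can be seen from maximum power transfer. The sketch below assumes, purely for illustration, that the generator and line are lumped into a Thevenin source of EMF V and internal resistance r, in which case the load power P(R) = V²R/(R+r)² peaks at R = r; the numbers are invented, not from the paper.

```python
# Load power versus heater resistance for an assumed Thevenin source model.
import numpy as np

V, r = 230.0, 4.0                     # illustrative source EMF (V) and internal resistance (ohm)
R = np.linspace(0.5, 20.0, 200)       # candidate heater resistances (ohm)
P = V**2 * R / (R + r) ** 2           # power delivered to the heater

print(f"best R ~ {R[np.argmax(P)]:.2f} ohm (theory: R = r = {r} ohm)")
```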

  7. Evaluation of Cooling Conditions for a High Heat Flux Testing Facility Based on Plasma-Arc Lamps

    DOE PAGES

    Charry, Carlos H.; Abdel-khalik, Said I.; Yoda, Minami; ...

    2015-07-31

    The new Irradiated Material Target Station (IMTS) facility for fusion materials at Oak Ridge National Laboratory (ORNL) uses an infrared plasma-arc lamp (PAL) to deliver incident heat fluxes as high as 27 MW/m². The facility is being used to test irradiated plasma-facing component materials as part of the joint US-Japan PHENIX program. The irradiated samples are to be mounted on molybdenum sample holders attached to a water-cooled copper rod. Depending on the size and geometry of samples, several sample holder and copper rod configurations have been fabricated and tested. As a part of the effort to design sample holders compatible with the high heat flux (HHF) testing to be conducted at the IMTS facility, numerical simulations have been performed for two different water-cooled sample holder designs using the ANSYS FLUENT 14.0 commercial computational fluid dynamics (CFD) software package. The primary objective of this work is to evaluate the cooling capability of different sample holder designs, i.e., to estimate their maximum allowable incident heat flux values. 2D axisymmetric numerical simulations are performed using the realizable k-ε turbulence model and the RPI nucleate boiling model within ANSYS FLUENT 14.0. The results of the numerical model were compared against the experimental data for two sample holder designs tested in the IMTS facility. The model has been used to parametrically evaluate the effect of various operational parameters on the predicted temperature distributions. The results were used to identify the limiting parameter for safe operation of the two sample holders and the associated peak heat flux limits. The results of this investigation will help guide the development of new sample holder designs.

  8. An Electronic Pressure Profile Display system for aeronautic test facilities

    NASA Technical Reports Server (NTRS)

    Woike, Mark R.

    1990-01-01

    The NASA Lewis Research Center has installed an Electronic Pressure Profile Display system. This system provides for the real-time display of pressure readings on high resolution graphics monitors. The Electronic Pressure Profile Display system will replace manometer banks currently used in aeronautic test facilities. The Electronic Pressure Profile Display system consists of an industrial type Digital Pressure Transmitter (DPT) unit which interfaces with a host computer. The host computer collects the pressure data from the DPT unit, converts it into engineering units, and displays the readings on a high resolution graphics monitor in bar graph format. Software was developed to accomplish the above tasks and also draw facility diagrams as background information on the displays. Data transfer between host computer and DPT unit is done with serial communications. Up to 64 channels are displayed with one second update time. This paper describes the system configuration, its features, and its advantages over existing systems.
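
    A minimal sketch of the host-side acquisition loop is given below. The serial framing ("channel,counts" per line), the scaling constants, and the port name are all invented for illustration; the paper does not document the DPT protocol.

```python
# Hypothetical host loop: read raw counts from a pressure transmitter over a
# serial line and convert them to engineering units for display.
import serial  # pyserial

SPAN_PSI = 15.0           # assumed full-scale range per channel
FULL_SCALE_COUNTS = 4096  # assumed ADC full scale

def counts_to_psi(counts):
    """Convert raw transmitter counts to engineering units (psi)."""
    return SPAN_PSI * counts / FULL_SCALE_COUNTS

with serial.Serial("/dev/ttyS0", 9600, timeout=1.0) as port:
    for _ in range(64):                       # one scan of 64 channels
        line = port.readline().decode().strip()
        if not line:
            continue                          # timeout: skip empty reads
        channel, counts = line.split(",")     # assumed "CH,counts" framing
        print(f"ch {channel}: {counts_to_psi(int(counts)):6.2f} psi")
```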

  9. An electronic pressure profile display system for aeronautic test facilities

    NASA Technical Reports Server (NTRS)

    Woike, Mark R.

    1990-01-01

    The NASA Lewis Research Center has installed an Electronic Pressure Profile Display system. This system provides for the real-time display of pressure readings on high resolution graphics monitors. The Electronic Pressure Profile Display system will replace manometer banks currently used in aeronautic test facilities. The Electronic Pressure Profile Display system consists of an industrial type Digital Pressure Transmitter (DPT) unit which interfaces with a host computer. The host computer collects the pressure data from the DPT unit, converts it into engineering units, and displays the readings on a high resolution graphics monitor in bar graph format. Software was developed to accomplish the above tasks and also draw facility diagrams as background information on the displays. Data transfer between host computer and DPT unit is done with serial communications. Up to 64 channels are displayed with one second update time. This paper describes the system configuration, its features, and its advantages over existing systems.

  10. Considering High-Tech Exhibits?

    ERIC Educational Resources Information Center

    Routman, Emily

    1994-01-01

    Discusses a variety of high-tech exhibit media used in The Living World, an educational facility operated by The Saint Louis Zoo. Considers the strengths and weaknesses of holograms, video, animatronics, video-equipped microscopes, and computer interactives. Computer interactives are treated with special attention. (LZ)

  11. Grids, virtualization, and clouds at Fermilab

    DOE PAGES

    Timm, S.; Chadwick, K.; Garzoglio, G.; ...

    2014-06-11

    Fermilab supports a scientific program that includes experiments and scientists located across the globe. To better serve this community, in 2004, the (then) Computing Division undertook the strategy of placing all of the High Throughput Computing (HTC) resources in a Campus Grid known as FermiGrid, supported by common shared services. In 2007, the FermiGrid Services group deployed a service infrastructure that utilized Xen virtualization, LVS network routing and MySQL circular replication to deliver highly available services that offered significant performance, reliability and serviceability improvements. This deployment was further enhanced through the deployment of a distributed redundant network core architecture and the physical distribution of the systems that host the virtual machines across multiple buildings on the Fermilab Campus. In 2010, building on the experience pioneered by FermiGrid in delivering production services in a virtual infrastructure, the Computing Sector commissioned the FermiCloud, General Physics Computing Facility and Virtual Services projects to serve as platforms for support of scientific computing (FermiCloud & GPCF) and core computing (Virtual Services). Lastly, this work will present the evolution of the Fermilab Campus Grid, Virtualization and Cloud Computing infrastructure together with plans for the future.

  12. Grids, virtualization, and clouds at Fermilab

    NASA Astrophysics Data System (ADS)

    Timm, S.; Chadwick, K.; Garzoglio, G.; Noh, S.

    2014-06-01

    Fermilab supports a scientific program that includes experiments and scientists located across the globe. To better serve this community, in 2004, the (then) Computing Division undertook the strategy of placing all of the High Throughput Computing (HTC) resources in a Campus Grid known as FermiGrid, supported by common shared services. In 2007, the FermiGrid Services group deployed a service infrastructure that utilized Xen virtualization, LVS network routing and MySQL circular replication to deliver highly available services that offered significant performance, reliability and serviceability improvements. This deployment was further enhanced through the deployment of a distributed redundant network core architecture and the physical distribution of the systems that host the virtual machines across multiple buildings on the Fermilab Campus. In 2010, building on the experience pioneered by FermiGrid in delivering production services in a virtual infrastructure, the Computing Sector commissioned the FermiCloud, General Physics Computing Facility and Virtual Services projects to serve as platforms for support of scientific computing (FermiCloud & GPCF) and core computing (Virtual Services). This work will present the evolution of the Fermilab Campus Grid, Virtualization and Cloud Computing infrastructure together with plans for the future.

  13. Addressing capability computing challenges of high-resolution global climate modelling at the Oak Ridge Leadership Computing Facility

    NASA Astrophysics Data System (ADS)

    Anantharaj, Valentine; Norman, Matthew; Evans, Katherine; Taylor, Mark; Worley, Patrick; Hack, James; Mayer, Benjamin

    2014-05-01

    During 2013, high-resolution climate model simulations accounted for over 100 million "core hours" using Titan at the Oak Ridge Leadership Computing Facility (OLCF). The suite of climate modeling experiments, primarily using the Community Earth System Model (CESM) at nearly 0.25 degree horizontal resolution, generated over a petabyte of data and nearly 100,000 files, ranging in size from 20 MB to over 100 GB. Effective utilization of leadership-class resources requires careful planning and preparation. The application software, such as CESM, needs to be ported, optimized, and benchmarked for the target platform in order to meet the computational readiness requirements. The model configuration needs to be "tuned and balanced" for the experiments. This can be a complicated and resource-intensive process, especially for high-resolution configurations using complex physics. The volume of I/O also increases with resolution, and new strategies may be required to manage I/O, especially for large checkpoint and restart files that may require more frequent output for resiliency. It is also essential to monitor the application performance during the course of the simulation exercises. Finally, the large volume of data needs to be analyzed to derive the scientific results, and appropriate data and information delivered to the stakeholders. Titan is currently the largest supercomputer available for open science. The computational resources, in terms of "Titan core hours", are allocated primarily via the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) and ASCR Leadership Computing Challenge (ALCC) programs, both sponsored by the U.S. Department of Energy (DOE) Office of Science. Titan, a Cray XK7 system capable of a theoretical peak performance of over 27 PFlop/s, consists of 18,688 compute nodes, each with an NVIDIA Kepler K20 GPU and a 16-core AMD Opteron CPU, for a total of 299,008 Opteron cores and 18,688 GPUs offering a cumulative 560,640 equivalent cores. Scientific applications, such as CESM, are also required to demonstrate a "computational readiness capability": the ability to efficiently scale across and utilize 20% of the entire system. The 0.25 deg configuration of the spectral-element dynamical core of the Community Atmosphere Model (CAM-SE), the atmospheric component of CESM, has been demonstrated to scale efficiently across more than 5,000 nodes (80,000 CPU cores) on Titan. The tracer transport routines of CAM-SE have also been ported to take advantage of the hybrid many-core architecture of Titan using GPUs [see EGU2014-4233], yielding over 2X speedup when transporting over 100 tracers. The high-throughput I/O in CESM, based on the Parallel IO Library (PIO), is being further augmented to support even higher resolutions and enhance resiliency. The application performance of the individual runs is archived in a database and routinely analyzed to identify and rectify performance degradation during the course of the experiments. The various resources available at the OLCF now support a scientific workflow that facilitates high-resolution climate modelling. A high-speed, center-wide parallel file system called ATLAS, capable of 1 TB/s, is available on Titan as well as on the clusters used for analysis (Rhea) and visualization (Lens/EVEREST). Long-term archiving is facilitated by the HPSS storage system. The Earth System Grid (ESG), featuring search and discovery, is also used to deliver data. The end-to-end workflow allows OLCF users to efficiently share data and publish results in a timely manner.
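
    A quick arithmetic check of the hardware figures quoted above, using only numbers stated in the abstract:

```python
# Consistency check of the Titan figures: CPU cores, equivalent cores per
# node, and the 20% "computational readiness" threshold.
nodes = 18688
cpu_cores = 16 * nodes                 # one 16-core Opteron per node
print(cpu_cores)                       # 299008, matching the abstract

print(560640 / nodes)                  # 30.0 "equivalent cores" per node

capability_nodes = 0.20 * nodes        # 20% readiness threshold
print(capability_nodes, 16 * capability_nodes)  # ~3738 nodes, ~59802 CPU cores
# CAM-SE's demonstrated 5,000+ nodes (80,000 cores) comfortably exceeds this.
```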

  14. Intricacies of modern supercomputing illustrated with recent advances in simulations of strongly correlated electron systems

    NASA Astrophysics Data System (ADS)

    Schulthess, Thomas C.

    2013-03-01

    The continued thousand-fold improvement in sustained application performance per decade on modern supercomputers keeps opening new opportunities for scientific simulations. But supercomputers have become very complex machines, built with thousands or tens of thousands of complex nodes consisting of multiple CPU cores or, most recently, a combination of CPU and GPU processors. Efficient simulations on such high-end computing systems require tailored algorithms that optimally map numerical methods to particular architectures. These intricacies will be illustrated with simulations of strongly correlated electron systems, where the development of quantum cluster methods, Monte Carlo techniques, as well as their optimal implementation by means of algorithms with improved data locality and high arithmetic density have gone hand in hand with evolving computer architectures. The present work would not have been possible without continued access to computing resources at the National Center for Computational Science of Oak Ridge National Laboratory, which is funded by the Facilities Division of the Office of Advanced Scientific Computing Research, and the Swiss National Supercomputing Center (CSCS) that is funded by ETH Zurich.
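
    The "improved data locality" mentioned above is typified by loop tiling. The toy below computes the same matrix product with and without blocking; real codes do this inside compiled kernels, so this is only a schematic of the access pattern, not a performance demonstration.

```python
# Tiled (blocked) matrix multiply: identical result, but each tile's working
# set is small enough to stay cache-resident, improving data locality.
import numpy as np

n, b = 256, 64
A = np.random.rand(n, n)
B = np.random.rand(n, n)
C = np.zeros((n, n))

for i0 in range(0, n, b):              # loop over output tiles
    for j0 in range(0, n, b):
        for k0 in range(0, n, b):      # accumulate tile contributions
            C[i0:i0+b, j0:j0+b] += A[i0:i0+b, k0:k0+b] @ B[k0:k0+b, j0:j0+b]

print(np.allclose(C, A @ B))           # True: same product, tiled access
```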

  15. 77 FR 48107 - Workshop on Performance Assessments of Near-Surface Disposal Facilities: FEPs Analysis, Scenario...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-13

    ...) disposal facilities. The workshop has been developed to facilitate communication among Federal and State... and conceptual models, and (3) the selection of computer codes. Information gathered from invited.... NRC Public Meeting The purpose of this public meeting is to facilitate communication and gather...

  16. Stage Effects on Stalling and Recovery of a High-Speed 10-Stage Axial- Flow Compressor

    DTIC Science & Technology

    1990-06-01

    [Garbled OCR fragments of the report's nomenclature list and instrumentation tables omitted.] The facility was designed to investigate the component performance of a high-speed 10-stage axial-flow compressor while stalling and operating in rotating stall; temperatures were measured from a probe configuration similar to the total-pressure design.

  17. Computational study of radiation doses at UNLV accelerator facility

    NASA Astrophysics Data System (ADS)

    Hodges, Matthew; Barzilov, Alexander; Chen, Yi-Tung; Lowe, Daniel

    2017-09-01

    A Varian K15 electron linear accelerator (linac) has been considered for installation at University of Nevada, Las Vegas (UNLV). Before experiments can be performed, it is necessary to evaluate the photon and neutron spectra as generated by the linac, as well as the resulting dose rates within the accelerator facility. A computational study using MCNPX was performed to characterize the source terms for the bremsstrahlung converter. The 15 MeV electron beam available in the linac is above the photoneutron threshold energy for several materials in the linac assembly, and as a result, neutrons must be accounted for. The angular and energy distributions for bremsstrahlung flux generated by the interaction of the 15 MeV electron beam with the linac target were determined. This source term was used in conjunction with the K15 collimators to determine the dose rates within the facility.

  18. Web-based system for surgical planning and simulation

    NASA Astrophysics Data System (ADS)

    Eldeib, Ayman M.; Ahmed, Mohamed N.; Farag, Aly A.; Sites, C. B.

    1998-10-01

    The growing body of scientific knowledge and the rapid progress in medical imaging techniques have led to an increasing demand for better and more efficient methods of remote access to high-performance computer facilities. This paper introduces a web-based telemedicine project that provides interactive tools for surgical simulation and planning. The presented approach makes use of a client-server architecture based on new internet technology, where clients use an ordinary web browser to view, send, receive, and manipulate patients' medical records, while the server uses the supercomputer facility to generate online semi-automatic segmentation, 3D visualization, surgical simulation/planning, and navigation of neuroendoscopic procedures. The supercomputer (SGI ONYX 1000) is located at the Computer Vision and Image Processing Lab, University of Louisville, Kentucky. This system is under development in cooperation with the Department of Neurological Surgery, Alliant Health Systems, Louisville, Kentucky. The server is connected via a network to the Picture Archiving and Communication System at Alliant Health Systems through a DICOM standard interface that enables authorized clients to access patients' images from different medical modalities.

  19. Comparisons Between NO PLIF Imaging and CFD Simulations of Mixing Flowfields for High-Speed Fuel Injectors

    NASA Technical Reports Server (NTRS)

    Drozda, Tomasz G.; Cabell, Karen F.; Ziltz, Austin R.; Hass, Neal E.; Inman, Jennifer A.; Burns, Ross A.; Bathel, Brett F.; Danehy, Paul M.

    2017-01-01

    The current work compares experimentally and computationally obtained nitric oxide (NO) planar laser induced fluorescence (PLIF) images of the mixing flowfields for three types of high-speed fuel injectors: a strut, a ramp, and a rectangular flushwall. These injection devices, which exhibited promising mixing performance at lower flight Mach numbers, are currently being studied as a part of the Enhanced Injection and Mixing Project (EIMP) at the NASA Langley Research Center. The EIMP aims to investigate scramjet fuel injection and mixing physics, and improve the understanding of underlying physical processes relevant to flight Mach numbers greater than eight. In the experiments, conducted in the NASA Langley Arc-Heated Scramjet Test Facility (AHSTF), the injectors are placed downstream of a Mach 6 facility nozzle, which simulates the high Mach number air flow at the entrance of a scramjet combustor. Helium is used as an inert substitute for hydrogen fuel. Both schlieren and PLIF techniques are applied to obtain mixing flowfield flow visualizations. The experimental PLIF is obtained by using a UV laser sheet to interrogate a plane of the flow by exciting fluorescence from the NO molecules, which are present in the AHSTF air. Consequently, the absence of signal in the resulting PLIF images is an indication of pure helium (fuel). The computational PLIF is obtained by applying a fluorescence model for NO to the results of the Reynolds-averaged simulations (RAS) of the mixing flowfield carried out using the VULCAN-CFD solver. This approach is required because the PLIF signal is a nonlinear function of not only NO concentration, but also pressure, temperature, and the flow velocity. This complexity allows additional flow features to be identified and compared with those obtained from the computational fluid dynamics (CFD) simulations; however, such comparisons are only semiquantitative. Three-dimensional image reconstruction, similar to that used in magnetic resonance imaging, is also used to obtain images in the streamwise and spanwise planes from select cross-stream PLIF plane data. Synthetic schlieren is also computed from the RAS data. Good agreement between the experimental and computational results provides increased confidence in the CFD simulations for investigations of injector performance.
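
    The need for a computational PLIF model, rather than a direct comparison of NO concentration fields, can be conveyed with a toy two-level LIF scaling in which the signal depends on temperature and pressure as well as mole fraction. All constants and functional forms below are illustrative placeholders, not the fluorescence model used with VULCAN-CFD:

        # Toy model of why computed PLIF is needed: the fluorescence signal is
        # not proportional to NO mole fraction alone but also depends on local
        # temperature and pressure (illustrative two-level scaling, made-up
        # constants, Stern-Volmer-style fluorescence yield).

        import numpy as np

        def plif_signal(x_no, p_atm, t_k, a21=4.0e6, q_ref=5.0e8):
            n_no = x_no * p_atm / t_k                      # number-density scaling (ideal gas)
            boltzmann = np.exp(-100.0 / t_k)               # toy rotational population factor
            quench = q_ref * p_atm * np.sqrt(300.0 / t_k)  # toy collisional quenching rate
            fluor_yield = a21 / (a21 + quench)             # yield drops as quenching grows
            return n_no * boltzmann * fluor_yield

        # Same NO mole fraction, two different flow states, different signal:
        print(plif_signal(x_no=0.01, p_atm=0.5, t_k=1200.0))
        print(plif_signal(x_no=0.01, p_atm=1.5, t_k=600.0))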

  20. Core commands across airway facilities systems.

    DOT National Transportation Integrated Search

    2003-05-01

    This study takes a high-level approach to evaluate computer systems without regard to the specific method of interaction. This document analyzes the commands that Airway Facilities (AF) use across different systems and the meanings attributed to ...

  1. The use of methods of structural optimization at the stage of designing high-rise buildings with steel construction

    NASA Astrophysics Data System (ADS)

    Vasilkin, Andrey

    2018-03-01

    The more design solutions an engineer can synthesize at the search stage of high-rise building design, the more likely it is that the finally adopted version will be the most efficient and economical. However, in modern market conditions, and taking into account the complexity and responsibility of high-rise buildings, the designer does not have the time needed to develop, analyze, and compare any significant number of options. To solve this problem, it is expedient to exploit the high potential of computer-aided design. To implement an automated search for design solutions, it is proposed to develop computing facilities whose application will significantly increase the productivity of the designer and reduce the complexity of designing. Methods of structural and parametric optimization were adopted as the basis of these computing facilities. Their efficiency in the synthesis of design solutions is shown, and schemes that illustrate and explain the introduction of structural optimization into the traditional design of steel frames are constructed.

  2. Analysis of Application Power and Schedule Composition in a High Performance Computing Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elmore, Ryan; Gruchalla, Kenny; Phillips, Caleb

    As the capacity of high performance computing (HPC) systems continues to grow, small changes in energy management have the potential to produce significant energy savings. In this paper, we employ an extensive informatics system for aggregating and analyzing real-time performance and power use data to evaluate energy footprints of jobs running in an HPC data center. We look at the effects of algorithmic choices for a given job on the resulting energy footprints, analyze application-specific power consumption, and summarize average power use in the aggregate. All of these views reveal meaningful power variance between classes of applications as well as chosen methods for a given job. Using these data, we discuss energy-aware cost-saving strategies based on reordering the HPC job schedule. Using historical job and power data, we present a hypothetical job schedule reordering that: (1) reduces the facility's peak power draw and (2) manages power in conjunction with a large-scale photovoltaic array. Lastly, we leverage this data to understand the practical limits on predicting key power use metrics at the time of submission.
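
    As a rough illustration of schedule-based peak-power management (not the paper's algorithm or data), a greedy balancing pass can assign the most power-hungry jobs first to whichever start slot currently carries the least load; job names and power figures below are hypothetical:

        # Hypothetical sketch: flatten a data center's peak power draw by
        # spreading jobs across start slots, highest average power first
        # (longest-processing-time-style balancing on a min-heap of slots).

        from heapq import heappush, heappop

        def flatten_peak(jobs, n_slots):
            """Place each (name, kW) job into the slot with the lowest load."""
            heap = [(0.0, i, []) for i in range(n_slots)]  # (total_kw, slot_id, jobs)
            for name, kw in sorted(jobs, key=lambda j: -j[1]):
                total, sid, assigned = heappop(heap)
                assigned.append(name)
                heappush(heap, (total + kw, sid, assigned))
            return sorted(heap, key=lambda s: s[1])

        jobs = [("cfd", 220.0), ("md", 180.0), ("qc", 150.0), ("ml", 90.0), ("viz", 40.0)]
        for total_kw, sid, names in flatten_peak(jobs, n_slots=2):
            print(f"slot {sid}: {total_kw:.0f} kW  {names}")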

  3. The NEST Dry-Run Mode: Efficient Dynamic Analysis of Neuronal Network Simulation Code.

    PubMed

    Kunkel, Susanne; Schenck, Wolfram

    2017-01-01

    NEST is a simulator for spiking neuronal networks that commits to a general purpose approach: It allows for high flexibility in the design of network models, and its applications range from small-scale simulations on laptops to brain-scale simulations on supercomputers. Hence, developers need to test their code for various use cases and ensure that changes to code do not impair scalability. However, running a full set of benchmarks on a supercomputer takes up precious compute-time resources and can entail long queuing times. Here, we present the NEST dry-run mode, which enables comprehensive dynamic code analysis without requiring access to high-performance computing facilities. A dry-run simulation is carried out by a single process, which performs all simulation steps except communication as if it was part of a parallel environment with many processes. We show that measurements of memory usage and runtime of neuronal network simulations closely match the corresponding dry-run data. Furthermore, we demonstrate the successful application of the dry-run mode in the areas of profiling and performance modeling.
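
    The dry-run idea can be sketched in a few lines of Python (NEST itself implements this in its C++ kernel; the class below is a toy stand-in): a single process performs the per-rank work of one rank out of many virtual ranks, with the communication step stubbed out, so per-rank memory use and runtime can be probed without a supercomputer allocation.

        # Toy illustration of a dry-run mode (not NEST's actual code): one
        # process acts as rank `rank` of `n_virtual` ranks; the exchange that
        # would be an MPI collective in a real run is replaced by a no-op.

        class DryRunSimulator:
            def __init__(self, rank, n_virtual, n_neurons):
                self.rank, self.n = rank, n_virtual
                # round-robin distribution: this rank owns every n-th neuron
                self.local_neurons = range(rank, n_neurons, n_virtual)

            def communicate(self, spikes):
                # Real parallel run: all-to-all spike exchange. Dry run: skip.
                return spikes

            def step(self):
                spikes = [n for n in self.local_neurons if n % 7 == 0]  # fake dynamics
                return self.communicate(spikes)

        sim = DryRunSimulator(rank=0, n_virtual=1024, n_neurons=10**6)
        print(len(sim.local_neurons), "local neurons;", len(sim.step()), "spikes")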

  4. The NEST Dry-Run Mode: Efficient Dynamic Analysis of Neuronal Network Simulation Code

    PubMed Central

    Kunkel, Susanne; Schenck, Wolfram

    2017-01-01

    NEST is a simulator for spiking neuronal networks that commits to a general purpose approach: It allows for high flexibility in the design of network models, and its applications range from small-scale simulations on laptops to brain-scale simulations on supercomputers. Hence, developers need to test their code for various use cases and ensure that changes to code do not impair scalability. However, running a full set of benchmarks on a supercomputer takes up precious compute-time resources and can entail long queuing times. Here, we present the NEST dry-run mode, which enables comprehensive dynamic code analysis without requiring access to high-performance computing facilities. A dry-run simulation is carried out by a single process, which performs all simulation steps except communication as if it was part of a parallel environment with many processes. We show that measurements of memory usage and runtime of neuronal network simulations closely match the corresponding dry-run data. Furthermore, we demonstrate the successful application of the dry-run mode in the areas of profiling and performance modeling. PMID:28701946

  5. Upgrades at the NASA Langley Research Center National Transonic Facility

    NASA Technical Reports Server (NTRS)

    Paryz, Roman W.

    2012-01-01

    Several projects have been completed or are nearing completion at the NASA Langley Research Center (LaRC) National Transonic Facility (NTF). The addition of a Model Flow-Control/Propulsion Simulation test capability to the NTF provides a unique, transonic, high-Reynolds-number test capability that is well suited for research in propulsion airframe integration studies, circulation control high-lift concepts, powered lift, and cruise separation flow control. A 1992-vintage Facility Automation System (FAS) that performs the control functions for tunnel pressure, temperature, Mach number, model position, safety interlock and supervisory controls was replaced using current, commercially available components. This FAS upgrade also involved a design study for the replacement of the facility Mach measurement system and the development of a software-based simulation model of NTF processes and control systems. The FAS upgrades were validated by a post-upgrade verification wind tunnel test. The data acquisition system (DAS) upgrade project involves the design, purchase, build, integration, installation and verification of a new DAS by replacing several early-1990s-vintage computer systems with state-of-the-art hardware and software. This paper provides an update on the progress made in these efforts. See reference 1.

  6. Hypothesis generation using network structures on community health center cancer-screening performance.

    PubMed

    Carney, Timothy Jay; Morgan, Geoffrey P; Jones, Josette; McDaniel, Anna M; Weaver, Michael T; Weiner, Bryan; Haggstrom, David A

    2015-10-01

    Nationally sponsored cancer-care quality-improvement efforts have been deployed in community health centers to increase breast, cervical, and colorectal cancer-screening rates among vulnerable populations. Despite several immediate and short-term gains, screening rates remain below national benchmark objectives. Overall improvement has been both difficult to sustain over time in some organizational settings and/or challenging to diffuse to other settings as repeatable best practices. Reasons for this include facility-level changes, which typically occur in dynamic organizational environments that are complex, adaptive, and unpredictable. This study seeks to understand the factors that shape community health center facility-level cancer-screening performance over time. This study applies a computational-modeling approach, combining principles of health-services research, health informatics, network theory, and systems science. To investigate the roles of knowledge acquisition, retention, and sharing within the setting of the community health center and to examine their effects on the relationship between clinical decision support capabilities and cancer-screening rate improvement, we employed Construct-TM to create simulated community health centers using previously collected point-in-time survey data. Construct-TM is a multi-agent model of network evolution. Because social, knowledge, and belief networks co-evolve, groups and organizations are treated as complex systems to capture the variability of human and organizational factors. In Construct-TM, individuals and groups interact by communicating, learning, and making decisions in a continuous cycle. Data from the survey were used to differentiate high-performing simulated community health centers from low-performing ones based on computer-based decision support usage and self-reported cancer-screening improvement. This virtual experiment revealed that patterns of overall network symmetry, agent cohesion, and connectedness varied by community health center performance level. Visual assessment of both the agent-to-agent knowledge-sharing network and agent-to-resource knowledge-use network diagrams demonstrated that community health centers labeled as high performers typically showed higher levels of collaboration and cohesiveness among agent classes, faster knowledge-absorption rates, and fewer agents that were unconnected to key knowledge resources. Conclusions and research implications: Using the point-in-time survey data outlining community health center cancer-screening practices, our computational model successfully distinguished between high and low performers. Results indicated that high-performance environments displayed distinctive network characteristics in patterns of interaction among agents, as well as in the access and utilization of key knowledge resources. Our study demonstrated how non-network-specific data obtained from a point-in-time survey can be employed to forecast community health center performance over time, thereby enhancing the sustainability of long-term strategic-improvement efforts. Our results revealed a strategic profile for community health center cancer-screening improvement via simulation over a projected 10-year period. The use of computational modeling allows additional inferential knowledge to be drawn from existing data when examining organizational performance in increasingly complex environments.

  7. GPU-Accelerated Large-Scale Electronic Structure Theory on Titan with a First-Principles All-Electron Code

    NASA Astrophysics Data System (ADS)

    Huhn, William Paul; Lange, Björn; Yu, Victor; Blum, Volker; Lee, Seyong; Yoon, Mina

    Density-functional theory has been well established as the dominant quantum-mechanical computational method in the materials community. Large accurate simulations become very challenging on small to mid-scale computers and require high-performance compute platforms to succeed. GPU acceleration is one promising approach. In this talk, we present a first implementation of all-electron density-functional theory in the FHI-aims code for massively parallel GPU-based platforms. Special attention is paid to the update of the density and to the integration of the Hamiltonian and overlap matrices, realized in a domain decomposition scheme on non-uniform grids. The initial implementation scales well across nodes on ORNL's Titan Cray XK7 supercomputer (8 to 64 nodes, 16 MPI ranks/node) and shows an overall speedup in runtime due to utilization of the K20X Tesla GPUs on each Titan node of 1.4x, with the charge density update showing a speedup of 2x. Further acceleration opportunities will be discussed. Work supported by the LDRD Program of ORNL managed by UT-Battelle, LLC, for the U.S. DOE and by the Oak Ridge Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725.

  8. Python in the NERSC Exascale Science Applications Program for Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ronaghi, Zahra; Thomas, Rollin; Deslippe, Jack

    We describe a new effort at the National Energy Research Scientific Computing Center (NERSC) in performance analysis and optimization of scientific Python applications targeting the Intel Xeon Phi (Knights Landing, KNL) many-core architecture. The Python-centered work outlined here is part of a larger effort called the NERSC Exascale Science Applications Program (NESAP) for Data. NESAP for Data focuses on applications that process and analyze high-volume, high-velocity data sets from experimental/observational science (EOS) facilities supported by the US Department of Energy Office of Science. We present three case study applications from NESAP for Data that use Python. These codes vary in terms of “Python purity” from applications developed in pure Python to ones that use Python mainly as a convenience layer for scientists without expertise in lower level programming languages like C, C++ or Fortran. The science case, requirements, constraints, algorithms, and initial performance optimizations for each code are discussed. Our goal with this paper is to contribute to the larger conversation around the role of Python in high-performance computing today and tomorrow, highlighting areas for future work and emerging best practices.
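
    A micro-benchmark of the kind used in such optimization work, moving a pure-Python hot loop into vectorized NumPy, might look as follows (a synthetic example, not one of the NESAP codes):

        # Illustrative timing comparison between a pure-Python reduction and
        # its vectorized NumPy equivalent, the typical first optimization step.

        import timeit
        import numpy as np

        x = np.random.default_rng(0).random(1_000_000)

        t_py = timeit.timeit(lambda: sum(v * v for v in x), number=3)
        t_np = timeit.timeit(lambda: np.dot(x, x), number=3)
        print(f"pure Python: {t_py:.3f} s, NumPy: {t_np:.3f} s, "
              f"speedup {t_py / t_np:.0f}x")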

  9. SU-E-T-531: Performance Evaluation of Multithreaded Geant4 for Proton Therapy Dose Calculations in a High Performance Computing Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shin, J; Coss, D; McMurry, J

    Purpose: To evaluate the efficiency of multithreaded Geant4 (Geant4-MT, version 10.0) for proton Monte Carlo dose calculations using a high performance computing facility. Methods: Geant4-MT was used to calculate 3D dose distributions in 1×1×1 mm3 voxels in a water phantom and a patient's head with a 150 MeV proton beam covering approximately 5×5 cm2 in the water phantom. Three timestamps were measured on the fly to separately analyze the required time for initialization (which cannot be parallelized), the processing time of individual threads, and the completion time. Scalability of averaged processing time per thread was calculated as a function of thread number (1, 100, 150, and 200) for both 1 M and 50 M histories. The total memory usage was recorded. Results: Simulations with 50 M histories were fastest with 100 threads, taking approximately 1.3 hours and 6 hours for the water phantom and the CT data, respectively, with better than 1.0% statistical uncertainty. The calculations show 1/N scalability in the event loops for both cases. The gains from parallel calculations started to decrease with 150 threads. The memory usage increases linearly with the number of threads. No critical failures were observed during the simulations. Conclusion: Multithreading in Geant4-MT decreased simulation time in proton dose distribution calculations by factors of 64 and 54 at a near-optimal 100 threads for the water phantom and the patient data, respectively. Further simulations will be done to determine the efficiency at the optimal thread number. Considering the trend of computer architecture development, utilizing Geant4-MT for radiotherapy simulations is an excellent cost-effective alternative to a distributed batch queuing system. However, because the scalability depends highly on simulation details, i.e., the ratio of the processing time of one event versus the waiting time to access the shared event queue, a performance evaluation as described is recommended.
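
    The reported behavior, a serial initialization phase plus an event loop with 1/N scalability that saturates past roughly 150 threads, can be captured in a back-of-the-envelope Amdahl-style model; the timing constants below are illustrative, not the paper's measurements:

        # Amdahl-style sketch of multithreaded Monte Carlo wall time: a fixed
        # serial initialization cost plus an event loop that scales as 1/N up
        # to an assumed contention limit (all numbers are made up).

        def wall_time(n_threads, t_init=120.0, t_loop_serial=400_000.0, n_saturate=150):
            effective = min(n_threads, n_saturate)   # gains flatten past ~150 threads
            return t_init + t_loop_serial / effective

        for n in (1, 50, 100, 150, 200):
            print(f"{n:>3} threads: {wall_time(n) / 3600:.2f} h")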

  10. Understanding I/O workload characteristics of a Peta-scale storage system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Youngjae; Gunasekaran, Raghul

    2015-01-01

    Understanding workload characteristics is critical for optimizing and improving the performance of current systems and software, and architecting new storage systems based on observed workload patterns. In this paper, we characterize the I/O workloads of scientific applications on one of the world's fastest high performance computing (HPC) storage clusters, Spider, at the Oak Ridge Leadership Computing Facility (OLCF). OLCF's flagship petascale simulation platform, Titan, and other large HPC clusters, with in total over 250 thousand compute cores, depend on Spider for their I/O needs. We characterize the system utilization, the demands of reads and writes, idle time, storage space utilization, and the distribution of read requests to write requests for this petascale storage system. From this study, we develop synthesized workloads, and we show that the read and write I/O bandwidth usage as well as the inter-arrival time of requests can be modeled as a Pareto distribution. We also study I/O load imbalance problems using I/O performance data collected from the Spider storage system.
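
    For readers wanting to test the distributional claim on their own traces, a Pareto shape parameter can be fitted to inter-arrival times by maximum likelihood in a few lines; the data below are synthetic and the lower bound x_min is an assumed value, not a Spider measurement:

        # Sketch: model I/O request inter-arrival times as a classical Pareto
        # distribution and fit its shape parameter by maximum likelihood.

        import numpy as np

        rng = np.random.default_rng(0)
        x_min = 0.01                                    # assumed minimum inter-arrival (s)
        samples = x_min * (1 + rng.pareto(a=1.8, size=100_000))

        # MLE for the Pareto shape alpha with known lower bound x_min:
        #   alpha_hat = n / sum(log(x_i / x_min))
        alpha_hat = len(samples) / np.log(samples / x_min).sum()
        print(f"fitted shape alpha = {alpha_hat:.3f}")  # ~1.8 for this synthetic trace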

  11. Activity status and future plans for the Optical Laboratory of the National Astronomical Research Institute of Thailand

    NASA Astrophysics Data System (ADS)

    Buisset, Christophe; Poshyachinda, Saran; Soonthornthum, Boonrucksar; Prasit, Apirat; Alagao, Mary Angelie; Choochalerm, Piyamas; Wanajaroen, Weerapot; Lepine, Thierry; Rabbia, Yves; Aukkaravittayapun, Suparerk; Leckngam, Apichat; Thummasorn, Griangsak; Ngernsujja, Surin; Inpan, Anuphong; Kaewsamoet, Pimon; Lhospice, Esther; Meemon, Panomsak; Artsang, Pornapa; Suwansukho, Kajpanya; Sirichote, Wichit; Paenoi, Jitsupa

    2018-03-01

    The National Astronomical Research Institute of Thailand (NARIT) has been developing since June 2014 an optical laboratory that comprises all the activities and facilities related to the research and development of new instruments in the following areas: telescope design, high-dynamic and high-resolution imaging systems, and spectrographs. The facilities include ZEMAX and SolidWorks software for design and simulation activities as well as an optical room with all the equipment required to develop optical setups with cutting-edge performance. The current projects include: i) the development of a focal reducer for the 2.3 m Thai National Telescope (TNT), ii) the development of the Evanescent Wave Coronagraph dedicated to high-contrast observations of the close environments of stars, and iii) the development of low-resolution spectrographs for the Thai National Telescope and for the 0.7 m telescopes of NARIT regional observatories. In each project, our activities extend from the instrument optical and mechanical design, through performance simulation and prototype development, to the final system integration, alignment, and tests. Most of the mechanical parts are manufactured using the facilities of the NARIT precision mechanical workshop, which includes a 3-axis Computer Numerical Control (CNC) machine to fabricate the mechanical structures and a Coordinate Measuring Machine (CMM) to verify the dimensions. In this paper, we give an overview of the optical laboratory activities and of the associated facilities. We also describe the objectives of the current projects, present the specifications and the design of the instruments, report the status of development, and present our future plans.

  12. Feasibility of MHD submarine propulsion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doss, E.D.; Sikes, W.C.

    1992-09-01

    This report describes the work performed during Phase 1 and Phase 2 of the collaborative research program established between Argonne National Laboratory (ANL) and Newport News Shipbuilding and Dry Dock Company (NNS). Phase 1 of the program focused on the development of computer models for magnetohydrodynamic (MHD) propulsion. Phase 2 focused on the experimental validation of the thruster performance models and the identification, through testing, of any phenomena which may impact the attractiveness of this propulsion system for shipboard applications. The report discusses in detail the work performed in Phase 2 of the program. In Phase 2, a two-Tesla test facility was designed, built, and operated. The facility test loop, its components, and their design are presented. The test matrix and its rationale are discussed. Representative experimental results of the test program are presented and compared to computer model predictions. In general, the results of the tests and their comparison with the predictions indicate that the phenomena affecting the performance of MHD seawater thrusters are well understood and can be accurately predicted with the developed thruster computer models.

  13. High-speed pulse-shape generator, pulse multiplexer

    DOEpatents

    Burkhart, Scott C.

    2002-01-01

    The invention combines arbitrary amplitude high-speed pulses for precision pulse shaping for the National Ignition Facility (NIF). The circuitry combines arbitrary height pulses which are generated by replicating scaled versions of a trigger pulse and summing them delayed in time on a pulse line. The combined electrical pulses are connected to an electro-optic modulator which modulates a laser beam. The circuit can also be adapted to combine multiple channels of high speed data into a single train of electrical pulses which generates the optical pulses for very high speed optical communication. The invention has application in laser pulse shaping for inertial confinement fusion, in optical data links for computers, telecommunications, and in laser pulse shaping for atomic excitation studies. The invention can be used to effect at least a 10× increase in all fiber communication lines. It allows a greatly increased data transfer rate between high-performance computers. The invention is inexpensive enough to bring high-speed video and data services to homes through a super modem.
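
    The pulse-shaping principle, summing scaled, time-delayed replicas of a single trigger pulse, can be sketched numerically as follows (amplitudes, delays, and pulse widths are made up for illustration; the patented circuit does this in analog hardware):

        # Numerical sketch of shaping a pulse from scaled, delayed replicas of
        # one trigger pulse summed on a common line.

        import numpy as np

        fs = 40e9                                        # 40 GS/s time grid
        t = np.arange(0, 5e-9, 1 / fs)
        trigger = np.exp(-((t - 0.2e-9) / 50e-12) ** 2)  # ~50 ps Gaussian trigger

        weights = [0.2, 0.5, 1.0, 0.7, 0.3]              # per-replica amplitude scaling
        spacing = 200e-12                                # delay between replicas

        shaped = np.zeros_like(t)
        for k, w in enumerate(weights):
            delay = int(k * spacing * fs)                # delay in samples
            shaped[delay:] += w * trigger[: len(t) - delay]

        print(f"peak amplitude {shaped.max():.2f} over {len(weights)} summed replicas")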

  14. Workload Characterization of a Leadership Class Storage Cluster

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Youngjae; Gunasekaran, Raghul; Shipman, Galen M

    2010-01-01

    Understanding workload characteristics is critical for optimizing and improving the performance of current systems and software, and architecting new storage systems based on observed workload patterns. In this paper, we characterize the scientific workloads of the world's fastest HPC (High Performance Computing) storage cluster, Spider, at the Oak Ridge Leadership Computing Facility (OLCF). Spider provides an aggregate bandwidth of over 240 GB/s with over 10 petabytes of RAID 6 formatted capacity. OLCF's flagship petascale simulation platform, Jaguar, and other large HPC clusters, with in total over 250 thousand compute cores, depend on Spider for their I/O needs. We characterize the system utilization, the demands of reads and writes, idle time, and the distribution of read requests to write requests for the storage system observed over a period of 6 months. From this study we develop synthesized workloads, and we show that the read and write I/O bandwidth usage as well as the inter-arrival time of requests can be modeled as a Pareto distribution.

  15. Cloud CPFP: a shotgun proteomics data analysis pipeline using cloud and high performance computing.

    PubMed

    Trudgian, David C; Mirzaei, Hamid

    2012-12-07

    We have extended the functionality of the Central Proteomics Facilities Pipeline (CPFP) to allow use of remote cloud and high performance computing (HPC) resources for shotgun proteomics data processing. CPFP has been modified to include modular local and remote scheduling for data processing jobs. The pipeline can now be run on a single PC or server, a local cluster, a remote HPC cluster, and/or the Amazon Web Services (AWS) cloud. We provide public images that allow easy deployment of CPFP in its entirety in the AWS cloud. This significantly reduces the effort necessary to use the software, and allows proteomics laboratories to pay for compute time ad hoc, rather than obtaining and maintaining expensive local server clusters. Alternatively the Amazon cloud can be used to increase the throughput of a local installation of CPFP as necessary. We demonstrate that cloud CPFP allows users to process data at higher speed than local installations but with similar cost and lower staff requirements. In addition to the computational improvements, the web interface to CPFP is simplified, and other functionalities are enhanced. The software is under active development at two leading institutions and continues to be released under an open-source license at http://cpfp.sourceforge.net.

  16. A probability metric for identifying high-performing facilities: an application for pay-for-performance programs.

    PubMed

    Shwartz, Michael; Peköz, Erol A; Burgess, James F; Christiansen, Cindy L; Rosen, Amy K; Berlowitz, Dan

    2014-12-01

    Two approaches are commonly used for identifying high-performing facilities on a performance measure: one, that the facility is in a top quantile (eg, quintile or quartile); and two, that a confidence interval is below (or above) the average of the measure for all facilities. This type of yes/no designation often does not do well in distinguishing high-performing from average-performing facilities. To illustrate an alternative continuous-valued metric for profiling facilities--the probability a facility is in a top quantile--and show the implications of using this metric for profiling and pay-for-performance. We created a composite measure of quality from fiscal year 2007 data based on 28 quality indicators from 112 Veterans Health Administration nursing homes. A Bayesian hierarchical multivariate normal-binomial model was used to estimate shrunken rates of the 28 quality indicators, which were combined into a composite measure using opportunity-based weights. Rates were estimated using Markov Chain Monte Carlo methods as implemented in WinBUGS. The probability metric was calculated from the simulation replications. Our probability metric allowed better discrimination of high performers than the point or interval estimate of the composite score. In a pay-for-performance program, a smaller top quantile (eg, a quintile) resulted in more resources being allocated to the highest performers, whereas a larger top quantile (eg, being above the median) distinguished less among high performers and allocated more resources to average performers. The probability metric has potential but needs to be evaluated by stakeholders in different types of delivery systems.
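
    Given posterior draws of each facility's composite score (e.g., from an MCMC run such as the WinBUGS fit described above), the probability metric reduces to the fraction of replications in which a facility ranks in the top quantile. A minimal sketch with synthetic draws, assuming 112 facilities as in the study:

        # Sketch of the probability metric: per-facility probability of being
        # in the top quintile, estimated across posterior replications.

        import numpy as np

        rng = np.random.default_rng(1)
        n_fac, n_draws = 112, 4000                       # 112 nursing homes, as in the study
        scores = rng.normal(loc=rng.normal(0, 0.3, n_fac), scale=0.5,
                            size=(n_draws, n_fac))       # synthetic posterior draws

        cutoff = int(np.ceil(0.2 * n_fac))               # top-quintile size
        ranks = scores.argsort(axis=1).argsort(axis=1)   # rank within each draw
        p_top = (ranks >= n_fac - cutoff).mean(axis=0)   # per-facility probability

        print("best facility: P(top quintile) =", p_top.max().round(3))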

  17. Power source evaluation capabilities at Sandia National Laboratories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doughty, D.H.; Butler, P.C.

    1996-04-01

    Sandia National Laboratories maintains one of the most comprehensive power source characterization facilities in the U.S. National Laboratory system. This paper describes the capabilities for evaluation of fuel cell technologies. The facility has a rechargeable battery test laboratory and a test area for performing nondestructive and functional computer-controlled testing of cells and batteries.

  18. Corona performance of a compact 230-kV line

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chartier, V.L.; Blair, D.E.; Easley, M.D.

    Permitting requirements and the acquisition of new rights-of-way for transmission facilities have in recent years become increasingly difficult for most utilities, including Puget Sound Power and Light Company. In order to maintain a high degree of reliability of service while being responsive to public concerns regarding the siting of high voltage (HV) transmission facilities, Puget Power has found it necessary to rely more heavily upon the use of compact lines in franchise corridors. Compaction does, however, precipitate increased levels of audible noise (AN) and radio and TV interference (RI and TVI) due to corona on the conductors and insulator assemblies. Puget Power relies upon the Bonneville Power Administration (BPA) Corona and Field Effects computer program to calculate AN and RI for new lines. Since there was some question of the program's ability to accurately represent quiet 230-kV compact designs, a joint project was undertaken with BPA to verify the program's algorithms. Long-term measurements made on an operating Puget Power 230-kV compact line confirmed the accuracy of BPA's AN model; however, the RI measurements were much lower than predicted by the BPA program and other programs. This paper also describes how the BPA computer program can be used to calculate the voltage needed to expose insulator assemblies to the correct electric field in single test setups in HV laboratories.

  19. Federated data storage and management infrastructure

    NASA Astrophysics Data System (ADS)

    Zarochentsev, A.; Kiryanov, A.; Klimentov, A.; Krasnopevtsev, D.; Hristov, P.

    2016-10-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big-Data-driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. Computing models for the High Luminosity LHC era anticipate storage needs growing by orders of magnitude; this will require new approaches in data storage organization and data handling. In our project we address the fundamental problem of designing an architecture that integrates distributed heterogeneous disk resources for LHC experiments and other data-intensive science applications and provides access to data from heterogeneous computing facilities. We have prototyped a federated storage for the Russian T1 and T2 centers located in Moscow, St. Petersburg and Gatchina, as well as a Russian/CERN federation. We have conducted extensive tests of the underlying network infrastructure and storage endpoints with synthetic performance measurement tools as well as with HENP-specific workloads, including ones running on supercomputing platforms, cloud computing and the Grid for the ALICE and ATLAS experiments. We will present our current accomplishments with running LHC data analysis remotely and locally to demonstrate our ability to efficiently use federated data storage experiment-wide within national academic facilities for high energy and nuclear physics as well as for other data-intensive science applications, such as bioinformatics.

  20. Review of blunt body wake flows at hypersonic low density conditions

    NASA Technical Reports Server (NTRS)

    Moss, J. N.; Price, J. M.

    1996-01-01

    Recent results of experimental and computational studies concerning hypersonic flows about blunted cones including their near wake are reviewed. Attention is focused on conditions where rarefaction effects are present, particularly in the wake. The experiments have been performed for a common model configuration (70 deg spherically-blunted cone) in five hypersonic facilities that encompass a significant range of rarefaction and nonequilibrium effects. Computational studies using direct simulation Monte Carlo (DSMC) and Navier-Stokes solvers have been applied to selected experiments performed in each of the facilities. In addition, computations have been made for typical flight conditions in both Earth and Mars atmospheres, hence more energetic flows than produced in the ground-based tests. Also, comparisons of DSMC calculations and forebody measurements made for the Japanese Orbital Reentry Experiment (OREX) vehicle (a 50 deg spherically-blunted cone) are presented to bridge the spectrum of ground to flight conditions.

  1. High Efficiency Photonic Switch for Data Centers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    LaComb, Lloyd J.; Bablumyan, Arkady; Ordyan, Armen

    2016-12-06

    The worldwide demand for instant access to information is driving internet growth rates above 50% annually. This rapid growth is straining the resources and architectures of existing data centers, metro networks and high performance computer centers. If the current business-as-usual model continues, data centers alone will require 400 TWh of electricity by 2020. In order to meet the challenges of faster and more cost-effective data centers, metro networks and supercomputing facilities, we have demonstrated a new type of optical switch that supports transmission speeds up to 1 Tb/s and requires significantly less energy per bit than ...

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laros III, James H.; DeBonis, David; Grant, Ryan

    Measuring and controlling the power and energy consumption of high performance computing systems by various components in the software stack is an active research area [13, 3, 5, 10, 4, 21, 19, 16, 7, 17, 20, 18, 11, 1, 6, 14, 12]. Implementations in lower level software layers are beginning to emerge in some production systems, which is very welcome. To be most effective, a portable interface to measurement and control features would significantly facilitate participation by all levels of the software stack. We present a proposal for a standard power Application Programming Interface (API) that endeavors to cover the entire software space, from generic hardware interfaces to the input from the computer facility manager.
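
    The actual Power API specification is not reproduced here; purely as a hypothetical sketch of a layered measurement-and-control interface, with all names invented, one might expose a hierarchy of power objects that any software level, from facility manager down to application, can query or cap:

        # Hypothetical sketch (invented names, not the actual Power API spec):
        # a hierarchy of power objects supporting measurement and a cap knob.

        class PowerObject:
            def __init__(self, name, watts=0.0, cap=None):
                self.name, self._watts, self.cap = name, watts, cap
                self.children = []

            def measure(self):
                """Aggregate instantaneous power of this object and its children."""
                return self._watts + sum(c.measure() for c in self.children)

            def set_cap(self, watts):
                """Control knob: upper layers bound the power of lower layers."""
                self.cap = watts

        node = PowerObject("node0", watts=35.0)  # baseboard overhead
        node.children = [PowerObject("cpu0", 110.0), PowerObject("gpu0", 240.0)]
        node.set_cap(380.0)
        print(f"{node.name}: {node.measure():.0f} W (cap {node.cap:.0f} W)")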

  3. Experimental Investigation of a High Pressure Ratio Aspirated Fan Stage

    NASA Technical Reports Server (NTRS)

    Merchant, Ali; Kerrebrock, Jack L.; Adamczyk, John J.; Braunscheidel, Edward

    2004-01-01

    The experimental investigation of an aspirated fan stage designed to achieve a pressure ratio of 3.4:1 at 1500 ft/sec is presented in this paper. The low-energy viscous flow is aspirated from diffusion-limiting locations on the blades and flowpath surfaces of the stage, enabling a very high pressure ratio to be achieved in a single stage. The fan stage performance was mapped at various operating speeds from choke to stall in a compressor facility at fully simulated engine conditions. The experimentally determined stage performance, in terms of pressure ratio and corresponding inlet mass flow rate, was found to be in good agreement with the three-dimensional viscous computational prediction, and in turn close to the design intent. Stage pressure ratios exceeding 3:1 were achieved at design speed, with an aspiration flow fraction of 3.5 percent of the stage inlet mass flow. The experimental performance of the stage at various operating conditions, including detailed flowfield measurements, is presented and discussed in the context of the computational analyses. The sensitivity of the stage performance and operability to reduced aspiration flow rates at design and off-design conditions is also discussed.

  4. pFlogger: The Parallel Fortran Logging Utility

    NASA Technical Reports Server (NTRS)

    Clune, Tom; Cruz, Carlos A.

    2017-01-01

    In the context of high performance computing (HPC), software investments in support of text-based diagnostics, which monitor a running application, are typically limited compared to those for other types of IO. Examples of such diagnostics include reiteration of configuration parameters, progress indicators, simple metrics (e.g., mass conservation, convergence of solvers, etc.), and timers. To some degree, this difference in priority is justifiable as other forms of output are the primary products of a scientific model and, due to their large data volume, much more likely to be a significant performance concern. In contrast, text-based diagnostic content is generally not shared beyond the individual or group running an application and is most often used to troubleshoot when something goes wrong. We suggest that a more systematic approach enabled by a logging facility (or 'logger') similar to those routinely used by many communities would provide significant value to complex scientific applications. In the context of high-performance computing, an appropriate logger would provide specialized support for distributed and shared-memory parallelism and have low performance overhead. In this paper, we present our prototype implementation of pFlogger, a parallel Fortran-based logging framework, and assess its suitability for use in a complex scientific application.
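
    pFlogger itself is Fortran, but its central idea, a logger that knows about parallelism, can be conveyed with a rough Python analogue in which routine progress messages are emitted only on a root rank while warnings from any rank carry a rank tag (names and thresholds below are illustrative):

        # Rough Python analogue of a parallelism-aware logger: INFO only on
        # the root rank, WARNING and above from every rank, with a rank tag.

        import logging
        import sys

        def make_rank_logger(rank, root_rank=0):
            logger = logging.getLogger(f"app.rank{rank}")
            handler = logging.StreamHandler(sys.stderr)
            handler.setFormatter(
                logging.Formatter(f"[rank {rank}] %(levelname)s: %(message)s"))
            logger.addHandler(handler)
            # non-root ranks suppress anything below WARNING to cut log noise
            logger.setLevel(logging.INFO if rank == root_rank else logging.WARNING)
            return logger

        log = make_rank_logger(rank=3)
        log.info("solver converged in 42 iterations")  # dropped on rank 3
        log.warning("mass conservation drift 1.2e-9")  # printed with rank tag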

  5. Morphology of methane hydrate host sediments

    USGS Publications Warehouse

    Jones, K.W.; Feng, H.; Tomov, S.; Winters, W.J.; Eaton, M.; Mahajan, D.

    2005-01-01

    The morphological features, including porosity and grains, of methane hydrate host sediments were investigated using the synchrotron computed microtomography (CMT) technique. The sediment sample was obtained during Ocean Drilling Program Leg 164 on the Blake Ridge at a water depth of 2278.5 m. The CMT experiment was performed at the Brookhaven National Synchrotron Light Source facility. The analysis gave sample porosity, specific surface area, mean particle size, and tortuosity. The method was found to be highly effective for the study of methane hydrate host sediments.

  6. Neutron Characterization for Additive Manufacturing

    NASA Technical Reports Server (NTRS)

    Watkins, Thomas; Bilheux, Hassina; An, Ke; Payzant, Andrew; DeHoff, Ryan; Duty, Chad; Peter, William; Blue, Craig; Brice, Craig A.

    2013-01-01

    Oak Ridge National Laboratory (ORNL) is leveraging decades of experience in neutron characterization of advanced materials together with resources such as the Spallation Neutron Source (SNS) and the High Flux Isotope Reactor (HFIR) shown in Fig. 1 to solve challenging problems in additive manufacturing (AM). Additive manufacturing, or three-dimensional (3-D) printing, is a rapidly maturing technology wherein components are built by selectively adding feedstock material at locations specified by a computer model. The majority of these technologies use thermally driven phase change mechanisms to convert the feedstock into functioning material. As the molten material cools and solidifies, the component is subjected to steep thermal gradients, generating significant internal stresses throughout the part (Fig. 2). As layers are added, inherent residual stresses cause warping and distortions that lead to geometrical differences between the final part and the original computer-generated design. This effect also limits geometries that can be fabricated using AM, such as thin-walled, high-aspect-ratio, and overhanging structures. Distortion may be minimized by intelligent toolpath planning or strategic placement of support structures, but these approaches are not well understood and often "Edisonian" in nature. Residual stresses can also impact component performance during operation. For example, in a thermally cycled environment such as a high-pressure turbine engine, residual stresses can cause components to distort unpredictably. Different thermal treatments on as-fabricated AM components have been used to minimize residual stress, but components still retain a nonhomogeneous stress state and/or demonstrate a relaxation-derived geometric distortion. Industry, federal laboratory, and university collaboration is needed to address these challenges and enable the U.S. to compete in the global market. Work is currently being conducted on AM technologies at the ORNL Manufacturing Demonstration Facility (MDF) sponsored by the DOE's Advanced Manufacturing Office. The MDF is focusing on R&D of both metal and polymer AM pertaining to in-situ process monitoring and closed-loop controls; implementation of advanced materials in AM technologies; and demonstration, characterization, and optimization of next-generation technologies. ORNL is working directly with industry partners to leverage world-leading facilities in fields such as high performance computing, advanced materials characterization, and neutron sciences to solve fundamental challenges in advanced manufacturing. Specifically, MDF is leveraging two of the world's most advanced neutron facilities, the HFIR and SNS, to characterize additively manufactured components.

  7. Refurbishment and Automation of the Thermal/Vacuum Facilities at the Goddard Space Flight Center

    NASA Technical Reports Server (NTRS)

    Donohue, John T.; Johnson, Chris; Ogden, Rick; Sushon, Janet

    1998-01-01

    The thermal/vacuum facilities located at the Goddard Space Flight Center (GSFC) have supported both manned and unmanned space flight since the 1960s. Of the 11 facilities, currently 10 of the systems are scheduled for refurbishment and/or replacement as part of a 5-year implementation. Expected return on investment includes the reduction in test schedules, improvements in the safety of facility operations, reduction in the complexity of a test and the reduction in personnel support required for a test. Additionally, GSFC will become a global resource renowned for expertise in thermal engineering, mechanical engineering and for the automation of thermal/vacuum facilities and thermal/vacuum tests. Automation of the thermal/vacuum facilities includes the utilization of Programmable Logic Controllers (PLCs) and the use of Supervisory Control and Data Acquisition (SCADA) systems. These components allow the computer control and automation of mechanical components such as valves and pumps. In some cases, the chamber and chamber shroud require complete replacement while others require only mechanical component retrofit or replacement. The project of refurbishment and automation began in 1996 and has resulted in the computer control of one facility (Facility #225) and the integration of electronically controlled devices and PLCs within several other facilities. Facility 225 has been successfully controlled by PLC and SCADA for over one year. Only minor anomalies have occurred, and they were resolved with minimal impact to testing and operations. The amount of work remaining to be performed will occur over the next four to five years. Fiscal year 1998 includes the complete refurbishment of one facility, computer control of the thermal systems in two facilities, implementation of SCADA and PLC systems to support multiple facilities and the implementation of a database server to allow efficient test management and data analysis.

  8. Spontaneous Raman Scattering (SRS) System for Calibrating High-Pressure Flames Became Operational

    NASA Technical Reports Server (NTRS)

    Nguyen, Quang-Viet

    2003-01-01

    A high-performance spontaneous Raman scattering (SRS) system for measuring quantitative species concentration and temperature in high-pressure flames is now operational. The system is located in Glenn's Engine Research Building. Raman scattering is perhaps the only optical diagnostic technique that permits the simultaneous (single-shot) measurement of all major species (N2, O2, CO2, H2O, CO, H2, and CH4) as well as temperature in combustion systems. The preliminary data acquired with this new system in a 20-atm hydrogen-air (H2-air) flame show excellent spectral coverage, good resolution, and a signal-to-noise ratio high enough for the data to serve as a calibration standard. This new SRS diagnostic system is used in conjunction with the newly developed High-Pressure Gaseous Burner facility (ref. 1). The main purpose of this diagnostic system and the High-Pressure Gaseous Burner facility is to acquire and establish a comprehensive Raman-scattering spectral database calibration standard for the combustion diagnostic community. A secondary purpose of the system is to provide actual measurements in standardized flames to validate computational combustion models. The High-Pressure Gaseous Burner facility and its associated SRS system will provide researchers throughout the world with new insights into flame conditions that simulate the environment inside the ultra-high-pressure-ratio combustion chambers of tomorrow's advanced aircraft engines.

  9. Status and Trends in Networking at LHC Tier1 Facilities

    NASA Astrophysics Data System (ADS)

    Bobyshev, A.; DeMar, P.; Grigaliunas, V.; Bigrow, J.; Hoeft, B.; Reymund, A.

    2012-12-01

    The LHC is entering its fourth year of production operation. Most Tier1 facilities have been in operation for almost a decade, when development and ramp-up efforts are included. LHC's distributed computing model is based on the availability of high capacity, high performance network facilities for both the WAN and LAN data movement, particularly within the Tier1 centers. As a result, the Tier1 centers tend to be on the leading edge of data center networking technology. In this paper, we analyze past and current developments in Tier1 LAN networking, as well as extrapolating where we anticipate networking technology is heading. Our analysis will include examination into the following areas:

    • Evolution of Tier1 centers to their current state
    • Evolving data center networking models and how they apply to Tier1 centers
    • Impact of emerging network technologies (e.g. 10GE-connected hosts, 40GE/100GE links, IPv6) on Tier1 centers
    • Trends in WAN data movement and emergence of software-defined WAN network capabilities
    • Network virtualization

  10. Influence of Computer-Aided Detection on Performance of Screening Mammography

    PubMed Central

    Fenton, Joshua J.; Taplin, Stephen H.; Carney, Patricia A.; Abraham, Linn; Sickles, Edward A.; D'Orsi, Carl; Berns, Eric A.; Cutter, Gary; Hendrick, R. Edward; Barlow, William E.; Elmore, Joann G.

    2011-01-01

    Background Computer-aided detection identifies suspicious findings on mammograms to assist radiologists. Since the Food and Drug Administration approved the technology in 1998, it has been disseminated into practice, but its effect on the accuracy of interpretation is unclear. Methods We determined the association between the use of computer-aided detection at mammography facilities and the performance of screening mammography from 1998 through 2002 at 43 facilities in three states. We had complete data for 222,135 women (a total of 429,345 mammograms), including 2351 women who received a diagnosis of breast cancer within 1 year after screening. We calculated the specificity, sensitivity, and positive predictive value of screening mammography with and without computer-aided detection, as well as the rates of biopsy and breast-cancer detection and the overall accuracy, measured as the area under the receiver-operating-characteristic (ROC) curve. Results Seven facilities (16%) implemented computer-aided detection during the study period. Diagnostic specificity decreased from 90.2% before implementation to 87.2% after implementation (P<0.001), the positive predictive value decreased from 4.1% to 3.2% (P = 0.01), and the rate of biopsy increased by 19.7% (P<0.001). The increase in sensitivity from 80.4% before implementation of computer-aided detection to 84.0% after implementation was not significant (P = 0.32). The change in the cancer-detection rate (including invasive breast cancers and ductal carcinomas in situ) was not significant (4.15 cases per 1000 screening mammograms before implementation and 4.20 cases after implementation, P = 0.90). Analyses of data from all 43 facilities showed that the use of computer-aided detection was associated with significantly lower overall accuracy than was nonuse (area under the ROC curve, 0.871 vs. 0.919; P = 0.005). Conclusions The use of computer-aided detection is associated with reduced accuracy of interpretation of screening mammograms. The increased rate of biopsy with the use of computer-aided detection is not clearly associated with improved detection of invasive breast cancer. PMID:17409321

  11. A Performance Measurement and Implementation Methodology in a Department of Defense CIM (Computer Integrated Manufacturing) Environment

    DTIC Science & Technology

    1988-01-24

    vanes. The new facility is currently being called the Engine Blade/Vane Facility (EB/VF). There are three primary goals in automating this proc...e... earlier, the search led primarily into the areas of CIM Justification, Automation Strategies, Performance Measurement, and Integration issues. Of... of living, has been steadily eroding. One dangerous trend that has developed in keenly competitive world markets, says Rohan [33], has been for U.S...

  12. An investigation of rotor tip leakage flows in the rear-block of a multistage compressor

    NASA Astrophysics Data System (ADS)

    Brossman, John Richard

    An effective method to improve gas turbine propulsive efficiency is to increase the bypass ratio. With fan diameter reaching a practical limit, increases in bypass ratio can be obtained from reduced core engine size. Decreasing the engine core results in small, high-pressure compressor blading and large relative tip clearances. Given the general rule of a 1% reduction in compressor efficiency for each 1% increase in tip clearance, and a corresponding 0.66% change in SFC, the entire engine is sensitive to high-pressure compressor tip leakage flows. Therefore, further investigation and understanding of the rotor tip leakage flows can help to improve gas turbine engine efficiency. The objectives of this research were to investigate tip leakage flows through computational modeling, examine the baseline experimental steady-state stage performance, and acquire unsteady static pressure measurements over the rotor to observe the tip leakage flow structure. While tip leakage flows have been investigated in the past, there have been no facilities capable of matching engine-representative Reynolds number and Mach number while maintaining blade row interactions, presenting a unique and original flow field to investigate at the Purdue 3-stage axial compressor facility. To aid the design of experimental hardware and determine the influence of clearance geometry on compressor performance, a computational model of the Purdue 3-stage compressor was investigated using a steady RANS CFD analysis. A cropped rotor and a casing recess design were investigated to increase the rotor tip clearance. While there were small performance differences between the geometries, the tip leakage flow field was found to be independent of the design; therefore, designing future experimental hardware around a casing recess is valid. The largest clearance with flow margin past the design point was 4% tip clearance based on the computational model. The Purdue 3-stage axial compressor facility was rebuilt and set up for high-quality, detailed flow measurements during this investigation. A detailed investigation and sensitivity analysis of the inlet flow field found that the inlet total temperature profile had an important influence on performance calculations. This finding is significant and original, as previous investigations have been conducted on low-speed machines where there is minimal temperature rise. The steady-state performance of the baseline 1.5% tip clearance case was outlined at design speed and three off-design speeds. The leakage flow from the rear seal, the inlet flow field, and a thermal boundary condition over the casing were recorded at each operating point. Stage 1 was found to be the limiting stage, independent of speed. Few datasets exist on multistage compressor performance with full boundary condition definitions, especially at off-design operating points, making this a unique dataset for CFD comparison. Detailed unsteady pressure measurements were conducted over Rotor 1 at design and near-stall operating conditions to characterize the leakage trajectory and position. At the higher loading condition, the initial point of the leakage flow moved closer to the leading edge and the trajectory angle increased. The over-the-rotor static pressure field on Rotor 1 indicated similar trends in the leakage trajectory between the computational model and the measurements.

  13. An evaluation of TRAC-PF1/MOD1 computer code performance during posttest simulations of Semiscale MOD-2C feedwater line break transients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hall, D.G.; Watkins, J.C.

    This report documents an evaluation of the TRAC-PF1/MOD1 reactor safety analysis computer code during computer simulations of feedwater line break transients. The experimental data base for the evaluation included the results of three bottom feedwater line break tests performed in the Semiscale Mod-2C test facility. The tests modeled 14.3% (S-FS-7), 50% (S-FS-11), and 100% (S-FS-6B) breaks. The test facility and the TRAC-PF1/MOD1 model used in the calculations are described. Evaluations of the accuracy of the calculations are presented in the form of comparisons of measured and calculated histories of selected parameters associated with the primary and secondary systems. In addition to evaluating the accuracy of the code calculations, the computational performance of the code during the simulations was assessed. A conclusion was reached that the code is capable of making feedwater line break transient calculations efficiently, but there is room for significant improvements in the simulations that were performed. Recommendations are made for follow-on investigations to determine how to improve future feedwater line break calculations and for code improvements to make the code easier to use.

  14. EIAGRID: In-field optimization of seismic data acquisition by real-time subsurface imaging using a remote GRID computing environment.

    NASA Astrophysics Data System (ADS)

    Heilmann, B. Z.; Vallenilla Ferrara, A. M.

    2009-04-01

    The constant growth of contaminated sites, the unsustainable use of natural resources, and, last but not least, the hydrological risk related to extreme meteorological events and increased climate variability are major environmental issues of today. Finding solutions for these complex problems requires an integrated cross-disciplinary approach, providing a unified basis for environmental science and engineering. In computer science, grid computing is emerging worldwide as a formidable tool allowing distributed computation and data management with administratively distant resources. Utilizing these modern High Performance Computing (HPC) technologies, the GRIDA3 project bundles several applications from different fields of geoscience aiming to support decision making for reasonable and responsible land use and resource management. In this abstract we present a geophysical application called EIAGRID that uses grid computing facilities to perform real-time subsurface imaging by on-the-fly processing of seismic field data and fast optimization of the processing workflow. Even though seismic reflection profiling has a broad application range, spanning from shallow targets at a few meters depth to targets at a depth of several kilometers, it is primarily used by the hydrocarbon industry and rarely for environmental purposes. The complexity of data acquisition and processing poses severe problems for environmental and geotechnical engineering: professional seismic processing software is expensive to buy and demands considerable experience from the user. In-field processing equipment needed for real-time data Quality Control (QC) and immediate optimization of the acquisition parameters is often not available for these kinds of studies. As a result, the data quality will be suboptimal. In the worst case, a crucial parameter such as receiver spacing, maximum offset, or recording time turns out later to be inappropriate and the complete acquisition campaign has to be repeated. The EIAGRID portal provides an innovative solution to this problem, combining state-of-the-art data processing methods and modern remote grid computing technology. In-field processing equipment is substituted by remote access to high performance grid computing facilities. The latter can be ubiquitously controlled by a user-friendly web-browser interface accessed from the field by any mobile computer using wireless data transmission technology such as UMTS (Universal Mobile Telecommunications System) or HSUPA/HSDPA (High-Speed Uplink/Downlink Packet Access). The complexity of data manipulation and processing, and thus also the time-demanding user interaction, is minimized by a data-driven and highly automated velocity analysis and imaging approach based on the Common-Reflection-Surface (CRS) stack. Furthermore, the huge computing power provided by the grid deployment allows parallel testing of alternative processing sequences and parameter settings, a feature which considerably reduces turn-around times. A shared data storage using georeferencing tools and data grid technology is under current development. It will allow already accomplished projects to be published, making results, processing workflows, and parameter settings available in a transparent and reproducible way. Creating a unified database shared by all users will facilitate complex studies and enable the use of data-crossing techniques to incorporate results of other environmental applications hosted on the GRIDA3 portal.

  15. MaRIE: A facility for time-dependent materials science at the mesoscale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnes, Cris William; Kippen, Karen Elizabeth

    To meet new and emerging national security issues, the Laboratory is stepping up to another grand challenge: transitioning from observing to controlling a material's performance. This challenge requires the best of experiment, modeling, simulation, and computational tools. MaRIE is the Laboratory's proposed flagship experimental facility intended to meet the challenge.

  16. Evaluation of a Dining Facility Intervention on U.S. Army Special Operations Soldiers’ Meal Quality, Dining Satisfaction, and Cost Effectiveness

    DTIC Science & Technology

    2016-11-18

    Healthy Eating Index (HEI) scores were computed. Descriptive and independent t-test analyses were performed pre to post HPP implementation (α = 0.05; 80% power). Appendices to the report include HPP THOR3 point-of-service label examples, a demographic and lifestyle survey, and a dining facility satisfaction survey.

  17. THE EFFECTS OF COMPUTER-BASED FIRE SAFETY TRAINING ON THE KNOWLEDGE, ATTITUDES, AND PRACTICES OF CAREGIVERS

    PubMed Central

    Harrington, Susan S.; Walker, Bonnie L.

    2010-01-01

    Background Older adults in small residential board and care facilities are at a particularly high risk of fire death and injury because of their characteristics and environment. Methods The authors investigated computer-based instruction as a way to teach fire emergency planning to owners, operators, and staff of small residential board and care facilities. Participants (N = 59) were randomly assigned to a treatment or control group. Results Study participants who completed the training significantly improved their scores from pre- to posttest when compared to a control group. Participants indicated on the course evaluation that the computers were easy to use for training (97%) and that they would like to use computers for future training courses (97%). Conclusions This study demonstrates the potential for using interactive computer-based training as a viable alternative to instructor-led training to meet the fire safety training needs of owners, operators, and staff of small board and care facilities for the elderly. PMID:19263929

  18. The NASA Ames 16-Inch Shock Tunnel Nozzle Simulations and Experimental Comparison

    NASA Technical Reports Server (NTRS)

    Tokarcik-Polsky, S.; Papadopoulos, P.; Venkatapathy, E.; Deiwert, G. S.; Edwards, Thomas A. (Technical Monitor)

    1995-01-01

    The 16-Inch Shock Tunnel at NASA Ames Research Center is a unique test facility used for hypersonic propulsion testing. To provide information necessary to understand the hypersonic testing of the combustor model, computational simulations of the facility nozzle were performed and results are compared with available experimental data, namely static pressure along the nozzle walls and pitot pressure at the exit of the nozzle section. Both quasi-one-dimensional and axisymmetric approaches were used to study the numerous modeling issues involved. The facility nozzle flow was examined for three hypersonic test conditions, and the computational results are presented in detail. The effects of variations in reservoir conditions, boundary layer growth, and parameters of numerical modeling are explored.

  19. Filtered epithermal quasi-monoenergetic neutron beams at research reactor facilities.

    PubMed

    Mansy, M S; Bashter, I I; El-Mesiry, M S; Habib, N; Adib, M

    2015-03-01

    Filtered neutron techniques were applied to produce quasi-monoenergetic neutron beams in the energy range of 1.5-133 keV at research reactors. A simulation study was performed to characterize the filter components and transmitted beam lines. The filtered beams were characterized in terms of the optimal thickness of the main and additive components. The filtered neutron beams had high purity and intensity, with low contamination from the accompanying thermal emission, fast neutrons, and γ-rays. A computer code named "QMNB" was developed in the MATLAB programming language to perform the required calculations.
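
    The core quantity behind such filter calculations is the energy-dependent transmission of a narrow beam through each filter component. A minimal sketch of that attenuation calculation, assuming illustrative cross-section and thickness values (the actual QMNB code and its nuclear data are not reproduced in this record):

    ```python
    import math

    # Narrow-beam transmission through a stack of neutron filter components:
    # T(E) = exp(-sum_i N_i * sigma_i(E) * t_i), where N is atom number
    # density (atoms/cm^3), sigma the energy-dependent total cross section
    # (barns, converted to cm^2), and t the thickness (cm). All values below
    # are illustrative placeholders, not data from the QMNB code.

    BARN = 1e-24  # cm^2

    def transmission(components):
        """components: list of (number_density, cross_section_barns, thickness_cm)."""
        optical_depth = sum(n * sigma * BARN * t for n, sigma, t in components)
        return math.exp(-optical_depth)

    # Example: a hypothetical two-component filter (main + additive).
    stack = [
        (6.0e22, 3.8, 20.0),  # main filter component
        (8.5e22, 1.2, 2.0),   # additive component trimming unwanted energies
    ]
    print(f"Transmission at the window energy: {transmission(stack):.3f}")
    ```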

  20. Hardware simulation of fuel cell/gas turbine hybrids

    NASA Astrophysics Data System (ADS)

    Smith, Thomas Paul

    Hybrid solid oxide fuel cell/gas turbine (SOFC/GT) systems offer high-efficiency power generation, but face numerous integration and operability challenges. This dissertation addresses the application of hardware-in-the-loop simulation (HILS) to explore the performance of a solid oxide fuel cell stack and gas turbine when combined into a hybrid system. Specifically, this project entailed developing and demonstrating a methodology for coupling a numerical SOFC subsystem model with a gas turbine that has been modified with supplemental process flow and control paths to mimic a hybrid system. This HILS approach was implemented with the U.S. Department of Energy Hybrid Performance Project (HyPer) located at the National Energy Technology Laboratory. By utilizing HILS, the facility provides a cost-effective and capable platform for characterizing the response of hybrid systems to dynamic variations in operating conditions. HILS of a hybrid system was accomplished by first interfacing a numerical model with operating gas turbine hardware. The real-time SOFC stack model responds to operating turbine flow conditions in order to predict the level of thermal effluent from the SOFC stack. This simulated level of heating then dynamically sets the turbine's "firing" rate to reflect the stack output heat rate. Second, a high-speed computer system with data acquisition capabilities was integrated with the existing controls and sensors of the turbine facility. In the future, this will allow for the utilization of high-fidelity fuel cell models that infer cell performance parameters while still computing the simulation in real time. Once the integration of the numeric and the hardware simulation components was completed, HILS experiments were conducted to evaluate hybrid system performance. The testing identified non-intuitive transient responses arising from the large thermal capacitance of the stack that are inherent to hybrid systems. Furthermore, the tests demonstrated the capabilities of HILS as a research tool for investigating the dynamic behavior of SOFC/GT hybrid power generation systems.
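
    The coupling logic described above reduces to a real-time loop: measure turbine flow conditions, feed them to the fuel cell model, and command the turbine's heat input from the model's predicted thermal effluent. A minimal sketch of that hardware-in-the-loop cycle, with hypothetical sensor/actuator interfaces and a deliberately simplified first-order stack thermal model (not the actual HyPer facility code):

    ```python
    import time

    class TurbineHardware:
        """Stand-in for the facility's DAQ and actuator interfaces (hypothetical)."""
        def read_airflow(self):         # air mass flow reaching the stack volume, kg/s
            return 1.8
        def read_temperature(self):     # stack inlet temperature, K
            return 950.0
        def set_heat_input(self, q_w):  # commanded turbine "firing" rate, W
            print(f"commanded heat input: {q_w / 1e3:.1f} kW")

    class SOFCStackModel:
        """Toy first-order thermal model: the stack's large thermal capacitance
        makes its effluent heat lag changes in inlet conditions."""
        def __init__(self, tau=120.0, q0=250e3):
            self.tau, self.q = tau, q0  # time constant (s), current effluent heat (W)

        def step(self, airflow, t_inlet, dt):
            # Placeholder steady-state law for heat output at these inlet conditions.
            q_target = 1.1e3 * airflow * max(1100.0 - t_inlet, 0.0)
            self.q += (q_target - self.q) * dt / self.tau
            return self.q

    def run_hils(hw, model, dt=0.1, n_steps=50):
        for _ in range(n_steps):
            q = model.step(hw.read_airflow(), hw.read_temperature(), dt)
            hw.set_heat_input(q)  # turbine "fires" at the stack's predicted heat rate
            time.sleep(dt)        # real-time pacing of the coupled loop

    run_hils(TurbineHardware(), SOFCStackModel())
    ```

    The first-order lag in the model is the point of the exercise: it is exactly the large stack thermal capacitance that produced the non-intuitive transients reported in the tests.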

  1. Improving Data Transfer Throughput with Direct Search Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balaprakash, Prasanna; Morozov, Vitali; Kettimuthu, Rajkumar

    2016-01-01

    Improving data transfer throughput over high-speed long-distance networks has become increasingly difficult. Numerous factors such as nondeterministic congestion, dynamics of the transfer protocol, and multiuser and multitask source and destination endpoints, as well as interactions among these factors, contribute to this difficulty. A promising approach to improving throughput consists of using parallel streams at the application layer. We formulate and solve the problem of choosing the number of such streams from a mathematical optimization perspective. We propose the use of direct search methods, a class of easy-to-implement and light-weight mathematical optimization algorithms, to improve the performance of data transfers by dynamically adapting the number of parallel streams in a manner that does not require domain expertise, instrumentation, analytical models, or historic data. We apply our method to transfers performed with the GridFTP protocol, and illustrate the effectiveness of the proposed algorithm when used within Globus, a state-of-the-art data transfer tool, on production WAN links and servers. We show that when compared to user default settings our direct search methods can achieve up to 10x performance improvement under certain conditions. We also show that our method can overcome performance degradation due to external compute and network load on source endpoints, a common scenario at high performance computing facilities.
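
    Direct search methods of the kind proposed here need only throughput measurements, not gradients or models. A minimal sketch of a compass-style direct search over the integer number of parallel streams, with a hypothetical measure_throughput(streams) standing in for an instrumented GridFTP transfer (this illustrates the class of algorithm, not the authors' exact method):

    ```python
    def measure_throughput(streams: int) -> float:
        """Hypothetical probe: run a short transfer with `streams` parallel
        streams and return the achieved MB/s. A toy response curve stands in
        for a real instrumented GridFTP transfer."""
        return 100.0 * streams / (1.0 + 0.02 * streams * streams)

    def direct_search(lo=1, hi=64, start=4, step=8):
        """Compass search on the integer stream count: poll both neighbors at
        the current step size; halve the step when neither improves."""
        best, best_y = start, measure_throughput(start)
        while step >= 1:
            improved = False
            for cand in (best - step, best + step):
                if lo <= cand <= hi:
                    y = measure_throughput(cand)
                    if y > best_y:
                        best, best_y, improved = cand, y, True
            if not improved:
                step //= 2  # refine the search around the incumbent
        return best, best_y

    streams, rate = direct_search()
    print(f"best stream count: {streams} (~{rate:.1f} MB/s)")
    ```

    On the toy curve above the search converges to 7 streams, the true optimum; in practice each probe would be a timed transfer, which is why a method needing few evaluations matters.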

  2. The OSG open facility: A sharing ecosystem

    DOE PAGES

    Jayatilaka, B.; Levshina, T.; Rynge, M.; ...

    2015-12-23

    The Open Science Grid (OSG) ties together individual experiments' computing power, connecting their resources to create a large, robust computing grid. This computing infrastructure started primarily as a collection of sites associated with large HEP experiments such as ATLAS, CDF, CMS, and DZero. In the years since, the OSG has broadened its focus to also address the needs of other US researchers and increased delivery of Distributed High Throughput Computing (DHTC) to users from a wide variety of disciplines via the OSG Open Facility. Presently, the Open Facility delivers about 100 million computing wall hours per year to researchers who are not already associated with the owners of the computing sites. This is accomplished primarily by harvesting and organizing the temporarily unused capacity (i.e., opportunistic cycles) from the sites in the OSG. Using these methods, OSG resource providers and scientists share computing hours with researchers in many other fields to enable their science, striving to make sure that this computing power is used with maximal efficiency. Furthermore, we believe that expanded access to DHTC is an essential tool for scientific innovation, and work continues on expanding this service.

  3. Cryogenic, high speed, turbopump bearing cooling requirements

    NASA Technical Reports Server (NTRS)

    Dolan, Fred J.; Gibson, Howard G.; Cannon, James L.; Cody, Joe C.

    1988-01-01

    Although the Space Shuttle Main Engine (SSME) has repeatedly demonstrated the capability to perform during launch, the High Pressure Oxidizer Turbopump (HPOTP) main shaft bearings have not met their 7.5-hour life requirement. A tester is being employed to provide the capability of subjecting full scale bearings and seals to speeds, loads, propellants, temperatures, and pressures which simulate engine operating conditions. The tester design permits much more elaborate instrumentation and diagnostics than could be accommodated in an SSME turbopump. Tests were made to demonstrate the facility's and the devices' capabilities, to verify the instrumentation in its operating environment, and to establish a performance baseline for the flight-type SSME HPOTP turbine bearing design. Bearing performance data from tests are being utilized to generate: (1) a high speed, cryogenic turbopump bearing computer mechanical model, and (2) a much improved, very detailed thermal model to better understand bearing internal operating conditions. Parametric tests were also made to determine the effects of speed, axial loads, coolant flow rate, and surface finish degradation on bearing performance.

  4. GPU acceleration of the Locally Selfconsistent Multiple Scattering code for first principles calculation of the ground state and statistical physics of materials

    NASA Astrophysics Data System (ADS)

    Eisenbach, Markus; Larkin, Jeff; Lutjens, Justin; Rennich, Steven; Rogers, James H.

    2017-02-01

    The Locally Self-consistent Multiple Scattering (LSMS) code solves the first principles Density Functional theory Kohn-Sham equation for a wide range of materials with a special focus on metals, alloys and metallic nano-structures. It has traditionally exhibited near perfect scalability on massively parallel high performance computer architectures. We present our efforts to exploit GPUs to accelerate the LSMS code to enable first principles calculations of O(100,000) atoms and statistical physics sampling of finite temperature properties. We reimplement the scattering matrix calculation for GPUs with a block matrix inversion algorithm that only uses accelerator memory. Using the Cray XK7 system Titan at the Oak Ridge Leadership Computing Facility we achieve a sustained performance of 14.5 PFlop/s and a speedup of 8.6 compared to the CPU-only code.

  5. GPU acceleration of the Locally Selfconsistent Multiple Scattering code for first principles calculation of the ground state and statistical physics of materials

    DOE PAGES

    Eisenbach, Markus; Larkin, Jeff; Lutjens, Justin; ...

    2016-07-12

    The Locally Self-consistent Multiple Scattering (LSMS) code solves the first principles Density Functional theory Kohn-Sham equation for a wide range of materials with a special focus on metals, alloys and metallic nano-structures. It has traditionally exhibited near perfect scalability on massively parallel high performance computer architectures. In this paper, we present our efforts to exploit GPUs to accelerate the LSMS code to enable first principles calculations of O(100,000) atoms and statistical physics sampling of finite temperature properties. We reimplement the scattering matrix calculation for GPUs with a block matrix inversion algorithm that only uses accelerator memory. Finally, using the Cray XK7 system Titan at the Oak Ridge Leadership Computing Facility we achieve a sustained performance of 14.5 PFlop/s and a speedup of 8.6 compared to the CPU-only code.
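
    The block matrix inversion named in these two records rests on the Schur-complement identity, which lets a large inverse be assembled from inverses of smaller blocks. A minimal NumPy sketch of that identity on the CPU (an illustration of the algorithm's mathematical structure, not the LSMS GPU kernel):

    ```python
    import numpy as np

    def block_inverse(m: np.ndarray, k: int) -> np.ndarray:
        """Invert m by partitioning it as [[A, B], [C, D]] with A of size
        k x k, using the Schur complement S = D - C A^-1 B. Recursing on A
        and S keeps the working set small, which is what makes this scheme
        attractive when only limited (accelerator) memory is available."""
        a, b = m[:k, :k], m[:k, k:]
        c, d = m[k:, :k], m[k:, k:]
        a_inv = np.linalg.inv(a)                  # base case: small dense inverse
        s_inv = np.linalg.inv(d - c @ a_inv @ b)  # inverse of the Schur complement
        top_left = a_inv + a_inv @ b @ s_inv @ c @ a_inv
        return np.block([[top_left,           -a_inv @ b @ s_inv],
                         [-s_inv @ c @ a_inv,  s_inv]])

    rng = np.random.default_rng(0)
    m = rng.standard_normal((6, 6)) + 6 * np.eye(6)  # well-conditioned test matrix
    assert np.allclose(block_inverse(m, 3), np.linalg.inv(m))
    ```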

  6. Cyberdyn supercomputer - a tool for imaging geodynamic processes

    NASA Astrophysics Data System (ADS)

    Pomeran, Mihai; Manea, Vlad; Besutiu, Lucian; Zlagnean, Luminita

    2014-05-01

    More and more physical processes developing within the deep interior of our planet, but with significant impact on the Earth's shape and structure, are becoming subject to numerical modelling using high performance computing facilities. Nowadays, an increasing number of research centers worldwide are deciding to make use of such powerful and fast computers to simulate complex phenomena involving fluid dynamics and to gain deeper insight into intricate problems of Earth's evolution. With the CYBERDYN cybernetic infrastructure (CCI), the Solid Earth Dynamics Department of the Institute of Geodynamics of the Romanian Academy boldly steps into the 21st century by entering the research area of computational geodynamics. The project that made this advancement possible was jointly supported by the EU and the Romanian Government through the Structural and Cohesion Funds. It lasted about three years, ending in October 2013. CCI is basically a modern high performance Beowulf-type supercomputer (HPCC), combined with a high performance visualization cluster (HPVC) and a GeoWall. The infrastructure is mainly structured around 1344 cores and 3 TB of RAM. The high speed interconnect is provided by a QLogic InfiniBand switch, able to transfer up to 40 Gbps. The CCI storage component is a 40 TB Panasas NAS. The operating system is Linux (CentOS). For control and maintenance, the Bright Cluster Manager package is used. The SGE job scheduler manages the job queues. CCI has been designed for a theoretical peak performance of up to 11.2 TFlops. Speed tests showed that a high resolution numerical model (256 × 256 × 128 FEM elements) could be resolved at a mean computational speed of one time step per 30 seconds while employing only a fraction (20%) of the computing power. After passing the mandatory tests, the CCI has been involved in numerical modelling of various scenarios related to the East Carpathians tectonic and geodynamic evolution, including the Neogene magmatic activity and the intriguing intermediate-depth seismicity within the so-called Vrancea zone. The CFD code used for numerical modelling is CitcomS, a widely employed open source package specifically developed for earth sciences. Several preliminary 3D geodynamic models simulating an assumed subduction or the effect of a mantle plume will be presented and discussed.

  7. Operation of the 25kW NASA Lewis Research Center Solar Regenerative Fuel Cell Testbed Facility

    NASA Technical Reports Server (NTRS)

    Moore, S. H.; Voecks, G. E.

    1997-01-01

    Assembly of the NASA Lewis Research Center (LeRC) Solar Regenerative Fuel Cell (RFC) Testbed Facility has been completed and system testing has proceeded. This facility includes the integration of two 25kW photovoltaic solar cell arrays, a 25kW proton exchange membrane (PEM) electrolysis unit, four 5kW PEM fuel cells, high pressure hydrogen and oxygen storage vessels, high purity water storage containers, and computer monitoring, control, and data acquisition.

  8. Highly integrated digital engine control system on an F-15 airplane

    NASA Technical Reports Server (NTRS)

    Burcham, F. W., Jr.; Haering, E. A., Jr.

    1984-01-01

    The highly integrated digital electronic control (HIDEC) program will demonstrate and evaluate the improvements in performance and mission effectiveness that result from integrated engine-airframe control systems. This system is being used on the F-15 airplane at the Dryden Flight Research Facility of NASA Ames Research Center. An integrated flightpath management mode and an integrated adaptive engine stall margin mode are being implemented into the system. The adaptive stall margin mode is a highly integrated mode in which the airplane flight conditions, the resulting inlet distortion, and the engine stall margin are continuously computed; the excess stall margin is used to uptrim the engine for more thrust. The integrated flightpath management mode optimizes the flightpath and throttle setting to reach a desired flight condition. The increase in thrust and the improvement in airplane performance are discussed in this paper.

  9. Honey Lake Power Facility under construction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1988-12-01

    Geothermal energy and wood waste are the primary energy sources for the 30-megawatt (net) Honey Lake Power Facility, a cogeneration power plant. The facility, 60% complete in January 1989, will use 1,300 tons per day of fuel obtained from selective forest thinnings and from logging residue combined with mill wastes. The power plant will be the largest industrial facility to use some of Lassen County's geothermal resources. The facility will produce 236 million kilowatt-hours of electricity annually. The plant consists of a wood-fired traveling grate furnace with a utility-type high pressure boiler. Fluids from a geothermal well will pass through a heat exchanger to preheat boiler feedwater. Used geothermal fluid will be disposed of in an injection well. Steam will be converted to electrical power through a 35.5-megawatt turbine generator and transmitted 22 miles to Susanville over company-owned and maintained transmission lines. The plant includes pollution control for particulate removal, ammonia injection for removal of nitrogen oxides, and computer-controlled combustion systems to control carbon monoxide and hydrocarbons. The highly automated wood yard consists of systems to remove metal, handle oversized material, receive up to six truckloads of wood products per hour, and continuously deliver 58 tons per hour of fuel through redundant systems to ensure maximum on-line performance. The plant is scheduled to become operational in mid-1989.

  10. Oak Ridge National Laboratory Support of Non-light Water Reactor Technologies: Capabilities Assessment for NRC Near-term Implementation Action Plans for Non-light Water Reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Belles, Randy; Jain, Prashant K.; Powers, Jeffrey J.

    The Oak Ridge National Laboratory (ORNL) has a rich history of support for light water reactor (LWR) and non-LWR technologies. This history involves the operation of 13 reactors at ORNL, including the graphite reactor dating back to World War II, two aqueous homogeneous reactors, two molten salt reactors (MSRs), a fast-burst health physics reactor, and seven LWRs. Operation of the High Flux Isotope Reactor (HFIR) has been ongoing since 1965. Expertise exists among the ORNL staff to provide non-LWR training; support evaluation of non-LWR licensing and safety issues; perform modeling and simulation using advanced computational tools; run laboratory experiments using equipment such as the liquid salt component test facility; and perform in-depth fuel performance and thermal-hydraulic technology reviews using a vast suite of computer codes and tools. Summaries of this expertise are included in this paper.

  11. The Nuclear Energy Advanced Modeling and Simulation Enabling Computational Technologies FY09 Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Diachin, L F; Garaizar, F X; Henson, V E

    2009-10-12

    In this document we report on the status of the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Enabling Computational Technologies (ECT) effort. In particular, we provide the context for ECT in the broader NEAMS program and describe the three pillars of the ECT effort, namely, (1) tools and libraries, (2) software quality assurance, and (3) computational facility (computers, storage, etc.) needs. We report on our FY09 deliverables to determine the needs of the integrated performance and safety codes (IPSCs) in these three areas and lay out the general plan for software quality assurance to meet the requirements of DOE and the DOE Advanced Fuel Cycle Initiative (AFCI). We conclude with a brief description of our interactions with the Idaho National Laboratory computer center to determine what is needed to expand their role as a NEAMS user facility.

  12. The Argonne Leadership Computing Facility 2010 annual report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drugan, C.

    Researchers found more ways than ever to conduct transformative science at the Argonne Leadership Computing Facility (ALCF) in 2010. Both familiar initiatives and innovative new programs at the ALCF are now serving a growing, global user community with a wide range of computing needs. The Department of Energy's (DOE) INCITE Program remained vital in providing scientists with major allocations of leadership-class computing resources at the ALCF. For calendar year 2011, 35 projects were awarded 732 million supercomputer processor-hours for computationally intensive, large-scale research projects with the potential to significantly advance key areas in science and engineering. Argonne also continued to provide Director's Discretionary allocations - 'start up' awards - for potential future INCITE projects. And DOE's new ASCR Leadership Computing Challenge (ALCC) Program allocated resources to 10 ALCF projects, with an emphasis on high-risk, high-payoff simulations directly related to the Department's energy mission, national emergencies, or broadening the research community capable of using leadership computing resources. While delivering more science today, we've also been laying a solid foundation for high performance computing in the future. After a successful DOE Lehman review, a contract was signed to deliver Mira, the next-generation Blue Gene/Q system, to the ALCF in 2012. The ALCF is working with the 16 projects that were selected for the Early Science Program (ESP) to enable them to be productive as soon as Mira is operational. Preproduction access to Mira will enable ESP projects to adapt their codes to its architecture and collaborate with ALCF staff in shaking down the new system. We expect the 10-petaflops system to stoke economic growth and improve U.S. competitiveness in key areas such as advancing clean energy and addressing global climate change. Ultimately, we envision Mira as a stepping-stone to exascale-class computers that will be faster than petascale-class computers by a factor of a thousand. Pete Beckman, who served as the ALCF's Director for the past few years, has been named director of the newly created Exascale Technology and Computing Institute (ETCi). The institute will focus on developing exascale computing to extend scientific discovery and solve critical science and engineering problems. Just as Pete's leadership propelled the ALCF to great success, we know that ETCi will benefit immensely from his expertise and experience. Without question, the future of supercomputing is in good hands. I would like to thank Pete for all his effort over the past two years, during which he oversaw the establishment of ALCF2, the deployment of the Magellan project, and increases in utilization, availability, and the number of projects using ALCF1. He managed the rapid growth of ALCF staff and made the facility what it is today. All the staff and users are better for Pete's efforts.

  13. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De, K; Jha, S; Klimentov, A

    2016-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment relies on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing at over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We describe a project aimed at integration of PanDA WMS with supercomputers in the United States, Europe, and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the Mira supercomputer at the Argonne Leadership Computing Facility (ALCF), the supercomputer at the National Research Center Kurchatov Institute, IT4Innovations in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCFs' multi-core worker nodes. This implementation was tested with a variety of Monte Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments and has been in full production for the ATLAS experiment since September 2015. We present our current accomplishments with running PanDA WMS at supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facilities infrastructure for High Energy and Nuclear Physics as well as other data-intensive science applications, such as bioinformatics and astro-particle physics.
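
    The light-weight MPI wrapper pattern mentioned above amounts to launching one serial payload per MPI rank inside a single batch allocation. A minimal mpi4py sketch of that idea (an illustration of the pattern, not the actual PanDA pilot code; the task list is hypothetical):

    ```python
    # Minimal MPI wrapper: run one single-threaded payload per rank so an HPC
    # batch allocation is filled with independent serial jobs. This is a
    # pattern sketch only, not the PanDA pilot implementation.
    import subprocess
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # Hypothetical pre-staged work list; in production this would come from
    # the pilot's local job directory.
    tasks = [f"echo simulating event block {i}" for i in range(32)]

    # Static round-robin assignment of payloads to ranks.
    for task in tasks[rank::size]:
        result = subprocess.run(task, shell=True, capture_output=True, text=True)
        print(f"[rank {rank}] {task!r} exited {result.returncode}", flush=True)

    comm.Barrier()  # keep the allocation alive until every rank is done
    ```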

  14. Development of a Large Scale, High Speed Wheel Test Facility

    NASA Technical Reports Server (NTRS)

    Kondoleon, Anthony; Seltzer, Donald; Thornton, Richard; Thompson, Marc

    1996-01-01

    Draper Laboratory, with its internal research and development budget, has for the past two years been funding a joint effort with the Massachusetts Institute of Technology (MIT) for the development of a large scale, high speed wheel test facility. This facility was developed to perform experiments and carry out evaluations on levitation and propulsion designs for MagLev systems currently under consideration. The facility was developed to rotate a large (2 meter) wheel which could operate at peripheral speeds of greater than 100 meters/second. The rim of the wheel was constructed of a non-magnetic, non-conductive composite material to avoid the generation of errors from spurious forces. A sensor package containing a multi-axis force and torque sensor, mounted to the base of the station, provides a signal of the lift and drag forces on the package being tested. Position tables mounted on the station allow for the introduction of errors in real time. A computer controlled data acquisition system was developed around a Macintosh IIfx to record the test data and control the speed of the wheel. This paper describes the development of this test facility. A detailed description of the major components is presented. Recently completed tests of a novel electrodynamic (EDS) suspension system, developed by MIT as part of this joint effort, are described and presented. Adaptation of this facility for linear motor and other propulsion and levitation testing is described.

  15. A performance evaluation of the IBM 370/XT personal computer

    NASA Technical Reports Server (NTRS)

    Dominick, Wayne D. (Editor); Triantafyllopoulos, Spiros

    1984-01-01

    An evaluation of the IBM 370/XT personal computer is given. This evaluation focuses primarily on the use of the 370/XT for scientific and technical applications and applications development. A measurement of the capabilities of the 370/XT was performed by means of test programs which are presented. Also included is a review of facilities provided by the operating system (VM/PC), along with comments on the IBM 370/XT hardware configuration.

  16. Open-Loop HIRF Experiments Performed on a Fault Tolerant Flight Control Computer

    NASA Technical Reports Server (NTRS)

    Koppen, Daniel M.

    1997-01-01

    During the third quarter of 1996, the Closed-Loop Systems Laboratory was established at the NASA Langley Research Center (LaRC) to study the effects of High Intensity Radiated Fields on complex avionic systems and control system components. This new facility provided a link and expanded upon the existing capabilities of the High Intensity Radiated Fields Laboratory at LaRC that were constructed and certified during 1995-96. The scope of the Closed-Loop Systems Laboratory is to place highly integrated avionics instrumentation into a high intensity radiated field environment, interface the avionics to a real-time flight simulation that incorporates aircraft dynamics, engines, sensors, actuators and atmospheric turbulence, and collect, analyze, and model aircraft performance. This paper describes the layout and functionality of the Closed-Loop Systems Laboratory, and the open-loop calibration experiments that led up to the commencement of closed-loop real-time flight experiments.

  17. Design and implementation of a reliable and cost-effective cloud computing infrastructure: the INFN Napoli experience

    NASA Astrophysics Data System (ADS)

    Capone, V.; Esposito, R.; Pardi, S.; Taurino, F.; Tortone, G.

    2012-12-01

    Over the last few years we have seen an increasing number of services and applications needed to manage and maintain cloud computing facilities. This is particularly true for computing in high energy physics, which often requires complex configurations and distributed infrastructures. In this scenario a cost effective rationalization and consolidation strategy is the key to success in terms of scalability and reliability. In this work we describe an IaaS (Infrastructure as a Service) cloud computing system, with high availability and redundancy features, which is currently in production at INFN-Naples and ATLAS Tier-2 data centre. The main goal we intended to achieve was a simplified method to manage our computing resources and deliver reliable user services, reusing existing hardware without incurring heavy costs. A combined usage of virtualization and clustering technologies allowed us to consolidate our services on a small number of physical machines, reducing electric power costs. As a result of our efforts we developed a complete solution for data and computing centres that can be easily replicated using commodity hardware. Our architecture consists of 2 main subsystems: a clustered storage solution, built on top of disk servers running GlusterFS file system, and a virtual machines execution environment. GlusterFS is a network file system able to perform parallel writes on multiple disk servers, providing this way live replication of data. High availability is also achieved via a network configuration using redundant switches and multiple paths between hypervisor hosts and disk servers. We also developed a set of management scripts to easily perform basic system administration tasks such as automatic deployment of new virtual machines, adaptive scheduling of virtual machines on hypervisor hosts, live migration and automated restart in case of hypervisor failures.

  18. SISYPHUS: A high performance seismic inversion factory

    NASA Astrophysics Data System (ADS)

    Gokhberg, Alexey; Simutė, Saulė; Boehm, Christian; Fichtner, Andreas

    2016-04-01

    In recent years, massively parallel high performance computers have become the standard instruments for solving forward and inverse problems in seismology. The respective software packages dedicated to forward and inverse waveform modelling, specially designed for such computers (SPECFEM3D, SES3D), have become mature and widely available. These packages achieve significant computational performance and provide researchers with an opportunity to solve larger problems at higher resolution within a shorter time. However, a typical seismic inversion process contains various activities that are beyond the common solver functionality. They include management of information on seismic events and stations, 3D models, observed and synthetic seismograms, pre-processing of the observed signals, computation of misfits and adjoint sources, minimization of misfits, and process workflow management. These activities are time consuming, seldom sufficiently automated, and therefore represent a bottleneck that can substantially offset the performance benefits provided by even the most powerful modern supercomputers. Furthermore, the typical system architecture of modern supercomputing platforms is oriented towards maximum computational performance and provides limited standard facilities for automation of the supporting activities. We present a prototype solution that automates all aspects of the seismic inversion process and is tuned for modern massively parallel high performance computing systems. We address several major aspects of the solution architecture, which include (1) design of an inversion state database for tracing all relevant aspects of the entire solution process, (2) design of an extensible workflow management framework, (3) integration with wave propagation solvers, (4) integration with optimization packages, (5) computation of misfits and adjoint sources, and (6) process monitoring. The inversion state database is a hierarchical structure with branches for the static process setup, inversion iterations, and solver runs, each branch specifying information at the event, station, and channel levels. The workflow management framework is based on an embedded scripting engine that allows definition of various workflow scenarios using a high-level scripting language and provides access to all available inversion components, represented as standard library functions. At present the SES3D wave propagation solver is integrated in the solution; work is in progress on interfacing with SPECFEM3D. A separate framework is designed for interoperability with an optimization module; the workflow manager and optimization process run in parallel and cooperate by exchanging messages according to a specially designed protocol. A library of high-performance modules implementing signal pre-processing, misfit, and adjoint computations according to established good practices is included. Monitoring is based on information stored in the inversion state database and at present implements a command line interface; design of a graphical user interface is in progress. The software design fits well into the common massively parallel system architecture featuring a large number of computational nodes running distributed applications under control of batch-oriented resource managers. The solution prototype has been implemented on the "Piz Daint" supercomputer provided by the Swiss National Supercomputing Centre (CSCS).
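
    Of the supporting components listed, the misfit and adjoint-source step is the simplest to state concretely: for the common L2 waveform misfit, the adjoint source is the time-reversed residual between synthetic and observed seismograms. A minimal sketch under that assumption (illustrative only; the solution's library implements more elaborate misfit definitions):

    ```python
    import numpy as np

    def l2_misfit_and_adjoint(synthetic, observed, dt):
        """L2 waveform misfit chi = 1/2 * integral of (s - d)^2 dt, and the
        corresponding adjoint source, which for this misfit is simply the
        time-reversed residual injected at the receiver."""
        residual = synthetic - observed
        chi = 0.5 * np.sum(residual ** 2) * dt
        adjoint_source = residual[::-1]  # reversed in time
        return chi, adjoint_source

    # Toy example on a synthetic/observed trace pair.
    t = np.linspace(0.0, 10.0, 1001)
    dt = t[1] - t[0]
    observed = np.sin(2 * np.pi * 0.5 * t)
    synthetic = np.sin(2 * np.pi * 0.5 * (t - 0.1))  # slightly delayed response
    chi, adj = l2_misfit_and_adjoint(synthetic, observed, dt)
    print(f"misfit: {chi:.4f}, adjoint source samples: {adj.size}")
    ```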

  19. Flow analysis of airborne particles in a hospital operating room

    NASA Astrophysics Data System (ADS)

    Faeghi, Shiva; Lennerts, Kunibert

    2016-06-01

    Preventing airborne infections during surgery has always been an important issue in delivering effective and high quality medical care to the patient. One of the important sources of infection is particles distributed through airborne routes. Factors influencing infection rates caused by airborne particles include, among others, efficient ventilation and the arrangement of surgical facilities inside the operating room. The paper studies the ventilation airflow pattern in an operating room in a hospital located in Tehran, Iran, and seeks to find efficient configurations with respect to the ventilation system and layout of facilities. This study uses computational fluid dynamics (CFD) and investigates the effects of different inflow velocities at the inlets, two pressurization scenarios (equal and excess pressure), and two arrangements of surgical facilities in the room while the door is completely open. The results show that the system does not perform adequately when the door is open under the current conditions, and excess pressure adjustments should be employed to achieve efficient results. The findings of this research can be discussed in the context of the design and control of the ventilation facilities of operating rooms.

  20. LaRC design analysis report for National Transonic Facility for 304 stainless steel tunnel shell. Volume 1S: Finite difference analysis of cone/cylinder junction

    NASA Technical Reports Server (NTRS)

    Ramsey, J. W., Jr.; Taylor, J. T.; Wilson, J. F.; Gray, C. E., Jr.; Leatherman, A. D.; Rooker, J. R.; Allred, J. W.

    1976-01-01

    The results of extensive computer (finite element, finite difference, and numerical integration), thermal, fatigue, and special analyses of critical portions of a large pressurized, cryogenic wind tunnel (National Transonic Facility) are presented. The computer models, loading, and boundary conditions are described. Graphic capability was used to display model geometry, section properties, and stress results. A stress criterion is presented for evaluation of the results of the analyses. Thermal analyses were performed for major critical and typical areas. Fatigue analyses of the entire tunnel circuit are presented.

  1. Turbulent Navier-Stokes Flow Analysis of an Advanced Semispan Diamond-Wing Model in Tunnel and Free Air at High-Lift Conditions

    NASA Technical Reports Server (NTRS)

    Ghaffari, Farhad; Biedron, Robert T.; Luckring, James M.

    2002-01-01

    Turbulent Navier-Stokes computational results are presented for an advanced diamond wing semispan model at low-speed, high-lift conditions. The numerical results are obtained in support of a wind-tunnel test that was conducted in the National Transonic Facility at the NASA Langley Research Center. The model incorporated a generic fuselage and was mounted on the tunnel sidewall using a constant-width non-metric standoff. The computations were performed at nominal approach and landing flow conditions. The computed high-lift flow characteristics for the model in both the tunnel and free-air environments are presented. The computed wing pressure distributions agreed well with the measured data, and both indicated a small effect from tunnel wall interference. However, the wall interference effects were found to be relatively more pronounced in the measured and computed lift, drag, and pitching moment. Although the magnitudes of the computed forces and moment were slightly off compared to the measured data, the increments due to the wall interference effects were predicted reasonably well. Numerical results are also presented on the combined effects of the tunnel sidewall boundary layer and the standoff geometry on the fuselage forebody pressure distributions and the resulting impact on the configuration's longitudinal aerodynamic characteristics.

  2. Integrated approach to modeling long-term durability of concrete engineered barriers in LLRW disposal facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, J.H.; Roy, D.M.; Mann, B.

    1995-12-31

    This paper describes an integrated approach to developing a predictive computer model for the long-term performance of concrete engineered barriers utilized in LLRW and ILRW disposal facilities. The model development concept consists of three major modeling schemes: hydration modeling of the binder phase, pore solution speciation, and transport modeling in the concrete barrier and service environment. Although still in its infancy, the model development approach demonstrated that the chemical and physical properties of complex cementitious materials and their interactions with service environments can be described quantitatively. Applying the integrated model development approach to modeling alkali (Na and K) leaching from a concrete pad barrier in an above-grade tumulus disposal unit, it is predicted that, in a near-surface land disposal facility where water infiltration through the facility is normally minimal, the alkalis control the pore solution pH of the concrete barriers for much longer than most previous concrete barrier degradation studies assumed. The results also imply that the highly alkaline condition created by alkali leaching will result in alteration of the soil mineralogy in the vicinity of the disposal facility.

  3. OpenTopography: Addressing Big Data Challenges Using Cloud Computing, HPC, and Data Analytics

    NASA Astrophysics Data System (ADS)

    Crosby, C. J.; Nandigam, V.; Phan, M.; Youn, C.; Baru, C.; Arrowsmith, R.

    2014-12-01

    OpenTopography (OT) is a geoinformatics-based data facility initiated in 2009 for democratizing access to high-resolution topographic data, derived products, and tools. Hosted at the San Diego Supercomputer Center (SDSC), OT utilizes cyberinfrastructure, including large-scale data management, high-performance computing, and service-oriented architectures, to provide efficient Web-based access to large, high-resolution topographic datasets. OT collocates data with processing tools to enable users to quickly access custom data and derived products for their application. OT's ongoing R&D efforts aim to solve emerging technical challenges associated with exponential growth in data volume, higher-order data products, and the user base. Optimization of data management strategies can be informed by a comprehensive set of OT user access metrics that allows us to better understand usage patterns with respect to the data. By analyzing the spatiotemporal access patterns within the datasets, we can map areas of the data archive that are highly active (hot) versus the ones that are rarely accessed (cold). This enables us to architect a tiered storage environment consisting of high performance disk storage (SSD) for the hot areas and less expensive, slower disk for the cold ones, thereby optimizing price-performance. From a compute perspective, OT is looking at cloud-based solutions such as the Microsoft Azure platform to handle sudden increases in load. An OT virtual machine image in Microsoft's VM Depot can be invoked and deployed quickly in response to increased system demand. OT has also integrated SDSC HPC systems like the Gordon supercomputer into our infrastructure tier to enable compute-intensive workloads like parallel computation of hydrologic routing on high resolution topography. This capability also allows OT to scale to HPC resources during high loads to meet user demand and provide more efficient processing. With a growing user base and maturing scientific user community come new requests for algorithms and processing capabilities. To address this demand, OT is developing an extensible service-based architecture for integrating community-developed software. This "pluggable" approach to Web service deployment will enable new processing and analysis tools to run collocated with OT-hosted data.
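
    The hot/cold tiering decision sketched above can be driven directly by access logs: aggregate accesses per spatial tile over a window, then place the most-accessed tiles on fast storage. A minimal sketch of that classification, with hypothetical log records and a simple fraction-based cutoff (an illustration of the idea, not OT's production logic):

    ```python
    from collections import Counter

    # Hypothetical access log: one (tile_id, timestamp) record per request.
    access_log = [
        ("tile_ca_sanandreas_03", 1), ("tile_ca_sanandreas_03", 2),
        ("tile_wa_rainier_11", 3), ("tile_ca_sanandreas_03", 4),
        ("tile_az_grandcanyon_07", 5),
    ]

    def classify_tiles(log, hot_fraction=0.2):
        """Rank tiles by access count; the top `hot_fraction` go to SSD
        (hot), the remainder to slower, cheaper disk (cold)."""
        counts = Counter(tile for tile, _ in log)
        ranked = [tile for tile, _ in counts.most_common()]
        n_hot = max(1, int(len(ranked) * hot_fraction))
        return {"hot": ranked[:n_hot], "cold": ranked[n_hot:]}

    tiers = classify_tiles(access_log)
    print("SSD tier:", tiers["hot"])
    print("disk tier:", tiers["cold"])
    ```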

  4. Computer-Aided Facilities Management Systems (CAFM).

    ERIC Educational Resources Information Center

    Cyros, Kreon L.

    Computer-aided facilities management (CAFM) refers to a collection of software used with increasing frequency by facilities managers. The six major CAFM components are discussed with respect to their usefulness and popularity in facilities management applications: (1) computer-aided design; (2) computer-aided engineering; (3) decision support…

  5. Supercomputing Sheds Light on the Dark Universe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habib, Salman; Heitmann, Katrin

    2012-11-15

    At Argonne National Laboratory, scientists are using supercomputers to shed light on one of the great mysteries in science today, the Dark Universe. With Mira, a petascale supercomputer at the Argonne Leadership Computing Facility, a team led by physicists Salman Habib and Katrin Heitmann will run the largest, most complex simulation of the universe ever attempted. By contrasting the results from Mira with state-of-the-art telescope surveys, the scientists hope to gain new insights into the distribution of matter in the universe, advancing future investigations of dark energy and dark matter into a new realm. The team's research was named a finalist for the 2012 Gordon Bell Prize, an award recognizing outstanding achievement in high-performance computing.

  6. New statistical scission-point model to predict fission fragment observables

    NASA Astrophysics Data System (ADS)

    Lemaître, Jean-François; Panebianco, Stefano; Sida, Jean-Luc; Hilaire, Stéphane; Heinrich, Sophie

    2015-09-01

    The development of high performance computing facilities makes possible a massive production of nuclear data in a fully microscopic framework. Taking advantage of individual potential calculations for more than 7000 nuclei, a new statistical scission-point model, called SPY, has been developed. It gives access to the absolute available energy at the scission point, which allows the use of a parameter-free microcanonical statistical description to calculate the distributions and mean values of all fission observables. SPY uses the richness of microscopic inputs in a rather simple theoretical framework, without any parameter except the scission-point definition, to draw clear answers based on perfect knowledge of the ingredients involved in the model, at very limited computing cost.

  7. High Performance Computing Facility Operational Assessment, FY 2011 Oak Ridge Leadership Computing Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Ann E; Bland, Arthur S Buddy; Hack, James J

    Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) continues to deliver the most powerful resources in the U.S. for open science. At 2.33 petaflops peak performance, the Cray XT Jaguar delivered more than 1.5 billion core hours in calendar year (CY) 2010 to researchers around the world for computational simulations relevant to national and energy security; advancing the frontiers of knowledge in physical sciences and areas of biological, medical, environmental, and computer sciences; and providing world-class research facilities for the nation's science enterprise. Scientific achievements by OLCF users range from collaboration with university experimentalists to produce a working supercapacitor that uses atom-thick sheets of carbon materials, to finely determining the resolution requirements for simulations of coal gasifiers and their components, thus laying the foundation for development of commercial-scale gasifiers. OLCF users are pushing the boundaries with software applications sustaining more than one petaflop of performance in the quest to illuminate the fundamental nature of electronic devices. Other teams of researchers are working to resolve predictive capabilities of climate models, to refine and validate genome sequencing, and to explore the most fundamental materials in nature - quarks and gluons - and their unique properties. These scientific endeavors - not possible without access to leadership-class computing resources - are detailed in Section 4 of this report and in the INCITE in Review. Effective operations of the OLCF play a key role in the scientific missions and accomplishments of its users. This Operational Assessment Report (OAR) delineates the policies, procedures, and innovations implemented by the OLCF to continue delivering a petaflop-scale resource for cutting-edge research. The 2010 operational assessment of the OLCF yielded recommendations that have been addressed (Reference Section 1) and, where appropriate, changes in Center metrics were introduced. This report covers CY 2010 and CY 2011 Year to Date (YTD), which unless otherwise specified denotes January 1, 2011 through June 30, 2011. User Support remains an important element of the OLCF operations, with the philosophy 'whatever it takes' to enable successful research. The impact of this center-wide activity is reflected by the user survey results, which show users are 'very satisfied.' The OLCF continues to aggressively pursue outreach and training activities to promote awareness - and effective use - of U.S. leadership-class resources (Reference Section 2). The OLCF continues to meet and in many cases exceed DOE metrics for capability usage (35% target in CY 2010, delivered 39%; 40% target in CY 2011, 54% for January 1, 2011 through June 30, 2011). The Schedule Availability (SA) and Overall Availability (OA) for Jaguar were exceeded in CY 2010. Given the solution to the VRM problem, the SA and OA for Jaguar in CY 2011 are expected to exceed the target metrics of 95% and 90%, respectively (Reference Section 3). Numerous and wide-ranging research accomplishments, scientific support, and technological innovations are more fully described in Sections 4 and 6 and reflect OLCF leadership in enabling high-impact science solutions and vision in creating an exascale-ready center. Financial Management (Section 5) and Risk Management (Section 7) are carried out using best practices approved by DOE. The OLCF has a valid cyber security plan and Authority to Operate (Section 8). The proposed metrics for 2012 are reflected in Section 9.

  8. Fine grained event processing on HPCs with the ATLAS Yoda system

    NASA Astrophysics Data System (ADS)

    Calafiura, Paolo; De, Kaushik; Guan, Wen; Maeno, Tadashi; Nilsson, Paul; Oleynik, Danila; Panitkin, Sergey; Tsulaia, Vakhtang; Van Gemmeren, Peter; Wenaus, Torre

    2015-12-01

    High performance computing facilities present unique challenges and opportunities for HEP event processing. The massive scale of many HPC systems means that fractionally small utilization can yield large returns in processing throughput. Parallel applications which can dynamically and efficiently fill any scheduling opportunities the resource presents benefit both the facility (maximal utilization) and the (compute-limited) science. The ATLAS Yoda system provides this capability to HEP-like event processing applications by implementing event-level processing in an MPI-based master-client model that integrates seamlessly with the more broadly scoped ATLAS Event Service. Fine grained, event level work assignments are intelligently dispatched to parallel workers to sustain full utilization on all cores, with outputs streamed off to destination object stores in near real time with similarly fine granularity, such that processing can proceed until termination with full utilization. The system offers the efficiency and scheduling flexibility of preemption without requiring the application actually support or employ check-pointing. We will present the new Yoda system, its motivations, architecture, implementation, and applications in ATLAS data processing at several US HPC centers.
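
    The master-client pattern Yoda implements can be reduced to a small MPI skeleton: rank 0 hands out event ranges on demand, and workers stream back for more as they finish, so every core stays busy until the work list is exhausted. A minimal mpi4py sketch of that dispatch loop (a pattern illustration, not the ATLAS Yoda code; unlike the static round-robin wrapper sketched for the PanDA record above, assignments here are dynamic):

    ```python
    # Event-level master-client dispatch: rank 0 serves event ranges to idle
    # workers until the work list is exhausted. Pattern sketch only.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    TAG_REQUEST, TAG_WORK, TAG_DONE = 1, 2, 3

    if rank == 0:  # master
        ranges = [(i, i + 100) for i in range(0, 10_000, 100)]  # event ranges
        workers = comm.Get_size() - 1
        while workers:
            status = MPI.Status()
            comm.recv(source=MPI.ANY_SOURCE, tag=TAG_REQUEST, status=status)
            if ranges:
                comm.send(ranges.pop(), dest=status.Get_source(), tag=TAG_WORK)
            else:
                comm.send(None, dest=status.Get_source(), tag=TAG_DONE)
                workers -= 1
    else:  # worker: keep asking for work until told to stop
        while True:
            comm.send(rank, dest=0, tag=TAG_REQUEST)
            status = MPI.Status()
            work = comm.recv(source=0, tag=MPI.ANY_TAG, status=status)
            if status.Get_tag() == TAG_DONE:
                break
            first, last = work
            # process_events(first, last)  # payload: simulate/reconstruct events
    ```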

  9. Simulation and stability analysis of supersonic impinging jet noise with microjet control

    NASA Astrophysics Data System (ADS)

    Hildebrand, Nathaniel; Nichols, Joseph W.

    2014-11-01

    A model of an ideally expanded Mach 1.5 turbulent jet impinging on a flat plate, using unstructured high-fidelity large eddy simulations (LES) and hydrodynamic stability analysis, is presented. The LES configuration conforms exactly to experiments performed at the STOVL supersonic jet facility of the Florida Center for Advanced Aero-Propulsion, allowing validation against experimental measurements. The LES are repeated for different nozzle-wall separation distances, as well as with and without the addition of sixteen microjets positioned uniformly around the nozzle lip. For some nozzle-wall distances, but not all, the microjets result in substantial noise reduction. Observations of substantial noise reduction are associated with a relative absence of large-scale coherent vortices in the jet shear layer. To better understand and predict the effectiveness of microjet noise control, global stability analysis about LES mean fields is used to extract axisymmetric and helical instability modes connected to the complex interplay between the coherent vortices, shocks, and acoustic feedback. We gratefully acknowledge computational resources provided by the Argonne Leadership Computing Facility.

  10. Scheduler for multiprocessor system switch with selective pairing

    DOEpatents

    Gara, Alan; Gschwind, Michael Karl; Salapura, Valentina

    2015-01-06

    System, method and computer program product for scheduling threads in a multiprocessing system with selective pairing of processor cores for increased processing reliability. A selective pairing facility is provided that selectively connects, i.e., pairs, multiple microprocessor or processor cores to provide one highly reliable thread (or thread group). The method configures the selective pairing facility to use hardware checking to provide one highly reliable thread, allocating threads that indicate a need for hardware checking to the corresponding paired processor cores. The method likewise configures the selective pairing facility to provide multiple independent cores, allocating threads that indicate inherent resilience to the corresponding unpaired processor cores.
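
    The allocation policy the patent describes can be illustrated with a small sketch: threads that request hardware checking receive a pair of cores, while resilient threads receive a single core. This is a hypothetical illustration of the policy, not the patented implementation.

```python
# Illustrative sketch (not the patented implementation) of the allocation
# policy described above: threads that request hardware checking are placed
# on a pair of cores run redundantly; resilient threads get single cores.
from dataclasses import dataclass, field

@dataclass
class Thread:
    name: str
    needs_checking: bool  # True -> run redundantly on a core pair

@dataclass
class Scheduler:
    free_cores: list = field(default_factory=lambda: list(range(8)))
    placements: dict = field(default_factory=dict)

    def allocate(self, thread: Thread):
        width = 2 if thread.needs_checking else 1
        if len(self.free_cores) < width:
            raise RuntimeError("no cores available")
        cores = [self.free_cores.pop(0) for _ in range(width)]
        self.placements[thread.name] = cores
        return cores

sched = Scheduler()
print(sched.allocate(Thread("flight-control", needs_checking=True)))   # [0, 1]
print(sched.allocate(Thread("telemetry-log", needs_checking=False)))   # [2]
```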

  11. High Performance Molecular Visualization: In-Situ and Parallel Rendering with EGL.

    PubMed

    Stone, John E; Messmer, Peter; Sisneros, Robert; Schulten, Klaus

    2016-05-01

    Large scale molecular dynamics simulations produce terabytes of data that is impractical to transfer to remote facilities. It is therefore necessary to perform visualization tasks in-situ as the data are generated, or by running interactive remote visualization sessions and batch analyses co-located with direct access to high performance storage systems. A significant challenge for deploying visualization software within clouds, clusters, and supercomputers involves the operating system software required to initialize and manage graphics acceleration hardware. Recently, it has become possible for applications to use the Embedded-system Graphics Library (EGL) to eliminate the requirement for windowing system software on compute nodes, thereby eliminating a significant obstacle to broader use of high performance visualization applications. We outline the potential benefits of this approach in the context of visualization applications used in the cloud, on commodity clusters, and supercomputers. We discuss the implementation of EGL support in VMD, a widely used molecular visualization application, and we outline benefits of the approach for molecular visualization tasks on petascale computers, clouds, and remote visualization servers. We then provide a brief evaluation of the use of EGL in VMD, with tests using developmental graphics drivers on conventional workstations and on Amazon EC2 G2 GPU-accelerated cloud instance types. We expect that the techniques described here will be of broad benefit to many other visualization applications.
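
    The core of the approach is that an OpenGL context can be created without any windowing system by initializing EGL against the default display and binding a pbuffer surface. The sketch below shows that sequence via raw ctypes bindings to libEGL on Linux; VMD itself does this in C, so the binding details here are illustrative assumptions only.

```python
# Hedged sketch of headless OpenGL context creation via EGL, using raw
# ctypes bindings to libEGL on Linux; illustrative only.
import ctypes

egl = ctypes.CDLL("libEGL.so.1")
egl.eglGetDisplay.restype = ctypes.c_void_p
egl.eglCreateContext.restype = ctypes.c_void_p
egl.eglCreatePbufferSurface.restype = ctypes.c_void_p

# EGL constants (values from egl.h)
EGL_DEFAULT_DISPLAY = ctypes.c_void_p(0)
EGL_SURFACE_TYPE, EGL_PBUFFER_BIT = 0x3033, 0x0001
EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT = 0x3040, 0x0008
EGL_WIDTH, EGL_HEIGHT, EGL_NONE = 0x3057, 0x3056, 0x3038
EGL_OPENGL_API = 0x30A2

display = ctypes.c_void_p(egl.eglGetDisplay(EGL_DEFAULT_DISPLAY))
major, minor = ctypes.c_int(), ctypes.c_int()
assert egl.eglInitialize(display, ctypes.byref(major), ctypes.byref(minor))

config_attribs = (ctypes.c_int * 5)(
    EGL_SURFACE_TYPE, EGL_PBUFFER_BIT, EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT, EGL_NONE)
config, n = ctypes.c_void_p(), ctypes.c_int()
assert egl.eglChooseConfig(display, config_attribs, ctypes.byref(config), 1,
                           ctypes.byref(n))

assert egl.eglBindAPI(EGL_OPENGL_API)
surf_attribs = (ctypes.c_int * 5)(EGL_WIDTH, 1024, EGL_HEIGHT, 1024, EGL_NONE)
surface = ctypes.c_void_p(egl.eglCreatePbufferSurface(display, config, surf_attribs))
context = ctypes.c_void_p(egl.eglCreateContext(display, config, None, None))
assert egl.eglMakeCurrent(display, surface, surface, context)
print("headless OpenGL context ready")  # render off-screen, then read pixels back
```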

  12. High Performance Molecular Visualization: In-Situ and Parallel Rendering with EGL

    PubMed Central

    Stone, John E.; Messmer, Peter; Sisneros, Robert; Schulten, Klaus

    2016-01-01

    Large scale molecular dynamics simulations produce terabytes of data that is impractical to transfer to remote facilities. It is therefore necessary to perform visualization tasks in-situ as the data are generated, or by running interactive remote visualization sessions and batch analyses co-located with direct access to high performance storage systems. A significant challenge for deploying visualization software within clouds, clusters, and supercomputers involves the operating system software required to initialize and manage graphics acceleration hardware. Recently, it has become possible for applications to use the Embedded-system Graphics Library (EGL) to eliminate the requirement for windowing system software on compute nodes, thereby eliminating a significant obstacle to broader use of high performance visualization applications. We outline the potential benefits of this approach in the context of visualization applications used in the cloud, on commodity clusters, and supercomputers. We discuss the implementation of EGL support in VMD, a widely used molecular visualization application, and we outline benefits of the approach for molecular visualization tasks on petascale computers, clouds, and remote visualization servers. We then provide a brief evaluation of the use of EGL in VMD, with tests using developmental graphics drivers on conventional workstations and on Amazon EC2 G2 GPU-accelerated cloud instance types. We expect that the techniques described here will be of broad benefit to many other visualization applications. PMID:27747137

  13. Automation of electromagnetic compatability (EMC) test facilities

    NASA Technical Reports Server (NTRS)

    Harrison, C. A.

    1986-01-01

    Efforts to automate electromagnetic compatibility (EMC) test facilities at Marshall Space Flight Center are discussed. The present facility is used to accomplish a battery of nine standard tests (with limited variations) deigned to certify EMC of Shuttle payload equipment. Prior to this project, some EMC tests were partially automated, but others were performed manually. Software was developed to integrate all testing by means of a desk-top computer-controller. Near real-time data reduction and onboard graphics capabilities permit immediate assessment of test results. Provisions for disk storage of test data permit computer production of the test engineer's certification report. Software flexibility permits variation in the tests procedure, the ability to examine more closely those frequency bands which indicate compatibility problems, and the capability to incorporate additional test procedures.

  14. Development and Demonstration of a Computational Tool for the Analysis of Particle Vitiation Effects in Hypersonic Propulsion Test Facilities

    NASA Technical Reports Server (NTRS)

    Perkins, Hugh Douglas

    2010-01-01

    In order to improve the understanding of particle vitiation effects in hypersonic propulsion test facilities, a quasi-one dimensional numerical tool was developed to efficiently model reacting particle-gas flows over a wide range of conditions. Features of this code include gas-phase finite-rate kinetics, a global porous-particle combustion model, mass, momentum and energy interactions between phases, and subsonic and supersonic particle drag and heat transfer models. The basic capabilities of this tool were validated against available data or other validated codes. To demonstrate the capabilities of the code a series of computations were performed for a model hypersonic propulsion test facility and scramjet. Parameters studied were simulated flight Mach number, particle size, particle mass fraction and particle material.

  15. Direct numerical simulation of reactor two-phase flows enabled by high-performance computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fang, Jun; Cambareri, Joseph J.; Brown, Cameron S.

    Nuclear reactor two-phase flows remain a great engineering challenge, where the high-resolution two-phase flow database which can inform practical model development is still sparse due to the extreme reactor operating conditions and measurement difficulties. Owing to the rapid growth of computing power, direct numerical simulation (DNS) is enjoying renewed interest for investigating the related flow problems. A combination of DNS and an interface tracking method provides a unique opportunity to study two-phase flows based on first-principles calculations. More importantly, state-of-the-art high-performance computing (HPC) facilities are helping unlock this great potential. This paper reviews the recent research progress of two-phase flow DNS related to reactor applications. The progress in large-scale bubbly flow DNS has focused not only on the sheer size of those simulations in terms of resolved Reynolds number, but also on the associated advanced modeling and analysis techniques. Specifically, the current areas of active research include modeling of sub-cooled boiling, bubble coalescence, as well as the advanced post-processing toolkit for bubbly flow simulations in reactor geometries. A novel bubble tracking method has been developed to track the evolution of bubbles in two-phase bubbly flow. Also, spectral analysis of the DNS database in different geometries has been performed to investigate the modulation of the energy spectrum slope due to bubble-induced turbulence. In addition, single- and two-phase analysis results are presented for turbulent flows within pressurized water reactor (PWR) core geometries. The related simulations can be carried out only on the world's leading HPC platforms. These simulations are allowing more complex turbulence model development and validation for use in 3D multiphase computational fluid dynamics (M-CFD) codes.
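
    As a hedged illustration of the spectral analysis mentioned above, the following sketch estimates a one-dimensional energy spectrum and its log-log slope with NumPy; the synthetic signal merely stands in for a DNS velocity record.

```python
# Hedged sketch of an energy-spectrum analysis: estimate E(k) of a (here
# synthetic) velocity signal and fit its slope. A real analysis would read
# DNS fields instead of generating a toy signal.
import numpy as np

def energy_spectrum(u, dx):
    """One-dimensional energy spectrum E(k) of a velocity sample u(x)."""
    n = u.size
    u_hat = np.fft.rfft(u - u.mean())
    e_k = (np.abs(u_hat) ** 2) / n            # spectral energy (unnormalized)
    k = np.fft.rfftfreq(n, d=dx) * 2 * np.pi  # angular wavenumbers
    return k[1:], e_k[1:]                     # drop the k = 0 mode

# Synthetic fluctuating signal standing in for a DNS velocity record.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 4096, endpoint=False)
u = sum(np.cos(2 * np.pi * m * x + rng.uniform(0, 2 * np.pi)) / m
        for m in range(1, 200))

k, e_k = energy_spectrum(u, dx=x[1] - x[0])
slope = np.polyfit(np.log(k[10:100]), np.log(e_k[10:100]), 1)[0]
print(f"fitted spectrum slope ~ {slope:.2f}")  # ~ -2 for this toy signal
```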

  16. The Generation Challenge Programme Platform: Semantic Standards and Workbench for Crop Science

    PubMed Central

    Bruskiewich, Richard; Senger, Martin; Davenport, Guy; Ruiz, Manuel; Rouard, Mathieu; Hazekamp, Tom; Takeya, Masaru; Doi, Koji; Satoh, Kouji; Costa, Marcos; Simon, Reinhard; Balaji, Jayashree; Akintunde, Akinnola; Mauleon, Ramil; Wanchana, Samart; Shah, Trushar; Anacleto, Mylah; Portugal, Arllet; Ulat, Victor Jun; Thongjuea, Supat; Braak, Kyle; Ritter, Sebastian; Dereeper, Alexis; Skofic, Milko; Rojas, Edwin; Martins, Natalia; Pappas, Georgios; Alamban, Ryan; Almodiel, Roque; Barboza, Lord Hendrix; Detras, Jeffrey; Manansala, Kevin; Mendoza, Michael Jonathan; Morales, Jeffrey; Peralta, Barry; Valerio, Rowena; Zhang, Yi; Gregorio, Sergio; Hermocilla, Joseph; Echavez, Michael; Yap, Jan Michael; Farmer, Andrew; Schiltz, Gary; Lee, Jennifer; Casstevens, Terry; Jaiswal, Pankaj; Meintjes, Ayton; Wilkinson, Mark; Good, Benjamin; Wagner, James; Morris, Jane; Marshall, David; Collins, Anthony; Kikuchi, Shoshi; Metz, Thomas; McLaren, Graham; van Hintum, Theo

    2008-01-01

    The Generation Challenge programme (GCP) is a global crop research consortium directed toward crop improvement through the application of comparative biology and genetic resources characterization to plant breeding. A key consortium research activity is the development of a GCP crop bioinformatics platform to support GCP research. This platform includes the following: (i) shared, public platform-independent domain models, ontology, and data formats to enable interoperability of data and analysis flows within the platform; (ii) web service and registry technologies to identify, share, and integrate information across diverse, globally dispersed data sources, as well as to access high-performance computational (HPC) facilities for computationally intensive, high-throughput analyses of project data; (iii) platform-specific middleware reference implementations of the domain model integrating a suite of public (largely open-access/-source) databases and software tools into a workbench to facilitate biodiversity analysis, comparative analysis of crop genomic data, and plant breeding decision making. PMID:18483570

  17. Power monitoring and control for large scale projects: SKA, a case study

    NASA Astrophysics Data System (ADS)

    Barbosa, Domingos; Barraca, João. Paulo; Maia, Dalmiro; Carvalho, Bruno; Vieira, Jorge; Swart, Paul; Le Roux, Gerhard; Natarajan, Swaminathan; van Ardenne, Arnold; Seca, Luis

    2016-07-01

    Large sensor-based science infrastructures for radio astronomy like the SKA will be among the most intensive data-driven projects in the world, facing very demanding computation, storage, and management requirements and, above all, power demands. The geographically wide distribution of the SKA and its associated processing requirements, in the form of tailored High Performance Computing (HPC) facilities, require a greener approach to the Information and Communications Technologies (ICT) adopted for the data processing, to enable operational compliance with potentially strict power budgets. Reducing electricity costs and improving system-level power monitoring and the generation and management of electricity are paramount to avoiding future inefficiencies and higher costs and to enabling fulfillment of the key science cases. Here we outline major characteristics and innovation approaches to address power efficiency and long-term power sustainability for radio astronomy projects, focusing on green ICT for science and smart power monitoring and control.

  18. Unobtrusive monitoring of computer interactions to detect cognitive status in elders.

    PubMed

    Jimison, Holly; Pavel, Misha; McKanna, James; Pavel, Jesse

    2004-09-01

    The U.S. has experienced a rapid growth in the use of computers by elders. E-mail, Web browsing, and computer games are among the most common routine activities for this group of users. In this paper, we describe techniques for unobtrusively monitoring naturally occurring computer interactions to detect sustained changes in cognitive performance. Researchers have demonstrated the importance of the early detection of cognitive decline. Users over the age of 75 are at risk for medically related cognitive problems and confusion, and early detection allows for more effective clinical intervention. In this paper, we present algorithms for inferring a user's cognitive performance using monitoring data from computer games and psychomotor measurements associated with keyboard entry and mouse movement. The inferences are then used to classify significant performance changes, and additionally, to adapt computer interfaces with tailored hints and assistance when needed. These methods were tested in a group of elders in a residential facility.
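
    A minimal sketch of the kind of psychomotor feature such monitoring might use is shown below; it summarizes inter-keystroke intervals and flags sustained slowing against a per-user baseline. This is not the authors' algorithm, and the thresholds and data are hypothetical.

```python
# Illustrative sketch (not the authors' algorithm) of a psychomotor feature:
# summarize inter-keystroke intervals and flag sustained slowing relative to
# a per-user baseline. All numbers below are hypothetical.
import statistics

def keystroke_features(timestamps):
    """Mean and variability of inter-keystroke intervals, in seconds."""
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return statistics.mean(intervals), statistics.stdev(intervals)

def sustained_change(baseline_mean, session_means, threshold=1.25, runs=3):
    """Flag when several consecutive sessions are markedly slower than baseline."""
    slow = [m > threshold * baseline_mean for m in session_means]
    return any(all(slow[i:i + runs]) for i in range(len(slow) - runs + 1))

baseline = 0.18  # seconds, learned from the user's own history
sessions = [0.19, 0.24, 0.25, 0.26]
print(sustained_change(baseline, sessions))  # True: three slow sessions in a row
```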

  19. Telemetry Technology

    NASA Technical Reports Server (NTRS)

    1997-01-01

    In 1990, Avtec Systems, Inc. developed its first telemetry boards for Goddard Space Flight Center. Avtec products now include PC/AT, PCI and VME-based high speed I/O boards and turn-key systems. The most recent and most successful technology transfer from NASA to Avtec is the Programmable Telemetry Processor (PTP), a personal computer- based, multi-channel telemetry front-end processing system originally developed to support the NASA communication (NASCOM) network. The PTP performs data acquisition, real-time network transfer, and store and forward operations. There are over 100 PTP systems located in NASA facilities and throughout the world.

  20. A Computer-Aided Instruction Program for Teaching the TOPS20-MM Facility on the DDN (Defense Data Network)

    DTIC Science & Technology

    1988-06-01

    (Indexing keywords: Computer Assisted Instruction; Artificial Intelligence.) ...while he/she tries to perform given tasks. Means-ends analysis, a classic technique for solving search problems in Artificial Intelligence, has been used...

  1. RESIF Seismology Datacentre : Recently Released Data and New Services. Computing with Dense Seisimic Networks Data.

    NASA Astrophysics Data System (ADS)

    Volcke, P.; Pequegnat, C.; Grunberg, M.; Lecointre, A.; Bzeznik, B.; Wolyniec, D.; Engels, F.; Maron, C.; Cheze, J.; Pardo, C.; Saurel, J. M.; André, F.

    2015-12-01

    RESIF is a nationwide French project aimed at building a high-quality observation system to observe and understand the inner Earth. RESIF deals with permanent seismic network data as well as mobile network data, including dense/semi-dense arrays. The RESIF project is distributed among different nodes providing qualified data to the main datacentre at Université Grenoble Alpes, France. Data control and qualification are performed by each individual node: the poster provides some insights on quality control of RESIF broadband seismic component data. We then present data that has recently been made publicly available. Data is distributed through the worldwide FDSN and European EIDA standard protocols. A new web portal is now open to explore and download seismic data and metadata. The RESIF datacentre is also now connected to the Grenoble University High Performance Computing (HPC) facility: a typical use case is presented using iRODS technologies. The use of dense observation networks is increasing, bringing challenges in data growth and handling: we present an example where the HDF5 data format was used as an alternative to the usual seismology data formats.
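
    As a hedged sketch of the dense-array use case, the following stores and partially reads array waveforms with h5py; the dataset layout, attributes, and network code are hypothetical, not the RESIF schema.

```python
# Hedged sketch of storing dense-array waveforms in HDF5 with h5py; the
# layout and attribute names are hypothetical, not the RESIF schema.
import h5py
import numpy as np

n_stations, n_samples, rate_hz = 64, 120_000, 200.0
waveforms = np.random.randn(n_stations, n_samples).astype("float32")

with h5py.File("array_day001.h5", "w") as f:
    ds = f.create_dataset(
        "waveforms", data=waveforms,
        chunks=(1, 20_000),          # chunk per station for fast single-trace reads
        compression="gzip",
    )
    ds.attrs["sampling_rate_hz"] = rate_hz
    ds.attrs["network"] = "XX"       # hypothetical network code

with h5py.File("array_day001.h5", "r") as f:
    trace = f["waveforms"][17, :1000]   # partial read of one station's trace
    print(trace.shape, f["waveforms"].attrs["sampling_rate_hz"])
```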

  2. Reliable communication in the presence of failures

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.; Joseph, Thomas A.

    1987-01-01

    The design and correctness of a communication facility for a distributed computer system are reported on. The facility provides support for fault-tolerant process groups in the form of a family of reliable multicast protocols that can be used in both local- and wide-area networks. These protocols attain high levels of concurrency, while respecting application-specific delivery ordering constraints, and have varying cost and performance that depend on the degree of ordering desired. In particular, a protocol that enforces causal delivery orderings is introduced and shown to be a valuable alternative to conventional asynchronous communication protocols. The facility also ensures that the processes belonging to a fault-tolerant process group will observe consistent orderings of events affecting the group as a whole, including process failures, recoveries, migration, and dynamic changes to group properties like member rankings. A review of several uses for the protocols in the ISIS system, which supports fault-tolerant resilient objects and bulletin boards, illustrates the significant simplification of higher-level algorithms made possible by this approach.
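
    The causal-delivery idea can be sketched with vector clocks: a message is delivered only when it is the next one expected from its sender and everything the sender had previously seen has already been delivered. The sketch below illustrates that classic rule, not the actual ISIS protocol implementation.

```python
# Minimal sketch of the causal-delivery rule enforced by protocols of this
# family, using vector clocks; illustrates the idea, not the ISIS code.
class Process:
    def __init__(self, pid, n):
        self.pid, self.clock = pid, [0] * n
        self.pending = []  # buffered messages not yet deliverable

    def broadcast_stamp(self):
        self.clock[self.pid] += 1
        return (self.pid, list(self.clock))

    def deliverable(self, sender, stamp):
        # Next message from sender, and nothing it causally depends on missing.
        return all(
            stamp[k] == self.clock[k] + 1 if k == sender else stamp[k] <= self.clock[k]
            for k in range(len(stamp))
        )

    def receive(self, sender, stamp):
        self.pending.append((sender, stamp))
        delivered = True
        while delivered:  # repeatedly deliver anything newly enabled
            delivered = False
            for msg in list(self.pending):
                s, st = msg
                if self.deliverable(s, st):
                    self.clock = [max(a, b) for a, b in zip(self.clock, st)]
                    self.pending.remove(msg)
                    delivered = True

p0, p2 = Process(0, 3), Process(2, 3)
m1 = p0.broadcast_stamp()   # stamp [1, 0, 0]
m2 = p0.broadcast_stamp()   # stamp [2, 0, 0], causally after m1
p2.receive(*m2)             # arrives first: buffered, not deliverable yet
p2.receive(*m1)             # now both deliver, in causal order
print(p2.clock)             # [2, 0, 0]
```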

  3. Argonne Leadership Computing Facility 2011 annual report : Shaping future supercomputing.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Papka, M.; Messina, P.; Coffey, R.

    The ALCF's Early Science Program aims to prepare key applications for the architecture and scale of Mira and to solidify libraries and infrastructure that will pave the way for other future production applications. Two billion core-hours have been allocated to 16 Early Science projects on Mira. The projects, in addition to promising delivery of exciting new science, are all based on state-of-the-art, petascale, parallel applications. The project teams, in collaboration with ALCF staff and IBM, have undertaken intensive efforts to adapt their software to take advantage of Mira's Blue Gene/Q architecture, which, in a number of ways, is a precursor to future high-performance-computing architecture. The Argonne Leadership Computing Facility (ALCF) enables transformative science that solves some of the most difficult challenges in biology, chemistry, energy, climate, materials, physics, and other scientific realms. Users partnering with ALCF staff have reached research milestones previously unattainable, due to the ALCF's world-class supercomputing resources and expertise in computational science. In 2011, the ALCF's commitment to providing outstanding science and leadership-class resources was honored with several prestigious awards. Research on multiscale brain blood flow simulations was named a Gordon Bell Prize finalist. Intrepid, the ALCF's BG/P system, ranked No. 1 on the Graph 500 list for the second consecutive year. The next-generation BG/Q prototype again topped the Green500 list. Skilled experts at the ALCF enable researchers to conduct breakthrough science on the Blue Gene system in key ways. The Catalyst Team matches project PIs with experienced computational scientists to maximize and accelerate research in their specific scientific domains. The Performance Engineering Team facilitates the effective use of applications on the Blue Gene system by assessing and improving the algorithms used by applications and the techniques used to implement those algorithms. The Data Analytics and Visualization Team lends expertise in tools and methods for high-performance post-processing of large datasets, interactive data exploration, batch visualization, and production visualization. The Operations Team ensures that system hardware and software work reliably and optimally; that system tools are matched to the unique system architectures and scale of ALCF resources; that the entire system software stack works smoothly together; and that I/O performance issues, bug fixes, and requests for system software are addressed. The User Services and Outreach Team offers frontline services and support to existing and potential ALCF users. The team also provides marketing and outreach to users, DOE, and the broader community.

  4. Performance of freestanding inpatient rehabilitation hospitals before and after the rehabilitation prospective payment system.

    PubMed

    Thompson, Jon M; McCue, Michael J

    2010-01-01

    Inpatient rehabilitation hospitals provide important services to patients to restore physical and cognitive functioning. Historically, these hospitals have been reimbursed by Medicare under a cost-based system; but in 2002, Medicare implemented a rehabilitation prospective payment system (PPS). Despite the implementation of a PPS for rehabilitation, there is limited published research that addresses the operating and financial performance of these hospitals. We examined operating and financial performance in the pre- and post-PPS periods for for-profit and nonprofit freestanding inpatient rehabilitation hospitals to test for pre- and post-PPS differences within the ownership groups. We identified freestanding inpatient rehabilitation hospitals from the Centers for Medicare and Medicaid Services Health Care Cost Report Information System database for the first two fiscal years under PPS. We excluded facilities that had fiscal years less than 270 days, facilities with missing data, and government facilities. We computed average values for performance variables for the facilities in the two consecutive fiscal years post-PPS. For the pre-PPS period, we collected data on these same facilities and, once facilities with missing data and fiscal years less than 270 days were excluded, computed average values for the two consecutive fiscal years pre-PPS. Our final sample of 140 inpatient rehabilitation facilities was composed of 44 nonprofit hospitals and 96 for-profit hospitals both pre- and post-PPS. We used a paired t-test to measure the significance of differences on each performance variable between the pre- and post-PPS periods within each ownership group. Findings show that both nonprofit and for-profit freestanding inpatient rehabilitation hospitals reduced length of stay, increased discharges, and increased profitability. Within the for-profit ownership group, the percentage of Medicare discharges increased and operating expense per adjusted discharge decreased. Findings suggest that managers of these hospitals have adapted their administrative practices to conform with the financial incentives of the rehabilitation PPS. Managers must continue to control costs, increase discharges, and reduce length of stay to remain financially viable under the rehabilitation PPS.
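
    As a hedged illustration of the statistical method named above, the following applies a paired t-test to invented pre/post values for one performance variable; scipy's ttest_rel assumes paired observations on the same facilities.

```python
# Minimal sketch of the pre/post comparison described above, using a paired
# t-test; all numbers are invented for illustration.
from scipy import stats

length_of_stay_pre  = [21.3, 19.8, 24.1, 22.7, 20.5, 23.0]  # days, same facilities
length_of_stay_post = [17.9, 16.4, 20.2, 19.1, 18.0, 19.5]

t, p = stats.ttest_rel(length_of_stay_pre, length_of_stay_post)
print(f"t = {t:.2f}, p = {p:.4f}")  # small p suggests a significant change
```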

  5. Position Paper - pFLogger: The Parallel Fortran Logging framework for HPC Applications

    NASA Technical Reports Server (NTRS)

    Clune, Thomas L.; Cruz, Carlos A.

    2017-01-01

    In the context of high performance computing (HPC), software investments in support of text-based diagnostics, which monitor a running application, are typically limited compared to those for other types of IO. Examples of such diagnostics include reiteration of configuration parameters, progress indicators, simple metrics (e.g., mass conservation, convergence of solvers, etc.), and timers. To some degree, this difference in priority is justifiable as other forms of output are the primary products of a scientific model and, due to their large data volume, much more likely to be a significant performance concern. In contrast, text-based diagnostic content is generally not shared beyond the individual or group running an application and is most often used to troubleshoot when something goes wrong. We suggest that a more systematic approach enabled by a logging facility (or 'logger') similar to those routinely used by many communities would provide significant value to complex scientific applications. In the context of high-performance computing, an appropriate logger would provide specialized support for distributed and shared-memory parallelism and have low performance overhead. In this paper, we present our prototype implementation of pFlogger, a parallel Fortran-based logging framework, and assess its suitability for use in a complex scientific application.
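
    Since pFlogger follows the conventions of familiar logging frameworks, the rank-aware pattern it enables can be suggested in Python with the standard logging module plus mpi4py; this analogy is an assumption for illustration, not pFlogger's Fortran API.

```python
# Sketch of a rank-aware logger analogous in spirit to the framework above,
# using Python's logging module with mpi4py; illustrative assumption only.
import logging
from mpi4py import MPI

def make_logger():
    rank = MPI.COMM_WORLD.Get_rank()
    logger = logging.getLogger("app")
    handler = logging.StreamHandler()
    handler.setFormatter(
        logging.Formatter(f"rank {rank:04d}: %(levelname)s: %(message)s"))
    logger.addHandler(handler)
    # Keep per-step chatter on rank 0 only; warnings and errors everywhere.
    logger.setLevel(logging.DEBUG if rank == 0 else logging.WARNING)
    return logger

log = make_logger()
log.info("mass conservation residual = %.3e", 1.2e-12)  # emitted on rank 0 only
log.warning("solver took %d iterations", 57)            # emitted on every rank
```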

  6. POSITION PAPER - pFLogger: The Parallel Fortran Logging Framework for HPC Applications

    NASA Technical Reports Server (NTRS)

    Clune, Thomas L.; Cruz, Carlos A.

    2017-01-01

    In the context of high performance computing (HPC), software investments in support of text-based diagnostics, which monitor a running application, are typically limited compared to those for other types of IO. Examples of such diagnostics include reiteration of configuration parameters, progress indicators, simple metrics (e.g., mass conservation, convergence of solvers, etc.), and timers. To some degree, this difference in priority is justifiable as other forms of output are the primary products of a scientific model and, due to their large data volume, much more likely to be a significant performance concern. In contrast, text-based diagnostic content is generally not shared beyond the individual or group running an application and is most often used to troubleshoot when something goes wrong. We suggest that a more systematic approach enabled by a logging facility (or 'logger') similar to those routinely used by many communities would provide significant value to complex scientific applications. In the context of high-performance computing, an appropriate logger would provide specialized support for distributed and shared-memory parallelism and have low performance overhead. In this paper, we present our prototype implementation of pFlogger - a parallel Fortran-based logging framework, and assess its suitability for use in a complex scientific application.

  7. NASA Lighting Research, Test, & Analysis

    NASA Technical Reports Server (NTRS)

    Clark, Toni

    2015-01-01

    The Habitability and Human Factors Branch, at Johnson Space Center, in Houston, TX, provides technical guidance for the development of spaceflight lighting requirements, verification of light system performance, analysis of integrated environmental lighting systems, and research of lighting-related human performance issues. The Habitability & Human Factors Lighting Team maintains two physical facilities that are integrated to provide support. The Lighting Environment Test Facility (LETF) provides a controlled darkroom environment for physical verification of lighting systems with photometric and spectrographic measurement systems. The Graphics Research & Analysis Facility (GRAF) maintains the capability for computer-based analysis of operational lighting environments. The combined capabilities of the Lighting Team at Johnson Space Center have been used for a wide range of lighting-related issues.

  8. SeqMule: automated pipeline for analysis of human exome/genome sequencing data.

    PubMed

    Guo, Yunfei; Ding, Xiaolei; Shen, Yufeng; Lyon, Gholson J; Wang, Kai

    2015-09-18

    Next-generation sequencing (NGS) technology has greatly helped us identify disease-contributory variants for Mendelian diseases. However, users are often faced with issues such as software compatibility, complicated configuration, and lack of access to a high-performance computing facility. Discrepancies exist among aligners and variant callers. We developed a computational pipeline, SeqMule, to perform automated variant calling from NGS data on human genomes and exomes. SeqMule integrates computational-cluster-free parallelization capability built on top of the variant callers, and facilitates normalization/intersection of variant calls to generate a consensus set with high confidence. SeqMule integrates 5 alignment tools and 5 variant calling algorithms and accepts various combinations, all by a one-line command, therefore allowing highly flexible yet fully automated variant calling. On a modern machine (2 Intel Xeon X5650 CPUs, 48 GB memory), when fast turn-around is needed, SeqMule generates annotated VCF files in a day from a 30X whole-genome sequencing data set; when more accurate calling is needed, SeqMule generates a consensus call set that improves over single callers, as measured by both Mendelian error rate and consistency. SeqMule supports Sun Grid Engine for parallel processing, offers a turn-key solution for deployment on Amazon Web Services, and allows quality checks, Mendelian error checks, consistency evaluation, and HTML-based reports. SeqMule is available at http://seqmule.openbioinformatics.org.
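
    The consensus step can be sketched as a simple intersection over normalized variant records, keeping sites reported by a majority of callers. The tuples and caller names below are hypothetical stand-ins for parsed VCF lines, not SeqMule's actual code.

```python
# Hedged sketch of a consensus-call step: intersect variant calls from
# several callers and keep sites reported by a majority. The tuples stand
# in for normalized VCF records; not SeqMule's implementation.
from collections import Counter

caller_outputs = {
    "gatk":      {("chr1", 12345, "A", "G"), ("chr1", 22222, "C", "T")},
    "samtools":  {("chr1", 12345, "A", "G"), ("chr2", 99999, "G", "A")},
    "freebayes": {("chr1", 12345, "A", "G"), ("chr1", 22222, "C", "T")},
}

def consensus(calls_by_caller, min_support=2):
    counts = Counter(v for calls in calls_by_caller.values() for v in calls)
    return {v for v, n in counts.items() if n >= min_support}

print(sorted(consensus(caller_outputs)))
# [('chr1', 12345, 'A', 'G'), ('chr1', 22222, 'C', 'T')]
```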

  9. Using computer graphics to enhance astronaut and systems safety

    NASA Technical Reports Server (NTRS)

    Brown, J. W.

    1985-01-01

    Computer graphics is being employed at the NASA Johnson Space Center as a tool to perform rapid, efficient and economical analyses for man-machine integration, flight operations development and systems engineering. The Operator Station Design System (OSDS), a computer-based facility featuring a highly flexible and versatile interactive software package, PLAID, is described. This unique evaluation tool, with its expanding data base of Space Shuttle elements, various payloads, experiments, crew equipment and man models, supports a multitude of technical evaluations, including spacecraft and workstation layout, definition of astronaut visual access, flight techniques development, cargo integration and crew training. As OSDS is being applied to the Space Shuttle, Orbiter payloads (including the European Space Agency's Spacelab) and future space vehicles and stations, astronaut and systems safety are being enhanced. Typical OSDS examples are presented. By performing physical and operational evaluations during early conceptual phases, supporting systems verification for flight readiness, and applying its capabilities to real-time mission support, the OSDS provides the wherewithal to satisfy a growing need of the current and future space programs for efficient, economical analyses.

  10. Integrated Computational Materials Engineering for Magnesium in Automotive Body Applications

    NASA Astrophysics Data System (ADS)

    Allison, John E.; Liu, Baicheng; Boyle, Kevin P.; Hector, Lou; McCune, Robert

    This paper provides an overview and progress report for an international collaborative project which aims to develop an ICME infrastructure for magnesium for use in automotive body applications. Quantitative processing-microstructure-property relationships are being developed for extruded Mg alloys, sheet-formed Mg alloys and high pressure die cast Mg alloys. These relationships are captured in computational models which are then linked with manufacturing process simulation and used to provide constitutive models for component performance analysis. The long-term goal is to capture this information in efficient computational models and in a web-centered knowledge base. The work is being conducted at leading universities, national labs and industrial research facilities in the US, China and Canada. This project is sponsored by the U.S. Department of Energy, the U.S. Automotive Materials Partnership (USAMP), the Chinese Ministry of Science and Technology (MOST) and Natural Resources Canada (NRCan).

  11. Dialysis Facility Transplant Philosophy and Access to Kidney Transplantation in the Southeast.

    PubMed

    Gander, Jennifer; Browne, Teri; Plantinga, Laura; Pastan, Stephen O; Sauls, Leighann; Krisher, Jenna; Patzer, Rachel E

    2015-01-01

    Little is known about the impact of dialysis facility treatment philosophy on access to transplant. The aim of our study was to determine the relationship between the dialysis facility transplant philosophy and facility-level access to kidney transplant waitlisting. A 25-item questionnaire administered to Southeastern dialysis facilities (n = 509) in 2012 captured the facility transplant philosophy (categorized as 'transplant is our first choice', 'transplant is a great option for some', and 'transplant is a good option, if the patient is interested'). Facility-level waitlisting and facility characteristics were obtained from the 2008-2011 Dialysis Facility Report. Multivariable logistic regression was used to examine the association between the dialysis facility transplant philosophy and facility waitlisting performance (dichotomized using the national median), where low performance was defined as fewer than 21.7% of dialysis patients waitlisted within a facility. Fewer than 25% (n = 124) of dialysis facilities reported 'transplant is our first option'. A total of 131 (31.4%) dialysis facilities in the Southeast were high-performing facilities with respect to waitlisting. Adjusted analysis showed that facilities who reported 'transplant is our first option' were twice (OR 2.0; 95% CI 1.0-3.9) as likely to have high waitlisting performance compared to facilities who reported that 'transplant is a good option, if the patient is interested'. Facilities with staff who had a more positive transplant philosophy were more likely to have better facility waitlisting performance. Future prospective studies are needed to further investigate if improving the kidney transplant philosophy in dialysis facilities improves access to transplantation.
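
    A minimal sketch of the facility-level model described above, assuming a hypothetical data frame, could use statsmodels' formula interface; the covariates and values are invented for illustration.

```python
# Hedged sketch of a facility-level logistic regression of high/low
# waitlisting performance on transplant philosophy; the data frame and
# covariates are hypothetical, not the study's dataset.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "high_performance": [1, 0, 1, 0, 1, 0, 0, 1, 1, 0] * 20,
    "philosophy": (["first_choice", "good_option"] * 5) * 20,
    "facility_size": [45, 60, 30, 80, 55, 70, 40, 35, 50, 65] * 20,
})
model = smf.logit("high_performance ~ C(philosophy) + facility_size", data=df).fit()
print(model.params)  # philosophy coefficient ~ log-odds of high performance
```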

  12. High-Performance Computing Systems and Operations | Computational Science |

    Science.gov Websites

    NREL Systems and Operations High-Performance Computing Systems and Operations NREL operates high-performance computing (HPC) systems dedicated to advancing energy efficiency and renewable energy technologies. Capabilities NREL's HPC capabilities include: High-Performance Computing Systems We operate

  13. Space technology test facilities at the NASA Ames Research Center

    NASA Technical Reports Server (NTRS)

    Gross, Anthony R.; Rodrigues, Annette T.

    1990-01-01

    The major space research and technology test facilities at the NASA Ames Research Center are divided into five categories: General Purpose, Life Support, Computer-Based Simulation, High Energy, and the Space Exploration Test Facilities. The paper discusses selected facilities within each of the five categories and some of the major programs in which these facilities have been involved. Special attention is given to the 20-G Man-Rated Centrifuge, the Human Research Facility, the Plant Crop Growth Facility, the Numerical Aerodynamic Simulation Facility, the Arc-Jet Complex and Hypersonic Test Facility, the Infrared Detector and Cryogenic Test Facility, and the Mars Wind Tunnel. Each facility is described along with its objectives, test parameter ranges, and major current programs and applications.

  14. Investigation of Storage Options for Scientific Computing on Grid and Cloud Facilities

    NASA Astrophysics Data System (ADS)

    Garzoglio, Gabriele

    2012-12-01

    In recent years, several new storage technologies, such as Lustre, Hadoop, OrangeFS, and BlueArc, have emerged. While several groups have run benchmarks to characterize them under a variety of configurations, more work is needed to evaluate these technologies for the use cases of scientific computing on Grid clusters and Cloud facilities. This paper discusses our evaluation of the technologies as deployed on a test bed at FermiCloud, one of the Fermilab infrastructure-as-a-service Cloud facilities. The test bed consists of 4 server-class nodes with 40 TB of disk space and up to 50 virtual machine clients, some running on the storage server nodes themselves. With this configuration, the evaluation compares the performance of some of these technologies when deployed on virtual machines and on “bare metal” nodes. In addition to running standard benchmarks such as IOZone to check the sanity of our installation, we have run I/O intensive tests using physics-analysis applications. This paper presents how the storage solutions perform in a variety of realistic use cases of scientific computing. One interesting difference among the storage systems tested is found in a decrease in total read throughput with increasing number of client processes, which occurs in some implementations but not others.

  15. Investigation of storage options for scientific computing on Grid and Cloud facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garzoglio, Gabriele

    In recent years, several new storage technologies, such as Lustre, Hadoop, OrangeFS, and BlueArc, have emerged. While several groups have run benchmarks to characterize them under a variety of configurations, more work is needed to evaluate these technologies for the use cases of scientific computing on Grid clusters and Cloud facilities. This paper discusses our evaluation of the technologies as deployed on a test bed at FermiCloud, one of the Fermilab infrastructure-as-a-service Cloud facilities. The test bed consists of 4 server-class nodes with 40 TB of disk space and up to 50 virtual machine clients, some running on the storage server nodes themselves. With this configuration, the evaluation compares the performance of some of these technologies when deployed on virtual machines and on bare metal nodes. In addition to running standard benchmarks such as IOZone to check the sanity of our installation, we have run I/O intensive tests using physics-analysis applications. This paper presents how the storage solutions perform in a variety of realistic use cases of scientific computing. One interesting difference among the storage systems tested is found in a decrease in total read throughput with increasing number of client processes, which occurs in some implementations but not others.

  16. Application of high performance computing for studying cyclic variability in dilute internal combustion engines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    FINNEY, Charles E A; Edwards, Kevin Dean; Stoyanov, Miroslav K

    2015-01-01

    Combustion instabilities in dilute internal combustion engines are manifest as cyclic variability (CV) in engine performance measures such as integrated heat release or shaft work. Understanding the factors leading to CV is important in model-based control, especially with high dilution, where experimental studies have demonstrated that deterministic effects can become more prominent. Observation of enough consecutive engine cycles for significant statistical analysis is standard in experimental studies but is largely wanting in numerical simulations because of the computational time required to compute hundreds or thousands of consecutive cycles. We have proposed and begun implementation of an alternative approach to allow rapid simulation of long series of engine dynamics, based on a low-dimensional mapping of ensembles of single-cycle simulations which map input parameters to output engine performance. This paper details the use of Titan at the Oak Ridge Leadership Computing Facility to investigate CV in a gasoline direct-injected spark-ignited engine with a moderately high rate of dilution achieved through external exhaust gas recirculation (EGR). The CONVERGE CFD software was used to perform single-cycle simulations with imposed variations of operating parameters and boundary conditions selected according to a sparse-grid sampling of the parameter space. Using an uncertainty quantification technique, the sampling scheme is chosen similarly to a design-of-experiments grid but uses functions designed to minimize the number of samples required to achieve a desired degree of accuracy. The simulations map input parameters to output metrics of engine performance for a single cycle, and by mapping over a large parameter space, results can be interpolated from within that space. This interpolation scheme forms the basis for a low-dimensional metamodel which can be used to mimic the dynamical behavior of corresponding high-dimensional simulations. Simulations of high-EGR spark-ignition combustion cycles within a parametric sampling grid were performed and analyzed statistically, and sensitivities of the physical factors leading to high CV are presented. With these results, the prospect of producing low-dimensional metamodels to describe engine dynamics at any point in the parameter space is discussed. Additionally, modifications to the methodology to account for nondeterministic effects in the numerical solution environment are proposed.
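
    The metamodel idea, sampling expensive single-cycle simulations over a parameter grid and interpolating within it, can be sketched as follows. A regular grid and an analytic stand-in response are used here for simplicity, whereas the study used sparse-grid sampling and CONVERGE output.

```python
# Hedged sketch of the metamodel idea: sample single-cycle "simulations" on
# a parameter grid, then interpolate engine response inside the sampled
# space. The response function is a stand-in, not CFD output.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

egr = np.linspace(0.0, 0.30, 7)       # external EGR fraction
spark = np.linspace(-30.0, 0.0, 7)    # spark timing, deg before TDC

def fake_single_cycle(e, s):
    """Stand-in for one expensive CFD cycle: returns an IMEP-like metric."""
    return 10.0 * (1 - 2.5 * e**2) * np.exp(-((s + 15) / 20.0) ** 2)

E, S = np.meshgrid(egr, spark, indexing="ij")
imep = fake_single_cycle(E, S)        # the "ensemble" of sampled cycles

metamodel = RegularGridInterpolator((egr, spark), imep)
print(metamodel([[0.17, -12.0]]))     # cheap evaluation inside the grid
```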

  17. The OSG Open Facility: an on-ramp for opportunistic scientific computing

    NASA Astrophysics Data System (ADS)

    Jayatilaka, B.; Levshina, T.; Sehgal, C.; Gardner, R.; Rynge, M.; Würthwein, F.

    2017-10-01

    The Open Science Grid (OSG) is a large, robust computing grid that started primarily as a collection of sites associated with large HEP experiments such as ATLAS, CDF, CMS, and DZero, but has evolved in recent years to a much larger user and resource platform. In addition to meeting the US LHC community’s computational needs, the OSG continues to be one of the largest providers of distributed high-throughput computing (DHTC) to researchers from a wide variety of disciplines via the OSG Open Facility. The Open Facility consists of OSG resources that are available opportunistically to users other than resource owners and their collaborators. In the past two years, the Open Facility has doubled its annual throughput to over 200 million wall hours. More than half of these resources are used by over 100 individual researchers from over 60 institutions in fields such as biology, medicine, math, economics, and many others. Over 10% of these individual users utilized in excess of 1 million computational hours each in the past year. The largest source of these cycles is temporary unused capacity at institutions affiliated with US LHC computational sites. An increasing fraction, however, comes from university HPC clusters and large national infrastructure supercomputers offering unused capacity. Such expansions have allowed the OSG to provide ample computational resources to both individual researchers and small groups as well as sizable international science collaborations such as LIGO, AMS, IceCube, and sPHENIX. Opening up access to the Fermilab FabrIc for Frontier Experiments (FIFE) project has also allowed experiments such as mu2e and NOvA to make substantial use of Open Facility resources, the former with over 40 million wall hours in a year. We present how this expansion was accomplished as well as future plans for keeping the OSG Open Facility at the forefront of enabling scientific research by way of DHTC.

  18. The OSG Open Facility: An On-Ramp for Opportunistic Scientific Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jayatilaka, B.; Levshina, T.; Sehgal, C.

    The Open Science Grid (OSG) is a large, robust computing grid that started primarily as a collection of sites associated with large HEP experiments such as ATLAS, CDF, CMS, and DZero, but has evolved in recent years to a much larger user and resource platform. In addition to meeting the US LHC community's computational needs, the OSG continues to be one of the largest providers of distributed high-throughput computing (DHTC) to researchers from a wide variety of disciplines via the OSG Open Facility. The Open Facility consists of OSG resources that are available opportunistically to users other than resource owners and their collaborators. In the past two years, the Open Facility has doubled its annual throughput to over 200 million wall hours. More than half of these resources are used by over 100 individual researchers from over 60 institutions in fields such as biology, medicine, math, economics, and many others. Over 10% of these individual users utilized in excess of 1 million computational hours each in the past year. The largest source of these cycles is temporary unused capacity at institutions affiliated with US LHC computational sites. An increasing fraction, however, comes from university HPC clusters and large national infrastructure supercomputers offering unused capacity. Such expansions have allowed the OSG to provide ample computational resources to both individual researchers and small groups as well as sizable international science collaborations such as LIGO, AMS, IceCube, and sPHENIX. Opening up access to the Fermilab FabrIc for Frontier Experiments (FIFE) project has also allowed experiments such as mu2e and NOvA to make substantial use of Open Facility resources, the former with over 40 million wall hours in a year. We present how this expansion was accomplished as well as future plans for keeping the OSG Open Facility at the forefront of enabling scientific research by way of DHTC.

  19. Brief Survey of TSC Computing Facilities

    DOT National Transportation Integrated Search

    1972-05-01

    The Transportation Systems Center (TSC) has four, essentially separate, in-house computing facilities. We shall call them Honeywell Facility, the Hybrid Facility, the Multimode Simulation Facility, and the Central Facility. In addition to these four,...

  20. Antennas for 20/30 GHz and beyond

    NASA Technical Reports Server (NTRS)

    Chen, C. Harry; Wong, William C.; Hamada, S. Jim

    1989-01-01

    Antennas at 20/30 GHz and higher frequencies, due to the small wavelength, offer capabilities for many space applications. With the government-sponsored space programs (such as ACTS) of recent years, industry has gone through the learning curve of designing and developing high-performance, multi-function antennas in this frequency range. Design and analysis tools (such as the computer modelling used in feedhorn design and in reflector surface and thermal distortion analysis) are available. The components/devices (such as BFNs, weight modules, and feedhorns) are space-qualified. The manufacturing procedures (such as reflector surface control) have been refined to meet the stringent tolerances that accompany high frequencies. The integration and testing facilities (such as near-field ranges) have also advanced to facilitate precision assembly and performance verification. These capabilities, essential to the successful design and development of high-frequency spaceborne antennas, will find more space applications (such as ESGP) than just communications.

  1. Closely Spaced Independent Parallel Runway Simulation.

    DTIC Science & Technology

    1984-10-01

    facility consists of the Central Computer Facility, the Controller Laboratory, and the Simulator Pilot Complex. CENTRAL COMPUTER FACILITY. The Central... Computer Facility consists of a group of mainframes, minicomputers, and associated peripherals which host the operational and data acquisition...in the Controller Laboratory and convert their verbal directives into a keyboard entry which is transmitted to the Central Computer Complex, where

  2. Dialysis Facility Transplant Philosophy and Access to Kidney Transplantation in the Southeast

    PubMed Central

    Gander, Jennifer; Browne, Teri; Plantinga, Laura; Pastan, Stephen O; Sauls, Leighann; Krisher, Jenna; Patzer, Rachel E

    2015-01-01

    Background Little is known about the impact of dialysis facility treatment philosophy on access to transplant. The aim of our study was to determine the relationship between dialysis facility transplant philosophy and facility-level access to kidney transplant waitlisting. Methods A 25-item questionnaire administered to Southeastern dialysis facilities (n=509) in 2012 captured facility transplant philosophy (categorized as “transplant is our first choice,” “transplant is a great option for some,” and “transplant is a good option, if the patient is interested”). Facility-level waitlisting and facility characteristics were obtained from the 2008-2011 Dialysis Facility Report. Multivariable logistic regression was used to examine the association between dialysis facility transplant philosophy and facility waitlisting performance (dichotomized using the national median), where low performance was defined as less than 21.7% of dialysis patients waitlisted within a facility. Results Fewer than 25% (n=124) of dialysis facilities reported “transplant is our first option.” A total of 131 (31.4%) dialysis facilities in the Southeast were high-performing with respect to waitlisting. Adjusted analysis showed that facilities who reported “transplant is our first option” were twice (OR=2.0, 95% CI 1.0, 3.9) as likely to have high waitlisting performance compared to facilities who reported “transplant is a good option, if the patient is interested.” Conclusions Facilities with staff who had a more positive transplant philosophy were more likely to have better facility waitlisting performance. Future prospective studies are needed to further investigate whether improving the kidney transplant philosophy in dialysis facilities improves access to transplantation. PMID:26278585

  3. About Distributed Simulation-based Optimization of Forming Processes using a Grid Architecture

    NASA Astrophysics Data System (ADS)

    Grauer, Manfred; Barth, Thomas

    2004-06-01

    Permanently increasing complexity of products and their manufacturing processes combined with a shorter "time-to-market" leads to more and more use of simulation and optimization software systems for product design. Finding a "good" design of a product implies the solution of computationally expensive optimization problems based on the results of simulation. Due to the computational load caused by the solution of these problems, the requirements on the Information&Telecommunication (IT) infrastructure of an enterprise or research facility are shifting from stand-alone resources towards the integration of software and hardware resources in a distributed environment for high-performance computing. Resources can either comprise software systems, hardware systems, or communication networks. An appropriate IT-infrastructure must provide the means to integrate all these resources and enable their use even across a network to cope with requirements from geographically distributed scenarios, e.g. in computational engineering and/or collaborative engineering. Integrating expert's knowledge into the optimization process is inevitable in order to reduce the complexity caused by the number of design variables and the high dimensionality of the design space. Hence, utilization of knowledge-based systems must be supported by providing data management facilities as a basis for knowledge extraction from product data. In this paper, the focus is put on a distributed problem solving environment (PSE) capable of providing access to a variety of necessary resources and services. A distributed approach integrating simulation and optimization on a network of workstations and cluster systems is presented. For geometry generation the CAD-system CATIA is used which is coupled with the FEM-simulation system INDEED for simulation of sheet-metal forming processes and the problem solving environment OpTiX for distributed optimization.
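
    The core pattern, a simple optimizer farming expensive simulation runs out to parallel workers, can be sketched with a process pool; the objective below is a stand-in for an FEM forming simulation, and the random-search loop is illustrative rather than the OpTiX algorithm.

```python
# Minimal sketch of distributed simulation-based optimization: a simple
# optimizer evaluates an expensive objective in parallel. The objective is
# a stand-in for an FEM run; not the OpTiX environment's algorithm.
from concurrent.futures import ProcessPoolExecutor
import random

def simulate(design):
    """Stand-in for an expensive forming simulation returning a quality score."""
    x, y = design
    return (x - 1.2) ** 2 + (y + 0.4) ** 2

def random_search(n_iters=8, pop=16):
    best = (float("inf"), None)
    with ProcessPoolExecutor() as pool:
        for _ in range(n_iters):
            designs = [(random.uniform(-2, 2), random.uniform(-2, 2))
                       for _ in range(pop)]
            scores = list(pool.map(simulate, designs))  # evaluated in parallel
            best = min(best, min(zip(scores, designs)))
    return best

if __name__ == "__main__":
    print(random_search())  # (best score, best design) found so far
```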

  4. Low Prevalence of Chronic Beryllium Disease Among Workers at a Nuclear Weapons Research and Development Facility

    PubMed Central

    Arjomandi, Mehrdad; Seward, James; Gotway, Michael B.; Nishimura, Stephen; Fulton, George P.; Thundiyil, Josef; King, Talmadge E.; Harber, Philip; Balmes, John R.

    2012-01-01

    Objective To study the prevalence of beryllium sensitization (BeS) and chronic beryllium disease (CBD) in a cohort of workers from a nuclear weapons research and development facility. Methods We evaluated 50 workers with BeS using medical and occupational histories, physical examination, chest imaging with high-resolution computed tomography (N = 49), and pulmonary function testing. Forty of these workers also underwent bronchoscopy for bronchoalveolar lavage and transbronchial biopsies. Results The mean duration of employment at the facility was 18 years, and the mean latency (from first possible exposure) to the time of evaluation was 32 years. Five of the workers had CBD at the time of evaluation (based on histology or high-resolution computed tomography); three others had evidence of probable CBD. Conclusions These workers with BeS, characterized by a long duration of potential Be exposure and a long latency, had a low prevalence of CBD. PMID:20523233

  5. Extraordinary Tools for Extraordinary Science: The Impact of SciDAC on Accelerator Science & Technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryne, Robert D.

    2006-08-10

    Particle accelerators are among the most complex and versatile instruments of scientific exploration. They have enabled remarkable scientific discoveries and important technological advances that span all programs within the DOE Office of Science (DOE/SC). The importance of accelerators to the DOE/SC mission is evident from an examination of the DOE document, ''Facilities for the Future of Science: A Twenty-Year Outlook''. Of the 28 facilities listed, 13 involve accelerators. Thanks to SciDAC, a powerful suite of parallel simulation tools has been developed that represent a paradigm shift in computational accelerator science. Simulations that used to take weeks or more now take hours, and simulations that were once thought impossible are now performed routinely. These codes have been applied to many important projects of DOE/SC including existing facilities (the Tevatron complex, the Relativistic Heavy Ion Collider), facilities under construction (the Large Hadron Collider, the Spallation Neutron Source, the Linac Coherent Light Source), and to future facilities (the International Linear Collider, the Rare Isotope Accelerator). The new codes have also been used to explore innovative approaches to charged particle acceleration. These approaches, based on the extremely intense fields that can be present in lasers and plasmas, may one day provide a path to the outermost reaches of the energy frontier. Furthermore, they could lead to compact, high-gradient accelerators that would have huge consequences for US science and technology, industry, and medicine. In this talk I will describe the new accelerator modeling capabilities developed under SciDAC, the essential role of multi-disciplinary collaboration with applied mathematicians, computer scientists, and other IT experts in developing these capabilities, and provide examples of how the codes have been used to support DOE/SC accelerator projects.

  6. Extraordinary tools for extraordinary science: the impact of SciDAC on accelerator science and technology

    NASA Astrophysics Data System (ADS)

    Ryne, Robert D.

    2006-09-01

    Particle accelerators are among the most complex and versatile instruments of scientific exploration. They have enabled remarkable scientific discoveries and important technological advances that span all programs within the DOE Office of Science (DOE/SC). The importance of accelerators to the DOE/SC mission is evident from an examination of the DOE document, ''Facilities for the Future of Science: A Twenty-Year Outlook.'' Of the 28 facilities listed, 13 involve accelerators. Thanks to SciDAC, a powerful suite of parallel simulation tools has been developed that represent a paradigm shift in computational accelerator science. Simulations that used to take weeks or more now take hours, and simulations that were once thought impossible are now performed routinely. These codes have been applied to many important projects of DOE/SC including existing facilities (the Tevatron complex, the Relativistic Heavy Ion Collider), facilities under construction (the Large Hadron Collider, the Spallation Neutron Source, the Linac Coherent Light Source), and to future facilities (the International Linear Collider, the Rare Isotope Accelerator). The new codes have also been used to explore innovative approaches to charged particle acceleration. These approaches, based on the extremely intense fields that can be present in lasers and plasmas, may one day provide a path to the outermost reaches of the energy frontier. Furthermore, they could lead to compact, high-gradient accelerators that would have huge consequences for US science and technology, industry, and medicine. In this talk I will describe the new accelerator modeling capabilities developed under SciDAC, the essential role of multi-disciplinary collaboration with applied mathematicians, computer scientists, and other IT experts in developing these capabilities, and provide examples of how the codes have been used to support DOE/SC accelerator projects.

  7. 14 CFR 1260.35 - Investigative Requirements.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... (a) NASA reserves the right to perform security checks and to deny or restrict access to a NASA Center, facility, or computer system, or to NASA technical information, as NASA deems appropriate. To the...

  8. 14 CFR 1260.35 - Investigative Requirements.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... (a) NASA reserves the right to perform security checks and to deny or restrict access to a NASA Center, facility, or computer system, or to NASA technical information, as NASA deems appropriate. To the...

  9. 14 CFR 1260.35 - Investigative Requirements.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... January 2004 (a) NASA reserves the right to perform security checks and to deny or restrict access to a NASA Center, facility, or computer system, or to NASA technical information, as NASA deems appropriate...

  10. 14 CFR 1260.35 - Investigative Requirements.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... (a) NASA reserves the right to perform security checks and to deny or restrict access to a NASA Center, facility, or computer system, or to NASA technical information, as NASA deems appropriate. To the...

  11. 14 CFR 1260.35 - Investigative Requirements.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... (a) NASA reserves the right to perform security checks and to deny or restrict access to a NASA Center, facility, or computer system, or to NASA technical information, as NASA deems appropriate. To the...

  12. Simulation and control of a 20 kHz spacecraft power system

    NASA Technical Reports Server (NTRS)

    Wasynczuk, O.; Krause, P. C.

    1988-01-01

    A detailed computer representation of four Mapham inverters connected in a series-parallel arrangement has been implemented. System performance is illustrated by computer traces for the four Mapham inverters connected to a Litz cable with parallel resistance and dc receiver loads at the receiving end of the transmission cable. Methods of voltage control and load sharing between the inverters are demonstrated. The detailed computer representation is also used to design, and to demonstrate the advantages of, a feed-forward voltage control strategy. These results illustrate that, with a computer simulation of this type, the performance and control of spacecraft power systems may be investigated with relative ease.

  13. Designing Facilities for Collaborative Operations

    NASA Technical Reports Server (NTRS)

    Norris, Jeffrey; Powell, Mark; Backes, Paul; Steinke, Robert; Tso, Kam; Wales, Roxana

    2003-01-01

    A methodology for designing operational facilities for collaboration by multiple experts has begun to take shape as an outgrowth of a project to design such facilities for scientific operations of the planned 2003 Mars Exploration Rover (MER) mission. The methodology could also be applicable to the design of military "situation rooms" and other facilities for terrestrial missions. It was recognized in this project that modern mission operations depend heavily upon the collaborative use of computers. It was further recognized that tests have shown that the layout of a facility exerts a dramatic effect on the efficiency and endurance of the operations staff. The facility designs and the methodology developed during the project reflect this recognition. One element of the methodology is a metric, called effective capacity, that was created for use in evaluating proposed MER operational facilities and may also be useful for evaluating other collaboration spaces, including meeting rooms and military situation rooms. The effective capacity of a facility is defined as the number of people in the facility who can be meaningfully engaged in its operations. A person is considered to be meaningfully engaged if the person can (1) see, hear, and communicate with everyone else present; (2) see the material under discussion (typically data on a piece of paper, computer monitor, or projection screen); and (3) provide input to the product under development by the group. The effective capacity of a facility is less than the number of people that can physically fit in the facility. For example, a typical office that contains a desktop computer has an effective capacity of about 4, while a small conference room that contains a projection screen has an effective capacity of around 10. Little or no benefit would be derived from allowing the number of persons in an operational facility to exceed its effective capacity: at best, the operations staff would be underutilized; at worst, operational performance would deteriorate. Elements of this methodology were applied to the design of three operations facilities for a series of rover field tests. These tests were observed by human-factors researchers, and their conclusions are being used to refine and extend the methodology for use in the final design of the MER operations facility. Further work is underway to evaluate the use of personal digital assistant (PDA) units as portable input interfaces and communication devices in future mission operations facilities. A PDA equipped with Ethernet, Bluetooth, or another wireless networking technology would cost less than a complete computer system and would enable a collaborator to communicate electronically with computers and with other collaborators while moving freely within the virtual environment created by a shared immersive graphical display.

  14. Advanced Simulation and Computing Fiscal Year 2016 Implementation Plan, Version 0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCoy, M.; Archer, B.; Hendrickson, B.

    2015-08-27

    The Stockpile Stewardship Program (SSP) is an integrated technical program for maintaining the safety, surety, and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational capabilities to support these programs. The purpose of this IP is to outline key work requirements to be performed and to control individual work activities within the scope of work. Contractors may not deviate from this plan without a revised WA or subsequent IP.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bobyshev, A.; DeMar, P.; Grigaliunas, V.

    The LHC is entering its fourth year of production operation. Most Tier1 facilities have been in operation for almost a decade, when development and ramp-up efforts are included. The LHC's distributed computing model is based on the availability of high-capacity, high-performance network facilities for both WAN and LAN data movement, particularly within the Tier1 centers. As a result, the Tier1 centers tend to be on the leading edge of data center networking technology. In this paper, we analyze past and current developments in Tier1 LAN networking, and extrapolate where we anticipate networking technology is heading. Our analysis examines the following areas: the evolution of Tier1 centers to their current state; evolving data center networking models and how they apply to Tier1 centers; the impact of emerging network technologies (e.g., 10GE-connected hosts, 40GE/100GE links, IPv6) on Tier1 centers; trends in WAN data movement and the emergence of software-defined WAN network capabilities; and network virtualization.

  16. Towards Portable Large-Scale Image Processing with High-Performance Computing.

    PubMed

    Huo, Yuankai; Blaber, Justin; Damon, Stephen M; Boyd, Brian D; Bao, Shunxing; Parvathaneni, Prasanna; Noguera, Camilo Bermudez; Chaganti, Shikha; Nath, Vishwesh; Greer, Jasmine M; Lyu, Ilwoo; French, William R; Newton, Allen T; Rogers, Baxter P; Landman, Bennett A

    2018-05-03

    High-throughput, large-scale medical image computing demands tight integration of high-performance computing (HPC) infrastructure for data storage, job distribution, and image processing. The Vanderbilt University Institute for Imaging Science (VUIIS) Center for Computational Imaging (CCI) has constructed a large-scale image storage and processing infrastructure that is composed of (1) a large-scale image database using the eXtensible Neuroimaging Archive Toolkit (XNAT), (2) a content-aware job scheduling platform using the Distributed Automation for XNAT pipeline automation tool (DAX), and (3) a wide variety of encapsulated image processing pipelines called "spiders." The VUIIS CCI medical image data storage and processing infrastructure has housed and processed nearly a half-million medical image volumes with the Vanderbilt Advanced Computing Center for Research and Education (ACCRE), the HPC facility at Vanderbilt University. The initial deployment was native (i.e., direct installations on a bare-metal server) within the ACCRE hardware and software environments, which led to issues of portability and sustainability. First, it could be laborious to deploy the entire VUIIS CCI medical image data storage and processing infrastructure to another HPC center with varying hardware infrastructure, library availability, and software permission policies. Second, the spiders were not developed in an isolated manner, which led to software dependency issues during system upgrades or remote software installation. To address such issues, herein we describe recent innovations using containerization techniques with XNAT/DAX, which isolate the VUIIS CCI medical image data storage and processing infrastructure from the underlying hardware and software environments. The newly presented XNAT/DAX solution has the following new features: (1) multi-level portability from the system level to the application level, (2) flexible and dynamic software development and expansion, and (3) scalable spider deployment compatible with HPC clusters and local workstations.
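
    The abstract does not include the actual DAX container interface, but the portability idea can be sketched as follows: a pipeline packaged as a container image runs unchanged on an HPC node or a local workstation, with only the data directory bound in from the host. The image name, paths, and script below are hypothetical placeholders; only the Singularity command-line call ("singularity exec --bind") is a standard container invocation.

        import subprocess
        from pathlib import Path

        def run_spider(image, session_dir, script, args=()):
            """Run one containerized processing pipeline ("spider") on one
            imaging session. The container carries the pipeline's entire
            software stack, so only the data directory comes from the host."""
            cmd = [
                "singularity", "exec",
                "--bind", f"{session_dir}:/data",  # host data mapped into the container
                str(image),
                "python", script, *args,
            ]
            subprocess.run(cmd, check=True)

        # Hypothetical usage: the same call works under a cluster scheduler or locally.
        run_spider(Path("spider_fmri_preproc.simg"),
                   Path("/scratch/project/subj01_sess01"),
                   "/opt/spider/run.py",
                   ["--input", "/data", "--output", "/data/derived"])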

  17. High-precision arithmetic in mathematical physics

    DOE PAGES

    Bailey, David H.; Borwein, Jonathan M.

    2015-05-12

    For many scientific calculations, particularly those involving empirical data, IEEE 32-bit floating-point arithmetic produces results of sufficient accuracy, while for other applications IEEE 64-bit floating-point arithmetic is more appropriate. But for some very demanding applications, even higher levels of precision are often required. This article discusses the challenge of high-precision computation in the context of mathematical physics and highlights what facilities are required to support future computation, in light of emerging developments in computer architecture.
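
    As a concrete illustration of why 64-bit arithmetic can be insufficient, the sketch below evaluates an expression that suffers catastrophic cancellation, first in IEEE 64-bit arithmetic and then at 50-digit precision using the mpmath library (chosen here only for illustration; the article surveys high-precision facilities more broadly).

        import math
        from mpmath import mp, mpf, cos

        # f(x) = (1 - cos(x)) / x**2 approaches 0.5 as x -> 0, but in 64-bit
        # arithmetic the subtraction 1 - cos(x) cancels essentially every
        # significant digit when x is tiny.
        x = 1e-8
        print((1.0 - math.cos(x)) / x**2)   # prints 0.0: all accuracy lost

        mp.dps = 50                         # 50 decimal digits of working precision
        xm = mpf("1e-8")
        print((1 - cos(xm)) / xm**2)        # ~0.49999999999999999583..., correct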

  18. Strategy and methodology for rank-ordering Virginia state agencies regarding solar attractiveness and identification of specific project possibilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hewett, R.

    1997-12-31

    This paper describes the strategy and computer processing system that NREL and the Virginia Department of Mines, Minerals and Energy (DMME), the state energy office, are developing for computing solar attractiveness scores for state agencies and the individual facilities or buildings within each agency. In the case of an agency, solar attractiveness is a measure of the extent to which that agency has a significant number of facilities for which solar is potentially promising. In the case of a facility, solar attractiveness is a measure of its potential for being a good, economically viable candidate for a solar water heating system. Virginia state agencies are charged with reducing fossil energy and electricity use and expense. DMME is responsible for working with them to achieve these goals and for managing the state's energy consumption and cost monitoring program. This is done using the Fast Accounting System for Energy Reporting (FASER) computerized energy accounting and tracking system and database. Agencies report energy use and expenses (by individual facility and energy type) to DMME quarterly. DMME is also responsible for providing technical and other assistance services to agencies and facilities interested in investigating the use of solar. Since Virginia has approximately 80 agencies operating over 8,000 energy-consuming facilities, and since DMME's resources are limited, it is interested in being able to determine: (1) on which agencies to focus; (2) specific facilities on which to focus within each high-priority agency; and (3) irrespective of agency, which facilities are the most promising potential candidates for solar. The computer processing system described in this paper computes numerical solar attractiveness scores for the state's agencies and the individual facilities using the energy use and cost data in the FASER system database and the state's and NREL's experience in implementing, testing, and evaluating solar water heating systems in commercial and government facilities.
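
    The scoring model itself is not given in this summary; the sketch below shows only the general shape of such a ranking, with a made-up facility score (annual fuel cost a solar water-heating system could displace) and a made-up agency score (count of promising facilities). All formulas, numbers, and thresholds are hypothetical.

        # Hypothetical illustration only: the actual FASER-based scoring model
        # is not published in this abstract. Assume each facility record carries
        # annual water-heating energy use (MMBtu) and the unit cost of the fuel
        # it displaces ($/MMBtu).
        def facility_score(wh_load_mmbtu, fuel_cost_per_mmbtu, solar_fraction=0.5):
            """Rank facilities by the annual fuel cost a solar water-heating
            system could plausibly displace."""
            return wh_load_mmbtu * solar_fraction * fuel_cost_per_mmbtu

        def agency_score(facilities, threshold=5000.0):
            """Score an agency by how many of its facilities look promising."""
            return sum(1 for f in facilities if facility_score(*f) >= threshold)

        # Example: three (load, cost) pairs for one agency's facilities.
        agency = [(900.0, 12.0), (150.0, 9.0), (2200.0, 14.0)]
        print([round(facility_score(*f)) for f in agency])  # per-facility scores
        print(agency_score(agency))                         # 2 promising facilities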

  19. Close to real life. [Solving for transonic flow about lifting airfoils using supercomputers]

    NASA Technical Reports Server (NTRS)

    Peterson, Victor L.; Bailey, F. Ron

    1988-01-01

    NASA's Numerical Aerodynamic Simulation (NAS) facility for CFD modeling of highly complex aerodynamic flows employs as its basic hardware two Cray-2s, an ETA-10 Model Q, an Amdahl 5880 mainframe computer that furnishes both support processing and access to 300 Gbytes of disk storage, several minicomputers and superminicomputers, and a Thinking Machines 16,000-device 'Connection Machine' processor. NAS, which was the first supercomputer facility to standardize operating-system and communication software on all processors, has performed important Space Shuttle aerodynamics simulations and will be critical to the configurational refinement of the National Aerospace Plane and its integrated powerplant, which will involve complex, high-temperature reactive gasdynamic computations.

  20. Definition of performance specifications for automated Analytical Electrophoresis Facility (AAEF)

    NASA Technical Reports Server (NTRS)

    Brooks, D. E.

    1976-01-01

    In order to provide specifications for the automated Analytical Electrophoresis Facility (AAEF) that would satisfy the broadest variety of demands of a future user community, a survey was carried out of all those people identified as having published papers on cell electrophoresis in the past four years. A computer search of the relevant literature was conducted, from which a list of 87 investigators was derived and defined as the user community for purposes of the mailing. A questionnaire covering the areas of performance that required definition was developed and subsequently circulated to the user community. Based on the responses to this survey, performance specifications were assembled.

  1. Facility-level outcome performance measures for nursing homes.

    PubMed

    Porell, F; Caro, F G

    1998-12-01

    Risk-adjusted nursing home performance scores were developed for four health outcomes and five quality indicators from resident-level longitudinal case-mix reimbursement data for Medicaid residents of more than 500 nursing homes in Massachusetts. Facility performance was measured by comparing actual resident outcomes with expected outcomes derived from quarterly predictions of resident-level econometric models over a 3-year period (1991-1994). Performance measures were tightly distributed among facilities in the state. The intercorrelations among the nine outcome performance measures were relatively low and not uniformly positive. Performance measures were not highly associated with various structural facility attributes. For most outcomes, longitudinal analyses revealed only modest correlations between a facility's performance scores from one time period to the next. Relatively few facilities exhibited consistently superior or inferior performance over time. The findings have implications for the practical use of facility outcome performance measures for quality assurance and reimbursement purposes in the near future.

  2. Experimental and Computational Investigation of Multiple Injection Ports in a Convergent-Divergent Nozzle for Fluidic Thrust Vectoring

    NASA Technical Reports Server (NTRS)

    Waithe, Kenrick A.; Deere, Karen A.

    2003-01-01

    A computational and experimental study was conducted to investigate the effects of multiple injection ports in a two-dimensional, convergent-divergent nozzle for fluidic thrust vectoring. The concept of multiple injection ports was conceived to enhance the thrust vectoring capability of a convergent-divergent nozzle over that of a single injection port without increasing the secondary mass flow rate requirements. The experimental study was conducted at static conditions in the Jet Exit Test Facility of the 16-Foot Transonic Tunnel Complex at NASA Langley Research Center. Internal nozzle performance was obtained at nozzle pressure ratios up to 10, with secondary nozzle pressure ratios up to 1, for five configurations. The computational study was conducted using the Reynolds-averaged Navier-Stokes computational fluid dynamics code PAB3D with two-equation turbulence closure and linear Reynolds stress modeling. Internal nozzle performance was predicted for nozzle pressure ratios up to 10 with a secondary nozzle pressure ratio of 0.7 for two configurations. Results from the experimental study indicate a benefit to multiple injection ports in a convergent-divergent nozzle. In general, increasing the number of injection ports from one to two increased the pitch thrust vectoring capability without any thrust performance penalties at nozzle pressure ratios less than 4 with high secondary pressure ratios. Results from the computational study are in excellent agreement with the experimental results and validate PAB3D as a tool for predicting internal nozzle performance of a two-dimensional, convergent-divergent nozzle with multiple injection ports.

  3. Experimental and analytical comparison of flowfields in a 110 N (25 lbf) H2/O2 rocket

    NASA Technical Reports Server (NTRS)

    Reed, Brian D.; Penko, Paul F.; Schneider, Steven J.; Kim, Suk C.

    1991-01-01

    A gaseous hydrogen/gaseous oxygen 110 N (25 lbf) rocket was examined through the RPLUS code using the full Navier-Stokes equations with finite-rate chemistry. Performance tests were conducted on the rocket in an altitude test facility. Preliminary parametric analyses were performed for a range of mixture ratios and fuel film cooling percentages. It is shown that the computed values of specific impulse and characteristic exhaust velocity follow the trend of the experimental data. Specific impulse computed by the code is lower than the comparable test values by about two to three percent. The computed characteristic exhaust velocity values are lower than the comparable test values by three to four percent. Thrust coefficients computed by the code are found to be within two percent of the measured values. It is concluded that the discrepancy between computed and experimental performance values could not be attributed to experimental uncertainty.
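
    The two performance figures compared in the paper are standard rocket quantities; a minimal sketch of how they are computed from test or CFD data follows. The numerical values are illustrative placeholders, not the paper's data.

        G0 = 9.80665  # standard gravity, m/s^2

        def specific_impulse(thrust_n, mdot_kg_s):
            """Specific impulse Isp = F / (mdot * g0), in seconds."""
            return thrust_n / (mdot_kg_s * G0)

        def char_exhaust_velocity(pc_pa, throat_area_m2, mdot_kg_s):
            """Characteristic exhaust velocity c* = Pc * At / mdot, in m/s."""
            return pc_pa * throat_area_m2 / mdot_kg_s

        def percent_low(computed, measured):
            """How far the computed value falls below the measured value."""
            return 100.0 * (measured - computed) / measured

        # Illustrative 110 N class numbers: a measured Isp and a code value ~2.5% low.
        isp_test = specific_impulse(110.0, 0.0268)   # ~418 s
        isp_code = 0.975 * isp_test
        print(round(isp_test, 1), round(isp_code, 1),
              round(percent_low(isp_code, isp_test), 1))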

  4. Test Facilities and Experience on Space Nuclear System Developments at the Kurchatov Institute

    NASA Astrophysics Data System (ADS)

    Ponomarev-Stepnoi, Nikolai N.; Garin, Vladimir P.; Glushkov, Evgeny S.; Kompaniets, George V.; Kukharkin, Nikolai E.; Madeev, Vicktor G.; Papin, Vladimir K.; Polyakov, Dmitry N.; Stepennov, Boris S.; Tchuniyaev, Yevgeny I.; Tikhonov, Lev Ya.; Uksusov, Yevgeny I.

    2004-02-01

    The complexity of space fission systems and the stringency of requirements on minimizing weight and dimensions, along with the desire to decrease development expenditures, demand experimental work whose results can be used in design, safety substantiation, and licensing procedures. Experimental facilities are intended to solve the following tasks: obtaining benchmark data for computer code validation, substantiating design solutions when computational efforts are too expensive, controlling quality in the production process, and providing ``iron'' substantiation of criticality safety design solutions for licensing and public relations. The NARCISS and ISKRA critical facilities, and the unique ORM facility for shielding investigations at the operating OR nuclear research reactor, were created at the Kurchatov Institute to address these tasks. The range of activities performed at these facilities within the previous Russian nuclear power system programs is briefly described in the paper. This experience should be analyzed in terms of a methodological approach to the development of future space nuclear systems (that analysis is beyond the scope of this paper). Because these facilities are available for experiments, a brief description of their critical assemblies and characteristics is given in this paper.

  5. Performance Assessment Institute-NV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lombardo, Joesph

    2012-12-31

    The National Supercomputing Center for Energy and the Environment intends to purchase a multi-purpose computer cluster in support of the Performance Assessment Institute (PA Institute). The PA Institute will serve as a research consortium located in Las Vegas, Nevada, with a membership that includes national laboratories, universities, industry partners, and domestic and international governments. This center will provide a one-of-a-kind centralized facility for the accumulation of information for use by institutions of higher learning, the U.S. Government, regulatory agencies, and approved users. This initiative will enhance and extend High Performance Computing (HPC) resources in Nevada to support critical national and international needs in "scientific confirmation". The PA Institute will be promoted as the leading modeling, learning, and research center worldwide. The program proposes to utilize the existing supercomputing capabilities and alliances of the University of Nevada, Las Vegas as a base, and to extend these resources and capabilities through a collaborative relationship with its membership. The PA Institute will provide an academic setting for interactive sharing, learning, mentoring, and monitoring of multi-disciplinary performance assessment and performance confirmation information. The role of the PA Institute is to facilitate research, knowledge-increase, and knowledge-sharing among users.

  6. High Performance Computing Meets Energy Efficiency - Continuum Magazine | NREL

    Science.gov Websites

    Simulation of turbines by Patrick J. Moriarty and Matthew J. Churchfield, NREL. The new High Performance Computing Data Center at the National Renewable Energy Laboratory (NREL) hosts high-speed, high-volume data

  7. Resilient workflows for computational mechanics platforms

    NASA Astrophysics Data System (ADS)

    Nguyên, Toàn; Trifan, Laurentiu; Désidéri, Jean-Antoine

    2010-06-01

    Workflow management systems have recently been the focus of much interest and many research and deployment efforts for scientific applications worldwide [26, 27]. Their ability to abstract applications by wrapping application codes has also underscored the usefulness of such systems for multidiscipline applications [23, 24]. When complex applications need to provide seamless interfaces hiding the technicalities of the computing infrastructures, their high-level modeling, monitoring, and execution functionalities help give production teams seamless and effective facilities [25, 31, 33]. Software integration infrastructures based on programming paradigms such as Python, Matlab, and Scilab have also provided evidence of the usefulness of such approaches for the tight coupling of multidiscipline application codes [22, 24]. In addition, high-performance computing based on multi-core, multi-cluster infrastructures opens new opportunities for more accurate, more extensive, and more robust multi-discipline simulations in the decades to come [28]. This supports the goal of full flight-dynamics simulation for 3D aircraft models within the next decade, opening the way to virtual flight tests and certification of aircraft in the future [23, 24, 29].

  8. Testing Small CPAS Parachutes Using HIVAS

    NASA Technical Reports Server (NTRS)

    Ray, Eric S.; Hennings, Elsa; Bernatovich, Michael A.

    2013-01-01

    The High Velocity Airflow System (HIVAS) facility at the Naval Air Warfare Center (NAWC) at China Lake was successfully used as an alternative to flight test to determine the parachute drag performance of two small Capsule Parachute Assembly System (CPAS) canopies. A similar parachute with known performance was also tested as a control. Real-time computations of drag coefficient were unrealistically low. This is because HIVAS produces a non-uniform flow that decays rapidly away from a high central core flow. Additional calibration runs were performed to characterize this flow, assuming radial symmetry about the centerline. The flow field was used to post-process effective flow velocities at each throttle setting and parachute diameter using the definition of the momentum flux factor. Because one parachute had significant oscillations, additional calculations were required to estimate the projected flow at off-axis angles. The resulting drag data from HIVAS compared favorably to previously estimated parachute performance based on scaled data from analogous CPAS parachutes. The data will improve drag area distributions in the next version of the CPAS Model Memo.
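
    The correction described here amounts to replacing the centerline velocity with a momentum-flux-weighted average over the canopy area. A minimal sketch under the paper's radial-symmetry assumption follows; the velocity profile and dimensions are invented for illustration, not taken from the test data.

        import math

        def effective_velocity(v_of_r, radius_m, n=2000):
            """Momentum-flux-weighted effective velocity over a circular area:
            V_eff = sqrt( (1/A) * integral of V(r)^2 dA ), evaluated with a
            midpoint rule in r, assuming radial symmetry about the centerline."""
            dr = radius_m / n
            total = 0.0
            for i in range(n):
                r = (i + 0.5) * dr
                total += v_of_r(r) ** 2 * 2.0 * math.pi * r * dr
            return math.sqrt(total / (math.pi * radius_m ** 2))

        def core_flow(r):
            # Invented decaying core-flow profile: fast on the centerline,
            # much slower toward the canopy skirt (r in metres, V in m/s).
            return 60.0 * math.exp(-(r / 1.5) ** 2)

        # Using the ~60 m/s core value would overstate dynamic pressure and make
        # the drag coefficient look too low; the effective velocity is far lower.
        print(round(effective_velocity(core_flow, 3.0), 1))   # ~21 m/s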

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCoy, Michel; Archer, Bill; Hendrickson, Bruce

    The Stockpile Stewardship Program (SSP) is an integrated technical program for maintaining the safety, surety, and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational capabilities to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources that support annual stockpile assessment and certification, study advanced nuclear weapons design and manufacturing processes, analyze accident scenarios and weapons aging, and provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balance of resources, including technical staff, hardware, simulation software, and computer science solutions. ASC is now focused on increasing predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (sufficient resolution, dimensionality, and scientific details), and quantifying critical margins and uncertainties. Resolving each issue requires increasingly difficult analyses because the aging process has progressively moved the stockpile further away from the original test base. Where possible, the program also enables the use of high performance computing (HPC) and simulation tools to address broader national security needs, such as foreign nuclear weapon assessments and counter nuclear terrorism.

  10. Performance optimization of Qbox and WEST on Intel Knights Landing

    NASA Astrophysics Data System (ADS)

    Zheng, Huihuo; Knight, Christopher; Galli, Giulia; Govoni, Marco; Gygi, Francois

    We present the optimization of the electronic structure codes Qbox and WEST targeting the Intel® Xeon Phi™ processor, codenamed Knights Landing (KNL). Qbox is an ab initio molecular dynamics code based on plane-wave density functional theory (DFT), and WEST is a post-DFT code for excited-state calculations within many-body perturbation theory. Both Qbox and WEST employ highly scalable algorithms that enable accurate large-scale electronic structure calculations on leadership-class supercomputer platforms beyond 100,000 cores, such as Mira and Theta at the Argonne Leadership Computing Facility. In this work, features of the KNL architecture (e.g., hierarchical memory) are explored to achieve higher performance in key algorithms of the Qbox and WEST codes and to develop a road map for further development targeting next-generation computing architectures. In particular, the optimizations of the Qbox and WEST codes on the KNL platform will target efficient large-scale electronic structure calculations of nanostructured materials exhibiting complex structures, and the prediction of their electronic and thermal properties for use in solar and thermal energy conversion devices. This work was supported by MICCoM, as part of the Comp. Mats. Sci. Program funded by the U.S. DOE, Office of Sci., BES, MSE Division. This research used resources of the ALCF, which is a DOE Office of Sci. User Facility under Contract DE-AC02-06CH11357.

  11. Requirements for facilities and measurement techniques to support CFD development for hypersonic aircraft

    NASA Technical Reports Server (NTRS)

    Sellers, William L., III; Dwoyer, Douglas L.

    1992-01-01

    The design of a hypersonic aircraft poses unique challenges to the engineering community. Problems with duplicating flight conditions in ground-based facilities have made performance predictions risky. Computational fluid dynamics (CFD) has been proposed as an additional means of providing design data. At the present time, CFD codes are being validated against sparse experimental data and then used to predict performance at flight conditions with generally unknown levels of uncertainty. This paper discusses the facilities and measurement techniques that are required to support CFD development for the design of hypersonic aircraft. Illustrations are given of recent success in combining experiments and direct numerical simulation in CFD model development and validation for hypersonic perfect-gas flows.

  12. Evaluative studies in nuclear medicine research. Emission-computed tomography assessment. Progress report 1 January-15 August 1981

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Potchen, E.J.

    Questions regarding what imaging performance goals need to be met to produce effective biomedical research using positron emission computed tomography (PECT), how near those performance goals are to being realized by imaging systems, and the dependence of currently unachieved performance goals on design and operational factors have been addressed in the past year, along with refinement of economic estimates for the capital and operating costs of a PECT research facility. The two primary sources of information have been solicitations of expert opinion and review of the current literature. (ACR)

  13. Understanding the Size-Dependent Sodium Storage Properties of Na2C6O6-Based Organic Electrodes for Sodium-Ion Batteries.

    PubMed

    Wang, Yaqun; Ding, Yu; Pan, Lijia; Shi, Ye; Yue, Zhuanghao; Shi, Yi; Yu, Guihua

    2016-05-11

    Organic electroactive materials represent a new generation of sustainable energy storage technology due to their unique features, including environmental benignity, material sustainability, and highly tailorable properties. Here a carbonyl-based organic salt, Na2C6O6 (sodium rhodizonate, SR, dibasic), is systematically investigated for high-performance sodium-ion batteries. A combination of structural control, electrochemical analysis, and computational simulation shows that rational morphological control can lead to significantly improved sodium storage performance. A facile antisolvent method was developed to synthesize microbulk-, microrod-, and nanorod-structured SRs, which exhibit strongly size-dependent sodium-ion storage properties. The SR nanorods exhibited the best performance, delivering a reversible capacity of ~190 mA h g(-1) at 0.1 C with over 90% retention after 100 cycles. At a high rate of 10 C, 50% of the capacity can be obtained due to enhanced reaction kinetics, and this high electrochemical activity is maintained even at 80 °C. These results demonstrate a generic design route toward high-performance organic-based electrode materials for beyond-Li-ion batteries. Using such a biomass-derived organic electrode material enables access to sustainable energy storage devices with low cost, high electrochemical performance, and thermal stability.

  14. The ASCI Network for SC 2000: Gigabyte Per Second Networking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    PRATT, THOMAS J.; NAEGLE, JOHN H.; MARTINEZ JR., LUIS G.

    2001-11-01

    This document highlights the DISCOM distance computing and communication team's activities at the SC 2000 supercomputing conference in Dallas, Texas. This conference is sponsored by the IEEE and ACM. Sandia's participation in the conference has now spanned a decade; for the last five years Sandia National Laboratories, Los Alamos National Laboratory, and Lawrence Livermore National Laboratory have come together at the conference under the DOE's ASCI (Accelerated Strategic Computing Initiative) program rubric to demonstrate ASCI's emerging capabilities in computational science and our combined expertise in high performance computer science and communication networking developments within the program. At SC 2000, DISCOM demonstrated an infrastructure. DISCOM2 used this forum to demonstrate and focus communication developments, including a pre-standard implementation of 10 Gigabit Ethernet, the first gigabyte-per-second data IP network transfer application, and VPN technology that enabled a remote Distributed Resource Management tools demonstration. Additionally, a national OC48 POS network was constructed to support applications running between the show floor and home facilities. This network created the opportunity to test PSE's Parallel File Transfer Protocol (PFTP) across a network with speeds and distances similar to the then-proposed DISCOM WAN. We also supported the production networking needs of the convention exhibit floor. SCinet at SC 2000 showcased wireless networking, and the networking team had the opportunity to explore this emerging technology at the booth. This paper documents those accomplishments, discusses the details of their implementation, and describes how these demonstrations support DISCOM's overall strategies in high performance computing networking.

  15. Ways of achieving continuous service from computers

    NASA Technical Reports Server (NTRS)

    Quinn, M. J., Jr.

    1974-01-01

    This paper outlines the methods used in the real-time computer complex to keep computers operating. Methods include selectover, high-speed restart, and low-speed restart. The hardware and software needed to implement these methods is discussed as well as the system recovery facility, alternate device support, and timeout. In general, methods developed while supporting the Gemini, Apollo, and Skylab space missions are presented.

  16. PEGylated hybrid ytterbia nanoparticles as high-performance diagnostic probes for in vivo magnetic resonance and X-ray computed tomography imaging with low systemic toxicity

    NASA Astrophysics Data System (ADS)

    Liu, Zhen; Pu, Fang; Liu, Jianhua; Jiang, Liyan; Yuan, Qinghai; Li, Zhengqiang; Ren, Jinsong; Qu, Xiaogang

    2013-05-01

    Novel nanoparticulate contrast agents with low systemic toxicity and inexpensive character have exhibited more advantages over routinely used small-molecule contrast agents for the diagnosis and prognosis of disease. Herein, we designed and synthesized PEGylated hybrid ytterbia nanoparticles as high-performance nanoprobes for X-ray computed tomography (CT) imaging and magnetic resonance (MR) imaging both in vitro and in vivo. These well-defined nanoparticles were facile to prepare and cost-effective, meeting the criteria for a biomedical material. Compared with Iobitridol, routinely used in the clinic, our PEG-Yb2O3:Gd nanoparticles could provide significantly enhanced contrast at various clinical voltages ranging from 80 kVp to 140 kVp, owing to the high atomic number and well-positioned K-edge energy of ytterbium. With the doping of gadolinium, our nanoparticulate contrast agent could perform MR imaging simultaneously, revealing organ enrichment and bio-distribution similar to the CT imaging results. The improvement in imaging efficiency was mainly attributed to the high content of Yb and Gd in a single nanoparticle, making these nanoparticles suitable for dual-modal diagnostic imaging with a low single-injection dose. In addition, detailed toxicological study in vitro and in vivo indicated that the uniformly sized PEG-Yb2O3:Gd nanoparticles possessed excellent biocompatibility and overall safety. Electronic supplementary information (ESI) available. See DOI: 10.1039/c3nr00491k

  17. High performance computing and communications program

    NASA Technical Reports Server (NTRS)

    Holcomb, Lee

    1992-01-01

    A review of the High Performance Computing and Communications (HPCC) program is provided in vugraph format. The goals and objectives of this federal program are as follows: extend U.S. leadership in high performance computing and computer communications; disseminate the technologies to speed innovation and to serve national goals; and spur gains in industrial competitiveness by making high performance computing integral to design and production.

  18. Detailed Concepts in Performing Oversight on an Army Radiographic Inspection Site

    DTIC Science & Technology

    2017-03-01

    ... number of facilities that perform various nondestructive tests, inspections, and evaluations. The U.S. Army Armament Research, Development and ... procedures, and documentation in place to conform to nationally recognized standards. This report specifically reviews the radiographic testing ... Keywords: X-ray; nondestructive testing (NDT); radiographic testing (RT); computed tomography (CT).

  19. Integration of Panda Workload Management System with supercomputers

    NASA Astrophysics Data System (ADS)

    De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Nilsson, P.; Novikov, A.; Oleynik, D.; Panitkin, S.; Poyda, A.; Read, K. F.; Ryabinkin, E.; Teslyuk, A.; Velikhov, V.; Wells, J. C.; Wenaus, T.

    2016-09-01

    The Large Hadron Collider (LHC), operating at the international CERN laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited with the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment relies on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing across over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3+ petaFLOPS, the next LHC data-taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We describe a project aimed at integrating PanDA WMS with supercomputers in the United States, Europe, and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the supercomputer at the National Research Center "Kurchatov Institute", IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes. This implementation was tested with a variety of Monte Carlo workloads on several supercomputing platforms. We present our current accomplishments in running PanDA WMS on supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facility's infrastructure for High Energy and Nuclear Physics, as well as other data-intensive science applications, such as bioinformatics and astro-particle physics.
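
    A minimal sketch of the "light-weight MPI wrapper" idea follows (not the actual PanDA pilot code): MPI rank N of one batch allocation runs the N-th single-threaded payload, so many serial jobs fill one large allocation. The payload script and its argument list are hypothetical placeholders; the mpi4py calls are standard.

        import subprocess
        import sys
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()
        size = comm.Get_size()

        # Hypothetical list of serial work items, one per MPI rank.
        jobs = [f"--seed={i}" for i in range(size)]

        # Each rank launches one single-threaded payload on its own core.
        result = subprocess.run(
            [sys.executable, "simulate_event.py", jobs[rank]],  # hypothetical payload
            capture_output=True, text=True,
        )

        # Gather exit codes on rank 0 so the wrapper reports one batch job status.
        codes = comm.gather(result.returncode, root=0)
        if rank == 0:
            print("payload exit codes:", codes)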

  20. Computer aiding for low-altitude helicopter flight

    NASA Technical Reports Server (NTRS)

    Swenson, Harry N.

    1991-01-01

    A computer-aiding concept for low-altitude helicopter flight was developed and evaluated in a real-time piloted simulation. The concept included an optimal-control trajectory-generation algorithm based on dynamic programming, and a head-up display (HUD) presentation of a pathway-in-the-sky, a phantom aircraft, and a flight-path vector/predictor symbol. The trajectory-generation algorithm uses knowledge of the global mission requirements, a digital terrain map, aircraft performance capabilities, and advanced navigation information to determine a trajectory between mission waypoints that minimizes threat exposure by seeking valleys. The pilot evaluation was conducted at NASA Ames Research Center's Sim Lab facility in both the fixed-base Interchangeable Cab (ICAB) simulator and the moving-base Vertical Motion Simulator (VMS) by pilots representing NASA, the U.S. Army, and the U.S. Air Force. The pilots manually tracked the trajectory generated by the algorithm utilizing the HUD symbology. They were able to perform the tracking tasks satisfactorily while maintaining a high degree of awareness of the outside world.
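
    The valley-seeking idea is a classic dynamic-programming shortest-path computation. A toy version over a terrain-cost grid is sketched below; the flight algorithm additionally models aircraft performance, threats, and waypoints, none of which is reproduced here.

        def min_exposure_path(terrain):
            """terrain[row][col] holds the exposure cost of flying through that
            cell. The vehicle advances one column per step and may climb,
            descend, or hold: (r, c) is reached from (r-1 | r | r+1, c-1).
            Returns (total_cost, list_of_rows) for the cheapest left-to-right path."""
            rows, cols = len(terrain), len(terrain[0])
            INF = float("inf")
            cost = [[INF] * cols for _ in range(rows)]
            back = [[0] * cols for _ in range(rows)]
            for r in range(rows):
                cost[r][0] = terrain[r][0]
            for c in range(1, cols):
                for r in range(rows):
                    for pr in (r - 1, r, r + 1):
                        if 0 <= pr < rows and cost[pr][c - 1] + terrain[r][c] < cost[r][c]:
                            cost[r][c] = cost[pr][c - 1] + terrain[r][c]
                            back[r][c] = pr
            end = min(range(rows), key=lambda r: cost[r][cols - 1])
            path, r = [end], end
            for c in range(cols - 1, 0, -1):
                r = back[r][c]
                path.append(r)
            return cost[end][cols - 1], path[::-1]

        ridge = [[3, 4, 5, 4],    # toy exposure grid: high ridge on the top row,
                 [1, 2, 4, 2],    # a valley winding through the lower rows
                 [2, 1, 1, 1]]
        print(min_exposure_path(ridge))   # -> (4, [1, 2, 2, 2]): hugs the valley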

  1. Trace gas detection in hyperspectral imagery using the wavelet packet subspace

    NASA Astrophysics Data System (ADS)

    Salvador, Mark A. Z.

    This dissertation describes research into a new remote sensing method for detecting trace gases in hyperspectral and ultra-spectral data. The new method is based on the wavelet packet transform. It attempts to improve both the computational tractability and the detection of trace gases in airborne and spaceborne spectral imagery. Atmospheric trace gas research supports various Earth science disciplines, including climatology, volcanology, pollution monitoring, natural disasters, and intelligence and military applications. Hyperspectral and ultra-spectral data significantly increase the data glut of existing Earth science data sets. Spaceborne spectral data in particular significantly increase spectral resolution while supporting daily global collections of the Earth. Application of the wavelet packet transform to the spectral space of hyperspectral and ultra-spectral imagery data potentially improves remote sensing detection algorithms. It also facilitates the parallelization of these methods for high-performance computing. This research pursues two science goals: (1) developing a new spectral imagery detection algorithm, and (2) facilitating the parallelization of trace gas detection in spectral imagery data.
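
    A minimal sketch of the underlying decomposition follows, using the PyWavelets package (an assumption made for illustration; the dissertation's own detection statistic is not reproduced). A wavelet packet tree partitions one spectral trace into subbands, and a narrow spectral feature concentrates its energy in a few leaves, which is what a detector can exploit.

        import numpy as np
        import pywt

        # Synthetic "spectrum": a smooth continuum plus a narrow absorption-like line.
        x = np.linspace(0.0, 1.0, 256)
        spectrum = 1.0 - 0.3 * x + 0.2 * np.exp(-((x - 0.4) / 0.01) ** 2)

        # Full wavelet packet decomposition to level 3 with a Daubechies-4 wavelet.
        wp = pywt.WaveletPacket(data=spectrum, wavelet="db4", maxlevel=3)

        # Leaf nodes at level 3 partition the trace into subbands; the narrow
        # feature shows up as concentrated energy in a few high-frequency leaves.
        for node in wp.get_level(3, order="freq"):
            energy = float(np.sum(node.data ** 2))
            print(node.path, round(energy, 4))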

  2. On the Large-Scaling Issues of Cloud-based Applications for Earth Science Data

    NASA Astrophysics Data System (ADS)

    Hua, H.

    2016-12-01

    Next-generation science data systems are needed to address the incoming flood of data from new missions such as NASA's SWOT and NISAR, whose SAR data volumes and data throughput rates are an order of magnitude larger than those of present-day missions. Existing missions, such as OCO-2, may also require rapid turn-around for processing different science scenarios, where on-premise and even traditional HPC computing environments may not meet the processing needs. Additionally, traditional means of procuring hardware on-premise are already limited by facility capacity constraints for these new missions. Experience has shown that embracing efficient cloud computing approaches for large-scale science data systems requires more than just moving existing code to cloud environments. At large cloud scales, we need to deal with scaling and cost issues. We present our experiences deploying multiple instances of our hybrid-cloud computing science data system (HySDS) to support large-scale processing of Earth science data products. We explore optimization approaches to getting the best performance out of hybrid-cloud computing, as well as common issues that arise when dealing with large-scale computing. Novel approaches were utilized to do processing on Amazon's spot market, which can potentially offer 75%-90% cost savings but with an unpredictable computing environment driven by market forces.

  3. Numerical Viscous Flow Analysis of an Advanced Semispan Diamond-Wing Model at High-Lift Conditions

    NASA Technical Reports Server (NTRS)

    Ghaffari, F.; Biedron, R. T.; Luckring, J. M.

    2002-01-01

    Turbulent Navier-Stokes computational results are presented for an advanced diamond wing semispan model at low-speed, high-lift conditions. The numerical results are obtained in support of a wind-tunnel test that was conducted in the National Transonic Facility (NTF) at the NASA Langley Research Center. The model incorporated a generic fuselage and was mounted on the tunnel sidewall using a constant-width standoff. The analyses include: (1) the numerical simulation of the NTF empty-tunnel flow characteristics; (2) the semispan high-lift model with the standoff in the tunnel environment; (3) the semispan high-lift model with the standoff and viscous sidewall in free air; and (4) the semispan high-lift model without the standoff in free air. The computations were performed at conditions that correspond to a nominal approach and landing configuration. The wing surface pressure distributions computed for the model in both the tunnel and in free air agreed well with the corresponding experimental data, and both indicated small increments due to the wall interference effects. However, the wall interference effects were found to be more pronounced in the total measured and computed lift, drag, and pitching moment due to standard induced up-flow effects. Although the magnitudes of the computed forces and moment were slightly off compared to the measured data, the increments due to the wall interference effects were predicted well. Numerical predictions are also presented on the combined effects of the tunnel sidewall boundary layer and the standoff geometry on the fuselage forebody pressure distributions and the resulting impact on the overall configuration longitudinal aerodynamic characteristics.

  4. Lustre Distributed Name Space (DNE) Evaluation at the Oak Ridge Leadership Computing Facility (OLCF)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simmons, James S.; Leverman, Dustin B.; Hanley, Jesse A.

    This document describes the Lustre Distributed Name Space (DNE) evaluation carried out at the Oak Ridge Leadership Computing Facility (OLCF) between 2014 and 2015. DNE is a development project funded by OpenSFS to improve Lustre metadata performance and scalability. The development effort has been split into two parts: the first part (DNE P1) provides support for remote directories over remote Lustre Metadata Server (MDS) nodes and Metadata Target (MDT) devices, while the second phase (DNE P2) addresses split directories over multiple remote MDS nodes and MDT devices. The OLCF has been actively evaluating the performance, reliability, and functionality of both DNE phases. For these tests, internal OLCF testbeds were used. Results are promising, and the OLCF is planning a full DNE deployment on production systems in the mid-2016 timeframe.
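
    For readers unfamiliar with DNE, both phases are exercised from a Lustre client with the standard lfs utility; the sketch below drives it from Python for scripted test setup. The file system paths and MDT indices are hypothetical placeholders; the lfs subcommands shown are the standard ones for remote and striped directories.

        import subprocess

        def lfs(*args):
            """Invoke the standard Lustre client utility and fail loudly on error."""
            subprocess.run(["lfs", *args], check=True)

        # DNE phase 1: place a remote directory on a specific metadata target (MDT 1).
        lfs("mkdir", "-i", "1", "/lustre/testfs/remote_on_mdt1")

        # DNE phase 2: stripe one directory's entries across two MDTs, so
        # metadata operations in that directory are spread over both servers.
        lfs("setdirstripe", "-c", "2", "/lustre/testfs/striped_dir")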

  5. Apollo experience report: Real-time auxiliary computing facility development

    NASA Technical Reports Server (NTRS)

    Allday, C. E.

    1972-01-01

    The Apollo real-time auxiliary computing function and facility were an extension of the facility used during the Gemini Program. The facility was expanded to include support of all areas of flight control, and computer programs were developed for mission and mission-simulation support. The scope of the function was expanded to include prime mission support functions in addition to engineering evaluations, and the facility became a mandatory mission support facility. The facility functioned as a full-scale mission support activity until after the first manned lunar landing mission. After the Apollo 11 mission, the function and facility gradually reverted to a nonmandatory, offline, on-call operation because the real-time program flexibility was increased and verified sufficiently to eliminate the need for redundant computations. The evaluation of the facility and function and recommendations for future programs are discussed in this report.

  6. Controlling Infrastructure Costs: Right-Sizing the Mission Control Facility

    NASA Technical Reports Server (NTRS)

    Martin, Keith; Sen-Roy, Michael; Heiman, Jennifer

    2009-01-01

    Johnson Space Center's Mission Control Center is a space vehicle, space program agnostic facility. The current operational design is essentially identical to the original facility architecture that was developed and deployed in the mid-90s. In an effort to streamline the support costs of the mission-critical facility, the Mission Operations Division (MOD) of Johnson Space Center (JSC) has sponsored an exploratory project to evaluate and inject current state-of-the-practice Information Technology (IT) tools, processes, and technology into legacy operations. The IT industry has been trending toward a data-centric computer infrastructure for the past several years. Organizations facing challenges with facility operations costs are turning to creative solutions combining hardware consolidation, virtualization, and remote access to meet and exceed performance, security, and availability requirements. The Operations Technology Facility (OTF) organization at the Johnson Space Center has been chartered to build and evaluate a parallel Mission Control infrastructure, replacing the existing thick-client, distributed computing model and network architecture with a data center model utilizing virtualization to provide the MCC infrastructure as a service. The OTF will design a replacement architecture for the Mission Control Facility, leveraging hardware consolidation through the use of blade servers, increasing utilization rates for compute platforms through virtualization, and expanding connectivity options through the deployment of secure remote access. The architecture demonstrates the maturity of the technologies generally available in industry today and the ability to successfully abstract the tightly coupled relationship between thick-client software and legacy hardware into a hardware-agnostic "Infrastructure as a Service" capability that can scale to meet future requirements of new space programs and spacecraft. This paper discusses the benefits and difficulties that a migration to cloud-based computing philosophies has uncovered when compared to the legacy Mission Control Center architecture. The team consists of system and software engineers with extensive experience with the MCC infrastructure and the software currently used to support the International Space Station (ISS) and Space Shuttle Program (SSP).

  7. Simulation of the neutron flux in the irradiation facility at RA-3 reactor.

    PubMed

    Bortolussi, S; Pinto, J M; Thorp, S I; Farias, R O; Soto, M S; Sztejnberg, M; Pozzi, E C C; Gonzalez, S J; Gadan, M A; Bellino, A N; Quintana, J; Altieri, S; Miller, M

    2011-12-01

    A facility for the irradiation of sections of patients' explanted livers and lungs was constructed at the RA-3 reactor, Comisión Nacional de Energía Atómica, Argentina. The facility, located in the thermal column, is characterized by the possibility of inserting and extracting samples without the need to shut down the reactor. In order to reach the best levels of safety and efficacy of the treatment, it is necessary to perform accurate dosimetry. The ability to simulate the neutron flux and absorbed dose in the explanted organs, together with the experimental dosimetry, allows more precise and effective treatment plans to be set. To this end, a computational model of the entire reactor was set up, and the simulations were validated against the experimental measurements performed in the facility. Copyright © 2011 Elsevier Ltd. All rights reserved.

  8. Computational Science News | Computational Science | NREL

    Science.gov Websites

    -Cooled High-Performance Computing Technology at the ESIF (February 28, 2018). NREL Launches New Website for High-Performance Computing System Users: The National Renewable Energy Laboratory (NREL) Computational Science Center has launched a revamped website for users of the lab's high-performance computing (HPC) systems.

  9. The Center for Nanophase Materials Sciences

    NASA Astrophysics Data System (ADS)

    Lowndes, Douglas

    2005-03-01

    The Center for Nanophase Materials Sciences (CNMS) located at Oak Ridge National Laboratory (ORNL) will be the first DOE Nanoscale Science Research Center to begin operation, with construction to be completed in April 2005 and initial operations in October 2005. The CNMS' scientific program has been developed through workshops with the national community, with the goal of creating a highly collaborative research environment to accelerate discovery and drive technological advances. Research at the CNMS is organized under seven Scientific Themes selected to address challenges to understanding and to exploit particular ORNL strengths (see http://cnms.ornl.gov). These include extensive synthesis and characterization capabilities for soft, hard, nanostructured, magnetic and catalytic materials and their composites; neutron scattering at the Spallation Neutron Source and High Flux Isotope Reactor; computational nanoscience in the CNMS' Nanomaterials Theory Institute, utilizing facilities and expertise of the Center for Computational Sciences and the new Leadership Scientific Computing Facility at ORNL; a new CNMS Nanofabrication Research Laboratory; and a suite of unique and state-of-the-art instruments to be made reliably available to the national community for imaging, manipulation, and properties measurements on nanoscale materials in controlled environments. The new research facilities will be described together with the planned operation of the user research program, the latter illustrated by the current ``jump start'' user program that utilizes existing ORNL/CNMS facilities.

  10. Uncertainty analysis of thermocouple measurements used in normal and abnormal thermal environment experiments at Sandia's Radiant Heat Facility and Lurance Canyon Burn Site.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakos, James Thomas

    2004-04-01

    It would not be possible to confidently qualify weapon systems performance or validate computer codes without knowing the uncertainty of the experimental data used. This report provides uncertainty estimates associated with thermocouple data for temperature measurements from two of Sandia's large-scale thermal facilities. These two facilities (the Radiant Heat Facility (RHF) and the Lurance Canyon Burn Site (LCBS)) routinely gather data from normal and abnormal thermal environment experiments. They are managed by Fire Science & Technology Department 09132. Uncertainty analyses were performed for several thermocouple (TC) data acquisition systems (DASs) used at the RHF and LCBS. These analyses apply to Type K, chromel-alumel thermocouples of various types: fiberglass-sheathed TC wire and mineral-insulated, metal-sheathed (MIMS) TC assemblies, and are easily extended to other TC materials (e.g., copper-constantan). Several DASs were analyzed: (1) a Hewlett-Packard (HP) 3852A system, and (2) several National Instruments (NI) systems. The uncertainty analyses were performed on the entire system from the TC to the DAS output file. Uncertainty sources include TC mounting errors, ANSI standard calibration uncertainty for Type K TC wire, potential errors due to temperature gradients inside connectors, extension wire uncertainty, and DAS hardware uncertainties including noise, common mode rejection ratio, digital voltmeter accuracy, mV-to-temperature conversion, analog-to-digital conversion, and other possible sources. Typical results for 'normal' environments (e.g., maximum of 300-400 K) showed the total uncertainty to be about ±1% of the reading in absolute temperature. In high-temperature or high-heat-flux ('abnormal') thermal environments, total uncertainties range up to ±2-3% of the reading (maximum of 1300 K). The higher uncertainties in abnormal thermal environments are caused by increased errors due to the effects of imperfect TC attachment to the test item. 'Best practices' are provided in Section 9 to help the user obtain the best measurements possible.
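
    When independent error sources like those listed above are combined, the standard approach is a root-sum-square (RSS) combination. The sketch below shows that arithmetic on hypothetical component uncertainties expressed as percent of reading; the individual values are placeholders, not the report's actual uncertainty budget.

        # Root-sum-square (RSS) combination of independent uncertainty sources,
        # each expressed as percent of reading. Component values are invented.
        import math

        sources = {
            "TC wire calibration (ANSI Type K)": 0.75,
            "extension wire":                    0.30,
            "DAS hardware (noise, DVM, A/D)":    0.40,
            "mV-to-temperature conversion":      0.20,
        }

        total = math.sqrt(sum(u**2 for u in sources.values()))
        print(f"combined uncertainty: +/-{total:.2f}% of reading")  # ~0.92% here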

  11. The Design and Evaluation of "CAPTools"--A Computer Aided Parallelization Toolkit

    NASA Technical Reports Server (NTRS)

    Yan, Jerry; Frumkin, Michael; Hribar, Michelle; Jin, Haoqiang; Waheed, Abdul; Johnson, Steve; Cross, Jark; Evans, Emyr; Ierotheou, Constantinos; Leggett, Pete

    1998-01-01

    Writing applications for high performance computers is a challenging task. Although writing code by hand still offers the best performance, it is extremely costly and often not very portable. The Computer Aided Parallelization Tools (CAPTools) are a toolkit designed to help automate the mapping of sequential FORTRAN scientific applications onto multiprocessors. CAPTools consists of the following major components: an inter-procedural dependence analysis module that incorporates user knowledge; a 'self-propagating' data partitioning module driven via user guidance; an execution control mask generation and optimization module for the user to fine-tune parallel processing of individual partitions; a program transformation/restructuring facility for source code clean-up and optimization; a set of browsers through which the user interacts with CAPTools at each stage of the parallelization process; and a code generator supporting multiple programming paradigms on various multiprocessors. Besides describing the rationale behind the architecture of CAPTools, the parallelization process is illustrated via case studies involving structured and unstructured meshes. The programming process and the performance of the generated parallel programs are compared against other programming alternatives based on the NAS Parallel Benchmarks, ARC3D, and other scientific applications. Based on these results, a discussion on the feasibility of constructing architecture-independent parallel applications is presented.
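
    The heart of such data-partitioning support is deciding which slice of an array or iteration space each processor owns. The sketch below shows the kind of block-partition bookkeeping a tool like CAPTools generates automatically; the function and the example figures are an illustrative assumption, not actual CAPTools output.

        # Illustrative 1-D block partitioning of loop iterations 1..n across
        # p processors -- the bookkeeping a parallelization tool automates.

        def block_range(n, p, rank):
            """Return the (start, end) iterations owned by `rank` (0-based),
            giving the remainder iterations to the low-numbered ranks."""
            base, extra = divmod(n, p)
            start = rank * base + min(rank, extra)
            end = start + base + (1 if rank < extra else 0)
            return start + 1, end  # 1-based inclusive, Fortran-style

        n, p = 1000, 8
        for rank in range(p):
            print(f"processor {rank}: iterations {block_range(n, p, rank)}")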

  12. ASPS performance with large payloads onboard the Shuttle Orbiter. [Annular Suspension and Pointing System

    NASA Technical Reports Server (NTRS)

    Keckler, C. R.

    1980-01-01

    A high-fidelity digital computer simulation was used to establish the viability of the Annular Suspension and Pointing System (ASPS) for satisfying the pointing and stability requirements of facility-class payloads, such as the Solar Optical Telescope, when subjected to the Orbiter disturbance environment. The ASPS and its payload were subjected to disturbances resulting from crew motions in the Orbiter aft flight deck and VRCS thruster firings. Worst-case pointing errors of 0.005 arc-seconds were experienced under the simulated disturbance environment; this is well within the 0.08 arc-second requirement specified by the payload.

  13. Radio Astronomy at the Centre for High Performance Computing in South Africa

    NASA Astrophysics Data System (ADS)

    Catherine Cress; UWC Simulation Team

    2014-04-01

    I will present results on galaxy evolution and cosmology which we obtained using the supercomputing facilities at the CHPC. These include cosmological-scale N-body simulations modelling neutral hydrogen as well as the study of the clustering of radio galaxies to probe the relationship between dark and luminous matter in the universe. I will also discuss the various roles that the CHPC is playing in Astronomy in SA, including the provision of HPC for a variety of Astronomical applications, the provision of storage for radio data, our educational programs and our participation in planning for the SKA.

  14. Pre-Hardware Optimization of Spacecraft Image Processing Algorithms and Hardware Implementation

    NASA Technical Reports Server (NTRS)

    Kizhner, Semion; Petrick, David J.; Flatley, Thomas P.; Hestnes, Phyllis; Jentoft-Nilsen, Marit; Day, John H. (Technical Monitor)

    2002-01-01

    Spacecraft telemetry rates and telemetry product complexity have steadily increased over the last decade, presenting a problem for real-time processing by ground facilities. This paper proposes a solution to a related problem for the Geostationary Operational Environmental Satellite (GOES-8) image data processing and color picture generation application. Although large supercomputer facilities are the obvious heritage solution, they are very costly, making it imperative to seek a feasible alternative engineering solution at a fraction of the cost. The proposed solution is based on a Personal Computer (PC) platform and a synergy of optimized software algorithms and reconfigurable computing (RC) hardware technologies, such as Field Programmable Gate Arrays (FPGAs) and Digital Signal Processors (DSPs). It has been shown that this approach can provide superior, inexpensive performance for a chosen application on the ground station or on board a spacecraft.

  15. POLLUX: a program for simulated cloning, mutagenesis and database searching of DNA constructs.

    PubMed

    Dayringer, H E; Sammons, S A

    1991-04-01

    Computer support for research in biotechnology has developed rapidly and has provided several tools to aid the researcher. This report describes the capabilities of new computer software developed in this laboratory to aid in the documentation and planning of experiments in molecular biology. The program, POLLUX, provides a graphical medium for the entry, edit and manipulation of DNA constructs and a textual format for display and edit of construct descriptive data. Program operation and procedures are designed to mimic the actual laboratory experiments with respect to capability and the order in which they are performed. Flexible control over the content of the computer-generated displays and program facilities is provided by a mouse-driven menu interface. Programmed facilities for mutagenesis, simulated cloning and searching of the database from networked workstations are described.

  16. Computational model of gamma irradiation room at ININ

    NASA Astrophysics Data System (ADS)

    Rodríguez-Romo, Suemi; Patlan-Cardoso, Fernando; Ibáñez-Orozco, Oscar; Vergara Martínez, Francisco Javier

    2018-03-01

    In this paper, we present a model of the gamma irradiation room at the National Institute of Nuclear Research (ININ is its acronym in Spanish) in Mexico to improve the use of physics in dosimetry for human protection. We deal with air-filled ionization chambers and in-house scientific computing, framed in both the GEANT4 scheme and our analytical approach, to characterize the irradiation room. This room is the only secondary dosimetry facility in Mexico. Our aim is to optimize its experimental designs, facilities, and industrial applications of physical radiation. The computational results provided by our model are supported by all the known experimental data regarding the performance of the ININ gamma irradiation room and allow us to predict the values of the main variables related to this fully enclosed space to within an acceptable margin of error.
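
    An analytical approach of the kind mentioned typically starts from a point-source model, in which dose rate falls off with the inverse square of distance. The sketch below implements that textbook relation; the gamma-ray constant and source activity are generic example values, not parameters of the ININ sources.

        # Point-source, inverse-square estimate of air-kerma rate at distance d.
        # Generic textbook relation; the constant and activity are examples only.

        def air_kerma_rate(gamma_const, activity_mbq, distance_m):
            """Air-kerma rate (uGy/h) = Gamma * A / d^2 for a point source."""
            return gamma_const * activity_mbq / distance_m**2

        GAMMA_CS137 = 0.0927  # uGy*m^2/(h*MBq), approximate tabulated value

        for d in (0.5, 1.0, 2.0):
            k = air_kerma_rate(GAMMA_CS137, activity_mbq=4.0e5, distance_m=d)
            print(f"d = {d} m: {k:,.0f} uGy/h")  # 400 GBq example source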

  17. Configuration and Management of a Cluster Computing Facility in Undergraduate Student Computer Laboratories

    ERIC Educational Resources Information Center

    Cornforth, David; Atkinson, John; Spennemann, Dirk H. R.

    2006-01-01

    Purpose: Many researchers require access to computer facilities beyond those offered by desktop workstations. Traditionally, these are offered either through partnerships, to share the cost of supercomputing facilities, or through purpose-built cluster facilities. However, funds are not always available to satisfy either of these options, and…

  18. Virtual worlds to support patient group communication? A questionnaire study investigating potential for virtual world focus group use by respiratory patients.

    PubMed

    Taylor, Michael J; Taylor, Dave; Vlaev, Ivo; Elkin, Sarah

    2017-01-01

    Recent advances in communication technologies enable potential provision of remote education for patients using computer-generated environments known as virtual worlds. Previous research has revealed highly variable levels of patient receptiveness to using information technologies for healthcare-related purposes. This preliminary study implemented a questionnaire investigating respiratory outpatients' attitudes toward, and access to, computer technologies in order to assess the potential for using virtual worlds to facilitate health-related education for this sample. Ninety-four patients with a chronic respiratory condition completed surveys, which were distributed at a chest clinic. In accordance with our prediction, younger participants were more likely to be able to use, and have access to, a computer, and some patients were keen to explore the use of virtual worlds for healthcare-related purposes: of those with access to computer facilities, 14.50% expressed a willingness to attend a virtual world focus group. The results indicate that future virtual world health education facilities should be designed to cater for younger patients, because this group is most likely to accept and use such facilities. Within the study sample, this is likely to comprise people diagnosed with asthma. Future work could investigate the potential of creating a virtual world asthma education facility.

  19. Atmospheric concentrations of polybrominated diphenyl ethers at near-source sites.

    PubMed

    Cahill, Thomas M; Groskova, Danka; Charles, M Judith; Sanborn, James R; Denison, Michael S; Baker, Lynton

    2007-09-15

    Concentrations of polybrominated diphenyl ethers (PBDEs) were determined in air samples from near suspected sources, namely an indoor computer laboratory, indoors and outdoors at an electronics recycling facility, and outdoors at an automotive shredding and metal recycling facility. The results showed that (1) PBDE concentrations in the computer laboratory were higher with the computers on than with the computers off, (2) indoor concentrations at an electronics recycling facility were as high as 650,000 pg/m3 for decabromodiphenyl ether (PBDE 209), and (3) PBDE 209 concentrations were up to 1900 pg/m3 at the downwind fenceline of an automotive shredding/metal recycling facility. The inhalation exposure estimates for all the sites were typically below 110 pg/kg/day, with the exception of the indoor air samples adjacent to the electronics shredding equipment, which gave exposure estimates upward of 40,000 pg/kg/day. Although there were elevated inhalation exposures at the three source sites, the exposure was not expected to cause adverse health effects based on the lowest reference dose (RfD) currently in the Integrated Risk Information System (IRIS), although these RfD values are currently being re-evaluated by the U.S. Environmental Protection Agency. More research is needed on the potential health effects of PBDEs.

  20. Virtual worlds to support patient group communication? A questionnaire study investigating potential for virtual world focus group use by respiratory patients

    PubMed Central

    Taylor, Michael J.; Taylor, Dave; Vlaev, Ivo; Elkin, Sarah

    2015-01-01

    Recent advances in communication technologies enable potential provision of remote education for patients using computer-generated environments known as virtual worlds. Previous research has revealed highly variable levels of patient receptiveness to using information technologies for healthcare-related purposes. This preliminary study implemented a questionnaire investigating respiratory outpatients' attitudes toward, and access to, computer technologies in order to assess the potential for using virtual worlds to facilitate health-related education for this sample. Ninety-four patients with a chronic respiratory condition completed surveys, which were distributed at a chest clinic. In accordance with our prediction, younger participants were more likely to be able to use, and have access to, a computer, and some patients were keen to explore the use of virtual worlds for healthcare-related purposes: of those with access to computer facilities, 14.50% expressed a willingness to attend a virtual world focus group. The results indicate that future virtual world health education facilities should be designed to cater for younger patients, because this group is most likely to accept and use such facilities. Within the study sample, this is likely to comprise people diagnosed with asthma. Future work could investigate the potential of creating a virtual world asthma education facility. PMID:28239187

  1. Lifelong Learning for the 21st Century.

    ERIC Educational Resources Information Center

    Goodnight, Ron

    The Lifelong Learning Center for the 21st Century was proposed to provide personal renewal and technical training for employees at a major United States automotive manufacturing company when it implemented a new, computer-based Computer Numerical Controlled (CNC) machining, robotics, and high technology facility. The employees needed training for…

  2. Extended MHD modeling of nonlinear instabilities in fusion and space plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Germaschewski, Kai

    A number of different sub-projects were pursued within this DOE early career project. The primary focus was on using fully nonlinear, curvilinear, extended MHD simulations of instabilities with applications to fusion and space plasmas. In particular, we performed comprehensive studies of the dynamics of the double tearing mode in different regimes and configurations, using Cartesian and cylindrical geometry and investigating both linear and nonlinear dynamics. In addition to traditional extended MHD involving the Hall term and electron pressure gradient, we also employed a new multi-fluid moment model, which shows great promise for incorporating kinetic effects, in particular off-diagonal elements of the pressure tensor, in a fluid model, which is naturally much cheaper computationally than fully kinetic particle or Vlasov simulations. We used our Vlasov code for detailed studies of how weak collisions affect plasma echoes. In addition, we played an important supporting role working with the PPPL theory group around Will Fox and Amitava Bhattacharjee, providing simulation support for HED plasma experiments performed at high-powered laser facilities like OMEGA-EP in Rochester, NY. This project supported a great number of computational advances in our fluid and kinetic plasma models, and was crucial to winning multiple INCITE computer time awards that supported our computational modeling.

  3. Evaluation of lens distortion errors using an underwater camera system for video-based motion analysis

    NASA Technical Reports Server (NTRS)

    Poliner, Jeffrey; Fletcher, Lauren; Klute, Glenn K.

    1994-01-01

    Video-based motion analysis systems are widely employed to study human movement, using computers to capture, store, process, and analyze video data. These data can be collected in any environment where cameras can be located. One of the NASA facilities where human performance research is conducted is the Weightless Environment Training Facility (WETF), a pool of water which simulates zero gravity with neutral buoyancy. Underwater video collection in the WETF poses some unique problems. This project evaluates the error caused by the lens distortion of the WETF cameras. A grid of points of known dimensions was constructed and videotaped using a video vault underwater system. Recorded images were played back on a VCR, and a personal computer grabbed and stored the images on disk. These images were then digitized to give calculated coordinates for the grid points. Errors were calculated as the distance from the known coordinates of the points to the calculated coordinates. It was demonstrated that errors from lens distortion could be as high as 8 percent. By avoiding the outermost regions of a wide-angle lens, the error can be kept smaller.
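
    The error metric described, the distance from each known grid coordinate to its digitized estimate, is straightforward to compute. Below is a minimal sketch of that calculation with made-up coordinates; the sample numbers are illustrative, not WETF data.

        # Distance error between known grid points and their digitized estimates,
        # reported per point and as an RMS figure. Coordinates are invented.
        import math

        known     = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]   # cm
        digitized = [(0.1, -0.2), (10.4, 0.1), (-0.3, 9.8), (10.2, 10.5)]  # cm

        errors = [math.dist(k, d) for k, d in zip(known, digitized)]
        rms = math.sqrt(sum(e**2 for e in errors) / len(errors))
        print([f"{e:.2f}" for e in errors], f"RMS = {rms:.2f} cm")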

  4. Development of X-ray computed tomography inspection facility for the H-II solid rocket boosters

    NASA Astrophysics Data System (ADS)

    Sasaki, M.; Fujita, T.; Fukushima, Y.; Shimizu, M.; Itoh, S.; Satoh, A.; Miyamoto, H.

    The National Space Development Agency of Japan (NASDA) initiated the development of X-ray computed tomography (CT) equipment for the H-II solid rocket boosters (SRBs) in 1987 for the purpose of minimizing inspection time and achieving high cost-effectiveness. The CT facility was completed in January 1991 at the Tanegashima Space Center for the inspection of the SRBs transported from the manufacturer's factory to the launch site. It was first applied to the qualification-model SRB from February to April 1991. Through the CT inspection of the SRB, it was confirmed that inspection time decreased significantly compared with the X-ray radiography method and that even an unskilled inspector could find various defects. As a result, the establishment of a new, reliable inspection method for the SRB has been verified. In this paper, the following are discussed: (1) the defect detectability of the CT equipment using a dummy SRB with various artificial defects, (2) the performance comparison between the CT method and the X-ray radiography method, (3) the reliability of the CT equipment, and (4) the radiation shield design of the nondestructive test building.

  5. Performance Measurement, Visualization and Modeling of Parallel and Distributed Programs

    NASA Technical Reports Server (NTRS)

    Yan, Jerry C.; Sarukkai, Sekhar R.; Mehra, Pankaj; Lum, Henry, Jr. (Technical Monitor)

    1994-01-01

    This paper presents a methodology for debugging the performance of message-passing programs on both tightly coupled and loosely coupled distributed-memory machines. The AIMS (Automated Instrumentation and Monitoring System) toolkit, a suite of software tools for measurement and analysis of performance, is introduced, and its application is illustrated using several benchmark programs drawn from the field of computational fluid dynamics. AIMS includes (i) Xinstrument, a powerful source-code instrumentor, which supports both Fortran77 and C as well as a number of different message-passing libraries, including Intel's NX, Thinking Machines' CMMD, and PVM; (ii) Monitor, a library of timestamping and trace-collection routines that run on supercomputers (such as Intel's iPSC/860, Delta, and Paragon and Thinking Machines' CM5) as well as on networks of workstations (including Convex Cluster and SparcStations connected by a LAN); (iii) Visualization Kernel, a trace-animation facility that supports source-code clickback, simultaneous visualization of computation and communication patterns, as well as analysis of data movements; (iv) Statistics Kernel, an advanced profiling facility that associates a variety of performance data with various syntactic components of a parallel program; (v) Index Kernel, a diagnostic tool that helps pinpoint performance bottlenecks through the use of abstract indices; (vi) Modeling Kernel, a facility for automated modeling of message-passing programs that supports both simulation-based and analytical approaches to performance prediction and scalability analysis; (vii) Intrusion Compensator, a utility for recovering true performance from observed performance by removing the overheads of monitoring and their effects on the communication pattern of the program; and (viii) Compatibility Tools, which convert AIMS-generated traces into formats used by other performance-visualization tools, such as ParaGraph, Pablo, and certain AVS/Explorer modules.
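
    The kind of source-level instrumentation that Xinstrument and Monitor provide can be pictured as a thin tracing shim: timestamped entry and exit events appended to a trace buffer. The decorator below is a simplified, hypothetical analogue in Python, not AIMS code.

        # Simplified analogue of source instrumentation: wrap a function so each
        # call appends timestamped entry/exit events to a trace. Not AIMS code.
        import time
        from functools import wraps

        trace = []  # (event, function name, timestamp) records

        def instrument(fn):
            @wraps(fn)
            def wrapper(*args, **kwargs):
                trace.append(("enter", fn.__name__, time.perf_counter()))
                try:
                    return fn(*args, **kwargs)
                finally:
                    trace.append(("exit", fn.__name__, time.perf_counter()))
            return wrapper

        @instrument
        def solve_step():
            time.sleep(0.01)  # stand-in for a compute phase

        solve_step()
        for event, name, t in trace:
            print(f"{t:.6f} {event:5s} {name}")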

  6. Development and applications of nondestructive evaluation at Marshall Space Flight Center

    NASA Technical Reports Server (NTRS)

    Whitaker, Ann F.

    1990-01-01

    Brief descriptions of facility design and equipment, facility usage, and typical investigations are presented for the following: Surface Inspection Facility; Advanced Computer Tomography Inspection Station (ACTIS); NDE Data Evaluation Facility; Thermographic Test Development Facility; Radiographic Test Facility; Realtime Radiographic Test Facility; Eddy Current Research Facility; Acoustic Emission Monitoring System; Advanced Ultrasonic Test Station (AUTS); Ultrasonic Test Facility; and Computer Controlled Scanning (CONSCAN) System.

  7. Software Manages Documentation in a Large Test Facility

    NASA Technical Reports Server (NTRS)

    Gurneck, Joseph M.

    2001-01-01

    The 3MCS computer program assists an instrumentation engineer in performing the three essential functions of design, documentation, and configuration management of measurement and control systems in a large test facility. Services provided by 3MCS are acceptance of input from multiple engineers and technicians working at multiple locations; standardization of drawings; automated cross-referencing; identification of errors; listing of components and resources; downloading of test settings; and provision of information to customers.

  8. Laser Assisted CVD Growth of A1N and GaN

    DTIC Science & Technology

    1990-08-31

    additional cost sharing. RESEARCH FACILITIES The work is being performed in the Howard University Laser Laboratory. This is a free-standing building...would be used to optimize computer models of the laser-induced CVD reactor. FACILITIES AND EQUIPMENT - ADDITIONAL COST SHARING This year Howard ... University has provided $45,000 for the purchase of an excimer laser to be shared by Dr. Crye for the diode laser probe experiments and another Assistant

  9. Computational Study of Hypersonic Boundary Layer Stability on Cones

    NASA Astrophysics Data System (ADS)

    Gronvall, Joel Edwin

    Due to the complex nature of boundary layer laminar-turbulent transition in hypersonic flows and the resultant effect on the design of re-entry vehicles, there remains considerable interest in developing a deeper understanding of the underlying physics. To that end, the use of experimental observations and computational analysis in a complementary manner will provide the greatest insights. It is the intent of this work to provide such an analysis for two ongoing experimental investigations. The first focuses on the hypersonic boundary layer transition experiments for a slender cone that are being conducted at JAXA's free-piston shock tunnel HIEST facility. Of particular interest are the measurements of disturbance frequencies associated with transition at high enthalpies. The computational analysis provided for these cases included two-dimensional CFD mean flow solutions for use in boundary layer stability analyses. The disturbances in the boundary layer were calculated using the linear parabolized stability equations. Estimates for transition locations, comparisons of measured disturbance frequencies and computed frequencies, and a determination of the type of disturbances present were made. It was found that, for the cases where the disturbances were measured at locations where the flow was still laminar but nearly transitional, the highly amplified disturbances showed reasonable agreement with the computations. Additionally, an investigation of the effects of finite-rate chemistry and vibrational excitation on flows over cones was conducted for a set of theoretical operational conditions at the HIEST facility. The second study focuses on transition in three-dimensional hypersonic boundary layers, and for this the cone-at-angle-of-attack experiments being conducted at the Boeing/AFOSR Mach-6 quiet tunnel at Purdue University were examined. Specifically, the effect of surface roughness on the development of the stationary crossflow instability is investigated in this work. One standard mean flow solution and two direct numerical simulations of a slender cone at an angle of attack were computed. The direct numerical simulations included a digitally-filtered, randomly distributed surface roughness and were performed using a high-order, low-dissipation numerical scheme on appropriately resolved grids. Comparisons with experimental observations showed excellent qualitative agreement. Comparisons with similar previous computational work were also made and showed agreement in the wavenumber range of the most unstable crossflow modes.
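
    Stability analyses like those described are commonly post-processed with the e^N method: the local spatial amplification rate is integrated downstream to an N-factor, and transition is estimated where N reaches a correlating threshold. The sketch below shows that integration in generic form; the amplification-rate data and the N = 9 threshold are illustrative assumptions, not values from these experiments.

        # Generic e^N post-processing: integrate the local spatial amplification
        # rate -alpha_i(x) to obtain N(x); transition is estimated where N
        # crosses a correlating threshold. Data and threshold are invented.
        import numpy as np

        x = np.linspace(0.1, 0.8, 8)  # streamwise positions (m)
        neg_alpha_i = np.array([0.0, 5, 12, 20, 28, 34, 38, 40])  # 1/m

        # Trapezoidal cumulative integral of -alpha_i over x.
        N = np.concatenate(([0.0], np.cumsum(
            0.5 * (neg_alpha_i[1:] + neg_alpha_i[:-1]) * np.diff(x))))

        N_TR = 9.0  # threshold assumed for a conventional (noisy) tunnel
        idx = int(np.argmax(N >= N_TR))
        print(f"N reaches {N_TR} near x = {x[idx]:.2f} m "
              f"(N = {N[-1]:.1f} at x = {x[-1]:.2f} m)")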

  10. Integrated Test Facility (ITF)

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The NASA-Dryden Integrated Test Facility (ITF), also known as the Walter C. Williams Research Aircraft Integration Facility (RAIF), provides an environment for conducting efficient and thorough testing of advanced, highly integrated research aircraft. Flight test confidence is greatly enhanced by the ability to qualify interactive aircraft systems in a controlled environment. In the ITF, each element of a flight vehicle can be regulated and monitored in real time as it interacts with the rest of the aircraft systems. Testing in the ITF is accomplished through automated techniques in which the research aircraft is interfaced to a high-fidelity real-time simulation. Electric and hydraulic power are also supplied, allowing all systems except the engines to function as if in flight. The testing process is controlled by an engineering workstation that sets up initial conditions for a test, initiates the test run, monitors its progress, and archives the data generated. The workstation is also capable of analyzing results of individual tests, comparing results of multiple tests, and producing reports. The computers used in the automated aircraft testing process are also capable of operating in a stand-alone mode with a simulation cockpit, complete with its own instruments and controls. Control law development and modification, aerodynamic, propulsion, guidance model qualification, and flight planning -- functions traditionally associated with real-time simulation -- can all be performed in this manner. The Remotely Augmented Vehicles (RAV) function, now located in the ITF, is a mainstay in the research techniques employed at Dryden. This function is used for tests that are too dangerous for direct human involvement or for which computational capacity does not exist onboard a research aircraft. RAV provides the researcher with a ground-based computer that is radio linked to the test aircraft during actual flight. The Ground Vibration Testing (GVT) system, formerly housed in the Thermostructural Laboratory, now also resides in the ITF. In preparing a research aircraft for flight testing, it is vital to measure its structural frequencies and mode shapes and compare results to the models used in design analysis. The final function performed in the ITF is routine aircraft maintenance. This includes preflight and post-flight instrumentation checks and the servicing of hydraulics, avionics, and engines necessary on any research aircraft. Aircraft are not merely moved to the ITF for automated testing purposes but are housed there throughout their flight test programs.

  11. Integrated Test Facility (ITF)

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The NASA-Dryden Integrated Test Facility (ITF), also known as the Walter C. Williams Research Aircraft Integration Facility (RAIF), provides an environment for conducting efficient and thorough testing of advanced, highly integrated research aircraft. Flight test confidence is greatly enhanced by the ability to qualify interactive aircraft systems in a controlled environment. In the ITF, each element of a flight vehicle can be regulated and monitored in real time as it interacts with the rest of the aircraft systems. Testing in the ITF is accomplished through automated techniques in which the research aircraft is interfaced to a high-fidelity real-time simulation. Electric and hydraulic power are also supplied, allowing all systems except the engines to function as if in flight. The testing process is controlled by an engineering workstation that sets up initial conditions for a test, initiates the test run, monitors its progress, and archives the data generated. The workstation is also capable of analyzing results of individual tests, comparing results of multiple tests, and producing reports. The computers used in the automated aircraft testing process are also capable of operating in a stand-alone mode with a simulation cockpit, complete with its own instruments and controls. Control law development and modification, aerodynamic, propulsion, guidance model qualification, and flight planning -- functions traditionally associated with real-time simulation -- can all be performed in this manner. The Remotely Augmented Vehicles (RAV) function, now located in the ITF, is a mainstay in the research techniques employed at Dryden. This function is used for tests that are too dangerous for direct human involvement or for which computational capacity does not exist onboard a research aircraft. RAV provides the researcher with a ground-based computer that is radio linked to the test aircraft during actual flight. The Ground Vibration Testing (GVT) system, formerly housed in the Thermostructural Laboratory, now also resides in the ITF. In preparing a research aircraft for flight testing, it is vital to measure its structural frequencies and mode shapes and compare results to the models used in design analysis. The final function performed in the ITF is routine aircraft maintenance. This includes preflight and post-flight instrumentation checks and the servicing of hydraulics, avionics, and engines necessary on any research aircraft. Aircraft are not merely moved to the ITF for automated testing purposes but are housed there throughout their flight test programs.

  12. Walter C. Williams Research Aircraft Integration Facility (RAIF)

    NASA Technical Reports Server (NTRS)

    1996-01-01

    The NASA-Dryden Integrated Test Facility (ITF), also known as the Walter C. Williams Research Aircraft Integration Facility (RAIF), provides an environment for conducting efficient and thorough testing of advanced, highly integrated research aircraft. Flight test confidence is greatly enhanced by the ability to qualify interactive aircraft systems in a controlled environment. In the ITF, each element of a flight vehicle can be regulated and monitored in real time as it interacts with the rest of the aircraft systems. Testing in the ITF is accomplished through automated techniques in which the research aircraft is interfaced to a high-fidelity real-time simulation. Electric and hydraulic power are also supplied, allowing all systems except the engines to function as if in flight. The testing process is controlled by an engineering workstation that sets up initial conditions for a test, initiates the test run, monitors its progress, and archives the data generated. The workstation is also capable of analyzing results of individual tests, comparing results of multiple tests, and producing reports. The computers used in the automated aircraft testing process are also capable of operating in a stand-alone mode with a simulation cockpit, complete with its own instruments and controls. Control law development and modification, aerodynamic, propulsion, guidance model qualification, and flight planning -- functions traditionally associated with real-time simulation -- can all be performed in this manner. The Remotely Augmented Vehicles (RAV) function, now located in the ITF, is a mainstay in the research techniques employed at Dryden. This function is used for tests that are too dangerous for direct human involvement or for which computational capacity does not exist onboard a research aircraft. RAV provides the researcher with a ground-based computer that is radio linked to the test aircraft during actual flight. The Ground Vibration Testing (GVT) system, formerly housed in the Thermostructural Laboratory, now also resides in the ITF. In preparing a research aircraft for flight testing, it is vital to measure its structural frequencies and mode shapes and compare results to the models used in design analysis. The final function performed in the ITF is routine aircraft maintenance. This includes preflight and post-flight instrumentation checks and the servicing of hydraulics, avionics, and engines necessary on any research aircraft. Aircraft are not merely moved to the ITF for automated testing purposes but are housed there throughout their flight test programs.

  13. Structured analysis and modeling of complex systems

    NASA Technical Reports Server (NTRS)

    Strome, David R.; Dalrymple, Mathieu A.

    1992-01-01

    The Aircrew Evaluation Sustained Operations Performance (AESOP) facility at Brooks AFB, Texas, combines the realism of an operational environment with the control of a research laboratory. In recent studies we collected extensive data from the Airborne Warning and Control Systems (AWACS) Weapons Directors subjected to high and low workload Defensive Counter Air Scenarios. A critical and complex task in this environment involves committing a friendly fighter against a hostile fighter. Structured Analysis and Design techniques and computer modeling systems were applied to this task as tools for analyzing subject performance and workload. This technology is being transferred to the Man-Systems Division of NASA Johnson Space Center for application to complex mission related tasks, such as manipulating the Shuttle grappler arm.

  14. Research on anisotropy of fusion-produced protons and neutrons emission from high-current plasma-focus discharges.

    PubMed

    Malinowski, K; Skladnik-Sadowska, E; Sadowski, M J; Szydlowski, A; Czaus, K; Kwiatkowski, R; Zaloga, D; Paduch, M; Zielinska, E

    2015-01-01

    The paper concerns fast protons and neutrons from D-D fusion reactions in a Plasma-Focus-1000U facility. Measurements were performed with nuclear-track detectors arranged in "sandwiches" of an Al-foil and two PM-355 detectors separated by a polyethylene-plate. The Al-foil eliminated all primary deuterons, but was penetrable for fast fusion protons. The foil and first PM-355 detector were penetrable for fast neutrons, which were converted into recoil-protons in the polyethylene and recorded in the second PM-355 detector. The "sandwiches" were irradiated by discharges of comparable neutron-yields. Analyses of etched tracks and computer simulations of the fusion-products behavior in the detectors were performed.

  15. Research on anisotropy of fusion-produced protons and neutrons emission from high-current plasma-focus discharges

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malinowski, K., E-mail: karol.malinowski@ncbj.gov.pl; Sadowski, M. J.; Szydlowski, A.

    2015-01-15

    The paper concerns fast protons and neutrons from D-D fusion reactions in a Plasma-Focus-1000U facility. Measurements were performed with nuclear-track detectors arranged in “sandwiches” of an Al-foil and two PM-355 detectors separated by a polyethylene-plate. The Al-foil eliminated all primary deuterons, but was penetrable for fast fusion protons. The foil and first PM-355 detector were penetrable for fast neutrons, which were converted into recoil-protons in the polyethylene and recorded in the second PM-355 detector. The “sandwiches” were irradiated by discharges of comparable neutron-yields. Analyses of etched tracks and computer simulations of the fusion-products behavior in the detectors were performed.

  16. Corona performance of a compact 230-kV line

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chartier, V.L.; Blair, D.E.; Easley, M.D.

    Permitting requirements and the acquisition of new rights-of-way for transmission facilities have in recent years become increasingly difficult for most utilities, including Puget Sound Power and Light Company. In order to maintain a high degree of reliability of service while being responsive to public concerns regarding the siting of high voltage (HV) transmission facilities, Puget Power has found it necessary to rely more heavily upon the use of compact lines in franchise corridors. Compaction does, however, precipitate increased levels of audible noise (AN) and radio and TV interference (RI and TVI) due to corona on the conductors and insulator assemblies. Puget Power relies upon the Bonneville Power Administration (BPA) Corona and Field Effects computer program to calculate AN and RI for new lines. Since there was some question of the program's ability to accurately represent quiet 230-kV compact designs, a joint project was undertaken with BPA to verify the program's algorithms. Long-term measurements made on an operating Puget Power 230-kV compact line confirmed the accuracy of BPA's AN model; however, the RI measurements were much lower than predicted by the BPA and other programs. This paper also describes how the BPA computer program can be used to calculate the voltage needed to expose insulator assemblies to the correct electric field in single test setups in HV laboratories.

  17. Integration of an open interface PC scene generator using COTS DVI converter hardware

    NASA Astrophysics Data System (ADS)

    Nordland, Todd; Lyles, Patrick; Schultz, Bret

    2006-05-01

    Commercial-Off-The-Shelf (COTS) personal computer (PC) hardware is increasingly capable of computing high dynamic range (HDR) scenes for military sensor testing at high frame rates. New electro-optical and infrared (EO/IR) scene projectors feature electrical interfaces that can accept the DVI output of these PC systems. However, military Hardware-in-the-loop (HWIL) facilities such as those at the US Army Aviation and Missile Research Development and Engineering Center (AMRDEC) utilize a sizeable inventory of existing projection systems that were designed to use the Silicon Graphics Incorporated (SGI) digital video port (DVP, also known as DVP2 or DD02) interface. To mate the new DVI-based scene generation systems to these legacy projection systems, CG2 Inc., a Quantum3D Company (CG2), has developed a DVI-to-DVP converter called Delta DVP. This device takes progressive scan DVI input, converts it to digital parallel data, and combines and routes color components to derive a 16-bit wide luminance channel replicated on a DVP output interface. The HWIL Functional Area of AMRDEC has developed a suite of modular software to perform deterministic real-time, wave band-specific rendering of sensor scenes, leveraging the features of commodity graphics hardware and open source software. Together, these technologies enable sensor simulation and test facilities to integrate scene generation and projection components with diverse pedigrees.
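
    The color-component routing described, deriving a wide luminance word from narrower DVI color channels, is essentially bit packing. The sketch below shows one plausible way two 8-bit components could be combined into a 16-bit luminance value; the channel assignment is an assumption for illustration, not the documented Delta DVP mapping.

        # Illustrative bit packing: combine two 8-bit color components into one
        # 16-bit luminance word. The high/low channel assignment is assumed,
        # not the documented Delta DVP routing.

        def pack_luminance(hi8, lo8):
            """Concatenate two 8-bit components into a 16-bit value."""
            assert 0 <= hi8 <= 0xFF and 0 <= lo8 <= 0xFF
            return (hi8 << 8) | lo8

        def unpack_luminance(l16):
            return (l16 >> 8) & 0xFF, l16 & 0xFF

        lum = pack_luminance(0xAB, 0xCD)
        print(hex(lum), unpack_luminance(lum))  # 0xabcd (171, 205)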

  18. PDS: A Performance Database Server

    DOE PAGES

    Berry, Michael W.; Dongarra, Jack J.; Larose, Brian H.; ...

    1994-01-01

    The process of gathering, archiving, and distributing computer benchmark data is a cumbersome task usually performed by computer users and vendors with little coordination. Most important, there is no publicly available central depository of performance data for all ranges of machines from personal computers to supercomputers. We present an Internet-accessible performance database server (PDS) that can be used to extract current benchmark data and literature. As an extension to the X-Windows-based user interface (Xnetlib) to the Netlib archival system, PDS provides an on-line catalog of public domain computer benchmarks such as the LINPACK benchmark, Perfect benchmarks, and the NAS parallel benchmarks. PDS does not reformat or present the benchmark data in any way that conflicts with the original methodology of any particular benchmark; it is thereby devoid of any subjective interpretations of machine performance. We believe that all branches (research laboratories, academia, and industry) of the general computing community can use this facility to archive performance metrics and make them readily available to the public. PDS can provide a more manageable approach to the development and support of a large dynamic database of published performance metrics.
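
    At its core, a performance database of this kind is a queryable table of (machine, benchmark, metric) records. The sketch below illustrates the idea with Python's built-in sqlite3 module; the schema and the sample figures are invented for illustration and are not PDS's actual storage format.

        # Toy performance database: store benchmark results and query them.
        # Schema and figures are invented; PDS itself was built on Xnetlib.
        import sqlite3

        db = sqlite3.connect(":memory:")
        db.execute("""CREATE TABLE results
                      (machine TEXT, benchmark TEXT, metric TEXT, value REAL)""")
        db.executemany("INSERT INTO results VALUES (?, ?, ?, ?)", [
            ("Machine A", "LINPACK 100", "Mflop/s",  25.0),
            ("Machine B", "LINPACK 100", "Mflop/s",  41.0),
            ("Machine B", "NAS CG",      "Mop/s",   120.0),
        ])

        rows = db.execute("""SELECT machine, value FROM results
                             WHERE benchmark = 'LINPACK 100'
                             ORDER BY value DESC""").fetchall()
        for machine, value in rows:
            print(f"{machine}: {value} Mflop/s")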

  19. Aerodynamic design guidelines and computer program for estimation of subsonic wind tunnel performance

    NASA Technical Reports Server (NTRS)

    Eckert, W. T.; Mort, K. W.; Jope, J.

    1976-01-01

    General guidelines are given for the design of diffusers, contractions, corners, and the inlets and exits of non-return tunnels. A system of equations, reflecting the current technology, has been compiled and assembled into a computer program (a user's manual for this program is included) for determining the total pressure losses. The formulation presented is applicable to compressible flow through most closed- or open-throat, single-, double-, or non-return wind tunnels. A comparison of estimated performance with that actually achieved by several existing facilities produced generally good agreement.
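
    Loss estimates of this kind are typically built up section by section: each component contributes a loss coefficient referenced to its local dynamic pressure, and the sum sets the fan power requirement. The sketch below shows that bookkeeping with invented coefficients; it is a generic illustration, not the program documented in the report.

        # Generic wind-tunnel circuit loss bookkeeping: each section contributes
        # dP = K * q_local; the sum is the circuit total. Numbers are invented.

        sections = [
            # (name, loss coefficient K, local dynamic pressure q in Pa)
            ("test section", 0.02, 3000.0),
            ("diffuser",     0.06, 1200.0),
            ("corner 1",     0.15,  800.0),
            ("screens",      0.30,  150.0),
            ("contraction",  0.01, 3000.0),
        ]

        total = sum(K * q for _, K, q in sections)
        for name, K, q in sections:
            print(f"{name:12s} dP = {K * q:7.1f} Pa")
        print(f"{'total':12s} dP = {total:7.1f} Pa")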

  20. Design and Testing for a New Thermosyphon Irradiation Vehicle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Felde, David K.; Carbajo, Juan J.; McDuffee, Joel Lee

    The High Flux Isotope Reactor (HFIR) at the Oak Ridge National Laboratory (ORNL) requires most materials and all fuel experiments to be placed in a pressure containment vessel to ensure that internal contaminants such as fission products cannot be released into the primary coolant. It also requires that all experiments be capable of withstanding various accident conditions (e.g., loss of coolant) without generating vapor bubbles on the surface of the experiment in the primary coolant. These requirements are intended to artificially increase experiment temperatures by introducing a barrier between the experimental materials and the HFIR coolant, and by reducing heat loads to the HFIR primary coolant, thus ensuring that no boiling can occur. A proposed design for materials irradiation would remove these limitations by providing the required primary containment with an internal cooling flow. This would allow for experiments to be irradiated without concern for coolant contamination (e.g., from cladding failure of advanced fuel pins) or for specimen heat load. This report describes a new materials irradiation experiment design that uses a thermosyphon cooling system to allow experimental materials direct access to a liquid coolant. The new design also increases the range of conditions that can be tested in HFIR. This design will provide a unique capability to validate the performance of current and advanced fuels and materials. Because of limited supporting data for this kind of irradiation vehicle, a test program was initiated to obtain operating data that can be used to (1) qualify the vehicle for operation in HFIR and (2) validate computer models used to perform design- and safety-basis calculations. This report also describes the test facility and experimental data, and it provides a comparison of the experimental data to computer simulations. A total of 51 tests have been completed: four tests with pure steam, 12 tests with argon, and 35 tests with helium. A total of 10 tests were performed at subatmospheric pressure, and four of these were performed with pure steam. One test was conducted at a high power of 92.7 kW, six tests were HFIR startups, and two tests were HFIR loss of offsite power (LOOP). Pressures up to 10 MPa, vapor temperatures up to 583 K (310°C), and heater temperatures above 600 K (327°C) have been reached in these tests. Two computer programs, RELAP5-3D and TRACE, have been used to simulate the tests. The TRACE code has shown good agreement with the test data and has been used to model a variety of tests. This experimental facility has been very useful in demonstrating the viability of this new type of irradiation facility.

  1. High Performance Computer Cluster for Theoretical Studies of Roaming in Chemical Reactions

    DTIC Science & Technology

    2016-08-30

    High-performance Computer Cluster for Theoretical Studies of Roaming in Chemical Reactions (final report; sponsor: U.S. Army Research Office, P.O. Box 12211, Research Triangle Park, NC 27709-2211). A dedicated high-performance computer cluster was...

  2. EBR-II high-ramp transients under computer control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forrester, R.J.; Larson, H.A.; Christensen, L.J.

    1983-01-01

    During reactor run 122, EBR-II was subjected to 13 computer-controlled overpower transients at ramps of 4 MWt/s to qualify the facility and fuel for transient testing of LMFBR oxide fuels as part of the EBR-II operational-reliability-testing (ORT) program. A computer-controlled automatic control-rod drive system (ACRDS), designed by EBR-II personnel, permitted automatic control on demand power during the transients.

  3. Cloud@Home: A New Enhanced Computing Paradigm

    NASA Astrophysics Data System (ADS)

    Distefano, Salvatore; Cunsolo, Vincenzo D.; Puliafito, Antonio; Scarpa, Marco

    Cloud computing is a distributed computing paradigm that mixes aspects of Grid computing ("… hardware and software infrastructure that provides dependable, consistent, pervasive, and inexpensive access to high-end computational capabilities" (Foster, 2002)), Internet computing ("… a computing platform geographically distributed across the Internet" (Milenkovic et al., 2003)), Utility computing ("a collection of technologies and business practices that enables computing to be delivered seamlessly and reliably across multiple computers, ... available as needed and billed according to usage, much like water and electricity are today" (Ross & Westerman, 2004)), Autonomic computing ("computing systems that can manage themselves given high-level objectives from administrators" (Kephart & Chess, 2003)), Edge computing ("… provides a generic template facility for any type of application to spread its execution across a dedicated grid, balancing the load …" (Davis, Parikh, & Weihl, 2004)), and Green computing (a new frontier of ethical computing starting from the assumption that in the near future energy costs will be related to environmental pollution).

  4. Advancing EDL Technologies for Future Space Missions: From Ground Testing Facilities to Ablative Heatshields

    NASA Astrophysics Data System (ADS)

    Rabinovitch, Jason

    Motivated by recent MSL results where the ablation rate of the PICA heatshield was over-predicted, and staying true to the objectives outlined in the NASA Space Technology Roadmaps and Priorities report, this work focuses on advancing EDL technologies for future space missions. Due to the difficulties in performing flight tests in the hypervelocity regime, a new ground testing facility called the vertical expansion tunnel (VET) is proposed. The adverse effects from secondary diaphragm rupture in an expansion tunnel may be reduced or eliminated by orienting the tunnel vertically, matching the test gas pressure and the accelerator gas pressure, and initially separating the test gas from the accelerator gas by density stratification. If some sacrifice of the reservoir conditions can be made, the VET can be utilized in hypervelocity ground testing without the problems associated with secondary diaphragm rupture. The performance of different constraints for the Rate-Controlled Constrained-Equilibrium (RCCE) method is investigated in the context of modeling reacting flows characteristic of ground testing facilities and re-entry conditions. The effectiveness of different constraints is isolated, and new constraints previously unmentioned in the literature are introduced. Three main benefits from the RCCE method were determined: 1) the reduction in the number of equations that need to be solved to model a reacting flow; 2) the reduction in stiffness of the system of equations to be solved; and 3) the ability to tabulate chemical properties as a function of a constraint once, prior to running a simulation, along with the ability to use the same table for multiple simulations. Finally, published physical properties of PICA are compiled, and the composition of the pyrolysis gases that form at high temperatures internal to a heatshield is investigated. A necessary link between the composition of the solid resin and the composition of the pyrolysis gases created is provided. This link, combined with a detailed investigation into a reacting pyrolysis gas mixture, allows a much-needed, consistent, and thorough description of many of the physical phenomena occurring in a PICA heatshield, and their implications, to be presented. Through the use of computational fluid mechanics and computational chemistry methods, significant contributions have been made to advancing ground testing facilities, computational methods for reacting flows, and ablation modeling.

  5. Hypersonic Combustor Model Inlet CFD Simulations and Experimental Comparisons

    NASA Technical Reports Server (NTRS)

    Venkatapathy, E.; TokarcikPolsky, S.; Deiwert, G. S.; Edwards, Thomas A. (Technical Monitor)

    1995-01-01

    Numerous two- and three-dimensional computational simulations were performed for the inlet associated with the combustor model for the hypersonic propulsion experiment in the NASA Ames 16-Inch Shock Tunnel. The inlet was designed to produce a combustor-inlet flow that is nearly two-dimensional and of sufficient mass flow rate for large-scale combustor testing. The three-dimensional simulations demonstrated that the inlet design met all the design objectives and that the inlet produced a very nearly two-dimensional combustor inflow profile. Numerous two-dimensional simulations were performed with various levels of approximation, such as in the choice of chemical and physical models, as well as numerical approximations. Parametric studies were conducted to better understand and to characterize the inlet flow. Results from the two- and three-dimensional simulations were used to predict the mass flux entering the combustor, and a mass flux correlation as a function of facility stagnation pressure was developed. Surface heat flux and pressure measurements were compared with the computed results and good agreement was found. The computational simulations helped determine the inlet flow characteristics in the high enthalpy environment, the important parameters that affect the combustor-inlet flow, and the sensitivity of the inlet flow to various modeling assumptions.
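
    Developing a correlation such as the mass-flux-versus-stagnation-pressure relation mentioned above is a curve-fitting exercise once the simulations are run. The sketch below fits a power law of the form m_dot = a * p0^b to invented data points; both the data and the functional form are assumptions for illustration.

        # Fit a power-law correlation m_dot = a * p0**b to (invented) simulation
        # results by linear least squares in log space.
        import numpy as np

        p0   = np.array([5.0, 10.0, 20.0, 40.0])   # stagnation pressure, MPa
        mdot = np.array([0.52, 1.01, 2.05, 4.10])  # combustor mass flux, kg/s

        b, log_a = np.polyfit(np.log(p0), np.log(mdot), 1)
        a = np.exp(log_a)
        print(f"m_dot ~= {a:.3f} * p0^{b:.3f}")
        print("check at p0 = 15 MPa:", a * 15.0**b, "kg/s")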

  6. Drive development for an 10 Mbar Rayleigh-Taylor strength experiment on the National Ignition Facility

    NASA Astrophysics Data System (ADS)

    Prisbrey, Shon; Park, Hye-Sook; Huntington, Channing; McNaney, James; Smith, Raym; Wehrenberg, Christopher; Swift, Damian; Panas, Cynthia; Lord, Dawn; Arsenlis, Athanasios

    2017-10-01

    Strength can be inferred by the amount a Rayleigh-Taylor surface deviates from classical growth when subjected to acceleration. If the acceleration is great enough, even materials highly resistant to deformation will flow. We use the National Ignition Facility (NIF) to create an acceleration profile that will cause sample metals, such as Mo or Cu, to reach peak pressures of 10 Mbar without inducing shock melt. To create such a profile we shock release a stepped density reservoir across a large gap with the stagnation of the reservoir on the far side of the gap resulting in the desired pressure drive history. Low density steps (foams) are a necessary part of this design and have been studied in the last several years on the Omega and NIF facilities. We will present computational and experimental progress that has been made on the 10 Mbar drive designs - including recent drive shots carried out at the NIF. This work was performed under the auspices of the Lawrence Livermore National Security, LLC, (LLNS) under Contract No. DE-AC52-07NA27344. LLNL-ABS-734781.

  7. Energy consumption and load profiling at major airports. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kennedy, J.

    1998-12-01

    This report describes the results of energy audits at three major US airports. These studies developed load profiles and quantified energy usage at these airports while identifying procedures and electrotechnologies that could reduce their power consumption. The major power consumers at the airports studied included central plants, runway and taxiway lighting, fuel farms, terminals, people mover systems, and hangar facilities. Several major findings emerged during the study. The amount of energy efficient equipment installed at an airport is directly related to the age of the facility. Newer facilities had more energy efficient equipment while older facilities had much of the original electric and natural gas equipment still in operation. As redesign, remodeling, and/or replacement projects proceed, responsible design engineers are selecting more energy efficient equipment to replace original devices. The use of computer-controlled energy management systems varies. At airports, the primary purpose of these systems is to monitor and control the lighting and environmental air conditioning and heating of the facility. Of the facilities studied, one used computer management extensively, one used it only marginally, and one had no computer controlled management devices. At all of the facilities studied, natural gas is used to provide heat and hot water. Natural gas consumption is at its highest in the months of November, December, January, and February. The Central Plant contains most of the inductive load at an airport and is also a major contributor to power consumption inefficiency. Power factor correction equipment was used at one facility but was not installed at the other two facilities due to high power factor and/or lack of need.
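
    Load profiling of the kind performed in these audits amounts to aggregating interval meter data into usage profiles. The sketch below builds a simple average-day demand profile from hourly readings; the data are synthetic and the method generic, not the report's actual procedure.

        # Build an average-day load profile (kW by hour) from hourly meter data.
        # Synthetic data: a flat base load plus a daytime terminal/HVAC peak.

        HOURS, DAYS = 24, 7
        readings = []  # one week of hourly demand, kW
        for day in range(DAYS):
            for h in range(HOURS):
                readings.append(800.0 + (600.0 if 8 <= h < 20 else 0.0))

        profile = [0.0] * HOURS
        for i, kw in enumerate(readings):
            profile[i % HOURS] += kw / DAYS

        for h, kw in enumerate(profile):
            print(f"{h:02d}:00  {kw:6.0f} kW")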

  8. High-Performance Rh 2 P Electrocatalyst for Efficient Water Splitting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duan, Haohong; Li, Dongguo; Tang, Yan

    2017-04-05

    The search for active, stable, and cost-efficient electrocatalysts for hydrogen production via water splitting could make a substantial impact on energy technologies that do not rely on fossil fuels. Here we report the synthesis of rhodium phosphide electrocatalyst with low metal loading in the form of nanocubes (NCs) dispersed in high-surface-area carbon (Rh2P/C) by a facile solvo-thermal approach. The Rh2P/C NCs exhibit remarkable performance for hydrogen evolution reaction and oxygen evolution reaction compared to Rh/C and Pt/C catalysts. The atomic structure of the Rh2P NCs was directly observed by annular dark-field scanning transmission electron microscopy, which revealed a phosphorus-rich outermost atomic layer. Combined experimental and computational studies suggest that surface phosphorus plays a crucial role in determining the robust catalyst properties.

  9. High-Performance Rh2P Electrocatalyst for Efficient Water Splitting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duan, Haohong; Li, Dongguo; Tang, Yan

    2017-04-05

    The search for active, stable, and cost-efficient electrocatalysts for hydrogen production via water splitting could make a substantial impact on energy technologies that do not rely on fossil fuels. Here we report the synthesis of a rhodium phosphide electrocatalyst with low metal loading in the form of nanocubes (NCs) dispersed in high-surface-area carbon (Rh2P/C) by a facile solvo-thermal approach. The Rh2P/C NCs exhibit remarkable performance for the hydrogen evolution reaction (HER) and oxygen evolution reaction (OER) compared to Rh/C and Pt/C catalysts. The atomic structure of the rhodium phosphide nanocubes was directly observed by annular dark-field scanning transmission electron microscopy (ADF-STEM), which revealed a phosphorus-rich outermost atomic layer. Combined experimental and computational studies suggest that surface phosphorus plays a crucial role in determining the robust catalyst properties.

  10. Improvement of the mechanical reliability of monolithic refractory linings for coal gasification process vessels. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Potter, R.A.

    1981-09-01

    Eighteen heat-up tests were run on nine standard and experimental dual-component monolithic refractory concrete linings. These tests were run with a 5-ft-diameter by 14-ft-high Pressure Vessel/Test Furnace designed to accommodate a 12-in.-thick by 5-ft-high refractory lining, heat the hot face to 2000 °F, and expose the lining to air or steam pressures up to 150 psig. Results obtained from standard-type linings in the test facility indicated that lining degradation duplicated that observed in field installations. The lining performance was significantly improved due to information gained from a systematic study of the cracking that occurred in the linings; the analysis of the lining strains, shell stresses, and acoustic emission results; and the stress analyses performed on the standard and experimental lining designs with the finite element analysis computer programs REFSAM and RESGAP.

  11. Crystal MD: The massively parallel molecular dynamics software for metal with BCC structure

    NASA Astrophysics Data System (ADS)

    Hu, Changjun; Bai, He; He, Xinfu; Zhang, Boyao; Nie, Ningming; Wang, Xianmeng; Ren, Yingwen

    2017-02-01

    Material irradiation effects are among the key issues for the use of nuclear power. However, the lack of high-throughput irradiation facilities and of knowledge about the evolution process has limited understanding of these issues. With the help of high-performance computing, a deeper understanding of materials at the micro level can be reached. In this paper, a new data structure is proposed for the massively parallel simulation of the evolution of metal materials in an irradiation environment. Based on the proposed data structure, we developed new molecular dynamics software named Crystal MD. Simulations with Crystal MD achieved over 90% parallel efficiency in test cases, and it takes more than 25% less memory on multi-core clusters than LAMMPS and IMD, two popular molecular dynamics packages. Using Crystal MD, a simulation of two trillion particles has been performed on the Tianhe-2 cluster.
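
    The abstract does not describe the new data structure itself; as one hedged illustration of why a near-perfect BCC crystal invites lattice-indexed storage (an assumed sketch of the general idea, not Crystal MD's actual layout), atoms can be held in a dense array addressed by lattice site, so neighbours are found by index arithmetic rather than by storing neighbour lists.

    ```python
    # Lattice-indexed particle storage for a BCC metal: an illustrative
    # sketch of the general idea, not Crystal MD's actual data structure.
    import numpy as np

    a = 3.15            # BCC lattice constant (roughly Mo), angstroms (assumed)
    nx = ny = nz = 10   # unit cells per edge of a periodic box

    # Two sites per BCC cell (corner and body centre); store each atom's
    # displacement from its ideal lattice site instead of neighbour lists.
    disp = np.zeros((nx, ny, nz, 2, 3))

    def neighbours(i, j, k, s):
        """The 8 nearest neighbours of site (i, j, k, s), by index arithmetic
        with periodic wrap-around (corner sites couple to body centres)."""
        offs = (-1, 0) if s == 0 else (0, 1)
        return [((i + di) % nx, (j + dj) % ny, (k + dk) % nz, 1 - s)
                for di in offs for dj in offs for dk in offs]

    print(len(neighbours(0, 0, 0, 0)), "nearest neighbours per site;",
          disp.nbytes, "bytes for", nx * ny * nz * 2, "atoms")
    ```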

  12. The COSIMA experiments and their verification, a data base for the validation of two phase flow computer codes

    NASA Astrophysics Data System (ADS)

    Class, G.; Meyder, R.; Stratmanns, E.

    1985-12-01

    The large data base for validation and development of computer codes for two-phase flow, generated at the COSIMA facility, is reviewed. The aim of COSIMA is to simulate the hydraulic, thermal, and mechanical conditions in the subchannel and the cladding of fuel rods in pressurized water reactors during the blowout phase of a loss of coolant accident. In terms of fuel rod behavior, it is found that during blowout under realistic conditions only small strains are reached. For cladding rupture extremely high rod internal pressures are necessary. The behavior of fuel rod simulators and the effect of thermocouples attached to the cladding outer surface are clarified. Calculations performed with the codes RELAP and DRUFAN show satisfactory agreement with experiments. This can be improved by updating the phase separation models in the codes.

  13. The Pain in Storage: Work Safety in a High-Density Shelving Facility

    ERIC Educational Resources Information Center

    Atkins, Stephanie A.

    2005-01-01

    An increasing number of academic and research libraries have built high-density shelving facilities to address overcrowding conditions in their regular stacks. However, the work performed in these facilities is physically strenuous and highly repetitive in nature and may require the use of potentially dangerous equipment. This article will examine…

  14. Grand Challenges: High Performance Computing and Communications. The FY 1992 U.S. Research and Development Program.

    ERIC Educational Resources Information Center

    Federal Coordinating Council for Science, Engineering and Technology, Washington, DC.

    This report presents a review of the High Performance Computing and Communications (HPCC) Program, which has as its goal the acceleration of the commercial availability and utilization of the next generation of high performance computers and networks in order to: (1) extend U.S. technological leadership in high performance computing and computer…

  15. Functional requirements document for the Earth Observing System Data and Information System (EOSDIS) Scientific Computing Facilities (SCF) of the NASA/MSFC Earth Science and Applications Division, 1992

    NASA Technical Reports Server (NTRS)

    Botts, Michael E.; Phillips, Ron J.; Parker, John V.; Wright, Patrick D.

    1992-01-01

    Five scientists at MSFC/ESAD have EOS SCF investigator status. Each SCF has unique tasks which require the establishment of a computing facility dedicated to accomplishing those tasks. A SCF Working Group was established at ESAD with the charter of defining the computing requirements of the individual SCFs and recommending options for meeting these requirements. The primary goal of the working group was to determine which computing needs can be satisfied using either shared resources or separate but compatible resources, and which needs require unique individual resources. The requirements investigated included CPU-intensive vector and scalar processing, visualization, data storage, connectivity, and I/O peripherals. A review of computer industry directions and a market survey of computing hardware provided information regarding important industry standards and candidate computing platforms. It was determined that the total SCF computing requirements might be most effectively met using a hierarchy consisting of shared and individual resources. This hierarchy is composed of five major system types: (1) a supercomputer class vector processor; (2) a high-end scalar multiprocessor workstation; (3) a file server; (4) a few medium- to high-end visualization workstations; and (5) several low- to medium-range personal graphics workstations. Specific recommendations for meeting the needs of each of these types are presented.

  16. BigData and computing challenges in high energy and nuclear physics

    NASA Astrophysics Data System (ADS)

    Klimentov, A.; Grigorieva, M.; Kiryanov, A.; Zarochentsev, A.

    2017-06-01

    In this contribution we discuss various aspects of the computing resource needs of experiments in High Energy and Nuclear Physics, in particular at the Large Hadron Collider. These needs will evolve when moving from the LHC to the HL-LHC ten years from now, when the already exascale volumes of data being processed could increase by a further order of magnitude. The distributed computing environment has been a great success, and the inclusion of new supercomputing facilities, cloud computing, and volunteer computing in the future is a big challenge, which is being successfully mastered with considerable contributions from many supercomputing centres around the world and from academic and commercial cloud providers. We also discuss computing R&D projects started recently at the National Research Center ``Kurchatov Institute''.

  17. GeoBrain Computational Cyber-laboratory for Earth Science Studies

    NASA Astrophysics Data System (ADS)

    Deng, M.; di, L.

    2009-12-01

    Computational approaches (e.g., computer-based data visualization, analysis, and modeling) are critical for conducting increasingly data-intensive Earth science (ES) studies to understand functions and changes of the Earth system. However, Earth scientists, educators, and students currently face two major barriers that prevent them from effectively using computational approaches in their learning, research, and application activities: 1) difficulties in finding, obtaining, and using multi-source ES data; and 2) a lack of analytic functions and computing resources (e.g., analysis software, computing models, and high-performance computing systems) to analyze the data. Taking advantage of recent advances in cyberinfrastructure, Web service, and geospatial interoperability technologies, GeoBrain, a project funded by NASA, has developed a prototype computational cyber-laboratory to remove these two barriers. The cyber-laboratory makes ES data and computational resources at large organizations in distributed locations available to, and easily usable by, the Earth science community by 1) enabling seamless discovery, access, and retrieval of distributed data; 2) federating and enhancing data discovery with a catalogue federation service and a semantically augmented catalogue service; 3) customizing data access and retrieval at user request with interoperable, personalized, and on-demand data access and services; 4) automating or semi-automating multi-source geospatial data integration; 5) developing a large number of analytic functions as value-added, interoperable, and dynamically chainable geospatial Web services and deploying them in high-performance computing facilities; 6) enabling online geospatial process modeling and execution; and 7) building a user-friendly, extensible web portal for users to access the cyber-laboratory resources. Users can interactively discover the needed data and perform on-demand data analysis and modeling through the web portal. The GeoBrain cyber-laboratory provides solutions to common needs of ES research and education, such as distributed data access and analysis services, easy access to and use of ES data, and enhanced geoprocessing and geospatial modeling capability. It greatly facilitates ES research, education, and applications. The development of the cyber-laboratory provides insights, lessons learned, and technology readiness to build a more capable computing infrastructure for ES studies, one that can meet the wide-ranging needs of current and future generations of scientists, researchers, educators, and students for their formal or informal educational training, research projects, career development, and lifelong learning.

  18. Issues and approach to develop validated analysis tools for hypersonic flows: One perspective

    NASA Technical Reports Server (NTRS)

    Deiwert, George S.

    1993-01-01

    Critical issues concerning the modeling of low-density hypervelocity flows where thermochemical nonequilibrium effects are pronounced are discussed. Emphasis is on the development of validated analysis tools, and the activity in the NASA Ames Research Center's Aerothermodynamics Branch is described. Inherent in the process is a strong synergism between ground test and real-gas computational fluid dynamics (CFD). Approaches to develop and/or enhance phenomenological models and incorporate them into computational flowfield simulation codes are discussed. These models were partially validated with experimental data for flows where the gas temperature is raised (compressive flows). Expanding flows, where temperatures drop, however, exhibit somewhat different behavior. Experimental data for these expanding flow conditions are sparse, and reliance must be placed on intuition and guidance from computational chemistry to model transport processes under these conditions. Ground-based experimental studies used to provide necessary data for model development and validation are described. Included are the performance characteristics of high-enthalpy flow facilities, such as shock tubes and ballistic ranges.

  19. The Data Acquisition and Control Systems of the Jet Noise Laboratory at the NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    Jansen, B. J., Jr.

    1998-01-01

    The features of the data acquisition and control systems of the NASA Langley Research Center's Jet Noise Laboratory are presented. The Jet Noise Laboratory is a facility that simulates realistic mixed flow turbofan jet engine nozzle exhaust systems in simulated flight. The system is capable of acquiring data for a complete take-off assessment of noise and nozzle performance. This paper describes the development of an integrated system to control and measure the behavior of model jet nozzles featuring dual independent high pressure combusting air streams with wind tunnel flow. The acquisition and control system is capable of simultaneous measurement of forces, moments, static and dynamic model pressures and temperatures, and jet noise. The design concepts for the coordination of the control computers and multiple data acquisition computers and instruments are discussed. The control system design and implementation are explained, describing the features, equipment, and the experiences of using a primarily Personal Computer based system. Areas for future development are examined.

  20. Issues and approach to develop validated analysis tools for hypersonic flows: One perspective

    NASA Technical Reports Server (NTRS)

    Deiwert, George S.

    1992-01-01

    Critical issues concerning the modeling of low-density hypervelocity flows where thermochemical nonequilibrium effects are pronounced are discussed. Emphasis is on the development of validated analysis tools. A description of the activity in the Ames Research Center's Aerothermodynamics Branch is also given. Inherent in the process is a strong synergism between ground test and real-gas computational fluid dynamics (CFD). Approaches to develop and/or enhance phenomenological models and incorporate them into computational flow-field simulation codes are discussed. These models have been partially validated with experimental data for flows where the gas temperature is raised (compressive flows). Expanding flows, where temperatures drop, however, exhibit somewhat different behavior. Experimental data for these expanding flow conditions are sparse; reliance must be made on intuition and guidance from computational chemistry to model transport processes under these conditions. Ground-based experimental studies used to provide necessary data for model development and validation are described. Included are the performance characteristics of high-enthalpy flow facilities, such as shock tubes and ballistic ranges.

  1. A Comparison of Computed and Experimental Flowfields of the RAH-66 Helicopter

    NASA Technical Reports Server (NTRS)

    vanDam, C. P.; Budge, A. M.; Duque, E. P. N.

    1996-01-01

    This paper compares and evaluates numerical and experimental flowfields of the RAH-66 Comanche helicopter. The numerical predictions were obtained by solving the thin-layer Navier-Stokes equations. The computations use actuator disks to investigate the main- and tail-rotor effects upon the fuselage flowfield. The wind tunnel experiment was performed in the 14 x 22 foot facility located at NASA Langley. A suite of flow conditions, rotor thrusts, and fuselage-rotor-tail configurations was tested. In addition, the tunnel model and the computational geometry were based upon the same CAD definition. Computations were performed for an isolated-fuselage configuration and for a rotor-on configuration. Comparisons between the measured and computed surface pressures show areas of correlation and some discrepancies, which are accounted for by local areas of poor computational grid quality and local geometry differences. These calculations demonstrate the application of advanced computational fluid dynamics methodologies to a flight vehicle currently under development and serve as an important verification for future computed results.

  2. Best Practices Manual, 2002 Edition.

    ERIC Educational Resources Information Center

    Collaborative for High Performance Schools, CA.

    The goal of this manual is to create a new generation of high performance school facilities in California. The focus is on public schools and levels K-12, although many of the design principles apply to private schools and higher education facilities as well. High performance schools are healthy, comfortable, energy efficient, resource efficient,…

  3. ELECTRICAL RESISTIVITY TECHNIQUE TO ASSESS THE INTEGRITY OF GEOMEMBRANE LINERS

    EPA Science Inventory

    Two-dimensional electrical modeling of a liner system was performed using computer techniques. The modeling effort examined the voltage distributions in cross sections of lined facilities with different leak locations. Results confirmed that leaks in the liner influenced voltage ...

  4. Investigations for Supersonic Transports at Transonic and Supersonic Conditions

    NASA Technical Reports Server (NTRS)

    Rivers, S. Melissa B.; Owens, Lewis R.; Wahls, Richard A.

    2007-01-01

    Several computational studies were conducted as part of NASA's High-Speed Research program. Results of turbulence model comparisons from two studies on supersonic transport configurations performed during the program are given. The effects of grid topology and the representation of the actual wind tunnel model geometry are also investigated. Results are presented for both transonic conditions at Mach 0.90 and supersonic conditions at Mach 2.48. A feature of these two studies was the availability of higher-Reynolds-number wind tunnel data with which to compare the computational results. The transonic wind tunnel data were obtained in the National Transonic Facility at NASA Langley, and the supersonic data were obtained in the Boeing Polysonic Wind Tunnel. The computational data were acquired using a state-of-the-art Navier-Stokes flow solver with a wide range of turbulence models implemented. The results show that the computed forces compare reasonably well with the experimental data, with the Baldwin-Lomax model with Degani-Schiff modifications and the Baldwin-Barth model showing the best agreement for the transonic conditions, and the Spalart-Allmaras model showing the best agreement for the supersonic conditions. The transonic results were more sensitive to the choice of turbulence model than were the supersonic results.

  5. Reducing cooling energy consumption in data centres and critical facilities

    NASA Astrophysics Data System (ADS)

    Cross, Gareth

    Given the rise of our everyday reliance on computers in all walks of life, from checking train times to paying credit card bills online, the need for computational power is ever increasing. Beyond the ever-increasing performance of home personal computers (PCs), this reliance has given rise to a new phenomenon in the last 10 years: the data centre. Data centres contain vast arrays of IT cabinets loaded with servers that perform millions of computational operations every second. It is these data centres that allow us to continue our reliance on the internet and the PC. As more and more data centres become necessary, owing to the increase in computing processing power required for the everyday activities we all take for granted, the energy consumed by these data centres rises. Not only are more data centres being constructed daily, but operators are also looking at ways to squeeze more processing from their existing data centres. This in turn leads to greater heat outputs and therefore requires more cooling. Cooling data centres requires a sizeable energy input, often many megawatts per data centre site. Given the large amounts of money dependent on the successful operation of data centres, in particular those operated by financial institutions, the onus is predominantly on ensuring the data centres operate with no technical glitches rather than in an energy-conscious fashion. This report aims to investigate ways of reducing energy consumption within data centres without compromising the technology the data centres are designed to house. As well as discussing the individual merits of the technologies and their implementation, technical calculations are undertaken where necessary to determine the level of energy saving, if any, from each proposal. To enable comparison between proposals, any design calculations within this report are undertaken against a notional data facility, nominally considered to require 1000 kW. Refer to Section 2.1, 'Outline of Notional Data Facility for Calculation Purposes', for details of the design conditions and constraints of the energy consumption calculations.
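
    As a flavour of the calculations the report performs against this notional facility, the sketch below estimates annual cooling electricity for a 1000 kW IT load at several assumed chiller coefficients of performance (the COP values are illustrative assumptions, not figures from the report).

    ```python
    # Annual cooling electricity for the notional 1000 kW data facility at
    # assumed chiller COPs; illustrative arithmetic only.
    it_load_kw = 1000.0        # heat rejected by the IT equipment
    hours_per_year = 8760

    for cop in (2.5, 3.5, 5.0):
        cooling_mwh = it_load_kw * hours_per_year / cop / 1000.0
        print(f"COP {cop}: {cooling_mwh:,.0f} MWh/yr of cooling electricity")
    ```

    For these assumptions, raising the chiller COP from 2.5 to 5.0 roughly halves the cooling electricity, from about 3,500 MWh/yr to about 1,750 MWh/yr.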

  6. Development of alternative data analysis techniques for improving the accuracy and specificity of natural resource inventories made with digital remote sensing data

    NASA Technical Reports Server (NTRS)

    Lillesand, T. M.; Meisner, D. E. (Principal Investigator)

    1980-01-01

    An investigation was conducted into ways to improve the involvement of state and local user personnel in the digital image analysis process by isolating those elements of the analysis process which require extensive involvement by field personnel and providing means for performing those activities apart from a computer facility. In this way, the analysis procedure can be converted from a centralized activity focused on a computer facility to a distributed activity in which users can interact with the data at the field office level or in the field itself. General-purpose image processing software was developed on the University of Minnesota computer system (Control Data Cyber models 172 and 74). The use of color hardcopy image data as a primary medium in supervised training procedures was investigated, and digital display equipment and a coordinate digitizer were procured.

  7. Final Report Scalable Analysis Methods and In Situ Infrastructure for Extreme Scale Knowledge Discovery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Leary, Patrick

    The primary challenge motivating this project is the widening gap between the ability to compute information and to store it for subsequent analysis. This gap adversely impacts science code teams, who can perform analysis only on a small fraction of the data they calculate, resulting in a substantial likelihood of lost or missed science when results are computed but not analyzed. Our approach is to perform as much analysis or visualization processing as possible on data while it is still resident in memory, which is known as in situ processing. The idea of in situ processing was not new at the start of this effort in 2014, but efforts in that space were largely ad hoc, and there was no concerted effort within the research community aimed at fostering production-quality software tools suitable for use by Department of Energy (DOE) science projects. Our objective was to produce and enable the use of production-quality in situ methods and infrastructure, at scale, on DOE high-performance computing (HPC) facilities, though we expected to have an impact beyond DOE due to the widespread nature of the challenges, which affect virtually all large-scale computational science efforts. To achieve this objective, we engaged in software technology research and development (R&D), in close partnership with DOE science code teams, to produce software technologies that were shown to run efficiently at scale on DOE HPC platforms.

  8. Emerging CAE technologies and their role in Future Ambient Intelligence Environments

    NASA Astrophysics Data System (ADS)

    Noor, Ahmed K.

    2011-03-01

    Dramatic improvements are on the horizon in Computer Aided Engineering (CAE) and various simulation technologies. The improvements are due, in part, to developments in a number of leading-edge technologies and their synergistic combination and convergence. The technologies include ubiquitous, cloud, and petascale computing; ultra-high-bandwidth networks; pervasive wireless communication; knowledge-based engineering; networked immersive virtual environments and virtual worlds; novel human-computer interfaces; and powerful game engines and facilities. This paper describes frontier and emerging simulation technologies and their role in future virtual product creation and learning/training environments. The environments will be ambient intelligence environments, incorporating a synergistic combination of novel agent-supported visual simulations (with cognitive learning and understanding abilities); immersive 3D virtual world facilities; development chain management systems and facilities (incorporating a synergistic combination of intelligent engineering and management tools); nontraditional methods; intelligent, multimodal, and human-like interfaces; and mobile wireless devices. The virtual product creation environment will significantly enhance productivity and stimulate creativity and innovation in future global virtual collaborative enterprises. The facilities in the learning/training environment will provide timely, engaging, personalized/collaborative, and tailored visual learning.

  9. Proceedings of the nineteenth LAMPF Users Group meeting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bradbury, J.N.

    1986-02-01

    Separate abstracts were prepared for eight invited talks on various aspects of nuclear and particle physics as well as status reports on LAMPF and discussions of upgrade options. Also included in these proceedings are the minutes of the working groups for: energetic pion channel and spectrometer; high resolution spectrometer; high energy pion channel; neutron facilities; low-energy pion work; nucleon physics laboratory; stopped muon physics; solid state physics and material science; nuclear chemistry; and computing facilities. Recent LAMPF proposals are also briefly summarized. (LEW)

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Darrow, Ken; Hedman, Bruce

    Data centers represent a rapidly growing and very energy intensive activity in commercial, educational, and government facilities. In the last five years the growth of this sector was the electric power equivalent to seven new coal-fired power plants. Data centers consume 1.5% of the total power in the U.S. Growth over the next five to ten years is expected to require a similar increase in power generation. This energy consumption is concentrated in buildings that are 10-40 times more energy intensive than a typical office building. The sheer size of the market, the concentrated energy consumption per facility, and the tendency of facilities to cluster in 'high-tech' centers all contribute to a potential power infrastructure crisis for the industry. Meeting the energy needs of data centers is a moving target. Computing power is advancing rapidly, which reduces the energy requirements for data centers. A lot of work is going into improving the computing power of servers and other processing equipment. However, this increase in computing power is increasing the power densities of this equipment. While fewer pieces of equipment may be needed to meet a given data processing load, the energy density of a facility designed to house this higher efficiency equipment will be as high as or higher than it is today. In other words, while the data center of the future may have the IT power of ten data centers of today, it is also going to have higher power requirements and higher power densities. This report analyzes the opportunities for CHP technologies to assist primary power in making the data center more cost-effective and energy efficient. Broader application of CHP will lower the demand for electricity from central stations and reduce the pressure on electric transmission and distribution infrastructure. This report is organized into the following sections: (1) Data Center Market Segmentation--the description of the overall size of the market, the size and types of facilities involved, and the geographic distribution. (2) Data Center Energy Use Trends--a discussion of energy use and expected energy growth and the typical energy consumption and uses in data centers. (3) CHP Applicability--Potential configurations, CHP case studies, applicable equipment, heat recovery opportunities (cooling), cost and performance benchmarks, and power reliability benefits (4) CHP Drivers and Hurdles--evaluation of user benefits, social benefits, market structural issues and attitudes toward CHP, and regulatory hurdles. (5) CHP Paths to Market--Discussion of technical needs, education, strategic partnerships needed to promote CHP in the IT community.

  11. NASA Engine Icing Research Overview: Aeronautics Evaluation and Test Capabilities (AETC) Project

    NASA Technical Reports Server (NTRS)

    Veres, Joseph P.

    2015-01-01

    The occurrence of ice accretion within commercial high-bypass aircraft turbine engines has been reported by airlines under certain atmospheric conditions. Engine anomalies have taken place at high altitudes that have been attributed to ice crystal ingestion by the engine. The ice crystals can result in degraded engine performance, loss of thrust control, compressor surge or stall, and flameout of the combustor. The Aviation Safety Program at NASA has taken on the technical challenge of turbofan engine icing caused by ice crystals, which can exist in high-altitude convective clouds. The NASA engine icing project consists of an integrated approach with four concurrent and ongoing research elements, each of which feeds critical information to the next element. The project objective is to gain understanding of high-altitude ice crystals by developing knowledge bases and test facilities for testing full engines and engine components. The first element is to utilize a highly instrumented aircraft to characterize the high-altitude convective cloud environment. The second element is the enhancement of the Propulsion Systems Laboratory altitude test facility for gas turbine engines to include the addition of an ice crystal cloud. The third element is basic research of the fundamental physics associated with ice crystal accretion. The fourth and final element is the development of computational tools with the goal of simulating the effects of ice crystal ingestion on compressor and gas turbine engine performance. The NASA goal is to provide knowledge to the engine and aircraft manufacturing communities to help mitigate or eliminate turbofan engine interruptions, engine damage, and failures due to ice crystal ingestion.

  12. A Low-Cost and Energy-Efficient Multiprocessor System-on-Chip for UWB MAC Layer

    NASA Astrophysics Data System (ADS)

    Xiao, Hao; Isshiki, Tsuyoshi; Khan, Arif Ullah; Li, Dongju; Kunieda, Hiroaki; Nakase, Yuko; Kimura, Sadahiro

    Ultra-wideband (UWB) technology has attracted much attention recently due to its high data rate and low emission power. Its media access control (MAC) protocol, WiMedia MAC, provides extensive support for high-speed, high-quality wireless communication. These benefits, however, come with a large computational load that challenges traditional uniprocessor-based implementations to deliver the required performance, while constrained cost and power budgets make commercial multiprocessor solutions unrealistic. In this paper, a low-cost and energy-efficient multiprocessor system-on-chip (MPSoC), which tackles at once the aspects of system design, software migration, and hardware architecture, is presented for the implementation of the UWB MAC layer. Experimental results show that the proposed MPSoC, based on four simple RISC processors and a shared-memory infrastructure, achieves up to 45% performance improvement and 65% power saving, yet takes 15% less area than the uniprocessor implementation.

  13. A Testing Platform for Validation of Overhead Conductor Aging Models and Understanding Thermal Limits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Irminger, Philip; Starke, Michael R; Dimitrovski, Aleksandar D

    2014-01-01

    Power system equipment manufacturers and researchers continue to experiment with novel overhead electric conductor designs that support better conductor performance and address congestion issues. To address the technology gap in testing these novel designs, Oak Ridge National Laboratory constructed the Powerline Conductor Accelerated Testing (PCAT) facility to evaluate the performance of novel overhead conductors in an accelerated fashion in a field environment. Additionally, PCAT has the capability to test advanced sensors and measurement methods for assessing overhead conductor performance and condition. Equipped with extensive measurement and monitoring devices, PCAT provides a platform to improve and validate conductor computer models and assess the performance of novel conductors. The PCAT facility and its testing capabilities are described in this paper.

  14. Navier-Stokes Simulation of the Air-Conditioning Facility of a Large Modern Computer Room

    NASA Technical Reports Server (NTRS)

    2005-01-01

    NASA recently assembled one of the world's fastest operational supercomputers to meet the agency's new high performance computing needs. This large-scale system, named Columbia, consists of 20 interconnected SGI Altix 512-processor systems, for a total of 10,240 Intel Itanium-2 processors. High-fidelity CFD simulations were performed for the NASA Advanced Supercomputing (NAS) computer room at Ames Research Center. The purpose of the simulations was to assess the adequacy of the existing air handling and conditioning system and make recommendations for changes in the design of the system if needed. The simulations were performed with NASA's OVERFLOW-2 CFD code, which utilizes overset structured grids. A new set of boundary conditions was developed and added to the flow solver for modeling the room's air-conditioning and proper cooling of the equipment. Boundary condition parameters for the flow solver are based on cooler CFM (flow rate) ratings and some reasonable assumptions of flow and heat transfer data for the floor and central processing units (CPUs). The geometry modeling from blueprints and the grid generation were handled by the NASA Ames software package Chimera Grid Tools (CGT). This geometric model was developed as a CGT-scripted template, which can be easily modified to accommodate any changes in shape and size of the room, locations and dimensions of the CPU racks, disk racks, coolers, power distribution units, and mass-storage system. The compute nodes are grouped in pairs of racks with an aisle in the middle. High-speed connection cables connect the racks with overhead cable trays. The cool air from the cooling units is pumped into the computer room from a sub-floor through perforated floor tiles. The CPU cooling fans draw cool air from the floor tiles, which run along the outside length of each rack, and eject warm air into the center aisle between the racks. This warm air is eventually drawn into the cooling units located near the walls of the room. One major concern is that the hot air ejected into the middle aisle might recirculate back into the cool rack side and cause thermal short-cycling. The simulations analyzed and addressed the following important elements of the computer room: 1) high-temperature build-up in certain regions of the room; 2) areas of low air circulation in the room; 3) potential short-cycling of the computer rack cooling system; 4) effectiveness of the perforated cooling floor tiles; 5) effect of changes in various aspects of the cooling units. Detailed flow visualization is performed to show temperature distribution, air-flow streamlines, and velocities in the computer room.

  15. The NHERI RAPID Facility: Enabling the Next-Generation of Natural Hazards Reconnaissance

    NASA Astrophysics Data System (ADS)

    Wartman, J.; Berman, J.; Olsen, M. J.; Irish, J. L.; Miles, S.; Gurley, K.; Lowes, L.; Bostrom, A.

    2017-12-01

    The NHERI post-disaster, rapid response research (or "RAPID") facility, headquartered at the University of Washington (UW), is a collaboration between UW, Oregon State University, Virginia Tech, and the University of Florida. The RAPID facility will enable natural hazard researchers to conduct next-generation quick response research through reliable acquisition and community sharing of high-quality, post-disaster data sets that will enable characterization of civil infrastructure performance under natural hazard loads, evaluation of the effectiveness of current and previous design methodologies, understanding of socio-economic dynamics, calibration of computational models used to predict civil infrastructure component and system response, and development of solutions for resilient communities. The facility will provide investigators with the hardware, software and support services needed to collect, process and assess perishable interdisciplinary data following extreme natural hazard events. Support to the natural hazards research community will be provided through training and educational activities, field deployment services, and by promoting public engagement with science and engineering. Specifically, the RAPID facility is undertaking the following strategic activities: (1) acquiring, maintaining, and operating state-of-the-art data collection equipment; (2) developing and supporting mobile applications to support interdisciplinary field reconnaissance; (3) providing advisory services and basic logistics support for research missions; (4) facilitating the systematic archiving, processing and visualization of acquired data in DesignSafe-CI; (5) training a broad user base through workshops and other activities; and (6) engaging the public through citizen science, as well as through community outreach and education. The facility commenced operations in September 2016 and will begin field deployments beginning in September 2018. This poster will provide an overview of the vision for the RAPID facility, the equipment that will be available for use, the facility's operations, and opportunities for user training and facility use.

  16. Evaluation of initial collector field performance at the Langley Solar Building Test Facility

    NASA Technical Reports Server (NTRS)

    Boyle, R. J.; Knoll, R. H.; Jensen, R. N.

    1977-01-01

    The thermal performance of the solar collector field for the NASA Langley Solar Building Test Facility is given for October 1976 through January 1977. An 1180 square meter solar collector field with seven collector designs helped to provide hot water for the building heating system and absorption air conditioner. The collectors were arranged in 12 rows with nominally 51 collectors per row. Heat transfer rates for each row are calculated and recorded along with sensor, insolation, and weather data every 5 minutes using a mini-computer. The agreement between the experimental and predicted collector efficiencies was generally within five percentage points.

  17. Parameter meta-optimization of metaheuristics of solving specific NP-hard facility location problem

    NASA Astrophysics Data System (ADS)

    Skakov, E. S.; Malysh, V. N.

    2018-03-01

    The aim of this work is to create an evolutionary method for optimizing the values of the control parameters of metaheuristics for solving the NP-hard facility location problem. A systems analysis of the process of tuning optimization algorithm parameters is carried out. The problem of finding the parameters of a metaheuristic algorithm is formulated as a meta-optimization problem, and an evolutionary metaheuristic is chosen to perform the meta-optimization; the approach proposed in this work can thus be called “meta-metaheuristic”. A computational experiment demonstrating the effectiveness of the procedure for tuning the control parameters of metaheuristics has been performed.
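
    A minimal sketch of the meta-optimization idea follows; the inner simulated annealing, the toy facility-location instance, and the outer (1+1) evolution strategy are all stand-ins chosen for illustration, since the abstract does not give the paper's actual algorithms.

    ```python
    import math
    import random

    # Toy uncapacitated facility location instance (hypothetical data).
    OPEN_COST = [4.0, 3.0, 5.0, 4.5]                     # cost to open each site
    ASSIGN = [[2, 9, 7, 8], [8, 3, 6, 9], [7, 8, 2, 5],  # ASSIGN[c][s]: cost of
              [9, 6, 8, 2], [3, 7, 6, 9]]                # serving customer c from s

    def cost(open_sites):
        """Total cost of a site-opening decision (infinite if nothing is open)."""
        if not any(open_sites):
            return float("inf")
        total = sum(oc for oc, o in zip(OPEN_COST, open_sites) if o)
        return total + sum(min(r for r, o in zip(row, open_sites) if o)
                           for row in ASSIGN)

    def anneal(t0, cooling, steps=300, rng=random):
        """Inner metaheuristic: simulated annealing over site-opening bitstrings."""
        x = [rng.random() < 0.5 for _ in OPEN_COST]
        fx = best = cost(x)
        t = t0
        for _ in range(steps):
            y = x[:]
            y[rng.randrange(len(y))] ^= True             # flip one site open/closed
            fy = cost(y)
            if fy < fx or rng.random() < math.exp((fx - fy) / t):
                x, fx = y, fy
                best = min(best, fx)
            t = max(t * cooling, 1e-12)                  # keep temperature positive
        return best

    def meta_fitness(params, trials=5):
        """Meta-objective: average best cost found by the tuned inner algorithm."""
        t0, cooling = params
        return sum(anneal(t0, cooling) for _ in range(trials)) / trials

    # Outer evolutionary loop: a (1+1) evolution strategy over (t0, cooling).
    best_p = [10.0, 0.95]
    best_f = meta_fitness(best_p)
    for _ in range(40):
        cand = [max(1e-3, best_p[0] + random.gauss(0.0, 2.0)),
                min(0.999, max(0.5, best_p[1] + random.gauss(0.0, 0.02)))]
        f = meta_fitness(cand)
        if f <= best_f:
            best_p, best_f = cand, f
    print("tuned (t0, cooling):", best_p, "-> average best cost:", best_f)
    ```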

  18. Hydrologic evaluation methodology for estimating water movement through the unsaturated zone at commercial low-level radioactive waste disposal site

    USGS Publications Warehouse

    Meyer, P.D.; Rockhold, M.L.; Nichols, W.E.; Gee, G.W.

    1996-01-01

    This report identifies key technical issues related to hydrologic assessment of water flow in the unsaturated zone at low-level radioactive waste (LLW) disposal facilities. In addition, a methodology for incorporating these issues in the performance assessment of proposed LLW disposal facilities is identified and evaluated. The issues discussed fall into four areas: (1) estimating the water balance at a site (i.e., infiltration, runoff, water storage, evapotranspiration, and recharge); (2) analyzing the hydrologic performance of engineered components of a facility; (3) evaluating the application of models to the prediction of facility performance; and (4) estimating the uncertainty in predicted facility performance. An estimate of recharge at an LLW site is important since recharge is a principal factor in controlling the release of contaminants via the groundwater pathway. The most common methods for estimating recharge are discussed in Chapter 2. Many factors affect recharge; the natural recharge at an undisturbed site is not necessarily representative either of the recharge that will occur after the site has been disturbed or of the flow of water into a disposal facility at the site. Factors affecting recharge are also discussed in Chapter 2. At many sites, engineered components are required for an LLW facility to meet performance requirements. Chapter 3 discusses the use of engineered barriers to control the flow of water in an LLW facility, with a particular emphasis on cover systems; design options and the potential performance and degradation mechanisms of engineered components are also discussed. Water flow in an LLW disposal facility must be evaluated before construction of the facility, and hydrologic performance must be predicted over a very long time frame; for these reasons, the hydrologic evaluation relies on predictive modeling. In Chapter 4, the evaluation of unsaturated water flow modeling is discussed, and a checklist of items is presented to guide the evaluation. Several computer simulation codes used in the examples (Chapter 6) are discussed with respect to this checklist; the codes include HELP, UNSAT-H, and VAM3DCG. To provide a defensible estimate of water flow in an LLW disposal facility, the uncertainty associated with model predictions must be considered. Uncertainty arises because of the highly heterogeneous nature of most subsurface environments and the long time frame required in the analysis. Sources of uncertainty in hydrologic evaluation of the unsaturated zone and several approaches for analysis are discussed in Chapter 5; the methods discussed include a bounding approach, sensitivity analysis, and Monte Carlo simulation. To illustrate the application of the discussion in Chapters 2 through 5, two examples are presented in Chapter 6: a below-ground vault located in a humid environment and a shallow land burial facility located in an arid environment. The examples utilize actual site-specific data and realistic facility designs, illustrating the issues unique to humid and arid sites as well as the issues common to all LLW sites. Strategies for addressing the analytical difficulties arising in any complex hydrologic evaluation of the unsaturated zone are demonstrated. The report concludes with some final observations and recommendations.
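
    As a minimal sketch of the Monte Carlo approach to uncertainty discussed in Chapter 5, the toy annual water balance below propagates assumed parameter distributions to a recharge estimate; all distributions are illustrative, not site data.

    ```python
    # Monte Carlo propagation of water-balance uncertainty to recharge
    # (toy model; every distribution here is an assumption for illustration).
    import random

    def recharge_mm(rng):
        precip = rng.gauss(900.0, 150.0)                     # precipitation, mm/yr
        runoff_frac = min(max(rng.gauss(0.15, 0.05), 0.0), 0.5)
        et = rng.gauss(600.0, 100.0)                         # evapotranspiration, mm/yr
        return max(precip * (1.0 - runoff_frac) - et, 0.0)   # recharge cannot go negative

    rng = random.Random(42)
    samples = sorted(recharge_mm(rng) for _ in range(10000))
    mean = sum(samples) / len(samples)
    print(f"mean recharge ~ {mean:.0f} mm/yr; "
          f"5th-95th percentile: {samples[500]:.0f}-{samples[9500]:.0f} mm/yr")
    ```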

  19. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    NASA Astrophysics Data System (ADS)

    Klimentov, A.; De, K.; Jha, S.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Wells, J.; Wenaus, T.

    2016-10-01

    The LHC, operating at CERN, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment relies on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing across more than 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than the grid can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We describe a project aimed at integrating the PanDA WMS with supercomputers in the United States, in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility. The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and for local data management, with lightweight MPI wrappers to run single-threaded workloads in parallel on the LCFs' multi-core worker nodes. This implementation was tested with a variety of Monte Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments and has been in full production for ATLAS since September 2015. We present our current accomplishments with running PanDA at supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facilities infrastructure for High Energy and Nuclear Physics as well as other data-intensive science applications, such as bioinformatics and astroparticle physics.
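
    A minimal sketch of the kind of lightweight MPI wrapper described above, written with mpi4py; the payload script name is hypothetical, and this is not the actual PanDA pilot code.

    ```python
    # Rank-parallel launch of single-threaded payloads on a multi-core node:
    # MPI is used only to fan out serial jobs and gather their exit codes.
    from mpi4py import MPI
    import subprocess

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # Hypothetical payloads; in production these would come from the PanDA queue.
    jobs = [f"./run_event_batch.sh {i}" for i in range(size)]

    result = subprocess.run(jobs[rank], shell=True).returncode
    codes = comm.gather(result, root=0)
    if rank == 0:
        print("payload exit codes:", codes)
    ```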

  20. Open release of the DCA++ project

    NASA Astrophysics Data System (ADS)

    Haehner, Urs; Solca, Raffaele; Staar, Peter; Alvarez, Gonzalo; Maier, Thomas; Summers, Michael; Schulthess, Thomas

    We present the first open release of the DCA++ project, a highly scalable and efficient research code to solve quantum many-body problems with cutting edge quantum cluster algorithms. The implemented dynamical cluster approximation (DCA) and its DCA+ extension with a continuous self-energy capture nonlocal correlations in strongly correlated electron systems thereby allowing insight into high-Tc superconductivity. With the increasing heterogeneity of modern machines, DCA++ provides portable performance on conventional and emerging new architectures, such as hybrid CPU-GPU and Xeon Phi, sustaining multiple petaflops on ORNL's Titan and CSCS' Piz Daint. Moreover, we will describe how best practices in software engineering can be applied to make software development sustainable and scalable in a research group. Software testing and documentation not only prevent productivity collapse, but more importantly, they are necessary for correctness, credibility and reproducibility of scientific results. This research used resources of the Oak Ridge Leadership Computing Facility (OLCF) awarded by the INCITE program, and of the Swiss National Supercomputing Center. OLCF is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725.

  1. Upgrades of DARWIN, a dose and spectrum monitoring system applicable to various types of radiation over wide energy ranges

    NASA Astrophysics Data System (ADS)

    Sato, Tatsuhiko; Satoh, Daiki; Endo, Akira; Shigyo, Nobuhiro; Watanabe, Fusao; Sakurai, Hiroki; Arai, Yoichi

    2011-05-01

    A dose and spectrum monitoring system applicable to neutrons, photons, and muons over wide ranges of energy, designated DARWIN, has been developed for radiological protection in high-energy accelerator facilities. DARWIN consists of a phoswich-type scintillation detector, a data-acquisition (DAQ) module for digital waveform analysis, and a personal computer equipped with a graphical-user-interface (GUI) program for controlling the system. The system was recently upgraded by introducing an original DAQ module based on a field-programmable gate array (FPGA), and by adding a function for estimating neutron and photon spectra based on an unfolding technique, without requiring any specific scientific background of the user. The performance of the upgraded DARWIN was examined in various radiation fields, including an operational field at J-PARC. The experiments revealed that the dose rates and spectra measured by the upgraded DARWIN are quite reasonable, even in radiation fields with peak structures in terms of both spectrum and time variation. These results clearly demonstrate the usefulness of DARWIN for improving radiation safety in high-energy accelerator facilities.

  2. Comparison of Computational and Experimental Results for a Transonic Variable-Speed Power-Turbine Blade Operating with Low Inlet Turbulence Levels

    NASA Technical Reports Server (NTRS)

    Booth, David; Flegel, Ashlie

    2015-01-01

    A computational assessment of the aerodynamic performance of the midspan section of a variable-speed power-turbine blade is described. The computation comprises a periodic single blade that represents the 2-D midspan section of the VSPT blade that was tested in the NASA Glenn Research Center Transonic Turbine Blade Cascade Facility. Commercial, off-the-shelf (COTS) software packages, Pointwise and CFD++, were used for the grid generation and for the RANS and URANS computations. The CFD code, which offers flexibility in terms of turbulence and transition modeling options, was assessed in terms of blade loading, loss, and turning against test data from the transonic tunnel. Simulations were assessed at positive and negative incidence angles that represent the turbine cruise and take-off design conditions. The results indicate that the secondary flow induced at the positive-incidence cruise condition results in a highly loaded case, and transitional flow on the blade is observed. The negative-incidence take-off condition is unloaded and the flow is very two-dimensional. The computational results demonstrate the predictive capability of the gridding technique and COTS software for a linear transonic turbine blade cascade with large incidence angle variation.

  3. Comparison of Computational and Experimental Results for a Transonic Variable-speed Power-Turbine Blade Operating with Low Inlet Turbulence Levels

    NASA Technical Reports Server (NTRS)

    Booth, David T.; Flegel, Ashlie B.

    2015-01-01

    A computational assessment of the aerodynamic performance of the midspan section of a variable-speed power-turbine blade is described. The computation comprises a periodic single blade that represents the 2-D midspan section of the VSPT blade that was tested in the NASA Glenn Research Center Transonic Turbine Blade Cascade Facility. Commercial, off-the-shelf (COTS) software packages, Pointwise and CFD++, were used for the grid generation and for the RANS and URANS computations. The CFD code, which offers flexibility in terms of turbulence and transition modeling options, was assessed in terms of blade loading, loss, and turning against test data from the transonic tunnel. Simulations were assessed at positive and negative incidence angles that represent the turbine cruise and take-off design conditions. The results indicate that the secondary flow induced at the positive-incidence cruise condition results in a highly loaded case, and transitional flow on the blade is observed. The negative-incidence take-off condition is unloaded and the flow is very two-dimensional. The computational results demonstrate the predictive capability of the gridding technique and COTS software for a linear transonic turbine blade cascade with large incidence angle variation.

  4. Pressure profiles of the BRing based on the simulation used in the CSRm

    NASA Astrophysics Data System (ADS)

    Wang, J. C.; Li, P.; Yang, J. C.; Yuan, Y. J.; Wu, B.; Chai, Z.; Luo, C.; Dong, Z. Q.; Zheng, W. H.; Zhao, H.; Ruan, S.; Wang, G.; Liu, J.; Chen, X.; Wang, K. D.; Qin, Z. M.; Yin, B.

    2017-07-01

    HIAF-BRing, a new multipurpose accelerator of the High Intensity heavy-ion Accelerator Facility project, requires an extremely high vacuum, lower than 10^-11 mbar, to fulfill the requirements of radioactive beam physics and high-energy-density physics. To achieve the required pressure, the benchmarked codes VAKTRAK and Molflow+ are used to simulate the pressure profiles of the BRing system. To ensure the accuracy of the VAKTRAK implementation, the computational results are verified against measured pressure data and compared with a new simulation code, BOLIDE, on the current synchrotron CSRm. With VAKTRAK verified, the pressure profiles of the BRing are calculated for different parameters such as conductance, outgassing rates, and pumping speeds. Based on the computational results, the optimal parameters are selected to achieve the required pressure for the BRing.
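
    As a rough illustration of the kind of model that codes such as VAKTRAK solve, the sketch below computes a 1-D molecular-flow pressure profile for a pipe with distributed outgassing and point pumps; all parameter values are assumptions for illustration, not BRing data.

    ```python
    # Steady-state pressure profile of an outgassing pipe with end pumps,
    # from a node-by-node gas balance (finite-volume form of w*p'' = -q).
    import numpy as np

    L, n = 20.0, 201          # pipe length (m) and grid points
    dx = L / (n - 1)
    w = 50.0                  # specific conductance, l*m/s         (assumed)
    q = 1e-9                  # specific outgassing, mbar*l/(s*m)   (assumed)
    pumps = {0: 100.0, n - 1: 100.0}   # node -> pump speed, l/s    (assumed)

    A = np.zeros((n, n))
    b = np.zeros(n)
    c = w / dx                # conductance of one grid segment, l/s
    for i in range(n):
        cell = dx if 0 < i < n - 1 else dx / 2    # half cells at the ends
        b[i] = -q * cell                          # outgassing source term
        for j in (i - 1, i + 1):                  # molecular flow to neighbours
            if 0 <= j < n:
                A[i, i] -= c
                A[i, j] += c
        A[i, i] -= pumps.get(i, 0.0)              # local pump acts as a sink

    p = np.linalg.solve(A, b)                     # pressure profile, mbar
    print(f"max pressure {p.max():.2e} mbar at x = {p.argmax() * dx:.1f} m")
    ```

    With pumps only at the two ends, the profile is parabolic and peaks mid-pipe, which is why pump spacing dominates the achievable average pressure.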

  5. Balancing Particle and Mesh Computation in a Particle-In-Cell Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Worley, Patrick H; D'Azevedo, Eduardo; Hager, Robert

    2016-01-01

    The XGC1 plasma microturbulence particle-in-cell simulation code has both particle-based and mesh-based computational kernels that dominate performance. Both are subject to load imbalances that can degrade performance and that evolve during a simulation. Each can be addressed adequately on its own, but optimizing for just one can introduce significant load imbalances in the other, degrading overall performance. A technique has been developed, based on Golden Section Search, that minimizes wallclock time given prior information on wallclock time and on the current particle distribution and mesh cost per cell, and that also adapts to the evolution of load imbalance in both particle and mesh work. In problems of interest, this doubled the performance of full-system runs on the XK7 at the Oak Ridge Leadership Computing Facility compared to load balancing only one of the kernels.
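
    A minimal golden-section search sketch follows, minimizing an assumed toy wallclock model max(particle/x, mesh/(1-x)) over the fraction x of resources devoted to particle work; the cost model and numbers are illustrative, not XGC1's actual cost function.

    ```python
    # Golden-section search for the resource split that minimizes wallclock
    # time under a toy two-kernel cost model (assumed, for illustration).
    import math

    INV_PHI = (math.sqrt(5.0) - 1.0) / 2.0        # 1/golden ratio ~ 0.618

    def golden_section_min(f, a, b, tol=1e-4):
        """Minimize a unimodal f on [a, b], reusing one evaluation per step."""
        c, d = b - INV_PHI * (b - a), a + INV_PHI * (b - a)
        fc, fd = f(c), f(d)
        while (b - a) > tol:
            if fc < fd:                           # minimum lies in [a, d]
                b, d, fd = d, c, fc
                c = b - INV_PHI * (b - a)
                fc = f(c)
            else:                                 # minimum lies in [c, b]
                a, c, fc = c, d, fd
                d = a + INV_PHI * (b - a)
                fd = f(d)
        return 0.5 * (a + b)

    particle_cost, mesh_cost = 7.0, 3.0           # assumed per-step work units
    wallclock = lambda x: max(particle_cost / x, mesh_cost / (1.0 - x))

    x_opt = golden_section_min(wallclock, 0.01, 0.99)
    print(f"optimal particle share ~ {x_opt:.3f}")
    ```

    For these assumed costs, the search converges to x of about 0.7, where the particle and mesh kernels take equal time.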

  6. A feasibility study of a hypersonic real-gas facility

    NASA Technical Reports Server (NTRS)

    Gully, J. H.; Driga, M. D.; Weldon, W. F.

    1987-01-01

    A four-month feasibility study of a hypersonic real-gas free-flight test facility for NASA Langley Research Center (LARC) was performed. The feasibility of using a high-energy electromagnetic launcher (EML) to accelerate complex models (lifting and nonlifting) in the hypersonic real-gas facility was examined. Issues addressed include: design and performance of the accelerator; design and performance of the power supply; design and operation of the sabot and payload during acceleration and separation; effects of high current, magnetic fields, temperature, and stress on the sabot and payload; and survivability of payload instrumentation during acceleration, flight, and soft catch.

  7. Radiography Capabilities for Matter-Radiation Interactions in Extremes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walstrom, Peter Lowell; Garnett, Robert William; Chapman, Catherine A. B

    The Matter-Radiation Interactions in Extremes (MaRIE) experimental facility will be used to discover and design the advanced materials needed to meet 21st century national security and energy security challenges. This new facility will provide the new tools scientists need to develop next-generation materials that will perform predictably and on-demand for currently unattainable lifetimes in extreme environments. The MaRIE facility is based on upgrades to the existing LANSCE 800-MeV proton linac and a new 12-GeV electron linac and associated X-ray FEL to provide simultaneous multiple probe beams, and new experimental areas. In addition to the high-energy photon probe beam, both electron and proton radiography capabilities will be available at the MaRIE facility. Recently, detailed radiography system studies have been performed to develop conceptual layouts of high-magnification electron and proton radiography systems that can meet the experimental requirements for the expected first experiments to be performed at the facility. A description of the radiography systems, their performance requirements, and a proposed facility layout are presented.

  8. Navier-Stokes Analysis of a High Wing Transport High-Lift Configuration with Externally Blown Flaps

    NASA Technical Reports Server (NTRS)

    Slotnick, Jeffrey P.; An, Michael Y.; Mysko, Stephen J.; Yeh, David T.; Rogers, Stuart E.; Roth, Karlin; Baker, M. David; Nash, S.

    2000-01-01

    Insights and lessons learned from the aerodynamic analysis of the High Wing Transport (HWT) high-lift configuration are presented. Three-dimensional Navier-Stokes CFD simulations using the OVERFLOW flow solver are compared with high Reynolds number test data obtained in the NASA Ames 12-Foot Pressure Wind Tunnel (PWT) facility. Computational analysis of the baseline HWT high-lift configuration with and without Externally Blown Flap (EBF) jet effects is highlighted. Several additional aerodynamic investigations, such as nacelle strake effectiveness and wake vortex studies, are presented. Technical capabilities and shortcomings of the computational method are discussed and summarized.

  9. Central Computational Facility CCF communications subsystem options

    NASA Technical Reports Server (NTRS)

    Hennigan, K. B.

    1979-01-01

    A MITRE study which investigated the communication options available to support both the remaining Central Computational Facility (CCF) computer systems and the proposed U1108 replacements is presented. The facilities utilized to link the remote user terminals with the CCF were analyzed and guidelines to provide more efficient communications were established.

  10. Academic Computing Facilities and Services in Higher Education--A Survey.

    ERIC Educational Resources Information Center

    Warlick, Charles H.

    1986-01-01

    Presents statistics about academic computing facilities based on data collected over the past six years from 1,753 institutions in the United States, Canada, Mexico, and Puerto Rico for the "Directory of Computing Facilities in Higher Education." Organizational, functional, and financial characteristics are examined as well as types of…

  11. A Simple and Resource-efficient Setup for the Computer-aided Drug Design Laboratory.

    PubMed

    Moretti, Loris; Sartori, Luca

    2016-10-01

    Undertaking modelling investigations for Computer-Aided Drug Design (CADD) requires a proper environment. In principle, this could be done on a single computer, but the reality of a drug discovery program requires robustness and high-throughput computing (HTC) to efficiently support the research. Therefore, a more capable alternative is needed, but its implementation has no widespread solution. Here, the realization of such a computing facility is discussed; all aspects are covered, from general layout to technical details. © 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. On-Line Mathematics Assessment: The Impact of Mode on Performance and Question Answering Strategies

    ERIC Educational Resources Information Center

    Johnson, Martin; Green, Sylvia

    2006-01-01

    The transition from paper-based to computer-based assessment raises a number of important issues about how mode might affect children's performance and question answering strategies. In this project 104 eleven-year-olds were given two sets of matched mathematics questions, one set on-line and the other on paper. Facility values were analyzed to…

  13. Interactive Software For Astrodynamical Calculations

    NASA Technical Reports Server (NTRS)

    Schlaifer, Ronald S.; Skinner, David L.; Roberts, Phillip H.

    1995-01-01

    QUICK is a computer program that provides the user with the facilities of a sophisticated desk calculator: it performs scalar, vector, and matrix arithmetic; propagates conic-section orbits; determines planetary and satellite coordinates; and performs other related astrodynamic calculations within a FORTRAN-like software environment. QUICK is an interpreter, so no compiler or linker is needed to run QUICK code. Outputs can be plotted in a variety of formats on a variety of terminals. Written in RATFOR.
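
    The core of conic-section propagation is solving Kepler's equation for the eccentric anomaly; a generic sketch (not QUICK's own routines) for the elliptic case:

        import math

        def kepler_E(M, e, tol=1e-12, max_iter=50):
            """Solve M = E - e*sin(E) for E by Newton's method (0 <= e < 1)."""
            E = M if e < 0.8 else math.pi   # standard starting guess
            for _ in range(max_iter):
                dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
                E -= dE
                if abs(dE) < tol:
                    break
            return E

        def conic_position(a, e, M):
            """In-plane (perifocal) position for semi-major axis a at mean anomaly M."""
            E = kepler_E(M, e)
            return a * (math.cos(E) - e), a * math.sqrt(1.0 - e * e) * math.sin(E)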

  14. Evaluating the Performance of the NASA LaRC CMF Motion Base Safety Devices

    NASA Technical Reports Server (NTRS)

    Gupton, Lawrence E.; Bryant, Richard B., Jr.; Carrelli, David J.

    2006-01-01

    This paper describes the initial measured performance results of the previously documented NASA Langley Research Center (LaRC) Cockpit Motion Facility (CMF) motion base hardware safety devices. These safety systems are required to prevent excessive accelerations that could injure personnel and damage simulator cockpits or the motion base structure. Excessive accelerations may be caused by erroneous commands or hardware failures driving an actuator to the end of its travel at high velocity, stepping a servo valve, or instantly reversing servo direction. Such commands may result from single order failures of electrical or hydraulic components within the control system itself, or from aggressive or improper cueing commands from the host simulation computer. The safety systems must mitigate these high acceleration events while minimizing the negative performance impacts. The system accomplishes this by controlling the rate of change of valve signals to limit excessive commanded accelerations. It also aids hydraulic cushion performance by limiting valve command authority as the actuator approaches its end of travel. The design takes advantage of inherent motion base hydraulic characteristics to implement all safety features using hardware only solutions.
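
    The two mechanisms described, rate-limiting the valve signal and reducing command authority near end of travel, can be sketched in sampled-data form; parameter names and values here are illustrative, not the CMF design:

        def rate_limit(cmd, prev, max_step):
            """Clamp the per-sample change of a valve command to +/- max_step."""
            step = max(-max_step, min(max_step, cmd - prev))
            return prev + step

        def authority_limit(cmd, position, travel, margin=0.1):
            """Fade the command out as the actuator nears either end stop."""
            dist = min(position, travel - position) / travel   # 0 at a stop
            return cmd * min(1.0, dist / margin)

    The hardware implementation acts directly on valve signals; the sketch shows only the shape of the limiting logic.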

  15. High-Performance Computing Data Center Warm-Water Liquid Cooling |

    Science.gov Websites

    Computational Science | NREL. NREL's High-Performance Computing Data Center (HPC Data Center) is liquid cooled with warm water. Liquid cooling technologies offer a more energy-efficient solution that also allows for effective

  16. Status report of the end-to-end ASKAP software system: towards early science operations

    NASA Astrophysics Data System (ADS)

    Guzman, Juan Carlos; Chapman, Jessica; Marquarding, Malte; Whiting, Matthew

    2016-08-01

    The Australian SKA Pathfinder (ASKAP) is a novel centimetre radio synthesis telescope currently in the commissioning phase, located in the midwest region of Western Australia. It comprises 36 x 12 m diameter reflector antennas, each equipped with state-of-the-art, award-winning Phased Array Feed (PAF) technology. The PAFs provide a wide, 30 square degree field-of-view by forming up to 36 separate dual-polarisation beams at once. This results in a high data rate: 70 TB of correlated visibilities in an 8-hour observation, requiring custom-written, high-performance software running in dedicated High Performance Computing (HPC) facilities. The first six antennas equipped with first-generation PAF technology (Mark I), named the Boolardy Engineering Test Array (BETA), have been in use since 2014 as a platform to test PAF calibration and imaging techniques, and along the way BETA has been producing some great science results. Commissioning of the ASKAP Array Release 1, that is, the first six antennas with second-generation PAFs (Mark II), is currently under way. An integral part of the instrument is the Central Processor platform hosted at the Pawsey Supercomputing Centre in Perth, which executes custom-written software pipelines designed specifically to meet the ASKAP imaging requirements of wide field of view and high dynamic range. There are three key hardware components of the Central Processor: the ingest nodes (a 16-node cluster), the fast temporary storage (a 1 PB Lustre file system) and the processing supercomputer (a 200 TFlop system). This HPC platform is managed and supported by the Pawsey support team. Due to the limited amount of data generated by BETA and the first ASKAP Array Release, the Central Processor platform has been running in a more "traditional", user-interactive mode. But this is about to change: integration and verification of the online ingest pipeline, required to support the full 300 MHz bandwidth for Array Release 1, starts in early 2016, followed by the deployment of the real-time data processing components. In addition to the Central Processor, the first production release of the CSIRO ASKAP Science Data Archive (CASDA) has also been deployed in one of the Pawsey Supercomputing Centre facilities and is integrated into the end-to-end ASKAP data flow system. This paper describes the current status of the "end-to-end" data flow software system from preparing observations to data acquisition, processing and archiving, and the challenges of integrating an HPC facility as a key part of the instrument. It also shares some lessons learned since the start of integration activities and the challenges ahead in preparation for the start of the Early Science program.
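
    The quoted figures fix the sustained rate the ingest pipeline must absorb; a quick check of the arithmetic:

        volume_bytes = 70e12        # 70 TB of correlated visibilities
        duration_s = 8 * 3600       # one 8-hour observation
        rate_gb_s = volume_bytes / duration_s / 1e9
        print(f"sustained ingest rate ~ {rate_gb_s:.2f} GB/s")   # ~2.43 GB/s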

  17. NiftySim: A GPU-based nonlinear finite element package for simulation of soft tissue biomechanics.

    PubMed

    Johnsen, Stian F; Taylor, Zeike A; Clarkson, Matthew J; Hipwell, John; Modat, Marc; Eiben, Bjoern; Han, Lianghao; Hu, Yipeng; Mertzanidou, Thomy; Hawkes, David J; Ourselin, Sebastien

    2015-07-01

    NiftySim, an open-source finite element toolkit, has been designed to allow incorporation of high-performance soft tissue simulation capabilities into biomedical applications. The toolkit provides the option of execution on fast graphics processing unit (GPU) hardware, numerous constitutive models and solid-element options, membrane and shell elements, and contact modelling facilities, in a simple-to-use library. The toolkit is founded on the total Lagrangian explicit dynamics (TLED) algorithm, which has been shown to be efficient and accurate for simulation of soft tissues. The base code is written in C++, and GPU execution is achieved using the nVidia CUDA framework. In most cases, interaction with the underlying solvers can be achieved through a single Simulator class, which may be embedded directly in third-party applications such as surgical guidance systems. Advanced capabilities such as contact modelling and nonlinear constitutive models are also provided, as are more experimental technologies like reduced-order modelling. A consistent description of the underlying solution algorithm, its implementation with a focus on GPU execution, and examples of the toolkit's usage in biomedical applications are provided. Efficient mapping of the TLED algorithm to parallel hardware results in very high computational performance, far exceeding that available in commercial packages. The NiftySim toolkit provides high-performance soft tissue simulation capabilities using GPU technology for biomechanical simulation research applications in medical image computing, surgical simulation, and surgical guidance applications.

  18. The infrared spectrograph during the SIRTF pre-definition phase

    NASA Technical Reports Server (NTRS)

    Houck, James R.

    1988-01-01

    A test facility was set up to evaluate back-illuminated impurity band detectors constructed for an infrared spectrograph to be used on the Space Infrared Telescope Facility (SIRTF). Equipment built to perform the tests on these arrays is described. Initial tests have been geared toward determining dark current and read noise for the array. Four prior progress reports are incorporated into this report. They describe the first efforts in the detector development and testing effort; testing details and a new spectrograph concept; a discussion of resolution issues raised by the new design; management activities; a review of computer software and testing facility hardware; and a review of the preamplifier constructed as well as a revised schematic of the detector evaluation facility.

  19. An investigation of networking techniques for the ASRM facility

    NASA Technical Reports Server (NTRS)

    Moorhead, Robert J., II; Smith, Wayne D.; Thompson, Dale R.

    1992-01-01

    This report is based on the early design concepts for a communications network for the Advanced Solid Rocket Motor (ASRM) facility being built at Yellow Creek near Iuka, MS. The investigators have participated in the early design concepts and in the evaluation of the initial concepts. The continuing system design effort and any modification of the plan will require a careful evaluation of the required bandwidth of the network, the capabilities of the protocol, and the requirements of the controllers and computers on the network. The overall network, which is heterogeneous in protocol and bandwidth, is being modeled, analyzed, simulated, and tested to obtain some degree of confidence in its performance capabilities and in its performance under nominal and heavy loads. The results of the proposed work should have an impact on the design and operation of the ASRM facility.

  20. Research Activity in Computational Physics utilizing High Performance Computing: Co-authorship Network Analysis

    NASA Astrophysics Data System (ADS)

    Ahn, Sul-Ah; Jung, Youngim

    2016-10-01

    The research activities of computational physicists utilizing high-performance computing are analyzed by bibliometric approaches. This study aims at providing computational physicists utilizing high-performance computing, as well as policy planners, with useful bibliometric results for an assessment of research activities. In order to achieve this purpose, we carried out a co-authorship network analysis of journal articles to assess the research activities of researchers in high-performance computational physics as a case study. For this study, we used journal articles of the Scopus database from Elsevier covering the time period of 2004-2013. We extracted the author rank in the physics field utilizing high-performance computing by the number of papers published during the ten years from 2004. Finally, we drew the co-authorship network for the 45 top authors and their coauthors, and described some features of the co-authorship network in relation to author rank. Suggestions for further studies are discussed.
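
    The construction the study describes, ranking authors by paper count and then building the graph around the top authors and their coauthors, is straightforward with a graph library; a sketch with toy records in place of the Scopus data:

        from collections import Counter
        from itertools import combinations
        import networkx as nx

        # Each record is a paper's author list (toy data, not the Scopus set).
        papers = [["Kim", "Lee", "Park"], ["Kim", "Lee"], ["Cho", "Kim"]]

        # Rank authors by number of papers; the study kept the top 45.
        rank = Counter(a for authors in papers for a in authors)
        top = {a for a, _ in rank.most_common(2)}

        # Co-authorship graph: one edge per coauthor pair, weighted by joint papers.
        G = nx.Graph()
        for authors in papers:
            for u, v in combinations(sorted(set(authors)), 2):
                w = G[u][v]["weight"] + 1 if G.has_edge(u, v) else 1
                G.add_edge(u, v, weight=w)

        # Subgraph of the top authors together with their coauthors.
        keep = top | {n for t in top for n in G.neighbors(t)}
        H = G.subgraph(keep)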

  1. Microgravity

    NASA Image and Video Library

    1999-11-10

    Space Vacuum Epitaxy Center works with industry and government laboratories to develop advanced thin film materials and devices by utilizing the most abundant free resource in orbit: the vacuum of space. SVEC, along with its affiliates, is developing semiconductor mid-IR lasers for environmental sensing and defense applications, high efficiency solar cells for space satellite applications, oxide thin films for computer memory applications, and ultra-hard thin film coatings for wear resistance in micro devices. Performance of these vacuum deposited thin film materials and devices can be enhanced by using the ultra-vacuum of space for which SVEC has developed the Wake Shield Facility---a free flying research platform dedicated to thin film materials development in space.

  2. Microgravity

    NASA Image and Video Library

    2000-11-10

    Space Vacuum Epitaxy Center works with industry and government laboratories to develop advanced thin film materials and devices by utilizing the most abundant free resource in orbit: the vacuum of space. SVEC, along with its affiliates, is developing semiconductor mid-IR lasers for environmental sensing and defense applications, high efficiency solar cells for space satellite applications, oxide thin films for computer memory applications, and ultra-hard thin film coatings for wear resistance in micro devices. Performance of these vacuum deposited thin film materials and devices can be enhanced by using the ultra-vacuum of space for which SVEC has developed the Wake Shield Facility---a free flying research platform dedicated to thin film materials development in space.

  3. Flow Field Characterization of an Angled Supersonic Jet Near a Bluff Body

    NASA Technical Reports Server (NTRS)

    Wolter, John D.; Childs, Robert; Wernet, Mark P.; Shestopalov, Andrea; Melton, John E.

    2011-01-01

    An experiment was performed to acquire data from a hot supersonic jet in cross flow for the purpose of validating computational fluid dynamics (CFD) turbulence modeling relevant to the Orion Launch Abort System. Hot jet conditions were at the highest temperature and pressure that could be acquired in the test facility. The nozzle pressure ratio was 28.5, and the nozzle temperature ratio was 3. These conditions are different from those of the flight vehicle, but sufficiently high to model the observed turbulence features. Stereo Particle Image Velocimetry (SPIV) data and capsule pressure data are presented. Features of the flow field are presented and discussed.

  4. Recent skyshine calculations at Jefferson Lab

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Degtyarenko, P.

    1997-12-01

    New calculations of the skyshine dose distribution of neutrons and secondary photons have been performed at Jefferson Lab using the Monte Carlo method. The dose dependence on neutron energy, distance to the neutron source, polar angle of a source neutron, and azimuthal angle between the observation point and the momentum direction of a source neutron have been studied. The azimuthally asymmetric term in the skyshine dose distribution is shown to be important in the dose calculations around high-energy accelerator facilities. A parameterization formula and corresponding computer code have been developed which can be used for detailed calculations of the skyshine dose maps.
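
    The report develops its own fit, which is not reproduced here; for orientation only, a commonly used two-parameter empirical skyshine form combines inverse-square falloff with an effective air-attenuation length:

        import math

        def skyshine_dose_rate(source_term, r, lambda_air=600.0):
            """Generic empirical form: dose ~ Q * exp(-r/lambda) / r^2.
            lambda_air (m) and the source term are fit parameters, assumed here;
            the Jefferson Lab parameterization also carries the angular
            dependences described above."""
            return source_term * math.exp(-r / lambda_air) / r**2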

  5. Feasibility study for a numerical aerodynamic simulation facility. Volume 1

    NASA Technical Reports Server (NTRS)

    Lincoln, N. R.; Bergman, R. O.; Bonstrom, D. B.; Brinkman, T. W.; Chiu, S. H. J.; Green, S. S.; Hansen, S. D.; Klein, D. L.; Krohn, H. E.; Prow, R. P.

    1979-01-01

    A Numerical Aerodynamic Simulation Facility (NASF) was designed for the simulation of fluid flow around three-dimensional bodies, both in wind tunnel environments and in free space. The application of numerical simulation to this field of endeavor promised to yield economies in aerodynamic and aircraft body designs. A model for a NASF/FMP (Flow Model Processor) ensemble using a possible approach to meeting NASF goals is presented. The computer hardware and software are presented, along with the entire design and performance analysis and evaluation.

  6. Working Towards New Transformative Geoscience Analytics Enabled by Petascale Computing

    NASA Astrophysics Data System (ADS)

    Woodcock, R.; Wyborn, L.

    2012-04-01

    Currently the top 10 supercomputers in the world are petascale, and exascale computers are already being planned. Cloud computing facilities are becoming mainstream, either as private or commercial investments. These computational developments will provide abundant opportunities for the earth science community to tackle the data deluge which has resulted from new instrumentation enabling data to be gathered at a greater rate and at higher resolution. Combined, the new computational environments should enable the earth sciences to be transformed. However, experience in Australia and elsewhere has shown that it is not easy to scale existing earth science methods, software and analytics to take advantage of the increased computational capacity that is now available. It is not simply a matter of 'transferring' current work practices to the new facilities: they have to be extensively 'transformed'. In particular, new geoscientific methods will need to be developed using advanced data mining, assimilation, machine learning and integration algorithms. Software will have to be capable of operating in highly parallelised environments, and will also need to be able to scale as the compute systems grow. Data access will have to improve, and the earth science community needs to move from the file discovery, display and local download paradigm to self-describing data cubes and data arrays that are available as online resources from either major data repositories or the cloud. In the new transformed world, rather than analysing satellite data scene by scene, sensor-agnostic data cubes of calibrated earth observation data will enable researchers to move across data from multiple sensors at varying spatial data resolutions. In using geophysics to characterise basement and cover, rather than analysing individual gridded airborne geophysical data sets and then combining the results, petascale computing will enable analysis of multiple data types, collected at varying resolutions, with integration and validation across data-type boundaries. Increased storage and compute capacity will mean that the uncertainty and reliability of individual observations can consistently be taken into account and propagated throughout the processing chain. If the data access difficulties can be overcome, the increased compute capacity will also mean that larger-scale, more complex models can be run at higher resolution, and that instead of single-pass modelling runs, ensembles of models will be able to be run to simultaneously test multiple hypotheses. Petascale computing and high performance data offer more than "bigger, faster": they are an opportunity for a transformative change in the way in which geoscience research is routinely conducted.

  7. Importance of balanced architectures in the design of high-performance imaging systems

    NASA Astrophysics Data System (ADS)

    Sgro, Joseph A.; Stanton, Paul C.

    1999-03-01

    Imaging systems employed in demanding military and industrial applications, such as automatic target recognition and computer vision, typically require real-time high-performance computing resources. While high-performance computing systems have traditionally relied on proprietary architectures and custom components, recent advances in high-performance general-purpose microprocessor technology have produced an abundance of low-cost components suitable for use in high-performance computing systems. A common pitfall in the design of high-performance imaging systems, particularly systems employing scalable multiprocessor architectures, is the failure to balance computational and memory bandwidth. The performance of standard cluster designs, for example, in which several processors share a common memory bus, is typically constrained by memory bandwidth. The symptom characteristic of this problem is the failure of system performance to scale as more processors are added. The problem becomes exacerbated if I/O and memory functions share the same bus. The recent introduction of microprocessors with large internal caches and high-performance external memory interfaces makes it practical to design high-performance imaging systems with balanced computational and memory bandwidth. Real-world examples of such designs will be presented, along with a discussion of adapting algorithm design to best utilize available memory bandwidth.
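
    The balance argument reduces to comparing a kernel's arithmetic intensity with the machine's flops-per-byte ratio, roofline style; all numbers below are hypothetical:

        peak_flops = 4 * 1.0e9 * 2    # 4 cores x 1 GHz x 2 flops/cycle (assumed)
        mem_bw = 1.6e9                # shared memory bus, bytes/s      (assumed)
        machine_balance = peak_flops / mem_bw   # flops the bus must feed per byte

        kernel_intensity = 0.5        # flops per byte of a streaming image kernel (assumed)
        attainable = min(peak_flops, kernel_intensity * mem_bw)
        # attainable << peak_flops: the node is bandwidth-bound, and adding
        # cores to the same bus will not scale -- the symptom described above.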

  8. CFD Simulations of the IHF Arc-Jet Flow: Compression-Pad/Separation Bolt Wedge Tests

    NASA Technical Reports Server (NTRS)

    Gokcen, Tahir; Skokova, Kristina A.

    2017-01-01

    This paper reports computational analyses in support of two wedge tests in a high enthalpy arc-jet facility at NASA Ames Research Center. These tests were conducted using two different wedge models, each placed in a free jet downstream of a corresponding different conical nozzle in the Ames 60-MW Interaction Heating Facility. Panel test articles included a metallic separation bolt imbedded in the compression-pad and heat shield materials, resulting in a circular protuberance over a flat plate. As part of the test calibration runs, surface pressure and heat flux measurements on water-cooled calibration plates integrated with the wedge models were also obtained. Surface heating distributions on the test articles as well as arc-jet test environment parameters for each test configuration are obtained through computational fluid dynamics simulations, consistent with the facility and calibration measurements. The present analysis comprises simulations of the non-equilibrium flow field in the facility nozzle, test box, and flow field over test articles, and comparisons with the measured calibration data.

  9. Feasibility study for a numerical aerodynamic simulation facility. Volume 2: Hardware specifications/descriptions

    NASA Technical Reports Server (NTRS)

    Green, F. M.; Resnick, D. R.

    1979-01-01

    An FMP (Flow Model Processor) was designed for use in the Numerical Aerodynamic Simulation Facility (NASF). The NASF was developed to simulate fluid flow over three-dimensional bodies in wind tunnel environments and in free space. The facility is applicable to studying aerodynamic and aircraft body designs. The following general topics are discussed in this volume: (1) FMP functional computer specifications; (2) FMP instruction specification; (3) standard product system components; (4) loosely coupled network (LCN) specifications/description; and (5) three appendices: performance of trunk allocation contention elimination (trace) method, LCN channel protocol and proposed LCN unified second level protocol.

  10. Diffraction studies applicable to 60-foot microwave research facilities

    NASA Technical Reports Server (NTRS)

    Schmidt, R. F.

    1973-01-01

    The principal features of this document are the analysis of a large dual-reflector antenna system by vector Kirchhoff theory, the evaluation of subreflector aperture-blocking, determination of the diffraction and blockage effects of a subreflector mounting structure, and an estimate of strut-blockage effects. Most of the computations are for a frequency of 15.3 GHz, and were carried out using the IBM 360/91 and 360/95 systems at Goddard Space Flight Center. The FORTRAN 4 computer program used to perform the computations is of a general and modular type so that various system parameters such as frequency, eccentricity, diameter, focal-length, etc. can be varied at will. The parameters of the 60-foot NRL Ku-band installation at Waldorf, Maryland, were entered into the program for purposes of this report. Similar calculations could be performed for the NELC installation at La Posta, California, the NASA Wallops Station facility in Virginia, and other antenna systems, by a simple change in IBM control cards. A comparison is made between secondary radiation patterns of the NRL antenna measured by DOD Satellite and those obtained by analytical/numerical methods at a frequency of 7.3 GHz.

  11. An assessment technique for computer-socket manufacturing

    PubMed Central

    Sanders, Joan; Severance, Michael

    2015-01-01

    An assessment strategy is presented for testing the quality of carving and forming of individual computer aided manufacturing facilities. The strategy is potentially useful to facilities making sockets and companies marketing manufacturing equipment. To execute the strategy, an evaluator fabricates a collection of test models and sockets using the manufacturing suite under evaluation, and then measures their shapes using scanning equipment. Overall socket quality is assessed by comparing socket shapes with electronic file shapes. Then model shapes are compared with electronic file shapes to characterize carving performance. Socket shapes are compared with model shapes to characterize forming performance. The mean radial error (MRE), which is the average difference in radii between the two shapes being compared, provides insight into sizing quality. Inter-quartile range (IQR), the range of radial error for the best matched half of the points on the surfaces being compared, provides insight into shape quality. By determining MRE and IQR for carving and forming separately, the source(s) of socket shape error may be pinpointed. The developed strategy may provide a useful tool to the prosthetics community and industry to help identify problems and limitations in computer aided manufacturing and insight into appropriate modifications to overcome them. PMID:21938663
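
    Both metrics are simple to compute once the two surfaces are sampled at matched points; a sketch of MRE and IQR as defined above:

        import numpy as np

        def mre_iqr(radii_meas, radii_ref):
            """Radial-error metrics for two shapes at matched surface points."""
            err = np.asarray(radii_meas) - np.asarray(radii_ref)
            mre = err.mean()                        # sizing quality
            q1, q3 = np.percentile(err, [25, 75])   # best-matched half of points
            return mre, q3 - q1                     # shape quality (IQR)

        # Carving: model scan vs. electronic file; forming: socket scan vs.
        # model scan; overall: socket scan vs. electronic file.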

  12. Analysis of accident sequences and source terms at waste treatment and storage facilities for waste generated by U.S. Department of Energy Waste Management Operations, Volume 3: Appendixes C-H

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mueller, C.; Nabelssi, B.; Roglans-Ribas, J.

    1995-04-01

    This report contains the Appendices for the Analysis of Accident Sequences and Source Terms at Waste Treatment and Storage Facilities for Waste Generated by the U.S. Department of Energy Waste Management Operations. The main report documents the methodology, computational framework, and results of facility accident analyses performed as a part of the U.S. Department of Energy (DOE) Waste Management Programmatic Environmental Impact Statement (WM PEIS). The accident sequences potentially important to human health risk are specified, their frequencies are assessed, and the resultant radiological and chemical source terms are evaluated. A personal computer-based computational framework and database have been developed that provide these results as input to the WM PEIS for calculation of human health risk impacts. This report summarizes the accident analyses and aggregates the key results for each of the waste streams. Source terms are estimated and results are presented for each of the major DOE sites and facilities by WM PEIS alternative for each waste stream. The appendices identify the potential atmospheric release of each toxic chemical or radionuclide for each accident scenario studied. They also provide discussion of specific accident analysis data and guidance used or consulted in this report.

  13. Calibration of Axisymmetric and Quasi-1D Solvers for High Enthalpy Nozzles

    NASA Technical Reports Server (NTRS)

    Papadopoulos, P. E.; Gochberg, L. A.; Tokarcik-Polsky, S.; Venkatapathy, E.; Deiwert, G. S.; Edwards, Thomas A. (Technical Monitor)

    1994-01-01

    The proposed paper will present a numerical investigation of the flow characteristics and boundary layer development in the nozzles of high enthalpy shock tunnel facilities used for hypersonic propulsion testing. The computed flow will be validated against existing experimental data. Pitot pressure data obtained at the entrance of the test cabin will be used to validate the numerical simulations. It is necessary to accurately model the facility nozzles in order to characterize the test article flow conditions. Initially the axisymmetric nozzle flow will be computed using a Navier-Stokes solver for a range of reservoir conditions. The calculated solutions will be compared and calibrated against available experimental data from the DLR HEG piston-driven shock tunnel and the 16-inch shock tunnel at NASA Ames Research Center. The Reynolds number at the throat is assumed to be high enough that the boundary layer flow is turbulent from that point downstream. The real-gas effects will be examined. In high Mach number facilities the boundary layer is thick. Attempts will be made to correlate the boundary layer displacement thickness. The displacement thickness correlation will be used to calibrate the quasi-1D codes NENZF and LSENS in order to provide fast and efficient tools for characterizing the facility nozzles. The calibrated quasi-1D codes will be implemented to study the effects of chemistry and the flow condition variations at the test section due to small variations in the driver gas conditions.
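
    Quasi-1D characterization of a nozzle rests on the isentropic area-Mach relation; a minimal sketch of inverting it on the supersonic branch (perfect gas, no displacement-thickness correction, so only the starting point for the calibration described above):

        def mach_from_area_ratio(area_ratio, gamma=1.4, supersonic=True):
            """Invert A/A* = (1/M) * [(2/(g+1)) * (1 + (g-1)/2 * M^2)]^((g+1)/(2(g-1)))."""
            def f(m):
                t = (2.0 / (gamma + 1.0)) * (1.0 + 0.5 * (gamma - 1.0) * m * m)
                return t ** ((gamma + 1.0) / (2.0 * (gamma - 1.0))) / m
            lo, hi = (1.0, 50.0) if supersonic else (1e-6, 1.0)
            for _ in range(200):   # bisection; f is monotone on each branch
                mid = 0.5 * (lo + hi)
                if (f(mid) < area_ratio) == supersonic:
                    lo = mid
                else:
                    hi = mid
            return 0.5 * (lo + hi)

    In practice the effective area ratio is reduced by the boundary layer displacement thickness, which is exactly the correlation the calibration seeks.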

  14. 40 CFR 60.260 - Applicability and designation of affected facility.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... affected facility. 60.260 Section 60.260 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... Performance for Ferroalloy Production Facilities § 60.260 Applicability and designation of affected facility..., ferrochrome silicon, silvery iron, high-carbon ferrochrome, charge chrome, standard ferromanganese...

  15. 40 CFR 60.260 - Applicability and designation of affected facility.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... affected facility. 60.260 Section 60.260 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... Performance for Ferroalloy Production Facilities § 60.260 Applicability and designation of affected facility..., ferrochrome silicon, silvery iron, high-carbon ferrochrome, charge chrome, standard ferromanganese...

  16. EUROPLANET-RI modelling service for the planetary science community: European Modelling and Data Analysis Facility (EMDAF)

    NASA Astrophysics Data System (ADS)

    Khodachenko, Maxim; Miller, Steven; Stoeckler, Robert; Topf, Florian

    2010-05-01

    Computational modeling and observational data analysis are two major aspects of modern scientific research, and both are under extensive development and application. Many of the scientific goals of planetary space missions require robust models of planetary objects and environments as well as efficient data analysis algorithms, to predict conditions for mission planning and to interpret the experimental data. Europe has great strength in these areas, but it is insufficiently coordinated; individual groups, models, techniques and algorithms need to be coupled and integrated. The existing level of scientific cooperation and the technical capabilities for rapid communication allow considerable progress in the development of a distributed international Research Infrastructure (RI) based on Europe's existing computational modelling and data analysis centers, providing the scientific community with dedicated services in the fields of their computational and data analysis expertise. These services will emerge as a product of collaborative communication and joint research efforts of numerical and data analysis experts together with planetary scientists. The major goal of EUROPLANET-RI / EMDAF is to make computational models and data analysis algorithms associated with particular national RIs and teams, as well as their outputs, more readily available to their potential user community and more tailored to scientific user requirements, without compromising front-line specialized research on model and data analysis algorithm development and software implementation. This objective will be met through four key subdivisions/tasks of EMDAF: 1) an Interactive Catalogue of Planetary Models; 2) a Distributed Planetary Modelling Laboratory; 3) a Distributed Data Analysis Laboratory; and 4) enabling Models and Routines for High Performance Computing Grids. Using the advantages of coordinated operation and efficient communication between the involved computational modelling, research and data analysis expert teams and their related research infrastructures, EMDAF will provide a 1) flexible, 2) scientific-user-oriented, 3) continuously developing and fast-upgrading computational and data analysis service to support and intensify European planetary research. At the beginning, EMDAF will create a set of demonstrators and operational tests of this service in key areas of European planetary science. This work will aim at the following objectives: (a) development and implementation of tools for remote interactive communication between planetary scientists and computing experts (including related RIs); (b) development of standard routine packages and user-friendly interfaces for operation of the existing numerical codes and data analysis algorithms by specialized planetary scientists; (c) development of a prototype of numerical modelling services "on demand" for space missions and planetary researchers; (d) development of a prototype of data analysis services "on demand" for space missions and planetary researchers; (e) development of a prototype of coordinated, interconnected simulations of planetary phenomena and objects (global multi-model simulators); (f) providing demonstrators of coordinated use of high performance computing facilities (supercomputer networks), done in cooperation with the European HPC Grid DEISA.

  17. Oklahoma Center for High Energy Physics (OCHEP)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nandi, S; Strauss, M J; Snow, J

    2012-02-29

    The DOE EPSCoR implementation grant, with support from the State of Oklahoma and from the three universities, Oklahoma State University, the University of Oklahoma and Langston University, resulted in the establishment of the Oklahoma Center for High Energy Physics (OCHEP) in 2004. Currently, OCHEP continues to flourish as a vibrant hub for research in experimental and theoretical particle physics and an educational center in the State of Oklahoma. All goals of the original proposal were successfully accomplished. These include the foundation of a new experimental particle physics group at OSU, the establishment of a Tier 2 computing facility for Large Hadron Collider (LHC) and Tevatron data analysis at OU, and the organization of a vital particle physics research center in Oklahoma based on the resources of the three universities. OSU has hired two tenure-track faculty members with initial support from the grant funds; both positions are now supported through the OSU budget. This new HEP experimental group at OSU has established itself as a full member of the Fermilab D0 Collaboration and the LHC ATLAS Experiment and has secured external funds from the DOE and the NSF. These funds currently support 2 graduate students, 1 postdoctoral fellow, and 1 part-time engineer. The grant initiated the creation of a Tier 2 computing facility at OU as part of the Southwest Tier 2 facility, and a permanent Research Scientist was hired at OU to maintain and run the facility. Permanent support for this position has now been provided through the OU university budget. OCHEP represents a successful model of cooperation among several universities, establishing a critical mass of manpower, computing and hardware resources. This has increased Oklahoma's impact in all areas of HEP: theory, experiment, and computation. The Center personnel are involved in cutting-edge research in experimental, theoretical, and computational aspects of High Energy Physics, with research areas ranging from the search for new phenomena at the Fermilab Tevatron and the CERN Large Hadron Collider to theoretical modeling, computer simulation, detector development and testing, and physics analysis. OCHEP faculty members participating in the D0 collaboration at the Fermilab Tevatron and in the ATLAS collaboration at the CERN LHC have made a major impact on the Standard Model (SM) Higgs boson search, top quark studies, B physics studies, and measurements of Quantum Chromodynamics (QCD) phenomena. The OCHEP Grid computing facility consists of a large computer cluster which is playing a major role in data analysis and Monte Carlo production for both the D0 and ATLAS experiments. Theoretical efforts are devoted to new ideas in Higgs boson physics, extra dimensions, neutrino masses and oscillations, Grand Unified Theories, supersymmetric models, dark matter, and nonperturbative quantum field theory. Theory members are making major contributions to the understanding of phenomena being explored at the Tevatron and the LHC. They have proposed new models for Higgs bosons, and have suggested new signals for extra dimensions and for the search for supersymmetric particles. During the seven-year period when OCHEP was partially funded through the DOE EPSCoR implementation grant, OCHEP members published over 500 refereed journal articles and made over 200 invited presentations at major conferences.
The Center is also involved in education and outreach activities by offering summer research programs for high school teachers and college students, and organizing summer workshops for high school teachers, sometimes coordinating with the QuarkNet programs at OSU and OU. The details of the Center can be found at http://ochep.phy.okstate.edu.

  18. ArrayBridge: Interweaving declarative array processing with high-performance computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xing, Haoyuan; Floratos, Sofoklis; Blanas, Spyros

    Scientists are increasingly turning to datacenter-scale computers to produce and analyze massive arrays. Despite decades of database research that extols the virtues of declarative query processing, scientists still write, debug and parallelize imperative HPC kernels even for the most mundane queries. This impedance mismatch has been partly attributed to the cumbersome data loading process; in response, the database community has proposed in situ mechanisms to access data in scientific file formats. Scientists, however, desire more than a passive access method that reads arrays from files. This paper describes ArrayBridge, a bi-directional array view mechanism for scientific file formats, that aims to make declarative array manipulations interoperable with imperative file-centric analyses. Our prototype implementation of ArrayBridge uses HDF5 as the underlying array storage library and seamlessly integrates into the SciDB open-source array database system. In addition to fast querying over external array objects, ArrayBridge produces arrays in the HDF5 file format just as easily as it can read from it. ArrayBridge also supports time travel queries from imperative kernels through the unmodified HDF5 API, and automatically deduplicates between array versions for space efficiency. Our extensive performance evaluation in NERSC, a large-scale scientific computing facility, shows that ArrayBridge exhibits statistically indistinguishable performance and I/O scalability to the native SciDB storage engine.
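
    ArrayBridge's own API is not reproduced here; as orientation only, the kind of chunked HDF5 access it builds on looks like this in Python with h5py:

        import h5py
        import numpy as np

        # Write a chunked array, then read back a sub-block without loading
        # the whole dataset -- the access pattern that makes in situ array
        # views over scientific file formats practical.
        with h5py.File("demo.h5", "w") as f:
            f.create_dataset("grid", data=np.arange(100.0).reshape(10, 10),
                             chunks=(5, 5))

        with h5py.File("demo.h5", "r") as f:
            block = f["grid"][2:5, 0:3]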

  19. Experimental and Computational Aerothermodynamics of a Mars Entry Vehicle

    NASA Technical Reports Server (NTRS)

    Hollis, Brian R.

    1996-01-01

    An aerothermodynamic database has been generated through both experimental testing and computational fluid dynamics simulations for a 70 deg sphere-cone configuration based on the NASA Mars Pathfinder entry vehicle. The aerothermodynamics of several related parametric configurations were also investigated. Experimental heat-transfer data were obtained at hypersonic test conditions in both a perfect gas air wind tunnel and in a hypervelocity, high-enthalpy expansion tube in which both air and carbon dioxide were employed as test gases. In these facilities, measurements were made with thin-film temperature-resistance gages on both the entry vehicle models and on the support stings of the models. Computational results for freestream conditions equivalent to those of the test facilities were generated using an axisymmetric/2D laminar Navier-Stokes solver with both perfect-gas and nonequilibrium thermochemical models. Forebody computational and experimental heating distributions agreed to within the experimental uncertainty for both the perfect-gas and high-enthalpy test conditions. In the wake, quantitative differences between experimental and computational heating distributions for the perfect-gas conditions indicated transition of the free shear layer near the reattachment point on the sting. For the high enthalpy cases, agreement to within, or slightly greater than, the experimental uncertainty was achieved in the wake except within the recirculation region, where further grid resolution appeared to be required. Comparisons between the perfect-gas and high-enthalpy results indicated that the wake remained laminar at the high-enthalpy test conditions, for which the Reynolds number was significantly lower than that of the perfect-gas conditions.

  20. Clinical Practice Guideline Implementation Strategy Patterns in Veterans Affairs Primary Care Clinics

    PubMed Central

    Hysong, Sylvia J; Best, Richard G; Pugh, Jacqueline A

    2007-01-01

    Background The Department of Veterans Affairs (VA) mandated the system-wide implementation of clinical practice guidelines (CPGs) in the mid-1990s, arming all facilities with basic resources to facilitate implementation; despite this resource allocation, significant variability still exists across VA facilities in implementation success. Objective This study compares CPG implementation strategy patterns used by high and low performing primary care clinics in the VA. Research Design Descriptive, cross-sectional study of a purposeful sample of six Veterans Affairs Medical Centers (VAMCs) with high and low performance on six CPGs. Subjects One hundred and two employees (management, quality improvement, clinic personnel) involved with guideline implementation at each VAMC primary care clinic. Measures Participants reported specific strategies used by their facility to implement guidelines in 1-hour semi-structured interviews. Facilities were classified as high or low performers based on their guideline adherence scores calculated through independently conducted chart reviews. Findings High performing facilities (HPFs) (a) invested significantly in the implementation of the electronic medical record and locally adapting it to provider needs, (b) invested dedicated resources to guideline-related initiatives, and (c) exhibited a clear direction in their strategy choices. Low performing facilities exhibited (a) earlier stages of development for their electronic medical record, (b) reliance on preexisting resources for guideline implementation, with little local adaptation, and (c) no clear direction in their strategy choices. Conclusion A multifaceted, yet targeted, strategic approach to guideline implementation emphasizing dedicated resources and local adaptation may result in more successful implementation and higher guideline adherence than relying on standardized resources and taxing preexisting channels. PMID:17355583

  1. Public Computer Assisted Learning Facilities for Children with Visual Impairment: Universal Design for Inclusive Learning

    ERIC Educational Resources Information Center

    Siu, Kin Wai Michael; Lam, Mei Seung

    2012-01-01

    Although computer assisted learning (CAL) is becoming increasingly popular, people with visual impairment face greater difficulty in accessing computer-assisted learning facilities. This is primarily because most of the current CAL facilities are not visually impaired friendly. People with visual impairment also do not normally have access to…

  2. 14 CFR 61.125 - Aeronautical knowledge.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... magnetic compass for pilotage and dead reckoning; (10) Use of air navigation facilities; (11) Aeronautical... aeronautical knowledge areas of paragraph (b) of this section that apply to the aircraft category and class... operation of aircraft; (6) Weight and balance computations; (7) Use of performance charts; (8) Significance...

  3. Marc Henry de Frahan | NREL

    Science.gov Websites

    Computing Project, Marc develops high-fidelity turbulence models to enhance simulation accuracy and efficient numerical algorithms for future high-performance computing hardware architectures. Research interests: high-performance computing; high-order numerical methods for computational fluid dynamics; fluid

  4. Multiprocessor switch with selective pairing

    DOEpatents

    Gara, Alan; Gschwind, Michael K; Salapura, Valentina

    2014-03-11

    System, method and computer program product for a multiprocessing system to offer selective pairing of processor cores for increased processing reliability. A selective pairing facility is provided that selectively connects, i.e., pairs, multiple microprocessor or processor cores to provide one highly reliable thread (or thread group). Each pair of cores that provides one highly reliable thread connects with system components such as a memory "nest" (or memory hierarchy), an optional system controller, an optional interrupt controller, and optional I/O or peripheral devices. The memory nest is attached to the selective pairing facility via a switch or a bus.
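
    As a purely software analogue (illustrative only; the patent pairs cores in hardware, with the comparison handled by the switch fabric), duplicate execution with result comparison looks like:

        from concurrent.futures import ThreadPoolExecutor

        def run_paired(fn, *args):
            """Run the same deterministic computation twice; disagreement
            between the two results is flagged as a detected fault."""
            with ThreadPoolExecutor(max_workers=2) as pool:
                r1 = pool.submit(fn, *args).result()
                r2 = pool.submit(fn, *args).result()
            if r1 != r2:
                raise RuntimeError("paired results disagree: possible fault")
            return r1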

  5. High-performance conjugate-gradient benchmark: A new metric for ranking high-performance computing systems

    DOE PAGES

    Dongarra, Jack; Heroux, Michael A.; Luszczek, Piotr

    2015-08-17

    Here, we describe a new high-performance conjugate-gradient (HPCG) benchmark. HPCG is composed of computations and data-access patterns commonly found in scientific applications. HPCG strives for a better correlation to existing codes from the computational science domain and to be representative of their performance. Furthermore, HPCG is meant to help drive the computer system design and implementation in directions that will better impact future performance improvement.
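
    HPCG runs a preconditioned conjugate gradient over a synthetic 3-D grid; the unpreconditioned kernel below is only a sketch of the operations the benchmark stresses (sparse matrix-vector products, dot products, vector updates), not the benchmark itself:

        import numpy as np

        def cg(A, b, tol=1e-8, max_iter=1000):
            """Conjugate gradient for symmetric positive-definite A."""
            x = np.zeros_like(b)
            r = b - A @ x
            p = r.copy()
            rs = r @ r
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rs / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                rs_new = r @ r
                if np.sqrt(rs_new) < tol:
                    break
                p = r + (rs_new / rs) * p
                rs = rs_new
            return x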

  6. Performance evaluation of the NASA/KSC CAD/CAE and office automation LAN's

    NASA Technical Reports Server (NTRS)

    Zobrist, George W.

    1994-01-01

    This study's objective is the performance evaluation of the existing CAD/CAE (Computer Aided Design/Computer Aided Engineering) network at NASA/KSC. This evaluation also includes a similar study of the Office Automation network, since it is being planned to integrate this network into the CAD/CAE network. The Microsoft mail facility which is presently on the CAD/CAE network was monitored to determine its present usage. This performance evaluation of the various networks will aid the NASA/KSC network managers in planning for the integration of future workload requirements into the CAD/CAE network and determining the effectiveness of the planned FDDI (Fiber Distributed Data Interface) migration.

  7. Facility versus unit level reporting of quality indicators in nursing homes when performance monitoring is the goal

    PubMed Central

    Norton, Peter G; Murray, Michael; Doupe, Malcolm B; Cummings, Greta G; Poss, Jeff W; Squires, Janet E; Teare, Gary F; Estabrooks, Carole A

    2014-01-01

    Objectives: To demonstrate the benefit of defining operational management units in nursing homes and computing quality indicators on these units as well as on the whole facility. Design: Calculation of adjusted Resident Assessment Instrument – Minimum Data Set 2.0 (RAI–MDS 2.0) quality indicators for PRU05 (prevalence of residents with a stage 2–4 pressure ulcer), PAI0X (prevalence of residents with pain) and DRG01 (prevalence of residents receiving an antipsychotic with no diagnosis of psychosis) for quarterly assessments between 2007 and 2011 at unit and facility levels; comparisons of these risk-adjusted quality indicators using statistical process control (control charts). Setting: A representative sample of 30 urban nursing homes in the three Canadian Prairie Provinces. Measurements: Explicit decision rules were developed and tested to determine whether the control charts demonstrated improving, worsening, unchanging or unclassifiable trends over the time period. Unit and facility performance were compared. Results: In 48.9% of the units studied, unit control chart performance indicated different changes in quality over the reporting period than did the facility chart. Examples are provided to illustrate that these differences lead to quite different quality interventions. Conclusions: Our results demonstrate the necessity of considering facility-level and unit-level measurement when calculating quality indicators derived from the RAI–MDS 2.0 data, and quite probably from any RAI measures. PMID:24523428
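
    The unit-versus-facility distinction matters partly for statistical reasons: control limits widen as the denominator shrinks. A generic 3-sigma p-chart sketch (the study's indicators are risk-adjusted, which this does not show; unit and facility sizes are assumed):

        import math

        def p_chart_limits(p_bar, n):
            """3-sigma control limits for a proportion observed on n residents."""
            sigma = math.sqrt(p_bar * (1.0 - p_bar) / n)
            return max(0.0, p_bar - 3 * sigma), min(1.0, p_bar + 3 * sigma)

        unit_limits = p_chart_limits(0.12, 40)        # one unit (assumed size)
        facility_limits = p_chart_limits(0.12, 160)   # whole facility (assumed)

    A facility chart pooled over several units can thus stay in control while individual unit charts signal divergent trends, which is why unit-level reporting changes the resulting quality interventions.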

  8. Science & Technology Review June 2012

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poyneer, L A

    2012-04-20

    This month's issue has the following articles: (1) A New Era in Climate System Analysis - Commentary by William H. Goldstein; (2) Seeking Clues to Climate Change - By comparing past climate records with results from computer simulations, Livermore scientists can better understand why Earth's climate has changed and how it might change in the future; (3) Finding and Fixing a Supercomputer's Faults - Livermore experts have developed innovative methods to detect hardware faults in supercomputers and help applications recover from errors that do occur; (4) Targeting Ignition - Enhancements to the cryogenic targets for National Ignition Facility experiments are furthering work to achieve fusion ignition with energy gain; (5) Neural Implants Come of Age - A new generation of fully implantable, biocompatible neural prosthetics offers hope to patients with neurological impairment; and (6) Incubator Busy Growing Energy Technologies - Six collaborations with industrial partners are using the Laboratory's high-performance computing resources to find solutions to urgent energy-related problems.

  9. The markup is the model: reasoning about systems biology models in the Semantic Web era.

    PubMed

    Kell, Douglas B; Mendes, Pedro

    2008-06-07

    Metabolic control analysis, co-invented by Reinhart Heinrich, is a formalism for the analysis of biochemical networks, and is a highly important intellectual forerunner of modern systems biology. Exchanging ideas and exchanging models are part of the international activities of science and scientists, and the Systems Biology Markup Language (SBML) allows one to perform the latter with great facility. Encoding such models in SBML allows their distributed analysis using loosely coupled workflows, and with the advent of the Internet the various software modules that one might use to analyze biochemical models can reside on entirely different computers and even on different continents. Optimization is at the core of many scientific and biotechnological activities, and Reinhart made many major contributions in this area, stimulating our own activities in the use of the methods of evolutionary computing for optimization.

  10. Implementing an Affordable High-Performance Computing for Teaching-Oriented Computer Science Curriculum

    ERIC Educational Resources Information Center

    Abuzaghleh, Omar; Goldschmidt, Kathleen; Elleithy, Yasser; Lee, Jeongkyu

    2013-01-01

    With the advances in computing power, high-performance computing (HPC) platforms have had an impact on not only scientific research in advanced organizations but also computer science curriculum in the educational community. For example, multicore programming and parallel systems are highly desired courses in the computer science major. However,…

  11. Yahoo! Compute Coop (YCC). A Next-Generation Passive Cooling Design for Data Centers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robison, AD; Page, Christina; Lytle, Bob

    The purpose of the Yahoo! Compute Coop (YCC) project is to research, design, build and implement a greenfield "efficient data factory" and to specifically demonstrate that the YCC concept is feasible for large facilities housing tens of thousands of heat-producing computing servers. The project scope for the Yahoo! Compute Coop technology includes: - Analyzing and implementing ways in which to drastically decrease energy consumption and waste output. - Analyzing the laws of thermodynamics and implementing naturally occurring environmental effects in order to maximize the "free-cooling" for large data center facilities. "Free cooling" is the direct usage of outside air to cool the servers vs. traditional "mechanical cooling" which is supplied by chillers or other Dx units. - Redesigning and simplifying building materials and methods. - Shortening and simplifying build-to-operate schedules while at the same time reducing initial build and operating costs. Selected for its favorable climate, the greenfield project site is located in Lockport, NY. Construction on the 9.0 MW critical load data center facility began in May 2009, with the fully operational facility deployed in September 2010. The relatively low initial build cost, compatibility with current server and network models, and the efficient use of power and water are all key features that make it a highly compatible and globally implementable design innovation for the data center industry. Yahoo! Compute Coop technology is designed to achieve 99.98% uptime availability. This integrated building design allows for free cooling 99% of the year via the building's unique shape and orientation, as well as server physical configuration.

  12. Scaling Studies for Advanced High Temperature Reactor Concepts, Final Technical Report: October 2014—December 2017

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woods, Brian; Gutowska, Izabela; Chiger, Howard

    Computer simulations of nuclear reactor thermal-hydraulic phenomena are often used in the design and licensing of nuclear reactor systems. In order to assess the accuracy of these computer simulations, computer codes and methods are often validated against experimental data. This experimental data must be of sufficiently high quality in order to conduct a robust validation exercise. In addition, this experimental data is generally collected at experimental facilities that are of a smaller scale than the reactor systems being simulated, due to cost considerations. Therefore, smaller scale test facilities must be designed and constructed in such a fashion as to ensure that the prototypical behavior of a particular nuclear reactor system is preserved. The work completed through this project has resulted in scaling analyses and conceptual design development for a test facility capable of collecting code validation data for the following high temperature gas reactor systems and events: 1. a passive natural circulation core cooling system; 2. a pebble bed gas reactor concept; 3. the General Atomics Energy Multiplier Module reactor; and 4. a prismatic block design steam-water ingress event. In the event that code validation data for these systems or events is needed in the future, significant progress in the design of an appropriate integral-type test facility has already been completed as a result of this project. Where applicable, the next step would be to begin the detailed design development and material procurement. As part of this project, applicable scaling analyses were completed and test facility design requirements developed. Conceptual designs were developed for the implementation of these design requirements at the Oregon State University (OSU) High Temperature Test Facility (HTTF). The original HTTF is based on a ¼-scale model of a high temperature gas reactor concept with the capability for both forced and natural circulation flow through a prismatic core with an electrical heat source. The peak core region temperature capability is 1400°C. As part of this project, an inventory of test facilities that could be used for these experimental programs was completed. Several of these facilities showed some promise; however, upon further investigation it became clear that only the OSU HTTF had the power and/or peak temperature limits that would allow for the experimental programs envisioned herein. Thus the conceptual design and feasibility study development focused on examining the feasibility of configuring the current HTTF to collect validation data for these experimental programs. In addition to the scaling analyses and conceptual design development, a test plan was developed for the envisioned modified test facility. This test plan included a discussion of an appropriate shakedown test program as well as the specific matrix tests. Finally, a feasibility study was completed to determine the cost and schedule considerations that would be important to any test program developed to investigate these designs and events.

  13. The F-18 systems research aircraft facility

    NASA Technical Reports Server (NTRS)

    Sitz, Joel R.

    1992-01-01

    To help ensure that new aerospace initiatives rapidly transition to competitive U.S. technologies, NASA Dryden Flight Research Facility has dedicated a systems research aircraft facility. The primary goal is to accelerate the transition of new aerospace technologies to commercial, military, and space vehicles. Key technologies include more-electric aircraft concepts, fly-by-light systems, flush airdata systems, and advanced computer architectures. Future aircraft that will benefit are the high-speed civil transport and the National AeroSpace Plane. This paper describes the systems research aircraft flight research vehicle and outlines near-term programs.

  14. GYROKINETIC PARTICLE SIMULATION OF TURBULENT TRANSPORT IN BURNING PLASMAS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Horton, Claude Wendell

    2014-06-10

    The SciDAC project at the IFS advanced the state of high performance computing for turbulent structures and turbulent transport. The team project with Prof. Zhihong Lin (PI) at the University of California, Irvine produced new understanding of turbulent electron transport. The simulations were performed at the Texas Advanced Computing Center (TACC) and the NERSC facility by Wendell Horton, Lee Leonard, and the IFS graduate students working in that group. The research included a validation of the electron turbulent transport code against data from a steady-state experiment at Columbia University, in which detailed probe measurements of the turbulence in steady state were taken over a wide range of temperature gradients for comparison with the simulation data. These results were published in a joint paper with Texas graduate student Dr. Xiangrong Fu, based on the work in his PhD dissertation: X. R. Fu, W. Horton, Y. Xiao, Z. Lin, A. K. Sen, and V. Sokolov, "Validation of electron temperature gradient turbulence in the Columbia Linear Machine," Phys. Plasmas 19, 032303 (2012).

  15. 235U Holdup Measurements in Three 321-M Exhaust HEPA Banks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dewberry, R

    2005-02-24

    The Analytical Development Section of Savannah River National Laboratory (SRNL) was requested by the Facilities Disposition Division to determine the holdup of enriched uranium in the 321-M facility as part of an overall deactivation project. The 321-M facility was used to fabricate enriched uranium fuel assemblies, lithium-aluminum target tubes, neptunium assemblies, and miscellaneous components for the production reactors. The results of the holdup assays are essential for determining compliance with the Waste Acceptance Criteria and Material Control & Accountability requirements, and for meeting criticality safety controls. This report covers holdup measurements of uranium residue in three HEPA filter exhaust banks of the 321-M facility. Each of the exhaust banks has dimensions near 7' x 14' x 4' and represents a complex holdup problem. A portable HPGe detector and an EG&G Dart system containing the high voltage power supply and signal processing electronics were used to determine highly enriched uranium (HEU) holdup. A personal computer with Gamma-Vision software was used to control the Dart MCA and to store and manipulate multiple 4096-channel γ-ray spectra. Some acquisitions were performed with the portable detector configured to a Canberra Inspector using NDA2000 acquisition and analysis software. The results for each component use a mixture of redundant point-source and area-source acquisitions, which yielded HEU contents in the range of 2-10 grams. This report discusses the methodology, non-destructive assay (NDA) measurements, assumptions, and results of the uranium holdup in these items, including the use of transmission-corrected assay as well as correction for contributions from secondary area sources.
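
    To make the point-source, transmission-corrected approach concrete, the sketch below converts a net 185.7 keV peak count rate into grams of 235U, applying the standard uniform-slab self-attenuation factor CF = -ln(T)/(1 - T). The detector efficiency, count rate, and transmission are illustrative assumptions, not values from the 321-M survey.

        # Hedged sketch of a far-field point-source holdup estimate at
        # the 185.7 keV line of U-235. All measured inputs are made up.
        import math

        SA_U235 = 8.0e4   # specific activity of U-235 (Bq/g)
        I_186 = 0.572     # 185.7 keV gammas emitted per decay

        def holdup_grams(net_cps, peak_efficiency, transmission):
            """Grams of U-235 inferred from a net peak count rate.
            peak_efficiency: absolute full-energy peak efficiency at the
                             source-to-detector distance (counts/gamma).
            transmission:    measured 185.7 keV transmission through the
                             deposit (uniform-slab self-attenuation)."""
            cf = -math.log(transmission) / (1.0 - transmission)
            emitted = net_cps / peak_efficiency * cf   # gammas/s emitted
            return emitted / (I_186 * SA_U235)

        # Illustrative numbers only:
        print(f"{holdup_grams(2.5, 1.2e-4, 0.8):.2f} g U-235")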

  16. The computer-communication link for the innovative use of Space Station

    NASA Technical Reports Server (NTRS)

    Carroll, C. C.

    1984-01-01

    The potential capability of the computer-communications system link of the space station is related to its innovative utilization for industrial applications. Conceptual computer network architectures are presented, and their respective accommodation of innovative industrial projects is discussed. Achieving maximum system availability for industrialization is a possible design goal, one which would place the industrial community in an interactive mode with facilities in space. A worthy design goal would be to minimize the computer-communication management function and thereby optimize system availability for industrial users. Quasi-autonomous modes and subnetworks are key design issues, since they are the system elements directly affecting system performance for industrial use.

  17. Xi-cam: a versatile interface for data visualization and analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pandolfi, Ronald J.; Allan, Daniel B.; Arenholz, Elke

    Xi-cam is an extensible platform for data management, analysis, and visualization. Xi-cam aims to provide a flexible and extensible approach to synchrotron data treatment as a solution to rising demands for high-volume/high-throughput processing pipelines. The core of Xi-cam is an extensible plugin-based graphical user interface platform which provides users with an interactive interface to processing algorithms. Plugins are available for SAXS/WAXS/GISAXS/GIWAXS, tomography, and NEXAFS data. With Xi-cam's 'advanced' mode, data processing steps are designed as a graph-based workflow, which can be executed live, locally or remotely. Remote execution utilizes high-performance computing or de-localized resources, allowing for the effective reduction of high-throughput data. Xi-cam's plugin-based architecture targets cross-facility and cross-technique collaborative development, in support of multi-modal analysis. Xi-cam is open-source and cross-platform, and available for download on GitHub.
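
    The 'advanced' mode composes processing steps as a graph executed in dependency order. The sketch below illustrates that general idea with Python's standard-library topological sorter; it is a generic toy pipeline, not Xi-cam's actual API, and all operation names are invented for illustration.

        # Toy graph-based workflow: nodes are processing operations,
        # edges are "runs after" dependencies. Not Xi-cam's API.
        from graphlib import TopologicalSorter

        def load(_):          return [1.0, 2.0, 3.0]      # stand-in frames
        def subtract_dark(x): return [v - 0.5 for v in x] # toy correction
        def integrate(x):     return sum(x)               # toy reduction

        ops = {"load": load, "dark": subtract_dark, "reduce": integrate}
        deps = {"dark": {"load"}, "reduce": {"dark"}}     # DAG edges

        results = {}
        for node in TopologicalSorter(deps).static_order():
            upstream = next(iter(deps.get(node, {None})))  # single input
            results[node] = ops[node](results.get(upstream))

        print(results["reduce"])  # 4.5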

  18. Xi-cam: a versatile interface for data visualization and analysis

    DOE PAGES

    Pandolfi, Ronald J.; Allan, Daniel B.; Arenholz, Elke; ...

    2018-05-31

    Xi-cam is an extensible platform for data management, analysis, and visualization. Xi-cam aims to provide a flexible and extensible approach to synchrotron data treatment as a solution to rising demands for high-volume/high-throughput processing pipelines. The core of Xi-cam is an extensible plugin-based graphical user interface platform which provides users with an interactive interface to processing algorithms. Plugins are available for SAXS/WAXS/GISAXS/GIWAXS, tomography, and NEXAFS data. With Xi-cam's 'advanced' mode, data processing steps are designed as a graph-based workflow, which can be executed live, locally or remotely. Remote execution utilizes high-performance computing or de-localized resources, allowing for the effective reduction of high-throughput data. Xi-cam's plugin-based architecture targets cross-facility and cross-technique collaborative development, in support of multi-modal analysis. Xi-cam is open-source and cross-platform, and available for download on GitHub.

  19. Pathways to high and low performance: factors differentiating primary care facilities under performance-based financing in Nigeria

    PubMed Central

    Mabuchi, Shunsuke; Sesan, Temilade; Bennett, Sara C

    2018-01-01

    Abstract The determinants of primary health facility performance in developing countries have not been well studied. One of the most under-researched areas is health facility management. This study investigated health facilities under the pilot performance-based financing (PBF) scheme in Nigeria, and aimed to understand which factors differentiated primary health care centres (PHCCs) that had performed well from those that had not, with a focus on health facility management practices. We used a multiple case study design in which we compared two high-performing PHCCs and two low-performing PHCCs for each of the two PBF target states. Two teams of two trained local researchers spent 1 week at each PHCC and collected semi-structured interview, observation, and documentary data. Data from interviews were transcribed, translated, and coded using a framework approach. The data for each PHCC were synthesized to understand the dynamic interactions of different elements in each case. We then compared the characteristics of high and low performers. The areas in which critical differences between high and low performers emerged were community engagement and support, and performance and staff management. We also found that (i) contextual and health system factors, particularly staffing, access, and competition with other providers; (ii) health centre management, including community engagement, performance management, and staff management; and (iii) community leader support interacted and drove performance improvement among the PHCCs. In particular, good health centre management can overcome some contextual and health system barriers and enhance community leader support. These findings suggest a strong need to select capable and motivated health centre managers, provide long-term coaching in managerial skills, and motivate them to improve their practices. The study also highlights the need to position engagement with community leaders as a key management practice and a central element of interventions to improve PHCC performance. PMID:29077844

  20. Large Scale Computing and Storage Requirements for High Energy Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerber, Richard A.; Wasserman, Harvey

    2010-11-24

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users' needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) science teams need access to a significant increase in computational resources to meet their research goals; (2) research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) science teams need guidance and support to implement their codes on future architectures; and (4) projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five-year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years. The report includes a section describing efforts already underway or planned at NERSC that address requirements collected at the workshop. NERSC has many initiatives in progress that address key workshop findings and are aligned with NERSC's strategic plans.
