Sample records for computing research facility

  1. Configuration and Management of a Cluster Computing Facility in Undergraduate Student Computer Laboratories

    ERIC Educational Resources Information Center

    Cornforth, David; Atkinson, John; Spennemann, Dirk H. R.

    2006-01-01

    Purpose: Many researchers require access to computer facilities beyond those offered by desktop workstations. Traditionally, these are offered either through partnerships, to share the cost of supercomputing facilities, or through purpose-built cluster facilities. However, funds are not always available to satisfy either of these options, and…

  2. Development and applications of nondestructive evaluation at Marshall Space Flight Center

    NASA Technical Reports Server (NTRS)

    Whitaker, Ann F.

    1990-01-01

    A brief description of facility design and equipment, facility usage, and typical investigations are presented for the following: Surface Inspection Facility; Advanced Computer Tomography Inspection Station (ACTIS); NDE Data Evaluation Facility; Thermographic Test Development Facility; Radiographic Test Facility; Realtime Radiographic Test Facility; Eddy Current Research Facility; Acoustic Emission Monitoring System; Advanced Ultrasonic Test Station (AUTS); Ultrasonic Test Facility; and Computer Controlled Scanning (CONSCAN) System.

  3. Money for Research, Not for Energy Bills: Finding Energy and Cost Savings in High Performance Computer Facility Designs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drewmark Communications; Sartor, Dale; Wilson, Mark

    2010-07-01

    High-performance computing facilities in the United States consume an enormous amount of electricity, cutting into research budgets and challenging public- and private-sector efforts to reduce energy consumption and meet environmental goals. However, these facilities can greatly reduce their energy demand through energy-efficient design of the facility itself. Using a case study of a facility under design, this article discusses strategies and technologies that can be used to help achieve energy reductions.

  4. The multi-modal Australian ScienceS Imaging and Visualization Environment (MASSIVE) high performance computing infrastructure: applications in neuroscience and neuroinformatics research

    PubMed Central

    Goscinski, Wojtek J.; McIntosh, Paul; Felzmann, Ulrich; Maksimenko, Anton; Hall, Christopher J.; Gureyev, Timur; Thompson, Darren; Janke, Andrew; Galloway, Graham; Killeen, Neil E. B.; Raniga, Parnesh; Kaluza, Owen; Ng, Amanda; Poudel, Govinda; Barnes, David G.; Nguyen, Toan; Bonnington, Paul; Egan, Gary F.

    2014-01-01

The Multi-modal Australian ScienceS Imaging and Visualization Environment (MASSIVE) is a national imaging and visualization facility established by Monash University, the Australian Synchrotron, the Commonwealth Scientific and Industrial Research Organisation (CSIRO), and the Victorian Partnership for Advanced Computing (VPAC), with funding from the National Computational Infrastructure and the Victorian Government. The MASSIVE facility provides hardware, software, and expertise to drive research in the biomedical sciences, particularly advanced brain imaging research using synchrotron x-ray and infrared imaging, functional and structural magnetic resonance imaging (MRI), x-ray computed tomography (CT), electron microscopy and optical microscopy. The development of MASSIVE has been based on best practice in system integration methodologies, frameworks, and architectures. The facility has: (i) integrated multiple different neuroimaging analysis software components, (ii) enabled cross-platform and cross-modality integration of neuroinformatics tools, and (iii) brought together neuroimaging databases and analysis workflows. MASSIVE is now operational as a nationally distributed and integrated facility for neuroinformatics and brain imaging research. PMID:24734019

  5. LBNL Computational Research and Theory Facility Groundbreaking - Full Press Conference. February 1st, 2012

    ScienceCinema

    Yelick, Kathy

    2018-01-24

    Energy Secretary Steven Chu, along with Berkeley Lab and UC leaders, broke ground on the Lab's Computational Research and Theory (CRT) facility yesterday. The CRT will be at the forefront of high-performance supercomputing research and be DOE's most efficient facility of its kind. Joining Secretary Chu as speakers were Lab Director Paul Alivisatos, UC President Mark Yudof, Office of Science Director Bill Brinkman, and UC Berkeley Chancellor Robert Birgeneau. The festivities were emceed by Associate Lab Director for Computing Sciences, Kathy Yelick, and Berkeley Mayor Tom Bates joined in the shovel ceremony.

  6. LBNL Computational Research and Theory Facility Groundbreaking. February 1st, 2012

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yelick, Kathy

    2012-02-02

    Energy Secretary Steven Chu, along with Berkeley Lab and UC leaders, broke ground on the Lab's Computational Research and Theory (CRT) facility yesterday. The CRT will be at the forefront of high-performance supercomputing research and be DOE's most efficient facility of its kind. Joining Secretary Chu as speakers were Lab Director Paul Alivisatos, UC President Mark Yudof, Office of Science Director Bill Brinkman, and UC Berkeley Chancellor Robert Birgeneau. The festivities were emceed by Associate Lab Director for Computing Sciences, Kathy Yelick, and Berkeley Mayor Tom Bates joined in the shovel ceremony.

  7. LBNL Computational Research and Theory Facility Groundbreaking. February 1st, 2012

    ScienceCinema

    Yelick, Kathy

    2017-12-09

    Energy Secretary Steven Chu, along with Berkeley Lab and UC leaders, broke ground on the Lab's Computational Research and Theory (CRT) facility yesterday. The CRT will be at the forefront of high-performance supercomputing research and be DOE's most efficient facility of its kind. Joining Secretary Chu as speakers were Lab Director Paul Alivisatos, UC President Mark Yudof, Office of Science Director Bill Brinkman, and UC Berkeley Chancellor Robert Birgeneau. The festivities were emceed by Associate Lab Director for Computing Sciences, Kathy Yelick, and Berkeley Mayor Tom Bates joined in the shovel ceremony.

  8. Data management and its role in delivering science at DOE BES user facilities - Past, Present, and Future

    NASA Astrophysics Data System (ADS)

    Miller, Stephen D.; Herwig, Kenneth W.; Ren, Shelly; Vazhkudai, Sudharshan S.; Jemian, Pete R.; Luitz, Steffen; Salnikov, Andrei A.; Gaponenko, Igor; Proffen, Thomas; Lewis, Paul; Green, Mark L.

    2009-07-01

    The primary mission of user facilities operated by Basic Energy Sciences under the Department of Energy is to produce data for users in support of open science and basic research [1]. We trace back almost 30 years of history across selected user facilities illustrating the evolution of facility data management practices and how these practices have related to performing scientific research. The facilities cover multiple techniques such as X-ray and neutron scattering, imaging and tomography sciences. Over time, detector and data acquisition technologies have dramatically increased the ability to produce prolific volumes of data challenging the traditional paradigm of users taking data home upon completion of their experiments to process and publish their results. During this time, computing capacity has also increased dramatically, though the size of the data has grown significantly faster than the capacity of one's laptop to manage and process this new facility produced data. Trends indicate that this will continue to be the case for yet some time. Thus users face a quandary for how to manage today's data complexity and size as these may exceed the computing resources users have available to themselves. This same quandary can also stifle collaboration and sharing. Realizing this, some facilities are already providing web portal access to data and computing thereby providing users access to resources they need [2]. Portal based computing is now driving researchers to think about how to use the data collected at multiple facilities in an integrated way to perform their research, and also how to collaborate and share data. In the future, inter-facility data management systems will enable next tier cross-instrument-cross facility scientific research fuelled by smart applications residing upon user computer resources. 
We can learn from the medical imaging community, which has been working since the early 1990s to integrate data from across multiple modalities to achieve better diagnoses [3] - similarly, data fusion across BES facilities will lead to new scientific discoveries.

  9. The UK Human Genome Mapping Project online computing service.

    PubMed

    Rysavy, F R; Bishop, M J; Gibbs, G P; Williams, G W

    1992-04-01

    This paper presents an overview of computing and networking facilities developed by the Medical Research Council to provide online computing support to the Human Genome Mapping Project (HGMP) in the UK. The facility is connected to a number of other computing facilities in various centres of genetics and molecular biology research excellence, either directly via high-speed links or through national and international wide-area networks. The paper describes the design and implementation of the current system, a 'client/server' network of Sun, IBM, DEC and Apple servers, gateways and workstations. A short outline of online computing services currently delivered by this system to the UK human genetics research community is also provided. More information about the services and their availability could be obtained by a direct approach to the UK HGMP-RC.

  10. Overview of the NASA Dryden Flight Research Facility aeronautical flight projects

    NASA Technical Reports Server (NTRS)

    Meyer, Robert R., Jr.

    1992-01-01

    Several principal aerodynamics flight projects of the NASA Dryden Flight Research Facility are discussed. Key vehicle technology areas from a wide range of flight vehicles are highlighted. These areas include flight research data obtained for ground facility and computation correlation, applied research in areas not well suited to ground facilities (wind tunnels), and concept demonstration.

  11. ASCR Cybersecurity for Scientific Computing Integrity - Research Pathways and Ideas Workshop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peisert, Sean; Potok, Thomas E.; Jones, Todd

    At the request of the U.S. Department of Energy's (DOE) Office of Science (SC) Advanced Scientific Computing Research (ASCR) program office, a workshop was held June 2-3, 2015, in Gaithersburg, MD, to identify potential long-term (10- to 20+-year) fundamental cybersecurity research and development challenges, strategies, and roadmaps for future high performance computing (HPC), networks, data centers, and extreme-scale scientific user facilities. This workshop was a follow-on to the workshop held January 7-9, 2015, in Rockville, MD, that examined higher level ideas about scientific computing integrity specific to the mission of the DOE Office of Science. Issues included research computation and simulation that takes place on ASCR computing facilities and networks, as well as network-connected scientific instruments, such as those run by various DOE Office of Science programs. Workshop participants included researchers and operational staff from DOE national laboratories, as well as academic researchers and industry experts. Participants were selected based on the submission of abstracts relating to the topics discussed in the previous workshop report [1] and also from other ASCR reports, including "Abstract Machine Models and Proxy Architectures for Exascale Computing" [27], the DOE "Preliminary Conceptual Design for an Exascale Computing Initiative" [28], and the January 2015 machine learning workshop [29]. The workshop was also attended by several observers from DOE and other government agencies. The workshop was divided into three topic areas: (1) Trustworthy Supercomputing, (2) Extreme-Scale Data, Knowledge, and Analytics for Understanding and Improving Cybersecurity, and (3) Trust within High-end Networking and Data Centers. Participants were divided into three corresponding teams based on the category of their abstracts. 
The workshop began with a series of talks from the program manager and workshop chair, followed by the leaders for each of the three topics and a representative of each of the four major DOE Office of Science Advanced Scientific Computing Research Facilities: the Argonne Leadership Computing Facility (ALCF), the Energy Sciences Network (ESnet), the National Energy Research Scientific Computing Center (NERSC), and the Oak Ridge Leadership Computing Facility (OLCF). The rest of the workshop consisted of topical breakout discussions and focused writing periods that produced much of this report.

  12. Trends in Facility Management Technology: The Emergence of the Internet, GIS, and Facility Assessment Decision Support.

    ERIC Educational Resources Information Center

    Teicholz, Eric

    1997-01-01

    Reports research on trends in computer-aided facilities management using the Internet and geographic information system (GIS) technology for space utilization research. Proposes that facility assessment software holds promise for supporting facility management decision making, and outlines four areas for its use: inventory; evaluation; reporting;…

  13. Rapid prototyping facility for flight research in artificial-intelligence-based flight systems concepts

    NASA Technical Reports Server (NTRS)

    Duke, E. L.; Regenie, V. A.; Deets, D. A.

    1986-01-01

    The Dryden Flight Research Facility of the NASA Ames Research Center is developing a rapid prototyping facility for flight research in flight systems concepts that are based on artificial intelligence (AI). The facility will include real-time high-fidelity aircraft simulators, conventional and symbolic processors, and a high-performance research aircraft specially modified to accept commands from the ground-based AI computers. This facility is being developed as part of the NASA-DARPA automated wingman program. This document discusses the need for flight research and for a national flight research facility for the rapid prototyping of AI-based avionics systems and the NASA response to those needs.

  14. A rapid prototyping facility for flight research in advanced systems concepts

    NASA Technical Reports Server (NTRS)

    Duke, Eugene L.; Brumbaugh, Randal W.; Disbrow, James D.

    1989-01-01

    The Dryden Flight Research Facility of the NASA Ames Research Center is developing a rapid prototyping facility for flight research in flight systems concepts that are based on artificial intelligence (AI). The facility will include real-time high-fidelity aircraft simulators, conventional and symbolic processors, and a high-performance research aircraft specially modified to accept commands from the ground-based AI computers. This facility is being developed as part of the NASA-DARPA automated wingman program. This document discusses the need for flight research and for a national flight research facility for the rapid prototyping of AI-based avionics systems and the NASA response to those needs.

  15. Crosscut report: Exascale Requirements Reviews, March 9–10, 2017 – Tysons Corner, Virginia. An Office of Science review sponsored by: Advanced Scientific Computing Research, Basic Energy Sciences, Biological and Environmental Research, Fusion Energy Sciences, High Energy Physics, Nuclear Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerber, Richard; Hack, James; Riley, Katherine

    The mission of the U.S. Department of Energy Office of Science (DOE SC) is the delivery of scientific discoveries and major scientific tools to transform our understanding of nature and to advance the energy, economic, and national security missions of the United States. To achieve these goals in today’s world requires investments in not only the traditional scientific endeavors of theory and experiment, but also in computational science and the facilities that support large-scale simulation and data analysis. The Advanced Scientific Computing Research (ASCR) program addresses these challenges in the Office of Science. ASCR’s mission is to discover, develop, and deploy computational and networking capabilities to analyze, model, simulate, and predict complex phenomena important to DOE. ASCR supports research in computational science, three high-performance computing (HPC) facilities — the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory and Leadership Computing Facilities at Argonne (ALCF) and Oak Ridge (OLCF) National Laboratories — and the Energy Sciences Network (ESnet) at Berkeley Lab. ASCR is guided by science needs as it develops research programs, computers, and networks at the leading edge of technologies. As we approach the era of exascale computing, technology changes are creating challenges for science programs in SC for those who need to use high performance computing and data systems effectively. Numerous significant modifications to today’s tools and techniques will be needed to realize the full potential of emerging computing systems and other novel computing architectures. To assess these needs and challenges, ASCR held a series of Exascale Requirements Reviews in 2015–2017, one with each of the six SC program offices, and a subsequent Crosscut Review that sought to integrate the findings from each. 
Participants at the reviews were drawn from the communities of leading domain scientists, experts in computer science and applied mathematics, ASCR facility staff, and DOE program managers in ASCR and the respective program offices. The purpose of these reviews was to identify mission-critical scientific problems within the DOE Office of Science (including experimental facilities) and determine the requirements for the exascale ecosystem that would be needed to address those challenges. The exascale ecosystem includes exascale computing systems, high-end data capabilities, efficient software at scale, libraries, tools, and other capabilities. This effort will contribute to the development of a strategic roadmap for ASCR compute and data facility investments and will help the ASCR Facility Division establish partnerships with Office of Science stakeholders. It will also inform the Office of Science research needs and agenda. The results of the six reviews have been published in reports available on the web at http://exascaleage.org/. This report presents a summary of the individual reports and of common and crosscutting findings, and it identifies opportunities for productive collaborations among the DOE SC program offices.

  16. Advanced Technology Airfoil Research, volume 1, part 1. [conference on development of computational codes and test facilities

    NASA Technical Reports Server (NTRS)

    1979-01-01

    A comprehensive review of all NASA airfoil research, conducted both in-house and under grant and contract, as well as a broad spectrum of airfoil research outside of NASA is presented. Emphasis is placed on the development of computational aerodynamic codes for airfoil analysis and design, the development of experimental facilities and test techniques, and all types of airfoil applications.

  17. 2016 Annual Report - Argonne Leadership Computing Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collins, Jim; Papka, Michael E.; Cerny, Beth A.

    The Argonne Leadership Computing Facility (ALCF) helps researchers solve some of the world’s largest and most complex problems, while also advancing the nation’s efforts to develop future exascale computing systems. This report presents some of the ALCF’s notable achievements in key strategic areas over the past year.

  18. Evolution of the Virtualized HPC Infrastructure of Novosibirsk Scientific Center

    NASA Astrophysics Data System (ADS)

    Adakin, A.; Anisenkov, A.; Belov, S.; Chubarov, D.; Kalyuzhny, V.; Kaplin, V.; Korol, A.; Kuchin, N.; Lomakin, S.; Nikultsev, V.; Skovpen, K.; Sukharev, A.; Zaytsev, A.

    2012-12-01

    Novosibirsk Scientific Center (NSC), also known worldwide as Akademgorodok, is one of the largest Russian scientific centers hosting Novosibirsk State University (NSU) and more than 35 research organizations of the Siberian Branch of Russian Academy of Sciences including Budker Institute of Nuclear Physics (BINP), Institute of Computational Technologies, and Institute of Computational Mathematics and Mathematical Geophysics (ICM&MG). Since each institute has specific requirements on the architecture of computing farms involved in its research field, several computing facilities are currently hosted by NSC institutes, each optimized for a particular set of tasks; the largest of these are the NSU Supercomputer Center, Siberian Supercomputer Center (ICM&MG), and a Grid Computing Facility of BINP. A dedicated optical network with the initial bandwidth of 10 Gb/s connecting these three facilities was built in order to make it possible to share the computing resources among the research communities, thus increasing the efficiency of operating the existing computing facilities and offering a common platform for building the computing infrastructure for future scientific projects. Unification of the computing infrastructure is achieved by extensive use of virtualization technology based on XEN and KVM platforms. This contribution gives a thorough review of the present status and future development prospects for the NSC virtualized computing infrastructure and the experience gained while using it for running production data analysis jobs related to HEP experiments being carried out at BINP, especially the KEDR detector experiment at the VEPP-4M electron-positron collider.

  19. Computational Science at the Argonne Leadership Computing Facility

    NASA Astrophysics Data System (ADS)

    Romero, Nichols

    2014-03-01

    The goal of the Argonne Leadership Computing Facility (ALCF) is to extend the frontiers of science by solving problems that require innovative approaches and the largest-scale computing systems. ALCF's most powerful computer - Mira, an IBM Blue Gene/Q system - has nearly one million cores. How does one program such systems? What software tools are available? Which scientific and engineering applications are able to utilize such levels of parallelism? This talk will address these questions and describe a sampling of projects that are using ALCF systems in their research, including ones in nanoscience, materials science, and chemistry. Finally, the ways to gain access to ALCF resources will be presented. This research used resources of the Argonne Leadership Computing Facility at Argonne National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under contract DE-AC02-06CH11357.

  20. The grand challenge of managing the petascale facility.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aiken, R. J.; Mathematics and Computer Science

    2007-02-28

    This report is the result of a study of networks and how they may need to evolve to support petascale leadership computing and science. As Dr. Ray Orbach, director of the Department of Energy's Office of Science, says in the spring 2006 issue of SciDAC Review, 'One remarkable example of growth in unexpected directions has been in high-end computation'. In the same article Dr. Michael Strayer states, 'Moore's law suggests that before the end of the next cycle of SciDAC, we shall see petaflop computers'. Given the Office of Science's strong leadership and support for petascale computing and facilities, we should expect to see petaflop computers in operation in support of science before the end of the decade, and DOE/SC Advanced Scientific Computing Research programs are focused on making this a reality. This study took its lead from this strong focus on petascale computing and the networks required to support such facilities, but it grew to include almost all aspects of the DOE/SC petascale computational and experimental science facilities, all of which will face daunting challenges in managing and analyzing the voluminous amounts of data expected. In addition, trends indicate the increased coupling of unique experimental facilities with computational facilities, along with the integration of multidisciplinary datasets and high-end computing with data-intensive computing; and we can expect these trends to continue at the petascale level and beyond. Coupled with recent technology trends, they clearly indicate the need for including capability petascale storage, networks, and experiments, as well as collaboration tools and programming environments, as integral components of the Office of Science's petascale capability metafacility. The objective of this report is to recommend a new cross-cutting program to support the management of petascale science and infrastructure. 
The appendices of the report document current and projected DOE computation facilities, science trends, and technology trends, whose combined impact can affect the manageability and stewardship of DOE's petascale facilities. This report is not meant to be all-inclusive. Rather, the facilities, science projects, and research topics presented are to be considered examples to clarify a point.

  1. The OSG Open Facility: an on-ramp for opportunistic scientific computing

    NASA Astrophysics Data System (ADS)

    Jayatilaka, B.; Levshina, T.; Sehgal, C.; Gardner, R.; Rynge, M.; Würthwein, F.

    2017-10-01

    The Open Science Grid (OSG) is a large, robust computing grid that started primarily as a collection of sites associated with large HEP experiments such as ATLAS, CDF, CMS, and DZero, but has evolved in recent years to a much larger user and resource platform. In addition to meeting the US LHC community’s computational needs, the OSG continues to be one of the largest providers of distributed high-throughput computing (DHTC) to researchers from a wide variety of disciplines via the OSG Open Facility. The Open Facility consists of OSG resources that are available opportunistically to users other than resource owners and their collaborators. In the past two years, the Open Facility has doubled its annual throughput to over 200 million wall hours. More than half of these resources are used by over 100 individual researchers from over 60 institutions in fields such as biology, medicine, math, economics, and many others. Over 10% of these individual users utilized in excess of 1 million computational hours each in the past year. The largest source of these cycles is temporary unused capacity at institutions affiliated with US LHC computational sites. An increasing fraction, however, comes from university HPC clusters and large national infrastructure supercomputers offering unused capacity. Such expansions have allowed the OSG to provide ample computational resources to both individual researchers and small groups as well as sizable international science collaborations such as LIGO, AMS, IceCube, and sPHENIX. Opening up access to the Fermilab FabrIc for Frontier Experiments (FIFE) project has also allowed experiments such as mu2e and NOvA to make substantial use of Open Facility resources, the former with over 40 million wall hours in a year. We present how this expansion was accomplished as well as future plans for keeping the OSG Open Facility at the forefront of enabling scientific research by way of DHTC.

  2. The OSG Open Facility: An On-Ramp for Opportunistic Scientific Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jayatilaka, B.; Levshina, T.; Sehgal, C.

    The Open Science Grid (OSG) is a large, robust computing grid that started primarily as a collection of sites associated with large HEP experiments such as ATLAS, CDF, CMS, and DZero, but has evolved in recent years to a much larger user and resource platform. In addition to meeting the US LHC community’s computational needs, the OSG continues to be one of the largest providers of distributed high-throughput computing (DHTC) to researchers from a wide variety of disciplines via the OSG Open Facility. The Open Facility consists of OSG resources that are available opportunistically to users other than resource owners and their collaborators. In the past two years, the Open Facility has doubled its annual throughput to over 200 million wall hours. More than half of these resources are used by over 100 individual researchers from over 60 institutions in fields such as biology, medicine, math, economics, and many others. Over 10% of these individual users utilized in excess of 1 million computational hours each in the past year. The largest source of these cycles is temporary unused capacity at institutions affiliated with US LHC computational sites. An increasing fraction, however, comes from university HPC clusters and large national infrastructure supercomputers offering unused capacity. Such expansions have allowed the OSG to provide ample computational resources to both individual researchers and small groups as well as sizable international science collaborations such as LIGO, AMS, IceCube, and sPHENIX. Opening up access to the Fermilab FabrIc for Frontier Experiments (FIFE) project has also allowed experiments such as mu2e and NOvA to make substantial use of Open Facility resources, the former with over 40 million wall hours in a year. We present how this expansion was accomplished as well as future plans for keeping the OSG Open Facility at the forefront of enabling scientific research by way of DHTC.

  3. Space technology test facilities at the NASA Ames Research Center

    NASA Technical Reports Server (NTRS)

    Gross, Anthony R.; Rodrigues, Annette T.

    1990-01-01

    The major space research and technology test facilities at the NASA Ames Research Center are divided into five categories: General Purpose, Life Support, Computer-Based Simulation, High Energy, and the Space Exploration Test Facilities. The paper describes selected facilities within each of the five categories and discusses some of the major programs in which these facilities have been involved. Special attention is given to the 20-G Man-Rated Centrifuge, the Human Research Facility, the Plant Crop Growth Facility, the Numerical Aerodynamic Simulation Facility, the Arc-Jet Complex and Hypersonic Test Facility, the Infrared Detector and Cryogenic Test Facility, and the Mars Wind Tunnel. Each facility is described along with its objectives, test parameter ranges, and major current programs and applications.

  4. Making Cloud Computing Available For Researchers and Innovators (Invited)

    NASA Astrophysics Data System (ADS)

    Winsor, R.

    2010-12-01

    High Performance Computing (HPC) facilities exist in most academic institutions but are almost invariably over-subscribed. Access is allocated based on academic merit, the only practical method of assigning valuable finite compute resources. Cloud computing on the other hand, and particularly commercial clouds, draw flexibly on an almost limitless resource as long as the user has sufficient funds to pay the bill. How can the commercial cloud model be applied to scientific computing? Is there a case to be made for a publicly available research cloud and how would it be structured? This talk will explore these themes and describe how Cybera, a not-for-profit non-governmental organization in Alberta Canada, aims to leverage its high speed research and education network to provide cloud computing facilities for a much wider user base.

  5. The OSG open facility: A sharing ecosystem

    DOE PAGES

    Jayatilaka, B.; Levshina, T.; Rynge, M.; ...

    2015-12-23

    The Open Science Grid (OSG) ties together individual experiments’ computing power, connecting their resources to create a large, robust computing grid. This computing infrastructure started primarily as a collection of sites associated with large HEP experiments such as ATLAS, CDF, CMS, and DZero. In the years since, the OSG has broadened its focus to also address the needs of other US researchers and has increased delivery of distributed high-throughput computing (DHTC) to users from a wide variety of disciplines via the OSG Open Facility. Presently, the Open Facility delivers about 100 million computing wall hours per year to researchers who are not already associated with the owners of the computing sites; this is primarily accomplished by harvesting and organizing the temporarily unused capacity (i.e., opportunistic cycles) from the sites in the OSG. Using these methods, OSG resource providers and scientists share computing hours with researchers in many other fields to enable their science, striving to make sure that this computing power is used with maximal efficiency. Furthermore, we believe that expanded access to DHTC is an essential tool for scientific innovation, and work continues in expanding this service.

  6. Our Story | Materials Research Laboratory at UCSB: an NSF MRSEC

    Science.gov Websites


  7. Some propulsion system noise data handling conventions and computer programs used at the Lewis Research Center

    NASA Technical Reports Server (NTRS)

    Montegani, F. J.

    1974-01-01

    Methods of handling one-third-octave band noise data originating from the outdoor full-scale fan noise facility and the engine acoustic facility at the Lewis Research Center are presented. Procedures for standardizing, retrieving, extrapolating, and reporting these data are explained. Computer programs are given which are used to accomplish these and other noise data analysis tasks. This information is useful as background for interpretation of data from these facilities appearing in NASA reports and can aid data exchange by promoting standardization.

  8. User interface concerns

    NASA Technical Reports Server (NTRS)

    Redhed, D. D.

    1978-01-01

    Three possible goals for the Numerical Aerodynamic Simulation Facility (NASF) are: (1) a computational fluid dynamics (as opposed to aerodynamics) algorithm development tool; (2) a specialized research laboratory facility for nearly intractable aerodynamics problems that industry encounters; and (3) a facility for industry to use in its normal aerodynamics design work that requires high computing rates. The central system issue for industry use of such a computer is the quality of the user interface as implemented in some kind of a front end to the vector processor.

  9. Expanding Your Laboratory by Accessing Collaboratory Resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoyt, David W.; Burton, Sarah D.; Peterson, Michael R.

    2004-03-01

    The Environmental Molecular Sciences Laboratory (EMSL) in Richland, Washington, is the home of a research facility set up by the United States Department of Energy (DOE). The facility is atypical because it houses over 100 cutting-edge research systems for the use of researchers all over the United States and the world. Access to the lab is requested through a peer-review proposal process, and the scientists who use the facility are generally referred to as ‘users’. There are six main research facilities housed in EMSL, all of which host visiting researchers. Several of these facilities also participate in the EMSL Collaboratory, a remote access capability supported by EMSL operations funds. Of these, the High-Field Magnetic Resonance Facility (HFMRF) and Molecular Science Computing Facility (MSCF) have a significant number of their users performing remote work. The HFMRF in EMSL currently houses 12 NMR spectrometers that range in magnet field strength from 7.05T to 21.1T. Staff associated with the NMR facility offer scientific expertise in the areas of structural biology, solid-state materials/catalyst characterization, and magnetic resonance imaging (MRI) techniques. The way in which the HFMRF operates, with a high level of dedication to remote operation across the full suite of high-field NMR spectrometers, has earned it the name “Virtual NMR Facility”. This review focuses on the operational aspects of remote research done in the High-Field Magnetic Resonance Facility and the computer tools that make remote experiments possible.

  10. Argonne's Magellan Cloud Computing Research Project

    ScienceCinema

    Beckman, Pete

    2017-12-11

    Pete Beckman, head of Argonne's Leadership Computing Facility (ALCF), discusses the Department of Energy's new $32-million Magellan project, which is designed to test how cloud computing can be used for scientific research. More information: http://www.anl.gov/Media_Center/News/2009/news091014a.html

  11. Argonne's Magellan Cloud Computing Research Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beckman, Pete

    Pete Beckman, head of Argonne's Leadership Computing Facility (ALCF), discusses the Department of Energy's new $32-million Magellan project, which is designed to test how cloud computing can be used for scientific research. More information: http://www.anl.gov/Media_Center/News/2009/news091014a.html

  12. Molecular Modeling and Computational Chemistry at Humboldt State University.

    ERIC Educational Resources Information Center

    Paselk, Richard A.; Zoellner, Robert W.

    2002-01-01

    Describes a molecular modeling and computational chemistry (MM&CC) facility for undergraduate instruction and research at Humboldt State University. This facility complex allows the introduction of MM&CC throughout the chemistry curriculum with tailored experiments in general, organic, and inorganic courses as well as a new molecular modeling…

  13. 45 CFR 1614.3 - Range of activities.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... assistance, research, advice and counsel, or the use of recipient facilities, libraries, computer assisted... bono basis through the provision of community legal education, training, technical assistance, research, advice and counsel; co-counseling arrangements; or the use of private law firm facilities, libraries...

  14. 45 CFR 1614.3 - Range of activities.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... assistance, research, advice and counsel, or the use of recipient facilities, libraries, computer assisted... bono basis through the provision of community legal education, training, technical assistance, research, advice and counsel; co-counseling arrangements; or the use of private law firm facilities, libraries...

  15. 45 CFR 1614.3 - Range of activities.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... assistance, research, advice and counsel, or the use of recipient facilities, libraries, computer assisted... bono basis through the provision of community legal education, training, technical assistance, research, advice and counsel; co-counseling arrangements; or the use of private law firm facilities, libraries...

  16. ICASE

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This report summarizes research conducted at the Institute for Computer Applications in Science and Engineering in the areas of (1) applied and numerical mathematics, including numerical analysis and algorithm development; (2) theoretical and computational research in fluid mechanics in selected areas of interest, including acoustics and combustion; (3) experimental research in transition and turbulence and aerodynamics involving Langley facilities and scientists; and (4) computer science.

  17. The Center for Nanophase Materials Sciences

    NASA Astrophysics Data System (ADS)

    Lowndes, Douglas

    2005-03-01

    The Center for Nanophase Materials Sciences (CNMS) located at Oak Ridge National Laboratory (ORNL) will be the first DOE Nanoscale Science Research Center to begin operation, with construction to be completed in April 2005 and initial operations in October 2005. The CNMS' scientific program has been developed through workshops with the national community, with the goal of creating a highly collaborative research environment to accelerate discovery and drive technological advances. Research at the CNMS is organized under seven Scientific Themes selected to address challenges to understanding and to exploit particular ORNL strengths (see http://cnms.ornl.gov). These include extensive synthesis and characterization capabilities for soft, hard, nanostructured, magnetic and catalytic materials and their composites; neutron scattering at the Spallation Neutron Source and High Flux Isotope Reactor; computational nanoscience in the CNMS' Nanomaterials Theory Institute and utilizing facilities and expertise of the Center for Computational Sciences and the new Leadership Scientific Computing Facility at ORNL; a new CNMS Nanofabrication Research Laboratory; and a suite of unique and state-of-the-art instruments to be made reliably available to the national community for imaging, manipulation, and properties measurements on nanoscale materials in controlled environments. The new research facilities will be described together with the planned operation of the user research program, the latter illustrated by the current "jump start" user program that utilizes existing ORNL/CNMS facilities.

  18. The F-18 systems research aircraft facility

    NASA Technical Reports Server (NTRS)

    Sitz, Joel R.

    1992-01-01

    To help ensure that new aerospace initiatives rapidly transition to competitive U.S. technologies, NASA Dryden Flight Research Facility has dedicated a systems research aircraft facility. The primary goal is to accelerate the transition of new aerospace technologies to commercial, military, and space vehicles. Key technologies include more-electric aircraft concepts, fly-by-light systems, flush airdata systems, and advanced computer architectures. Future aircraft that will benefit are the high-speed civil transport and the National AeroSpace Plane. This paper describes the systems research aircraft flight research vehicle and outlines near-term programs.

  19. High Energy Physics Exascale Requirements Review. An Office of Science review sponsored jointly by Advanced Scientific Computing Research and High Energy Physics, June 10-12, 2015, Bethesda, Maryland

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habib, Salman; Roser, Robert; Gerber, Richard

    The U.S. Department of Energy (DOE) Office of Science (SC) Offices of High Energy Physics (HEP) and Advanced Scientific Computing Research (ASCR) convened a programmatic Exascale Requirements Review on June 10-12, 2015, in Bethesda, Maryland. This report summarizes the findings, results, and recommendations derived from that meeting. The high-level findings and observations are as follows. Larger, more capable computing and data facilities are needed to support HEP science goals in all three frontiers: Energy, Intensity, and Cosmic. The expected scale of the demand at the 2025 timescale is at least two orders of magnitude greater than that currently available, and in some cases more. The growth rate of data produced by simulations is overwhelming the current ability of both facilities and researchers to store and analyze it. Additional resources and new techniques for data analysis are urgently needed. Data rates and volumes from experimental facilities are also straining the current HEP infrastructure in its ability to store and analyze large and complex data volumes. Appropriately configured leadership-class facilities can play a transformational role in enabling scientific discovery from these datasets. A close integration of high-performance computing (HPC) simulation and data analysis will greatly aid in interpreting the results of HEP experiments. Such an integration will minimize data movement and facilitate interdependent workflows. Long-range planning between HEP and ASCR will be required to meet HEP's research needs. To best use ASCR HPC resources, the experimental HEP program needs (1) an established, long-term plan for access to ASCR computational and data resources, (2) the ability to map workflows to HPC resources, (3) the ability for ASCR facilities to accommodate workflows run by collaborations potentially comprising thousands of individual members, (4) to transition codes to the next-generation HPC platforms that will be available at ASCR facilities, and (5) to build up and train a workforce capable of developing and using simulations and analysis to support HEP scientific research on next-generation systems.

  20. Key Issues in Instructional Computer Graphics.

    ERIC Educational Resources Information Center

    Wozny, Michael J.

    1981-01-01

    Addresses key issues facing universities which plan to establish instructional computer graphics facilities, including computer-aided design/computer aided manufacturing systems, role in curriculum, hardware, software, writing instructional software, faculty involvement, operations, and research. Thirty-seven references and two appendices are…

  1. NASA Center for Computational Sciences: History and Resources

    NASA Technical Reports Server (NTRS)

    2000-01-01

    The NASA Center for Computational Sciences (NCCS) has been a leading capacity computing facility, providing a production environment and support resources to address the challenges facing the Earth and space sciences research community.

  2. Technology in the Service of Creativity: Computer Assisted Writing Project--Stetson Middle School, Philadelphia, Pennsylvania. Final Report.

    ERIC Educational Resources Information Center

    Bender, Evelyn

    The American Library Association's Carroll Preston Baber Research Award supported this project on the use, impact and feasibility of a computer assisted writing facility located in the library of Stetson Middle School in Philadelphia, an inner city school with a population of minority, "at risk" students. The writing facility consisted…

  3. Research in progress and other activities of the Institute for Computer Applications in Science and Engineering

    NASA Technical Reports Server (NTRS)

    1993-01-01

    This report summarizes research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics and computer science during the period April 1, 1993 through September 30, 1993. The major categories of the current ICASE research program are: (1) applied and numerical mathematics, including numerical analysis and algorithm development; (2) theoretical and computational research in fluid mechanics in selected areas of interest to LaRC, including acoustics and combustion; (3) experimental research in transition and turbulence and aerodynamics involving LaRC facilities and scientists; and (4) computer science.

  4. Research in progress in applied mathematics, numerical analysis, fluid mechanics, and computer science

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This report summarizes research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics, fluid mechanics, and computer science during the period October 1, 1993 through March 31, 1994. The major categories of the current ICASE research program are: (1) applied and numerical mathematics, including numerical analysis and algorithm development; (2) theoretical and computational research in fluid mechanics in selected areas of interest to LaRC, including acoustics and combustion; (3) experimental research in transition and turbulence and aerodynamics involving LaRC facilities and scientists; and (4) computer science.

  5. Users Guide for the National Transonic Facility Research Data System

    NASA Technical Reports Server (NTRS)

    Foster, Jean M.; Adcock, Jerry B.

    1996-01-01

    The National Transonic Facility is a complex cryogenic wind tunnel facility. This report briefly describes the facility, the data systems, and the instrumentation used to acquire research data. The computational methods and equations are discussed in detail and many references are listed for those who need additional technical information. This report is intended to be a user's guide, not a programmer's guide; therefore, the data reduction code itself is not documented. The purpose of this report is to assist personnel involved in conducting a test in the National Transonic Facility.

  6. A test matrix sequencer for research test facility automation

    NASA Technical Reports Server (NTRS)

    Mccartney, Timothy P.; Emery, Edward F.

    1990-01-01

    The hardware and software configuration of a Test Matrix Sequencer, a general purpose test matrix profiler that was developed for research test facility automation at the NASA Lewis Research Center, is described. The system provides set points to controllers and contact closures to data systems during the course of a test. The Test Matrix Sequencer consists of a microprocessor controlled system which is operated from a personal computer. The software program, which is the main element of the overall system is interactive and menu driven with pop-up windows and help screens. Analog and digital input/output channels can be controlled from a personal computer using the software program. The Test Matrix Sequencer provides more efficient use of aeronautics test facilities by automating repetitive tasks that were once done manually.
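    The control pattern this abstract describes (stepping through a matrix of set points, commanding controllers, and pulsing contact closures to the data systems) can be sketched as follows. The `SetPoint` structure and the `write_analog`/`close_contact` callables are hypothetical stand-ins for illustration, not the actual Lewis Research Center hardware interface.

    ```python
    from dataclasses import dataclass

    @dataclass
    class SetPoint:
        """One row of a test matrix: a named condition and the analog
        outputs that establish it."""
        name: str
        analog_outputs: dict
        dwell_seconds: float = 0.0

    class TestMatrixSequencer:
        """Steps through a test matrix, commanding controllers and
        signaling the data system at each condition."""

        def __init__(self, write_analog, close_contact):
            self.write_analog = write_analog    # write_analog(channel, value)
            self.close_contact = close_contact  # close_contact(line_name)
            self.completed = []                 # audit trail of conditions run

        def run(self, matrix):
            for point in matrix:
                # Command every controller set point for this condition.
                for channel, value in point.analog_outputs.items():
                    self.write_analog(channel, value)
                # Pulse a contact closure so the data system records a scan.
                self.close_contact("record")
                self.completed.append(point.name)
            return self.completed

    # Usage with dummy I/O callables that just log what was commanded:
    commands, closures = [], []
    seq = TestMatrixSequencer(
        write_analog=lambda ch, v: commands.append((ch, v)),
        close_contact=lambda line: closures.append(line),
    )
    seq.run([
        SetPoint("idle", {"throttle": 0.0}),
        SetPoint("cruise", {"throttle": 0.7, "bypass": 0.2}),
    ])
    ```

    Separating the sequencing logic from the I/O callables mirrors the abstract's split between the interactive personal-computer program and the microprocessor-controlled hardware it drives.
    
    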

  7. Ethics and the 7 P's of computer use policies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scott, T.J.; Voss, R.B.

    1994-12-31

    A Computer Use Policy (CUP) defines who can use the computer facilities for what. The CUP is the institution's official position on the ethical use of computer facilities. The authors believe that writing a CUP provides an ideal platform to develop a group ethic for computer users. In prior research, the authors have developed a seven-phase model for writing CUPs, entitled the 7 P's of Computer Use Policies. The purpose of this paper is to present the model and discuss how the 7 P's can be used to identify and communicate a group ethic for the institution's computer users.

  8. How Data Becomes Physics: Inside the RACF

    ScienceCinema

    Ernst, Michael; Rind, Ofer; Rajagopalan, Srini; Lauret, Jerome; Pinkenburg, Chris

    2018-06-22

    The RHIC & ATLAS Computing Facility (RACF) at the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory sits at the center of a global computing network. It connects more than 2,500 researchers around the world with the data generated by millions of particle collisions taking place each second at Brookhaven Lab's Relativistic Heavy Ion Collider (RHIC, a DOE Office of Science User Facility for nuclear physics research), and the ATLAS experiment at the Large Hadron Collider in Europe. Watch this video to learn how the people and computing resources of the RACF serve these scientists to turn petabytes of raw data into physics discoveries.

  9. Microgravity

    NASA Image and Video Library

    2004-04-15

    The Wake Shield Facility (WSF) is a free-flying research and development facility that is designed to use the pure vacuum of space to conduct scientific research in the development of new materials. The thin film materials technology developed by the WSF could some day lead to applications such as faster electronics components for computers.

  10. Variable gravity research facility

    NASA Technical Reports Server (NTRS)

    Allan, Sean; Ancheta, Stan; Beine, Donna; Cink, Brian; Eagon, Mark; Eckstein, Brett; Luhman, Dan; Mccowan, Daniel; Nations, James; Nordtvedt, Todd

    1988-01-01

    Spin and despin requirements; sequence of activities required to assemble the Variable Gravity Research Facility (VGRF); power systems technology; life support; thermal control systems; emergencies; communication systems; space station applications; experimental activities; computer modeling and simulation of tether vibration; cost analysis; configuration of the crew compartments; and tether lengths and rotation speeds are discussed.

  11. Real-Gas Flow Properties for NASA Langley Research Center Aerothermodynamic Facilities Complex Wind Tunnels

    NASA Technical Reports Server (NTRS)

    Hollis, Brian R.

    1996-01-01

    A computational algorithm has been developed which can be employed to determine the flow properties of an arbitrary real (virial) gas in a wind tunnel. A multiple-coefficient virial gas equation of state and the assumption of isentropic flow are used to model the gas and to compute flow properties throughout the wind tunnel. This algorithm has been used to calculate flow properties for the wind tunnels of the Aerothermodynamic Facilities Complex at the NASA Langley Research Center, in which air, CF4, He, and N2 are employed as test gases. The algorithm is detailed in this paper and sample results are presented for each of the Aerothermodynamic Facilities Complex wind tunnels.
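    As a sketch of the kind of calculation this abstract describes, a density-expansion virial equation of state reduces to the ideal-gas law when its higher-order coefficients vanish. The coefficient values below are illustrative placeholders, not the calibrated coefficients used for the Langley tunnels:

    ```python
    def virial_pressure(rho, T, R, B=0.0, C=0.0):
        """Pressure from a truncated virial equation of state:
            p = rho * R * T * (1 + B*rho + C*rho**2)
        rho: density [kg/m^3], T: temperature [K],
        R: specific gas constant [J/(kg*K)],
        B, C: second/third virial coefficients (temperature-dependent
        in practice; treated as constants here for illustration).
        """
        return rho * R * T * (1.0 + B * rho + C * rho ** 2)

    # Ideal-gas limit (B = C = 0) recovers p = rho*R*T for sea-level air:
    R_air = 287.05                          # J/(kg*K)
    p = virial_pressure(1.225, 288.15, R_air)
    ```

    In a real tunnel code, B(T) and C(T) would come from curve fits for the working gas, and the isentropic relations would be integrated against this equation of state rather than the perfect-gas formulas.
    
    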

  12. Abstracts of Research, July 1975-June 1976.

    ERIC Educational Resources Information Center

    Ohio State Univ., Columbus. Computer and Information Science Research Center.

    Abstracts of research papers in computer and information science are given for 62 papers in the areas of information storage and retrieval; computer facilities; information analysis; linguistics analysis; artificial intelligence; information processes in physical, biological, and social systems; mathematical techniques; systems programming;…

  13. Katherine Johnson Legacy

    NASA Image and Video Library

    2016-05-05

    Following a naming dedication ceremony May 5, 2016 - the 55th anniversary of Alan Shepard's historic rocket launch - NASA Langley Research Center's newest building is known as the Katherine G. Johnson Computational Research Facility, honoring the "human computer" who successfully calculated the trajectories for America's first space flights.

  14. Description and operational status of the National Transonic Facility computer complex

    NASA Technical Reports Server (NTRS)

    Boyles, G. B., Jr.

    1986-01-01

    This paper describes the National Transonic Facility (NTF) computer complex and its support of tunnel operations. The capabilities for research data acquisition and reduction are discussed, along with the types of data that can be acquired and presented. Pretest, test, and posttest capabilities are also outlined, together with a discussion of how the computer complex monitors the tunnel control processes and provides the tunnel operators with the information needed to control the tunnel. Planned enhancements to the computer complex for support of future testing are presented.

  15. Support System Effects on the NASA Common Research Model

    NASA Technical Reports Server (NTRS)

    Rivers, S. Melissa B.; Hunter, Craig A.

    2012-01-01

    An experimental investigation of the NASA Common Research Model was conducted in the NASA Langley National Transonic Facility and NASA Ames 11-Foot Transonic Wind Tunnel Facility for use in the Drag Prediction Workshop. As data from the experimental investigations was collected, a large difference in moment values was seen between the experimental and the computational data from the 4th Drag Prediction Workshop. This difference led to the present work. In this study, a computational assessment has been undertaken to investigate model support system interference effects on the Common Research Model. The configurations computed during this investigation were the wing/body/tail=0deg without the support system and the wing/body/tail=0deg with the support system. The results from this investigation confirm that the addition of the support system to the computational cases does shift the pitching moment in the direction of the experimental results.

  16. On Laminar to Turbulent Transition of Arc-Jet Flow in the NASA Ames Panel Test Facility

    NASA Technical Reports Server (NTRS)

    Gokcen, Tahir; Alunni, Antonella I.

    2012-01-01

    This paper provides experimental evidence and supporting computational analysis to characterize the laminar to turbulent flow transition in a high enthalpy arc-jet facility at NASA Ames Research Center. The arc-jet test data obtained in the 20 MW Panel Test Facility include measurements of surface pressure and heat flux on a water-cooled calibration plate, and measurements of surface temperature on a reaction-cured glass coated tile plate. Computational fluid dynamics simulations are performed to characterize the arc-jet test environment and estimate its parameters consistent with the facility and calibration measurements. The present analysis comprises simulations of the nonequilibrium flowfield in the facility nozzle, test box, and flowfield over test articles. Both laminar and turbulent simulations are performed, and the computed results are compared with the experimental measurements, including Stanton number dependence on Reynolds number. Comparisons of computed and measured surface heat fluxes (and temperatures), along with the accompanying analysis, confirm that the boundary layer in the Panel Test Facility flow is transitional at certain arc-heater conditions.

  17. Advanced ballistic range technology

    NASA Technical Reports Server (NTRS)

    Yates, Leslie A.

    1994-01-01

    The research conducted supported two facilities at NASA Ames Research Center: the Hypervelocity Free-Flight Aerodynamic Facility and the 16-Inch Shock Tunnel. During the grant period, a computerized film-reading system was developed, and five- and six-degree-of-freedom parameter-identification routines were written and successfully implemented. Studies of flow separation were conducted, and methods to extract phase shift information from finite-fringe interferograms were developed. Methods for constructing optical images from Computational Fluid Dynamics solutions were also developed, and these methods were used for one-to-one comparisons of experiment and computations.

  18. Progressive fracture of fiber composites

    NASA Technical Reports Server (NTRS)

    Irvin, T. B.; Ginty, C. A.

    1983-01-01

    Refined models and procedures are described for determining progressive composite fracture in graphite/epoxy angleplied laminates. Lewis Research Center capabilities are utilized including the Real Time Ultrasonic C Scan (RUSCAN) experimental facility and the Composite Durability Structural Analysis (CODSTRAN) computer code. The CODSTRAN computer code is used to predict the fracture progression based on composite mechanics, finite element stress analysis, and fracture criteria modules. The RUSCAN facility, CODSTRAN computer code, and scanning electron microscope are used to determine durability and identify failure mechanisms in graphite/epoxy composites.

  19. Langley Aerospace Research Summer Scholars. Part 2

    NASA Technical Reports Server (NTRS)

    Schwan, Rafaela (Compiler)

    1995-01-01

    The Langley Aerospace Research Summer Scholars (LARSS) Program was established by Dr. Samuel E. Massenberg in 1986. The program has increased from 20 participants in 1986 to 114 participants in 1995. The program is LaRC-unique and is administered by Hampton University. The program was established for the benefit of undergraduate juniors and seniors and first-year graduate students who are pursuing degrees in aeronautical engineering, mechanical engineering, electrical engineering, material science, computer science, atmospheric science, astrophysics, physics, and chemistry. Two primary elements of the LARSS Program are: (1) a research project to be completed by each participant under the supervision of a researcher who will assume the role of a mentor for the summer, and (2) technical lectures by prominent engineers and scientists. Additional elements of this program include tours of LaRC wind tunnels, computational facilities, and laboratories. Library and computer facilities will be available for use by the participants.

  20. Technical Reports: Langley Aerospace Research Summer Scholars. Part 1

    NASA Technical Reports Server (NTRS)

    Schwan, Rafaela (Compiler)

    1995-01-01

    The Langley Aerospace Research Summer Scholars (LARSS) Program was established by Dr. Samuel E. Massenberg in 1986. The program has increased from 20 participants in 1986 to 114 participants in 1995. The program is LaRC-unique and is administered by Hampton University. The program was established for the benefit of undergraduate juniors and seniors and first-year graduate students who are pursuing degrees in aeronautical engineering, mechanical engineering, electrical engineering, material science, computer science, atmospheric science, astrophysics, physics, and chemistry. Two primary elements of the LARSS Program are: (1) a research project to be completed by each participant under the supervision of a researcher who will assume the role of a mentor for the summer, and (2) technical lectures by prominent engineers and scientists. Additional elements of this program include tours of LaRC wind tunnels, computational facilities, and laboratories. Library and computer facilities will be available for use by the participants.

  1. Functional requirements for the man-vehicle systems research facility. [identifying and correcting human errors during flight simulation

    NASA Technical Reports Server (NTRS)

    Clement, W. F.; Allen, R. W.; Heffley, R. K.; Jewell, W. F.; Jex, H. R.; Mcruer, D. T.; Schulman, T. M.; Stapleford, R. L.

    1980-01-01

    The NASA Ames Research Center proposed a man-vehicle systems research facility to support flight simulation studies which are needed for identifying and correcting the sources of human error associated with current and future air carrier operations. The organization of the research facility is reviewed, and functional requirements and related priorities for the facility are recommended based on a review of potentially critical operational scenarios. Requirements are included for the experimenter's simulation control and data acquisition functions, as well as for the visual field, motion, sound, computation, crew station, and intercommunications subsystems. The related issues of functional fidelity and level of simulation are addressed, and specific criteria for quantitative assessment of various aspects of fidelity are offered. Recommendations for facility integration, checkout, and staffing are included.

  2. Wind Energy Facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laurie, Carol

    2017-02-01

    This book takes readers inside the places where daily discoveries shape the next generation of wind power systems. Energy Department laboratory facilities span the United States and offer wind research capabilities to meet industry needs. The facilities described in this book make it possible for industry players to increase reliability, improve efficiency, and reduce the cost of wind energy -- one discovery at a time. Whether you require blade testing or resource characterization, grid integration or high-performance computing, Department of Energy laboratory facilities offer a variety of capabilities to meet your wind research needs.

  3. Wind Energy Facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Office of Energy Efficiency and Renewable Energy

    This book takes readers inside the places where daily discoveries shape the next generation of wind power systems. Energy Department laboratory facilities span the United States and offer wind research capabilities to meet industry needs. The facilities described in this book make it possible for industry players to increase reliability, improve efficiency, and reduce the cost of wind energy -- one discovery at a time. Whether you require blade testing or resource characterization, grid integration or high-performance computing, Department of Energy laboratory facilities offer a variety of capabilities to meet your wind research needs.

  4. The Practical Obstacles of Data Transfer: Why researchers still love scp

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nam, Hai Ah; Hill, Jason J; Parete-Koon, Suzanne T

    The importance of computing facilities is heralded every six months with the announcement of the new Top500 list, showcasing the world's fastest supercomputers. Unfortunately, with great computing capability does not come great long-term data storage capacity, which often means users must move their data to their local site archive, to remote sites where they may be doing future computation or analysis, or back to their home institution, or else face the dreaded data purge that most HPC centers employ to keep utilization of large parallel filesystems low enough to manage performance and capacity. At HPC centers, data transfer is crucial to the scientific workflow and will increase in importance as computing systems grow in size. The Energy Sciences Network (ESnet) recently launched its fifth generation network, a 100 Gbps high-performance, unclassified national network connecting more than 40 DOE research sites to support scientific research and collaboration. Despite the tenfold increase in bandwidth to DOE research sites amenable to multiple data transfer streams and high throughput, in practice, researchers often under-utilize the network and resort to painfully slow single-stream transfer methods such as scp to avoid the complexity of using multiple-stream tools such as GridFTP and bbcp, and contend with frustration from the lack of consistency of available tools between sites. In this study we survey and assess the data transfer methods provided at several DOE supported computing facilities, including both leadership-computing facilities, connected through ESnet. We present observed transfer rates, suggested optimizations, and discuss the obstacles the tools must overcome to receive wide-spread adoption over scp.
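
    The gap between single-stream and multi-stream transfer that this study examines can be illustrated with a back-of-the-envelope estimate. The dataset size, per-stream throughput, stream count, and efficiency factor below are hypothetical values chosen for illustration, not measurements from the paper:

```python
# Illustrative wall-clock transfer-time estimate comparing a single-stream
# tool (scp-like) with a multi-stream tool (GridFTP- or bbcp-like).
# All numeric inputs are assumptions for illustration only.

def transfer_hours(dataset_tb, per_stream_gbps, streams=1, efficiency=0.8):
    """Rough wall-clock hours to move dataset_tb terabytes."""
    bits = dataset_tb * 8e12                              # TB -> bits
    rate = per_stream_gbps * 1e9 * streams * efficiency   # usable bits/s
    return bits / rate / 3600                             # seconds -> hours

# A hypothetical 100 TB simulation output, 1 Gbps effective per stream:
single = transfer_hours(100, per_stream_gbps=1.0)                # one stream
parallel = transfer_hours(100, per_stream_gbps=1.0, streams=16)  # 16 streams
print(f"single stream: {single:.1f} h, 16 streams: {parallel:.1f} h")
```

    Under these assumed numbers, parallel streams cut a multi-day transfer to under a day, which is the practical case the study makes for tools like GridFTP and bbcp over scp.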

  5. Human use regulatory affairs advisor (HURAA): learning about research ethics with intelligent learning modules.

    PubMed

    Hu, Xiangen; Graesser, Arthur C

    2004-05-01

    The Human Use Regulatory Affairs Advisor (HURAA) is a Web-based facility that provides help and training on the ethical use of human subjects in research, based on documents and regulations in United States federal agencies. HURAA has a number of standard features of conventional Web facilities and computer-based training, such as hypertext, multimedia, help modules, glossaries, archives, links to other sites, and page-turning didactic instruction. HURAA also has these intelligent features: (1) an animated conversational agent that serves as a navigational guide for the Web facility, (2) lessons with case-based and explanation-based reasoning, (3) document retrieval through natural language queries, and (4) a context-sensitive Frequently Asked Questions segment, called Point & Query. This article describes the functional learning components of HURAA, specifies its computational architecture, and summarizes empirical tests of the facility on learners.

  6. NASA low-speed centrifugal compressor for 3-D viscous code assessment and fundamental flow physics research

    NASA Technical Reports Server (NTRS)

    Hathaway, M. D.; Wood, J. R.; Wasserbauer, C. A.

    1991-01-01

    A low speed centrifugal compressor facility recently built by the NASA Lewis Research Center is described. The purpose of this facility is to obtain detailed flow field measurements for computational fluid dynamic code assessment and flow physics modeling in support of Army and NASA efforts to advance small gas turbine engine technology. The facility is heavily instrumented with pressure and temperature probes, both in the stationary and rotating frames of reference, and has provisions for flow visualization and laser velocimetry. The facility will accommodate rotational speeds to 2400 rpm and is rated at pressures to 1.25 atm. The initial compressor stage being tested is geometrically and dynamically representative of modern high-performance centrifugal compressor stages with the exception of Mach number levels. Preliminary experimental investigations of inlet and exit flow uniformity and measurement repeatability are presented. These results demonstrate the high quality of the data which may be expected from this facility. The significance of synergism between computational fluid dynamic analysis and experimentation throughout the development of the low speed centrifugal compressor facility is demonstrated.

  7. Assessing the uptake of persistent identifiers by research infrastructure users

    PubMed Central

    Maull, Keith E.

    2017-01-01

    Significant progress has been made in the past few years in the development of recommendations, policies, and procedures for creating and promoting citations to data sets, software, and other research infrastructures like computing facilities. Open questions remain, however, about the extent to which referencing practices of authors of scholarly publications are changing in ways desired by these initiatives. This paper uses four focused case studies to evaluate whether research infrastructures are being increasingly identified and referenced in the research literature via persistent citable identifiers. The findings of the case studies show that references to such resources are increasing, but that the patterns of these increases are variable. In addition, the study suggests that citation practices for data sets may change more slowly than citation practices for software and research facilities, due to the inertia of existing practices for referencing the use of data. Similarly, existing practices for acknowledging computing support may slow the adoption of formal citations for computing resources. PMID:28394907

  8. A Bioinformatics Facility for NASA

    NASA Technical Reports Server (NTRS)

    Schweighofer, Karl; Pohorille, Andrew

    2006-01-01

    Building on an existing prototype, we have fielded a facility with bioinformatics technologies that will help NASA meet its unique requirements for biological research. This facility consists of a cluster of computers capable of performing computationally intensive tasks, software tools, databases and knowledge management systems. Novel computational technologies for analyzing and integrating new biological data and already existing knowledge have been developed. With continued development and support, the facility will fulfill NASA's strategic bioinformatics needs in astrobiology and space exploration. As a demonstration of these capabilities, we will present a detailed analysis of how spaceflight factors impact gene expression in the liver and kidney for mice flown aboard shuttle flight STS-108. We have found that many genes involved in signal transduction, cell cycle, and development respond to changes in microgravity, but that most metabolic pathways appear unchanged.

  9. Icing simulation: A survey of computer models and experimental facilities

    NASA Technical Reports Server (NTRS)

    Potapczuk, M. G.; Reinmann, J. J.

    1991-01-01

    A survey of the current methods for simulation of the response of an aircraft or aircraft subsystem to an icing encounter is presented. The topics discussed include a computer code modeling of aircraft icing and performance degradation, an evaluation of experimental facility simulation capabilities, and ice protection system evaluation tests in simulated icing conditions. Current research focused on upgrading simulation fidelity of both experimental and computational methods is discussed. The need for increased understanding of the physical processes governing ice accretion, ice shedding, and iced airfoil aerodynamics is examined.

  10. Icing simulation: A survey of computer models and experimental facilities

    NASA Technical Reports Server (NTRS)

    Potapczuk, M. G.; Reinmann, J. J.

    1991-01-01

    A survey of the current methods for simulation of the response of an aircraft or aircraft subsystem to an icing encounter is presented. The topics discussed include a computer code modeling of aircraft icing and performance degradation, an evaluation of experimental facility simulation capabilities, and ice protection system evaluation tests in simulated icing conditions. Current research focused on upgrading simulation fidelity of both experimental and computational methods is discussed. The need for increased understanding of the physical processes governing ice accretion, ice shedding, and iced airfoil aerodynamics is examined.

  11. AHPCRC (Army High Performance Computing Research Center) Bulletin. Volume 1, Issue 2

    DTIC Science & Technology

    2011-01-01

    area and the researchers working on these projects. Also inside: news from the AHPCRC consortium partners at Morgan State University and the NASA ... Computing Research Center is provided by the supercomputing and research facilities at Stanford University and at the NASA Ames Research Center at ... atomic and molecular level, he said. He noted that "every general would like to have" a Star Trek-like holodeck, where holographic avatars could

  12. The development of an automated flight test management system for flight test planning and monitoring

    NASA Technical Reports Server (NTRS)

    Hewett, Marle D.; Tartt, David M.; Duke, Eugene L.; Antoniewicz, Robert F.; Brumbaugh, Randal W.

    1988-01-01

    The development of an automated flight test management system (ATMS) as a component of a rapid-prototyping flight research facility for AI-based flight systems concepts is described. The rapid-prototyping facility includes real-time high-fidelity simulators, numeric and symbolic processors, and high-performance research aircraft modified to accept commands for a ground-based remotely augmented vehicle facility. The flight system configuration of the ATMS includes three computers: the TI explorer LX and two GOULD SEL 32/27s.

  13. Operation of the 25kW NASA Lewis Research Center Solar Regenerative Fuel Cell Testbed Facility

    NASA Technical Reports Server (NTRS)

    Moore, S. H.; Voecks, G. E.

    1997-01-01

    Assembly of the NASA Lewis Research Center (LeRC) Solar Regenerative Fuel Cell (RFC) Testbed Facility has been completed and system testing has proceeded. This facility includes the integration of two 25kW photovoltaic solar cell arrays, a 25kW proton exchange membrane (PEM) electrolysis unit, four 5kW PEM fuel cells, high pressure hydrogen and oxygen storage vessels, high purity water storage containers, and computer monitoring, control and data acquisition.

  14. 42 CFR 93.509 - Computation of time.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... EFFECTS STUDIES OF HAZARDOUS SUBSTANCES RELEASES AND FACILITIES PUBLIC HEALTH SERVICE POLICIES ON RESEARCH MISCONDUCT Opportunity To Contest ORI Findings of Research Misconduct and HHS Administrative Actions Hearing...

  15. Researchers Mine Information from Next-Generation Subsurface Flow Simulations

    DOE PAGES

    Gedenk, Eric D.

    2015-12-01

    A research team based at Virginia Tech University leveraged computing resources at the US Department of Energy's (DOE's) Oak Ridge National Laboratory to explore subsurface multiphase flow phenomena that can't be experimentally observed. Using the Cray XK7 Titan supercomputer at the Oak Ridge Leadership Computing Facility, the team took Micro-CT images of subsurface geologic systems and created two-phase flow simulations. The team's model development has implications for computational research pertaining to carbon sequestration, oil recovery, and contaminant transport.

  16. NASA Computational Fluid Dynamics Conference. Volume 1: Sessions 1-6

    NASA Technical Reports Server (NTRS)

    1989-01-01

    Presentations given at the NASA Computational Fluid Dynamics (CFD) Conference held at the NASA Ames Research Center, Moffett Field, California, March 7-9, 1989 are given. Topics covered include research facility overviews of CFD research and applications, validation programs, direct simulation of compressible turbulence, turbulence modeling, advances in Runge-Kutta schemes for solving 3-D Navier-Stokes equations, grid generation and inviscid flow computation around aircraft geometries, numerical simulation of rotorcraft, and viscous drag prediction for rotor blades.

  17. A distributed data base management facility for the CAD/CAM environment

    NASA Technical Reports Server (NTRS)

    Balza, R. M.; Beaudet, R. W.; Johnson, H. R.

    1984-01-01

    Current/PAD research in the area of distributed data base management considers facilities for supporting CAD/CAM data management in a heterogeneous network of computers encompassing multiple data base managers supporting a variety of data models. These facilities include coordinated execution of multiple DBMSs to provide for administration of and access to data distributed across them.

  18. ARC-1980-AC80-0512-2

    NASA Image and Video Library

    1980-06-05

    N-231 High Reynolds Number Channel Facility (An example of a Versatile Wind Tunnel). Tunnel 1 is a blowdown facility that utilizes interchangeable test sections and nozzles. The facility provides experimental support for fluid mechanics research, including experimental verification of aerodynamic computer codes and boundary-layer and airfoil studies that require high Reynolds number simulation. (Tunnel 1)

  19. An inventory of aeronautical ground research facilities. Volume 4: Engineering flight simulation facilities

    NASA Technical Reports Server (NTRS)

    Pirrello, C. J.; Hardin, R. D.; Capelluro, L. P.; Harrison, W. D.

    1971-01-01

    The general purpose capabilities of government and industry in the area of real time engineering flight simulation are discussed. The information covers computer equipment, visual systems, crew stations, and motion systems, along with brief statements of facility capabilities. Facility construction and typical operational costs are included where available. The facilities provide for economical and safe solutions to vehicle design, performance, control, and flying qualities problems of manned and unmanned flight systems.

  20. Sustaining and Extending the Open Science Grid: Science Innovation on a PetaScale Nationwide Facility (DE-FC02-06ER41436) SciDAC-2 Closeout Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Livny, Miron; Shank, James; Ernst, Michael

    Under this SciDAC-2 grant the project's goal was to stimulate new discoveries by providing scientists with effective and dependable access to an unprecedented national distributed computational facility: the Open Science Grid (OSG). We proposed to achieve this through the work of the Open Science Grid Consortium: a unique hands-on multi-disciplinary collaboration of scientists, software developers and providers of computing resources. Together the stakeholders in this consortium sustain and use a shared distributed computing environment that transforms simulation and experimental science in the US. The OSG consortium is an open collaboration that actively engages new research communities. We operate an open facility that brings together a broad spectrum of compute, storage, and networking resources and interfaces to other cyberinfrastructures, including the US XSEDE (previously TeraGrid), the European Grids for ESciencE (EGEE), as well as campus and regional grids. We leverage middleware provided by computer science groups, facility IT support organizations, and computing programs of application communities for the benefit of consortium members and the US national CI.

  1. Building Research Cyberinfrastructure at Small/Medium Research Institutions

    ERIC Educational Resources Information Center

    Agee, Anne; Rowe, Theresa; Woo, Melissa; Woods, David

    2010-01-01

    A 2006 ECAR study defined cyberinfrastructure as the coordinated aggregate of "hardware, software, communications, services, facilities, and personnel that enable researchers to conduct advanced computational, collaborative, and data-intensive research." While cyberinfrastructure was initially seen as support for scientific and…

  2. Distributed computing testbed for a remote experimental environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Butner, D.N.; Casper, T.A.; Howard, B.C.

    1995-09-18

    Collaboration is increasing as physics research becomes concentrated on a few large, expensive facilities, particularly in magnetic fusion energy research, with national and international participation. These facilities are designed for steady state operation and interactive, real-time experimentation. We are developing tools to provide for the establishment of geographically distant centers for interactive operations; such centers would allow scientists to participate in experiments from their home institutions. A testbed is being developed for a Remote Experimental Environment (REE), a "Collaboratory." The testbed will be used to evaluate the ability of a remotely located group of scientists to conduct research on the DIII-D Tokamak at General Atomics. The REE will serve as a testing environment for advanced control and collaboration concepts applicable to future experiments. Process-to-process communications over high speed wide area networks provide real-time synchronization and exchange of data among multiple computer networks, while the ability to conduct research is enhanced by adding audio/video communication capabilities. The Open Software Foundation's Distributed Computing Environment is being used to test concepts in distributed control, security, naming, remote procedure calls and distributed file access using the Distributed File Services. We are exploring the technology and sociology of remotely participating in the operation of a large scale experimental facility.

  3. NASA/FAA North Texas Research Station Overview

    NASA Technical Reports Server (NTRS)

    Borchers, Paul F.

    2012-01-01

    NTX Research Station: NASA research assets embedded in an interesting operational air transport environment. Seven personnel (2 civil servants, 5 contractors). ARTCC, TRACON, Towers, 3 air carrier AOCs (American, Eagle and Southwest), and 2 major airports all within 12 miles. Supports NASA Airspace Systems Program with research products at all levels (fundamental to system level). NTX Laboratory: 5000 sq ft purpose-built, dedicated, air traffic management research facility. Established data links to ARTCC, TRACON, Towers, air carriers, airport and NASA facilities. Re-configurable computer labs, dedicated radio tower, state-of-the-art equipment.

  4. A large high vacuum, high pumping speed space simulation chamber for electric propulsion

    NASA Technical Reports Server (NTRS)

    Grisnik, Stanley P.; Parkes, James E.

    1994-01-01

    Testing high power electric propulsion devices poses unique requirements on space simulation facilities. Very high pumping speeds are required to maintain high vacuum levels while handling large volumes of exhaust products. These pumping speeds are significantly higher than those available in most existing vacuum facilities. There is also a requirement for relatively large vacuum chamber dimensions to minimize facility wall/thruster plume interactions and to accommodate far field plume diagnostic measurements. A 4.57 m (15 ft) diameter by 19.2 m (63 ft) long vacuum chamber at NASA Lewis Research Center is described. The chamber utilizes oil diffusion pumps in combination with cryopanels to achieve high vacuum pumping speeds at high vacuum levels. The facility is computer controlled for all phases of operation from start-up, through testing, to shutdown. The computer control system increases the utilization of the facility and reduces the manpower requirements needed for facility operations.

  5. Facility requirements for cockpit traffic display research

    NASA Technical Reports Server (NTRS)

    Chappell, S. L.; Kreifeldt, J. G.

    1982-01-01

    It is pointed out that much research is being conducted regarding the use of a cockpit display of traffic information (CDTI) for safe and efficient air traffic flow. A CDTI is a graphic display which shows the pilot the position of other aircraft relative to his or her aircraft. The present investigation is concerned with the facility requirements for the CDTI research. The facilities currently used for this research vary in fidelity from one CDTI-equipped simulator with computer-generated traffic, to four simulators with autopilot-like controls, all having a CDTI. Three groups of subjects were employed in the conducted study. Each of the groups included one controller, and three airline and four general aviation pilots.

  6. Implementation of Grid Tier 2 and Tier 3 facilities on a Distributed OpenStack Cloud

    NASA Astrophysics Data System (ADS)

    Limosani, Antonio; Boland, Lucien; Coddington, Paul; Crosby, Sean; Huang, Joanna; Sevior, Martin; Wilson, Ross; Zhang, Shunde

    2014-06-01

    The Australian Government is making a AUD 100 million investment in Compute and Storage for the academic community. The Compute facilities are provided in the form of 30,000 CPU cores located at 8 nodes around Australia in a distributed virtualized Infrastructure as a Service facility based on OpenStack. The storage will eventually consist of over 100 petabytes located at 6 nodes. All will be linked via a 100 Gb/s network. This proceeding describes the development of a fully connected WLCG Tier-2 grid site as well as a general purpose Tier-3 computing cluster based on this architecture. The facility employs an extension to Torque to enable dynamic allocations of virtual machine instances. A base Scientific Linux virtual machine (VM) image is deployed in the OpenStack cloud and automatically configured as required using Puppet. Custom scripts are used to launch multiple VMs, integrate them into the dynamic Torque cluster and to mount remote file systems. We report on our experience in developing this nation-wide ATLAS and Belle II Tier 2 and Tier 3 computing infrastructure using the national Research Cloud and storage facilities.
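
    The abstract describes extending Torque so the cluster can grow and shrink by launching or retiring VM instances on the cloud. A minimal sketch of that kind of scaling decision is below; it is a hypothetical illustration of the policy layer only and does not use the real OpenStack, Torque, or Puppet APIs:

```python
# Hypothetical scaling policy for a dynamic Torque/cloud integration:
# compare queued work against running capacity and decide how many
# worker VMs to launch (+n) or retire (-n). Illustration only.

def vm_delta(queued_jobs, idle_vms, busy_vms, cores_per_vm=8, max_vms=50):
    """Return +n to launch n VMs, -n to retire n idle VMs, 0 to hold."""
    if queued_jobs > 0:
        needed = -(-queued_jobs // cores_per_vm)      # ceil division
        room = max_vms - (idle_vms + busy_vms)        # tenancy quota left
        return min(max(needed - idle_vms, 0), room)
    return -idle_vms  # no queued work: retire all idle workers

print(vm_delta(40, idle_vms=1, busy_vms=10))  # launch VMs for 40 queued jobs
```

    In a real deployment the returned delta would drive instance creation through the cloud API, with each new VM booting the base Scientific Linux image and being configured into the Torque cluster, as the abstract describes.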

  7. Automated smear counting and data processing using a notebook computer in a biomedical research facility.

    PubMed

    Ogata, Y; Nishizawa, K

    1995-10-01

    An automated smear counting and data processing system for a life science laboratory was developed to facilitate routine surveys and eliminate human errors by using a notebook computer. This system was composed of a personal computer, a liquid scintillation counter and a well-type NaI(Tl) scintillation counter. The radioactivity of smear samples was automatically measured by these counters. The personal computer received raw signals from the counters through an interface of RS-232C. The software for the computer evaluated the surface density of each radioisotope and printed out that value along with other items as a report. The software was programmed in Pascal language. This system was successfully applied to routine surveys for contamination in our facility.
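
    The core computation such a system performs, converting raw counter output for each smear into a surface contamination density, can be sketched as follows. The counting efficiency, smeared area, and removal factor are hypothetical values for illustration, not those used in the paper:

```python
# Sketch of a removable-surface-contamination calculation from a smear
# (wipe) count. All parameters below are assumed illustrative values.

def surface_density_bq_per_cm2(gross_cpm, background_cpm,
                               efficiency=0.30, smear_area_cm2=100.0,
                               removal_factor=0.1):
    """Removable surface contamination in Bq/cm^2 from a smear count."""
    net_cps = max(gross_cpm - background_cpm, 0.0) / 60.0  # cpm -> cps
    activity_bq = net_cps / efficiency          # detected -> emitted rate
    # Only a fraction of surface activity transfers to the smear:
    return activity_bq / (smear_area_cm2 * removal_factor)

print(surface_density_bq_per_cm2(360.0, 60.0))
```

    The automated system would apply a computation of this shape per sample per radioisotope, then compare the result against the facility's contamination limits when printing the survey report.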

  8. Graphics supercomputer for computational fluid dynamics research

    NASA Astrophysics Data System (ADS)

    Liaw, Goang S.

    1994-11-01

    The objective of this project is to purchase a state-of-the-art graphics supercomputer to improve the Computational Fluid Dynamics (CFD) research capability at Alabama A & M University (AAMU) and to support the Air Force research projects. A cutting-edge graphics supercomputer system, Onyx VTX, from Silicon Graphics Computer Systems (SGI), was purchased and installed. Other equipment including a desktop personal computer, PC-486 DX2 with a built-in 10-BaseT Ethernet card, a 10-BaseT hub, an Apple Laser Printer Select 360, and a notebook computer from Zenith were also purchased. A reading room has been converted to a research computer lab by adding some furniture and an air conditioning unit in order to provide an appropriate working environment for researchers and the purchased equipment. All the purchased equipment was successfully installed and is fully functional. Several research projects, including two existing Air Force projects, are being performed using these facilities.

  9. Cumulative reports and publications

    NASA Technical Reports Server (NTRS)

    1993-01-01

    A complete list of Institute for Computer Applications in Science and Engineering (ICASE) reports is presented. Since ICASE reports are intended to be preprints of articles that will appear in journals or conference proceedings, the published reference is included when it is available. The major categories of the current ICASE research program are: applied and numerical mathematics, including numerical analysis and algorithm development; theoretical and computational research in fluid mechanics in selected areas of interest to LaRC, including acoustics and combustion; experimental research in transition and turbulence and aerodynamics involving LaRC facilities and scientists; and computer science.

  10. Unique life sciences research facilities at NASA Ames Research Center

    NASA Technical Reports Server (NTRS)

    Mulenburg, G. M.; Vasques, M.; Caldwell, W. F.; Tucker, J.

    1994-01-01

    The Life Science Division at NASA's Ames Research Center has a suite of specialized facilities that enable scientists to study the effects of gravity on living systems. This paper describes some of these facilities and their use in research. Seven centrifuges, each with its own unique abilities, allow testing of a variety of parameters on test subjects ranging from single cells through hardware to humans. The Vestibular Research Facility allows the study of both centrifugation and linear acceleration on animals and humans. The Biocomputation Center uses computers for 3D reconstruction of physiological systems, and interactive research tools for virtual reality modeling. Psychophysiological, cardiovascular, exercise physiology, and biomechanical studies are conducted in the 12 bed Human Research Facility and samples are analyzed in the certified Central Clinical Laboratory and other laboratories at Ames. Human bedrest, water immersion and lower body negative pressure equipment are also available to study physiological changes associated with weightlessness. These and other weightlessness models are used in specialized laboratories for the study of basic physiological mechanisms, metabolism and cell biology. Visual-motor performance, perception, and adaptation are studied using ground-based models as well as short term weightlessness experiments (parabolic flights). The unique combination of Life Science research facilities, laboratories, and equipment at Ames Research Center is described in detail in relation to their research contributions.

  11. Computers and Play in Early Childhood: Affordances and Limitations

    ERIC Educational Resources Information Center

    Verenikina, Irina; Herrington, Jan; Peterson, Rob; Mantei, Jessica

    2010-01-01

    The widespread proliferation of computer games for children as young as six months of age, merits a reexamination of their manner of use and a review of their facility to provide opportunities for developmental play. This article describes a research study conducted to explore the use of computer games by young children, specifically to…

  12. Helms with computers at HRF rack in Destiny module

    NASA Image and Video Library

    2001-05-18

    ISS002-E-6288 (18 May 2001) --- Susan J. Helms, Expedition Two flight engineer, works with three laptop computers at the Human Research Facility (HRF) in the U.S. Laboratory. The image was taken with a digital still camera.

  13. Helms with computers at HRF rack in Destiny module

    NASA Image and Video Library

    2001-05-18

    ISS002-E-6294 (18 May 2001) --- Susan J. Helms, Expedition Two flight engineer, works with three laptop computers at the Human Research Facility (HRF) in the U.S. Laboratory. The image was taken with a digital still camera.

  14. Further Investigation of the Support System Effects and Wing Twist on the NASA Common Research Model

    NASA Technical Reports Server (NTRS)

    Rivers, Melissa B.; Hunter, Craig A.; Campbell, Richard L.

    2012-01-01

    An experimental investigation of the NASA Common Research Model was conducted in the NASA Langley National Transonic Facility and NASA Ames 11-foot Transonic Wind Tunnel Facility for use in the Drag Prediction Workshop. As data from the experimental investigations were collected, a large difference in moment values was seen between the experimental data and computational data from the 4th Drag Prediction Workshop. This difference led to a computational assessment to investigate model support system interference effects on the Common Research Model. The results from this investigation showed that the addition of the support system to the computational cases did increase the pitching moment so that it more closely matched the experimental results, but there was still a large discrepancy in pitching moment. This large discrepancy led to an investigation into the shape of the as-built model, which in turn led to a change in the computational grids and re-running of all the previous support system cases. The results of these cases are the focus of this paper.

  15. 1999 NCCS Highlights

    NASA Technical Reports Server (NTRS)

    Bennett, Jerome (Technical Monitor)

    2002-01-01

    The NASA Center for Computational Sciences (NCCS) is a high-performance scientific computing facility operated, maintained and managed by the Earth and Space Data Computing Division (ESDCD) of NASA Goddard Space Flight Center's (GSFC) Earth Sciences Directorate. The mission of the NCCS is to advance leading-edge science by providing the best people, computers, and data storage systems to NASA's Earth and space sciences programs and those of other U.S. Government agencies, universities, and private institutions. Among the many computationally demanding Earth science research efforts supported by the NCCS in Fiscal Year 1999 (FY99) are the NASA Seasonal-to-Interannual Prediction Project, the NASA Search and Rescue Mission, Earth gravitational model development efforts, the National Weather Service's North American Observing System program, Data Assimilation Office studies, a NASA-sponsored project at the Center for Ocean-Land-Atmosphere Studies, a NASA-sponsored microgravity project conducted by researchers at the City University of New York and the University of Pennsylvania, the completion of a satellite-derived global climate data set, simulations of a new geodynamo model, and studies of Earth's torque. This document presents highlights of these research efforts and an overview of the NCCS, its facilities, and its people.

  16. Advanced technology airfoil research, volume 2. [conferences

    NASA Technical Reports Server (NTRS)

    1979-01-01

    A comprehensive review of airfoil research is presented. The major thrust of the research is in three areas: development of computational aerodynamic codes for airfoil analysis and design, development of experimental facilities and test techniques, and all types of airfoil applications.

  17. POLLUX: a program for simulated cloning, mutagenesis and database searching of DNA constructs.

    PubMed

    Dayringer, H E; Sammons, S A

    1991-04-01

    Computer support for research in biotechnology has developed rapidly and has provided several tools to aid the researcher. This report describes the capabilities of new computer software developed in this laboratory to aid in the documentation and planning of experiments in molecular biology. The program, POLLUX, provides a graphical medium for the entry, edit and manipulation of DNA constructs and a textual format for display and edit of construct descriptive data. Program operation and procedures are designed to mimic the actual laboratory experiments with respect to capability and the order in which they are performed. Flexible control over the content of the computer-generated displays and program facilities is provided by a mouse-driven menu interface. Programmed facilities for mutagenesis, simulated cloning and searching of the database from networked workstations are described.

  18. Atmospheric numerical modeling resource enhancement and model convective parameterization/scale interaction studies

    NASA Technical Reports Server (NTRS)

    Cushman, Paula P.

    1993-01-01

    Research will be undertaken in this contract in the area of Modeling Resource and Facilities Enhancement to include computer, technical and educational support to NASA investigators to facilitate model implementation, execution and analysis of output; to provide facilities linking USRA and the NASA/EADS Computer System as well as resident work stations in ESAD; and to provide a centralized location for documentation, archival and dissemination of modeling information pertaining to NASA's program. Additional research will be undertaken in the area of Numerical Model Scale Interaction/Convective Parameterization Studies to include implementation of the comparison of cloud and rain systems and convective-scale processes between the model simulations and what was observed; and to incorporate the findings of these and related research findings in at least two refereed journal articles.

  19. LRC-Katherine-Johnson-interview-2017-0914

    NASA Image and Video Library

    2017-09-14

    Sept. 14, 2017: An interview with Katherine Johnson discussing her career and her reaction to the dedication of the Katherine G. Johnson Computational Research Facility at NASA's Langley Research Center in Hampton, Va., in her honor.

  20. APL: An Alternative to the Multi-Language Environment for Education. Systems Research Memo Number Four.

    ERIC Educational Resources Information Center

    Lippert, Henry T.; Harris, Edward V.

    The diverse requirements for computing facilities in education place heavy demands upon available resources. Although multiple or very large computers can supply such diverse needs, their cost makes them impractical for many institutions. Small computers which serve a few specific needs may be an economical answer. However, to serve operationally…

  1. NASA Lighting Research, Test, & Analysis

    NASA Technical Reports Server (NTRS)

    Clark, Toni

    2015-01-01

    The Habitability and Human Factors Branch, at Johnson Space Center, in Houston, TX, provides technical guidance for the development of spaceflight lighting requirements, verification of light system performance, analysis of integrated environmental lighting systems, and research of lighting-related human performance issues. The Habitability & Human Factors Lighting Team maintains two physical facilities that are integrated to provide support. The Lighting Environment Test Facility (LETF) provides a controlled darkroom environment for physical verification of lighting systems with photometric and spectrographic measurement systems. The Graphics Research & Analysis Facility (GRAF) maintains the capability for computer-based analysis of operational lighting environments. The combined capabilities of the Lighting Team at Johnson Space Center have been used for a wide range of lighting-related issues.

  2. Guidance on the Stand Down, Mothball, and Reactivation of Ground Test Facilities

    NASA Technical Reports Server (NTRS)

    Volkman, Gregrey T.; Dunn, Steven C.

    2013-01-01

    The development of aerospace and aeronautics products typically requires three distinct types of testing resources across research, development, test, and evaluation: experimental ground testing, computational "testing" and development, and flight testing. Over the last twenty plus years, computational methods have replaced some physical experiments and this trend is continuing. The result is decreased utilization of ground test capabilities and, along with market forces, industry consolidation, and other factors, has resulted in the stand down and oftentimes closure of many ground test facilities. Ground test capabilities are (and very likely will continue to be for many years) required to verify computational results and to provide information for regimes where computational methods remain immature. Ground test capabilities are very costly to build and to maintain, so once constructed and operational it may be desirable to retain access to those capabilities even if not currently needed. One means of doing this while reducing ongoing sustainment costs is to stand down the facility into a "mothball" status - keeping it alive to bring it back when needed. Both NASA and the US Department of Defense have policies to accomplish the mothball of a facility, but with little detail. This paper offers a generic process to follow that can be tailored based on the needs of the owner and the applicable facility.

  3. Multiscale Computation. Needs and Opportunities for BER Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scheibe, Timothy D.; Smith, Jeremy C.

    2015-01-01

    The Environmental Molecular Sciences Laboratory (EMSL), a scientific user facility managed by Pacific Northwest National Laboratory for the U.S. Department of Energy, Office of Biological and Environmental Research (BER), conducted a one-day workshop on August 26, 2014 on the topic of “Multiscale Computation: Needs and Opportunities for BER Science.” Twenty invited participants, from various computational disciplines within the BER program research areas, were charged with the following objectives: identify BER-relevant models and their potential cross-scale linkages that could be exploited to better connect molecular-scale research to BER research at larger scales; and identify critical science directions that will motivate EMSL decisions regarding future computational (hardware and software) architectures.

  4. National remote computational flight research facility

    NASA Technical Reports Server (NTRS)

    Rediess, Herman A.

    1989-01-01

    The extension of the NASA Ames-Dryden remotely augmented vehicle (RAV) facility to accommodate flight testing of a hypersonic aircraft utilizing the continental United States as a test range is investigated. The development and demonstration of an automated flight test management system (ATMS) that uses expert system technology for flight test planning, scheduling, and execution is documented.

  5. Computational and Experimental Characterization of the Mach 6 Facility Nozzle Flow for the Enhanced Injection and Mixing Project at NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    Drozda, Tomasz G.; Cabell, Karen F.; Passe, Bradley J.; Baurle, Robert A.

    2017-01-01

    Computational fluid dynamics analyses and experimental data are presented for the Mach 6 facility nozzle used in the Arc-Heated Scramjet Test Facility for the Enhanced Injection and Mixing Project (EIMP). This project, conducted at the NASA Langley Research Center, aims to investigate supersonic combustion ramjet (scramjet) fuel injection and mixing physics relevant to flight Mach numbers greater than 8. The EIMP experiments use a two-dimensional Mach 6 facility nozzle to provide the high-speed air simulating the combustor entrance flow of a scramjet engine. Of interest are the physical extent and the thermodynamic properties of the core flow at the nozzle exit plane. The detailed characterization of this flow is obtained from three-dimensional, viscous, Reynolds-averaged simulations. Thermodynamic nonequilibrium effects are also investigated. The simulations are compared with the available experimental data, which includes wall static pressures as well as in-stream static pressure, pitot pressure and total temperature obtained via in-stream probes positioned just downstream of the nozzle exit plane.
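
    The in-stream pitot and static pressure measurements mentioned above can be related to the core-flow Mach number through the standard Rayleigh pitot formula. The sketch below is illustrative only (it assumes a calorically perfect gas with gamma = 1.4, whereas the paper also considers thermodynamic nonequilibrium), and the function names are not from the paper:

```python
GAMMA = 1.4  # ratio of specific heats for air, calorically perfect gas (assumption)

def pitot_over_static(mach: float, gamma: float = GAMMA) -> float:
    """Rayleigh pitot formula: ratio of pitot pressure (stagnation pressure
    behind the probe's normal shock) to freestream static pressure, M > 1."""
    if mach <= 1.0:
        raise ValueError("Rayleigh pitot formula applies only for M > 1")
    a = ((gamma + 1.0) * mach**2 / 2.0) ** (gamma / (gamma - 1.0))
    b = ((gamma + 1.0) / (2.0 * gamma * mach**2 - (gamma - 1.0))) ** (1.0 / (gamma - 1.0))
    return a * b

def mach_from_pitot_ratio(ratio: float, lo: float = 1.001, hi: float = 20.0) -> float:
    """Invert the (monotonically increasing) pitot/static ratio for Mach
    number by bisection."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if pitot_over_static(mid) < ratio:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

At Mach 6 this ratio is roughly 47, which is why pitot probes remain usable even when the freestream static pressure in such facilities is very low.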

  6. Using high-performance networks to enable computational aerosciences applications

    NASA Technical Reports Server (NTRS)

    Johnson, Marjory J.

    1992-01-01

    One component of the U.S. Federal High Performance Computing and Communications Program (HPCCP) is the establishment of a gigabit network to provide a communications infrastructure for researchers across the nation. This gigabit network will provide new services and capabilities, in addition to increased bandwidth, to enable future applications. An understanding of these applications is necessary to guide the development of the gigabit network and other high-performance networks of the future. In this paper we focus on computational aerosciences applications run remotely using the Numerical Aerodynamic Simulation (NAS) facility located at NASA Ames Research Center. We characterize these applications in terms of network-related parameters and relate user experiences that reveal limitations imposed by the current wide-area networking infrastructure. Then we investigate how the development of a nationwide gigabit network would enable users of the NAS facility to work in new, more productive ways.

  7. High-Performance Computing User Facility | Computational Science | NREL

    Science.gov Websites

    The High Performance Computing (HPC) User Facility at NREL provides researchers with advanced computing systems for energy technologies, including the Peregrine supercomputer and the Gyrfalcon Mass Storage System. Information on these systems and how to access them is available through the facility.

  8. Research in mathematical theory of computation. [computer programming applications

    NASA Technical Reports Server (NTRS)

    Mccarthy, J.

    1973-01-01

    Research progress in the following areas is reviewed: (1) a new version of the computer program LCF (logic for computable functions), including a facility to search for proofs automatically; (2) the description of the language PASCAL in terms of both LCF and first-order logic; (3) discussion of LISP semantics in LCF and an attempt to prove the correctness of the London compilers in a formal way; (4) design of both special-purpose and domain-independent proof procedures, with program correctness specifically in mind; (5) design of languages for describing such proof procedures; and (6) the embedding of these ideas in the first-order checker.

  9. Applied Computational Fluid Dynamics at NASA Ames Research Center

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.; Kwak, Dochan (Technical Monitor)

    1994-01-01

    The field of Computational Fluid Dynamics (CFD) has advanced to the point where it can now be used for many applications in fluid mechanics research and aerospace vehicle design. A few applications being explored at NASA Ames Research Center will be presented and discussed. The examples presented will range in speed from hypersonic to low speed incompressible flow applications. Most of the results will be from numerical solutions of the Navier-Stokes or Euler equations in three space dimensions for general geometry applications. Computational results will be used to highlight the presentation as appropriate. Advances in computational facilities including those associated with NASA's CAS (Computational Aerosciences) Project of the Federal HPCC (High Performance Computing and Communications) Program will be discussed. Finally, opportunities for future research will be presented and discussed. All material will be taken from non-sensitive, previously-published and widely-disseminated work.

  10. The Future is Hera: Analyzing Astronomical Data Over the Internet

    NASA Astrophysics Data System (ADS)

    Valencic, Lynne A.; Snowden, S.; Chai, P.; Shafer, R.

    2009-01-01

    Hera is the new data processing facility provided by the HEASARC at the NASA Goddard Space Flight Center for analyzing astronomical data. Hera provides all the preinstalled software packages, local disk space, and computing resources needed to do general processing of FITS format data files residing on the user's local computer, and to do advanced research using the publicly available data from High Energy Astrophysics missions. Qualified students, educators, and researchers may freely use the Hera services over the internet for research and educational purposes.

  11. An Electronic Pressure Profile Display system for aeronautic test facilities

    NASA Technical Reports Server (NTRS)

    Woike, Mark R.

    1990-01-01

    The NASA Lewis Research Center has installed an Electronic Pressure Profile Display system. This system provides for the real-time display of pressure readings on high resolution graphics monitors. The Electronic Pressure Profile Display system will replace manometer banks currently used in aeronautic test facilities. The Electronic Pressure Profile Display system consists of an industrial type Digital Pressure Transmitter (DPT) unit which interfaces with a host computer. The host computer collects the pressure data from the DPT unit, converts it into engineering units, and displays the readings on a high resolution graphics monitor in bar graph format. Software was developed to accomplish the above tasks and also draw facility diagrams as background information on the displays. Data transfer between host computer and DPT unit is done with serial communications. Up to 64 channels are displayed with one second update time. This paper describes the system configuration, its features, and its advantages over existing systems.
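
    The host-side processing chain the abstract describes (receive a frame from the transmitter unit, convert raw readings to engineering units, render a bar-graph display) can be sketched as follows. The frame format, 12-bit span, and 50 psi calibration are illustrative assumptions, not the actual DPT protocol:

```python
def counts_to_psi(raw: int, span_counts: int = 4095, span_psi: float = 50.0,
                  offset_psi: float = 0.0) -> float:
    """Linear calibration from raw transducer counts to engineering units (psi).
    The 12-bit span and 50 psi full scale are illustrative assumptions."""
    return offset_psi + span_psi * raw / span_counts

def bar_graph(channels, full_scale: float = 50.0, width: int = 40) -> str:
    """Render one text 'bar' per channel, mimicking the manometer-bank
    style display the system replaces."""
    lines = []
    for ch, psi in enumerate(channels):
        n = max(0, min(width, round(width * psi / full_scale)))
        lines.append(f"CH{ch:02d} {psi:6.2f} psi |{'#' * n}{' ' * (width - n)}|")
    return "\n".join(lines)

# Simulated serial frame: comma-separated raw counts for three channels.
# A real system would read this from the serial port once per second.
frame = "1023,2047,4095"
readings = [counts_to_psi(int(v)) for v in frame.split(",")]
print(bar_graph(readings))
```

In the actual system the loop would repeat at the one-second update rate over all 64 channels, with the facility diagram drawn once as static background.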

  12. An electronic pressure profile display system for aeronautic test facilities

    NASA Technical Reports Server (NTRS)

    Woike, Mark R.

    1990-01-01

    The NASA Lewis Research Center has installed an Electronic Pressure Profile Display system. This system provides for the real-time display of pressure readings on high resolution graphics monitors. The Electronic Pressure Profile Display system will replace manometer banks currently used in aeronautic test facilities. The Electronic Pressure Profile Display system consists of an industrial type Digital Pressure Transmitter (DPT) unit which interfaces with a host computer. The host computer collects the pressure data from the DPT unit, converts it into engineering units, and displays the readings on a high resolution graphics monitor in bar graph format. Software was developed to accomplish the above tasks and also draw facility diagrams as background information on the displays. Data transfer between host computer and DPT unit is done with serial communications. Up to 64 channels are displayed with one second update time. This paper describes the system configuration, its features, and its advantages over existing systems.

  13. The Future is Hera! Analyzing Astronomical Data Over the Internet

    NASA Technical Reports Server (NTRS)

    Valencic, L. A.; Chai, P.; Pence, W.; Shafer, R.; Snowden, S.

    2008-01-01

    Hera is the data processing facility provided by the High Energy Astrophysics Science Archive Research Center (HEASARC) at the NASA Goddard Space Flight Center for analyzing astronomical data. Hera provides all the pre-installed software packages, local disk space, and computing resources needed to do general processing of FITS format data files residing on the user's local computer, and to do research using the publicly available data from the High Energy Astrophysics Division. Qualified students, educators, and researchers may freely use the Hera services over the internet for research and educational purposes.

  14. Next Generation Workload Management System For Big Data on Heterogeneous Distributed Computing

    NASA Astrophysics Data System (ADS)

    Klimentov, A.; Buncic, P.; De, K.; Jha, S.; Maeno, T.; Mount, R.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Porter, R. J.; Read, K. F.; Vaniachine, A.; Wells, J. C.; Wenaus, T.

    2015-05-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS and ALICE are the largest collaborations ever assembled in the sciences and are at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, both experiments rely on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System (WMS) for managing the workflow for all data processing on hundreds of data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. The scale is demonstrated by the following numbers: PanDA manages O(10^2) sites, O(10^5) cores, O(10^8) jobs per year, O(10^3) users, and ATLAS data volume is O(10^17) bytes. In 2013 we started an ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF). The project titled ‘Next Generation Workload Management and Analysis System for Big Data’ (BigPanDA) is funded by DOE ASCR and HEP. Extending PanDA to clouds and LCF presents new challenges in managing heterogeneity and supporting workflow. The BigPanDA project is underway to set up and tailor PanDA at the Oak Ridge Leadership Computing Facility (OLCF) and at the National Research Center "Kurchatov Institute" together with ALICE distributed computing and ORNL computing professionals. Our approach to integration of HPC platforms at the OLCF and elsewhere is to reuse, as much as possible, existing components of the PanDA system.
We will present our current accomplishments with running the PanDA WMS at OLCF and other supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facilities infrastructure for High Energy and Nuclear Physics as well as other data-intensive science applications.

  15. Computer-Aided Design Speeds Development of Safe, Affordable, and Efficient

    Science.gov Websites

    Computer-aided engineering tools developed under the CAEBAT program are speeding the development of safe, affordable, and efficient batteries. CAEBAT teams from industry, academia, national laboratories, and other research institutions are now working to bring CAEBAT to the next level, supported by capabilities such as the 3-D visualization room in NREL's Energy Systems Integration Facility.

  16. Space lab system analysis: Advanced Solid Rocket Motor (ASRM) communications networks analysis

    NASA Technical Reports Server (NTRS)

    Ingels, Frank M.; Moorhead, Robert J., II; Moorhead, Jane N.; Shearin, C. Mark; Thompson, Dale R.

    1990-01-01

    A synopsis of research on computer viruses and computer security is presented. A review of seven technical meetings attended is compiled. A technical discussion on the communication plans for the ASRM facility is presented, with a brief tutorial on the potential local area network media and protocols.

  17. High Performance Computing Facility Operational Assessment 2015: Oak Ridge Leadership Computing Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barker, Ashley D.; Bernholdt, David E.; Bland, Arthur S.

    Oak Ridge National Laboratory’s (ORNL’s) Leadership Computing Facility (OLCF) continues to surpass its operational target goals: supporting users; delivering fast, reliable systems; creating innovative solutions for high-performance computing (HPC) needs; and managing risks, safety, and security aspects associated with operating one of the most powerful computers in the world. The results can be seen in the cutting-edge science delivered by users and the praise from the research community. Calendar year (CY) 2015 was filled with outstanding operational results and accomplishments: a very high rating from users on overall satisfaction that ties the highest-ever mark set in CY 2014; the greatest number of core-hours delivered to research projects; the largest percentage of capability usage since the OLCF began tracking the metric in 2009; and success in delivering on the allocation of 60, 30, and 10% of core hours offered for the INCITE (Innovative and Novel Computational Impact on Theory and Experiment), ALCC (Advanced Scientific Computing Research Leadership Computing Challenge), and Director’s Discretionary programs, respectively. These accomplishments, coupled with the extremely high utilization rate, represent the fulfillment of the promise of Titan: maximum use by maximum-size simulations. The impact of all of these successes and more is reflected in the accomplishments of OLCF users, with publications this year in notable journals Nature, Nature Materials, Nature Chemistry, Nature Physics, Nature Climate Change, ACS Nano, Journal of the American Chemical Society, and Physical Review Letters, as well as many others.
The achievements included in the 2015 OLCF Operational Assessment Report reflect first-ever or largest simulations in their communities; for example, Titan enabled engineers in Los Angeles and the surrounding region to design and begin building improved critical infrastructure by enabling the highest-resolution CyberShake map for Southern California to date. The Titan system provides the largest extant heterogeneous architecture for computing and computational science. Usage is high, delivering on the promise of a system well-suited for capability simulations for science. This success is due in part to innovations in tracking and reporting the activity on the compute nodes, and using this information to further enable and optimize applications, extending and balancing workload across the entire node. The OLCF continues to invest in innovative processes, tools, and resources necessary to meet continuing user demand. The facility’s leadership in data analysis and workflows was featured at the Department of Energy (DOE) booth at SC15, for the second year in a row, highlighting work with researchers from the National Library of Medicine coupled with unique computational and data resources serving experimental and observational data across facilities. Effective operations of the OLCF play a key role in the scientific missions and accomplishments of its users. Building on the exemplary year of 2014, as shown by the 2014 Operational Assessment Report (OAR) review committee response in Appendix A, this OAR delineates the policies, procedures, and innovations implemented by the OLCF to continue delivering a multi-petaflop resource for cutting-edge research. This report covers CY 2015, which, unless otherwise specified, denotes January 1, 2015, through December 31, 2015.

  18. Alex Pines

    Science.gov Websites

    Research profile and contact information for Alex Pines at Lawrence Berkeley National Laboratory, a U.S. Department of Energy national laboratory, with links to division staff, facilities and centers, and safety resources.

  19. Data and Communications in Basic Energy Sciences: Creating a Pathway for Scientific Discovery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nugent, Peter E.; Simonson, J. Michael

    2011-10-24

    This report is based on the Department of Energy (DOE) Workshop on “Data and Communications in Basic Energy Sciences: Creating a Pathway for Scientific Discovery” that was held at the Bethesda Marriott in Maryland on October 24-25, 2011. The workshop brought together leading researchers from the Basic Energy Sciences (BES) facilities and Advanced Scientific Computing Research (ASCR). The workshop was co-sponsored by these two Offices to identify opportunities and needs for data analysis, ownership, storage, mining, provenance and data transfer at light sources, neutron sources, microscopy centers and other facilities. Their charge was to identify current and anticipated issues in the acquisition, analysis, communication and storage of experimental data that could impact the progress of scientific discovery, ascertain what knowledge, methods and tools are needed to mitigate present and projected shortcomings and to create the foundation for information exchanges and collaboration between ASCR and BES supported researchers and facilities. The workshop was organized in the context of the impending data tsunami that will be produced by DOE’s BES facilities. Current facilities, like SLAC National Accelerator Laboratory’s Linac Coherent Light Source, can produce up to 18 terabytes (TB) per day, while upgraded detectors at Lawrence Berkeley National Laboratory’s Advanced Light Source will generate ~10TB per hour. The expectation is that these rates will increase by over an order of magnitude in the coming decade. The urgency to develop new strategies and methods in order to stay ahead of this deluge and extract the most science from these facilities was recognized by all. The four focus areas addressed in this workshop were: Workflow Management - Experiment to Science: Identifying and managing the data path from experiment to publication.
Theory and Algorithms: Recognizing the need for new tools for computation at scale, supporting large data sets and realistic theoretical models. Visualization and Analysis: Supporting near-real-time feedback for experiment optimization and new ways to extract and communicate critical information from large data sets. Data Processing and Management: Outlining needs in computational and communication approaches and infrastructure needed to handle unprecedented data volume and information content. It should be noted that almost all participants recognized that there were unlikely to be any turn-key solutions available due to the unique, diverse nature of the BES community, where research at adjacent beamlines at a given light source facility often span everything from biology to materials science to chemistry using scattering, imaging and/or spectroscopy. However, it was also noted that advances supported by other programs in data research, methodologies, and tool development could be implemented on reasonable time scales with modest effort. Adapting available standard file formats, robust workflows, and in-situ analysis tools for user facility needs could pay long-term dividends. Workshop participants assessed current requirements as well as future challenges and made the following recommendations in order to achieve the ultimate goal of enabling transformative science in current and future BES facilities: Theory and analysis components should be integrated seamlessly within experimental workflow. Develop new algorithms for data analysis based on common data formats and toolsets. Move analysis closer to experiment. Move the analysis closer to the experiment to enable real-time (in-situ) streaming capabilities, live visualization of the experiment and an increase of the overall experimental efficiency. Match data management access and capabilities with advancements in detectors and sources. 
Remove bottlenecks, provide interoperability across different facilities/beamlines and apply forefront mathematical techniques to more efficiently extract science from the experiments. This workshop report examines and reviews the status of several BES facilities and highlights the successes and shortcomings of the current data and communication pathways for scientific discovery. It then ascertains what methods and tools are needed to mitigate present and projected data bottlenecks to science over the next 10 years. The goal of this report is to create the foundation for information exchanges and collaborations among ASCR and BES supported researchers, the BES scientific user facilities, and ASCR computing and networking facilities. To jumpstart these activities, there was a strong desire to see a joint effort between ASCR and BES along the lines of the highly successful Scientific Discovery through Advanced Computing (SciDAC) program in which integrated teams of engineers, scientists and computer scientists were engaged to tackle a complete end-to-end workflow solution at one or more beamlines, to ascertain what challenges will need to be addressed in order to handle future increases in data.

  20. Architectural Aspects of Grid Computing and its Global Prospects for E-Science Community

    NASA Astrophysics Data System (ADS)

    Ahmad, Mushtaq

    2008-05-01

    The paper reviews the imminent architectural aspects of Grid Computing for the e-Science community, for scientific research and business/commercial collaboration beyond physical boundaries. Grid Computing provides all the needed facilities: hardware, software, communication interfaces, high-speed internet, safe authentication, and a secure environment for collaboration on research projects around the globe. It provides a very fast compute engine for those scientific and engineering research projects and business/commercial applications which are heavily compute intensive and/or require humongous amounts of data. It also makes possible the use of very advanced methodologies, simulation models, expert systems, and the treasure of knowledge available around the globe under the umbrella of knowledge sharing. Thus it helps realize the dream of a global village for the benefit of the e-Science community across the globe.

  1. High Performance Distributed Computing in a Supercomputer Environment: Computational Services and Applications Issues

    NASA Technical Reports Server (NTRS)

    Kramer, Williams T. C.; Simon, Horst D.

    1994-01-01

    This tutorial proposes to be a practical guide for the uninitiated to the main topics and themes of high-performance computing (HPC), with particular emphasis on distributed computing. The intent is first to provide some guidance and direction in the rapidly growing field of scientific computing using both massively parallel and traditional supercomputers. Because of their considerable potential computational power, loosely or tightly coupled clusters of workstations are increasingly considered as a third alternative to both the more conventional supercomputers based on a small number of powerful vector processors and massively parallel processors. Even though many research issues concerning the effective use of workstation clusters and their integration into a large-scale production facility are still unresolved, such clusters are already used for production computing. In this tutorial we will draw on the unique experience gained at the NAS facility at NASA Ames Research Center. Over the last five years at NAS, massively parallel supercomputers such as the Connection Machines CM-2 and CM-5 from Thinking Machines Corporation and the iPSC/860 (Touchstone Gamma Machine) and Paragon machines from Intel were used in a production supercomputer center alongside traditional vector supercomputers such as the Cray Y-MP and C90.

  2. DOE High Performance Computing Operational Review (HPCOR): Enabling Data-Driven Scientific Discovery at HPC Facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerber, Richard; Allcock, William; Beggio, Chris

    2014-10-17

    U.S. Department of Energy (DOE) High Performance Computing (HPC) facilities are on the verge of a paradigm shift in the way they deliver systems and services to science and engineering teams. Research projects are producing a wide variety of data at unprecedented scale and level of complexity, with community-specific services that are part of the data collection and analysis workflow. On June 18-19, 2014, representatives from six DOE HPC centers met in Oakland, CA at the DOE High Performance Computing Operational Review (HPCOR) to discuss how they can best provide facilities and services to enable large-scale data-driven scientific discovery at the DOE national laboratories. The report contains findings from that review.

  3. The NASA Ames 16-Inch Shock Tunnel Nozzle Simulations and Experimental Comparison

    NASA Technical Reports Server (NTRS)

    Tokarcik-Polsky, S.; Papadopoulos, P.; Venkatapathy, E.; Deiwert, G. S.; Edwards, Thomas A. (Technical Monitor)

    1995-01-01

    The 16-Inch Shock Tunnel at NASA Ames Research Center is a unique test facility used for hypersonic propulsion testing. To provide information necessary to understand the hypersonic testing of the combustor model, computational simulations of the facility nozzle were performed and results are compared with available experimental data, namely static pressure along the nozzle walls and pitot pressure at the exit of the nozzle section. Both quasi-one-dimensional and axisymmetric approaches were used to study the numerous modeling issues involved. The facility nozzle flow was examined for three hypersonic test conditions, and the computational results are presented in detail. The effects of variations in reservoir conditions, boundary layer growth, and parameters of numerical modeling are explored.
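    The quasi-one-dimensional approach mentioned above reduces the nozzle flow to the isentropic area-Mach relation. A minimal sketch, assuming a calorically perfect gas (gamma = 1.4) and ignoring the boundary-layer growth and reservoir-condition effects the facility computations accounted for:

```python
def area_ratio(mach, gamma=1.4):
    """Isentropic quasi-1D area ratio A/A* as a function of Mach number,
    for a calorically perfect gas; A* is the sonic throat area."""
    t = (2.0 / (gamma + 1.0)) * (1.0 + 0.5 * (gamma - 1.0) * mach * mach)
    return (1.0 / mach) * t ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))

# At the throat (M = 1) the ratio is 1; a Mach 2 exit needs A/A* = 1.6875.
```

    Facility nozzle analyses typically invert this relation numerically to find the exit Mach number for a given geometric area ratio.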

  4. Investigation of seismicity and related effects at NASA Ames-Dryden Flight Research Facility, Computer Center, Edwards, California

    NASA Technical Reports Server (NTRS)

    Cousineau, R. D.; Crook, R., Jr.; Leeds, D. J.

    1985-01-01

    This report discusses a geological and seismological investigation of the NASA Ames-Dryden Flight Research Facility site at Edwards, California. Results are presented as seismic design criteria, with design values of the pertinent ground motion parameters, probability of recurrence, and recommended analogous time-history accelerograms with their corresponding spectra. The recommendations apply specifically to the Dryden site and should not be extrapolated to other sites with varying foundation and geologic conditions or different seismic environments.

  5. ERA 1103 UNIVAC 2 Calculating Machine

    NASA Image and Video Library

    1955-09-21

    The new 10-by 10-Foot Supersonic Wind Tunnel at the Lewis Flight Propulsion Laboratory included high tech data acquisition and analysis systems. The reliable gathering of pressure, speed, temperature, and other data from test runs in the facilities was critical to the research process. Throughout the 1940s and early 1950s female employees, known as computers, recorded all test data and performed initial calculations by hand. The introduction of punch card computers in the late 1940s gradually reduced the number of hands-on calculations. In the mid-1950s new computational machines were installed in the office building of the 10-by 10-Foot tunnel. The new systems included this UNIVAC 1103 vacuum tube computer—the lab’s first centralized computer system. The programming was done on paper tape and fed into the machine. The 10-by 10 computer center also included the Lewis-designed Computer Automated Digital Encoder (CADDE) and Digital Automated Multiple Pressure Recorder (DAMPR) systems which converted test data to binary-coded decimal numbers and recorded test pressures automatically, respectively. The systems primarily served the 10-by 10, but were also applied to the other large facilities. Engineering Research Associates (ERA) developed the initial UNIVAC computer for the Navy in the late 1940s. In 1952 the company designed a commercial version, the UNIVAC 1103. The 1103 was the first computer designed by Seymour Cray and one of the first commercially successful scientific computers.

  6. The Research on Application of Information Technology in sports Stadiums

    NASA Astrophysics Data System (ADS)

    Can, Han; Lu, Ma; Gan, Luying

    With the success of China's Olympic program and the smooth rollout of the national fitness plan, public interest in sport continues to grow, as does concern for physical health, and the country has launched a modern, technology-driven construction of sports facilities. Information technology is applied ever more widely in sports venues. Modern venues and facilities include not only intelligent office automation systems, intelligent sports facility systems, communication systems for event management, ticketing and access control systems, contest information systems, television systems, and command and control systems, but also computer technology applied to training, including image analysis, computer-aided athlete training systems, related data entry systems, and decision support systems. Using the documentary data method, this paper focuses on the application of information technology in sports stadiums and explores future trends, with a view to promoting the growth of China's national economy, improving student fitness, and advancing Chinese sports.

  7. 42 CFR 93.508 - Filing, forms, and service.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... HEALTH EFFECTS STUDIES OF HAZARDOUS SUBSTANCES RELEASES AND FACILITIES PUBLIC HEALTH SERVICE POLICIES ON RESEARCH MISCONDUCT Opportunity To Contest ORI Findings of Research Misconduct and HHS Administrative... nondocumentary materials such as videotapes, computer disks, or physical evidence. This provision does not apply...

  8. 42 CFR 93.508 - Filing, forms, and service.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... HEALTH EFFECTS STUDIES OF HAZARDOUS SUBSTANCES RELEASES AND FACILITIES PUBLIC HEALTH SERVICE POLICIES ON RESEARCH MISCONDUCT Opportunity To Contest ORI Findings of Research Misconduct and HHS Administrative... nondocumentary materials such as videotapes, computer disks, or physical evidence. This provision does not apply...

  9. 42 CFR 93.508 - Filing, forms, and service.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... HEALTH EFFECTS STUDIES OF HAZARDOUS SUBSTANCES RELEASES AND FACILITIES PUBLIC HEALTH SERVICE POLICIES ON RESEARCH MISCONDUCT Opportunity To Contest ORI Findings of Research Misconduct and HHS Administrative... nondocumentary materials such as videotapes, computer disks, or physical evidence. This provision does not apply...

  10. 42 CFR 93.508 - Filing, forms, and service.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... HEALTH EFFECTS STUDIES OF HAZARDOUS SUBSTANCES RELEASES AND FACILITIES PUBLIC HEALTH SERVICE POLICIES ON RESEARCH MISCONDUCT Opportunity To Contest ORI Findings of Research Misconduct and HHS Administrative... nondocumentary materials such as videotapes, computer disks, or physical evidence. This provision does not apply...

  11. 42 CFR 93.508 - Filing, forms, and service.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... HEALTH EFFECTS STUDIES OF HAZARDOUS SUBSTANCES RELEASES AND FACILITIES PUBLIC HEALTH SERVICE POLICIES ON RESEARCH MISCONDUCT Opportunity To Contest ORI Findings of Research Misconduct and HHS Administrative... nondocumentary materials such as videotapes, computer disks, or physical evidence. This provision does not apply...

  12. Development and application of computational aerothermodynamics flowfield computer codes

    NASA Technical Reports Server (NTRS)

    Venkatapathy, Ethiraj

    1994-01-01

    Research was performed in the area of computational modeling and application of hypersonic, high-enthalpy, thermochemical nonequilibrium flow (aerothermodynamics) problems. A number of computational fluid dynamics (CFD) codes were developed and applied to simulate high-altitude rocket plumes, the Aeroassist Flight Experiment (AFE), hypersonic base flow for planetary probes, the single expansion ramp nozzle (SERN) connected with the National Aerospace Plane, hypersonic drag devices, hypersonic ramp flows, ballistic range models, shock tunnel facility nozzles, transient and steady flows in the shock tunnel facility, arc-jet flows, thermochemical nonequilibrium flows around simple and complex bodies, axisymmetric ionized flows of interest to re-entry, unsteady shock-induced combustion phenomena, high-enthalpy pulsed facility simulations, and unsteady shock boundary layer interactions in shock tunnels. Computational modeling involved developing appropriate numerical schemes for the flows of interest and developing, applying, and validating appropriate thermochemical processes. As part of improving the accuracy of the numerical predictions, adaptive grid algorithms were explored, and a user-friendly, self-adaptive code (SAGE) was developed. Aerothermodynamic flows of interest included energy transfer due to strong radiation, and a significant level of effort was spent in developing computational codes for calculating radiation and radiation modeling. In addition, computational tools were developed and applied to predict the radiative heat flux and spectra that reach the model surface.

  13. Oklahoma Center for High Energy Physics (OCHEP)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nandi, S; Strauss, M J; Snow, J

    2012-02-29

    The DOE EPSCoR implementation grant, with support from the State of Oklahoma and from the three universities (Oklahoma State University, University of Oklahoma, and Langston University), resulted in the establishment of the Oklahoma Center for High Energy Physics (OCHEP) in 2004. Currently, OCHEP continues to flourish as a vibrant hub for research in experimental and theoretical particle physics and an educational center in the State of Oklahoma. All goals of the original proposal were successfully accomplished. These include the foundation of a new experimental particle physics group at OSU, the establishment of a Tier 2 computing facility for Large Hadron Collider (LHC) and Tevatron data analysis at OU, and the organization of a vital particle physics research center in Oklahoma based on the resources of the three universities. OSU has hired two tenure-track faculty members with initial support from the grant funds; both positions are now supported through the OSU budget. This new HEP experimental group at OSU has established itself as a full member of the Fermilab D0 Collaboration and the LHC ATLAS Experiment and has secured external funds from the DOE and the NSF. These funds currently support 2 graduate students, 1 postdoctoral fellow, and 1 part-time engineer. The grant initiated the creation of a Tier 2 computing facility at OU as part of the Southwest Tier 2 facility, and a permanent Research Scientist was hired at OU to maintain and run the facility; permanent support for this position is now provided through the OU university budget. OCHEP represents a successful model of cooperation among several universities, establishing a critical mass of manpower, computing, and hardware resources. This has increased Oklahoma's impact in all areas of HEP: theory, experiment, and computation.
The Center personnel are involved in cutting-edge research in experimental, theoretical, and computational aspects of High Energy Physics, with research areas ranging from the search for new phenomena at the Fermilab Tevatron and the CERN Large Hadron Collider to theoretical modeling, computer simulation, detector development and testing, and physics analysis. OCHEP faculty members participating in the D0 collaboration at the Fermilab Tevatron and in the ATLAS collaboration at the CERN LHC have made a major impact on the Standard Model (SM) Higgs boson search, top quark studies, B physics studies, and measurements of Quantum Chromodynamics (QCD) phenomena. The OCHEP Grid computing facility consists of a large computer cluster which is playing a major role in data analysis and Monte Carlo production for both the D0 and ATLAS experiments. Theoretical efforts are devoted to new ideas in Higgs boson physics, extra dimensions, neutrino masses and oscillations, Grand Unified Theories, supersymmetric models, dark matter, and nonperturbative quantum field theory. Theory members are making major contributions to the understanding of phenomena being explored at the Tevatron and the LHC. They have proposed new models for Higgs bosons and have suggested new signals for extra dimensions and for the search for supersymmetric particles. During the seven-year period when OCHEP was partially funded through the DOE EPSCoR implementation grant, OCHEP members published over 500 refereed journal articles and made over 200 invited presentations at major conferences. The Center is also involved in education and outreach activities, offering summer research programs for high school teachers and college students and organizing summer workshops for high school teachers, sometimes coordinating with the QuarkNet programs at OSU and OU. Details of the Center can be found at http://ochep.phy.okstate.edu.

  14. STREAM2016: Streaming Requirements, Experience, Applications and Middleware Workshop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fox, Geoffrey; Jha, Shantenu; Ramakrishnan, Lavanya

    Department of Energy (DOE) Office of Science (SC) facilities, including accelerators, light sources, neutron sources, and sensors that study the environment and the atmosphere, are producing streaming data that need to be analyzed for next-generation scientific discoveries. There has been an explosion of new research and technologies for stream analytics arising from the academic and private sectors. However, there has been no corresponding effort in either documenting the critical research opportunities or building a community that can create and foster productive collaborations. The two-part workshop series STREAM: Streaming Requirements, Experience, Applications and Middleware Workshop (STREAM2015 and STREAM2016) was conducted to bring the community together and identify gaps and future efforts needed by both NSF and DOE. This report describes the discussions, outcomes, and conclusions from STREAM2016: Streaming Requirements, Experience, Applications and Middleware Workshop, the second of these workshops, held on March 22-23, 2016 in Tysons, VA. STREAM2016 focused on Department of Energy (DOE) applications and computational and experimental facilities, as well as software systems. Thus, the role of “streaming and steering” as a critical mode of connecting the experimental and computing facilities was pervasive throughout the workshop. Given the overlap in interests and challenges with industry, the workshop had significant presence from several innovative companies and major contributors. The requirements that drive the proposed research directions, identified in this report, show an important opportunity for building a competitive research and development program around streaming data. These findings and recommendations are consistent with the vision outlined in the NRC Frontiers of Data report and the National Strategic Computing Initiative (NSCI) [1, 2]. The discussions from the workshop are captured as topic areas covered in this report's sections.
The report discusses four research directions driven by current and future application requirements reflecting the areas identified as important by STREAM2016. These include (i) Algorithms, (ii) Programming Models, Languages and Runtime Systems, (iii) Human-in-the-Loop and Steering in Scientific Workflows, and (iv) Facilities.

  15. Ant colony optimization for solving university facility layout problem

    NASA Astrophysics Data System (ADS)

    Mohd Jani, Nurul Hafiza; Mohd Radzi, Nor Haizan; Ngadiman, Mohd Salihin

    2013-04-01

    Quadratic Assignment Problems (QAP) are classified as NP-hard. The QAP has been used to model many problems in areas such as operational research, combinatorial data analysis, and parallel and distributed computing, as well as optimization problems such as graph partitioning and the Traveling Salesman Problem (TSP). In the literature, researchers use exact algorithms, heuristic algorithms, and metaheuristic approaches to solve the QAP. The QAP is widely applied to the facility layout problem (FLP). In this paper we used the QAP to model a university facility layout problem in which 8 facilities must be assigned to 8 locations. Hence we modeled a QAP with n ≤ 10 and developed an Ant Colony Optimization (ACO) algorithm to solve the university facility layout problem. The objective is to assign n facilities to n locations such that the total product of flows and distances is minimized, where flow is the movement from one facility to another and distance is the distance between the location of one facility and the locations of the others. For this problem, the QAP objective is to minimize the total walking (flow) of lecturers from one destination to another (distance).
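    The QAP objective described above can be made concrete with a tiny brute-force sketch. The flow and distance matrices below are invented for illustration (the paper's 8-facility data are not reproduced here); for realistic sizes, ACO replaces the factorial enumeration with pheromone-guided solution construction:

```python
from itertools import permutations

# Hypothetical flow (movements between facilities) and distance
# (between locations) matrices for n = 4; illustrative numbers only.
flow = [
    [0, 3, 0, 2],
    [3, 0, 1, 4],
    [0, 1, 0, 5],
    [2, 4, 5, 0],
]
dist = [
    [0, 1, 2, 3],
    [1, 0, 1, 2],
    [2, 1, 0, 1],
    [3, 2, 1, 0],
]

def qap_cost(assignment):
    """Total cost: sum over facility pairs of flow times the distance
    between their assigned locations; assignment[i] is facility i's location."""
    n = len(assignment)
    return sum(
        flow[i][j] * dist[assignment[i]][assignment[j]]
        for i in range(n) for j in range(n)
    )

# Exhaustive search over all n! assignments is feasible only for small n.
best = min(permutations(range(4)), key=qap_cost)
print(best, qap_cost(best))
```

    An ACO solver keeps the same `qap_cost` objective but builds assignments probabilistically, biased by pheromone trails deposited on good facility-location pairings.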

  16. Construction of Blaze at the University of Illinois at Chicago: A Shared, High-Performance, Visual Computer for Next-Generation Cyberinfrastructure-Accelerated Scientific, Engineering, Medical and Public Policy Research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Maxine D.; Leigh, Jason

    2014-02-17

    The Blaze high-performance visual computing system serves the high-performance computing research and education needs of the University of Illinois at Chicago (UIC). Blaze consists of a state-of-the-art, networked computer cluster and an ultra-high-resolution visualization system called CAVE2(TM) that is currently not available anywhere else in Illinois. This system is connected via a high-speed 100-Gigabit network to the State of Illinois' I-WIRE optical network, as well as to national and international high-speed networks, such as Internet2 and the Global Lambda Integrated Facility. This enables Blaze to serve as an on-ramp to national cyberinfrastructure, such as the National Science Foundation’s Blue Waters petascale computer at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign and the Department of Energy’s Argonne Leadership Computing Facility (ALCF) at Argonne National Laboratory. DOE award # DE-SC005067, leveraged with NSF award #CNS-0959053 for “Development of the Next-Generation CAVE Virtual Environment (NG-CAVE),” enabled us to create a first-of-its-kind high-performance visual computing system. The UIC Electronic Visualization Laboratory (EVL) worked with two U.S. companies to advance their commercial products and maintain U.S. leadership in the global information technology economy. New applications enabled by the CAVE2/Blaze visual computing system are advancing scientific research and education in the U.S. and globally, and helping to train the next-generation workforce.

  17. Interdisciplinary Facilities that Support Collaborative Teaching and Learning

    ERIC Educational Resources Information Center

    Asoodeh, Mike; Bonnette, Roy

    2006-01-01

    It has become widely accepted that the computer is an indispensable tool in the study of science and technology. Thus, in recent years curricular programs such as Industrial Technology and associated scientific disciplines have been adopting and adapting the computer as a tool in new and innovative ways to support teaching, learning, and research.…

  18. Hydrodynamic Analyses and Evaluation of New Fluid Film Bearing Concepts

    NASA Technical Reports Server (NTRS)

    Keith, Theo G., Jr.; Dimofte, Florin

    1998-01-01

    Over the past several years, numerical and experimental investigations have been performed on a waved journal bearing. The research work was undertaken by Dr. Florin Dimofte, a Senior Research Associate in the Mechanical Engineering Department at the University of Toledo. Dr. Theo Keith, Distinguished University Professor in the Mechanical Engineering Department, was the Technical Coordinator of the project. The wave journal bearing is a bearing with a slight but precise variation in its circular profile such that a waved profile is circumscribed on the inner bearing diameter. The profile has a wave amplitude that is equal to a fraction of the bearing clearance. Prior to this period of research on the wave bearing, computer codes were written and an experimental facility was established. During this period of research, considerable effort was directed toward the study of the bearing's stability. The previously developed computer codes and the experimental facility were of critical importance in performing this stability research. A collection of papers and reports was written to describe the results of this work. The attached collection captures that effort and represents the research output during the grant period.

  19. Automated Help System For A Supercomputer

    NASA Technical Reports Server (NTRS)

    Callas, George P.; Schulbach, Catherine H.; Younkin, Michael

    1994-01-01

    Expert-system software developed to provide automated system of user-helping displays in supercomputer system at Ames Research Center Advanced Computer Facility. Users located at remote computer terminals connected to supercomputer and each other via gateway computers, local-area networks, telephone lines, and satellite links. Automated help system answers routine user inquiries about how to use services of computer system. Available 24 hours per day and reduces burden on human experts, freeing them to concentrate on helping users with complicated problems.
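    The core of such an automated help system can be sketched as a keyword-matching, rule-based responder with a human-escalation fallback. The rules below are hypothetical examples, not the actual Ames knowledge base:

```python
# Hypothetical rules: (keywords that trigger the rule, canned response).
RULES = [
    ({"batch", "submit"}, "Use the batch queue; see the queue manual for options."),
    ({"quota", "disk"}, "Check usage with the quota command; request increases from operations."),
    ({"login", "password"}, "Contact the help desk to reset credentials."),
]

def answer(query: str) -> str:
    """Return the first canned response whose keywords appear in the
    query, or escalate to a human expert for non-routine questions."""
    words = set(query.lower().split())
    for keywords, response in RULES:
        if keywords & words:  # any rule keyword present in the query
            return response
    return "No automated answer; escalating to a human expert."

print(answer("How do I submit a batch job?"))
```

    A production expert system would add an inference engine and a much richer knowledge base, but the routine-query/escalation split is the same.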

  20. New project to support scientific collaboration electronically

    NASA Astrophysics Data System (ADS)

    Clauer, C. R.; Rasmussen, C. E.; Niciejewski, R. J.; Killeen, T. L.; Kelly, J. D.; Zambre, Y.; Rosenberg, T. J.; Stauning, P.; Friis-Christensen, E.; Mende, S. B.; Weymouth, T. E.; Prakash, A.; McDaniel, S. E.; Olson, G. M.; Finholt, T. A.; Atkins, D. E.

    A new multidisciplinary effort is linking research in the upper atmospheric and space, computer, and behavioral sciences to develop a prototype electronic environment for conducting team science worldwide. A real-world electronic collaboration testbed has been established to support scientific work centered around the experimental operations being conducted with instruments from the Sondrestrom Upper Atmospheric Research Facility in Kangerlussuaq, Greenland. Such group computing environments will become an important component of the National Information Infrastructure initiative, which is envisioned as the high-performance communications infrastructure to support national scientific research.

  1. The aeroacoustics of supersonic jets

    NASA Technical Reports Server (NTRS)

    Morris, Philip J.; McLaughlin, Dennis K.

    1995-01-01

    This research project was a joint experimental/computational study of noise in supersonic jets. The experiments were performed in a low-to-moderate Reynolds number anechoic supersonic jet facility. Computations have focused on modeling the effect of an external shroud on the generation and radiation of jet noise. This report summarizes the results of the research program in the form of the Masters and Doctoral theses of those students who obtained their degrees with the assistance of this research grant. In addition, the presentations and publications made by the principal investigators and the research students are appended.

  2. Human-Computer Interaction and Virtual Environments

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K. (Compiler)

    1995-01-01

    The proceedings of the Workshop on Human-Computer Interaction and Virtual Environments are presented along with a list of attendees. The objectives of the workshop were to assess the state-of-technology and level of maturity of several areas in human-computer interaction and to provide guidelines for focused future research leading to effective use of these facilities in the design/fabrication and operation of future high-performance engineering systems.

  3. Fusion Energy Sciences Exascale Requirements Review. An Office of Science review sponsored jointly by Advanced Scientific Computing Research and Fusion Energy Sciences, January 27-29, 2016, Gaithersburg, Maryland

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, Choong-Seock; Greenwald, Martin; Riley, Katherine

    The additional computing power offered by the planned exascale facilities could be transformational across the spectrum of plasma and fusion research — provided that the new architectures can be efficiently applied to our problem space. The collaboration that will be required to succeed should be viewed as an opportunity to identify and exploit cross-disciplinary synergies. To assess the opportunities and requirements as part of the development of an overall strategy for computing in the exascale era, the Exascale Requirements Review meeting of the Fusion Energy Sciences (FES) community was convened January 27–29, 2016, with participation from a broad range of fusion and plasma scientists, specialists in applied mathematics and computer science, and representatives from the U.S. Department of Energy (DOE) and its major computing facilities. This report is a summary of that meeting and the preparatory activities for it and includes a wealth of detail to support the findings. Technical opportunities, requirements, and challenges are detailed in this report (and in the recent report on the Workshop on Integrated Simulation). Science applications are described, along with mathematical and computational enabling technologies. Also see http://exascaleage.org/fes/ for more information.

  4. NASA Lewis Wind Tunnel Model Systems Criteria

    NASA Technical Reports Server (NTRS)

    Soeder, Ronald H.; Haller, Henry C.

    1994-01-01

    This report describes criteria for the design, analysis, quality assurance, and documentation of models or test articles that are to be tested in the aeropropulsion facilities at the NASA Lewis Research Center. The report presents three methods for computing model allowable stresses on the basis of the yield stress or ultimate stress, and it gives quality assurance criteria for models tested in Lewis' aeropropulsion facilities. Both customer-furnished model systems and in-house model systems are discussed. The functions of the facility manager, project engineer, operations engineer, research engineer, and facility electrical engineer are defined. The formats for pretest meetings, prerun safety meetings, and the model criteria review are outlined. Then the format for the model systems report (a requirement for each model that is to be tested at NASA Lewis) is described, the engineers responsible for developing the model systems report are listed, and the timetable for its delivery to the facility manager is given.
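    A yield/ultimate-based allowable-stress computation of the kind the report describes can be sketched as taking the more restrictive of the two limits. The safety factors below are illustrative placeholders, not the report's actual criteria:

```python
def allowable_stress(yield_psi, ultimate_psi, sf_yield=3.0, sf_ultimate=4.0):
    """Allowable stress (psi) as the minimum of a yield-based limit and
    an ultimate-based limit. sf_yield and sf_ultimate are assumed,
    illustrative safety factors, not NASA Lewis' published values."""
    return min(yield_psi / sf_yield, ultimate_psi / sf_ultimate)

# e.g. a steel with 36 ksi yield and 58 ksi ultimate strength:
print(allowable_stress(36_000, 58_000))
```

    Whichever limit governs depends on the material's yield-to-ultimate ratio, which is why criteria documents typically state both factors.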

  5. Virtual worlds to support patient group communication? A questionnaire study investigating potential for virtual world focus group use by respiratory patients.

    PubMed

    Taylor, Michael J; Taylor, Dave; Vlaev, Ivo; Elkin, Sarah

    2017-01-01

    Recent advances in communication technologies enable potential provision of remote education for patients using computer-generated environments known as virtual worlds. Previous research has revealed highly variable levels of patient receptiveness to using information technologies for healthcare-related purposes. This preliminary study implemented a questionnaire investigating respiratory outpatients' attitudes toward, and access to, computer technologies, in order to assess the potential for using virtual worlds to facilitate health-related education for this sample. Ninety-four patients with a chronic respiratory condition completed surveys, which were distributed at a chest clinic. In accordance with our prediction, younger participants were more likely to be able to use, and have access to, a computer, and some patients were keen to explore the use of virtual worlds for healthcare-related purposes: of those with access to computer facilities, 14.50% expressed a willingness to attend a virtual world focus group. Results indicate that future virtual world health education facilities should be designed to cater to younger patients, because this group is most likely to accept and use such facilities. Within the study sample, this is likely to comprise people diagnosed with asthma. Future work could investigate the potential of creating a virtual world asthma education facility.

  6. Atmospheric concentrations of polybrominated diphenyl ethers at near-source sites.

    PubMed

    Cahill, Thomas M; Groskova, Danka; Charles, M Judith; Sanborn, James R; Denison, Michael S; Baker, Lynton

    2007-09-15

    Concentrations of polybrominated diphenyl ethers (PBDEs) were determined in air samples taken near suspected sources, namely an indoor computer laboratory, indoors and outdoors at an electronics recycling facility, and outdoors at an automotive shredding and metal recycling facility. The results showed that (1) PBDE concentrations in the computer laboratory were higher with the computers on than with the computers off, (2) indoor concentrations at an electronics recycling facility were as high as 650,000 pg/m3 for decabromodiphenyl ether (PBDE 209), and (3) PBDE 209 concentrations were up to 1900 pg/m3 at the downwind fenceline at an automotive shredding/metal recycling facility. The inhalation exposure estimates for all the sites were typically below 110 pg/kg/day, with the exception of the indoor air samples adjacent to the electronics shredding equipment, which gave exposure estimates upward of 40,000 pg/kg/day. Although there were elevated inhalation exposures at the three source sites, the exposure was not expected to cause adverse health effects based on the lowest reference dose (RfD) currently in the Integrated Risk Information System (IRIS), although these RfD values are currently being re-evaluated by the U.S. Environmental Protection Agency. More research is needed on the potential health effects of PBDEs.
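    Inhalation exposure estimates like those quoted above follow from a standard screening calculation: exposure = air concentration x daily inhaled volume / body weight. A sketch using generic adult defaults (20 m3/day, 70 kg), which are assumptions for illustration and not necessarily the study's exact parameters or time-activity adjustments:

```python
def inhalation_exposure(conc_pg_m3, inhaled_m3_day=20.0, body_kg=70.0):
    """Screening-level inhalation exposure in pg/kg/day.
    The 20 m3/day inhalation rate and 70 kg body weight are generic
    adult defaults, not the paper's parameters."""
    return conc_pg_m3 * inhaled_m3_day / body_kg

# e.g. the 1900 pg/m3 fenceline PBDE 209 concentration:
print(inhalation_exposure(1900))  # about 543 pg/kg/day under these assumptions
```

    Continuous exposure at the measured concentration is assumed here; study estimates are usually lower because they weight by time actually spent at the source.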

  7. Virtual worlds to support patient group communication? A questionnaire study investigating potential for virtual world focus group use by respiratory patients

    PubMed Central

    Taylor, Michael J.; Taylor, Dave; Vlaev, Ivo; Elkin, Sarah

    2015-01-01

    Recent advances in communication technologies enable potential provision of remote education for patients using computer-generated environments known as virtual worlds. Previous research has revealed highly variable levels of patient receptiveness to using information technologies for healthcare-related purposes. This preliminary study implemented a questionnaire investigating respiratory outpatients' attitudes toward, and access to, computer technologies, in order to assess the potential for using virtual worlds to facilitate health-related education for this sample. Ninety-four patients with a chronic respiratory condition completed surveys, which were distributed at a chest clinic. In accordance with our prediction, younger participants were more likely to be able to use, and have access to, a computer, and some patients were keen to explore the use of virtual worlds for healthcare-related purposes: of those with access to computer facilities, 14.50% expressed a willingness to attend a virtual world focus group. Results indicate that future virtual world health education facilities should be designed to cater to younger patients, because this group is most likely to accept and use such facilities. Within the study sample, this is likely to comprise people diagnosed with asthma. Future work could investigate the potential of creating a virtual world asthma education facility. PMID:28239187

  8. Aeroacoustic Simulation of a Nose Landing Gear in an Open Jet Facility Using FUN3D

    NASA Technical Reports Server (NTRS)

    Vatsa, Veer N.; Lockard, David P.; Khorrami, Mehdi R.; Carlson, Jan-Renee

    2012-01-01

    Numerical simulations have been performed for a partially-dressed, cavity-closed nose landing gear configuration that was tested in NASA Langley's closed-wall Basic Aerodynamic Research Tunnel (BART) and in the University of Florida's open-jet acoustic facility known as UFAFF. The unstructured-grid flow solver, FUN3D, developed at NASA Langley Research Center, is used to compute the unsteady flow field for this configuration. A hybrid Reynolds-averaged Navier-Stokes/large eddy simulation (RANS/LES) turbulence model is used for these computations. Time-averaged and instantaneous solutions compare favorably with the measured data. Unsteady flowfield data obtained from the FUN3D code are used as input to a Ffowcs Williams-Hawkings noise propagation code to compute the sound pressure levels at microphones placed in the farfield. Significant improvement in predicted noise levels is obtained when the flowfield data from the open-jet UFAFF simulations are used, as compared to the case using flowfield data from the closed-wall BART configuration.

  9. ASCR/HEP Exascale Requirements Review Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habib, Salman; Roser, Robert; Gerber, Richard

    This draft report summarizes and details the findings, results, and recommendations derived from the ASCR/HEP Exascale Requirements Review meeting held in June, 2015. The main conclusions are as follows. 1) Larger, more capable computing and data facilities are needed to support HEP science goals in all three frontiers: Energy, Intensity, and Cosmic. The expected scale of the demand at the 2025 timescale is at least two orders of magnitude -- and in some cases greater -- than that available currently. 2) The growth rate of data produced by simulations is overwhelming the current ability of both facilities and researchers to store and analyze it. Additional resources and new techniques for data analysis are urgently needed. 3) Data rates and volumes from HEP experimental facilities are also straining the ability to store and analyze large and complex data volumes. Appropriately configured leadership-class facilities can play a transformational role in enabling scientific discovery from these datasets. 4) A close integration of HPC simulation and data analysis will aid greatly in interpreting results from HEP experiments. Such an integration will minimize data movement and facilitate interdependent workflows. 5) Long-range planning between HEP and ASCR will be required to meet HEP's research needs. To best use ASCR HPC resources the experimental HEP program needs a) an established long-term plan for access to ASCR computational and data resources, b) an ability to map workflows onto HPC resources, c) the ability for ASCR facilities to accommodate workflows run by collaborations that can have thousands of individual members, d) to transition codes to the next-generation HPC platforms that will be available at ASCR facilities, e) to build up and train a workforce capable of developing and using simulations and analysis to support HEP scientific research on next-generation systems.

  11. Report on Computing and Networking in the Space Science Laboratory by the SSL Computer Committee

    NASA Technical Reports Server (NTRS)

    Gallagher, D. L. (Editor)

    1993-01-01

    The Space Science Laboratory (SSL) at Marshall Space Flight Center is a multiprogram facility. Scientific research is conducted in four discipline areas: earth science and applications, solar-terrestrial physics, astrophysics, and microgravity science and applications. Representatives from each of these discipline areas participate in a Laboratory computer requirements committee, which developed this document. Its purpose is to establish and discuss Laboratory objectives for computing and networking in support of science, and to lay the foundation for a collective, multiprogram approach to providing these services. Special recognition is given to the importance of the national and international efforts of our research communities toward the development of interoperable, network-based computer applications.

  12. Astronomy research via the Internet

    NASA Astrophysics Data System (ADS)

    Ratnatunga, Kavan U.

    Small developing countries may not have a dark site with good seeing for an astronomical observatory, or be able to afford the financial commitment to set up and support such a facility. However, much of astronomical research today is done with remote observations, such as from telescopes in space, or obtained by service observing at large facilities on the ground. Cutting-edge astronomical research can now be done with low-cost computers, with a good Internet connection to get on-line access to astronomical observations, journals and most recent preprints. E-mail allows fast, easy collaboration between research scientists around the world. An international program with some short-term collaborative visits could mine data and publish results from available astronomical observations for a fraction of the investment and cost of running even a small local observatory. Students who have been trained in the use of computers and software by such a program would also be more employable in the current job market. The Internet can reach you wherever you like to be and give you direct access to whatever you need for astronomical research.

  13. Efficacy of a Computer-Based Program on Acquisition of Reading Skills of Incarcerated Youth

    ERIC Educational Resources Information Center

    Shippen, Margaret E.; Morton, Rhonda Collins; Flynt, Samuel W.; Houchins, David E.; Smitherman, Tracy

    2012-01-01

    Despite the importance of literacy skill training for incarcerated youth, a very limited number of empirically based research studies have examined reading instruction in correctional facilities. The purpose of this study was to determine whether the Fast ForWord computer-assisted reading program improved the reading and spelling abilities of…

  14. Computer-Aided Facilities Management Systems (CAFM).

    ERIC Educational Resources Information Center

    Cyros, Kreon L.

    Computer-aided facilities management (CAFM) refers to a collection of software used with increasing frequency by facilities managers. The six major CAFM components are discussed with respect to their usefulness and popularity in facilities management applications: (1) computer-aided design; (2) computer-aided engineering; (3) decision support…

  15. The Revolutionary Vertical Lift Technology (RVLT) Project

    NASA Technical Reports Server (NTRS)

    Yamauchi, Gloria K.

    2018-01-01

    The Revolutionary Vertical Lift Technology (RVLT) Project is one of six projects in the Advanced Air Vehicles Program (AAVP) of the NASA Aeronautics Research Mission Directorate. The overarching goal of the RVLT Project is to develop and validate tools, technologies, and concepts to overcome key barriers for vertical lift vehicles. The project vision is to enable the next generation of vertical lift vehicles, with aggressive goals for efficiency, noise, and emissions, to expand current capabilities and develop new commercial markets. The RVLT Project invests in technologies that support conventional, non-conventional, and emerging vertical-lift aircraft in the very light to heavy vehicle classes. Research areas include acoustics, aeromechanics, drive systems, engines, icing, hybrid-electric systems, impact dynamics, experimental techniques, computational methods, and conceptual design. The project research is executed at NASA Ames, Glenn, and Langley Research Centers; the research extensively leverages partnerships with the US Army, the Federal Aviation Administration, industry, and academia. The primary facilities used by the project for testing of vertical-lift technologies include the 14- by 22-Ft Wind Tunnel, Icing Research Tunnel, National Full-Scale Aerodynamics Complex, 7- by 10-Ft Wind Tunnel, Rotor Test Cell, Landing and Impact Research Facility, Compressor Test Facility, Drive System Test Facilities, Transonic Turbine Blade Cascade Facility, Vertical Motion Simulator, Mobile Acoustic Facility, Exterior Effects Synthesis and Simulation Lab, and the NASA Advanced Supercomputing Complex. To learn more about the RVLT Project, please stop by booth #1004 or visit their website at https://www.nasa.gov/aeroresearch/programs/aavp/rvlt.

  16. A Simple and Resource-efficient Setup for the Computer-aided Drug Design Laboratory.

    PubMed

    Moretti, Loris; Sartori, Luca

    2016-10-01

    Undertaking modelling investigations for Computer-Aided Drug Design (CADD) requires a proper environment. In principle, this could be done on a single computer, but the reality of a drug discovery program requires robustness and high-throughput computing (HTC) to efficiently support the research. Therefore, a more capable alternative is needed, but its implementation has no widespread solution. Here, the realization of such a computing facility is discussed; all aspects are covered, from general layout to technical details. © 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. X-ray ptychography, fluorescence microscopy combo sheds new light on trace

    Science.gov Websites

  18. Next Generation Workload Management System For Big Data on Heterogeneous Distributed Computing

    DOE PAGES

    Klimentov, A.; Buncic, P.; De, K.; ...

    2015-05-22

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS and ALICE are the largest collaborations ever assembled in the sciences and are at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, both experiments rely on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System (WMS) for managing the workflow for all data processing on hundreds of data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. The scale is demonstrated by the following numbers: PanDA manages O(10^2) sites, O(10^5) cores, O(10^8) jobs per year, and O(10^3) users, and the ATLAS data volume is O(10^17) bytes. In 2013 we started an ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF). The project titled 'Next Generation Workload Management and Analysis System for Big Data' (BigPanDA) is funded by DOE ASCR and HEP. Extending PanDA to clouds and LCF presents new challenges in managing heterogeneity and supporting workflow. The BigPanDA project is underway to set up and tailor PanDA at the Oak Ridge Leadership Computing Facility (OLCF) and at the National Research Center "Kurchatov Institute" together with ALICE distributed computing and ORNL computing professionals. Our approach to integration of HPC platforms at the OLCF and elsewhere is to reuse, as much as possible, existing components of the PanDA system. Finally, we will present our current accomplishments with running the PanDA WMS at OLCF and other supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facilities infrastructure for High Energy and Nuclear Physics as well as other data-intensive science applications.
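
    The "single computing facility" view that the abstract describes rests on a pilot "pull" model: rather than a scheduler pushing jobs to specific sites, lightweight pilot processes start on each resource and pull work from a central queue, hiding site heterogeneity from the user. The sketch below illustrates only that pattern; the class and method names are illustrative, not the actual PanDA API.

```python
from queue import Queue, Empty

# Minimal pilot "pull" workload pattern: a central broker holds queued jobs,
# and pilots running on heterogeneous resources pull work until none remains.
# JobBroker/run_pilot are hypothetical names for illustration only.

class JobBroker:
    def __init__(self):
        self._jobs = Queue()

    def submit(self, job):
        self._jobs.put(job)

    def get_job(self):
        try:
            return self._jobs.get_nowait()
        except Empty:
            return None          # queue drained; pilot idles or exits

def run_pilot(broker, site):
    """Pilot loop: keep pulling jobs from the broker and run them locally."""
    completed = []
    while (job := broker.get_job()) is not None:
        completed.append(f"{job} ran at {site}")
    return completed

broker = JobBroker()
for i in range(3):
    broker.submit(f"job-{i}")
print(run_pilot(broker, "OLCF"))
```

    The key design property is that the broker never needs to know anything about the site: any resource that can start a pilot, from a grid site to a leadership-class machine's batch system, can drain the same queue.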

  20. Aeroelastic, CFD, and Dynamics Computation and Optimization for Buffet and Flutter Applications

    NASA Technical Reports Server (NTRS)

    Kandil, Osama A.

    1997-01-01

    Accomplishments achieved during the reporting period are listed. These accomplishments included 6 papers published in various journals or presented at various conferences; 1 abstract submitted to a technical conference; production of 2 animated movies; and a proposal for use of the National Aerodynamic Simulation Facility at NASA Ames Research Center for further research. The published and presented papers and animated movies addressed the following topics: aeroelasticity, computational fluid dynamics, structural dynamics, wing and tail buffet, vortical flow interactions, and delta wings.

  1. Design and implementation of a telecommunication interface for the TAATM/TCV real-time experiment

    NASA Technical Reports Server (NTRS)

    Nolan, J. D.

    1981-01-01

    The traffic situation display experiment of the terminal configured vehicle (TCV) research program requires a bidirectional data communications tie line to a computer complex. The tie line is used in a real-time environment on the CYBER 175 computer by the terminal area air traffic model (TAATM) simulation program. Aircraft position data are processed by TAATM, with the resultant output sent to the facility for the generation of air traffic situation displays, which are transmitted to a research aircraft.

  2. MIT Laboratory for Computer Science Progress Report, July 1984-June 1985

    DTIC Science & Technology

    1985-06-01

    Covers larger (up to several thousand machines) multiprocessor systems. This facility is funded by the newly formed Strategic Computing Program of the Defense Advanced Research Projects Agency. The report also lists laboratory personnel, including the group led by P. Szolovits with collaborating investigators at Tufts-New England Medical Center Hospital, and the Computation Structures group led by J. B. Dennis.

  3. Neutron flux characterization of californium-252 Neutron Research Facility at the University of Texas - Pan American by nuclear analytical technique

    NASA Astrophysics Data System (ADS)

    Wahid, Kareem; Sanchez, Patrick; Hannan, Mohammad

    2014-03-01

    In the field of nuclear science, neutron flux is an intrinsic property of nuclear reaction facilities that is the basis for experimental irradiation calculations and analysis. In the Rio Grande Valley (Texas), the UTPA Neutron Research Facility (NRF) is currently the only neutron facility available for experimental research purposes. The facility comprises a 20-microgram californium-252 neutron source surrounded by a shielding cascade containing different irradiation cavities. Thermal and fast neutron flux values for the UTPA NRF have yet to be fully investigated and may be of particular interest to biomedical studies in low neutron dose applications. Though a variety of techniques exist for the characterization of neutron flux, neutron activation analysis (NAA) of metal and nonmetal foils is a commonly utilized experimental method because of its detection sensitivity and availability. The aim of our current investigation is to employ foil activation in the determination of neutron flux values for the UTPA NRF for further research purposes. Neutron spectrum unfolding of the acquired experimental data via specialized software, and subsequent comparison for consistency with computational models, lends confidence to the results.
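
    Foil activation of the kind described above rests on the standard activation relation A = N σ φ (1 − e^(−λ t_irr)) e^(−λ t_d), which can be inverted for the flux φ once the foil's activity is counted. A minimal sketch of that inversion is below; the gold foil, cross section, and activity numbers are textbook illustrative values, not measurements from the UTPA facility.

```python
import math

# Foil-activation relation inverted for flux:
#   A = N * sigma * phi * (1 - exp(-lam*t_irr)) * exp(-lam*t_decay)
# All numeric inputs below are generic illustrative values.

def flux_from_activity(activity_bq, n_atoms, sigma_cm2, lam,
                       t_irr_s, t_decay_s):
    """Infer neutron flux (n/cm^2/s) from foil activity at counting time."""
    saturation = 1.0 - math.exp(-lam * t_irr_s)   # build-up during irradiation
    decay = math.exp(-lam * t_decay_s)            # decay before counting
    return activity_bq / (n_atoms * sigma_cm2 * saturation * decay)

# Example: 10 mg gold foil (Au-197 -> Au-198, half-life 2.70 d),
# irradiated 1 h, counted 10 min after removal, measured at 50 Bq.
N_A = 6.022e23
n_atoms = 0.010 / 196.97 * N_A          # Au-197 atoms in the foil
sigma = 98.7e-24                        # thermal capture cross section, cm^2
lam = math.log(2) / (2.70 * 86400)      # Au-198 decay constant, 1/s
phi = flux_from_activity(50.0, n_atoms, sigma, lam,
                         t_irr_s=3600.0, t_decay_s=600.0)
print(f"{phi:.3e} n/cm^2/s")
```

    In practice, several foils with different threshold reactions are irradiated together, and the resulting set of activities is fed to spectrum-unfolding software to separate thermal from fast flux, as the abstract notes.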

  4. LaRC local area networks to support distributed computing

    NASA Technical Reports Server (NTRS)

    Riddle, E. P.

    1984-01-01

    The Langley Research Center's (LaRC) Local Area Network (LAN) effort is discussed. LaRC initiated the development of a LAN to support a growing distributed computing environment at the Center. The purpose of the network is to provide an improved capability (over interactive and RJE terminal access) for sharing multivendor computer resources. Specifically, the network will provide a data highway for the transfer of files between mainframe computers, minicomputers, workstations, and personal computers. An important influence on the overall network design was the vital need of LaRC researchers to efficiently utilize the large CDC mainframe computers in the central scientific computing facility. Although there was a steady migration from a centralized to a distributed computing environment at LaRC in recent years, the workload on the central resources increased. Major emphasis in the network design was on communication with the central resources within the distributed environment. The network to be implemented will allow researchers to utilize the central resources, distributed minicomputers, workstations, and personal computers to obtain the proper level of computing power to efficiently perform their jobs.

  5. YALINA facility a sub-critical Accelerator- Driven System (ADS) for nuclear energy research facility description and an overview of the research program (1997-2008).

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gohar, Y.; Smith, D. L.; Nuclear Engineering Division

    2010-04-28

    The YALINA facility is a zero-power, sub-critical assembly driven by a conventional neutron generator. It was conceived, constructed, and put into operation at the Radiation Physics and Chemistry Problems Institute of the National Academy of Sciences of Belarus located in Minsk-Sosny, Belarus. This facility was conceived for the purpose of investigating the static and dynamic neutronics properties of accelerator driven sub-critical systems, and to serve as a neutron source for investigating the properties of nuclear reactions, in particular transmutation reactions involving minor-actinide nuclei. This report provides a detailed description of this facility and documents the progress of research carried out there during a period of approximately a decade since the facility was conceived and built until the end of 2008. During its history of development and operation to date (1997-2008), the YALINA facility has hosted several foreign groups that worked with the resident staff as collaborators. The participation of Argonne National Laboratory in the YALINA research programs commenced in 2005. For obvious reasons, special emphasis is placed in this report on the work at YALINA facility that has involved Argonne's participation. Attention is given here to the experimental program at YALINA facility as well as to analytical investigations aimed at validating codes and computational procedures and at providing a better understanding of the physics and operational behavior of the YALINA facility in particular, and ADS systems in general, during the period 1997-2008.

  6. The challenges of developing computational physics: the case of South Africa

    NASA Astrophysics Data System (ADS)

    Salagaram, T.; Chetty, N.

    2013-08-01

    Most modern scientific research problems are complex and interdisciplinary in nature. It is impossible to study such problems in detail without the use of computation in addition to theory and experiment. Although it is widely agreed that students should be introduced to computational methods at the undergraduate level, it remains a challenge to do this in a full traditional undergraduate curriculum. In this paper, we report on a survey that we conducted of undergraduate physics curricula in South Africa to determine the content and the approach taken in the teaching of computational physics. We also considered the pedagogy of computational physics at the postgraduate and research levels at various South African universities, research facilities and institutions. We conclude that the state of computational physics training in South Africa, especially at the undergraduate teaching level, is generally weak and needs to be given more attention at all universities. Failure to do so will impact negatively on the country's capacity to grow its endeavours generally in the field of computational sciences, with negative impacts on research, and in commerce and industry.

  7. Information Presentation and Control in a Modern Air Traffic Control Tower Simulator

    NASA Technical Reports Server (NTRS)

    Haines, Richard F.; Doubek, Sharon; Rabin, Boris; Harke, Stanton

    1996-01-01

    The proper presentation and management of information in America's largest and busiest (Level V) air traffic control towers calls for an in-depth understanding of many different human-computer considerations: user interface design for graphical, radar, and text; manual and automated data input hardware; information/display output technology; reconfigurable workstations; workload assessment; and many other related subjects. This paper discusses these subjects in the context of the Surface Development and Test Facility (SDTF) currently under construction at NASA's Ames Research Center, a full scale, multi-manned, air traffic control simulator which will provide the "look and feel" of an actual airport tower cab. Special emphasis will be given to the human-computer interfaces required for the different kinds of information displayed at the various controller and supervisory positions and to the computer-aided design (CAD) and other analytic, computer-based tools used to develop the facility.

  8. The space physics analysis network

    NASA Astrophysics Data System (ADS)

    Green, James L.

    1988-04-01

    The Space Physics Analysis Network, or SPAN, is emerging as a viable method for solving an immediate communication problem for space and Earth scientists and has been operational for nearly 7 years. SPAN, with its extension into Europe, utilizes computer-to-computer communications allowing mail, binary and text file transfer, and remote logon capability to over 1000 space science computer systems. The network has been used to successfully transfer real-time data to remote researchers for rapid data analysis, but its primary function is for non-real-time applications. One of the major advantages of using SPAN is its spacecraft mission independence. Space science researchers using SPAN are located in universities, industries and government institutions all across the United States and Europe. These researchers are in such fields as magnetospheric physics, astrophysics, ionospheric physics, atmospheric physics, climatology, meteorology, oceanography, planetary physics and solar physics. SPAN users have access to space and Earth science data bases, mission planning and information systems, and computational facilities for the purposes of facilitating correlative space data exchange, data analysis and space research. For example, the National Space Science Data Center (NSSDC), which manages the network, is providing facilities on SPAN such as the Network Information Center (SPAN NIC). SPAN has interconnections with several national and international networks such as HEPNET and TEXNET, forming a transparent DECnet network. The combined total number of computers now reachable over these combined networks is about 2000. In addition, SPAN supports full function capabilities over the international public packet-switched networks (e.g. TELENET) and has mail gateways to ARPANET, BITNET and JANET.

  9. Advanced Simulation and Computing Fiscal Year 2016 Implementation Plan, Version 0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCoy, M.; Archer, B.; Hendrickson, B.

    2015-08-27

    The Stockpile Stewardship Program (SSP) is an integrated technical program for maintaining the safety, surety, and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational capabilities to support these programs. The purpose of this IP is to outline key work requirements to be performed and to control individual work activities within the scope of work. Contractors may not deviate from this plan without a revised WA or subsequent IP.

  10. A research study for the preliminary definition of an aerophysics free-flight laboratory facility

    NASA Technical Reports Server (NTRS)

    Canning, Thomas N.

    1988-01-01

    A renewed interest in hypervelocity vehicles requires an increase in the knowledge of aerodynamic phenomena. Tests conducted with ground-based facilities can be used both to better understand the physics of hypervelocity flight, and to calibrate and validate computer codes designed to predict vehicle performance in the hypervelocity environment. This research reviews the requirements for aerothermodynamic testing and discusses the ballistic range and its capabilities. Examples of the kinds of testing performed in typical high performance ballistic ranges are described. We draw heavily on experience obtained in the ballistics facilities at NASA Ames Research Center, Moffett Field, California. Prospects for improving the capabilities of the ballistic range by using advanced instrumentation are discussed. Finally, recent developments in gun technology and their application to extend the capability of the ballistic range are summarized.

  11. Safety Precautions and Operating Procedures in an (A)BSL-4 Laboratory: 4. Medical Imaging Procedures.

    PubMed

    Byrum, Russell; Keith, Lauren; Bartos, Christopher; St Claire, Marisa; Lackemeyer, Matthew G; Holbrook, Michael R; Janosko, Krisztina; Barr, Jason; Pusl, Daniela; Bollinger, Laura; Wada, Jiro; Coe, Linda; Hensley, Lisa E; Jahrling, Peter B; Kuhn, Jens H; Lentz, Margaret R

    2016-10-03

    Medical imaging using animal models for human diseases has been utilized for decades; however, until recently, medical imaging of diseases induced by high-consequence pathogens has not been possible. In 2014, the National Institutes of Health, National Institute of Allergy and Infectious Diseases, Integrated Research Facility at Fort Detrick opened an Animal Biosafety Level 4 (ABSL-4) facility to assess the clinical course and pathology of infectious diseases in experimentally infected animals. Multiple imaging modalities including computed tomography (CT), magnetic resonance imaging, positron emission tomography, and single photon emission computed tomography are available to researchers for these evaluations. The focus of this article is to describe the workflow for safely obtaining a CT image of a live guinea pig in an ABSL-4 facility. These procedures include animal handling, anesthesia, and preparing and monitoring the animal until recovery from sedation. We will also discuss preparing the imaging equipment, performing quality checks, communication methods from "hot side" (containing pathogens) to "cold side," and moving the animal from the holding room to the imaging suite.

  12. Redirecting Under-Utilised Computer Laboratories into Cluster Computing Facilities

    ERIC Educational Resources Information Center

    Atkinson, John S.; Spennemann, Dirk H. R.; Cornforth, David

    2005-01-01

    Purpose: To provide administrators at an Australian university with data on the feasibility of redirecting under-utilised computer laboratory facilities into a distributed high-performance computing facility. Design/methodology/approach: The individual log-in records for each computer located in the computer laboratories at the university were…

  13. NHERI: Advancing the Research Infrastructure of the Multi-Hazard Community

    NASA Astrophysics Data System (ADS)

    Blain, C. A.; Ramirez, J. A.; Bobet, A.; Browning, J.; Edge, B.; Holmes, W.; Johnson, D.; Robertson, I.; Smith, T.; Zuo, D.

    2017-12-01

    The Natural Hazards Engineering Research Infrastructure (NHERI), supported by the National Science Foundation (NSF), is a distributed, multi-user national facility that provides the natural hazards research community with access to an advanced research infrastructure. NHERI comprises a Network Coordination Office (NCO), a cloud-based cyberinfrastructure (DesignSafe-CI), a computational modeling and simulation center (SimCenter), and eight Experimental Facilities (EFs), including a post-disaster, rapid response research facility (RAPID). Ultimately, NHERI enables researchers to explore and test ground-breaking concepts to protect homes, businesses, and infrastructure lifelines from earthquakes, windstorms, tsunamis, and surge, enabling innovations that help prevent natural hazards from becoming societal disasters. When coupled with education and community outreach, NHERI will facilitate research and educational advances that contribute knowledge and innovation toward improving the resiliency of the nation's civil infrastructure to withstand natural hazards. The unique capabilities of, and Year 1 coordinating activities between, NHERI's DesignSafe-CI, the SimCenter, and the individual EFs will be presented. Basic descriptions of each component are also found at https://www.designsafe-ci.org/facilities/. Also to be discussed are the various roles of the NCO in leading development of a 5-year multi-hazard science plan, coordinating facility scheduling and fostering the sharing of technical knowledge and best practices, leading education and outreach programs such as the recent Summer Institute and multi-facility REU program, ensuring a platform for technology transfer to practicing engineers, and developing strategic national and international partnerships to support a diverse multi-hazard research and user community.

  14. Future experimental needs to support applied aerodynamics - A transonic perspective

    NASA Technical Reports Server (NTRS)

    Gloss, Blair B.

    1992-01-01

    Advancements in facilities, test techniques, and instrumentation are needed to provide data required for the development of advanced aircraft and to verify computational methods. An industry survey of major users of wind tunnel facilities at Langley Research Center (LaRC) was recently carried out to determine future facility requirements, test techniques, and instrumentation requirements; results from this survey are reflected in this paper. In addition, areas related to transonic testing at LaRC which are either currently being developed or are recognized as needing improvements are discussed.

  15. Evaluative studies in nuclear medicine research. Emission-computed tomography assessment. Progress report 1 January-15 August 1981

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Potchen, E.J.

    Questions regarding what imaging performance goals need to be met to produce effective biomedical research using positron emission computed tomography (PECT), how near those performance goals are to being realized by imaging systems, and the dependence of currently unachieved performance goals on design and operational factors have been addressed in the past year, along with refinement of economic estimates for the capital and operating costs of a PECT research facility. The two primary sources of information have been solicitations of expert opinion and review of current literature. (ACR)

  16. Techniques for animation of CFD results. [computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Horowitz, Jay; Hanson, Jeffery C.

    1992-01-01

    Video animation is becoming increasingly vital to the computational fluid dynamics researcher, not just for presentation, but for recording and comparing dynamic visualizations that are beyond the current capabilities of even the most powerful graphic workstation. To meet these needs, Lewis Research Center has recently established a facility to provide users with easy access to advanced video animation capabilities. However, producing animation that is both visually effective and scientifically accurate involves various technological and aesthetic considerations that must be understood both by the researcher and those supporting the visualization process. These considerations include: scan conversion, color conversion, and spatial ambiguities.

  17. Science and technology review, March 1997

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Upadhye, R.

    The articles in this month's issue are entitled Site 300's New Contained Firing Facility; Computational Electromagnetics: Codes and Capabilities; Ergonomics Research: Impact on Injuries; and The Linear Electric Motor: Instability at 1,000 g's.

  18. Time transfer between the Goddard Optical Research Facility and the U.S. Naval Observatory using 100 picosecond laser pulses

    NASA Technical Reports Server (NTRS)

    Alley, C. O.; Rayner, J. D.; Steggerda, C. A.; Mullendore, J. V.; Small, L.; Wagner, S.

    1983-01-01

    A horizontal two-way time comparison link in air between the University of Maryland laser ranging and time transfer equipment at the Goddard Optical Research Facility (GORF) 1.2 m telescope and the Time Services Division of the U.S. Naval Observatory (USNO) was established. Flat mirrors of 25 cm and 30 cm diameter respectively were placed on top of the Washington Cathedral and on a water tower at the Beltsville Agricultural Research Center. Two optical corner reflectors at the USNO reflect the laser pulses back to the GORF. Light pulses of 100 ps duration and an energy of several hundred microjoules are sent at the rate of 10 pulses per second. The detection at the USNO is by means of an RCA C30902E avalanche photodiode and the timing is accomplished by an HP 5370A computing counter and an HP 1000 computer with respect to a 10 pps pulse train from the Master Clock.

  19. Light transport and general aviation aircraft icing research requirements

    NASA Technical Reports Server (NTRS)

    Breeze, R. K.; Clark, G. M.

    1981-01-01

    A short term and a long term icing research and technology program plan was drafted for NASA LeRC based on 33 separate research items. The specific items listed resulted from a comprehensive literature search, organized and assisted by a computer management file and an industry/Government agency survey. Assessment of the current facilities and icing technology was accomplished by presenting summaries of ice sensitive components and protection methods; and assessments of penalty evaluation, the experimental data base, ice accretion prediction methods, research facilities, new protection methods, ice protection requirements, and icing instrumentation. The intent of the research plan was to determine what icing research NASA LeRC must do or sponsor to ultimately provide for increased utilization and safety of light transport and general aviation aircraft.

  20. Designing Facilities for Collaborative Operations

    NASA Technical Reports Server (NTRS)

    Norris, Jeffrey; Powell, Mark; Backes, Paul; Steinke, Robert; Tso, Kam; Wales, Roxana

    2003-01-01

    A methodology for designing operational facilities for collaboration by multiple experts has begun to take shape as an outgrowth of a project to design such facilities for scientific operations of the planned 2003 Mars Exploration Rover (MER) mission. The methodology could also be applicable to the design of military "situation rooms" and other facilities for terrestrial missions. It was recognized in this project that modern mission operations depend heavily upon the collaborative use of computers. It was further recognized that tests have shown that layout of a facility exerts a dramatic effect on the efficiency and endurance of the operations staff. The facility designs (for example, see figure) and the methodology developed during the project reflect this recognition. One element of the methodology is a metric, called effective capacity, that was created for use in evaluating proposed MER operational facilities and may also be useful for evaluating other collaboration spaces, including meeting rooms and military situation rooms. The effective capacity of a facility is defined as the number of people in the facility who can be meaningfully engaged in its operations. A person is considered to be meaningfully engaged if the person can (1) see, hear, and communicate with everyone else present; (2) see the material under discussion (typically data on a piece of paper, computer monitor, or projection screen); and (3) provide input to the product under development by the group. The effective capacity of a facility is less than the number of people that can physically fit in the facility. For example, a typical office that contains a desktop computer has an effective capacity of 4, while a small conference room that contains a projection screen has an effective capacity of around 10.
Little or no benefit would be derived from allowing the number of persons in an operational facility to exceed its effective capacity: At best, the operations staff would be underutilized; at worst, operational performance would deteriorate. Elements of this methodology were applied to the design of three operations facilities for a series of rover field tests. These tests were observed by human-factors researchers and their conclusions are being used to refine and extend the methodology to be used in the final design of the MER operations facility. Further work is underway to evaluate the use of personal digital assistant (PDA) units as portable input interfaces and communication devices in future mission operations facilities. A PDA equipped for wireless communication and Ethernet, Bluetooth, or another networking technology would cost less than a complete computer system, and would enable a collaborator to communicate electronically with computers and with other collaborators while moving freely within the virtual environment created by a shared immersive graphical display.
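    The effective-capacity metric described in this record reduces to a count over three engagement criteria. The sketch below is illustrative only: the `Occupant` fields and the example room are assumptions made for demonstration, not details from the MER design work.

    ```python
    # Hypothetical sketch of the "effective capacity" metric: a person counts
    # toward the capacity only if all three engagement criteria hold.
    from dataclasses import dataclass

    @dataclass
    class Occupant:
        can_see_and_hear_everyone: bool   # (1) sees, hears, communicates with all present
        can_see_material: bool            # (2) sees the data under discussion
        can_provide_input: bool           # (3) can contribute to the group's product

    def effective_capacity(occupants):
        """Count occupants who are 'meaningfully engaged' per all three criteria."""
        return sum(
            1 for p in occupants
            if p.can_see_and_hear_everyone and p.can_see_material and p.can_provide_input
        )

    # A notional small conference room with a projection screen: ten engaged
    # participants plus two observers who cannot reach the shared material.
    room = [Occupant(True, True, True)] * 10 + [Occupant(True, False, False)] * 2
    print(effective_capacity(room))  # 10
    ```

    As the record notes, the twelve physically present occupants exceed the room's effective capacity of ten; the metric flags the two extra people as unengaged.
    
    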

  1. The NHERI RAPID Facility: Enabling the Next-Generation of Natural Hazards Reconnaissance

    NASA Astrophysics Data System (ADS)

    Wartman, J.; Berman, J.; Olsen, M. J.; Irish, J. L.; Miles, S.; Gurley, K.; Lowes, L.; Bostrom, A.

    2017-12-01

    The NHERI post-disaster, rapid response research (or "RAPID") facility, headquartered at the University of Washington (UW), is a collaboration between UW, Oregon State University, Virginia Tech, and the University of Florida. The RAPID facility will enable natural hazard researchers to conduct next-generation quick response research through reliable acquisition and community sharing of high-quality, post-disaster data sets that will enable characterization of civil infrastructure performance under natural hazard loads, evaluation of the effectiveness of current and previous design methodologies, understanding of socio-economic dynamics, calibration of computational models used to predict civil infrastructure component and system response, and development of solutions for resilient communities. The facility will provide investigators with the hardware, software and support services needed to collect, process and assess perishable interdisciplinary data following extreme natural hazard events. Support to the natural hazards research community will be provided through training and educational activities, field deployment services, and by promoting public engagement with science and engineering. Specifically, the RAPID facility is undertaking the following strategic activities: (1) acquiring, maintaining, and operating state-of-the-art data collection equipment; (2) developing and supporting mobile applications to support interdisciplinary field reconnaissance; (3) providing advisory services and basic logistics support for research missions; (4) facilitating the systematic archiving, processing and visualization of acquired data in DesignSafe-CI; (5) training a broad user base through workshops and other activities; and (6) engaging the public through citizen science, as well as through community outreach and education. The facility commenced operations in September 2016 and will begin field deployments in September 2018.
This poster will provide an overview of the vision for the RAPID facility, the equipment that will be available for use, the facility's operations, and opportunities for user training and facility use.

  2. Test Facilities and Experience on Space Nuclear System Developments at the Kurchatov Institute

    NASA Astrophysics Data System (ADS)

    Ponomarev-Stepnoi, Nikolai N.; Garin, Vladimir P.; Glushkov, Evgeny S.; Kompaniets, George V.; Kukharkin, Nikolai E.; Madeev, Vicktor G.; Papin, Vladimir K.; Polyakov, Dmitry N.; Stepennov, Boris S.; Tchuniyaev, Yevgeny I.; Tikhonov, Lev Ya.; Uksusov, Yevgeny I.

    2004-02-01

    The complexity of space fission systems and the stringent requirements on minimizing their mass and dimensions, along with the desire to reduce development costs, demand experimental work whose results can be used in design, safety substantiation, and licensing procedures. Experimental facilities are intended to solve the following tasks: obtaining benchmark data for computer code validation, substantiating design solutions when computational efforts are too expensive, quality control in the production process, and "iron-clad" substantiation of criticality safety design solutions for licensing and public relations. The NARCISS and ISKRA critical facilities, and the unique ORM shielding-investigation facility at the operating OR nuclear research reactor, were created at the Kurchatov Institute to solve these tasks. The range of activities performed at these facilities within previous Russian nuclear power system programs is briefly described in the paper. This experience should be analyzed in terms of the methodological approach to developing future space nuclear systems (such analysis is beyond the scope of this paper). Because these facilities are available for experiments, a brief description of their critical assemblies and characteristics is given in this paper.

  3. Matched Index of Refraction Flow Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McIlroy, Hugh

    What's 27 feet long, 10 feet tall and full of mineral oil (3000 gallons' worth)? If you said INL's Matched Index of Refraction facility, give yourself a gold star. Scientists use computers to model the inner workings of nuclear reactors, and MIR helps validate those models. INL's Hugh McIlroy explains in this video. You can learn more about INL energy research at the lab's facebook site http://www.facebook.com/idahonationallaboratory.

  4. Matched Index of Refraction Flow Facility

    ScienceCinema

    McIlroy, Hugh

    2018-01-08

    What's 27 feet long, 10 feet tall and full of mineral oil (3000 gallons' worth)? If you said INL's Matched Index of Refraction facility, give yourself a gold star. Scientists use computers to model the inner workings of nuclear reactors, and MIR helps validate those models. INL's Hugh McIlroy explains in this video. You can learn more about INL energy research at the lab's facebook site http://www.facebook.com/idahonationallaboratory.

  5. CFD Simulations of the IHF Arc-Jet Flow: Compression-Pad/Separation Bolt Wedge Tests

    NASA Technical Reports Server (NTRS)

    Gokcen, Tahir; Skokova, Kristina A.

    2017-01-01

    This paper reports computational analyses in support of two wedge tests in a high enthalpy arc-jet facility at NASA Ames Research Center. These tests were conducted using two different wedge models, each placed in a free jet downstream of a corresponding different conical nozzle in the Ames 60-MW Interaction Heating Facility. Panel test articles included a metallic separation bolt embedded in the compression-pad and heat shield materials, resulting in a circular protuberance over a flat plate. As part of the test calibration runs, surface pressure and heat flux measurements on water-cooled calibration plates integrated with the wedge models were also obtained. Surface heating distributions on the test articles as well as arc-jet test environment parameters for each test configuration are obtained through computational fluid dynamics simulations, consistent with the facility and calibration measurements. The present analysis comprises simulations of the nonequilibrium flow field in the facility nozzle and test box, and of the flow over the test articles, with comparisons against the measured calibration data.

  6. Brief Survey of TSC Computing Facilities

    DOT National Transportation Integrated Search

    1972-05-01

    The Transportation Systems Center (TSC) has four, essentially separate, in-house computing facilities. We shall call them Honeywell Facility, the Hybrid Facility, the Multimode Simulation Facility, and the Central Facility. In addition to these four,...

  7. Spacecraft crew procedures from paper to computers

    NASA Technical Reports Server (NTRS)

    Oneal, Michael; Manahan, Meera

    1991-01-01

    Described here is a research project that uses human factors and computer systems knowledge to explore and help guide the design and creation of an effective Human-Computer Interface (HCI) for spacecraft crew procedures. By having a computer system behind the user interface, it is possible to have increased procedure automation, related system monitoring, and personalized annotation and help facilities. The research project includes the development of computer-based procedure system HCI prototypes and a testbed for experiments that measure the effectiveness of HCI alternatives in order to make design recommendations. The testbed will include a system for procedure authoring, editing, training, and execution. Progress on developing HCI prototypes for a middeck experiment performed on Space Shuttle Mission STS-34 and for upcoming medical experiments are discussed. The status of the experimental testbed is also discussed.

  8. [Organization of clinical research: in general and visceral surgery].

    PubMed

    Schneider, M; Werner, J; Weitz, J; Büchler, M W

    2010-04-01

    The structural organization of research facilities within a surgical university center should aim at strengthening the department's research output and likewise provide opportunities for the scientific education of academic surgeons. We suggest a model in which several independent research groups within a surgical department engage in research projects covering various aspects of surgically relevant basic, translational or clinical research. In order to enhance the translational aspects of surgical research, a permanent link needs to be established between the department's scientific research projects and its chief interests in clinical patient care. Importantly, a focus needs to be placed on obtaining evidence-based data to judge the efficacy of novel diagnostic and treatment concepts. Integration of modern technologies from the fields of physics, computer science and molecular medicine into surgical research necessitates cooperation with external research facilities, which can be strengthened by coordinated support programs offered by research funding institutions.

  9. Closely Spaced Independent Parallel Runway Simulation.

    DTIC Science & Technology

    1984-10-01

    facility consists of the Central Computer Facility, the Controller Laboratory, and the Simulator Pilot Complex. CENTRAL COMPUTER FACILITY. The Central... Computer Facility consists of a group of mainframes, minicomputers, and associated peripherals which host the operational and data acquisition...in the Controller Laboratory and convert their verbal directives into a keyboard entry which is transmitted to the Central Computer Complex, where

  10. NACA Computer Operates an IBM Telereader

    NASA Image and Video Library

    1952-02-21

    A staff member from the Computing Section at the National Advisory Committee for Aeronautics (NACA) Lewis Flight Propulsion Laboratory operates an International Business Machines (IBM) telereader at the 8- by 6-Foot Supersonic Wind Tunnel. The telereader was used to measure recorded data from motion picture film or oscillographs. The machine could perform 50 measurements per minute. The component to her right is a telerecordex that was used to convert the telereader measurements into decimal form and record the data on computer punch cards. During test runs in the 8- by 6-foot tunnel, or the other large test facilities, pressure sensors on the test article were connected to mercury-filled manometer tubes located below the test section. The mercury would rise or fall in relation to the pressure fluctuations in the test section. Initially, female staff members, known as “computers,” transcribed all the measurements by hand. The process became automated with the introduction of the telereader and other data reduction equipment in the early 1950s. The Computing Section staff members were still needed to operate the machines. The Computing Section was introduced during World War II to relieve short-handed research engineers of some of the tedious work. The computers made the initial computations and plotted the data graphically. The researcher then analyzed the data and either summarized the findings in a report or made modifications or ran the test again. The computers and analysts were located in the Altitude Wind Tunnel Shop and Office Building office wing during the 1940s. They were transferred to the new facility when the 8- by 6-Foot tunnel began operations in 1948.

  11. GIS Facility and Services at the Ronald Greeley Center for Planetary Studies

    NASA Astrophysics Data System (ADS)

    Nelson, D. M.; Williams, D. A.

    2017-06-01

    At the RGCPS, we established a Geographic Information Systems (GIS) computer laboratory, where we instruct researchers how to use GIS and image processing software. Seminars demonstrate viewing, integrating, and digitally mapping planetary data.

  12. Toward the Factory of the Future.

    ERIC Educational Resources Information Center

    Hazony, Yehonathan

    1983-01-01

    Computer-integrated manufacturing (CIM) involves use of data processing technology as the vehicle for full integration of the total manufacturing process. A prototype research and educational facility for CIM developed with industrial sponsorship at Princeton University is described. (JN)

  13. NASA Ames aerospace systems directorate research

    NASA Technical Reports Server (NTRS)

    Albers, James A.

    1991-01-01

    The Aerospace Systems Directorate is one of four research directorates at the NASA Ames Research Center. The Directorate conducts research and technology development for advanced aircraft and aircraft systems in intelligent computational systems and human-machine systems for aeronautics and space. The Directorate manages research and aircraft technology development projects, and operates and maintains major wind tunnels and flight simulation facilities. The Aerospace Systems Directorate's research and technology as it relates to NASA agency goals and specific strategic thrusts are discussed.

  14. Multi-year Content Analysis of User Facility Related Publications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patton, Robert M; Stahl, Christopher G; Hines, Jayson

    2013-01-01

    Scientific user facilities provide resources and support that enable scientists to conduct experiments or simulations pertinent to their respective research. Consequently, it is critical to have an informed understanding of the impact and contributions that these facilities have on scientific discoveries. Leveraging insight into scientific publications that acknowledge the use of these facilities enables more informed decisions by facility management and sponsors in regard to policy, resource allocation, and influencing the direction of science, as well as a more effective understanding of the impact of a scientific user facility. This work discusses preliminary results of mining scientific publications that utilized resources at the Oak Ridge Leadership Computing Facility (OLCF) at Oak Ridge National Laboratory (ORNL). These results show promise in identifying and leveraging multi-year trends and providing a higher resolution view of the impact that a scientific user facility may have on scientific discoveries.
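    The kind of multi-year acknowledgment analysis this record describes can be illustrated with a minimal sketch. Everything below is hypothetical (the sample records, the matching pattern, and the function are for demonstration, not the OLCF/ORNL pipeline):

    ```python
    # Minimal sketch: scan publication records for facility-acknowledgment text
    # and tally a per-year trend. Records here are invented examples.
    import re
    from collections import Counter

    # Hypothetical publication records: (year, acknowledgment text)
    records = [
        (2010, "This research used resources of the Oak Ridge Leadership Computing Facility."),
        (2011, "Computations were performed at the OLCF under an allocation award."),
        (2011, "The authors thank their colleagues for helpful discussions."),
        (2012, "Resources of the Oak Ridge Leadership Computing Facility (OLCF) were used."),
    ]

    FACILITY_PATTERN = re.compile(r"Oak Ridge Leadership Computing Facility|OLCF")

    def acknowledgments_by_year(records):
        """Count, per year, how many records acknowledge the facility."""
        counts = Counter()
        for year, text in records:
            if FACILITY_PATTERN.search(text):
                counts[year] += 1
        return dict(counts)

    print(acknowledgments_by_year(records))  # {2010: 1, 2011: 1, 2012: 1}
    ```

    A real pipeline would draw records from a publication database and use a richer matching rule, but the multi-year trend view is the same shape: a year-indexed count of acknowledging publications.
    
    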

  15. Aeronautics Technology Possibilities for 2000: Report of a workshop

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The potential of aeronautical research and technology (R&T) development, which could provide the basis for facility planning and long range guidance of R&T programs and could establish justification for support of aeronautical research and technology, was studied. The projections served specific purposes: (1) to provide a base for research and future facilities needed to support the projected technologies and the development of advanced vehicles; (2) to provide insight into the possible state of the art in aeronautical technology by the year 2000 for civil and military planners of air vehicles and systems. Topics discussed include: aerodynamics; propulsion; structures; materials; guidance, navigation and control; computer and information technology; human factors; and systems integration.

  16. Mission Simulation Facility: Simulation Support for Autonomy Development

    NASA Technical Reports Server (NTRS)

    Pisanich, Greg; Plice, Laura; Neukom, Christian; Flueckiger, Lorenzo; Wagner, Michael

    2003-01-01

    The Mission Simulation Facility (MSF) supports research in autonomy technology for planetary exploration vehicles. Using HLA (High Level Architecture) across distributed computers, the MSF connects users' autonomy algorithms with provided or third-party simulations of robotic vehicles and planetary surface environments, including onboard components and scientific instruments. Simulation fidelity is variable to meet changing needs as autonomy technology advances in Technology Readiness Level (TRL). A virtual robot operating in a virtual environment offers numerous advantages over actual hardware, including availability, simplicity, and risk mitigation. The MSF is in use by researchers at NASA Ames Research Center (ARC) and has demonstrated basic functionality. Continuing work will support the needs of a broader user base.

  17. Computational model of gamma irradiation room at ININ

    NASA Astrophysics Data System (ADS)

    Rodríguez-Romo, Suemi; Patlan-Cardoso, Fernando; Ibáñez-Orozco, Oscar; Vergara Martínez, Francisco Javier

    2018-03-01

    In this paper, we present a model of the gamma irradiation room at the National Institute of Nuclear Research (ININ is its acronym in Spanish) in Mexico to improve the use of physics in dosimetry for human protection. We deal with air-filled ionization chambers and in-house scientific computing, framed within both the GEANT4 scheme and our analytical approach, to characterize the irradiation room. This room is the only secondary dosimetry facility in Mexico. Our aim is to optimize its experimental designs, facilities, and industrial applications of physical radiation. The computational results provided by our model are supported by all the known experimental data regarding the performance of the ININ gamma irradiation room and allow us to predict the values of the main variables related to this fully enclosed space to within an acceptable margin of error.

  18. NASA Aeronautics: Research and Technology Program Highlights

    NASA Technical Reports Server (NTRS)

    1990-01-01

    This report contains numerous color illustrations to describe the NASA programs in aeronautics. The basic ideas involved are explained in brief paragraphs. The seven chapters deal with Subsonic aircraft, High-speed transport, High-performance military aircraft, Hypersonic/Transatmospheric vehicles, Critical disciplines, National facilities, and Organizations & installations. Some individual aircraft discussed are: the SR-71 aircraft, aerospace planes, the high-speed civil transport (HSCT), the X-29 forward-swept wing research aircraft, and the X-31 aircraft. Critical disciplines discussed are numerical aerodynamic simulation, computational fluid dynamics, computational structural dynamics, and new experimental testing techniques.

  19. GATECloud.net: a platform for large-scale, open-source text processing on the cloud.

    PubMed

    Tablan, Valentin; Roberts, Ian; Cunningham, Hamish; Bontcheva, Kalina

    2013-01-28

    Cloud computing is increasingly being regarded as a key enabler of the 'democratization of science', because on-demand, highly scalable cloud computing facilities enable researchers anywhere to carry out data-intensive experiments. In the context of natural language processing (NLP), algorithms tend to be complex, which makes their parallelization and deployment on cloud platforms a non-trivial task. This study presents a new, unique, cloud-based platform for large-scale NLP research: GATECloud.net. It enables researchers to carry out data-intensive NLP experiments by harnessing the vast, on-demand compute power of the Amazon cloud. Important infrastructural issues are dealt with by the platform, completely transparently for the researcher: load balancing, efficient data upload and storage, deployment on the virtual machines, security, and fault tolerance. We also include a cost-benefit analysis and usage evaluation.

  20. Computational fluid dynamics for propulsion technology: Geometric grid visualization in CFD-based propulsion technology research

    NASA Technical Reports Server (NTRS)

    Ziebarth, John P.; Meyer, Doug

    1992-01-01

    The coordination of the resources, facilities, and specialized personnel needed to provide technical integration activities in computational fluid dynamics applied to propulsion technology is examined. This involves coordinating CFD activities among government, industry, and universities. Current geometry modeling, grid generation, and graphical methods are established for use in the analysis of CFD design methodologies.

  1. Wright Research and Development Center Test Facilities Handbook

    DTIC Science & Technology

    1990-01-01

    Variable Temperature (2-400K) and Field (0-5 Tesla) Squid Susceptometer Variable Temperature (10-80K) and Field (0-10 Tesla) Transport Current...determine products of combustion using extraction type probes INSTRUMENTATION: Mini computer/data acquisition system Networking provides access to larger...data recorder, Masscomp MC-500 computer with acquisition digitizer, laser and ink-jet printers, lo-pass filters, pulse code modulation AVAILABILITY

  2. Laser Assisted CVD Growth of A1N and GaN

    DTIC Science & Technology

    1990-08-31

    additional cost sharing. RESEARCH FACILITIES The work is being performed in the Howard University Laser Laboratory. This is a free-standing building...would be used to optimize computer models of the laser induced CVD reactor. FACILITIES AND EQUIPMENT - ADDITIONAL COST SHARING This year Howard ... University has provided $45,000 for the purchase of an excimer laser to be shared by Dr. Crye for the diode laser probe experiments and another Assistant

  3. Mira: Argonne's 10-petaflops supercomputer

    ScienceCinema

    Papka, Michael; Coghlan, Susan; Isaacs, Eric; Peters, Mark; Messina, Paul

    2018-02-13

    Mira, Argonne's petascale IBM Blue Gene/Q system, ushers in a new era of scientific supercomputing at the Argonne Leadership Computing Facility. An engineering marvel, the 10-petaflops supercomputer is capable of carrying out 10 quadrillion calculations per second. As a machine for open science, any researcher with a question that requires large-scale computing resources can submit a proposal for time on Mira, typically in allocations of millions of core-hours, to run programs for their experiments. This adds up to billions of hours of computing time per year.

  4. Mira: Argonne's 10-petaflops supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Papka, Michael; Coghlan, Susan; Isaacs, Eric

    2013-07-03

    Mira, Argonne's petascale IBM Blue Gene/Q system, ushers in a new era of scientific supercomputing at the Argonne Leadership Computing Facility. An engineering marvel, the 10-petaflops supercomputer is capable of carrying out 10 quadrillion calculations per second. As a machine for open science, any researcher with a question that requires large-scale computing resources can submit a proposal for time on Mira, typically in allocations of millions of core-hours, to run programs for their experiments. This adds up to billions of hours of computing time per year.
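The "billions of hours of computing time per year" figure is easy to sanity-check. Assuming Mira's widely reported count of 786,432 cores (an assumption; the record above does not state it):

```python
# Back-of-envelope check of Mira's annual core-hour capacity.
cores = 786_432            # widely reported core count (assumption)
hours_per_year = 365 * 24  # 8,760
core_hours = cores * hours_per_year
print(core_hours)          # 6889144320, i.e. ~6.9 billion core-hours/year
```

Even at partial utilization, allocations "in the millions of core-hours" add up to billions of hours per year, consistent with the abstract.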

  5. Multi-man flight simulator

    NASA Technical Reports Server (NTRS)

    Macdonald, G.

    1983-01-01

    A prototype Air Traffic Control facility and multiman flight simulator facility was designed and one of the component simulators fabricated as a proof of concept. The facility was designed to provide a number of independent simple simulator cabs that would have the capability of some local, stand alone processing that would in turn interface with a larger host computer. The system can accommodate up to eight flight simulators (commercially available instrument trainers) which could be operated stand alone if no graphics were required or could operate in a common simulated airspace if connected to the host computer. A proposed addition to the original design is the capability of inputing pilot inputs and quantities displayed on the flight and navigation instruments to the microcomputer when the simulator operates in the stand alone mode to allow independent use of these commercially available instrument trainers for research. The conceptual design of the system and progress made to date on its implementation are described.

  6. Low Prevalence of Chronic Beryllium Disease Among Workers at a Nuclear Weapons Research and Development Facility

    PubMed Central

    Arjomandi, Mehrdad; Seward, James; Gotway, Michael B.; Nishimura, Stephen; Fulton, George P.; Thundiyil, Josef; King, Talmadge E.; Harber, Philip; Balmes, John R.

    2012-01-01

    Objective To study the prevalence of beryllium sensitization (BeS) and chronic beryllium disease (CBD) in a cohort of workers from a nuclear weapons research and development facility. Methods We evaluated 50 workers with BeS with medical and occupational histories, physical examination, chest imaging with high-resolution computed tomography (N = 49), and pulmonary function testing. Forty of these workers also underwent bronchoscopy for bronchoalveolar lavage and transbronchial biopsies. Results The mean duration of employment at the facility was 18 years and the mean latency (from first possible exposure) to time of evaluation was 32 years. Five of the workers had CBD at the time of evaluation (based on histology or high-resolution computed tomography); three others had evidence of probable CBD. Conclusions These workers with BeS, characterized by a long duration of potential Be exposure and a long latency, had a low prevalence of CBD. PMID:20523233

  7. Yahoo! Compute Coop (YCC). A Next-Generation Passive Cooling Design for Data Centers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robison, AD; Page, Christina; Lytle, Bob

    The purpose of the Yahoo! Compute Coop (YCC) project is to research, design, build and implement a greenfield "efficient data factory" and to specifically demonstrate that the YCC concept is feasible for large facilities housing tens of thousands of heat-producing computing servers. The project scope for the Yahoo! Compute Coop technology includes: - Analyzing and implementing ways in which to drastically decrease energy consumption and waste output. - Analyzing the laws of thermodynamics and implementing naturally occurring environmental effects in order to maximize the "free cooling" for large data center facilities. "Free cooling" is the direct usage of outside air to cool the servers vs. traditional "mechanical cooling" which is supplied by chillers or other Dx units. - Redesigning and simplifying building materials and methods. - Shortening and simplifying build-to-operate schedules while at the same time reducing initial build and operating costs. Selected for its favorable climate, the greenfield project site is located in Lockport, NY. Construction on the 9.0 MW critical load data center facility began in May 2009, with the fully operational facility deployed in September 2010. The relatively low initial build cost, compatibility with current server and network models, and the efficient use of power and water are all key features that make it a highly compatible and globally implementable design innovation for the data center industry. Yahoo! Compute Coop technology is designed to achieve 99.98% uptime availability. This integrated building design allows for free cooling 99% of the year via the building's unique shape and orientation, as well as server physical configuration.
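The "free cooling 99% of the year" figure is climate-dependent. A toy estimate of the free-cooling fraction, given hourly outdoor dry-bulb temperatures and an assumed supply-air threshold (real economizer design also accounts for humidity and equipment limits):

```python
def free_cooling_fraction(hourly_temps_c, threshold_c=24.0):
    # Fraction of hours when outside air is cool enough to use directly
    # ("free cooling"). The 24 C threshold is illustrative only.
    ok = sum(1 for t in hourly_temps_c if t <= threshold_c)
    return ok / len(hourly_temps_c)
```

Fed a year of hourly temperature data for a cool climate like Lockport, NY, a function of this shape is how one would verify that free cooling covers the large majority of hours.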

  8. Enabling Extreme Scale Earth Science Applications at the Oak Ridge Leadership Computing Facility

    NASA Astrophysics Data System (ADS)

    Anantharaj, V. G.; Mozdzynski, G.; Hamrud, M.; Deconinck, W.; Smith, L.; Hack, J.

    2014-12-01

    The Oak Ridge Leadership Computing Facility (OLCF), established at the Oak Ridge National Laboratory (ORNL) under the auspices of the U.S. Department of Energy (DOE), welcomes investigators from universities, government agencies, national laboratories and industry who are prepared to perform breakthrough research across a broad domain of scientific disciplines, including earth and space sciences. Titan, the OLCF flagship system, is currently listed as #2 in the Top500 list of supercomputers in the world, and the largest available for open science. The computational resources are allocated primarily via the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program, sponsored by the U.S. DOE Office of Science. In 2014, over 2.25 billion core hours on Titan were awarded via INCITE projects, including 14% of the allocation toward earth sciences. The INCITE competition is also open to research scientists based outside the USA. In fact, international research projects account for 12% of the INCITE awards in 2014. The INCITE scientific review panel also includes 20% participation from international experts. Recent accomplishments in earth sciences at OLCF include the world's first continuous simulation of 21,000 years of earth's climate history (2009); and an unprecedented simulation of a magnitude 8 earthquake over 125 sq. miles. One of the ongoing international projects involves scaling the ECMWF Integrated Forecasting System (IFS) model to over 200K cores of Titan. ECMWF is a partner in the EU funded Collaborative Research into Exascale Systemware, Tools and Applications (CRESTA) project. The significance of the research carried out within this project is the demonstration of techniques required to scale current generation Petascale capable simulation codes towards the performance levels required for running on future Exascale systems.
One of the techniques pursued by ECMWF is to use Fortran2008 coarrays to overlap computations and communications and to reduce the total volume of data communicated. Use of Titan has enabled ECMWF to plan future scalability developments and resource requirements. We will also discuss the best practices developed over the years in navigating logistical, legal and regulatory hurdles involved in supporting the facility's diverse user community.
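The allocation shares quoted above translate into concrete hour counts, e.g. the earth-science portion of the 2014 INCITE awards on Titan:

```python
# Earth-science share of 2014 INCITE core-hour awards on Titan.
total_core_hours = 2.25e9              # over 2.25 billion core hours awarded
earth_share = 0.14 * total_core_hours  # 14% toward earth sciences
print(earth_share / 1e6)               # ~315 million core-hours
```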

  9. 2009 ALCF annual report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beckman, P.; Martin, D.; Drugan, C.

    2010-11-23

    This year the Argonne Leadership Computing Facility (ALCF) delivered nearly 900 million core hours of science. The research conducted at their leadership class facility touched our lives in both minute and massive ways - whether it was studying the catalytic properties of gold nanoparticles, predicting protein structures, or unearthing the secrets of exploding stars. The authors remained true to their vision to act as the forefront computational center in extending science frontiers by solving pressing problems for our nation. Our success in this endeavor was due mainly to the Department of Energy's (DOE) INCITE (Innovative and Novel Computational Impact on Theory and Experiment) program. The program awards significant amounts of computing time to computationally intensive, unclassified research projects that can make high-impact scientific advances. This year, DOE allocated 400 million hours of time to 28 research projects at the ALCF. Scientists from around the world conducted the research, representing such esteemed institutions as the Princeton Plasma Physics Laboratory, National Institute of Standards and Technology, and European Center for Research and Advanced Training in Scientific Computation. Argonne also provided Director's Discretionary allocations for research challenges, addressing such issues as reducing aerodynamic noise, critical for next-generation 'green' energy systems. Intrepid - the ALCF's 557-teraflops IBM Blue Gene/P supercomputer - enabled astounding scientific solutions and discoveries. Intrepid went into full production five months ahead of schedule. As a result, the ALCF nearly doubled the days of production computing available to the DOE Office of Science, INCITE awardees, and Argonne projects. One of the fastest supercomputers in the world for open science, the energy-efficient system uses about one-third as much electricity as a machine of comparable size built with more conventional parts.
In October 2009, President Barack Obama recognized the excellence of the entire Blue Gene series by awarding it the National Medal of Technology and Innovation. Other noteworthy achievements included the ALCF's collaboration with the National Energy Research Scientific Computing Center (NERSC) to examine cloud computing as a potential new computing paradigm for scientists. Named Magellan, the DOE-funded initiative will explore which science application programming models work well within the cloud, as well as evaluate the challenges that come with this new paradigm. The ALCF obtained approval for its next-generation machine, a 10-petaflops system to be delivered in 2012. This system will allow ever more pressing problems to be resolved even more expeditiously through breakthrough science in the years to come.

  10. Research on Automatic Programming

    DTIC Science & Technology

    1975-12-31

    Sequential processes, deadlocks, and semaphore primitives, Ph.D. Thesis, Harvard University, November 1974; Center for Research in Computing...verified. 13 Code generated to effect the synchronization makes use of the ECL control extension facility (Prenner's CI, see [Prenner]). The...semaphore operations [Dijkstra] is being developed. Initial results for this code generator are very encouraging; in many cases generated code is

  11. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De, K; Jha, S; Klimentov, A

    2016-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing at over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the MIRA supercomputer at the Argonne Leadership Computing Facility (ALCF), the supercomputer at the National Research Center Kurchatov Institute, IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCFs' multi-core worker nodes.
This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments and has been in full production for the ATLAS experiment since September 2015. We will present our current accomplishments with running PanDA WMS at supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facilities' infrastructure for High Energy and Nuclear Physics as well as other data-intensive science applications, such as bioinformatics and astro-particle physics.
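The light-weight MPI wrapper idea, running many independent single-threaded workloads in parallel across ranks, reduces to partitioning a task list by rank. A sketch of that partitioning without an actual MPI dependency (with mpi4py, `rank` and `size` would come from the communicator rather than being plain arguments):

```python
def my_tasks(tasks, rank, size):
    # Round-robin partition of independent single-threaded workloads:
    # rank r of `size` ranks takes every size-th task starting at r.
    # With real MPI this is what each rank would run after reading
    # rank/size from COMM_WORLD; here they are arguments so the
    # sketch runs anywhere.
    return tasks[rank::size]
```

Each rank then executes only its own slice, which is how a single batch-queue job can drive many serial payloads on a leadership-class machine's multi-core nodes.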

  12. LXtoo: an integrated live Linux distribution for the bioinformatics community

    PubMed Central

    2012-01-01

    Background Recent advances in high-throughput technologies dramatically increase biological data generation. However, many research groups lack computing facilities and specialists. This is an obstacle that remains to be addressed. Here, we present a Linux distribution, LXtoo, to provide a flexible computing platform for bioinformatics analysis. Findings Unlike most of the existing live Linux distributions for bioinformatics, which limit their usage to sequence analysis and protein structure prediction, LXtoo incorporates a comprehensive collection of bioinformatics software, including data mining tools for microarray and proteomics, protein-protein interaction analysis, and computationally complex tasks like molecular dynamics. Moreover, most of the programs have been configured and optimized for high performance computing. Conclusions LXtoo aims to provide a well-supported computing environment tailored for bioinformatics research, reducing duplication of efforts in building computing infrastructure. LXtoo is distributed as a Live DVD and freely available at http://bioinformatics.jnu.edu.cn/LXtoo. PMID:22813356

  13. LXtoo: an integrated live Linux distribution for the bioinformatics community.

    PubMed

    Yu, Guangchuang; Wang, Li-Gen; Meng, Xiao-Hua; He, Qing-Yu

    2012-07-19

    Recent advances in high-throughput technologies dramatically increase biological data generation. However, many research groups lack computing facilities and specialists. This is an obstacle that remains to be addressed. Here, we present a Linux distribution, LXtoo, to provide a flexible computing platform for bioinformatics analysis. Unlike most of the existing live Linux distributions for bioinformatics, which limit their usage to sequence analysis and protein structure prediction, LXtoo incorporates a comprehensive collection of bioinformatics software, including data mining tools for microarray and proteomics, protein-protein interaction analysis, and computationally complex tasks like molecular dynamics. Moreover, most of the programs have been configured and optimized for high performance computing. LXtoo aims to provide a well-supported computing environment tailored for bioinformatics research, reducing duplication of efforts in building computing infrastructure. LXtoo is distributed as a Live DVD and freely available at http://bioinformatics.jnu.edu.cn/LXtoo.

  14. Using Computer Simulation for Neurolab 2 Mission Planning

    NASA Technical Reports Server (NTRS)

    Sanders, Betty M.

    1997-01-01

    This paper presents an overview of the procedure used in the creation of a computer simulation video generated by the Graphics Research and Analysis Facility at NASA/Johnson Space Center. The simulation was preceded by an analysis of the anthropometric characteristics of crew members and the workspace requirements for 13 experiments to be conducted on Neurolab 2, which is dedicated to neuroscience and behavioral research. Neurolab 2 is being carried out as a partnership among national domestic research institutes and international space agencies. The video is a tour of the Spacelab module as it will be configured for STS-90, scheduled for launch in the spring of 1998, and identifies experiments that can be conducted in parallel during that mission. Therefore, this paper also addresses methods for using computer modeling to facilitate the mission planning activity.

  15. A decade of aeroacoustic research at NASA Ames Research Center

    NASA Technical Reports Server (NTRS)

    Schmitz, Frederic H.; Mosher, M.; Kitaplioglu, Cahit; Cross, J.; Chang, I.

    1988-01-01

    The rotorcraft aeroacoustic research accomplishments of the past decade at Ames Research Center are reviewed. These include an extensive sequence of flight, ground, and wind tunnel tests that have utilized the facilities to guide and pioneer theoretical research. Many of these experiments were of benchmark quality. The experiments were used to isolate the inadequacies of linear theory in high-speed impulsive noise research, led to the development of theoretical approaches, and guided the application of the emerging discipline of computational fluid dynamics to rotorcraft aeroacoustic problems.

  16. A Virtual Astronomical Research Machine in No Time (VARMiNT)

    NASA Astrophysics Data System (ADS)

    Beaver, John

    2012-05-01

    We present early results of using virtual machine software to help make astronomical research computing accessible to a wider range of individuals. Our Virtual Astronomical Research Machine in No Time (VARMiNT) is an Ubuntu Linux virtual machine with free, open-source software already installed and configured (and in many cases documented). The purpose of VARMiNT is to provide a ready-to-go astronomical research computing environment that can be freely shared between researchers, or between amateur and professional, teacher and student, etc., and to circumvent the often-difficult task of configuring a suitable computing environment from scratch. Thus we hope that VARMiNT will make it easier for individuals to engage in research computing even if they have no ready access to the facilities of a research institution. We describe our current version of VARMiNT and some of the ways it is being used at the University of Wisconsin - Fox Valley, a two-year teaching campus of the University of Wisconsin System, as a means to enhance student independent study research projects and to facilitate collaborations with researchers at other locations. We also outline some future plans and prospects.

  17. Apollo experience report: Real-time auxiliary computing facility development

    NASA Technical Reports Server (NTRS)

    Allday, C. E.

    1972-01-01

    The Apollo real-time auxiliary computing function and facility were an extension of the facility used during the Gemini Program. The facility was expanded to include support of all areas of flight control, and computer programs were developed for mission and mission-simulation support. The scope of the function was expanded to include prime mission support functions in addition to engineering evaluations, and the facility became a mandatory mission support facility. The facility functioned as a full-scale mission support activity until after the first manned lunar landing mission. After the Apollo 11 mission, the function and facility gradually reverted to a nonmandatory, offline, on-call operation because the real-time program flexibility was increased and verified sufficiently to eliminate the need for redundant computations. The evaluation of the facility and function and recommendations for future programs are discussed in this report.

  18. Jackson State University's Center for Spatial Data Research and Applications: New facilities and new paradigms

    NASA Technical Reports Server (NTRS)

    Davis, Bruce E.; Elliot, Gregory

    1989-01-01

    Jackson State University recently established the Center for Spatial Data Research and Applications, a Geographic Information System (GIS) and remote sensing laboratory. Taking advantage of new technologies and new directions in the spatial (geographic) sciences, JSU is building a Center of Excellence in Spatial Data Management. New opportunities for research, applications, and employment are emerging. GIS requires fundamental shifts and new demands in traditional computer science and geographic training. The Center is not merely another computer lab but one setting the pace in a new applied frontier. GIS and its associated technologies are discussed. The Center's facilities are described. An ARC/INFO GIS runs on a VAX mainframe, with numerous workstations. Image processing packages include ELAS, LIPS, VICAR, and ERDAS. A host of hardware and software peripherals are used in support. Numerous projects are underway, such as the construction of a Gulf of Mexico environmental data base, development of AI in image processing, a land use dynamics study of metropolitan Jackson, and others. A new academic interdisciplinary program in Spatial Data Management is under development, combining courses in Geography and Computer Science. The broad range of JSU's GIS and remote sensing activities is addressed. The impacts on changing paradigms in the university and in the professional world conclude the discussion.

  19. Electric Power Research Institute | Energy Systems Integration Facility |

    Science.gov Websites

    -10 megawatts of aggregated generation capacity. EPRI and Schneider Electric

  20. ASC FY17 Implementation Plan, Rev. 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamilton, P. G.

    The Stockpile Stewardship Program (SSP) is an integrated technical program for maintaining the safety, surety, and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational capabilities to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources that support annual stockpile assessment and certification, study advanced nuclear weapons design and manufacturing processes, analyze accident scenarios and weapons aging, and provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balance of resources, including technical staff, hardware, simulation software, and computer science solutions.

  1. Statistical Evaluation of Utilization of the ISS

    NASA Technical Reports Server (NTRS)

    Andrews, Ross; Andrews, Alida

    2006-01-01

    PayLoad Utilization Modeler (PLUM) is a statistical-modeling computer program used to evaluate the effectiveness of utilization of the International Space Station (ISS) in terms of the number of research facilities that can be operated within a specified interval of time. PLUM is designed to balance the requirements of research facilities aboard the ISS against the resources available on the ISS. PLUM comprises three parts: an interface for the entry of data on constraints and on required and available resources, a database that stores these data as well as the program output, and a modeler. The modeler comprises two subparts: one that generates tens of thousands of random combinations of research facilities and another that calculates the usage of resources for each of those combinations. The results of these calculations are used to generate graphical and tabular reports to determine which facilities are most likely to be operable on the ISS, to identify which ISS resources are inadequate to satisfy the demands upon them, and to generate other data useful in allocation of and planning of resources.
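The modeler's two subparts, generating random facility combinations and calculating resource usage for each, can be sketched as follows. All names (`feasible`, `sample_combinations`) and the resource model are illustrative assumptions, not PLUM's actual interface:

```python
import random

def feasible(combo, demands, budget):
    # A combination fits if its summed demand stays within every
    # resource budget (e.g. power, crew time, data downlink).
    for res, cap in budget.items():
        if sum(demands[f][res] for f in combo) > cap:
            return False
    return True

def sample_combinations(facilities, demands, budget, trials=10_000, seed=0):
    # Subpart 1: draw random combinations of research facilities.
    # Subpart 2: check each against the available resources, and count
    # how often each facility appears in a feasible mix.
    rng = random.Random(seed)
    counts = {f: 0 for f in facilities}
    for _ in range(trials):
        k = rng.randint(1, len(facilities))
        combo = rng.sample(facilities, k)
        if feasible(combo, demands, budget):
            for f in combo:
                counts[f] += 1
    return counts
```

Facilities with high counts are the ones most likely to be operable; resources that zero out most combinations are the inadequate ones the report would flag.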

  2. Comparison of groundwater flow in Southern California coastal aquifers

    USGS Publications Warehouse

    Hanson, Randall T.; Izbicki, John A.; Reichard, Eric G.; Edwards, Brian D.; Land, Michael; Martin, Peter

    2009-01-01

    Maintaining the sustainability of Southern California coastal aquifers requires joint management of surface water and groundwater (conjunctive use). This requires new data collection and analyses (including research drilling, modern geohydrologic investigations, and development of detailed computer groundwater models that simulate the supply and demand components separately), implementation of new facilities (including spreading and injection facilities for artificial recharge), and establishment of new institutions and policies that help to sustain the water resources and better manage regional development.

  3. Structures and Dynamics Division: Research and technology plans for FY 1983 and accomplishments for FY 1982

    NASA Technical Reports Server (NTRS)

    Bales, K. S.

    1983-01-01

    The objectives, expected results, approach, and milestones for research projects of the IPAD Project Office and the impact dynamics, structural mechanics, and structural dynamics branches of the Structures and Dynamics Division are presented. Research facilities are described. Topics covered include computer aided design; general aviation/transport crash dynamics; aircraft ground performance; composite structures; failure analysis, space vehicle dynamics; and large space structures.

  4. Computing and information services at the Jet Propulsion Laboratory - A management approach to a diversity of needs

    NASA Technical Reports Server (NTRS)

    Felberg, F. H.

    1984-01-01

    The Jet Propulsion Laboratory, a research and development organization with about 5,000 employees, presents a complicated set of requirements for an institutional system of computing and informational services. The approach taken by JPL in meeting this challenge is one of controlled flexibility. A central communications network is provided, together with selected computing facilities for common use. At the same time, staff members are given considerable discretion in choosing the mini- and microcomputers that they believe will best serve their needs. Consultation services, computer education, and other support functions are also provided.

  5. Workflow Management Systems for Molecular Dynamics on Leadership Computers

    NASA Astrophysics Data System (ADS)

    Wells, Jack; Panitkin, Sergey; Oleynik, Danila; Jha, Shantenu

    Molecular Dynamics (MD) simulations play an important role in a range of disciplines from Material Science to Biophysical systems and account for a large fraction of cycles consumed on computing resources. Increasingly, science problems require the successful execution of "many" MD simulations as opposed to a single MD simulation. There is a need to provide scalable and flexible approaches to the execution of the workload. We present preliminary results on the Titan computer at the Oak Ridge Leadership Computing Facility that demonstrate a general capability to manage workload execution agnostic of a specific MD simulation kernel or execution pattern, and in a manner that integrates disparate grid-based and supercomputing resources. Our results build upon our extensive experience of distributed workload management in the high-energy physics ATLAS project using PanDA (Production and Distributed Analysis System), coupled with recent conceptual advances in our understanding of workload management on heterogeneous resources. We will discuss how we will generalize these initial capabilities towards a more production level service on DOE leadership resources. This research is sponsored by US DOE/ASCR and used resources of the OLCF computing facility.
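Managing "many" simulations rather than one is, at its core, a scheduling problem: pack independent tasks onto a fixed pool of nodes. A minimal greedy list-scheduling sketch (illustrative only; not the PanDA implementation):

```python
import heapq

def schedule(task_durations, n_nodes):
    # Greedy list scheduling: each task goes to the node that frees up
    # first. Returns per-node task assignments and the makespan.
    heap = [(0.0, node) for node in range(n_nodes)]  # (free_at, node)
    heapq.heapify(heap)
    assignment = {node: [] for node in range(n_nodes)}
    for tid, dur in enumerate(task_durations):
        free_at, node = heapq.heappop(heap)
        assignment[node].append(tid)
        heapq.heappush(heap, (free_at + dur, node))
    makespan = max(t for t, _ in heap)
    return assignment, makespan
```

A real workload manager adds data staging, fault recovery, and kernel-agnostic task descriptions on top of this core placement loop.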

  6. Ubiquitous Green Computing Techniques for High Demand Applications in Smart Environments

    PubMed Central

    Zapater, Marina; Sanchez, Cesar; Ayala, Jose L.; Moya, Jose M.; Risco-Martín, José L.

    2012-01-01

    Ubiquitous sensor network deployments, such as the ones found in Smart cities and Ambient intelligence applications, require constantly increasing computational capacity in order to process data and offer services to users. The nature of these applications implies the usage of data centers. Research has paid much attention to the energy consumption of the sensor nodes in WSN infrastructures. However, supercomputing facilities are the ones presenting a higher economic and environmental impact due to their very high power consumption. The latter problem, however, has been disregarded in the field of smart environment services. This paper proposes an energy-minimization workload assignment technique, based on heterogeneity and application-awareness, that redistributes low-demand computational tasks from high-performance facilities to idle nodes with low and medium resources in the WSN infrastructure. These non-optimal allocation policies reduce the energy consumed by the whole infrastructure and the total execution time. PMID:23112621

  7. Ubiquitous green computing techniques for high demand applications in Smart environments.

    PubMed

    Zapater, Marina; Sanchez, Cesar; Ayala, Jose L; Moya, Jose M; Risco-Martín, José L

    2012-01-01

    Ubiquitous sensor network deployments, such as the ones found in Smart cities and Ambient intelligence applications, require constantly increasing computational capacity in order to process data and offer services to users. The nature of these applications implies the usage of data centers. Research has paid much attention to the energy consumption of the sensor nodes in WSN infrastructures. However, supercomputing facilities are the ones presenting a higher economic and environmental impact due to their very high power consumption. The latter problem, however, has been disregarded in the field of smart environment services. This paper proposes an energy-minimization workload assignment technique, based on heterogeneity and application-awareness, that redistributes low-demand computational tasks from high-performance facilities to idle nodes with low and medium resources in the WSN infrastructure. These non-optimal allocation policies reduce the energy consumed by the whole infrastructure and the total execution time.
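The assignment idea, moving low-demand tasks onto idle low-power nodes, can be sketched as a greedy placement that prefers the cheapest feasible node. The data model below is invented for illustration and is not the paper's formulation:

```python
def assign(tasks, nodes):
    # tasks: list of (name, cpu_demand).
    # nodes: dict name -> {"capacity": units, "energy_per_unit": cost}.
    # Greedy energy-aware placement: each task (largest first) goes to
    # the cheapest node with enough spare capacity; tasks that fit
    # nowhere fall back to the data center (None here).
    placement = {}
    free = {n: spec["capacity"] for n, spec in nodes.items()}
    for name, demand in sorted(tasks, key=lambda t: -t[1]):
        candidates = [n for n in nodes if free[n] >= demand]
        if not candidates:
            placement[name] = None
            continue
        best = min(candidates, key=lambda n: nodes[n]["energy_per_unit"])
        free[best] -= demand
        placement[name] = best
    return placement
```

With idle sensor-network nodes modeled as cheap, small-capacity entries and the supercomputing facility as an expensive, large one, low-demand tasks naturally migrate off the high-power facility, which is the effect the paper targets.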

  8. Production Support Flight Control Computers: Research Capability for F/A-18 Aircraft at Dryden Flight Research Center

    NASA Technical Reports Server (NTRS)

    Carter, John F.

    1997-01-01

    NASA Dryden Flight Research Center (DFRC) is working with the United States Navy to complete ground testing and initiate flight testing of a modified set of F/A-18 flight control computers. The Production Support Flight Control Computers (PSFCC) can give any fleet F/A-18 airplane an in-flight, pilot-selectable research control law capability. NASA DFRC can efficiently flight test the PSFCC for the following four reasons: (1) Six F/A-18 chase aircraft are available which could be used with the PSFCC; (2) An F/A-18 processor-in-the-loop simulation exists for validation testing; (3) The expertise has been developed in programming the research processor in the PSFCC; and (4) A well-defined process has been established for clearing flight control research projects for flight. This report presents a functional description of the PSFCC. Descriptions of the NASA DFRC facilities, PSFCC verification and validation process, and planned PSFCC projects are also provided.

  9. Research Opportunities

    Science.gov Websites

    Los Alamos National Laboratory Search Site submit About Mission Business Newsroom Publications Los Innovation in New Mexico Los Alamos Collaboration for Explosives Detection (LACED) SensorNexus Exascale Computing Project (ECP) User Facilities Center for Integrated Nanotechnologies (CINT) Los Alamos Neutron

  10. Communication and computing technology in biocontainment laboratories using the NEIDL as a model.

    PubMed

    McCall, John; Hardcastle, Kath

    2014-07-01

    The National Emerging Infectious Diseases Laboratories (NEIDL), Boston University, is a globally unique biocontainment research facility housing biosafety level 2 (BSL-2), BSL-3, and BSL-4 laboratories. Located in the BioSquare area at the University's Medical Campus, it is part of a national network of secure facilities constructed to study infectious diseases of major public health concern. The NEIDL allows for basic, translational, and clinical phases of research to be carried out in a single facility with the overall goal of accelerating understanding, treatment, and prevention of infectious diseases. The NEIDL will also act as a center of excellence providing training and education in all aspects of biocontainment research. Within every detail of NEIDL operations is a primary emphasis on safety and security. The ultramodern NEIDL has required a new approach to communications technology solutions in order to ensure safety and security and meet the needs of investigators working in this complex building. This article discusses the implementation of secure wireless networks and private cloud computing to promote operational efficiency, biosecurity, and biosafety with additional energy-saving advantages. The utilization of a dedicated data center, virtualized servers, virtualized desktop integration, multichannel secure wireless networks, and a NEIDL-dedicated Voice over Internet Protocol (VoIP) network are all discussed. © 2014 Federation of European Microbiological Societies. Published by John Wiley & Sons Ltd. All rights reserved.

  11. Reproducible Research in the Geosciences at Scale: Achievable Goal or Elusive Dream?

    NASA Astrophysics Data System (ADS)

    Wyborn, L. A.; Evans, B. J. K.

    2016-12-01

    Reproducibility is a fundamental tenet of the scientific method: it implies that any researcher, or a third party working independently, can duplicate any experiment or investigation and produce the same results. Historically, computationally based research involved an individual using their own data and processing it in their own private area, often with software they wrote or inherited from close collaborators. Today, a researcher is likely to be part of a large team that uses a subset of data from an external repository and then processes the data on a public or private cloud or on a large centralised supercomputer, using a mixture of their own code, third-party software and libraries, and global community codes. In 'Big Geoscience' research it is common for data inputs to be extracts from externally managed dynamic data collections, where new data is regularly appended, and existing data is revised when errors are detected and/or as processing methods are improved. New workflows increasingly use services to access data dynamically, creating subsets on-the-fly from distributed sources, each of which can have a complex history. At major computational facilities, underlying systems, libraries, software and services are constantly tuned and optimised, and new or replacement infrastructure is installed. Likewise, code from a community repository is continually refined, re-packaged and ported to the target platform. To achieve reproducibility, today's researcher increasingly needs to track their workflow, including querying information on the current or historical state of the facilities used. Versioning methods are standard practice for software repositories and packages, but it is not common for either data repositories or data services to provide information about their state, or for systems to provide query-able access to changes in the underlying software.
While a researcher can achieve transparency and describe the steps in their workflow so that others can repeat them and replicate the processes undertaken, they cannot achieve exact reproducibility, or even transparency, of the results generated. In Big Geoscience, full reproducibility will be an elusive dream until data repositories and compute facilities can provide provenance information in a standards-compliant, machine query-able way.
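One concrete step toward the provenance the authors call for is to snapshot, in a machine-queryable form, the state a run depended on. A minimal sketch follows (hypothetical helper, standard library only, not an implementation from the paper): hash every input file and record interpreter, platform, and analysis-code versions alongside.

```python
import hashlib
import json
import platform
import sys
import time

def sha256(path):
    """Content hash of one input file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def provenance_record(input_paths, code_version):
    """Machine-queryable snapshot of what a run depended on:
    input-file content hashes plus software and platform versions."""
    return json.dumps({
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "python": sys.version.split()[0],
        "platform": platform.platform(),
        "code_version": code_version,
        "inputs": {p: sha256(p) for p in input_paths},
    }, indent=2, sort_keys=True)
```

Archiving such a record next to each output lets a later reader detect whether the inputs or the software stack changed between two runs, even when the upstream repository itself offers no state query.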

  12. DOE EPSCoR Initiative in Structural and computational Biology/Bioinformatics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wallace, Susan S.

    2008-02-21

    The overall goal of the DOE EPSCoR Initiative in Structural and Computational Biology was to enhance the competitiveness of Vermont research in these scientific areas. To develop self-sustaining infrastructure, we increased the critical mass of faculty, developed shared resources that made junior researchers more competitive for federal research grants, implemented programs to train graduate and undergraduate students who participated in these research areas, and provided seed money for research projects. During the time period funded by this DOE initiative: (1) four new faculty were recruited to the University of Vermont using DOE resources, three in Computational Biology and one in Structural Biology; (2) technical support was provided for the Computational and Structural Biology facilities; (3) twenty-two graduate students were directly funded by fellowships; (4) fifteen undergraduate students were supported during the summer; and (5) twenty-eight pilot projects were supported. Taken together, these dollars resulted in a plethora of published papers, many in high-profile journals in the fields, and directly impacted competitive extramural funding based on structural or computational biology, resulting in 49 million dollars awarded in grants (Appendix I), a 600% return on the investment by DOE, the State, and the University.

  13. National research and education network

    NASA Technical Reports Server (NTRS)

    Villasenor, Tony

    1991-01-01

    Some goals of this network are as follows: Extend U.S. technological leadership in high performance computing and computer communications; Provide wide dissemination and application of the technologies both to the speed and the pace of innovation and to serve the national economy, national security, education, and the global environment; and Spur gains in the U.S. productivity and industrial competitiveness by making high performance computing and networking technologies an integral part of the design and production process. Strategies for achieving these goals are as follows: Support solutions to important scientific and technical challenges through a vigorous R and D effort; Reduce the uncertainties to industry for R and D and use of this technology through increased cooperation between government, industry, and universities and by the continued use of government and government funded facilities as a prototype user for early commercial HPCC products; and Support underlying research, network, and computational infrastructures on which U.S. high performance computing technology is based.

  14. Experimental program for real gas flow code validation at NASA Ames Research Center

    NASA Technical Reports Server (NTRS)

    Deiwert, George S.; Strawa, Anthony W.; Sharma, Surendra P.; Park, Chul

    1989-01-01

    The experimental program for validating real gas hypersonic flow codes at NASA Ames Research Center is described. Ground-based test facilities used include ballistic ranges, shock tubes and shock tunnels, arc jet facilities, and heated-air hypersonic wind tunnels. Also included are large-scale computer systems for kinetic theory simulations and benchmark code solutions. Flight tests consist of the Aeroassist Flight Experiment, the Space Shuttle, Project Fire 2, and planetary probes such as Galileo, Pioneer Venus, and PAET.

  15. Wind tunnel wall interference

    NASA Technical Reports Server (NTRS)

    Newman, Perry A.; Mineck, Raymond E.; Barnwell, Richard W.; Kemp, William B., Jr.

    1986-01-01

    About a decade ago, interest in alleviating wind tunnel wall interference was renewed by advances in computational aerodynamics, concepts of adaptive test section walls, and plans for high Reynolds number transonic test facilities. Selection of NASA Langley cryogenic concept for the National Transonic Facility (NTF) tended to focus the renewed wall interference efforts. A brief overview and current status of some Langley sponsored transonic wind tunnel wall interference research are presented. Included are continuing efforts in basic wall flow studies, wall interference assessment/correction procedures, and adaptive wall technology.

  16. New Phone System Coming to NCI Campus at Frederick | Poster

    Cancer.gov

    By Travis Fouche and Trent McKee, Guest Writers Beginning in September, phones at the NCI Campus at Frederick will begin to be replaced, as the project to upgrade the current phone system ramps up. Over the next 16 months, the Information Systems Program (ISP) will be working with Facilities Maintenance and Engineering and Computer & Statistical Services to replace the current Avaya phone system with a Cisco Unified Communications phone system. The Cisco system is already in use at the Advanced Technology Research Facility (ATRF).

  17. The Australian Replacement Research Reactor

    NASA Astrophysics Data System (ADS)

    Kennedy, Shane; Robinson, Robert

    2004-03-01

    The 20-MW Australian Replacement Research Reactor represents possibly the greatest single research infrastructure investment in Australia's history. Construction of the facility has commenced, following award of the construction contract in July 2000 and the construction licence in April 2002. The project includes a large state-of-the-art liquid-deuterium cold-neutron source and supermirror guides feeding a large modern guide hall, in which most of the instruments are placed. Alongside the guide hall, there is good provision of laboratory and office space and space for support activities. While the facility has "space" for up to 18 instruments, the project has funding for an initial set of 8 instruments, which will be ready when the reactor is fully operational in July 2006. Instrument performance will be competitive with the best research-reactor facilities anywhere, and our goal is to be in the top 3 such facilities worldwide. Staff to lead the design effort and operate these instruments have been hired on the international market from leading overseas facilities, and from within Australia, and 7 out of 8 instruments have been specified and costed. At present the instrumentation project carries a 10% contingency. An extensive dialogue has taken place with the domestic user community and our international peers, via various means including a series of workshops over the last 2 years covering all 8 instruments, emerging areas of application such as biology and the earth sciences, and computing infrastructure for the instruments.

  18. Science & Technology Review: September 2016

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vogt, Ramona L.; Meissner, Caryn N.; Chinn, Ken B.

    2016-09-30

    This is the September issue of the Lawrence Livermore National Laboratory's Science & Technology Review, which communicates, to a broad audience, the Laboratory’s scientific and technological accomplishments in fulfilling its primary missions. This month, there are features on "Laboratory Investments Drive Computational Advances" and "Laying the Groundwork for Extreme-Scale Computing." Research highlights include "Nuclear Data Moves into the 21st Century", "Peering into the Future of Lick Observatory", and "Facility Drives Hydrogen Vehicle Innovations."

  19. Laboratory for Computer Science Progress Report 21, July 1983-June 1984.

    DTIC Science & Technology

    1984-06-01

    Systems 269 4. Distributed Consensus 270 5. Election of a Leader in a Distributed Ring of Processors 273 6. Distributed Network Algorithms 274 7. Diagnosis...multiprocessor systems. This facility, funded by the newly formed Strategic Computing Program of the Defense Advanced Research Projects Agency, will enable...Academic Staff P. Szolovits, Group Leader R. Patil Collaborating Investigators M. Criscitiello, M.D., Tufts-New England Medical Center Hospital R

  20. A resource facility for kinetic analysis: modeling using the SAAM computer programs.

    PubMed

    Foster, D M; Boston, R C; Jacquez, J A; Zech, L

    1989-01-01

    Kinetic analysis and integrated system modeling have contributed significantly to understanding the physiology and pathophysiology of metabolic systems in humans and animals. Many experimental biologists are aware of the usefulness of these techniques and recognize that kinetic modeling requires special expertise. The Resource Facility for Kinetic Analysis (RFKA) provides this expertise through: (1) development and application of modeling technology for biomedical problems, and (2) development of computer-based kinetic modeling methodologies concentrating on the computer program Simulation, Analysis, and Modeling (SAAM) and its conversational version, CONversational SAAM (CONSAM). The RFKA offers consultation to the biomedical community in the use of modeling to analyze kinetic data and trains individuals in using this technology for biomedical research. Early versions of SAAM were widely applied in solving dosimetry problems; many users, however, are not familiar with recent improvements to the software. The purpose of this paper is to acquaint biomedical researchers in the dosimetry field with RFKA, which, together with the joint National Cancer Institute-National Heart, Lung and Blood Institute project, is overseeing SAAM development and applications. In addition, RFKA provides many service activities to the SAAM user community that are relevant to solving dosimetry problems.
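As a flavor of the compartmental kinetic modeling that SAAM-style tools automate, here is a hand-rolled sketch (illustrative only, not SAAM itself) of the simplest case, first-order elimination dC/dt = -kC, integrated with Euler steps and checked against the analytic solution C0*exp(-kt):

```python
import math

def one_compartment(c0, k, t, dt=1e-4):
    """Euler-integrate dC/dt = -k*C, the one-compartment elimination
    model that SAAM-style programs generalize to systems of many
    linked compartments with exchange rate constants."""
    c = c0
    for _ in range(int(t / dt)):
        c += -k * c * dt  # first-order loss over one time step
    return c

# Numerical and analytic solutions agree closely for small dt:
numeric = one_compartment(10.0, 0.5, 2.0)
analytic = 10.0 * math.exp(-0.5 * 2.0)
```

Fitting the rate constant k (and, in multi-compartment models, the full transfer-rate matrix) to measured concentration-time data is precisely the step where dedicated software and expert consultation of the kind RFKA offers become valuable.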

  1. Laboratory services series: a programmed maintenance system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tuxbury, D.C.; Srite, B.E.

    1980-01-01

    The diverse facilities, operations and equipment at a major national research and development laboratory require a systematic, analytical approach to operating equipment maintenance. A computer-scheduled preventive maintenance program is described including program development, equipment identification, maintenance and inspection instructions, scheduling, personnel, and equipment history.

  2. Laboratory Directed Research & Development (LDRD)

    Science.gov Websites


  3. HOLISTIC APPROACH TO ENVIRONMENTAL MANAGEMENT OF MUNICIPAL SOLID WASTE

    EPA Science Inventory

    The paper presents results from the application of a new municipal solid waste (MSW) management planning aid to EPA's new facility in the Research Triangle Park, NC. This planning aid, or decision support tool, is computer software that analyzes the cost and environmental impact ...

  4. A prototype Upper Atmospheric Research Collaboratory (UARC)

    NASA Technical Reports Server (NTRS)

    Clauer, C. R.; Atkins, D. E.; Weymouth, T. E.; Olson, G. M.; Niciejewski, R.; Finholt, T. A.; Prakash, A.; Rasmussen, C. E.; Killeen, T.; Rosenberg, T. J.

    1995-01-01

    The National Collaboratory concept has great potential for enabling 'critical mass' working groups and highly interdisciplinary research projects. We report here on a new program to build a prototype collaboratory using the Sondrestrom Upper Atmospheric Research Facility in Kangerlussuaq, Greenland and a group of associated scientists. The Upper Atmospheric Research Collaboratory (UARC) is a joint venture of researchers in upper atmospheric and space science, computer science, and behavioral science to develop a testbed for collaborative remote research. We define the 'collaboratory' as an advanced information technology environment which enables teams to work together over distance and time on a wide variety of intellectual tasks. It provides: (1) human-to-human communications using shared computer tools and work spaces; (2) group access and use of a network of information, data, and knowledge sources; and (3) remote access and control of instruments for data acquisition. The UARC testbed is being implemented to support a distributed community of space scientists so that they have network access to the remote instrument facility in Kangerlussuaq and are able to interact among geographically distributed locations. The goal is to enable them to use the UARC, rather than physical travel to Greenland, to conduct team research campaigns. Even on short notice, participants will be able to meet through the collaboratory from their home institutions to operate a battery of remote interactive observations and to acquire, process, and interpret the data.

  5. Feasibility of MHD submarine propulsion. Phase II, MHD propulsion: Testing in a two Tesla test facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doss, E.D.; Sikes, W.C.

    1992-09-01

    This report describes the work performed during Phase 1 and Phase 2 of the collaborative research program established between Argonne National Laboratory (ANL) and Newport News Shipbuilding and Dry Dock Company (NNS). Phase 1 of the program focused on the development of computer models for magnetohydrodynamic (MHD) propulsion. Phase 2 focused on the experimental validation of the thruster performance models and the identification, through testing, of any phenomena which may impact the attractiveness of this propulsion system for shipboard applications. The report discusses in detail the work performed in Phase 2 of the program. In Phase 2, a two-Tesla test facility was designed, built, and operated. The facility test loop, its components, and their design are presented. The test matrix and its rationale are discussed. Representative experimental results of the test program are presented and compared to computer model predictions. In general, the results of the tests and their comparison with the predictions indicate that the phenomena affecting the performance of MHD seawater thrusters are well understood and can be accurately predicted with the developed thruster computer models.

  6. Metering Best Practices Applied in the National Renewable Energy Laboratory's Research Support Facility: A Primer to the 2011 Measured and Modeled Energy Consumption Datasets

    DOE Data Explorer

    Sheppy, Michael; Beach, A.; Pless, Shanti

    2016-08-09

    Modern buildings are complex energy systems that must be controlled for energy efficiency. The Research Support Facility (RSF) at the National Renewable Energy Laboratory (NREL) has hundreds of controllers -- computers that communicate with the building's various control systems -- to control the building based on tens of thousands of variables and sensor points. These control strategies were designed for the RSF's systems to efficiently support research activities. Many events that affect energy use cannot be reliably predicted, but certain decisions (such as control strategies) must be made ahead of time. NREL researchers modeled the RSF systems to predict how they might perform. They then monitor these systems to understand how they are actually performing and reacting to the dynamic conditions of weather, occupancy, and maintenance.

  7. ORNL Sustainable Campus Initiative

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Halford, Christopher K

    2012-01-01

    The research conducted at Oak Ridge National Laboratory (ORNL) spans many disciplines and has the potential for far-reaching impact in many areas of everyday life. ORNL researchers and operations staff work on projects in areas as diverse as nuclear power generation, transportation, materials science, computing, and building technologies. As the U.S. Department of Energy's (DOE) largest science and energy research facility, ORNL seeks to establish partnerships with industry in the development of innovative new technologies. The primary focus of this current research deals with developing technologies which improve or maintain the quality of life for humans while reducing the overall impact on the environment. In its interactions with industry, ORNL serves both as a facility for sustainable research and as a representative of DOE to the private sector. For these reasons it is important that the everyday operations of the Laboratory reflect a dedication to the concepts of stewardship and sustainability.

  8. Flow Characterization Studies of the 10-MW TP3 Arc-Jet Facility: Probe Sweeps

    NASA Technical Reports Server (NTRS)

    Goekcen, Tahir; Alunni, Antonella I.

    2016-01-01

    This paper reports computational simulations and analysis in support of calibration and flow characterization tests in a high enthalpy arc-jet facility at NASA Ames Research Center. These tests were conducted in the NASA Ames 10-MW TP3 facility using flat-faced stagnation calorimeters at six conditions corresponding to the steps of a simulated flight heating profile. Data were obtained using a conical nozzle test configuration in which the models were placed in a free jet downstream of the nozzle. Experimental surveys of the arc-jet test flow with pitot pressure and heat flux probes were also performed at these arc-heater conditions, providing an assessment of the flow uniformity and valuable data for the flow characterization. Two different sets of pitot pressure and heat flux probes were used: 9.1-mm sphere-cone probes (nose radius of 4.57 mm or 0.18 in) with null-point heat flux gages, and 15.9-mm (0.625 in) diameter hemisphere probes with Gardon gages. The probe survey data clearly show that the test flow in the TP3 facility is not uniform at most conditions (not even axisymmetric at some conditions), and the extent of non-uniformity is highly dependent on various arc-jet parameters such as arc current, mass flow rate, and the amount of cold-gas injection at the arc-heater plenum. The present analysis comprises computational fluid dynamics simulations of the nonequilibrium flowfield in the facility nozzle and test box, including the models tested. Comparisons of computations with the experimental measurements show reasonably good agreement except at the extreme low-pressure conditions of the facility envelope.

  9. BigData and computing challenges in high energy and nuclear physics

    NASA Astrophysics Data System (ADS)

    Klimentov, A.; Grigorieva, M.; Kiryanov, A.; Zarochentsev, A.

    2017-06-01

    In this contribution we discuss various aspects of the computing resource needs of experiments in High Energy and Nuclear Physics, in particular at the Large Hadron Collider. These needs will evolve when moving from the LHC to the HL-LHC roughly ten years from now, when the already exascale levels of data we are processing could increase by a further order of magnitude. The distributed computing environment has been a great success, and the inclusion of new supercomputing facilities, cloud computing, and volunteer computing in the future is a big challenge, which we are successfully mastering with a considerable contribution from many supercomputing centres around the world and from academic and commercial cloud providers. We also discuss R&D computing projects started recently in the National Research Center "Kurchatov Institute".

  10. The 1984 NASA/ASEE summer faculty fellowship program

    NASA Technical Reports Server (NTRS)

    Mcinnis, B. C.; Duke, M. B.; Crow, B.

    1984-01-01

    An overview is given of the program management and activities. Participants and research advisors are listed. Abstracts describe and present the results of research assignments performed by 31 fellows either at the Johnson Space Center, at the White Sands Test Facility, or at the California Space Institute in La Jolla. Disciplines studied include engineering; biology/life sciences; Earth sciences; chemistry; mathematics/statistics/computer sciences; and physics/astronomy.

  11. High-temperature behavior of advanced spacecraft TPS

    NASA Technical Reports Server (NTRS)

    Pallix, Joan

    1994-01-01

    The objective of this work has been to develop more efficient, lighter weight, and higher temperature thermal protection systems (TPS) for future reentry space vehicles. The research carried out during this funding period involved the design, analysis, testing, fabrication, and characterization of thermal protection materials to be used on future hypersonic vehicles. This work is important for the prediction of material performance at high temperature and aids in the design of thermal protection systems for a number of programs including programs such as the National Aerospace Plane (NASP), Pegasus and Pegasus/SWERVE, the Comet Rendezvous and Flyby Vehicle (CRAF), and the Mars mission entry vehicles. Research has been performed in two main areas including development and testing of thermal protection systems (TPS) and computational research. A variety of TPS materials and coatings have been developed during this funding period. Ceramic coatings were developed for flexible insulations as well as for low density ceramic insulators. Chemical vapor deposition processes were established for the fabrication of ceramic matrix composites. Experimental testing and characterization of these materials has been carried out in the NASA Ames Research Center Thermophysics Facilities and in the Ames time-of-flight mass spectrometer facility. By means of computation, we have been better able to understand the flow structure and properties of the TPS components and to estimate the aerothermal heating, stress, ablation rate, thermal response, and shape change on the surfaces of TPS. In addition, work for the computational surface thermochemistry project has included modification of existing computer codes and creating new codes to model material response and shape change on atmospheric entry vehicles in a variety of environments (e.g., earth and Mars atmospheres).

  12. High-temperature behavior of advanced spacecraft TPS

    NASA Astrophysics Data System (ADS)

    Pallix, Joan

    1994-05-01

    The objective of this work has been to develop more efficient, lighter weight, and higher temperature thermal protection systems (TPS) for future reentry space vehicles. The research carried out during this funding period involved the design, analysis, testing, fabrication, and characterization of thermal protection materials to be used on future hypersonic vehicles. This work is important for the prediction of material performance at high temperature and aids in the design of thermal protection systems for a number of programs including programs such as the National Aerospace Plane (NASP), Pegasus and Pegasus/SWERVE, the Comet Rendezvous and Flyby Vehicle (CRAF), and the Mars mission entry vehicles. Research has been performed in two main areas including development and testing of thermal protection systems (TPS) and computational research. A variety of TPS materials and coatings have been developed during this funding period. Ceramic coatings were developed for flexible insulations as well as for low density ceramic insulators. Chemical vapor deposition processes were established for the fabrication of ceramic matrix composites. Experimental testing and characterization of these materials has been carried out in the NASA Ames Research Center Thermophysics Facilities and in the Ames time-of-flight mass spectrometer facility. By means of computation, we have been better able to understand the flow structure and properties of the TPS components and to estimate the aerothermal heating, stress, ablation rate, thermal response, and shape change on the surfaces of TPS. In addition, work for the computational surface thermochemistry project has included modification of existing computer codes and creating new codes to model material response and shape change on atmospheric entry vehicles in a variety of environments (e.g., earth and Mars atmospheres).

  13. Workers in SSPF monitor Multi-Equipment Interface Test.

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Workers in the Space Station Processing Facility control room monitor computers during a Multi-Equipment Interface Test (MEIT) in the U.S. Lab Destiny. Members of the STS-98 crew are taking part in the MEIT checking out some of the equipment in the Lab. During the STS-98 mission, the crew will install the Lab on the station during a series of three space walks. The crew comprises five members: Commander Kenneth D. Cockrell, Pilot Mark L. Polansky, and Mission Specialists Robert L. Curbeam Jr., Thomas D. Jones (Ph.D.) and Marsha S. Ivins. The mission will provide the station with science research facilities and expand its power, life support and control capabilities. The U.S. Laboratory Module continues a long tradition of microgravity materials research, first conducted by Skylab and later Shuttle and Spacelab missions. Destiny is expected to be a major feature in future research, providing facilities for biotechnology, fluid physics, combustion, and life sciences research. The Lab is planned for launch aboard Space Shuttle Atlantis on the sixth ISS flight, currently targeted no earlier than Aug. 19, 2000.

  14. KSC-00pp0188

    NASA Image and Video Library

    2000-02-03

    Workers in the Space Station Processing Facility control room monitor computers during a Multi-Equipment Interface Test (MEIT) in the U.S. Lab Destiny. Members of the STS-98 crew are taking part in the MEIT checking out some of the equipment in the Lab. During the STS-98 mission, the crew will install the Lab on the station during a series of three space walks. The crew comprises five members: Commander Kenneth D. Cockrell, Pilot Mark L. Polansky, and Mission Specialists Robert L. Curbeam Jr., Thomas D. Jones (Ph.D.) and Marsha S. Ivins. The mission will provide the station with science research facilities and expand its power, life support and control capabilities. The U.S. Laboratory Module continues a long tradition of microgravity materials research, first conducted by Skylab and later Shuttle and Spacelab missions. Destiny is expected to be a major feature in future research, providing facilities for biotechnology, fluid physics, combustion, and life sciences research. The Lab is planned for launch aboard Space Shuttle Atlantis on the sixth ISS flight, currently targeted no earlier than Aug. 19, 2000.

  15. The Magellan Final Report on Cloud Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Coghlan, Susan; Yelick, Katherine

The goal of Magellan, a project funded through the U.S. Department of Energy (DOE) Office of Advanced Scientific Computing Research (ASCR), was to investigate the potential role of cloud computing in addressing the computing needs of the DOE Office of Science (SC), particularly in serving mid-range computing and future data-intensive computing workloads. A set of research questions was formed to probe various aspects of cloud computing, from performance and usability to cost. To address these questions, a distributed testbed infrastructure was deployed at the Argonne Leadership Computing Facility (ALCF) and the National Energy Research Scientific Computing Center (NERSC). The testbed was designed to be flexible and capable enough to explore a variety of computing models and hardware design points in order to understand the impact on various scientific applications. During the project, the testbed also served as a valuable resource to application scientists. Applications from a diverse set of projects, such as MG-RAST (a metagenomics analysis server), the Joint Genome Institute, the STAR experiment at the Relativistic Heavy Ion Collider, and the Laser Interferometer Gravitational Wave Observatory (LIGO), were used by the Magellan project for benchmarking within the cloud, but the project teams were also able to accomplish important production science utilizing the Magellan cloud resources.

  16. Walter C. Williams Research Aircraft Integration Facility (RAIF)

    NASA Technical Reports Server (NTRS)

    1996-01-01

The NASA-Dryden Integrated Test Facility (ITF), also known as the Walter C. Williams Research Aircraft Integration Facility (RAIF), provides an environment for conducting efficient and thorough testing of advanced, highly integrated research aircraft. Flight test confidence is greatly enhanced by the ability to qualify interactive aircraft systems in a controlled environment. In the ITF, each element of a flight vehicle can be regulated and monitored in real time as it interacts with the rest of the aircraft systems. Testing in the ITF is accomplished through automated techniques in which the research aircraft is interfaced to a high-fidelity real-time simulation. Electric and hydraulic power are also supplied, allowing all systems except the engines to function as if in flight. The testing process is controlled by an engineering workstation that sets up initial conditions for a test, initiates the test run, monitors its progress, and archives the data generated. The workstation is also capable of analyzing results of individual tests, comparing results of multiple tests, and producing reports. The computers used in the automated aircraft testing process are also capable of operating in a stand-alone mode with a simulation cockpit, complete with its own instruments and controls. Control-law development and modification; aerodynamic, propulsion, and guidance model qualification; and flight planning -- functions traditionally associated with real-time simulation -- can all be performed in this manner. The Remotely Augmented Vehicles (RAV) function, now located in the ITF, is a mainstay of the research techniques employed at Dryden. This function is used for tests that are too dangerous for direct human involvement or for which computational capacity does not exist onboard a research aircraft. RAV provides the researcher with a ground-based computer that is radio-linked to the test aircraft during actual flight.
The Ground Vibration Testing (GVT) system, formerly housed in the Thermostructural Laboratory, now also resides in the ITF. In preparing a research aircraft for flight testing, it is vital to measure its structural frequencies and mode shapes and compare results to the models used in design analysis. The final function performed in the ITF is routine aircraft maintenance. This includes preflight and post-flight instrumentation checks and the servicing of hydraulics, avionics, and engines necessary on any research aircraft. Aircraft are not merely moved to the ITF for automated testing purposes but are housed there throughout their flight test programs.

  17. EUROPLANET-RI modelling service for the planetary science community: European Modelling and Data Analysis Facility (EMDAF)

    NASA Astrophysics Data System (ADS)

    Khodachenko, Maxim; Miller, Steven; Stoeckler, Robert; Topf, Florian

    2010-05-01

Computational modelling and observational data analysis are two major aspects of modern scientific research, and both are under extensive development and application. Many of the scientific goals of planetary space missions require robust models of planetary objects and environments, as well as efficient data analysis algorithms, to predict conditions for mission planning and to interpret the experimental data. Europe has great strength in these areas, but it is insufficiently coordinated; individual groups, models, techniques, and algorithms need to be coupled and integrated. The existing level of scientific cooperation, together with the technical capabilities for rapid communication, allows considerable progress in the development of a distributed international Research Infrastructure (RI) based on Europe's existing computational modelling and data analysis centers, providing the scientific community with dedicated services in the fields of their computational and data analysis expertise. These services will emerge as a product of collaborative communication and joint research efforts of numerical and data analysis experts together with planetary scientists. The major goal of EUROPLANET-RI / EMDAF is to make computational models and data analysis algorithms associated with particular national RIs and teams, as well as their outputs, more readily available to their potential user community and more tailored to scientific user requirements, without compromising front-line specialized research on model and data-analysis algorithm development and software implementation. This objective will be met through four key subdivisions/tasks of EMDAF: 1) an Interactive Catalogue of Planetary Models; 2) a Distributed Planetary Modelling Laboratory; 3) a Distributed Data Analysis Laboratory; and 4) enabling Models and Routines for High Performance Computing Grids.
Using the advantages of coordinated operation and efficient communication between the involved computational modelling, research, and data analysis expert teams and their related research infrastructures, EMDAF will provide a 1) flexible, 2) scientific-user-oriented, and 3) continuously developing and rapidly upgraded computational and data analysis service to support and intensify European planetary research. Initially, EMDAF will create a set of demonstrators and operational tests of this service in key areas of European planetary science. This work will aim at the following objectives: (a) development and implementation of tools for remote interactive communication between planetary scientists and computing experts (including related RIs); (b) development of standard routine packages and user-friendly interfaces for operation of the existing numerical codes and data analysis algorithms by specialized planetary scientists; (c) development of a prototype of numerical modelling services "on demand" for space missions and planetary researchers; (d) development of a prototype of data analysis services "on demand" for space missions and planetary researchers; (e) development of a prototype of coordinated, interconnected simulations of planetary phenomena and objects (global multi-model simulators); and (f) provision of demonstrators of the coordinated use of high-performance computing facilities (supercomputer networks), in cooperation with the European HPC grid DEISA.

  18. Central Computational Facility CCF communications subsystem options

    NASA Technical Reports Server (NTRS)

    Hennigan, K. B.

    1979-01-01

A MITRE study that investigated the communication options available to support both the remaining Central Computational Facility (CCF) computer systems and the proposed U1108 replacements is presented. The facilities used to link the remote user terminals with the CCF were analyzed, and guidelines for providing more efficient communications were established.

  19. Academic Computing Facilities and Services in Higher Education--A Survey.

    ERIC Educational Resources Information Center

    Warlick, Charles H.

    1986-01-01

    Presents statistics about academic computing facilities based on data collected over the past six years from 1,753 institutions in the United States, Canada, Mexico, and Puerto Rico for the "Directory of Computing Facilities in Higher Education." Organizational, functional, and financial characteristics are examined as well as types of…

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    John Wooley; Herbert S. Lin

This study is the first comprehensive NRC study that suggests a high-level intellectual structure for Federal agencies for supporting work at the biology/computing interface. The report seeks to establish the intellectual legitimacy of a fundamentally cross-disciplinary collaboration between biologists and computer scientists. That is, while some universities are increasingly favorable to research at this intersection, life science researchers at other universities are strongly impeded in their efforts to collaborate. This report addresses these impediments and describes proven strategies for overcoming them. An important feature of the report is the use of well-documented examples that describe clearly, for individuals not trained in computer science, the value and usage of computing across the biological sciences, from genes and proteins to networks and pathways, from organelles to cells, and from individual organisms to populations and ecosystems. It is hoped that these examples will be useful to students in the life sciences and will motivate continued study in computer science that will enable them to be more facile users of computing in their future biological studies.

  1. Designing for aircraft structural crashworthiness

    NASA Technical Reports Server (NTRS)

    Thomson, R. G.; Caiafa, C.

    1981-01-01

This report describes structural crash dynamics research activities being conducted on general aviation aircraft and transport aircraft. The report includes experimental and analytical correlations of load-limiting subfloor and seat configurations tested dynamically in vertical drop tests and in a horizontal sled deceleration facility. Computer predictions of the acceleration time histories of these innovative seat and subfloor structures, made with DYCAST, a nonlinear finite-element computer program, are presented. Proposed applications of these computer techniques, and of the nonlinear lumped-mass computer program KRASH, to transport aircraft crash dynamics are discussed. A proposed FAA full-scale crash test of a fully instrumented, radio-controlled transport airplane is also described.

  2. High performance network and channel-based storage

    NASA Technical Reports Server (NTRS)

    Katz, Randy H.

    1991-01-01

    In the traditional mainframe-centered view of a computer system, storage devices are coupled to the system through complex hardware subsystems called input/output (I/O) channels. With the dramatic shift towards workstation-based computing, and its associated client/server model of computation, storage facilities are now found attached to file servers and distributed throughout the network. We discuss the underlying technology trends that are leading to high performance network-based storage, namely advances in networks, storage devices, and I/O controller and server architectures. We review several commercial systems and research prototypes that are leading to a new approach to high performance computing based on network-attached storage.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

The Mira supercomputer at the Argonne Leadership Computing Facility helped Argonne researchers model what happens inside an engine when gasoline is used in a diesel engine. Engineers are exploring this type of combustion as a sustainable transportation option because it may be more efficient than traditional gasoline combustion engines while producing less soot than diesel.

  4. A Research Program on Artificial Intelligence in Process Engineering.

    ERIC Educational Resources Information Center

    Stephanopoulos, George

    1986-01-01

    Discusses the use of artificial intelligence systems in process engineering. Describes a new program at the Massachusetts Institute of Technology which attempts to advance process engineering through technological advances in the areas of artificial intelligence and computers. Identifies the program's hardware facilities, software support,…

  5. Research and Development in Natural Language Understanding as Part of the Strategic Computing Program.

    DTIC Science & Technology

    1987-04-01

    facilities. BBN is developing a series of increasingly sophisticated natural language understanding systems which will serve as an integrated interface...Haas, A.R. A Syntactic Theory of Belief and Action. Artificial Intelligence. 1986. Forthcoming. [6] Hinrichs, E. Temporale Anaphora im Englischen

  6. Hypersonic propulsion: Status and challenge

    NASA Technical Reports Server (NTRS)

    Guy, R. Wayne

    1990-01-01

Scientists in the U.S. are again focusing on the challenge of hypersonic flight with the proposed National Aerospace Plane (NASP). This renewed interest has led to an expansion of research related to high-speed airbreathing propulsion, in particular the supersonic combustion ramjet, or scramjet. The history of scramjet research in the U.S. is briefly traced, with emphasis on NASA-sponsored efforts, from the Hypersonic Research Engine (HRE) to the current status of today's airframe-integrated scramjets. The challenges of scramjet technology development from takeover to orbital speeds are outlined. Existing scramjet test facilities, such as NASA Langley's Scramjet Test Complex, as well as new high-Mach-number pulse facilities, are discussed. The important partnership role of experimental methods and computational fluid dynamics is emphasized for the successful design of single-stage-to-orbit vehicles.

  7. Construction of a 2- by 2-foot transonic adaptive-wall test section at the NASA Ames Research Center

    NASA Technical Reports Server (NTRS)

    Morgan, Daniel G.; Lee, George

    1986-01-01

The development of a new production-size, two-dimensional, adaptive-wall test section with ventilated walls at the NASA Ames Research Center is described. The new facility incorporates rapid closed-loop operation, computer/sensor integration, and on-line interference assessment and wall corrections. Air flow through the test section is controlled by a series of plenum compartments and three-way slide valves. A fast-scan laser velocimeter was built to measure velocity boundary conditions for the interference assessment scheme. A 15.2-cm (6.0-in.) chord NACA 0012 airfoil model will be used in the first experiments during calibration of the facility.

  8. NASTRAN users' experience of Avco Aerostructures Division

    NASA Technical Reports Server (NTRS)

    Blackburn, C. L.; Wilhelm, C. A.

    1973-01-01

The NASTRAN experiences of a major structural design and fabrication subcontractor that has fewer engineering personnel and computer facilities than those available to large prime contractors are discussed. Efforts to obtain sufficient computer capacity and the development and implementation of auxiliary programs to reduce manpower requirements are described. Applications of the NASTRAN program for training users, checking out auxiliary programs, performing in-house research and development, and structurally analyzing an Avco-designed and manufactured missile case are presented.

  9. Upgrades at the NASA Langley Research Center National Transonic Facility

    NASA Technical Reports Server (NTRS)

    Paryz, Roman W.

    2012-01-01

Several projects have been completed or are nearing completion at the NASA Langley Research Center (LaRC) National Transonic Facility (NTF). The addition of a Model Flow-Control/Propulsion Simulation test capability to the NTF provides a unique, transonic, high-Reynolds-number test capability that is well suited for research in propulsion-airframe integration studies, circulation-control high-lift concepts, powered lift, and cruise separation flow control. A 1992-vintage Facility Automation System (FAS) that performs the control functions for tunnel pressure, temperature, Mach number, model position, safety interlocks, and supervisory controls was replaced using current, commercially available components. This FAS upgrade also involved a design study for the replacement of the facility Mach measurement system and the development of a software-based simulation model of NTF processes and control systems. The FAS upgrades were validated by a post-upgrade verification wind tunnel test. The data acquisition system (DAS) upgrade project involves the design, purchase, build, integration, installation, and verification of a new DAS, replacing several early-1990s-vintage computer systems with state-of-the-art hardware and software. This paper provides an update on the progress made in these efforts. See reference 1.

  10. Terminal configured vehicle program: Test facilities guide

    NASA Technical Reports Server (NTRS)

    1980-01-01

The Terminal Configured Vehicle (TCV) program was established to conduct research and to develop and evaluate aircraft and flight management system technology concepts that will benefit conventional takeoff and landing operations in the terminal area. Emphasis is placed on the development of operating methods for the highly automated environment anticipated in the future. The program involves analyses, simulation, and flight experiments. Flight experiments are conducted using a modified Boeing 737 airplane equipped with highly flexible display and control equipment and an aft flight deck for research purposes. The experimental systems of the Boeing 737 are described, including the flight control computer systems, the navigation/guidance system, the control and command panel, and the electronic display system. The ground-based facilities used in the program are also described, including the visual motion simulator, the fixed-base simulator, the verification and validation laboratory, and the radio-frequency anechoic facility.

  11. Specialized computer architectures for computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Stevenson, D. K.

    1978-01-01

In recent years, computational fluid dynamics has made significant progress in modelling aerodynamic phenomena. Currently, one of the major barriers to future development lies in the compute-intensive nature of the numerical formulations and the relatively high cost of performing these computations on commercially available general-purpose computers, a cost that is high in both dollar expenditure and elapsed time. Today's computing technology will support a program designed to create specialized computing facilities dedicated to the important problems of computational aerodynamics. One of the still-unresolved questions is the organization of the computing components in such a facility. The characteristics of fluid dynamic problems that will have a significant impact on the choice of computer architecture for a specialized facility are reviewed.

  12. Advanced technology airfoil research, volume 1, part 2

    NASA Technical Reports Server (NTRS)

    1978-01-01

This compilation contains papers presented at the NASA Conference on Advanced Technology Airfoil Research held at Langley Research Center on March 7-9, 1978, which have unlimited distribution. The conference provided a comprehensive review of all NASA airfoil research, conducted in-house and under grant and contract. A broad spectrum of airfoil research outside of NASA was also reviewed. The major thrust of the technical sessions was in three areas: development of computational aerodynamic codes for airfoil analysis and design, development of experimental facilities and test techniques, and all types of airfoil applications.

  13. Flight Research Using F100 Engine P680063 in the NASA F-15 Airplane

    NASA Technical Reports Server (NTRS)

    Burcham, Frank W., Jr.; Conners, Timothy R.; Maxwell, Michael D.

    1994-01-01

The value of flight research in developing and evaluating gas turbine engines is high. NASA Dryden Flight Research Center has been conducting flight research on propulsion systems for many years. The F100 engine has been tested in the NASA F-15 research airplane over the last three decades. One engine in particular, S/N P680063, has been used for the entire program and has flown in many pioneering propulsion flight research activities. These include detailed flight-to-ground-facility comparison tests; tests of the first production digital engine control system, the first active stall margin control system, and the first performance-seeking control system; and the first use of computer-controlled engine thrust for emergency flight control. The flight research has been supplemented with altitude facility tests at key times. This paper presents a review of the tests of engine P680063, the F-15 airplanes in which it flew, and the role of flight test in maturing propulsion technology.

  14. High Performance Computing Facility Operational Assessment, FY 2011 Oak Ridge Leadership Computing Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Ann E; Bland, Arthur S Buddy; Hack, James J

Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) continues to deliver the most powerful resources in the U.S. for open science. At 2.33 petaflops peak performance, the Cray XT Jaguar delivered more than 1.5 billion core-hours in calendar year (CY) 2010 to researchers around the world for computational simulations relevant to national and energy security; advancing the frontiers of knowledge in the physical sciences and areas of the biological, medical, environmental, and computer sciences; and providing world-class research facilities for the nation's science enterprise. Scientific achievements by OLCF users range from collaboration with university experimentalists to produce a working supercapacitor that uses atom-thick sheets of carbon materials, to finely determining the resolution requirements for simulations of coal gasifiers and their components, thus laying the foundation for development of commercial-scale gasifiers. OLCF users are pushing the boundaries with software applications sustaining more than one petaflop of performance in the quest to illuminate the fundamental nature of electronic devices. Other teams of researchers are working to resolve predictive capabilities of climate models, to refine and validate genome sequencing, and to explore the most fundamental materials in nature - quarks and gluons - and their unique properties. Details of these scientific endeavors - not possible without access to leadership-class computing resources - are given in Section 4 of this report and in the INCITE in Review. Effective operations of the OLCF play a key role in the scientific missions and accomplishments of its users. This Operational Assessment Report (OAR) delineates the policies, procedures, and innovations implemented by the OLCF to continue delivering a petaflop-scale resource for cutting-edge research.
The 2010 operational assessment of the OLCF yielded recommendations that have been addressed (Reference Section 1) and, where appropriate, changes in Center metrics were introduced. This report covers CY 2010 and CY 2011 Year to Date (YTD), which, unless otherwise specified, denotes January 1, 2011 through June 30, 2011. User Support remains an important element of OLCF operations, with the philosophy of doing 'whatever it takes' to enable successful research. The impact of this center-wide activity is reflected by user survey results showing that users are 'very satisfied.' The OLCF continues to aggressively pursue outreach and training activities to promote awareness - and effective use - of U.S. leadership-class resources (Reference Section 2). The OLCF continues to meet, and in many cases exceed, DOE metrics for capability usage (35% target in CY 2010, 39% delivered; 40% target in CY 2011, 54% delivered January 1, 2011 through June 30, 2011). The Scheduled Availability (SA) and Overall Availability (OA) targets for Jaguar were exceeded in CY 2010. Given the solution to the VRM problem, the SA and OA for Jaguar in CY 2011 are expected to exceed the target metrics of 95% and 90%, respectively (Reference Section 3). Numerous and wide-ranging research accomplishments, scientific support, and technological innovations are more fully described in Sections 4 and 6 and reflect OLCF leadership in enabling high-impact science solutions and vision in creating an exascale-ready center. Financial Management (Section 5) and Risk Management (Section 7) are carried out using best practices approved by DOE. The OLCF has a valid cyber security plan and Authority to Operate (Section 8). The proposed metrics for 2012 are given in Section 9.
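The capability-usage and availability figures cited in this report follow standard DOE operational-metric definitions; the sketch below illustrates how such metrics are typically computed. The 20% capability threshold, the job mix, and the outage hours are illustrative assumptions, not values from the report.

```python
def capability_usage(jobs, total_cores, threshold=0.20):
    """Fraction of delivered core-hours consumed by 'capability' jobs,
    i.e. jobs using at least `threshold` of the machine's cores.
    jobs: list of (cores_used, core_hours) tuples."""
    delivered = sum(hours for _, hours in jobs)
    capability = sum(hours for cores, hours in jobs
                     if cores >= threshold * total_cores)
    return capability / delivered

def scheduled_availability(period_h, scheduled_outage_h, unscheduled_outage_h):
    """Uptime as a fraction of the time the machine was scheduled to run;
    scheduled outages are excluded from the denominator."""
    scheduled = period_h - scheduled_outage_h
    return (scheduled - unscheduled_outage_h) / scheduled

# Hypothetical workload on a Jaguar-sized machine (224,256 cores):
jobs = [(150_000, 3.0e6), (20_000, 1.0e6), (60_000, 2.0e6)]
print(f"{capability_usage(jobs, 224_256):.0%}")         # 83%
print(f"{scheduled_availability(8760, 200, 150):.1%}")  # 98.2%
```

With this toy job mix, the two large jobs clear the capability threshold, so 5 of the 6 million delivered core-hours count toward capability usage.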

  15. Public Computer Assisted Learning Facilities for Children with Visual Impairment: Universal Design for Inclusive Learning

    ERIC Educational Resources Information Center

    Siu, Kin Wai Michael; Lam, Mei Seung

    2012-01-01

    Although computer assisted learning (CAL) is becoming increasingly popular, people with visual impairment face greater difficulty in accessing computer-assisted learning facilities. This is primarily because most of the current CAL facilities are not visually impaired friendly. People with visual impairment also do not normally have access to…

  16. The Argonne Leadership Computing Facility 2010 annual report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drugan, C.

Researchers found more ways than ever to conduct transformative science at the Argonne Leadership Computing Facility (ALCF) in 2010. Both familiar initiatives and innovative new programs at the ALCF are now serving a growing, global user community with a wide range of computing needs. The Department of Energy's (DOE) INCITE Program remained vital in providing scientists with major allocations of leadership-class computing resources at the ALCF. For calendar year 2011, 35 projects were awarded 732 million supercomputer processor-hours for computationally intensive, large-scale research projects with the potential to significantly advance key areas in science and engineering. Argonne also continued to provide Director's Discretionary allocations - 'start-up' awards - for potential future INCITE projects. And DOE's new ASCR Leadership Computing Challenge (ALCC) Program allocated resources to 10 ALCF projects, with an emphasis on high-risk, high-payoff simulations directly related to the Department's energy mission, national emergencies, or broadening the research community capable of using leadership computing resources. While delivering more science today, we have also been laying a solid foundation for high-performance computing in the future. After a successful DOE Lehman review, a contract was signed to deliver Mira, the next-generation Blue Gene/Q system, to the ALCF in 2012. The ALCF is working with the 16 projects selected for the Early Science Program (ESP) to enable them to be productive as soon as Mira is operational. Preproduction access to Mira will enable ESP projects to adapt their codes to its architecture and collaborate with ALCF staff in shaking down the new system. We expect the 10-petaflops system to stoke economic growth and improve U.S. competitiveness in key areas such as advancing clean energy and addressing global climate change.
Ultimately, we envision Mira as a stepping-stone to exascale-class computers that will be faster than petascale-class computers by a factor of a thousand. Pete Beckman, who served as the ALCF's Director for the past few years, has been named director of the newly created Exascale Technology and Computing Institute (ETCi). The institute will focus on developing exascale computing to extend scientific discovery and solve critical science and engineering problems. Just as Pete's leadership propelled the ALCF to great success, we know that ETCi will benefit immensely from his expertise and experience. Without question, the future of supercomputing is in good hands. I would like to thank Pete for all his effort over the past two years, during which he oversaw the establishment of ALCF2, the deployment of the Magellan project, and increases in utilization, availability, and the number of projects using ALCF1. He managed the rapid growth of the ALCF staff and made the facility what it is today. All the staff and users are better for Pete's efforts.

  17. Automated Heat-Flux-Calibration Facility

    NASA Technical Reports Server (NTRS)

    Liebert, Curt H.; Weikle, Donald H.

    1989-01-01

Computer control speeds the operation of equipment and the processing of measurements in a new heat-flux-calibration facility developed at Lewis Research Center. The facility is used for fast-transient heat-transfer testing, durability testing, and calibration of heat-flux gauges. Calibrations are performed at constant or transient heat fluxes ranging from 1 to 6 MW/m2 and at temperatures ranging from 80 K to the melting temperatures of most materials. The facility was developed to meet the need to build and calibrate very small heat-flux gauges for the Space Shuttle Main Engine (SSME). It includes a lamp head attached to the side of a service module, an argon-gas-recirculation module, a reflector, a heat exchanger, and a high-speed positioning system. This type of automated heat-flux-calibration facility can be installed in industrial plants for onsite calibration of heat-flux gauges that measure heat fluxes in advanced gas-turbine and rocket engines.

  18. Bulk Enthalpy Calculations in the Arc Jet Facility at NASA ARC

    NASA Technical Reports Server (NTRS)

    Thompson, Corinna S.; Prabhu, Dinesh; Terrazas-Salinas, Imelda; Mach, Jeffrey J.

    2011-01-01

    The Arc Jet Facilities at NASA Ames Research Center generate test streams with enthalpies ranging from 5 MJ/kg to 25 MJ/kg. The present work describes a rigorous method, based on equilibrium thermodynamics, for calculating the bulk enthalpy of the flow produced in two of these facilities. The motivation for this work is to determine a dimensionally-correct formula for calculating the bulk enthalpy that is at least as accurate as the conventional formulas that are currently used. Unlike previous methods, the new method accounts for the amount of argon that is present in the flow. Comparisons are made with bulk enthalpies computed from an energy balance method. An analysis of primary facility operating parameters and their associated uncertainties is presented in order to further validate the enthalpy calculations reported herein.
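The energy-balance method that the abstract uses for comparison is, in textbook form, an overall power bookkeeping: electrical power into the arc minus cooling-water losses, divided by the gas mass flow. The sketch below shows that standard form; the function name and operating-point numbers are illustrative assumptions, not the facility's actual procedure or settings.

```python
def bulk_enthalpy_energy_balance(arc_power_w, cooling_loss_w, mass_flow_kg_s):
    """Textbook energy-balance estimate of bulk enthalpy (J/kg):
    electrical power deposited in the arc, minus heat carried away
    by the cooling water, divided by the total gas mass flow."""
    return (arc_power_w - cooling_loss_w) / mass_flow_kg_s

# Illustrative operating point (not actual facility settings):
h_bulk = bulk_enthalpy_energy_balance(arc_power_w=20.0e6,
                                      cooling_loss_w=8.0e6,
                                      mass_flow_kg_s=0.8)
print(f"{h_bulk / 1e6:.1f} MJ/kg")  # 15.0 MJ/kg
```

The illustrative result falls within the 5 to 25 MJ/kg range quoted for the facility's test streams.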

  19. Real science at the petascale.

    PubMed

    Saksena, Radhika S; Boghosian, Bruce; Fazendeiro, Luis; Kenway, Owain A; Manos, Steven; Mazzeo, Marco D; Sadiq, S Kashif; Suter, James L; Wright, David; Coveney, Peter V

    2009-06-28

We describe computational science research that uses petascale resources to achieve scientific results at unprecedented scales and resolution. The applications span a wide range of domains, from investigation of fundamental problems in turbulence, through computational materials science research, to biomedical applications at the forefront of HIV/AIDS research and cerebrovascular haemodynamics. This work was mainly performed on the US TeraGrid 'petascale' resource, Ranger, at the Texas Advanced Computing Center, in the first half of 2008, when it was the largest computing system in the world available for open scientific research. We have sought to use this petascale supercomputer optimally across application domains and scales, exploiting the excellent parallel scaling performance found on up to at least 32 768 cores for certain of our codes in the so-called 'capability computing' category, as well as high-throughput intermediate-scale jobs for ensemble simulations in the 32-512 core range. Furthermore, this activity provides evidence that conventional parallel programming with MPI should be successful at the petascale in the short to medium term. We also report on the parallel performance of some of our codes on up to 65 536 cores on the IBM Blue Gene/P system at the Argonne Leadership Computing Facility, which has recently been named the fastest supercomputer in the world for open science.
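The parallel scaling behaviour described above is conventionally summarized as strong-scaling efficiency: measured speedup relative to a baseline run, divided by the ideal speedup implied by the core-count ratio. A minimal sketch, with hypothetical timings rather than results from the paper:

```python
def parallel_efficiency(t_base, cores_base, t, cores):
    """Strong-scaling efficiency for a fixed problem size:
    measured speedup over the baseline run divided by the
    ideal speedup implied by the core-count ratio."""
    speedup = t_base / t
    ideal = cores / cores_base
    return speedup / ideal

# Hypothetical timings: an 8x increase in cores cuts runtime
# from 1000 s to 140 s, i.e. ~7.1x speedup out of an ideal 8x.
eff = parallel_efficiency(t_base=1000.0, cores_base=4096, t=140.0, cores=32768)
print(f"{eff:.2f}")  # 0.89
```

An efficiency near 1.0 at tens of thousands of cores is what the authors mean by "excellent parallel scaling" in the capability-computing category.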

  20. Data Acquisition Systems

    NASA Technical Reports Server (NTRS)

    1994-01-01

    In the mid-1980s, Kinetic Systems and Langley Research Center determined that high-speed CAMAC (Computer Automated Measurement and Control) data acquisition systems could significantly improve Langley's ARTS (Advanced Real Time Simulation) system. The ARTS system supports flight simulation R&D, and the CAMAC equipment allowed 32 high-performance simulators to be controlled by centrally located host computers. This technology broadened Kinetic Systems' capabilities and led to several commercial applications. One of them is General Atomics' fusion research program. Kinetic Systems equipment allows tokamak data to be acquired 4 to 15 times more rapidly. Ford Motor Company uses the same technology to control and monitor transmission testing facilities.

  1. User's manual for EZPLOT version 5.5: A FORTRAN program for 2-dimensional graphic display of data

    NASA Technical Reports Server (NTRS)

    Garbinski, Charles; Redin, Paul C.; Budd, Gerald D.

    1988-01-01

    EZPLOT is a computer applications program that converts data resident on a file into a plot displayed on the screen of a graphics terminal. This program generates either time history or x-y plots in response to commands entered interactively from a terminal keyboard. Plot parameters consist of a single independent parameter and from one to eight dependent parameters. Various line patterns, symbol shapes, axis scales, text labels, and data modification techniques are available. This user's manual describes EZPLOT as it is implemented on the Ames Research Center, Dryden Research Facility ELXSI computer using DI-3000 graphics software tools.

  2. Integrated Test Facility (ITF)

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The NASA-Dryden Integrated Test Facility (ITF), also known as the Walter C. Williams Research Aircraft Integration Facility (RAIF), provides an environment for conducting efficient and thorough testing of advanced, highly integrated research aircraft. Flight test confidence is greatly enhanced by the ability to qualify interactive aircraft systems in a controlled environment. In the ITF, each element of a flight vehicle can be regulated and monitored in real time as it interacts with the rest of the aircraft systems. Testing in the ITF is accomplished through automated techniques in which the research aircraft is interfaced to a high-fidelity real-time simulation. Electric and hydraulic power are also supplied, allowing all systems except the engines to function as if in flight. The testing process is controlled by an engineering workstation that sets up initial conditions for a test, initiates the test run, monitors its progress, and archives the data generated. The workstation is also capable of analyzing results of individual tests, comparing results of multiple tests, and producing reports. The computers used in the automated aircraft testing process are also capable of operating in a stand-alone mode with a simulation cockpit, complete with its own instruments and controls. Control law development and modification, aerodynamic, propulsion, guidance model qualification, and flight planning -- functions traditionally associated with real-time simulation -- can all be performed in this manner. The Remotely Augmented Vehicles (RAV) function, now located in the ITF, is a mainstay in the research techniques employed at Dryden. This function is used for tests that are too dangerous for direct human involvement or for which computational capacity does not exist onboard a research aircraft. RAV provides the researcher with a ground-based computer that is radio linked to the test aircraft during actual flight. 
The Ground Vibration Testing (GVT) system, formerly housed in the Thermostructural Laboratory, now also resides in the ITF. In preparing a research aircraft for flight testing, it is vital to measure its structural frequencies and mode shapes and compare results to the models used in design analysis. The final function performed in the ITF is routine aircraft maintenance. This includes preflight and post-flight instrumentation checks and the servicing of hydraulics, avionics, and engines necessary on any research aircraft. Aircraft are not merely moved to the ITF for automated testing purposes but are housed there throughout their flight test programs.

  3. Integrated Test Facility (ITF)

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The NASA-Dryden Integrated Test Facility (ITF), also known as the Walter C. Williams Research Aircraft Integration Facility (RAIF), provides an environment for conducting efficient and thorough testing of advanced, highly integrated research aircraft. Flight test confidence is greatly enhanced by the ability to qualify interactive aircraft systems in a controlled environment. In the ITF, each element of a flight vehicle can be regulated and monitored in real time as it interacts with the rest of the aircraft systems. Testing in the ITF is accomplished through automated techniques in which the research aircraft is interfaced to a high-fidelity real-time simulation. Electric and hydraulic power are also supplied, allowing all systems except the engines to function as if in flight. The testing process is controlled by an engineering workstation that sets up initial conditions for a test, initiates the test run, monitors its progress, and archives the data generated. The workstation is also capable of analyzing results of individual tests, comparing results of multiple tests, and producing reports. The computers used in the automated aircraft testing process are also capable of operating in a stand-alone mode with a simulation cockpit, complete with its own instruments and controls. Control law development and modification, aerodynamic, propulsion, guidance model qualification, and flight planning -- functions traditionally associated with real-time simulation -- can all be performed in this manner. The Remotely Augmented Vehicles (RAV) function, now located in the ITF, is a mainstay in the research techniques employed at Dryden. This function is used for tests that are too dangerous for direct human involvement or for which computational capacity does not exist onboard a research aircraft. RAV provides the researcher with a ground-based computer that is radio linked to the test aircraft during actual flight. 
The Ground Vibration Testing (GVT) system, formerly housed in the Thermostructural Laboratory, now also resides in the ITF. In preparing a research aircraft for flight testing, it is vital to measure its structural frequencies and mode shapes and compare results to the models used in design analysis. The final function performed in the ITF is routine aircraft maintenance. This includes preflight and post-flight instrumentation checks and the servicing of hydraulics, avionics, and engines necessary on any research aircraft. Aircraft are not merely moved to the ITF for automated testing purposes but are housed there throughout their flight test programs.

  4. Proceedings of a Conference on Medical Information Systems.

    ERIC Educational Resources Information Center

    Health Services and Mental Health Administration (DHEW), Bethesda, MD.

    The purposes of this conference are: to define the current state of technology; to identify the problems, needs and emerging technology; and to consider alternative computer applications to multiple-facility medical information systems for the delivery of medical care and for health services research. The papers presented include: (1) General…

  5. CERC Field Research Facility Environmental Data Summary, 1977-79.

    DTIC Science & Technology

    1982-12-01

    Motorola "Mini-Ranger," coupled to a Hewlett-Packard Mini-Computer and flatbed plotter. This positioning system was put together and operated by Frank... laminations within the core. While one diver collected the sample, the second diver recorded conditions on the bottom. This description included sediment

  6. Developing Games and Simulations for Today and Tomorrow's Tech Savvy Youth

    ERIC Educational Resources Information Center

    Klopfer, Eric; Yoon, Susan

    2005-01-01

    Constructively promoting the educational development of today's young tech savvy students and fostering the productive technological facility of tomorrow's youth requires harnessing new technological tools creatively. The MIT Teacher Education Program (TEP) focuses on the research and development of educational computer-based simulations and games…

  7. Farmers' Opinions about Third-Wave Technologies.

    ERIC Educational Resources Information Center

    Lasley, Paul; Bultena, Gordon

    The opinions of 1,585 Iowa farmers about 8 emergent agricultural technologies (energy production from feed grains and oils; energy production from livestock waste; genetic engineering research on plants, livestock, and humans; robotics for on-farm use; confinement livestock facilities; and personal computers for farm families) were found to be…

  8. Annual Report and Abstracts of Research, July 1977-June 1978.

    ERIC Educational Resources Information Center

    Ohio State Univ., Columbus. Dept. of Computer and Information Science.

    This annual report of the Department of Computer and Information Science at Ohio State University for July 1977-June 1978 covers the department's organizational structure, objectives, highlights of department activities (such as grants and faculty appointments), instructional programs/course offerings, and facilities. In the second half of the…

  9. CFD Simulations of the IHF Arc-Jet Flow: Compression-Pad Separation Bolt Wedge Tests

    NASA Technical Reports Server (NTRS)

    Gokcen, Tahir; Skokova, Kristina A.

    2017-01-01

    This paper reports computational analyses in support of two wedge tests in a high enthalpy arc-jet facility at NASA Ames Research Center. These tests were conducted using two different wedge models, each placed in a free jet downstream of a corresponding different conical nozzle in the Ames 60-MW Interaction Heating Facility. Each panel test article included a metallic separation bolt imbedded in Orion compression-pad and heatshield materials, resulting in a circular protuberance over a flat plate. The protuberances produce complex model flowfields, containing shock-shock and shock-boundary layer interactions, and multiple augmented heating regions on the test plate. As part of the test calibration runs, surface pressure and heat flux measurements on water-cooled calibration plates integrated with the wedge models were also obtained. Surface heating distributions on the test articles as well as arc-jet test environment parameters for each test configuration are obtained through computational fluid dynamics simulations, consistent with the facility and calibration measurements. The present analysis comprises simulations of the non-equilibrium flow field in the facility nozzle, test box, and flow field over test articles, and comparisons with the measured calibration data.

  10. CFD Simulations of the IHF Arc-Jet Flow: Compression-Pad/Separation Bolt Wedge Tests

    NASA Technical Reports Server (NTRS)

    Goekcen, Tahir; Skokova, Kristina A.

    2017-01-01

    This paper reports computational analyses in support of two wedge tests in a high enthalpy arc-jet facility at NASA Ames Research Center. These tests were conducted using two different wedge models, each placed in a free jet downstream of a corresponding different conical nozzle in the Ames 60-MW Interaction Heating Facility. Each panel test article included a metallic separation bolt imbedded in Orion compression-pad and heatshield materials, resulting in a circular protuberance over a flat plate. The protuberances produce complex model flowfields, containing shock-shock and shock-boundary layer interactions, and multiple augmented heating regions on the test plate. As part of the test calibration runs, surface pressure and heat flux measurements on water-cooled calibration plates integrated with the wedge models were also obtained. Surface heating distributions on the test articles as well as arc-jet test environment parameters for each test configuration are obtained through computational fluid dynamics simulations, consistent with the facility and calibration measurements. The present analysis comprises simulations of the nonequilibrium flowfield in the facility nozzle, test box, and flowfield over test articles, and comparisons with the measured calibration data.

  11. A Functional Description of a Digital Flight Test System for Navigation and Guidance Research in the Terminal Area

    NASA Technical Reports Server (NTRS)

    Hegarty, D. M.

    1974-01-01

    A guidance, navigation, and control system, the Simulated Shuttle Flight Test System (SS-FTS), when interfaced with existing aircraft systems, provides a research facility for studying concepts for landing the space shuttle orbiter and conventional jet aircraft. The SS-FTS, which includes a general-purpose computer, performs all computations for precisely following a prescribed approach trajectory while properly managing the vehicle energy to allow safe arrival at the runway and landing within prescribed dispersions. The system contains hardware and software provisions for navigation with several combinations of possible navigation aids that have been suggested for the shuttle. The SS-FTS can be reconfigured to study different guidance and navigation concepts by changing only the computer software, and adapted to receive different radio navigation information through minimum hardware changes. All control laws, logic, and mode interlocks reside solely in the computer software.

  12. The Air Force Interactive Meteorological System: A Research Tool for Satellite Meteorology

    DTIC Science & Technology

    1992-12-02

    NFARnet itself is a subnet of the global computer network INTERNET that links nearly all U.S. government research facilities and universities along...required input to a generalized mathematical solution to the satellite/earth coordinate transform used for earth location of GOES sensor data. A direct...capability also exists to convert absolute coordinates to relative coordinates for transformations associated with gridded fields. 3. Spatial objective

  13. The Kinetics of Evolution of Water Vapor Clusters in Air

    DTIC Science & Technology

    1975-12-01

    Academy Annapolis, Maryland 21402 Work Supported by: Power Branch and Atmospheric Sciences Program, Office of Naval Research and Naval Air...to experiments in supersonic nozzles. The patient support of the Power Branch and the Atmospheric Sciences Program, Office of Naval Research over...the start by relying on the digital computer from the start of development. Time-shared computer facilities were provided by the Naval Weapons Lab

  14. White paper: A plan for cooperation between NASA and DARPA to establish a center for advanced architectures

    NASA Technical Reports Server (NTRS)

    Denning, P. J.; Adams, G. B., III; Brown, R. L.; Kanerva, P.; Leiner, B. M.; Raugh, M. R.

    1986-01-01

    Large, complex computer systems require many years of development. It is recognized that large scale systems are unlikely to be delivered in useful condition unless users are intimately involved throughout the design process. A mechanism is described that will involve users in the design of advanced computing systems and will accelerate the insertion of new systems into scientific research. This mechanism is embodied in a facility called the Center for Advanced Architectures (CAA). CAA would be a division of RIACS (Research Institute for Advanced Computer Science) and would receive its technical direction from a Scientific Advisory Board established by RIACS. The CAA described here is a possible implementation of a center envisaged in a proposed cooperation between NASA and DARPA.

  15. Costa - Introduction to 2015 Annual Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Costa, James E.

    Just as Sandia National Laboratories has two major locations (NM and CA), along with a number of smaller facilities across the nation, so too are its scientific, engineering, and computing resources distributed. As a part of Sandia’s Institutional Computing Program, CA site-based Sandia computer scientists and engineers have been providing mission and research staff with local CA-resident expertise on computing options while also focusing on two growing high-performance computing research problems. The first is how to increase system resilience to failure as machines grow larger, more complex, and heterogeneous. The second is how to ensure that computer hardware and configurations are optimized for specialized data-analytical mission needs within the overall Sandia computing environment, including the HPC subenvironment. All of these activities support the larger Sandia effort in accelerating development and integration of high performance computing into national security missions. Sandia continues both to promote national R&D objectives, including the recent Presidential Executive Order establishing the National Strategic Computing Initiative, and to work to ensure that the full range of computing services and capabilities is available for all mission responsibilities, from national security to energy to homeland defense.

  16. Laboratory Directed Research and Development Program FY 2006

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen

    2007-03-08

    The Ernest Orlando Lawrence Berkeley National Laboratory (Berkeley Lab or LBNL) is a multi-program national research facility operated by the University of California for the Department of Energy (DOE). As an integral element of DOE's National Laboratory System, Berkeley Lab supports DOE's missions in fundamental science, energy resources, and environmental quality. Berkeley Lab programs advance four distinct goals for DOE and the nation: (1) To perform leading multidisciplinary research in the computing sciences, physical sciences, energy sciences, biosciences, and general sciences in a manner that ensures employee and public safety and protection of the environment. (2) To develop and operate unique national experimental facilities for qualified investigators. (3) To educate and train future generations of scientists and engineers to promote national science and education goals. (4) To transfer knowledge and technological innovations and to foster productive relationships among Berkeley Lab's research programs, universities, and industry in order to promote national economic competitiveness.

  17. High-Performance Computing Data Center | Energy Systems Integration

    Science.gov Websites

    Facility | NREL High-Performance Computing Data Center The Energy Systems Integration Facility's High-Performance Computing Data Center is home to Peregrine, the largest high-performance computing system in the world exclusively dedicated to advancing

  18. Integration of Panda Workload Management System with supercomputers

    NASA Astrophysics Data System (ADS)

    De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Nilsson, P.; Novikov, A.; Oleynik, D.; Panitkin, S.; Poyda, A.; Read, K. F.; Ryabinkin, E.; Teslyuk, A.; Velikhov, V.; Wells, J. C.; Wenaus, T.

    2016-09-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited with the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment relies on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System to manage the workflow for all data processing on over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3+ petaFLOPS, the next LHC data-taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integrating the PanDA WMS with supercomputers in the United States, Europe, and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the supercomputer at the National Research Center "Kurchatov Institute", IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes. 
This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms. We will present our current accomplishments in running PanDA WMS at supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facility's infrastructure for High Energy and Nuclear Physics, as well as other data-intensive science applications, such as bioinformatics and astro-particle physics.
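A minimal sketch of the kind of rank-to-workload mapping such a light-weight MPI wrapper might use to fan independent single-threaded payloads out across worker-node cores (the function and variable names are hypothetical illustrations, not taken from the PanDA pilot code):

```python
def jobs_for_rank(jobs, rank, size):
    """Round-robin assignment of independent single-threaded jobs to one
    MPI rank; each rank runs its slice serially while all ranks proceed
    in parallel, so no two ranks share a job."""
    return jobs[rank::size]

# With 8 jobs and 3 ranks, rank 0 would run jobs 0, 3, and 6
print(jobs_for_rank(list(range(8)), 0, 3))  # [0, 3, 6]
```

In an actual MPI wrapper, `rank` and `size` would come from the communicator (e.g. `MPI.COMM_WORLD` in mpi4py) rather than being passed in directly.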

  19. Flying a College on the Computer. The Use of the Computer in Planning Buildings.

    ERIC Educational Resources Information Center

    Saint Louis Community Coll., MO.

    Upon establishment of the St. Louis Junior College District, it was decided to make use of computer simulation facilities of a nearby aero-space contractor to develop a master schedule for facility planning purposes. Projected enrollments and course offerings were programmed with idealized student-teacher ratios to project facility needs. In…

  20. Development of computer-based analytical tool for assessing physical protection system

    NASA Astrophysics Data System (ADS)

    Mardhi, Alim; Pengvanich, Phongphaeth

    2016-01-01

    Assessment of physical protection system effectiveness is a priority for ensuring optimum protection against unlawful acts at a nuclear facility, such as unauthorized removal of nuclear materials and sabotage of the facility itself. Since an assessment based on real exercise scenarios is costly and time-consuming, a computer-based analytical tool offers a practical way to evaluate likely threat scenarios. Several tools, such as EASI and SAPE, are available for immediate use; however, for our research purposes it is more suitable to have a tool that can be customized and further enhanced. In this work, we have developed a computer-based analytical tool that uses a network methodological approach to model adversary paths. The inputs are the multiple security elements used to evaluate the effectiveness of the system's detection, delay, and response. The tool can identify the most critical path and quantify the probability of system effectiveness as a performance measure.
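To illustrate the network-based scoring described above, here is a toy sketch that represents each adversary path as a sequence of protection elements and flags the path with the lowest cumulative detection probability as most critical (the structure, element probabilities, and names are hypothetical, not taken from the tool itself):

```python
def path_detection_prob(p_detect_per_element):
    """Probability the adversary is detected at least once along a path,
    assuming each element detects independently."""
    p_miss = 1.0
    for p in p_detect_per_element:
        p_miss *= (1.0 - p)
    return 1.0 - p_miss

# Each path is a sequence of protection elements (e.g. fence, door, vault)
paths = {
    "A": [0.5, 0.7, 0.9],
    "B": [0.3, 0.4, 0.6],
}
scores = {name: path_detection_prob(ps) for name, ps in paths.items()}
critical = min(scores, key=scores.get)  # weakest path for the defender
print(critical, round(scores[critical], 3))  # B 0.832
```

A full tool would also fold in delay times and response-force timing; this sketch covers only the detection term.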

  1. Design and Development of a Real-Time Model Attitude Measurement System for Hypersonic Facilities

    NASA Technical Reports Server (NTRS)

    Jones, Thomas W.; Lunsford, Charles B.

    2005-01-01

    A series of wind tunnel tests have been conducted to evaluate a multi-camera videogrammetric system designed to measure model attitude in hypersonic facilities. The technique utilizes processed video data and applies photogrammetric principles for point tracking to compute model position including pitch, roll and yaw variables. A discussion of the constraints encountered during the design, development, and testing process, including lighting, vibration, operational range and optical access is included. Initial measurement results from the NASA Langley Research Center (LaRC) 31-Inch Mach 10 tunnel are presented.

  2. Design and Development of a Real-Time Model Attitude Measurement System for Hypersonic Facilities

    NASA Technical Reports Server (NTRS)

    Jones, Thomas W.; Lunsford, Charles B.

    2004-01-01

    A series of wind tunnel tests have been conducted to evaluate a multi-camera videogrammetric system designed to measure model attitude in hypersonic facilities. The technique utilizes processed video data and applies photogrammetric principles for point tracking to compute model position including pitch, roll and yaw variables. A discussion of the constraints encountered during the design, development, and testing process, including lighting, vibration, operational range and optical access is included. Initial measurement results from the NASA Langley Research Center (LaRC) 31-Inch Mach 10 tunnel are presented.

  3. Integrated Component-based Data Acquisition Systems for Aerospace Test Facilities

    NASA Technical Reports Server (NTRS)

    Ross, Richard W.

    2001-01-01

    The Multi-Instrument Integrated Data Acquisition System (MIIDAS), developed by the NASA Langley Research Center, uses commercial off-the-shelf (COTS) products, integrated with custom software, to provide a broad range of capabilities at a low cost throughout the system's entire life cycle. MIIDAS combines data acquisition capabilities with online and post-test data reduction computations. COTS products lower purchase and maintenance costs by reducing the level of effort required to meet system requirements. Object-oriented methods are used to enhance modularity, encourage reusability, and promote adaptability, reducing software development costs. Using only COTS products and custom software supported on multiple platforms reduces the cost of porting the system to other platforms. The post-test data reduction capabilities of MIIDAS have been installed at four aerospace testing facilities at NASA Langley Research Center. The systems installed at these facilities provide a common user interface, reducing the training time required for personnel who work across multiple facilities. The techniques employed by MIIDAS enable NASA to build a system with a lower initial purchase price and reduced sustaining maintenance costs. With MIIDAS, NASA has built a highly flexible next-generation data acquisition and reduction system for aerospace test facilities that meets customer expectations.

  4. Development and Use of a Virtual NMR Facility

    NASA Astrophysics Data System (ADS)

    Keating, Kelly A.; Myers, James D.; Pelton, Jeffrey G.; Bair, Raymond A.; Wemmer, David E.; Ellis, Paul D.

    2000-03-01

    We have developed a "virtual NMR facility" (VNMRF) to enhance access to the NMR spectrometers in Pacific Northwest National Laboratory's Environmental Molecular Sciences Laboratory (EMSL). We use the term virtual facility to describe a real NMR facility made accessible via the Internet. The VNMRF combines secure remote operation of the EMSL's NMR spectrometers over the Internet with real-time videoconferencing, remotely controlled laboratory cameras, real-time computer display sharing, a Web-based electronic laboratory notebook, and other capabilities. Remote VNMRF users can see and converse with EMSL researchers, directly and securely control the EMSL spectrometers, and collaboratively analyze results. A customized Electronic Laboratory Notebook allows interactive Web-based access to group notes, experimental parameters, proposed molecular structures, and other aspects of a research project. This paper describes our experience developing a VNMRF and details the specific capabilities available through the EMSL VNMRF. We show how the VNMRF has evolved during a test project and present an evaluation of its impact in the EMSL and its potential as a model for other scientific facilities. All Collaboratory software used in the VNMRF is freely available from http://www.emsl.pnl.gov:2080/docs/collab.

  5. IYA Outreach Plans for Appalachian State University's Observatories

    NASA Astrophysics Data System (ADS)

    Caton, Daniel B.; Pollock, J. T.; Saken, J. M.

    2009-01-01

    Appalachian State University will provide a variety of observing opportunities for the public during the International Year of Astronomy. These will be focused on both the campus GoTo Telescope Facility used by Introductory Astronomy students and the research facilities at our Dark Sky Observatory. The campus facility is composed of a rooftop deck with a roll-off roof housing fifteen Celestron C11 telescopes. During astronomy lab class meetings these telescopes are used either in situ or remotely by computer control from the adjacent classroom. For the IYA we will host the public for regular observing sessions at these telescopes. The research facility features a 32-inch DFM Engineering telescope with its dome attached to the Cline Visitor Center. The Visitor Center is still under construction and we anticipate its completion for a spring opening during IYA. The CVC will provide areas for educational outreach displays and a view of the telescope control room. Visitors will view celestial objects directly at the eyepiece. We are grateful for the support of the National Science Foundation, through grant number DUE-0536287, which provided instrumentation for the GoTo facility, and to J. Donald Cline for support of the Visitor Center.

  6. Operating capability and current status of the reactivated NASA Lewis Research Center Hypersonic Tunnel Facility

    NASA Technical Reports Server (NTRS)

    Thomas, Scott R.; Trefny, Charles J.; Pack, William D.

    1995-01-01

    The NASA Lewis Research Center's Hypersonic Tunnel Facility (HTF) is a free-jet, blowdown propulsion test facility that can simulate up to Mach-7 flight conditions with true air composition. Mach-5, -6, and -7 nozzles, each with a 42 inch exit diameter, are available. Previously obtained calibration data indicate that the test flow uniformity of the HTF is good. The facility, without modifications, can accommodate models approximately 10 feet long. The test gas is heated using a graphite core induction heater that generates a nonvitiated flow. The combination of clean-air, large-scale, and Mach-7 capabilities is unique to the HTF and enables an accurate propulsion performance determination. The reactivation of the HTF, in progress since 1990, includes refurbishing the graphite heater, the steam generation plant, the gaseous oxygen system, and all control systems. All systems were checked out and recertified, and environmental systems were upgraded to meet current standards. The data systems were also upgraded to current standards and a communication link with NASA-wide computers was added. In May 1994, the reactivation was complete, and an integrated systems test was conducted to verify facility operability. This paper describes the reactivation, the facility status, the operating capabilities, and specific applications of the HTF.

  7. Progress in aeronautical research and technology applicable to civil air transports

    NASA Technical Reports Server (NTRS)

    Bower, R. E.

    1981-01-01

    Recent progress in the aeronautical research and technology program being conducted by the United States National Aeronautics and Space Administration is discussed. Emphasis is on computational capability, new testing facilities, drag reduction, turbofan and turboprop propulsion, noise, composite materials, active controls, integrated avionics, cockpit displays, flight management, and operating problems. It is shown that this technology is significantly impacting the efficiency of the new civil air transports. The excitement of emerging research promises even greater benefits to future aircraft developments.

  8. Method and computer program product for maintenance and modernization backlogging

    DOEpatents

    Mattimore, Bernard G; Reynolds, Paul E; Farrell, Jill M

    2013-02-19

    According to one embodiment, a computer program product for determining future facility conditions includes a computer readable medium having computer readable program code stored therein. The computer readable program code includes computer readable program code for calculating a time period specific maintenance cost, for calculating a time period specific modernization factor, and for calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. In another embodiment, a computer-implemented method for calculating future facility conditions includes calculating a time period specific maintenance cost, calculating a time period specific modernization factor, and calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. Other embodiments are also presented.
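The claimed calculation reduces to a simple sum of three time-period-specific terms. A minimal sketch follows; the function and parameter names are hypothetical illustrations, not taken from the patent:

```python
def future_facility_conditions(maintenance_cost: float,
                               modernization_factor: float,
                               backlog_factor: float) -> float:
    """Future facility conditions for one time period, per the embodiment
    described: the time-period-specific maintenance cost plus the
    modernization factor plus the backlog factor."""
    return maintenance_cost + modernization_factor + backlog_factor

# Hypothetical inputs for one time period (illustrative values only)
print(future_facility_conditions(120_000.0, 35_000.0, 18_000.0))  # 173000.0
```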

  9. Argonne Discovery Yields Self-Healing Diamond-Like Carbon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cunningham, Greg; Jones, Katie Elyce

    We report that large-scale reactive molecular dynamics simulations carried out on the US Department of Energy’s IBM Blue Gene/Q Mira supercomputer at the Argonne Leadership Computing Facility, along with experiments conducted by researchers in Argonne’s Energy Systems Division, enabled the design of a “self-healing” anti-wear coating that drastically reduces friction and related degradation in engines and moving machinery. Now, the computational work advanced for this purpose is being used to identify the friction-fighting potential of other catalysts.

  10. Argonne Discovery Yields Self-Healing Diamond-Like Carbon

    DOE PAGES

    Cunningham, Greg; Jones, Katie Elyce

    2016-10-27

    We report that large-scale reactive molecular dynamics simulations carried out on the US Department of Energy’s IBM Blue Gene/Q Mira supercomputer at the Argonne Leadership Computing Facility, along with experiments conducted by researchers in Argonne’s Energy Systems Division, enabled the design of a “self-healing” anti-wear coating that drastically reduces friction and related degradation in engines and moving machinery. Now, the computational work advanced for this purpose is being used to identify the friction-fighting potential of other catalysts.

  11. Computer aided design environment for the analysis and design of multi-body flexible structures

    NASA Technical Reports Server (NTRS)

    Ramakrishnan, Jayant V.; Singh, Ramen P.

    1989-01-01

A computer-aided design environment consisting of the programs NASTRAN, TREETOPS, and MATLAB is presented in this paper. With links for data transfer between these programs, the integrated design of multi-body flexible structures is significantly enhanced. The CAD environment is used to model the Space Shuttle/Pinhole Occulter Facility; a controller is then designed and evaluated in the nonlinear time-history sense. Recent enhancements and ongoing research to add more capabilities are also described.

  12. Introduction to the LaRC central scientific computing complex

    NASA Technical Reports Server (NTRS)

    Shoosmith, John N.

    1993-01-01

The computers and associated equipment that make up the Central Scientific Computing Complex of the Langley Research Center are briefly described. The electronic networks that provide access to the various components of the complex, and a number of areas that can be used by Langley and contractor staff for special applications (scientific visualization, image processing, software engineering, and grid generation), are also described. Flight simulation facilities that use the central computers are described. Management of the complex, procedures for its use, and available services and resources are discussed. This document is intended for new users of the complex, for current users who wish to keep apprised of changes, and for visitors who need to understand the role of central scientific computers at Langley.

  13. Proposal for continued research in intelligent machines at the Center for Engineering Systems Advanced Research (CESAR) for FY 1988 to FY 1991

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weisbin, C.R.

    1987-03-01

    This document reviews research accomplishments achieved by the staff of the Center for Engineering Systems Advanced Research (CESAR) during the fiscal years 1984 through 1987. The manuscript also describes future CESAR objectives for the 1988-1991 planning horizon, and beyond. As much as possible, the basic research goals are derived from perceived Department of Energy (DOE) needs for increased safety, productivity, and competitiveness in the United States energy producing and consuming facilities. Research areas covered include the HERMIES-II Robot, autonomous robot navigation, hypercube computers, machine vision, and manipulators.

  14. Influence of computational fluid dynamics on experimental aerospace facilities: A fifteen year projection

    NASA Technical Reports Server (NTRS)

    1983-01-01

An assessment was made of the impact of developments in computational fluid dynamics (CFD) on the traditional role of aerospace ground test facilities over the next fifteen years. With the improvements in CFD and the more powerful scientific computers projected over this period, it is expected that the flow over a complete aircraft could be computed at a unit cost three orders of magnitude lower than is presently possible. Over the same period, ground test facilities will improve through the application of computational techniques, including CFD, to data acquisition, facility operational efficiency, and simulation of the flight envelope; however, no dramatic change in unit cost is expected, as greater efficiency will be countered by higher energy and labor costs.

  15. Study of Fluid Experiment System (FES)/CAST/Holographic Ground System (HGS)

    NASA Technical Reports Server (NTRS)

    Workman, Gary L.; Cummings, Rick; Jones, Brian

    1992-01-01

Holographic and schlieren optical techniques for studying concentration gradients in solidification processes have been used by several investigators over the years, and the HGS facility at MSFC has been a primary resource in developing this capability. Consequently, scientific personnel have been able to apply these techniques in both ground-based research and space experiments. An important event in the scientific utilization of the HGS facilities was the TGS crystal growth and the casting and solidification technology (CAST) experiments flown on the International Microgravity Laboratory (IML) mission in March of this year. The preparation and processing of these space observations are the primary experiments reported in this work. This project provides ground-based studies to optimize the holographic techniques used to acquire information about the crystal growth processes flown on IML. Because the ground-based studies will be compared with the space-based experimental results, sufficient ground-based studies must be conducted to determine how the experiment performed in space. Current computer-based capabilities in image processing and numerical computation have assisted in these efforts, and this study has shown that such advanced computing capabilities are helpful in the data analysis of these experiments.

  16. Yesterday, today and tomorrow: A perspective of CFD at NASA's Ames Research Center

    NASA Technical Reports Server (NTRS)

    Kutler, Paul; Gross, Anthony R.

    1987-01-01

An opportunity is afforded to reflect on the computational fluid dynamics (CFD) program at the NASA Ames Research Center: its beginning, its present state, and its direction for the future. Essential elements of the research program during each period are reviewed, including people, facilities, and research problems. The burgeoning role that CFD is playing in the aerospace business is discussed, as is the necessity for validated CFD tools. The current aeronautical position of the United States is assessed, as are revolutionary goals to help maintain its aeronautical supremacy in the world.

  17. Programmable multi-zone furnace for microgravity research

    NASA Technical Reports Server (NTRS)

    Rosenthal, Bruce N.; Krolikowski, Cathryn R.

    1991-01-01

To provide new furnace technology for microgravity research studies and commercial applications in materials processing, development of the Programmable Multi-Zone Furnace (PMZF) has been initiated. The PMZF is a multi-user materials processing furnace facility composed of thirty or more heater elements, arranged in series on a muffle tube or in a stacked-ring configuration and independently controlled by a computer. A central aim of the PMZF project is to allow furnace thermal gradient profiles to be reconfigured in response to investigators' requests, without physical modification of the hardware. The future location of the PMZF facility is discussed, the preliminary science survey results and preliminary conceptual designs for the PMZF are presented, and a review of multi-zone furnace technology is given.

  18. IBM PC enhances the world's future

    NASA Technical Reports Server (NTRS)

    Cox, Jozelle

    1988-01-01

Although the purpose of this research is to illustrate the importance of computers to the public, particularly the IBM PC, computers developed before the IBM PC came into use are also examined. IBM, like other computing organizations, began serving the public years ago and continues to find ways to enhance human life. With new developments in supercomputers such as the Cray-2, and recent advances in artificial intelligence programming, the human race is gaining knowledge at a rapid pace. All have benefited from the development of computers: they have not only brought new assets to life but have also made life more and more of a challenge every day.

  19. Feasibility of Conducting J-2X Engine Testing at the Glenn Research Center Plum Brook Station B-2 Facility

    NASA Technical Reports Server (NTRS)

    Schafer, Charles F.; Cheston, Derrick J.; Worlund, Armis L.; Brown, James R.; Hooper, William G.; Monk, Jan C.; Winstead, Thomas W.

    2008-01-01

A trade study of the feasibility of conducting J-2X testing in the Glenn Research Center (GRC) Plum Brook Station (PBS) B-2 facility was initiated in May 2006, with results available in October 2006. The Propulsion Test Integration Group (PTIG) led the study with support from Marshall Space Flight Center (MSFC) and Jacobs Sverdrup Engineering. The primary focus of the trade study was on facility design concepts and their capability to satisfy the J-2X altitude simulation test requirements. The propulsion systems previously tested in the B-2 facility were in the 30,000-pound (30K) thrust class; the J-2X thrust is approximately 10 times larger. Therefore, concepts significantly different from the current configuration are necessary for the diffuser, spray chamber subsystems, and cooling water. Steam exhaust condensation in the spray chamber is judged to be the key risk relative to acceptable spray chamber pressure. Further assessment via computational fluid dynamics (CFD) and other simulation capabilities (e.g., a methodology for anchoring predictions with actual test data, and subscale testing) is recommended to support the investigation.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Ann E; Barker, Ashley D; Bland, Arthur S Buddy

Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) continues to deliver the most powerful resources in the U.S. for open science. At 2.33 petaflops peak performance, the Cray XT Jaguar delivered more than 1.4 billion core hours in calendar year (CY) 2011 to researchers around the world for computational simulations relevant to national and energy security; advancing the frontiers of knowledge in physical sciences and areas of biological, medical, environmental, and computer sciences; and providing world-class research facilities for the nation's science enterprise. Users reported more than 670 publications this year arising from their use of OLCF resources; of these, we report the 300 in this review that are consistent with the guidance provided. Scientific achievements by OLCF users cut across all range scales, from atomic to molecular to large-scale structures. At the atomic scale, researchers discovered that the anomalously long half-life of Carbon-14 can be explained by calculating, for the first time, the very complex three-body interactions between all the neutrons and protons in the nucleus. At the molecular scale, researchers combined experimental results from LBL's light source and simulations on Jaguar to discover how DNA replication continues past a damaged site so a mutation can be repaired later. Other researchers combined experimental results from ORNL's Spallation Neutron Source and simulations on Jaguar to reveal the molecular structure of the ligno-cellulosic material used in bioethanol production. This year, Jaguar has been used to perform billion-cell CFD calculations to develop shock wave compression turbomachinery as a means to meet DOE goals for reducing carbon sequestration costs. General Electric used Jaguar to calculate the unsteady flow through turbomachinery to learn what efficiencies the traditional steady-flow assumption is hiding from designers. Even a 1% improvement in turbine design can save the nation billions of gallons of fuel.

  1. [The Computer Competency of Nurses in Long-Term Care Facilities and Related Factors].

    PubMed

    Chang, Ya-Ping; Kuo, Huai-Ting; Li, I-Chuan

    2016-12-01

It is important for nurses who work in long-term care facilities (LTCFs) to have an adequate level of computer competency because of the multidisciplinary and comprehensive nature of long-term care services. Thus, it is important to understand the current computer competency of nursing staff in LTCFs and the factors related to this competency. The aims were to explore the computer competency of LTCF nurses and to identify the demographic and computer-usage characteristics that relate significantly to computer competency in the LTCF environment. A cross-sectional research design and a self-report questionnaire were used to collect data from 185 nurses working at LTCFs in Taipei. The results show that frequency of computer use (β = .33), age (β = -.30), type(s) of software used at work (β = .28), hours of on-the-job training (β = -.14), prior work experience at other LTCFs (β = -.14), and Internet use at home (β = .12) together explain 58.0% of the variance in participants' computer competency. These results suggest two measures that may help increase the computer competency of LTCF nurses: (1) nurses should be encouraged to use electronic nursing records rather than handwritten records, and (2) on-the-job training programs should emphasize competency in the Excel software package in order to maintain efficient, good-quality LTC services after implementation of the LTC insurance policy.
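The reported standardized coefficients combine linearly into a predicted standardized competency score. The sketch below is illustrative only: the variable names and the example predictor z-scores are hypothetical, not data from the study.

```python
# Standardized (beta) coefficients as reported in the abstract.
BETAS = {
    "computer_use_frequency": 0.33,
    "age": -0.30,
    "software_types_at_work": 0.28,
    "on_the_job_training_hours": -0.14,
    "prior_ltcf_experience": -0.14,
    "internet_use_at_home": 0.12,
}

def predicted_competency_z(z_scores: dict) -> float:
    """Predicted standardized competency: the weighted sum of
    standardized predictors (a standard regression identity)."""
    return sum(BETAS[name] * z for name, z in z_scores.items())

# A hypothetical nurse: frequent computer use, younger than average,
# some software variety, average training/experience, home Internet use.
example = {
    "computer_use_frequency": 1.0,
    "age": -0.5,
    "software_types_at_work": 0.5,
    "on_the_job_training_hours": 0.0,
    "prior_ltcf_experience": 0.0,
    "internet_use_at_home": 1.0,
}
print(round(predicted_competency_z(example), 2))  # 0.74
```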

  2. A review of recent developments in flight test techniques at the Ames Research Center, Dryden Flight Research Facility

    NASA Technical Reports Server (NTRS)

    Layton, G. P.

    1984-01-01

    New flight test techniques in use at Ames Dryden are reviewed. The use of the pilot in combination with ground and airborne computational capabilities to maximize data return is discussed, including the remotely piloted research vehicle technique for high-risk testing, the remotely augmented vehicle technique for handling qualities research, and use of ground computed flight director information to fly unique profiles such as constant Reynolds number profiles through the transonic flight regime. Techniques used for checkout and design verification of systems-oriented aircraft are discussed, including descriptions of the various simulations, iron bird setups, and vehicle tests. Some newly developed techniques to support the aeronautical research disciplines are discussed, including a new approach to position-error determination, and the use of a large skin friction balance for the measurement of drag caused by various excrescencies.

  3. Nitash Balsara

    Science.gov Websites

NPBalsara@lbl.gov 510-642-8973 Research profile » A U.S. Department of Energy National Laboratory operated by the University of California.

  4. Ting Xu

    Science.gov Websites

University of California, Berkeley tingxu@berkeley.edu 510-642-1632 Research profile » A U.S. Department of Energy National Laboratory operated by the University of California.

  5. Detailed Concepts in Performing Oversight on an Army Radiographic Inspection Site

    DTIC Science & Technology

    2017-03-01

number of facilities that perform various nondestructive tests, inspections, and evaluations. The U.S. Army Armament Research, Development and...procedures, and documentation in place to conform to nationally recognized standards. This report specifically reviews the radiographic testing... Keywords: X-ray; Nondestructive testing (NDT); Radiographic testing (RT); Computed tomography (CT)

  6. Management of Library Security. SPEC Kit 247 and SPEC Flyer 247.

    ERIC Educational Resources Information Center

    Soete, George J., Comp.; Zimmerman, Glen, Comp.

    This SPEC (Systems and Procedures Exchange Center) Kit and Flyer reports results of a survey conducted in January 1999 that examined how ARL (Association of Research Libraries) member libraries assure the safety and security of persons, library materials, physical facilities, furnishings, computer equipment, etc. Forty-five of the 122 ARL member…

  7. Computer-based nursing documentation in nursing homes: A feasibility study.

    PubMed

    Yu, Ping; Qiu, Yiyu; Crookes, Patrick

    2006-01-01

The burden of paper-based nursing documentation has led to increasing complaints and decreasing job satisfaction amongst aged-care workers in Australian nursing homes. The automation of nursing documentation has been identified as one possible strategy to address this issue. A major obstacle to the introduction of IT solutions, however, has been a prevailing doubt concerning the ability and/or willingness of aged-care workers to accept such innovation. This research investigates the attitudes of aged-care workers towards adopting IT innovation. Questionnaire surveys were conducted in 13 nursing homes around the Illawarra and Sydney regions of Australia. The surveys found that an unexpectedly high 89.3% of participants supported the strategy of introducing electronic nursing documentation systems into residential aged-care facilities, and 94.3% would use such a system depending on circumstances. Despite a shortage of computers in the workplace, which is a major barrier, this research provides strong evidence that care workers in residential aged-care facilities are willing to accept electronic nursing documentation practice and that the uptake of information technology in residential aged care is feasible in Australia.

  8. Instrument Systems Analysis and Verification Facility (ISAVF) users guide

    NASA Technical Reports Server (NTRS)

    Davis, J. F.; Thomason, J. O.; Wolfgang, J. L.

    1985-01-01

The ISAVF facility is primarily an interconnected system of computers, special-purpose real-time hardware, and associated generalized software systems that permits instrument system analysts, design engineers, and instrument scientists to perform trade-off studies, specification development, instrument modeling, and verification of instrument hardware performance. It is not the intent of the ISAVF to duplicate or replace existing special-purpose facilities such as the Code 710 Optical Laboratories or the Code 750 Test and Evaluation facilities. The ISAVF will provide data acquisition and control services for these facilities, as needed, using remote computer stations attached to the main ISAVF computers via dedicated communication lines.

  9. UTILIZATION OF COMPUTER FACILITIES IN THE MATHEMATICS AND BUSINESS CURRICULUM IN A LARGE SUBURBAN HIGH SCHOOL.

    ERIC Educational Resources Information Center

    RENO, MARTIN; AND OTHERS

    A STUDY WAS UNDERTAKEN TO EXPLORE IN A QUALITATIVE WAY THE POSSIBLE UTILIZATION OF COMPUTER AND DATA PROCESSING METHODS IN HIGH SCHOOL EDUCATION. OBJECTIVES WERE--(1) TO ESTABLISH A WORKING RELATIONSHIP WITH A COMPUTER FACILITY SO THAT ABLE STUDENTS AND THEIR TEACHERS WOULD HAVE ACCESS TO THE FACILITIES, (2) TO DEVELOP A UNIT FOR THE UTILIZATION…

  10. Laboratory directed research and development program FY 1999

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, Todd; Levy, Karin

    2000-03-08

The Ernest Orlando Lawrence Berkeley National Laboratory (Berkeley Lab or LBNL) is a multi-program national research facility operated by the University of California for the Department of Energy (DOE). As an integral element of DOE's National Laboratory System, Berkeley Lab supports DOE's missions in fundamental science, energy resources, and environmental quality. Berkeley Lab programs advance four distinct goals for DOE and the nation: (1) To perform leading multidisciplinary research in the computing sciences, physical sciences, energy sciences, biosciences, and general sciences in a manner that ensures employee and public safety and protection of the environment. (2) To develop and operate unique national experimental facilities for qualified investigators. (3) To educate and train future generations of scientists and engineers to promote national science and education goals. (4) To transfer knowledge and technological innovations and to foster productive relationships among Berkeley Lab's research programs, universities, and industry in order to promote national economic competitiveness. This is the annual report on the Laboratory Directed Research and Development (LDRD) program for FY99.

  11. First-principles characterization of formate and carboxyl adsorption on the stoichiometric CeO2(111) and CeO2(110) surfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mei, Donghai

    2013-05-20

Molecular adsorption of formate and carboxyl on the stoichiometric CeO2(111) and CeO2(110) surfaces was studied using periodic density functional theory (DFT+U) calculations. Two distinguishable adsorption modes (strong and weak) of formate are identified; the bidentate configuration is more stable than the monodentate configuration. Both formate and carboxyl bind more strongly at the more open CeO2(110) surface. The calculated vibrational frequencies of the two adsorbed species are consistent with experimental measurements. Finally, the effects of the U parameter on the adsorption of formate and carboxyl over both CeO2 surfaces were investigated. We found that the geometrical configurations of the two adsorbed species are not affected by using different U parameters (U = 0, 5, and 7). However, the calculated adsorption energy of carboxyl increases pronouncedly with the U value, while the adsorption energy of formate changes only slightly (<0.2 eV). The Bader charge analysis shows that opposite charge transfer occurs for formate and carboxyl adsorption: the adsorbed formate is negatively charged while the adsorbed carboxyl is positively charged. Interestingly, the amount of transferred charge also increases with increasing U parameter. This work was supported by the Laboratory Directed Research and Development (LDRD) project of the Pacific Northwest National Laboratory (PNNL) and by a Cooperative Research and Development Agreement (CRADA) with General Motors. The computations were performed using the Molecular Science Computing Facility in the William R. Wiley Environmental Molecular Sciences Laboratory (EMSL), which is a U.S. Department of Energy national scientific user facility located at PNNL in Richland, Washington. Part of the computing time was also granted by the National Energy Research Scientific Computing Center (NERSC).
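The adsorption energies discussed above follow the standard slab-model definition; the abstract does not state it explicitly, so the sign convention below is an assumption:

```latex
E_{\mathrm{ads}} = E_{\mathrm{adsorbate/slab}} - E_{\mathrm{slab}} - E_{\mathrm{adsorbate}}
```

Under this convention a more negative \(E_{\mathrm{ads}}\) means stronger binding, so the reported growth of the carboxyl adsorption energy with U corresponds to a U-dependent shift in this difference of total energies.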

  12. Facilities | Integrated Energy Solutions | NREL

    Science.gov Websites

strategies needed to optimize our entire energy system. High-Performance Computing Data Center: high-performance computing facilities at NREL provide high-speed

  13. The NASA Energy Conservation Program

    NASA Technical Reports Server (NTRS)

    Gaffney, G. P.

    1977-01-01

    Large energy-intensive research and test equipment at NASA installations is identified, and methods for reducing energy consumption outlined. However, some of the research facilities are involved in developing more efficient, fuel-conserving aircraft, and tradeoffs between immediate and long-term conservation may be necessary. Major programs for conservation include: computer-based systems to automatically monitor and control utility consumption; a steam-producing solid waste incinerator; and a computer-based cost analysis technique to engineer more efficient heating and cooling of buildings. Alternate energy sources in operation or under evaluation include: solar collectors; electric vehicles; and ultrasonically emulsified fuel to attain higher combustion efficiency. Management support, cooperative participation by employees, and effective reporting systems for conservation programs, are also discussed.

  14. Experience with a UNIX based batch computing facility for H1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerhards, R.; Kruener-Marquis, U.; Szkutnik, Z.

    1994-12-31

    A UNIX based batch computing facility for the H1 experiment at DESY is described. The ultimate goal is to replace the DESY IBM mainframe by a multiprocessor SGI Challenge series computer, using the UNIX operating system, for most of the computing tasks in H1.

  15. 78 FR 18353 - Guidance for Industry: Blood Establishment Computer System Validation in the User's Facility...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-26

    ...; (Formerly FDA-2007D-0393)] Guidance for Industry: Blood Establishment Computer System Validation in the User... Industry: Blood Establishment Computer System Validation in the User's Facility'' dated April 2013. The... document entitled ``Guidance for Industry: Blood Establishment Computer System Validation in the User's...

16. Seals Research at Texas A&M University

    NASA Technical Reports Server (NTRS)

    Morrison, Gerald L.

    1991-01-01

The Turbomachinery Laboratory at Texas A&M has been providing experimental data and computational codes for the design of seals for many years. The program began with the development of a Halon-based seal test rig; this facility provided information about the effective stiffness and damping in whirling seals, with the Halon effectively simulating cryogenic fluids. Another test facility was developed (using air as the working fluid) in which the stiffness and damping matrices can be determined. These data were used to develop bulk-flow models of a seal's effect upon rotating machinery; in conjunction with this research, a bulk-flow model was developed for calculating the performance and rotordynamic coefficients of annular pressure seals of arbitrary non-uniform clearance for barotropic fluids such as LH2, LOX, LN2, and CH4. This program is very efficient (fast) and converges for very large eccentricities. Currently, work is being performed on a bulk-flow analysis of the effects of the impeller-shroud interaction upon the stability of pumps. The data were used, along with data from other researchers, to develop an empirical leakage prediction code for MSFC. Presently, the flow fields inside labyrinth and annular seals are being studied in detail. An advanced 3-D Doppler anemometer system is being used to measure the mean velocity and the entire Reynolds stress tensor distribution throughout the seals. Concentric and statically eccentric seals have been studied; presently, whirling seals are being studied. The data obtained are providing valuable information about the flow phenomena occurring inside the seals, as well as a database for comparison with numerical predictions and for turbulence model development. A finite-difference computer code was developed for solving the Reynolds-averaged Navier-Stokes equations inside labyrinth seals. A multi-scale k-epsilon turbulence model is currently being evaluated. A new seal geometry was designed and patented using a computer code. A large-scale, 2-D seal flow visualization facility is also being developed.

  17. Numerical simulation of long-duration blast wave evolution in confined facilities

    NASA Astrophysics Data System (ADS)

    Togashi, F.; Baum, J. D.; Mestreau, E.; Löhner, R.; Sunshine, D.

    2010-10-01

The objective of this research effort was to investigate the quasi-steady flow field produced by explosives in confined facilities. In this effort we modeled tests in which a high-explosive (HE) cylindrical charge was hung in the center of a room and detonated. The HEs used for the tests were C-4 and AFX 757. While C-4 is only slightly under-oxidized and is typically modeled as an ideal explosive, AFX 757 includes a significant percentage of aluminum particles, so long-time afterburning and energy release must be considered. The Lawrence Livermore National Laboratory (LLNL) thermo-chemical equilibrium algorithm "Cheetah" was used to estimate the remaining burnable detonation products; from these remaining species, the afterburning energy was computed and added to the flow field. Computations of the detonation and afterburn of the two HEs in the confined multi-room facility were performed. The results demonstrate excellent agreement with available experimental data in terms of blast wave time of arrival, peak shock amplitude, reverberation, and total impulse (and hence total energy release, via either the detonation or afterburn processes).

  18. Methods for nuclear air-cleaning-system accident-consequence assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrae, R.W.; Bolstad, J.W.; Gregory, W.S.

    1982-01-01

This paper describes a multilaboratory research program directed toward addressing many questions that analysts face when performing air cleaning accident consequence assessments. The program involves developing analytical tools and supportive experimental data that will be useful in making more realistic assessments of accident source terms within and up to the atmospheric boundaries of nuclear fuel cycle facilities. The types of accidents considered in this study include fires, explosions, spills, tornadoes, criticalities, and equipment failures. The main focus of the program is developing an accident analysis handbook (AAH). We describe the contents of the AAH, which include descriptions of selected nuclear fuel cycle facilities, process unit operations, source-term development, and accident consequence analyses. Three computer codes designed to predict gas and material propagation through facility air cleaning systems are described; these address accidents involving fires (FIRAC), explosions (EXPAC), and tornadoes (TORAC). The handbook relies on many illustrative examples to show the analyst how to approach accident consequence assessments. We use the FIRAC code and a hypothetical fire scenario to illustrate the accident analysis capability.

  19. Recent Accomplishments and Future Directions in US Fusion Safety & Environmental Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    David A. Petti; Brad J. Merrill; Phillip Sharpe

    2006-07-01

    The US fusion program has long recognized that the safety and environmental (S&E) potential of fusion can be attained by prudent materials selection, judicious design choices, and integration of safety requirements into the design of the facility. To achieve this goal, S&E research is focused on understanding the behavior of the largest sources of radioactive and hazardous materials in a fusion facility, understanding how energy sources in a fusion facility could mobilize those materials, developing integrated state-of-the-art S&E computer codes and risk tools for safety assessment, and evaluating S&E issues associated with current fusion designs. In this paper, recent accomplishments are reviewed and future directions are outlined.

  20. Study of the mapping of Navier-Stokes algorithms onto multiple-instruction/multiple-data-stream computers

    NASA Technical Reports Server (NTRS)

    Eberhardt, D. S.; Baganoff, D.; Stevens, K.

    1984-01-01

    Implicit approximate-factored algorithms have certain properties that are suitable for parallel processing. A particular computational fluid dynamics (CFD) code, using this algorithm, is mapped onto a multiple-instruction/multiple-data-stream (MIMD) computer architecture. An explanation of this mapping procedure is presented, as well as some of the difficulties encountered when trying to run the code concurrently. Timing results are given for runs on the Ames Research Center's MIMD test facility, which consists of two VAX 11/780's with a common MA780 multi-ported memory. The timing results indicated speedups exceeding 1.9 for characteristic CFD runs.
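    A speedup of 1.9 on two processors implies that nearly all of the runtime was parallelized. Amdahl's law makes this concrete; the following is a generic illustration of that relationship, not an analysis from the report:

    ```python
    # Amdahl's law: relate parallel fraction, processor count, and speedup.
    # Generic illustration of why a 1.9x speedup on 2 CPUs is a strong result.
    def amdahl_speedup(parallel_fraction, n_processors):
        """Speedup predicted by Amdahl's law for a given parallel fraction."""
        serial = 1.0 - parallel_fraction
        return 1.0 / (serial + parallel_fraction / n_processors)

    def parallel_fraction_for(speedup, n_processors):
        """Invert Amdahl's law: fraction of work that must run in parallel."""
        return (1.0 - 1.0 / speedup) / (1.0 - 1.0 / n_processors)

    # A 1.9x speedup on 2 processors requires ~94.7% of the runtime
    # to be parallelizable:
    p = parallel_fraction_for(1.9, 2)
    print(f"parallel fraction: {p:.3f}")          # 0.947
    print(f"check: {amdahl_speedup(p, 2):.2f}x")  # 1.90x
    ```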

  1. Computational fluid dynamics applications at McDonnell Douglas

    NASA Technical Reports Server (NTRS)

    Hakkinen, R. J.

    1987-01-01

    Representative examples are presented of applications and development of advanced Computational Fluid Dynamics (CFD) codes for aerodynamic design at the McDonnell Douglas Corporation (MDC). Transonic potential and Euler codes, interactively coupled with boundary layer computation, and solutions of slender-layer Navier-Stokes approximation are applied to aircraft wing/body calculations. An optimization procedure using evolution theory is described in the context of transonic wing design. Euler methods are presented for analysis of hypersonic configurations, and helicopter rotors in hover and forward flight. Several of these projects were accepted for access to the Numerical Aerodynamic Simulation (NAS) facility at the NASA-Ames Research Center.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mann, Reinhold C.

    This is the first formal progress report issued by the ORNL Life Sciences Division. It covers the period from February 1997 through December 1998, which has been critical in the formation of our new division. The legacy of 50 years of excellence in biological research at ORNL has been an important driver for everyone in the division to do their part so that this new research division can realize the potential it has to make seminal contributions to the life sciences for years to come. This reporting period is characterized by intense assessment and planning efforts. They included thorough scrutiny of our strengths and weaknesses, analyses of our situation with respect to comparative research organizations, and identification of major thrust areas leading to core research efforts that take advantage of our special facilities and expertise. Our goal is to develop significant research and development (R&D) programs in selected important areas to which we can make significant contributions by combining our distinctive expertise and resources in the biological sciences with those in the physical, engineering, and computational sciences. Significant facilities in mouse genomics, mass spectrometry, neutron science, bioanalytical technologies, and high performance computing are critical to the success of our programs. Research and development efforts in the division are organized in six sections. These cluster into two broad areas of R&D: systems biology and technology applications. The systems biology part of the division encompasses our core biological research programs. It includes the Mammalian Genetics and Development Section, the Biochemistry and Biophysics Section, and the Computational Biosciences Section. The technology applications part of the division encompasses the Assessment Technology Section, the Environmental Technology Section, and the Toxicology and Risk Analysis Section. These sections are the stewards of the division's core competencies. The common mission of the division is to advance science and technology to understand complex biological systems and their relationship with human health and the environment.

  3. Multiple grid problems on concurrent-processing computers

    NASA Technical Reports Server (NTRS)

    Eberhardt, D. S.; Baganoff, D.

    1986-01-01

    Three computer codes were studied which make use of concurrent processing computer architectures in computational fluid dynamics (CFD). The three parallel codes were tested on a two processor multiple-instruction/multiple-data (MIMD) facility at NASA Ames Research Center, and are suggested for efficient parallel computations. The first code is a well-known program which makes use of the Beam and Warming, implicit, approximate factored algorithm. This study demonstrates the parallelism found in a well-known scheme and it achieved speedups exceeding 1.9 on the two processor MIMD test facility. The second code studied made use of an embedded grid scheme which is used to solve problems having complex geometries. The particular application for this study considered an airfoil/flap geometry in an incompressible flow. The scheme eliminates some of the inherent difficulties found in adapting approximate factorization techniques onto MIMD machines and allows the use of chaotic relaxation and asynchronous iteration techniques. The third code studied is an application of overset grids to a supersonic blunt body problem. The code addresses the difficulties encountered when using embedded grids on a compressible, and therefore nonlinear, problem. The complex numerical boundary system associated with overset grids is discussed and several boundary schemes are suggested. A boundary scheme based on the method of characteristics achieved the best results.

  4. Computational Simulations of the NASA Langley HyMETS Arc-Jet Facility

    NASA Technical Reports Server (NTRS)

    Brune, A. J.; Bruce, W. E., III; Glass, D. E.; Splinter, S. C.

    2017-01-01

    The Hypersonic Materials Environmental Test System (HyMETS) arc-jet facility located at the NASA Langley Research Center in Hampton, Virginia, is primarily used for the research, development, and evaluation of high-temperature thermal protection systems for hypersonic vehicles and reentry systems. In order to improve testing capabilities and knowledge of the test article environment, an effort is underway to computationally simulate the flow-field using computational fluid dynamics (CFD). A detailed three-dimensional model of the arc-jet nozzle and free-jet portion of the flow-field has been developed and compared to calibration probe Pitot pressure and stagnation-point heat flux for three test conditions at low, medium, and high enthalpy. The CFD model takes into account uniform pressure and non-uniform enthalpy profiles at the nozzle inlet as well as catalytic recombination efficiency effects at the probe surface. Comparison of the CFD results and test data indicates a catalytic recombination efficiency of about 10% at the copper surface of the heat-flux probe and a 2-3 kPa pressure drop from the arc heater bore, where the pressure is measured, to the plenum section upstream of the nozzle. With these assumptions, the CFD results are well within the uncertainty of the stagnation pressure and heat flux measurements. The conditions at the nozzle exit were also compared with radial and axial velocimetry. This simulation capability will be used to evaluate various three-dimensional models that are tested in the HyMETS facility. An end-to-end aerothermal and thermal simulation of HyMETS test articles will follow this work to provide a better understanding of the test environment and test results, and to aid in test planning. Additional flow-field diagnostic measurements will also be considered to improve the modeling capability.

  5. Optical studies in the holographic ground station

    NASA Technical Reports Server (NTRS)

    Workman, Gary L.

    1991-01-01

    The Holographic Ground System (HGS) Facility in rooms 22 & 123, Building 4708 has been developed to provide for ground based research in determining pre-flight parameters and analyzing the results from space experiments. The University of Alabama, Huntsville (UAH) has researched the analysis aspects of the HGS and reports their findings here. Some of the results presented here also occur in the Facility Operating Procedure (FOP), which contains instructions for power-up, operation, and power-down of the Fluid Experiment System (FES) Holographic Ground System (HGS) Test Facility for the purpose of optically recording fluid and/or crystal behavior in a test article during ground based testing through the construction of holograms and recording of videotape, as well as setup of support subsystems and the Automated Holography System (AHS) computer. (The alignment of the optical bench components, holographic reconstruction, and microscopy alignment sections were also included in the document for continuity, even though they are not used until after optical recording of the test article.) The HGS provides optical recording and monitoring during GCEL runs or development testing of potential FES flight hardware or software. This recording/monitoring can be via 70mm holographic film, standard videotape, or digitized images on computer disk. All optical bench functions necessary to construct holograms will be under the control of the AHS personal computer (PC). These include type of exposure, time intervals between exposures, exposure length, film frame identification, film advancement, film platen evacuation and repressurization, light source diffuser introduction, and control of realtime video monitoring. The completed sequence of hologram types (single exposure, diffuse double exposure, etc.) and their time of occurrence can be displayed, printed, or stored on floppy disk post-test for the user.

  6. Achieving production-level use of HEP software at the Argonne Leadership Computing Facility

    NASA Astrophysics Data System (ADS)

    Uram, T. D.; Childers, J. T.; LeCompte, T. J.; Papka, M. E.; Benjamin, D.

    2015-12-01

    HEP's demand for computing resources has grown beyond the capacity of the Grid, and these demands will accelerate with the higher energy and luminosity planned for Run II. Mira, the ten-petaFLOPS supercomputer at the Argonne Leadership Computing Facility, is a potentially significant compute resource for HEP research. Through an award of fifty million hours on Mira, we have delivered millions of events to LHC experiments by establishing the means of marshaling jobs through serial stages on local clusters, and parallel stages on Mira. We are running several HEP applications, including Alpgen, Pythia, Sherpa, and Geant4. Event generators, such as Sherpa, typically have a split workload: a small-scale integration phase, and a second, more scalable, event-generation phase. To accommodate this workload on Mira we have developed two Python-based Django applications, Balsam and ARGO. Balsam is a generalized scheduler interface which uses a plugin system for interacting with scheduler software such as HTCondor, Cobalt, and TORQUE. ARGO is a workflow manager that submits jobs to instances of Balsam. Through these mechanisms, the serial and parallel tasks within jobs are executed on the appropriate resources. This approach and its integration with the PanDA production system will be discussed.
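    The plugin pattern the record describes — a common interface hiding scheduler-specific commands behind interchangeable backends — can be sketched minimally. All class, method, and registry names here are hypothetical illustrations, not the actual Balsam API:

    ```python
    # Minimal sketch of a plugin-style batch-scheduler interface, in the
    # spirit of the Balsam design described above. Names are invented for
    # illustration; this is not the real Balsam code.
    from abc import ABC, abstractmethod

    class SchedulerPlugin(ABC):
        """Common interface each scheduler backend must implement."""

        @abstractmethod
        def submit(self, script_path: str) -> str:
            """Submit a job script; return the scheduler's job ID."""

        @abstractmethod
        def status(self, job_id: str) -> str:
            """Return a normalized state: QUEUED, RUNNING, or DONE."""

    class CobaltPlugin(SchedulerPlugin):
        def submit(self, script_path: str) -> str:
            # A real backend would shell out to the scheduler's submit
            # command; stubbed here so the sketch is self-contained.
            return "cobalt-12345"

        def status(self, job_id: str) -> str:
            return "QUEUED"

    # Workflow code selects a backend by name, never by concrete class:
    REGISTRY = {"cobalt": CobaltPlugin}

    def get_scheduler(name: str) -> SchedulerPlugin:
        return REGISTRY[name]()

    job_id = get_scheduler("cobalt").submit("run_sherpa.sh")
    print(job_id)  # cobalt-12345
    ```

    Adding support for another scheduler (HTCondor, TORQUE, etc.) then means registering one more subclass, leaving the workflow manager untouched.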

  7. Computer model to simulate testing at the National Transonic Facility

    NASA Technical Reports Server (NTRS)

    Mineck, Raymond E.; Owens, Lewis R., Jr.; Wahls, Richard A.; Hannon, Judith A.

    1995-01-01

    A computer model has been developed to simulate the processes involved in the operation of the National Transonic Facility (NTF), a large cryogenic wind tunnel at the Langley Research Center. The simulation was verified by comparing the simulated results with previously acquired data from three experimental wind tunnel test programs in the NTF. The comparisons suggest that the computer model simulates reasonably well the processes that determine the liquid nitrogen (LN2) consumption, electrical consumption, fan-on time, and the test time required to complete a test plan at the NTF. From these limited comparisons, it appears that the results from the simulation model are generally within about 10 percent of the actual NTF test results. The use of actual data acquisition times in the simulation produced better estimates of the LN2 usage, as expected. Additional comparisons are needed to refine the model constants. The model will typically produce optimistic results since the times and rates included in the model are typically the optimum values. Any deviation from the optimum values will lead to longer times or increased LN2 and electrical consumption for the proposed test plan. Computer code operating instructions and listings of sample input and output files have been included.

  8. NASA Johnson Space Center Usability Testing and Analysis Facility (UTAF) Overview

    NASA Technical Reports Server (NTRS)

    Whitmore, M.

    2004-01-01

    The Usability Testing and Analysis Facility (UTAF) is part of the Space Human Factors Laboratory at the NASA Johnson Space Center in Houston, Texas. The facility provides support to the Office of Biological and Physical Research, the Space Shuttle Program, the International Space Station Program, and other NASA organizations. In addition, there are ongoing collaborative research efforts with external businesses and universities. The UTAF provides human factors analysis, evaluation, and usability testing of crew interfaces for space applications. This includes computer displays and controls, workstation systems, and work environments. The UTAF has a unique mix of capabilities, with a staff experienced in both cognitive human factors and ergonomics. The current areas of focus are: human factors applications in emergency medical care and informatics; control and display technologies for electronic procedures and instructions; voice recognition in noisy environments; crew restraint design for unique microgravity workstations; and refinement of human factors processes. This presentation will provide an overview of ongoing activities, and will address how the projects will evolve to meet new space initiatives.

  10. Future Computer Requirements for Computational Aerodynamics

    NASA Technical Reports Server (NTRS)

    1978-01-01

    Recent advances in computational aerodynamics are discussed as well as motivations for and potential benefits of a National Aerodynamic Simulation Facility having the capability to solve fluid dynamic equations at speeds two to three orders of magnitude faster than presently possible with general computers. Two contracted efforts to define processor architectures for such a facility are summarized.

  11. Development of test methods for scale model simulation of aerial applications in the NASA Langley Vortex Research Facility. [agricultural aircraft

    NASA Technical Reports Server (NTRS)

    Jordan, F. L., Jr.

    1980-01-01

    As part of basic research to improve aerial applications technology, methods were developed at the Langley Vortex Research Facility to simulate and measure deposition patterns of aerially-applied sprays and granular materials by means of tests with small-scale models of agricultural aircraft and dynamically-scaled test particles. Interactions between the aircraft wake and the dispersed particles are being studied with the objective of modifying wake characteristics and dispersal techniques to increase swath width, improve deposition pattern uniformity, and minimize drift. The particle scaling analysis, test methods for particle dispersal from the model aircraft, visualization of particle trajectories, and measurement and computer analysis of test deposition patterns are described. An experimental validation of the scaling analysis and test results that indicate improved control of chemical drift by use of winglets are presented to demonstrate test methods.

  12. Advanced Group Support Systems and Facilities

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K. (Compiler); Malone, John B. (Compiler)

    1999-01-01

    The document contains the proceedings of the Workshop on Advanced Group Support Systems and Facilities held at NASA Langley Research Center, Hampton, Virginia, July 19-20, 1999. The workshop was jointly sponsored by the University of Virginia Center for Advanced Computational Technology and NASA. Workshop attendees came from NASA, other government agencies, industry, and universities. The objectives of the workshop were to assess the status of advanced group support systems and to identify the potential of these systems for use in future collaborative distributed design and synthesis environments. The presentations covered the current status and effectiveness of different group support systems.

  13. Large blast and thermal simulator advanced concept driver design by computational fluid dynamics. Final report, 1987-1989

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Opalka, K.O.

    1989-08-01

    The construction of a large test facility has been proposed for simulating the blast and thermal environment resulting from nuclear explosions. This facility would be used to test the survivability and vulnerability of military equipment such as trucks, tanks, and helicopters in a simulated thermal and blast environment, and to perform research into nuclear blast phenomenology. The proposed advanced design concepts, heating of the driver gas and fast-acting throat valves for wave shaping, are described, and the results of CFD studies to advance these new technical concepts for simulating decaying blast waves are reported.

  14. Astronomy and astrophysics for the 1980's, volume 1

    NASA Technical Reports Server (NTRS)

    1982-01-01

    The programs recommended address the most significant questions that confront contemporary astronomy and fall into three general categories: prerequisites for research initiatives, including instrumentation and detectors, theory and data analysis, computational facilities, laboratory astrophysics, and technical support at ground-based observatories; programs including an Advanced X-ray Astrophysics Facility, a Very-Long Baseline Array, a Technology Telescope and a Large Deployable Reflector; and programs for study and development, including X-ray observatories in space, instruments for the detection of gravitational waves from astronomical objects, and long duration spaceflights of infrared telescopes. Estimated costs of these programs are provided.

  15. An overview of HyFIE Technical Research Project: cross-testing in main European hypersonic wind tunnels on EXPERT body

    NASA Astrophysics Data System (ADS)

    Brazier, Jean-Philippe; Martinez Schramm, Jan; Paris, Sébastien; Gawehn, Thomas; Reimann, Bodo

    2016-09-01

    HyFIE project aimed at improving the measurement techniques in hypersonic wind tunnels and comparing the experimental data provided by four major European facilities: DLR HEG and H2K, ONERA F4 and VKI Longshot. A common geometry of EXPERT body was chosen and four different models were used. A large amount of experimental data was collected and compared with the results of numerical simulations. Collapsing all the measured values showed a good agreement between the different facilities, as well as between experimental and computed data.

  17. CRYSNET manual. Informal report. [Hardware and software of crystallographic computing network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None,

    1976-07-01

    This manual describes the hardware and software which together make up the crystallographic computing network (CRYSNET). The manual is intended as a users' guide and also provides general information for persons without any experience with the system. CRYSNET is a network of intelligent remote graphics terminals that are used to communicate with the CDC Cyber 70/76 computing system at the Brookhaven National Laboratory (BNL) Central Scientific Computing Facility. Terminals are in active use by four research groups in the field of crystallography. A protein data bank has been established at BNL to store in machine-readable form atomic coordinates and other crystallographic data for macromolecules. The bank currently includes data for more than 20 proteins. This structural information can be accessed at BNL directly by the CRYSNET graphics terminals. More than two years of experience has been accumulated with CRYSNET. During this period, it has been demonstrated that the terminals, which provide access to a large, fast third-generation computer, plus stand-alone interactive graphics capability, are useful for computations in crystallography, and in a variety of other applications as well. The terminal hardware, the actual operations of the terminals, and the operations of the BNL Central Facility are described in some detail, and documentation of the terminal and central-site software is given. (RWR)

  18. Summary of the Tandem Cylinder Solutions from the Benchmark Problems for Airframe Noise Computations-I Workshop

    NASA Technical Reports Server (NTRS)

    Lockard, David P.

    2011-01-01

    Fifteen submissions in the tandem cylinders category of the First Workshop on Benchmark Problems for Airframe Noise Computations are summarized. Although the geometry is relatively simple, the problem involves complex physics. Researchers employed various block-structured, overset, unstructured, and embedded Cartesian grid techniques and considerable computational resources to simulate the flow. The solutions are compared against each other and experimental data from two facilities. Overall, the simulations captured the gross features of the flow, but resolving all the details that would be necessary to compute the noise remains challenging. In particular, how best to simulate the effects of the experimental transition strip, and the associated high Reynolds number effects, was unclear. Furthermore, capturing the spanwise variation proved difficult.

  19. Real time computer data system for the 40 x 80 ft wind tunnel facility at Ames Research Center

    NASA Technical Reports Server (NTRS)

    Cambra, J. M.; Tolari, G. P.

    1974-01-01

    The wind tunnel realtime computer system is a distributed data gathering system that features a master computer subsystem, a high speed data gathering subsystem, a quick look dynamic analysis and vibration control subsystem, an analog recording back-up subsystem, a pulse code modulation (PCM) on-board subsystem, a communications subsystem, and a transducer excitation and calibration subsystem. The subsystems are married to the master computer through an executive software system and standard hardware and FORTRAN software interfaces. The executive software system has four basic software routines. These are the playback, setup, record, and monitor routines. The standard hardware interfaces along with the software interfaces provide the system with the capability of adapting to new environments.

  20. Advances and trends in the development of computational models for tires

    NASA Technical Reports Server (NTRS)

    Noor, A. K.; Tanner, J. A.

    1985-01-01

    Status and some recent developments of computational models for tires are summarized. Discussion focuses on a number of aspects of tire modeling and analysis including: tire materials and their characterization; evolution of tire models; characteristics of effective finite element models for analyzing tires; analysis needs for tires; and impact of the advances made in finite element technology, computational algorithms, and new computing systems on tire modeling and analysis. An initial set of benchmark problems has been proposed in concert with the U.S. tire industry. Extensive sets of experimental data will be collected for these problems and used for evaluating and validating different tire models. Also, the new Aircraft Landing Dynamics Facility (ALDF) at NASA Langley Research Center is described.

  1. Enabling Earth Science: The Facilities and People of the NCCS

    NASA Technical Reports Server (NTRS)

    2002-01-01

    The NCCS's mass data storage system allows scientists to store and manage the vast amounts of data generated by these computations, and its high-speed network connections allow the data to be accessed quickly from the NCCS archives. Some NCCS users perform studies that are directly related to their ability to run computationally expensive and data-intensive simulations. Because the number and type of questions scientists research often are limited by computing power, the NCCS continually pursues the latest technologies in computing, mass storage, and networking. Just as important as the processors, tapes, and routers of the NCCS are the personnel who administer this hardware, create and manage accounts, maintain security, and assist the scientists, often working one on one with them.

  2. Facilities Management via Computer: Information at Your Fingertips.

    ERIC Educational Resources Information Center

    Hensey, Susan

    1996-01-01

    Computer-aided facilities management is a software program consisting of a relational database of facility information--such as occupancy, usage, student counts, etc.--attached to or merged with computerized floor plans. This program can integrate data with drawings, thereby allowing the development of "what if" scenarios. (MLF)
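    The "what if" capability described above amounts to querying a relational facility database under hypothetical changes. A minimal sketch using SQLite, with an invented schema (table and column names are illustrative only):

    ```python
    # Minimal computer-aided facilities-management sketch: a relational
    # table of rooms supports "what if" questions such as "which rooms
    # could absorb 10 more students?" Schema and data are invented.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE rooms (name TEXT, capacity INTEGER, occupancy INTEGER)"
    )
    conn.executemany(
        "INSERT INTO rooms VALUES (?, ?, ?)",
        [("101", 30, 28), ("102", 45, 20), ("Lab A", 24, 24)],
    )

    # What if enrollment in every room grew by 10 students?
    growth = 10
    rows = conn.execute(
        "SELECT name FROM rooms WHERE occupancy + ? <= capacity ORDER BY name",
        (growth,),
    ).fetchall()
    print([name for (name,) in rows])  # ['102']
    ```

    In a full CAFM package the same records would also be linked to computerized floor plans, so the query result can be rendered spatially rather than as a list.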

  3. Hera: High Energy Astronomical Data Analysis via the Internet

    NASA Astrophysics Data System (ADS)

    Valencic, Lynne A.; Chai, P.; Pence, W.; Snowden, S.

    2011-09-01

    The HEASARC at NASA Goddard Space Flight Center has developed Hera, a data processing facility for analyzing high energy astronomical data over the internet. Hera provides all the software packages, disk space, and computing resources needed to do general processing of and advanced research on publicly available data from High Energy Astrophysics missions. The data and data products are kept on a server at GSFC and can be downloaded to a user's local machine. This service is provided for free to students, educators, and researchers for educational and research purposes.

  4. Aeronautical engineering: A continuing bibliography with indexes (supplement 280)

    NASA Technical Reports Server (NTRS)

    1992-01-01

    This bibliography lists 647 reports, articles, and other documents introduced into the NASA scientific and technical information system in June, 1991. Subject coverage includes: aerodynamics, air transportation safety, aircraft communication and navigation, aircraft design and performance, aircraft instrumentation, aircraft propulsion, aircraft stability and control, research facilities, astronautics, chemistry and materials, engineering, geosciences, computer sciences, physics, and social sciences.

  5. Report: EPA Could Improve Physical Access and Service Continuity/Contingency Controls for Financial and Mixed-Financial Systems Located at its Research Triangle Park Campus

    EPA Pesticide Factsheets

    Report #2006-P-00005, December 14, 2005. Controls needed to be improved in areas such as visitor access to facilities, use of contractor access badges, and general physical access to the NCC, computer rooms outside the NCC, and media storage rooms.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laros, James H.; Grant, Ryan; Levenhagen, Michael J.

    Measuring and controlling the power and energy consumption of high performance computing systems by various components in the software stack is an active research area. Implementations in lower level software layers are beginning to emerge in some production systems, which is very welcome. To be most effective, a portable interface to measurement and control features would significantly facilitate participation by all levels of the software stack. We present a proposal for a standard power Application Programming Interface (API) that endeavors to cover the entire software space, from generic hardware interfaces to the input from the computer facility manager.
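    A portable measurement-and-control interface of the kind proposed can be sketched abstractly. This is a generic illustration of the idea; the published Power API specification defines its own objects, attributes, and calls, which differ from the names invented here:

    ```python
    # Generic sketch of a portable power measurement/control interface,
    # in the spirit of the standard power API proposed above. All names
    # are illustrative, not the published specification.
    from abc import ABC, abstractmethod

    class PowerInterface(ABC):
        @abstractmethod
        def read_power_watts(self, component: str) -> float:
            """Current power draw of a hardware component."""

        @abstractmethod
        def set_power_cap_watts(self, component: str, cap: float) -> None:
            """Request a power cap; real hardware may clamp the value."""

    class FakeNodePower(PowerInterface):
        """Stub backend standing in for vendor-specific power counters,
        so schedulers and tools above it can be written portably."""

        def __init__(self):
            self.caps = {}

        def read_power_watts(self, component: str) -> float:
            return {"cpu": 95.0, "memory": 18.5}[component]

        def set_power_cap_watts(self, component: str, cap: float) -> None:
            self.caps[component] = cap

    node = FakeNodePower()
    node.set_power_cap_watts("cpu", 80.0)
    print(node.read_power_watts("cpu"), node.caps["cpu"])  # 95.0 80.0
    ```

    The point of such a layered interface is that a facility manager's tools, a batch scheduler, and an application library can all target the same calls while vendors supply the backends.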

  7. [Elderlies in street situation or social vulnerability: facilities and difficulties in the use of computational tools].

    PubMed

    Frias, Marcos Antonio da Eira; Peres, Heloisa Helena Ciqueto; Pereira, Valclei Aparecida Gandolpho; Negreiros, Maria Célia de; Paranhos, Wana Yeda; Leite, Maria Madalena Januário

    2014-01-01

    This study aimed to identify the facilities and difficulties encountered by elderly people living on the streets or in social vulnerability in using computers or the internet. It is an exploratory qualitative study in which five elderly people, attended at a non-governmental organization located in the city of São Paulo, participated. The discourses were analyzed using the content analysis technique and revealed, as facilities, among others, clarifying doubts with the monitors, the stimulus for new discoveries coupled with proactivity and curiosity, and developing new skills. The difficulties mentioned were related to physical or cognitive issues, lack of an instructor, and lack of knowledge about interacting with the machine. Studies focusing on the elderly population living on the streets or in social vulnerability may contribute evidence to guide the formulation of public policies for this population.

  8. Computational Tools and Facilities for the Next-Generation Analysis and Design Environment

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K. (Compiler); Malone, John B. (Compiler)

    1997-01-01

    This document contains presentations from the joint UVA/NASA Workshop on Computational Tools and Facilities for the Next-Generation Analysis and Design Environment held at the Virginia Consortium of Engineering and Science Universities in Hampton, Virginia on September 17-18, 1996. The presentations focused on the computational tools and facilities for analysis and design of engineering systems, including, real-time simulations, immersive systems, collaborative engineering environment, Web-based tools and interactive media for technical training. Workshop attendees represented NASA, commercial software developers, the aerospace industry, government labs, and academia. The workshop objectives were to assess the level of maturity of a number of computational tools and facilities and their potential for application to the next-generation integrated design environment.

  9. Material Protection, Accounting, and Control Technologies (MPACT) Advanced Integration Roadmap

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Mike; Cipiti, Ben; Demuth, Scott Francis

    2017-01-30

    The development of sustainable advanced nuclear fuel cycles is a long-term goal of the Office of Nuclear Energy’s (DOE-NE) Fuel Cycle Technologies program. The Material Protection, Accounting, and Control Technologies (MPACT) campaign is supporting research and development (R&D) of advanced instrumentation, analysis tools, and integration methodologies to meet this goal (Miller, 2015). This advanced R&D is intended to facilitate safeguards and security by design of fuel cycle facilities. The lab-scale demonstration of a virtual facility, a distributed test bed that connects the individual tools being developed at National Laboratories and university research establishments, is a key program milestone for 2020. These tools will consist of instrumentation and devices as well as computer software for modeling, simulation, and integration.

  10. Mach 0.3 Burner Rig Facility at the NASA Glenn Materials Research Laboratory

    NASA Technical Reports Server (NTRS)

    Fox, Dennis S.; Miller, Robert A.; Zhu, Dongming; Perez, Michael; Cuy, Michael D.; Robinson, R. Craig

    2011-01-01

    This Technical Memorandum presents the current capabilities of the state-of-the-art Mach 0.3 Burner Rig Facility. It is used for materials research including oxidation, corrosion, erosion and impact. Consisting of seven computer controlled jet-fueled combustors in individual test cells, these relatively small rigs burn just 2 to 3 gal of jet fuel per hour. The rigs are used as an efficient means of subjecting potential aircraft engine/airframe advanced materials to the high temperatures, high velocities and thermal cycling closely approximating actual operating environments. Materials of various geometries and compositions can be evaluated at temperatures from 700 to 2400 F. Tests are conducted not only on bare superalloys and ceramics, but also to study the behavior and durability of protective coatings applied to those materials.

  11. Material Protection, Accounting, and Control Technologies (MPACT) Advanced Integration Roadmap

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Durkee, Joe W.; Cipiti, Ben; Demuth, Scott Francis

    The development of sustainable advanced nuclear fuel cycles is a long-term goal of the Office of Nuclear Energy’s (DOE-NE) Fuel Cycle Technologies program. The Material Protection, Accounting, and Control Technologies (MPACT) campaign is supporting research and development (R&D) of advanced instrumentation, analysis tools, and integration methodologies to meet this goal (Miller, 2015). This advanced R&D is intended to facilitate safeguards and security by design of fuel cycle facilities. The lab-scale demonstration of a virtual facility, a distributed test bed that connects the individual tools being developed at National Laboratories and university research establishments, is a key program milestone for 2020. These tools will consist of instrumentation and devices as well as computer software for modeling, simulation, and integration.

  12. Future aerospace ground test facility requirements for the Arnold Engineering Development Center

    NASA Technical Reports Server (NTRS)

    Kirchner, Mark E.; Baron, Judson R.; Bogdonoff, Seymour M.; Carter, Donald I.; Couch, Lana M.; Fanning, Arthur E.; Heiser, William H.; Koff, Bernard L.; Melnik, Robert E.; Mercer, Stephen C.

    1992-01-01

    Arnold Engineering Development Center (AEDC) was conceived at the close of World War II, when major new developments in flight technology were presaged by new aerodynamic and propulsion concepts. During the past 40 years, AEDC has played a significant part in the development of many aerospace systems. The original plans were extended through the years by some additional facilities, particularly in the area of propulsion testing. AEDC now has undertaken development of a master plan in an attempt to project requirements and to plan for ground test and computational facilities over the coming 20 to 30 years. This report was prepared in response to an AEDC request that the National Research Council (NRC) assemble a committee to prepare guidance for planning and modernizing AEDC facilities for the development and testing of future classes of aerospace systems as envisaged by the U.S. Air Force.

  13. The change in critical technologies for computational physics

    NASA Technical Reports Server (NTRS)

    Watson, Val

    1990-01-01

    It is noted that the types of technology required for computational physics are changing as the field matures. Emphasis has shifted from computer technology to algorithm technology and, finally, to visual analysis technology as areas of critical research for this field. High-performance graphical workstations tied to a supercomputer with high-speed communications, along with the development of specially tailored visualization software, have enabled analysis of highly complex fluid-dynamics simulations. Particular reference is made here to the development of visual analysis tools at NASA's Numerical Aerodynamic Simulation Facility. The next technology which this field requires is one that would eliminate visual clutter by extracting the key features of physics simulations in order to create displays that clearly portray those features. Research in the tuning of visual displays to human cognitive abilities is proposed. The immediate transfer of technology to all levels of computers, specifically the inclusion of visualization primitives in basic software developments for all workstations and PCs, is recommended.

  14. 2014 Annual Report - Argonne Leadership Computing Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collins, James R.; Papka, Michael E.; Cerny, Beth A.

    The Argonne Leadership Computing Facility provides supercomputing capabilities to the scientific and engineering community to advance fundamental discovery and understanding in a broad range of disciplines.

  15. 2015 Annual Report - Argonne Leadership Computing Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collins, James R.; Papka, Michael E.; Cerny, Beth A.

    The Argonne Leadership Computing Facility provides supercomputing capabilities to the scientific and engineering community to advance fundamental discovery and understanding in a broad range of disciplines.

  16. Feasibility of MHD submarine propulsion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doss, E.D.; Sikes, W.C.

    1992-09-01

    This report describes the work performed during Phase 1 and Phase 2 of the collaborative research program established between Argonne National Laboratory (ANL) and Newport News Shipbuilding and Dry Dock Company (NNS). Phase 1 of the program focused on the development of computer models for magnetohydrodynamic (MHD) propulsion. Phase 2 focused on the experimental validation of the thruster performance models and the identification, through testing, of any phenomena which may impact the attractiveness of this propulsion system for shipboard applications. The report discusses in detail the work performed in Phase 2 of the program. In Phase 2, a two-Tesla test facility was designed, built, and operated. The facility test loop, its components, and their design are presented. The test matrix and its rationale are discussed. Representative experimental results of the test program are presented and are compared to computer model predictions. In general, the results of the tests and their comparison with the predictions indicate that the phenomena affecting the performance of MHD seawater thrusters are well understood and can be accurately predicted with the developed thruster computer models.
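The kind of performance relation such thruster models predict can be illustrated with an ideal crossed-field MHD channel. This is a textbook sketch assuming uniform fields and neglecting end effects and friction; the numbers are illustrative, and it is not ANL's model:

```python
# Ideal crossed-field MHD thruster relations (uniform fields, no end effects
# or friction) -- a textbook illustration of the quantities such performance
# models predict, not ANL's code. All numerical values are illustrative.
def mhd_thruster(sigma, E, B, u, volume):
    """sigma: conductivity [S/m]; E: electric field [V/m];
    B: flux density [T]; u: flow speed [m/s]; volume: channel volume [m^3]."""
    J = sigma * (E - u * B)        # current density: Ohm's law in a moving fluid
    thrust = J * B * volume        # Lorentz body force integrated over the channel
    electrical_power = J * E * volume
    efficiency = (thrust * u) / electrical_power   # ideal efficiency = u*B/E
    return thrust, efficiency

# Seawater (sigma ~ 5 S/m) in a 2 T field, with illustrative channel values:
thrust, eta = mhd_thruster(sigma=5.0, E=100.0, B=2.0, u=10.0, volume=1.0)
```

The ideal efficiency u*B/E shows why seawater MHD propulsion demands very high magnetic fields: with seawater's low conductivity, raising thrust by raising E drives the efficiency down.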

  17. Data Crosscutting Requirements Review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kleese van Dam, Kerstin; Shoshani, Arie; Plata, Charity

    2013-04-01

    In April 2013, a diverse group of researchers from the U.S. Department of Energy (DOE) scientific community assembled to assess data requirements associated with DOE-sponsored scientific facilities and large-scale experiments. Participants in the review included facilities staff, program managers, and scientific experts from the offices of Basic Energy Sciences, Biological and Environmental Research, High Energy Physics, and Advanced Scientific Computing Research. As part of the meeting, review participants discussed key issues associated with three distinct aspects of the data challenge: 1) processing, 2) management, and 3) analysis. These discussions identified commonalities and differences among the needs of varied scientific communities. They also helped to articulate gaps between current approaches and future needs, as well as the research advances that will be required to close these gaps. Moreover, the review provided a rare opportunity for experts from across the Office of Science to learn about their collective expertise, challenges, and opportunities. The "Data Crosscutting Requirements Review" generated specific findings and recommendations for addressing large-scale data crosscutting requirements.

  18. Computer Operating System Maintenance.

    DTIC Science & Technology

    1982-06-01

    The Computer Management Information Facility (CMIF) system was developed by Rapp Systems to fulfill the need at the CRF to record and report on ... computer center resource usage and utilization. The foundation of the CMIF system is a System 2000 data base (CRFMGMT) which stores and permits access

  19. Turbine Internal and Film Cooling Modeling For 3D Navier-Stokes Codes

    NASA Technical Reports Server (NTRS)

    DeWitt, Kenneth; Garg, Vijay; Ameri, Ali

    2005-01-01

    The aim of this research project is to make use of NASA Glenn on-site computational facilities in order to develop, validate, and apply aerodynamic, heat transfer, and turbine cooling models for use in advanced 3D Navier-Stokes Computational Fluid Dynamics (CFD) codes such as the Glenn-HT code. Specific areas of effort include: application of the Glenn-HT code to specific configurations made available under the Turbine Based Combined Cycle (TBCC) and Ultra Efficient Engine Technology (UEET) projects, and validating the use of a multi-block code for the time-accurate computation of the detailed flow and heat transfer of cooled turbine airfoils. The goal of the current research is to improve the predictive ability of the Glenn-HT code. This will enable one to design more efficient turbine components for both aviation and power generation. The models will be tested against specific configurations provided by NASA Glenn.

  20. Space plasma branch at NRL

    NASA Astrophysics Data System (ADS)

    The Naval Research Laboratory (Washington, D.C.) formed the Space Plasma Branch within its Plasma Physics Division on July 1. Vithal Patel, former Program Director of Magnetospheric Physics, National Science Foundation, also joined NRL on the same date as Associate Superintendent of the Plasma Physics Division. Barret Ripin is head of the newly organized branch. The Space Plasma branch will do basic and applied space plasma research using a multidisciplinary approach. It consolidates traditional rocket and satellite space experiments, space plasma theory and computation, with laboratory space-related experiments. About 40 research scientists, postdoctoral fellows, engineers, and technicians are divided among its five sections. The Theory and Computation sections are led by Joseph Huba and Joel Fedder, the Space Experiments section is led by Paul Rodriguez, and the Pharos Laser Facility and Laser Experiments sections are headed by Charles Manka and Jacob Grun.

  1. Large Scale Computing and Storage Requirements for High Energy Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerber, Richard A.; Wasserman, Harvey

    2010-11-24

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users' needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years. The report includes a section that describes efforts already underway or planned at NERSC that address requirements collected at the workshop. NERSC has many initiatives in progress that address key workshop findings and are aligned with NERSC's strategic plans.

  2. Parents’ Perceived Barriers to Accessing Sports and Recreation Facilities in Ontario, Canada: Exploring the Relationships between Income, Neighbourhood Deprivation, and Community

    PubMed Central

    Jarvis, Jocelyn W.

    2017-01-01

    Sports and recreation facilities provide places where children can be physically active. Previous research has shown that availability is often worse in lower-socioeconomic status (SES) areas, yet others have found inverse relationships, no relationships, or mixed findings. Since children’s health behaviours are influenced by their parents, it is important to understand parents’ perceived barriers to accessing sports and recreation facilities. Data from computer assisted telephone interviews with parents living in Ontario, Canada were merged via postal codes with neighbourhood deprivation data. Multivariable logistic regression modeling was used to estimate the likelihood that parents reported barriers to accessing local sports and recreation facilities. Parents with lower household incomes were more likely to report barriers to access. For each unit increase in deprivation score (i.e., more deprived), the likelihood of reporting a barrier increased 16% (95% CI: 1.04, 1.28). For parents, the relationships between household income, neighbourhood-level deprivation, and barriers are complex. Understanding these relationships is important for research, policy and planning, as parental barriers to opportunities for physical activity have implications for child health behaviours, and ultimately childhood overweight and obesity. PMID:29065524

  3. Parents' Perceived Barriers to Accessing Sports and Recreation Facilities in Ontario, Canada: Exploring the Relationships between Income, Neighbourhood Deprivation, and Community.

    PubMed

    Harrington, Daniel W; Jarvis, Jocelyn W; Manson, Heather

    2017-10-23

    Sports and recreation facilities provide places where children can be physically active. Previous research has shown that availability is often worse in lower-socioeconomic status (SES) areas, yet others have found inverse relationships, no relationships, or mixed findings. Since children's health behaviours are influenced by their parents, it is important to understand parents' perceived barriers to accessing sports and recreation facilities. Data from computer assisted telephone interviews with parents living in Ontario, Canada were merged via postal codes with neighbourhood deprivation data. Multivariable logistic regression modeling was used to estimate the likelihood that parents reported barriers to accessing local sports and recreation facilities. Parents with lower household incomes were more likely to report barriers to access. For each unit increase in deprivation score (i.e., more deprived), the likelihood of reporting a barrier increased 16% (95% CI: 1.04, 1.28). For parents, the relationships between household income, neighbourhood-level deprivation, and barriers are complex. Understanding these relationships is important for research, policy and planning, as parental barriers to opportunities for physical activity have implications for child health behaviours, and ultimately childhood overweight and obesity.
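The reported effect (a 16% increase in the likelihood of a barrier per unit of deprivation score) is the exponentiated logistic-regression coefficient. A minimal sketch of the arithmetic, with `beta` and `se` chosen for illustration rather than taken from the study's fitted model:

```python
# How a "16% increase per unit" with a 95% CI arises from a logistic-regression
# coefficient: OR = exp(beta), CI = exp(beta +/- 1.96*SE). The beta and SE
# below are chosen for illustration, not taken from the study.
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Return (odds ratio, CI lower bound, CI upper bound)."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

beta = math.log(1.16)   # coefficient implying OR = 1.16 per unit of deprivation
se = 0.05               # illustrative standard error
or_, lo, hi = odds_ratio_ci(beta, se)
assert round(or_, 2) == 1.16 and lo < or_ < hi
```

A CI for the odds ratio excludes 1.0 exactly when the CI for the coefficient excludes 0, which is why the interval (1.04, 1.28) indicates a statistically significant association.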

  4. Wayfinding in Healthcare Facilities: Contributions from Environmental Psychology

    PubMed Central

    Devlin, Ann Sloan

    2014-01-01

    The ability to successfully navigate in healthcare facilities is an important goal for patients, visitors, and staff. Despite the fundamental nature of such behavior, it is not infrequent for planners to consider wayfinding only after the fact, once the building or building complex is complete. This review argues that more recognition is needed for the pivotal role of wayfinding in healthcare facilities. First, to provide context, the review presents a brief overview of the relationship between environmental psychology and healthcare facility design. Then, the core of the article covers advances in wayfinding research with an emphasis on healthcare environments, including the roles of plan configuration and manifest cues, technology, and user characteristics. Plan configuration and manifest cues, which appeared early on in wayfinding research, continue to play a role in wayfinding success and should inform design decisions. Such considerations are joined by emerging technologies (e.g., mobile applications, virtual reality, and computational models of wayfinding) as a way to both enhance our theoretical knowledge of wayfinding and advance its applications for users. Among the users discussed here are those with cognitive and/or visual challenges (e.g., Down syndrome, age-related decrements such as dementia, and limitations of vision). In addition, research on the role of cross-cultural comprehension and the effort to develop a system of universal healthcare symbols is included. The article concludes with a summary of the status of these advances and directions for future research. PMID:25431446

  5. HYDRA, a new tool for mechanical testing

    NASA Technical Reports Server (NTRS)

    Brinkmann, P. W.

    1994-01-01

    The introduction outlines the verification concept for programs of the European Space Agency (ESA). The role of the Agency in coordinating the activities of major European space test centers is summarized. Major test facilities of the environmental test center at ESTEC, the Space Research and Technology Center of ESA, are shown and their specific characteristics are highlighted with special emphasis on the 6-degree-of-freedom (6-DOF) hydraulic shaker. The specified performance characteristics for sine and transient tests are presented. Results of single-axis hardware tests and 6-DOF computer simulations are included. Efforts employed to protect payloads against accidental damage in case of malfunctions of the facility are listed. Finally the operational advantages of the facility, as well as the possible use of the HYDRA control system design for future applications are indicated.

  6. System Security Authorization Agreement (SSAA) for the WIRE Archive and Research Facility

    NASA Technical Reports Server (NTRS)

    2002-01-01

    The Wide-Field Infrared Explorer (WIRE) Archive and Research Facility (WARF) is operated and maintained by the Department of Physics, USAF Academy. The lab is located in Fairchild Hall, 2354 Fairchild Dr., Suite 2A103, USAF Academy, CO 80840. The WARF will be used for research and education in support of the NASA Wide Field Infrared Explorer (WIRE) satellite, and for related high-precision photometry missions and activities. The WARF will also contain the WIRE preliminary and final archives prior to their delivery to the National Space Science Data Center (NSSDC). The WARF consists of a suite of equipment purchased under several NASA grants in support of WIRE research. The core system consists of a Red Hat Linux workstation with twin 933 MHz PIII processors, 1 GB of RAM, 133 GB of hard disk space, and DAT and DLT tape drives. The WARF is also supported by several additional networked Linux workstations. Only one of these (an older 450 MHz PIII computer running Red Hat Linux) is currently running, but the addition of several more is expected over the next year. In addition, a printer will soon be added. The WARF will serve as the primary research facility for the analysis and archiving of data from the WIRE satellite, together with limited quantities of other high-precision astronomical photometry data from both ground- and space-based facilities. However, the archive to be created here will not be the final archive; rather, the archive will be duplicated at the NSSDC and public access to the data will generally take place through that site.

  7. Development and application of structural dynamics analysis capabilities

    NASA Technical Reports Server (NTRS)

    Heinemann, Klaus W.; Hozaki, Shig

    1994-01-01

    Extensive research activities were performed in the area of multidisciplinary modeling and simulation of aerospace vehicles that are relevant to NASA Dryden Flight Research Facility. The efforts involved theoretical development, computer coding, and debugging of the STARS code. New solution procedures were developed in such areas as structures, CFD, and graphics, among others. Furthermore, systems-oriented codes were developed for rendering the code truly multidisciplinary and rather automated in nature. Also, work was performed in pre- and post-processing of engineering analysis data.

  8. Advanced instrumentation for aeronautical propulsion research

    NASA Technical Reports Server (NTRS)

    Hartmann, M. J.

    1986-01-01

    The development and use of advanced instrumentation and measurement systems are key to extending the understanding of the physical phenomena that limit the advancement of aeropropulsion systems. The data collected by using these systems are necessary to verify numerical models and to increase the technologists' intuition into the physical phenomena. The systems must be versatile enough to allow their use with older technology measurement systems, with computer-based data reduction systems, and with existing test facilities. Researchers in all aeropropulsion fields contribute to the development of these systems.

  9. Clinical Physiologic Research Instrumentation: An Approach Using Modular Elements and Distributed Processing

    PubMed Central

    Hagen, R. W.; Ambos, H. D.; Browder, M. W.; Roloff, W. R.; Thomas, L. J.

    1979-01-01

    The Clinical Physiologic Research System (CPRS) developed from our experience in applying computers to medical instrumentation problems. This experience revealed a set of applications with a commonality in data acquisition, analysis, input/output, and control needs that could be met by a portable system. The CPRS demonstrates a practical methodology for integrating commercial instruments with distributed modular elements of local design in order to make facile responses to changing instrumentation needs in clinical environments.

  10. Unobtrusive monitoring of computer interactions to detect cognitive status in elders.

    PubMed

    Jimison, Holly; Pavel, Misha; McKanna, James; Pavel, Jesse

    2004-09-01

    The U.S. has experienced a rapid growth in the use of computers by elders. E-mail, Web browsing, and computer games are among the most common routine activities for this group of users. In this paper, we describe techniques for unobtrusively monitoring naturally occurring computer interactions to detect sustained changes in cognitive performance. Researchers have demonstrated the importance of the early detection of cognitive decline. Users over the age of 75 are at risk for medically related cognitive problems and confusion, and early detection allows for more effective clinical intervention. In this paper, we present algorithms for inferring a user's cognitive performance using monitoring data from computer games and psychomotor measurements associated with keyboard entry and mouse movement. The inferences are then used to classify significant performance changes, and additionally, to adapt computer interfaces with tailored hints and assistance when needed. These methods were tested in a group of elders in a residential facility.
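The inference idea (flagging sustained shifts in psychomotor measurements against a personal baseline) can be sketched with a simple z-test on inter-keystroke intervals. The feature, window sizes, and threshold here are illustrative assumptions, not the authors' published algorithm:

```python
# Sketch of the monitoring idea: compare a recent window of a psychomotor
# feature (here, inter-keystroke interval in ms) against a personal baseline
# with a z-test on the recent mean. Window sizes and the threshold are
# illustrative assumptions, not the authors' published algorithm.
from statistics import mean, stdev

def sustained_change(baseline, recent, z_thresh=2.0):
    """True if the recent mean drifts significantly from the baseline."""
    mu, sd = mean(baseline), stdev(baseline)
    z = (mean(recent) - mu) / (sd / len(recent) ** 0.5)
    return abs(z) > z_thresh

baseline = [180, 175, 185, 190, 178, 182, 188, 176]          # typical intervals
assert sustained_change(baseline, [230, 240, 225, 235])      # markedly slower
assert not sustained_change(baseline, [181, 179, 184, 183])  # within normal range
```

The same comparison applies to any of the monitored signals (game scores, mouse trajectories), and the per-user baseline is what makes the monitoring unobtrusive: no test is administered, only routine interactions are observed.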

  11. Tomographic methods in flow diagnostics

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J.

    1993-01-01

    This report presents a viewpoint of tomography that should be well adapted to currently available optical measurement technology as well as the needs of computational and experimental fluid dynamicists. The goals in mind are to record data with the fastest optical array sensors; process the data with the fastest parallel processing technology available for small computers; and generate results for both experimental and theoretical data. An in-depth example treats interferometric data as it might be recorded in an aeronautics test facility, but the results are applicable whenever fluid properties are to be measured or applied from projections of those properties. The paper discusses both computed and neural-net calibration tomography. The report also contains an overview of key definitions and computational methods, key references, computational problems such as ill-posedness, artifacts, and missing data, and some possible and current research topics.
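Computed tomography of the kind surveyed here recovers a field from its projections. A minimal sketch using the Kaczmarz/ART iteration on a tiny example; the 2x2 "image" and projection geometry are invented for illustration:

```python
# Minimal Kaczmarz/ART reconstruction of a field from its projections, on a
# tiny 2x2 "image"; the projection geometry here is invented for illustration.
import numpy as np

def art(A, p, n_sweeps=1000, relax=1.0):
    """Iteratively solve A @ x = p one ray (row) at a time."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for a_i, p_i in zip(A, p):
            x += relax * (p_i - a_i @ x) / (a_i @ a_i) * a_i
    return x

field = np.array([1.0, 2.0, 3.0, 4.0])     # true densities, flattened 2x2
A = np.array([[1, 1, 0, 0],                # ray through row 0
              [0, 0, 1, 1],                # ray through row 1
              [1, 0, 1, 0],                # ray through column 0
              [0, 1, 0, 1],                # ray through column 1
              [1, 0, 0, 1]], dtype=float)  # ray through the main diagonal
p = A @ field                              # simulated projection data
recon = art(A, p)
assert np.allclose(recon, field, atol=1e-3)
```

With too few view angles the system becomes rank-deficient and the iteration converges to one of many consistent fields, which is the ill-posedness and missing-data problem the report discusses.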

  12. Managing geometric information with a data base management system

    NASA Technical Reports Server (NTRS)

    Dube, R. P.

    1984-01-01

    The strategies for managing computer-based geometry are described. The computer model of geometry is the basis for communication, manipulation, and analysis of shape information. The research on Integrated Programs for Aerospace-Vehicle Design (IPAD) focuses on the use of data base management system (DBMS) technology to manage engineering/manufacturing data. The objective of IPAD is to develop a computer-based engineering complex which automates the storage, management, protection, and retrieval of engineering data. In particular, this facility must manage geometry information as well as associated data. The approach taken on the IPAD project to achieve this objective is discussed. Geometry management in current systems and the approach taken in the early IPAD prototypes are examined.

  13. Advanced ballistic range technology

    NASA Technical Reports Server (NTRS)

    Yates, Leslie A.

    1993-01-01

    Optical images, such as experimental interferograms, schlieren, and shadowgraphs, are routinely used to identify and locate features in experimental flow fields and for validating computational fluid dynamics (CFD) codes. Interferograms can also be used for comparing experimental and computed integrated densities. By constructing these optical images from flow-field simulations, one-to-one comparisons of computation and experiment are possible. During the period from February 1, 1992, to November 30, 1992, work has continued on the development of CISS (Constructed Interferograms, Schlieren, and Shadowgraphs), a code that constructs images from ideal- and real-gas flow-field simulations. In addition, research connected with the automated film-reading system and the proposed reactivation of the radiation facility has continued.

  14. Computational Accelerator Physics. Proceedings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bisognano, J.J.; Mondelli, A.A.

    1997-04-01

    The sixty-two papers appearing in this volume were presented at CAP96, the Computational Accelerator Physics Conference held in Williamsburg, Virginia, from September 24-27, 1996. Science Applications International Corporation (SAIC) and the Thomas Jefferson National Accelerator Facility (Jefferson Lab) jointly hosted CAP96, with financial support from the U.S. Department of Energy's Office of Energy Research and the Office of Naval Research. Topics ranged from descriptions of specific codes to advanced computing techniques and numerical methods. Update talks were presented on nearly all of the accelerator community's major electromagnetic and particle tracking codes. Among all papers, thirty are abstracted for the Energy Science and Technology database. (AIP)

  15. NASA Johnson Space Center Usability Testing and Analysis facility (UTAF) Overview

    NASA Technical Reports Server (NTRS)

    Whitmore, Mihriban; Holden, Kritina L.

    2005-01-01

    The Usability Testing and Analysis Facility (UTAF) is part of the Space Human Factors Laboratory at the NASA Johnson Space Center in Houston, Texas. The facility performs research for NASA's Human Systems Integration Program, under the Human Systems Research and Technology Division. Specifically, the UTAF provides human factors support for space vehicles, including the International Space Station, the Space Shuttle, and the forthcoming Crew Exploration Vehicle. In addition, there are ongoing collaborative research efforts with external corporations and universities. The UTAF provides human factors analysis, evaluation, and usability testing of crew interfaces for space applications. This includes computer displays and controls, workstation systems, and work environments. The UTAF has a unique mix of capabilities, with a staff experienced in both cognitive human factors and ergonomics. The current areas of focus are: human factors applications in emergency medical care and informatics; control and display technologies for electronic procedures and instructions; voice recognition in noisy environments; crew restraint design for unique microgravity workstations; and refinement of human factors processes and requirements. This presentation will provide an overview of ongoing activities, and will address how the UTAF projects will evolve to meet new space initiatives.

  16. Impacts: NIST Building and Fire Research Laboratory (technical and societal)

    NASA Astrophysics Data System (ADS)

    Raufaste, N. J.

    1993-08-01

    The Building and Fire Research Laboratory (BFRL) of the National Institute of Standards and Technology (NIST) is dedicated to the life cycle quality of constructed facilities. The report describes major effects of BFRL's program on building and fire research. Contents of the document include: structural reliability; nondestructive testing of concrete; structural failure investigations; seismic design and construction standards; rehabilitation codes and standards; alternative refrigerants research; HVAC simulation models; thermal insulation; residential equipment energy efficiency; residential plumbing standards; computer image evaluation of building materials; corrosion-protection for reinforcing steel; prediction of the service lives of building materials; quality of construction materials laboratory testing; roofing standards; simulating fires with computers; fire safety evaluation system; fire investigations; soot formation and evolution; cone calorimeter development; smoke detector standards; standard for the flammability of children's sleepwear; smoldering insulation fires; wood heating safety research; in-place testing of concrete; communication protocols for building automation and control systems; computer simulation of the properties of concrete and other porous materials; cigarette-induced furniture fires; carbon monoxide formation in enclosure fires; halon alternative fire extinguishing agents; turbulent mixing research; materials fire research; furniture flammability testing; standard for the cigarette ignition resistance of mattresses; support of navy firefighter trainer program; and using fire to clean up oil spills.

  17. SPAN: Astronomy and astrophysics

    NASA Technical Reports Server (NTRS)

    Thomas, Valerie L.; Green, James L.; Warren, Wayne H., Jr.; Lopez-Swafford, Brian

    1987-01-01

    The Space Physics Analysis Network (SPAN) is a multi-mission, correlative data comparison network which links science research and data analysis computers in the U.S., Canada, and Europe. The purpose of this document is to provide Astronomy and Astrophysics scientists, currently reachable on SPAN, with basic information and contacts for access to correlative databases, star catalogs, and other astrophysics facilities accessible over SPAN.

  18. Appraisal of Scientific Resources for Emergency Management.

    DTIC Science & Technology

    1983-09-01

    water, communications, computers, and oil refineries or storage facilities. In addition, the growth of the number of operative nuclear power plants ...one from a nuclear power plant accident); one involved hazardous waste disposal problems; and finally two involved wartime scenarios, one focusing on...protection research, radiological protection from nuclear power plant accidents, concepts and operation of public shelters, and post attack

  19. Wright Laboratory Research and Development Facilities Handbook

    DTIC Science & Technology

    1992-08-01

    properties of superconductors SPECIAL/UNIQUE CAPABILITIES: Two superconducting coils: 3-inch bore, 10 Tesla coil. 20 kilojoule repetitively pulsed coil 7 inch...bore, cryogenically cooled 14 Tesla coil INSTRUMENTATION: Computer Controlled Variable Temperature (2-400K) and Field (0-5 Tesla) Squid Susceptometer...Variable Temperature (10-80K) and Field (0-10 Tesla) Transport Current Measurement Apparatus RF Source Sputtering Rig, Optical Microscope, Furnaces

  20. Microgravity

    NASA Image and Video Library

    2004-04-15

    The Wake Shield Facility is a free-flying research and development facility that is designed to use the pure vacuum of space to conduct scientific research in the development of new materials. The thin film materials technology developed by the WSF could someday lead to applications such as faster electronics components for computers. The WSF Free-Flyer is a 12-foot-diameter stainless steel disk that, while traveling in orbit at approximately 18,000 mph, leaves in its wake a vacuum 1,000 to 10,000 times better than the best vacuums currently achieved on Earth. While it is carried into orbit by the Space Shuttle, the WSF is a fully equipped spacecraft in its own right, with cold gas propulsion for separation from the orbiter and a momentum bias attitude control system. All WSF functions are undertaken by a spacecraft computer, with the WSF remotely controlled from the ground. The ultra vacuum, nearly empty of all molecules, is then used to conduct a series of thin film growths by a process called epitaxy, which produces exceptionally pure and atomically ordered thin films of semiconductor compounds such as gallium arsenide. Using this process, the WSF offers the potential of producing thin film materials and the devices they will make possible.

  1. Implementation of the WICS Wall Interference Correction System at the National Transonic Facility

    NASA Technical Reports Server (NTRS)

    Iyer, Venkit; Everhart, Joel L.; Bir, Pamela J.; Ulbrich, Norbert

    2000-01-01

    The Wall Interference Correction System (WICS) is operational at the National Transonic Facility (NTF) of NASA Langley Research Center (NASA LaRC) for semispan and full span tests in the solid wall (slots covered) configuration. The method is based on the wall pressure signature method for computing corrections to the measured parameters. It is an adaptation of the WICS code operational at the 12 ft pressure wind tunnel (12ft PWT) of NASA Ames Research Center (NASA ARC). This paper discusses the details of implementation of WICS at the NTF including tunnel calibration, code modifications for tunnel and support geometry, changes made for the NTF wall orifices layout, details of interfacing with the tunnel data processing system, and post-processing of results. Example results of applying WICS to a semispan test and a full span test are presented. Comparison with classical correction results and an analysis of uncertainty in the corrections are also given. As a special application of the code, the Mach number calibration data from a centerline pipe test was computed by WICS. Finally, future work for expanding the applicability of the code including online implementation is discussed.

  2. Computational Design and Analysis of a Transonic Natural Laminar Flow Wing for a Wind Tunnel Model

    NASA Technical Reports Server (NTRS)

    Lynde, Michelle N.; Campbell, Richard L.

    2017-01-01

    A natural laminar flow (NLF) wind tunnel model has been designed and analyzed for a wind tunnel test in the National Transonic Facility (NTF) at the NASA Langley Research Center. The NLF design method is built into the CDISC design module and uses a Navier-Stokes flow solver, a boundary layer profile solver, and stability analysis and transition prediction software. The NLF design method alters the pressure distribution to support laminar flow on the upper surface of wings with high sweep and flight Reynolds numbers. The method addresses transition due to attachment line contamination/transition, Gortler vortices, and crossflow and Tollmien-Schlichting modal instabilities. The design method is applied to the wing of the Common Research Model (CRM) at transonic flight conditions. Computational analysis predicts significant extents of laminar flow on the wing upper surface, which results in drag savings. A 5.2 percent scale semispan model of the CRM NLF wing will be built and tested in the NTF. This test will aim to validate the NLF design method, as well as characterize the laminar flow testing capabilities in the wind tunnel facility.

  4. PREFACE: 14th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2011)

    NASA Astrophysics Data System (ADS)

    Teodorescu, Liliana; Britton, David; Glover, Nigel; Heinrich, Gudrun; Lauret, Jérôme; Naumann, Axel; Speer, Thomas; Teixeira-Dias, Pedro

    2012-06-01

    This volume of Journal of Physics: Conference Series is dedicated to scientific contributions presented at the 14th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2011), which took place on 5-7 September 2011 at Brunel University, UK. The workshop series, which began in 1990 in Lyon, France, brings together computer science researchers and practitioners, and researchers from particle physics and related fields, in order to explore and confront the boundaries of computing and of automatic data analysis and theoretical calculation techniques. It is a forum for the exchange of ideas among the fields, exploring and promoting cutting-edge computing, data analysis and theoretical calculation techniques in fundamental physics research. This year's edition of the workshop brought together over 100 participants from all over the world. 14 invited speakers presented key topics on computing ecosystems, cloud computing, multivariate data analysis, symbolic and automatic theoretical calculations as well as computing and data analysis challenges in astrophysics, bioinformatics and musicology. Over 80 other talks and posters presented state-of-the-art developments in the areas of the workshop's three tracks: Computing Technologies, Data Analysis Algorithms and Tools, and Computational Techniques in Theoretical Physics. Panel and round table discussions on data management and multivariate data analysis uncovered new ideas and collaboration opportunities in the respective areas. This edition of ACAT was generously sponsored by the Science and Technology Facilities Council (STFC), the Institute for Particle Physics Phenomenology (IPPP) at Durham University, Brookhaven National Laboratory in the USA and Dell.
We would like to thank all the participants of the workshop for the high level of their scientific contributions and for their enthusiastic participation in all its activities, which were, ultimately, the key factors in the success of the workshop. Further information on ACAT 2011 can be found at http://acat2011.cern.ch. Dr Liliana Teodorescu, Brunel University, on behalf of the ACAT group. The PDF also contains details of the workshop's committees and sponsors.

  5. INTEGRATION OF FACILITY MODELING CAPABILITIES FOR NUCLEAR NONPROLIFERATION ANALYSIS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorensek, M.; Hamm, L.; Garcia, H.

    2011-07-18

    Developing automated methods for data collection and analysis that can facilitate nuclear nonproliferation assessment is an important research area with significant consequences for the effective global deployment of nuclear energy. Facility modeling that can integrate and interpret observations collected from monitored facilities in order to ascertain their functional details will be a critical element of these methods. Although improvements are continually sought, existing facility modeling tools can characterize all aspects of reactor operations and the majority of nuclear fuel cycle processing steps, and include algorithms for data processing and interpretation. Assessing nonproliferation status is challenging because observations can come from many sources, including local and remote sensors that monitor facility operations, as well as open sources that provide specific business information about the monitored facilities, and can be of many different types. Although many current facility models are capable of analyzing large amounts of information, they have not been integrated in an analyst-friendly manner. This paper addresses some of these facility modeling capabilities and illustrates how they could be integrated and utilized for nonproliferation analysis. The inverse problem of inferring facility conditions based on collected observations is described, along with a proposed architecture and computer framework for utilizing facility modeling tools. After considering a representative sampling of key facility modeling capabilities, the proposed integration framework is illustrated with several examples.

  6. Scientific Computing Strategic Plan for the Idaho National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whiting, Eric Todd

    Scientific computing is a critical foundation of modern science. Without innovations in the field of computational science, the essential missions of the Department of Energy (DOE) would go unrealized. Taking a leadership role in such innovations is Idaho National Laboratory’s (INL’s) challenge and charge, and is central to INL’s ongoing success. Computing is an essential part of INL’s future. DOE science and technology missions rely firmly on computing capabilities in various forms. Modeling and simulation, fueled by innovations in computational science and validated through experiment, are a critical foundation of science and engineering. Big data analytics from an increasing number of widely varied sources is opening new windows of insight and discovery. Computing is a critical tool in education, science, engineering, and experiments. Advanced computing capabilities in the form of people, tools, computers, and facilities will position INL competitively to deliver results and solutions on important national science and engineering challenges. A computing strategy must include much more than simply computers. The foundational enabling component of computing at many DOE national laboratories is the combination of a showcase-like data center facility coupled with a very capable supercomputer. In addition, network connectivity, disk storage systems, and visualization hardware are critical and generally tightly coupled to the computer system and co-located in the same facility. The existence of these resources in a single data center facility opens the doors to many opportunities that would not otherwise be possible.

  7. Preface: SciDAC 2008

    NASA Astrophysics Data System (ADS)

    Stevens, Rick

    2008-07-01

    The fourth annual Scientific Discovery through Advanced Computing (SciDAC) Conference was held June 13-18, 2008, in Seattle, Washington. The SciDAC conference series is the premier communitywide venue for presentation of results from the DOE Office of Science's interdisciplinary computational science program. Started in 2001 and renewed in 2006, the DOE SciDAC program is the country's - and arguably the world's - most significant interdisciplinary research program supporting the development of advanced scientific computing methods and their application to fundamental and applied areas of science. SciDAC supports computational science across many disciplines, including astrophysics, biology, chemistry, fusion sciences, and nuclear physics. Moreover, the program actively encourages the creation of long-term partnerships among scientists focused on challenging problems and computer scientists and applied mathematicians developing the technology and tools needed to address those problems. The SciDAC program has played an increasingly important role in scientific research by allowing scientists to create more accurate models of complex processes, simulate problems once thought to be impossible, and analyze the growing amount of data generated by experiments. To help further the research community's ability to tap into the capabilities of current and future supercomputers, Under Secretary for Science, Raymond Orbach, launched the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program in 2003. The INCITE program was conceived specifically to seek out computationally intensive, large-scale research projects with the potential to significantly advance key areas in science and engineering. The program encourages proposals from universities, other research institutions, and industry. During the first two years of the INCITE program, 10 percent of the resources at NERSC were allocated to INCITE awardees. 
However, demand for supercomputing resources far exceeded available systems; and in 2003, the Office of Science identified increasing computing capability by a factor of 100 as the second priority on its Facilities of the Future list. The goal was to establish leadership-class computing resources to support open science. As a result of a peer reviewed competition, the first leadership computing facility was established at Oak Ridge National Laboratory in 2004. A second leadership computing facility was established at Argonne National Laboratory in 2006. This expansion of computational resources led to a corresponding expansion of the INCITE program. In 2008, Argonne, Lawrence Berkeley, Oak Ridge, and Pacific Northwest national laboratories all provided resources for INCITE. By awarding large blocks of computer time on the DOE leadership computing facilities, the INCITE program enables the largest-scale computations to be pursued. In 2009, INCITE will award over half a billion node-hours of time. The SciDAC conference celebrates progress in advancing science through large-scale modeling and simulation. Over 350 participants attended this year's talks, poster sessions, and tutorials, spanning the disciplines supported by DOE. While the principal focus was on SciDAC accomplishments, this year's conference also included invited presentations and posters from DOE INCITE awardees. Another new feature in the SciDAC conference series was an electronic theater and video poster session, which provided an opportunity for the community to see over 50 scientific visualizations in a venue equipped with many high-resolution large-format displays. 
To highlight the growing international interest in petascale computing, this year's SciDAC conference included a keynote presentation by Herman Lederer from the Max Planck Institut, one of the leaders of the DEISA (Distributed European Infrastructure for Supercomputing Applications) project and a member of the PRACE consortium, Europe's main petascale project. We also heard excellent talks from several European groups, including Laurent Gicquel of CERFACS, who spoke on `Large-Eddy Simulations of Turbulent Reacting Flows of Real Burners: Status and Challenges', and Jean-Francois Hamelin from EDF, who presented a talk on `Getting Ready for Petaflop Capacities and Beyond: A Utility Perspective'. Two other compelling addresses gave attendees a glimpse into the future. Tomas Diaz de la Rubia of Lawrence Livermore National Laboratory spoke on a vision for a fusion/fission hybrid reactor known as the `LIFE Engine' and discussed some of the materials and modeling challenges that need to be overcome to realize the vision for a 1000-year greenhouse-gas-free power source. Dan Reed from Microsoft gave a capstone talk on the convergence of technology, architecture, and infrastructure for cloud computing, data-intensive computing, and exascale computing (10^18 flops/sec). High-performance computing is making rapid strides. The SciDAC community's computational resources are expanding dramatically. In the summer of 2008 the first general-purpose petascale system (the IBM Cell-based Roadrunner at Los Alamos National Laboratory) was recognized in the Top 500 list of fastest machines, heralding the dawn of the petascale era. The DOE's leadership computing facility at Argonne reached number three on the Top 500 and is at the moment the most capable open science machine, based on an IBM BG/P system with a peak performance of over 550 teraflops/sec. Later this year Oak Ridge is expected to deploy a 1 petaflops/sec Cray XT system.
And even before the scientific community has had an opportunity to make significant use of petascale systems, the computer science research community is forging ahead with ideas and strategies for development of systems that may by the end of the next decade sustain exascale performance. Several talks addressed barriers to, and strategies for, achieving exascale capabilities. The last day of the conference was devoted to tutorials hosted by Microsoft Research at a new conference facility in Redmond, Washington. Over 90 people attended the tutorials, which covered topics ranging from an introduction to BG/P programming to advanced numerical libraries. The SciDAC and INCITE programs and the DOE Office of Advanced Scientific Computing Research core program investments in applied mathematics, computer science, and computational and networking facilities provide a nearly optimum framework for advancing computational science for DOE's Office of Science. At a broader level this framework also is benefiting the entire American scientific enterprise. As we look forward, it is clear that computational approaches will play an increasingly significant role in addressing challenging problems in basic science, energy, and environmental research. It takes many people to organize and support the SciDAC conference, and I would like to thank as many of them as possible. The backbone of the conference is the technical program; and the task of selecting, vetting, and recruiting speakers is the job of the organizing committee. I thank the members of this committee for all the hard work and the many tens of conference calls that enabled a wonderful program to be assembled. 
This year the following people served on the organizing committee: Jim Ahrens, LANL; David Bader, LLNL; Bryan Barnett, Microsoft; Peter Beckman, ANL; Vincent Chan, GA; Jackie Chen, SNL; Lori Diachin, LLNL; Dan Fay, Microsoft; Ian Foster, ANL; Mark Gordon, Ames; Mohammad Khaleel, PNNL; David Keyes, Columbia University; Bob Lucas, University of Southern California; Tony Mezzacappa, ORNL; Jeff Nichols, ORNL; David Nowak, ANL; Michael Papka, ANL; Thomas Schultess, ORNL; Horst Simon, LBNL; David Skinner, LBNL; Panagiotis Spentzouris, Fermilab; Bob Sugar, UCSB; and Kathy Yelick, LBNL. I owe a special thanks to Mike Papka and Jim Ahrens for handling the electronic theater. I also thank all those who submitted videos. It was a highly successful experiment. Behind the scenes an enormous amount of work is required to make a large conference go smoothly. First I thank Cheryl Zidel for her tireless efforts as organizing committee liaison and posters chair and, in general, handling all of my end of the program and keeping me calm. I also thank Gail Pieper for her work in editing the proceedings, Beth Cerny Patino for her work on the Organizing Committee website and electronic theater, and Ken Raffenetti for his work in keeping that website working. Jon Bashor and John Hules did an excellent job in handling conference communications. I thank Caitlin Youngquist for the striking graphic design; Dan Fay for tutorials arrangements; and Lynn Dory, Suzanne Stevenson, Sarah Pebelske and Sarah Zidel for on-site registration and conference support. We all owe Yeen Mankin an extra-special thanks for choosing the hotel, handling contracts, arranging menus, securing venues, and reassuring the chair that everything was under control. We are pleased to have obtained corporate sponsorship from Cray, IBM, Intel, HP, and SiCortex. I thank all the speakers and panel presenters. 
I also thank the former conference chairs Tony Mezzacappa, Bill Tang, and David Keyes, who were never far away for advice and encouragement. Finally, I offer my thanks to Michael Strayer, without whose leadership, vision, and persistence the SciDAC program would not have come into being and flourished. I am honored to be part of his program and his friend. Rick Stevens, Seattle, Washington, July 18, 2008

  8. Design of a radiation facility for very small specimens used in radiobiology studies

    NASA Astrophysics Data System (ADS)

    Rodriguez, Manuel; Jeraj, Robert

    2008-06-01

    A design of a radiation facility for very small specimens used in radiobiology is presented. This micro-irradiator has been primarily designed to irradiate partial bodies of zebrafish embryos 3-4 mm in length. A miniature x-ray source producing a 50 kV photon beam is used as the radiation source. The source is inserted in a cylindrical brass collimator that has a pinhole of 1.0 mm in diameter along the central axis to produce a pencil photon beam. The collimator with the source is attached underneath a computer-controlled movable table which holds the specimens. Using a 45° tilted mirror, a digital camera connected to the computer takes pictures of the specimen and the pinhole collimator. From the image provided by the camera, the relative distance from the specimen to the pinhole axis is calculated and coordinates are sent to the movable table to properly position the samples in the beam path. Owing to its monitoring system, the characteristics of the radiation beam, the accuracy and precision of specimen positioning, and automatic image-based specimen recognition, this radiation facility is a suitable tool for irradiating partial bodies of zebrafish embryos, cell cultures or any other small specimen used in radiobiology research.
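    The positioning step described above, computing the specimen's offset from the pinhole axis in a camera image and commanding the stage accordingly, can be sketched as follows. The function name, coordinate convention, and calibration value are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of image-based specimen positioning: the paper does
# not give these names or calibration numbers.

def pixel_offset_to_stage_move(specimen_px, pinhole_px, mm_per_pixel):
    """Convert the specimen's pixel offset from the pinhole axis into a
    stage translation (mm) that centres the specimen in the beam path."""
    dx_px = specimen_px[0] - pinhole_px[0]
    dy_px = specimen_px[1] - pinhole_px[1]
    # Moving the table by the negative of the offset brings the specimen
    # onto the beam axis.
    return (-dx_px * mm_per_pixel, -dy_px * mm_per_pixel)

# Example: specimen imaged 120 px right of and 40 px above the pinhole
# axis, with an assumed calibration of 0.01 mm per pixel.
move = pixel_offset_to_stage_move((520, 360), (400, 400), 0.01)
print(move)  # (-1.2, 0.4)
```

    In practice the two pixel positions would come from automatic image recognition of the specimen and the pinhole, and the move would be sent to the table's motion controller.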

  9. Unleashing the Power of Distributed CPU/GPU Architectures: Massive Astronomical Data Analysis and Visualization Case Study

    NASA Astrophysics Data System (ADS)

    Hassan, A. H.; Fluke, C. J.; Barnes, D. G.

    2012-09-01

    Upcoming and future astronomy research facilities will systematically generate terabyte-sized data sets, moving astronomy into the petascale data era. While such facilities will provide astronomers with unprecedented levels of accuracy and coverage, the increases in dataset size and dimensionality will pose serious computational challenges for many current astronomy data analysis and visualization tools. With such data sizes, even simple data analysis tasks (e.g. calculating a histogram or computing data minimum/maximum) may not be achievable without access to a supercomputing facility. To effectively handle such dataset sizes, which exceed today's single machine memory and processing limits, we present a framework that exploits the distributed power of GPUs and many-core CPUs, with a goal of providing data analysis and visualization tasks as a service for astronomers. By mixing shared and distributed memory architectures, our framework effectively utilizes the underlying hardware infrastructure, handling both batched and real-time data analysis and visualization tasks. Offering such functionality as a service in a “software as a service” manner will reduce the total cost of ownership, provide an easy-to-use tool to the wider astronomical community, and enable a more optimized utilization of the underlying hardware infrastructure.
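    To illustrate the point about simple analysis tasks on oversized data: statistics such as minimum/maximum and fixed-range histograms can be accumulated in one streaming pass over chunks, so no single node ever holds the full dataset, and the per-chunk partials can be computed in parallel and merged. The function below is a generic sketch of that pattern, not code from the framework described in the record.

```python
def chunked_min_max_hist(chunks, lo, hi, nbins):
    """Single pass over an iterable of data chunks: global min/max plus a
    fixed-range histogram over [lo, hi]. Partials like these are what a
    distributed framework would compute per node and then merge."""
    vmin, vmax = float("inf"), float("-inf")
    hist = [0] * nbins
    width = (hi - lo) / nbins
    for chunk in chunks:
        for x in chunk:
            vmin = min(vmin, x)
            vmax = max(vmax, x)
            if lo <= x < hi:
                hist[int((x - lo) / width)] += 1
            elif x == hi:          # closed right edge of the last bin
                hist[-1] += 1
    return vmin, vmax, hist

# Example: three small "chunks" standing in for data too large to load
# at once.
chunks = [[0.1, 0.9, 0.5], [0.2, 0.8], [0.4, 1.0]]
print(chunked_min_max_hist(chunks, 0.0, 1.0, 2))
# (0.1, 1.0, [3, 4])
```

    Because min, max, and per-bin counts all merge associatively, each GPU or CPU node can process its own chunks independently and a final reduction combines the results.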

  10. Application Of Artificial Intelligence To Wind Tunnels

    NASA Technical Reports Server (NTRS)

    Lo, Ching F.; Steinle, Frank W., Jr.

    1989-01-01

    Report discusses potential use of artificial-intelligence systems to manage wind-tunnel test facilities at Ames Research Center. One of the goals of the program is to obtain experimental data of better quality and otherwise generally increase productivity of facilities. Another goal to increase efficiency and expertise of current personnel and to retain expertise of former personnel. Third goal to increase effectiveness of management through more efficient use of accumulated data. System used to improve schedules of operation and maintenance of tunnels and other equipment, assignment of personnel, distribution of electrical power, and analysis of costs and productivity. Several commercial artificial-intelligence computer programs discussed as possible candidates for use.

  11. Integrated exhaust gas analysis system for aircraft turbine engine component testing

    NASA Technical Reports Server (NTRS)

    Summers, R. L.; Anderson, R. C.

    1985-01-01

    An integrated exhaust gas analysis system was designed and installed in the hot-section facility at the Lewis Research Center. The system is designed to operate either manually or automatically and also to be operated from a remote station. The system measures oxygen, water vapor, total hydrocarbons, carbon monoxide, carbon dioxide, and oxides of nitrogen. Two microprocessors control the system and the analyzers, collect data and process them into engineering units, and present the data to the facility computers and the system operator. Within the design of this system there are innovative concepts and procedures that are of general interest and application to other gas analysis tasks.

  12. NASA Virtual Glovebox: An Immersive Virtual Desktop Environment for Training Astronauts in Life Science Experiments

    NASA Technical Reports Server (NTRS)

    Twombly, I. Alexander; Smith, Jeffrey; Bruyns, Cynthia; Montgomery, Kevin; Boyle, Richard

    2003-01-01

    The International Space Station will soon provide an unparalleled research facility for studying the near- and longer-term effects of microgravity on living systems. Using the Space Station Glovebox Facility - a compact, fully contained reach-in environment - astronauts will conduct technically challenging life sciences experiments. Virtual environment technologies are being developed at NASA Ames Research Center to help realize the scientific potential of this unique resource by facilitating the experimental hardware and protocol designs and by assisting the astronauts in training. The Virtual GloveboX (VGX) integrates high-fidelity graphics, force-feedback devices and real-time computer simulation engines to achieve an immersive training environment. Here, we describe the prototype VGX system, the distributed processing architecture used in the simulation environment, and modifications to the visualization pipeline required to accommodate the display configuration.

  13. Have computers, will travel: providing on-site library instruction in rural health facilities using a portable computer lab.

    PubMed

    Neilson, Christine J

    2010-01-01

    The Saskatchewan Health Information Resources Partnership (SHIRP) provides library instruction to Saskatchewan's health care practitioners and students on placement in health care facilities as part of its mission to provide province-wide access to evidence-based health library resources. A portable computer lab was assembled in 2007 to provide hands-on training in rural health facilities that do not have computer labs of their own. Aside from some minor inconveniences, the introduction and operation of the portable lab has gone smoothly. The lab has been well received by SHIRP patrons and continues to be an essential part of SHIRP outreach.

  14. A large-scale computer facility for computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Bailey, F. R.; Ballhaus, W. F., Jr.

    1985-01-01

    As a result of advances related to the combination of computer system technology and numerical modeling, computational aerodynamics has emerged as an essential element in aerospace vehicle design methodology. NASA has, therefore, initiated the Numerical Aerodynamic Simulation (NAS) Program with the objective to provide a basis for further advances in the modeling of aerodynamic flowfields. The Program is concerned with the development of a leading-edge, large-scale computer facility. This facility is to be made available to Government agencies, industry, and universities as a necessary element in ensuring continuing leadership in computational aerodynamics and related disciplines. Attention is given to the requirements for computational aerodynamics, the principal specific goals of the NAS Program, the high-speed processor subsystem, the workstation subsystem, the support processing subsystem, the graphics subsystem, the mass storage subsystem, the long-haul communication subsystem, the high-speed data-network subsystem, and software.

  15. NIF ICCS network design and loading analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tietbohl, G; Bryant, R

    The National Ignition Facility (NIF) is housed within a large facility about the size of two football fields. The Integrated Computer Control System (ICCS) is distributed throughout this facility and requires the integration of about 40,000 control points and over 500 video sources. This integration is provided by approximately 700 control computers distributed throughout the NIF facility and a network that provides the communication infrastructure. A main control room houses a set of seven computer consoles providing operator access and control of the various distributed front-end processors (FEPs). There are also remote workstations distributed within the facility that provide operator console functions while personnel are testing and troubleshooting throughout the facility. The operator workstations communicate with the FEPs, which implement the localized control and monitoring functions. There are different types of FEPs for the various subsystems being controlled. This report describes the design of the NIF ICCS network and how it meets the traffic loads that are expected and the requirements of the Sub-System Design Requirements (SSDRs). This document supersedes the earlier reports entitled Analysis of the National Ignition Facility Network, dated November 6, 1996, and The National Ignition Facility Digital Video and Control Network, dated July 9, 1996. For an overview of the ICCS, refer to the document NIF Integrated Computer Controls System Description (NIF-3738).

  16. Evaluation of nuclear-facility decommissioning projects. Summary report: Ames Laboratory Research Reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Link, B.W.; Miller, R.L.

    1983-07-01

    This document summarizes the available information concerning the decommissioning of the Ames Laboratory Research Reactor (ALRR), a five-megawatt heavy water moderated and cooled research reactor. The data were placed in a computerized information retrieval/manipulation system which permits its future utilization for purposes of comparative analysis. This information is presented both in detail in its computer output form and also as a manually assembled summarization which highlights the more important aspects of the decommissioning program. Some comparative information with reference to generic decommissioning data extracted from NUREG/CR 1756, Technology, Safety and Costs of Decommissioning Nuclear Research and Test Reactors, is included.

  17. The Nature of Scatter at the DARHT Facility and Suggestions for Improved Modeling of DARHT Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morneau, Rachel Anne; Klasky, Marc Louis

    The U.S. Stockpile Stewardship Program [1] is designed to sustain and evaluate the nuclear weapons stockpile while foregoing underground nuclear tests. The maintenance of a smaller, aging U.S. nuclear weapons stockpile without underground testing requires complex computer calculations [14]. These calculations in turn need to be verified and benchmarked [14]. A wide range of research facilities have been used to test and evaluate nuclear weapons while respecting the Comprehensive Nuclear Test-Ban Treaty (CTBT) [2]. Some of these facilities include the National Ignition Facility (NIF) at Lawrence Livermore National Laboratory, the Z machine at Sandia National Laboratories, and the Dual Axis Radiographic Hydrodynamic Test (DARHT) facility at Los Alamos National Laboratory. This research will focus largely on DARHT (although some information from Cygnus and the Los Alamos Microtron may be used) by modeling it and comparing to experimental data. DARHT is an electron accelerator that employs high-energy flash x-ray sources for imaging hydro-tests. This research proposes to address some of the issues crucial to understanding DARHT Axis II and the analysis of the radiographic images produced. Primarily, the nature of scatter at DARHT will be modeled and verified with experimental data. It will then be shown that certain design decisions can be made to optimize the scatter field for hydrotest experiments. Spectral effects will be briefly explored to determine whether changes in the energy spectrum caused by target changes have any considerable effect on the density reconstruction. Finally, a generalized scatter model will be made using results from MCNP that can be convolved with the direct transmission of an object to simulate the scatter of that object at the detector plane. The region in which this scatter model is appropriate will be explored.

  18. EOS MLS Science Data Processing System: A Description of Architecture and Capabilities

    NASA Technical Reports Server (NTRS)

    Cuddy, David T.; Echeverri, Mark D.; Wagner, Paul A.; Hanzel, Audrey T.; Fuller, Ryan A.

    2006-01-01

    This paper describes the architecture and capabilities of the Science Data Processing System (SDPS) for the EOS MLS. The SDPS consists of two major components--the Science Computing Facility and the Science Investigator-led Processing System. The Science Computing Facility provides the facilities for the EOS MLS Science Team to perform the functions of scientific algorithm development, processing software development, quality control of data products, and scientific analyses. The Science Investigator-led Processing System processes and reprocesses the science data for the entire mission and delivers the data products to the Science Computing Facility and to the Goddard Space Flight Center Earth Science Distributed Active Archive Center, which archives and distributes the standard science products.

  19. A preliminary design study for a cosmic X-ray spectrometer

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The results are described of theoretical and experimental investigations aimed at the development of a curved crystal cosmic X-ray spectrometer to be used at the focal plane of the large orbiting X-ray telescope on the third High Energy Astronomical Observatory. The effort was concentrated on the development of spectrometer concepts and their evaluation by theoretical analysis, computer simulation, and laboratory testing with breadboard arrangements of crystals and detectors. In addition, a computer-controlled facility for precision testing and evaluation of crystals in air and vacuum was constructed. A summary of research objectives and results is included.

  20. Management and development of local area network upgrade prototype

    NASA Technical Reports Server (NTRS)

    Fouser, T. J.

    1981-01-01

    Given the situation of having management and development users accessing a central computing facility and given the fact that these same users have the need for local computation and storage, the utilization of a commercially available networking system such as CP/NET from Digital Research provides the building blocks for communicating intelligent microsystems to file and print services. The major problems to be overcome in the implementation of such a network are the dearth of intelligent communication front-ends for the microcomputers and the lack of a rich set of management and software development tools.

  1. The Generation Challenge Programme Platform: Semantic Standards and Workbench for Crop Science

    PubMed Central

    Bruskiewich, Richard; Senger, Martin; Davenport, Guy; Ruiz, Manuel; Rouard, Mathieu; Hazekamp, Tom; Takeya, Masaru; Doi, Koji; Satoh, Kouji; Costa, Marcos; Simon, Reinhard; Balaji, Jayashree; Akintunde, Akinnola; Mauleon, Ramil; Wanchana, Samart; Shah, Trushar; Anacleto, Mylah; Portugal, Arllet; Ulat, Victor Jun; Thongjuea, Supat; Braak, Kyle; Ritter, Sebastian; Dereeper, Alexis; Skofic, Milko; Rojas, Edwin; Martins, Natalia; Pappas, Georgios; Alamban, Ryan; Almodiel, Roque; Barboza, Lord Hendrix; Detras, Jeffrey; Manansala, Kevin; Mendoza, Michael Jonathan; Morales, Jeffrey; Peralta, Barry; Valerio, Rowena; Zhang, Yi; Gregorio, Sergio; Hermocilla, Joseph; Echavez, Michael; Yap, Jan Michael; Farmer, Andrew; Schiltz, Gary; Lee, Jennifer; Casstevens, Terry; Jaiswal, Pankaj; Meintjes, Ayton; Wilkinson, Mark; Good, Benjamin; Wagner, James; Morris, Jane; Marshall, David; Collins, Anthony; Kikuchi, Shoshi; Metz, Thomas; McLaren, Graham; van Hintum, Theo

    2008-01-01

    The Generation Challenge programme (GCP) is a global crop research consortium directed toward crop improvement through the application of comparative biology and genetic resources characterization to plant breeding. A key consortium research activity is the development of a GCP crop bioinformatics platform to support GCP research. This platform includes the following: (i) shared, public platform-independent domain models, ontology, and data formats to enable interoperability of data and analysis flows within the platform; (ii) web service and registry technologies to identify, share, and integrate information across diverse, globally dispersed data sources, as well as to access high-performance computational (HPC) facilities for computationally intensive, high-throughput analyses of project data; (iii) platform-specific middleware reference implementations of the domain model integrating a suite of public (largely open-access/-source) databases and software tools into a workbench to facilitate biodiversity analysis, comparative analysis of crop genomic data, and plant breeding decision making. PMID:18483570

  2. Basic Requirements for Systems Software Research and Development

    NASA Technical Reports Server (NTRS)

    Kuszmaul, Chris; Nitzberg, Bill

    1996-01-01

    Our success over the past ten years evaluating and developing advanced computing technologies has been due to a simple research and development (R/D) model. Our model has three phases: (a) evaluating the state-of-the-art, (b) identifying problems and creating innovations, and (c) developing solutions, improving the state-of-the-art. This cycle has four basic requirements: a large production testbed with real users, a diverse collection of state-of-the-art hardware, facilities for evaluation of emerging technologies and development of innovations, and control over system management on these testbeds. Future research will be irrelevant and future products will not work if any of these requirements is eliminated. In order to retain our effectiveness, the numerical aerospace simulator (NAS) must replace out-of-date production testbeds in as timely a fashion as possible, and cannot afford to ignore innovative designs such as new distributed shared memory machines, clustered commodity-based computers, and multi-threaded architectures.

  3. Exploring the role of pendant amines in transition metal complexes for the reduction of N2 to hydrazine and ammonia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhattacharya, Papri; Prokopchuk, Demyan E.; Mock, Michael T.

    2017-03-01

    This review examines the synthesis and acid reactivity of transition metal dinitrogen complexes bearing diphosphine ligands containing pendant amine groups in the second coordination sphere. This manuscript is a review of the work performed in the Center for Molecular Electrocatalysis. This work was supported as part of the Center for Molecular Electrocatalysis, an Energy Frontier Research Center funded by the U.S. Department of Energy (U.S. DOE), Office of Science, Office of Basic Energy Sciences. EPR studies on Fe were performed using EMSL, a national scientific user facility sponsored by the DOE’s Office of Biological and Environmental Research and located at PNNL. Computational resources were provided by the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory. Pacific Northwest National Laboratory is operated by Battelle for the U.S. DOE.

  4. Advanced Scientific Computing Research Network Requirements: ASCR Network Requirements Review Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bacon, Charles; Bell, Greg; Canon, Shane

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the U.S. Department of Energy (DOE) Office of Science (SC), the single largest supporter of basic research in the physical sciences in the United States. In support of SC programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 25 years. In October 2012, ESnet and the Office of Advanced Scientific Computing Research (ASCR) of the DOE SC organized a review to characterize the networking requirements of the programs funded by the ASCR program office. The requirements identified at the review are summarized in the Findings section, and are described in more detail in the body of the report.

  5. Scientific Visualization in High Speed Network Environments

    NASA Technical Reports Server (NTRS)

    Vaziri, Arsi; Kutler, Paul (Technical Monitor)

    1997-01-01

    In several cases, new visualization techniques have vastly increased the researcher's ability to analyze and comprehend data. Similarly, the role of networks in providing an efficient supercomputing environment has become more critical and continues to grow at a faster rate than the increase in the processing capabilities of supercomputers. A close relationship between scientific visualization and high-speed networks in providing an important link to support efficient supercomputing is identified. The two technologies are driven by the increasing complexity and volume of supercomputer data. The interaction of scientific visualization and high-speed networks in a Computational Fluid Dynamics simulation/visualization environment is described. Current capabilities supported by high-speed networks, supercomputers, and high-performance graphics workstations at the Numerical Aerodynamic Simulation Facility (NAS) at NASA Ames Research Center are described. Applied research in providing a supercomputer visualization environment to support future computational requirements is summarized.

  6. Atmospheric Radiation Measurement Program Climate Research Facility Operations Quarterly Report. October 1 - December 31, 2009.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D. L. Sisterson

    2010-01-12

    Individual raw data streams from instrumentation at the Atmospheric Radiation Measurement (ARM) Program Climate Research Facility (ACRF) fixed and mobile sites are collected and sent to the Data Management Facility (DMF) at Pacific Northwest National Laboratory (PNNL) for processing in near real-time. Raw and processed data are then sent approximately daily to the ACRF Archive, where they are made available to users. For each instrument, we calculate the ratio of the actual number of data records received daily at the Archive to the expected number of data records. The results are tabulated by (1) individual data stream, site, and month for the current year and (2) site and fiscal year (FY) dating back to 1998. The U.S. Department of Energy (DOE) requires national user facilities to report time-based operating data. The requirements concern the actual hours of operation (ACTUAL); the estimated maximum operation or uptime goal (OPSMAX), which accounts for planned downtime; and the VARIANCE [1 - (ACTUAL/OPSMAX)], which accounts for unplanned downtime. The OPSMAX time for the first quarter of FY 2010 for the North Slope Alaska (NSA) locale is 1,987.20 hours (0.90 x 2,208); for the Southern Great Plains (SGP) site is 2,097.60 hours (0.95 x 2,208); and for the Tropical Western Pacific (TWP) locale is 1,876.8 hours (0.85 x 2,208). The ARM Mobile Facility (AMF) deployment in Graciosa Island, the Azores, Portugal, continues; its OPSMAX time this quarter is 2,097.60 hours (0.95 x 2,208). The differences in OPSMAX performance reflect the complexity of local logistics and the frequency of extreme weather events. It is impractical to measure OPSMAX for each instrument or data stream. Data availability reported here refers to the average of the individual, continuous data streams that have been received by the Archive. Data not at the Archive are the result of downtime (scheduled or unplanned) of the individual instruments.
Therefore, data availability is directly related to individual instrument uptime. Thus, the average percentage of data in the Archive represents the average percentage of the time (24 hours per day, 92 days for this quarter) the instruments were operating this quarter. The Site Access Request System is a web-based database used to track visitors to the fixed and mobile sites, all of which have facilities that can be visited. The NSA locale has the Barrow and Atqasuk sites. The SGP locale has historically had a central facility, 23 extended facilities, 4 boundary facilities, and 3 intermediate facilities. Beginning this quarter, the SGP began a transition to a smaller footprint (150 km x 150 km) by rearranging the original and new instrumentation made available through the American Recovery and Reinvestment Act (ARRA). The central facility and 4 extended facilities will remain, but there will be up to 16 new surface characterization facilities, 4 radar facilities, and 3 profiler facilities sited in the smaller domain. This new configuration will provide observations at scales more appropriate to current and future climate models. The TWP locale has the Manus, Nauru, and Darwin sites. These sites will also have expanded measurement capabilities with the addition of new instrumentation made available through ARRA funds. It is anticipated that the new instrumentation at all the fixed sites will be in place within the next 12 months. The AMF continues its 20-month deployment in Graciosa Island, Azores, Portugal, that started May 1, 2009. The AMF will also have additional observational capabilities within the next 12 months. Users can participate in field experiments at the sites and mobile facility, or they can participate remotely. Therefore, a variety of mechanisms are provided to users to access site information. Users who have immediate (real-time) needs for data access can request a research account on the local site data systems.
This access is particularly useful to users for quick decisions in executing time-dependent activities associated with field campaigns at the fixed sites and mobile facility locations. The eight computers for the research accounts are located at the Barrow and Atqasuk sites; the SGP central facility; the TWP Manus, Nauru, and Darwin sites; the AMF; and the DMF at PNNL. However, users are warned that the data provided at the time of collection have not been fully screened for quality and therefore are not considered to be official ACRF data. Hence, these accounts are considered to be part of the facility activities associated with field campaign activities, and users are tracked. In addition, users who visit sites can connect their computer or instrument to an ACRF site data system network, which requires an on-site device account. Remote (off-site) users can also have remote access to any ACRF instrument or computer system at any ACRF site, which requires an off-site device account. These accounts are also managed and tracked.
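The uptime bookkeeping described in this entry reduces to simple arithmetic. The sketch below reproduces it in Python using the figures quoted in the report; the function names are illustrative and not part of any ARM software:

```python
# Quarterly uptime metrics as defined in the ARM report:
# OPSMAX = uptime-goal fraction x total hours in the quarter,
# VARIANCE = 1 - (ACTUAL / OPSMAX), i.e. the unplanned-downtime fraction.

HOURS_IN_QUARTER = 24 * 92  # 2,208 hours in this 92-day quarter


def opsmax(goal_fraction: float, hours: int = HOURS_IN_QUARTER) -> float:
    """Maximum expected operating hours after planned downtime."""
    return goal_fraction * hours


def variance(actual_hours: float, opsmax_hours: float) -> float:
    """Unplanned-downtime fraction: 1 - (ACTUAL / OPSMAX)."""
    return 1.0 - actual_hours / opsmax_hours


# Uptime goals quoted in the report:
nsa = opsmax(0.90)  # North Slope Alaska: ~1,987.2 hours
sgp = opsmax(0.95)  # Southern Great Plains and AMF: ~2,097.6 hours
twp = opsmax(0.85)  # Tropical Western Pacific: ~1,876.8 hours
```

For example, a site that actually operated 1,800 hours against the SGP goal would report a variance of about 0.14, i.e. roughly 14% unplanned downtime.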

  7. Advanced Simulation & Computing FY15 Implementation Plan Volume 2, Rev. 0.5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCoy, Michel; Archer, Bill; Matzen, M. Keith

    2014-09-16

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources that support annual stockpile assessment and certification, study advanced nuclear weapons design and manufacturing processes, analyze accident scenarios and weapons aging, and provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balance of resources, including technical staff, hardware, simulation software, and computer science solutions. As the program approaches the end of its second decade, ASC is intently focused on increasing predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (sufficient resolution, dimensionality, and scientific details), quantify critical margins and uncertainties, and resolve increasingly difficult analyses needed for the SSP. Where possible, the program also enables the use of high-performance simulation and computing tools to address broader national security needs, such as foreign nuclear weapon assessments and counternuclear terrorism.

  8. [The Unified National Health System and the third sector: Characterization of non-hospital facilities providing basic health care services in Belo Horizonte, Minas Gerais, Brazil].

    PubMed

    Canabrava, Claudia Marques; Andrade, Eli Iôla Gurgel; Janones, Fúlvio Alves; Alves, Thiago Andrade; Cherchiglia, Mariangela Leal

    2007-01-01

    In Brazil, nonprofit or charitable organizations are the oldest and most traditional and institutionalized form of relationship between the third sector and the state. Despite the historical importance of charitable hospital care, little research has been done on the participation of the nonprofit sector in basic health care in the country. This article identifies and describes non-hospital nonprofit facilities providing systematically organized basic health care in Belo Horizonte, Minas Gerais, Brazil, in 2004. The research focused on the facilities registered with the National Council on Social Work, using computer-assisted telephone and semi-structured interviews. Identification and description of these organizations showed that the charitable segment of the third sector conducts organized and systematic basic health care services but is not recognized by the Unified National Health System as a potential partner, even though it receives referrals from basic government services. The study showed spatial and temporal overlapping of government and third-sector services in the same target population.

  9. NASA Glenn Wind Tunnel Model Systems Criteria

    NASA Technical Reports Server (NTRS)

    Soeder, Ronald H.; Roeder, James W.; Stark, David E.; Linne, Alan A.

    2004-01-01

    This report describes criteria for the design, analysis, quality assurance, and documentation of models that are to be tested in the wind tunnel facilities at the NASA Glenn Research Center. This report presents two methods for computing model allowable stresses on the basis of the yield stress or ultimate stress, and it defines project procedures to test models in the NASA Glenn aeropropulsion facilities. Both customer-furnished and in-house model systems are discussed. The functions of the facility personnel and customers are defined. The format for the pretest meetings, safety permit process, and model reviews are outlined. The format for the model systems report (a requirement for each model that is to be tested at NASA Glenn) is described, the engineers responsible for developing the model systems report are listed, and the timetable for its delivery to the project engineer is given.
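As a rough illustration of the two allowable-stress methods this criteria document mentions, the sketch below computes an allowable stress from either the yield or the ultimate stress and takes the governing (smaller) value. The safety factors and material numbers here are placeholders for illustration only, not the values specified in the NASA Glenn criteria:

```python
def allowable_stress(yield_stress: float, ultimate_stress: float,
                     fs_yield: float = 3.0, fs_ultimate: float = 4.0) -> float:
    """Governing allowable stress: the smaller of yield/FS_y and ultimate/FS_u.

    The default safety factors are illustrative placeholders, not the
    factors mandated by the NASA Glenn model systems criteria.
    """
    return min(yield_stress / fs_yield, ultimate_stress / fs_ultimate)


# Hypothetical alloy (values in MPa, illustrative only):
limit = allowable_stress(yield_stress=435.0, ultimate_stress=670.0)
# Here the yield-based limit (435/3 = 145.0) governs over the
# ultimate-based limit (670/4 = 167.5).
```

Whichever basis yields the lower stress governs the design, which is why both methods are evaluated for each model component.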

  10. An evaluation of software tools for the design and development of cockpit displays

    NASA Technical Reports Server (NTRS)

    Ellis, Thomas D., Jr.

    1993-01-01

    The use of all-glass cockpits at the NASA Langley Research Center (LaRC) simulation facility has changed the means of design, development, and maintenance of instrument displays. The human-machine interface has evolved from a physical hardware device to a software-generated electronic display system. This has subsequently caused an increased workload at the facility. As computer processing power increases and the glass cockpit becomes predominant in facilities, software tools used in the design and development of cockpit displays are becoming both feasible and necessary for a more productive simulation environment. This paper defines LaRC requirements of a display software development tool and compares two available applications against these requirements. As a part of the software engineering process, these tools reduce development time, provide a common platform for display development, and produce exceptional real-time results.

  11. Quantum Machine Learning

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak

    2018-01-01

    Quantum computing promises an unprecedented ability to solve intractable problems by harnessing quantum mechanical effects such as tunneling, superposition, and entanglement. The Quantum Artificial Intelligence Laboratory (QuAIL) at NASA Ames Research Center is the space agency's primary facility for conducting research and development in quantum information sciences. QuAIL conducts fundamental research in quantum physics but also explores how best to exploit and apply this disruptive technology to enable NASA missions in aeronautics, Earth and space sciences, and space exploration. At the same time, machine learning has become a major focus in computer science and captured the imagination of the public as a panacea to myriad big data problems. In this talk, we will discuss how classical machine learning can take advantage of quantum computing to significantly improve its effectiveness. Although we illustrate this concept on a quantum annealer, other quantum platforms could be used as well. If explored fully and implemented efficiently, quantum machine learning could greatly accelerate a wide range of tasks leading to new technologies and discoveries that will significantly change the way we solve real-world problems.

  12. Development of a change management system

    NASA Technical Reports Server (NTRS)

    Parks, Cathy Bonifas

    1993-01-01

    The complexity and interdependence of software on a computer system can create a situation where a solution to one problem causes failures in dependent software. In the computer industry, software problems arise and are often solved with 'quick and dirty' solutions. But in implementing these solutions, documentation about the solution or user notification of changes is often overlooked, and new problems are frequently introduced because of insufficient review or testing. These problems increase when numerous heterogeneous systems are involved. Because of this situation, a change management system plays an integral part in the maintenance of any multisystem computing environment. At the NASA Ames Advanced Computational Facility (ACF), the Online Change Management System (OCMS) was designed and developed to manage the changes being applied to its multivendor computing environment. This paper documents the research, design, and modifications that went into the development of this change management system (CMS).

  13. A secure file manager for UNIX

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeVries, R.G.

    1990-12-31

    The development of a secure file management system for a UNIX-based computer facility with supercomputers and workstations is described. Specifically, UNIX in its usual form does not address: (1) operation which would satisfy rigorous security requirements; (2) online space management in an environment where total data demands would be many times the actual online capacity; (3) making the file management system part of a computer network in which users of any computer in the local network could retrieve data generated on any other computer in the network. The characteristics of UNIX can be exploited to develop a portable, secure file manager which would operate on computer systems ranging from workstations to supercomputers. Implementation considerations making unusual use of UNIX features, rather than requiring extensive internal system changes, are described, and implementation using the Cray Research Inc. UNICOS operating system is outlined.

  14. Optimization of knowledge-based systems and expert system building tools

    NASA Technical Reports Server (NTRS)

    Yasuda, Phyllis; Mckellar, Donald

    1993-01-01

    The objectives of the NASA-AMES Cooperative Agreement were to investigate, develop, and evaluate, via test cases, the system parameters and processing algorithms that constrain the overall performance of the Information Sciences Division's Artificial Intelligence Research Facility. Written reports covering various aspects of the grant were submitted to the co-investigators for the grant. Research studies concentrated on the field of artificial intelligence knowledge-based systems technology. Activities included the following areas: (1) AI training classes; (2) merging optical and digital processing; (3) science experiment remote coaching; (4) SSF data management system tests; (5) computer integrated documentation project; (6) conservation of design knowledge project; (7) project management calendar and reporting system; (8) automation and robotics technology assessment; (9) advanced computer architectures and operating systems; and (10) honors program.

  15. Physics through the 1990s: Scientific interfaces and technological applications

    NASA Technical Reports Server (NTRS)

    1986-01-01

    The volume examines the scientific interfaces and technological applications of physics. Twelve areas are dealt with: biological physics-biophysics, the brain, and theoretical biology; the physics-chemistry interface-instrumentation, surfaces, neutron and synchrotron radiation, polymers, organic electronic materials; materials science; geophysics-tectonics, the atmosphere and oceans, planets, drilling and seismic exploration, and remote sensing; computational physics-complex systems and applications in basic research; mathematics-field theory and chaos; microelectronics-integrated circuits, miniaturization, future trends; optical information technologies-fiber optics and photonics; instrumentation; physics applications to energy needs and the environment; national security-devices, weapons, and arms control; medical physics-radiology, ultrasonics, NMR, and photonics. An executive summary and many chapters contain recommendations regarding funding, education, industry participation, small-group university research and large facility programs, government agency programs, and computer database needs.

  16. National Wind Technology Center Provides Dual Axis Resonant Blade Testing

    ScienceCinema

    Felker, Fort

    2018-01-16

    NREL's Structural Testing Laboratory at the National Wind Technology Center (NWTC) provides experimental laboratories, computer facilities for analytical work, space for assembling components and turbines for atmospheric testing as well as office space for industry researchers. Fort Felker, center director at the NWTC, discusses NREL's state-of-the-art structural testing capabilities and shows a flapwise and edgewise blade test in progress.

  17. Library Technology and Architecture; Report of a Conference Held at the Harvard Graduate School of Education, February 9, 1967.

    ERIC Educational Resources Information Center

    Harvard Univ., Cambridge, MA. Graduate School of Education.

    The purpose of the conference was to investigate the implications of new technologies for library architecture and to use the findings in planning a new Library Research Facility for the Harvard Graduate School of Education. The first half of this document consists of reports prepared by six consultants on such topics as microforms, computers,…

  18. HPC Access Using KVM over IP

    DTIC Science & Technology

    2007-06-08

    Lightwave VDE/200 KVM-over-Fiber (Keyboard, Video and Mouse) devices installed throughout the TARDEC campus. Implementation of this system required...development effort through the pursuit of an Army-funded Phase-II Small Business Innovative Research (SBIR) effort with IP Video Systems (formerly known as...visualization capabilities of a DoD High-Performance Computing facility, many advanced features are necessary. TARDEC-HPC’s SBIR with IP Video Systems

  19. Division of Biological and Medical Research annual report, 1979. [Lead abstract]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosenthal, M.W.

    1979-01-01

    Separate abstracts were prepared for 14 of the 20 sections included in this progress report. The other 6 sections include: introductory statements by the division director; descriptions of the animal, computer, electron microscope, and radiation support facilities; a listing of the educational activities, divisional seminars, and oral presentations by staff members; and divisional staff publications. An author index to the report is included. (ERB)

  20. Water-Cooled Data Center Packs More Power Per Rack | Poster

    Cancer.gov

    By Frank Blanchard and Ken Michaels, Staff Writers Behind each tall, black computer rack in the data center at the Advanced Technology Research Facility (ATRF) is something both strangely familiar and oddly out of place: It looks like a radiator. The back door of each cabinet is gridded with the coils of the Liebert cooling system, which circulates chilled water to remove heat

  1. National Wind Technology Center Provides Dual Axis Resonant Blade Testing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Felker, Fort

    2013-11-13

    NREL's Structural Testing Laboratory at the National Wind Technology Center (NWTC) provides experimental laboratories, computer facilities for analytical work, space for assembling components and turbines for atmospheric testing as well as office space for industry researchers. Fort Felker, center director at the NWTC, discusses NREL's state-of-the-art structural testing capabilities and shows a flapwise and edgewise blade test in progress.

  2. A random-key encoded harmony search approach for energy-efficient production scheduling with shared resources

    NASA Astrophysics Data System (ADS)

    Garcia-Santiago, C. A.; Del Ser, J.; Upton, C.; Quilligan, F.; Gil-Lopez, S.; Salcedo-Sanz, S.

    2015-11-01

    When seeking near-optimal solutions for complex scheduling problems, meta-heuristics demonstrate good performance with affordable computational effort. This has resulted in a gravitation towards these approaches when researching industrial use-cases such as energy-efficient production planning. However, much of the previous research makes assumptions about softer constraints that affect planning strategies and about how human planners interact with the algorithm in a live production environment. This article describes a job-shop problem that focuses on minimizing energy consumption across a production facility of shared resources. The application scenario is based on real facilities made available by the Irish Center for Manufacturing Research. The formulated problem is tackled via harmony search heuristics with random-key encoding. Simulation results are compared to a genetic algorithm, a simulated annealing approach and first-come-first-served scheduling. The superior performance obtained by the proposed scheduler paves the way towards its practical implementation over industrial production chains.
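    The random-key encoding mentioned in this abstract can be sketched briefly: each candidate solution is a vector of continuous keys, one per job, and the schedule is recovered by sorting jobs in ascending key order. This lets a continuous meta-heuristic such as harmony search explore a discrete permutation space without repair operators. The job count and key values below are illustrative, not taken from the study.

```python
# Decode a vector of continuous random keys into a job permutation:
# jobs are sequenced in ascending order of their key values, so any
# real-valued vector maps to a valid schedule. Illustrative values only.

def decode_random_keys(keys):
    """Return job indices ordered by ascending key value."""
    return sorted(range(len(keys)), key=lambda i: keys[i])

# Four jobs, one key each (e.g., drawn uniformly from [0, 1)).
keys = [0.62, 0.11, 0.87, 0.34]
print(decode_random_keys(keys))  # [1, 3, 0, 2]
```

    Because every real-valued vector decodes to a feasible sequence, the meta-heuristic can perturb keys freely and never produces an invalid schedule.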

  3. Integrated instrumentation & computation environment for GRACE

    NASA Astrophysics Data System (ADS)

    Dhekne, P. S.

    2002-03-01

    The project GRACE (Gamma Ray Astrophysics with Coordinated Experiments) aims at setting up a state-of-the-art Gamma Ray Observatory at Mt. Abu, Rajasthan for undertaking comprehensive scientific exploration over a wide spectral window (10's keV - 100's TeV) from a single location through 4 coordinated experiments. The cumulative data collection rate of all the telescopes is expected to be about 1 GB/hr, necessitating innovations in the data management environment. The real-time data acquisition and control as well as off-line data processing, analysis and visualization environments of these systems are based on the use of cutting-edge and affordable technologies in the fields of computers, communications and the Internet. We propose to provide a single, unified environment by seamless integration of instrumentation and computations by taking advantage of the recent advancements in Web based technologies. This new environment will allow researchers better access to facilities, improve resource utilization and enhance collaborations by having identical environments for online as well as offline usage of this facility from any location. We present here a proposed implementation strategy for a platform independent web-based system that supplements automated functions with video-guided interactive and collaborative remote viewing, remote control through virtual instrumentation console, remote acquisition of telescope data, data analysis, data visualization and active imaging system. This end-to-end web-based solution will enhance collaboration among researchers at the national and international level for undertaking scientific studies, using the telescope systems of the GRACE project.

  4. ISCR Annual Report: Fiscal Year 2004

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGraw, J R

    2005-03-03

    Large-scale scientific computation and all of the disciplines that support and help to validate it have been placed at the focus of Lawrence Livermore National Laboratory (LLNL) by the Advanced Simulation and Computing (ASC) program of the National Nuclear Security Administration (NNSA) and the Scientific Discovery through Advanced Computing (SciDAC) initiative of the Office of Science of the Department of Energy (DOE). The maturation of computational simulation as a tool of scientific and engineering research is underscored in the November 2004 statement of the Secretary of Energy that, ''high performance computing is the backbone of the nation's science and technology enterprise''. LLNL operates several of the world's most powerful computers--including today's single most powerful--and has undertaken some of the largest and most compute-intensive simulations ever performed. Ultrascale simulation has been identified as one of the highest priorities in DOE's facilities planning for the next two decades. However, computers at architectural extremes are notoriously difficult to use efficiently. Furthermore, each successful terascale simulation only points out the need for much better ways of interacting with the resulting avalanche of data. Advances in scientific computing research have, therefore, never been more vital to LLNL's core missions than at present. Computational science is evolving so rapidly along every one of its research fronts that to remain on the leading edge, LLNL must engage researchers at many academic centers of excellence. In Fiscal Year 2004, the Institute for Scientific Computing Research (ISCR) served as one of LLNL's main bridges to the academic community with a program of collaborative subcontracts, visiting faculty, student internships, workshops, and an active seminar series.
The ISCR identifies researchers from the academic community for computer science and computational science collaborations with LLNL and hosts them for short- and long-term visits with the aim of encouraging long-term academic research agendas that address LLNL's research priorities. Through such collaborations, ideas and software flow in both directions, and LLNL cultivates its future workforce. The Institute strives to be LLNL's ''eyes and ears'' in the computer and information sciences, keeping the Laboratory aware of and connected to important external advances. It also attempts to be the ''feet and hands'' that carry those advances into the Laboratory and incorporate them into practice. ISCR research participants are integrated into LLNL's Computing and Applied Research (CAR) Department, especially into its Center for Applied Scientific Computing (CASC). In turn, these organizations address computational challenges arising throughout the rest of the Laboratory. Administratively, the ISCR flourishes under LLNL's University Relations Program (URP). Together with the other five institutes of the URP, it navigates a course that allows LLNL to benefit from academic exchanges while preserving national security. While it is difficult to operate an academic-like research enterprise within the context of a national security laboratory, the results declare the challenges well met and worth the continued effort.

  5. HyPEP FY06 Report: Models and Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DOE report

    2006-09-01

    The Department of Energy envisions the next generation very high-temperature gas-cooled reactor (VHTR) as a single-purpose or dual-purpose facility that produces hydrogen and electricity. The Ministry of Science and Technology (MOST) of the Republic of Korea also selected the VHTR for the Nuclear Hydrogen Development and Demonstration (NHDD) Project. This research project aims at developing a user-friendly program for evaluating and optimizing cycle efficiencies of producing hydrogen and electricity in a Very-High-Temperature Reactor (VHTR). Systems for producing electricity and hydrogen are complex, and the calculations associated with optimizing these systems are intensive, involving a large number of operating parameter variations and many different system configurations. This research project will produce the HyPEP computer model, which is specifically designed to be an easy-to-use and fast running tool for evaluating nuclear hydrogen and electricity production facilities. The model accommodates flexible system layouts, and its cost models will enable HyPEP to be well-suited for system optimization. Specific activities of this research are designed to develop the HyPEP model into a working tool, including (a) identifying major systems and components for modeling, (b) establishing system operating parameters and calculation scope, (c) establishing the overall calculation scheme, (d) developing component models, (e) developing cost and optimization models, and (f) verifying and validating the program. Once the HyPEP model is fully developed and validated, it will be used to execute calculations on candidate system configurations. The FY-06 report includes a description of reference designs, methods used in this study, and models and computational strategies developed for the first year effort. Results from computer codes such as HYSYS and GASS/PASS-H, used by Idaho National Laboratory and Argonne National Laboratory respectively, will be benchmarked against HyPEP results in the following years.

  6. What happened after the evaluation?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bennett, C L

    1999-03-12

    An ergonomics program, including ergonomic computer workstation evaluations, at a research and development facility was assessed three years after formal implementation. As part of the assessment, 53 employees who had been subjects of computer workstation evaluations were interviewed. The documented reports (ergonomic evaluation forms) of the ergonomic evaluations were used in the process of selecting the interview subjects. The evaluation forms also provided information about the aspects of the computer workstation that were discussed and recommended as part of the evaluation, although the amount of detail and completeness of the forms varied. Although the results were mixed and reflective of the multivariate psychosocial factors affecting employees working in a large organization, the findings led to recommendations for improvement of the program.

  7. Experience with Ada on the F-18 High Alpha Research Vehicle Flight Test Program

    NASA Technical Reports Server (NTRS)

    Regenie, Victoria A.; Earls, Michael; Le, Jeanette; Thomson, Michael

    1992-01-01

    Considerable experience was acquired with Ada at the NASA Dryden Flight Research Facility during the on-going High Alpha Technology Program. In this program, an F-18 aircraft was highly modified by the addition of thrust-vectoring vanes to the airframe. In addition, substantial alteration was made in the original quadruplex flight control system. The result is the High Alpha Research Vehicle. An additional research flight control computer was incorporated in each of the four channels. Software for the research flight control computer was written in Ada. To date, six releases of this software have been flown. This paper provides a detailed description of the modifications to the research flight control system. Efficient ground-testing of the software was accomplished by using simulations that used Ada for portions of their software. These simulations are also described. Modifying and transferring the Ada flight software to the software simulation configuration has allowed evaluation of this language. This paper also discusses such significant issues in using Ada as portability, modifiability, and testability as well as documentation requirements.

  8. Experience with Ada on the F-18 High Alpha Research Vehicle flight test program

    NASA Technical Reports Server (NTRS)

    Regenie, Victoria A.; Earls, Michael; Le, Jeanette; Thomson, Michael

    1994-01-01

    Considerable experience has been acquired with Ada at the NASA Dryden Flight Research Facility during the on-going High Alpha Technology Program. In this program, an F-18 aircraft has been highly modified by the addition of thrust-vectoring vanes to the airframe. In addition, substantial alteration was made in the original quadruplex flight control system. The result is the High Alpha Research Vehicle. An additional research flight control computer was incorporated in each of the four channels. Software for the research flight control computer was written in Ada. To date, six releases of this software have been flown. This paper provides a detailed description of the modifications to the research flight control system. Efficient ground-testing of the software was accomplished by using simulations that used Ada for portions of their software. These simulations are also described. Modifying and transferring the Ada flight software to the software simulation configuration has allowed evaluation of this language. This paper also discusses such significant issues in using Ada as portability, modifiability, and testability as well as documentation requirements.

  9. Diffraction studies applicable to 60-foot microwave research facilities

    NASA Technical Reports Server (NTRS)

    Schmidt, R. F.

    1973-01-01

    The principal features of this document are the analysis of a large dual-reflector antenna system by vector Kirchhoff theory, the evaluation of subreflector aperture-blocking, determination of the diffraction and blockage effects of a subreflector mounting structure, and an estimate of strut-blockage effects. Most of the computations are for a frequency of 15.3 GHz, and were carried out using the IBM 360/91 and 360/95 systems at Goddard Space Flight Center. The FORTRAN 4 computer program used to perform the computations is of a general and modular type so that various system parameters such as frequency, eccentricity, diameter, focal-length, etc. can be varied at will. The parameters of the 60-foot NRL Ku-band installation at Waldorf, Maryland, were entered into the program for purposes of this report. Similar calculations could be performed for the NELC installation at La Posta, California, the NASA Wallops Station facility in Virginia, and other antenna systems, by a simple change in IBM control cards. A comparison is made between secondary radiation patterns of the NRL antenna measured by DOD Satellite and those obtained by analytical/numerical methods at a frequency of 7.3 GHz.

  10. INTEGRATION OF PANDA WORKLOAD MANAGEMENT SYSTEM WITH SUPERCOMPUTERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De, K; Jha, S; Maeno, T

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250000 cores with a peak performance of 0.3+ petaFLOPS, the next LHC data-taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF), the supercomputer at the National Research Center Kurchatov Institute, IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms. We will present our current accomplishments in running PanDA WMS at supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facility's infrastructure for High Energy and Nuclear Physics, as well as other data-intensive science applications, such as bioinformatics and astro-particle physics.

  11. Research on Key Technologies of Cloud Computing

    NASA Astrophysics Data System (ADS)

    Zhang, Shufen; Yan, Hongcan; Chen, Xuebin

    With the development of multi-core processors, virtualization, distributed storage, broadband Internet and automatic management, a new computing mode named cloud computing has emerged. It distributes computation tasks over a resource pool consisting of massive numbers of computers, so application systems can obtain computing power, storage space and software services according to their demand. It can concentrate all the computing resources and manage them automatically through software, without human intervention. This frees application providers from tedious details and lets them concentrate on their business, which favors innovation and reduces cost. The ultimate goal of cloud computing is to provide computation, services and applications as a public utility, so that people can use computing resources just as they use water, electricity, gas and telephone service. Currently, the understanding of cloud computing is still developing and changing, and cloud computing has no unanimous definition. This paper describes the three main service forms of cloud computing (SaaS, PaaS, IaaS), compares the definitions of cloud computing given by Google, Amazon, IBM and other companies, summarizes the basic characteristics of cloud computing, and emphasizes key technologies such as data storage, data management, virtualization and the programming model.

  12. Analysis of Student Satisfaction Toward Quality of Service Facility

    NASA Astrophysics Data System (ADS)

    Napitupulu, D.; Rahim, R.; Abdullah, D.; Setiawan, MI; Abdillah, LA; Ahmar, AS; Simarmata, J.; Hidayat, R.; Nurdiyanto, H.; Pranolo, A.

    2018-01-01

    The rapid development of higher education has given rise to tight competition between public universities and private colleges. XYZ University realized that winning this competition requires continuous quality improvement, including in the quality of its existing service facilities. Quality service facilities are believed to support the success of learning activities and to improve user satisfaction. This study aims to determine the extent to which the quality of service facilities affects user satisfaction. The research method used is a survey-based questionnaire measuring perception and expectation. The results showed a gap between the perceptions and expectations of the respondents, with a negative value for each item. This means the service facilities at XYZ University do not currently meet the expectations of their users. The three service facilities with the lowest perception-based indices are the laboratories (2.56), computers and multimedia (2.63), and the wifi network (2.99). The correlation between satisfaction and the quality of service facilities is 0.725, indicating a strong positive relationship. The quality of service facilities explains 52.5% (0.525) of the variance in student satisfaction. The study provides recommendations for improving the quality of service facilities at XYZ University.
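    The two effect-size figures in this abstract are consistent with each other: the coefficient of determination is the square of the correlation coefficient, so r = 0.725 yields roughly 0.525, i.e. 52.5% of variance explained. A one-line check:

```python
# The coefficient of determination (R-squared) is the square of the
# Pearson correlation r. With the reported r = 0.725 between
# service-facility quality and student satisfaction:
r = 0.725
r_squared = r ** 2
print(round(r_squared, 4))  # 0.5256, matching the reported 52.5%
```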

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nikolic, R J

    This month's issue has the following articles: (1) Dawn of a New Era of Scientific Discovery - Commentary by Edward I. Moses; (2) At the Frontiers of Fundamental Science Research - Collaborators from national laboratories, universities, and international organizations are using the National Ignition Facility to probe key fundamental science questions; (3) Livermore Responds to Crisis in Post-Earthquake Japan - More than 70 Laboratory scientists provided round-the-clock expertise in radionuclide analysis and atmospheric dispersion modeling as part of the nation's support to Japan following the March 2011 earthquake and nuclear accident; (4) A Comprehensive Resource for Modeling, Simulation, and Experiments - A new Web-based resource called MIDAS is a central repository for material properties, experimental data, and computer models; and (5) Finding Data Needles in Gigabit Haystacks - Livermore computer scientists have developed a novel computer architecture based on 'persistent' memory to ease data-intensive computations.

  14. Active Flow Control in an Aggressive Transonic Diffuser

    NASA Astrophysics Data System (ADS)

    Skinner, Ryan W.; Jansen, Kenneth E.

    2017-11-01

    A diffuser exchanges upstream kinetic energy for higher downstream static pressure by increasing duct cross-sectional area. The resulting stream-wise and span-wise pressure gradients promote extensive separation in many diffuser configurations. The present computational work evaluates active flow control strategies for separation control in an asymmetric, aggressive diffuser of rectangular cross-section at inlet Mach 0.7 and Re 2.19M. Corner suction is used to suppress secondary flows, and steady/unsteady tangential blowing controls separation on both the single ramped face and the opposite flat face. We explore results from both Spalart-Allmaras RANS and DDES turbulence modeling frameworks; the former is found to miss key physics of the flow control mechanisms. Simulated baseline, steady, and unsteady blowing performance is validated against experimental data. Funding was provided by Northrop Grumman Corporation, and this research used resources of the Argonne Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC02-06CH11357.

  15. Development of computer-based analytical tool for assessing physical protection system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mardhi, Alim, E-mail: alim-m@batan.go.id; Chulalongkorn University, Faculty of Engineering, Nuclear Engineering Department, 254 Phayathai Road, Pathumwan, Bangkok Thailand. 10330; Pengvanich, Phongphaeth, E-mail: ppengvan@gmail.com

    Assessment of physical protection system effectiveness is a priority for ensuring optimum protection against unlawful acts at a nuclear facility, such as unauthorized removal of nuclear materials and sabotage of the facility itself. Since an assessment based on real exercise scenarios is costly and time-consuming, a computer-based analytical tool can offer a solution for analyzing likely threat scenarios. Several tools are currently available for immediate use, such as EASI and SAPE; however, for our research purposes it is more suitable to have a tool that can be customized and enhanced further. In this work, we have developed a computer-based analytical tool that uses a network methodology to model adversary paths. The inputs are the multiple security elements used to evaluate the effectiveness of the system's detection, delay, and response. The tool can identify the most critical path and quantify the probability of system effectiveness as a performance measure.
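    The network approach this abstract describes can be illustrated as a shortest-path computation: if each path segment carries an independent detection probability, the adversary's most critical path is the one maximizing the probability of evading detection, which can be found by minimizing the sum of -log(1 - p) edge weights. The facility layout, node names, and probabilities below are hypothetical, not taken from the tool described.

```python
import heapq
from math import log, exp

def most_critical_path(edges, start, target):
    """Dijkstra search for the adversary path with the highest
    evasion probability. edges maps node -> [(neighbor, p_detect)].
    Minimizing sum(-log(1 - p)) maximizes prod(1 - p)."""
    heap = [(0.0, start, [start])]
    settled = {}
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == target:
            return exp(-cost), path  # convert log-cost back to a probability
        if node in settled and settled[node] <= cost:
            continue
        settled[node] = cost
        for nxt, p in edges.get(node, []):
            heapq.heappush(heap, (cost - log(1.0 - p), nxt, path + [nxt]))
    return 0.0, []

# Hypothetical layout: two routes from the fence to the vault.
layout = {
    "fence": [("yard", 0.3), ("dock", 0.1)],
    "yard": [("building", 0.5)],
    "dock": [("building", 0.6)],
    "building": [("vault", 0.2)],
}
prob, path = most_critical_path(layout, "fence", "vault")
print(path, round(prob, 3))  # the dock route evades with p = 0.288
```

    A full EASI-style assessment would also weigh delay times against response-force timing; this sketch covers only the detection component of the path analysis.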

  16. Use of Computed Tomography for Characterizing Materials Grown Terrestrially and in Microgravity

    NASA Technical Reports Server (NTRS)

    Gillies, Donald C.; Engel, H. P.

    2001-01-01

    The purpose behind this work is to provide NASA Principal Investigators (PIs) rapid information, nondestructively, about their samples. This information will be in the form of density values throughout the samples, especially within slices 1 mm high. With correct interpretation and good calibration, these values will enable the PI to obtain macro chemical compositional analysis for his/her samples. Alternatively, the technique will provide information about the porosity level and its distribution within the sample. Experience gained with a NASA Microgravity Research Division-sponsored Advanced Technology Development (ATD) project on this topic has brought the technique to a level of maturity at which it has become a viable characterization tool for many of the Materials Science Pls, but with equipment that could never be supported within their own facilities. The existing computed tomography (CT) facility at NASA's Kennedy Space Center (KSC) is ideally situated to furnish information rapidly and conveniently to PIs, particularly immediately before and after flight missions.

  17. Study of launch site processing and facilities for future launch vehicles

    NASA Astrophysics Data System (ADS)

    Shaffer, Rex

    1995-03-01

    The purpose of this research is to provide innovative and creative approaches to assess the impact to the Kennedy Space Center and other launch sites for a range of candidate manned and unmanned space transportation systems. The general scope of the research includes the engineering activities, analyses, and evaluations defined in the four tasks below: (1) development of innovative approaches and computer aided tools; (2) operations analyses of launch vehicle concepts and designs; (3) assessment of ground operations impacts; and (4) development of methodologies to identify promising technologies.

  18. Development of ROTC (Reserve Officers’ Training Corps) Data Sets and Evaluation of Their Usefulness for Officer Longitudinal Research Data Base

    DTIC Science & Technology

    1987-08-01

    Two ROTC/OLRDB data sets result from this effort. They reside at the National Institutes of Health (NIH) computer facility. They were both built and...59Z) 1985 8.326 3.836 (46%) Total 31,967 18,617 (58%) Research use of these data sets would benefit from further documentation for some data which...to the existing files, there would appear to be significant benefit from the inclusion of additional years of OLRDB data with the newly formed ROTC

  19. Study of launch site processing and facilities for future launch vehicles

    NASA Technical Reports Server (NTRS)

    Shaffer, Rex

    1995-01-01

    The purpose of this research is to provide innovative and creative approaches to assess the impact to the Kennedy Space Center and other launch sites for a range of candidate manned and unmanned space transportation systems. The general scope of the research includes the engineering activities, analyses, and evaluations defined in the four tasks below: (1) development of innovative approaches and computer aided tools; (2) operations analyses of launch vehicle concepts and designs; (3) assessment of ground operations impacts; and (4) development of methodologies to identify promising technologies.

  20. An Investigation of the Relative Safety of Alternative Navigational System Designs for the New Sunshine Skyway Bridge: A CAORF (Computer Aided Operations Research Facility) Simulation.

    DTIC Science & Technology

    1985-09-01

    physical states of the operator, such as poor health or fatigue; and (3) workload, stress and time...with respect to the display format used, e.g., graphic or digital, and the specific...navigation systems investigated were very similar to the types of...The research provided a predicted area of danger format superimposed on a display providing exact ownship position information

  1. MAP-oriented research in the People's Republic of China

    NASA Technical Reports Server (NTRS)

    Lu, D.

    1985-01-01

    A brief accounting of MAP-oriented research in the People's Republic of China is given. A stratosphere balloon launching facility and its capabilities are reviewed. Observations of the stratospheric aerosols with a balloon-borne aerosol counter were made. Long term monitoring of stratospheric aerosols induced by volcanic eruptions is carried out with a ruby lidar. The main parameters of an ST radar system are given. The ionospheric D region is investigated with the method of ionospheric absorption, and photochemical modeling and radiation parameterization of the middle atmosphere are performed.

  2. Basic energy sciences: Summary of accomplishments

    NASA Astrophysics Data System (ADS)

    1990-05-01

    For more than four decades, the Department of Energy, including its predecessor agencies, has supported a program of basic research in nuclear- and energy-related sciences, known as Basic Energy Sciences. The purpose of the program is to explore fundamental phenomena, create scientific knowledge, and provide unique user facilities necessary for conducting basic research. Its technical interests span the range of scientific disciplines: physical and biological sciences, geological sciences, engineering, mathematics, and computer sciences. Its products and facilities are essential to technology development in many of the more applied areas of the Department's energy, science, and national defense missions. The accomplishments of Basic Energy Sciences research are numerous and significant. Not only have they contributed to Departmental missions, but they have also aided significantly the development of technologies which now serve modern society daily in business, industry, science, and medicine. In a series of stories, this report highlights 22 accomplishments, selected because of their particularly noteworthy contributions to modern society. A full accounting of all the accomplishments would be voluminous. Detailed documentation of the research results can be found in many thousands of articles published in peer-reviewed technical literature.

  3. Basic Energy Sciences: Summary of Accomplishments

    DOE R&D Accomplishments Database

    1990-05-01

    For more than four decades, the Department of Energy, including its predecessor agencies, has supported a program of basic research in nuclear- and energy-related sciences, known as Basic Energy Sciences. The purpose of the program is to explore fundamental phenomena, create scientific knowledge, and provide unique user facilities necessary for conducting basic research. Its technical interests span the range of scientific disciplines: physical and biological sciences, geological sciences, engineering, mathematics, and computer sciences. Its products and facilities are essential to technology development in many of the more applied areas of the Department's energy, science, and national defense missions. The accomplishments of Basic Energy Sciences research are numerous and significant. Not only have they contributed to Departmental missions, but they have also aided significantly the development of technologies which now serve modern society daily in business, industry, science, and medicine. In a series of stories, this report highlights 22 accomplishments, selected because of their particularly noteworthy contributions to modern society. A full accounting of all the accomplishments would be voluminous. Detailed documentation of the research results can be found in many thousands of articles published in peer-reviewed technical literature.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The Computing and Communications (C) Division is responsible for the Laboratory's Integrated Computing Network (ICN) as well as Laboratory-wide communications. Our computing network, used by 8,000 people distributed throughout the nation, constitutes one of the most powerful scientific computing facilities in the world. In addition to the stable production environment of the ICN, we have taken a leadership role in high-performance computing and have established the Advanced Computing Laboratory (ACL), the site of research on experimental, massively parallel computers; high-speed communication networks; distributed computing; and a broad variety of advanced applications. The computational resources available in the ACL are of the type needed to solve problems critical to national needs, the so-called "Grand Challenge" problems. The purpose of this publication is to inform our clients of our strategic and operating plans in these important areas. We review major accomplishments since late 1990 and describe our strategic planning goals and specific projects that will guide our operations over the next few years. Our mission statement, planning considerations, and management policies and practices are also included.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The Computing and Communications (C) Division is responsible for the Laboratory's Integrated Computing Network (ICN) as well as Laboratory-wide communications. Our computing network, used by 8,000 people distributed throughout the nation, constitutes one of the most powerful scientific computing facilities in the world. In addition to the stable production environment of the ICN, we have taken a leadership role in high-performance computing and have established the Advanced Computing Laboratory (ACL), the site of research on experimental, massively parallel computers; high-speed communication networks; distributed computing; and a broad variety of advanced applications. The computational resources available in the ACL are of the type needed to solve problems critical to national needs, the so-called "Grand Challenge" problems. The purpose of this publication is to inform our clients of our strategic and operating plans in these important areas. We review major accomplishments since late 1990 and describe our strategic planning goals and specific projects that will guide our operations over the next few years. Our mission statement, planning considerations, and management policies and practices are also included.

  6. The Ames Power Monitoring System

    NASA Technical Reports Server (NTRS)

    Osetinsky, Leonid; Wang, David

    2003-01-01

    The Ames Power Monitoring System (APMS) is a centralized system of power meters, computer hardware, and special-purpose software that collects and stores electrical power data from various facilities at Ames Research Center (ARC). This system is needed because of the large and varying nature of the overall ARC power demand, which has been observed to range from 20 to 200 MW. Large portions of peak demand can be attributed to only three wind tunnels (60, 180, and 100 MW, respectively). The APMS helps ARC avoid or minimize costly demand charges by enabling wind-tunnel operators, test engineers, and the power manager to monitor total demand for the center in real time. These persons receive the information they need to manage and schedule energy-intensive research in advance and to adjust loads in real time to ensure that the overall maximum allowable demand is not exceeded. The APMS (see figure) includes a server computer running the Windows NT operating system and can, in principle, include an unlimited number of power meters and client computers. As configured at the time of reporting the information for this article, the APMS includes more than 40 power meters monitoring all the major research facilities, plus 15 Windows-based client personal computers that display real-time and historical data to users via graphical user interfaces (GUIs). The power meters and client computers communicate with the server using Transmission Control Protocol/Internet Protocol (TCP/IP) on Ethernet networks, variously, through dedicated fiber-optic cables or through the pre-existing ARC local-area network (ARCLAN). The APMS has enabled ARC to achieve significant savings ($1.2 million in 2001) in the cost of power and electric energy by helping personnel to maintain total demand below monthly allowable levels, to manage the overall power factor to avoid low-power-factor penalties, and to use historical system data to identify opportunities for additional energy savings. The APMS also provides power engineers and electricians with the information they need to plan modifications in advance and perform day-to-day maintenance of the ARC electric-power distribution system.
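The record describes meters reporting readings to a central server that tracks total demand against an allowable ceiling. The core bookkeeping can be sketched as below; the class of thresholds, facility names, and the 90% warning fraction are illustrative assumptions, not values taken from the APMS itself.

```python
# Sketch of demand aggregation of the kind the APMS performs: each meter
# reports an instantaneous reading in MW; the server sums readings and
# flags when the total approaches the allowable demand ceiling.
# Threshold values and facility names here are hypothetical.

MAX_DEMAND_MW = 200.0   # assumed allowable ceiling, per the 20-200 MW range cited
ALERT_FRACTION = 0.9    # warn at 90% of the ceiling (illustrative choice)

def total_demand(readings_mw):
    """Sum instantaneous readings (MW) from all meters."""
    return sum(readings_mw.values())

def demand_status(readings_mw, max_mw=MAX_DEMAND_MW, alert=ALERT_FRACTION):
    """Return 'ok', 'warning', or 'over' for the current total demand."""
    total = total_demand(readings_mw)
    if total > max_mw:
        return "over"
    if total > alert * max_mw:
        return "warning"
    return "ok"

readings = {"wind_tunnel_a": 95.0, "wind_tunnel_b": 60.0, "site_base_load": 30.0}
print(demand_status(readings))  # 185 MW against a 200 MW ceiling -> "warning"
```

In the real system the readings would arrive over TCP/IP from the meters; here they are supplied as a plain dictionary to keep the sketch self-contained.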

  7. PREFACE: 15th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT2013)

    NASA Astrophysics Data System (ADS)

    Wang, Jianxiong

    2014-06-01

    This volume of Journal of Physics: Conference Series is dedicated to scientific contributions presented at the 15th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2013), which took place on 16-21 May 2013 at the Institute of High Energy Physics, Chinese Academy of Sciences, Beijing, China. The workshop series brings together computer science researchers and practitioners, and researchers from particle physics and related fields, to explore and confront the boundaries of computing, automatic data analysis, and theoretical calculation techniques. This year's edition of the workshop brought together over 120 participants from all over the world. Eighteen invited speakers presented key topics ranging from the universe in the computer and computing in the earth sciences to multivariate data analysis, automated computation in quantum field theory, and computing and data analysis challenges in many fields. Over 70 other talks and posters presented state-of-the-art developments in the areas of the workshop's three tracks: Computing Technologies, Data Analysis Algorithms and Tools, and Computational Techniques in Theoretical Physics. The round-table discussions on open source, knowledge sharing, and scientific collaboration stimulated reflection on these issues in the respective areas. ACAT 2013 was generously sponsored by the Chinese Academy of Sciences (CAS), the National Natural Science Foundation of China (NSFC), Brookhaven National Laboratory in the USA (BNL), Peking University (PKU), the Theoretical Physics Center for Science Facilities of CAS (TPCSF-CAS), and Sugon. We would like to thank all the participants for their scientific contributions and for their enthusiastic participation in all of the workshop's activities. Further information on ACAT 2013 can be found at http://acat2013.ihep.ac.cn. Professor Jianxiong Wang, Institute of High Energy Physics, Chinese Academy of Sciences. Details of committees and sponsors are available in the PDF.

  8. Kentucky DOE EPSCoR Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grulke, Eric; Stencel, John

    2011-09-13

    The KY DOE EPSCoR Program supports two research clusters. The Materials Cluster uses unique equipment and computational methods that involve research expertise at the University of Kentucky and the University of Louisville. This team determines the physical, chemical, and mechanical properties of nanostructured materials and examines the dominant mechanisms involved in the formation of new self-assembled nanostructures. State-of-the-art parallel computational methods and algorithms are used to overcome current limitations of processing that otherwise are restricted to small system sizes and short times. The team also focuses on developing and applying advanced microtechnology fabrication techniques and the application of microelectromechanical systems (MEMS) for creating new materials, novel microdevices, and integrated microsensors. The second research cluster concentrates on High Energy and Nuclear Physics. It connects research and educational activities at the University of Kentucky, Eastern Kentucky University, and national DOE research laboratories. Its vision is to establish world-class research status dedicated to experimental and theoretical investigations in strong interaction physics. The research provides a forum, facilities, and support for scientists to interact and collaborate in subatomic physics research. The program enables increased student involvement in fundamental physics research through the establishment of graduate fellowships and collaborative work.

  9. Physics through the 1990s: An overview

    NASA Technical Reports Server (NTRS)

    1986-01-01

    The volume details the interaction of physics and society and presents a short summary of progress in the major fields of physics, along with a summary of the other seven volumes of the Physics through the 1990s series. Issues and recommended policy changes are described regarding funding, education, industry participation, small-group university research and large-facility programs, government agency programs, and computer database needs. Three supplements report in detail on international issues in physics (the US position in the field; international cooperation and competition, especially on large projects; freedom for scientists; free flow of information; and education of foreign students), the education and supply of physicists (the changes in US physics education, employment and manpower, and demographics of the field), and the organization and support of physics (government, university, and industry research; facilities; national laboratories; and decision making). An executive summary contains recommendations for maintaining excellence in physics. A glossary of scientific terms is appended.

  10. Medical Image Analysis Facility

    NASA Technical Reports Server (NTRS)

    1978-01-01

    To improve the quality of photos sent to Earth by unmanned spacecraft, NASA's Jet Propulsion Laboratory (JPL) developed a computerized image enhancement process that brings out detail not visible in the basic photo. JPL is now applying this technology to biomedical research in its Medical Image Analysis Facility, which employs computer enhancement techniques to analyze x-ray films of internal organs, such as the heart and lung. A major objective is study of the effects of stress on persons with heart disease. In animal tests, computerized image processing is being used to study coronary artery lesions and the degree to which they reduce arterial blood flow when stress is applied. The photos illustrate the enhancement process. The upper picture is an x-ray photo in which the artery (dotted line) is barely discernible; in the post-enhancement photo at right, the whole artery and the lesions along its wall are clearly visible. The Medical Image Analysis Facility offers a faster means of studying the effects of complex coronary lesions in humans, and the research now being conducted on animals is expected to have important application to diagnosis and treatment of human coronary disease. Other uses of the facility's image processing capability include analysis of muscle biopsy and pap smear specimens, and study of the microscopic structure of fibroprotein in the human lung. Working with JPL on experiments are NASA's Ames Research Center, the University of Southern California School of Medicine, and Rancho Los Amigos Hospital, Downey, California.

  11. HEPCloud, a New Paradigm for HEP Facilities: CMS Amazon Web Services Investigation

    DOE PAGES

    Holzman, Burt; Bauerdick, Lothar A. T.; Bockelman, Brian; ...

    2017-09-29

    Historically, high energy physics computing has been performed on large purpose-built computing systems. These began as single-site compute facilities, but have evolved into the distributed computing grids used today. Recently, there has been an exponential increase in the capacity and capability of commercial clouds. Cloud resources are highly virtualized and intended to be able to be flexibly deployed for a variety of computing tasks. There is a growing interest among the cloud providers to demonstrate the capability to perform large-scale scientific computing. In this paper, we discuss results from the CMS experiment using the Fermilab HEPCloud facility, which utilized both local Fermilab resources and virtual machines in the Amazon Web Services Elastic Compute Cloud. We discuss the planning, technical challenges, and lessons learned involved in performing physics workflows on a large-scale set of virtualized resources. Additionally, we will discuss the economics and operational efficiencies when executing workflows both in the cloud and on dedicated resources.

  12. HEPCloud, a New Paradigm for HEP Facilities: CMS Amazon Web Services Investigation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holzman, Burt; Bauerdick, Lothar A. T.; Bockelman, Brian

    Historically, high energy physics computing has been performed on large purpose-built computing systems. These began as single-site compute facilities, but have evolved into the distributed computing grids used today. Recently, there has been an exponential increase in the capacity and capability of commercial clouds. Cloud resources are highly virtualized and intended to be able to be flexibly deployed for a variety of computing tasks. There is a growing interest among the cloud providers to demonstrate the capability to perform large-scale scientific computing. In this paper, we discuss results from the CMS experiment using the Fermilab HEPCloud facility, which utilized both local Fermilab resources and virtual machines in the Amazon Web Services Elastic Compute Cloud. We discuss the planning, technical challenges, and lessons learned involved in performing physics workflows on a large-scale set of virtualized resources. Additionally, we will discuss the economics and operational efficiencies when executing workflows both in the cloud and on dedicated resources.

  13. Revision history aware repositories of computational models of biological systems.

    PubMed

    Miller, Andrew K; Yu, Tommy; Britten, Randall; Cooling, Mike T; Lawson, James; Cowan, Dougal; Garny, Alan; Halstead, Matt D B; Hunter, Peter J; Nickerson, David P; Nunns, Geo; Wimalaratne, Sarala M; Nielsen, Poul M F

    2011-01-14

    Building repositories of computational models of biological systems ensures that published models are available for both education and further research, and can provide a source of smaller, previously verified models to integrate into a larger model. One problem with earlier repositories has been the limitations in facilities to record the revision history of models. Often, these facilities are limited to a linear series of versions which were deposited in the repository. This is problematic for several reasons. Firstly, there are many instances in the history of biological systems modelling where an 'ancestral' model is modified by different groups to create many different models. With a linear series of versions, if the changes made to one model are merged into another model, the merge appears as a single item in the history. This hides useful revision history information, and also makes further merges much more difficult, as there is no record of which changes have or have not already been merged. In addition, a long series of individual changes made outside of the repository are also all merged into a single revision when they are put back into the repository, making it difficult to separate out individual changes. Furthermore, many earlier repositories only retain the revision history of individual files, rather than of a group of files. This is an important limitation to overcome, because some types of models, such as CellML 1.1 models, can be developed as a collection of modules, each in a separate file. The need for revision history is widely recognised for computer software, and a lot of work has gone into developing version control systems and distributed version control systems (DVCSs) for tracking the revision history. However, to date, there has been no published research on how DVCSs can be applied to repositories of computational models of biological systems. 
We have extended the Physiome Model Repository software to be fully revision history aware, by building it on top of Mercurial, an existing DVCS. We have demonstrated the utility of this approach, when used in conjunction with the model composition facilities in CellML, to build and understand more complex models. We have also demonstrated the ability of the repository software to present version history to casual users over the web, and to highlight specific versions which are likely to be useful to users. Providing facilities for maintaining and using revision history information is an important part of building a useful repository of computational models, as this information is useful both for understanding the source of and justification for parts of a model, and to facilitate automated processes such as merges. The availability of fully revision history aware repositories, and associated tools, will therefore be of significant benefit to the community.
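The abstract's central argument is that a revision DAG, as kept by a DVCS such as Mercurial, preserves merge ancestry that a linear version list collapses into a single opaque item. A toy sketch of that distinction (this is illustrative only, not Physiome Model Repository code):

```python
# Illustrative sketch: in a revision DAG a merge records both parents,
# so the history of an "ancestral" model modified by different groups
# and later merged remains fully traceable.

class Revision:
    def __init__(self, rev_id, parents=()):
        self.rev_id = rev_id
        self.parents = tuple(parents)

def ancestors(rev):
    """All revision ids reachable from `rev`, i.e. everything already merged in."""
    seen, stack = set(), [rev]
    while stack:
        r = stack.pop()
        if r.rev_id not in seen:
            seen.add(r.rev_id)
            stack.extend(r.parents)
    return seen

base = Revision("ancestral-model")
group_a = Revision("group-a-variant", [base])
group_b = Revision("group-b-variant", [base])
merged = Revision("merged-model", [group_a, group_b])  # two parents survive

print(sorted(ancestors(merged)))
```

Because `ancestors(merged)` contains both variants, a later merge can determine which changes have already been incorporated; in a linear version list that information is lost, which is exactly the limitation the paper describes.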

  14. The Data Acquisition and Control Systems of the Jet Noise Laboratory at the NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    Jansen, B. J., Jr.

    1998-01-01

    The features of the data acquisition and control systems of the NASA Langley Research Center's Jet Noise Laboratory are presented. The Jet Noise Laboratory is a facility that simulates realistic mixed-flow turbofan jet engine nozzle exhaust systems in simulated flight. The system is capable of acquiring data for a complete take-off assessment of noise and nozzle performance. This paper describes the development of an integrated system to control and measure the behavior of model jet nozzles featuring dual independent high-pressure combusting air streams with wind tunnel flow. The acquisition and control system is capable of simultaneous measurement of forces, moments, static and dynamic model pressures and temperatures, and jet noise. The design concepts for the coordination of the control computers and multiple data acquisition computers and instruments are discussed. The control system design and implementation are explained, describing the features, equipment, and the experiences of using a primarily personal-computer-based system. Areas for future development are examined.

  15. Application of pulsed multi-ion irradiations in radiation damage research: A stochastic cluster dynamics simulation study

    NASA Astrophysics Data System (ADS)

    Hoang, Tuan L.; Nazarov, Roman; Kang, Changwoo; Fan, Jiangyuan

    2018-07-01

    Under the multi-ion irradiation conditions present in accelerated material-testing facilities or fission/fusion nuclear reactors, the combined effects of atomic displacements with radiation products may induce complex synergies in the structural materials. However, limited access to multi-ion irradiation facilities and the lack of computational models capable of simulating the evolution of complex defects and their synergies make it difficult to understand the actual physical processes taking place in the materials under these extreme conditions. In this paper, we propose the application of pulsed single/dual-beam irradiation as a replacement for expensive steady-state triple-beam irradiation to study radiation damage in materials under multi-ion irradiation.
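Stochastic cluster dynamics of the kind named in the title tracks discrete defect populations with a kinetic Monte Carlo scheme rather than mean-field rate equations. A deliberately minimal sketch, assuming a single defect species with a constant production rate and first-order sink absorption (the real model in the paper handles many coupled cluster species):

```python
# Toy Gillespie (SSA) sketch in the spirit of stochastic cluster dynamics:
# point defects are produced at a dose-rate-dependent rate and absorbed at
# sinks. Rates and the single-species assumption are illustrative only.

import random

def simulate_defects(production_rate, sink_rate, t_end, seed=0):
    """Return the defect count at time t_end using Gillespie's algorithm."""
    rng = random.Random(seed)
    t, n = 0.0, 0
    while True:
        total_rate = production_rate + sink_rate * n
        t += rng.expovariate(total_rate)  # time to the next event
        if t >= t_end:
            return n
        if rng.random() < production_rate / total_rate:
            n += 1   # a displacement event creates a defect
        else:
            n -= 1   # a defect is absorbed at a sink

# long-time mean defect count approaches production_rate / sink_rate
print(simulate_defects(production_rate=100.0, sink_rate=1.0, t_end=50.0))
```

Pulsed irradiation would be modeled by switching `production_rate` on and off over the pulse schedule, which is the kind of comparison (pulsed single/dual beam versus steady triple beam) the paper proposes.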

  16. National Synchrotron Light Source annual report 1991

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hulbert, S.L.; Lazarz, N.M.

    1992-04-01

    This report discusses the following research conducted at the NSLS: atomic and molecular science; energy dispersive diffraction; lithography, microscopy, and tomography; nuclear physics; UV photoemission and surface science; x-ray absorption spectroscopy; x-ray scattering and crystallography; x-ray topography; a workshop on surface structure; a workshop on electronic and chemical phenomena at surfaces; a workshop on imaging; UV FEL machine reviews; VUV machine operations; VUV beamline operations; VUV storage ring parameters; x-ray machine operations; x-ray beamline operations; x-ray storage ring parameters; the superconducting x-ray lithography source; SXLS storage ring parameters; the accelerator test facility; the proposed UV-FEL user facility at the NSLS; global orbit feedback systems; and the NSLS computer system.

  17. Remote Internet access to advanced analytical facilities: a new approach with Web-based services.

    PubMed

    Sherry, N; Qin, J; Fuller, M Suominen; Xie, Y; Mola, O; Bauer, M; McIntyre, N S; Maxwell, D; Liu, D; Matias, E; Armstrong, C

    2012-09-04

    Over the past decade, the increasing availability of the World Wide Web has held out the possibility that the efficiency of scientific measurements could be enhanced in cases where experiments were being conducted at distant facilities. Examples of early successes have included X-ray diffraction (XRD) experimental measurements of protein crystal structures at synchrotrons and access to scanning electron microscopy (SEM) and NMR facilities by users from institutions that do not possess such advanced capabilities. Experimental control, visual contact, and receipt of results have used some form of X forwarding and/or VNC (virtual network computing) software that transfers the screen image of a server at the experimental site to that of the users' home site. A more recent development is a web services platform called Science Studio that provides teams of scientists with secure links to experiments at one or more advanced research facilities. The software provides a widely distributed team with a set of controls and screens to operate, observe, and record essential parts of the experiment. As well, Science Studio provides high-speed network access to computing resources to process the large data sets that are often involved in complex experiments. The simple web browser and the rapid transfer of experimental data to a processing site allow efficient use of the facility and assist decision making during the acquisition of the experimental results. The software provides users with a comprehensive overview and record of all parts of the experimental process. A prototype network is described involving X-ray beamlines at two different synchrotrons and an SEM facility. An online parallel processing facility has been developed that analyzes the data in near-real time using stream processing. Science Studio can be expanded to include many other analytical applications, providing teams of users with rapid access to processed results along with the means for detailed discussion of their significance.
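The near-real-time analysis described above reduces data chunk by chunk as acquisition proceeds, instead of waiting for the run to finish. A minimal generator-pipeline sketch of that pattern (function names are illustrative, not Science Studio's API; the per-chunk mean stands in for whatever reduction the beamline actually applies):

```python
# Minimal stream-processing sketch: chunks of experimental data are
# reduced as they arrive, so results are available while acquisition
# continues. All names here are hypothetical.

def acquire(chunks):
    """Stand-in for data arriving over the network, chunk by chunk."""
    for chunk in chunks:
        yield chunk

def reduce_stream(stream):
    """Yield the mean of each chunk as soon as that chunk is available."""
    for chunk in stream:
        yield sum(chunk) / len(chunk)

detector_chunks = [[1.0, 3.0], [2.0, 4.0], [10.0, 20.0]]
print(list(reduce_stream(acquire(detector_chunks))))  # [2.0, 3.0, 15.0]
```

Because both stages are generators, nothing is buffered beyond the current chunk, which is what makes decision making during acquisition possible.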

  18. NACA Computers Take Readings From Manometer Boards

    NASA Image and Video Library

    1949-02-21

    Female computers at the National Advisory Committee for Aeronautics (NACA) Lewis Flight Propulsion Laboratory copy pressure readings from rows of manometers below the 18- by 18-inch Supersonic Wind Tunnel. The computers obtained test data from the manometers and other instruments, made the initial computations, and plotted the information graphically. Based on these computations, the researchers planned their next test or summarized their findings in a report. Manometers were mercury-filled glass tubes that were used to indicate different pressure levels from inside the test facility or from the test article. Manometers look and function very similarly to thermometers. Dozens of pressure sensing instruments were installed for each test. Each was connected to a manometer tube located inside the control room. The mercury inside the manometer rose and fell with the pressure levels. The dark mercury can be seen in this photograph at different levels within the tubes. Since this activity was dynamic, it was necessary to note the levels at given points during the test. This was done using both computer notations and photography.
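The conversions the human computers performed by hand follow directly from the hydrostatic relation for a liquid column, ΔP = ρgh. A small sketch using standard constants (the constants are textbook values, not figures from the NACA records):

```python
# Converting a mercury manometer column height to pressure via the
# hydrostatic formula dP = rho * g * h. Constants are standard values.

RHO_MERCURY = 13595.1  # kg/m^3, density of mercury at 0 deg C
G = 9.80665            # m/s^2, standard gravity

def manometer_pressure_pa(height_m):
    """Gauge pressure (Pa) indicated by a mercury column of given height."""
    return RHO_MERCURY * G * height_m

# A 760 mm column corresponds to one standard atmosphere.
print(round(manometer_pressure_pa(0.760)))  # 101325
```

Reading dozens of such columns, applying this conversion, and plotting the results was exactly the work the computers carried out between test runs.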

  19. EC93-41094-4

    NASA Image and Video Library

    1993-05-18

    A NASA F/A-18, specially modified to test the newest and most advanced system technologies, on its first research flight on May 21, 1993, at NASA's Dryden Flight Research Facility, Edwards, California. Flown by Dryden in a multi-year, joint NASA/DOD/industry program, the F/A-18 former Navy fighter was modified into a unique Systems Research Aircraft (SRA) to investigate a host of new technologies in the areas of flight controls, airdata sensing and advanced computing. The primary goal of the SRA program was to validate through flight research cutting-edge technologies which could benefit future aircraft and spacecraft by improving efficiency and performance, reducing weight and complexity, with a resultant reduction on development and operational costs.

  20. A Study of the Organization and Search of Bibliographic Holdings Records in On-Line Computer Systems: Phase I. Final Report.

    ERIC Educational Resources Information Center

    Cunningham, Jay L.; And Others

    This report presents the results of the initial phase of the File Organization Project, a study which focuses upon the on-line maintenance and search of the library's catalog holdings record. The focus of the project is to develop a facility for research and experimentation with the many issues of on-line file organization and search. The first…

  1. European Scientific Notes. Volume 35, Number 12,

    DTIC Science & Technology

    1981-12-31

    ...been redesigned to work with the Intel 8085 microprocessor... A. Osorio, which was organized some 3 years ago and contains about half of the... attempt to derive a set of invariants upon which virtually speaker-invariant... MOISE is based on the Intel 8085A microprocessor, and... FACILITY software interface; a Research Signal Processor (RSP) using reduced computational complexity algorithms for... It has been IBM International's...

  2. The Lister Hill National Center for Biomedical Communications.

    PubMed

    Smith, K A

    1994-09-01

    On August 3, 1968, the Joint Resolution of the Congress established the program and construction of the Lister Hill National Center for Biomedical Communications. The facility, dedicated in 1980, contains the latest in computer and communications technologies. The history, program requirements, construction management, and general planning are discussed, including technical issues regarding cabling, systems functions, heating, ventilation, and air conditioning (HVAC) systems, fire suppression, and research and development laboratories, among others.

  3. Kennedy Space Center exercise program

    NASA Technical Reports Server (NTRS)

    Hoffman, Cristy

    1993-01-01

    The Kennedy Space Center (KSC) Fitness Program began in Feb. 1993. The program is managed by the Biomedical Operations and Research Office and operated by the Bionetics Corporation. The facilities and programs are offered to civil servants, all contractors, temporary duty assignment (TDY) participants, and retirees. All users must first have a medical clearance. A computer-generated check-in system is used to monitor participant usage. Various aspects of the program are discussed.

  4. Applied Operations Research: Operator's Assistant

    NASA Technical Reports Server (NTRS)

    Cole, Stuart K.

    2015-01-01

    NASA operates high-value critical equipment (HVCE) that requires troubleshooting, periodic maintenance, and continued monitoring by operations staff. The complexity of HVCE, and the volume of paper documentation required to maintain and troubleshoot it to assure continued mission success, is considerable. Training on new HVCE is commensurate with the need for equipment maintenance. The LaRC Research Directorate has undertaken proactive research to support operations staff by initiating the development and prototyping of an electronic, computer-based portable maintenance aid (Operator's Assistant). This research established a goal with multiple objectives, and a working prototype was developed. The research identified affordable solutions and constraints; demonstrated the use of commercial off-the-shelf software, the US Coast Guard maintenance solution, and the NASA Procedure Representation Language; and identified computer system strategies by which these demonstrations and capabilities support the operator and maintenance. The results were validated against measures of effectiveness and overall proved the Operator's Assistant a substantial training and capability-sustainment tool. The research indicated that the OA could be deployed operationally at the LaRC Compressor Station, with the expectation of satisfactory results and additional lessons learned prior to deployment at other LaRC Research Directorate facilities. The research also revealed projected cost and time savings.

  5. Automatic control study of the icing research tunnel refrigeration system

    NASA Technical Reports Server (NTRS)

    Kieffer, Arthur W.; Soeder, Ronald H.

    1991-01-01

    The Icing Research Tunnel (IRT) at the NASA Lewis Research Center is a subsonic, closed-return atmospheric tunnel. The tunnel includes a heat exchanger and a refrigeration plant to achieve the desired air temperature and a spray system to generate the type of icing conditions that would be encountered by aircraft. At the present time, the tunnel air temperature is controlled by manual adjustment of freon refrigerant flow control valves. An upgrade of this facility calls for these control valves to be adjusted by an automatic controller. The digital computer simulation of the IRT refrigeration plant and the automatic controller that was used in the simulation are discussed.
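
    As an illustration of the kind of closed-loop temperature control such a simulation studies, the sketch below drives a toy first-order thermal plant toward a temperature setpoint by adjusting a normalized valve command. Every constant here (gains, heat-leak coefficient, cooling authority) is invented for demonstration and is not taken from the IRT simulation.

    ```python
    # Minimal proportional control of a toy first-order thermal plant.
    # All plant constants and gains below are invented for illustration.

    def simulate_p_control(setpoint_c=-20.0, temp_c=15.0, steps=200,
                           dt=1.0, kp=0.5):
        """March the toy plant forward under proportional valve control."""
        for _ in range(steps):
            error = temp_c - setpoint_c             # positive when too warm
            valve = min(1.0, max(0.0, kp * error))  # normalized valve opening
            heat_leak = 0.05 * (15.0 - temp_c)      # ambient leak toward +15 C
            cooling = -2.0 * valve                  # refrigerant cooling effect
            temp_c += (heat_leak + cooling) * dt
        return temp_c
    ```

    Proportional-only control settles with a steady-state offset (here near -18.3 C rather than the -20 C setpoint), which is one reason a practical automatic controller would normally include integral action.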

  6. Astronomy and astrophysics for the 1980's. Volume 1 - Report of the Astronomy Survey Committee. Volume 2 - Reports of the Panels

    NASA Astrophysics Data System (ADS)

    Recommended priorities for astronomy and astrophysics in the 1980s are considered along with the frontiers of astrophysics, taking into account large-scale structure in the universe, the evolution of galaxies, violent events, the formation of stars and planets, solar and stellar activity, astronomy and the forces of nature, and planets, life, and intelligence. Approved, continuing, and previously recommended programs are related to the Space Telescope and the associated Space Telescope Science Institute, second-generation instrumentation for the Space Telescope, the Gamma Ray Observatory, facilities for the detection of solar neutrinos, and the Shuttle Infrared Telescope Facility. Attention is given to the prerequisites for new research initiatives, new programs, programs for study and development, high-energy astrophysics, radio astronomy, theoretical and laboratory astrophysics, data processing and computational facilities, organization and education, and ultraviolet, optical, and infrared astronomy.

  7. Expanding the Scope of High-Performance Computing Facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uram, Thomas D.; Papka, Michael E.

    The high-performance computing centers of the future will expand their roles as service providers, and as the machines scale up, so should the sizes of the communities they serve. National facilities must cultivate their users as much as they focus on operating machines reliably. The authors present five interrelated topic areas that are essential to expanding the value provided to those performing computational science.

  8. Calibration of Axisymmetric and Quasi-1D Solvers for High Enthalpy Nozzles

    NASA Technical Reports Server (NTRS)

    Papadopoulos, P. E.; Gochberg, L. A.; Tokarcik-Polsky, S.; Venkatapathy, E.; Deiwert, G. S.; Edwards, Thomas A. (Technical Monitor)

    1994-01-01

    The proposed paper will present a numerical investigation of the flow characteristics and boundary layer development in the nozzles of high enthalpy shock tunnel facilities used for hypersonic propulsion testing. The computed flow will be validated against existing experimental data; Pitot pressure data obtained at the entrance of the test cabin will be used to validate the numerical simulations. It is necessary to accurately model the facility nozzles in order to characterize the test article flow conditions. Initially the axisymmetric nozzle flow will be computed using a Navier-Stokes solver for a range of reservoir conditions. The calculated solutions will be compared and calibrated against available experimental data from the DLR HEG piston-driven shock tunnel and the 16-inch shock tunnel at NASA Ames Research Center. The Reynolds number at the throat is assumed to be high enough that the boundary layer is turbulent from that point downstream. Real gas effects will also be examined. In high Mach number facilities the boundary layer is thick, and attempts will be made to correlate the boundary layer displacement thickness. The displacement thickness correlation will be used to calibrate the quasi-1D codes NENZF and LSENS in order to provide fast and efficient tools for characterizing the facility nozzles. The calibrated quasi-1D codes will be used to study the effects of chemistry and of flow condition variations at the test section due to small variations in the driver gas conditions.
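
    Quasi-1D nozzle characterization of the kind described rests on relations like the isentropic area-Mach equation. The sketch below is a textbook perfect-gas illustration of that relation and its supersonic inversion by bisection; it is not the NENZF or LSENS code, which additionally treat finite-rate chemistry.

    ```python
    # Isentropic area-Mach relation for quasi-1D perfect-gas nozzle flow:
    #   A/A* = (1/M) * [(2/(g+1)) * (1 + (g-1)/2 * M^2)]^((g+1)/(2(g-1)))

    def area_ratio(mach, gamma=1.4):
        """A/A* for isentropic quasi-1D flow of a perfect gas."""
        term = (2.0 / (gamma + 1.0)) * (1.0 + 0.5 * (gamma - 1.0) * mach**2)
        return term ** ((gamma + 1.0) / (2.0 * (gamma - 1.0))) / mach

    def supersonic_mach(a_ratio, gamma=1.4, lo=1.0, hi=50.0, tol=1e-10):
        """Invert area_ratio on the supersonic branch by bisection."""
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            if area_ratio(mid, gamma) < a_ratio:
                lo = mid
            else:
                hi = mid
            if hi - lo < tol:
                break
        return 0.5 * (lo + hi)
    ```

    For example, an area ratio of 1.6875 at gamma = 1.4 corresponds to the well-known exit Mach number of 2.0.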

  9. Description of a dual fail operational redundant strapdown inertial measurement unit for integrated avionics systems research

    NASA Technical Reports Server (NTRS)

    Bryant, W. H.; Morrell, F. R.

    1981-01-01

    An experimental redundant strapdown inertial measurement unit (RSDIMU) is developed as a link to satisfy safety and reliability considerations in the integrated avionics concept. The unit includes four two degree-of-freedom tuned rotor gyros, and four accelerometers in a skewed and separable semioctahedral array. These sensors are coupled to four microprocessors which compensate sensor errors. These microprocessors are interfaced with two flight computers which process failure detection, isolation, redundancy management, and general flight control/navigation algorithms. Since the RSDIMU is a developmental unit, it is imperative that the flight computers provide special visibility and facility in algorithm modification.

  10. Issues and recommendations associated with distributed computation and data management systems for the space sciences

    NASA Technical Reports Server (NTRS)

    1986-01-01

    The primary purpose of the report is to explore management approaches and technology developments for computation and data management systems designed to meet future needs in the space sciences. The report builds on work presented in previous reports on the solar-terrestrial and planetary sciences, broadening the outlook to all of the space sciences and considering policy aspects related to coordination between data centers, missions, and ongoing research activities, because it is perceived that the rapid growth of data and the wide geographic distribution of relevant facilities will present especially troublesome problems for data archiving, distribution, and analysis.

  11. Computer usage among nurses in rural health-care facilities in South Africa: obstacles and challenges.

    PubMed

    Asah, Flora

    2013-04-01

    This study discusses factors inhibiting computer usage for work-related tasks among computer-literate professional nurses within rural health-care facilities in South Africa. In the past two decades computer literacy courses have not been part of the nursing curricula; computer courses are offered by the State Information Technology Agency. Despite this, there seems to be limited use of computers by professional nurses in the rural context. Focus group interviews were held with 40 professional nurses from three government hospitals in northern KwaZulu-Natal. Contributing factors were found to be a lack of information technology infrastructure, restricted access to computers, and deficits in technical and nursing management support. The physical location of computers within the health-care facilities and a lack of relevant software emerged as specific obstacles to usage. Provision of continuous and active support from nursing management could positively influence computer usage among professional nurses. A closer integration of information technology and computer literacy skills into existing nursing curricula would foster a positive attitude towards computer usage through early exposure. Responses indicated that a change of mindset may be needed on the part of nursing management so that they begin to actively promote ready access to computers as a means of creating greater professionalism and collegiality. © 2011 Blackwell Publishing Ltd.

  12. Space Weather Model Testing And Validation At The Community Coordinated Modeling Center

    NASA Astrophysics Data System (ADS)

    Hesse, M.; Kuznetsova, M.; Rastaetter, L.; Falasca, A.; Keller, K.; Reitan, P.

    The Community Coordinated Modeling Center (CCMC) is a multi-agency partnership aimed at the creation of next-generation space weather models. The goal of the CCMC is to undertake the research and developmental work necessary to substantially increase the present-day modeling capability for space weather purposes, and to provide models for transition to the rapid prototyping centers at the space weather forecast centers. This goal requires close collaborations with and substantial involvement of the research community. The physical regions to be addressed by CCMC-related activities range from the solar atmosphere to the Earth's upper atmosphere. The CCMC is an integral part of NASA's Living With a Star initiative, of the National Space Weather Program Implementation Plan, and of the Department of Defense Space Weather Transition Plan. CCMC includes a facility at NASA Goddard Space Flight Center, as well as distributed computing facilities provided by the Air Force. CCMC also provides, to the research community, access to state-of-the-art space research models. In this paper we will provide updates on CCMC status, on current plans, research and development accomplishments and goals, and on the model testing and validation process undertaken as part of the CCMC mandate.

  13. UCSB FEL user-mode adaption project. Final report, 1 Jan 86-31 Dec 90

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jaccarino, V.

    1992-04-14

    This research, sponsored by the SDIO Biomedical and Materials Sciences FEL Program, held the following objectives: provide a facility in which in-house and outside user research in the materials and biological sciences can be carried out in the far infrared using the unique properties of the UCSB electrostatic accelerator-driven FEL; develop and implement new FEL concepts and FIR technology and encourage the transfer and application of this research; and train graduate students, postdoctoral researchers and technical personnel in varied aspects of scientific user disciplines, FEL science and FIR technology in a cooperative, interdisciplinary environment. In summary, a free electron laser facility has been developed which is operational from 200 GHz (6.6 cm-1) to 4.8 THz (160 cm-1), tunable under computer control and able to deliver kilowatts of millimeter-wave and far-infrared power. This facility has a well-equipped user lab that has been used to perform groundbreaking experiments in scientific areas as diverse as biophysics. Nine graduate students and postdoctoral researchers have been trained in the operation, use and application of these free-electron lasers.

  14. Fast laboratory-based micro-computed tomography for pore-scale research: Illustrative experiments and perspectives on the future

    NASA Astrophysics Data System (ADS)

    Bultreys, Tom; Boone, Marijn A.; Boone, Matthieu N.; De Schryver, Thomas; Masschaele, Bert; Van Hoorebeke, Luc; Cnudde, Veerle

    2016-09-01

    Over the past decade, the widespread implementation of laboratory-based X-ray micro-computed tomography (micro-CT) scanners has revolutionized both the experimental and numerical research on pore-scale transport in geological materials. The availability of these scanners has opened up the possibility to image a rock's pore space in 3D almost routinely to many researchers. While challenges do persist in this field, we address the next frontier in laboratory-based micro-CT scanning: in-situ, time-resolved imaging of dynamic processes. Extremely fast (even sub-second) micro-CT imaging has become possible at synchrotron facilities over the last few years; however, the restricted accessibility of synchrotrons limits the number of experiments which can be performed. The much smaller X-ray flux in laboratory-based systems bounds the time resolution which can be attained at these facilities. Nevertheless, progress is being made to improve the quality of measurements performed on the sub-minute time scale. We illustrate this by presenting cutting-edge pore-scale experiments visualizing two-phase flow and solute transport in real time with a lab-based environmental micro-CT set-up. To outline the current state of this young field and its relevance to pore-scale transport research, we critically examine its current bottlenecks and their possible solutions, both on the hardware and the software level. Further developments in laboratory-based, time-resolved imaging could prove greatly beneficial to our understanding of transport behavior in geological materials and to the improvement of pore-scale modeling by providing valuable validation.

  15. The JASMIN Cloud: specialised and hybrid to meet the needs of the Environmental Sciences Community

    NASA Astrophysics Data System (ADS)

    Kershaw, Philip; Lawrence, Bryan; Churchill, Jonathan; Pritchard, Matt

    2014-05-01

    Cloud computing provides enormous opportunities for the research community. The large public cloud providers offer near-limitless scaling capability. However, adapting cloud to scientific workloads is not without its problems. The commodity nature of public cloud infrastructure can be at odds with the specialist requirements of the research community, and issues such as trust, ownership of data, WAN bandwidth and costing models are additional barriers to more widespread adoption. Alongside the application of public cloud for scientific applications, a number of private cloud initiatives are underway in the research community, of which the JASMIN Cloud is one example. Here, cloud service models are effectively superimposed on more established services such as data centres, compute cluster facilities and Grids. These have the potential to deliver the specialist infrastructure needed for the science community coupled with the benefits of a cloud service model. The JASMIN facility based at the Rutherford Appleton Laboratory was established in 2012 to support the data analysis requirements of the climate and Earth Observation community. In its first year of operation, the 5PB of available storage capacity was filled and the hosted compute capability used extensively. JASMIN has modelled the concept of a centralised large-volume data analysis facility. Key characteristics have enabled success: peta-scale fast disk connected via low-latency networks to compute resources, and the use of virtualisation for effective management of the resources for a range of users. A second phase is now underway, funded through NERC's (Natural Environment Research Council) Big Data initiative. This will see significant expansion to the resources available, with an increase of disk-based storage to 12PB and an increase of compute capacity by a factor of ten to over 3000 processing cores.
This expansion is accompanied by a broadening in the scope for JASMIN, as a service available to the entire UK environmental science community. Experience with the first phase demonstrated the range of user needs. A trade-off is needed between access privileges to resources, flexibility of use and security. This has influenced the form and types of service under development for the new phase. JASMIN will deploy a specialised private cloud organised into "Managed" and "Unmanaged" components. In the Managed Cloud, users have direct access to the storage and compute resources for optimal performance but for reasons of security, via a more restrictive PaaS (Platform-as-a-Service) interface. The Unmanaged Cloud is deployed in an isolated part of the network but co-located with the rest of the infrastructure. This enables greater liberty to tenants - full IaaS (Infrastructure-as-a-Service) capability to provision customised infrastructure - whilst at the same time protecting more sensitive parts of the system from direct access using these elevated privileges. The private cloud will be augmented with cloud-bursting capability so that it can exploit the resources available from public clouds, making it effectively a hybrid solution. A single interface will overlay the functionality of both the private cloud and external interfaces to public cloud providers giving users the flexibility to migrate resources between infrastructures as requirements dictate.

  16. FOSS GIS on the GFZ HPC cluster: Towards a service-oriented Scientific Geocomputation Environment

    NASA Astrophysics Data System (ADS)

    Loewe, P.; Klump, J.; Thaler, J.

    2012-12-01

    High-performance compute clusters can be used as geocomputation workbenches. Their wealth of resources enables us to take on geocomputation tasks which exceed the limitations of smaller systems. These general capabilities can be harnessed via tools such as Geographic Information Systems (GIS), provided they are able to utilize the available cluster configuration/architecture and provide a sufficient degree of user friendliness to allow for wide application. While server-level computing is clearly not sufficient for the growing number of data- or computation-intensive tasks undertaken, these tasks do not come close to the requirements needed for access to "top shelf" national cluster facilities. Until recently, such geocomputation research was therefore effectively barred by lack of access to adequate resources. In this paper we report on the experiences gained by providing GRASS GIS as a software service on an HPC compute cluster at the German Research Centre for Geosciences using Platform Computing's Load Sharing Facility (LSF). GRASS GIS is the oldest and largest Free and Open Source Software (FOSS) GIS project. During ramp-up in 2011, multiple versions of GRASS GIS (v 6.4.2, 6.5 and 7.0) were installed on the HPC compute cluster, which currently consists of 234 nodes with 480 CPUs providing 3084 cores. Nineteen different processing queues with varying hardware capabilities and priorities are provided, allowing for fine-grained scheduling and load balancing. After successful initial testing, mechanisms were developed to deploy scripted geocomputation tasks onto dedicated processing queues. The mechanisms are based on earlier work by NETELER et al. (2008) and allow all 3084 cores to be used for GRASS-based geocomputation work. In practice, however, applications are limited to the fewer resources assigned to their respective queue.
    Applications of the new GIS functionality so far comprise hydrological analysis, remote sensing, and the generation of maps of simulated tsunamis in the Mediterranean Sea for the Tsunami Atlas of the FP-7 TRIDEC Project (www.tridec-online.eu). This has included the processing of complex problems requiring significant amounts of processing time, up to a full 20 CPU-days. This GRASS GIS-based service is provided as a research utility in the sense of "Software as a Service" (SaaS) and is a first step towards a GFZ corporate cloud service.
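
    The deployment mechanism described amounts to submitting batch scripts to named LSF queues. A minimal sketch of how such a submission command might be assembled follows; the queue name, core count, and script path are hypothetical, and only standard `bsub` options (`-q`, `-n`, `-J`, `-o`) are used.

    ```python
    # Hypothetical helper assembling an LSF `bsub` command line for a
    # scripted geocomputation task; queue and script names are invented.
    import shlex

    def build_bsub_command(script_path, queue="geocomp_short", n_cores=4,
                           job_name="grass_task"):
        """Return a shell-safe `bsub` command line for a batch script."""
        args = [
            "bsub",
            "-q", queue,                  # dedicated processing queue
            "-n", str(n_cores),           # cores requested on that queue
            "-J", job_name,               # job name, visible via bjobs
            "-o", f"{job_name}.%J.out",   # output log; %J = LSF job id
            script_path,                  # the batch script to run
        ]
        return " ".join(shlex.quote(a) for a in args)

    cmd = build_bsub_command("/data/jobs/flood_model.sh")
    # cmd -> 'bsub -q geocomp_short -n 4 -J grass_task -o grass_task.%J.out /data/jobs/flood_model.sh'
    ```

    Routing jobs to different queues then only requires changing the `queue` argument, mirroring the fine-grained scheduling the abstract describes.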

  17. MIP models for connected facility location: A theoretical and computational study☆

    PubMed Central

    Gollowitzer, Stefan; Ljubić, Ivana

    2011-01-01

    This article comprises the first theoretical and computational study on mixed integer programming (MIP) models for the connected facility location problem (ConFL). ConFL combines facility location and Steiner trees: given a set of customers, a set of potential facility locations and some inter-connection nodes, ConFL searches for the minimum-cost way of assigning each customer to exactly one open facility and connecting the open facilities via a Steiner tree; the sum of Steiner tree costs, facility opening costs and assignment costs is to be minimized. We model ConFL using seven compact and three mixed integer programming formulations of exponential size. We also show how to transform ConFL into the Steiner arborescence problem. A full hierarchy between the models is provided. For two exponential-size models we develop a branch-and-cut algorithm. An extensive computational study is based on two benchmark sets of randomly generated instances with up to 1300 nodes and 115,000 edges. We empirically compare the presented models with respect to the quality of the obtained bounds and the corresponding running time. We report optimal values for all but 16 instances, for which the obtained gaps are below 0.6%. PMID:25009366
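
    To make the three cost components concrete, here is a toy evaluation of a candidate ConFL solution; the tiny instance below is invented for illustration and is not one of the paper's benchmark instances.

    ```python
    # Evaluate a candidate ConFL solution as the sum of facility opening
    # costs, customer assignment costs, and Steiner tree edge costs.
    # The instance (f1, f2, c1..c3) is invented for illustration.

    def confl_cost(open_facilities, assignment, tree_edges,
                   opening_cost, assign_cost, edge_cost):
        """Sum the three cost components of a candidate ConFL solution."""
        total = sum(opening_cost[f] for f in open_facilities)
        total += sum(assign_cost[(c, f)] for c, f in assignment.items())
        total += sum(edge_cost[e] for e in tree_edges)
        return total

    opening = {"f1": 10, "f2": 8}
    assign_costs = {("c1", "f1"): 2, ("c2", "f1"): 3, ("c3", "f2"): 1}
    edge_costs = {("f1", "f2"): 4}

    cost = confl_cost({"f1", "f2"},
                      {"c1": "f1", "c2": "f1", "c3": "f2"},
                      [("f1", "f2")],
                      opening, assign_costs, edge_costs)
    # 10 + 8 (opening) + 2 + 3 + 1 (assignment) + 4 (tree edge) = 28
    ```

    A MIP model for ConFL searches over all such candidate solutions for the one minimizing exactly this objective.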

  18. Advancing Capabilities for Understanding the Earth System Through Intelligent Systems, the NSF Perspective

    NASA Astrophysics Data System (ADS)

    Gil, Y.; Zanzerkia, E. E.; Munoz-Avila, H.

    2015-12-01

    The National Science Foundation (NSF) Directorate for Geosciences (GEO) and Directorate for Computer and Information Science and Engineering (CISE) acknowledge the significant scientific challenges required to understand the fundamental processes of the Earth system, within the atmospheric and geospace, Earth, ocean and polar sciences, and across those boundaries. A broad view of the opportunities and directions for GEO is described in the report "Dynamic Earth: GEO Imperatives and Frontiers 2015-2020." Many of the aspects of geosciences research, highlighted both in this document and in other community grand challenges, pose novel problems for researchers in intelligent systems. Geosciences research will require solutions for data-intensive science, advanced computational capabilities, and transformative concepts for visualizing, using, analyzing and understanding geo-phenomena and data. Opportunities for the scientific community to engage in addressing these challenges are available and being developed through NSF's portfolio of investments and activities. The NSF-wide initiative, Cyberinfrastructure Framework for 21st Century Science and Engineering (CIF21), looks to accelerate research and education through new capabilities in data, computation, software and other aspects of cyberinfrastructure. EarthCube, a joint program between GEO and the Advanced Cyberinfrastructure Division, aims to create a well-connected and facile environment to share data and knowledge in an open, transparent, and inclusive manner, thus accelerating our ability to understand and predict the Earth system. EarthCube's mission opens an opportunity for collaborative research on novel information systems enhancing and supporting geosciences research efforts. NSF encourages true, collaborative partnerships between scientists in computer sciences and the geosciences to meet these challenges.

  19. Propulsion/flight control integration technology (PROFIT) software system definition

    NASA Technical Reports Server (NTRS)

    Carlin, C. M.; Hastings, W. J.

    1978-01-01

    The Propulsion/Flight Control Integration Technology (PROFIT) program is designed to develop a flying testbed dedicated to controls research. The control software for PROFIT is defined. Maximum flexibility, needed for long-term use of the flight facility, is achieved through a modular design. The Host program processes inputs from the telemetry uplink, aircraft central computer, cockpit computer control, and plant sensors to form an input database for use by the control algorithms. The control algorithms, programmed as application modules, process the input data to generate an output database. The Host program formats the data for output to the telemetry downlink, the cockpit computer control, and the control effectors. Two application modules are defined: the bill-of-materials F-100 engine control and the bill-of-materials F-15 inlet control.

  20. FermiLib v0.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MCCLEAN, JARROD; HANER, THOMAS; STEIGER, DAMIAN

    FermiLib is an open source software package designed to facilitate the development and testing of algorithms for simulations of fermionic systems on quantum computers. Fermionic simulations represent an important application of early quantum devices, with many potential high-value targets such as quantum chemistry for the development of new catalysts. This software strives to provide a link between the required domain expertise in specific fermionic applications and quantum computing, to enable more users to directly interface with, and develop for, these applications. It is an extensible Python library designed to interface with the high-performance quantum simulator ProjectQ, as well as application-specific software such as PSI4 from the domain of quantum chemistry. Such software is key to enabling effective user facilities in quantum computation research.
