Sample records for NASA Advanced Supercomputing

  1. NASA Advanced Supercomputing Facility Expansion

    NASA Technical Reports Server (NTRS)

    Thigpen, William W.

    2017-01-01

    The NASA Advanced Supercomputing (NAS) Division enables advances in high-end computing technologies and in modeling and simulation methods to tackle some of the toughest science and engineering challenges facing NASA today. The name "NAS" has long been associated with leadership and innovation throughout the high-end computing (HEC) community. We play a significant role in shaping HEC standards and paradigms, and provide leadership in the areas of large-scale InfiniBand fabrics, Lustre open-source filesystems, and hyperwall technologies. We provide an integrated high-end computing environment to accelerate NASA missions and make revolutionary advances in science. Pleiades, a petaflop-scale supercomputer, is used by scientists throughout the U.S. to support NASA missions, and is ranked among the most powerful systems in the world. One of our key focus areas is modeling and simulation to support NASA's real-world engineering applications and to make fundamental advances in modeling and simulation methods.

  2. NASA's supercomputing experience

    NASA Technical Reports Server (NTRS)

    Bailey, F. Ron

    1990-01-01

    A brief overview of NASA's recent experience in supercomputing is presented from two perspectives: early systems development and advanced supercomputing applications. NASA's role in supercomputing systems development is illustrated by discussion of activities carried out by the Numerical Aerodynamic Simulation Program. Current capabilities in advanced technology applications are illustrated with examples in turbulence physics, aerodynamics, aerothermodynamics, chemistry, and structural mechanics. Capabilities in science applications are illustrated by examples in astrophysics and atmospheric modeling. Future directions and NASA's new High Performance Computing Program are briefly discussed.

  3. NASA Advanced Supercomputing (NAS) User Services Group

    NASA Technical Reports Server (NTRS)

    Pandori, John; Hamilton, Chris; Niggley, C. E.; Parks, John W. (Technical Monitor)

    2002-01-01

    This viewgraph presentation provides an overview of NAS (NASA Advanced Supercomputing), its goals, and its mainframe computer assets. Also covered are its functions, including systems monitoring and technical support.

  4. Flux-Level Transit Injection Experiments with NASA Pleiades Supercomputer

    NASA Astrophysics Data System (ADS)

    Li, Jie; Burke, Christopher J.; Catanzarite, Joseph; Seader, Shawn; Haas, Michael R.; Batalha, Natalie; Henze, Christopher; Christiansen, Jessie; Kepler Project, NASA Advanced Supercomputing Division

    2016-06-01

    Flux-Level Transit Injection (FLTI) experiments are executed with NASA's Pleiades supercomputer for the Kepler Mission. The latest release (9.3, January 2016) of the Kepler Science Operations Center Pipeline is used in the FLTI experiments. Their purpose is to validate the Analytic Completeness Model (ACM), which can be computed for all Kepler target stars, thereby enabling exoplanet occurrence rate studies. Pleiades, a facility of NASA's Advanced Supercomputing Division, is one of the world's most powerful supercomputers and represents NASA's state-of-the-art technology. We discuss the details of implementing the FLTI experiments on the Pleiades supercomputer. For example, taking into account that ~16 injections are generated by one core of the Pleiades processors in an hour, the “shallow” FLTI experiment, in which ~2000 injections are required per target star, can be done for 16% of all Kepler target stars in about 200 hours. Stripping down the transit search to bare bones, i.e. only searching adjacent high/low periods at high/low pulse durations, makes the computationally intensive FLTI experiments affordable. The design of the FLTI experiments and the analysis of the resulting data are presented in “Validating an Analytic Completeness Model for Kepler Target Stars Based on Flux-level Transit Injection Experiments” by Catanzarite et al. (#2494058). Kepler was selected as the 10th mission of the Discovery Program. Funding for the Kepler Mission has been provided by the NASA Science Mission Directorate.
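    The throughput arithmetic quoted in this abstract can be checked directly. In the sketch below, only the per-core injection rate, the injections per star, and the 200-hour wall-clock budget come from the text; the total Kepler target count (roughly 200,000 stars) is an assumption added for illustration.

```python
# Sanity-check of the FLTI throughput figures quoted in the abstract.
# Assumed (NOT stated in the abstract): ~198,000 Kepler target stars.
INJECTIONS_PER_CORE_HOUR = 16        # from the abstract
INJECTIONS_PER_STAR = 2000           # "shallow" FLTI experiment
WALL_CLOCK_HOURS = 200               # quoted experiment duration
TOTAL_TARGETS = 198_000              # illustrative assumption

# Core-hours needed to process one target star:
core_hours_per_star = INJECTIONS_PER_STAR / INJECTIONS_PER_CORE_HOUR  # 125.0

# Cores required to cover 16% of the targets in the quoted wall-clock time:
stars_covered = 0.16 * TOTAL_TARGETS
cores_needed = stars_covered * core_hours_per_star / WALL_CLOCK_HOURS

print(f"{core_hours_per_star:.0f} core-hours per star")
print(f"~{cores_needed:,.0f} cores busy for {WALL_CLOCK_HOURS} h")
```

    Under these assumptions the experiment keeps on the order of 20,000 cores busy for the full 200 hours, which is consistent with running on a large slice of a Pleiades-class machine.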

  5. Scaling of data communications for an advanced supercomputer network

    NASA Technical Reports Server (NTRS)

    Levin, E.; Eaton, C. K.; Young, Bruce

    1986-01-01

    The goal of NASA's Numerical Aerodynamic Simulation (NAS) Program is to provide a powerful computational environment for advanced research and development in aeronautics and related disciplines. The present NAS system consists of a Cray 2 supercomputer connected by a data network to a large mass storage system, to sophisticated local graphics workstations and by remote communication to researchers throughout the United States. The program plan is to continue acquiring the most powerful supercomputers as they become available. The implications of a projected 20-fold increase in processing power on the data communications requirements are described.

  6. NASA's Pleiades Supercomputer Crunches Data For Groundbreaking Analysis and Visualizations

    NASA Image and Video Library

    2016-11-23

    The Pleiades supercomputer at NASA's Ames Research Center, recently named the 13th fastest computer in the world, provides scientists and researchers high-fidelity numerical modeling of complex systems and processes. By using detailed analyses and visualizations of large-scale data, Pleiades is helping to advance human knowledge and technology, from designing the next generation of aircraft and spacecraft to understanding the Earth's climate and the mysteries of our galaxy.

  7. Impact of the Columbia Supercomputer on NASA Space and Exploration Mission

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Kwak, Dochan; Kiris, Cetin; Lawrence, Scott

    2006-01-01

    NASA's 10,240-processor Columbia supercomputer gained worldwide recognition in 2004 for increasing the space agency's computing capability ten-fold, and enabling U.S. scientists and engineers to perform significant, breakthrough simulations. Columbia has amply demonstrated its capability to accelerate NASA's key missions, including space operations, exploration systems, science, and aeronautics. Columbia is part of an integrated high-end computing (HEC) environment comprised of massive storage and archive systems, high-speed networking, high-fidelity modeling and simulation tools, application performance optimization, and advanced data analysis and visualization. In this paper, we illustrate the impact Columbia is having on NASA's numerous space and exploration applications, such as the development of the Crew Exploration and Launch Vehicles (CEV/CLV), effects of long-duration human presence in space, and damage assessment and repair recommendations for remaining shuttle flights. We conclude by discussing HEC challenges that must be overcome to solve space-related science problems in the future.

  8. Distributed user services for supercomputers

    NASA Technical Reports Server (NTRS)

    Sowizral, Henry A.

    1989-01-01

    User-service operations at supercomputer facilities are examined. The question is whether a single, possibly distributed, user-services organization could be shared by NASA's supercomputer sites in support of a diverse, geographically dispersed, user community. A possible structure for such an organization is identified as well as some of the technologies needed in operating such an organization.

  9. Supercomputer networking for space science applications

    NASA Technical Reports Server (NTRS)

    Edelson, B. I.

    1992-01-01

    The initial design of a supercomputer network topology including the design of the communications nodes along with the communications interface hardware and software is covered. Several space science applications that are proposed experiments by GSFC and JPL for a supercomputer network using the NASA ACTS satellite are also reported.

  10. Discover Supercomputer 5

    NASA Image and Video Library

    2017-12-08

    Two rows of the “Discover” supercomputer at the NASA Center for Climate Simulation (NCCS) contain more than 4,000 computer processors. Discover has a total of nearly 15,000 processors. Credit: NASA/Pat Izzo. To learn more about NCCS, go to: www.nasa.gov/topics/earth/features/climate-sim-center.html NASA Goddard Space Flight Center is home to the nation's largest organization of combined scientists, engineers and technologists that build spacecraft, instruments and new technology to study the Earth, the sun, our solar system, and the universe.

  11. Discover Supercomputer 3

    NASA Image and Video Library

    2017-12-08

    The heart of the NASA Center for Climate Simulation (NCCS) is the “Discover” supercomputer. In 2009, NCCS added more than 8,000 computer processors to Discover, for a total of nearly 15,000 processors. Credit: NASA/Pat Izzo. To learn more about NCCS, go to: www.nasa.gov/topics/earth/features/climate-sim-center.html NASA Goddard Space Flight Center is home to the nation's largest organization of combined scientists, engineers and technologists that build spacecraft, instruments and new technology to study the Earth, the sun, our solar system, and the universe.

  12. Discover Supercomputer 2

    NASA Image and Video Library

    2017-12-08

    The heart of the NASA Center for Climate Simulation (NCCS) is the “Discover” supercomputer. In 2009, NCCS added more than 8,000 computer processors to Discover, for a total of nearly 15,000 processors. Credit: NASA/Pat Izzo. To learn more about NCCS, go to: www.nasa.gov/topics/earth/features/climate-sim-center.html NASA Goddard Space Flight Center is home to the nation's largest organization of combined scientists, engineers and technologists that build spacecraft, instruments and new technology to study the Earth, the sun, our solar system, and the universe.

  13. Discover Supercomputer 4

    NASA Image and Video Library

    2017-12-08

    This close-up view highlights one row—approximately 2,000 computer processors—of the “Discover” supercomputer at the NASA Center for Climate Simulation (NCCS). Discover has a total of nearly 15,000 processors. Credit: NASA/Pat Izzo. To learn more about NCCS, go to: www.nasa.gov/topics/earth/features/climate-sim-center.html NASA Goddard Space Flight Center is home to the nation's largest organization of combined scientists, engineers and technologists that build spacecraft, instruments and new technology to study the Earth, the sun, our solar system, and the universe.

  14. Discover Supercomputer 1

    NASA Image and Video Library

    2017-12-08

    The heart of the NASA Center for Climate Simulation (NCCS) is the “Discover” supercomputer. In 2009, NCCS added more than 8,000 computer processors to Discover, for a total of nearly 15,000 processors. Credit: NASA/Pat Izzo. To learn more about NCCS, go to: www.nasa.gov/topics/earth/features/climate-sim-center.html NASA Goddard Space Flight Center is home to the nation's largest organization of combined scientists, engineers and technologists that build spacecraft, instruments and new technology to study the Earth, the sun, our solar system, and the universe.

  15. Edison - A New Cray Supercomputer Advances Discovery at NERSC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dosanjh, Sudip; Parkinson, Dula; Yelick, Kathy

    2014-02-06

    When a supercomputing center installs a new system, users are invited to make heavy use of the computer as part of the rigorous testing. In this video, find out what top scientists have discovered using Edison, a Cray XC30 supercomputer, and how NERSC's newest supercomputer will accelerate their future research.

  16. Edison - A New Cray Supercomputer Advances Discovery at NERSC

    ScienceCinema

    Dosanjh, Sudip; Parkinson, Dula; Yelick, Kathy; Trebotich, David; Broughton, Jeff; Antypas, Katie; Lukic, Zarija; Borrill, Julian; Draney, Brent; Chen, Jackie

    2018-01-16

    When a supercomputing center installs a new system, users are invited to make heavy use of the computer as part of the rigorous testing. In this video, find out what top scientists have discovered using Edison, a Cray XC30 supercomputer, and how NERSC's newest supercomputer will accelerate their future research.

  17. Role of High-End Computing in Meeting NASA's Science and Engineering Challenges

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Tu, Eugene L.; Van Dalsem, William R.

    2006-01-01

    Two years ago, NASA was on the verge of dramatically increasing its HEC capability and capacity. With the 10,240-processor supercomputer, Columbia, now in production for 18 months, HEC has an even greater impact within the Agency, extending to partner institutions. Advanced science and engineering simulations in space exploration, shuttle operations, Earth sciences, and fundamental aeronautics research are occurring on Columbia, demonstrating its ability to accelerate NASA's exploration vision. This talk describes how the integrated production environment fostered at the NASA Advanced Supercomputing (NAS) facility at Ames Research Center is accelerating scientific discovery, achieving parametric analyses of multiple scenarios, and enhancing safety for NASA missions. We focus on Columbia's impact on two key engineering and science disciplines: aerospace and climate. We also discuss future mission challenges and plans for NASA's next-generation HEC environment.

  18. NASA's Climate in a Box: Desktop Supercomputing for Open Scientific Model Development

    NASA Astrophysics Data System (ADS)

    Wojcik, G. S.; Seablom, M. S.; Lee, T. J.; McConaughy, G. R.; Syed, R.; Oloso, A.; Kemp, E. M.; Greenseid, J.; Smith, R.

    2009-12-01

    NASA's High Performance Computing Portfolio, in cooperation with its Modeling, Analysis, and Prediction program, intends to make its climate and Earth science models more accessible to a larger community. A key goal of this effort is to open the model development and validation process to the scientific community at large, enabling a natural selection process that results in a more efficient scientific process. One obstacle to others using NASA models is the complexity of the models and the difficulty in learning how to use them. This situation applies not only to scientists who regularly use these models but also to non-typical users such as scientists from different domains, policy makers, and teachers. Another obstacle is that access to the high performance computing (HPC) accounts on which the models are implemented can be restrictive, with long wait times in job queues and delays caused by an arduous process of obtaining an account, especially for foreign nationals. This project explores the utility of desktop supercomputers in providing a complete ready-to-use toolkit of climate research products to investigators and on-demand access to an HPC system. One objective of this work is to pre-package NASA and NOAA models so that new users will not have to spend significant time porting the models. In addition, the prepackaged toolkit will include tools, such as workflow, visualization, social networking web sites, and analysis tools, to assist users in running the models and analyzing the data. The system architecture to be developed will allow for automatic code updates for each user and an effective means of dealing with the data that are generated. We plan to investigate several desktop systems, but our work to date has focused on a Cray CX1. Currently, we are investigating the potential capabilities of several non-traditional development environments. While most NASA and NOAA models are

  19. High Resolution Aerospace Applications using the NASA Columbia Supercomputer

    NASA Technical Reports Server (NTRS)

    Mavriplis, Dimitri J.; Aftosmis, Michael J.; Berger, Marsha

    2005-01-01

    This paper focuses on the parallel performance of two high-performance aerodynamic simulation packages on the newly installed NASA Columbia supercomputer. These packages include both a high-fidelity, unstructured, Reynolds-averaged Navier-Stokes solver, and a fully-automated inviscid flow package for cut-cell Cartesian grids. The complementary combination of these two simulation codes enables high-fidelity characterization of aerospace vehicle design performance over the entire flight envelope through extensive parametric analysis and detailed simulation of critical regions of the flight envelope. Both packages are industrial-level codes designed for complex geometry and incorporate customized multigrid solution algorithms. The performance of these codes on Columbia is examined using both MPI and OpenMP and using both the NUMAlink and InfiniBand interconnect fabrics. Numerical results demonstrate good scalability on up to 2016 CPUs using the NUMAlink4 interconnect, with measured computational rates in the vicinity of 3 TFLOP/s, while InfiniBand showed some performance degradation at high CPU counts, particularly with multigrid. Nonetheless, the results are encouraging enough to indicate that larger test cases using combined MPI/OpenMP communication should scale well on even more processors.

  20. Floating point arithmetic in future supercomputers

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Barton, John T.; Simon, Horst D.; Fouts, Martin J.

    1989-01-01

    Considerations in the floating-point design of a supercomputer are discussed. Particular attention is given to word size, hardware support for extended precision, format, and accuracy characteristics. These issues are discussed from the perspective of the Numerical Aerodynamic Simulation Systems Division at NASA Ames. The features believed to be most important for a future supercomputer floating-point design include: (1) a 64-bit IEEE floating-point format with 11 exponent bits, 52 mantissa bits, and one sign bit and (2) hardware support for reasonably fast double-precision arithmetic.
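    The 64-bit format the abstract recommends (one sign bit, 11 exponent bits, 52 mantissa bits) is exactly the IEEE 754 binary64 layout that later became universal, and it can be inspected directly. The sketch below, a minimal illustration not drawn from the paper, splits a Python float into those three fields.

```python
# Inspect the IEEE 754 double-precision bit layout described in the
# abstract: 1 sign bit, 11 exponent bits (bias 1023), 52 mantissa bits.
import struct

def ieee754_fields(x: float):
    """Return (sign, biased_exponent, mantissa) of a 64-bit float."""
    # Reinterpret the 8 bytes of a big-endian double as an unsigned int.
    (bits,) = struct.unpack(">Q", struct.pack(">d", x))
    sign = bits >> 63                    # 1 sign bit
    exponent = (bits >> 52) & 0x7FF      # 11 exponent bits
    mantissa = bits & ((1 << 52) - 1)    # 52 mantissa bits
    return sign, exponent, mantissa

# 1.0 = (-1)^0 * 1.0 * 2^(1023-1023): all fields except the bias are zero.
print(ieee754_fields(1.0))   # (0, 1023, 0)
print(ieee754_fields(-2.0))  # (1, 1024, 0)
```

    The exponent bias of 1023 follows from the 11-bit exponent field (2^10 - 1), matching the word-size arithmetic in the abstract.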

  1. Supercomputing in the Age of Discovering Superearths, Earths and Exoplanet Systems

    NASA Technical Reports Server (NTRS)

    Jenkins, Jon M.

    2015-01-01

    NASA's Kepler Mission was launched in March 2009 as NASA's first mission capable of finding Earth-size planets orbiting in the habitable zone of Sun-like stars, that range of distances for which liquid water would pool on the surface of a rocky planet. Kepler has discovered over 1000 planets and over 4600 candidates, many of them as small as the Earth. Today, Kepler's amazing success seems to be a fait accompli to those unfamiliar with her history. But twenty years ago, there were no planets known outside our solar system, and few people believed it was possible to detect tiny Earth-size planets orbiting other stars. Motivating NASA to select Kepler for launch required a confluence of the right detector technology, advances in signal processing and algorithms, and the power of supercomputing.

  2. Color graphics, interactive processing, and the supercomputer

    NASA Technical Reports Server (NTRS)

    Smith-Taylor, Rudeen

    1987-01-01

    The development of a common graphics environment for the NASA Langley Research Center user community and the integration of a supercomputer into this environment is examined. The initial computer hardware, the software graphics packages, and their configurations are described. The addition of improved computer graphics capability to the supercomputer, and the utilization of the graphic software and hardware are discussed. Consideration is given to the interactive processing system which supports the computer in an interactive debugging, processing, and graphics environment.

  3. Intricacies of modern supercomputing illustrated with recent advances in simulations of strongly correlated electron systems

    NASA Astrophysics Data System (ADS)

    Schulthess, Thomas C.

    2013-03-01

    The continued thousand-fold improvement in sustained application performance per decade on modern supercomputers keeps opening new opportunities for scientific simulations. But supercomputers have become very complex machines, built with thousands or tens of thousands of complex nodes consisting of multiple CPU cores or, most recently, a combination of CPU and GPU processors. Efficient simulations on such high-end computing systems require tailored algorithms that optimally map numerical methods to particular architectures. These intricacies will be illustrated with simulations of strongly correlated electron systems, where the development of quantum cluster methods, Monte Carlo techniques, as well as their optimal implementation by means of algorithms with improved data locality and high arithmetic density have gone hand in hand with evolving computer architectures. The present work would not have been possible without continued access to computing resources at the National Center for Computational Science of Oak Ridge National Laboratory, which is funded by the Facilities Division of the Office of Advanced Scientific Computing Research, and the Swiss National Supercomputing Center (CSCS) that is funded by ETH Zurich.

  4. NASA's Participation in the National Computational Grid

    NASA Technical Reports Server (NTRS)

    Feiereisen, William J.; Zornetzer, Steve F. (Technical Monitor)

    1998-01-01

    Over the last several years it has become evident that the character of NASA's supercomputing needs has changed. One of the major missions of the agency is to support the design and manufacture of aero- and space-vehicles with technologies that will significantly reduce their cost. It is becoming clear that improvements in the process of aerospace design and manufacturing will require a high performance information infrastructure that allows geographically dispersed teams to draw upon resources that are broader than traditional supercomputing. A computational grid draws together our information resources into one system. We can foresee the time when a Grid will allow engineers and scientists to use the tools of supercomputers, databases and online experimental devices in a virtual environment to collaborate with distant colleagues. The concept of a computational grid has been spoken of for many years, but several events in recent times are conspiring to allow us to actually build one. In late 1997 the National Science Foundation initiated the Partnerships for Advanced Computational Infrastructure (PACI) program, which is built around the idea of distributed high performance computing. The National Computational Science Alliance, led by the National Center for Supercomputing Applications (NCSA), and the National Partnership for Advanced Computational Infrastructure (NPACI), led by the San Diego Supercomputer Center, have been instrumental in drawing together the "Grid Community" to identify the technology bottlenecks and propose a research agenda to address them. During the same period NASA has begun to reformulate parts of two major high performance computing research programs to concentrate on distributed high performance computing and has banded together with the PACI centers to address the research agenda in common.

  5. Automated Help System For A Supercomputer

    NASA Technical Reports Server (NTRS)

    Callas, George P.; Schulbach, Catherine H.; Younkin, Michael

    1994-01-01

    Expert-system software developed to provide automated system of user-helping displays in supercomputer system at Ames Research Center Advanced Computer Facility. Users located at remote computer terminals connected to supercomputer and each other via gateway computers, local-area networks, telephone lines, and satellite links. Automated help system answers routine user inquiries about how to use services of computer system. Available 24 hours per day and reduces burden on human experts, freeing them to concentrate on helping users with complicated problems.

  6. Hurricane Intensity Forecasts with a Global Mesoscale Model on the NASA Columbia Supercomputer

    NASA Technical Reports Server (NTRS)

    Shen, Bo-Wen; Tao, Wei-Kuo; Atlas, Robert

    2006-01-01

    It is known that General Circulation Models (GCMs) have insufficient resolution to accurately simulate hurricane near-eye structure and intensity. The increasing capabilities of high-end computers (e.g., the NASA Columbia Supercomputer) have changed this. In 2004, the finite-volume General Circulation Model at a 1/4 degree resolution, doubling the resolution used by most operational NWP centers at that time, was implemented and run to obtain promising landfall predictions for major hurricanes (e.g., Charley, Frances, Ivan, and Jeanne). In 2005, we successfully implemented the 1/8 degree version, and demonstrated its performance on intensity forecasts with hurricane Katrina (2005). It is found that the 1/8 degree model is capable of simulating the radius of maximum wind and near-eye wind structure, thereby producing promising intensity forecasts. In this study, we will further evaluate the model's performance on intensity forecasts of hurricanes Ivan, Jeanne, and Karl in 2004. Suggestions for further model development will be made at the end.

  7. Large-Scale NASA Science Applications on the Columbia Supercluster

    NASA Technical Reports Server (NTRS)

    Brooks, Walter

    2005-01-01

    Columbia, NASA's newest 61 teraflops supercomputer that became operational late last year, is a highly integrated Altix cluster of 10,240 processors, and was named to honor the crew of the Space Shuttle lost in early 2003. Constructed in just four months, Columbia increased NASA's computing capability ten-fold, and revitalized the Agency's high-end computing efforts. Significant cutting-edge science and engineering simulations in the areas of space and Earth sciences, as well as aeronautics and space operations, are already occurring on this largest operational Linux supercomputer, demonstrating its capacity and capability to accelerate NASA's space exploration vision. The presentation will describe how an integrated environment consisting not only of next-generation systems, but also modeling and simulation, high-speed networking, parallel performance optimization, and advanced data analysis and visualization, is being used to reduce design cycle time, accelerate scientific discovery, conduct parametric analysis of multiple scenarios, and enhance safety during the life cycle of NASA missions. The talk will conclude by discussing how NAS partnered with various NASA centers, other government agencies, computer industry, and academia, to create a national resource in large-scale modeling and simulation.

  8. Supercomputer optimizations for stochastic optimal control applications

    NASA Technical Reports Server (NTRS)

    Chung, Siu-Leung; Hanson, Floyd B.; Xu, Huihuang

    1991-01-01

    Supercomputer optimizations for a computational method of solving stochastic, multibody, dynamic programming problems are presented. The computational method is valid for a general class of optimal control problems that are nonlinear, multibody dynamical systems, perturbed by general Markov noise in continuous time, i.e., nonsmooth Gaussian as well as jump Poisson random white noise. Optimization techniques for vector multiprocessors or vectorizing supercomputers include advanced data structures, loop restructuring, loop collapsing, blocking, and compiler directives. These advanced computing techniques and supercomputing hardware help alleviate Bellman's curse of dimensionality in dynamic programming computations, by permitting the solution of large multibody problems. Possible applications include lumped flight dynamics models for uncertain environments, such as large scale and background random aerospace fluctuations.
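    Of the techniques this abstract names, loop collapsing is the easiest to show in miniature: two short nested loops over a state grid are merged into one long flat loop, the form a vectorizing compiler maps best onto vector hardware. The sketch below is illustrative only; the grid sizes and the update rule are assumptions, not taken from the paper.

```python
# Sketch of the "loop collapsing" optimization named in the abstract.
# Grid sizes and the update rule are illustrative assumptions.
NX, NV = 32, 16                        # small state grid: position x velocity
cost = [float(k) for k in range(NX * NV)]  # flattened cost-to-go table

# Nested-loop form: the inner trip count (NV = 16) is too short to fill
# a vector pipeline efficiently.
out_nested = [0.0] * (NX * NV)
for i in range(NX):
    for j in range(NV):
        k = i * NV + j
        out_nested[k] = 0.5 * cost[k] ** 2

# Collapsed form: a single loop of length NX * NV = 512, which a
# vectorizing compiler can turn into long vector operations.
out_collapsed = [0.5 * c ** 2 for c in cost]

assert out_nested == out_collapsed     # same results, longer vector length
```

    On a vector machine of the era, the collapsed loop amortizes vector startup cost over 512 iterations instead of 16, which is the whole point of the transformation.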

  9. Japanese supercomputer technology.

    PubMed

    Buzbee, B L; Ewald, R H; Worlton, W J

    1982-12-17

    Under the auspices of the Ministry for International Trade and Industry the Japanese have launched a National Superspeed Computer Project intended to produce high-performance computers for scientific computation and a Fifth-Generation Computer Project intended to incorporate and exploit concepts of artificial intelligence. If these projects are successful, which appears likely, advanced economic and military research in the United States may become dependent on access to supercomputers of foreign manufacture.

  10. Code IN Exhibits - Supercomputing 2000

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; McCann, Karen M.; Biswas, Rupak; VanderWijngaart, Rob F.; Kwak, Dochan (Technical Monitor)

    2000-01-01

    The creation of parameter study suites has recently become a more challenging problem as the parameter studies have become multi-tiered and the computational environment has become a supercomputer grid. The parameter spaces are vast, the individual problem sizes are getting larger, and researchers are seeking to combine several successive stages of parameterization and computation. Simultaneously, grid-based computing offers immense resource opportunities but at the expense of great difficulty of use. We present ILab, an advanced graphical user interface approach to this problem. Our novel strategy stresses intuitive visual design tools for parameter study creation and complex process specification, and also offers programming-free access to grid-based supercomputer resources and process automation.

  11. Role of HPC in Advancing Computational Aeroelasticity

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.

    2004-01-01

    On behalf of the High Performance Computing and Modernization Program (HPCMP) and the NASA Advanced Supercomputing Division (NAS), a study is conducted to assess the role of supercomputers in computational aeroelasticity of aerospace vehicles. The study is mostly based on the responses to a web-based questionnaire that was designed to capture the nuances of high performance computational aeroelasticity, particularly on parallel computers. A procedure is presented to assign a fidelity-complexity index to each application. Case studies based on major applications using HPCMP resources are presented.

  12. Optimization of large matrix calculations for execution on the Cray X-MP vector supercomputer

    NASA Technical Reports Server (NTRS)

    Hornfeck, William A.

    1988-01-01

    A considerable volume of large computational computer codes was developed for NASA over the past twenty-five years. This code represents algorithms developed for machines of an earlier generation. With the emergence of the vector supercomputer as a viable, commercially available machine, an opportunity exists to evaluate optimization strategies to improve the efficiency of existing software, primarily because of architectural differences between the latest generation of large-scale machines and the earlier, mostly uniprocessor, machines. A software package being used by NASA to perform computations on large matrices is described, and a strategy for its conversion to the Cray X-MP vector supercomputer is also described.
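    A central concern in any such conversion is arranging the innermost loop to walk memory with unit stride, the access pattern a vector machine like the X-MP streams best. The sketch below is a minimal illustration of that principle under assumed sizes, not the paper's actual package: a row-major matrix is traversed either down columns (long stride) or, after a one-time transpose, along contiguous runs.

```python
# Illustration of the unit-stride principle behind converting matrix code
# for a vector machine. A matrix stored row-major in a flat list is read
# either down a column (stride = NCOLS, poor for a vector unit) or along
# contiguous runs (stride = 1, the vectorizable form). Sizes are
# illustrative assumptions.
NROWS, NCOLS = 3, 4
a = [float(k) for k in range(NROWS * NCOLS)]   # row-major flat storage

# Column sums via strided access (elements NCOLS apart in memory):
col_sums_strided = [
    sum(a[i * NCOLS + j] for i in range(NROWS)) for j in range(NCOLS)
]

# The same sums after transposing once, so each column occupies a
# contiguous, unit-stride run of memory:
at = [a[i * NCOLS + j] for j in range(NCOLS) for i in range(NROWS)]
col_sums_unit = [
    sum(at[j * NROWS : (j + 1) * NROWS]) for j in range(NCOLS)
]

assert col_sums_strided == col_sums_unit
```

    The one-time cost of the transpose is repaid whenever the columns are traversed repeatedly, which is typical of the iterative matrix kernels such packages run.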

  13. National Test Facility civilian agency use of supercomputers not feasible

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1994-12-01

    Based on interviews with civilian agencies cited in the House report (DOE, DoEd, HHS, FEMA, NOAA), none would be able to make effective use of NTF's excess supercomputing capabilities. These agencies stated they could not use the resources primarily because (1) NTF's supercomputers are older machines whose performance and costs cannot match those of more advanced computers available from other sources and (2) some agencies have not yet developed applications requiring supercomputer capabilities or do not have funding to support such activities. In addition, future support for the hardware and software at NTF is uncertain, making any investment by an outside user risky.

  14. Computational Nanotechnology at NASA Ames Research Center, 1996

    NASA Technical Reports Server (NTRS)

    Globus, Al; Bailey, David; Langhoff, Steve; Pohorille, Andrew; Levit, Creon; Chancellor, Marisa K. (Technical Monitor)

    1996-01-01

    Some forms of nanotechnology appear to have enormous potential to improve aerospace and computer systems; computational nanotechnology, the design and simulation of programmable molecular machines, is crucial to progress. NASA Ames Research Center has begun a computational nanotechnology program including in-house work, external research grants, and grants of supercomputer time. Four goals have been established: (1) simulate a hypothetical programmable molecular machine replicating itself and building other products; (2) develop molecular manufacturing CAD (computer-aided design) software and use it to design molecular manufacturing systems and products of aerospace interest, including computer components; (3) characterize nanotechnologically accessible materials of aerospace interest, which may have excellent strength and thermal properties; and (4) collaborate with experimentalists. Current in-house activities include: (1) development of NanoDesign, software to design and simulate a nanotechnology based on functionalized fullerenes, with early work focused on gears; (2) a design for high-density atomically precise memory; (3) design of nanotechnology systems based on biology; (4) characterization of diamondoid mechanosynthetic pathways; (5) studies of the Laplacian of the electronic charge density to understand molecular structure and reactivity; (6) studies of entropic effects during self-assembly; and (7) characterization of properties of matter for clusters up to sizes exhibiting bulk properties. In addition, the NAS (NASA Advanced Supercomputing) Division sponsored a workshop on computational molecular nanotechnology on March 4-5, 1996, held at NASA Ames Research Center. Finally, collaborations with Bill Goddard at Caltech, Ralph Merkle at Xerox PARC, Don Brenner at NCSU (North Carolina State University), Tom McKendree at Hughes, and Todd Wipke at UCSC are underway.

  15. Advances and issues from the simulation of planetary magnetospheres with recent supercomputer systems

    NASA Astrophysics Data System (ADS)

    Fukazawa, K.; Walker, R. J.; Kimura, T.; Tsuchiya, F.; Murakami, G.; Kita, H.; Tao, C.; Murata, K. T.

    2016-12-01

    Planetary magnetospheres are very large, while phenomena within them occur on meso- and micro-scales, ranging from tens of planetary radii down to kilometers. To understand dynamics in these multi-scale systems, numerical simulations have been performed on supercomputer systems. We have long studied the magnetospheres of Earth, Jupiter, and Saturn by using three-dimensional magnetohydrodynamic (MHD) simulations; however, we have not captured phenomena near the limits of the MHD approximation. In particular, we have not studied meso-scale phenomena that can be addressed by using MHD. Recently we performed an MHD simulation of Earth's magnetosphere on the K computer, the first 10 PFlops supercomputer, and obtained multi-scale flow vorticity for both northward and southward IMF. Furthermore, we have access to supercomputer systems with Xeon, SPARC64, and vector-type CPUs, and can compare simulation results across these different systems. Finally, we have compared the results of our parameter survey of the magnetosphere with observations from the HISAKI spacecraft. We have encountered a number of difficulties in effectively using the latest supercomputer systems. First, the size of simulation output has increased greatly: a simulation group now produces over 1 PB of output, and storing and analyzing this much data is difficult. The traditional way to analyze simulation results is to move them to the investigator's home computer, which takes over three months on an end-to-end 10 Gbps network. In reality, bottlenecks at some nodes, such as firewalls, can increase the transfer time to over one year. Another issue is post-processing: it is hard to handle even a few TB of simulation output due to the memory limitations of a post-processing computer. 
To overcome these issues, we have developed and introduced parallel network storage, a highly efficient network protocol, and CUI-based visualization tools. In this study, we

  16. Collaborative Supercomputing for Global Change Science

    NASA Astrophysics Data System (ADS)

    Nemani, R.; Votava, P.; Michaelis, A.; Melton, F.; Milesi, C.

    2011-03-01

    There is increasing pressure on the science community not only to understand how recent and projected changes in climate will affect Earth's global environment and the natural resources on which society depends but also to design solutions to mitigate or cope with the likely impacts. Responding to this multidimensional challenge requires new tools and research frameworks that assist scientists in collaborating to rapidly investigate complex interdisciplinary science questions of critical societal importance. One such collaborative research framework, within the NASA Earth sciences program, is the NASA Earth Exchange (NEX). NEX combines state-of-the-art supercomputing, Earth system modeling, remote sensing data from NASA and other agencies, and a scientific social networking platform to deliver a complete work environment. In this platform, users can explore and analyze large Earth science data sets, run modeling codes, collaborate on new or existing projects, and share results within or among communities (see Figure S1 in the online supplement to this Eos issue (http://www.agu.org/eos_elec)).

  17. Advanced aerodynamics. Selected NASA research

    NASA Technical Reports Server (NTRS)

    1981-01-01

    This Conference Publication contains selected NASA papers that were presented at the Fifth Annual Status Review of the NASA Aircraft Energy Efficiency (ACEE) Energy Efficient Transport (EET) Program held at Dryden Flight Research Center in Edwards, California on September 14 to 15, 1981. These papers describe the status of several NASA in-house research activities in the areas of advanced turboprops, natural laminar flow, oscillating control surfaces, high-Reynolds-number airfoil tests, high-lift technology, and theoretical design techniques.

  18. Computational mechanics analysis tools for parallel-vector supercomputers

    NASA Technical Reports Server (NTRS)

    Storaasli, O. O.; Nguyen, D. T.; Baddourah, M. A.; Qin, J.

    1993-01-01

    Computational algorithms for structural analysis on parallel-vector supercomputers are reviewed. These parallel algorithms, developed by the authors, cover the assembly of structural equations, 'out-of-core' strategies for linear equation solution, massively distributed-memory equation solution, unsymmetric equation solution, general eigen-solution, geometrically nonlinear finite element analysis, design sensitivity analysis for structural dynamics, optimization algorithms, and domain decomposition. The source code for many of these algorithms is available from NASA Langley.

  19. NASA Institute for Advanced Concepts

    NASA Technical Reports Server (NTRS)

    Cassanova, Robert A.

    1999-01-01

    The purpose of the NASA Institute for Advanced Concepts (NIAC) is to provide an independent, open forum for the external analysis and definition of space and aeronautics advanced concepts to complement the advanced concepts activities conducted within the NASA Enterprises. The NIAC will issue Calls for Proposals during each year of operation and will select revolutionary advanced concepts for grant or contract awards through a peer review process. Final selection of awards will be made with the concurrence of NASA's Chief Technologist. The operation of the NIAC is reviewed biannually by the NIAC Science, Exploration and Technology Council (NSETC), whose members are drawn from the senior levels of industry and universities. The process of defining the technical scope of the initial Call for Proposals began with the NIAC "Grand Challenges" workshop conducted on May 21-22, 1998 in Columbia, Maryland. The "Grand Challenges" resulting from this workshop became the essence of the technical scope for the first Phase I Call for Proposals, which was released on June 19, 1998 with a due date of July 31, 1998. The first Phase I Call for Proposals attracted 119 proposals. After a thorough peer review, prioritization by NIAC, and technical concurrence by NASA, sixteen subgrants were awarded. The second Phase I Call for Proposals was released on November 23, 1998 with a due date of January 31, 1999. Sixty-three (63) proposals were received in response to this Call. On December 2-3, 1998, the NSETC met to review the progress and future plans of the NIAC. The next NSETC meeting is scheduled for August 5-6, 1999. The first Phase II Call for Proposals was released to the current Phase I grantees on February 3, 1999 with a due date of May 31, 1999. Plans for the second year of the contract include a continuation of the sequence of Phase I and Phase II Calls for Proposals and hosting the first NIAC Annual Meeting and USRA/NIAC Technical Symposium at NASA HQ.

  20. Supercomputer applications in molecular modeling.

    PubMed

    Gund, T M

    1988-01-01

    An overview of the functions performed by molecular modeling is given. Molecular modeling techniques benefiting from supercomputing are described, namely conformational search, derivation of bioactive conformations, pharmacophoric pattern searching, receptor mapping, and electrostatic properties. The use of supercomputers for problems that are computationally intensive, such as protein structure prediction, protein dynamics and reactivity, protein conformations, and energetics of binding, is also examined. The current status of supercomputing and of supercomputer resources is discussed.

  1. Supercomputing Sheds Light on the Dark Universe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habib, Salman; Heitmann, Katrin

    2012-11-15

    At Argonne National Laboratory, scientists are using supercomputers to shed light on one of the great mysteries in science today, the Dark Universe. With Mira, a petascale supercomputer at the Argonne Leadership Computing Facility, a team led by physicists Salman Habib and Katrin Heitmann will run the largest, most complex simulation of the universe ever attempted. By contrasting the results from Mira with state-of-the-art telescope surveys, the scientists hope to gain new insights into the distribution of matter in the universe, advancing future investigations of dark energy and dark matter into a new realm. The team's research was named a finalist for the 2012 Gordon Bell Prize, an award recognizing outstanding achievement in high-performance computing.

  2. NASA Tech Briefs, November/December 1986, Special Edition

    NASA Technical Reports Server (NTRS)

    1986-01-01

    Topics: Computing: The View from NASA Headquarters; Earth Resources Laboratory Applications Software: Versatile Tool for Data Analysis; The Hypercube: Cost-Effective Supercomputing; Artificial Intelligence: Rendezvous with NASA; NASA's Ada Connection; COSMIC: NASA's Software Treasurehouse; Golden Oldies: Tried and True NASA Software; Computer Technical Briefs; NASA TU Services; Digital Fly-by-Wire.

  3. High Performance Distributed Computing in a Supercomputer Environment: Computational Services and Applications Issues

    NASA Technical Reports Server (NTRS)

    Kramer, Williams T. C.; Simon, Horst D.

    1994-01-01

    This tutorial is intended as a practical guide for the uninitiated to the main topics and themes of high-performance computing (HPC), with particular emphasis on distributed computing. The intent is first to provide some guidance and direction in the rapidly growing field of scientific computing using both massively parallel and traditional supercomputers. Because of their considerable potential computational power, loosely or tightly coupled clusters of workstations are increasingly considered a third alternative to the more conventional supercomputers based on a small number of powerful vector processors on the one hand, and massively parallel processors on the other. Even though many research issues concerning the effective use of workstation clusters and their integration into a large-scale production facility are still unresolved, such clusters are already used for production computing. In this tutorial we draw on the unique experience gained at the NAS facility at NASA Ames Research Center. Over the last five years at NAS, massively parallel supercomputers such as the Connection Machines CM-2 and CM-5 from Thinking Machines Corporation and the iPSC/860 (Touchstone Gamma Machine) and Paragon machines from Intel were used in a production supercomputing center alongside traditional vector supercomputers such as the Cray Y-MP and C90.

  4. NASA capabilities roadmap: advanced telescopes and observatories

    NASA Technical Reports Server (NTRS)

    Feinberg, Lee D.

    2005-01-01

    The NASA Advanced Telescopes and Observatories (ATO) Capability Roadmap addresses technologies necessary for NASA to enable future space telescopes and observatories covering all electromagnetic bands, from X-rays to millimeter waves, as well as gravitational waves. It derives capability priorities from current and developing Science Mission Directorate (SMD) strategic roadmaps and, where appropriate, ensures their consistency with other NASA strategic and capability roadmaps. Technology topics include optics; wavefront sensing and control and interferometry; distributed and advanced spacecraft systems; cryogenic and thermal control systems; large precision structures for observatories; and the infrastructure essential to future space telescopes and observatories.

  5. NASA Exhibits

    NASA Technical Reports Server (NTRS)

    Deardorff, Glenn; Djomehri, M. Jahed; Freeman, Ken; Gambrel, Dave; Green, Bryan; Henze, Chris; Hinke, Thomas; Hood, Robert; Kiris, Cetin; Moran, Patrick; et al.

    2001-01-01

    A series of NASA presentations for the Supercomputing 2001 conference are summarized. The topics include: (1) Mars Surveyor Landing Sites "Collaboratory"; (2) Parallel and Distributed CFD for Unsteady Flows with Moving Overset Grids; (3) IP Multicast for Seamless Support of Remote Science; (4) Consolidated Supercomputing Management Office; (5) Growler: A Component-Based Framework for Distributed/Collaborative Scientific Visualization and Computational Steering; (6) Data Mining on the Information Power Grid (IPG); (7) Debugging on the IPG; (8) DeBakey Heart Assist Device; (9) Unsteady Turbopump for Reusable Launch Vehicle; (10) Exploratory Computing Environments Component Framework; (11) OVERSET Computational Fluid Dynamics Tools; (12) Control and Observation in Distributed Environments; (13) Multi-Level Parallelism Scaling on NASA's Origin 1024-CPU System; (14) Computing, Information, & Communications Technology; (15) NAS Grid Benchmarks; (16) IPG: A Large-Scale Distributed Computing and Data Management System; and (17) ILab: Parameter Study Creation and Submission on the IPG.

  6. NASA Advanced Propeller Research

    NASA Technical Reports Server (NTRS)

    Groeneweg, John F.; Bober, Lawrence J.

    1988-01-01

    Acoustic and aerodynamic research at NASA Lewis Research Center on advanced propellers is reviewed, including analytical and experimental results on both single and counterrotation. Computational tools used to calculate the detailed flow and acoustic fields are described along with wind tunnel tests to obtain data for code verification. Results from two kinds of experiments are reviewed: (1) performance and near field noise at cruise conditions as measured in the NASA Lewis 8- by 6-Foot Wind Tunnel, and (2) far field noise and performance for takeoff/approach conditions as measured in the NASA Lewis 9- by 15-Foot Anechoic Wind Tunnel. Detailed measurements of steady blade surface pressures are described along with vortex flow phenomena at off-design conditions. Near field noise at cruise is shown to level out or decrease as tip relative Mach number is increased beyond 1.15. Counterrotation interaction noise is shown to be a dominant source at takeoff but a secondary source at cruise. Effects of unequal rotor diameters and rotor-to-rotor spacing on interaction noise are also illustrated. Comparisons of wind tunnel acoustic measurements to flight results are made. Finally, some future directions in advanced propeller research such as swirl recovery vanes, higher sweep, forward sweep, and ducted propellers are discussed.

  7. NASA advanced propeller research

    NASA Technical Reports Server (NTRS)

    Groeneweg, John F.; Bober, Lawrence J.

    1988-01-01

    Acoustic and aerodynamic research at NASA Lewis Research Center on advanced propellers is reviewed including analytical and experimental results on both single and counterrotation. Computational tools used to calculate the detailed flow and acoustic fields are described along with wind tunnel tests to obtain data for code verification. Results from two kinds of experiments are reviewed: (1) performance and near field noise at cruise conditions as measured in the NASA Lewis 8- by 6-foot Wind Tunnel; and (2) far field noise and performance for takeoff/approach conditions as measured in the NASA Lewis 9- by 15-foot Anechoic Wind Tunnel. Detailed measurements of steady blade surface pressures are described along with vortex flow phenomena at off-design conditions. Near field noise at cruise is shown to level out or decrease as tip relative Mach number is increased beyond 1.15. Counterrotation interaction noise is shown to be a dominant source at takeoff but a secondary source at cruise. Effects of unequal rotor diameters and rotor-to-rotor spacing on interaction noise are also illustrated. Comparisons of wind tunnel acoustic measurements to flight results are made. Finally, some future directions in advanced propeller research such as swirl recovery vanes, higher sweep, forward sweep, and ducted propellers are discussed.

  8. Computational Electromagnetics and Supercomputer Architecture

    NASA Technical Reports Server (NTRS)

    Cwik, Tom

    1993-01-01

    The dramatic increase in performance over the last decade for microprocessor computations is compared with that for supercomputer computations. This performance, the projected performance, and a number of other issues such as cost and the inherent physical limitations of current supercomputer technology have naturally led to parallel supercomputers and ensembles of interconnected microprocessors.

  9. A Look at the Impact of High-End Computing Technologies on NASA Missions

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Dunbar, Jill; Hardman, John; Bailey, F. Ron; Wheeler, Lorien; Rogers, Stuart

    2012-01-01

    From its bold start nearly 30 years ago and continuing today, the NASA Advanced Supercomputing (NAS) facility at Ames Research Center has enabled remarkable breakthroughs in the space agency's science and engineering missions. Throughout this time, NAS experts have influenced the state of the art in high-performance computing (HPC) and related technologies such as scientific visualization, system benchmarking, batch scheduling, and grid environments. We highlight the pioneering achievements and innovations originating from and made possible by NAS resources and know-how, from early supercomputing environment design and software development, to long-term simulations and analyses critical to the design of safe Space Shuttle operations and associated spinoff technologies, to the highly successful Kepler mission's discovery of new planets now capturing the world's imagination.

  10. Demonstration of NICT Space Weather Cloud --Integration of Supercomputer into Analysis and Visualization Environment--

    NASA Astrophysics Data System (ADS)

    Watari, S.; Morikawa, Y.; Yamamoto, K.; Inoue, S.; Tsubouchi, K.; Fukazawa, K.; Kimura, E.; Tatebe, O.; Kato, H.; Shimojo, S.; Murata, K. T.

    2010-12-01

    In the Solar-Terrestrial Physics (STP) field, the spatio-temporal resolution of computer simulations keeps increasing thanks to the tremendous advancement of supercomputers. A further advance is grid computing, which integrates distributed computational resources to provide scalable computing power. In simulation research, it is effective for researchers to design their own physical models, perform calculations on a supercomputer, and analyze and visualize the results with familiar tools. A supercomputer, however, is far removed from the analysis and visualization environment: researchers generally analyze and visualize on a local workstation (WS), where software is easy to install and operate, so data must be copied from the supercomputer to the WS manually. In practice, the time required to transfer data over a long-delay network hampers high-accuracy simulation work. For usability, it is therefore important to integrate a supercomputer and an analysis and visualization environment seamlessly, in a manner familiar to researchers. NICT has been developing a cloud computing environment (the NICT Space Weather Cloud). In the NICT Space Weather Cloud, disk servers are located near its supercomputer and the WSs used for data analysis and visualization, and are connected to JGN2plus, a high-speed network for research and development. Distributed virtual high-capacity storage is also constructed with Grid Datafarm (Gfarm v2). Huge simulation outputs are transferred from the supercomputer to the virtual storage through JGN2plus, so a researcher can concentrate on research with familiar tools, regardless of the distance between the supercomputer and the analysis and visualization environment. Currently, a total of 16 disk servers are set up at NICT headquarters (Koganei, Tokyo), the JGN2plus NOC (Otemachi, Tokyo), the Okinawa Subtropical Environment Remote-Sensing Center, and the Cybermedia Center, Osaka University. 
They are connected on

  11. Particle simulation on heterogeneous distributed supercomputers

    NASA Technical Reports Server (NTRS)

    Becker, Jeffrey C.; Dagum, Leonardo

    1993-01-01

    We describe the implementation and performance of a three-dimensional particle simulation distributed between a Thinking Machines CM-2 and a Cray Y-MP. These are connected by a combination of two high-speed networks: a high-performance parallel interface (HIPPI) and an optical network (UltraNet). This is the first application to use this configuration at NASA Ames Research Center. We describe our experience implementing and using the application and report the results of several timing measurements. We show that the distribution of applications across disparate supercomputing platforms is feasible and has reasonable performance. In addition, several practical aspects of the computing environment are discussed.

  12. Advanced Methodologies for NASA Science Missions

    NASA Astrophysics Data System (ADS)

    Hurlburt, N. E.; Feigelson, E.; Mentzel, C.

    2017-12-01

    Most of NASA's commitment to computational space science involves the organization and processing of Big Data from space-based satellites, and the calculation of advanced physical models based on these datasets. But considerable thought is also needed about which computations are required. The science questions addressed by space data are so diverse and complex that traditional analysis procedures are often inadequate. The knowledge and skills of the statistician, applied mathematician, and algorithmic computer scientist must be incorporated into programs that currently emphasize engineering and physical science. NASA's culture and administrative mechanisms take full cognizance of the fact that major advances in space science are driven by improvements in instrumentation. But it is less well recognized that new instruments and science questions give rise to new challenges in the treatment of satellite data after they are telemetered to the ground. These issues can be divided into two stages: data reduction through software pipelines developed within NASA mission centers, and science analysis performed by hundreds of space scientists dispersed throughout NASA, U.S. universities, and abroad. Both stages benefit from the latest statistical and computational methods; in some cases, the science result is completely inaccessible using traditional procedures. This paper reviews the current state of NASA practice and presents example applications using modern methodologies.

  13. Advancing Autonomous Operations Technologies for NASA Missions

    NASA Technical Reports Server (NTRS)

    Cruzen, Craig; Thompson, Jerry Todd

    2013-01-01

    This paper discusses the importance of implementing advanced autonomous technologies supporting operations of future NASA missions. The ability for crewed, uncrewed and even ground support systems to be capable of mission support without external interaction or control has become essential as space exploration moves further out into the solar system. The push to develop and utilize autonomous technologies for NASA mission operations stems in part from the need to reduce operations cost while improving and increasing capability and safety. This paper will provide examples of autonomous technologies currently in use at NASA and will identify opportunities to advance existing autonomous technologies that will enhance mission success by reducing operations cost, ameliorating inefficiencies, and mitigating catastrophic anomalies.

  14. Advancing Autonomous Operations Technologies for NASA Missions

    NASA Technical Reports Server (NTRS)

    Cruzen, Craig; Thompson, Jerry T.

    2013-01-01

    This paper discusses the importance of implementing advanced autonomous technologies supporting operations of future NASA missions. The ability for crewed, uncrewed and even ground support systems to be capable of mission support without external interaction or control has become essential as space exploration moves further out into the solar system. The push to develop and utilize autonomous technologies for NASA mission operations stems in part from the need to reduce cost while improving and increasing capability and safety. This paper will provide examples of autonomous technologies currently in use at NASA and will identify opportunities to advance existing autonomous technologies that will enhance mission success by reducing cost, ameliorating inefficiencies, and mitigating catastrophic anomalies.

  15. The NASA Advanced Space Power Systems Project

    NASA Technical Reports Server (NTRS)

    Mercer, Carolyn R.; Hoberecht, Mark A.; Bennett, William R.; Lvovich, Vadim F.; Bugga, Ratnakumar

    2015-01-01

    The goal of the NASA Advanced Space Power Systems Project is to develop advanced, game-changing technologies that will provide future NASA space exploration missions with safe, reliable, lightweight, and compact power generation and energy storage systems. The development effort is focused on maturing the technologies from a technology readiness level of approximately 2-3 to approximately 5-6, as defined in NASA Procedural Requirement 7123.1B. Currently, the project is working in two critical technology areas: high-specific-energy batteries, and regenerative fuel cell systems with passive fluid management. Examples of target applications for these technologies are: extending the duration of extravehicular activities (EVA) with high-specific-energy, high-energy-density batteries; and providing reliable, long-life power for rovers with passive fuel cell and regenerative fuel cell systems that enable reduced system complexity. Recent results from the high-energy battery and regenerative fuel cell technology development efforts will be presented. The technical approach, the key performance parameters, and the technical results achieved to date in each of these new elements will be included. The Advanced Space Power Systems Project is part of the Game Changing Development Program under NASA's Space Technology Mission Directorate.

  16. Porting Ordinary Applications to Blue Gene/Q Supercomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maheshwari, Ketan C.; Wozniak, Justin M.; Armstrong, Timothy

    2015-08-31

    Efficiently porting ordinary applications to Blue Gene/Q supercomputers is a significant challenge. Codes are often originally developed without considering advanced architectures and related tool chains. Science needs frequently lead users to want to run large numbers of relatively small jobs (often called many-task computing, an ensemble, or a workflow), which can conflict with supercomputer configurations. In this paper, we discuss techniques developed to execute ordinary applications on leadership-class supercomputers. We use the high-performance Swift parallel scripting framework and build two workflow execution techniques: sub-jobs and main-wrap. The sub-jobs technique, built on top of the IBM Blue Gene/Q resource manager Cobalt's sub-block jobs, lets users submit multiple, independent, repeated smaller jobs within a single larger resource block. The main-wrap technique is a scheme that enables C/C++ programs to be defined as functions that are wrapped by a high-performance Swift wrapper and invoked from a Swift script. We discuss the needs, benefits, technicalities, and current limitations of these techniques. We further discuss the real-world science enabled by these techniques and the results obtained.
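The many-task pattern the abstract describes can be sketched generically: many small, independent jobs packed into a single resource allocation, with their results collected centrally. This is NOT the Swift/Cobalt sub-block API; it is a minimal Python illustration, and the names `small_job` and `run_ensemble` are hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

def small_job(params):
    """Stand-in for one small application run (e.g., one ensemble member)."""
    x, y = params
    return x * y

def run_ensemble(param_sets, workers=4):
    # The executor plays the role of the sub-job scheduler: it dispatches
    # many independent tasks onto the workers of one allocation and
    # returns their results in submission order.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(small_job, param_sets))

print(run_ensemble([(i, i + 1) for i in range(8)]))
# prints [0, 2, 6, 12, 20, 30, 42, 56]
```

The key property mirrored here is that the scheduler sees one large job while the user's many small tasks share its resources, avoiding per-task scheduling overhead and queue limits.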

  17. NASA/industry advanced turboprop technology program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ziemianski, J.A.; Whitlow, J.B. Jr.

    1988-01-01

    Experimental and analytical effort shows that use of advanced turboprop (propfan) propulsion instead of conventional turbofans in the older narrow-body airline fleet could reduce fuel consumption for this type of aircraft by up to 50 percent. The NASA Advanced Turboprop (ATP) program was formulated to address the key technologies required for these thin, swept-blade propeller concepts. A NASA, industry, and university team was assembled to develop and validate applicable design codes and prove by ground and flight test the viability of these propeller concepts. Some of the history of the ATP project, an overview of some of the issues, and a summary of the technology developed to make advanced propellers viable in the high-subsonic cruise speed application are presented. The ATP program was awarded the prestigious Robert J. Collier Trophy for the greatest achievement in aeronautics and astronautics in America in 1987.

  18. NASA/industry advanced turboprop technology program

    NASA Technical Reports Server (NTRS)

    Ziemianski, Joseph A.; Whitlow, John B., Jr.

    1988-01-01

    Experimental and analytical effort shows that use of advanced turboprop (propfan) propulsion instead of conventional turbofans in the older narrow-body airline fleet could reduce fuel consumption for this type of aircraft by up to 50 percent. The NASA Advanced Turboprop (ATP) program was formulated to address the key technologies required for these thin, swept-blade propeller concepts. A NASA, industry, and university team was assembled to develop and validate applicable design codes and prove by ground and flight test the viability of these propeller concepts. Some of the history of the ATP project, an overview of some of the issues, and a summary of the technology developed to make advanced propellers viable in the high-subsonic cruise speed application are presented. The ATP program was awarded the prestigious Robert J. Collier Trophy for the greatest achievement in aeronautics and astronautics in America in 1987.

  19. Hurricane Forecasts with a Global Mesoscale-resolving Model on the NASA Columbia Supercomputer Preliminary Simulations of Hurricane Katrina (2005)

    NASA Technical Reports Server (NTRS)

    Shen, B.-W.; Atlas, R.; Reale, O.; Chern, J.-D.; Li, S.-J.; Lee, T.; Chang, J.; Henze, C.; Yeh, K.-S.

    2006-01-01

    It is known that General Circulation Models (GCMs) do not have sufficient resolution to accurately simulate hurricane near-eye structure and intensity. To overcome this limitation, the mesoscale-resolving finite-volume GCM (fvGCM) has been experimentally deployed on the NASA Columbia supercomputer, and its performance is evaluated in this study with Hurricane Katrina as an example. In late August 2005, Katrina underwent two stages of rapid intensification and became the sixth most intense hurricane in the Atlantic. Six 5-day simulations of Katrina at both 0.25 deg and 0.125 deg show comparable track forecasts, but the 0.125 deg runs provide much better intensity forecasts, producing center pressure with errors of only +/- 12 hPa. The 0.125 deg runs also simulate better near-eye wind distributions and a more realistic average intensification rate. While convection parameterization (CP) is one of the major limitations in a GCM, the 0.125 deg run with CP disabled produces very encouraging results.

  20. The 0.125 degree finite-volume General Circulation Model on the NASA Columbia Supercomputer: Preliminary Simulations of Mesoscale Vortices

    NASA Technical Reports Server (NTRS)

    Shen, B.-W.; Atlas, R.; Chern, J.-D.; Reale, O.; Lin, S.-J.; Lee, T.; Chang, J.

    2005-01-01

    The NASA Columbia supercomputer was ranked second on the TOP500 List in November 2004. Such a quantum jump in computing power provides unprecedented opportunities to conduct ultra-high resolution simulations with the finite-volume General Circulation Model (fvGCM). During 2004, the model was run experimentally in real time at 0.25 degree resolution, producing remarkable hurricane forecasts [Atlas et al., 2005]. In 2005, the horizontal resolution was further doubled, which makes the fvGCM comparable to the first mesoscale-resolving General Circulation Model at the Earth Simulator Center [Ohfuchi et al., 2004]. Nine 5-day 0.125 degree simulations of three hurricanes in 2004 are presented first for model validation. Then it is shown how the model can simulate the formation of the Catalina eddies and Hawaiian lee vortices, which are generated by the interaction of the synoptic-scale flow with surface forcing, and have never before been reproduced in a GCM.
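    As a rough arithmetic sketch (not taken from the record above) of what "doubling the horizontal resolution" implies: halving the grid spacing from 0.25 to 0.125 degree quadruples the number of horizontal grid points per model level, and the correspondingly shorter time step typically pushes total cost up further still.

    ```python
    # Horizontal grid sizes for a regular global lat-lon grid at the two
    # fvGCM resolutions mentioned above (a simplified illustration; the
    # model's actual grid layout may differ).
    def grid_points(deg):
        """Number of points on a regular global lat-lon grid of spacing `deg` degrees."""
        nlon = int(360 / deg)   # longitude circles span 360 degrees
        nlat = int(180 / deg)   # latitudes span -90 to +90 degrees
        return nlon * nlat

    coarse = grid_points(0.25)   # 1440 x 720  = 1,036,800 points
    fine = grid_points(0.125)    # 2880 x 1440 = 4,147,200 points
    print(fine // coarse)        # 4x more points per model level
    ```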

  1. NASA high performance computing and communications program

    NASA Technical Reports Server (NTRS)

    Holcomb, Lee; Smith, Paul; Hunter, Paul

    1993-01-01

    The National Aeronautics and Space Administration's HPCC program is part of a new Presidential initiative aimed at producing a 1000-fold increase in supercomputing speed and a 100-fold improvement in available communications capability by 1997. As more advanced technologies are developed under the HPCC program, they will be used to solve NASA's 'Grand Challenge' problems, which include improving the design and simulation of advanced aerospace vehicles, allowing people at remote locations to communicate more effectively and share information, increasing scientists' abilities to model the Earth's climate and forecast global environmental trends, and improving the development of advanced spacecraft. NASA's HPCC program is organized into three projects that are unique to the agency's mission: the Computational Aerosciences (CAS) project, the Earth and Space Sciences (ESS) project, and the Remote Exploration and Experimentation (REE) project. An additional project, the Basic Research and Human Resources (BRHR) project, exists to promote long-term research in computer science and engineering and to increase the pool of trained personnel in a variety of scientific disciplines. This document presents an overview of the objectives and organization of these projects as well as summaries of individual research and development programs within each project.

  2. Role of High-End Computing in Meeting NASA's Science and Engineering Challenges

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak

    2006-01-01

    High-End Computing (HEC) has always played a major role in meeting the modeling and simulation needs of various NASA missions. With NASA's newest 62-teraflops Columbia supercomputer, HEC is having an even greater impact within the Agency and beyond. Significant cutting-edge science and engineering simulations in the areas of space exploration, Shuttle operations, Earth sciences, and aeronautics research are already occurring on Columbia, demonstrating its ability to accelerate NASA's exploration vision. The talk will describe how the integrated supercomputing production environment is being used to reduce design cycle time, accelerate scientific discovery, conduct parametric analysis of multiple scenarios, and enhance safety during the life cycle of NASA missions.

  3. An introduction to NASA's advanced computing program: Integrated computing systems in advanced multichip modules

    NASA Technical Reports Server (NTRS)

    Fang, Wai-Chi; Alkalai, Leon

    1996-01-01

    Recent changes within NASA's space exploration program favor the design, implementation, and operation of low cost, lightweight, small and micro spacecraft with multiple launches per year. In order to meet the future needs of these missions with regard to the use of spacecraft microelectronics, NASA's advanced flight computing (AFC) program is currently considering industrial cooperation and advanced packaging architectures. In relation to this, the AFC program is reviewed, considering the design and implementation of NASA's AFC multichip module.

  4. NASA's Advanced Space Transportation Hypersonic Program

    NASA Technical Reports Server (NTRS)

    Hueter, Uwe; McClinton, Charles; Cook, Stephen (Technical Monitor)

    2002-01-01

    NASA has established long-term goals for access to space. NASA's third generation launch systems are to be fully reusable and operational in approximately 25 years. The goals for third generation launch systems are to reduce cost by a factor of 100 and improve safety by a factor of 10,000 over current conditions. The Advanced Space Transportation Program Office (ASTP) at NASA's Marshall Space Flight Center in Huntsville, AL has the agency lead to develop third generation space transportation technologies. The Hypersonics Investment Area, part of ASTP, is developing the third generation launch vehicle technologies in two main areas, propulsion and airframes. The program's major investment is in hypersonic airbreathing propulsion, since it offers the greatest potential for meeting the third generation launch vehicle goals. The program will mature the technologies in three key propulsion areas: scramjets, rocket-based combined cycle, and turbine-based combined cycle. Ground and flight propulsion tests are being planned for the propulsion technologies. Airframe technologies will be matured primarily through ground testing. This paper describes NASA's activities in hypersonics. Current programs, accomplishments, future plans, and technologies that are being pursued by the Hypersonics Investment Area under the Advanced Space Transportation Program Office will be discussed.

  5. NASA Advanced Life Support Technology Testing and Development

    NASA Technical Reports Server (NTRS)

    Wheeler, Raymond M.

    2012-01-01

    Prior to 2010, NASA's advanced life support research and development was carried out primarily under the Exploration Life Support Project of NASA's Exploration Systems Mission Directorate. In 2011, the Exploration Life Support Project was merged with other projects covering Fire Prevention/Suppression, Radiation Protection, Advanced Environmental Monitoring and Control, and Thermal Control Systems. This consolidated project was called Life Support and Habitation Systems, which was managed under the Exploration Systems Mission Directorate. In 2012, NASA re-organized major directorates within the agency, which eliminated the Exploration Systems Mission Directorate and created the Office of the Chief Technologist (OCT). Life support research and development is currently conducted within the Office of the Chief Technologist, under the Next Generation Life Support Project, and within the Human Exploration and Operations Mission Directorate under several Advanced Exploration Systems projects. These Advanced Exploration Systems projects include various themes of life support technology testing, including atmospheric management, water management, logistics and waste management, and habitation systems. Food crop testing is currently conducted as part of the Deep Space Habitation (DSH) project within the Advanced Exploration Systems Program. This testing is focused on growing salad crops that could supplement the crew's diet during near-term missions.

  6. Multi-petascale highly efficient parallel supercomputer

    DOEpatents

    Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.; Blumrich, Matthias A.; Boyle, Peter; Brunheroto, Jose R.; Chen, Dong; Cher, Chen -Yong; Chiu, George L.; Christ, Norman; Coteus, Paul W.; Davis, Kristan D.; Dozsa, Gabor J.; Eichenberger, Alexandre E.; Eisley, Noel A.; Ellavsky, Matthew R.; Evans, Kahn C.; Fleischer, Bruce M.; Fox, Thomas W.; Gara, Alan; Giampapa, Mark E.; Gooding, Thomas M.; Gschwind, Michael K.; Gunnels, John A.; Hall, Shawn A.; Haring, Rudolf A.; Heidelberger, Philip; Inglett, Todd A.; Knudson, Brant L.; Kopcsay, Gerard V.; Kumar, Sameer; Mamidala, Amith R.; Marcella, James A.; Megerian, Mark G.; Miller, Douglas R.; Miller, Samuel J.; Muff, Adam J.; Mundy, Michael B.; O'Brien, John K.; O'Brien, Kathryn M.; Ohmacht, Martin; Parker, Jeffrey J.; Poole, Ruth J.; Ratterman, Joseph D.; Salapura, Valentina; Satterfield, David L.; Senger, Robert M.; Smith, Brian; Steinmacher-Burow, Burkhard; Stockdell, William M.; Stunkel, Craig B.; Sugavanam, Krishnan; Sugawara, Yutaka; Takken, Todd E.; Trager, Barry M.; Van Oosten, James L.; Wait, Charles D.; Walkup, Robert E.; Watson, Alfred T.; Wisniewski, Robert W.; Wu, Peng

    2015-07-14

    A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaOPS-scale computing, at decreased cost, power, and footprint, that allows for a maximum packaging density of processing nodes from an interconnect point of view. The Supercomputer exploits technological advances in VLSI that enable a computing model where many processors can be integrated into a single Application Specific Integrated Circuit (ASIC). Each ASIC computing node comprises a system-on-chip ASIC utilizing four or more processors integrated into one die, each having full access to all system resources, enabling adaptive partitioning of the processors to functions such as compute or messaging I/O on an application-by-application basis, and preferably enabling adaptive partitioning of functions in accordance with various algorithmic phases within an application; if I/O or other processors are underutilized, they can participate in computation or communication. Nodes are interconnected by a five-dimensional torus network with DMA that optimally maximizes the throughput of packet communications between nodes and minimizes latency.

  7. Data-intensive computing on numerically-insensitive supercomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahrens, James P; Fasel, Patricia K; Habib, Salman

    2010-12-03

    With the advent of the era of petascale supercomputing, via the delivery of the Roadrunner supercomputing platform at Los Alamos National Laboratory, there is a pressing need to address the problem of visualizing massive petascale-sized results. In this presentation, I discuss progress on a number of approaches including in-situ analysis, multi-resolution out-of-core streaming and interactive rendering on the supercomputing platform. These approaches are placed in context by the emerging area of data-intensive supercomputing.

  8. Advanced Composite Structures At NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    Eldred, Lloyd B.

    2015-01-01

    Dr. Eldred's presentation will discuss several NASA efforts to improve and expand the use of composite structures within aerospace vehicles. Topics will include an overview of NASA's Advanced Composites Project (ACP), Space Launch System (SLS) applications, and Langley's ISAAC robotic composites research tool.

  9. Simulations of Hurricane Katrina (2005) with the 0.125 degree finite-volume General Circulation Model on the NASA Columbia Supercomputer

    NASA Technical Reports Server (NTRS)

    Shen, B.-W.; Atlas, R.; Reale, O.; Lin, S.-J.; Chern, J.-D.; Chang, J.; Henze, C.

    2006-01-01

    Hurricane Katrina was the sixth most intense hurricane in the Atlantic. Katrina's forecast poses major challenges, the most important of which is its rapid intensification. Hurricane intensity forecasting with General Circulation Models (GCMs) is difficult because of their coarse resolution. In this article, six 5-day simulations with the ultra-high resolution finite-volume GCM are conducted on the NASA Columbia supercomputer to show the effects of increased resolution on the intensity predictions of Katrina. It is found that the 0.125 degree runs give tracks comparable to the 0.25 degree runs, but provide better intensity forecasts, bringing the center pressure much closer to observations with differences of only plus or minus 12 hPa. In the runs initialized at 1200 UTC 25 AUG, the 0.125 degree run simulates a more realistic intensification rate and better near-eye wind distributions. Moreover, the first global 0.125 degree simulation without convection parameterization (CP) produces even better intensity evolution and near-eye winds than the control run with CP.

  10. TOP500 Supercomputers for June 2004

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack

    2004-06-23

    23rd Edition of TOP500 List of World's Fastest Supercomputers Released: Japan's Earth Simulator Enters Third Year in Top Position MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 23rd edition of the TOP500 list of the world's fastest supercomputers was released today (June 23, 2004) at the International Supercomputer Conference in Heidelberg, Germany.

  11. Accessing Wind Tunnels From NASA's Information Power Grid

    NASA Technical Reports Server (NTRS)

    Becker, Jeff; Biegel, Bryan (Technical Monitor)

    2002-01-01

    The NASA Ames wind tunnel customers are among the first users of the Information Power Grid (IPG) storage system at the NASA Advanced Supercomputing Division. We wanted to be able to store their data on the IPG so that it could be accessed remotely in a secure but timely fashion. In addition, incorporation into the IPG allows future use of grid computational resources, e.g., for post-processing of data, or to do side-by-side CFD validation. In this paper, we describe the integration of grid data access mechanisms with the existing DARWIN web-based system that is used to access wind tunnel test data. We also show that the combined system has reasonable performance: wind tunnel data may be retrieved at 50 Mbit/s over a 100BaseT network connected to the IPG storage server.
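    As a back-of-the-envelope check on that retrieval rate (my arithmetic, not figures from the paper), 50 Mbit/s moves roughly 6 MB of wind tunnel data per second, so fetching a hypothetical 1 GB dataset would take on the order of minutes:

    ```python
    # Idealized transfer-time estimate at the quoted 50 Mbit/s rate,
    # ignoring protocol overhead and contention.
    RATE_BITS_PER_S = 50e6  # 50 Mbit/s retrieval rate quoted in the paper

    def transfer_seconds(size_bytes, rate_bps=RATE_BITS_PER_S):
        """Seconds to move `size_bytes` at `rate_bps` bits per second."""
        return size_bytes * 8 / rate_bps

    print(transfer_seconds(1e9))  # 160.0 seconds for a 1 GB dataset
    ```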

  12. NASA/USRA University Advanced Design Program Fifth Annual Summer Conference

    NASA Technical Reports Server (NTRS)

    1989-01-01

    The NASA/USRA University Advanced Design Program is a unique program that brings together NASA engineers, students, and faculty from United States engineering schools by integrating current and future NASA space/aeronautics engineering design projects into the university curriculum. The Program was conceived in the fall of 1984 as a pilot project to foster engineering design education in the universities and to supplement NASA's in-house efforts in advanced planning for space and aeronautics design. Nine universities and five NASA centers participated in the first year of the pilot project. Close cooperation between the NASA centers and the universities, the careful selection of design topics, and the enthusiasm of the students have resulted in a very successful program that now includes forty universities and eight NASA centers. The study topics cover a broad range of potential space and aeronautics projects.

  13. Application of technology developed for flight simulation at NASA. Langley Research Center

    NASA Technical Reports Server (NTRS)

    Cleveland, Jeff I., II

    1991-01-01

    In order to meet the stringent time-critical requirements for real-time man-in-the-loop flight simulation, computer processing operations including mathematical model computation and data input/output to the simulators must be deterministic and be completed in as short a time as possible. Personnel at NASA's Langley Research Center are currently developing the use of supercomputers for simulation mathematical model computation for real-time simulation. This, coupled with the use of an open systems software architecture, will advance the state-of-the-art in real-time flight simulation.

  14. Optimization of Supercomputer Use on EADS II System

    NASA Technical Reports Server (NTRS)

    Ahmed, Ardsher

    1998-01-01

    The main objective of this research was to optimize supercomputer use to achieve better throughput and utilization of supercomputers, and to help facilitate the movement of non-supercomputing codes (those inappropriate for a supercomputer) to mid-range systems for better use of Government resources at Marshall Space Flight Center (MSFC). This work involved a survey of the architectures available on EADS II and monitoring of customer (user) applications running on a CRAY T90 system.

  15. High Performance Computing at NASA

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Cooper, D. M. (Technical Monitor)

    1994-01-01

    The speaker will give an overview of high performance computing in the U.S. in general and within NASA in particular, including a description of the recently signed NASA-IBM cooperative agreement. The latest performance figures of various parallel systems on the NAS Parallel Benchmarks will be presented. The speaker was one of the authors of the NAS (Numerical Aerodynamic Simulation) Parallel Benchmarks, which are now widely cited in the industry as a measure of sustained performance on realistic high-end scientific applications. It will be shown that significant progress has been made by the highly parallel supercomputer industry during the past year or so, with several new systems, based on high-performance RISC processors, that now deliver superior performance per dollar compared to conventional supercomputers. Various pitfalls in reporting performance will be discussed. The speaker will then conclude by assessing the general state of the high performance computing field.

  16. Development of Metal Matrix Composites for NASA'S Advanced Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Lee, Jonathan A.

    2000-01-01

    The state-of-the-art development of several aluminum and copper based Metal Matrix Composites (MMC) for NASA's advanced propulsion systems will be presented. The presentation's goal is to provide an overview of NASA-Marshall Space Flight Center's planned and on-going activities in MMC for advanced liquid rocket engines such as the X-33 vehicle's Aerospike and X-34 Fastrac engine. The focus will be on lightweight MMC materials and their environmental compatibility with oxygen and hydrogen, within each new NASA propulsion application, that will provide a high payoff for NASA's reusable launch vehicle systems and space access vehicles. Advanced MMC processing techniques such as plasma spray, centrifugal casting, and pressure infiltration casting will be discussed. Development of a novel 3D printing method for low cost production of composite preforms, and of functionally graded MMC to enhance rocket engine dimensional stability, will be presented.

  17. NASA advanced turboprop research and concept validation program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitlow, J.B. Jr.; Sievers, G.K.

    1988-01-01

    NASA has determined by experimental and analytical effort that use of advanced turboprop propulsion instead of the conventional turbofans in the older narrow-body airline fleet could reduce fuel consumption for this type of aircraft by up to 50 percent. In cooperation with industry, NASA has defined and implemented an Advanced Turboprop (ATP) program to develop and validate the technology required for these new high-speed, multibladed, thin, swept propeller concepts. This paper presents an overview of the analysis, model-scale test, and large-scale flight test elements of the program together with preliminary test results, as available.

  18. GREEN SUPERCOMPUTING IN A DESKTOP BOX

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HSU, CHUNG-HSING; FENG, WU-CHUN; CHING, AVERY

    2007-01-17

    The computer workstation, introduced by Sun Microsystems in 1982, was the tool of choice for scientists and engineers as an interactive computing environment for the development of scientific codes. However, by the mid-1990s, the performance of workstations began to lag behind high-end commodity PCs. This, coupled with the disappearance of BSD-based operating systems in workstations and the emergence of Linux as an open-source operating system for PCs, arguably led to the demise of the workstation as we knew it. Around the same time, computational scientists started to leverage PCs running Linux to create a commodity-based (Beowulf) cluster that provided dedicated computer cycles, i.e., supercomputing for the rest of us, as a cost-effective alternative to large supercomputers, i.e., supercomputing for the few. However, as the cluster movement has matured, with respect to cluster hardware and open-source software, these clusters have become much more like their large-scale supercomputing brethren - a shared (and power-hungry) datacenter resource that must reside in a machine-cooled room in order to operate properly. Consequently, the above observations, when coupled with the ever-increasing performance gap between the PC and cluster supercomputer, provide the motivation for a 'green' desktop supercomputer - a turnkey solution that provides an interactive and parallel computing environment with the approximate form factor of a Sun SPARCstation 1 'pizza box' workstation. In this paper, the authors present the hardware and software architecture of such a solution as well as its prowess as a developmental platform for parallel codes. In short, imagine a 12-node personal desktop supercomputer that achieves 14 Gflops on Linpack but sips only 185 watts of power at load, resulting in a performance-power ratio that is over 300% better than their reference SMP platform.
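    The quoted figures imply a performance-power ratio of roughly 76 Mflops per watt (a simple calculation from the numbers in the abstract; the reference SMP platform's own figures are not given here):

    ```python
    # Performance-per-watt from the figures quoted in the abstract:
    # 14 Gflops on Linpack at 185 watts under load.
    linpack_flops = 14e9   # 14 Gflops sustained on Linpack
    power_watts = 185      # measured power draw at load

    mflops_per_watt = linpack_flops / power_watts / 1e6
    print(round(mflops_per_watt, 1))  # ~75.7 Mflops/W
    ```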

  19. NASA Center for Climate Simulation (NCCS) Presentation

    NASA Technical Reports Server (NTRS)

    Webster, William P.

    2012-01-01

    The NASA Center for Climate Simulation (NCCS) offers integrated supercomputing, visualization, and data interaction technologies to enhance NASA's weather and climate prediction capabilities. It serves hundreds of users at NASA Goddard Space Flight Center, as well as other NASA centers, laboratories, and universities across the US. Over the past year, NCCS has continued expanding its data-centric computing environment to meet the increasingly data-intensive challenges of climate science. We doubled our Discover supercomputer's peak performance to more than 800 teraflops by adding 7,680 Intel Xeon Sandy Bridge processor-cores and most recently 240 Intel Xeon Phi Many Integrated Core (MIC) co-processors. A supercomputing-class analysis system named Dali gives users rapid access to their data on Discover and high-performance software including the Ultra-scale Visualization Climate Data Analysis Tools (UV-CDAT), with interfaces from user desktops and a 17- by 6-foot visualization wall. NCCS also is exploring highly efficient climate data services and management with a new MapReduce/Hadoop cluster while augmenting its data distribution to the science community. Using NCCS resources, NASA completed its modeling contributions to the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report this summer as part of the ongoing Coupled Model Intercomparison Project Phase 5 (CMIP5). Ensembles of simulations run on Discover reached back to the year 1000 to test model accuracy and projected climate change through the year 2300 based on four different scenarios of greenhouse gases, aerosols, and land use. The data resulting from several thousand IPCC/CMIP5 simulations, as well as a variety of other simulation, reanalysis, and observation datasets, are available to scientists and decision makers through an enhanced NCCS Earth System Grid Federation Gateway. Worldwide downloads have totaled over 110 terabytes of data.

  20. New NASA 3D Animation Shows Seven Days of Simulated Earth Weather

    NASA Image and Video Library

    2014-08-11

    This visualization shows early test renderings of a global computational model of Earth's atmosphere based on data from NASA's Goddard Earth Observing System Model, Version 5 (GEOS-5). This particular run, called Nature Run 2, was run on a supercomputer, spanned 2 years of simulation time at 30-minute intervals, and produced petabytes of output. The visualization spans a little more than 7 days of simulation time, which is 354 time steps. The time period was chosen because a simulated category-4 typhoon developed off the coast of China. The 7-day period is repeated several times during the course of the visualization. Credit: NASA's Scientific Visualization Studio. Read more or download here: svs.gsfc.nasa.gov/goto?4180

  1. Summary of NASA Advanced Telescope and Observatory Capability Roadmap

    NASA Technical Reports Server (NTRS)

    Stahl, H. Phil; Feinberg, Lee

    2006-01-01

    The NASA Advanced Telescope and Observatory (ATO) Capability Roadmap addresses technologies necessary for NASA to enable future space telescopes and observatories operating in all electromagnetic bands, from x-rays to millimeter waves, and including gravity-waves. It lists capability priorities derived from current and developing Space Missions Directorate (SMD) strategic roadmaps. Technology topics include optics; wavefront sensing and control and interferometry; distributed and advanced spacecraft systems; cryogenic and thermal control systems; large precision structure for observatories; and the infrastructure essential to future space telescopes and observatories.

  2. Summary of NASA Advanced Telescope and Observatory Capability Roadmap

    NASA Technical Reports Server (NTRS)

    Stahl, H. Philip; Feinberg, Lee

    2007-01-01

    The NASA Advanced Telescope and Observatory (ATO) Capability Roadmap addresses technologies necessary for NASA to enable future space telescopes and observatories operating in all electromagnetic bands, from x-rays to millimeter waves, and including gravity-waves. It lists capability priorities derived from current and developing Space Missions Directorate (SMD) strategic roadmaps. Technology topics include optics; wavefront sensing and control and interferometry; distributed and advanced spacecraft systems; cryogenic and thermal control systems; large precision structure for observatories; and the infrastructure essential to future space telescopes and observatories.

  3. TOP500 Supercomputers for November 2003

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack

    2003-11-16

    22nd Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 22nd edition of the TOP500 list of the world's fastest supercomputers was released today (November 16, 2003). The Earth Simulator supercomputer retains the number one position with its Linpack benchmark performance of 35.86 Tflop/s ('teraflops' or trillions of calculations per second). It was built by NEC and installed last year at the Earth Simulator Center in Yokohama, Japan.

  4. Supercomputing in Aerospace

    NASA Technical Reports Server (NTRS)

    Kutler, Paul; Yee, Helen

    1987-01-01

    Topics addressed include: numerical aerodynamic simulation; computational mechanics; supercomputers; aerospace propulsion systems; computational modeling in ballistics; turbulence modeling; computational chemistry; computational fluid dynamics; and computational astrophysics.

  5. NASA advanced cryocooler technology development program

    NASA Astrophysics Data System (ADS)

    Coulter, Daniel R.; Ross, Ronald G., Jr.; Boyle, Robert F.; Key, R. W.

    2003-03-01

    Mechanical cryocoolers represent a significant enabling technology for NASA's Earth and Space Science Enterprises. Over the years, NASA has developed new cryocooler technologies for a wide variety of space missions. Recent achievements include the NCS, AIRS, TES and HIRDLS cryocoolers, and miniature pulse tube coolers at TRW and Lockheed Martin. The largest technology push within NASA right now is in the temperature range of 4 to 10 K. Missions such as the Next Generation Space Telescope (NGST) and Terrestrial Planet Finder (TPF) plan to use infrared detectors operating between 6-8 K, typically arsenic-doped silicon arrays, with IR telescopes from 3 to 6 meters in diameter. Similarly, Constellation-X plans to use X-ray microcalorimeters operating at 50 mK and will require ~6 K cooling to precool its multistage 50 mK magnetic refrigerator. To address cryocooler development for these next-generation missions, NASA has initiated a program referred to as the Advanced Cryocooler Technology Development Program (ACTDP). This paper presents an overview of the ACTDP program including programmatic objectives and timelines, and conceptual details of the cooler concepts under development.

  6. 48 CFR 225.7012 - Restriction on supercomputers.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 3 2014-10-01 2014-10-01 false Restriction on supercomputers. 225.7012 Section 225.7012 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS... supercomputers. ...

  7. 48 CFR 225.7012 - Restriction on supercomputers.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Restriction on supercomputers. 225.7012 Section 225.7012 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS... supercomputers. ...

  8. 48 CFR 225.7012 - Restriction on supercomputers.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 3 2013-10-01 2013-10-01 false Restriction on supercomputers. 225.7012 Section 225.7012 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS... supercomputers. ...

  9. 48 CFR 225.7012 - Restriction on supercomputers.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 3 2011-10-01 2011-10-01 false Restriction on supercomputers. 225.7012 Section 225.7012 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS... supercomputers. ...

  10. 48 CFR 225.7012 - Restriction on supercomputers.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 3 2012-10-01 2012-10-01 false Restriction on supercomputers. 225.7012 Section 225.7012 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS... supercomputers. ...

  11. An overview of the NASA Advanced Propulsion Concepts program

    NASA Technical Reports Server (NTRS)

    Curran, Francis M.; Bennett, Gary L.; Frisbee, Robert H.; Sercel, Joel C.; Lapointe, Michael R.

    1992-01-01

    The NASA Advanced Propulsion Concepts (APC) program, managed jointly by NASA Lewis and JPL, is tasked with the identification and conceptual development of high-risk/high-payoff propulsion configurations for long-term space missions. Both theoretical and experimental investigations have been undertaken in technology areas deemed essential to the implementation of candidate concepts. The APC candidates encompass very high energy density chemical propulsion systems, advanced electric propulsion systems, and an antiproton-catalyzed nuclear propulsion concept. A development status evaluation is presented for these systems.

  12. Fostering Visions for the Future: A Review of the NASA Institute for Advanced Concepts

    NASA Technical Reports Server (NTRS)

    2009-01-01

    The NASA Institute for Advanced Concepts (NIAC) was formed in 1998 to provide an independent source of advanced aeronautical and space concepts that could dramatically impact how NASA develops and conducts its missions. Until the program's termination in August 2007, NIAC provided an independent open forum, a high-level point of entry to NASA for an external community of innovators, and an external capability for analysis and definition of advanced aeronautics and space concepts to complement the advanced concept activities conducted within NASA. Throughout its 9-year existence, NIAC inspired an atmosphere for innovation that stretched the imagination and encouraged creativity. As requested by Congress, this volume reviews the effectiveness of NIAC and makes recommendations concerning the importance of such a program to NASA and to the nation as a whole, including the proper role of NASA and the federal government in fostering scientific innovation and creativity and in developing advanced concepts for future systems. Key findings and recommendations include that in order to achieve its mission, NASA must have, and is currently lacking, a mechanism to investigate visionary, far-reaching advanced concepts. Therefore, a NIAC-like entity should be reestablished to fill this gap.

  13. Advancing Test Capabilities at NASA Wind Tunnels

    NASA Technical Reports Server (NTRS)

    Bell, James

    2015-01-01

    NASA maintains twelve major wind tunnels at three field centers capable of providing flows at Mach numbers from 0.1 to 10 and unit Reynolds numbers up to 45×10^6/m. The maintenance and enhancement of these facilities is handled through a unified management structure under NASA's Aeronautics Evaluation and Test Capabilities (AETC) project. The AETC facilities are: the 11x11 transonic and 9x7 supersonic wind tunnels at NASA Ames; the 10x10 and 8x6 supersonic wind tunnels, 9x15 low speed tunnel, Icing Research Tunnel, and Propulsion Simulator Laboratory, all at NASA Glenn; and the National Transonic Facility, Transonic Dynamics Tunnel, LAL aerothermodynamics laboratory, 8-Foot High Temperature Tunnel, and 14x22 low speed tunnel, all at NASA Langley. This presentation describes the primary AETC facilities and their current capabilities, as well as improvements which are planned over the next five years. These improvements fall into three categories. The first are operations and maintenance improvements designed to increase the efficiency and reliability of the wind tunnels. These include new (possibly composite) fan blades at several facilities, new temperature control systems, and new and much more capable facility data systems. The second category of improvements are facility capability advancements. These include significant improvements to optical access in wind tunnel test sections at Ames, improvements to test section acoustics at Glenn and Langley, the development of a Supercooled Large Droplet capability for icing research, and the development of an icing capability for large engine testing. The final category of improvements consists of test technology enhancements which provide value across multiple facilities. These include projects to increase balance accuracy, provide NIST-traceable calibration characterization for wind tunnels, and to advance optical instruments for Computational Fluid Dynamics (CFD) validation. Taken as a whole, these individual projects provide significant

  14. NASA High Performance Computing and Communications program

    NASA Technical Reports Server (NTRS)

    Holcomb, Lee; Smith, Paul; Hunter, Paul

    1994-01-01

    The National Aeronautics and Space Administration's HPCC program is part of a new Presidential initiative aimed at producing a 1000-fold increase in supercomputing speed and a 100-fold improvement in available communications capability by 1997. As more advanced technologies are developed under the HPCC program, they will be used to solve NASA's 'Grand Challenge' problems, which include improving the design and simulation of advanced aerospace vehicles, allowing people at remote locations to communicate more effectively and share information, increasing scientists' abilities to model the Earth's climate and forecast global environmental trends, and improving the development of advanced spacecraft. NASA's HPCC program is organized into three projects which are unique to the agency's mission: the Computational Aerosciences (CAS) project, the Earth and Space Sciences (ESS) project, and the Remote Exploration and Experimentation (REE) project. An additional project, the Basic Research and Human Resources (BRHR) project, exists to promote long term research in computer science and engineering and to increase the pool of trained personnel in a variety of scientific disciplines. This document presents an overview of the objectives and organization of these projects, as well as summaries of early accomplishments and the significance, status, and plans for individual research and development programs within each project. Areas of emphasis include benchmarking, testbeds, software and simulation methods.

  15. The NASA Advanced Communications Technology Satellite (ACTS)

    NASA Astrophysics Data System (ADS)

    Beck, G. A.

    1984-10-01

    Forecasts indicate that a saturation of the capacity of the satellite communications service will occur in the U.S. domestic market by the early 1990s. In order to prevent this from happening, advanced technologies must be developed. NASA has been concerned with such a development. One key is the exploitation of the Ka-band (30/20 GHz), which is much wider than C- and Ku-bands together. Another is the use of multiple narrow antenna beams in the satellite to achieve large frequency reuse factors with very high antenna gains. NASA has developed proof-of-concept hardware components which form the basis for a flight demonstration. The Advanced Communications Technology Satellite (ACTS) system will provide this demonstration. Attention is given to the ACTS Program definition, the ACTS Flight System, the Multibeam Communications Package, and the spacecraft bus.

  16. Input/output behavior of supercomputing applications

    NASA Technical Reports Server (NTRS)

    Miller, Ethan L.

    1991-01-01

    The collection and analysis of supercomputer I/O traces and their use in a collection of buffering and caching simulations are described. This serves two purposes. First, it gives a model of how individual applications running on supercomputers request file system I/O, allowing system designers to optimize I/O hardware and file system algorithms to that model. Second, the buffering simulations show what resources are needed to maximize the CPU utilization of a supercomputer given a very bursty I/O request rate. By using read-ahead and write-behind in a large solid-state disk, one or two applications were sufficient to fully utilize a Cray Y-MP CPU.
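The effect described above can be sketched with a toy model (the burst sizes, buffer capacity, and drain rate below are illustrative assumptions, not figures from the study): a write-behind buffer absorbs each I/O burst and drains to disk while the CPU computes, so the CPU stalls only when a burst overflows the buffer.

```python
# Toy model of write-behind buffering for bursty supercomputer I/O.
# All parameters are hypothetical; the point is the bookkeeping, not the numbers.

def cpu_utilization(bursts, buffer_mb, drain_mb_per_s, compute_s_between_bursts):
    """Estimate CPU utilization for a bursty I/O pattern.

    bursts: list of burst sizes (MB) issued at the end of each compute phase.
    The buffer drains to disk during the next compute phase; the CPU stalls
    only while an overflow is forced out to the backing store.
    """
    busy = 0.0
    stalled = 0.0
    buffered = 0.0  # MB currently held in the write-behind buffer
    for burst in bursts:
        busy += compute_s_between_bursts
        # the buffer drains while the CPU is computing
        buffered = max(0.0, buffered - drain_mb_per_s * compute_s_between_bursts)
        overflow = max(0.0, buffered + burst - buffer_mb)
        # CPU stalls until the overflow has reached disk
        stalled += overflow / drain_mb_per_s
        buffered = min(buffer_mb, buffered + burst - overflow)
    return busy / (busy + stalled)
```

With a large buffer the utilization approaches 1.0; with no buffer at all, the same workload spends half its time stalled in this example.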

  17. Development of Metal Matrix Composites for NASA's Advanced Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Lee, J.; Elam, S.

    2001-01-01

    The state-of-the-art development of several Metal Matrix Composites (MMC) for NASA's advanced propulsion systems will be presented. The goal is to provide an overview of NASA-Marshall Space Flight Center's on-going activities in MMC components for advanced liquid rocket engines such as the X-33 vehicle's Aerospike engine and X-34's Fastrac engine. The focus will be on the light weight, low cost, and oxygen and hydrogen environmental compatibility of key MMC materials, within each of NASA's new propulsion applications, that will provide a high payoff for NASA's Reusable Launch Vehicles and space access vehicles. In order to fabricate structures from MMC, effective joining methods must be developed to join MMC to the same or to different monolithic alloys. Therefore, a qualitative assessment of MMC welding and joining techniques will be outlined.

  18. OpenMP Performance on the Columbia Supercomputer

    NASA Technical Reports Server (NTRS)

    Haoqiang, Jin; Hood, Robert

    2005-01-01

    This presentation discusses the Columbia supercomputer, one of the world's fastest systems, providing 61 TFLOPs as of October 20, 2004. Columbia was conceived, designed, built, and deployed in just 120 days. It is a 20-node supercomputer built on proven 512-processor nodes, and the largest SGI system in the world, with over 10,000 Intel Itanium 2 processors. It provides the largest node size incorporating commodity parts (512 processors) and the largest shared-memory environment (2,048 processors), and, with 88% efficiency, it tops the scalar systems on the Top500 list.

  19. Improved Access to Supercomputers Boosts Chemical Applications.

    ERIC Educational Resources Information Center

    Borman, Stu

    1989-01-01

    Supercomputing is described in terms of computing power and abilities. The increase in availability of supercomputers for use in chemical calculations and modeling is reported. Efforts of the National Science Foundation and Cray Research are highlighted. (CW)

  20. NASA/USRA University advanced design program

    NASA Technical Reports Server (NTRS)

    Lembeck, Michael F.; Prussing, John

    1989-01-01

    The participation of the University of Illinois at Urbana-Champaign in the NASA/USRA University Advanced Design Program for the 1988 to 1989 academic year is reviewed. The University's design project was the Logistics Resupply and Emergency Crew Return System for Space Station Freedom. Sixty-one students, divided into eight groups, participated in the spring 1989 semester. A presentation prepared by three students and a graduate teaching assistant for the program's summer conference summarized the project results. Teamed with the NASA Marshall Space Flight Center (MSFC), the University received support in the form of remote telecon lectures, reference material, and previously acquired applications software. In addition, a graduate teaching assistant was awarded a summer 1989 internship at MSFC.

  1. NASA Advanced Exploration Systems: Advancements in Life Support Systems

    NASA Technical Reports Server (NTRS)

    Shull, Sarah A.; Schneider, Walter F.

    2016-01-01

    The NASA Advanced Exploration Systems (AES) Life Support Systems (LSS) project strives to develop reliable, energy-efficient, and low-mass spacecraft systems to provide environmental control and life support systems (ECLSS) critical to enabling long duration human missions beyond low Earth orbit (LEO). Highly reliable, closed-loop life support systems are among the capabilities required for the longer duration human space exploration missions assessed by NASA’s Habitability Architecture Team.

  2. Advanced Fuel Cell System Thermal Management for NASA Exploration Missions

    NASA Technical Reports Server (NTRS)

    Burke, Kenneth A.

    2009-01-01

    The NASA Glenn Research Center is developing advanced passive thermal management technology to reduce the mass and improve the reliability of space fuel cell systems for the NASA exploration program. An analysis of state-of-the-art fuel cell cooling systems was done to benchmark the portion of a fuel cell system's mass that is dedicated to thermal management. Additional analysis was done to determine the key performance targets of the advanced passive thermal management technology that would substantially reduce fuel cell system mass.

  3. Seismic signal processing on heterogeneous supercomputers

    NASA Astrophysics Data System (ADS)

    Gokhberg, Alexey; Ermert, Laura; Fichtner, Andreas

    2015-04-01

    The processing of seismic signals - including the correlation of massive ambient noise data sets - represents an important part of a wide range of seismological applications. It is characterized by large data volumes as well as high computational input/output intensity. Development of efficient approaches towards seismic signal processing on emerging high performance computing systems is therefore essential. Heterogeneous supercomputing systems introduced in recent years provide numerous computing nodes interconnected via high throughput networks, every node containing a mix of processing elements of different architectures, such as several sequential processor cores and one or a few graphical processing units (GPU) serving as accelerators. A typical representative of such computing systems is "Piz Daint", a supercomputer of the Cray XC 30 family operated by the Swiss National Supercomputing Center (CSCS), which we used in this research. Heterogeneous supercomputers provide an opportunity for manifold application performance increase and are more energy-efficient; however, they have much higher hardware complexity and are therefore much more difficult to program. The programming effort may be substantially reduced by the introduction of modular libraries of software components that can be reused for a wide class of seismology applications. The ultimate goal of this research is the design of a prototype for such a library, suitable for implementing various seismic signal processing applications on heterogeneous systems. As a representative use case we have chosen an ambient noise correlation application. Ambient noise interferometry has developed into one of the most powerful tools to image and monitor the Earth's interior. Future applications will require the extraction of increasingly small details from noise recordings. To meet this demand, more advanced correlation techniques combined with very large data volumes are needed. This poses new computational problems that
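The correlation kernel at the heart of such an application can be sketched as follows. This is a generic frequency-domain cross-correlation, not code from the library described above; the same FFT-based structure is what maps well onto GPU FFT libraries on heterogeneous nodes.

```python
import numpy as np

def noise_correlation(trace_a, trace_b, nfft=None):
    """Circular cross-correlation of two ambient-noise records via the
    frequency domain, the standard O(n log n) approach.

    Returns an array of length nfft; lag k is at index k, negative lags
    wrap to the end of the array.
    """
    if nfft is None:
        # zero-pad to the next power of two covering all linear lags
        nfft = 1 << (len(trace_a) + len(trace_b) - 1).bit_length()
    spec = np.fft.rfft(trace_a, nfft) * np.conj(np.fft.rfft(trace_b, nfft))
    return np.fft.irfft(spec, nfft)
```

For an autocorrelation the zero-lag sample equals the signal's energy, which gives a quick sanity check on the implementation.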

  4. Desktop supercomputer: what can it do?

    NASA Astrophysics Data System (ADS)

    Bogdanov, A.; Degtyarev, A.; Korkhov, V.

    2017-12-01

    The paper addresses the issues of solving complex problems that require supercomputers or multiprocessor clusters, which are now available to most researchers. Efficient distribution of high performance computing resources according to actual application needs has been a major research topic since high-performance computing (HPC) technologies became widely introduced. At the same time, comfortable and transparent access to these resources has been a key user requirement. In this paper we discuss approaches to building a virtual private supercomputer available at the user's desktop: a virtual computing environment tailored specifically for a target user with a particular target application. We describe and evaluate possibilities to create the virtual supercomputer based on light-weight virtualization technologies, and analyze the efficiency of our approach compared to traditional methods of HPC resource management.

  5. Energy Efficient Supercomputing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anypas, Katie

    2014-10-17

    Katie Anypas, Head of NERSC's Services Department discusses the Lab's research into developing increasingly powerful and energy efficient supercomputers at our '8 Big Ideas' Science at the Theater event on October 8th, 2014, in Oakland, California.

  6. Energy Efficient Supercomputing

    ScienceCinema

    Anypas, Katie

    2018-05-07

    Katie Anypas, Head of NERSC's Services Department discusses the Lab's research into developing increasingly powerful and energy efficient supercomputers at our '8 Big Ideas' Science at the Theater event on October 8th, 2014, in Oakland, California.

  7. Advanced Information Technology Investments at the NASA Earth Science Technology Office

    NASA Astrophysics Data System (ADS)

    Clune, T.; Seablom, M. S.; Moe, K.

    2012-12-01

    The NASA Earth Science Technology Office (ESTO) regularly makes investments for nurturing advanced concepts in information technology to enable rapid, low-cost acquisition, processing and visualization of Earth science data in support of future NASA missions and climate change research. In 2012, the National Research Council published a mid-term assessment of the 2007 decadal survey for future space missions supporting Earth science and applications [1]. The report stated, "Earth sciences have advanced significantly because of existing observational capabilities and the fruit of past investments, along with advances in data and information systems, computer science, and enabling technologies." The report found that NASA had responded favorably and aggressively to the decadal survey and noted the role of the recent ESTO solicitation for information systems technologies that partnered with the NASA Applied Sciences Program to support the transition into operations. NASA's future missions are key stakeholders for the ESTO technology investments. Also driving these investments is the need for the Agency to properly address questions regarding the prediction, adaptation, and eventual mitigation of climate change. The Earth Science Division has championed interdisciplinary research, recognizing that the Earth must be studied as a complete system in order to address key science questions [2]. Information technology investments in the low-mid technology readiness level (TRL) range play a key role in meeting these challenges. ESTO's Advanced Information Systems Technology (AIST) program invests in higher risk / higher reward technologies that solve the most challenging problems of the information processing chain. This includes the space segment, where the information pipeline begins, to the end user, where knowledge is ultimately advanced. The objectives of the program are to reduce the risk, cost, size, and development time of Earth Science space-based and ground

  8. Extreme Scale Plasma Turbulence Simulations on Top Supercomputers Worldwide

    DOE PAGES

    Tang, William; Wang, Bei; Ethier, Stephane; ...

    2016-11-01

    The goal of the extreme scale plasma turbulence studies described in this paper is to expedite the delivery of reliable predictions on confinement physics in large magnetic fusion systems by using world-class supercomputers to carry out simulations with unprecedented resolution and temporal duration. This has involved architecture-dependent optimizations of performance scaling and addressing code portability and energy issues, with the metrics for multi-platform comparisons being 'time-to-solution' and 'energy-to-solution'. Realistic results addressing how confinement losses caused by plasma turbulence scale from present-day devices to the much larger $25 billion international ITER fusion facility have been enabled by innovative advances in the GTC-P code including (i) implementation of one-sided communication from the MPI 3.0 standard; (ii) creative optimization techniques on Xeon Phi processors; and (iii) development of a novel performance model for the key kernels of the PIC code. Our results show that modeling data movement is sufficient to predict performance on modern supercomputer platforms.
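The claim that modeling data movement suffices to predict performance can be illustrated with a minimal sketch. This is not the paper's actual performance model; it simply shows the idea for a memory-bound kernel, where predicted time is bytes moved divided by sustained bandwidth, and energy-to-solution follows from average power draw. All numbers in the usage note are invented.

```python
# Minimal data-movement performance model for a memory-bound kernel.
# Assumed inputs: total bytes moved per kernel invocation and the platform's
# sustained (not peak) memory bandwidth; both would be measured in practice.

def predicted_time_s(bytes_moved, bandwidth_gb_s):
    """Time-to-solution estimate: bytes moved / sustained bandwidth."""
    return bytes_moved / (bandwidth_gb_s * 1e9)

def energy_to_solution_j(bytes_moved, bandwidth_gb_s, avg_power_w):
    """Energy-to-solution estimate: average power x predicted time."""
    return avg_power_w * predicted_time_s(bytes_moved, bandwidth_gb_s)
```

For example, a kernel moving 5 GB on a node sustaining 100 GB/s would be predicted to take 0.05 s; at a hypothetical 2 kW node power that is 100 J per invocation.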

  9. The role of graphics super-workstations in a supercomputing environment

    NASA Technical Reports Server (NTRS)

    Levin, E.

    1989-01-01

    A new class of very powerful workstations has recently become available which integrate near supercomputer computational performance with very powerful and high quality graphics capability. These graphics super-workstations are expected to play an increasingly important role in providing an enhanced environment for supercomputer users. Their potential uses include: off-loading the supercomputer (by serving as stand-alone processors, by post-processing of the output of supercomputer calculations, and by distributed or shared processing), scientific visualization (understanding of results, communication of results), and by real time interaction with the supercomputer (to steer an iterative computation, to abort a bad run, or to explore and develop new algorithms).

  10. Mira: Argonne's 10-petaflops supercomputer

    ScienceCinema

    Papka, Michael; Coghlan, Susan; Isaacs, Eric; Peters, Mark; Messina, Paul

    2018-02-13

    Mira, Argonne's petascale IBM Blue Gene/Q system, ushers in a new era of scientific supercomputing at the Argonne Leadership Computing Facility. An engineering marvel, the 10-petaflops supercomputer is capable of carrying out 10 quadrillion calculations per second. As a machine for open science, any researcher with a question that requires large-scale computing resources can submit a proposal for time on Mira, typically in allocations of millions of core-hours, to run programs for their experiments. This adds up to billions of hours of computing time per year.

  11. Mira: Argonne's 10-petaflops supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Papka, Michael; Coghlan, Susan; Isaacs, Eric

    2013-07-03

    Mira, Argonne's petascale IBM Blue Gene/Q system, ushers in a new era of scientific supercomputing at the Argonne Leadership Computing Facility. An engineering marvel, the 10-petaflops supercomputer is capable of carrying out 10 quadrillion calculations per second. As a machine for open science, any researcher with a question that requires large-scale computing resources can submit a proposal for time on Mira, typically in allocations of millions of core-hours, to run programs for their experiments. This adds up to billions of hours of computing time per year.

  12. 48 CFR 252.225-7011 - Restriction on acquisition of supercomputers.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... of supercomputers. 252.225-7011 Section 252.225-7011 Federal Acquisition Regulations System DEFENSE... CLAUSES Text of Provisions And Clauses 252.225-7011 Restriction on acquisition of supercomputers. As prescribed in 225.7012-3, use the following clause: Restriction on Acquisition of Supercomputers (JUN 2005...

  13. 48 CFR 252.225-7011 - Restriction on acquisition of supercomputers.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... of supercomputers. 252.225-7011 Section 252.225-7011 Federal Acquisition Regulations System DEFENSE... CLAUSES Text of Provisions And Clauses 252.225-7011 Restriction on acquisition of supercomputers. As prescribed in 225.7012-3, use the following clause: Restriction on Acquisition of Supercomputers (JUN 2005...

  14. 48 CFR 252.225-7011 - Restriction on acquisition of supercomputers.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... of supercomputers. 252.225-7011 Section 252.225-7011 Federal Acquisition Regulations System DEFENSE... CLAUSES Text of Provisions And Clauses 252.225-7011 Restriction on acquisition of supercomputers. As prescribed in 225.7012-3, use the following clause: Restriction on Acquisition of Supercomputers (JUN 2005...

  15. 48 CFR 252.225-7011 - Restriction on acquisition of supercomputers.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... of supercomputers. 252.225-7011 Section 252.225-7011 Federal Acquisition Regulations System DEFENSE... CLAUSES Text of Provisions And Clauses 252.225-7011 Restriction on acquisition of supercomputers. As prescribed in 225.7012-3, use the following clause: Restriction on Acquisition of Supercomputers (JUN 2005...

  16. 48 CFR 252.225-7011 - Restriction on acquisition of supercomputers.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... of supercomputers. 252.225-7011 Section 252.225-7011 Federal Acquisition Regulations System DEFENSE... CLAUSES Text of Provisions And Clauses 252.225-7011 Restriction on acquisition of supercomputers. As prescribed in 225.7012-3, use the following clause: Restriction on Acquisition of Supercomputers (JUN 2005...

  17. Advances in the NASA Earth Science Division Applied Science Program

    NASA Astrophysics Data System (ADS)

    Friedl, L.; Bonniksen, C. K.; Escobar, V. M.

    2016-12-01

    The NASA Earth Science Division's Applied Science Program advances the understanding of and ability to used remote sensing data in support of socio-economic needs. The integration of socio-economic considerations in to NASA Earth Science projects has advanced significantly. The large variety of acquisition methods used has required innovative implementation options. The integration of application themes and the implementation of application science activities in flight project is continuing to evolve. The creation of the recently released Earth Science Division, Directive on Project Applications Program and the addition of an application science requirement in the recent EVM-2 solicitation document NASA's current intent. Continuing improvement in the Earth Science Applications Science Program are expected in the areas of thematic integration, Project Applications Program tailoring for Class D missions and transfer of knowledge between scientists and projects.

  18. Advanced Stirling Technology Development at NASA Glenn Research Center

    NASA Technical Reports Server (NTRS)

    Shaltens, Richard K.; Wong, Wayne A.

    2007-01-01

    The NASA Glenn Research Center has been developing advanced energy-conversion technologies for use with both radioisotope power systems and fission surface power systems for many decades. Under NASA's Science Mission Directorate, Planetary Science Theme, Technology Program, Glenn is developing the next generation of advanced Stirling convertors (ASCs) for use in the Department of Energy/Lockheed Martin Advanced Stirling Radioisotope Generator (ASRG). The next-generation power-conversion technologies require high efficiency and high specific power (watts electric per kilogram) to meet future mission requirements to use less of the Department of Energy's plutonium-fueled general-purpose heat source modules and reduce system mass. Important goals include long-life (greater than 14-yr) reliability and scalability so that these systems can be considered for a variety of future applications and missions including outer-planet missions and continual operation on the surface of Mars. This paper provides an update of the history and status of the ASC being developed for Glenn by Sunpower Inc. of Athens, Ohio.

  19. NASA's Advanced Radioisotope Power Conversion Technology Development Status

    NASA Technical Reports Server (NTRS)

    Anderson, David J.; Sankovic, John; Wilt, David; Abelson, Robert D.; Fleurial, Jean-Pierre

    2007-01-01

    NASA's Advanced Radioisotope Power Systems (ARPS) project is developing the next generation of radioisotope power conversion technologies that will enable future missions that have requirements that cannot be met by either photovoltaic systems or by current radioisotope power systems (RPSs). Requirements of advanced RPSs include high efficiency and high specific power (watts/kilogram) in order to meet future mission requirements with less radioisotope fuel and lower mass so that these systems can meet requirements for a variety of future space applications, including continual operation surface missions, outer-planetary missions, and solar probe. These advances would enable a factor of 2 to 4 decrease in the amount of fuel required to generate electrical power. Advanced RPS development goals also include long-life, reliability, and scalability. This paper provides an update on the contractual efforts under the Radioisotope Power Conversion Technology (RPCT) NASA Research Announcement (NRA) for research and development of Stirling, thermoelectric, and thermophotovoltaic power conversion technologies. The paper summarizes the current RPCT NRA efforts with a brief description of the effort, a status and/or summary of the contractor's key accomplishments, a discussion of upcoming plans, and a discussion of relevant system-level benefits and implications. The paper also provides a general discussion of the benefits from the development of these advanced power conversion technologies and the eventual payoffs to future missions (discussing system benefits due to overall improvements in efficiency, specific power, etc.).

  20. Engine Seal Technology Requirements to Meet NASA's Advanced Subsonic Technology Program Goals

    NASA Technical Reports Server (NTRS)

    Steinetz, Bruce M.; Hendricks, Robert C.

    1994-01-01

    Cycle studies have shown the benefits of increasing engine pressure ratios and cycle temperatures to decrease engine weight and improve performance of commercial turbine engines. NASA is working with industry to define technology requirements of advanced engines and engine technology to meet the goals of NASA's Advanced Subsonic Technology Initiative. As engine operating conditions become more severe and customers demand lower operating costs, NASA and engine manufacturers are investigating methods of improving engine efficiency and reducing operating costs. A number of new technologies are being examined that will allow next generation engines to operate at higher pressures and temperatures. Improving seal performance - reducing leakage and increasing service life while operating under more demanding conditions - will play an important role in meeting overall program goals of reducing specific fuel consumption and ultimately reducing direct operating costs. This paper provides an overview of the Advanced Subsonic Technology program goals, discusses the motivation for advanced seal development, and highlights seal technology requirements to meet future engine performance goals.

  1. Towards Efficient Supercomputing: Searching for the Right Efficiency Metric

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hsu, Chung-Hsing; Kuehn, Jeffery A; Poole, Stephen W

    2012-01-01

    The efficiency of supercomputing has traditionally been measured in execution time. In the early 2000s, the concept of total cost of ownership was re-introduced, with efficiency measures extended to include aspects such as energy and space. Yet the supercomputing community has never agreed upon a metric that can cover these aspects altogether and also provide a fair basis for comparison. This paper examines the metrics that have been proposed in the past decade, and proposes a vector-valued metric for efficient supercomputing. Using this metric, the paper presents a study of where the supercomputing industry has been and how it stands today with respect to efficient supercomputing.

  2. Performance of the Widely-Used CFD Code OVERFLOW on the Pleiades Supercomputer

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.

    2017-01-01

    Computational performance studies were made for NASA's widely used Computational Fluid Dynamics code OVERFLOW on the Pleiades Supercomputer. Two test cases were considered: a full launch vehicle with a grid of 286 million points and a full rotorcraft model with a grid of 614 million points. Computations using up to 8000 cores were run on Sandy Bridge and Ivy Bridge nodes. Performance was monitored using times reported in the day files from the Portable Batch System utility. Results for two grid topologies are presented and compared in detail. Observations and suggestions for future work are made.

  3. TOP500 Supercomputers for June 2003

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack

    2003-06-23

    21st Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 21st edition of the TOP500 list of the world's fastest supercomputers was released today (June 23, 2003). The Earth Simulator supercomputer built by NEC and installed last year at the Earth Simulator Center in Yokohama, Japan, with its Linpack benchmark performance of 35.86 Tflop/s (teraflops, or trillions of calculations per second), retains the number one position. The number 2 position is held by the re-measured ASCI Q system at Los Alamos National Laboratory. With 13.88 Tflop/s, it is the second system ever to exceed the 10 Tflop/s mark. ASCI Q was built by Hewlett-Packard and is based on the AlphaServer SC computer system.

  4. NASA's Space Launch System Advanced Booster Development

    NASA Technical Reports Server (NTRS)

    Robinson, Kimberly F.; Crumbly, Christopher M.; May, Todd A.

    2014-01-01

    The National Aeronautics and Space Administration's (NASA's) Space Launch System (SLS) Program, managed at the Marshall Space Flight Center, is making progress toward delivering a new capability for human space flight and scientific missions beyond Earth orbit. NASA is executing this development within flat budgetary guidelines by using existing engine assets and heritage technology to ready an initial 70 metric ton (t) lift capability for launch in 2017, and then employing a block upgrade approach to evolve a 130-t capability after 2021. A key component of the SLS acquisition plan is a three-phased approach for the first-stage boosters. The first phase is to expedite the 70-t configuration by completing development of the Space Shuttle heritage 5-segment solid rocket boosters (SRBs) for the initial flights of SLS. Since no existing boosters can meet the performance requirements for the 130-t class SLS, the next phases of the strategy focus on the eventual development of advanced boosters with an expected thrust class potentially double the current 5-segment solid rocket booster capability of 3.88 million pounds of thrust each. The second phase in the booster acquisition plan is the Advanced Booster Engineering Demonstration and/or Risk Reduction (ABEDRR) effort, for which contracts were awarded beginning in 2012 after a full and open competition, with a stated intent to reduce risks leading to an affordable advanced booster. NASA has awarded ABEDRR contracts to four industry teams, which are looking into new options for liquid-fuel booster engines, solid-fuel-motor propellants, and composite booster structures. Demonstrations and/or risk reduction efforts were required to be related to a proposed booster concept directly applicable to fielding an advanced booster. This paper will discuss the status of this acquisition strategy and its results toward readying both the 70-t and 130-t configurations of SLS. The third and final phase will be a full and open

  5. Requirements and Usage of NVM in Advanced Onboard Data Processing Systems

    NASA Technical Reports Server (NTRS)

    Some, R.

    2001-01-01

    This viewgraph presentation gives an overview of the requirements and uses of non-volatile memory (NVM) in advanced onboard data processing systems. Supercomputing in space presents the only viable approach to the bandwidth problem (can't get data down to Earth), controlling constellations of cooperating satellites, reducing mission operating costs, and real-time intelligent decision making and science data gathering. Details are given on the REE vision and impact on NASA and Department of Defense missions, objectives of REE, baseline architecture, and issues. NVM uses and requirements are listed.

  6. Proceedings of the Ninth Annual Summer Conference: NASA/USRA University Advanced Aeronautics Design Program and Advanced Space Design Program

    NASA Technical Reports Server (NTRS)

    1993-01-01

    The NASA/USRA University Advanced Design Program was established in 1984 as an attempt to add more and better design education to primarily undergraduate engineering programs. The original focus of the pilot program, encompassing nine universities and five NASA centers, was on space design. Two years later, the program was expanded to include aeronautics design, with six universities and three NASA centers participating. This year marks the last of a three-year cycle of participation by forty-one universities, eight NASA centers, and one industry participant. The Advanced Space Design Program offers universities an opportunity to plan and design missions and hardware that would be of use in the future as NASA enters a new era of exploration and discovery, while the Advanced Aeronautics Design Program generally offers opportunities for study of design problems closer to the present time, ranging from small, slow-speed vehicles to large, supersonic and hypersonic passenger transports. The systems approach to the design problem is emphasized in both the space and aeronautics projects. The student teams pursue the chosen problem during their senior year in a one- or two-semester capstone design course and submit a comprehensive written report at the conclusion of the project. Finally, student representatives from each of the universities summarize their work in oral presentations at the Annual Summer Conference, sponsored by one of the NASA centers and attended by university faculty, NASA and USRA personnel, and aerospace industry representatives. As the Advanced Design Program has grown in size, it has also matured in terms of the quality of the student projects. The present volume represents the student work accomplished during the 1992-1993 academic year reported at the Ninth Annual Summer Conference hosted by NASA Lyndon B. Johnson Space Center, June 14-18, 1993.

  7. Enabling Diverse Software Stacks on Supercomputers using High Performance Virtual Clusters.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Younge, Andrew J.; Pedretti, Kevin; Grant, Ryan

    While large-scale simulations have been the hallmark of the High Performance Computing (HPC) community for decades, Large Scale Data Analytics (LSDA) workloads are gaining attention within the scientific community, not only as a processing component of large HPC simulations but also as standalone scientific tools for knowledge discovery. With the path towards Exascale, new HPC runtime systems are also emerging in a way that differs from classical distributed computing models. However, system software for such capabilities on the latest extreme-scale DOE supercomputers needs to be enhanced to more appropriately support these types of emerging software ecosystems. In this paper, we propose the use of Virtual Clusters on advanced supercomputing resources to enable systems to support not only HPC workloads, but also emerging big data stacks. Specifically, we have deployed the KVM hypervisor within Cray's Compute Node Linux on an XC-series supercomputer testbed. We also use libvirt and QEMU to manage and provision VMs directly on compute nodes, leveraging Ethernet-over-Aries network emulation. To our knowledge, this is the first known use of KVM on a true MPP supercomputer. We investigate the overhead of our solution using HPC benchmarks, evaluating both single-node performance and weak scaling of a 32-node virtual cluster. Overall, we find that single-node performance of our solution using KVM on a Cray is very efficient, with near-native performance. However, overhead increases by up to 20% as virtual cluster size increases, due to limitations of the Ethernet-over-Aries bridged network. Furthermore, we deploy Apache Spark with large data analysis workloads in a Virtual Cluster, effectively demonstrating how diverse software ecosystems can be supported by High Performance Virtual Clusters.

  8. An Implementation Plan for NFS at NASA's NAS Facility

    NASA Technical Reports Server (NTRS)

    Lam, Terance L.; Kutler, Paul (Technical Monitor)

    1998-01-01

    This document discusses how NASA's NAS can benefit from the Sun Microsystems' Network File System (NFS). A case study is presented to demonstrate the effects of NFS on the NAS supercomputing environment. Potential problems are addressed and an implementation strategy is proposed.

  9. NSF Commits to Supercomputers.

    ERIC Educational Resources Information Center

    Waldrop, M. Mitchell

    1985-01-01

    The National Science Foundation (NSF) has allocated at least $200 million over the next five years to support four new supercomputer centers. Issues and trends related to this NSF initiative are examined. (JN)

  10. C3: A Collaborative Web Framework for NASA Earth Exchange

    NASA Astrophysics Data System (ADS)

    Foughty, E.; Fattarsi, C.; Hardoyo, C.; Kluck, D.; Wang, L.; Matthews, B.; Das, K.; Srivastava, A.; Votava, P.; Nemani, R. R.

    2010-12-01

    The NASA Earth Exchange (NEX) is a new collaboration platform for the Earth science community that provides a mechanism for scientific collaboration and knowledge sharing. NEX combines NASA advanced supercomputing resources, Earth system modeling, workflow management, NASA remote sensing data archives, and a collaborative communication platform to deliver a complete work environment in which users can explore and analyze large datasets, run modeling codes, collaborate on new or existing projects, and quickly share results among the Earth science communities. NEX is designed primarily for use by the NASA Earth science community to address scientific grand challenges. The NEX web portal component provides an on-line collaborative environment for sharing of Earth science models, data, analysis tools, and scientific results by researchers. In addition, the NEX portal also serves as a knowledge network that allows researchers to connect and collaborate based on the research they are involved in, specific geographic area of interest, field of study, etc. Features of the NEX web portal include: member profiles, resource sharing (data sets, algorithms, models, publications), communication tools (commenting, messaging, social tagging), project tools (wikis, blogs), and more. The NEX web portal is built on the proven technologies and policies of DASHlink.arc.nasa.gov (one of NASA's first science social media websites). The core component of the web portal is the C3 framework, which was built using Django and which is being deployed as a common framework for a number of collaborative sites throughout NASA.

  11. Advanced Stirling Convertor (ASC) Development for NASA RPS

    NASA Technical Reports Server (NTRS)

    Wong, Wayne A.; Wilson, Scott; Collins, Josh

    2014-01-01

    Sunpower's Advanced Stirling Convertor (ASC) initiated development under contract to the NASA Glenn Research Center (GRC), and after a series of successful demonstrations, the ASC began transitioning from a technology development project to a flight development project. The ASC has very high power conversion efficiency, making it attractive for future Radioisotope Power Systems (RPS) in order to make the best use of the low plutonium-238 fuel inventory in the U.S. In recent years, the ASC became part of the NASA-Department of Energy Advanced Stirling Radioisotope Generator (ASRG) Integrated Project. Sunpower held two parallel contracts to produce ASC convertors: one with the Department of Energy/Lockheed Martin to produce the ASC-F flight convertors, and one with NASA GRC for the production of ASC-E3 engineering units, the initial units of which served as production pathfinders. The integrated ASC technical team successfully overcame various technical challenges that led to the completion and delivery of the first two pairs of flight-like ASC-E3 by 2013. However, in late Fall 2013, the DOE initiated termination of the Lockheed Martin ASRG flight development contract, driven primarily by budget constraints. NASA continues to recognize the importance of high-efficiency ASC power conversion for RPS and continues investment in the technology, including the continuation of ASC-E3 production at Sunpower and the assembly of the ASRG Engineering Unit #2. This paper provides a summary of ASC technical accomplishments, an overview of tests at GRC, plans for continued ASC production at Sunpower, and the status of Stirling technology development.

  12. The Implementation of Advanced Solar Array Technology in Future NASA Missions

    NASA Technical Reports Server (NTRS)

    Piszczor, Michael F.; Kerslake, Thomas W.; Hoffman, David J.; White, Steve; Douglas, Mark; Spence, Brian; Jones, P. Alan

    2003-01-01

    Advanced solar array technology is expected to be critical in achieving the mission goals on many future NASA space flight programs. Current PV cell development programs offer significant potential and performance improvements. However, in order to achieve the performance improvements promised by these devices, new solar array structures must be designed and developed to accommodate these new PV cell technologies. This paper will address the use of advanced solar array technology in future NASA space missions and specifically look at how newer solar cell technologies impact solar array designs and overall power system performance.

  13. Comprehensive efficiency analysis of supercomputer resource usage based on system monitoring data

    NASA Astrophysics Data System (ADS)

    Mamaeva, A. A.; Shaykhislamov, D. I.; Voevodin, Vad V.; Zhumatiy, S. A.

    2018-03-01

    One of the main problems of modern supercomputers is the low efficiency of their usage, which leads to significant idle time of computational resources and, in turn, to a decrease in the speed of scientific research. This paper presents three approaches to studying the efficiency of supercomputer resource usage based on monitoring data analysis. The first approach performs an analysis of computing resource utilization statistics, which makes it possible to identify typical classes of programs, explore the structure of the supercomputer job flow, and track overall trends in the supercomputer's behavior. The second approach is aimed specifically at analyzing off-the-shelf software packages and libraries installed on the supercomputer, since the efficiency of their usage is becoming an increasingly important factor for the efficient functioning of the entire supercomputer. Within the third approach, abnormal jobs – jobs with abnormally inefficient behavior that differs significantly from the standard behavior of the overall supercomputer job flow – are detected. For each approach, the results obtained in practice in the Supercomputer Center of Moscow State University are demonstrated.
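    The third approach, detecting jobs whose behavior deviates from the overall job flow, can be sketched with a deliberately simple outlier test. The metric name, the z-score rule, and the threshold are illustrative assumptions, not the detection method actually used at MSU:

```python
import statistics

def abnormal_jobs(jobs, threshold=2.5):
    """Flag jobs whose monitored metric (here, CPU utilization in percent)
    deviates from the job-flow mean by more than `threshold` population
    standard deviations. A stand-in for more sophisticated detectors."""
    values = [job["cpu_util"] for job in jobs]
    mean = statistics.fmean(values)
    sigma = statistics.pstdev(values)
    if sigma == 0:
        return []  # all jobs identical: nothing stands out
    return [job for job in jobs
            if abs(job["cpu_util"] - mean) / sigma > threshold]
```

    In practice such detectors would combine many monitored signals (load average, memory, interconnect and I/O rates) rather than a single utilization figure, but the principle of comparing each job against the statistics of the whole job flow is the same.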

  14. Characterizing output bottlenecks in a supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Bing; Chase, Jeffrey; Dillow, David A

    2012-01-01

    Supercomputer I/O loads are often dominated by writes. HPC (High Performance Computing) file systems are designed to absorb these bursty outputs at high bandwidth through massive parallelism. However, the delivered write bandwidth often falls well below the peak. This paper characterizes the data absorption behavior of a center-wide shared Lustre parallel file system on the Jaguar supercomputer. We use a statistical methodology to address the challenges of accurately measuring a shared machine under production load and to obtain the distribution of bandwidth across samples of compute nodes, storage targets, and time intervals. We observe and quantify limitations from competing traffic, contention on storage servers and I/O routers, concurrency limitations in the client compute node operating systems, and the impact of variance (stragglers) on coupled output such as striping. We then examine the implications of our results for application performance and the design of I/O middleware systems on shared supercomputers.
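    The core of such a methodology is summarizing delivered bandwidth as a distribution over many samples rather than a single number, since stragglers in the low tail gate the completion of coupled (e.g., striped) writes. A minimal sketch, with hypothetical function and field names:

```python
import statistics

def bandwidth_summary(samples_mb_s):
    """Summarize the distribution of delivered write bandwidth (MB/s)
    across repeated samples taken over compute nodes, storage targets,
    and time intervals. The low percentiles, not the mean, determine
    when a coupled striped write actually finishes."""
    deciles = statistics.quantiles(samples_mb_s, n=10, method="inclusive")
    return {
        "mean": statistics.fmean(samples_mb_s),
        "median": statistics.median(samples_mb_s),
        "p10": deciles[0],  # the straggler region
        "p90": deciles[8],
    }
```

    A single slow sample (one congested storage target, say) drags the mean and the tenth percentile well below the median, which is exactly the variance effect the paper quantifies.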

  15. Technology transfer in the NASA Ames Advanced Life Support Division

    NASA Technical Reports Server (NTRS)

    Connell, Kathleen; Schlater, Nelson; Bilardo, Vincent; Masson, Paul

    1992-01-01

    This paper summarizes a representative set of technology transfer activities which are currently underway in the Advanced Life Support Division of the Ames Research Center. Five specific NASA-funded research or technology development projects are synopsized that are resulting in transfer of technology in one or more of four main 'arenas:' (1) intra-NASA, (2) intra-Federal, (3) NASA - aerospace industry, and (4) aerospace industry - broader economy. Each project is summarized as a case history, specific issues are identified, and recommendations are formulated based on the lessons learned as a result of each project.

  16. NASA's advanced space transportation system launch vehicles

    NASA Technical Reports Server (NTRS)

    Branscome, Darrell R.

    1991-01-01

    Some insight is provided into the advanced transportation planning and systems that will evolve to support long-term mission requirements. The general requirements include: launch and lift capacity to low Earth orbit (LEO); space-based transfer systems for orbital operations between LEO and geosynchronous equatorial orbit (GEO), the Moon, and Mars; and transfer vehicle systems for long-duration deep space probes. These mission requirements are incorporated in the NASA Civil Needs Data Base. To accomplish these mission goals, adequate lift capacity to LEO must be available: to support science and application missions; to provide for construction of the Space Station Freedom; and to support resupply of personnel and supplies for its operations. Growth in lift capacity must be time-phased to support an expanding mission model that includes Freedom Station, the Mission to Planet Earth, and an expanded robotic planetary program. The near-term increase in cargo lift capacity associated with development of the Shuttle-C is addressed. The joint DOD/NASA Advanced Launch System studies are focused on a longer-term new cargo capability that will significantly reduce the costs of placing payloads in space.

  17. Most Social Scientists Shun Free Use of Supercomputers.

    ERIC Educational Resources Information Center

    Kiernan, Vincent

    1998-01-01

    Social scientists, who frequently complain that the federal government spends too little on them, are passing up what scholars in the physical and natural sciences see as the government's best give-aways: free access to supercomputers. Some social scientists say the supercomputers are difficult to use; others find desktop computers provide…

  18. NASA Advanced Refrigerator/Freezer Technology Development Project Overview

    NASA Technical Reports Server (NTRS)

    Cairelli, J. E.

    1995-01-01

    NASA Lewis Research Center (LeRC) has recently initiated a three-year project to develop the advanced refrigerator/freezer (R/F) technologies needed to support future life and biomedical sciences space experiments. Refrigerator/freezer laboratory equipment, most of which has yet to be developed, is enabling for about 75 percent of the planned space station life and biomedical science experiments. These experiments will require five different classes of equipment: three storage freezers operating at -20 C, -70 C, and below -183 C; a -70 C freeze-dryer; and a cryogenic (below -183 C) quick/snap freezer. This project is in response to a survey of cooling system technologies performed by a team of NASA scientists and engineers. The team found that the technologies required for future R/F systems to support life and biomedical sciences spaceflight experiments do not exist at an adequate state of development, and concluded that a program to develop the advanced R/F technologies is needed. Limitations on spaceflight system size, mass, and power consumption present a significant challenge in developing these systems. This paper presents some background and a description of the Advanced R/F Technology Development Project, the project approach and schedule, a general description of the R/F systems, and a review of the major R/F equipment requirements.

  19. Advanced Ceramics for NASA's Current and Future Needs

    NASA Technical Reports Server (NTRS)

    Jaskowiak, Martha H.

    2006-01-01

    Ceramic composites and monolithics are widely recognized by NASA as enabling materials for a variety of aerospace applications. Compared to traditional materials, ceramic materials offer higher specific strength which can enable lighter weight vehicle and engine concepts, increased payloads, and increased operational margins. Additionally, the higher temperature capabilities of these materials allows for increased operating temperatures within the engine and on the vehicle surfaces which can lead to improved engine efficiency and vehicle performance. To meet the requirements of the next generation of both rocket and air-breathing engines, NASA is actively pursuing the development and maturation of a variety of ceramic materials. Anticipated applications for carbide, nitride and oxide-based ceramics will be presented. The current status of these materials and needs for future goals will be outlined. NASA also understands the importance of teaming with other government agencies and industry to optimize these materials and advance them to the level of maturation needed for eventual vehicle and engine demonstrations. A number of successful partnering efforts with NASA and industry will be highlighted.

  20. Intelligent supercomputers: the Japanese computer sputnik

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walter, G.

    1983-11-01

    Japan's government-supported fifth-generation computer project has had a pronounced effect on the American computer and information systems industry. The US firms are intensifying their research on and production of intelligent supercomputers, a combination of computer architecture and artificial intelligence software programs. While the present generation of computers is built for the processing of numbers, the new supercomputers will be designed specifically for the solution of symbolic problems and the use of artificial intelligence software. This article discusses new and exciting developments that will increase computer capabilities in the 1990s. 4 references.

  1. NASA's Advanced Information Systems Technology (AIST) Program: Advanced Concepts and Disruptive Technologies

    NASA Astrophysics Data System (ADS)

    Little, M. M.; Moe, K.; Komar, G.

    2014-12-01

    NASA's Earth Science Technology Office (ESTO) manages a wide range of information technology projects under the Advanced Information Systems Technology (AIST) Program. The AIST Program aims to support all phases of NASA's Earth Science program with the goal of enabling new observations and information products, increasing the accessibility and use of Earth observations, and reducing the risk and cost of satellite and ground-based information systems. Recent initiatives feature computational technologies to improve information extracted from data streams or model outputs, and researchers' tools for Big Data analytics. Data-centric technologies enable research communities to facilitate collaboration and increase the speed with which results are produced and published. In the future, NASA anticipates that more small satellites (e.g., CubeSats), mobile drones, and ground-based in-situ sensors will advance the state of the art in how scientific observations are performed, given the flexibility, cost, and deployment advantages of new operations technologies. This paper reviews the successes of the program and the lessons learned. Infusion of these technologies is challenging, and the paper discusses the obstacles and strategies for adoption by Earth science research and application efforts. It also describes alternative perspectives for the future program direction and for realizing the value in the steps to transform observations from sensors to data, to information, and to knowledge, namely: sensor measurement concepts development; data acquisition and management; data product generation; and data exploitation for science and applications.

  2. Proceedings of the Seventh Annual Summer Conference. NASA/USRA: University Advanced Design Program

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The Advanced Design Program (ADP) is a unique program that brings together students and faculty from U.S. engineering schools with engineers from the NASA centers through integration of current and future NASA space and aeronautics projects into university engineering design curriculum. The Advanced Space Design Program study topics cover a broad range of projects that could be undertaken during a 20-30 year period beginning with the deployment of the Space Station Freedom. The Advanced Aeronautics Design Program study topics typically focus on nearer-term projects of interest to NASA, covering from small, slow-speed vehicles through large, supersonic passenger transports and on through hypersonic research vehicles. Student work accomplished during the 1990-91 academic year and reported at the 7th Annual Summer Conference is presented.

  3. Green Supercomputing at Argonne

    ScienceCinema

    Pete Beckman

    2017-12-09

    Pete Beckman, head of Argonne's Leadership Computing Facility (ALCF) talks about Argonne National Laboratory's green supercomputing—everything from designing algorithms to use fewer kilowatts per operation to using cold Chicago winter air to cool the machine more efficiently.

  4. Supercomputer Provides Molecular Insight into Cellulose (Fact Sheet)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    2011-02-01

    Groundbreaking research at the National Renewable Energy Laboratory (NREL) has used supercomputing simulations to calculate the work that enzymes must do to deconstruct cellulose, which is a fundamental step in biomass conversion technologies for biofuels production. NREL used the new high-performance supercomputer Red Mesa to conduct several million central processing unit (CPU) hours of simulation.

  5. Integration of Panda Workload Management System with supercomputers

    NASA Astrophysics Data System (ADS)

    De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Nilsson, P.; Novikov, A.; Oleynik, D.; Panitkin, S.; Poyda, A.; Read, K. F.; Ryabinkin, E.; Teslyuk, A.; Velikhov, V.; Wells, J. C.; Wenaus, T.

    2016-09-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250000 cores with a peak performance of 0.3+ petaFLOPS, the next LHC data taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, Europe, and Russia (in particular with the Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF), the supercomputer at the National Research Center "Kurchatov Institute", IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads.
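    The wrapper idea, fanning a list of single-threaded jobs out across parallel ranks, one rank per core, reduces to a round-robin partition of the task list. The sketch below is a simplified, MPI-free stand-in (the environment variable names and helper functions are assumptions for illustration; the actual PanDA pilot wrappers are not described in the abstract):

```python
import os
import subprocess

def tasks_for_rank(tasks, rank, size):
    """Round-robin partition: rank r takes tasks r, r+size, r+2*size, ...
    so the work divides evenly with no coordination between ranks."""
    return tasks[rank::size]

def run_rank_tasks(tasks):
    """Each parallel rank runs its share of single-threaded commands
    serially. Rank and world size are read from the launcher's
    environment (variable names differ across MPI implementations)."""
    rank = int(os.environ.get("PMI_RANK", "0"))
    size = int(os.environ.get("PMI_SIZE", "1"))
    for cmd in tasks_for_rank(tasks, rank, size):
        subprocess.run(cmd, shell=True, check=True)
```

    Launched as N ranks on a multi-core node, every rank selects a disjoint slice of the workload, which is how serial event-generation jobs can fill an MPP machine's cores without the jobs themselves being parallel.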

  6. INTEGRATION OF PANDA WORKLOAD MANAGEMENT SYSTEM WITH SUPERCOMPUTERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De, K; Jha, S; Maeno, T

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250000 cores with a peak performance of 0.3+ petaFLOPS, the next LHC data taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, Europe, and Russia (in particular with the Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF), the supercomputer at the National Research Center "Kurchatov Institute", IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes. This implementation was tested with a

  7. NASA's Advanced Communications Technology Satellite (ACTS)

    NASA Technical Reports Server (NTRS)

    Gedney, R. T.

    1983-01-01

    NASA recently restructured its Space Communications Program to emphasize the development of high risk communication technology useable in multiple frequency bands and to support a wide range of future communication needs. As part of this restructuring, the Advanced Communications Technology Satellite (ACTS) Project will develop and experimentally verify the technology associated with multiple fixed and scanning beam systems which will enable growth in communication satellite capacities and more effective utilization of the radio frequency spectrum. The ACTS requirements and operations as well as the technology significance for future systems are described.

  8. NASA/HAA Advanced Rotorcraft Technology and Tilt Rotor Workshops. Volume 1: Executive Summary

    NASA Technical Reports Server (NTRS)

    1980-01-01

    This presentation provides an overview of the NASA Rotorcraft Program as an introduction to the technical sessions of the Advanced Rotorcraft Technology Workshop. It deals with the basis for NASA's increasing emphasis on rotorcraft technology, NASA's research capabilities, recent program planning efforts, highlights of its 10-year plan and future directions and opportunities.

  9. Current state and future direction of computer systems at NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    Rogers, James L. (Editor); Tucker, Jerry H. (Editor)

    1992-01-01

    Computer systems have advanced at a rate unmatched by any other area of technology. As performance has dramatically increased, there has been an equally dramatic reduction in cost. This constant cost-performance improvement has precipitated the pervasiveness of computer systems into virtually all areas of technology. This improvement is due primarily to advances in microelectronics. Most people are now convinced that the new generation of supercomputers will be built using a large number (possibly thousands) of high performance microprocessors. Although the spectacular improvements in computer systems have come about because of these hardware advances, there has also been a steady improvement in software techniques. In an effort to understand how these hardware and software advances will affect research at NASA LaRC, the Computer Systems Technical Committee drafted this white paper to examine the current state and possible future directions of computer systems at the Center. This paper discusses selected important areas of computer systems including real-time systems, embedded systems, high performance computing, distributed computing networks, data acquisition systems, artificial intelligence, and visualization.

  10. Use of high performance networks and supercomputers for real-time flight simulation

    NASA Technical Reports Server (NTRS)

    Cleveland, Jeff I., II

    1993-01-01

    In order to meet the stringent time-critical requirements for real-time man-in-the-loop flight simulation, computer processing operations must be consistent in processing time and be completed in as short a time as possible. These operations include simulation mathematical model computation and data input/output to the simulators. In 1986, in response to increased demands for flight simulation performance, NASA's Langley Research Center (LaRC), working with the contractor, developed extensions to the Computer Automated Measurement and Control (CAMAC) technology which resulted in a factor of ten increase in the effective bandwidth and reduced latency of modules necessary for simulator communication. This technology extension is being used by more than 80 leading technological developers in the United States, Canada, and Europe. Included among the commercial applications are nuclear process control, power grid analysis, process monitoring, real-time simulation, and radar data acquisition. Personnel at LaRC are completing the development of the use of supercomputers for mathematical model computation to support real-time flight simulation. This includes the development of a real-time operating system and development of specialized software and hardware for the simulator network. This paper describes the data acquisition technology and the development of supercomputing for flight simulation.
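The consistency requirement described above is usually enforced by giving every simulation frame a fixed time budget and flagging frames that overrun it. A deterministic toy sketch (invented numbers; LaRC's real-time operating system work is far more elaborate):

```python
# Sketch of a fixed-frame-rate budget check as used in real-time
# man-in-the-loop simulation: each frame's computation must finish
# within the fixed frame period or it counts as an overrun.
# (Illustrative only; frame times below are invented.)

def count_overruns(frame_times, frame_period):
    """Count frames whose compute time exceeded the fixed frame period."""
    return sum(1 for t in frame_times if t > frame_period)

if __name__ == "__main__":
    period = 1.0 / 50.0                        # 50 Hz frame, 20 ms budget
    measured = [0.018, 0.019, 0.023, 0.017]    # seconds per frame
    print(count_overruns(measured, period))    # one frame blew the budget
```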

  11. Recent Advancements in Atmospheric Measurements Made from NASA Airborne Science Platforms

    NASA Astrophysics Data System (ADS)

    Schill, S.; Bennett, J.; Edmond, K.; Finch, P.; Rainer, S.; Schaller, E. L.; Stith, E.; Van Gilst, D.; Webster, A.; Yang, M. Y.

    2017-12-01

    Techniques for making atmospheric measurements are as wide-ranging as the atmosphere is complex. From in situ measurements made by land, sea, or air, to remote sensing data collected by satellites orbiting the Earth, atmospheric measurements have been paramount in advancing the combined understanding of our planet. To date, many of these advancements have been enabled by NASA Airborne Science platforms, which provide unique opportunities to make these measurements in remote regions, and to compare them with an ever-increasing archive of remote satellite data. Here, we discuss recent advances and current capabilities of the National Suborbital Research Center (NSRC) which provides comprehensive instrumentation and data system support on a variety of NASA airborne research platforms. Application of these methods to a number of diverse science missions, as well as upcoming project opportunities, will also be discussed.

  12. NOAA SWPC / NASA CCMC Space Weather Modeling Assessment Project: Toward the Validation of Advancements in Heliospheric Space Weather Prediction Within WSA-Enlil

    NASA Astrophysics Data System (ADS)

    Adamson, E. T.; Pizzo, V. J.; Biesecker, D. A.; Mays, M. L.; MacNeice, P. J.; Taktakishvili, A.; Viereck, R. A.

    2017-12-01

    In 2011, NOAA's Space Weather Prediction Center (SWPC) transitioned the world's first operational space weather model into use at the National Weather Service's Weather and Climate Operational Supercomputing System (WCOSS). This operational forecasting tool comprises the Wang-Sheeley-Arge (WSA) solar wind model coupled with the Enlil heliospheric MHD model. Relying on daily-updated photospheric magnetograms produced by the National Solar Observatory's Global Oscillation Network Group (GONG), this tool provides critical predictive knowledge of heliospheric dynamics such as high speed streams and coronal mass ejections. With the goal of advancing this predictive model and quantifying progress, SWPC and NASA's Community Coordinated Modeling Center (CCMC) have initiated a collaborative effort to assess improvements in space weather forecasts at Earth by moving from a single daily-updated magnetogram to a sequence of time-dependent magnetograms to drive the ambient inputs for the WSA-Enlil model, as well as incorporating the newly developed Air Force Data Assimilative Photospheric Flux Transport (ADAPT) model. We will provide a detailed overview of the scope of this effort and discuss preliminary results from the first phase, focusing on the impact of time-dependent magnetogram inputs to the WSA-Enlil model.

  13. Advanced Stirling Convertor Development for NASA Radioisotope Power Systems

    NASA Technical Reports Server (NTRS)

    Wong, Wayne A.; Wilson, Scott D.; Collins, Josh

    2015-01-01

    Development of Sunpower Inc.'s Advanced Stirling Convertor (ASC) was initiated under contract to the NASA Glenn Research Center, and after a series of successful demonstrations, the ASC began transitioning from a technology development project to a flight development project. The ASC has very high power conversion efficiency, making it attractive for future Radioisotope Power Systems (RPS) in order to make best use of the low plutonium-238 fuel inventory in the United States. In recent years, the ASC became part of the NASA and Department of Energy (DOE) Advanced Stirling Radioisotope Generator (ASRG) Integrated Project. Sunpower held two parallel contracts to produce ASCs, one with the DOE and Lockheed Martin to produce the ASC-F flight convertors, and one with NASA Glenn for the production of ASC-E3 engineering units, the initial units of which served as production pathfinders. The integrated ASC technical team successfully overcame various technical challenges that led to the completion and delivery of the first two pairs of flightlike ASC-E3 by 2013. However, in late fall 2013, the DOE initiated termination of the Lockheed Martin ASRG flight development contract, driven primarily by budget constraints. NASA continues to recognize the importance of high-efficiency ASC power conversion for RPS and continues investment in the technology, including the continuation of ASC-E3 production at Sunpower and the assembly of the ASRG Engineering Unit #2. This paper provides a summary of ASC technical accomplishments, an overview of tests at Glenn, plans for continued ASC production at Sunpower, and the status of Stirling technology development.

  14. NASA Lewis advanced IPV nickel-hydrogen technology

    NASA Technical Reports Server (NTRS)

    Smithrick, John J.; Britton, Doris L.

    1993-01-01

    Individual pressure vessel (IPV) nickel-hydrogen technology was advanced at NASA Lewis and under Lewis contracts. Some of the advancements are as follows: to use 26 percent potassium hydroxide electrolyte to improve cycle life and performance, to modify the state of the art cell design to eliminate identified failure modes and further improve cycle life, and to develop a lightweight nickel electrode to reduce battery mass, hence reduce launch and/or increase satellite payload. A breakthrough in the LEO cycle life of individual pressure vessel nickel-hydrogen battery cells was reported. The cycle life of boiler plate cells containing 26 percent KOH electrolyte was about 40,000 accelerated LEO cycles at 80 percent DOD compared to 3,500 cycles for cells containing 31 percent KOH. Results of the boiler plate cell tests have been validated at NWSC, Crane, Indiana. Forty-eight ampere-hour flight cells containing 26 and 31 percent KOH have undergone real time LEO cycle life testing at an 80 percent DOD, 10 C. The three cells containing 26 percent KOH failed on the average at cycle 19,500. The three cells containing 31 percent KOH failed on the average at cycle 6,400. Validation testing of NASA Lewis 125 Ah advanced design IPV nickel-hydrogen flight cells is also being conducted at NWSC, Crane, Indiana under a NASA Lewis contract. This consists of characterization, storage, and cycle life testing. There was no capacity degradation after 52 days of storage with the cells in the discharged state, on open circuit, 0 C, and a hydrogen pressure of 14.5 psia. The catalyzed wall wick cells have been cycled for over 22,694 cycles with no cell failures in the continuing test. All three of the non-catalyzed wall wick cells failed (cycles 9,588; 13,900; and 20,575). Cycle life test results of the Fibrex nickel electrode have demonstrated the feasibility of an improved nickel electrode giving a higher specific energy nickel-hydrogen cell.
A nickel-hydrogen boiler plate cell using an 80

  15. Next Generation Security for the 10,240 Processor Columbia System

    NASA Technical Reports Server (NTRS)

    Hinke, Thomas; Kolano, Paul; Shaw, Derek; Keller, Chris; Tweton, Dave; Welch, Todd; Liu, Wen (Betty)

    2005-01-01

    This presentation includes a discussion of the Columbia 10,240-processor system located at the NASA Advanced Supercomputing (NAS) division at the NASA Ames Research Center, which supports each of NASA's four missions: science, exploration systems, aeronautics, and space operations. It comprises 20 Silicon Graphics nodes, each consisting of 512 Itanium 2 processors. A 64-processor Columbia front-end system supports users as they prepare their jobs and then submits them to the PBS system. Columbia nodes and front-end systems use the Linux OS. Prior to SC04, the Columbia system was used to attain a processing speed of 51.87 TeraFlops, which made it number two on the Top 500 list of the world's supercomputers and the world's fastest "operational" supercomputer, since it was fully engaged in supporting NASA users.

  16. Ice Storm Supercomputer

    ScienceCinema

    None

    2018-05-01

    A new Idaho National Laboratory supercomputer is helping scientists create more realistic simulations of nuclear fuel. Dubbed "Ice Storm" this 2048-processor machine allows researchers to model and predict the complex physics behind nuclear reactor behavior. And with a new visualization lab, the team can see the results of its simulations on the big screen. For more information about INL research, visit http://www.facebook.com/idahonationallaboratory.

  17. Scalability Test of Multiscale Fluid-Platelet Model for Three Top Supercomputers

    PubMed Central

    Zhang, Peng; Zhang, Na; Gao, Chao; Zhang, Li; Gao, Yuxiang; Deng, Yuefan; Bluestein, Danny

    2016-01-01

    We have tested the scalability of three supercomputers: the Tianhe-2, Stampede and CS-Storm with multiscale fluid-platelet simulations, in which a highly-resolved and efficient numerical model for nanoscale biophysics of platelets in microscale viscous biofluids is considered. Three experiments involving varying problem sizes were performed: Exp-S: 680,718-particle single-platelet; Exp-M: 2,722,872-particle 4-platelet; and Exp-L: 10,891,488-particle 16-platelet. Our implementations of multiple time-stepping (MTS) algorithm improved the performance of single time-stepping (STS) in all experiments. Using MTS, our model achieved the following simulation rates: 12.5, 25.0, 35.5 μs/day for Exp-S and 9.09, 6.25, 14.29 μs/day for Exp-M on Tianhe-2, CS-Storm 16-K80 and Stampede K20. The best rate for Exp-L was 6.25 μs/day for Stampede. Utilizing current advanced HPC resources, the simulation rates achieved by our algorithms bring within reach performing complex multiscale simulations for solving vexing problems at the interface of biology and engineering, such as thrombosis in blood flow which combines millisecond-scale hematology with microscale blood flow at resolutions of micro-to-nanoscale cellular components of platelets. This study of testing the performance characteristics of supercomputers with advanced computational algorithms that offer optimal trade-off to achieve enhanced computational performance serves to demonstrate that such simulations are feasible with currently available HPC resources. PMID:27570250
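Multiple time-stepping of the kind credited above for the speedup advances cheap, fast-varying forces every step while recomputing the expensive, slow-varying force only every k-th step. A generic one-particle sketch (illustrative only; not the authors' multiscale platelet model):

```python
# Generic multiple time-stepping (MTS) sketch: the expensive slow force is
# reused for k substeps instead of being recomputed every step, which is
# what buys the speedup over single time-stepping (STS, the k=1 case).
# (Illustrative 1-D particle; the paper's fluid-platelet model is far richer.)

def integrate_mts(x, v, fast_force, slow_force, dt, steps, k):
    """Euler-integrate one particle, refreshing slow_force every k steps."""
    f_slow = slow_force(x)
    for step in range(steps):
        if step % k == 0:
            f_slow = slow_force(x)   # refresh the expensive term
        a = fast_force(x) + f_slow   # cheap term recomputed every step
        v += a * dt
        x += v * dt
    return x, v

if __name__ == "__main__":
    xf, vf = integrate_mts(1.0, 0.0,
                           fast_force=lambda x: -4.0 * x,  # stiff spring
                           slow_force=lambda x: -0.1 * x,  # soft spring
                           dt=0.01, steps=100, k=10)
    print(xf, vf)
```

Because the slow term varies little over k substeps, the k=10 trajectory stays close to the STS (k=1) trajectory while evaluating the expensive force ten times less often.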

  18. Close to real life. [solving for transonic flow about lifting airfoils using supercomputers

    NASA Technical Reports Server (NTRS)

    Peterson, Victor L.; Bailey, F. Ron

    1988-01-01

    NASA's Numerical Aerodynamic Simulation (NAS) facility for CFD modeling of highly complex aerodynamic flows employs as its basic hardware two Cray-2s, an ETA-10 Model Q, an Amdahl 5880 mainframe computer that furnishes both support processing and access to 300 Gbytes of disk storage, several minicomputers and superminicomputers, and a Thinking Machines 16,000-device 'connection machine' processor. NAS, which was the first supercomputer facility to standardize operating-system and communication software on all processors, has done important Space Shuttle aerodynamics simulations and will be critical to the configurational refinement of the National Aerospace Plane and its integrated powerplant, which will involve complex, high temperature reactive gasdynamic computations.

  19. Deploying the ODISEES Ontology-guided Search in the NASA Earth Exchange (NEX)

    NASA Astrophysics Data System (ADS)

    Huffer, E.; Gleason, J. L.; Cotnoir, M.; Spaulding, R.; Deardorff, G.

    2016-12-01

    data, and the approach to transfer data into the NAS supercomputing environment. Finally, we will describe the end-to-end demonstration of the capabilities implemented. This work was funded by the Advanced Information Systems Technology Program of NASA's Research Opportunities in Space and Earth Science.

  20. Dynamics and Control of Orbiting Space Structures NASA Advanced Design Program (ADP)

    NASA Technical Reports Server (NTRS)

    Cruse, T. A.

    1996-01-01

    The report summarizes the advanced design program in the mechanical engineering department at Vanderbilt University for the academic years 1994-1995 and 1995-1996. Approximately 100 students participated in the two years of the subject grant funding. The NASA-oriented design projects that were selected included a lightweight hydrogen propellant tank for the reusable launch vehicle, a thermal barrier coating test facility, a piezoelectric motor for space antenna control, and a lightweight satellite for automated materials processing. The NASA-supported advanced design program (ADP) has been a success, and a number of its graduates are now working in aerospace design.

  1. Automatic discovery of the communication network topology for building a supercomputer model

    NASA Astrophysics Data System (ADS)

    Sobolev, Sergey; Stefanov, Konstantin; Voevodin, Vadim

    2016-10-01

    The Research Computing Center of Lomonosov Moscow State University is developing the Octotron software suite for automatic monitoring and mitigation of emergency situations in supercomputers so as to maximize hardware reliability. The suite is based on a software model of the supercomputer. The model uses a graph to describe the computing system components and their interconnections. One of the most complex components of a supercomputer that needs to be included in the model is its communication network. This work describes the proposed approach for automatically discovering the Ethernet communication network topology in a supercomputer and its description in terms of the Octotron model. This suite automatically detects computing nodes and switches, collects information about them and identifies their interconnections. The application of this approach is demonstrated on the "Lomonosov" and "Lomonosov-2" supercomputers.
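Octotron's software model, as described, represents the machine as a graph of components and interconnections. A toy sketch with plain dictionaries (invented component names; not Octotron's actual schema, and the links here are hard-coded where the real suite would discover them automatically):

```python
# Toy sketch of a supercomputer model as a graph: computing nodes and
# Ethernet switches are vertices, discovered links are edges. In the real
# suite the edges would come from automatic topology discovery (e.g. switch
# forwarding tables); here they are hard-coded for illustration.

def add_link(graph, a, b):
    """Record an undirected link between two components."""
    graph.setdefault(a, set()).add(b)
    graph.setdefault(b, set()).add(a)

def neighbors(graph, component):
    """Components directly connected to the given one, sorted by name."""
    return sorted(graph.get(component, set()))

if __name__ == "__main__":
    model = {}
    add_link(model, "switch-1", "node-001")
    add_link(model, "switch-1", "node-002")
    add_link(model, "switch-1", "switch-2")
    print(neighbors(model, "switch-1"))  # ['node-001', 'node-002', 'switch-2']
```

Monitoring rules can then be attached per vertex or per edge, which is what makes a graph a natural fit for describing emergency-mitigation logic.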

  2. Proceedings of the 6th Annual Summer Conference: NASA/USRA University Advanced Design Program

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The NASA/USRA University Advanced Design Program is a unique program that brings together NASA engineers, students, and faculty from United States engineering schools by integrating current and future NASA space/aeronautics engineering design projects into the university curriculum. The Program was conceived in the fall of 1984 as a pilot project to foster engineering design education in the universities and to supplement NASA's in-house efforts in advanced planning for space and aeronautics design. Nine universities and five NASA centers participated in the first year of the pilot project. The study topics cover a broad range of potential space and aeronautics projects that could be undertaken during a 20 to 30 year period beginning with the deployment of the Space Station Freedom scheduled for the mid-1990s. Both manned and unmanned endeavors are embraced, and the systems approach to the design problem is emphasized.

  3. US Department of Energy High School Student Supercomputing Honors Program: A follow-up assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1987-01-01

    The US DOE High School Student Supercomputing Honors Program was designed to recognize high school students with superior skills in mathematics and computer science and to provide them with formal training and experience with advanced computer equipment. This document reports on the participants who attended the first such program, which was held at the National Magnetic Fusion Energy Computer Center at the Lawrence Livermore National Laboratory (LLNL) during August 1985.

  4. Kriging for Spatial-Temporal Data on the Bridges Supercomputer

    NASA Astrophysics Data System (ADS)

    Hodgess, E. M.

    2017-12-01

    Currently, kriging of spatial-temporal data is slow and limited to relatively small vector sizes. We have developed a method on the Bridges supercomputer, at the Pittsburgh Supercomputing Center, which uses a combination of the tools R, Fortran, the Message Passing Interface (MPI), OpenACC, and special R packages for big data. This combination of tools now permits us to complete tasks which previously could not be completed, or which took literally hours to complete. We ran simulation studies from a laptop against the supercomputer. We also look at "real world" data sets, such as the Irish wind data, and some weather data. We compare the timings. We note that the timings are surprisingly good.
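At the core of kriging is a dense linear solve whose cost grows quickly with vector size, which is what motivates the supercomputer port described above. A minimal ordinary-kriging sketch in one dimension with an exponential covariance (illustrative only; the authors' R/Fortran/MPI pipeline and their spatial-temporal models differ):

```python
# Minimal ordinary-kriging sketch in 1-D with an exponential covariance,
# showing the dense linear algebra that large vector sizes make expensive.
# (Illustrative only; not the R-package implementation used in the study.)
import numpy as np

def krige(xs, zs, x0, range_=1.0):
    """Ordinary kriging prediction at x0 from samples (xs, zs)."""
    n = len(xs)
    cov = lambda h: np.exp(-np.abs(h) / range_)
    K = np.ones((n + 1, n + 1))
    K[:n, :n] = cov(xs[:, None] - xs[None, :])  # sample-to-sample covariances
    K[n, n] = 0.0                               # Lagrange multiplier entry
    rhs = np.ones(n + 1)
    rhs[:n] = cov(xs - x0)                      # sample-to-target covariances
    w = np.linalg.solve(K, rhs)                 # the O(n^3) bottleneck
    return float(w[:n] @ zs)

if __name__ == "__main__":
    xs = np.array([0.0, 1.0, 2.0, 3.0])
    zs = np.array([1.0, 2.0, 0.5, 1.5])
    print(krige(xs, zs, 1.0))  # ≈ 2.0: kriging interpolates exactly at samples
```

The `np.linalg.solve` call is the step that parallel tools such as MPI and OpenACC accelerate once n reaches the vector sizes mentioned in the abstract.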

  5. Dynamic Impact Testing and Model Development in Support of NASA's Advanced Composites Program

    NASA Technical Reports Server (NTRS)

    Melis, Matthew E.; Pereira, J. Michael; Goldberg, Robert; Rassaian, Mostafa

    2018-01-01

    The purpose of this paper is to provide an executive overview of the HEDI effort for NASA's Advanced Composites Program and establish the foundation for the remaining papers to follow in the 2018 SciTech special session NASA ACC High Energy Dynamic Impact. The paper summarizes the work done for the Advanced Composites Program to advance our understanding of the behavior of composite materials during high energy impact events and to advance the ability of analytical tools to provide predictive simulations. The experimental program carried out at GRC is summarized and a status on the current development state for MAT213 will be provided. Future work will be discussed as the HEDI effort transitions from fundamental analysis and testing to investigating sub-component structural concept response to impact events.

  6. Next Generation NASA GA Advanced Concept

    NASA Technical Reports Server (NTRS)

    Hahn, Andrew S.

    2006-01-01

    Not only is the common dream of frequent personal flight travel going unfulfilled, the current generation of General Aviation (GA) is facing tremendous challenges that threaten to relegate the Single Engine Piston (SEP) aircraft market to a footnote in the history of U.S. aviation. A case is made that this crisis stems from a generally low utility coupled with a high cost that makes the SEP aircraft of relatively low transportation value and beyond the means of many. The roots of this low value are examined in a broad sense, and a Next Generation NASA Advanced GA Concept is presented that attacks those elements addressable by synergistic aircraft design.

  7. Advanced Curation Activities at NASA: Implications for Astrobiological Studies of Future Sample Collections

    NASA Technical Reports Server (NTRS)

    McCubbin, F. M.; Evans, C. A.; Fries, M. D.; Harrington, A. D.; Regberg, A. B.; Snead, C. J.; Zeigler, R. A.

    2017-01-01

    The Astromaterials Acquisition and Curation Office (henceforth referred to herein as NASA Curation Office) at NASA Johnson Space Center (JSC) is responsible for curating all of NASA's extraterrestrial samples. Under the governing document, NASA Policy Directive (NPD) 7100.10F, JSC is charged with curation of all extraterrestrial material under NASA control, including future NASA missions. The Directive goes on to define Curation as including documentation, preservation, preparation, and distribution of samples for research, education, and public outreach. Here we briefly describe NASA's astromaterials collections and our ongoing efforts related to enhancing the utility of our current collections as well as our efforts to prepare for future sample return missions. We collectively refer to these efforts as advanced curation.

  8. NASA's National Center for Advanced Manufacturing

    NASA Technical Reports Server (NTRS)

    Vickers, John

    2003-01-01

    NASA has designated the Principal Center Assignment to the Marshall Space Flight Center (MSFC) for implementation of the National Center for Advanced Manufacturing (NCAM). NCAM is NASA's leading resource for the aerospace manufacturing research, development, and innovation needs that are critical to the goals of the Agency. Through this initiative NCAM's people work together with government, industry, and academia to ensure the technology base and national infrastructure are available to develop innovative manufacturing technologies with broad application to NASA Enterprise programs, and U.S. industry. Educational enhancements are ever-present within the NCAM focus to promote research, to inspire participation and to support education and training in manufacturing. Many important accomplishments took place during 2002. Through NCAM, NASA was among five federal agencies involved in manufacturing research and development (R&D) to launch a major effort to exchange information and cooperate directly to enhance the payoffs from federal investments. The Government Agencies Technology Exchange in Manufacturing (GATE-M) is the only active effort to specifically and comprehensively address manufacturing R&D across the federal government. Participating agencies include the departments of Commerce (represented by the National Institute of Standards and Technology), Defense, and Energy, as well as the National Science Foundation and NASA. MSFC's ongoing partnership with the State of Louisiana, the University of New Orleans, and Lockheed Martin Corporation at the Michoud Assembly Facility (MAF) progressed significantly. Major capital investments were initiated for world-class equipment additions including a universal friction stir welding system, composite fiber placement machine, five-axis machining center, and ten-axis laser ultrasonic nondestructive test system. The NCAM consortium of five universities led by University of New Orleans with Mississippi State University

  9. Advanced Concepts, Technologies and Flight Experiments for NASA's Earth Science Enterprise

    NASA Technical Reports Server (NTRS)

    Meredith, Barry D.

    2000-01-01

    Over the last 25 years, NASA Langley Research Center (LaRC) has established a tradition of excellence in scientific research and leading-edge system developments, which have contributed to improved scientific understanding of our Earth system. Specifically, LaRC advances knowledge of atmospheric processes to enable proactive climate prediction and, in that role, develops first-of-a-kind atmospheric sensing capabilities that permit a variety of new measurements to be made within a constrained enterprise budget. These advances are enabled by the timely development and infusion of new, state-of-the-art (SOA), active and passive instrument and sensor technologies. In addition, LaRC's center-of-excellence in structures and materials is being applied to the technological challenges of reducing measurement system size, mass, and cost through the development and use of space-durable materials; lightweight, multi-functional structures; and large deployable/inflatable structures. NASA Langley is engaged in advancing these technologies across the full range of readiness levels from concept, to components, to prototypes, to flight experiments, and on to actual science mission infusion. The purpose of this paper is to describe current activities and capabilities, recent achievements, and future plans of the integrated science, engineering, and technology team at Langley Research Center who are working to enable the future of NASA's Earth Science Enterprise.

  10. NASA's Space Launch System Advanced Booster Engineering Demonstration and Risk Reduction Efforts

    NASA Technical Reports Server (NTRS)

    Crumbly, Christopher M.; May, Todd; Dumbacher, Daniel

    2012-01-01

    The National Aeronautics and Space Administration (NASA) formally initiated the Space Launch System (SLS) development in September 2011, with the approval of the program's acquisition plan, which engages the current workforce and infrastructure to deliver an initial 70 metric ton (t) SLS capability in 2017, while using planned block upgrades to evolve to a full 130 t capability after 2021. A key component of the acquisition plan is a three-phased approach for the first stage boosters. The first phase is to complete the development of the Ares and Space Shuttle heritage 5-segment solid rocket boosters for initial exploration missions in 2017 and 2021. The second phase in the booster acquisition plan is the Advanced Booster Risk Reduction and/or Engineering Demonstration NASA Research Announcement (NRA), which was recently awarded after a full and open competition. The NRA was released to industry on February 9, 2012, and its stated intent was to reduce risks leading to an affordable Advanced Booster and to enable competition. The third and final phase will be a full and open competition for Design, Development, Test, and Evaluation (DDT&E) of the Advanced Boosters. There are no existing boosters that can meet the performance requirements for the 130 t class SLS. The expected thrust class of the Advanced Boosters is potentially double the current 5-segment solid rocket booster capability. These new boosters will enable the flexible path approach to space exploration beyond Earth orbit, opening up vast opportunities including near-Earth asteroids, Lagrange Points, and Mars. This evolved capability offers large volume for science missions and payloads, will be modular and flexible, and will be right-sized for mission requirements. NASA developed the Advanced Booster Engineering Demonstration and/or Risk Reduction NRA to seek industry participation in reducing risks leading to an affordable Advanced Booster that meets the SLS performance requirements.
Demonstrations and

  11. Key Metrics and Goals for NASA's Advanced Air Transportation Technologies Program

    NASA Technical Reports Server (NTRS)

    Kaplan, Bruce; Lee, David

    1998-01-01

    NASA's Advanced Air Transportation Technologies (AATT) program is developing a set of decision support tools to aid air traffic service providers, pilots, and airline operations centers in improving operations of the National Airspace System (NAS). NASA needs a set of unifying metrics to tie these efforts together, which it can use to track the progress of the AATT program and communicate program objectives and status within NASA and to stakeholders in the NAS. This report documents the results of our efforts and the four unifying metrics we recommend for the AATT program. They are: airport peak capacity, en route sector capacity, block time and fuel, and free-flight enabling.

  12. Application of NASA's advanced life support technologies in polar regions

    NASA Astrophysics Data System (ADS)

    Bubenheim, D. L.; Lewis, C.

    1997-01-01

    NASA's advanced life support technologies are being combined with Arctic science and engineering knowledge in the Advanced Life Systems for Extreme Environments (ALSEE) project. This project addresses treatment and reduction of waste, purification and recycling of water, and production of food in remote communities of Alaska. The project focus is a major issue in the state of Alaska and other areas of the Circumpolar North; the health and welfare of people, their lives and the subsistence lifestyle in remote communities, care for the environment, and economic opportunity through technology transfer. The challenge is to implement the technologies in a manner compatible with the social and economic structures of native communities, the state, and the commercial sector. NASA goals are technology selection, system design and methods development of regenerative life support systems for planetary and Lunar bases and other space exploration missions. The ALSEE project will provide similar advanced technologies to address the multiple problems facing the remote communities of Alaska and provide an extreme environment testbed for future space applications. These technologies have never been assembled for this purpose. They offer an integrated approach to solving pressing problems in remote communities.

  13. GRC Supporting Technology for NASA's Advanced Stirling Radioisotope Generator (ASRG)

    NASA Technical Reports Server (NTRS)

    Schreiber, Jeffrey G.; Thieme, Lanny G.

    2008-01-01

    From 1999 to 2006, the NASA Glenn Research Center (GRC) supported a NASA project to develop a high-efficiency, nominal 110-We Stirling Radioisotope Generator (SRG110) for potential use on NASA missions. Lockheed Martin was selected as the System Integration Contractor for the SRG110, under contract to the Department of Energy (DOE). The potential applications included deep space missions, and Mars rovers. The project was redirected in 2006 to make use of the Advanced Stirling Convertor (ASC) that was being developed by Sunpower, Inc. under contract to GRC, which would reduce the mass of the generator and increase the power output. This change would approximately double the specific power and result in the Advanced Stirling Radioisotope Generator (ASRG). The SRG110 supporting technology effort at GRC was replanned to support the integration of the Sunpower convertor and the ASRG. This paper describes the ASRG supporting technology effort at GRC and provides details of the contributions in some of the key areas. The GRC tasks include convertor extended-operation testing in air and in thermal vacuum environments, heater head life assessment, materials studies, permanent magnet characterization and aging tests, structural dynamics testing, electromagnetic interference and electromagnetic compatibility characterization, evaluation of organic materials, reliability studies, and analysis to support controller development.

  14. NASA University Research Centers Technical Advances in Education, Aeronautics, Space, Autonomy, Earth and Environment

    NASA Technical Reports Server (NTRS)

    Jamshidi, M. (Editor); Lumia, R. (Editor); Tunstel, E., Jr. (Editor); White, B. (Editor); Malone, J. (Editor); Sakimoto, P. (Editor)

    1997-01-01

    This first volume of the Autonomous Control Engineering (ACE) Center Press Series on NASA University Research Centers' (URCs') Advanced Technologies on Space Exploration and National Service constitutes a report on the research papers and presentations delivered by NASA installations, industry, and NASA's fourteen URCs at the First National Conference, held in Albuquerque, New Mexico, February 16-19, 1997.

  15. The intelligent user interface for NASA's advanced information management systems

    NASA Technical Reports Server (NTRS)

    Campbell, William J.; Short, Nicholas, Jr.; Rolofs, Larry H.; Wattawa, Scott L.

    1987-01-01

    NASA has initiated the Intelligent Data Management Project to design and develop advanced information management systems. The project's primary goal is to formulate, design and develop advanced information systems that are capable of supporting the agency's future space research and operational information management needs. The first effort of the project was the development of a prototype Intelligent User Interface to an operational scientific database, using expert systems and natural language processing technologies. An overview of Intelligent User Interface formulation and development is given.

  16. Predicting Hurricanes with Supercomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2010-01-01

    Hurricane Emily, formed in the Atlantic Ocean on July 10, 2005, was the strongest hurricane ever to form before August. By checking computer models against the actual path of the storm, researchers can improve hurricane prediction. In 2010, NOAA researchers were awarded 25 million processor-hours on Argonne's BlueGene/P supercomputer for the project. Read more at http://go.usa.gov/OLh

  17. Supercomputers Of The Future

    NASA Technical Reports Server (NTRS)

    Peterson, Victor L.; Kim, John; Holst, Terry L.; Deiwert, George S.; Cooper, David M.; Watson, Andrew B.; Bailey, F. Ron

    1992-01-01

    Report evaluates supercomputer needs of five key disciplines: turbulence physics, aerodynamics, aerothermodynamics, chemistry, and mathematical modeling of human vision. Predicts these fields will require computer speeds greater than 10^18 floating-point operations per second (FLOPS) and memory capacities greater than 10^15 words, and that new parallel computer architectures and new structured numerical methods will make the necessary speed and capacity available.

  18. Advanced Materials and Component Development for Lithium-Ion Cells for NASA Missions

    NASA Technical Reports Server (NTRS)

    Reid, Concha M.

    2012-01-01

    Human missions to Near Earth Objects, such as asteroids, planets, moons, libration points, and orbiting structures, will require safe, high-specific-energy, high-energy-density batteries to provide new or extended capabilities beyond those possible with today's state-of-the-art aerospace batteries. The Enabling Technology Development and Demonstration Program, High Efficiency Space Power Systems Project battery development effort at the National Aeronautics and Space Administration (NASA) is continuing advanced lithium-ion cell development efforts begun under the Exploration Technology Development Program Energy Storage Project. Advanced, high-performing materials are required to provide improved performance at the component level, which in turn contributes to performance at the integrated cell level, in order to meet the performance goals for NASA's High Energy and Ultra High Energy cells. NASA's overall approach to advanced cell development and interim progress on materials performance for the High Energy and Ultra High Energy cells after approximately one year of development was summarized in a previous paper. This paper provides an update on these materials through the completion of two years of development. The progress of materials development, remaining challenges, and an outlook for the future of these materials in near-term cell products are discussed.

  19. ARC-2012-ACD12-0022-003

    NASA Image and Video Library

    2012-02-02

    Kepler Program VIPs, from left: Jon Jenkins, Natalie Batalha, and Bill Borucki, pointing at the NASA Ames hyperwall in the NAS (NASA Advanced Supercomputing) facility, filled with exoplanets discovered during the Kepler Mission. Moffett Field, CA (for Aviation Week)

  20. Fuel savings potential of the NASA Advanced Turboprop Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitlow, J.B. Jr.; Sievers, G.K.

    1984-01-01

    The NASA Advanced Turboprop (ATP) Program is directed at developing new technology for highly loaded, multibladed propellers for use at Mach 0.65 to 0.85 and at altitudes compatible with air transport system requirements. Advanced turboprop engines offer the potential of 15 to 30 percent savings in aircraft block fuel relative to advanced turbofan engines (50 to 60 percent savings over today's turbofan fleet). The concept, propulsive efficiency gains, block fuel savings and other benefits, and the program objectives through a systems approach are described. Current program status and major accomplishments in both single-rotation and counter-rotation propeller technology are addressed. The overall program, from scale-model wind tunnel tests to large-scale flight tests on testbed aircraft, is discussed.

  1. ARC-2012-ACD12-0022-007

    NASA Image and Video Library

    2012-02-02

    Kepler Program VIPs, from left: Natalie Batalha, Bill Borucki, and Jon Jenkins, in front of a NASA Ames hyperwall display of newly discovered planet K-22B art at the NAS (NASA Advanced Supercomputing) Facility, Moffett Field, CA (for Aviation Week)

  2. Battery Separator Characterization and Evaluation Procedures for NASA's Advanced Lithium-Ion Batteries

    NASA Technical Reports Server (NTRS)

    Baldwin, Richard S.; Bennet, William R.; Wong, Eunice K.; Lewton, MaryBeth R.; Harris, Megan K.

    2010-01-01

    To address the future performance and safety requirements for the electrical energy storage technologies that will enhance and enable future NASA manned aerospace missions, advanced rechargeable, lithium-ion battery technology development is being pursued within the scope of the NASA Exploration Technology Development Program's (ETDP's) Energy Storage Project. A critical cell-level component of a lithium-ion battery which significantly impacts both overall electrochemical performance and safety is the porous separator that is sandwiched between the two active cell electrodes. To support the selection of the optimal cell separator material(s) for the advanced battery technologies and chemistries under development, laboratory characterization and screening procedures were established to assess and compare separator material-level attributes and associated separator performance characteristics.

  3. Supercomputing Drives Innovation - Continuum Magazine | NREL

    Science.gov Websites

    For years, NREL scientists have used supercomputers to simulate 3D models of the primary enzymes ... and a 3D model of wind plant aerodynamics, showing low-velocity wakes and their impact on ...

  4. NAS Technical Summaries, March 1993 - February 1994

    NASA Technical Reports Server (NTRS)

    1995-01-01

    NASA created the Numerical Aerodynamic Simulation (NAS) Program in 1987 to focus resources on solving critical problems in aeroscience and related disciplines by utilizing the power of the most advanced supercomputers available. The NAS Program provides scientists with the necessary computing power to solve today's most demanding computational fluid dynamics problems and serves as a pathfinder in integrating leading-edge supercomputing technologies, thus benefitting other supercomputer centers in government and industry. The 1993-94 operational year concluded with 448 high-speed processor projects and 95 parallel projects representing NASA, the Department of Defense, other government agencies, private industry, and universities. This document provides a glimpse at some of the significant scientific results for the year.

  5. NAS technical summaries. Numerical aerodynamic simulation program, March 1992 - February 1993

    NASA Technical Reports Server (NTRS)

    1994-01-01

    NASA created the Numerical Aerodynamic Simulation (NAS) Program in 1987 to focus resources on solving critical problems in aeroscience and related disciplines by utilizing the power of the most advanced supercomputers available. The NAS Program provides scientists with the necessary computing power to solve today's most demanding computational fluid dynamics problems and serves as a pathfinder in integrating leading-edge supercomputing technologies, thus benefitting other supercomputer centers in government and industry. The 1992-93 operational year concluded with 399 high-speed processor projects and 91 parallel projects representing NASA, the Department of Defense, other government agencies, private industry, and universities. This document provides a glimpse at some of the significant scientific results for the year.

  6. Prospects for Boiling of Subcooled Dielectric Liquids for Supercomputer Cooling

    NASA Astrophysics Data System (ADS)

    Zeigarnik, Yu. A.; Vasil'ev, N. V.; Druzhinin, E. A.; Kalmykov, I. V.; Kosoi, A. S.; Khodakov, K. A.

    2018-02-01

    It is shown experimentally that forced-convection boiling of the dielectric coolant Novec 649, subcooled relative to the saturation temperature, makes it possible to remove heat fluxes of up to 100 W/cm2 from a modern supercomputer chip interface. This creates prerequisites for the application of dielectric liquids in the cooling systems of modern supercomputers with increased operating-reliability requirements.

  7. NASA programs in advanced sensors and measurement technology for aeronautical applications

    NASA Astrophysics Data System (ADS)

    Conway, Bruce A.

    NASA involvement in the development, implementation, and experimental use of advanced aeronautical sensors and measurement technologies is presently discussed within the framework of specific NASA research centers' activities. The technology thrusts are in the fields of high temperature strain gages and microphones, laser light-sheet flow visualization, LTA, LDV, and LDA, tunable laser-based aviation meteorology, and fiber-optic CARS measurements. IR thermography and close-range photogrammetry are undergoing substantial updating and application. It is expected that 'smart' sensors will be increasingly widely used, especially in conjunction with smart structures in aircraft and spacecraft.

  8. NASA's Space Launch System Advanced Booster Engineering Demonstration and/or Risk Reduction Efforts

    NASA Technical Reports Server (NTRS)

    Crumbly, Christopher M.; Dumbacher, Daniel L.; May, Todd A.

    2012-01-01

    The National Aeronautics and Space Administration (NASA) formally initiated the Space Launch System (SLS) development in September 2011, with the approval of the program's acquisition plan, which engages the current workforce and infrastructure to deliver an initial 70 metric ton (t) SLS capability in 2017, while using planned block upgrades to evolve to a full 130 t capability after 2021. A key component of the acquisition plan is a three-phased approach for the first stage boosters. The first phase is to complete the development of the Ares and Space Shuttle heritage 5-segment solid rocket boosters (SRBs) for initial exploration missions in 2017 and 2021. The second phase in the booster acquisition plan is the Advanced Booster Risk Reduction and/or Engineering Demonstration NASA Research Announcement (NRA), which was recently awarded after a full and open competition. The NRA was released to industry on February 9, 2012, with a stated intent to reduce risks leading to an affordable advanced booster and to enable competition. The third and final phase will be a full and open competition for Design, Development, Test, and Evaluation (DDT&E) of the advanced boosters. There are no existing boosters that can meet the performance requirements for the 130 t class SLS. The expected thrust class of the advanced boosters is potentially double the current 5-segment solid rocket booster capability. These new boosters will enable the flexible path approach to space exploration beyond Earth orbit (BEO), opening up vast opportunities including near-Earth asteroids, Lagrange points, and Mars. This evolved capability offers large volume for science missions and payloads, will be modular and flexible, and will be right-sized for mission requirements. NASA developed the Advanced Booster Engineering Demonstration and/or Risk Reduction NRA to seek industry participation in reducing risks leading to an affordable advanced booster that meets the SLS performance requirements.

  9. Destructive Thomas Fire Continues Its Advance in New NASA Satellite Image

    NASA Image and Video Library

    2017-12-11

    The Thomas fire, west of Los Angeles, continues to advance to the west and north and is threatening a number of coastal communities, including Santa Barbara. It is now the fifth largest wildfire in modern California history. According to CAL FIRE, as of midday Dec. 11, the fire had consumed more than 230,000 acres and was 15 percent contained. The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) instrument on NASA's Terra satellite captured this image on Dec. 10. The image depicts vegetation in red, smoke in light brown, burned areas in dark grey, and active fires in yellow, as detected by the thermal infrared bands. The image covers an area of 14.3 by 19.6 miles (23 by 31.5 kilometers), and is located at 34.5 degrees north, 119.4 degrees west. https://photojournal.jpl.nasa.gov/catalog/PIA22122

  10. Introducing Mira, Argonne's Next-Generation Supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2013-03-19

    Mira, the new petascale IBM Blue Gene/Q system installed at the ALCF, will usher in a new era of scientific supercomputing. An engineering marvel, the 10-petaflops machine is capable of carrying out 10 quadrillion calculations per second.

  11. Low-Noise Potential of Advanced Fan Stage Stator Vane Designs Verified in NASA Lewis Wind Tunnel Test

    NASA Technical Reports Server (NTRS)

    Hughes, Christopher E.

    1999-01-01

    With the advent of new, more stringent noise regulations in the next century, aircraft engine manufacturers are investigating new technologies to make the current generation of aircraft engines as well as the next generation of advanced engines quieter without sacrificing operating performance. A current NASA initiative called the Advanced Subsonic Technology (AST) Program has set as a goal a 6-EPNdB (effective perceived noise) reduction in aircraft engine noise relative to 1992 technology levels by the year 2000. As part of this noise program, and in cooperation with the Allison Engine Company, an advanced, low-noise, high-bypass-ratio fan stage design and several advanced technology stator vane designs were recently tested in NASA Lewis Research Center's 9- by 15-Foot Low-Speed Wind Tunnel (an anechoic facility). The project was called the NASA/Allison Low Noise Fan.

  12. Advanced Stirling Convertor Testing at NASA Glenn Research Center

    NASA Technical Reports Server (NTRS)

    Oriti, Salvatore M.; Blaze, Gina M.

    2007-01-01

    The U.S. Department of Energy (DOE), Lockheed Martin Space Systems (LMSS), Sunpower Inc., and NASA Glenn Research Center (GRC) have been developing an Advanced Stirling Radioisotope Generator (ASRG) for use as a power system on space science and exploration missions. This generator will make use of the free-piston Stirling convertors to achieve higher conversion efficiency than currently available alternatives. The ASRG will utilize two Advanced Stirling Convertors (ASC) to convert thermal energy from a radioisotope heat source to electricity. NASA GRC has initiated several experiments to demonstrate the functionality of the ASC, including: in-air extended operation, thermal vacuum extended operation, and ASRG simulation for mobile applications. The in-air and thermal vacuum test articles are intended to provide convertor performance data over an extended operating time. These test articles mimic some features of the ASRG without the requirement of low system mass. Operation in thermal vacuum adds the element of simulating deep space. This test article is being used to gather convertor performance and thermal data in a relevant environment. The ASRG simulator was designed to incorporate a minimum amount of support equipment, allowing integration onto devices powered directly by the convertors, such as a rover. This paper discusses the design, fabrication, and implementation of these experiments.

  13. First NASA Advanced Composites Technology Conference, Part 2

    NASA Technical Reports Server (NTRS)

    Davis, John G., Jr. (Compiler); Bohon, Herman L. (Compiler)

    1991-01-01

    Presented here is a compilation of papers presented at the first NASA Advanced Composites Technology (ACT) Conference held in Seattle, Washington, from 29 Oct. to 1 Nov. 1990. The ACT program is a major new multiyear research initiative to achieve a national goal of technology readiness before the end of the decade. Included are papers on materials development and processing, innovative design concepts, analysis development and validation, cost effective manufacturing methodology, and cost tracking and prediction procedures. Papers on major applications programs approved by the Department of Defense are also included.

  14. Roadrunner Supercomputer Breaks the Petaflop Barrier

    ScienceCinema

    Los Alamos National Lab - Brian Albright, Charlie McMillan, Lin Yin

    2017-12-09

    At 3:30 a.m. on May 26, 2008, Memorial Day, the "Roadrunner" supercomputer exceeded a sustained speed of 1 petaflop/s, or 1 million billion calculations per second. The sustained performance makes Roadrunner more than twice as fast as the current number 1

  15. Supercomputer Issues from a University Perspective.

    ERIC Educational Resources Information Center

    Beering, Steven C.

    1984-01-01

    Discusses issues related to the access of and training of university researchers in using supercomputers, considering National Science Foundation's (NSF) role in this area, microcomputers on campuses, and the limited use of existing telecommunication networks. Includes examples of potential scientific projects (by subject area) utilizing…

  16. Non-preconditioned conjugate gradient on cell and FPGA based hybrid supercomputer nodes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dubois, David H; Dubois, Andrew J; Boorman, Thomas M

    2009-01-01

    This work presents a detailed implementation of a double precision, non-preconditioned, Conjugate Gradient algorithm on a Roadrunner heterogeneous supercomputer node. These nodes utilize the Cell Broadband Engine Architecture™ in conjunction with x86 Opteron™ processors from AMD. We implement a common Conjugate Gradient algorithm, on a variety of systems, to compare and contrast performance. Implementation results are presented for the Roadrunner hybrid supercomputer, the SRC Computers, Inc. MAPStation SRC-6 FPGA-enhanced hybrid supercomputer, and an AMD Opteron-only system. In all hybrid implementations, wall clock time is measured, including all transfer overhead and compute timings.

  17. Non-preconditioned conjugate gradient on cell and FPGA-based hybrid supercomputer nodes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dubois, David H; Dubois, Andrew J; Boorman, Thomas M

    2009-03-10

    This work presents a detailed implementation of a double precision, non-preconditioned, Conjugate Gradient algorithm on a Roadrunner heterogeneous supercomputer node. These nodes utilize the Cell Broadband Engine Architecture™ in conjunction with x86 Opteron™ processors from AMD. We implement a common Conjugate Gradient algorithm, on a variety of systems, to compare and contrast performance. Implementation results are presented for the Roadrunner hybrid supercomputer, the SRC Computers, Inc. MAPStation SRC-6 FPGA-enhanced hybrid supercomputer, and an AMD Opteron-only system. In all hybrid implementations, wall clock time is measured, including all transfer overhead and compute timings.
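    The records above benchmark the conjugate gradient method across hybrid architectures. As a minimal sketch of the algorithm itself (an illustrative NumPy implementation, not the authors' Cell/FPGA code; the test matrix and tolerance are arbitrary):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Non-preconditioned CG for a symmetric positive-definite matrix A."""
    x = np.zeros_like(b)
    r = b - A @ x              # initial residual
    p = r.copy()               # initial search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)   # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:   # converged
            break
        p = r + (rs_new / rs_old) * p  # conjugate direction update
        rs_old = rs_new
    return x

# Small SPD system; CG converges in at most n iterations in exact arithmetic.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```

    On hybrid nodes, the expensive step is the matrix-vector product `A @ p`, which is what gets offloaded to the Cell or FPGA accelerators; the vector updates and dot products remain cheap by comparison.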

  18. Finite element methods on supercomputers - The scatter-problem

    NASA Technical Reports Server (NTRS)

    Loehner, R.; Morgan, K.

    1985-01-01

    Certain problems arise in connection with the use of supercomputers for the implementation of finite-element methods. These problems relate to the desirability of utilizing the power of the supercomputer as fully as possible for rapid execution of the required computations, taking into account the gain in speed possible with pipelined operations. For the finite-element method, the time-consuming operations may be divided into three categories. The first two present no problems, while the third type of operation can be a reason for inefficient performance of finite-element programs. Two possibilities for overcoming these difficulties are proposed, with attention to the scatter process.
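    The scatter operation at issue can be illustrated with a toy assembly step (a hypothetical 1-D mesh, not an example from the paper): contributions from elements that share a node must be summed into the same global entry, and these repeated indices are what defeat naive vectorization on pipelined hardware.

```python
import numpy as np

# Toy 1-D mesh: 4 global nodes, 3 two-node elements.
elements = np.array([[0, 1], [1, 2], [2, 3]])        # node indices per element
elem_contrib = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])

# Scatter-add: nodes 1 and 2 are shared by two elements, so their global
# entries receive two contributions. np.add.at sums duplicates correctly,
# whereas a naive vectorized `global_vec[elements] += elem_contrib` would
# silently drop all but one contribution per node.
global_vec = np.zeros(4)
np.add.at(global_vec, elements, elem_contrib)
# global_vec -> [1., 3., 5., 3.]
```

    Colouring the elements so that no two elements in a group share a node is one classic way to recover vectorizable scatter loops on such machines.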

  19. A mass storage system for supercomputers based on Unix

    NASA Technical Reports Server (NTRS)

    Richards, J.; Kummell, T.; Zarlengo, D. G.

    1988-01-01

    The authors present the design, implementation, and utilization of a large mass storage subsystem (MSS) for the numerical aerodynamics simulation. The MSS supports a large networked, multivendor Unix-based supercomputing facility. The MSS at Ames Research Center provides all processors on the numerical aerodynamics system processing network, from workstations to supercomputers, the ability to store large amounts of data in a highly accessible, long-term repository. The MSS uses Unix System V and is capable of storing hundreds of thousands of files ranging from a few bytes to 2 Gb in size.

  20. NASA Bioreactors Advance Disease Treatments

    NASA Technical Reports Server (NTRS)

    2009-01-01

    the body. Experiments conducted by Johnson scientist Dr. Thomas Goodwin proved that the NASA bioreactor could successfully cultivate cells using simulated microgravity, resulting in three-dimensional tissues that more closely approximate those in the body. Further experiments conducted on space shuttle missions and by Wolf as an astronaut on the Mir space station demonstrated that the bioreactor's effects were even further expanded in space, resulting in remarkable levels of tissue formation. While the bioreactor may one day culture red blood cells for injured astronauts or single-celled organisms like algae as food or oxygen producers for a Mars colony, the technology's cell growth capability offers significant opportunities for terrestrial medical research right now. A small Texas company is taking advantage of the NASA technology to advance promising treatment applications for diseases both common and obscure.

  1. Advanced Architectures for Astrophysical Supercomputing

    NASA Astrophysics Data System (ADS)

    Barsdell, B. R.; Barnes, D. G.; Fluke, C. J.

    2010-12-01

    Astronomers have come to rely on the increasing performance of computers to reduce, analyze, simulate and visualize their data. In this environment, faster computation can mean more science outcomes or the opening up of new parameter spaces for investigation. If we are to avoid major issues when implementing codes on advanced architectures, it is important that we have a solid understanding of our algorithms. A recent addition to the high-performance computing scene that highlights this point is the graphics processing unit (GPU). The hardware originally designed for speeding-up graphics rendering in video games is now achieving speed-ups of O(100×) in general-purpose computation - performance that cannot be ignored. We are using a generalized approach, based on the analysis of astronomy algorithms, to identify the optimal problem-types and techniques for taking advantage of both current GPU hardware and future developments in computing architectures.

  2. Multiple DNA and protein sequence alignment on a workstation and a supercomputer.

    PubMed

    Tajima, K

    1988-11-01

    This paper describes a multiple alignment method using a workstation and a supercomputer. The method is based on aligning a set of already-aligned sequences with a new sequence, applied as a recursive procedure. The alignment executes in reasonable computation time on platforms ranging from a workstation to a supercomputer, both in terms of alignment results and of computational speed through parallel processing. The application of the algorithm is illustrated by several examples of multiple alignment of 12 amino acid and DNA sequences of HIV (human immunodeficiency virus) env genes. Colour graphics programs on a workstation and parallel processing on a supercomputer are discussed.
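    The recursive procedure described above builds on pairwise global alignment. As a minimal sketch of that building block (an illustrative Needleman-Wunsch score computation; the scoring parameters are arbitrary, not those of the paper):

```python
def nw_score(a, b, match=1, mismatch=-1, gap=-2):
    """Needleman-Wunsch global alignment score via dynamic programming.

    Keeps only the previous DP row, so memory is O(len(b)).
    """
    n, m = len(a), len(b)
    prev = [j * gap for j in range(m + 1)]       # aligning prefix of b to gaps
    for i in range(1, n + 1):
        cur = [i * gap] + [0] * m                # aligning prefix of a to gaps
        for j in range(1, m + 1):
            diag = prev[j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            cur[j] = max(diag,                   # substitute / match
                         prev[j] + gap,          # gap in b
                         cur[j - 1] + gap)       # gap in a
        prev = cur
    return prev[m]
```

    Profile-to-sequence alignment, as used in the recursive method, replaces the single-character match test with a column-wise score over the already-aligned set; the DP recurrence is otherwise the same, and the inner loop is the part that parallelizes well on a supercomputer.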

  3. Library Services in a Supercomputer Center.

    ERIC Educational Resources Information Center

    Layman, Mary

    1991-01-01

    Describes library services that are offered at the San Diego Supercomputer Center (SDSC), which is located at the University of California at San Diego. Topics discussed include the user population; online searching; microcomputer use; electronic networks; current awareness programs; library catalogs; and the slide collection. A sidebar outlines…

  4. Extracting the Textual and Temporal Structure of Supercomputing Logs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jain, S; Singh, I; Chandra, A

    2009-05-26

    Supercomputers are prone to frequent faults that adversely affect their performance, reliability and functionality. System logs collected on these systems are a valuable resource of information about their operational status and health. However, their massive size, complexity, and lack of standard format makes it difficult to automatically extract information that can be used to improve system management. In this work we propose a novel method to succinctly represent the contents of supercomputing logs, by using textual clustering to automatically find the syntactic structures of log messages. This information is used to automatically classify messages into semantic groups via an online clustering algorithm. Further, we describe a methodology for using the temporal proximity between groups of log messages to identify correlated events in the system. We apply our proposed methods to two large, publicly available supercomputing logs and show that our technique features nearly perfect accuracy for online log classification and extracts meaningful structural and temporal message patterns that can be used to improve the accuracy of other log analysis techniques.
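    The syntactic-structure idea can be sketched as follows (a simplified illustration, not the authors' clustering algorithm): masking variable tokens such as decimal and hex numbers gives messages with the same structure an identical template signature, which then serves as the grouping key.

```python
import re
from collections import defaultdict

def signature(message):
    """Map a log message to its syntactic template.

    Variable fields (decimal or hex numbers) are replaced by a wildcard,
    so messages differing only in those fields share one signature.
    """
    tokens = message.split()
    return " ".join(
        "<*>" if re.fullmatch(r"0x[0-9a-fA-F]+|\d+", t) else t
        for t in tokens
    )

logs = [
    "node 12 failed with code 0xdeadbeef",   # hypothetical sample lines
    "node 7 failed with code 0x1f",
    "link up on port 3",
]
groups = defaultdict(list)
for line in logs:
    groups[signature(line)].append(line)
# Two templates emerge: the two "node ... failed" lines collapse into one group.
```

    Real log templatizers also mask hostnames, timestamps, and paths, and cluster near-identical templates rather than requiring exact signature matches.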

  5. Space Launch System NASA Research Announcement Advanced Booster Engineering Demonstration and/or Risk Reduction

    NASA Technical Reports Server (NTRS)

    Crumbly, Christopher M.; Craig, Kellie D.

    2011-01-01

    The intent of the Advanced Booster Engineering Demonstration and/or Risk Reduction (ABEDRR) effort is to: (1) reduce risks leading to an affordable Advanced Booster that meets the evolved capabilities of SLS, and (2) enable competition by mitigating targeted Advanced Booster risks to enhance SLS affordability. Key concepts: (1) Offerors must propose an Advanced Booster concept that meets SLS Program requirements; (2) engineering demonstration and/or risk reduction must relate to the Offeror's Advanced Booster concept; (3) the NASA Research Announcement (NRA) will not be prescriptive in defining engineering demonstration and/or risk reduction.

  6. Green Supercomputing at Argonne

    ScienceCinema

    Beckman, Pete

    2018-02-07

    Pete Beckman, head of Argonne's Leadership Computing Facility (ALCF) talks about Argonne National Laboratory's green supercomputing—everything from designing algorithms to use fewer kilowatts per operation to using cold Chicago winter air to cool the machine more efficiently. Argonne was recognized for green computing in the 2009 HPCwire Readers Choice Awards. More at http://www.anl.gov/Media_Center/News/2009/news091117.html Read more about the Argonne Leadership Computing Facility at http://www.alcf.anl.gov/

  7. Advanced Curation Activities at NASA: Preparing to Receive, Process, and Distribute Samples Returned from Future Missions

    NASA Technical Reports Server (NTRS)

    McCubbin, Francis M.; Zeigler, Ryan A.

    2017-01-01

    The Astromaterials Acquisition and Curation Office (henceforth referred to as the NASA Curation Office) at NASA Johnson Space Center (JSC) is responsible for curating all of NASA's extraterrestrial samples. Under the governing document, NASA Policy Directive (NPD) 7100.10F, JSC is charged with curation of all extraterrestrial material under NASA control, including material from future NASA missions. The Directive goes on to define curation as including documentation, preservation, preparation, and distribution of samples for research, education, and public outreach. Here we briefly describe NASA's astromaterials collections and our ongoing efforts related to enhancing the utility of our current collections, as well as our efforts to prepare for future sample return missions. We collectively refer to these efforts as advanced curation.

  8. NASA Noise Reduction Program for Advanced Subsonic Transports

    NASA Technical Reports Server (NTRS)

    Stephens, David G.; Cazier, F. W., Jr.

    1995-01-01

    Aircraft noise is an important byproduct of the world's air transportation system. Because of growing public interest and sensitivity to noise, noise reduction technology is becoming increasingly important to the unconstrained growth and utilization of the air transportation system. Unless noise technology keeps pace with public demands, noise restrictions at the international, national and/or local levels may unduly constrain the growth and capacity of the system to serve the public. In recognition of the importance of noise technology to the future of air transportation as well as the viability and competitiveness of the aircraft that operate within the system, NASA, the FAA and the industry have developed noise reduction technology programs having application to virtually all classes of subsonic and supersonic aircraft envisioned to operate far into the 21st century. The purpose of this paper is to describe the scope and focus of the Advanced Subsonic Technology Noise Reduction program with emphasis on the advanced technologies that form the foundation of the program.

  9. CFD Analysis in Advance of the NASA Juncture Flow Experiment

    NASA Technical Reports Server (NTRS)

    Lee, H. C.; Pulliam, T. H.; Neuhart, D. H.; Kegerise, M. A.

    2017-01-01

    NASA through its Transformational Tools and Technologies Project (TTT) under the Advanced Air Vehicle Program, is supporting a substantial effort to investigate the formation and origin of separation bubbles found on wing-body juncture zones. The flow behavior in these regions is highly complex, difficult to measure experimentally, and challenging to model numerically. Multiple wing configurations were designed and evaluated using Computational Fluid Dynamics (CFD), and a series of wind tunnel risk reduction tests were performed to further down-select the candidates for the final experiment. This paper documents the CFD analysis done in conjunction with the 6 percent scale risk reduction experiment performed in NASA Langley's 14- by 22-Foot Subsonic Tunnel. The combined CFD and wind tunnel results ultimately helped the Juncture Flow committee select the wing configurations for the final experiment.

  10. Recent advances in Ni-H2 technology at NASA Lewis Research Center

    NASA Technical Reports Server (NTRS)

    Gonzalezsanabria, O. D.; Britton, D. L.; Smithrick, J. J.; Reid, M. A.

    1986-01-01

    The NASA Lewis Research Center has concentrated its efforts on advancing the Ni-H2 system technology for low Earth orbit applications. Component technology as well as the design principles were studied in an effort to understand the system behavior and failure mechanisms in order to increase performance and extend cycle life. The design principles were previously addressed. The component development is discussed, in particular the separator and nickel electrode and how these efforts will advance the Ni-H2 system technology.

  11. A fault tolerant spacecraft supercomputer to enable a new class of scientific discovery

    NASA Technical Reports Server (NTRS)

    Katz, D. S.; McVittie, T. I.; Silliman, A. G., Jr.

    2000-01-01

    The goal of the Remote Exploration and Experimentation (REE) Project is to move supercomputing into space in a cost-effective manner and to allow the use of inexpensive, state-of-the-art, commercial-off-the-shelf components and subsystems in these space-based supercomputers.

  12. Advance Inspection of NASA Next Mars Landing Site

    NASA Image and Video Library

    2017-03-29

    This map shows footprints of images taken from Mars orbit by the High Resolution Imaging Science Experiment (HiRISE) camera as part of advance analysis of the area where NASA's InSight mission will land in 2018. The final planned image of the set is targeted to fill in the yellow-outlined rectangle on March 30, 2017. HiRISE is one of six science instruments on NASA's Mars Reconnaissance Orbiter, which reached Mars in 2006 and surpassed 50,000 orbits on March 27, 2017. The map covers an area about 100 miles (160 kilometers) across. HiRISE has been used since 2006 to inspect dozens of candidate landing sites on Mars, including the sites where the Phoenix and Curiosity missions landed in 2008 and 2012. The site selected for InSight's Nov. 26, 2018, landing is on a flat plain in the Elysium Planitia region of Mars, between 4 and 5 degrees north of the equator. HiRISE images are detailed enough to reveal individual boulders big enough to be a landing hazard. The March 30 observation that completes the planned advance imaging of this landing area brings the number of HiRISE images of the area to 73. Some are pairs covering the same ground. Overlapping observations provide stereoscopic, 3-D information for evaluating characteristics such as slopes. On this map, coverage by stereo pairs is coded in pale blue, compared to the gray-green of single HiRISE image footprints. The ellipses on the map are about 81 miles (130 kilometers) west-to-east by about 17 miles (27 kilometers) north-to-south. InSight has about 99 percent odds of landing within the ellipse for which it is targeted. The three ellipses indicate landing expectations for three of the possible InSight launch dates: white outline for launch at the start of the launch period, on May 5, 2018; blue for launch on May 26, 2018; orange for launch on June 8, 2018. InSight -- an acronym for "Interior Exploration using Seismic Investigations, Geodesy and Heat Transport" -- will study the deep interior of Mars to improve

  13. The Pawsey Supercomputer geothermal cooling project

    NASA Astrophysics Data System (ADS)

    Regenauer-Lieb, K.; Horowitz, F.; Western Australian Geothermal Centre Of Excellence, T.

    2010-12-01

    The Australian Government has funded the Pawsey supercomputer in Perth, Western Australia, providing computational infrastructure intended to support the future operations of the Australian Square Kilometre Array radiotelescope and to boost next-generation computational geosciences in Australia. Supplementary funds have been directed to the development of a geothermal exploration well to research the potential for direct heat use applications at the Pawsey Centre site. Cooling the Pawsey supercomputer may be achieved by geothermal heat exchange rather than by conventional electrical power cooling, thus reducing the carbon footprint of the Pawsey Centre and demonstrating an innovative green technology that is widely applicable in industry and urban centres across the world. The exploration well is scheduled to be completed in 2013, with drilling due to commence in the third quarter of 2011. One year is allocated to finalizing the design of the exploration, monitoring and research well. Success in the geothermal exploration and research program will result in an industrial-scale geothermal cooling facility at the Pawsey Centre, and will provide a world-class student training environment in geothermal energy systems. A similar system is partially funded and in advanced planning to provide base-load air-conditioning for the main campus of the University of Western Australia. Both systems are expected to draw ~80-95 degrees C water from aquifers lying between 2000 and 3000 meters depth from naturally permeable rocks of the Perth sedimentary basin. The geothermal water will be run through absorption chilling devices, which only require heat (as opposed to mechanical work) to power a chilled water stream adequate to meet the cooling requirements. Once the heat has been removed from the geothermal water, licensing issues require the water to be re-injected back into the aquifer system. These systems are intended to demonstrate the feasibility of powering large-scale air

  14. Exploiting Thread Parallelism for Ocean Modeling on Cray XC Supercomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarje, Abhinav; Jacobsen, Douglas W.; Williams, Samuel W.

    The incorporation of increasing core counts in modern processors used to build state-of-the-art supercomputers is driving application development towards exploitation of thread parallelism, in addition to distributed memory parallelism, with the goal of delivering efficient high-performance codes. In this work we describe the exploitation of threading and our experiences applying it to a real-world ocean modeling application code, MPAS-Ocean. We present detailed performance analysis and comparisons of various approaches and configurations for threading on the Cray XC series supercomputers.

  15. NASA Advanced Exploration Systems: 2017 Advancements in Life Support Systems

    NASA Technical Reports Server (NTRS)

    Schneider, Walter F.; Shull, Sarah A.

    2017-01-01

    The NASA Advanced Exploration Systems (AES) Life Support Systems (LSS) project strives to develop reliable, energy-efficient, and low-mass spacecraft systems to provide environmental control and life support systems (ECLSS) critical to enabling long duration human missions beyond low Earth orbit (LEO). Highly reliable, closed-loop life support systems are among the capabilities required for the longer duration human space exploration missions planned in the mid-2020s and beyond. The LSS Project is focused on four areas: architecture and systems engineering for life support systems, environmental monitoring, air revitalization, and wastewater processing and water management. Starting with the International Space Station (ISS) LSS systems as a point of departure where applicable, the three-fold mission of the LSS Project is to address discrete LSS technology gaps, to improve the reliability of LSS systems, and to advance LSS systems toward integrated testing aboard the ISS. This paper is a follow-on to the AES LSS development status reported in 2016 and provides additional details on the progress made since that paper was published, with specific attention to the status of the Aerosol Sampler ISS Flight Experiment, the Spacecraft Atmosphere Monitor (SAM) Flight Experiment, the Brine Processor Assembly (BPA) Flight Experiment, the CO2 removal technology development tasks, and the work investigating the impacts of dormancy on LSS systems.

  16. Qualifying for the Green500: Experience with the newest generation of supercomputers at LANL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yilk, Todd

    The High Performance Computing Division of Los Alamos National Laboratory recently brought four new supercomputing platforms on line: Trinity, with separate partitions built around the Haswell and Knights Landing CPU architectures for capability computing, and Grizzly, Fire, and Ice for capacity computing applications. The power monitoring infrastructure of these machines is significantly enhanced over previous supercomputing generations at LANL, and all were qualified at the highest level of the Green500 benchmark. This paper discusses supercomputing at LANL, the Green500 benchmark, and our experience meeting the Green500's reporting requirements.

  17. Qualifying for the Green500: Experience with the newest generation of supercomputers at LANL

    DOE PAGES

    Yilk, Todd

    2018-02-17

    The High Performance Computing Division of Los Alamos National Laboratory recently brought four new supercomputing platforms on line: Trinity, with separate partitions built around the Haswell and Knights Landing CPU architectures for capability computing, and Grizzly, Fire, and Ice for capacity computing applications. The power monitoring infrastructure of these machines is significantly enhanced over previous supercomputing generations at LANL, and all were qualified at the highest level of the Green500 benchmark. This paper discusses supercomputing at LANL, the Green500 benchmark, and our experience meeting the Green500's reporting requirements.

  18. QCD on the BlueGene/L Supercomputer

    NASA Astrophysics Data System (ADS)

    Bhanot, G.; Chen, D.; Gara, A.; Sexton, J.; Vranas, P.

    2005-03-01

    In June 2004, QCD was simulated for the first time at a sustained speed exceeding 1 TeraFlops, on the BlueGene/L supercomputer at the IBM T.J. Watson Research Lab. The implementation and performance of QCD on BlueGene/L are presented.

  19. Development of seismic tomography software for hybrid supercomputers

    NASA Astrophysics Data System (ADS)

    Nikitin, Alexandr; Serdyukov, Alexandr; Duchkov, Anton

    2015-04-01

    Seismic tomography is a technique for computing a velocity model of a geologic structure from the first-arrival travel times of seismic waves. The technique is used in processing regional and global seismic data, in seismic exploration for prospecting mineral and hydrocarbon deposits, and in seismic engineering for monitoring the condition of engineering structures and the surrounding host medium. As seismic monitoring systems develop and the volume of seismic data grows, there is a growing need for new, more effective computational algorithms for seismic tomography applications, with improved performance, accuracy, and resolution. To achieve this goal, it is necessary to use modern high performance computing systems, such as supercomputers with hybrid architectures that use not only CPUs but also accelerators and co-processors for computation. The goal of this research is the development of parallel seismic tomography algorithms and a software package for such systems, to be used in processing large volumes of seismic data (hundreds of gigabytes and more). These algorithms and the software package will be optimized for the computing devices most common in modern hybrid supercomputers, such as Intel Xeon CPUs, NVIDIA Tesla accelerators and Intel Xeon Phi co-processors. In this work, the following general scheme of seismic tomography is utilized. Using an eikonal equation solver, arrival times of seismic waves are computed from an assumed velocity model of the geologic structure being analyzed. To solve the linearized inverse problem, a tomographic matrix is computed that connects model adjustments with travel-time residuals, and the resulting system of linear equations is regularized and solved to adjust the model. The effectiveness of parallel implementations of existing algorithms on the target architectures is considered. During the first stage of this work, algorithms were developed for execution on
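    The linearized update step described in this abstract (travel-time residuals, a tomographic matrix connecting model adjustments to those residuals, and a regularized solve) can be illustrated with a tiny worked example. This is a hypothetical sketch: ray tracing and matrix assembly are omitted, the 2x2 system uses Cramer's rule only because of its size, and all numbers are illustrative.

```python
# Sketch of one regularized linearized-tomography update:
# given residuals r = t_obs - t_pred and sensitivity matrix G
# (rows: rays, columns: slowness cells), solve the Tikhonov-regularized
# normal equations (G^T G + lam*I) dm = G^T r for the model update dm.

def mat_t(a):
    # matrix transpose
    return [list(col) for col in zip(*a)]

def mat_mul(a, b):
    # dense matrix product a @ b
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def solve2(a, b):
    # Cramer's rule for a 2x2 system a @ x = b (fine at this toy size)
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return [(b[0] * a[1][1] - b[1] * a[0][1]) / det,
            (b[1] * a[0][0] - b[0] * a[1][0]) / det]

def tomo_update(g, t_obs, t_pred, lam=0.1):
    r = [[to - tp] for to, tp in zip(t_obs, t_pred)]   # residual column
    gt = mat_t(g)
    gtg = mat_mul(gt, g)
    for i in range(len(gtg)):                          # regularization
        gtg[i][i] += lam
    gtr = [row[0] for row in mat_mul(gt, r)]
    return solve2(gtg, gtr)

# two rays crossing two slowness cells (entries are ray lengths per cell)
g = [[1.0, 0.5], [0.5, 1.0]]
dm = tomo_update(g, t_obs=[1.6, 1.4], t_pred=[1.5, 1.5])
print(dm)
```

    A production code would build G from the eikonal-solver ray paths, store it sparsely, and use an iterative solver (e.g. LSQR) instead of the dense normal equations, which is where the GPU and co-processor parallelism discussed above comes in.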

  20. Application of NASA's Advanced Life Support Technologies in Polar Regions

    NASA Technical Reports Server (NTRS)

    Bubenheim, David L.

    1997-01-01

    The problems of obtaining adequate pure drinking water and disposing of liquid and solid waste in the U.S. Arctic, a region where virtually all water is frozen solid for much of the year, have led to unsanitary solutions. Sanitation and a safe water supply are particular problems in rural villages. These villages are without running water and use plastic buckets for toilets. The outbreak of diseases is believed to be partially attributable to exposure to human waste and lack of sanitation. Villages with the most frequent outbreaks of disease are those in which running water is difficult to obtain. Waste is emptied into open lagoons, rivers, or onto the sea coast. It does not degrade rapidly and, in addition to affecting human health, can be harmful to the fragile ecology of the Arctic and the indigenous wildlife and fish populations. Current practices for waste management and sanitation pose serious human hazards as well as threaten the environment. NASA's unique knowledge of water/wastewater treatment systems for extreme environments, identified in the Congressional Office of Technology Assessment report entitled An Alaskan Challenge: Native Village Sanitation, may offer practical solutions addressing the issues of safe drinking water and effective sanitation practices in rural villages. NASA's advanced life support technologies are being combined with Arctic science and engineering knowledge to address the unique needs of the remote communities of Alaska through the Advanced Life Systems for Extreme Environments (ALSEE) project. ALSEE is a collaborative effort involving NASA, the State of Alaska, the University of Alaska, the North Slope Borough of Alaska, Ilisagvik College in Barrow, and the National Science Foundation (NSF). The focus is a major issue in the State of Alaska and other areas of the Circumpolar North: the health and welfare of its people, their lives and the subsistence lifestyle in remote communities, economic opportunity, and care for the

  1. Advances in Laser/Lidar Technologies for NASA's Science and Exploration Mission's Applications

    NASA Technical Reports Server (NTRS)

    Singh, Upendra N.; Kavaya, Michael J.

    2005-01-01

    NASA's Laser Risk Reduction Program, begun in 2002, has achieved many technology advances in only 3.5 years. The recent selection of several lidar proposals for Science and Exploration applications indicates that the LRRP goal of enabling future space-based missions by lowering the technology risk has already begun to be met.

  2. Graphics supercomputer for computational fluid dynamics research

    NASA Astrophysics Data System (ADS)

    Liaw, Goang S.

    1994-11-01

    The objective of this project is to purchase a state-of-the-art graphics supercomputer to improve the Computational Fluid Dynamics (CFD) research capability at Alabama A & M University (AAMU) and to support the Air Force research projects. A cutting-edge graphics supercomputer system, Onyx VTX, from Silicon Graphics Computer Systems (SGI), was purchased and installed. Other equipment, including a desktop personal computer, PC-486 DX2 with a built-in 10-BaseT Ethernet card, a 10-BaseT hub, an Apple Laser Printer Select 360, and a notebook computer from Zenith, was also purchased. A reading room has been converted to a research computer lab by adding furniture and an air conditioning unit in order to provide an appropriate working environment for researchers and the purchased equipment. All the purchased equipment was successfully installed and is fully functional. Several research projects, including two existing Air Force projects, are being performed using these facilities.

  3. NASA Programs in Advanced Sensors and Measurement Technology for Aeronautical Applications

    NASA Technical Reports Server (NTRS)

    Conway, Bruce A.

    2004-01-01

    There are many challenges facing designers and operators of our next-generation aircraft in meeting the demands for efficiency, safety, and reliability which will be imposed. This paper discusses aeronautical sensor requirements for a number of research and applications areas pertinent to the demands listed above. A brief overview will be given of aeronautical research measurements, along with a discussion of requirements for advanced technology. Also included will be descriptions of emerging sensors and instrumentation technology which may be exploited for enhanced research and operational capabilities. Finally, renewed emphasis by the National Aeronautics and Space Administration on advanced sensor and instrumentation technology development will be discussed, including projections of technology advances over the next 5 years. Emphasis on NASA efforts to more actively advance the state-of-the-art in sensors and measurement techniques is timely in light of exciting new opportunities in airspace development and operation. An up-to-date summary of the measurement technology programs being established to respond to these opportunities is provided.

  4. Development of a Cloud Resolving Model for Heterogeneous Supercomputers

    NASA Astrophysics Data System (ADS)

    Sreepathi, S.; Norman, M. R.; Pal, A.; Hannah, W.; Ponder, C.

    2017-12-01

    A cloud resolving climate model is needed to reduce major systematic errors in climate simulations arising from structural uncertainty in numerical treatments of convection, such as convective storm systems. This research describes the effort to port the SAM (System for Atmospheric Modeling) cloud resolving model to heterogeneous supercomputers using GPUs (Graphical Processing Units). We have isolated a standalone configuration of SAM that is targeted for integration into the DOE ACME (Accelerated Climate Modeling for Energy) Earth System model. We have identified key computational kernels from the model and offloaded them to a GPU using the OpenACC programming model. Furthermore, we are investigating various optimization strategies intended to enhance GPU utilization, including loop fusion/fission, coalesced data access, and refactoring loops to a higher abstraction level. We will present early performance results and lessons learned, as well as optimization strategies. The computational platform used in this study is the Summitdev system, an early testbed one generation removed from Summit, the next leadership-class supercomputer at Oak Ridge National Laboratory. The system contains 54 nodes, each with 2 IBM POWER8 CPUs and 4 NVIDIA Tesla P100 GPUs. This work is part of a larger project, the ACME-MMF component of the U.S. Department of Energy (DOE) Exascale Computing Project. The ACME-MMF approach addresses structural uncertainty in cloud processes by replacing traditional parameterizations with cloud resolving "superparameterization" within each grid cell of the global climate model. Superparameterization dramatically increases arithmetic intensity, making the MMF approach an ideal strategy for achieving good performance on emerging exascale computing architectures. The goal of the project is to integrate superparameterization into ACME and explore its full potential to scientifically and computationally advance climate simulation and prediction.
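    One of the optimizations this abstract names, loop fusion, can be shown in a language-neutral way. In the actual port the transformation is applied to the model's Fortran loops under OpenACC; the sketch below merely illustrates the idea in Python with made-up operations, and the array and the arithmetic are illustrative only.

```python
# Loop fusion: two separate sweeps over the same array (two kernel
# launches and two passes through memory on a GPU) are combined into
# one sweep, improving locality and reducing launch overhead.

def two_pass(q):
    # unfused: each loop reads and writes the full array
    tmp = [x * 1.01 for x in q]        # e.g. an advection-like update
    return [x + 9.81 for x in tmp]     # e.g. adding a buoyancy-like term

def fused(q):
    # fused: one pass over the data, same arithmetic in the same order
    return [x * 1.01 + 9.81 for x in q]

q = [float(i) for i in range(8)]
assert two_pass(q) == fused(q)         # identical results, fewer passes
print(fused(q)[:3])
```

    Fission is the inverse transformation, splitting one loop into several, which can help when a single fused loop exceeds GPU register or cache capacity.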

  5. A Long History of Supercomputing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grider, Gary

    As part of its national security science mission, Los Alamos National Laboratory and HPC have a long, entwined history dating back to the earliest days of computing. From bringing the first problem to the nation’s first computer to building the first machine to break the petaflop barrier, Los Alamos holds many “firsts” in HPC breakthroughs. Today, supercomputers are integral to stockpile stewardship and the Laboratory continues to work with vendors in developing the future of HPC.

  6. Introducing Argonne’s Theta Supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    Theta, the Argonne Leadership Computing Facility’s (ALCF) new Intel-Cray supercomputer, is officially open to the research community. Theta’s massively parallel, many-core architecture puts the ALCF on the path to Aurora, the facility’s future Intel-Cray system. Capable of nearly 10 quadrillion calculations per second, Theta enables researchers to break new ground in scientific investigations that range from modeling the inner workings of the brain to developing new materials for renewable energy applications.

  7. Advanced Antenna Design for NASA's EcoSAR Instrument

    NASA Technical Reports Server (NTRS)

    Du Toit, Cornelis F.; Deshpande, Manohar; Rincon, Rafael F.

    2016-01-01

    Advanced antenna arrays were designed for NASA's EcoSAR airborne radar instrument. EcoSAR is a beamforming synthetic aperture radar instrument designed to make polarimetric and "single pass" interferometric measurements of Earth surface parameters. EcoSAR's operational requirements of a 435MHz center frequency with up to 200MHz bandwidth, dual polarization, high cross-polarization isolation (> 30 dB), +/- 45deg beam scan range and antenna form-factor constraints imposed stringent requirements on the antenna design. The EcoSAR project successfully developed, characterized, and tested two array antennas in an anechoic chamber. EcoSAR's first airborne campaign conducted in the spring of 2014 generated rich data sets of scientific and engineering value, demonstrating the successful operation of the antennas.

  8. Advanced Stirling Convertor Testing at NASA Glenn Research Center

    NASA Technical Reports Server (NTRS)

    Wilson, Scott D.; Poriti, Sal

    2010-01-01

    The NASA Glenn Research Center (GRC) has been testing high-efficiency free-piston Stirling convertors for potential use in radioisotope power systems (RPSs) since 1999. The current effort is in support of the Advanced Stirling Radioisotope Generator (ASRG), which is being developed by the U.S. Department of Energy (DOE), Lockheed Martin Space Systems Company (LMSSC), Sunpower, Inc., and the NASA GRC. This generator would use two high-efficiency Advanced Stirling Convertors (ASCs) to convert thermal energy from a radioisotope heat source into electricity. As reliability is paramount to an RPS capable of providing spacecraft power for potential multi-year missions, GRC provides direct technology support to the ASRG flight project in the areas of reliability, convertor and generator testing, high-temperature materials, structures, modeling and analysis, organics, structural dynamics, electromagnetic interference (EMI), and permanent magnets to reduce risk and enhance reliability of the convertor as this technology transitions toward flight status. Convertor and generator testing is carried out in short- and long-duration tests designed to characterize convertor performance when subjected to environments intended to simulate launch and space conditions. Long duration testing is intended to baseline performance and observe any performance degradation over the life of the test. Testing involves developing support hardware that enables 24/7 unattended operation and data collection. GRC currently has 14 Stirling convertors under unattended extended operation testing, including two operating in the ASRG Engineering Unit (ASRG-EU). Test data and high-temperature support hardware are discussed for ongoing and future ASC tests with emphasis on the ASC-E and ASC-E2.

  9. NSF Establishes First Four National Supercomputer Centers.

    ERIC Educational Resources Information Center

    Lepkowski, Wil

    1985-01-01

    The National Science Foundation (NSF) has awarded support for supercomputer centers at Cornell University, Princeton University, University of California (San Diego), and University of Illinois. These centers are to be the nucleus of a national academic network for use by scientists and engineers throughout the United States. (DH)

  10. Update on Risk Reduction Activities for a Liquid Advanced Booster for NASA's Space Launch System

    NASA Technical Reports Server (NTRS)

    Crocker, Andrew M.; Greene, William D.

    2017-01-01

    The stated goals of NASA's Research Announcement for the Space Launch System (SLS) Advanced Booster Engineering Demonstration and/or Risk Reduction (ABEDRR) are to reduce risks leading to an affordable Advanced Booster that meets the evolved capabilities of SLS and enable competition by mitigating targeted Advanced Booster risks to enhance SLS affordability. Dynetics, Inc. and Aerojet Rocketdyne (AR) formed a team to offer a wide-ranging set of risk reduction activities and full-scale, system-level demonstrations that support NASA's ABEDRR goals. During the ABEDRR effort, the Dynetics Team has modified flight-proven Apollo-Saturn F-1 engine components and subsystems to improve affordability and reliability (e.g., reduce parts counts, touch labor, or use lower cost manufacturing processes and materials). The team has built hardware to validate production costs and completed tests to demonstrate it can meet performance requirements. State-of-the-art manufacturing and processing techniques have been applied to the heritage F-1, resulting in a low recurring cost engine while retaining the benefits of Apollo-era experience. NASA test facilities have been used to perform low-cost risk-reduction engine testing. In early 2014, NASA and the Dynetics Team agreed to move additional large liquid oxygen/kerosene engine work under Dynetics' ABEDRR contract. Also led by AR, the objectives of this work are to demonstrate combustion stability and measure performance of a 500,000 lbf class Oxidizer-Rich Staged Combustion (ORSC) cycle main injector. A trade study was completed to investigate the feasibility, cost effectiveness, and technical maturity of a domestically-produced engine that could potentially both replace the RD-180 on Atlas V and satisfy NASA SLS payload-to-orbit requirements via an advanced booster application. 
    Engine physical dimensions and performance parameters resulting from this study provide the system-level requirements for the ORSC risk reduction test article.

  11. Computing at the speed limit (supercomputers)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernhard, R.

    1982-07-01

    The author discusses how unheralded efforts in the United States, mainly in universities, have removed major stumbling blocks to building cost-effective superfast computers for scientific and engineering applications within five years. These computers would have sustained speeds of billions of floating-point operations per second (flops), whereas with the fastest machines today the top sustained speed is only 25 million flops, with bursts to 160 megaflops. Cost-effective superfast machines can be built because of advances in very large-scale integration and the special software needed to program the new machines. VLSI greatly reduces the cost per unit of computing power. The development of such computers would come at an opportune time. Although the US leads the world in large-scale computer technology, its supremacy is now threatened, not surprisingly, by the Japanese. Publicized reports indicate that the Japanese government is funding a cooperative effort by commercial computer manufacturers to develop superfast computers, about 1000 times faster than modern supercomputers. The US computer industry, by contrast, has balked at attempting to boost computer power so sharply because of the uncertain market for the machines and the failure of similar projects in the past to show significant results.

  12. NASA's Advanced Life Support Systems Human-Rated Test Facility

    NASA Technical Reports Server (NTRS)

    Henninger, D. L.; Tri, T. O.; Packham, N. J.

    1996-01-01

    Future NASA missions to explore the solar system will be long-duration missions, requiring human life support systems which must operate with very high reliability over long periods of time. Such systems must be highly regenerative, requiring minimum resupply, to enable the crews to be largely self-sufficient. These regenerative life support systems will use a combination of higher plants, microorganisms, and physicochemical processes to recycle air and water, produce food, and process wastes. A key step in the development of these systems is establishment of a human-rated test facility specifically tailored to evaluation of closed, regenerative life support systems--one in which long-duration, large-scale testing involving human test crews can be performed. Construction of such a facility, the Advanced Life Support Program's (ALS) Human-Rated Test Facility (HRTF), has begun at NASA's Johnson Space Center, and definition of systems and development of initial outfitting concepts for the facility are underway. This paper will provide an overview of the HRTF project plan, an explanation of baseline configurations, and descriptive illustrations of facility outfitting concepts.

  13. New Directions for NASA's Advanced Life Support Program

    NASA Technical Reports Server (NTRS)

    Barta, Daniel J.

    2006-01-01

    Advanced Life Support (ALS), an element of Human Systems Research and Technology's (HSRT) Life Support and Habitation Program (LSH), has been NASA's primary sponsor of life support research and technology development for the agency. Over its history, ALS sponsored tasks across a diverse set of institutions, including field centers, colleges and universities, industry, and governmental laboratories, resulting in numerous publications and scientific articles, patents and new technologies, as well as education and training for primary, secondary and graduate students, including minority serving institutions. Prior to the Vision for Space Exploration (VSE) announced on January 14th, 2004 by the President, ALS had been focused on research and technology development for long duration exploration missions, emphasizing closed-loop regenerative systems, including both biological and physicochemical. Taking a robust and flexible approach, ALS focused on capabilities to enable visits to multiple potential destinations beyond low Earth orbit. ALS developed requirements, reference missions, and assumptions upon which to structure and focus its development program. The VSE gave NASA a plan for steady human and robotic space exploration based on specific, achievable goals. Recently, the Exploration Systems Architecture Study (ESAS) was chartered by NASA's Administrator to determine the best exploration architecture and strategy to implement the Vision. The study identified key technologies required to enable and significantly enhance the reference exploration missions and to prioritize near-term and far-term technology investments. This technology assessment resulted in a revised Exploration Systems Mission Directorate (ESMD) technology investment plan. A set of new technology development projects were initiated as part of the plan's implementation, replacing tasks previously initiated under HSRT and its sister program, Exploration Systems Research and Technology (ESRT). The

  14. Transferring ecosystem simulation codes to supercomputers

    NASA Technical Reports Server (NTRS)

    Skiles, J. W.; Schulbach, C. H.

    1995-01-01

    Many ecosystem simulation computer codes have been developed in the last twenty-five years. This development took place initially on main-frame computers, then mini-computers, and more recently, on micro-computers and workstations. Supercomputing platforms (both parallel and distributed systems) have been largely unused, however, because of the perceived difficulty in accessing and using the machines. Also, significant differences in the system architectures of sequential, scalar computers and parallel and/or vector supercomputers must be considered. We have transferred a grassland simulation model (developed on a VAX) to a Cray Y-MP/C90. We describe porting the model to the Cray and the changes we made to exploit the parallelism in the application and improve code execution. The Cray executed the model 30 times faster than the VAX and 10 times faster than a Unix workstation. We achieved an additional speedup of 30 percent by using the compiler's vectorizing and in-lining capabilities. The code runs at only about 5 percent of the Cray's peak speed because it ineffectively uses the vector and parallel processing capabilities of the Cray. We expect that by restructuring the code, it could execute an additional six to ten times faster.
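    The speedup figures in this abstract compose multiplicatively, which is easy to verify with a few lines of arithmetic. The calculation below uses only the numbers stated above (30x vs. the VAX, 10x vs. a workstation, a further 30 percent from the compiler, and a current ~5 percent of peak with 6-10x hoped for from restructuring).

```python
# Composing the speedups reported for the Cray port of the grassland model.
base_vs_vax, base_vs_ws = 30.0, 10.0   # initial port speedups
compiler_gain = 1.30                   # +30% from vectorizing/in-lining

print(base_vs_vax * compiler_gain)     # ~39x vs the VAX
print(base_vs_ws * compiler_gain)      # ~13x vs the workstation

# if restructuring delivers the hoped-for 6x-10x on a code now at ~5%
# of peak, utilization would rise to roughly 30-50% of peak:
for extra in (6, 10):
    print(0.05 * extra)
```

    The last step also shows why the 6-10x estimate is plausible: it is bounded by the factor of ~20 separating 5 percent of peak from full peak.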

  15. NASA. Lewis Research Center Advanced Modulation and Coding Project: Introduction and overview

    NASA Technical Reports Server (NTRS)

    Budinger, James M.

    1992-01-01

    The Advanced Modulation and Coding Project at LeRC is sponsored by the Office of Space Science and Applications, Communications Division, Code EC, at NASA Headquarters and conducted by the Digital Systems Technology Branch of the Space Electronics Division. Advanced Modulation and Coding is one of three focused technology development projects within the branch's overall Processing and Switching Program. The program consists of industry contracts for developing proof-of-concept (POC) and demonstration model hardware, university grants for analyzing advanced techniques, and in-house integration and testing for performance verification and systems evaluation. The Advanced Modulation and Coding Project is broken into five elements: (1) bandwidth- and power-efficient modems; (2) high-speed codecs; (3) digital modems; (4) multichannel demodulators; and (5) very high-data-rate modems. At least one contract and one grant were awarded for each element.

  16. Interfaces for Advanced Computing.

    ERIC Educational Resources Information Center

    Foley, James D.

    1987-01-01

    Discusses the coming generation of supercomputers that will have the power to make elaborate "artificial realities" that facilitate user-computer communication. Illustrates these technological advancements with examples of the use of head-mounted monitors which are connected to position and orientation sensors, and gloves that track finger and…

  17. Workshop on Advances in NASA-Relevant, Minimally Invasive Instrumentation

    NASA Technical Reports Server (NTRS)

    1985-01-01

    The purpose of this meeting is to highlight those advances in instrumentation and methodology that can be applied to the medical problems that will be encountered as the duration of manned space missions is extended. Information on work that is presently being done by NASA as well as other approaches in which NASA is not participating will be exchanged. The NASA-sponsored efforts that will be discussed are part of the overall Space Medicine Program that has been undertaken by NASA to address the medical problems of manned spaceflight. These problems include those that have been observed in the past as well as those which are anticipated as missions become longer, traverse different orbits, or are in any way different. This conference is arranged in order to address the types of instrumentation that might be used in several major medical problem areas. Instrumentation that will help in the cardiovascular, musculoskeletal, and psychological areas, among others, will be presented. Interest lies in identifying instrumentation which will help in learning more about ourselves through experiments performed directly on humans. Great emphasis is placed on non-invasive approaches, although a very substantial program of basic animal research will be needed in the foreseeable future. Space Medicine is a rather small affair in what is primarily an engineering organization. Space Medicine is conducted throughout NASA by a very small skeleton staff at the headquarters office in Washington and by our various field centers. These centers include the Johnson Space Center in Houston, Texas, the Ames Research Center in Moffett Field, California, the Jet Propulsion Laboratory in Pasadena, California, the Kennedy Space Center in Florida, and the Langley Research Center in Hampton, Virginia. Throughout these various centers, work is conducted in-house by NASA's own staff scientists, physicians, and engineers.
In addition, various universities, industries, and other government laboratories

  18. Recent Advances in Solar Sail Propulsion at NASA

    NASA Technical Reports Server (NTRS)

    Johnson, Les; Young, Roy M.; Montgomery, Edward E., IV

    2006-01-01

    Supporting NASA's Science Mission Directorate, the In-Space Propulsion Technology Program is developing solar sail propulsion for use in robotic science and exploration of the solar system. Solar sail propulsion will provide longer on-station operation, increased scientific payload mass fraction, and access to previously inaccessible orbits for multiple potential science missions. Two different 20-meter solar sail systems were produced and successfully completed functional vacuum testing last year in NASA Glenn's Space Power Facility at Plum Brook Station, Ohio. The sails were designed and developed by ATK Space Systems and L'Garde, respectively. These sail systems consist of a central structure with four deployable booms that support the sails. The sail designs are robust enough for deployments in a one-atmosphere, one-gravity environment, and are scalable to much larger solar sails, perhaps as much as 150 meters on a side. In addition, computational modeling and analytical simulations have been performed to assess the scalability of the technology to the large sizes (>150 meters) required for first-generation solar sail missions. Life and space environmental effects testing of sail and component materials is also nearly complete. This paper will summarize recent technology advancements in solar sails and their successful ambient and vacuum testing.

  19. NASA Glenn Research Center Support of the Advanced Stirling Radioisotope Generator Project

    NASA Technical Reports Server (NTRS)

    Wilson, Scott D.; Wong, Wayne A.

    2015-01-01

    A high-efficiency radioisotope power system was being developed for long-duration NASA space science missions. The U.S. Department of Energy (DOE) managed a flight contract with Lockheed Martin Space Systems Company to build Advanced Stirling Radioisotope Generators (ASRGs), with support from NASA Glenn Research Center. DOE initiated termination of that contract in late 2013, primarily due to budget constraints. Sunpower, Inc., held two parallel contracts to produce Advanced Stirling Convertors (ASCs), one with Lockheed Martin to produce ASC-F flight units, and one with Glenn for the production of ASC-E3 engineering unit "pathfinders" that are built to the flight design. In support of those contracts, Glenn provided testing, materials expertise, Government-furnished equipment, inspection capabilities, and related data products to Lockheed Martin and Sunpower. The technical support included material evaluations, component tests, convertor characterization, and technology transfer. Material evaluations and component tests were performed on various ASC components in order to assess potential life-limiting mechanisms and provide data for reliability models. Convertor level tests were conducted to characterize performance under operating conditions that are representative of various mission conditions. Despite termination of the ASRG flight development contract, NASA continues to recognize the importance of high-efficiency ASC power conversion for Radioisotope Power Systems (RPS) and continues investment in the technology, including the continuation of the ASC-E3 contract. This paper describes key Government support for the ASRG project and future tests to be used to provide data for ongoing reliability assessments.

  20. Secured Advanced Federated Environment (SAFE): A NASA Solution for Secure Cross-Organization Collaboration

    NASA Technical Reports Server (NTRS)

    Chow, Edward; Spence, Matthew Chew; Pell, Barney; Stewart, Helen; Korsmeyer, David; Liu, Joseph; Chang, Hsin-Ping; Viernes, Conan; Gogorth, Andre

    2003-01-01

    This paper discusses the challenges and security issues inherent in building complex cross-organizational collaborative projects and software systems within NASA. By applying the design principles of compartmentalization, organizational hierarchy and inter-organizational federation, the Secured Advanced Federated Environment (SAFE) is laying the foundation for a collaborative virtual infrastructure for the NASA community. A key element of SAFE is the Micro Security Domain (MSD) concept, which balances the need to collaborate and the need to enforce enterprise and local security rules. With the SAFE approach, security is an integral component of enterprise software and network design, not an afterthought.

  1. Tools for 3D scientific visualization in computational aerodynamics at NASA Ames Research Center

    NASA Technical Reports Server (NTRS)

    Bancroft, Gordon; Plessel, Todd; Merritt, Fergus; Watson, Val

    1989-01-01

    Hardware, software, and techniques used by the Fluid Dynamics Division (NASA) for performing visualization of computational aerodynamics, which can be applied to the visualization of flow fields from computer simulations of fluid dynamics about the Space Shuttle, are discussed. The three visualization techniques applied (post-processing, tracking, and steering) are described, as well as the post-processing software packages used: PLOT3D, SURF (Surface Modeller), GAS (Graphical Animation System), and FAST (Flow Analysis Software Toolkit). Using post-processing methods, a flow simulation was executed on a supercomputer and, after the simulation was complete, the results were processed for viewing. It is shown that the high-resolution, high-performance three-dimensional workstation combined with specially developed display and animation software provides a good tool for analyzing flow field solutions obtained from supercomputers.

  2. Update on Risk Reduction Activities for a Liquid Advanced Booster for NASA's Space Launch System

    NASA Technical Reports Server (NTRS)

    Crocker, Andrew M.; Doering, Kimberly B.; Meadows, Robert G.; Lariviere, Brian W.; Graham, Jerry B.

    2015-01-01

    The stated goals of NASA's Research Announcement for the Space Launch System (SLS) Advanced Booster Engineering Demonstration and/or Risk Reduction (ABEDRR) are to reduce risks leading to an affordable Advanced Booster that meets the evolved capabilities of SLS, and to enable competition by mitigating targeted Advanced Booster risks to enhance SLS affordability. For NASA's SLS ABEDRR procurement, Dynetics, Inc. and Aerojet Rocketdyne (AR) formed a team to offer a wide-ranging set of risk reduction activities and full-scale, system-level hardware demonstrations that support NASA's ABEDRR goals. To establish a basis for the risk reduction activities, the Dynetics Team developed a booster design that takes advantage of the flight-proven Apollo-Saturn F-1. Using NASA's vehicle assumptions for the SLS Block 2, a two-engine, F-1-based booster design delivers 150 mT (331 klbm) payload to LEO, 20 mT (44 klbm) above NASA's requirements. This enables a low-cost, robust approach to structural design. During the ABEDRR effort, the Dynetics Team has modified proven Apollo-Saturn components and subsystems to improve affordability and reliability (e.g., reduced parts counts, less touch labor, and lower cost manufacturing processes and materials). The team has built hardware to validate production costs and completed tests to demonstrate it can meet performance requirements. State-of-the-art manufacturing and processing techniques have been applied to the heritage F-1, resulting in a low recurring cost engine while retaining the benefits of Apollo-era experience. NASA test facilities have been used to perform low-cost risk-reduction engine testing. In early 2014, NASA and the Dynetics Team agreed to move additional large liquid oxygen/kerosene engine work under Dynetics' ABEDRR contract. Also led by AR, the

  3. Real World Uses For Nagios APIs

    NASA Technical Reports Server (NTRS)

    Singh, Janice

    2014-01-01

    This presentation describes the Nagios 4 APIs, explains how the NASA Advanced Supercomputing Division at Ames Research Center is employing them to upgrade its graphical status display (the HUD), and makes the case for why they are worth trying yourself.
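    Nagios Core 4 exposes monitoring status through JSON CGIs such as `statusjson.cgi`. As a hedged sketch (the payload shape and integer state codes below follow the Nagios 4 JSON CGI conventions, but should be verified against your installation), the helper summarizes host states from a `query=hostlist` style response:

```python
import json

def summarize_hosts(payload: dict) -> dict:
    """Count hosts per state code from a 'hostlist' JSON response."""
    hosts = payload.get("data", {}).get("hostlist", {})
    counts = {}
    for _host, state in hosts.items():
        counts[state] = counts.get(state, 0) + 1
    return counts

# Hand-written sample response; in practice you would fetch something like
# http://<server>/nagios/cgi-bin/statusjson.cgi?query=hostlist
# (state codes assumed here: 2 = UP, 4 = DOWN)
sample = json.loads('{"data": {"hostlist": {"web01": 2, "web02": 2, "db01": 4}}}')
print(summarize_hosts(sample))  # → {2: 2, 4: 1}
```

    A status display like the HUD would poll such an endpoint periodically and render the per-state counts.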

  4. Spatiotemporal modeling of node temperatures in supercomputers

    DOE PAGES

    Storlie, Curtis Byron; Reich, Brian James; Rust, William Newton; ...

    2016-06-10

    Los Alamos National Laboratory (LANL) is home to many large supercomputing clusters. These clusters require an enormous amount of power (~500-2000 kW each), and most of this energy is converted into heat. Thus, cooling the components of the supercomputer becomes a critical and expensive endeavor. Recently, a project was initiated to investigate the effect that changes to the cooling system in a machine room had on three large machines that were housed there. Coupled with this goal was the aim to develop a general good practice for characterizing the effect of cooling changes and monitoring machine node temperatures in this and other machine rooms. This paper focuses on the statistical approach used to quantify the effect that several cooling changes to the room had on the temperatures of the individual nodes of the computers. The largest cluster in the room has 1,600 nodes that run a variety of jobs during general use. Since extreme temperatures are important, a normal distribution plus a generalized Pareto distribution for the upper tail is used to model the marginal distribution, along with a Gaussian process copula to account for spatio-temporal dependence. A Gaussian Markov random field (GMRF) model is used to model the spatial effects on the node temperatures as the cooling changes take place. This model is then used to assess the condition of the node temperatures after each change to the room. The analysis approach was used to uncover the cause of a problematic episode of overheating nodes on one of the supercomputing clusters. Lastly, this same approach can easily be applied to monitor and investigate cooling systems at other data centers, as well.
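    The marginal model described in this record, a normal body plus a generalized Pareto distribution (GPD) for the upper tail, can be sketched for a single node's temperatures with SciPy. This shows only the univariate piece on synthetic data, omitting the paper's Gaussian copula and GMRF spatial components:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Synthetic node temperatures (deg C): normal operating bulk plus
# occasional heavy-tailed hot excursions.
temps = np.concatenate([
    rng.normal(55.0, 4.0, 5000),
    55.0 + rng.pareto(3.0, 100) * 10.0,
])

# Normal fit to the body of the distribution.
mu, sigma = stats.norm.fit(temps)

# GPD fit to exceedances over a high threshold (peaks-over-threshold).
threshold = np.quantile(temps, 0.95)
exceed = temps[temps > threshold] - threshold
shape, _, scale = stats.genpareto.fit(exceed, floc=0.0)

# Tail probability of crossing a (hypothetical) alarm level,
# combining the empirical exceedance rate with the GPD survival function.
alarm = threshold + 15.0
p_tail = (temps > threshold).mean() * stats.genpareto.sf(
    alarm - threshold, shape, loc=0.0, scale=scale)
print(f"P(T > {alarm:.1f} C) ~ {p_tail:.2e}")
```

    The same peaks-over-threshold construction underlies the per-node marginals before the copula ties them together in space and time.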

  5. Status of NASA's Advanced Radioisotope Power Conversion Technology Research and Development

    NASA Technical Reports Server (NTRS)

    Wong, Wayne A.; Anderson, David J.; Tuttle, Karen L.; Tew, Roy C.

    2006-01-01

    NASA's Advanced Radioisotope Power Systems (RPS) development program is funding the advancement of next generation power conversion technologies that will enable future missions whose requirements cannot be met by either the ubiquitous photovoltaic systems or by current RPS. Requirements of advanced radioisotope power systems include high efficiency and high specific power (watts/kilogram) in order to meet mission requirements with less radioisotope fuel and lower mass. Other advanced RPS development goals include long life, reliability, and scalability, so that these systems can meet requirements for a variety of future space applications including continual-operation surface missions, outer-planetary missions, and solar probe missions. This paper provides an update on the Radioisotope Power Conversion Technology Project, which awarded ten Phase I contracts for research and development of a variety of power conversion technologies consisting of Brayton, Stirling, thermoelectrics, and thermophotovoltaics. Three of the contracts continue during the current Phase II in the areas of thermoelectric and Stirling power conversion. The contractors' accomplishments to date, project plans, and status will be summarized.

  6. A History of High-Performance Computing

    NASA Technical Reports Server (NTRS)

    2006-01-01

    Faster than most speedy computers. More powerful than its NASA data-processing predecessors. Able to leap large, mission-related computational problems in a single bound. Clearly, it's neither a bird nor a plane, nor does it need to don a red cape, because it's super in its own way. It's Columbia, NASA's newest supercomputer and one of the world's most powerful production/processing units. Named Columbia to honor the STS-107 Space Shuttle Columbia crewmembers, the new supercomputer is making it possible for NASA to achieve breakthroughs in science and engineering, fulfilling the Agency's missions, and, ultimately, the Vision for Space Exploration. Shortly after being built in 2004, Columbia achieved a benchmark rating of 51.9 teraflop/s on 10,240 processors, making it the world's fastest operational computer at the time of completion. Putting this speed into perspective, 20 years ago, the most powerful computer at NASA's Ames Research Center, home of the NASA Advanced Supercomputing Division (NAS), ran at a speed of about 1 gigaflop (one billion calculations per second). The Columbia supercomputer is 50,000 times faster than this computer and offers a tenfold increase in capacity over the prior system housed at Ames. What's more, Columbia is considered the world's largest Linux-based, shared-memory system. The system is offering immeasurable benefits to society and is the zenith of years of NASA/private industry collaboration that has spawned new generations of commercial, high-speed computing systems.

  7. A Long History of Supercomputing

    ScienceCinema

    Grider, Gary

    2018-06-13

    As part of its national security science mission, Los Alamos National Laboratory and HPC have a long, entwined history dating back to the earliest days of computing. From bringing the first problem to the nation’s first computer to building the first machine to break the petaflop barrier, Los Alamos holds many “firsts” in HPC breakthroughs. Today, supercomputers are integral to stockpile stewardship and the Laboratory continues to work with vendors in developing the future of HPC.

  8. Heart Fibrillation and Parallel Supercomputers

    NASA Technical Reports Server (NTRS)

    Kogan, B. Y.; Karplus, W. J.; Chudin, E. E.

    1997-01-01

    The Luo and Rudy cardiac cell mathematical model is implemented on the parallel supercomputer Cray T3D. The splitting algorithm, combined with a variable time step and an explicit method of integration, provides reasonable solution times and almost perfect scaling for rectilinear wave propagation. The computer simulation makes it possible to observe new phenomena: the break-up of spiral waves caused by intracellular calcium dynamics, and the non-uniformity of the calcium distribution in space during the onset of the spiral wave.
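    The splitting scheme this record describes, advancing the local reaction kinetics and the diffusive coupling in separate explicit substeps, can be sketched in one dimension. The code below substitutes simple FitzHugh-Nagumo kinetics for the Luo-Rudy model, so it illustrates only the numerical structure, not the cardiac physiology:

```python
import numpy as np

def step(v, w, dt, dx, D=1.0, a=0.1, eps=0.01):
    """One split step: explicit reaction update, then explicit diffusion."""
    # Reaction substep (pointwise FitzHugh-Nagumo kinetics, no coupling).
    dv = v * (1.0 - v) * (v - a) - w
    dw = eps * (v - 0.5 * w)
    v = v + dt * dv
    w = w + dt * dw
    # Diffusion substep (1-D Laplacian with no-flux boundaries).
    lap = np.zeros_like(v)
    lap[1:-1] = (v[2:] - 2.0 * v[1:-1] + v[:-2]) / dx**2
    lap[0] = (v[1] - v[0]) / dx**2
    lap[-1] = (v[-2] - v[-1]) / dx**2
    return v + dt * D * lap, w

# Stimulate the left end of a 1-D fiber and integrate forward in time.
n, dx, dt = 200, 0.5, 0.01          # dt*D/dx^2 = 0.04, safely stable
v, w = np.zeros(n), np.zeros(n)
v[:10] = 1.0
for _ in range(2000):
    v, w = step(v, w, dt, dx)
print("max v after 2000 steps:", float(v.max()))
```

    In a production code each substep is trivially parallel across grid points, which is what made the splitting approach scale so well on the T3D.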

  9. A Layered Solution for Supercomputing Storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grider, Gary

    To solve the supercomputing challenge of memory keeping up with processing speed, a team at Los Alamos National Laboratory developed two innovative memory management and storage technologies. Burst buffers peel off data onto flash memory to support the checkpoint/restart paradigm of large simulations. MarFS adds a thin software layer enabling a new tier for campaign storage—based on inexpensive, failure-prone disk drives—between disk drives and tape archives.

  10. Update on Risk Reduction Activities for a Liquid Advanced Booster for NASA's Space Launch System

    NASA Technical Reports Server (NTRS)

    Crocker, Andy; Greene, William D.

    2017-01-01

    Goals of NASA's Advanced Booster Engineering Demonstration and/or Risk Reduction (ABEDRR) are to: (1) reduce risks leading to an affordable Advanced Booster that meets the evolved capabilities of SLS; and (2) enable competition by mitigating targeted Advanced Booster risks to enhance SLS affordability. The SLS Block 1 vehicle is being designed to carry 70 mT to LEO using two five-segment solid rocket boosters (SRBs) similar to those that helped power the Space Shuttle to orbit. The evolved 130 mT payload class rocket requires an advanced booster with more thrust than any existing U.S. liquid- or solid-fueled boosters.

  11. Japanese project aims at supercomputer that executes 10 gflops

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burskey, D.

    1984-05-03

    Dubbed Supercom by its multicompany design team, the decade-long project's goal is an engineering supercomputer that can execute 10 billion floating-point operations/s, about 20 times faster than today's supercomputers. The project, guided by Japan's Ministry of International Trade and Industry (MITI) and the Agency of Industrial Science and Technology, encompasses three parallel research programs, all aimed at some aspect of the supercomputer. One program should lead to superfast logic and memory circuits, another to a system architecture that will afford the best performance, and the last to the software that will ultimately control the computer. The work on logic and memory chips is based on GaAs circuits, Josephson junction devices, and high-electron-mobility transistor structures. The architecture will involve parallel processing.

  12. Acoustic prediction methods for the NASA generalized advanced propeller analysis system (GAPAS)

    NASA Technical Reports Server (NTRS)

    Padula, S. L.; Block, P. J. W.

    1984-01-01

    Classical methods of propeller performance analysis are coupled with state-of-the-art Aircraft Noise Prediction Program (ANOPP) techniques to yield a versatile design tool, the NASA Generalized Advanced Propeller Analysis System (GAPAS), for novel quiet and efficient propellers. ANOPP is a collection of modular specialized programs. GAPAS as a whole addresses blade geometry and aerodynamics, rotor performance and loading, and subsonic propeller noise.

  13. Advanced Thermal Barrier and Environmental Barrier Coating Development at NASA GRC

    NASA Technical Reports Server (NTRS)

    Zhu, Dongming; Robinson, Craig

    2017-01-01

    This presentation summarizes NASA's advanced thermal barrier and environmental barrier coating systems, and the coating performance improvements that have recently been achieved and documented under laboratory simulated rig test conditions. Emphasis has been placed on the toughness and impact resistance enhancements of the low conductivity, defect cluster thermal barrier coating systems. The advances in the next generation environmental barrier coatings for SiC/SiC ceramic matrix composites have also been highlighted, particularly in the design of a new series of oxide-silicate composition systems to be integrated with next generation SiC/SiC turbine engine components for 2700 °F coating applications. Major technical barriers in developing the thermal and environmental barrier coating systems are also described. The performance and model validations in rig-simulated turbine combustion, heat flux, steam, and calcium-magnesium-aluminosilicate (CMAS) environments have supported the current progress in improved temperature capability, environmental stability, and long-term fatigue-environment system durability of the advanced thermal and environmental barrier coating systems.

  14. Supercomputer analysis of sedimentary basins.

    PubMed

    Bethke, C M; Altaner, S P; Harrison, W J; Upson, C

    1988-01-15

    Geological processes of fluid transport and chemical reaction in sedimentary basins have formed many of the earth's energy and mineral resources. These processes can be analyzed on natural time and distance scales with the use of supercomputers. Numerical experiments are presented that give insights to the factors controlling subsurface pressures, temperatures, and reactions; the origin of ores; and the distribution and quality of hydrocarbon reservoirs. The results show that numerical analysis combined with stratigraphic, sea level, and plate tectonic histories provides a powerful tool for studying the evolution of sedimentary basins over geologic time.

  15. Open NASA Earth Exchange (OpenNEX): A Public-Private Partnership for Climate Change Research

    NASA Astrophysics Data System (ADS)

    Nemani, R. R.; Lee, T. J.; Michaelis, A.; Ganguly, S.; Votava, P.

    2014-12-01

    NASA Earth Exchange (NEX) is a data, computing and knowledge collaborative that houses satellite, climate and ancillary data where a community of researchers can come together to share modeling and analysis codes, scientific results, knowledge and expertise on a centralized platform with access to large supercomputing resources. As a part of broadening the community beyond NASA-funded researchers, NASA, through an agreement with Amazon Inc., made available to the public a large collection of Climate and Earth Sciences satellite data. The data, available through the Open NASA Earth Exchange (OpenNEX) platform hosted by the Amazon Web Services (AWS) public cloud, consists of large amounts of global land surface imaging, vegetation conditions, climate observations and climate projections. In addition to the data, users of the OpenNEX platform can also watch lectures from leading experts and learn basic access and use of the available data sets. In order to advance White House initiatives such as Open Data, Big Data and Climate Data and the Climate Action Plan, NASA over the past six months conducted the OpenNEX Challenge. The two-part challenge was designed to engage the public in creating innovative ways to use NASA data and address climate change impacts on economic growth, health and livelihood. Our intention was that the challenges allow citizen scientists to realize the value of NASA data assets and offer NASA new ideas on how to share and use that data. The first "ideation" challenge, which closed on July 31st, attracted over 450 participants consisting of climate scientists, hobbyists, citizen scientists, IT experts and App developers. Winning ideas from the first challenge will be incorporated into the second "builder" challenge, currently targeted to launch mid-August and close by mid-November. The winner(s) will be formally announced at AGU in December of 2014. We will share our experiences and lessons learned over the past year from OpenNEX, a public-private partnership for

  16. Probing the cosmic causes of errors in supercomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    Cosmic rays from outer space are causing errors in supercomputers. The neutrons that pass through the CPU may be causing binary data to flip leading to incorrect calculations. Los Alamos National Laboratory has developed detectors to determine how much data is being corrupted by these cosmic particles.

  17. Overview of NASA's Space Solar Power Technology Advanced Research and Development Program

    NASA Technical Reports Server (NTRS)

    Howell, Joe; Mankins, John C.; Davis, N. Jan (Technical Monitor)

    2001-01-01

    Large solar power satellite (SPS) systems that might provide base load power into terrestrial markets were examined extensively in the 1970s by the US Department of Energy (DOE) and the National Aeronautics and Space Administration (NASA). Following a hiatus of about 15 years, the subject of space solar power (SSP) was reexamined by NASA from 1995 to 1997 in the 'fresh look' study, during 1998 in an SSP 'concept definition study', and during 1999-2000 in the SSP Exploratory Research and Technology (SERT) program. As a result of these efforts, during 2001, NASA has initiated the SSP Technology Advanced Research and Development (STAR-Dev) program. The goal of the STAR-Dev program is to conduct preliminary strategic technology research and development to enable large, multi-megawatt to gigawatt-class space solar power (SSP) systems and wireless power transmission (WPT) for government missions and commercial markets (in-space and terrestrial). Specific objectives include: (1) release a NASA Research Announcement (NRA) for SSP projects; (2) conduct systems studies; (3) develop component technologies; (4) develop ground and flight demonstration systems; and (5) assess and/or initiate partnerships. Accomplishing these objectives will allow informed future decisions regarding further SSP and related research and development investments by both NASA management and prospective external partners. In particular, accomplishing these objectives will also guide further definition of SSP and related technology roadmaps, including performance objectives, resources and schedules, as well as 'multi-purpose' applications (commercial, science, and other government).

  18. NASA Performance Report

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Introduction NASA's mission is to advance and communicate scientific knowledge and understanding of Earth, the solar system, and the universe; to advance human exploration, use, and development of space; and to research, develop, verify, and transfer advanced aeronautics, space, and related technologies. In support of this mission, NASA has a strategic architecture that consists of four Enterprises supported by four Crosscutting Processes. The Strategic Enterprises are NASA's primary mission areas: Earth Science, Space Science, Human Exploration and Development of Space, and Aerospace Technology. NASA's Crosscutting Processes are Manage Strategically, Provide Aerospace Products and Capabilities, Generate Knowledge, and Communicate Knowledge. The implementation of NASA programs, science, and technology research occurs primarily at our Centers. NASA consists of a Headquarters, nine Centers, and the Jet Propulsion Laboratory, as well as several ancillary installations and offices in the United States and abroad. The nine Centers are as follows: (1) Ames Research Center, (2) Dryden Flight Research Center (DFRC), (3) Glenn Research Center (GRC), (4) Goddard Space Flight Center (GSFC), (5) Johnson Space Center, (6) Kennedy Space Center (KSC), (7) Langley Research Center (LaRC), (8) Marshall Space Flight Center (MSFC), and (9) Stennis Space Center (SSC).

  19. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De, K; Jha, S; Klimentov, A

    2016-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited with the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data-taking runs require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the MIRA supercomputer at the Argonne Leadership Computing Facility (ALCF), the supercomputer at the National Research Center Kurchatov Institute, IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCFs' multi-core worker nodes. This

  20. Space Power Architectures for NASA Missions: The Applicability and Benefits of Advanced Power and Electric Propulsion

    NASA Technical Reports Server (NTRS)

    Hoffman, David J.

    2001-01-01

    The relative importance of electrical power systems as compared with other spacecraft bus systems is examined. The quantified benefits of advanced space power architectures for NASA Earth Science, Space Science, and Human Exploration and Development of Space (HEDS) missions is then presented. Advanced space power technologies highlighted include high specific power solar arrays, regenerative fuel cells, Stirling radioisotope power sources, flywheel energy storage and attitude control, lithium ion polymer energy storage and advanced power management and distribution.

  1. Advanced Solar Cell and Array Technology for NASA Deep Space Missions

    NASA Technical Reports Server (NTRS)

    Piszczor, Michael; Benson, Scott; Scheiman, David; Finacannon, Homer; Oleson, Steve; Landis, Geoffrey

    2008-01-01

    A recent study by the NASA Glenn Research Center assessed the feasibility of using photovoltaics (PV) to power spacecraft for outer planetary, deep space missions. While the majority of spacecraft have relied on photovoltaics for primary power, the drastic reduction in solar intensity as the spacecraft moves farther from the sun has either limited the power available (severely curtailing scientific operations) or necessitated the use of nuclear systems. A desire by NASA and the scientific community to explore various bodies in the outer solar system and conduct "long-term" operations using smaller, "lower-cost" spacecraft has renewed interest in exploring the feasibility of using photovoltaics for missions to Jupiter, Saturn and beyond. With recent advances in solar cell performance and continuing development in lightweight, high-power solar array technology, the study determined that photovoltaics is indeed a viable option for many of these missions.

  2. High-Performance Computing: Industry Uses of Supercomputers and High-Speed Networks. Report to Congressional Requesters.

    ERIC Educational Resources Information Center

    General Accounting Office, Washington, DC. Information Management and Technology Div.

    This report was prepared in response to a request for information on supercomputers and high-speed networks from the Senate Committee on Commerce, Science, and Transportation, and the House Committee on Science, Space, and Technology. The following information was requested: (1) examples of how various industries are using supercomputers to…

  3. Access to Supercomputers. Higher Education Panel Report 69.

    ERIC Educational Resources Information Center

    Holmstrom, Engin Inel

    This survey was conducted to provide the National Science Foundation with baseline information on current computer use in the nation's major research universities, including the actual and potential use of supercomputers. Questionnaires were sent to 207 doctorate-granting institutions; after follow-ups, 167 institutions (91% of the institutions…

  4. Advanced Environmental Barrier Coating Development for SiC/SiC Ceramic Matrix Composites: NASA's Perspectives

    NASA Technical Reports Server (NTRS)

    Zhu, Dongming

    2016-01-01

    This presentation reviews NASA environmental barrier coating (EBC) system development programs and the coating material evolutions for protecting SiC/SiC Ceramic Matrix Composites in order to meet next generation engine performance requirements. The presentation focuses on several generations of NASA EBC systems and EBC-CMC component system technologies for SiC/SiC ceramic matrix composite combustors and turbine airfoils, highlighting the temperature capability and durability improvements under simulated engine conditions of high heat flux, high pressure, high velocity, and mechanical creep and fatigue loading. The current EBC development emphasis is placed on advanced NASA 2700F candidate environmental barrier coating systems for SiC/SiC CMCs, and their performance benefits and design limitations in long-term operation and combustion environments. Major technical barriers to developing environmental barrier coating systems are described, along with coating integration with next-generation CMCs to achieve improved environmental stability, erosion and impact resistance, and long-term fatigue-environment system durability. The research and development opportunities for advanced turbine airfoil environmental barrier coating systems utilizing improved compositions, state-of-the-art processing methods, and simulated-environment testing and durability modeling are discussed.

  5. Recent Efforts in Advanced High Frequency Communications at the Glenn Research Center in Support of NASA Mission

    NASA Technical Reports Server (NTRS)

    Miranda, Felix A.

    2015-01-01

    This presentation will discuss research and technology development work at the NASA Glenn Research Center in advanced high frequency communications in support of NASA's mission. An overview of the work conducted in-house and in collaboration with academia, industry, and other government agencies (OGA) in areas such as antenna technology, power amplifiers, radio frequency (RF) wave propagation through Earth's atmosphere, and ultra-sensitive receivers, among others, will be presented. In addition, the role of these and other related RF technologies in enabling NASA's next generation space communications architecture will also be discussed.

  6. The TESS science processing operations center

    NASA Astrophysics Data System (ADS)

    Jenkins, Jon M.; Twicken, Joseph D.; McCauliff, Sean; Campbell, Jennifer; Sanderfer, Dwight; Lung, David; Mansouri-Samani, Masoud; Girouard, Forrest; Tenenbaum, Peter; Klaus, Todd; Smith, Jeffrey C.; Caldwell, Douglas A.; Chacon, A. D.; Henze, Christopher; Heiges, Cory; Latham, David W.; Morgan, Edward; Swade, Daryl; Rinehart, Stephen; Vanderspek, Roland

    2016-08-01

    The Transiting Exoplanet Survey Satellite (TESS) will conduct a search for Earth's closest cousins starting in early 2018 and is expected to discover 1,000 small planets with Rp < 4 R⊕ and measure the masses of at least 50 of these small worlds. The Science Processing Operations Center (SPOC) is being developed at NASA Ames Research Center based on the Kepler science pipeline and will generate calibrated pixels and light curves on the NASA Advanced Supercomputing Division's Pleiades supercomputer. The SPOC will also search for periodic transit events and generate validation products for the transit-like features in the light curves. All TESS SPOC data products will be archived to the Mikulski Archive for Space Telescopes (MAST).
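
    The transit search itself can be illustrated with a toy phase-folding sketch (illustrative only; the actual SPOC pipeline, inherited from Kepler, uses a far more sophisticated detection algorithm): fold the light curve at trial periods and keep the period that produces the deepest folded dip.

```python
# Toy periodic-transit search by phase folding; NOT the SPOC algorithm,
# just the underlying idea: the true period stacks the transits into a
# few deep phase bins, while wrong periods smear them out.

def fold_depth(times, flux, period, n_bins=50):
    """Depth of the faintest phase bin after folding at a trial period."""
    bins = [[] for _ in range(n_bins)]
    for t, f in zip(times, flux):
        phase = (t % period) / period
        bins[min(int(phase * n_bins), n_bins - 1)].append(f)
    median = sorted(flux)[len(flux) // 2]
    return median - min(sum(b) / len(b) for b in bins if b)

def best_period(times, flux, trial_periods):
    """Trial period whose folded light curve shows the deepest dip."""
    return max(trial_periods, key=lambda p: fold_depth(times, flux, p))

if __name__ == "__main__":
    # Synthetic light curve: 1%-deep transits recurring every 2.5 days.
    times = [i * 0.01 for i in range(5000)]
    flux = [0.99 if (t % 2.5) < 0.1 else 1.0 for t in times]
    print(best_period(times, flux, [1.5, 2.0, 2.5, 3.0]))  # -> 2.5
```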

  7. A Layered Solution for Supercomputing Storage

    ScienceCinema

    Grider, Gary

    2018-06-13

    To solve the supercomputing challenge of memory keeping up with processing speed, a team at Los Alamos National Laboratory developed two innovative memory management and storage technologies. Burst buffers peel off data onto flash memory to support the checkpoint/restart paradigm of large simulations. MarFS adds a thin software layer enabling a new campaign storage tier, based on inexpensive, failure-prone disk drives, between the parallel file system and tape archives.

  8. Supercomputer algorithms for efficient linear octree encoding of three-dimensional brain images.

    PubMed

    Berger, S B; Reis, D J

    1995-02-01

    We designed and implemented algorithms for three-dimensional (3-D) reconstruction of brain images from serial sections using two important supercomputer architectures, vector and parallel. These architectures were represented by the Cray YMP and Connection Machine CM-2, respectively. The programs operated on linear octree representations of the brain data sets, and achieved 500-800 times acceleration when compared with a conventional laboratory workstation. As the need for higher resolution data sets increases, supercomputer algorithms may offer a means of performing 3-D reconstruction well above current experimental limits.

  9. NREL's Building-Integrated Supercomputer Provides Heating and Efficient Computing (Fact Sheet)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    2014-09-01

    NREL's Energy Systems Integration Facility (ESIF) is meant to investigate new ways to integrate energy sources so they work together efficiently, and one of the key tools to that investigation, a new supercomputer, is itself a prime example of energy systems integration. NREL teamed with Hewlett-Packard (HP) and Intel to develop the innovative warm-water, liquid-cooled Peregrine supercomputer, which not only operates efficiently but also serves as the primary source of building heat for ESIF offices and laboratories. This innovative high-performance computer (HPC) can perform more than a quadrillion calculations per second as part of the world's most energy-efficient HPC data center.

  10. Designing a connectionist network supercomputer.

    PubMed

    Asanović, K; Beck, J; Feldman, J; Morgan, N; Wawrzynek, J

    1993-12-01

    This paper describes an effort at UC Berkeley and the International Computer Science Institute to develop a supercomputer for artificial neural network applications. Our perspective has been strongly influenced by earlier experiences with the construction and use of a simpler machine. In particular, we have observed Amdahl's Law in action in our designs and those of others. These observations inspire attention to many factors beyond fast multiply-accumulate arithmetic. We describe a number of these factors along with rough expressions for their influence, and then give the application targets, machine goals, and system architecture for the machine we are currently designing.

  11. Building black holes: supercomputer cinema.

    PubMed

    Shapiro, S L; Teukolsky, S A

    1988-07-22

    A new computer code can solve Einstein's equations of general relativity for the dynamical evolution of a relativistic star cluster. The cluster may contain a large number of stars that move in a strong gravitational field at speeds approaching the speed of light. Unstable star clusters undergo catastrophic collapse to black holes. The collapse of an unstable cluster to a supermassive black hole at the center of a galaxy may explain the origin of quasars and active galactic nuclei. By means of a supercomputer simulation and color graphics, the whole process can be viewed in real time on a movie screen.

  12. Simulating the Dynamics of Earth's Core: Using NCCS Supercomputers Speeds Calculations

    NASA Technical Reports Server (NTRS)

    2002-01-01

    If one wanted to study Earth's core directly, one would have to drill through about 1,800 miles of solid rock to reach the liquid core, keeping the tunnel from collapsing under pressures of more than 1 million atmospheres, and then sink an instrument package to the bottom that could operate at 8,000 F with 10,000 tons of force crushing every square inch of its surface. Even then, several of these tunnels would probably be needed to obtain enough data. Faced with difficult or impossible tasks such as these, scientists use other available sources of information - such as seismology, mineralogy, geomagnetism, geodesy, and, above all, physical principles - to derive a model of the core and study it by running computer simulations. One NASA researcher is doing just that on NCCS computers. Physicist and applied mathematician Weijia Kuang, of the Space Geodesy Branch, and his collaborators at Goddard have what he calls the "second-ever" working, usable, self-consistent, fully dynamic, three-dimensional geodynamic model (see "The Geodynamic Theory"). Kuang runs his model simulations on the supercomputers at the NCCS. He and Jeremy Bloxham, of Harvard University, developed the original version, written in Fortran 77, in 1996.

  13. SPHERES tethered formation flight testbed: advancements in enabling NASA's SPECS mission

    NASA Astrophysics Data System (ADS)

    Chung, Soon-Jo; Adams, Danielle; Saenz-Otero, Alvar; Kong, Edmund; Miller, David W.; Leisawitz, David; Lorenzini, Enrico; Sell, Steve

    2006-06-01

    This paper reports on efforts to control a tethered formation flight spacecraft array for NASA's SPECS mission using the SPHERES test-bed developed by the MIT Space Systems Laboratory. Specifically, advances in methodology and experimental results realized since the 2005 SPIE paper are emphasized. These include a new test-bed setup with a reaction wheel assembly, a novel relative attitude measurement system using force torque sensors, and modeling of non-ideal tethers to account for tether vibration modes. The nonlinear equations of motion of multi-vehicle tethered spacecraft with elastic flexible tethers are derived from Lagrange's equations. The controllability analysis indicates that both array resizing and spin-up are fully controllable by the reaction wheels and the tether motor, thereby saving thruster fuel consumption. Based upon this analysis, linear and nonlinear controllers have been successfully implemented on the tethered SPHERES testbed, and tested at the NASA MSFC's flat floor facility using two and three SPHERES configurations.

  14. Test Rack Development for Extended Operation of Advanced Stirling Convertors at NASA Glenn Research Center

    NASA Technical Reports Server (NTRS)

    Dugala, Gina M.

    2009-01-01

    The U.S. Department of Energy (DOE), Lockheed Martin Space Company (LMSC), Sunpower Inc., and NASA Glenn Research Center (GRC) have been developing an Advanced Stirling Radioisotope Generator (ASRG) for use as a power system on space science missions. This generator will make use of free-piston Stirling convertors to achieve higher conversion efficiency than currently available alternatives. NASA GRC's support of ASRG development includes extended operation testing of Advanced Stirling Convertors (ASCs) developed by Sunpower Inc. In the past year, NASA GRC has been building a test facility to support extended operation of a pair of engineering-level ASCs. Operation of the convertors in the test facility provides convertor performance data over an extended period of time. Mechanical support hardware, data acquisition software, and an instrumentation rack were developed to prepare the pair of convertors for continuous extended operation. Short-term tests were performed to gather baseline performance data before extended operation was initiated. These tests included workmanship vibration, insulation thermal loss characterization, low-temperature checkout, and full-power operation. Hardware and software features are implemented to ensure reliability of support systems. This paper discusses the mechanical support hardware, instrumentation rack, data acquisition software, short-term tests, and safety features designed to support continuous unattended operation of a pair of ASCs.

  15. An Application-Based Performance Evaluation of NASA's Nebula Cloud Computing Platform

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Heistand, Steve; Jin, Haoqiang; Chang, Johnny; Hood, Robert T.; Mehrotra, Piyush; Biswas, Rupak

    2012-01-01

    The high performance computing (HPC) community has shown tremendous interest in exploring cloud computing because of its high potential. In this paper, we examine the feasibility, performance, and scalability of production-quality scientific and engineering applications of interest to NASA on NASA's cloud computing platform, called Nebula, hosted at Ames Research Center. This work represents a comprehensive evaluation of Nebula using NUTTCP, HPCC, NPB, I/O, and MPI function benchmarks as well as four applications representative of the NASA HPC workload. Specifically, we compare Nebula performance on some of these benchmarks and applications to that of NASA's Pleiades supercomputer, a traditional HPC system. We also investigate the impact of virtIO and jumbo frames on interconnect performance. Overall results indicate that on Nebula (i) virtIO and jumbo frames improve network bandwidth by a factor of 5x, (ii) there is a significant virtualization layer overhead of about 10% to 25%, (iii) write performance is lower by a factor of 25x, (iv) latency for short MPI messages is very high, and (v) overall performance is 15% to 48% lower than that on Pleiades for NASA HPC applications. We also comment on the usability of the cloud platform.

  16. Third NASA Advanced Composites Technology Conference, volume 1, part 1

    NASA Technical Reports Server (NTRS)

    Davis, John G., Jr. (Compiler); Bohon, Herman L. (Compiler)

    1993-01-01

    This document is a compilation of papers presented at the Third NASA Advanced Composites Technology (ACT) Conference. The ACT Program is a major multi-year research initiative to achieve a national goal of technology readiness before the end of the decade. Conference papers recorded results of research in the ACT Program in the specific areas of automated fiber placement, resin transfer molding, textile preforms, and stitching as these processes influence design, performance, and cost of composites in aircraft structures. Papers sponsored by the Department of Defense on the Design and Manufacturing of Low Cost Composites (DMLCC) are also included in Volume 2 of this document.

  17. Blizzard 2016

    NASA Image and Video Library

    2017-12-08

    A NASA Center for Climate Simulation supercomputer model shows the flow of #Blizzard2016 through Sunday. Learn more here: go.nasa.gov/1WBm547 NASA Goddard Space Flight Center enables NASA's mission through four scientific endeavors: Earth Science, Heliophysics, Solar System Exploration, and Astrophysics. Goddard plays a leading role in NASA's accomplishments by contributing compelling scientific knowledge to advance the Agency's mission.

  18. Demonstration of Cost-Effective, High-Performance Computing at Performance and Reliability Levels Equivalent to a 1994 Vector Supercomputer

    NASA Technical Reports Server (NTRS)

    Babrauckas, Theresa

    2000-01-01

    The Affordable High Performance Computing (AHPC) project demonstrated that high-performance computing based on a distributed network of computer workstations is a cost-effective alternative to vector supercomputers for running CPU- and memory-intensive design and analysis tools. The AHPC project created an integrated system called a Network Supercomputer. By connecting computer workstations through a network and utilizing the workstations when they are idle, the resulting distributed-workstation environment has the same performance and reliability levels as the Cray C90 vector supercomputer at less than 25 percent of the C90 cost. In fact, the cost comparison between a Cray C90 supercomputer and Sun workstations showed that the number of distributed networked workstations equivalent to a C90 costs approximately 8 percent of the C90.

  19. Advances in integrated system health management system technologies: overview of NASA and industry collaborative activities

    NASA Technical Reports Server (NTRS)

    Dixit, Sunil; Brown, Steve; Fijany, Amir; Park, Han; Mackey, Ryan; James, Mark; Baroth, Ed

    2005-01-01

    This paper will describe recent advances in ISHM technologies made through collaboration between NASA and industry. In particular, the paper will focus on past, present, and future technology development and maturation efforts at the Jet Propulsion Laboratory (JPL) and its industry partner, Northrop Grumman Integrated Systems (NGIS).

  20. Multi-Disciplinary Analysis for Future Launch Systems Using NASA's Advanced Engineering Environment (AEE)

    NASA Technical Reports Server (NTRS)

    Monell, D.; Mathias, D.; Reuther, J.; Garn, M.

    2003-01-01

    A new engineering environment constructed for the purposes of analyzing and designing Reusable Launch Vehicles (RLVs) is presented. The new environment has been developed to allow NASA to perform independent analysis and design of emerging RLV architectures and technologies. The new Advanced Engineering Environment (AEE) is both collaborative and distributed. It facilitates integration of the analyses by both vehicle performance disciplines and life-cycle disciplines. Current performance disciplines supported include: weights and sizing, aerodynamics, trajectories, propulsion, structural loads, and CAD-based geometries. Current life-cycle disciplines supported include: DDT&E cost, production costs, operations costs, flight rates, safety and reliability, and system economics. Involving six NASA centers (ARC, LaRC, MSFC, KSC, GRC and JSC), AEE has been tailored to serve as a web-accessed agency-wide source for all of NASA's future launch vehicle systems engineering functions. Thus, it is configured to facilitate (a) data management, (b) automated tool/process integration and execution, and (c) data visualization and presentation. The core components of the integrated framework are a customized PTC Windchill product data management server, a set of RLV analysis and design tools integrated using Phoenix Integration's Model Center, and an XML-based data capture and transfer protocol. The AEE system has seen production use during the Initial Architecture and Technology Review for the NASA 2nd Generation RLV program, and it continues to undergo development and enhancements in support of its current main customer, the NASA Next Generation Launch Technology (NGLT) program.

  1. Development of the advanced life support Systems Integration Research Facility at NASA's Johnson Space Center

    NASA Technical Reports Server (NTRS)

    Tri, Terry O.; Thompson, Clifford D.

    1992-01-01

    Future NASA manned missions to the moon and Mars will require development of robust regenerative life support system technologies which offer high reliability and minimal resupply. To support the development of such systems, early ground-based test facilities will be required to demonstrate integrated, long-duration performance of candidate regenerative air revitalization, water recovery, and thermal management systems. The advanced life support Systems Integration Research Facility (SIRF) is one such test facility currently being developed at NASA's Johnson Space Center. The SIRF, when completed, will accommodate unmanned and subsequently manned integrated testing of advanced regenerative life support technologies at ambient and reduced atmospheric pressures. This paper provides an overview of the SIRF project, a top-level description of test facilities to support the project, conceptual illustrations of integrated test article configurations for each of the three SIRF systems, and a phased project schedule denoting projected activities and milestones through the next several years.

  2. The Third Air Force/NASA Symposium on Recent Advances in Multidisciplinary Analysis and Optimization

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The third Air Force/NASA Symposium on Recent Advances in Multidisciplinary Analysis and Optimization was held on 24-26 Sept. 1990. Sessions were on the following topics: dynamics and controls; multilevel optimization; sensitivity analysis; aerodynamic design software systems; optimization theory; analysis and design; shape optimization; vehicle components; structural optimization; aeroelasticity; artificial intelligence; multidisciplinary optimization; and composites.

  3. Data communication requirements for the advanced NAS network

    NASA Technical Reports Server (NTRS)

    Levin, Eugene; Eaton, C. K.; Young, Bruce

    1986-01-01

    The goal of the Numerical Aerodynamic Simulation (NAS) Program is to provide a powerful computational environment for advanced research and development in aeronautics and related disciplines. The present NAS system consists of a Cray 2 supercomputer connected by a data network to a large mass storage system, to sophisticated local graphics workstations, and by remote communications to researchers throughout the United States. The program plan is to continue acquiring the most powerful supercomputers as they become available. In the 1987/1988 time period it is anticipated that a computer with 4 times the processing speed of a Cray 2 will be obtained and by 1990 an additional supercomputer with 16 times the speed of the Cray 2. The implications of this 20-fold increase in processing power on the data communications requirements are described. The analysis was based on models of the projected workload and system architecture. The results are presented together with the estimates of their sensitivity to assumptions inherent in the models.

  4. Mass Storage System Upgrades at the NASA Center for Computational Sciences

    NASA Technical Reports Server (NTRS)

    Tarshish, Adina; Salmon, Ellen; Macie, Medora; Saletta, Marty

    2000-01-01

    The NASA Center for Computational Sciences (NCCS) provides supercomputing and mass storage services to over 1200 Earth and space scientists. During the past two years, the mass storage system at the NCCS went through a great deal of changes both major and minor. Tape drives, silo control software, and the mass storage software itself were upgraded, and the mass storage platform was upgraded twice. Some of these upgrades were aimed at achieving year-2000 compliance, while others were simply upgrades to newer and better technologies. In this paper we will describe these upgrades.

  5. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    NASA Astrophysics Data System (ADS)

    Klimentov, A.; De, K.; Jha, S.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Wells, J.; Wenaus, T.

    2016-10-01

    The LHC, operating at CERN, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment relies on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing across more than 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data-taking runs require more resources than the Grid can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of the PanDA WMS with supercomputers in the United States, in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility. The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and for local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCFs' multi-core worker nodes. This implementation was tested with a variety of Monte Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments and has been in full production for ATLAS since September 2015. We will present our current accomplishments with running PanDA at supercomputers and demonstrate our ability to use PanDA as a portal independent of the

  6. The TESS Science Processing Operations Center

    NASA Technical Reports Server (NTRS)

    Jenkins, Jon M.; Twicken, Joseph D.; McCauliff, Sean; Campbell, Jennifer; Sanderfer, Dwight; Lung, David; Mansouri-Samani, Masoud; Girouard, Forrest; Tenenbaum, Peter; Klaus, Todd; hide

    2016-01-01

    The Transiting Exoplanet Survey Satellite (TESS) will conduct a search for Earth's closest cousins starting in early 2018 and is expected to discover approximately 1,000 small planets with R(sub p) less than 4 Earth radii and measure the masses of at least 50 of these small worlds. The Science Processing Operations Center (SPOC) is being developed at NASA Ames Research Center based on the Kepler science pipeline and will generate calibrated pixels and light curves on the NASA Advanced Supercomputing Division's Pleiades supercomputer. The SPOC will also search for periodic transit events and generate validation products for the transit-like features in the light curves. All TESS SPOC data products will be archived to the Mikulski Archive for Space Telescopes (MAST).

  7. The Sky's the Limit When Super Students Meet Supercomputers.

    ERIC Educational Resources Information Center

    Trotter, Andrew

    1991-01-01

    In a few select high schools in the U.S., supercomputers are allowing talented students to attempt sophisticated research projects using simultaneous simulations of nature, culture, and technology not achievable by ordinary microcomputers. Schools can get their students online by entering contests and seeking grants and partnerships with…

  8. Supercomputer use in orthopaedic biomechanics research: focus on functional adaptation of bone.

    PubMed

    Hart, R T; Thongpreda, N; Van Buskirk, W C

    1988-01-01

    The authors describe two biomechanical analyses carried out using numerical methods. One is an analysis of the stress and strain in a human mandible, and the other involves modeling the adaptive response of a sheep bone to mechanical loading. The computing environment required for the two types of analyses is discussed. It is shown that a simple stress analysis of a geometrically complex mandible can be accomplished using a minicomputer. However, more sophisticated analyses of the same model with dynamic loading or nonlinear materials would require supercomputer capabilities. A supercomputer is also required for modeling the adaptive response of living bone, even when simple geometric and material models are used.

  9. Reconfiguration of NASA GRC's Vacuum Facility 6 for Testing of Advanced Electric Propulsion System (AEPS) Hardware

    NASA Technical Reports Server (NTRS)

    Peterson, Peter; Kamhawi, Hani; Huang, Wensheng; Yim, John; Haag, Tom; Mackey, Jonathan; McVetta, Mike; Sorrelle, Luke; Tomsik, Tom; Gilligan, Ryan; hide

    2016-01-01

    The NASA Hall Effect Rocket with Magnetic Shielding (HERMeS) 12.5 kilowatt Hall thruster has been the subject of extensive technology maturation in preparation for development into a flight propulsion system. The HERMeS thruster is being developed and tested at NASA GRC and NASA JPL through support of the Space Technology Mission Directorate and is intended to be used as the electric propulsion system on the Power and Propulsion Element of the recently announced Deep Space Gateway. The Advanced Electric Propulsion System (AEPS) contract was awarded to Aerojet Rocketdyne to develop the HERMeS system into a flight system for use by NASA. To address the hardware test needs of the AEPS project, NASA GRC launched an effort to reconfigure Vacuum Facility 6 for high-power electric propulsion testing including upgrades and reconfigurations necessary to conduct performance, plasma plume, and system level integration testing. Results of the verification and validation testing with HERMeS Technology Demonstration Unit (TDU) 1 and TDU-3 Hall thrusters are also included.

  10. Reconfiguration of NASA GRC's Vacuum Facility 6 for Testing of Advanced Electric Propulsion System (AEPS) Hardware

    NASA Technical Reports Server (NTRS)

    Peterson, Peter Y.; Kamhawi, Hani; Huang, Wensheng; Yim, John; Haag, Tom; Mackey, Jonathan; McVetta, Mike; Sorrelle, Luke; Tomsik, Tom; Gilligan, Ryan

    2017-01-01

    The NASA Hall Effect Rocket with Magnetic Shielding (HERMeS) 12.5 kilowatt Hall thruster has been the subject of extensive technology maturation in preparation for development into a flight propulsion system. The HERMeS thruster is being developed and tested at NASA GRC and NASA JPL through support of the Space Technology Mission Directorate and is intended to be used as the electric propulsion system on the Power and Propulsion Element of the recently announced Deep Space Gateway. The Advanced Electric Propulsion System (AEPS) contract was awarded to Aerojet Rocketdyne to develop the HERMeS system into a flight system for use by NASA. To address the hardware test needs of the AEPS project, NASA GRC launched an effort to reconfigure Vacuum Facility 6 for high-power electric propulsion testing including upgrades and reconfigurations necessary to conduct performance, plasma plume, and system level integration testing. Results of the verification and validation testing with HERMeS Technology Demonstration Unit (TDU) 1 and TDU-3 Hall thrusters are also included.

  11. Reconfiguration of NASA GRC's Vacuum Facility 6 for Testing of Advanced Electric Propulsion System (AEPS) Hardware

    NASA Technical Reports Server (NTRS)

    Peterson, Peter Y.; Kamhawi, Hani; Huang, Wensheng; Yim, John T.; Haag, Thomas W.; Mackey, Jonathan A.; McVetta, Michael S.; Sorrelle, Luke T.; Tomsik, Thomas M.; Gilligan, Ryan P.

    2018-01-01

    The NASA Hall Effect Rocket with Magnetic Shielding (HERMeS) 12.5 kW Hall thruster has been the subject of extensive technology maturation in preparation for development into a flight propulsion system. The HERMeS thruster is being developed and tested at NASA GRC and NASA JPL through support of the Space Technology Mission Directorate (STMD) and is intended to be used as the electric propulsion system on the Power and Propulsion Element (PPE) of the recently announced Deep Space Gateway (DSG). The Advanced Electric Propulsion System (AEPS) contract was awarded to Aerojet-Rocketdyne to develop the HERMeS system into a flight system for use by NASA. To address the hardware test needs of the AEPS project, NASA GRC launched an effort to reconfigure Vacuum Facility 6 (VF-6) for high-power electric propulsion testing including upgrades and reconfigurations necessary to conduct performance, plasma plume, and system level integration testing. Results of the verification and validation testing with HERMeS Technology Demonstration Unit (TDU)-1 and TDU-3 Hall thrusters are also included.

  12. NOAA announces significant investment in next generation of supercomputers

    Science.gov Websites

    Today, NOAA announced the next phase in the agency's efforts to increase supercomputing capacity, which in turn will lead to more timely, accurate, and reliable weather forecasts.

  13. An Overview of Advanced Elastomeric Seal Development and Testing Capabilities at NASA Glenn Research Center

    NASA Technical Reports Server (NTRS)

    Dunlap, Patrick H.

    2014-01-01

    NASA is developing advanced space-rated elastomeric seals to support future space exploration missions to low Earth orbit, the Moon, near Earth asteroids, and other destinations. This includes seals for a new docking system and vehicle hatches. These seals must exhibit extremely low leak rates to ensure that astronauts have sufficient breathable air for extended missions. Seal compression loads must be below prescribed limits so as not to overload the mechanisms that compress them, and seal adhesion forces must be low to allow the sealed interface to be separated when required (e.g., during undocking or hatch opening). NASA Glenn Research Center has developed a number of unique test fixtures to measure the leak rates and compression and adhesion loads of candidate seal designs under simulated thermal, vacuum, and engagement conditions. Tests can be performed on full-scale seals with diameters on the order of 50 in., subscale seals that are about 12 in. in diameter, and smaller specimens such as O-rings. Test conditions include temperatures ranging from -238 to 662 F (-150 to 350 C), operational pressure gradients, and seal-on-seal or seal-on-flange mating configurations. Nominal and off-nominal conditions (e.g., incomplete seal compression) can also be simulated. This paper describes the main design features and capabilities of each type of test apparatus and provides an overview of advanced seal development activities at NASA Glenn.

  14. Third NASA Advanced Composites Technology Conference, volume 1, part 2

    NASA Technical Reports Server (NTRS)

    Davis, John G., Jr. (Compiler); Bohon, Herman L. (Compiler)

    1993-01-01

    This document is a compilation of papers presented at the Third NASA Advanced Composites Technology (ACT) Conference held at Long Beach, California, 8-11 June 1992. The ACT Program is a major multi-year research initiative to achieve a national goal of technology readiness before the end of the decade. Conference papers recorded results of research in the ACT Program in the specific areas of automated fiber placement, resin transfer molding, textile preforms, and stitching as these processes influence design, performance, and cost of composites in aircraft structures. Papers sponsored by the Department of Defense on the Design and Manufacturing of Low Cost Composites (DMLCC) are also included in Volume 2 of this document.

  15. Advanced Durability and Damage Tolerance Design and Analysis Methods for Composite Structures: Lessons Learned from NASA Technology Development Programs

    NASA Technical Reports Server (NTRS)

    Harris, Charles E.; Starnes, James H., Jr.; Shuart, Mark J.

    2003-01-01

    Aerospace vehicles are designed to be durable and damage tolerant. Durability is largely an economic life-cycle design consideration whereas damage tolerance directly addresses the structural airworthiness (safety) of the vehicle. However, both durability and damage tolerance design methodologies must address the deleterious effects of changes in material properties and the initiation and growth of microstructural damage that may occur during the service lifetime of the vehicle. Durability and damage tolerance design and certification requirements are addressed for commercial transport aircraft and NASA manned spacecraft systems. The state-of-the-art in advanced design and analysis methods is illustrated by discussing the results of several recently completed NASA technology development programs. These programs include the NASA Advanced Subsonic Technology Program demonstrating technologies for large transport aircraft and the X-33 hypersonic test vehicle demonstrating technologies for a single-stage-to-orbit space launch vehicle.

  16. Group-based variant calling leveraging next-generation supercomputing for large-scale whole-genome sequencing studies.

    PubMed

    Standish, Kristopher A; Carland, Tristan M; Lockwood, Glenn K; Pfeiffer, Wayne; Tatineni, Mahidhar; Huang, C Chris; Lamberth, Sarah; Cherkas, Yauheniya; Brodmerkel, Carrie; Jaeger, Ed; Smith, Lance; Rajagopal, Gunaretnam; Curran, Mark E; Schork, Nicholas J

    2015-09-22

    Next-generation sequencing (NGS) technologies have become much more efficient, allowing whole human genomes to be sequenced faster and cheaper than ever before. However, processing the raw sequence reads associated with NGS technologies requires care and sophistication in order to draw compelling inferences about phenotypic consequences of variation in human genomes. It has been shown that different approaches to variant calling from NGS data can lead to different conclusions. Ensuring appropriate accuracy and quality in variant calling can come at a computational cost. We describe our experience implementing and evaluating a group-based approach to calling variants on large numbers of whole human genomes. We explore the influence of many factors that may impact the accuracy and efficiency of group-based variant calling, including group size, the biogeographical backgrounds of the individuals who have been sequenced, and the computing environment used. We make efficient use of the Gordon supercomputer cluster at the San Diego Supercomputer Center by incorporating job-packing and parallelization considerations into our workflow while calling variants on 437 whole human genomes generated as part of a large association study. We ultimately find that our workflow resulted in high-quality variant calls in a computationally efficient manner. We argue that studies like ours should motivate further investigations combining hardware-oriented advances in computing systems with algorithmic developments to tackle emerging 'big data' problems in biomedical research brought on by the expansion of NGS technologies.
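The abstract mentions job-packing but gives no implementation details; as a hedged illustration of the general idea, a first-fit-decreasing packer that groups per-sample jobs onto nodes by memory requirement (all memory figures hypothetical, not from the study) might look like:

```python
def pack_jobs(job_mem_gb, node_mem_gb):
    """First-fit-decreasing packing of per-sample jobs onto compute nodes.

    job_mem_gb: estimated memory need of each job (hypothetical figures).
    node_mem_gb: memory available on one node.
    Returns a list of nodes, each a list of job indices assigned to it.
    """
    nodes = []  # each entry: [remaining_gb, [job indices]]
    # place the largest jobs first so small ones fill leftover gaps
    order = sorted(range(len(job_mem_gb)), key=lambda i: job_mem_gb[i], reverse=True)
    for i in order:
        need = job_mem_gb[i]
        for node in nodes:
            if node[0] >= need:  # first node with room
                node[0] -= need
                node[1].append(i)
                break
        else:  # no node fits: open a new one
            nodes.append([node_mem_gb - need, [i]])
    return [jobs for _, jobs in nodes]
```

Packing four jobs needing 30, 30, 20, and 10 GB onto 64 GB nodes fills two nodes instead of four, which is the kind of utilization gain job-packing targets.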

  17. Mission oriented R and D and the advancement of technology: The impact of NASA contributions, volume 2

    NASA Technical Reports Server (NTRS)

    Robbins, M. D.; Kelley, J. A.; Elliott, L.

    1972-01-01

    NASA contributions to the advancement of major developments in twelve selected fields of technology are presented. The twelve fields of technology discussed are: (1) cryogenics, (2) electrochemical energy conversion and storage, (3) high-temperature ceramics, (4) high-temperature metals, (5) integrated circuits, (6) internal gas dynamics, (7) materials machining and forming, (8) materials joining, (9) microwave systems, (10) nondestructive testing, (11) simulation, and (12) telemetry. These fields were selected on the basis of both NASA and nonaerospace interest and activity.

  18. Sequence search on a supercomputer.

    PubMed

    Gotoh, O; Tagashira, Y

    1986-01-10

    A set of programs was developed for searching nucleic acid and protein sequence data bases for sequences similar to a given sequence. The programs, written in FORTRAN 77, were optimized for vector processing on a Hitachi S810-20 supercomputer. A search of a 500-residue protein sequence against the entire PIR data base Ver. 1.0 (1) (0.5 M residues) is carried out in a CPU time of 45 sec. About 4 min is required for an exhaustive search of a 1500-base nucleotide sequence against all mammalian sequences (1.2M bases) in Genbank Ver. 29.0. The CPU time is reduced to about a quarter with a faster version.
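The original programs were written in FORTRAN 77 and vectorized for the Hitachi S810-20; purely as a language-neutral illustration of the kind of database screen involved (not the authors' algorithm), a simple shared-k-tuple scorer can be sketched as:

```python
def ktuple_score(query, target, k=3):
    """Count positions in target whose k-tuple also occurs in query.
    A crude similarity screen, cheap enough to run over a whole database."""
    qtuples = {query[i:i + k] for i in range(len(query) - k + 1)}
    return sum(1 for i in range(len(target) - k + 1)
               if target[i:i + k] in qtuples)

def search(query, database, k=3, top=5):
    """Rank database entries by shared k-tuple count with the query."""
    scored = [(name, ktuple_score(query, seq, k))
              for name, seq in database.items()]
    return sorted(scored, key=lambda p: p[1], reverse=True)[:top]
```

The inner loop is a flat pass over each target sequence, which is the shape of computation that vector processors of that era handled well.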

  19. Review of NASA's (National Aeronautics and Space Administration) Numerical Aerodynamic Simulation Program

    NASA Technical Reports Server (NTRS)

    1984-01-01

    NASA has planned a supercomputer for computational fluid dynamics research since the mid-1970's. With the approval of the Numerical Aerodynamic Simulation Program as a FY 1984 new start, Congress requested an assessment of the program's objectives, projected short- and long-term uses, program design, computer architecture, user needs, and handling of proprietary and classified information. Specifically requested was an examination of the merits of proceeding with multiple high speed processor (HSP) systems contrasted with a single high speed processor system. The panel found NASA's objectives and projected uses sound and the projected distribution of users as realistic as possible at this stage. The multiple-HSP, whereby new, more powerful state-of-the-art HSP's would be integrated into a flexible network, was judged to present major advantages over any single HSP system.

  20. Ultrascalable petaflop parallel supercomputer

    DOEpatents

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Chiu, George [Cross River, NY; Cipolla, Thomas M [Katonah, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Hall, Shawn [Pleasantville, NY; Haring, Rudolf A [Cortlandt Manor, NY; Heidelberger, Philip [Cortlandt Manor, NY; Kopcsay, Gerard V [Yorktown Heights, NY; Ohmacht, Martin [Yorktown Heights, NY; Salapura, Valentina [Chappaqua, NY; Sugavanam, Krishnan [Mahopac, NY; Takken, Todd [Brewster, NY

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.
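The Torus network described gives each ASIC node nearest-neighbor links with wraparound in every dimension; a minimal sketch of that addressing scheme (dimensions hypothetical, not from the patent) is:

```python
def torus_neighbors(coord, dims):
    """Nearest neighbors of a node in a 3D torus with wraparound links.
    coord: (x, y, z) position of the node; dims: torus extent per dimension."""
    x, y, z = coord
    dx, dy, dz = dims
    # one neighbor in each direction along each axis; modulo gives wraparound
    return [
        ((x + 1) % dx, y, z), ((x - 1) % dx, y, z),
        (x, (y + 1) % dy, z), (x, (y - 1) % dy, z),
        (x, y, (z + 1) % dz), (x, y, (z - 1) % dz),
    ]
```

Every node thus has exactly six torus neighbors regardless of its position, which is what keeps point-to-point message-passing latency uniform across the machine.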

  1. Virtualizing Super-Computation On-Board Uas

    NASA Astrophysics Data System (ADS)

    Salami, E.; Soler, J. A.; Cuadrado, R.; Barrado, C.; Pastor, E.

    2015-04-01

    Unmanned aerial systems (UAS, also known as UAV, RPAS or drones) have a great potential to support a wide variety of aerial remote sensing applications. Most UAS work by acquiring data using on-board sensors for later post-processing. Some require the data gathered to be downlinked to the ground in real-time. However, depending on the volume of data and the cost of the communications, this latter option is not sustainable in the long term. This paper develops the concept of virtualizing super-computation on-board UAS, as a method to ease the operation by facilitating the downlink of high-level information products instead of raw data. Exploiting recent developments in miniaturized multi-core devices is the way to speed up on-board computation. This hardware must satisfy size, power and weight constraints. Several technologies are appearing with promising results for high-performance computing on unmanned platforms, such as the 36 cores of the TILE-Gx36 by Tilera (now EZchip) or the 64 cores of the Epiphany-IV by Adapteva. The strategy for virtualizing super-computation on-board includes benchmarking for hardware selection, the software architecture, and the communications-aware design. A parallelization strategy is given for the 36-core TILE-Gx36 for a UAS in a fire mission or in similar target-detection applications. Results are presented for payload image-processing algorithms that determine in real time which data snapshots to gather and transfer to the ground according to the needs of the mission, the processing time, and the power consumed.
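The parallelization strategy itself is specific to the TILE-Gx36; as a rough, hypothetical sketch of the general pattern of splitting payload imagery into tiles processed concurrently (the thresholding "detector" below is a stand-in, not the paper's algorithm):

```python
from concurrent.futures import ThreadPoolExecutor

def detect_targets(tile, threshold=200):
    """Toy detector: indices of pixels whose intensity exceeds a threshold.
    A placeholder for the real fire/target-detection payload algorithm."""
    return [i for i, px in enumerate(tile) if px > threshold]

def process_image(tiles, workers=4):
    """Process image tiles concurrently, mimicking one tile per core.
    Results come back in tile order, so detections map back to positions."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(detect_targets, tiles))
```

Only the per-tile detection lists would then be downlinked, rather than the raw imagery, which is the operational point the paper makes.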

  2. NASA EEE Parts and Advanced Interconnect Program (AIP)

    NASA Technical Reports Server (NTRS)

    Gindorf, T.; Garrison, A.

    1996-01-01

    From the Program Objectives: I. Accelerate the readiness of new technologies through development of validation, assessment, and test methods/tools. II. Provide NASA projects with infusion paths for emerging technologies. III. Provide NASA projects with technology selection, application, and validation guidelines for hardware and processes. IV. Disseminate quality assurance, reliability, validation, tools, and availability information to the NASA community.

  3. STAMPS: Software Tool for Automated MRI Post-processing on a supercomputer.

    PubMed

    Bigler, Don C; Aksu, Yaman; Miller, David J; Yang, Qing X

    2009-08-01

    This paper describes a Software Tool for Automated MRI Post-processing (STAMP) of multiple types of brain MRIs on a workstation and for parallel processing on a supercomputer (STAMPS). This software tool enables the automation of nonlinear registration for a large image set and for multiple MR image types. The tool uses standard brain MRI post-processing tools (such as SPM, FSL, and HAMMER) for multiple MR image types in a pipeline fashion. It also contains novel MRI post-processing features. The STAMP image outputs can be used to perform brain analysis using Statistical Parametric Mapping (SPM) or single-/multi-image modality brain analysis using Support Vector Machines (SVMs). Since STAMPS is PBS-based, the supercomputer may be a multi-node computer cluster or one of the latest multi-core computers.
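Since STAMPS is PBS-based, each stage of the post-processing pipeline is submitted as a batch job; a hypothetical generator of a minimal PBS submission script (directive values and names invented for illustration, not taken from STAMPS) could be:

```python
def make_pbs_script(job_name, command, nodes=1, ppn=8, walltime="04:00:00"):
    """Render a minimal PBS batch script for one pipeline stage.
    All resource values here are illustrative placeholders."""
    return "\n".join([
        "#!/bin/bash",
        f"#PBS -N {job_name}",                  # job name shown in the queue
        f"#PBS -l nodes={nodes}:ppn={ppn}",     # node/processor request
        f"#PBS -l walltime={walltime}",         # wall-clock limit
        "cd $PBS_O_WORKDIR",                    # run from the submit directory
        command,
    ])
```

One such script per subject/image pair, handed to `qsub`, is how a workstation pipeline becomes a parallel run on a PBS cluster.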

  4. Toward a Proof of Concept Cloud Framework for Physics Applications on Blue Gene Supercomputers

    NASA Astrophysics Data System (ADS)

    Dreher, Patrick; Scullin, William; Vouk, Mladen

    2015-09-01

    Traditional high performance supercomputers are capable of delivering large sustained state-of-the-art computational resources to physics applications over extended periods of time using batch processing mode operating environments. However, today there is an increasing demand for more complex workflows that involve large fluctuations in the levels of HPC physics computational requirements during the simulations. Some of the workflow components may also require a richer set of operating system features and schedulers than normally found in a batch oriented HPC environment. This paper reports on progress toward a proof of concept design that implements a cloud framework onto BG/P and BG/Q platforms at the Argonne Leadership Computing Facility. The BG/P implementation utilizes the Kittyhawk utility and the BG/Q platform uses an experimental heterogeneous FusedOS operating system environment. Both platforms use the Virtual Computing Laboratory as the cloud computing system embedded within the supercomputer. This proof of concept design allows a cloud to be configured so that it can capitalize on the specialized infrastructure capabilities of a supercomputer and the flexible cloud configurations without resorting to virtualization. Initial testing of the proof of concept system is done using the lattice QCD MILC code. These types of user reconfigurable environments have the potential to deliver experimental schedulers and operating systems within a working HPC environment for physics computations that may be different from the native OS and schedulers on production HPC supercomputers.

  5. Optical clock distribution in supercomputers using polyimide-based waveguides

    NASA Astrophysics Data System (ADS)

    Bihari, Bipin; Gan, Jianhua; Wu, Linghui; Liu, Yujie; Tang, Suning; Chen, Ray T.

    1999-04-01

    Guided-wave optics is a promising way to deliver high-speed clock signals in supercomputers with minimized clock skew. Si-CMOS-compatible polymer-based waveguides for optoelectronic interconnects and packaging have been fabricated and characterized. A 1-to-48 fanout optoelectronic interconnection layer (OIL) structure based on Ultradel 9120/9020 has been constructed for high-speed, massive clock-signal distribution on a Cray T-90 supercomputer board. The OIL employs multimode polymeric channel waveguides in conjunction with surface-normal waveguide output couplers and 1-to-2 splitters. Surface-normal couplers can couple the optical clock signals into and out of the H-tree polyimide waveguides, which facilitates the integration of photodetectors to convert the optical signal to an electrical signal. A 45-degree surface-normal coupler has been integrated at each output end. The measured output coupling efficiency is nearly 100 percent. The output profiles from the 45-degree surface-normal couplers were calculated using the Fresnel approximation; the theoretical result is in good agreement with the experimental result. A total insertion loss of 7.98 dB at 850 nm was measured experimentally.

  6. The NASA Reanalysis Ensemble Service - Advanced Capabilities for Integrated Reanalysis Access and Intercomparison

    NASA Astrophysics Data System (ADS)

    Tamkin, G.; Schnase, J. L.; Duffy, D.; Li, J.; Strong, S.; Thompson, J. H.

    2017-12-01

    NASA's efforts to advance climate analytics-as-a-service are making new capabilities available to the research community: (1) A full-featured Reanalysis Ensemble Service (RES) comprising monthly means data from multiple reanalysis data sets, accessible through an enhanced set of extraction, analytic, arithmetic, and intercomparison operations. The operations are made accessible through NASA's climate data analytics Web services and our client-side Climate Data Services Python library, CDSlib; (2) A cloud-based, high-performance Virtual Real-Time Analytics Testbed supporting a select set of climate variables. This near real-time capability enables advanced technologies like Spark and Hadoop-based MapReduce analytics over native NetCDF files; and (3) A WPS-compliant Web service interface to our climate data analytics service that will enable greater interoperability with next-generation systems such as ESGF. The Reanalysis Ensemble Service includes the following: - A new API that supports full temporal, spatial, and grid-based resolution services with sample queries - A Docker-ready RES application to deploy across platforms - Extended capabilities that enable single- and multiple-reanalysis area average, vertical average, re-gridding, standard deviation, and ensemble averages - Convenient, one-stop shopping for commonly used data products from multiple reanalyses, including basic sub-setting and arithmetic operations (e.g., avg, sum, max, min, var, count, anomaly) - Full support for the MERRA-2 reanalysis dataset in addition to ECMWF ERA-Interim, NCEP CFSR, JMA JRA-55 and NOAA/ESRL 20CR… - A Jupyter notebook-based distribution mechanism designed for client use cases that combines CDSlib documentation with interactive scenarios and personalized project management - Supporting analytic services for NASA GMAO Forward Processing datasets - Basic uncertainty quantification services that combine heterogeneous ensemble products with comparative observational products.
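Among the arithmetic operations listed, the anomaly operation subtracts a climatological mean from each value; a minimal sketch of that arithmetic (illustrating the concept only, not the CDSlib API) is:

```python
def monthly_anomaly(series, months):
    """Anomaly = value minus the climatological mean for its calendar month.

    series: data values (e.g., monthly-mean temperatures).
    months: calendar month (1-12) for each value in series.
    """
    # group values by calendar month to build the climatology
    clim = {}
    for v, m in zip(series, months):
        clim.setdefault(m, []).append(v)
    clim_mean = {m: sum(vs) / len(vs) for m, vs in clim.items()}
    # subtract each month's climatological mean from its values
    return [v - clim_mean[m] for v, m in zip(series, months)]
```

Running this kind of reduction server-side over native NetCDF holdings, rather than downloading the data first, is the core of the analytics-as-a-service idea.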

  7. A New Way of Doing Business: Reusable Launch Vehicle Advanced Thermal Protection Systems Technology Development: NASA Ames and Rockwell International Partnership

    NASA Technical Reports Server (NTRS)

    Carroll, Carol W.; Fleming, Mary; Hogenson, Pete; Green, Michael J.; Rasky, Daniel J. (Technical Monitor)

    1995-01-01

    NASA Ames Research Center and Rockwell International are partners in a Cooperative Agreement (CA) for the development of Thermal Protection Systems (TPS) for the Reusable Launch Vehicle (RLV) Technology Program. This Cooperative Agreement is a 30 month effort focused on transferring NASA innovations to Rockwell and working as partners to advance the state-of-the-art in several TPS areas. The use of a Cooperative Agreement is a new way of doing business for NASA and Industry which eliminates the traditional customer/contractor relationship and replaces it with a NASA/Industry partnership.

  8. The Design and Implementation of NASA's Advanced Flight Computing Module

    NASA Technical Reports Server (NTRS)

    Alkakaj, Leon; Straedy, Richard; Jarvis, Bruce

    1995-01-01

    This paper describes a working flight computer Multichip Module (MCM) developed jointly by JPL and TRW under their respective research programs in a collaborative fashion. The MCM is fabricated by nCHIP and is packaged within a 2 by 4 inch Al package from Coors. This flight computer module is one of three modules under development by NASA's Advanced Flight Computer (AFC) program. Further development of the Mass Memory and the programmable I/O MCM modules will follow. The three building-block modules will then be stacked into a 3D MCM configuration. The mass and volume of the flight computer MCM, at 89 grams and 1.5 cubic inches respectively, represent a major enabling technology for future deep space as well as commercial remote sensing applications.

  9. NASA's Advanced Propulsion Technology Activities for Third Generation Fully Reusable Launch Vehicle Applications

    NASA Technical Reports Server (NTRS)

    Hueter, Uwe

    2000-01-01

    NASA's Office of Aeronautics and Space Transportation Technology (OASTT) established the following three major goals, referred to as "The Three Pillars for Success": Global Civil Aviation, Revolutionary Technology Leaps, and Access to Space. The Advanced Space Transportation Program Office (ASTP) at NASA's Marshall Space Flight Center in Huntsville, Ala. focuses on future space transportation technologies under the "Access to Space" pillar. The Propulsion Projects within ASTP, under the investment area of Spaceliner 100, focus on the earth-to-orbit (ETO) third generation reusable launch vehicle technologies. The goal of Spaceliner 100 is to reduce cost by a factor of 100 and improve safety by a factor of 10,000 over current conditions. The ETO Propulsion Projects in ASTP are actively developing combination/combined-cycle propulsion technologies that utilize airbreathing propulsion during a major portion of the trajectory. System integration, components, materials, and advanced rocket technologies are also being pursued. Over the last several years, one of the main thrusts has been to develop rocket-based combined cycle (RBCC) technologies. The focus has been on conducting ground tests of several engine designs to establish the RBCC flowpath performance. Flowpath testing of three different RBCC engine designs is progressing. Additionally, vehicle system studies are being conducted to assess potential operational space access vehicles utilizing combined-cycle propulsion systems. The design, manufacturing, and ground testing of a scale flight-type engine are planned. The first flight demonstration of an airbreathing combined cycle propulsion system is envisioned around 2005. The paper describes the advanced propulsion technologies being developed under the ETO activities in the ASTP program. Progress, findings, and future activities for the propulsion technologies are discussed.

  10. Computational mechanics analysis tools for parallel-vector supercomputers

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.; Nguyen, Duc T.; Baddourah, Majdi; Qin, Jiangning

    1993-01-01

    Computational algorithms for structural analysis on parallel-vector supercomputers are reviewed. These parallel algorithms, developed by the authors, are for the assembly of structural equations, 'out-of-core' strategies for linear equation solution, massively distributed-memory equation solution, unsymmetric equation solution, general eigensolution, geometrically nonlinear finite element analysis, design sensitivity analysis for structural dynamics, optimization search analysis and domain decomposition. The source code for many of these algorithms is available.

  11. Improving NASA's Multiscale Modeling Framework for Tropical Cyclone Climate Study

    NASA Technical Reports Server (NTRS)

    Shen, Bo-Wen; Nelson, Bron; Cheung, Samson; Tao, Wei-Kuo

    2013-01-01

    One of the current challenges in tropical cyclone (TC) research is how to improve our understanding of TC interannual variability and the impact of climate change on TCs. Recent advances in global modeling, visualization, and supercomputing technologies at NASA show potential for such studies. In this article, the authors discuss recent scalability improvement to the multiscale modeling framework (MMF) that makes it feasible to perform long-term TC-resolving simulations. The MMF consists of the finite-volume general circulation model (fvGCM), supplemented by a copy of the Goddard cumulus ensemble model (GCE) at each of the fvGCM grid points, giving 13,104 GCE copies. The original fvGCM implementation has a 1D data decomposition; the revised MMF implementation retains the 1D decomposition for most of the code, but uses a 2D decomposition for the massive copies of GCEs. Because the vast majority of computation time in the MMF is spent computing the GCEs, this approach can achieve excellent speedup without incurring the cost of modifying the entire code. Intelligent process mapping allows differing numbers of processes to be assigned to each domain for load balancing. The revised parallel implementation shows highly promising scalability, obtaining a nearly 80-fold speedup by increasing the number of cores from 30 to 3,335.
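The intelligent process mapping described assigns differing numbers of processes to each domain for load balancing; as a hedged sketch of one way that idea can work (a proportional, largest-remainder allocation, not necessarily the MMF's actual scheme):

```python
def map_processes(workloads, total_procs):
    """Assign processes to domains in proportion to their workload,
    guaranteeing at least one process per domain (largest-remainder rule)."""
    total = sum(workloads)
    shares = [w * total_procs / total for w in workloads]
    alloc = [max(1, int(s)) for s in shares]  # floor, but never zero
    # hand leftover processes to the domains with the largest remainders
    leftover = total_procs - sum(alloc)
    by_remainder = sorted(range(len(workloads)),
                          key=lambda i: shares[i] - int(shares[i]),
                          reverse=True)
    for i in by_remainder[:max(0, leftover)]:
        alloc[i] += 1
    return alloc
```

With heavily unequal domains (such as the GCE copies dominating the fvGCM host model), proportional assignment keeps all cores comparably busy instead of idling behind the largest domain.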

  12. An Overview of Advanced Elastomeric Seal Development and Testing Capabilities at NASA Glenn Research Center

    NASA Technical Reports Server (NTRS)

    Dunlap, Patrick H., Jr.

    2014-01-01

    NASA is developing advanced space-rated elastomeric seals to support future space exploration missions to low Earth orbit, the Moon, near Earth asteroids, and other destinations. This includes seals for a new docking system and vehicle hatches. These seals must exhibit extremely low leak rates to ensure that astronauts have sufficient breathable air for extended missions. Seal compression loads must be below prescribed limits so as not to overload the mechanisms that compress them, and seal adhesion forces must be low to allow the sealed interface to be separated when required (e.g., during undocking or hatch opening). NASA Glenn Research Center has developed a number of unique test fixtures to measure the leak rates and compression and adhesion loads of candidate seal designs under simulated thermal, vacuum, and engagement conditions. Tests can be performed on full-scale seals with diameters on the order of 50 in., subscale seals that are about 12 in. in diameter, and smaller specimens such as O-rings. Test conditions include temperatures ranging from -238 to 662 F (-150 to 350 C), operational pressure gradients, and seal-on-seal or seal-on-flange mating configurations. Nominal and off-nominal conditions (e.g., incomplete seal compression) can also be simulated. This paper describes the main design features and capabilities of each type of test apparatus and provides an overview of advanced seal development activities at NASA Glenn.

  13. Test Rack Development for Extended Operation of Advanced Stirling Convertors at NASA Glenn Research Center

    NASA Technical Reports Server (NTRS)

    Dugala, Gina M.

    2010-01-01

    The U.S. Department of Energy, Lockheed Martin Space Systems Company, Sunpower Inc., and NASA Glenn Research Center (GRC) have been developing an Advanced Stirling Radioisotope Generator (ASRG) for use as a power system on space science missions. This generator will make use of free-piston Stirling convertors to achieve higher conversion efficiency than with currently available alternatives. One part of NASA GRC's support of ASRG development includes extended operation testing of Advanced Stirling Convertors (ASCs) developed by Sunpower Inc. and GRC. The ASC consists of a free-piston Stirling engine integrated with a linear alternator. NASA GRC has been building test facilities to support extended operation of the ASCs for several years. Operation of the convertors in the test facility provides convertor performance data over an extended period of time. One part of the test facility is the test rack, which provides a means for data collection, convertor control, and safe operation. Over the years, the test rack requirements have changed. The initial ASC test rack utilized an alternating-current (AC) bus for convertor control; the ASRG Engineering Unit (EU) test rack can operate with AC bus control or with an ASC Control Unit (ACU). A new test rack is being developed to support extended operation of the ASC-E2s with higher standards of documentation, component selection, and assembly practices. This paper discusses the differences among the ASC, ASRG EU, and ASC-E2 test racks.

  14. Design of multiple sequence alignment algorithms on parallel, distributed memory supercomputers.

    PubMed

    Church, Philip C; Goscinski, Andrzej; Holt, Kathryn; Inouye, Michael; Ghoting, Amol; Makarychev, Konstantin; Reumann, Matthias

    2011-01-01

    The challenge of comparing two or more genomes that have undergone recombination and substantial amounts of segmental loss and gain has recently been addressed for small numbers of genomes. However, datasets of hundreds of genomes are now common and their sizes will only increase in the future. Multiple sequence alignment of hundreds of genomes remains an intractable problem due to quadratic increases in compute time and memory footprint. To date, most alignment algorithms are designed for commodity clusters without large-scale parallelism. Hence, we propose the design of a multiple sequence alignment algorithm on massively parallel, distributed memory supercomputers to enable research into comparative genomics on large data sets. Following the methodology of the sequential progressiveMauve algorithm, we design data structures including sequences and sorted k-mer lists on the IBM Blue Gene/P supercomputer (BG/P). Preliminary results show that we can reduce the memory footprint so that we can potentially align over 250 bacterial genomes on a single BG/P compute node. We verify our results on a dataset of E. coli, Shigella, and S. pneumoniae genomes. Our implementation returns results matching those of the original algorithm but in 1/2 the time and with 1/4 the memory footprint for scaffold building. In this study, we have laid the basis for multiple sequence alignment of large-scale datasets on a massively parallel, distributed memory supercomputer, thus enabling comparison of hundreds instead of a few genome sequences within reasonable time.
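    The sorted k-mer list named in the abstract can be sketched in a few lines. This is a hypothetical, minimal illustration of the data structure, not the authors' BG/P implementation: each sequence is decomposed into (k-mer, position) pairs, and sorting lets two sequences be compared by a linear merge instead of an all-pairs scan.

    ```python
    def sorted_kmer_list(seq, k):
        """All (k-mer, position) pairs of seq, sorted lexicographically by k-mer."""
        kmers = [(seq[i:i + k], i) for i in range(len(seq) - k + 1)]
        kmers.sort()
        return kmers

    def shared_kmers(a, b, k):
        """One (pos_a, pos_b) pair per k-mer common to both sequences,
        found by merging the two sorted lists in linear time."""
        la, lb = sorted_kmer_list(a, k), sorted_kmer_list(b, k)
        i = j = 0
        hits = []
        while i < len(la) and j < len(lb):
            if la[i][0] == lb[j][0]:
                hits.append((la[i][1], lb[j][1]))
                i += 1
                j += 1
            elif la[i][0] < lb[j][0]:
                i += 1
            else:
                j += 1
        return hits
    ```

    The memory saving in the paper comes from keeping such lists compact and distributed; the merge idea above is the standard seed-finding step that progressive aligners build on.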

  15. Knowledge Acquisition and Management for the NASA Earth Exchange (NEX)

    NASA Astrophysics Data System (ADS)

    Votava, P.; Michaelis, A.; Nemani, R. R.

    2013-12-01

    NASA Earth Exchange (NEX) is a data, computing, and knowledge collaboratory that houses NASA satellite, climate, and ancillary data, where a focused community can come together to share modeling and analysis codes, scientific results, knowledge, and expertise on a centralized platform with access to large supercomputing resources. As more and more projects are executed on NEX, we are increasingly focusing on capturing the knowledge of NEX users and on providing mechanisms for sharing it with the community in order to facilitate reuse and accelerate research. There are many possible knowledge contributions to NEX: a wiki entry on the NEX portal contributed by a developer, information extracted from a publication in an automated way, or a workflow captured during code execution on the supercomputing platform. The goal of the NEX knowledge platform is to capture and organize this information and make it easily accessible to the NEX community and beyond. The knowledge acquisition process consists of three main facets - data and metadata, workflows and processes, and web-based information. Once the knowledge is acquired, it is processed in a number of ways, ranging from custom metadata parsers to entity extraction using natural language processing techniques. The processed information is linked with existing taxonomies and aligned with an internal ontology (which heavily reuses a number of external ontologies). This forms a knowledge graph that can then be used to improve users' search query results as well as provide additional analytics capabilities to the NEX system. Such a knowledge graph will be an important building block in creating a dynamic knowledge base for the NEX community where knowledge is both generated and easily shared.
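    The taxonomy-linking step described above can be sketched as a dictionary match: extracted terms are looked up in a controlled vocabulary and each hit becomes an (entity, concept) edge of the knowledge graph. The vocabulary and concept paths below are illustrative assumptions, not NEX's actual ontology.

    ```python
    # Toy controlled vocabulary: term -> taxonomy concept path (invented here).
    TAXONOMY = {
        "ndvi": "EarthScience/Biosphere/Vegetation",
        "modis": "Platform/Instrument/MODIS",
        "evapotranspiration": "EarthScience/Hydrosphere/Evapotranspiration",
    }

    def link_terms(text, taxonomy=TAXONOMY):
        """Return sorted (term, concept) edges for vocabulary terms found in text."""
        tokens = text.lower().replace(",", " ").split()
        return sorted({(t, taxonomy[t]) for t in tokens if t in taxonomy})

    edges = link_terms("We derived NDVI from MODIS imagery")
    # each edge links a mention in the text to a taxonomy concept
    ```

    A production system would replace the token match with proper NLP entity extraction, but the resulting edges feed the graph the same way.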

  16. 2004 NASA Seal/Secondary Air System Workshop, Volume 1

    NASA Technical Reports Server (NTRS)

    2005-01-01

    The 2004 NASA Seal/Secondary Air System workshop covered the following topics: (1) Overview of NASA's new Exploration Initiative program aimed at exploring the Moon, Mars, and beyond; (2) Overview of the NASA-sponsored Ultra-Efficient Engine Technology (UEET) program; (3) Overview of NASA Glenn's seal program aimed at developing advanced seals for NASA's turbomachinery, space, and reentry vehicle needs; (4) Reviews of NASA prime contractor and university advanced sealing concepts including tip clearance control, test results, experimental facilities, and numerical predictions; and (5) Reviews of material development programs relevant to advanced seals development. The NASA UEET overview illustrated for the reader the importance of advanced technologies, including seals, in meeting future turbine engine system efficiency and emission goals. For example, the NASA UEET program goals include an 8- to 15-percent reduction in fuel burn, a 15-percent reduction in CO2, a 70-percent reduction in NOx, CO, and unburned hydrocarbons, and a 30-dB noise reduction relative to program baselines. The workshop also covered several programs NASA is funding to develop technologies for the Exploration Initiative and advanced reusable space vehicle technologies. NASA plans on developing an advanced docking and berthing system that would permit any vehicle to dock to any on-orbit station or vehicle, as part of NASA's new Exploration Initiative. Plans to develop the necessary mechanism and androgynous seal technologies were reviewed. Seal challenges posed by reusable re-entry space vehicles include high-temperature operation, resiliency at temperature to accommodate gap changes during operation, and durability to meet mission requirements.

  17. High temporal resolution mapping of seismic noise sources using heterogeneous supercomputers

    NASA Astrophysics Data System (ADS)

    Gokhberg, Alexey; Ermert, Laura; Paitz, Patrick; Fichtner, Andreas

    2017-04-01

    Time- and space-dependent distribution of seismic noise sources is becoming a key ingredient of modern real-time monitoring of various geo-systems. Significant interest in seismic noise source maps with high temporal resolution (days) is expected to come from a number of domains, including natural resources exploration, analysis of active earthquake fault zones and volcanoes, as well as geothermal and hydrocarbon reservoir monitoring. Currently, knowledge of noise sources is insufficient for high-resolution subsurface monitoring applications. Near-real-time seismic data, as well as advanced imaging methods to constrain seismic noise sources, have recently become available. These methods are based on the massive cross-correlation of seismic noise records from all available seismic stations in the region of interest and are therefore very computationally intensive. Heterogeneous massively parallel supercomputing systems introduced in recent years combine conventional multi-core CPUs with GPU accelerators and provide an opportunity for a manifold increase in computing performance. Therefore, these systems represent an efficient platform for implementation of a noise source mapping solution. We present the first results of an ongoing research project conducted in collaboration with the Swiss National Supercomputing Centre (CSCS). The project aims at building a service that provides seismic noise source maps for Central Europe with high temporal resolution (days to a few weeks, depending on frequency and data availability). The service is hosted on the CSCS computing infrastructure; all computationally intensive processing is performed on the massively parallel heterogeneous supercomputer "Piz Daint". The solution architecture is based on the Application-as-a-Service concept in order to provide interested external researchers regular access to the noise source maps.
The solution architecture includes the following sub-systems: (1) data acquisition responsible for

  18. Space Transportation and the Computer Industry: Learning from the Past

    NASA Technical Reports Server (NTRS)

    Merriam, M. L.; Rasky, D.

    2002-01-01

    Since the space shuttle began flying in 1981, NASA has made a number of attempts to advance the state of the art in space transportation. In spite of billions of dollars invested, and several concerted attempts, no replacement for the shuttle is expected before 2010. Furthermore, the cost of access to space has dropped very slowly over the last two decades. On the other hand, the same two decades have seen dramatic progress in the computer industry. Computational speeds have increased by about a factor of 1000, and available memory, disk space, and network bandwidth have seen similar increases. At the same time, the cost of computing has dropped by about a factor of 10000. Is the space transportation problem simply harder? Or is there something to be learned from the computer industry? In looking for the answers, this paper reviews the early history of NASA's experience with supercomputers and NASA's visionary course change in supercomputer procurement strategy.

  19. Advanced Lithium-Ion Cell Development for NASA's Constellation Missions

    NASA Technical Reports Server (NTRS)

    Reid, Concha M.; Miller, Thomas B.; Manzo, Michelle A.; Mercer, Carolyn R.

    2008-01-01

    The Energy Storage Project of NASA's Exploration Technology Development Program is developing advanced lithium-ion batteries to meet the requirements for specific Constellation missions. NASA GRC, in conjunction with JPL and JSC, is leading efforts to develop High Energy and Ultra High Energy cells for three primary Constellation customers: Altair, Extravehicular Activities (EVA), and Lunar Surface Systems. The objective of the High Energy cell development is to enable a battery system that can operationally deliver approximately 150 Wh/kg for 2000 cycles. The Ultra High Energy cell development will enable a battery system that can operationally deliver 220 Wh/kg for 200 cycles. To accomplish these goals, cathode, electrolyte, separator, and safety components are being developed for High Energy cells. The Ultra High Energy cell development adds lithium alloy anodes to the component development portfolio to enable much higher cell-level specific energy. The Ultra High Energy cell development is targeted for the ascent stage of Altair, which is the Lunar Lander, and for power for the Portable Life Support System of the EVA Lunar spacesuit. For these missions, mass is highly critical, but only a limited number of cycles are required. The High Energy cell development is primarily targeted for Mobility Systems (rovers) for Lunar Surface Systems; however, due to the high-risk nature of the Ultra High Energy cell development, the High Energy cell will also serve as a backup technology for Altair and EVA. This paper will discuss mission requirements and the goals of the material, component, and cell development efforts in further detail.

  20. NASA's computer science research program

    NASA Technical Reports Server (NTRS)

    Larsen, R. L.

    1983-01-01

    Following a major assessment of NASA's computing technology needs, a new program of computer science research has been initiated by the Agency. The program includes work in concurrent processing, management of large scale scientific databases, software engineering, reliable computing, and artificial intelligence. The program is driven by applications requirements in computational fluid dynamics, image processing, sensor data management, real-time mission control and autonomous systems. It consists of university research, in-house NASA research, and NASA's Research Institute for Advanced Computer Science (RIACS) and Institute for Computer Applications in Science and Engineering (ICASE). The overall goal is to provide the technical foundation within NASA to exploit advancing computing technology in aerospace applications.

  1. Recent advances in active noise and vibration control at NASA Langley Research Center

    NASA Astrophysics Data System (ADS)

    Gibbs, Gary P.; Cabell, Randolph H.; Palumbo, Daniel L.; Silcox, Richard J.; Turner, Travis L.

    2002-11-01

    Over the past 15 years, NASA has investigated the use of active control technology for aircraft interior noise. More recently this work has been supported through the Advanced Subsonic Technology Noise Reduction Program (1994-2001), the High Speed Research Program (1994-1999), and the Quiet Aircraft Technology Program (2000-present). The interior environment is recognized as an important element in flight safety, crew communications and fatigue, as well as passenger comfort. This presentation will overview NASA research in active noise and vibration control relating to interior noise, including: active control of aircraft fuselage sidewall transmission due to turbulent boundary layer or jet noise excitation, active control of interior tones due to propeller excitation of aircraft structures, and adaptive stiffening of structures for noise, vibration, and fatigue control. Work on actuator technologies ranging from piezoelectrics to shape-memory and fluidic actuators will be described, including applications. Control system technology will be included that is experimentally based, real-time, and adaptive.

  2. 2002 NASA Seal/Secondary Air System Workshop. Volume 1

    NASA Technical Reports Server (NTRS)

    Steinetz, Bruce M. (Editor); Hendricks, Robert C. (Editor)

    2003-01-01

    The 2002 NASA Seal/Secondary Air System Workshop covered the following topics: (i) Overview of NASA's perspective of aeronautics and space technology for the 21st century; (ii) Overview of the NASA-sponsored Ultra-Efficient Engine Technology (UEET), Turbine-Based Combined-Cycle (TBCC), and Revolutionary Turbine Accelerator (RTA) programs; (iii) Overview of NASA Glenn's seal program aimed at developing advanced seals for NASA's turbomachinery, space propulsion, and reentry vehicle needs; (iv) Reviews of sealing concepts, test results, experimental facilities, and numerical predictions; and (v) Reviews of material development programs relevant to advanced seals development. The NASA UEET and TBCC/RTA program overviews illustrated for the reader the importance of advanced technologies, including seals, in meeting future turbine engine system efficiency and emission goals. For example, the NASA UEET program goals include an 8- to 15-percent reduction in fuel burn, a 15-percent reduction in CO2, a 70-percent reduction in NOx, CO, and unburned hydrocarbons, and a 30-dB noise reduction relative to program baselines. The workshop also covered several programs NASA is funding to investigate advanced reusable space vehicle technologies (X-38) and advanced space ram/scramjet propulsion systems. Seal challenges posed by these advanced systems include high-temperature operation, resiliency at the operating temperature to accommodate sidewall flexing, and durability to last many missions.

  3. HACC: Simulating sky surveys on state-of-the-art supercomputing architectures

    NASA Astrophysics Data System (ADS)

    Habib, Salman; Pope, Adrian; Finkel, Hal; Frontiere, Nicholas; Heitmann, Katrin; Daniel, David; Fasel, Patricia; Morozov, Vitali; Zagaris, George; Peterka, Tom; Vishwanath, Venkatram; Lukić, Zarija; Sehrish, Saba; Liao, Wei-keng

    2016-01-01

    Current and future surveys of large-scale cosmic structure are associated with a massive and complex datastream to study, characterize, and ultimately understand the physics behind the two major components of the 'Dark Universe', dark energy and dark matter. In addition, the surveys also probe primordial perturbations and carry out fundamental measurements, such as determining the sum of neutrino masses. Large-scale simulations of structure formation in the Universe play a critical role in the interpretation of the data and extraction of the physics of interest. Just as survey instruments continue to grow in size and complexity, so do the supercomputers that enable these simulations. Here we report on HACC (Hardware/Hybrid Accelerated Cosmology Code), a recently developed and evolving cosmology N-body code framework, designed to run efficiently on diverse computing architectures and to scale to millions of cores and beyond. HACC can run on all current supercomputer architectures and supports a variety of programming models and algorithms. It has been demonstrated at scale on Cell- and GPU-accelerated systems, standard multi-core node clusters, and Blue Gene systems. HACC's design allows for ease of portability, and at the same time, high levels of sustained performance on the fastest supercomputers available. We present a description of the design philosophy of HACC, the underlying algorithms and code structure, and outline implementation details for several specific architectures. We show selected accuracy and performance results from some of the largest high resolution cosmological simulations so far performed, including benchmarks evolving more than 3.6 trillion particles.

  4. HACC: Simulating sky surveys on state-of-the-art supercomputing architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habib, Salman; Pope, Adrian; Finkel, Hal

    2016-01-01

    Current and future surveys of large-scale cosmic structure are associated with a massive and complex datastream to study, characterize, and ultimately understand the physics behind the two major components of the 'Dark Universe', dark energy and dark matter. In addition, the surveys also probe primordial perturbations and carry out fundamental measurements, such as determining the sum of neutrino masses. Large-scale simulations of structure formation in the Universe play a critical role in the interpretation of the data and extraction of the physics of interest. Just as survey instruments continue to grow in size and complexity, so do the supercomputers that enable these simulations. Here we report on HACC (Hardware/Hybrid Accelerated Cosmology Code), a recently developed and evolving cosmology N-body code framework, designed to run efficiently on diverse computing architectures and to scale to millions of cores and beyond. HACC can run on all current supercomputer architectures and supports a variety of programming models and algorithms. It has been demonstrated at scale on Cell- and GPU-accelerated systems, standard multi-core node clusters, and Blue Gene systems. HACC's design allows for ease of portability, and at the same time, high levels of sustained performance on the fastest supercomputers available. We present a description of the design philosophy of HACC, the underlying algorithms and code structure, and outline implementation details for several specific architectures. We show selected accuracy and performance results from some of the largest high resolution cosmological simulations so far performed, including benchmarks evolving more than 3.6 trillion particles.

  5. A report documenting the completion of the Los Alamos National Laboratory portion of the ASC level II milestone ""Visualization on the supercomputing platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahrens, James P; Patchett, John M; Lo, Li - Ta

    2011-01-24

    This report provides documentation for the completion of the Los Alamos portion of the ASC Level II 'Visualization on the Supercomputing Platform' milestone. This ASC Level II milestone is a joint milestone between Sandia National Laboratory and Los Alamos National Laboratory. The milestone text is shown in Figure 1 with the Los Alamos portions highlighted in boldfaced text. Visualization and analysis of petascale data is limited by several factors which must be addressed as ACES delivers the Cielo platform. Two primary difficulties are: (1) Performance of interactive rendering, which is the most computationally intensive portion of the visualization process. For terascale platforms, commodity clusters with graphics processors (GPUs) have been used for interactive rendering. For petascale platforms, visualization and rendering may be able to run efficiently on the supercomputer platform itself. (2) I/O bandwidth, which limits how much information can be written to disk. If we simply analyze the sparse information that is saved to disk we miss the opportunity to analyze the rich information produced every timestep by the simulation. For the first issue, we are pursuing in-situ analysis, in which simulations are coupled directly with analysis libraries at runtime. This milestone will evaluate the visualization and rendering performance of current and next generation supercomputers in contrast to GPU-based visualization clusters, and evaluate the performance of common analysis libraries coupled with the simulation that analyze and write data to disk during a running simulation. This milestone will explore, evaluate and advance the maturity level of these technologies and their applicability to problems of interest to the ASC program. In conclusion, we improved CPU-based rendering performance by a factor of 2 to 10 on our tests. In addition, we evaluated CPU- and GPU-based rendering performance. We encourage production visualization experts to consider

  6. Supercomputers Join the Fight against Cancer – U.S. Department of Energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    The Department of Energy has some of the best supercomputers in the world. Now, they’re joining the fight against cancer. Learn about our new partnership with the National Cancer Institute and GlaxoSmithKline Pharmaceuticals.

  7. NASA Advanced Explorations Systems: Advancements in Life Support Systems

    NASA Technical Reports Server (NTRS)

    Shull, Sarah A.; Schneider, Walter F.

    2016-01-01

    The NASA Advanced Exploration Systems (AES) Life Support Systems (LSS) project strives to develop reliable, energy-efficient, and low-mass spacecraft systems to provide environmental control and life support systems (ECLSS) critical to enabling long duration human missions beyond low Earth orbit (LEO). Highly reliable, closed-loop life support systems are among the capabilities required for the longer duration human space exploration missions assessed by NASA's Habitability Architecture Team (HAT). The LSS project is focused on four areas: architecture and systems engineering for life support systems, environmental monitoring, air revitalization, and wastewater processing and water management. Starting with the International Space Station (ISS) LSS systems as a point of departure (where applicable), the mission of the LSS project is threefold: (1) address discrete LSS technology gaps, (2) improve the reliability of LSS systems, and (3) advance LSS systems toward integrated testing on the ISS. This paper summarizes the work being done in the four areas listed above to meet these objectives. Details will be given on the following focus areas: Systems Engineering and Architecture - With so many complex systems comprising life support in space, it is important to understand the overall system requirements to define life support system architectures for different space mission classes, ensure that all the components integrate well together, and verify that testing is as representative of destination environments as possible. Environmental Monitoring - In an enclosed spacecraft that is constantly operating complex machinery for its own basic functionality as well as science experiments and technology demonstrations, it is possible for the environment to become compromised. While current environmental monitors aboard the ISS will alert crew members and mission control if there is an emergency, long-duration environmental monitoring cannot be done in-orbit as current methodologies

  8. Calibrating Building Energy Models Using Supercomputer Trained Machine Learning Agents

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanyal, Jibonananda; New, Joshua Ryan; Edwards, Richard

    2014-01-01

    Building Energy Modeling (BEM) is an approach to model the energy usage in buildings for design and retrofit purposes. EnergyPlus is the flagship Department of Energy software that performs BEM for different types of buildings. The input to EnergyPlus can often extend to a few thousand parameters, which have to be calibrated manually by an expert for realistic energy modeling. This makes calibration challenging and expensive, and building energy modeling infeasible for smaller projects. In this paper, we describe the Autotune research which employs machine learning algorithms to generate agents for the different kinds of standard reference buildings in the U.S. building stock. The parametric space and the variety of building locations and types make this a challenging computational problem necessitating the use of supercomputers. Millions of EnergyPlus simulations are run on supercomputers which are subsequently used to train machine learning algorithms to generate agents. These agents, once created, can then run in a fraction of the time thereby allowing cost-effective calibration of building models.
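    The surrogate idea behind Autotune can be sketched in miniature: a batch of simulator runs becomes training data for a cheap "agent", which is then used to calibrate a parameter against measured energy use without further simulator calls. The toy simulator and the insulation parameter below are invented stand-ins for EnergyPlus and its inputs, not the project's actual models.

    ```python
    def simulator(insulation_r):
        """Toy stand-in for an expensive EnergyPlus run (annual kWh)."""
        return 12000.0 / insulation_r + 500.0

    # 1. Batch of simulations (run on a supercomputer in the real project).
    training = [(r, simulator(r)) for r in range(5, 41)]

    # 2. The "agent": here a nearest-neighbour lookup over those runs;
    #    Autotune trains real machine learning models instead.
    def agent(insulation_r):
        return min(training, key=lambda p: abs(p[0] - insulation_r))[1]

    # 3. Calibration: find the parameter whose predicted usage best
    #    matches a measured value, using only the cheap agent.
    measured_kwh = 1100.0
    best_r = min((r for r, _ in training), key=lambda r: abs(agent(r) - measured_kwh))
    ```

    The point of the design is the cost split: the expensive simulations run once, up front and in bulk, while every later calibration query hits only the agent.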

  9. Scheduling for Parallel Supercomputing: A Historical Perspective of Achievable Utilization

    NASA Technical Reports Server (NTRS)

    Jones, James Patton; Nitzberg, Bill

    1999-01-01

    The NAS facility has operated parallel supercomputers for the past 11 years, including the Intel iPSC/860, Intel Paragon, Thinking Machines CM-5, IBM SP-2, and Cray Origin 2000. Across this wide variety of machine architectures, across a span of 10 years, across a large number of different users, and through thousands of minor configuration and policy changes, the utilization of these machines shows three general trends: (1) scheduling using a naive FIFO first-fit policy results in 40-60% utilization, (2) switching to the more sophisticated dynamic backfilling scheduling algorithm improves utilization by about 15 percentage points (yielding about 70% utilization), and (3) reducing the maximum allowable job size further increases utilization. Most surprising is the consistency of these trends. Over the lifetime of the NAS parallel systems, we made hundreds, perhaps thousands, of small changes to hardware, software, and policy; yet utilization was affected little. In particular, these results show that the goal of achieving near 100% utilization while supporting a real parallel supercomputing workload is unrealistic.
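    The gap between FIFO first-fit and backfilling can be shown with a toy scheduler. This is a hypothetical, minimal sketch of the backfilling idea, not the NAS schedulers themselves: a later job may jump ahead only if it can finish before the blocked head-of-queue job would have started anyway.

    ```python
    def schedule(jobs, nodes, backfill):
        """jobs: list of (width, runtime). Returns {job_index: start_time}."""
        free_at = [0] * nodes              # time each node becomes free
        starts = {}
        queue = list(range(len(jobs)))
        while queue:
            head = queue[0]
            w, rt = jobs[head]
            # earliest time w nodes are simultaneously free
            t_head = sorted(free_at)[w - 1]
            chosen = head
            if backfill:
                # allow a later job to run now if it would not delay the head
                for j in queue[1:]:
                    wj, rtj = jobs[j]
                    t_j = sorted(free_at)[wj - 1]
                    if t_j < t_head and t_j + rtj <= t_head:
                        chosen = j
                        break
            w, rt = jobs[chosen]
            t = sorted(free_at)[w - 1]
            # occupy the w earliest-free nodes until the job completes
            for n in sorted(range(nodes), key=lambda n: free_at[n])[:w]:
                free_at[n] = t + rt
            starts[chosen] = t
            queue.remove(chosen)
        return starts

    # 4-node machine: a 1-node job stuck behind a full-machine job
    jobs = [(2, 10), (4, 10), (1, 5)]
    fifo = schedule(jobs, 4, backfill=False)   # small job waits until t=20
    bf = schedule(jobs, 4, backfill=True)      # small job backfills at t=0
    ```

    In the FIFO run the idle nodes sit empty while the 4-node job waits; backfilling reclaims that hole, which is exactly the roughly 15-point utilization gain the abstract reports at scale.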

  10. Commercialization of Advanced Communications Technology Satellite (ACTS) technology

    NASA Astrophysics Data System (ADS)

    Plecity, Mark S.; Strickler, Walter M.; Bauer, Robert A.

    1996-03-01

    In an on-going effort to maintain United States leadership in communication satellite technology, the National Aeronautics and Space Administration (NASA) led the development of the Advanced Communications Technology Satellite (ACTS). NASA's ACTS program provides industry, academia, and government agencies the opportunity to perform both technology and telecommunication service experiments with a leading-edge communication satellite system. Over 80 organizations are using ACTS as a multi-server test bed to establish communication technologies and services of the future. ACTS was designed to provide demand assigned multiple access (DAMA) digital communications with a minimum switchable circuit bandwidth of 64 Kbps and a maximum channel bandwidth of 900 MHz. It can, therefore, provide service to thin routes as well as connect fiber backbones in supercomputer networks, across oceans, or restore full communications in the event of natural or man-made disaster. Service can also be provided to terrestrial and airborne mobile users. Commercial applications of ACTS technologies include telemedicine, distance education, Department of Defense operations, mobile communications, aeronautical applications, terrestrial applications, and disaster recovery. This paper briefly describes the ACTS system and the enabling technologies employed by ACTS including Ka-band hopping spot beams, on-board routing and switching, and rain fade compensation. When used in conjunction with a time division multiple access (TDMA) architecture, these technologies provide a higher capacity, lower cost satellite system. Furthermore, examples of completed user experiments, future experiments, and plans of organizations to commercialize ACTS technology in their own future offerings will be discussed.

  11. NASA's Long-range Technology Goals

    NASA Technical Reports Server (NTRS)

    1990-01-01

    This document is part of the Final Report performed under contract NASW-3864, titled "NASA's Long-Range Technology Goals". The objectives of the effort were: To identify technologies whose development falls within NASA's capability and purview, and which have high potential for leapfrog advances in the national industrial posture in the 2005-2010 era. To define which of these technologies can also enable quantum jumps in the national space program. To assess mechanisms of interaction between NASA and industry constituencies for realizing the leapfrog technologies. This Volume details the findings pertaining to the advanced space-enabling technologies.

  12. NASA's CSTI Earth-to-Orbit Propulsion Program - On-target technology transfer to advanced space flight programs

    NASA Technical Reports Server (NTRS)

    Escher, William J. D.; Herr, Paul N.; Stephenson, Frank W., Jr.

    1990-01-01

    NASA's Civil Space Technology Initiative encompasses among its major elements the Earth-to-Orbit Propulsion Program (ETOPP) for future launch vehicles, which is budgeted to the extent of $20-30 million/year for the development of essential technologies. ETOPP technologies include, in addition to advanced materials and processes and design/analysis computational tools, the advanced systems-synthesis technologies required for definition of highly reliable LH2 and hydrocarbon fueled rocket engines to be operated at significantly reduced levels of risk and cost relative to the SSME. Attention is given to the technology-transfer services of ETOPP.

  13. NASA's Geospatial Interoperability Office(GIO)Program

    NASA Technical Reports Server (NTRS)

    Weir, Patricia

    2004-01-01

    NASA produces vast amounts of information about the Earth from satellites, supercomputer models, and other sources. These data are most useful when made easily accessible to NASA researchers and scientists, to NASA's partner Federal Agencies, and to society as a whole. A NASA goal is to apply its data for knowledge gain, decision support and understanding of Earth, and other planetary systems. The NASA Earth Science Enterprise (ESE) Geospatial Interoperability Office (GIO) Program leads the development, promotion and implementation of information technology standards that accelerate and expand the delivery of NASA's Earth system science research through integrated systems solutions. Our overarching goal is to make it easy for decision-makers, scientists and citizens to use NASA's science information. NASA's Federal partners currently participate with NASA and one another in the development and implementation of geospatial standards to ensure the most efficient and effective access to one another's data. Through the GIO, NASA participates with its Federal partners in implementing interoperability standards in support of E-Gov and the associated President's Management Agenda initiatives by collaborating on standards development. Through partnerships with government, private industry, education and communities the GIO works towards enhancing the ESE Applications Division in the area of National Applications and decision support systems. The GIO provides geospatial standards leadership within NASA, represents NASA on the Federal Geographic Data Committee (FGDC) Coordination Working Group and chairs the FGDC's Geospatial Applications and Interoperability Working Group (GAI) and supports development and implementation efforts such as Earth Science Gateway (ESG), Space Time Tool Kit and Web Map Services (WMS) Global Mosaic. 
The GIO supports NASA in the collection and dissemination of geospatial interoperability standards needs and progress throughout the agency including

  14. LLMapReduce: Multi-Lingual Map-Reduce for Supercomputing Environments

    DTIC Science & Technology

    2015-11-20

    1990s. Popularized by Google [36] and Apache Hadoop [37], map-reduce has become a staple technology of the ever-growing big data community... The map-reduce parallel programming model has become extremely popular in the big data community. Many big data ...to big data users running on a supercomputer. LLMapReduce dramatically simplifies map-reduce programming by providing simple parallel programming
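    The map-reduce model the record refers to can be illustrated with a minimal word-count sketch. This is a generic illustration of the pattern only, not LLMapReduce's actual API, which the record snippet does not describe:

```python
from collections import defaultdict

def map_phase(documents):
    """Map: emit (word, 1) pairs from each input document."""
    for doc in documents:
        for word in doc.split():
            yield word, 1

def reduce_phase(pairs):
    """Reduce: group pairs by key and sum the counts per word."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["big data big compute", "big supercomputer"]
print(reduce_phase(map_phase(docs)))
# {'big': 3, 'data': 1, 'compute': 1, 'supercomputer': 1}
```

    In a real map-reduce framework the map and reduce phases run in parallel across many nodes, with the grouping step ("shuffle") performed by the runtime rather than a local dictionary.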

  15. NASA Exploration Forum: Human Path to Mars

    NASA Image and Video Library

    2014-04-29

    Jason Crusan, Director of NASA's Advanced Exploration Systems Division, speaks during an Exploration Forum showcasing NASA's human exploration path to Mars in the James E. Webb Auditorium at NASA Headquarters on Tuesday, April 29, 2014. Photo Credit: (NASA/Joel Kowsky)

  16. The decay of NASA's technical culture

    NASA Technical Reports Server (NTRS)

    Mccurdy, Howard E.

    1989-01-01

    Changes in the organizational structure and technical research activities of NASA since 1970 are evaluated. The creation of NASA and the original organizational structure and operation of NASA are reviewed. The relationship between organization and advanced technology is discussed and suggestions are given for ways of maintaining NASA as a high reliability organization.

  17. NASA University Research Centers Technical Advances in Aeronautics, Space Sciences and Technology, Earth Systems Sciences, Global Hydrology, and Education. Volumes 2 and 3

    NASA Technical Reports Server (NTRS)

    Coleman, Tommy L. (Editor); White, Bettie (Editor); Goodman, Steven (Editor); Sakimoto, P. (Editor); Randolph, Lynwood (Editor); Rickman, Doug (Editor)

    1998-01-01

    This volume chronicles the proceedings of the 1998 NASA University Research Centers Technical Conference (URC-TC '98), held on February 22-25, 1998, in Huntsville, Alabama. The University Research Centers (URCs) are multidisciplinary research units established by NASA at 11 Historically Black Colleges and Universities (HBCUs) and 3 Other Minority Universities (OMUs) to conduct research work in areas of interest to NASA. The URC Technical Conferences bring together the faculty members and students from the URCs with representatives from other universities, NASA, and the aerospace industry to discuss recent advances in their fields.

  18. Air Breathing Propulsion Controls and Diagnostics Research at NASA Glenn Under NASA Aeronautics Research Mission Programs

    NASA Technical Reports Server (NTRS)

    Garg, Sanjay

    2015-01-01

    The Intelligent Control and Autonomy Branch (ICA) at NASA (National Aeronautics and Space Administration) Glenn Research Center (GRC) in Cleveland, Ohio, is leading and participating in various projects in partnership with other organizations within GRC and across NASA, the U.S. aerospace industry, and academia to develop advanced controls and health management technologies that will help meet the goals of the NASA Aeronautics Research Mission Directorate (ARMD) Programs. These efforts are primarily under the various projects under the Advanced Air Vehicles Program (AAVP), Airspace Operations and Safety Program (AOSP) and Transformative Aeronautics Concepts Program (TAC). The ICA Branch is focused on advancing the state-of-the-art of aero-engine control and diagnostics technologies to help improve aviation safety, increase efficiency, and enable operation with reduced emissions. This paper describes the various ICA research efforts under the NASA Aeronautics Research Mission Programs with a summary of motivation, background, technical approach, and recent accomplishments for each of the research tasks.

  19. The BlueGene/L supercomputer

    NASA Astrophysics Data System (ADS)

    Bhanota, Gyan; Chen, Dong; Gara, Alan; Vranas, Pavlos

    2003-05-01

    The architecture of the BlueGene/L massively parallel supercomputer is described. Each computing node consists of a single compute ASIC plus 256 MB of external memory. The compute ASIC integrates two 700 MHz PowerPC 440 integer CPU cores, two 2.8 Gflops floating point units, 4 MB of embedded DRAM as cache, a memory controller for the external memory, six 1.4 Gbit/s bi-directional ports for a 3-dimensional torus network connection, three 2.8 Gbit/s bi-directional ports for connecting to a global tree network, and a Gigabit Ethernet for I/O. A total of 65,536 such nodes are connected into a 3-dimensional torus with a geometry of 32×32×64. The total peak performance of the system is 360 Teraflops and the total amount of memory is 16 TeraBytes.
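    The system totals quoted in the abstract follow directly from the per-node figures, and are easy to check; the exact product is 367 Tflops, which the abstract rounds to 360:

```python
# Sanity-check the BlueGene/L system figures quoted in the abstract.
nodes_x, nodes_y, nodes_z = 32, 32, 64      # 3-D torus geometry
nodes = nodes_x * nodes_y * nodes_z
assert nodes == 65536                       # matches the quoted node count

flops_per_fpu = 2.8e9                       # each FPU: 2.8 Gflops peak
fpus_per_node = 2
node_peak = fpus_per_node * flops_per_fpu   # 5.6 Gflops per node
system_peak_tflops = nodes * node_peak / 1e12
print(system_peak_tflops)                   # ~367, quoted as "360 Teraflops"

mem_per_node_gb = 0.25                      # 256 MB external memory per node
total_mem_tb = nodes * mem_per_node_gb / 1024
print(total_mem_tb)                         # 16.0, matching "16 TeraBytes"
```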

  20. NSF Says It Will Support Supercomputer Centers in California and Illinois.

    ERIC Educational Resources Information Center

    Strosnider, Kim; Young, Jeffrey R.

    1997-01-01

    The National Science Foundation will increase support for supercomputer centers at the University of California, San Diego and the University of Illinois, Urbana-Champaign, while leaving unclear the status of the program at Cornell University (New York) and a cooperative Carnegie-Mellon University (Pennsylvania) and University of Pittsburgh…

  1. Supporting Development for the Stirling Radioisotope Generator and Advanced Stirling Technology Development at NASA Glenn

    NASA Technical Reports Server (NTRS)

    Thieme, Lanny G.; Schreiber, Jeffrey G.

    2005-01-01

    A high-efficiency, 110-W(sub e) (watts electric) Stirling Radioisotope Generator (SRG110) for possible use on future NASA Space Science missions is being developed by the Department of Energy, Lockheed Martin, Stirling Technology Company (STC), and NASA Glenn Research Center (GRC). Potential mission use includes providing spacecraft onboard electric power for deep space missions and power for unmanned Mars rovers. GRC is conducting an in-house supporting technology project to assist in SRG110 development. One-, three-, and six-month heater head structural benchmark tests have been completed in support of a heater head life assessment. Testing is underway to evaluate the key epoxy bond of the permanent magnets to the linear alternator stator lamination stack. GRC has completed over 10,000 hours of extended duration testing of the Stirling convertors for the SRG110, and a three-year test of two Stirling convertors in a thermal vacuum environment will be starting shortly. GRC is also developing advanced technology for Stirling convertors, aimed at substantially improving the specific power and efficiency of the convertor and the overall generator. Sunpower, Inc. has begun the development of a lightweight Stirling convertor, under a NASA Research Announcement (NRA) award, that has the potential to double the system specific power to about 8 W(sub e) per kilogram. GRC has performed random vibration testing of a lower-power version of this convertor to evaluate robustness for surviving launch vibrations. STC has also completed the initial design of a lightweight convertor. The status of the development of a multi-dimensional computational fluid dynamics code and of high-temperature materials work on advanced superalloys, refractory metal alloys, and ceramics is also discussed.

  2. Efficient development of memory bounded geo-applications to scale on modern supercomputers

    NASA Astrophysics Data System (ADS)

    Räss, Ludovic; Omlin, Samuel; Licul, Aleksandar; Podladchikov, Yuri; Herman, Frédéric

    2016-04-01

    Numerical modeling is a key tool in the geosciences. The current challenge is to solve problems that are multi-physics and for which the length scale and the place of occurrence might not be known in advance. Moreover, the spatial extent of the investigated domain can vary strongly in size, ranging from millimeters for reactive transport to kilometers for glacier erosion dynamics. An efficient way to proceed is to develop simple but robust algorithms that perform well and scale on modern supercomputers, thereby permitting very high-resolution simulations. We propose an efficient approach to solving memory-bounded real-world applications on modern supercomputer architectures. We optimize the software to run on our newly acquired state-of-the-art GPU cluster "octopus". Our approach shows promising preliminary results on important geodynamic and geomechanical problems: we have developed a Stokes solver for glacier flow and a poromechanical solver including complex rheologies for nonlinear waves in stressed porous rocks. We solve the system of partial differential equations on a regular Cartesian grid and use an iterative finite difference scheme with preconditioning of the residuals. The MPI communication happens only locally (point-to-point); this method is known to scale linearly by construction. The "octopus" GPU cluster, which we use for the computations, has been designed to achieve maximal data transfer throughput at minimal hardware cost. It is composed of twenty compute nodes, each hosting four Nvidia Titan X GPU accelerators. These high-density nodes are interconnected with a parallel (dual-rail) FDR InfiniBand network. Our efforts show promising preliminary results for the different physics investigated. The glacier flow solver achieves good accuracy in the relevant benchmarks, and the coupled poromechanical solver makes it possible to explain previously unresolvable focused fluid flow as a natural outcome of the porosity setup.
In both cases
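    The iterative finite-difference approach this record describes, in which a field is updated from purely local (stencil) residuals until convergence, can be sketched in its simplest serial form. The example below is a generic pseudo-transient iteration on a 1-D Laplace problem, not the authors' solver; the grid size and damping factor are illustrative choices:

```python
import numpy as np

# Solve d2u/dx2 = 0 with u(0)=0, u(1)=1 on a regular grid by repeatedly
# adding a fraction of the local residual to each interior point. Because
# each update only touches nearest neighbors, the same scheme parallelizes
# with point-to-point halo exchanges, as the abstract notes.
n, dx = 101, 1.0 / 100
u = np.zeros(n)
u[-1] = 1.0                      # Dirichlet boundary conditions
tau = 0.4 * dx**2                # pseudo-time step (stability-limited)

for it in range(200000):
    resid = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2   # local residual
    u[1:-1] += tau * resid                            # pointwise update
    if np.abs(resid).max() < 1e-8:
        break

# the iteration converges to the exact linear profile u(x) = x
assert np.allclose(u, np.linspace(0, 1, n), atol=1e-4)
```

    Production solvers of this kind accelerate convergence with residual preconditioning or damping terms; the fixed `tau` here is the bare-bones version.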

  3. 2005 NASA Seal/Secondary Air System Workshop, Volume 1

    NASA Technical Reports Server (NTRS)

    Steinetz, Bruce M. (Editor); Hendricks, Robert C. (Editor)

    2006-01-01

    The 2005 NASA Seal/Secondary Air System workshop covered the following topics: (i) Overview of NASA's new Exploration Initiative program aimed at exploring the Moon, Mars, and beyond; (ii) Overview of the NASA-sponsored Propulsion 21 Project; (iii) Overview of NASA Glenn's seal project aimed at developing advanced seals for NASA's turbomachinery, space, and reentry vehicle needs; (iv) Reviews of NASA prime contractor, vendor, and university advanced sealing concepts including tip clearance control, test results, experimental facilities, and numerical predictions; and (v) Reviews of material development programs relevant to advanced seals development. Turbine engine studies have shown that reducing high-pressure turbine (HPT) blade tip clearances will reduce fuel burn, lower emissions, retain exhaust gas temperature margin, and increase range. Several organizations presented development efforts aimed at developing faster clearance control systems and associated technology to meet future engine needs. The workshop also covered several programs NASA is funding to develop technologies for the Exploration Initiative and advanced reusable space vehicle technologies. NASA plans on developing an advanced docking and berthing system that would permit any vehicle to dock to any on-orbit station or vehicle. Seal technical challenges (including space environments, temperature variation, and seal-on-seal operation) as well as plans to develop the necessary "androgynous" seal technologies were reviewed. Researchers also reviewed tests completed for the shuttle main landing gear door seals.

  4. NASA-universities relationships in aero/space engineering: A review of NASA's program

    NASA Technical Reports Server (NTRS)

    1985-01-01

    NASA is concerned about the health of aerospace engineering departments at U.S. universities. The number of advanced degrees in aerospace engineering has declined. There is concern that universities' facilities, research equipment, and instrumentation may be aging or outmoded and therefore affect the quality of research and education. NASA requested that the National Research Council's Aeronautics and Space Engineering Board (ASEB) review NASA's support of universities and make recommendations to improve the program's effectiveness.

  5. Advanced Control Surface Seal Development at NASA GRC for Future Space Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Dunlap, Patrick H., Jr.; Steinetz, Bruce M.; DeMange, Jeffrey J.

    2003-01-01

    NASA's Glenn Research Center (GRC) is developing advanced control surface seal technologies for future space launch vehicles as part of the Next Generation Launch Technology (NGLT) project. New resilient seal designs are currently being fabricated, and high-temperature seal preloading devices are being developed as a means of improving seal resiliency. GRC has designed several new test rigs to simulate the temperatures, pressures, and scrubbing conditions that seals would have to endure during service. A hot compression test rig and a hot scrub test rig have been developed to perform tests at temperatures up to 3000 °F. Another new test rig allows simultaneous seal flow and scrub tests at room temperature to evaluate changes in seal performance with scrubbing. These test rigs will be used to evaluate the new seal designs. The group is also performing tests on advanced TPS seal concepts for Boeing using these new test facilities.

  6. The NASA-JPL advanced propulsion program

    NASA Technical Reports Server (NTRS)

    Frisbee, Robert H.

    1994-01-01

    The NASA Advanced Propulsion Concepts (APC) program at the Jet Propulsion Laboratory (JPL) consists of two main areas: The first involves cooperative modeling and research activities between JPL and various universities and industry; the second involves research at universities and industry that is directly supported by JPL. The cooperative research program consists of mission studies, research and development of ion engine technology using C-60 (Buckminsterfullerene) propellant, and research and development of lithium-propellant Lorentz-force accelerator (LFA) engine technology. The university/industry-supported research includes research (modeling and proof-of-concept experiments) in advanced, long-life electric propulsion, and in fusion propulsion. These propulsion concepts were selected primarily to cover a range of applications from near-term to far-term missions. For example, the long-lived pulsed-xenon thruster research that JPL is supporting at Princeton University addresses the near-term need for efficient, long-life attitude control and station-keeping propulsion for Earth-orbiting spacecraft. The C-60-propellant ion engine has the potential for good efficiency in a relatively low specific impulse (Isp) range (10,000 - 30,000 m/s) that is optimum for relatively fast (less than 100 day) cis-lunar (LEO/GEO/Lunar) missions employing near-term, high-specific mass electric propulsion vehicles. Research and modeling on the C-60-ion engine are currently being performed by JPL (engine demonstration), Caltech (C-60 properties), MIT (plume modeling), and USC (diagnostics). The Li-propellant LFA engine also has good efficiency in the modest Isp range (40,000 - 50,000 m/s) that is optimum for near-to-mid-term megawatt-class solar- and nuclear-electric propulsion vehicles used for Mars missions transporting cargo (in support of a piloted mission). Research and modeling on the Li-LFA engine are currently being performed by JPL (cathode development), Moscow Aviation

  7. 2008 NASA Seal/Secondary Air System Workshop

    NASA Technical Reports Server (NTRS)

    Steinetz, Bruce M. (Editor); Hendricks, Robert C. (Editor); Delgado, Irebert R. (Editor)

    2009-01-01

    The 2008 NASA Seal/Secondary Air System Workshop covered the following topics: (i) Overview of NASA's new Orion project aimed at developing a new spacecraft that will ferry astronauts to the International Space Station, the Moon, Mars, and beyond; (ii) Overview of NASA's fundamental aeronautics technology project; (iii) Overview of NASA Glenn's seal project aimed at developing advanced seals for NASA's turbomachinery, space, and reentry vehicle needs; (iv) Reviews of NASA prime contractor, vendor, and university advanced sealing concepts, test results, experimental facilities, and numerical predictions; and (v) Reviews of material development programs relevant to advanced seals development. Turbine engine studies have shown that reducing seal leakage as well as high-pressure turbine (HPT) blade tip clearances will reduce fuel burn, lower emissions, retain exhaust gas temperature margin, and increase range. Turbine seal development topics covered include a method for fast-acting HPT blade tip clearance control, noncontacting low-leakage seals, intershaft seals, and a review of engine seal performance requirements for current and future Army engine platforms.

  8. An Overview of NASA's Intelligent Systems Program

    NASA Technical Reports Server (NTRS)

    Cooke, Daniel E.; Norvig, Peter (Technical Monitor)

    2001-01-01

    NASA and the computer science research community are poised to enter a critical era, one in which, it seems, each needs the other. Market forces, driven by the immediate economic viability of computer science research results, place computer science in a relatively novel position. These forces affect how research is done and could, in the worst case, drive the field away from significant innovation, opting instead for incremental advances that bring greater stability in the marketplace. NASA, however, requires significant advances in computer science research in order to accomplish the exploration and science agenda it has set out for itself. NASA may indeed be poised to advance computer science research in this century much the way it advanced aeronautics research in the last.

  9. NASA Applications of Molecular Nanotechnology

    NASA Technical Reports Server (NTRS)

    Globus, Al; Bailey, David; Han, Jie; Jaffe, Richard; Levit, Creon; Merkle, Ralph; Srivastava, Deepak

    1998-01-01

    Laboratories throughout the world are rapidly gaining atomically precise control over matter. As this control extends to an ever wider variety of materials, processes and devices, opportunities for applications relevant to NASA's missions will be created. This document surveys a number of future molecular nanotechnology capabilities of aerospace interest. Computer applications, launch vehicle improvements, and active materials appear to be of particular interest. We also list a number of applications for each of NASA's enterprises. If advanced molecular nanotechnology can be developed, almost all of NASA's endeavors will be radically improved. In particular, a sufficiently advanced molecular nanotechnology can arguably bring large scale space colonization within our grasp.

  10. The NASA "PERS" Program: Solid Polymer Electrolyte Development for Advanced Lithium-Based Batteries

    NASA Technical Reports Server (NTRS)

    Baldwin, Richard S.; Bennett, William R.

    2007-01-01

    In fiscal year 2000, the National Aeronautics and Space Administration (NASA) and the Air Force Research Laboratory (AFRL) established a collaborative effort to support the development of polymer-based lithium cell chemistries and battery technologies to address the next generation of aerospace applications and mission needs. The ultimate objective of this development program, referred to as the Polymer Energy Rechargeable System (PERS), was to establish a world-class technology capability and U.S. leadership in polymer-based battery technology for aerospace applications. Programmatically, the PERS initiative exploited both interagency collaborations to address common technology and engineering issues and the active participation of academia and private industry. The initial program phases focused on R&D activities to address the critical technical issues and challenges at the cell level. Out of a total of 38 proposals received in response to a NASA Research Announcement (NRA) solicitation, 18 proposals (13 contracts and 5 grants) were selected for initial award to address these technical challenges. Brief summaries of the technical approaches, results, and accomplishments of the PERS Program development efforts are presented. With Agency support provided through FY 2004, the PERS Program efforts were concluded in 2005, as internal reorganizations and funding cuts resulted in shifting programmatic priorities within NASA. Technically, the PERS Program participants explored, to various degrees over the lifetime of the formal program, a variety of conceptual approaches for developing and demonstrating the performance of a viable advanced solid polymer electrolyte possessing the desired attributes, with several participants addressing all components of an integrated cell configuration.
Programmatically, the NASA PERS Program was very successful, even though the very challenging technical goals for achieving a viable solid polymer electrolyte material or

  11. Supercomputer Simulations Help Develop New Approach to Fight Antibiotic Resistance

    ScienceCinema

    Zgurskaya, Helen; Smith, Jeremy

    2018-06-13

    ORNL leveraged powerful supercomputing to support research led by University of Oklahoma scientists to identify chemicals that seek out and disrupt bacterial proteins called efflux pumps, known to be a major cause of antibiotic resistance. By running simulations on Titan, the team selected molecules most likely to target and potentially disable the assembly of efflux pumps found in E. coli bacteria cells.

  12. Accomplishments of the Advanced Reusable Technologies (ART) RBCC Project at NASA/Marshall Space Flight Center

    NASA Technical Reports Server (NTRS)

    Nelson, Karl W.; McArthur, J. Craig (Technical Monitor)

    2001-01-01

    The focus of the NASA / Marshall Space Flight Center (MSFC) Advanced Reusable Technologies (ART) project is to advance and develop Rocket-Based Combined-Cycle (RBCC) technologies. The ART project began in 1996 as part of the Advanced Space Transportation Program (ASTP). The project is composed of several activities including RBCC engine ground testing, tool development, vehicle / mission studies, and component testing / development. The major contractors involved in the ART project are Aerojet and Rocketdyne. A large database of RBCC ground test data was generated for the air-augmented rocket (AAR), ramjet, scramjet, and ascent rocket modes of operation for both the Aerojet and Rocketdyne concepts. Transition between consecutive modes was also demonstrated as well as trajectory simulation. The Rocketdyne freejet tests were conducted at GASL in the Flight Acceleration Simulation Test (FAST) facility. During a single test, the FAST facility is capable of simulating both the enthalpy and aerodynamic conditions over a range of Mach numbers in a flight trajectory. Aerojet performed freejet testing in the Pebble Bed facility at GASL as well as direct-connect testing at GASL. Aerojet also performed sea-level static (SLS) testing at the Aerojet A-Zone facility in Sacramento, CA. Several flight-type flowpath components were developed under the ART project. Aerojet designed and fabricated ceramic scramjet injectors. The structural design of the injectors will be tested in a simulated scramjet environment where thermal effects and performance will be assessed. Rocketdyne will be replacing the cooled combustor in the A5 rig with a flight-weight combustor that is near completion. Aerojet's formed duct panel is currently being fabricated and will be tested in the SLS rig in Aerojet's A-Zone facility. Aerojet has already successfully tested a cooled cowl panel in the same facility. In addition to MSFC, other NASA centers have contributed to the ART project as well. 
Inlet testing

  13. Supercomputing '91; Proceedings of the 4th Annual Conference on High Performance Computing, Albuquerque, NM, Nov. 18-22, 1991

    NASA Technical Reports Server (NTRS)

    1991-01-01

    Various papers on supercomputing are presented. The general topics addressed include: program analysis/data dependence, memory access, distributed memory code generation, numerical algorithms, supercomputer benchmarks, latency tolerance, parallel programming, applications, processor design, networks, performance tools, mapping and scheduling, characterization affecting performance, parallelism packaging, computing climate change, combinatorial algorithms, hardware and software performance issues, system issues. (No individual items are abstracted in this volume)

  14. NASA/NOAA Earth Science Electronic Theater 1999. Earth Science Observations, Analysis and Visualization: Roots in the 60s: Vision for the Next Millennium

    NASA Technical Reports Server (NTRS)

    Hasler, Fritz

    1999-01-01

    The E-Theater presents visualizations spanning the period from the original Suomi/Hasler animations of the first ATS-1 GEO weather satellite images in 1966 to the latest 1999 NASA Earth Science Vision for the next 25 years. Hot off the SGI Onyx graphics supercomputer are NASA's visualizations of Hurricanes Mitch, Georges, Fran, and Linda. These storms have recently been featured on the covers of National Geographic, Time, Newsweek, and Popular Science. Highlights will be shown from the NASA hurricane visualization resource video tape, in standard and HDTV formats, that has been used repeatedly this season on national and international network TV. Results will be presented from a new paper on automatic wind measurements in Hurricane Luis from 1-min GOES images that appeared in the November BAMS.

  15. Design applications for supercomputers

    NASA Technical Reports Server (NTRS)

    Studerus, C. J.

    1987-01-01

    The complexity of codes for solutions of real aerodynamic problems has progressed from simple two-dimensional models to three-dimensional inviscid and viscous models. As the algorithms used in the codes increased in accuracy, speed, and robustness, the codes were steadily incorporated into standard design processes. The highly sophisticated codes, which provide solutions to the truly complex flows, require computers with large memory and high computational speed. The advent of high-speed supercomputers has made the solution of these complex flows more practical, permitting the introduction of the codes into the design system at an earlier stage. Results are presented for several codes that either have already been introduced into the design process or are rapidly becoming part of it. The codes fall into the areas of turbomachinery aerodynamics and hypersonic propulsion. In the former category, results are presented for three-dimensional inviscid and viscous flows through nozzle and unducted fan bladerows. In the latter category, results are presented for two-dimensional inviscid and viscous flows for hypersonic vehicle forebodies and engine inlets.

  16. Multi-petascale highly efficient parallel supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.

    A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaflop scale includes node architectures based upon System-on-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC). The ASIC nodes are interconnected by a five-dimensional torus network that maximizes the throughput of packet communications between nodes and minimizes latency. The network implements a collective network and a global asynchronous network that provides global barrier and notification functions. Integrated into the node design is a list-based prefetcher. The memory system implements transactional memory, thread-level speculation, and a multiversioning cache that improves the soft error rate while supporting DMA functionality, allowing for message-passing parallel processing.
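    Nearest-neighbor addressing on a torus network of the kind described here is straightforward to illustrate: coordinates wrap modulo the size of each dimension, so every node has exactly 2·d neighbors. The sketch below is a generic illustration (the dimension sizes are made up), not IBM's routing logic:

```python
# Compute the nearest neighbors of a node on a d-dimensional torus.
# Stepping +1 or -1 along each axis, with periodic wraparound, yields
# the 2*d links per node that a torus interconnect provides.
def torus_neighbors(coord, dims):
    neighbors = []
    for axis in range(len(dims)):
        for step in (-1, 1):
            nbr = list(coord)
            nbr[axis] = (nbr[axis] + step) % dims[axis]  # periodic wrap
            neighbors.append(tuple(nbr))
    return neighbors

dims = (4, 4, 4, 4, 2)                # a small, hypothetical 5-D torus
links = torus_neighbors((0, 0, 0, 0, 0), dims)
print(len(links))                     # 10 links: 2 per dimension
```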

  17. NASA Advanced Concepts Office, Earth-To-Orbit Team Design Process and Tools

    NASA Technical Reports Server (NTRS)

    Waters, Eric D.; Creech, Dennis M.; Garcia, Jessica; Threet, Grady E., Jr.; Phillips, Alan

    2012-01-01

    The Earth-to-Orbit Team (ETO) of the Advanced Concepts Office (ACO) at NASA Marshall Space Flight Center (MSFC) is considered the pre-eminent go-to group for pre-phase A and phase A concept definition. Over the past several years the ETO team has evaluated thousands of launch vehicle concept variations for a significant number of studies including agency-wide efforts such as the Exploration Systems Architecture Study (ESAS), Constellation, Heavy Lift Launch Vehicle (HLLV), Augustine Report, Heavy Lift Propulsion Technology (HLPT), Human Exploration Framework Team (HEFT), and Space Launch System (SLS). The ACO ETO Team is called upon to address many needs in NASA's design community; some of these are defining extremely large trade-spaces, evaluating advanced technology concepts which have not been addressed by a large majority of the aerospace community, and the rapid turn-around of highly time critical actions. It is the time critical actions, those often limited by schedule or little advanced warning, that have forced the five member ETO team to develop a design process robust enough to handle their current output level in order to meet their customer's needs. Based on the number of vehicle concepts evaluated over the past year this output level averages to four completed vehicle concepts per day. Each of these completed vehicle concepts includes a full mass breakdown of the vehicle to a tertiary level of subsystem components and a vehicle trajectory analysis to determine optimized payload delivery to specified orbital parameters, flight environments, and delta v capability. A structural analysis of the vehicle to determine flight loads based on the trajectory output, material properties, and geometry of the concept is also performed. Due to working in this fast-paced and sometimes rapidly changing environment, the ETO Team has developed a finely tuned process to maximize their delivery capabilities. The objective of this paper is to describe the interfaces

  18. NASA Advanced Concepts Office, Earth-To-Orbit Team Design Process and Tools

    NASA Technical Reports Server (NTRS)

    Waters, Eric D.; Garcia, Jessica; Threet, Grady E., Jr.; Phillips, Alan

    2013-01-01

    The Earth-to-Orbit Team (ETO) of the Advanced Concepts Office (ACO) at NASA Marshall Space Flight Center (MSFC) is considered the pre-eminent "go-to" group for pre-phase A and phase A concept definition. Over the past several years the ETO team has evaluated thousands of launch vehicle concept variations for a significant number of studies including agency-wide efforts such as the Exploration Systems Architecture Study (ESAS), Constellation, Heavy Lift Launch Vehicle (HLLV), Augustine Report, Heavy Lift Propulsion Technology (HLPT), Human Exploration Framework Team (HEFT), and Space Launch System (SLS). The ACO ETO Team is called upon to address many needs in NASA's design community; some of these are defining extremely large trade-spaces, evaluating advanced technology concepts which have not been addressed by a large majority of the aerospace community, and the rapid turn-around of highly time critical actions. It is the time critical actions, those often limited by schedule or little advanced warning, that have forced the five member ETO team to develop a design process robust enough to handle their current output level in order to meet their customer's needs. Based on the number of vehicle concepts evaluated over the past year this output level averages to four completed vehicle concepts per day. Each of these completed vehicle concepts includes a full mass breakdown of the vehicle to a tertiary level of subsystem components and a vehicle trajectory analysis to determine optimized payload delivery to specified orbital parameters, flight environments, and delta v capability. A structural analysis of the vehicle to determine flight loads based on the trajectory output, material properties, and geometry of the concept is also performed. Due to working in this fast-paced and sometimes rapidly changing environment, the ETO Team has developed a finely tuned process to maximize their delivery capabilities. The objective of this paper is to describe the interfaces

  19. The impact of the U.S. supercomputing initiative will be global

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crawford, Dona

    2016-01-15

    Last July, President Obama issued an executive order that created a coordinated federal strategy for high performance computing (HPC) research, development, and deployment called the U.S. National Strategic Computing Initiative (NSCI). This bold, necessary step toward building the next generation of supercomputers has inaugurated a new era for U.S. HPC.

  20. 2006 NASA Seal/Secondary Air System Workshop; Volume 1

    NASA Technical Reports Server (NTRS)

    Steinetz, Bruce, M. (Editor); Hendricks, Robert C. (Editor); Delgado, Irebert (Editor)

    2007-01-01

    The 2006 NASA Seal/Secondary Air System workshop covered the following topics: (i) Overview of NASA's new Exploration Initiative program aimed at exploring the Moon, Mars, and beyond; (ii) Overview of NASA's new fundamental aeronautics technology project; (iii) Overview of NASA Glenn Research Center's seal project aimed at developing advanced seals for NASA's turbomachinery, space, and reentry vehicle needs; (iv) Reviews of NASA prime contractor, vendor, and university advanced sealing concepts including tip clearance control, test results, experimental facilities, and numerical predictions; and (v) Reviews of material development programs relevant to advanced seals development. Turbine engine studies have shown that reducing seal leakages as well as high-pressure turbine (HPT) blade tip clearances will reduce fuel burn, lower emissions, retain exhaust gas temperature margin, and increase range. Several organizations presented development efforts aimed at developing faster clearance control systems and associated technology to meet future engine needs. The workshop also covered several programs NASA is funding to develop technologies for the Exploration Initiative and advanced reusable space vehicle technologies. NASA plans on developing an advanced docking and berthing system that would permit any vehicle to dock to any on-orbit station or vehicle. Seal technical challenges (including space environments, temperature variation, and seal-on-seal operation) as well as plans to develop the necessary "androgynous" seal technologies were reviewed. Researchers also reviewed seal technologies employed by the Apollo command module that serve as an excellent basis for seals for NASA's new Crew Exploration Vehicle (CEV).

  1. Technological Innovations from NASA

    NASA Technical Reports Server (NTRS)

    Pellis, Neal R.

    2006-01-01

    The challenge of human space exploration places demands on technology that push concepts and development to the leading edge. In biotechnology and biomedical equipment development, NASA science has been the seed for numerous innovations, many of which are in the commercial arena. The biotechnology effort has led to rational drug design, analytical equipment, and cell culture and tissue engineering strategies. Biomedical research and development has resulted in medical devices that enable diagnosis and treatment advances. NASA Biomedical developments are exemplified in the new laser light scattering analysis for cataracts, the axial flow left ventricular-assist device, non contact electrocardiography, and the guidance system for LASIK surgery. Many more developments are in progress. NASA will continue to advance technologies, incorporating new approaches from basic and applied research, nanotechnology, computational modeling, and database analyses.

  2. NASA advanced space photovoltaic technology-status, potential and future mission applications

    NASA Technical Reports Server (NTRS)

    Flood, Dennis J.; Piszczor, Michael, Jr.; Stella, Paul M.; Bennett, Gary L.

    1989-01-01

    The NASA program in space photovoltaic research and development encompasses a wide range of emerging options for future space power systems, and includes both cell and array technology development. The long range goals are to develop technology capable of achieving 300 W/kg for planar arrays, and 300 W/sq m for concentrator arrays. InP and GaAs planar and concentrator cell technologies are under investigation for their potential high efficiency and good radiation resistance. The Advanced Photovoltaic Solar Array (APSA) program is a near term effort aimed at demonstrating 130 W/kg beginning of life specific power using thin (62 micrometer) silicon cells. It is intended to be technology transparent to future high efficiency cells and provides the baseline for development of the 300 W/kg array.

  3. Recent results from advanced research on space solar cells at NASA

    NASA Technical Reports Server (NTRS)

    Flood, Dennis J.

    1990-01-01

    The NASA program in space photovoltaic research and development encompasses a wide range of emerging options for future space power systems, and includes both cell and array technology development. The long range goals are to develop technology capable of achieving 300 W/kg for planar arrays, and 300 W/sq m for concentrator arrays. InP and GaAs planar and concentrator cell technologies are under investigation for their potential high efficiency and good radiation resistance. The Advanced Photovoltaic Solar Array (APSA) program is a near term effort aimed at demonstrating 130 W/kg beginning of life specific power using thin (62 micrometer) silicon cells. It is intended to be technology transparent to future high efficiency cells and provides the baseline for development of the 300 W/kg array.

  4. NASA Lewis Helps Develop Advanced Saw Blades for the Lumber Industry

    NASA Technical Reports Server (NTRS)

    1998-01-01

    NASA Lewis Research Center's Structures and Material Divisions are centers of excellence in high-temperature alloys for aerospace applications such as advanced aircraft and rocket engines. Lewis' expertise in these fields was enlisted in the development of a new generation of circular sawblades for the lumber industry to use in cutting logs into boards. The U.S. Department of Agriculture's (USDA) Forest Products Laboratory and their supplier had succeeded in developing a thinner sawblade by using a nickel-based alloy, but they needed to reduce excessive warping due to residual stresses. They requested assistance from Lewis' experts, who successfully eliminated the residual stress problem and increased blade strength by over 12 percent. They achieved this by developing an innovative heat treatment based on their knowledge of nickel-based superalloys used in aeropropulsion applications.

  5. Supercomputing 2002: NAS Demo Abstracts

    NASA Technical Reports Server (NTRS)

    Parks, John (Technical Monitor)

    2002-01-01

    The hyperwall is a new concept in visual supercomputing, conceived and developed by the NAS Exploratory Computing Group. The hyperwall will allow simultaneous and coordinated visualization and interaction of an array of processes, such as the computations of a parameter study or the parallel evolutions of a genetic algorithm population. Making over 65 million pixels available to the user, the hyperwall will enable and elicit qualitatively new ways of leveraging computers to accomplish science. It is currently still unclear whether we will be able to transport the hyperwall to SC02. The crucial display frame still has not been completed by the metal fabrication shop, although they promised an August delivery. Also, we are still working on the fragile-node issue, which may require transplantation of the compute nodes from the present 2U cases into 3U cases. This modification will increase the present 3-rack configuration to 5 racks.

  6. Developments in the simulation of compressible inviscid and viscous flow on supercomputers

    NASA Technical Reports Server (NTRS)

    Steger, J. L.; Buning, P. G.

    1985-01-01

    In anticipation of future supercomputers, finite difference codes are rapidly being extended to simulate three-dimensional compressible flow about complex configurations. Some of these developments are reviewed. The importance of computational flow visualization and diagnostic methods to three-dimensional flow simulation is also briefly discussed.

  7. White paper: A plan for cooperation between NASA and DARPA to establish a center for advanced architectures

    NASA Technical Reports Server (NTRS)

    Denning, P. J.; Adams, G. B., III; Brown, R. L.; Kanerva, P.; Leiner, B. M.; Raugh, M. R.

    1986-01-01

    Large, complex computer systems require many years of development. It is recognized that large scale systems are unlikely to be delivered in useful condition unless users are intimately involved throughout the design process. A mechanism is described that will involve users in the design of advanced computing systems and will accelerate the insertion of new systems into scientific research. This mechanism is embodied in a facility called the Center for Advanced Architectures (CAA). CAA would be a division of RIACS (Research Institute for Advanced Computer Science) and would receive its technical direction from a Scientific Advisory Board established by RIACS. The CAA described here is a possible implementation of a center envisaged in a proposed cooperation between NASA and DARPA.

  8. Batteries at NASA - Today and Beyond

    NASA Technical Reports Server (NTRS)

    Reid, Concha M.

    2015-01-01

    NASA uses batteries for virtually all of its space missions. Batteries can be bulky and heavy, and some chemistries are more prone to safety issues than others. To meet NASA's needs for safe, lightweight, compact and reliable batteries, scientists and engineers at NASA develop advanced battery technologies that are suitable for space applications and that can satisfy these multiple objectives. Many times, these objectives compete with one another, as the demand for more and more energy in smaller packages dictates that we use higher energy chemistries that are also more energetic by nature. NASA partners with companies and universities, like Xavier University of Louisiana, to pool our collective knowledge and discover innovative technical solutions to these challenges. This talk will discuss a little about NASA's use of batteries and why NASA seeks more advanced chemistries. A short primer on battery chemistries and their chemical reactions is included. Finally, the talk will touch on how the work under the Solid High Energy Lithium Battery (SHELiB) grant to develop solid lithium-ion conducting electrolytes and solid-state batteries can contribute to NASA's mission.

  9. Development of Advanced Environmental Barrier Coatings for SiC/SiC Composites at NASA GRC: Prime-Reliant Design and Durability Perspectives

    NASA Technical Reports Server (NTRS)

    Zhu, Dongming

    2017-01-01

    Environmental barrier coatings (EBCs) are considered technologically important because of the critical need for, and their ability to provide, effective protection of turbine hot-section SiC/SiC ceramic matrix composite (CMC) components in harsh engine combustion environments. The development of NASA's advanced environmental barrier coatings has been aimed at significantly improving the coating system's temperature capability, stability, erosion-impact resistance, and CMAS resistance for SiC/SiC turbine airfoil and combustor component applications. The NASA environmental barrier coating developments have also emphasized thermo-mechanical creep and fatigue resistance in simulated engine heat flux and environments. Experimental results and models for advanced EBC systems will be presented to help establish advanced EBC composition design methodologies, performance modeling, and life predictions for achieving prime-reliant, durable environmental coating systems for 2700-3000 F engine component applications. Major technical barriers in developing environmental barrier coating systems, and in integrating the coatings with next-generation composites having further improved temperature capability, environmental stability, and EBC-CMC fatigue-environment system durability, will be discussed.

  10. NAS technical summaries: Numerical aerodynamic simulation program, March 1991 - February 1992

    NASA Technical Reports Server (NTRS)

    1992-01-01

    NASA created the Numerical Aerodynamic Simulation (NAS) Program in 1987 to focus resources on solving critical problems in aeroscience and related disciplines by utilizing the power of the most advanced supercomputers available. The NAS Program provides scientists with the necessary computing power to solve today's most demanding computational fluid dynamics problems and serves as a pathfinder in integrating leading-edge supercomputing technologies, thus benefiting other supercomputer centers in Government and industry. This report contains selected scientific results from the 1991-92 NAS Operational Year, March 4, 1991 to March 3, 1992, which is the fifth year of operation. During this year, the scientific community was given access to a Cray-2 and a Cray Y-MP. The Cray-2, the first generation supercomputer, has four processors, 256 megawords of central memory, and a total sustained speed of 250 million floating point operations per second. The Cray Y-MP, the second generation supercomputer, has eight processors and a total sustained speed of one billion floating point operations per second. Additional memory was installed this year, doubling capacity from 128 to 256 megawords of solid-state storage-device memory. Because of its higher performance, the Cray Y-MP delivered approximately 77 percent of the total number of supercomputer hours used during this year.

  11. Storage and network bandwidth requirements through the year 2000 for the NASA Center for Computational Sciences

    NASA Technical Reports Server (NTRS)

    Salmon, Ellen

    1996-01-01

    The data storage and retrieval demands of space and Earth sciences researchers have made the NASA Center for Computational Sciences (NCCS) Mass Data Storage and Delivery System (MDSDS) one of the world's most active Convex UniTree systems. Science researchers formed the NCCS's Computer Environments and Research Requirements Committee (CERRC) to relate their projected supercomputing and mass storage requirements through the year 2000. Using the CERRC guidelines and observations of current usage, some detailed projections of requirements for MDSDS network bandwidth and mass storage capacity and performance are presented.

  12. NASA Webb Telescope

    NASA Image and Video Library

    2017-12-08

    NASA image release September 17, 2010. In preparation for a cryogenic test, NASA Goddard technicians install instrument mass simulators onto the James Webb Space Telescope ISIM structure. The ISIM structure supports and holds the four Webb telescope science instruments: the Mid-Infrared Instrument (MIRI), the Near-Infrared Camera (NIRCam), the Near-Infrared Spectrograph (NIRSpec), and the Fine Guidance Sensor (FGS). Credit: NASA/GSFC/Chris Gunn. To learn more about the James Webb Space Telescope go to: www.jwst.nasa.gov/ NASA Goddard Space Flight Center contributes to NASA's mission through four scientific endeavors: Earth Science, Heliophysics, Solar System Exploration, and Astrophysics. Goddard plays a leading role in NASA's endeavors by providing compelling scientific knowledge to advance the Agency's mission.

  13. NASA Education Implementation Plan 2015-2017

    ERIC Educational Resources Information Center

    National Aeronautics and Space Administration, 2015

    2015-01-01

    The NASA Education Implementation Plan (NEIP) provides an understanding of the role of NASA in advancing the nation's STEM education and workforce pipeline. The document outlines the roles and responsibilities that NASA Education has in approaching and achieving the agency's and administration's strategic goals in STEM Education. The specific…

  14. Some Problems and Solutions in Transferring Ecosystem Simulation Codes to Supercomputers

    NASA Technical Reports Server (NTRS)

    Skiles, J. W.; Schulbach, C. H.

    1994-01-01

    Many computer codes for the simulation of ecological systems have been developed in the last twenty-five years. This development took place initially on main-frame computers, then mini-computers, and more recently, on micro-computers and workstations. Recent recognition of ecosystem science as a High Performance Computing and Communications Program Grand Challenge area emphasizes supercomputers (both parallel and distributed systems) as the next set of tools for ecological simulation. Transferring ecosystem simulation codes to such systems is not a matter of simply compiling and executing existing code on the supercomputer, since there are significant differences in the system architectures of sequential, scalar computers and parallel and/or vector supercomputers. To more appropriately match the application to the architecture (necessary to achieve reasonable performance), the parallelism (if it exists) of the original application must be exploited. We discuss our work in transferring a general grassland simulation model (developed on a VAX in the FORTRAN computer programming language) to a Cray Y-MP. We describe the Cray's shared-memory vector architecture and discuss our rationale for selecting the Cray. We describe porting the model to the Cray and executing and verifying a baseline version, and we discuss the changes we made to exploit the parallelism in the application and to improve code execution. As a result, the Cray executed the model 30 times faster than the VAX 11/785 and 10 times faster than a Sun 4 workstation. We achieved an additional speed-up of approximately 30 percent over the original Cray run by using the compiler's vectorizing capabilities and the machine's ability to put subroutines and functions "in-line" in the code. With the modifications, the code still runs at only about 5% of the Cray's peak speed because it makes ineffective use of the vector processing capabilities of the Cray. We conclude with a discussion and future plans.
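    The kind of rewrite the abstract describes, expressing independent per-cell updates so a vectorizing compiler (or array library) can process them as whole-array operations, can be sketched as follows. This is an illustration in Python/NumPy, not the paper's FORTRAN grassland model; the growth-update formula and names are hypothetical.

```python
import numpy as np

# Hypothetical per-cell update: each grid cell's new biomass depends only
# on that cell's current state, so the loop iterations are independent.

def growth_scalar(biomass, rate, dt):
    """Scalar form: one cell at a time, as a sequential machine would run it."""
    out = np.empty_like(biomass)
    for i in range(biomass.size):
        out[i] = biomass[i] + rate[i] * biomass[i] * dt
    return out

def growth_vectorized(biomass, rate, dt):
    """Vector form: the same update as a whole-array expression, the shape
    a vectorizing compiler tries to reach automatically."""
    return biomass + rate * biomass * dt

b = np.array([1.0, 2.0, 3.0])
r = np.array([0.1, 0.2, 0.3])
assert np.allclose(growth_scalar(b, r, 1.0), growth_vectorized(b, r, 1.0))
```

The two forms compute identical results; only the vector form exposes the independence of the iterations to the hardware.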

  15. Overview of NASA Glenn Seal Project

    NASA Technical Reports Server (NTRS)

    Steinetz, Bruce M.; Dunlap, Patrick; Proctor, Margaret; Delgado, Irebert; Finkbeiner, Josh; DeMange, Jeff; Daniels, Christopher C.; Taylor, Shawn; Oswald, Jay

    2006-01-01

    NASA Glenn is currently performing seal research supporting both advanced turbine engine development and advanced space vehicle/propulsion system development. Studies have shown that decreasing parasitic leakage through applying advanced seals will increase turbine engine performance and decrease operating costs. Studies have also shown that higher temperature, long life seals are critical in meeting next generation space vehicle and propulsion system goals in the areas of performance, reusability, safety, and cost. NASA Glenn is developing seal technology and providing technical consultation for the Agency s key aero- and space technology development programs.

  16. NASA Space Technology Roadmaps and Priorities: Restoring NASA's Technological Edge and Paving the Way for a New Era in Space

    NASA Technical Reports Server (NTRS)

    2012-01-01

    Success in executing future NASA space missions will depend on advanced technology developments that should already be underway. It has been years since NASA has had a vigorous, broad-based program in advanced space technology development, and NASA's technology base is largely depleted. As noted in a recent National Research Council report on the U.S. civil space program: Future U.S. leadership in space requires a foundation of sustained technology advances that can enable the development of more capable, reliable, and lower-cost spacecraft and launch vehicles to achieve space program goals. A strong advanced technology development foundation is needed also to enhance technology readiness of new missions, mitigate their technological risks, improve the quality of cost estimates, and thereby contribute to better overall mission cost management. Yet financial support for this technology base has eroded over the years. The United States is now living on the innovation funded in the past and has an obligation to replenish this foundational element. NASA has developed a draft set of technology roadmaps to guide the development of space technologies under the leadership of the NASA Office of the Chief Technologist. The NRC appointed the Steering Committee for NASA Technology Roadmaps and six panels to evaluate the draft roadmaps, recommend improvements, and prioritize the technologies within each and among all of the technology areas as NASA finalizes the roadmaps. The steering committee is encouraged by the initiative NASA has taken through the Office of the Chief Technologist (OCT) to develop technology roadmaps and to seek input from the aerospace technical community with this study.

  17. ParaBTM: A Parallel Processing Framework for Biomedical Text Mining on Supercomputers.

    PubMed

    Xing, Yuting; Wu, Chengkun; Yang, Xi; Wang, Wei; Zhu, En; Yin, Jianping

    2018-04-27

    A prevailing way of extracting valuable information from biomedical literature is to apply text mining methods on unstructured texts. However, the massive amount of literature that needs to be analyzed poses a big data challenge to the processing efficiency of text mining. In this paper, we address this challenge by introducing parallel processing on a supercomputer. We developed paraBTM, a runnable framework that enables parallel text mining on the Tianhe-2 supercomputer. It employs a low-cost yet effective load balancing strategy to maximize the efficiency of parallel processing. We evaluated the performance of paraBTM on several datasets, utilizing three types of named entity recognition tasks as demonstration. Results show that, in most cases, the processing efficiency can be greatly improved with parallel processing, and the proposed load balancing strategy is simple and effective. In addition, our framework can be readily applied to other tasks of biomedical text mining besides NER.
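    The abstract does not detail paraBTM's load balancing strategy, but a common low-cost approach of the kind described, assigning work by an estimated cost (e.g., document length) so that no node idles while others are overloaded, is the greedy longest-processing-time heuristic. A minimal sketch, with illustrative names:

```python
import heapq

def balance(tasks, n_workers):
    """Greedy longest-processing-time assignment: consider tasks in
    decreasing estimated cost and always give the next task to the
    currently least-loaded worker. `tasks` is a list of (cost, name)."""
    # Min-heap of (load, worker_id, assigned_tasks); worker_id breaks ties.
    loads = [(0, w, []) for w in range(n_workers)]
    heapq.heapify(loads)
    for cost, task in sorted(tasks, reverse=True):
        load, w, assigned = heapq.heappop(loads)
        assigned.append(task)
        heapq.heappush(loads, (load + cost, w, assigned))
    return {w: assigned for load, w, assigned in loads}
```

For example, `balance([(5, 'a'), (3, 'b'), (2, 'c'), (2, 'd')], 2)` splits the work 7/5, close to the even split, at a cost of one sort plus one heap operation per task.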

  18. Computational fluid dynamics research at the United Technologies Research Center requiring supercomputers

    NASA Technical Reports Server (NTRS)

    Landgrebe, Anton J.

    1987-01-01

    An overview of research activities at the United Technologies Research Center (UTRC) in the area of Computational Fluid Dynamics (CFD) is presented. The requirement and use of various levels of computers, including supercomputers, for the CFD activities is described. Examples of CFD directed toward applications to helicopters, turbomachinery, heat exchangers, and the National Aerospace Plane are included. Helicopter rotor codes for the prediction of rotor and fuselage flow fields and airloads were developed with emphasis on rotor wake modeling. Airflow and airload predictions and comparisons with experimental data are presented. Examples are presented of recent parabolized Navier-Stokes and full Navier-Stokes solutions for hypersonic shock-wave/boundary layer interaction, and hydrogen/air supersonic combustion. In addition, other examples of CFD efforts in turbomachinery Navier-Stokes methodology and separated flow modeling are presented. A brief discussion of the 3-tier scientific computing environment is also presented, in which the researcher has access to workstations, mid-size computers, and supercomputers.

  19. Computational fluid dynamics research at the United Technologies Research Center requiring supercomputers

    NASA Astrophysics Data System (ADS)

    Landgrebe, Anton J.

    1987-03-01

    An overview of research activities at the United Technologies Research Center (UTRC) in the area of Computational Fluid Dynamics (CFD) is presented. The requirement and use of various levels of computers, including supercomputers, for the CFD activities is described. Examples of CFD directed toward applications to helicopters, turbomachinery, heat exchangers, and the National Aerospace Plane are included. Helicopter rotor codes for the prediction of rotor and fuselage flow fields and airloads were developed with emphasis on rotor wake modeling. Airflow and airload predictions and comparisons with experimental data are presented. Examples are presented of recent parabolized Navier-Stokes and full Navier-Stokes solutions for hypersonic shock-wave/boundary layer interaction, and hydrogen/air supersonic combustion. In addition, other examples of CFD efforts in turbomachinery Navier-Stokes methodology and separated flow modeling are presented. A brief discussion of the 3-tier scientific computing environment is also presented, in which the researcher has access to workstations, mid-size computers, and supercomputers.

  20. NASA Trapezoidal Wing Computations Including Transition and Advanced Turbulence Modeling

    NASA Technical Reports Server (NTRS)

    Rumsey, C. L.; Lee-Rausch, E. M.

    2012-01-01

    Flow about the NASA Trapezoidal Wing is computed with several turbulence models by using grids from the first High Lift Prediction Workshop in an effort to advance understanding of computational fluid dynamics modeling for this type of flowfield. Transition is accounted for in many of the computations. In particular, a recently-developed 4-equation transition model is utilized and works well overall. Accounting for transition tends to increase lift and decrease moment, which improves the agreement with experiment. Upper surface flap separation is reduced, and agreement with experimental surface pressures and velocity profiles is improved. The predicted shape of wakes from upstream elements is strongly influenced by grid resolution in regions above the main and flap elements. Turbulence model enhancements to account for rotation and curvature have the general effect of increasing lift and improving the resolution of the wing tip vortex as it convects downstream. However, none of the models improve the prediction of surface pressures near the wing tip, where more grid resolution is needed.

  1. Processing and Preparation of Advanced Stirling Convertors for Extended Operation at NASA Glenn Research Center

    NASA Technical Reports Server (NTRS)

    Oriti, Salvatore M.; Cornell, Peggy A.

    2008-01-01

    The U.S. Department of Energy (DOE), Lockheed Martin Space Company (LMSC), Sunpower Inc., and NASA Glenn Research Center (GRC) have been developing an Advanced Stirling Radioisotope Generator (ASRG) for use as a power system on space science missions. This generator will make use of the free-piston Stirling convertors to achieve higher conversion efficiency than currently available alternatives. NASA GRC is supporting the development of the ASRG by providing extended operation of several Sunpower Inc. Advanced Stirling Convertors (ASCs). In the past year and a half, eight ASCs have operated in continuous, unattended mode in both air and thermal vacuum environments. Hardware, software, and procedures were developed to prepare each convertor for extended operation with intended durations on the order of tens of thousands of hours. Steps taken to prepare a convertor for long-term operation included geometry measurements, thermocouple instrumentation, evaluation of working fluid purity, evacuation with bakeout, and high purity charge. Actions were also taken to ensure the reliability of support systems, such as data acquisition and automated shutdown checkouts. Once a convertor completed these steps, it underwent short-term testing to gather baseline performance data before initiating extended operation. These tests included insulation thermal loss characterization, low-temperature checkout, and full-temperature and power demonstration. This paper discusses the facilities developed to support continuous, unattended operation, and the processing results of the eight ASCs currently on test.

  2. Parallel-vector solution of large-scale structural analysis problems on supercomputers

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.; Nguyen, Duc T.; Agarwal, Tarun K.

    1989-01-01

    A direct linear equation solution method based on the Choleski factorization procedure is presented which exploits both parallel and vector features of supercomputers. The new equation solver is described, and its performance is evaluated by solving structural analysis problems on three high-performance computers. The method has been implemented using Force, a generic parallel FORTRAN language.
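    The Choleski-based direct method described above can be sketched in a few lines. This is a dense Python/NumPy illustration of the factor-then-substitute structure, not the paper's Force/FORTRAN implementation; the column-oriented inner update is the part such solvers vectorize and parallelize.

```python
import numpy as np

def cholesky_factor(A):
    """Return lower-triangular L with A = L @ L.T (A symmetric positive definite)."""
    n = A.shape[0]
    L = np.zeros_like(A, dtype=float)
    for j in range(n):
        # Vector update of column j: a saxpy-style operation that maps
        # well onto vector hardware.
        s = A[j:, j] - L[j:, :j] @ L[j, :j]
        L[j, j] = np.sqrt(s[0])
        L[j + 1:, j] = s[1:] / L[j, j]
    return L

def cholesky_solve(A, b):
    """Solve A x = b via L L^T x = b: forward then backward substitution."""
    L = cholesky_factor(np.asarray(A, dtype=float))
    y = np.linalg.solve(L, b)       # forward substitution with L
    return np.linalg.solve(L.T, y)  # back substitution with L^T

A = np.array([[4.0, 2.0], [2.0, 3.0]])
b = np.array([10.0, 8.0])
x = cholesky_solve(A, b)
```

In the structural-analysis setting, A is the (sparse, banded) stiffness matrix, and the factorization dominates the cost, which is why it is the target of the parallel/vector work.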

  3. NASA SBIR product catalog, 1991

    NASA Technical Reports Server (NTRS)

    1991-01-01

    This catalog is a partial list of products of NASA SBIR (Small Business Innovation Research) projects that have advanced to some degree into Phase 3. While most of the products evolved from work conducted during SBIR Phase 1 and 2, a few advanced to commercial status solely from Phase 1 activities. The catalog presents information provided to NASA by SBIR contractors who wished to have their products exhibited at Technology 2001, a NASA-sponsored technology transfer conference held in San Jose, California, on December 4, 5, and 6, 1991. The catalog presents the product information in the following technology areas: computer and communication systems; information processing and AI; robotics and automation; signal and image processing; microelectronics; electronic devices and equipment; microwave electronic devices; optical devices and lasers; advanced materials; materials processing; materials testing and NDE; materials instrumentation; aerodynamics and aircraft; fluid mechanics and measurement; heat transfer devices; refrigeration and cryogenics; energy conversion devices; oceanographic instruments; atmosphere monitoring devices; water management; life science instruments; and spacecraft electromechanical systems.

  4. NASA's P-3 at Sunrise

    NASA Image and Video Library

    2017-12-08

    NASA's P-3B airborne laboratory on the ramp at Thule Air Base in Greenland early on the morning of Mar. 21, 2013. Credit: NASA/Goddard/Christy Hansen. NASA's Operation IceBridge is an airborne science mission to study Earth's polar ice. For more information about IceBridge, visit: www.nasa.gov/icebridge NASA Goddard Space Flight Center enables NASA's mission through four scientific endeavors: Earth Science, Heliophysics, Solar System Exploration, and Astrophysics. Goddard plays a leading role in NASA's accomplishments by contributing compelling scientific knowledge to advance the Agency's mission.

  5. Open Innovation at NASA: A New Business Model for Advancing Human Health and Performance Innovations

    NASA Technical Reports Server (NTRS)

    Davis, Jeffrey R.; Richard, Elizabeth E.; Keeton, Kathryn E.

    2014-01-01

    This paper describes a new business model for advancing NASA human health and performance innovations and demonstrates how open innovation shaped its development. A 45 percent research and technology development budget reduction drove formulation of a strategic plan grounded in collaboration. We describe the strategy execution, including adoption and results of open innovation initiatives, the challenges of cultural change, and the development of virtual centers and a knowledge management tool to educate and engage the workforce and promote cultural change.

  6. Krylov subspace methods on supercomputers

    NASA Technical Reports Server (NTRS)

    Saad, Youcef

    1988-01-01

    A short survey of recent research on Krylov subspace methods with emphasis on implementation on vector and parallel computers is presented. Conjugate gradient methods have proven very useful on traditional scalar computers, and their popularity is likely to increase as three-dimensional models gain importance. A conservative approach to derive effective iterative techniques for supercomputers has been to find efficient parallel/vector implementations of the standard algorithms. The main source of difficulty in the incomplete factorization preconditionings is in the solution of the triangular systems at each step. A few approaches consisting of implementing efficient forward and backward triangular solutions are described in detail. Polynomial preconditioning as an alternative to standard incomplete factorization techniques is also discussed. Another efficient approach is to reorder the equations so as to improve the structure of the matrix to achieve better parallelism or vectorization. An overview of these and other ideas and their effectiveness or potential for different types of architectures is given.
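    The conjugate gradient method at the heart of the Krylov subspace family has exactly the structure the survey discusses: each iteration is one matrix-vector product plus a few inner products and vector updates, all of which vectorize and parallelize well (it is the triangular solves of incomplete-factorization preconditioning that do not). A minimal unpreconditioned sketch in Python/NumPy:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Unpreconditioned CG for symmetric positive definite A. The per-step
    kernels (A @ p, dot products, axpy updates) are the operations that
    map naturally onto vector and parallel machines."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x          # initial residual
    p = r.copy()           # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)      # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # conjugate new direction against old
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```

A preconditioner enters this loop as one extra solve per iteration; the polynomial preconditioning mentioned above replaces that solve with further matrix-vector products, precisely to keep the iteration supercomputer-friendly.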

  7. NASA Engineering and Technology Advancement Office: A proposal to the administrator

    NASA Technical Reports Server (NTRS)

    Schulze, Norman R.

    1993-01-01

NASA has continually had problems with cost, schedule, performance, reliability, quality, and safety aspects of its programs. Past solutions have not provided the answers needed, and a major change is needed in the way of doing business. A new approach is presented for consideration. These problems are all engineering matters and therefore require engineering solutions. Proper engineering tools are needed to fix engineering problems. Headquarters is responsible for providing the management structure to support programs with appropriate engineering tools. A guide to define those tools and an approach for putting them into place is provided. Recommendations include establishing a new Engineering and Technology Advancement Office and requesting a review of this proposal by the Administrator, since this subject requires a top-level decision. A wide peer review has been conducted by technical staff at Headquarters, the Field Installations, and others in industry, as discussed.

  8. Summary and recent results from the NASA advanced High Speed Propeller Research Program

    NASA Technical Reports Server (NTRS)

    Mitchell, G. A.; Mikkelson, D. C.

    1982-01-01

Advanced high-speed propellers offer large performance improvements for aircraft that cruise in the Mach 0.7 to 0.8 speed regime. The current status of the NASA research program on high-speed propeller aerodynamics, acoustics, and aeroelastics is described. Recent wind tunnel results for five 8- to 10-blade advanced models are compared with analytical predictions. Test results show that blade sweep was important in achieving net efficiencies near 80 percent at Mach 0.8 and reducing near-field cruise noise by dB. Lifting line and lifting surface aerodynamic analysis codes are under development, and some initial lifting line results are compared with propeller force and probe data. Some initial laser velocimeter measurements of the flow field velocities of an 8-bladed 45 deg swept propeller are shown. Experimental aeroelastic results indicate that cascade effects and blade sweep strongly affect propeller aeroelastic characteristics. Comparisons of propeller near-field noise data with linear acoustic theory indicate that the theory adequately predicts near-field noise for subsonic tip speeds but overpredicts the noise for supersonic tip speeds. Potential large gains in propeller efficiency of 7 to 11 percent at Mach 0.8 may be possible with advanced counter-rotation propellers.

  9. Overview of the NASA Advanced In-Space Propulsion Project

    NASA Technical Reports Server (NTRS)

    LaPointe, Michael

    2011-01-01

In FY11, NASA established the Enabling Technologies Development and Demonstration (ETDD) Program, a follow-on to the earlier Exploration Technology Development Program (ETDP) within the NASA Exploration Systems Mission Directorate. Objective: develop, mature, and test enabling technologies for human space exploration.

  10. NASA Automatic Information Security Handbook

    NASA Technical Reports Server (NTRS)

    1993-01-01

This handbook details the Automated Information Security (AIS) management process for NASA. Automated information system security is an increasingly important issue for all NASA managers. Rapid advancements in computer and network technologies and the demanding nature of space exploration and space research have made NASA increasingly dependent on automated systems to store, process, and transmit vast amounts of mission support information, hence the need for AIS systems and management. This handbook provides consistent policies, procedures, and guidance to assure that an aggressive and effective AIS program is developed, implemented, and sustained at all NASA organizations and NASA support contractors.

  11. Overview of NASA Glenn Seal Project

    NASA Technical Reports Server (NTRS)

Steinetz, Bruce M.; Dunlap, Patrick H., Jr.; Proctor, Margaret; Delgado, Irebert; Finkbeiner, Joshua; deGroh, Henry; Ritzert, Frank; Daniels, Christopher; DeMange, Jeff; Taylor, Shawn

    2009-01-01

NASA Glenn is currently performing seal research supporting both advanced turbine engine development and advanced space vehicle/propulsion system development. Studies have shown that decreasing parasitic leakage by applying advanced seals will increase turbine engine performance and decrease operating costs. Studies have also shown that higher temperature, long life seals are critical in meeting next generation space vehicle and propulsion system goals in the areas of performance, reusability, safety, and cost. Advanced docking system seals need to be very robust, resisting space environmental effects while exhibiting very low leakage and low compression and adhesion forces. NASA Glenn is developing seal technology and providing technical consultation for the Agency's key aero- and space technology development programs.

  12. 2001 NASA Seal/secondary Air System Workshop, Volume 1. Volume 1

    NASA Technical Reports Server (NTRS)

    Steinetz, Bruce M. (Editor); Hendricks, Robert C. (Editor)

    2002-01-01

The 2001 NASA Seal/Secondary Air System Workshop covered the following topics: (i) overview of NASA's Vision for 21st Century Aircraft; (ii) overview of NASA-sponsored Ultra-Efficient Engine Technology (UEET); (iii) reviews of sealing concepts, test results, experimental facilities, and numerical predictions; and (iv) reviews of material development programs relevant to advanced seals development. The NASA UEET overview illustrates for the reader the importance of advanced technologies, including seals, in meeting future turbine engine system efficiency and emission goals. The NASA UEET program goals include an 8- to 15-percent reduction in fuel burn, a 15-percent reduction in CO2, a 70-percent reduction in NOx, CO, and unburned hydrocarbons, and a 30-dB noise reduction relative to program baselines. The workshop also covered several programs NASA is funding to investigate advanced reusable space vehicle technologies (X-38) and advanced space ram/scramjet propulsion systems. Seal challenges posed by these advanced systems include high-temperature operation, resiliency at the operating temperature to accommodate sidewall flexing, and durability to last many missions.

  13. NASA Exploration Forum: Human Path to Mars

    NASA Image and Video Library

    2014-04-29

    Sam Scimemi, Director of NASA's International Space Station Division, left, Phil McAlister, Director of NASA's Commercial Spaceflight Division, second from left, Dan Dumbacher, Deputy Associate Administrator of NASA's Exploration Systems Development, center, Michele Gates, Senior Technical Advisor of NASA's Human Exploration and Operations Mission Directorate, second from right, and Jason Crusan, Director of NASA's Advanced Exploration Systems Division, right, sit on a panel during an Exploration Forum showcasing NASA's human exploration path to Mars in the James E. Webb Auditorium at NASA Headquarters on Tuesday, April 29, 2014. Photo Credit: (NASA/Joel Kowsky)

  14. A special purpose silicon compiler for designing supercomputing VLSI systems

    NASA Technical Reports Server (NTRS)

    Venkateswaran, N.; Murugavel, P.; Kamakoti, V.; Shankarraman, M. J.; Rangarajan, S.; Mallikarjun, M.; Karthikeyan, B.; Prabhakar, T. S.; Satish, V.; Venkatasubramaniam, P. R.

    1991-01-01

Design of general/special purpose supercomputing VLSI systems for numeric algorithm execution involves tackling two important aspects, namely their computational and communication complexities. Development of software tools for designing such systems itself becomes complex. Hence a novel design methodology has to be developed. For designing such complex systems a special purpose silicon compiler is needed in which: the computational and communication structures of different numeric algorithms should be taken into account to simplify the silicon compiler design, the approach is macrocell based, and the software tools at different levels (algorithm down to the VLSI circuit layout) should get integrated. In this paper a special purpose silicon (SPS) compiler based on PACUBE macrocell VLSI arrays for designing supercomputing VLSI systems is presented. It is shown that turn-around time and silicon real estate get reduced over the silicon compilers based on PLA's, SLA's, and gate arrays. The first two silicon compiler characteristics mentioned above enable the SPS compiler to perform systolic mapping (at the macrocell level) of algorithms whose computational structures are of GIPOP (generalized inner product outer product) form. Direct systolic mapping on PLA's, SLA's, and gate arrays is very difficult as they are micro-cell based. A novel GIPOP processor is under development using this special purpose silicon compiler.

  15. New NASA Technologies for Space Exploration

    NASA Technical Reports Server (NTRS)

    Calle, Carlos I.

    2015-01-01

NASA is developing new technologies to enable planetary exploration. NASA's Space Launch System is an advanced vehicle for exploration beyond LEO. Robotic explorers like the Mars Science Laboratory are exploring Mars, making discoveries that will make possible the future human exploration of the planet. In this presentation, we report on technologies being developed at NASA KSC for planetary exploration.

  16. NASA Tech House

    NASA Technical Reports Server (NTRS)

    1977-01-01

    The NASA Technology Utilization House, called Tech House, was designed and constructed at NASA's Langley Research Center in Hampton, Virginia, to demonstrate new technology that is available or will be available in the next several years and how the application of aerospace technology could help advance the homebuilding industry. Solar energy use, energy and water conservation, safety, security, and cost were major considerations in adapting the aerospace technology to the construction of Tech House.

  17. NASA's Ship-Aircraft Bio-Optical Research (SABOR)

    NASA Image and Video Library

    2014-08-25

Sunset Over the Gulf of Maine On July 20, 2013, scientists at sea with NASA's SABOR experiment witnessed a spectacular sunset over the Gulf of Maine. NASA's Ship-Aircraft Bio-Optical Research (SABOR) experiment is a coordinated ship and aircraft observation campaign off the Atlantic coast of the United States, an effort to advance space-based capabilities for monitoring microscopic plants that form the base of the marine food chain. Read more: 1.usa.gov/WWRVzj Credit: NASA/SABOR/Wayne Slade, Sequoia Scientific

  18. NASA's Ship-Aircraft Bio-Optical Research (SABOR)

    NASA Image and Video Library

    2017-12-08

Instruments Overboard On July 26, 2014, scientists worked past dusk to prepare and deploy the optical instruments and ocean water sensors during NASA's SABOR experiment. NASA's Ship-Aircraft Bio-Optical Research (SABOR) experiment is a coordinated ship and aircraft observation campaign off the Atlantic coast of the United States, an effort to advance space-based capabilities for monitoring microscopic plants that form the base of the marine food chain. Read more: 1.usa.gov/WWRVzj Credit: NASA/SABOR/Wayne Slade, Sequoia Scientific

  19. HEP Computing Tools, Grid and Supercomputers for Genome Sequencing Studies

    NASA Astrophysics Data System (ADS)

    De, K.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Novikov, A.; Poyda, A.; Tertychnyy, I.; Wenaus, T.

    2017-10-01

PanDA, the Production and Distributed Analysis workload management system, was developed to address the data processing and analysis challenges of the ATLAS experiment at the LHC. Recently PanDA has been extended to run HEP scientific applications on Leadership Class Facilities and supercomputers. The success of projects using PanDA beyond HEP and the Grid has drawn attention from other compute-intensive sciences such as bioinformatics. Recent advances in Next Generation Genome Sequencing (NGS) technology have led to increasing streams of sequencing data that need to be processed, analysed and made available to bioinformaticians worldwide. Analysis of genome sequencing data using the popular software pipeline PALEOMIX can take a month even on a powerful computing resource. In this paper we describe the adaptation of the PALEOMIX pipeline to run on a distributed computing environment powered by PanDA. To run the pipeline we split the input files into chunks, which are processed separately on different nodes as independent PALEOMIX inputs, and the output files are finally merged; this is very similar to how ATLAS processes and simulates its data. We dramatically decreased the total walltime through automated job (re)submission and brokering within PanDA. Using software tools developed initially for HEP and the Grid reduced payload execution time for mammoth DNA samples from weeks to days.
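The split/run/merge pattern described in the abstract can be sketched as follows (purely illustrative: `run_pipeline` is a hypothetical stand-in that just sorts its chunk rather than invoking PALEOMIX, and a local process pool stands in for PanDA's brokering across nodes):

```python
from concurrent.futures import ProcessPoolExecutor

def split_reads(reads, nchunks):
    """Partition the input records into roughly equal chunks, one per node."""
    size = (len(reads) + nchunks - 1) // nchunks
    return [reads[i:i + size] for i in range(0, len(reads), size)]

def run_pipeline(chunk):
    """Stand-in for running the pipeline on one chunk on one worker node."""
    return sorted(chunk)

def scatter_gather(reads, nchunks=4):
    """Scatter chunks to workers, run them independently, merge the outputs."""
    chunks = split_reads(reads, nchunks)
    with ProcessPoolExecutor(max_workers=nchunks) as ex:
        results = list(ex.map(run_pipeline, chunks))
    merged = []
    for part in results:   # merge per-chunk outputs back into one result
        merged.extend(part)
    return merged
```

Automated (re)submission and brokering, which the paper credits for the walltime reduction, would sit between the scatter and gather steps in a real deployment.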

  20. Multiprocessing on supercomputers for computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; Mehta, Unmeel B.

    1991-01-01

    Little use is made of multiple processors available on current supercomputers (computers with a theoretical peak performance capability equal to 100 MFLOPS or more) to improve turnaround time in computational aerodynamics. The productivity of a computer user is directly related to this turnaround time. In a time-sharing environment, such improvement in this speed is achieved when multiple processors are used efficiently to execute an algorithm. The concept of multiple instructions and multiple data (MIMD) is applied through multitasking via a strategy that requires relatively minor modifications to an existing code for a single processor. This approach maps the available memory to multiple processors, exploiting the C-Fortran-Unix interface. The existing code is mapped without the need for developing a new algorithm. The procedure for building a code utilizing this approach is automated with the Unix stream editor.
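The domain-decomposition idea behind this multitasking approach can be sketched in Python (an illustrative analogue, not the C-Fortran-Unix code the paper describes): each worker process applies one Jacobi relaxation sweep to its own strip of a grid, with one halo row of overlap on each side so the strips can be updated independently:

```python
from multiprocessing import Pool
import numpy as np

def relax_strip(strip):
    """One Jacobi sweep on a horizontal strip (interior points only).
    Reads old values from `strip`, writes new values into a copy, and
    returns only the rows this worker owns (dropping the halo rows)."""
    s = strip.copy()
    s[1:-1, 1:-1] = 0.25 * (strip[:-2, 1:-1] + strip[2:, 1:-1]
                            + strip[1:-1, :-2] + strip[1:-1, 2:])
    return s[1:-1]

def parallel_sweep(grid, nworkers=4):
    """Map strips of the grid onto multiple processes (MIMD-style)."""
    n = grid.shape[0]
    bounds = np.linspace(0, n - 2, nworkers + 1, dtype=int)
    # each strip carries one extra row above and below as its halo
    strips = [grid[b:e + 2] for b, e in zip(bounds[:-1], bounds[1:])]
    with Pool(nworkers) as pool:
        parts = pool.map(relax_strip, strips)
    out = grid.copy()
    out[1:-1] = np.vstack(parts)   # reassemble the owned rows
    return out
```

As in the paper's approach, the serial algorithm is unchanged; only the mapping of data to processors is added around it.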

  1. Networking Technologies Enable Advances in Earth Science

    NASA Technical Reports Server (NTRS)

    Johnson, Marjory; Freeman, Kenneth; Gilstrap, Raymond; Beck, Richard

    2004-01-01

    This paper describes an experiment to prototype a new way of conducting science by applying networking and distributed computing technologies to an Earth Science application. A combination of satellite, wireless, and terrestrial networking provided geologists at a remote field site with interactive access to supercomputer facilities at two NASA centers, thus enabling them to validate and calibrate remotely sensed geological data in near-real time. This represents a fundamental shift in the way that Earth scientists analyze remotely sensed data. In this paper we describe the experiment and the network infrastructure that enabled it, analyze the data flow during the experiment, and discuss the scientific impact of the results.

  2. The Q continuum simulation: Harnessing the power of GPU accelerated supercomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heitmann, Katrin; Frontiere, Nicholas; Sewell, Chris

    2015-08-01

Modeling large-scale sky survey observations is a key driver for the continuing development of high-resolution, large-volume, cosmological simulations. We report the first results from the "Q Continuum" cosmological N-body simulation run carried out on the GPU-accelerated supercomputer Titan. The simulation encompasses a volume of (1300 Mpc)^3 and evolves more than half a trillion particles, leading to a particle mass resolution of m_p ≈ 1.5 × 10^8 M_⊙. At this mass resolution, the Q Continuum run is currently the largest cosmology simulation available. It enables the construction of detailed synthetic sky catalogs, encompassing different modeling methodologies, including semi-analytic modeling and sub-halo abundance matching in a large, cosmological volume. Here we describe the simulation and outputs in detail and present first results for a range of cosmological statistics, such as mass power spectra, halo mass functions, and halo mass-concentration relations for different epochs. We also provide details on challenges connected to running a simulation on almost 90% of Titan, one of the fastest supercomputers in the world, including our usage of Titan's GPU accelerators.
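The quoted particle mass follows directly from the box volume and particle count via m_p = Ω_m ρ_crit V / N. A quick back-of-the-envelope check (the particle count of 8192^3 and the cosmological parameters below are plausible assumptions for illustration, not values stated in the abstract):

```python
# Assumed cosmology (illustrative values, not taken from the paper)
h = 0.71                        # dimensionless Hubble parameter
omega_m = 0.26                  # matter density parameter
rho_crit = 2.775e11 * h**2      # critical density in M_sun / Mpc^3

volume = 1300.0**3              # simulation volume, Mpc^3
n_particles = 8192**3           # assumed count, ~0.55 trillion particles

# particle mass = mean matter density times volume, shared over all particles
m_p = omega_m * rho_crit * volume / n_particles
print(f"m_p ~ {m_p:.2e} M_sun")
```

The result lands near 1.5 × 10^8 M_⊙, consistent with the resolution quoted in the abstract.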

  3. Experiences From NASA/Langley's DMSS Project

    NASA Technical Reports Server (NTRS)

    1996-01-01

There is a trend in institutions with high performance computing and data management requirements to explore mass storage systems with peripherals directly attached to a high speed network. The Distributed Mass Storage System (DMSS) Project at the NASA Langley Research Center (LaRC) has placed such a system into production use. This paper will present the experiences, both good and bad, we have had with this system since putting it into production use. The system is comprised of: 1) National Storage Laboratory (NSL)/UniTree 2.1, 2) IBM 9570 HIPPI-attached disk arrays (both RAID 3 and RAID 5), 3) an IBM RS6000 server, 4) HIPPI/IPI3 third party transfers between the disk array systems and the supercomputer clients, a CRAY Y-MP and a CRAY 2, 5) a "warm spare" file server, 6) transition software to convert from CRAY's Data Migration Facility (DMF) based system to DMSS, 7) an NSC PS32 HIPPI switch, and 8) a STK 4490 robotic library accessed from the IBM RS6000 block mux interface. This paper will cover: 1) the performance of the DMSS in the areas of file transfer rates, migration and recall, and file manipulation (listing, deleting, etc.); 2) the appropriateness of a workstation class of file server for NSL/UniTree with LaRC's present storage requirements in mind; 3) the role of the third party transfers between the supercomputers and the DMSS disk array systems; 4) a detailed comparison (both in performance and functionality) between the DMF and DMSS systems; 5) LaRC's enhancements to the NSL/UniTree system administration environment; 6) the mechanism for DMSS to provide file server redundancy; 7) statistics on the availability of DMSS; and 8) the design of, and experiences with, the locally developed transparent transition software which allowed us to make over 1.5 million DMF files available to NSL/UniTree with minimal system outage.

  4. Agenda of the Fourth Annual Summer Conference, NASA/USRA University Advanced Design Program

    NASA Technical Reports Server (NTRS)

    1988-01-01

    Presentations given by the participants at the fourth annual summer conference of the NASA/USRA University Advanced Design Program are summarized. The study topics include potential space and aeronautics projects which could be undertaken during a 20 to 30 year period beginning with the Space Station Initial Operating Configuration (IOC) scheduled for the early to mid-1990's. This includes system design studies for both manned and unmanned endeavors; e.g., lunar launch and landing facilities and operations, variable artificial gravity facility for the Space Station, manned Mars aircraft and delivery system, long term space habitat, construction equipment for lunar bases, Mars oxygen production system, trans-Pacific high speed civil transport, V/STOL aircraft concepts, etc.

  5. Advancing the Journey to Mars on This Week @NASA – October 30, 2015

    NASA Image and Video Library

    2015-10-30

    During an Oct. 28 keynote speech at the Center for American Progress, in Washington, NASA Administrator Charlie Bolden spoke about the advancement made on the journey to Mars and what lies ahead for future administrations and policy makers. NASA’s recently released report “Journey to Mars: Pioneering Next Steps in Space Exploration,” outlines its plan to reach Mars in phases – with technology demonstrations and research aboard the International Space Station, followed by hardware and procedure development in the proving ground around the moon, before sending humans to the Red Planet. Also, Space station spacewalk, Another record in space for Kelly, Mars Landing Sites/ Exploration Zones Workshop, Cassini’s “deep dive” flyby and more!

  6. Integration of PanDA workload management system with Titan supercomputer at OLCF

    NASA Astrophysics Data System (ADS)

    De, K.; Klimentov, A.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.

    2015-12-01

The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently distributes jobs to more than 100,000 cores at well over 100 Grid sites, the future LHC data taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of the PanDA WMS with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA pilot framework for job submission to Titan's batch queues and local data management, with light-weight MPI wrappers to run single threaded workloads in parallel on Titan's multicore worker nodes. It also gives PanDA the new capability to collect, in real time, information about unused worker nodes on Titan, which allows precise definition of the size and duration of jobs submitted to Titan according to available free resources. This capability significantly reduces PanDA job wait time while improving Titan's utilization efficiency. This implementation was tested with a variety of Monte-Carlo workloads on Titan and is being tested on several other supercomputing platforms. Notice: This manuscript has been authored by employees of Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The publisher by accepting the manuscript for publication acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.
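The backfill idea above, shaping a job to fit whatever nodes are currently idle, can be sketched as follows (purely illustrative; the function and parameter names are invented, and the real PanDA pilot queries Titan's batch system directly rather than receiving a precomputed list):

```python
def pick_backfill_job(free_slots, max_nodes, min_minutes):
    """Given scheduler backfill info as (free_nodes, minutes_available)
    tuples, choose a job geometry that fills the largest usable hole.
    Returns (nodes_to_request, walltime_minutes) or None if nothing fits."""
    usable = [(n, t) for n, t in free_slots if t >= min_minutes]
    if not usable:
        return None
    # pick the hole with the most node-hours, then cap the node request
    nodes, minutes = max(usable, key=lambda slot: slot[0] * slot[1])
    return min(nodes, max_nodes), minutes
```

Sizing jobs this way is what lets opportunistic workloads start almost immediately while raising the machine's overall utilization, as the abstract describes.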

  7. Building more powerful less expensive supercomputers using Processing-In-Memory (PIM) LDRD final report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murphy, Richard C.

    2009-09-01

This report details the accomplishments of the 'Building More Powerful Less Expensive Supercomputers Using Processing-In-Memory (PIM)' LDRD ('PIM LDRD', number 105809) for FY07-FY09. Latency dominates all levels of supercomputer design. Within a node, increasing memory latency, relative to processor cycle time, limits CPU performance. Between nodes, the same increase in relative latency impacts scalability. Processing-In-Memory (PIM) is an architecture that directly addresses this problem using enhanced chip fabrication technology and machine organization. PIMs combine high-speed logic and dense, low-latency, high-bandwidth DRAM, and lightweight threads that tolerate latency by performing useful work during memory transactions. This work examines the potential of PIM-based architectures to support mission critical Sandia applications and an emerging class of more data intensive informatics applications. This work has resulted in a stronger architecture/implementation collaboration between 1400 and 1700. Additionally, key technology components have impacted vendor roadmaps, and we are in the process of pursuing these new collaborations. This work has the potential to impact future supercomputer design and construction, reducing power and increasing performance. This final report is organized as follows: this summary chapter discusses the impact of the project (Section 1), provides an enumeration of publications and other public discussion of the work (Section 1), and concludes with a discussion of future work and impact from the project (Section 1). The appendix contains reprints of the refereed publications resulting from this work.

  8. NASA Exploration Forum: Human Path to Mars

    NASA Image and Video Library

    2014-04-29

    Sam Scimemi, Director of NASA's International Space Station Division, second from left, Phil McAlister, Director of NASA's Commercial Spaceflight Division, third from left, Dan Dumbacher, Deputy Associate Administrator of NASA's Exploration Systems Development, center, Michele Gates, Senior Technical Advisor of NASA's Human Exploration and Operations Mission Directorate, second from right, and Jason Crusan, Director of NASA's Advanced Exploration Systems Division, right, sit on a panel during an Exploration Forum showcasing NASA's human exploration path to Mars in the James E. Webb Auditorium at NASA Headquarters on Tuesday, April 29, 2014. Photo Credit: (NASA/Joel Kowsky)

  9. High performance computing for advanced modeling and simulation of materials

    NASA Astrophysics Data System (ADS)

    Wang, Jue; Gao, Fei; Vazquez-Poletti, Jose Luis; Li, Jianjiang

    2017-02-01

    The First International Workshop on High Performance Computing for Advanced Modeling and Simulation of Materials (HPCMS2015) was held in Austin, Texas, USA, Nov. 18, 2015. HPCMS 2015 was organized by Computer Network Information Center (Chinese Academy of Sciences), University of Michigan, Universidad Complutense de Madrid, University of Science and Technology Beijing, Pittsburgh Supercomputing Center, China Institute of Atomic Energy, and Ames Laboratory.

  10. The ChemViz Project: Using a Supercomputer To Illustrate Abstract Concepts in Chemistry.

    ERIC Educational Resources Information Center

    Beckwith, E. Kenneth; Nelson, Christopher

    1998-01-01

    Describes the Chemistry Visualization (ChemViz) Project, a Web venture maintained by the University of Illinois National Center for Supercomputing Applications (NCSA) that enables high school students to use computational chemistry as a technique for understanding abstract concepts. Discusses the evolution of computational chemistry and provides a…

  11. An Aerodynamic Performance Evaluation of the NASA/Ames Research Center Advanced Concepts Flight Simulator. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Donohue, Paul F.

    1987-01-01

The results of an aerodynamic performance evaluation of the National Aeronautics and Space Administration (NASA)/Ames Research Center Advanced Concepts Flight Simulator (ACFS), conducted in association with the Navy-NASA Joint Institute of Aeronautics, are presented. The ACFS is a full-mission flight simulator which provides an excellent platform for the critical evaluation of emerging flight systems and aircrew performance. The propulsion and flight dynamics models were evaluated using classical flight test techniques. The aerodynamic performance model of the ACFS was found to realistically represent that of current day, medium range transport aircraft. Recommendations are provided to enhance the capabilities of the ACFS to a level forecast for 1995 transport aircraft. The graphical and tabular results of this study will establish a performance section of the ACFS Operations Manual.

  12. 2007 NASA Seal/Secondary Air System Workshop. Volume 1

    NASA Technical Reports Server (NTRS)

    Steinetz, Bruce M.; Hendricks, Robert C.; Delgado, Irebert

    2008-01-01

The 2007 NASA Seal/Secondary Air System workshop covered the following topics: (i) Overview of NASA's new Orion project aimed at developing a new spacecraft that will ferry astronauts to the International Space Station, the Moon, Mars, and beyond; (ii) Overview of NASA's fundamental aeronautics technology project; (iii) Overview of NASA Glenn's seal project aimed at developing advanced seals for NASA's turbomachinery, space, and reentry vehicle needs; (iv) Reviews of NASA prime contractor, vendor, and university advanced sealing concepts, test results, experimental facilities, and numerical predictions; and (v) Reviews of material development programs relevant to advanced seals development. Turbine engine studies have shown that reducing seal leakage as well as high-pressure turbine (HPT) blade tip clearances will reduce fuel burn, lower emissions, retain exhaust gas temperature margin, and increase range. Turbine seal development topics covered include a method for fast-acting HPT blade tip clearance control, noncontacting low-leakage seals, intershaft seals, and a review of engine seal performance requirements for current and future Army engine platforms.

  13. Will Your Next Supercomputer Come from Costco?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farber, Rob

    2007-04-15

A fun topic for April, one that is not an April fool's joke, is that you can purchase a commodity 200+ Gflop (single-precision) Linux supercomputer for around $600 from your favorite electronic vendor. Yes, it's true. Just walk in and ask for a Sony Playstation 3 (PS3), take it home and install Linux on it. IBM has provided an excellent tutorial for installing Linux and building applications at http://www-128.ibm.com/developerworks/power/library/pa-linuxps3-1. If you want to raise some eyebrows at work, then submit a purchase request for a Sony PS3 game console and watch the reactions as your paperwork wends its way through the procurement process.

  14. Supporting Development for the Stirling Radioisotope Generator and Advanced Stirling Technology Development at NASA Glenn Research Center

    NASA Technical Reports Server (NTRS)

    Thieme, Lanny G.; Schreiber, Jeffrey G.

    2005-01-01

    A high-efficiency, 110-We (watts electric) Stirling Radioisotope Generator (SRG110) for possible use on future NASA Space Science missions is being developed by the Department of Energy, Lockheed Martin, Stirling Technology Company (STC), and NASA Glenn Research Center (GRC). Potential mission use includes providing spacecraft onboard electric power for deep space missions and power for unmanned Mars rovers. GRC is conducting an in-house supporting technology project to assist in SRG110 development. One-, three-, and six-month heater head structural benchmark tests have been completed in support of a heater head life assessment. Testing is underway to evaluate the key epoxy bond of the permanent magnets to the linear alternator stator lamination stack. GRC has completed over 10,000 hours of extended duration testing of the Stirling convertors for the SRG110, and a three-year test of two Stirling convertors in a thermal vacuum environment will be starting shortly. GRC is also developing advanced technology for Stirling convertors, aimed at substantially improving the specific power and efficiency of the convertor and the overall generator. Sunpower, Inc. has begun the development of a lightweight Stirling convertor, under a NASA Research Announcement (NRA) award, that has the potential to double the system specific power to about 8 We/kg. GRC has performed random vibration testing of a lower-power version of this convertor to evaluate robustness for surviving launch vibrations. STC has also completed the initial design of a lightweight convertor. Status of the development of a multi-dimensional computational fluid dynamics code and high-temperature materials work on advanced superalloys, refractory metal alloys, and ceramics are also discussed.

  15. Two Micron Laser Technology Advancements at NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    Singh, Upendra N.

    2010-01-01

    An Independent Laser Review Panel set up to examine NASA's space-based lidar missions and the technology readiness of lasers appropriate for space-based lidars indicated a critical need for an integrated research and development strategy to move laser transmitter technology from low technical readiness levels to the higher levels required for space missions. Based on the review, a multiyear Laser Risk Reduction Program (LRRP) was initiated by NASA in 2002 to develop technologies that ensure the successful development of the broad range of lidar missions envisioned by NASA. This presentation will provide an overview of the development of pulsed 2-micron solid-state laser technologies at NASA Langley Research Center for enabling space-based measurement of wind and carbon dioxide.

  16. Air Breathing Propulsion Controls and Diagnostics Research at NASA Glenn Under NASA Aeronautics Research Mission Programs

    NASA Technical Reports Server (NTRS)

    Garg, Sanjay

    2014-01-01

    The Intelligent Control and Autonomy Branch (ICA) at NASA (National Aeronautics and Space Administration) Glenn Research Center (GRC) in Cleveland, Ohio, is leading and participating in various projects in partnership with other organizations within GRC and across NASA, the U.S. aerospace industry, and academia to develop advanced controls and health management technologies that will help meet the goals of the NASA Aeronautics Research Mission Directorate (ARMD) Programs. These efforts are primarily under the various projects under the Fundamental Aeronautics Program (FAP) and the Aviation Safety Program (ASP). The ICA Branch is focused on advancing the state-of-the-art of aero-engine control and diagnostics technologies to help improve aviation safety, increase efficiency, and enable operation with reduced emissions. This paper describes the various ICA research efforts under the NASA Aeronautics Research Mission Programs with a summary of motivation, background, technical approach, and recent accomplishments for each of the research tasks.

  17. Congressional Panel Seeks To Curb Access of Foreign Students to U.S. Supercomputers.

    ERIC Educational Resources Information Center

    Kiernan, Vincent

    1999-01-01

    Fearing security problems, a congressional committee on Chinese espionage recommends that foreign students and other foreign nationals be barred from using supercomputers at national laboratories unless they first obtain export licenses from the federal government. University officials dispute the data on which the report is based and find the…

  18. Supercomputing with TOUGH2 family codes for coupled multi-physics simulations of geologic carbon sequestration

    NASA Astrophysics Data System (ADS)

    Yamamoto, H.; Nakajima, K.; Zhang, K.; Nanai, S.

    2015-12-01

    Powerful numerical codes that are capable of modeling complex coupled processes of physics and chemistry have been developed for predicting the fate of CO2 in reservoirs as well as its potential impacts on groundwater and subsurface environments. However, they are often computationally demanding for solving highly non-linear models at sufficient spatial and temporal resolution. Geological heterogeneity and uncertainties further increase the challenges in modeling work. Two-phase flow simulations in heterogeneous media usually require much longer computational time than in homogeneous media. Uncertainties in reservoir properties may necessitate stochastic simulations with multiple realizations. Recently, massively parallel supercomputers with many thousands of processors have become available in scientific and engineering communities. Such supercomputers may attract attention from geoscientists and reservoir engineers for solving large, non-linear models at higher resolutions within a reasonable time. However, to make them a useful tool, it is essential to tackle several practical obstacles to utilizing large numbers of processors effectively in general-purpose reservoir simulators. We have implemented massively parallel versions of two TOUGH2 family codes (the multi-phase flow simulator TOUGH2 and the chemically reactive transport simulator TOUGHREACT) on two different types (vector- and scalar-type) of supercomputers with a thousand to tens of thousands of processors. After completing implementation and extensive tune-up on the supercomputers, the computational performance was measured for three simulations with multi-million grid models, including a simulation of the dissolution-diffusion-convection process that requires high spatial and temporal resolutions to simulate the growth of small convective fingers of CO2-dissolved water to larger ones in a reservoir scale. The performance measurement confirmed that both simulators exhibit excellent
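The strong-scaling measurements this abstract describes are conventionally summarized as speedup and parallel efficiency relative to a baseline run. A minimal sketch of that calculation (generic, not the authors' code; the processor counts and wall-clock times below are invented for illustration):

```python
def strong_scaling(base_procs, base_time, procs, time):
    """Speedup and parallel efficiency of a fixed-size problem
    relative to a baseline run on base_procs processors."""
    speedup = base_time / time
    efficiency = speedup / (procs / base_procs)
    return speedup, efficiency

# Hypothetical wall-clock times (seconds) for one multi-million-cell model.
runs = [(1024, 5200.0), (4096, 1400.0), (16384, 410.0)]
base_p, base_t = runs[0]
for p, t in runs:
    s, e = strong_scaling(base_p, base_t, p, t)
    print(f"{p:6d} procs: speedup {s:5.1f}x, efficiency {e:6.1%}")
```

Efficiency near 100% across such a range is what "excellent scalability" claims usually mean in practice.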

  19. Senator Barbara Mikulski Visits NASA Goddard

    NASA Image and Video Library

    2017-12-08

    Maryland's Sen. Barbara Mikulski greeted employees at NASA's Goddard Space Flight Center in Greenbelt, Maryland, during a packed town hall meeting Jan. 6. She discussed her history with Goddard and appropriations for NASA in 2016. Read more: www.nasa.gov/feature/goddard/2016/maryland-sen-barbara-mi... Credit: NASA/Goddard/Rebecca Roth NASA image use policy. NASA Goddard Space Flight Center enables NASA’s mission through four scientific endeavors: Earth Science, Heliophysics, Solar System Exploration, and Astrophysics. Goddard plays a leading role in NASA’s accomplishments by contributing compelling scientific knowledge to advance the Agency’s mission. Follow us on Twitter Like us on Facebook Find us on Instagram

  20. Senator Barbara Mikulski Visits NASA Goddard

    NASA Image and Video Library

    2016-01-06

    Maryland's Sen. Barbara Mikulski greeted employees at NASA's Goddard Space Flight Center in Greenbelt, Maryland, during a packed town hall meeting Jan. 6. She discussed her history with Goddard and appropriations for NASA in 2016. Read more: www.nasa.gov/feature/goddard/2016/maryland-sen-barbara-mi... Credit: NASA/Goddard/Rebecca Roth

  1. Senator Barbara Mikulski Visits NASA Goddard

    NASA Image and Video Library

    2017-12-08

    Maryland's Sen. Barbara Mikulski greeted employees at NASA's Goddard Space Flight Center in Greenbelt, Maryland, during a packed town hall meeting Jan. 6. She discussed her history with Goddard and appropriations for NASA in 2016. Read more: www.nasa.gov/feature/goddard/2016/maryland-sen-barbara-mi... Credit: NASA/Goddard/Bill Hrybyk

  2. Senator Barbara Mikulski Visits NASA Goddard

    NASA Image and Video Library

    2016-01-06

    Maryland's Sen. Barbara Mikulski greeted employees at NASA's Goddard Space Flight Center in Greenbelt, Maryland, during a packed town hall meeting Jan. 6. She discussed her history with Goddard and appropriations for NASA in 2016. Read more: www.nasa.gov/feature/goddard/2016/maryland-sen-barbara-mi... Credit: NASA/Goddard/Bill Hrybyk

  3. Advancement of a 30 kW Solar Electric Propulsion System Capability for NASA Human and Robotic Exploration Missions

    NASA Technical Reports Server (NTRS)

    Smith, Bryan K.; Nazario, Margaret L.; Manzella, David H.

    2012-01-01

    Solar Electric Propulsion has evolved into a demonstrated operational capability performing station keeping for geosynchronous satellites, enabling challenging deep-space science missions, and assisting in the transfer of satellites from an elliptical Geostationary Transfer Orbit (GTO) to a Geostationary Earth Orbit (GEO). Advancing higher power SEP systems will enable numerous future applications for human, robotic, and commercial missions. These missions are enabled by either the increased performance of the SEP system or by the cost reductions when compared to conventional chemical propulsion systems. Higher power SEP systems that provide very high payload for robotic missions also trade favorably for the advancement of human exploration beyond low Earth orbit. Demonstrated reliable systems are required for human space flight, and due to their successful present-day widespread use and inherent high reliability, SEP systems have progressively become a viable entrant into these future human exploration architectures. NASA studies have identified a 30 kW-class SEP capability as the next appropriate evolutionary step, applicable to a wide range of both human and robotic missions. This paper describes the planning options, mission applications, and technology investments for representative 30 kW-class SEP mission concepts under consideration by NASA.

  4. Lynda Barry Visits NASA Goddard

    NASA Image and Video Library

    2017-12-08

    Cartoonist and professor of creativity Lynda Barry presented the benefits of creativity in everyday life as part of Goddard's Office of Communications Story Lab seminar series. Read more: www.nasa.gov/feature/goddard/2016/cartoonist-discusses-cr... Credit: NASA/Goddard/Rebecca Roth

  5. NASA CONNECT: Special World Space Congress. [Videotape].

    ERIC Educational Resources Information Center

    National Aeronautics and Space Administration, Hampton, VA. Langley Research Center.

    NASA CONNECT is an annual series of free integrated mathematics, science, and technology instructional distance learning programs for students in grades 5-8. This video presents the World Space Congress 2002, the meeting of the decade for space professionals. Topics discussed range from the discovery of distant planets to medical advancements,…

  6. Swedish Delegation Visits NASA Goddard

    NASA Image and Video Library

    2017-12-08

    Swedish Delegation Visits GSFC – May 3, 2017 - Members of the Royal Swedish Academy of Engineering Sciences listen to Dr. Compton Tucker’s presentation on NASA’s earth science research activities in the Piers Sellers Visualization Theatre in Building 28 at NASA Goddard. Photo Credit: NASA/Goddard/Rebecca Roth Read more: go.nasa.gov/2p1rP0h

  7. Swedish Delegation Visits NASA Goddard

    NASA Image and Video Library

    2017-12-08

    Swedish Delegation Visits GSFC – May 3, 2017 - Members of the Royal Swedish Academy of Engineering Sciences listen to Dr. Compton Tucker’s presentation on NASA’s earth science research activities in the Piers Sellers Visualization Theatre in Building 28 at NASA Goddard. Credit: NASA/Goddard/Bill Hrybyk Read more: go.nasa.gov/2p1rP0h

  8. Swedish Delegation Visits NASA Goddard

    NASA Image and Video Library

    2017-12-08

    Swedish Delegation Visits GSFC – May 3, 2017 - Members of the Royal Swedish Academy of Engineering Sciences listen to Dr. John Mather’s presentation on NASA’s astrophysics research activities in the Piers Sellers Visualization Theatre in Building 28 at NASA Goddard. Credit: NASA/Goddard/Bill Hrybyk Read more: go.nasa.gov/2p1rP0h

  9. Advanced EVA Capabilities: A Study for NASA's Revolutionary Aerospace Systems Concept Program

    NASA Technical Reports Server (NTRS)

    Hoffman, Stephen J.

    2004-01-01

    This report documents the results of a study carried out as part of NASA's Revolutionary Aerospace Systems Concepts Program examining the future technology needs of extravehicular activities (EVAs). The intent of this study is to produce a comprehensive report that identifies various design concepts for human-related advanced EVA systems necessary to achieve the goals of supporting future space exploration and development customers in free space and on planetary surfaces for space missions in the post-2020 timeframe. The design concepts studied and evaluated are not limited to anthropomorphic space suits, but include a wide range of human-enhancing EVA technologies as well as consideration of coordination and integration with advanced robotics. The goal of the study effort is to establish a baseline technology "road map" that identifies and describes an investment and technical development strategy, including recommendations that will lead to future enhanced synergistic human/robot EVA operations. The eventual use of this study effort is to focus evolving performance capabilities of various EVA system elements toward the goal of providing high performance human operational capabilities for a multitude of future space applications and destinations. The data collected for this study indicate a rich and diverse history of systems that have been developed to perform a variety of EVA tasks, indicating what is possible. However, the data gathered for this study also indicate a paucity of new concepts and technologies for advanced EVA missions - at least any that researchers are willing to discuss in this type of forum.

  10. Mission oriented R and D and the advancement of technology: The impact of NASA contributions, volume 1

    NASA Technical Reports Server (NTRS)

    Robbins, M. D.; Kelley, J. A.; Elliott, L.

    1972-01-01

    The contributions of NASA to the advancement of major developments in several selected fields of technology are identified. Subjects discussed are: (1) developing new knowledge, (2) developing new technology, (3) demonstrating the application of new technology for the first time, (4) augmenting existing technology, (5) applying existing technology in a new context, (6) stimulating industry to acquire or develop new technology, (7) identifying problem areas requiring further research, and (8) creating new markets.

  11. NASA Satellite View of Antarctica

    NASA Image and Video Library

    2017-12-08

    NASA image acquired November 2, 2011 The Moderate Resolution Imaging Spectroradiometer (MODIS) instrument on NASA's Terra satellite captured this image of the Knox, Budd Law Dome, and Sabrina Coasts, Antarctica on November 2, 2011 at 01:40 UTC (Nov. 1 at 9:40 p.m. EDT). Operation Ice Bridge is exploring Antarctic ice, and more information can be found at www.nasa.gov/icebridge. Image Credit: NASA Goddard MODIS Rapid Response Team

  12. NASA Applied Sciences Program

    NASA Technical Reports Server (NTRS)

    Estes, Sue M.; Haynes, J. A.

    2009-01-01

    NASA's strategic goals: a) Develop a balanced overall program of science, exploration, and aeronautics consistent with the redirection of the human spaceflight program to focus on exploration. b) Study Earth from space to advance scientific understanding and meet societal needs. NASA's partnership efforts in global modeling and data assimilation over the next decade will shorten the distance from observations to answers for important, leading-edge science questions. NASA's Applied Sciences program will continue the Agency's efforts in benchmarking the assimilation of NASA research results into policy and management decision-support tools that are vital for the Nation's environment, economy, safety, and security. NASA is also working with NOAA and inter-agency forums to transition mature research capabilities to operational systems, primarily the polar and geostationary operational environmental satellites, and to utilize those assets fully for research purposes.

  13. High performance real-time flight simulation at NASA Langley

    NASA Technical Reports Server (NTRS)

    Cleveland, Jeff I., II

    1994-01-01

    In order to meet the stringent time-critical requirements for real-time man-in-the-loop flight simulation, computer processing operations must be deterministic and be completed in as short a time as possible. This includes simulation mathematical model computations and data input/output to the simulators. In 1986, in response to increased demands for flight simulation performance, personnel at NASA's Langley Research Center (LaRC), working with the contractor, developed extensions to a standard input/output system to provide for high-bandwidth, low-latency data acquisition and distribution. The Computer Automated Measurement and Control technology (IEEE standard 595) was extended to meet the performance requirements for real-time simulation. This technology extension increased the effective bandwidth by a factor of ten and increased the performance of modules necessary for simulator communications. This technology is being used by more than 80 leading technological developers in the United States, Canada, and Europe. Included among the commercial applications of this technology are nuclear process control, power grid analysis, process monitoring, real-time simulation, and radar data acquisition. Personnel at LaRC have completed the development of the use of supercomputers for simulation mathematical model computations to support real-time flight simulation. This includes the development of a real-time operating system and the development of specialized software and hardware for the CAMAC simulator network. This work, coupled with the use of an open systems software architecture, has advanced the state of the art in real-time flight simulation. The data acquisition technology innovation and experience with recent developments in this technology are described.

  14. Computational chemistry

    NASA Technical Reports Server (NTRS)

    Arnold, J. O.

    1987-01-01

    With the advent of supercomputers and modern computational chemistry algorithms and codes, a powerful tool was created to help fill NASA's continuing need for information on the properties of matter in hostile or unusual environments. Computational resources provided under the Numerical Aerodynamic Simulation (NAS) Program were a cornerstone for recent advancements in this field. Properties of gases, materials, and their interactions can be determined from solutions of the governing equations. In the case of gases, for example, radiative transition probabilities per particle, bond-dissociation energies, and rates of simple chemical reactions can be determined computationally as reliably as from experiment. The data are proving to be quite valuable in providing inputs to real-gas flow simulation codes used to compute aerothermodynamic loads on NASA's aeroassist orbital transfer vehicles and a host of problems related to the National Aerospace Plane Program. Although more approximate, similar solutions can be obtained for ensembles of atoms simulating small particles of materials with and without the presence of gases. Computational chemistry has applications in studying catalysis and the properties of polymers, all of interest to various NASA missions, including those previously mentioned. In addition to discussing these applications of computational chemistry within NASA, the governing equations and the need for supercomputers for their solution are outlined.

  15. Solving global shallow water equations on heterogeneous supercomputers

    PubMed Central

    Fu, Haohuan; Gan, Lin; Yang, Chao; Xue, Wei; Wang, Lanning; Wang, Xinliang; Huang, Xiaomeng; Yang, Guangwen

    2017-01-01

    The scientific demand for more accurate modeling of the climate system calls for more computing power to support higher resolutions, inclusion of more component models, more complicated physics schemes, and larger ensembles. As the recent improvements in computing power mostly come from the increasing number of nodes in a system and the integration of heterogeneous accelerators, how to scale the computing problems onto more nodes and various kinds of accelerators has become a challenge for model development. This paper describes our efforts on developing a highly scalable framework for performing global atmospheric modeling on heterogeneous supercomputers equipped with various accelerators, such as GPU (Graphics Processing Unit), MIC (Many Integrated Core), and FPGA (Field Programmable Gate Array) cards. We propose a generalized partition scheme of the problem domain, so as to keep a balanced utilization of both CPU resources and accelerator resources. With optimizations on both computing and memory access patterns, we manage to achieve around 8 to 20 times speedup when comparing one hybrid GPU or MIC node with one CPU node with 12 cores. Using customized FPGA-based data-flow engines, we see the potential to gain another 5 to 8 times improvement in performance. On heterogeneous supercomputers, such as Tianhe-1A and Tianhe-2, our framework is capable of achieving nearly ideal linear scaling efficiency, and sustained double-precision performances of 581 Tflops on Tianhe-1A (using 3750 nodes) and 3.74 Pflops on Tianhe-2 (using 8644 nodes). Our study also provides an evaluation of the programming paradigms of various accelerator architectures (GPU, MIC, FPGA) for performing global atmospheric simulation, to form a picture of both the potential performance benefits and the programming efforts involved. PMID:28282428
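The balanced partition idea in the abstract above — giving CPU cores and accelerators shares of the domain proportional to their throughput, so neither side idles at the end of a time step — can be sketched as follows (an illustrative scheme with invented rates, not the paper's actual partitioner):

```python
def balanced_split(n_cells, cpu_rate, acc_rate):
    """Split n_cells of the problem domain between a CPU and an
    accelerator in proportion to their throughput (cells/second),
    so both sides finish a time step at about the same moment."""
    acc_cells = round(n_cells * acc_rate / (cpu_rate + acc_rate))
    return n_cells - acc_cells, acc_cells

# A hybrid node whose accelerator sustains ~10x the CPU throughput,
# roughly matching the 8 to 20 times node-level speedups quoted above.
cpu_part, acc_part = balanced_split(22_000, cpu_rate=1.0, acc_rate=10.0)
```

In a real framework the rates would be measured at runtime and the split re-tuned per node type, since GPU, MIC, and FPGA throughputs differ widely.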

  16. NASA Engineers Conduct Low Light Test on New Technology for NASA Webb Telescope

    NASA Image and Video Library

    2014-09-02

    NASA engineers inspect a new piece of technology developed for the James Webb Space Telescope, the micro shutter array, with a low light test at NASA's Goddard Space Flight Center in Greenbelt, Maryland. Developed at Goddard to allow Webb's Near Infrared Spectrograph to obtain spectra of more than 100 objects in the universe simultaneously, the micro shutter array uses thousands of tiny shutters to capture spectra from selected objects of interest in space and block out light from all other sources. Credit: NASA/Goddard/Chris Gunn

  17. Interstellar Propulsion Research Within NASA

    NASA Technical Reports Server (NTRS)

    Johnson, Les; Cook, Stephen (Technical Monitor)

    2001-01-01

    NASA is actively conducting advanced propulsion research and technology development in various in-space transportation technologies with potential application to interstellar missions and precursors. Within the last few years, interest in the scientific community in interstellar missions, as well as outer heliospheric missions that could function as interstellar precursors, has increased. A mission definition team was chartered by NASA to define such a precursor, the Interstellar Probe, which resulted in a prioritization of relatively near-term transportation technologies to support its potential implementation. In addition, the goal of finding and ultimately imaging extrasolar planets has raised the issue of our complete inability to mount an expedition to such a planet, should one be found. Even contemplating such a mission with today's technology is a stretch of the imagination. However, there are several propulsion concepts, based on known physics, that have promise to enable interstellar exploration in the future. NASA is making small, incremental investments in some key advanced propulsion technologies in an effort to advance their state of the art in support of potential future mission needs. These technologies, and their relative maturity, are described.

  18. Opportunities for leveraging OS virtualization in high-end supercomputing.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bridges, Patrick G.; Pedretti, Kevin Thomas Tauke

    2010-11-01

    This paper examines potential motivations for incorporating virtualization support in the system software stacks of high-end capability supercomputers. We advocate that this will increase the flexibility of these platforms significantly and enable new capabilities that are not possible with current fixed software stacks. Our results indicate that compute, virtual memory, and I/O virtualization overheads are low and can be further mitigated by utilizing well-known techniques such as large paging and VMM bypass. Furthermore, since the addition of virtualization support does not affect the performance of applications using the traditional native environment, there is essentially no disadvantage to its addition.

  19. Performance Evaluation of Supercomputers using HPCC and IMB Benchmarks

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Ciotti, Robert; Gunney, Brian T. N.; Spelce, Thomas E.; Koniges, Alice; Dossa, Don; Adamidis, Panagiotis; Rabenseifner, Rolf; Tiyyagura, Sunil R.; Mueller, Matthias; et al.

    2006-01-01

    The HPC Challenge (HPCC) benchmark suite and the Intel MPI Benchmark (IMB) are used to compare and evaluate the combined performance of processor, memory subsystem, and interconnect fabric of five leading supercomputers - SGI Altix BX2, Cray X1, Cray Opteron Cluster, Dell Xeon cluster, and NEC SX-8. These five systems use five different networks (SGI NUMALINK4, Cray network, Myrinet, InfiniBand, and NEC IXS). The complete set of HPCC benchmarks is run on each of these systems. Additionally, we present Intel MPI Benchmark (IMB) results to study the performance of 11 MPI communication functions on these systems.

  20. Implementation of the NAS Parallel Benchmarks in Java

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael A.; Schultz, Matthew; Jin, Haoqiang; Yan, Jerry; Biegel, Bryan (Technical Monitor)

    2002-01-01

    Several features make Java an attractive choice for High Performance Computing (HPC). In order to gauge the applicability of Java to Computational Fluid Dynamics (CFD), we have implemented the NAS (NASA Advanced Supercomputing) Parallel Benchmarks in Java. The performance and scalability of the benchmarks point out the areas where improvement in Java compiler technology and in Java thread implementation would position Java closer to Fortran in the competition for CFD applications.

  1. Performance and Scalability of the NAS Parallel Benchmarks in Java

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael A.; Schultz, Matthew; Jin, Haoqiang; Yan, Jerry; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    Several features make Java an attractive choice for scientific applications. In order to gauge the applicability of Java to Computational Fluid Dynamics (CFD), we have implemented the NAS (NASA Advanced Supercomputing) Parallel Benchmarks in Java. The performance and scalability of the benchmarks point out the areas where improvement in Java compiler technology and in Java thread implementation would position Java closer to Fortran in the competition for scientific applications.

  2. Improvements in the Scalability of the NASA Goddard Multiscale Modeling Framework for Hurricane Climate Studies

    NASA Technical Reports Server (NTRS)

    Shen, Bo-Wen; Tao, Wei-Kuo; Chern, Jiun-Dar

    2007-01-01

    Improving our understanding of hurricane inter-annual variability and the impact of climate change (e.g., doubling CO2 and/or global warming) on hurricanes brings both scientific and computational challenges to researchers. As hurricane dynamics involves multiscale interactions among synoptic-scale flows, mesoscale vortices, and small-scale cloud motions, an ideal numerical model suitable for hurricane studies should demonstrate its capabilities in simulating these interactions. The newly-developed multiscale modeling framework (MMF, Tao et al., 2007) and the substantial computing power of the NASA Columbia supercomputer show promise for pursuing the related studies, as the MMF inherits the advantages of two NASA state-of-the-art modeling components: the GEOS4/fvGCM and 2D GCEs. This article focuses on the computational issues and proposes a revised methodology to improve the MMF's performance and scalability. It is shown that this prototype implementation enables 12-fold performance improvements with 364 CPUs, thereby making it more feasible to study hurricane climate.
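A 12-fold improvement on 364 CPUs implies a substantial serial or poorly-scaling fraction of the workload; Amdahl's law gives a quick way to estimate it (a generic back-of-the-envelope reading of the quoted numbers, not an analysis from the article):

```python
def amdahl_speedup(serial_fraction, n_procs):
    """Maximum speedup on n_procs if serial_fraction of the work
    cannot be parallelized (Amdahl's law)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

def implied_serial_fraction(speedup, n_procs):
    """Serial fraction that would cap the speedup at the observed value
    (Amdahl's law solved for the serial fraction)."""
    return (1.0 / speedup - 1.0 / n_procs) / (1.0 - 1.0 / n_procs)

# A 12-fold improvement on 364 CPUs corresponds, under Amdahl's model,
# to a serial (or otherwise non-scaling) fraction of roughly 8%.
f = implied_serial_fraction(12.0, 364)
```

Reducing that non-scaling fraction, rather than adding processors, is what methodology revisions like the one described above target.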

  3. Swedish Delegation Visits NASA Goddard

    NASA Image and Video Library

    2017-12-08

    Swedish Delegation Visits GSFC – May 3, 2017 – Goddard Space Flight Center senior management and members of the Royal Swedish Academy walk towards Building 29 as part of the Swedish delegation’s tour of the center. Credit: NASA/Goddard/Bill Hrybyk Read more: go.nasa.gov/2p1rP0h

  4. NASA Goddard All Hands Meeting

    NASA Image and Video Library

    2017-12-08

    Monday, September 30, 2013 - NASA Goddard civil servant and contractor employees were invited to an all hands meeting with Center Director Chris Scolese and members of the senior management team to learn the latest information about a possible partial government shutdown that could happen as early as midnight. Credit: NASA/Goddard/Bill Hrybyk

  5. Aerospace Communications Technologies in Support of NASA Mission

    NASA Technical Reports Server (NTRS)

    Miranda, Felix A.

    2016-01-01

    NASA is working to expand communications capabilities to enable and enhance robotic and human exploration of space and to advance aeronautical communications here on Earth. This presentation discusses some of the research and technology development work being performed at the NASA Glenn Research Center in aerospace communications in support of NASA's mission. It presents an overview of the work conducted in-house and in collaboration with academia, industry, and other government agencies (OGA) to advance radio frequency (RF) and optical communications technologies in the areas of antennas, ultra-sensitive receivers, and power amplifiers, among others. In addition, the role of these and other related RF and optical communications technologies in enabling NASA's next-generation aerospace communications architecture will also be discussed.

  6. Supercomputer modeling of hydrogen combustion in rocket engines

    NASA Astrophysics Data System (ADS)

    Betelin, V. B.; Nikitin, V. F.; Altukhov, D. I.; Dushin, V. R.; Koo, Jaye

    2013-08-01

    Hydrogen, being an ecologically clean fuel, is now very attractive to rocket engine designers. However, peculiarities of hydrogen combustion kinetics, such as the presence of zones of inverse dependence of reaction rate on pressure, prevent hydrogen engines from being used in all stages without support from other types of engines, which often cancels the ecological gains of using hydrogen. Computer-aided design of new, effective, and clean hydrogen engines needs mathematical tools for supercomputer modeling of hydrogen-oxygen component mixing and combustion in rocket engines. The paper presents the results of developing, verifying, and validating a mathematical model that makes it possible to simulate unsteady processes of ignition and combustion in rocket engines.

  7. Advanced Computing for Manufacturing.

    ERIC Educational Resources Information Center

    Erisman, Albert M.; Neves, Kenneth W.

    1987-01-01

    Discusses ways that supercomputers are being used in the manufacturing industry, including the design and production of airplanes and automobiles. Describes problems that need to be solved in the next few years for supercomputers to assume a major role in industry. (TW)

  8. NASA specification for manufacturing and performance requirements of NASA standard aerospace nickel-cadmium cells

    NASA Technical Reports Server (NTRS)

    1988-01-01

    On November 25, 1985, the NASA Chief Engineer established a NASA-wide policy to maintain and to require the use of the NASA standard for aerospace nickel-cadmium cells and batteries. The Associate Administrator for Safety, Reliability, Maintainability, and Quality Assurance stated on December 29, 1986, the intent to retain the NASA standard cell usage policy established by the Office of the Chief Engineer. The current NASA policy is also to incorporate technological advances as they are tested and proven for spaceflight applications. This policy will be implemented by modifying the existing standard cells or by developing new NASA standards and their specifications in accordance with NASA's Aerospace Battery Systems Program Plan. This NASA Specification for Manufacturing and Performance Requirements of NASA Standard Aerospace Nickel-Cadmium Cells is prepared to provide requirements for the NASA standard nickel-cadmium cell. It is an interim specification pending resolution of the separator material availability. This specification has evolved from over 15 years of nickel-cadmium cell experience by NASA. Consequently, considerable experience has been collected and cell performance has been well characterized from many years of ground testing and from in-flight operations in both geosynchronous (GEO) and low earth orbit (LEO) applications. NASA has developed and successfully used two standard flight qualified cell designs.

  9. The PMS project: Poor man's supercomputer

    NASA Astrophysics Data System (ADS)

    Csikor, F.; Fodor, Z.; Hegedüs, P.; Horváth, V. K.; Katz, S. D.; Piróth, A.

    2001-02-01

    We briefly describe the Poor Man's Supercomputer (PMS) project carried out at Eötvös University, Budapest. The goal was to construct a cost-effective, scalable, fast parallel computer to perform numerical calculations of physical problems that can be implemented on a lattice with nearest-neighbour interactions. To this end we developed the PMS architecture using PC components and designed special, low-cost communication hardware and the driver software for the Linux OS. Our first implementation of PMS includes 32 nodes (PMS1). The performance of PMS1 was tested by Lattice Gauge Theory simulations. Using pure SU(3) gauge theory or the bosonic part of the minimal supersymmetric extension of the standard model (MSSM) on PMS1, we obtained price-to-sustained-performance ratios of $3/Mflops and $0.60/Mflops for double and single precision operations, respectively. The design of the special hardware and the communication driver are freely available upon request for non-profit organizations.
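    The price-to-sustained-performance metric quoted above is simply total system cost divided by sustained throughput. A minimal sketch of that arithmetic (the cost and throughput numbers below are illustrative placeholders, not figures from the PMS paper):

    ```python
    # Price-to-sustained-performance ratio, as used to characterize PMS1.
    # The inputs below are hypothetical examples; only the formula matches
    # the metric described in the abstract.

    def price_per_mflops(total_cost_usd: float, sustained_mflops: float) -> float:
        """Dollars spent per sustained Mflops of throughput."""
        return total_cost_usd / sustained_mflops

    # A hypothetical $48,000 cluster sustaining 80,000 Mflops single precision:
    print(f"${price_per_mflops(48_000, 80_000):.2f}/Mflops")  # $0.60/Mflops
    ```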

  10. An Interface for Biomedical Big Data Processing on the Tianhe-2 Supercomputer.

    PubMed

    Yang, Xi; Wu, Chengkun; Lu, Kai; Fang, Lin; Zhang, Yong; Li, Shengkang; Guo, Guixin; Du, YunFei

    2017-12-01

    Big data, cloud computing, and high-performance computing (HPC) are on the verge of convergence. Cloud computing is already playing an active part in big data processing with the help of big data frameworks like Hadoop and Spark. The recent upsurge of high-performance computing in China provides extra possibilities and capacity to address the challenges associated with big data. In this paper, we propose Orion, a big data interface for the Tianhe-2 supercomputer, to enable big data applications to run on Tianhe-2 via a single command or a shell script. Orion supports multiple users, and each user can launch multiple tasks. It minimizes the effort needed to initiate big data applications on the Tianhe-2 supercomputer via automated configuration. Orion follows the "allocate-when-needed" paradigm, and it avoids the idle occupation of computational resources. We tested the utility and performance of Orion using a big genomic dataset and achieved a satisfactory performance on Tianhe-2 with very few modifications to existing applications that were implemented in Hadoop/Spark. In summary, Orion provides a practical and economical interface for big data processing on Tianhe-2.
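    The "allocate-when-needed" paradigm the abstract describes can be sketched as an allocation scoped to a single job's lifetime, with nodes released the moment the job finishes. All names here are hypothetical illustrations of the pattern; Orion's actual interface is not described in the abstract:

    ```python
    from contextlib import contextmanager

    # Hypothetical sketch of "allocate-when-needed": nodes are acquired only
    # for the lifetime of one job and returned immediately afterwards, so no
    # computational resources sit idly occupied between jobs.

    free_nodes = set(range(8))          # toy pool standing in for scheduler state

    @contextmanager
    def allocate(n: int):
        nodes = {free_nodes.pop() for _ in range(n)}   # acquire on demand
        try:
            yield nodes
        finally:
            free_nodes.update(nodes)                   # release when the job ends

    with allocate(4) as nodes:
        assert len(free_nodes) == 4     # 4 of 8 nodes busy while the job runs
    assert len(free_nodes) == 8         # all nodes back in the pool afterwards
    ```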

  11. NASA Participates in Scout Jamboree

    NASA Image and Video Library

    2017-07-25

    Greg “Box” Johnson, executive director of Center for the Advancement of Science in Space (CASIS) and former astronaut, foreground, and NASA Acting Chief Technologist Douglas Terrier watch as attendees of the Boy Scouts of America National Jamboree launch a weather balloon, Tuesday, July 25, 2017 at the Summit Bechtel Reserve in Glen Jean, West Virginia. Photo Credit: (NASA/Bill Ingalls)

  12. An Experimental Evaluation of Advanced Rotorcraft Airfoils in the NASA Ames Eleven-foot Transonic Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Flemming, Robert J.

    1984-01-01

    Five full scale rotorcraft airfoils were tested in the NASA Ames Eleven-Foot Transonic Wind Tunnel for full scale Reynolds numbers at Mach numbers from 0.3 to 1.07. The models, which spanned the tunnel from floor to ceiling, included two modern baseline airfoils, the SC1095 and SC1094 R8, which have been previously tested in other facilities. Three advanced transonic airfoils, designated the SSC-A09, SSC-A07, and SSC-B08, were tested to confirm predicted performance and provide confirmation of advanced airfoil design methods. The test showed that the eleven-foot tunnel is suited to two-dimensional airfoil testing. Maximum lift coefficients, drag coefficients, pitching moments, and pressure coefficient distributions are presented. The airfoil analysis codes agreed well with the data, with the Grumman GRUMFOIL code giving the best overall performance correlation.

  13. NASA's Global Hawk

    NASA Image and Video Library

    2014-09-23

    View from a Chase Plane; HS3 Science Flight 8 Wraps Up The chase plane accompanying NASA's Global Hawk No. 872 captured this picture on Sept. 19 after the Global Hawk completed science flight #8, where it gathered data from a weakening Tropical Storm Edouard over the North Atlantic Ocean. Credit: NASA -- The Hurricane and Severe Storm Sentinel (HS3) is a five-year mission specifically targeted to investigate the processes that underlie hurricane formation and intensity change in the Atlantic Ocean basin. HS3 is motivated by hypotheses related to the relative roles of the large-scale environment and storm-scale internal processes. Read more: espo.nasa.gov/missions/hs3/mission-gallery

  14. Progress and supercomputing in computational fluid dynamics; Proceedings of U.S.-Israel Workshop, Jerusalem, Israel, December 1984

    NASA Technical Reports Server (NTRS)

    Murman, E. M. (Editor); Abarbanel, S. S. (Editor)

    1985-01-01

    Current developments and future trends in the application of supercomputers to computational fluid dynamics are discussed in reviews and reports. Topics examined include algorithm development for personal-size supercomputers, a multiblock three-dimensional Euler code for out-of-core and multiprocessor calculations, simulation of compressible inviscid and viscous flow, high-resolution solutions of the Euler equations for vortex flows, algorithms for the Navier-Stokes equations, and viscous-flow simulation by FEM and related techniques. Consideration is given to marching iterative methods for the parabolized and thin-layer Navier-Stokes equations, multigrid solutions to quasi-elliptic schemes, secondary instability of free shear flows, simulation of turbulent flow, and problems connected with weather prediction.

  15. Progress in Materials and Component Development for Advanced Lithium-ion Cells for NASA's Exploration Missions

    NASA Technical Reports Server (NTRS)

    Reid, Concha M.

    2011-01-01

    Vehicles and stand-alone power systems that enable the next generation of human missions to the Moon will require energy storage systems that are safer, lighter, and more compact than current state-of-the-art (SOA) aerospace quality lithium-ion (Li-ion) batteries. NASA is developing advanced Li-ion cells to enable or enhance the power systems for the Altair Lunar Lander, Extravehicular Activities spacesuit, and rovers and portable utility pallets for Lunar Surface Systems. Advanced, high-performing materials are required to provide component-level performance that can offer the required gains at the integrated cell level. Although there is still a significant amount of work yet to be done, the present state of development activities has resulted in the synthesis of promising materials that approach the ultimate performance goals. This report on interim progress of the development efforts will elaborate on the challenges of the development activities, proposed strategies to overcome technical issues, and present performance of materials and cell components.

  16. Advanced Training Technologies and Learning Environments

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K. (Compiler); Malone, John B. (Compiler)

    1999-01-01

    This document contains the proceedings of the Workshop on Advanced Training Technologies and Learning Environments held at NASA Langley Research Center, Hampton, Virginia, March 9-10, 1999. The workshop was jointly sponsored by the University of Virginia's Center for Advanced Computational Technology and NASA. Workshop attendees were from NASA, other government agencies, industry, and universities. The objective of the workshop was to assess the status and effectiveness of different advanced training technologies and learning environments.

  17. Supercomputations and big-data analysis in strong-field ultrafast optical physics: filamentation of high-peak-power ultrashort laser pulses

    NASA Astrophysics Data System (ADS)

    Voronin, A. A.; Panchenko, V. Ya; Zheltikov, A. M.

    2016-06-01

    High-intensity ultrashort laser pulses propagating in gas media or in condensed matter undergo complex nonlinear spatiotemporal evolution where temporal transformations of optical field waveforms are strongly coupled to an intricate beam dynamics and ultrafast field-induced ionization processes. At the level of laser peak powers orders of magnitude above the critical power of self-focusing, the beam exhibits modulation instabilities, producing random field hot spots and breaking up into multiple noise-seeded filaments. This problem is described by a (3  +  1)-dimensional nonlinear field evolution equation, which needs to be solved jointly with the equation for ultrafast ionization of a medium. Analysis of this problem, which is equivalent to solving a billion-dimensional evolution problem, is only possible by means of supercomputer simulations augmented with coordinated big-data processing of large volumes of information acquired through theory-guiding experiments and supercomputations. Here, we review the main challenges of supercomputations and big-data processing encountered in strong-field ultrafast optical physics and discuss strategies to confront these challenges.

  18. Sweet potato in a vegetarian menu plan for NASA's Advanced Life Support Program.

    PubMed

    Wilson, C D; Pace, R D; Bromfield, E; Jones, G; Lu, J Y

    1998-01-01

    Sweet potato has been selected as one of the crops for NASA's Advanced Life Support Program. Sweet potato primarily provides carbohydrate (an important energy source), beta-carotene, and ascorbic acid to a space diet. This study focuses on menus incorporating two sets of sweet potato recipes developed at Tuskegee University. One set includes recipes for 10 vegetarian products containing from 6% to 20% sweet potato on a dry weight basis (pancakes, waffles, tortillas, bread, pie, pound cake, pasta, vegetable patties, doughnuts, and pretzels) that have been formulated, subjected to sensory evaluation, and determined to be acceptable. These recipes and the other set of recipes, not tested organoleptically, were substituted in a 10-day vegetarian menu plan developed by the American Institute of Biological Sciences (AIBS) Kennedy Space Center Biomass Processing Technical Panel. At least one recipe containing sweet potato was included in each meal. An analysis of the nutritional quality of this menu compared to the original AIBS menu found improved beta-carotene content (p<0.05). All other nutrients, except vitamin B6, and calories were equal to and in some instances greater than those listed for NASA's Controlled Ecological Life Support Systems RDA. These results suggest that sweet potato products can be used successfully in menus developed for space with the added benefit of increased nutrient value and dietary variety.

  19. Advanced Aerospace Materials by Design

    NASA Technical Reports Server (NTRS)

    Srivastava, Deepak; Djomehri, Jahed; Wei, Chen-Yu

    2004-01-01

    Advances in the emerging field of nanophase thermal and structural composite materials; materials with embedded sensors and actuators for morphing structures; lightweight composite materials for energy and power storage; and large-surface-area materials for in-situ resource generation and waste recycling are expected to revolutionize the capabilities of virtually every system comprising future robotic and human Moon and Mars exploration missions. A high-performance multiscale simulation platform, including the computational capabilities and resources of Columbia, the new supercomputer, is being developed to discover, validate, and prototype the next generation of such advanced materials. This exhibit will describe the porting and scaling of multiscale physics-based core computer simulation codes for discovering and designing carbon nanotube-polymer composite materials for lightweight load-bearing structural and thermal protection applications.

  20. Advancing NASA's Satellite Control Capabilities: More than Just Better Technology

    NASA Technical Reports Server (NTRS)

    Smith, Danford

    2008-01-01

    This viewgraph presentation reviews the work of the Goddard Mission Services Evolution Center (GMSEC) in the development of NASA's satellite control capabilities. The purpose of the presentation is to provide a quick overview of NASA's Goddard Space Flight Center and our approach to coordinating the ground system resources and development activities across many different missions. NASA Goddard's work in developing and managing current and future space exploration missions is highlighted. The GMSEC was established to coordinate ground and flight data systems development and services, to create a new standard ground system for many missions, and to reflect the reality that business reengineering and mindset are just as important as better technology.

  1. The Age of the Supercomputer Gives Way to the Age of the Super Infrastructure.

    ERIC Educational Resources Information Center

    Young, Jeffrey R.

    1997-01-01

    In October 1997, the National Science Foundation will discontinue financial support for two university-based supercomputer facilities to concentrate resources on partnerships led by facilities at the University of California, San Diego and the University of Illinois, Urbana-Champaign. The reconfigured program will develop more user-friendly and…

  2. NASA's Ship-Aircraft Bio-Optical Research (SABOR)

    NASA Image and Video Library

    2017-12-08

    Storm in the Sargasso Sea Scientists aboard the R/V Endeavor in the Sargasso Sea put their research on hold on July 28, 2014, as a storm system brought high waves crashing onto the deck. NASA's Ship-Aircraft Bio-Optical Research (SABOR) experiment is a coordinated ship and aircraft observation campaign off the Atlantic coast of the United States, an effort to advance space-based capabilities for monitoring microscopic plants that form the base of the marine food chain. Read more: 1.usa.gov/WWRVzj Credit: NASA/SABOR/Chris Armanetti, University of Rhode Island

  3. NASA's Ship-Aircraft Bio-Optical Research (SABOR)

    NASA Image and Video Library

    2014-08-25

    What's in the Water? Robert Foster, of the City College of New York, filters seawater on July 23, 2014, for chlorophyll analysis in a lab on the R/V Endeavor. NASA's Ship-Aircraft Bio-Optical Research (SABOR) experiment is a coordinated ship and aircraft observation campaign off the Atlantic coast of the United States, an effort to advance space-based capabilities for monitoring microscopic plants that form the base of the marine food chain. Read more: 1.usa.gov/WWRVzj Credit: NASA/SABOR/Wayne Slade, Sequoia Scientific

  4. Advancing Innovation Through Collaboration: Implementation of the NASA Space Life Sciences Strategy

    NASA Technical Reports Server (NTRS)

    Davis, Jeffrey R.; Richard, Elizabeth E.

    2010-01-01

    On October 18, 2010, the NASA Human Health and Performance Center (NHHPC) was opened to enable collaboration among government, academic and industry members. Membership rapidly grew to 90 members (http://nhhpc.nasa.gov) and members began identifying collaborative projects as detailed in this article. In addition, a first workshop in open collaboration and innovation was conducted on January 19, 2011, by the NHHPC, resulting in additional challenges and projects for further development. This first workshop was a result of the SLSD successes in running open innovation challenges over the past two years. In 2008, the NASA Johnson Space Center, Space Life Sciences Directorate (SLSD) began pilot projects in open innovation (crowd sourcing) to determine if these new internet-based platforms could indeed find solutions to difficult technical problems. From 2008 to 2010, the SLSD issued 34 challenges, 14 externally and 20 internally. The 14 external challenges were conducted through three different vendors: InnoCentive, Yet2.com and TopCoder. The 20 internal challenges were conducted using the InnoCentive platform, customized to NASA use, and promoted as NASA@Work. The results from the 34 challenges involved not only technical solutions that were reported previously at the 61st IAC, but also the formation of new collaborative relationships. For example, the TopCoder pilot was expanded by the NASA Space Operations Mission Directorate to the NASA Tournament Lab in collaboration with Harvard Business School and TopCoder. Building on these initial successes, the NHHPC workshop in January of 2011, and ongoing NHHPC member discussions, several important collaborations have been developed: (1) Space Act Agreement between NASA and GE for collaborative projects (2) NASA and academia for a Visual Impairment / Intracranial Hypertension summit (February 2011) (3) NASA and the DoD through the Defense Venture Catalyst Initiative (DeVenCI) for a technical needs workshop (June 2011) (4

  5. NASA's Ship-Aircraft Bio-Optical Research (SABOR)

    NASA Image and Video Library

    2017-12-08

    Seaweed and Light A type of seaweed called Sargassum, common in the Sargasso Sea, floats by an instrument deployed here on July 26, 2014, as part of NASA's SABOR experiment. Scientists from the City College of New York use the data to study the way light becomes polarized in various conditions both above and below the surface of the ocean. NASA's Ship-Aircraft Bio-Optical Research (SABOR) experiment is a coordinated ship and aircraft observation campaign off the Atlantic coast of the United States, an effort to advance space-based capabilities for monitoring microscopic plants that form the base of the marine food chain. Read more: 1.usa.gov/WWRVzj Credit: NASA/SABOR/Wayne Slade, Sequoia Scientific

  6. NASA's DC-8 Desert Shadow

    NASA Image and Video Library

    2017-12-08

    The DC-8 research aircraft casting its shadow on the ground in California's Mojave Desert during an IceBridge instrument check flight. Prior to field campaigns, IceBridge instrument and aircraft teams run the aircraft through a series of tests to ensure that everything is operating at peak condition. Credit: NASA / Jim Yungel NASA's Operation IceBridge is an airborne science mission to study Earth's polar ice. For more information about IceBridge, visit: www.nasa.gov/icebridge

  7. Swedish Delegation Visits NASA Goddard

    NASA Image and Video Library

    2017-12-08

    Swedish Delegation Visits GSFC – May 3, 2017 - Members of Goddard Space Flight Center senior management and members of the Royal Swedish Academy of Engineering Sciences pose for a group photo in the atrium area of Building 28 at GSFC. Photo Credit: NASA/Goddard/Bill Hrybyk Read more: go.nasa.gov/2p1rP0h

  8. Swedish Delegation Visits NASA Goddard

    NASA Image and Video Library

    2017-12-08

    Swedish Delegation Visits GSFC – May 3, 2017 – Members of Goddard Space Flight Center senior management introduce themselves to His Majesty Carl XVI Gustaf, King of Sweden, and the members of the Royal Swedish Academy upon their arrival at Goddard. Credit: NASA/Goddard/Bill Hrybyk Read more: go.nasa.gov/2p1rP0h

  9. Swedish Delegation Visits NASA Goddard

    NASA Image and Video Library

    2017-12-08

    Swedish Delegation Visits GSFC – May 3, 2017 – Goddard Space Flight Center senior management and members of the Royal Swedish Academy walk towards Building 29 as part of the Swedish delegation’s tour of the center. Photo Credit: NASA/Goddard/Rebecca Roth Read more: go.nasa.gov/2p1rP0h

  10. Swedish Delegation Visits NASA Goddard

    NASA Image and Video Library

    2017-12-08

    Swedish Delegation Visits GSFC – May 3, 2017 - Members of the Royal Swedish Academy of Engineering Sciences listen as Catherine Peddie, Wide Field Infrared Survey Telescope (WFIRST) Deputy Project Manager, uses a full-scale model of WFIRST to describe the features of the observatory. Photo Credit: NASA/Goddard/Rebecca Roth Read more: go.nasa.gov/2p1rP0h

  11. Swedish Delegation Visits NASA Goddard

    NASA Image and Video Library

    2017-12-08

    Swedish Delegation Visits GSFC – May 3, 2017 - Members of the Royal Swedish Academy of Engineering Sciences listen to Jim Jeletic, Deputy Project Manager of the Hubble Space Telescope (HST), talk about telescope operations just outside the HST control center at Goddard. Photo Credit: NASA/Goddard/Rebecca Roth Read more: go.nasa.gov/2p1rP0h

  12. Partnering to Change the Way NASA and the Nation Communicate Through Space

    NASA Technical Reports Server (NTRS)

    Vrotsos, Pete A.; Budinger, James M.; Bhasin, Kul; Ponchak, Denise S.

    2000-01-01

    For at least 20 years, the Space Communications Program at NASA Glenn Research Center (GRC) has focused on enhancing the capability and competitiveness of the U.S. commercial communications satellite industry. GRC has partnered with the industry on the development of enabling technologies to help maintain U.S. preeminence in the worldwide communications satellite marketplace. The Advanced Communications Technology Satellite (ACTS) has been the most significant space communications technology endeavor ever performed at GRC, and the centerpiece of GRC's communication technology program for the last decade. Under new sponsorship from NASA's Human Exploration and Development of Space Enterprise, GRC has transitioned the focus and direction of its program from commercial relevance to NASA mission relevance. Instead of one major experimental spacecraft and one headquarters sponsor, GRC is now exploring opportunities for all of NASA's Enterprises to benefit from advances in space communications technologies and accomplish their missions through the use of existing and emerging commercially provided services. A growing vision within NASA is to leverage the best commercial standards, technologies, and services as a starting point to satisfy NASA's unique needs. GRC's heritage of industry partnerships is closely aligned with this vision. NASA intends to leverage the explosive growth of the telecommunications industry through its impressive technology advancements and potential new commercial satellite systems. GRC's partnerships with industry, academia, and other government agencies will directly support the future mission needs of all four NASA Enterprises, while advancing the state of the art of commercial practice. GRC now conducts applied research and develops and demonstrates advanced communications and network technologies in support of all four NASA Enterprises (Human Exploration and Development of Space, Space Science, Earth Science, and Aero-Space Technologies).

  13. NASA technology program for future civil air transports

    NASA Technical Reports Server (NTRS)

    Wright, H. T.

    1983-01-01

    An assessment is undertaken of the development status of technology, applicable to future civil air transport design, which is currently undergoing conceptual study or testing at NASA facilities. The NASA civil air transport effort emphasizes advanced aerodynamic computational capabilities, fuel-efficient engines, advanced turboprops, composite primary structure materials, advanced aerodynamic concepts in boundary layer laminarization and aircraft configuration, refined control, guidance and flight management systems, and the integration of all these design elements into optimal systems. Attention is given to such novel transport aircraft design concepts as forward swept wings, twin fuselages, sandwich composite structures, and swept blade propfans.

  14. NASA Space Launch System (SLS) Progress Report

    NASA Technical Reports Server (NTRS)

    Williams, Tom

    2012-01-01

    The briefing objectives are: (1) Explain the SLS current baseline architecture and the SLS block-upgrade approach. (2) Summarize the SLS evolutionary path in relation to the Advanced Booster and Advanced Development NASA Research Announcements.

  15. NASA's new university engineering space research programs

    NASA Technical Reports Server (NTRS)

    Sadin, Stanley R.

    1988-01-01

    The objective of a newly emerging element of NASA's university engineering programs is to enhance and broaden capabilities in academia, enabling universities to participate more effectively in the U.S. civil space program. The programs utilize technical monitors at NASA centers to foster collaborative arrangements, exchanges of personnel, and the sharing of facilities between NASA and the universities. The elements include: the university advanced space design program, which funds advanced systems study courses at the senior and graduate levels; the university space engineering research program, which supports cross-disciplinary research centers; the outreach flight experiments program, which offers engineering research opportunities to universities; and the planned university investigator's research program, which will provide grants to individuals with outstanding credentials.

  16. Proceedings of the Twentieth NASA Propagation Experimenters Meeting (NAPEX XX) and the Advanced Communications Technology Satellite (ACTS) Propagation Studies Miniworkshop

    NASA Technical Reports Server (NTRS)

    Golshan, Nassar (Editor)

    1996-01-01

    The NASA Propagation Experimenters (NAPEX) Meeting and the associated Advanced Communications Technology Satellite (ACTS) Propagation Studies Miniworkshop convene yearly to discuss studies supported by the NASA Propagation Program. Representatives from the satellite communications (satcom) industry, academia, and government with an interest in space-ground radio wave propagation hold peer discussions of work in progress, disseminate propagation results, and interact with the satcom industry. NAPEX XX, held in Fairbanks, Alaska, June 4-5, 1996, had three sessions: (1) "ACTS Propagation Study: Background, Objectives, and Outcomes," which covered results from thirteen station-years of Ka-band experiments; (2) "Propagation Studies for Mobile and Personal Satellite Applications," which provided the latest developments in measurement, modeling, and dissemination of propagation phenomena of interest to the mobile, personal, and aeronautical satcom industry; and (3) "Propagation Research Topics," which covered a range of topics including space/ground optical propagation experiments, propagation databases, the NASA Propagation Web Site, and revision plans for the NASA propagation effects handbooks. The ACTS Miniworkshop, June 6, 1996, covered ACTS status, engineering support for ACTS propagation terminals, and the ACTS Propagation Data Center. A plenary session made specific recommendations for the future direction of the program.

  17. Proceedings of the Fifteenth NASA Propagation Experimenters Meeting (NAPEX 15) and the Advanced Communications Technology Satellite (ACTS) Propagation Studies Miniworkshop

    NASA Technical Reports Server (NTRS)

    Davarian, Faramaz (Editor)

    1991-01-01

    The NASA Propagation Experimenters Meeting (NAPEX), supported by the NASA Propagation Program, is convened annually to discuss studies made on radio wave propagation by investigators from domestic and international organizations. The meeting was organized into three technical sessions. The first session was dedicated to Olympus and ACTS studies and experiments, the second focused on propagation studies and measurements, and the third covered computer-based propagation model development. In total, sixteen technical papers and some informal contributions were presented. Following NAPEX 15, the Advanced Communications Technology Satellite (ACTS) miniworkshop was held on 29 June 1991 to review ACTS propagation activities, with emphasis on ACTS hardware development and experiment planning. Five papers were presented.

  18. Proceedings of the Eighteenth NASA Propagation Experimenters Meeting (NAPEX 18) and the Advanced Communications Technology Satellite (ACTS) Propagation Studies Miniworkshop

    NASA Technical Reports Server (NTRS)

    Davarian, Faramaz (Editor)

    1994-01-01

    The NASA Propagation Experimenters Meeting (NAPEX), supported by the NASA Propagation Program, is convened annually to discuss studies made on radio wave propagation by investigators from domestic and international organizations. Participants included representatives from Canada, the Netherlands, England, and the United States, with researchers from universities, government agencies, and private industry. The meeting was organized into two technical sessions. The first session was dedicated to slant path propagation studies and experiments. The second session focused on propagation studies for mobile, personal, and sound broadcast systems. In total, 14 technical papers and some informal contributions were presented. Preceding NAPEX 18, the Advanced Communications Technology Satellite (ACTS) Propagation Studies Miniworkshop was held to review ACTS propagation activities.

  19. Optimal wavelength-space crossbar switches for supercomputer optical interconnects.

    PubMed

    Roudas, Ioannis; Hemenway, B Roe; Grzybowski, Richard R; Karinou, Fotini

    2012-08-27

    We propose a most economical design of the Optical Shared MemOry Supercomputer Interconnect System (OSMOSIS) all-optical, wavelength-space crossbar switch fabric. It is shown, by analysis and simulation, that the total number of on-off gates required for the proposed N × N switch fabric can scale asymptotically as N ln N if the number of input/output ports N can be factored into a product of small primes. This is of the same order of magnitude as Shannon's lower bound for switch complexity, according to which the minimum number of two-state switches required for the construction of an N × N permutation switch is log2 (N!).
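    The complexity claim can be sanity-checked numerically: Shannon's bound log2(N!) and the fabric's N ln N gate count both grow as Θ(N log N). The sketch below is only an illustration of that asymptotic agreement (the function names are ours, not part of the OSMOSIS design):

```python
import math

def shannon_bound(n):
    """Shannon's lower bound on two-state switches for an n x n
    permutation switch: log2(n!), computed stably via lgamma."""
    return math.lgamma(n + 1) / math.log(2)

def n_ln_n(n):
    """Asymptotic gate count claimed for the factorized fabric."""
    return n * math.log(n)

# Both quantities are Theta(n log n); only the constants differ.
for n in (64, 256, 1024, 4096):
    print(n, round(shannon_bound(n)), round(n_ln_n(n)))
```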

  20. Future Opportunities for Dynamic Power Systems for NASA Missions

    NASA Technical Reports Server (NTRS)

    Shaltens, Richard K.

    2007-01-01

    Dynamic power systems have the potential to be used in Radioisotope Power Systems (RPS) and Fission Surface Power Systems (FSPS) to provide high-efficiency, reliable, and long-life power generation for future NASA applications and missions. Dynamic power systems have been developed by NASA over the decades, but none have ever operated in space. Advanced Stirling convertors are currently being developed at the NASA Glenn Research Center. These systems have demonstrated high efficiencies to enable high system specific power (>8 W(sub e)/kg) for 100 W(sub e) class Advanced Stirling Radioisotope Generators (ASRG). The ASRG could enable significantly extended and expanded operation on the Mars surface and on long-life deep space missions. In addition, advanced high-power Stirling convertors (>150 W(sub e)/kg), for use with surface fission power systems, could provide power ranging from 30 to 50 kWe, and would be enabling for both lunar and Mars exploration. This paper discusses the status of various energy conversion options currently under development by NASA Glenn for the Radioisotope Power System Program for NASA's Science Mission Directorate (SMD) and the Prometheus Program for the Exploration Systems Mission Directorate (ESMD).
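    The specific-power figure of merit quoted above implies a mass budget directly: dividing output power by specific power gives system mass. A one-line illustration (the function name is ours, not from the paper):

```python
def generator_mass_kg(power_we, specific_power_we_per_kg):
    """Implied system mass from a specific-power figure of merit."""
    return power_we / specific_power_we_per_kg

# A 100-We-class generator at >8 We/kg implies a mass under ~12.5 kg.
print(generator_mass_kg(100, 8))  # 12.5
```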

  1. NASA's Ship-Aircraft Bio-Optical Research (SABOR)

    NASA Image and Video Library

    2014-08-25

    Fixing the "Fish" On July 19, 2014, Wayne Slade of Sequoia Scientific, and Allen Milligan of Oregon State University, made adjustments to the "fish" that researchers used to hold seawater collected from a depth of about 3 meters (10 feet) while the ship was underway. NASA's Ship-Aircraft Bio-Optical Research (SABOR) experiment is a coordinated ship and aircraft observation campaign off the Atlantic coast of the United States, an effort to advance space-based capabilities for monitoring microscopic plants that form the base of the marine food chain. Read more: 1.usa.gov/WWRVzj Credit: NASA/SABOR/Wayne Slade, Sequoia Scientific. NASA Goddard Space Flight Center enables NASA’s mission through four scientific endeavors: Earth Science, Heliophysics, Solar System Exploration, and Astrophysics. Goddard plays a leading role in NASA’s accomplishments by contributing compelling scientific knowledge to advance the Agency’s mission.

  2. NASA's Ship-Aircraft Bio-Optical Research (SABOR)

    NASA Image and Video Library

    2014-08-25

    Catnap at Sea Ali Chase of the University of Maine, and Courtney Kearney of the Naval Research Laboratory, caught a quick nap on July 24, 2014, while between successive stops at sea to make measurements from the R/V Endeavor. NASA's Ship-Aircraft Bio-Optical Research (SABOR) experiment is a coordinated ship and aircraft observation campaign off the Atlantic coast of the United States, an effort to advance space-based capabilities for monitoring microscopic plants that form the base of the marine food chain. Read more: 1.usa.gov/WWRVzj Credit: NASA/SABOR/Wayne Slade, Sequoia Scientific.

  3. An Overview of NASA's Contributions to Energy Technology

    NASA Technical Reports Server (NTRS)

    Lyons, Valerie J.; Levine, Arlene S.

    2009-01-01

    The National Aeronautics and Space Administration (NASA) is well known for its many contributions to advancing technology for the aviation and space industries. It may be surprising to some that it has also made a major impact in advancing energy technologies. This paper presents a historic overview of some of the energy programs that NASA was involved in, as well as some current energy-related work that is relevant to both aerospace and non-aerospace needs. In the past, NASA developed prototype electric cars, low-emission gas turbines, wind turbines, and solar-powered villages, to name a few of the major energy projects. The fundamental expertise in fluid mechanics, heat transfer, thermodynamics, mechanical and electrical engineering, and other related fields found in NASA's workforce can easily be applied to develop creative solutions to energy problems in space, aviation, or terrestrial systems.

  4. Second NASA Technical Interchange Meeting (TIM): Advanced Technology Lifecycle Analysis System (ATLAS) Technology Tool Box (TTB)

    NASA Technical Reports Server (NTRS)

    ONeil, D. A.; Mankins, J. C.; Christensen, C. B.; Gresham, E. C.

    2005-01-01

    The Advanced Technology Lifecycle Analysis System (ATLAS), a spreadsheet analysis tool suite, applies parametric equations for sizing and lifecycle cost estimation. Performance, operation, and programmatic data used by the equations come from a Technology Tool Box (TTB) database. In this second TTB Technical Interchange Meeting (TIM), technologists, system model developers, and architecture analysts discussed methods for modeling technology decisions in spreadsheet models, identified specific technology parameters, and defined detailed development requirements. This Conference Publication captures the consensus of the discussions and provides narrative explanations of the tool suite, the database, and applications of ATLAS within NASA's changing environment.
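    As a flavor of the parametric relationships such a tool suite evaluates, the sketch below applies a generic power-law cost-estimating relationship (CER). The function, coefficients, and numbers are hypothetical illustrations, not values or interfaces from ATLAS or the TTB:

```python
def parametric_cost_musd(dry_mass_kg, a=1.5, b=0.8):
    """Hypothetical power-law CER: cost [$M] = a * mass^b.
    Coefficients a and b are illustrative only, not TTB data."""
    return a * dry_mass_kg ** b

# Sizing sensitivity: how estimated cost responds to a mass change,
# the kind of trade a technology parameter in the TTB would drive.
baseline = parametric_cost_musd(1000.0)
lighter = parametric_cost_musd(800.0)   # e.g., a lighter-structure option
print(f"baseline {baseline:.1f} $M, lighter option {lighter:.1f} $M")
```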

  5. Analysis of wavelet technology for NASA applications

    NASA Technical Reports Server (NTRS)

    Wells, R. O., Jr.

    1994-01-01

    The purpose of this grant was to introduce a broad group of NASA researchers and administrators to wavelet technology and to determine its future role in research and development at NASA JSC. The activities of several briefings held between NASA JSC scientists and Rice University researchers are discussed. An attached paper, 'Recent Advances in Wavelet Technology', summarizes some aspects of these briefings. Two proposals submitted to NASA reflect the primary areas of common interest. They are image analysis and numerical solutions of partial differential equations arising in computational fluid dynamics and structural mechanics.

  6. NASA Aerospace Flight Battery Systems Program Update

    NASA Technical Reports Server (NTRS)

    Manzo, Michelle; ODonnell, Patricia

    1997-01-01

    The objectives of NASA's Aerospace Flight Battery Systems Program are to: develop, maintain, and provide tools for the validation and assessment of aerospace battery technologies; accelerate the readiness of technology advances and provide infusion paths for emerging technologies; provide NASA projects with the required database and validation guidelines for technology selection of hardware and processes relating to aerospace batteries; disseminate validation and assessment tools, quality assurance, reliability, and availability information to the NASA and aerospace battery communities; and ensure that safe, reliable batteries are available for NASA's future missions.

  7. The NASA Solar System Exploration Virtual Institute: International Efforts in Advancing Lunar Science with Prospects for the Future

    NASA Technical Reports Server (NTRS)

    Schmidt, Gregory K.

    2014-01-01

    The NASA Solar System Exploration Research Virtual Institute (SSERVI), originally chartered in 2008 as the NASA Lunar Science Institute (NLSI), is chartered to advance both the scientific goals needed to enable human space exploration and the science enabled by such exploration. NLSI and SSERVI have in succession been "institutes without walls," fostering collaboration among domestic teams (7 teams for NLSI, 9 for SSERVI) as well as between these teams and the institutes' international partners, resulting in a greater global endeavor. SSERVI teams and international partners share ideas, information, and data arising from their respective research efforts, and contribute to the training of young scientists and to bringing the scientific results and excitement of exploration to the public. The domestic teams also respond to NASA's strategic needs, providing community-based responses in partnership with NASA's Analysis Groups. Through the many partnerships enabled by NLSI and SSERVI, scientific results have well exceeded initial projections based on the original PI proposals, proving the validity of the virtual institute model. NLSI and SSERVI have endeavored to represent not just the selected and funded domestic teams but the entire relevant scientific community, through means such as the annual Lunar Science Forum (now renamed the Exploration Science Forum), community-based grass-roots Focus Groups on a wide range of topics, and groups chartered to further the careers of young scientists. Additionally, NLSI and SSERVI have co-founded international efforts such as the pan-European lunar science consortium, with the overall goal of raising the tide of lunar science (and now, more broadly, exploration science) across the world.

  8. Swedish Delegation Visits NASA Goddard

    NASA Image and Video Library

    2017-12-08

    Swedish Delegation Visits GSFC – May 3, 2017 – Members of the Royal Swedish Academy of Engineering Sciences listen to James Pontius, Global Ecosystem Dynamics Investigation (GEDI) Project Manager, and Bryan Blair, GEDI Deputy Principal Investigator, talk about the mission and science of GEDI and the collaborative work being done with Sweden. Photo Credit: NASA/Goddard/Rebecca Roth Read more: go.nasa.gov/2p1rP0h

  9. NASA's SDO Sees Solar Flares

    NASA Image and Video Library

    2014-06-10

    A solar flare bursts off the left limb of the sun in this image captured by NASA's Solar Dynamics Observatory on June 10, 2014, at 7:41 a.m. EDT. This is classified as an X2.2 flare, shown in a blend of two wavelengths of light: 171 and 131 angstroms, colorized in gold and red, respectively. Credit: NASA/SDO/Goddard/Wiessinger

  10. Solving large sparse eigenvalue problems on supercomputers

    NASA Technical Reports Server (NTRS)

    Philippe, Bernard; Saad, Youcef

    1988-01-01

    An important problem in scientific computing is to find a few eigenvalues and corresponding eigenvectors of a very large, sparse matrix. The most popular methods for these problems are based on projection techniques onto appropriate subspaces. The main attraction of these methods is that they require access to the matrix only in the form of matrix-by-vector multiplications. The implementations on supercomputers of two such methods for symmetric matrices, namely the Lanczos and Davidson methods, are compared. Since one of the most important operations in these two methods is the multiplication of vectors by the sparse matrix, methods of performing this operation efficiently are discussed. The advantages and disadvantages of each method are compared and implementation aspects are discussed. Numerical experiments on a one-processor CRAY-2 and a CRAY X-MP are reported. Possible parallel implementations are also discussed.
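    The matrix-free access pattern described above (the matrix is used only through matrix-by-vector products) can be illustrated with a minimal Lanczos sketch. This is a textbook three-term recurrence in Python, without the reorthogonalization a production implementation needs; the 1-D Laplacian operator and all names here are our illustration, not code from the paper:

```python
import numpy as np

def laplacian_matvec(v):
    """y = A v for the tridiagonal 1-D Laplacian (2 on the diagonal,
    -1 off-diagonal) without ever forming A: matvec-only access."""
    y = 2.0 * v
    y[:-1] -= v[1:]
    y[1:] -= v[:-1]
    return y

def lanczos(matvec, n, k, seed=0):
    """k steps of the Lanczos recurrence (no reorthogonalization);
    returns the tridiagonal matrix T whose extreme eigenvalues
    approximate those of the symmetric operator behind matvec."""
    rng = np.random.default_rng(seed)
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)
    q_prev = np.zeros(n)
    alphas, betas = [], []
    beta = 0.0
    for _ in range(k):
        w = matvec(q) - beta * q_prev
        alpha = q @ w
        w -= alpha * q
        beta = np.linalg.norm(w)
        alphas.append(alpha)
        betas.append(beta)
        if beta < 1e-12:          # invariant subspace found
            break
        q_prev, q = q, w / beta
    m = len(alphas)
    return (np.diag(alphas) + np.diag(betas[:m - 1], 1)
            + np.diag(betas[:m - 1], -1))

n, k = 100, 100
theta = np.linalg.eigvalsh(lanczos(laplacian_matvec, n, k))
print(theta[-1])  # close to the analytic maximum 2 - 2*cos(pi*n/(n+1))
```

In production one would reach for a library Lanczos-based solver (e.g. `scipy.sparse.linalg.eigsh`) rather than this bare recurrence.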

  11. NASA's Advanced Exploration Systems Mars Transit Habitat Refinement Point of Departure Design

    NASA Technical Reports Server (NTRS)

    Simon, Matthew; Latorella, Kara; Martin, John; Cerro, Jeff; Lepsch, Roger; Jefferies, Sharon; Goodliff, Kandyce; McCleskey, Carey; Smitherman, David; Stromgren, Chel

    2017-01-01

    This paper describes the recently developed point of departure design for a long-duration, reusable Mars Transit Habitat, which was established during a 2016 NASA habitat design refinement activity supporting the definition of NASA's Evolvable Mars Campaign. As part of its development of sustainable human Mars mission concepts achievable in the 2030s, the Evolvable Mars Campaign has identified desired durations and mass/dimensional limits for long-duration Mars habitat designs to enable the currently assumed solar electric and chemical transportation architectures. The Advanced Exploration Systems Mars Transit Habitat Refinement Activity brought together habitat subsystem design expertise from across NASA to develop an increased-fidelity, consensus design for a transit habitat within these constraints. The resulting design and data (including a mass equipment list) contained in this paper are intended to help teams across the agency and potential commercial, academic, or international partners understand: 1) the current architecture/habitat guidelines and assumptions, 2) performance targets of such a habitat (particularly in mass, volume, and power), 3) the driving technology/capability developments and architectural solutions which are necessary for achieving these targets, and 4) mass reduction opportunities and research/design needs to inform the development of future research and proposals. Data presented include: an overview of the habitat refinement activity, including motivation and process where informative; full documentation of the baseline design guidelines and assumptions; detailed mass and volume breakdowns; a moderately detailed concept of operations; a preliminary interior layout design with rationale; a list of the required capabilities necessary to enable the desired mass; and identification of any worthwhile trades/analyses which could inform future habitat design efforts. As a whole, the data in the paper show that a transit habitat meeting the 43

  12. NASA Scientific Balloon in Antarctica

    NASA Image and Video Library

    2017-12-08

    NASA image captured December 25, 2011. A NASA scientific balloon awaits launch in McMurdo, Antarctica. The balloon, carrying Indiana University's Cosmic Ray Electron Synchrotron Telescope (CREST), was launched on December 25. After a circumnavigational flight around the South Pole, the payload landed on January 5. The CREST payload is one of two scheduled as part of this season's annual NASA Antarctic balloon campaign, which is conducted in cooperation with the National Science Foundation's Office of Polar Programs. The campaign's second payload is the University of Arizona's Stratospheric Terahertz Observatory (STO). You can follow the flights at the Columbia Scientific Balloon Facility's web site at www.csbf.nasa.gov/antarctica/ice.htm Credit: NASA

  13. NASA Technology Applications Team

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The contributions of NASA to the advancement of the level of the technology base of the United States are highlighted. Technological transfer from preflight programs, the Viking program, the Apollo program, and the Shuttle and Skylab programs is reported.

  14. NASA Sees Hurricane Arthur's Cloud-Covered Eye

    NASA Image and Video Library

    2014-07-03

    This visible image of Tropical Storm Arthur was taken by the MODIS instrument aboard NASA's Aqua satellite on July 2 at 18:50 UTC (2:50 p.m. EDT). A cloud-covered eye is clearly visible. Credit: NASA Goddard MODIS Rapid Response Team Read more: www.nasa.gov/content/goddard/arthur-atlantic/

  15. Application of NASA's Advanced Life Support Technologies for Waste Treatment, Water Purification and Recycle, and Food Production in Polar Regions

    NASA Technical Reports Server (NTRS)

    Bubenheim, David L.; Lewis, Carol E.; Covington, M. Alan (Technical Monitor)

    1995-01-01

    NASA's advanced life support technologies are being combined with Arctic science and engineering knowledge to address the unique needs of the remote communities of Alaska through the Advanced Life Systems for Extreme Environments (ALSEE) project. ALSEE is a collaborative effort involving NASA, the State of Alaska, the University of Alaska, the North Slope Borough of Alaska, and the National Science Foundation (NSF). The focus is a major issue in the state of Alaska and other areas of the Circumpolar North: the health and welfare of its people, their lives and the subsistence lifestyle in remote communities, economic opportunity, and care for the environment. The project primarily provides treatment and reduction of waste, purification and recycling of water, and production of food. A testbed is being established to demonstrate the technologies which will enable safe, healthy, and autonomous function of remote communities and to establish the base for commercial development of the resulting technology into new industries. The challenge is to implement the technological capabilities in a manner compatible with the social and economic structures of the native communities, the state, and the commercial sector. Additional information is contained in the original extended abstract.

  16. A Summary on Progress in Materials Development for Advanced Lithium-ion Cells for NASA's Exploration Missions

    NASA Technical Reports Server (NTRS)

    Reid, Concha M.

    2011-01-01

    Vehicles and stand-alone power systems that enable the next generation of human missions to the moon will require energy storage systems that are safer, lighter, and more compact than current state-of-the-art (SOA) aerospace-quality lithium-ion (Li-ion) batteries. NASA is developing advanced Li-ion cells to enable or enhance future human missions to Near Earth Objects, such as asteroids, planets, moons, libration points, and orbiting structures. Advanced, high-performing materials are required to provide component-level performance that can offer the required gains at the integrated cell level. Although a significant amount of work remains, development activities to date have resulted in the synthesis of promising materials that approach the ultimate performance goals. This paper reports interim progress of the development efforts, presents the performance of materials and cell components, and elaborates on the challenges of the development activities and proposed strategies to overcome technical issues.

  17. NASA Handbook for Spacecraft Structural Dynamics Testing

    NASA Technical Reports Server (NTRS)

    Kern, Dennis L.; Scharton, Terry D.

    2005-01-01

    Recent advances in the area of structural dynamics and vibrations, in both methodology and capability, have the potential to make spacecraft system testing more effective from technical, cost, schedule, and hardware safety points of view. However, application of these advanced test methods varies widely among the NASA Centers and their contractors. Identification and refinement of the best of these test methodologies and implementation approaches has been an objective of efforts by the Jet Propulsion Laboratory on behalf of the NASA Office of the Chief Engineer. But to develop the most appropriate overall test program for a flight project from the selection of advanced methodologies, as well as conventional test methods, spacecraft project managers and their technical staffs will need overall guidance and technical rationale. Thus, the Chief Engineer's Office has recently tasked JPL to prepare a NASA Handbook for Spacecraft Structural Dynamics Testing. An outline of the proposed handbook, with a synopsis of each section, has been developed and is presented herein. Comments on the proposed handbook are solicited from the spacecraft structural dynamics testing community.

  18. NASA Handbook for Spacecraft Structural Dynamics Testing

    NASA Technical Reports Server (NTRS)

    Kern, Dennis L.; Scharton, Terry D.

    2004-01-01

    Recent advances in the area of structural dynamics and vibrations, in both methodology and capability, have the potential to make spacecraft system testing more effective from technical, cost, schedule, and hardware safety points of view. However, application of these advanced test methods varies widely among the NASA Centers and their contractors. Identification and refinement of the best of these test methodologies and implementation approaches has been an objective of efforts by the Jet Propulsion Laboratory on behalf of the NASA Office of the Chief Engineer. But to develop the most appropriate overall test program for a flight project from the selection of advanced methodologies, as well as conventional test methods, spacecraft project managers and their technical staffs will need overall guidance and technical rationale. Thus, the Chief Engineer's Office has recently tasked JPL to prepare a NASA Handbook for Spacecraft Structural Dynamics Testing. An outline of the proposed handbook, with a synopsis of each section, has been developed and is presented herein. Comments on the proposed handbook are solicited from the spacecraft structural dynamics testing community.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bailey, David H.

    The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as they once were for comparing high-end system performance, they continue to be studied and analyzed a great deal in the high-performance computing community. The acronym 'NAS' originally stood for the Numerical Aerodynamic Simulation Program at NASA Ames. The name of this organization was subsequently changed to the Numerical Aerospace Simulation Program, and more recently to the NASA Advanced Supercomputing Center, although the acronym remains 'NAS.' The developers of the original NPB suite were David H. Bailey, Eric Barszcz, John Barton, David Browning, Russell Carter, Leo Dagum, Rod Fatoohi, Samuel Fineberg, Paul Frederickson, Thomas Lasinski, Rob Schreiber, Horst Simon, V. Venkatakrishnan, and Sisira Weeratunga. The original NAS Parallel Benchmarks consisted of eight individual benchmark problems, each of which focused on some aspect of scientific computing. The principal focus was computational aerophysics, although most of these benchmarks have much broader relevance, since in a much larger sense they are typical of many real-world scientific computing applications. The NPB suite grew out of the need for a more rational procedure to select new supercomputers for acquisition by NASA. The emergence of commercially available highly parallel computer systems in the late 1980s offered an attractive alternative to the parallel vector supercomputers that had been the mainstay of high-end scientific computing. However, the introduction of highly parallel systems was accompanied by a regrettable level of hype, not only on the part of the commercial vendors but even, in some cases, by scientists using the systems. As a result, it was difficult to discern whether the new systems offered any fundamental performance
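    Of the original eight problems, the EP ("embarrassingly parallel") kernel is the simplest to sketch: generate pseudorandom pairs, transform the accepted ones into Gaussian deviates, and tally them by annulus. The version below is a simplified illustration using NumPy's generator rather than the benchmark's specified linear congruential generator, so it does not reproduce the official NPB verification values:

```python
import numpy as np

def ep_kernel(n_pairs, seed=0):
    """Simplified sketch of the NPB EP kernel idea: accept uniform
    pairs inside the unit disk, map them to Gaussian deviates
    (Marsaglia polar method), and count pairs by square annuli."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, n_pairs)
    y = rng.uniform(-1.0, 1.0, n_pairs)
    t = x * x + y * y
    ok = (t > 0.0) & (t <= 1.0)            # rejection step
    f = np.sqrt(-2.0 * np.log(t[ok]) / t[ok])
    gx, gy = x[ok] * f, y[ok] * f          # independent Gaussians
    annulus = np.floor(np.maximum(np.abs(gx), np.abs(gy))).astype(int)
    return np.bincount(np.clip(annulus, 0, 9), minlength=10)

counts = ep_kernel(1_000_000)
print(counts)   # heavily concentrated in the innermost annuli
```

Because each pair is independent, the loop parallelizes trivially; that is precisely what made EP a useful baseline for the communication-heavy kernels in the rest of the suite.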

  20. The Role of Synthetic Biology in NASA's Missions

    NASA Technical Reports Server (NTRS)

    Rothschild, Lynn J.

    2016-01-01

    The time has come for NASA to exploit synthetic biology in pursuit of its missions, including aeronautics, earth science, astrobiology and, most notably, human exploration. Conversely, NASA advances the fundamental technology of synthetic biology as no one else can because of its unique expertise in the origin of life and life in extreme environments, including the potential for alternate life forms. This enables unique, creative, "game changing" advances. NASA's requirement for minimizing upmass in flight will also drive the field toward miniaturization and automation. These drivers will greatly increase the utility of synthetic biology solutions for military, remote-area health, and commercial purposes. To this end, we have begun a program at NASA to explore the use of synthetic biology in NASA's missions, particularly space exploration. As part of this program, we began hosting an iGEM team of undergraduates drawn from Brown and Stanford Universities to conduct synthetic biology research at NASA Ames Research Center. The 2011 team (http://2011.igem.org/Team:Brown-Stanford) produced an award-winning project on using synthetic biology as a basis for a human Mars settlement.